Li, Siwei; Ding, Wentao; Zhang, Xueli; Jiang, Huifeng; Bi, Changhao
2016-01-01
Saccharomyces cerevisiae has already been used for heterologous production of fuel chemicals and valuable natural products. The establishment of complicated heterologous biosynthetic pathways in S. cerevisiae has become a research focus of synthetic biology and metabolic engineering, so simple and efficient techniques for genomic integration of large numbers of transcription units are urgently needed. An efficient DNA assembly and chromosomal integration method, designated the modularized two-step (M2S) technique, was created by combining homologous recombination (HR) in S. cerevisiae with the Golden Gate DNA assembly method. Two major assembly steps are performed consecutively to integrate multiple transcription units simultaneously. In Step 1, a modularized scaffold containing a head-to-head promoter module and a pair of terminators was assembled with two genes; thus, two transcription units were assembled into one scaffold in a single Golden Gate reaction. In Step 2, the two transcription units were mixed with modules of selective markers and integration sites and transformed into S. cerevisiae for assembly and integration. In both steps, universal primers were designed for identification of correct clones. Establishment of a functional β-carotene biosynthetic pathway in S. cerevisiae within 5 days demonstrated the high efficiency of this method, and integration of a 10-transcription-unit pathway illustrated its capacity. Modular design of transcription units and integration elements simplified the assembly and integration procedure and eliminated the frequent design and synthesis of DNA fragments required by previous methods. Also, because most parts were assembled in vitro in Step 1, the number of DNA cassettes for homologous integration in Step 2 was significantly reduced. Thus, high assembly efficiency, high integration capacity, and a low error rate were achieved.
Developing an Integrated Library Program. Professional Growth Series.
ERIC Educational Resources Information Center
Miller, Donna P.; Anderson, J'Lynn
This book provides teachers, media specialists, and administrators with a step-by-step method for integrating library resources and skills into the classroom curriculum. In this method, all curriculum areas are integrated into major units of study that are team-planned, team-produced, and team-taught. Topics include: components of the program and…
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
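To make the step-size control concrete, here is a minimal Python sketch of an adaptive second-order explicit (Heun) integrator applied to a hypothetical one-bucket storage model; the model, parameter values, and tolerance are illustrative assumptions, not the authors' model:

```python
import numpy as np

def bucket_rhs(s, p, k):
    """Hypothetical lumped storage model: dS/dt = P - k*S (rain in, linear outflow)."""
    return p - k * s

def adaptive_heun(s0, p, k, t_end, tol=1e-6, dt=1.0):
    """Second-order Heun step with an embedded first-order error estimate."""
    t, s = 0.0, s0
    while t < t_end:
        dt = min(dt, t_end - t)
        f0 = bucket_rhs(s, p, k)
        s_euler = s + dt * f0                      # first-order predictor
        f1 = bucket_rhs(s_euler, p, k)
        s_heun = s + 0.5 * dt * (f0 + f1)          # second-order corrector
        err = abs(s_heun - s_euler)                # local error estimate
        if err <= tol:
            t, s = t + dt, s_heun                  # accept the step
        dt *= min(2.0, max(0.1, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return s

print(adaptive_heun(s0=10.0, p=2.0, k=0.1, t_end=24.0))
```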
NASA Astrophysics Data System (ADS)
Lafitte, Pauline; Melis, Ward; Samaey, Giovanni
2017-07-01
We present a general, high-order, fully explicit relaxation scheme which can be applied to any system of nonlinear hyperbolic conservation laws in multiple dimensions. The scheme consists of two steps. In a first (relaxation) step, the nonlinear hyperbolic conservation law is approximated by a kinetic equation with stiff BGK source term. Then, this kinetic equation is integrated in time using a projective integration method. After taking a few small (inner) steps with a simple, explicit method (such as direct forward Euler) to damp out the stiff components of the solution, the time derivative is estimated and used in an (outer) Runge-Kutta method of arbitrary order. We show that, with an appropriate choice of inner step size, the time step restriction on the outer time step is similar to the CFL condition for the hyperbolic conservation law. Moreover, the number of inner time steps is also independent of the stiffness of the BGK source term. We discuss stability and consistency, and illustrate with numerical results (linear advection, Burgers' equation and the shallow water and Euler equations) in one and two spatial dimensions.
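A minimal Python sketch of the projective integration idea described above (a few small inner damping steps followed by an outer extrapolation); the stiff test field and step counts are illustrative assumptions:

```python
import numpy as np

def projective_euler(f, u, dt_inner, n_inner, dt_outer):
    """One projective forward Euler step: damp stiff modes with small inner
    Euler steps, then extrapolate the slow dynamics over a large outer step."""
    for _ in range(n_inner):
        u_prev = u
        u = u + dt_inner * f(u)              # inner damping steps
    slope = (u - u_prev) / dt_inner          # estimate of the slow time derivative
    return u + (dt_outer - n_inner * dt_inner) * slope   # outer extrapolation

# Stiff relaxation toward a slow manifold: u0' = -(u0 - cos(u1))/eps, u1' = 1
eps = 1e-3
f = lambda u: np.array([-(u[0] - np.cos(u[1])) / eps, 1.0])
u = np.array([0.0, 0.0])
for _ in range(100):
    u = projective_euler(f, u, dt_inner=eps, n_inner=3, dt_outer=0.01)
print(u)
```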
Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
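The core idea, augmenting the depletion system with extra "tally" unknowns whose derivatives are the weighted density sums, can be illustrated on a toy two-nuclide chain; the matrix exponential below stands in for the CRAM solver, and all values are hypothetical:

```python
import numpy as np
from scipy.linalg import expm

# Toy two-nuclide chain: n1 -> n2 -> out, with dn/dt = A @ n
A = np.array([[-0.5,  0.0],
              [ 0.5, -0.1]])
w = np.array([1.0, 2.0])        # hypothetical constant weights (e.g. energy per density)

# Augment with a tally variable T, dT/dt = w @ n, giving one extended linear system
M = np.zeros((3, 3))
M[:2, :2] = A
M[2, :2] = w

dt = 10.0
x0 = np.array([1.0, 0.0, 0.0])  # initial densities; the tally starts at zero
x1 = expm(M * dt) @ x0          # the same solver advances densities and tally together
print("end-of-step densities:", x1[:2])
print("step integral of w·n :", x1[2])
print("step average of w·n  :", x1[2] / dt)
```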
Multi-off-grid methods in multi-step integration of ordinary differential equations
NASA Technical Reports Server (NTRS)
Beaudet, P. R.
1974-01-01
Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
NASA Astrophysics Data System (ADS)
Pârv, Bazil
This paper deals with the Everhart numerical integration method, a well-known method in astronomical research. This method, a single-step one, is widely used for numerical integration of motion equation of celestial bodies. For an integration step, this method uses unequally-spaced substeps, defined by the roots of the so-called generating polynomial of Everhart's method. For this polynomial, this paper proposes and proves new recurrence formulae. The Maple computer algebra system was used to find and prove these formulae. Again, Maple seems to be well suited and easy to use in mathematical research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finn, John M., E-mail: finn@lanl.gov
2015-03-15
Properties of integration schemes for solenoidal fields in three dimensions are studied, with a focus on integrating magnetic field lines in a plasma using adaptive time stepping. It is shown that implicit midpoint (IM) and a scheme we call three-dimensional leapfrog (LF) can do a good job (in the sense of preserving KAM tori) of integrating fields that are reversible, or (for LF) have a “special divergence-free” (SDF) property. We review the notion of a self-adjoint scheme, showing that such schemes are at least second order accurate and can always be formed by composing an arbitrary scheme with its adjoint. We also review the concept of reversibility, showing that a reversible but not exactly volume-preserving scheme can lead to a fractal invariant measure in a chaotic region, although this property may not often be observable. We also show numerical results indicating that the IM and LF schemes can fail to preserve KAM tori when the reversibility property (and the SDF property for LF) of the field is broken. We discuss extensions to measure preserving flows, the integration of magnetic field lines in a plasma and the integration of rays for several plasma waves. The main new result of this paper relates to non-uniform time stepping for volume-preserving flows. We investigate two potential schemes, both based on the general method of Feng and Shang [Numer. Math. 71, 451 (1995)], in which the flow is integrated in split time steps, each Hamiltonian in two dimensions. The first scheme is an extension of the method of extended phase space, a well-proven method of symplectic integration with non-uniform time steps. This method is found not to work, and an explanation is given. The second method investigated is a method based on transformation to canonical variables for the two split-step Hamiltonian systems. This method, which is related to the method of non-canonical generating functions of Richardson and Finn [Plasma Phys. Controlled Fusion 54, 014004 (2012)], appears to work very well.
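As an illustration of one of the schemes discussed (not the authors' code), here is a minimal implicit midpoint field-line step solved by fixed-point iteration, with a hypothetical divergence-free test field:

```python
import numpy as np

def implicit_midpoint(B, x, h, iters=50, tol=1e-12):
    """One implicit midpoint step x_{n+1} = x_n + h*B((x_n + x_{n+1})/2),
    solved by fixed-point iteration from an explicit Euler predictor."""
    x_new = x + h * B(x)
    for _ in range(iters):
        x_next = x + h * B(0.5 * (x + x_new))
        if np.linalg.norm(x_next - x_new) < tol:
            return x_next
        x_new = x_next
    return x_new

def B(x):
    """Hypothetical divergence-free (ABC-like) field."""
    return np.array([np.sin(x[2]) + np.cos(x[1]),
                     np.sin(x[0]) + np.cos(x[2]),
                     np.sin(x[1]) + np.cos(x[0])])

x = np.array([0.1, 0.2, 0.3])
for _ in range(1000):
    x = implicit_midpoint(B, x, h=0.05)
print(x)
```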
Photodiodes integration on a suspended ridge structure VOA using 2-step flip-chip bonding method
NASA Astrophysics Data System (ADS)
Kim, Seon Hoon; Kim, Tae Un; Ki, Hyun Chul; Kim, Doo Gun; Kim, Hwe Jong; Lim, Jung Woon; Lee, Dong Yeol; Park, Chul Hee
2015-01-01
In this work, we have demonstrated a VOA integrated with mPDs, based on silica-on-silicon PLC and flip-chip bonding technologies. The suspended ridge structure was applied to reduce the power consumption. The device achieves an attenuation of 30 dB in open-loop operation with a power consumption below 30 W. We applied a two-step flip-chip bonding method using passive alignment to perform high-density multi-chip integration on a VOA with eutectic AuSn solder bumps. The average bonding strength of the two-step flip-chip bonding method was about 90 gf.
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
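A minimal sketch contrasting the two nonlinear solvers mentioned, plain fixed-point iteration versus Newton's method, on an implicit trapezoidal step for a hypothetical stiff scalar force term (illustrative only, not the paper's dislocation code):

```python
def f(y):
    """Hypothetical stiff force term standing in for the expensive force calculation."""
    import math
    return -50.0 * (y - math.cos(y))

def trapezoid_fixed_point(y, dt, iters=100, tol=1e-10):
    """Solve y1 = y + dt/2*(f(y) + f(y1)) by plain fixed-point iteration."""
    y1 = y
    for _ in range(iters):
        y_next = y + 0.5 * dt * (f(y) + f(y1))
        if abs(y_next - y1) < tol:
            break
        y1 = y_next
    return y1

def trapezoid_newton(y, dt, iters=20, tol=1e-12, eps=1e-7):
    """Same nonlinear system solved with Newton's method (finite-difference Jacobian)."""
    y1 = y
    for _ in range(iters):
        g = y1 - y - 0.5 * dt * (f(y) + f(y1))
        dg = 1.0 - 0.5 * dt * (f(y1 + eps) - f(y1)) / eps
        step = g / dg
        y1 -= step
        if abs(step) < tol:
            break
    return y1

print(trapezoid_fixed_point(1.0, 0.01), trapezoid_newton(1.0, 0.01))
```

Newton converges in a few iterations even when the fixed-point contraction is weak, which mirrors the trade-off the abstract describes.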
Addressable-Matrix Integrated-Circuit Test Structure
NASA Technical Reports Server (NTRS)
Sayah, Hoshyar R.; Buehler, Martin G.
1991-01-01
Method of quality control based on use of row- and column-addressable test structure speeds collection of data on widths of resistor lines and coverage of steps in integrated circuits. By use of straightforward mathematical model, line widths and step coverages deduced from measurements of electrical resistances in each of various combinations of lines, steps, and bridges addressable in test structure. Intended for use in evaluating processes and equipment used in manufacture of application-specific integrated circuits.
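Assuming the "straightforward mathematical model" is the usual sheet-resistance relation R = Rs·L/w (an assumption on our part, not stated in the record), deducing a line width from a measured resistance reduces to a one-line inversion:

```python
def line_width(R_measured, R_sheet, length):
    """Invert R = Rs * (L / w) for the line width w; all values illustrative."""
    return R_sheet * length / R_measured

# Example: 50 ohm/sq film, 200 um long line, 4 kohm measured -> 2.5 um width
print(line_width(R_measured=4000.0, R_sheet=50.0, length=200e-6))
```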
Gulmans, J; Vollenbroek-Hutten, M M R; Van Gemert-Pijnen, J E W C; Van Harten, W H
2007-10-01
Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment methods are often one-sided evaluations that are not appropriate for integrated care settings. We developed an evaluation approach that takes into account the multiple communication links and evaluation perspectives inherent to these settings. In this study, we describe this approach, using the integrated care setting of cerebral palsy as an illustration. The approach follows a three-step mixed design in which the results of each step are used to mark out the subsequent step's focus. The first step, a patient questionnaire, aims to identify quality gaps experienced by patients, comparing their expectancies and experiences with respect to patient-professional and inter-professional communication. The resulting gaps form the input of in-depth interviews with a subset of patients to evaluate underlying factors of ineffective communication. The resulting factors form the input of the final step's focus group meetings with professionals to corroborate and complete the findings. By combining methods, the presented approach aims to minimize the limitations inherent to the application of single methods. The comprehensiveness of the approach enables its applicability in various integrated care settings. Its sequential design allows for in-depth evaluation of relevant quality gaps. Further research is needed to evaluate the approach's feasibility in practice. In our subsequent study, we present the results of the approach in the integrated care setting of children with cerebral palsy in three Dutch care regions.
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Darian, Armen; Sindir, Munir
1992-01-01
We have applied and compared the efficiency and accuracy of two commonly used numerical methods for the solution of the Navier-Stokes equations. The artificial compressibility method augments the continuity equation with a transient pressure term and allows one to solve the modified equations as a coupled system. Due to its implicit nature, one can have the luxury of taking a large temporal integration step at the expense of higher memory requirements and larger operation counts per step. Meanwhile, the fractional step method splits the Navier-Stokes equations into a sequence of differential operators and integrates them in multiple steps. The memory requirement and operation count per time step are low; however, the restriction on the size of the time marching step is more severe. To explore the strengths and weaknesses of these two methods, we used them for the computation of a two-dimensional driven cavity flow with Reynolds numbers of 100 and 1000, respectively. Three grid sizes, 41 x 41, 81 x 81, and 161 x 161, were used. The computations were considered converged after the L2-norm of the change in the dependent variables between two consecutive time steps fell below 10^-5.
A point implicit time integration technique for slow transient flow problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kadioglu, Samet Y.; Berry, Ray A.; Martineau, Richard C.
2015-05-01
We introduce a point implicit time integration technique for slow transient flow problems. The method treats the solution variables of interest (which can be located at cell centers, cell edges, or cell nodes) implicitly, while the rest of the information related to the same or other variables is handled explicitly. The method does not require implicit iteration; instead it time advances the solutions in a similar spirit to explicit methods, except it involves a few additional function evaluation steps. Moreover, the method is unconditionally stable, as a fully implicit method would be. This new approach exhibits the simplicity of implementation of explicit methods and the stability of implicit methods. It is specifically designed for slow transient flow problems of long duration wherein one would like to perform time integrations with very large time steps. Because the method can be time inaccurate for fast transient problems, particularly with larger time steps, an appropriate solution strategy for a problem that evolves from a fast to a slow transient would be to integrate the fast transient with an explicit or semi-implicit technique and then switch to this point implicit method as soon as the time variation slows sufficiently. We have solved several test problems that result from scalar or systems of flow equations. Our findings indicate the new method can integrate slow transient problems very efficiently, and its implementation is very robust.
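A scalar caricature of the point-implicit update (hypothetical coefficients; the actual method applies this per solution variable on a mesh): the stiff local term is taken at the new time level, giving a closed-form update with no iteration:

```python
def point_implicit_step(u, dt, a_diag, explicit_rhs):
    """One point-implicit update for du/dt = -a_diag*u + r(u): the local stiff
    term is implicit, everything else uses the old-time value.
    (u_new - u)/dt = -a_diag*u_new + r(u)  =>  closed-form update."""
    return (u + dt * explicit_rhs(u)) / (1.0 + dt * a_diag)

import math
a = 100.0
r = lambda u: math.sin(0.01 * u) + 1.0     # slow source (hypothetical)
u = 0.0
for _ in range(100):
    u = point_implicit_step(u, dt=1.0, a_diag=a, explicit_rhs=r)   # dt >> 1/a
print(u)   # approaches the quasi-steady balance a*u ≈ r(u)
```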
Tricco, Andrea C; Antony, Jesmin; Soobiah, Charlene; Kastner, Monika; MacDonald, Heather; Cogo, Elise; Lillie, Erin; Tran, Judy; Straus, Sharon E
2016-05-01
To describe and compare, through a scoping review, emerging knowledge synthesis methods for integrating qualitative and quantitative evidence in health care, in terms of expertise required, similarities, differences, strengths, limitations, and steps involved in using the methods. Electronic databases (e.g., MEDLINE) were searched, and two reviewers independently selected studies and abstracted data for qualitative analysis. In total, 121 articles reporting seven knowledge synthesis methods (critical interpretive synthesis, integrative review, meta-narrative review, meta-summary, mixed studies review, narrative synthesis, and realist review) were included after screening of 17,962 citations and 1,010 full-text articles. Common similarities among methods related to the entire synthesis process, while common differences related to the research question and eligibility criteria. The most common strength was a comprehensive synthesis providing rich contextual data, whereas the most common weakness was a highly subjective method that was not reproducible. For critical interpretive synthesis, meta-narrative review, meta-summary, and narrative synthesis, guidance was not provided for some steps of the review process. Some of the knowledge synthesis methods provided guidance on all steps, whereas other methods were missing guidance on the synthesis process. Further work is needed to clarify these emerging knowledge synthesis methods. Copyright © 2016 Elsevier Inc. All rights reserved.
Analysis of 3D poroelastodynamics using BEM based on modified time-step scheme
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Petrov, A. N.; Vorobtsov, I. V.
2017-10-01
The development of 3D boundary-element modeling of dynamic partially saturated poroelastic media using a stepping scheme is presented in this paper. The Boundary Element Method (BEM) in the Laplace domain and a time-stepping scheme for numerical inversion of the Laplace transform are used to solve the boundary value problem. A modified stepping scheme with a variable integration step is applied to calculate the quadrature coefficients, using the symmetry of the integrand and integral formulas for strongly oscillating functions. The problem of a force acting on the end of a poroelastic prismatic cantilever was solved using the developed method. A comparison of the results obtained by the traditional stepping scheme with the solutions obtained by the modified scheme shows that the combined formulas yield better computational efficiency.
Integrals for IBS and beam cooling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burov, A. (Fermilab)
Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in the stochastic cooling, wake fields and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied for the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
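For the dispersion integral specifically, a fast Hilbert transform can be computed with FFTs; a minimal sketch using scipy (the profile below is a hypothetical stand-in for a beam distribution, not the paper's method):

```python
import numpy as np
from scipy.signal import hilbert

# scipy's hilbert returns the analytic signal; its imaginary part is the
# Hilbert transform, computed with FFTs instead of an O(N^2) quadrature.
x = np.linspace(-10, 10, 1024)
profile = np.exp(-x**2)            # hypothetical real distribution
H = np.imag(hilbert(profile))      # fast Hilbert (dispersion-type) transform
print(H[:4])
```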
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
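A drastically simplified sketch of the multiple-time-step idea, fast modes subcycled with a small step inside one large step for the slow modes; the two scalar surrogates and step counts are illustrative assumptions, not the UCLA model:

```python
def split_step(u_fast, u_slow, dt_slow, n_sub, f_fast, f_slow):
    """Advance a fast mode with n_sub small steps inside one large slow step,
    then recombine -- a minimal caricature of mode-split multiple time stepping."""
    dt_fast = dt_slow / n_sub
    for _ in range(n_sub):
        u_fast = u_fast + dt_fast * f_fast(u_fast)   # gravity-wave-like mode
    u_slow = u_slow + dt_slow * f_slow(u_slow)       # low-frequency mode
    return u_fast, u_slow

f_fast = lambda u: -50.0 * u      # fast surrogate
f_slow = lambda u: -0.1 * u       # slow surrogate
uf, us = 1.0, 1.0
for _ in range(10):
    uf, us = split_step(uf, us, dt_slow=0.1, n_sub=25, f_fast=f_fast, f_slow=f_slow)
print(uf, us)
```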
Method and apparatus for in-system redundant array repair on integrated circuits
Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN
2008-07-29
Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from its source, through the control data selector, and to the memory arrays to control the redundancy logic of the memory arrays.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, Stephen; Smith, James H.; Sniegowski, Jeffry J.; McWhorter, Paul J.
1998-01-01
A method for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.
2017-10-12
Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.
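One simple way to realize adaptive time stepping in a collisional Monte Carlo push (a sketch under our own assumptions, not the authors' Beliaev-Budker implementation) is to limit the fractional change per step from the deterministic drag:

```python
import numpy as np

rng = np.random.default_rng(1)

def drag(v):      # hypothetical collisional slowing-down coefficient
    return -v / (1.0 + v**2)

def diff(v):      # hypothetical velocity-space diffusion coefficient
    return 0.1 / (1.0 + abs(v))

def collide(v, t_end, rel=0.05):
    """Euler-Maruyama Monte Carlo push with an adaptive step: dt is chosen so
    the deterministic drag changes v by at most a fixed fraction per step."""
    t = 0.0
    while t < t_end:
        dt = min(rel * abs(v) / max(abs(drag(v)), 1e-12), t_end - t)
        v = v + drag(v) * dt + np.sqrt(2 * diff(v) * dt) * rng.standard_normal()
        t += dt
    return v

print(np.mean([collide(3.0, t_end=2.0) for _ in range(200)]))
```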
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at identifying the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
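A toy version of the evidence-selection step, choosing the next diagnostic test by minimizing expected posterior entropy; the failure modes, test names, and likelihoods are all hypothetical:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

prior = np.array([0.6, 0.4])                          # P(failure mode), hypothetical
likelihood = {"oil_gas_test": np.array([0.9, 0.2]),   # P(positive | mode)
              "winding_test": np.array([0.5, 0.5])}   # uninformative test

def expected_posterior_entropy(prior, lik):
    """Average posterior entropy over the two possible test outcomes."""
    h = 0.0
    for outcome_lik in (lik, 1.0 - lik):
        p_outcome = np.sum(outcome_lik * prior)
        posterior = outcome_lik * prior / p_outcome
        h += p_outcome * entropy(posterior)
    return h

# Next-step selection: run the test that most reduces expected uncertainty.
best = min(likelihood, key=lambda t: expected_posterior_entropy(prior, likelihood[t]))
print(best)   # -> oil_gas_test
```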
Analysis of real-time numerical integration methods applied to dynamic clamp experiments.
Butera, Robert J; McCarthy, Maeve L
2004-12-01
Real-time systems are frequently used as an experimental tool, whereby simulated models interact in real time with neurophysiological experiments. The most demanding of these techniques is known as the dynamic clamp, where simulated ion channel conductances are artificially injected into a neuron via intracellular electrodes for measurement and stimulation. Methodologies for implementing the numerical integration of the gating variables in real time typically employ first-order numerical methods, either Euler or exponential Euler (EE). EE is often used for rapidly integrating ion channel gating variables. We find via simulation studies that for small time steps, both methods are comparable, but at larger time steps, EE performs worse than Euler. We derive error bounds for both methods, and find that the error can be characterized in terms of two ratios: time step over time constant, and voltage measurement error over the slope factor of the steady-state activation curve of the voltage-dependent gating variable. These ratios reliably bound the simulation error and yield results consistent with the simulation analysis. Our bounds quantitatively illustrate how measurement error restricts the accuracy that can be obtained by using smaller step sizes. Finally, we demonstrate that Euler can be computed with identical computational efficiency as EE.
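The two update rules compared in this study, applied to a single gating variable dx/dt = (x_inf - x)/tau with coefficients frozen over the step, can be sketched as follows (parameter values illustrative):

```python
import math

def gate_euler(x, x_inf, tau, dt):
    """Forward Euler for dx/dt = (x_inf - x)/tau."""
    return x + dt * (x_inf - x) / tau

def gate_exp_euler(x, x_inf, tau, dt):
    """Exponential Euler: exact if x_inf and tau are constant over the step."""
    return x_inf + (x - x_inf) * math.exp(-dt / tau)

x_inf, tau, dt = 1.0, 2.0, 0.5
x_e = x_ee = 0.0
for _ in range(20):
    x_e = gate_euler(x_e, x_inf, tau, dt)
    x_ee = gate_exp_euler(x_ee, x_inf, tau, dt)
print(x_e, x_ee)   # both relax toward x_inf; they differ at coarse dt
```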
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
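A minimal sketch in the spirit of the polynomial-correction technique: the widely used two-sample polyBLEP correction added around each sawtooth wrap. This particular correction function is our illustrative choice, not necessarily the paper's integrated B-spline variant:

```python
import numpy as np

def poly_blep(t, dt):
    """Two-sample polynomial correction applied around each discontinuity."""
    if t < dt:                        # just after the step
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:                  # just before the step
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def sawtooth(freq, sr, n):
    """Naive sawtooth with a polyBLEP-style correction at each phase wrap."""
    phase, dt, out = 0.0, freq / sr, np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

y = sawtooth(440.0, 44100.0, 512)
print(y[:4])
```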
AN INTEGRATED PERSPECTIVE ON THE ASSESSMENT OF TECHNOLOGIES: INTEGRATE-HTA.
Wahlster, Philip; Brereton, Louise; Burns, Jacob; Hofmann, Björn; Mozygemba, Kati; Oortwijn, Wija; Pfadenhauer, Lisa; Polus, Stephanie; Rehfuess, Eva; Schilling, Imke; van der Wilt, Gert Jan; Gerhardus, Ansgar
2017-01-01
Current health technology assessment (HTA) is not well equipped to assess complex technologies as insufficient attention is being paid to the diversity in patient characteristics and preferences, context, and implementation. Strategies to integrate these and several other aspects, such as ethical considerations, in a comprehensive assessment are missing. The aim of the European research project INTEGRATE-HTA was to develop a model for an integrated HTA of complex technologies. A multi-method, four-stage approach guided the development of the INTEGRATE-HTA Model: (i) definition of the different dimensions of information to be integrated, (ii) literature review of existing methods for integration, (iii) adjustment of concepts and methods for assessing distinct aspects of complex technologies in the frame of an integrated process, and (iv) application of the model in a case study and subsequent revisions. The INTEGRATE-HTA Model consists of five steps, each involving stakeholders: (i) definition of the technology and the objective of the HTA; (ii) development of a logic model to provide a structured overview of the technology and the system in which it is embedded; (iii) evidence assessment on effectiveness, economic, ethical, legal, and socio-cultural aspects, taking variability of participants, context, implementation issues, and their interactions into account; (iv) populating the logic model with the data generated in step 3; (v) structured process of decision-making. The INTEGRATE-HTA Model provides a structured process for integrated HTAs of complex technologies. Stakeholder involvement in all steps is essential as a means of ensuring relevance and meaningful interpretation of the evidence.
NASA Astrophysics Data System (ADS)
Yoneda, Makoto; Dohmeki, Hideo
A position control system with large torque, low vibration, and high resolution can be obtained by applying constant-current microstep drive to a hybrid stepping motor. However, losses are large, because the current is controlled uniformly regardless of the load torque. Sensorless control, as used for permanent magnet motors, is an effective technique for realizing a high-efficiency position control system, but the control methods proposed so far are aimed at speed control. This paper therefore proposes switching between the microstep drive and the sensorless drive. The switching of the drive method was verified by simulation and experiment. At no load, it was confirmed that no large speed change occurs at the moment of switching when the electrical angle is set and the integrator is reset to zero. Under load, a large speed change was observed. The proposed system can switch drive methods without producing a speed change by initializing the integrator with the estimated value. With this technique, a low-loss position control system that exploits the advantages of the hybrid stepping motor has been built.
Integrating ethics in health technology assessment: many ways to Rome.
Hofmann, Björn; Oortwijn, Wija; Bakke Lysdahl, Kristin; Refolo, Pietro; Sacchini, Dario; van der Wilt, Gert Jan; Gerhardus, Ansgar
2015-01-01
The aim of this study was to identify and discuss appropriate approaches to integrate ethical inquiry in health technology assessment (HTA). The key question is how ethics can be integrated in HTA. This is addressed in two steps: by investigating what it means to integrate ethics in HTA, and by assessing how suitable the various methods in ethics are to be integrated in HTA according to these meanings of integration. In the first step, we found that integrating ethics can mean that ethics is (a) subsumed under or (b) combined with other parts of the HTA process; that it can be (c) coordinated with other parts; or that (d) ethics actively interacts and changes other parts of the HTA process. For the second step, we found that the various methods in ethics have different merits with respect to the four conceptions of integration in HTA. Traditional approaches in moral philosophy tend to be most suited to be subsumed or combined, while processual approaches being close to the HTA or implementation process appear to be most suited to coordinated and interactive types of integration. The article provides a guide for choosing the ethics approach that appears most appropriate for the goals and process of a particular HTA.
Adaptive Implicit Non-Equilibrium Radiation Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philip, Bobby; Wang, Zhen; Berrill, Mark A
2013-01-01
We describe methods for accurate and efficient long-term time integration of non-equilibrium radiation diffusion systems: implicit time integration for efficient long-term integration of stiff multiphysics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian-Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level-independent solver convergence.
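A scalar sketch of local-error-based step size control, step doubling with accept/reject and rescaling; the controller constants and the test problem are illustrative assumptions, not the paper's radiation diffusion system:

```python
def controlled_step(f, y, t, dt, tol):
    """Step-doubling error control: compare one full Euler step against two
    half steps, accept if the difference meets the tolerance, and rescale dt."""
    one = y + dt * f(t, y)
    half = y + 0.5 * dt * f(t, y)
    two = half + 0.5 * dt * f(t + 0.5 * dt, half)
    err = abs(two - one)
    if err <= tol:
        return t + dt, two, dt * min(2.0, 0.9 * (tol / max(err, 1e-15)) ** 0.5)
    return t, y, 0.5 * dt            # reject and retry with a smaller step

f = lambda t, y: -10.0 * y + 10.0    # stiff-ish relaxation surrogate
t, y, dt = 0.0, 0.0, 0.1
while t < 5.0:
    t, y, dt = controlled_step(f, y, t, min(dt, 5.0 - t), tol=1e-4)
print(y)   # -> approaches 1.0
```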
Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul
2017-12-01
The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method for treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response, in which velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care; together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step, so the time step can be chosen independently of the underlying time-stepping scheme and may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate the convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd.
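The restraint projection can be illustrated in a few lines: velocities are projected onto the null space of the linear constraints, consistently with the mass matrix (toy values, not the paper's implementation):

```python
import numpy as np

def project_velocity(v, M_inv, J):
    """Project velocities onto the null space of linear constraints J v = 0,
    consistent with the mass matrix (momentum-conserving correction)."""
    lam = np.linalg.solve(J @ M_inv @ J.T, J @ v)   # constraint impulses
    return v - M_inv @ J.T @ lam

M_inv = np.diag([1.0, 1.0, 0.5])      # inverse mass matrix (hypothetical)
J = np.array([[1.0, -1.0, 0.0]])      # restraint: v0 - v1 = 0
v = np.array([2.0, 0.0, 1.0])
v_new = project_velocity(v, M_inv, J)
print(v_new, J @ v_new)               # constraint satisfied after projection
```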
Twostep-by-twostep PIRK-type PC methods with continuous output formulas
NASA Astrophysics Data System (ADS)
Cong, Nguyen Huu; Xuan, Le Ngoc
2008-11-01
This paper deals with parallel predictor-corrector (PC) iteration methods based on collocation Runge-Kutta (RK) corrector methods with continuous output formulas for solving nonstiff initial-value problems (IVPs) for systems of first-order differential equations. At the nth step, the continuous output formulas are used not only for predicting the stage values in the PC iteration methods but also for calculating the step values at the (n+2)th step. In this case, the integration process can proceed twostep-by-twostep. The resulting twostep-by-twostep (TBT) parallel-iterated RK-type (PIRK-type) methods with continuous output formulas (twostep-by-twostep PIRKC methods or TBTPIRKC methods) give us a faster integration process. Fixed-stepsize applications of these TBTPIRKC methods to a few widely-used test problems reveal that the new PC methods are much more efficient when compared with the well-known parallel-iterated RK methods (PIRK methods), parallel-iterated RK-type PC methods with continuous output formulas (PIRKC methods), and the sequential explicit RK codes DOPRI5 and DOP853 available from the literature.
The reduced basis method for the electric field integral equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembling of the reduced basis, which needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
Kaabi, Mohamed Ghaith; Tonnelier, Arnaud; Martinez, Dominique
2011-05-01
In traditional event-driven strategies, spike timings are analytically given or calculated with arbitrary precision (up to machine precision). Exact computation is possible only for simplified neuron models, mainly the leaky integrate-and-fire model. In a recent paper, Zheng, Tonnelier, and Martinez (2009) introduced an approximate event-driven strategy, named voltage stepping, that allows the generic simulation of nonlinear spiking neurons. Promising results were achieved in the simulation of single quadratic integrate-and-fire neurons. Here, we assess the performance of voltage stepping in network simulations by considering more complex neurons (quadratic integrate-and-fire neurons with adaptation) coupled with multiple synapses. To handle the discrete nature of synaptic interactions, we recast voltage stepping in a general framework, the discrete event system specification. The efficiency of the method is assessed through simulations and comparisons with a modified time-stepping scheme of the Runge-Kutta type. We demonstrated numerically that the original order of voltage stepping is preserved when simulating connected spiking neurons, independent of the network activity and connectivity.
The integration of the motion equations of low-orbiting earth satellites using Taylor's method
NASA Astrophysics Data System (ADS)
Krivov, A. V.; Chernysheva, N. A.
1990-04-01
A method for the numerical integration of the equations of motion of a satellite is proposed, taking the earth's oblateness and atmospheric drag into account. The method is based on Taylor's representation of the solution to the corresponding polynomial system. An algorithm for choosing the integration step and estimating the error is constructed. The method is realized as a subroutine package, applied to a low-orbiting earth satellite, and the results are compared with those obtained using Everhart's method.
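A minimal sketch of the Taylor-method machinery on a polynomial test equation y' = y^2 (our own example, not the satellite system): coefficients follow from a Cauchy-product recurrence, and the step is chosen so the last retained term is below the tolerance:

```python
import numpy as np

def taylor_step(y, order=12, tol=1e-12):
    """One Taylor-series step for y' = y**2 (exact solution y = 1/(1/y0 - t)).
    Coefficients come from the Cauchy-product recurrence; the step size is
    chosen so the last retained term is below the tolerance."""
    a = np.zeros(order + 1)
    a[0] = y
    for k in range(order):
        c_k = np.dot(a[:k + 1], a[k::-1])    # coefficient of t^k in y**2
        a[k + 1] = c_k / (k + 1)
    h = (tol / max(abs(a[order]), 1e-300)) ** (1.0 / order)
    return np.polyval(a[::-1], h), h          # evaluate the series at t = h

y, t = 0.5, 0.0
while t < 1.0:
    y, h = taylor_step(y)
    t += h
print(y, 1.0 / (2.0 - t))   # numerical vs exact solution at the final time
```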
Multistep integration formulas for the numerical integration of the satellite problem
NASA Technical Reports Server (NTRS)
Lundberg, J. B.; Tapley, B. D.
1981-01-01
This paper describes the use of two Class 2 (fixed-mesh, fixed-order, multistep) integration packages of the PECE type for the numerical integration of the second-order, nonlinear, ordinary differential equation of the satellite orbit problem. These two methods are referred to as the general and the second sum formulations. The derivation of the basic equations which characterize each formulation and the role of the basic equations in the PECE algorithm are discussed. Possible starting procedures are examined which may be used to supply the initial set of values required by the fixed-mesh multistep integrators. The results of the general and second sum integrators are compared to the results of various fixed-step and variable-step integrators.
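A minimal PECE sketch (Adams-Bashforth 2 predictor, trapezoidal corrector) on a hypothetical oscillator, to make the predict-evaluate-correct-evaluate cycle concrete; the actual packages are higher-order Class 2 formulations:

```python
import numpy as np

def pece_step(f, t, y, y_prev, dt):
    """One PECE step: Adams-Bashforth 2 Predict, Evaluate, trapezoidal
    (Adams-Moulton 2) Correct; the final Evaluate happens on the next step."""
    fp, fc = f(t - dt, y_prev), f(t, y)
    y_pred = y + dt * (1.5 * fc - 0.5 * fp)     # P
    f_pred = f(t + dt, y_pred)                  # E
    return y + 0.5 * dt * (fc + f_pred)         # C

# Hypothetical oscillator standing in for the satellite equations, y = (x, v)
f = lambda t, y: np.array([y[1], -y[0]])
dt = 0.01
y_prev = np.array([1.0, 0.0])
y = y_prev + dt * f(0.0, y_prev)    # one Euler step starts the multistep method
t = dt
for _ in range(1000):
    y, y_prev, t = pece_step(f, t, y, y_prev, dt), y, t + dt
print(y)
```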
Trigonometrically-fitted Scheifele two-step methods for perturbed oscillators
NASA Astrophysics Data System (ADS)
You, Xiong; Zhang, Yonghui; Zhao, Jinxi
2011-07-01
In this paper, a new family of trigonometrically-fitted Scheifele two-step (TFSTS) methods for the numerical integration of perturbed oscillators is proposed and investigated. An essential feature of TFSTS methods is that they are exact in both the internal stages and the updates when solving the unperturbed harmonic oscillator y'' = -ω^2 y for known frequency ω. Based on linear operator theory, the necessary and sufficient conditions for TFSTS methods of up to order five are derived. Two specific TFSTS methods, of orders four and five respectively, are constructed and their stability and phase properties examined. In the five numerical experiments carried out, the new integrators are shown to be more efficient and competent than some well-known methods in the literature.
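The exactness property on the unperturbed oscillator amounts to reproducing the rotation that solves y'' = -ω^2 y exactly; a sketch of that exact one-step update (illustrative of the fitting target, not the TFSTS coefficients):

```python
import numpy as np

def harmonic_exact_step(y, v, w, h):
    """The update a trigonometrically-fitted method reproduces exactly on
    y'' = -w**2 * y: a rotation through angle w*h in (y, v/w) phase space."""
    c, s = np.cos(w * h), np.sin(w * h)
    return c * y + (s / w) * v, -w * s * y + c * v

w, h = 3.0, 0.1
y, v = 1.0, 0.0
for _ in range(1000):
    y, v = harmonic_exact_step(y, v, w, h)
print(y, np.cos(w * 1000 * h))   # agreement to machine precision
```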
Thermal Model Development for Ares I-X
NASA Technical Reports Server (NTRS)
Amundsen, Ruth M.; DelCorso, Joe
2008-01-01
Thermal analysis for the Ares I-X vehicle has involved extensive thermal model integration, since thermal models of vehicle elements came from several different NASA and industry organizations. Many valuable lessons were learned in terms of model integration and validation. Modeling practices such as submodel, analysis group and symbol naming were standardized to facilitate the later model integration. Upfront coordination of coordinate systems, timelines, units, symbols and case scenarios was very helpful in minimizing integration rework. A process for model integration was developed that included pre-integration runs and basic checks of both models, and a step-by-step process to efficiently integrate one model into another. Extensive use of model logic was used to create scenarios and timelines for avionics and air flow activation. Efficient methods of model restart between case scenarios were developed. Standardization of software version and even compiler version between organizations was found to be essential. An automated method for applying aeroheating to the full integrated vehicle model, including submodels developed by other organizations, was developed.
EXPLICIT SYMPLECTIC-LIKE INTEGRATORS WITH MIDPOINT PERMUTATIONS FOR SPINNING COMPACT BINARIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Junjie; Wu, Xin; Huang, Guoqing
2017-01-01
We refine the recently developed fourth-order extended phase space explicit symplectic-like methods for inseparable Hamiltonians using Yoshida's triple product combined with a midpoint permuted map. The midpoint between the original variables and their corresponding extended variables at every integration step is readjusted as the initial values of the original variables and their corresponding extended ones at the next integration step. The triple-product construction is apparently superior to the composition of two triple products in computational efficiency. Above all, the new midpoint permutations are more effective in maintaining the equality of the original variables and their corresponding extended ones at each integration step than the existing sequent permutations of momenta and coordinates. As a result, our new construction shares the benefit of implicit symplectic integrators in the conservation of the second post-Newtonian Hamiltonian of spinning compact binaries. Especially for the chaotic case, it can work well where the existing sequent permuted algorithm cannot. When dissipative effects from the gravitational radiation reaction are included, the new symplectic-like method has a secular drift in the energy error of the dissipative system for orbits that are regular in the absence of radiation, as an implicit symplectic integrator does. In spite of this, it is superior to the same-order implicit symplectic integrator in accuracy and efficiency. The new method is particularly useful for discussing the long-term evolution of inseparable Hamiltonian problems.
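A hedged sketch of the underlying construction may help: the phase space is doubled, the extended Hamiltonian H(q, P) + H(Q, p) splits into two exactly solvable pieces even when H itself is inseparable, and a midpoint permutation resets both copies to their mean after each step. The sketch below is only a second-order leapfrog on a toy inseparable Hamiltonian, not the fourth-order triple-product construction of the paper.

    import numpy as np

    # Toy inseparable Hamiltonian (illustrative assumption):
    #     H(q, p) = 0.5 * p**2 * (1 + q**2) + 0.5 * q**2
    def dHdq(q, p): return q * p**2 + q
    def dHdp(q, p): return p * (1 + q**2)
    def H(q, p):    return 0.5 * p**2 * (1 + q**2) + 0.5 * q**2

    def step(q, p, Q, P, h):
        # Leapfrog over H(q, P) + H(Q, p); each sub-flow is explicit
        # because its arguments stay frozen while it acts.
        p -= 0.5 * h * dHdq(q, P); Q += 0.5 * h * dHdp(q, P)
        q += h * dHdp(Q, p);       P -= h * dHdq(Q, p)
        p -= 0.5 * h * dHdq(q, P); Q += 0.5 * h * dHdp(q, P)
        qm, pm = 0.5 * (q + Q), 0.5 * (p + P)   # midpoint permutation
        return qm, pm, qm, pm

    q = p = 0.4; Q, P = q, p
    h, H0 = 1e-3, H(0.4, 0.4)
    for _ in range(100000):
        q, p, Q, P = step(q, p, Q, P, h)
    print(H(q, p) - H0)             # energy error to monitor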
hp-Adaptive time integration based on the BDF for viscous flows
NASA Astrophysics Data System (ADS)
Hay, A.; Etienne, S.; Pelletier, D.; Garon, A.
2015-06-01
This paper presents a procedure based on the Backward Differentiation Formulas of order 1 to 5 to obtain efficient time integration of the incompressible Navier-Stokes equations. The adaptive algorithm performs both stepsize and order selection to control, respectively, the solution accuracy and the computational efficiency of the time integration process. The stepsize selection (h-adaptivity) is based on a local error estimate and an error controller to guarantee that the numerical solution accuracy is within a user-prescribed tolerance. The order selection (p-adaptivity) relies on the idea that low-accuracy solutions can be computed efficiently by low-order time integrators, while accurate solutions require high-order time integrators to keep the computational time low. The selection is based on a stability test that detects growing numerical noise and deems a method of order p stable if there is no method of lower order that delivers the same solution accuracy for a larger stepsize. Hence, it guarantees both that (1) the method of integration operates inside its stability region and (2) the time integration procedure is computationally efficient. The proposed time integration procedure also features time-step rejection and quarantine mechanisms, a modified Newton method with a predictor, and dense output techniques to compute the solution at off-step points.
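As a sketch of the h-adaptivity ingredient alone (the paper's full BDF1-5 order-adaptive machinery is not reproduced), the code below applies BDF1 (backward Euler) to a stiff Prothero-Robinson test problem, estimates the local error by step doubling, and drives the stepsize with a standard controller of the form h_new = h * (tol/err)^(1/(p+1)).

    import numpy as np

    lam = -1e4                          # stiff Prothero-Robinson problem:
    # y' = lam * (y - cos t) - sin t, exact solution y(t) = cos(t).

    def be_step(t, y, h):               # backward Euler; linear, so solved exactly
        tn = t + h
        return (y - h * lam * np.cos(tn) - h * np.sin(tn)) / (1.0 - h * lam)

    t, y, h, tol = 0.0, 1.0, 1e-6, 1e-8
    while t < 1.0:
        h = min(h, 1.0 - t)
        y1 = be_step(t, y, h)                                   # one full step
        y2 = be_step(t + h / 2, be_step(t, y, h / 2), h / 2)    # two half steps
        err = abs(y2 - y1)                                      # local error estimate
        if err <= tol:
            t, y = t + h, y2                                    # accept the step
        h *= min(5.0, max(0.2, 0.9 * (tol / (err + 1e-300)) ** 0.5))
    print(t, y, np.cos(1.0))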
Integration of alternative feedstreams for biomass treatment and utilization
Hennessey, Susan Marie [Avondale, PA; Friend, Julie [Claymont, DE; Dunson, Jr., James B.; Tucker, III, Melvin P.; Elander, Richard T [Evergreen, CO; Hames, Bonnie [Westminster, CO
2011-03-22
The present invention provides a method for treating biomass composed of integrated feedstocks to produce fermentable sugars. One aspect of the methods described herein includes a pretreatment step wherein biomass is integrated with an alternative feedstream and the resulting integrated feedstock, at relatively high concentrations, is treated with a low concentration of ammonia relative to the dry weight of biomass. In another aspect, a high solids concentration of pretreated biomass is integrated with an alternative feedstream for saccharification.
NASA Technical Reports Server (NTRS)
Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
The efficiency gains obtained using higher-order implicit Runge-Kutta schemes, as compared with the second-order accurate backward difference schemes, for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a nonlinear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on inexact Newton methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques, and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step, while the other (PGMRES) uses the generalized minimal residual method. Results demonstrating the relative superiority of these Newton-based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
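The inexact-Newton/Krylov ingredient can be sketched with SciPy's newton_krylov, which embeds a Krylov linear solve in each Newton step; here it is applied to a single backward-Euler step of a small stiff system. The paper's NMG, LMG, and PGMRES solvers and its implicit Runge-Kutta stage structure are not reproduced, and the right-hand side below is an illustrative assumption.

    import numpy as np
    from scipy.optimize import newton_krylov

    def f(y):                                   # assumed stiff nonlinear RHS
        return np.array([-1000.0 * y[0] + y[1] ** 2, y[0] - y[1]])

    y_n, h = np.array([1.0, 0.5]), 1e-2
    residual = lambda y: y - y_n - h * f(y)     # backward-Euler residual
    y_np1 = newton_krylov(residual, y_n, f_tol=1e-12)
    print(y_np1)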
A two-step, fourth-order method with energy preserving properties
NASA Astrophysics Data System (ADS)
Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato
2012-09-01
We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization by means of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out always to be the case when the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed, and a number of test problems are finally presented in order to compare the behavior of the new methods with the theoretical results.
Spatial Data Integration Using Ontology-Based Approach
NASA Astrophysics Data System (ADS)
Hasani, S.; Sadeghi-Niaraki, A.; Jelokhani-Niaraki, M.
2015-12-01
In today's world, the necessity of spatial data for various organizations has become so crucial that many of these organizations have begun to produce spatial data themselves. In some circumstances, the need to obtain integrated data in real time requires a sustainable mechanism for real-time integration. A case in point is disaster management, which requires obtaining real-time data from various sources of information. One of the problematic challenges in such situations is the high degree of heterogeneity between the data of different organizations. To solve this issue, we introduce an ontology-based method to provide sharing and integration capabilities for the existing databases. In addition to resolving semantic heterogeneity, our proposed method also provides better access to information. Our approach consists of three steps. The first step is the identification of the objects in a relational database; the semantic relationships between them are then modelled and, subsequently, the ontology of each database is created. In the second step, the relative ontology is inserted into the database, and the relationship of each class of the ontology is inserted into a newly created column in the database tables. The last step consists of a platform based on service-oriented architecture, which allows integration of the data using the concept of ontology mapping. The proposed approach, in addition to being fast and low cost, makes the process of data integration easy; the data remain unchanged, thus taking advantage of the legacy applications provided.
Kinetic energy definition in velocity Verlet integration for accurate pressure evaluation
NASA Astrophysics Data System (ADS)
Jung, Jaewoon; Kobayashi, Chigusa; Sugita, Yuji
2018-04-01
In molecular dynamics (MD) simulations, a proper definition of kinetic energy is essential for controlling pressure as well as temperature in the isothermal-isobaric condition. The virial theorem provides an equation that connects the average kinetic energy with the product of particle coordinate and force. In this paper, we show that the theorem is satisfied in MD simulations with a larger time step and holonomic constraints of bonds, only when a proper definition of kinetic energy is used. We provide a novel definition of kinetic energy, which is calculated from velocities at the half-time steps (t - Δt/2 and t + Δt/2) in the velocity Verlet integration method. MD simulations of a 1,2-dispalmitoyl-sn-phosphatidylcholine (DPPC) lipid bilayer and a water box using the kinetic energy definition could reproduce the physical properties in the isothermal-isobaric condition properly. We also develop a multiple time step (MTS) integration scheme with the kinetic energy definition. MD simulations with the MTS integration for the DPPC and water box systems provided the same quantities as the velocity Verlet integration method, even when the thermostat and barostat are updated less frequently.
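One plausible reading of the half-step kinetic-energy definition is sketched below (assumed form: the average of the kinetic energies computed from v(t - Δt/2) and v(t + Δt/2), rather than the kinetic energy of the on-step velocity); the exact expression should be taken from the paper.

    import numpy as np

    # Sketch of one assumed form of the half-step kinetic energy: average
    # the kinetic energies built from the two half-step velocities that
    # velocity Verlet actually produces, instead of using the on-step
    # velocity v(t) = (v(t - dt/2) + v(t + dt/2)) / 2.
    def half_step_kinetic(m, v_minus, v_plus):
        ke = lambda v: 0.5 * np.sum(m[:, None] * v * v)
        return 0.5 * (ke(v_minus) + ke(v_plus))

    rng = np.random.default_rng(0)
    m = np.ones(4)                              # particle masses
    v_minus = rng.standard_normal((4, 3))       # v(t - dt/2)
    v_plus = v_minus + 2e-3 * rng.standard_normal((4, 3))   # after the kick
    print(half_step_kinetic(m, v_minus, v_plus))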
Tricco, Andrea C; Antony, Jesmin; Soobiah, Charlene; Kastner, Monika; Cogo, Elise; MacDonald, Heather; D'Souza, Jennifer; Hui, Wing; Straus, Sharon E
2016-05-01
To describe and compare, through a scoping review, emerging knowledge synthesis methods for generating and refining theory, in terms of expertise required, similarities, differences, strengths, limitations, and steps involved in using the methods. Electronic databases (e.g., MEDLINE) were searched, and two reviewers independently selected studies and abstracted data for qualitative analysis. In total, 287 articles reporting nine knowledge synthesis methods (concept synthesis, critical interpretive synthesis, integrative review, meta-ethnography, meta-interpretation, meta-study, meta-synthesis, narrative synthesis, and realist review) were included after screening of 17,962 citations and 1,010 full-text articles. Strengths of the methods included comprehensive synthesis providing rich contextual data and suitability for identifying gaps in the literature, informing policy, aiding in clinical decisions, addressing complex research questions, and synthesizing patient preferences, beliefs, and values. However, many of the methods were highly subjective and not reproducible. For integrative review, meta-ethnography, and realist review, guidance was provided on all steps of the review process, whereas meta-synthesis had guidance on the fewest number of steps. Guidance for conducting the steps was often vague and sometimes absent. Further work is needed to provide direction on operationalizing these methods. Copyright © 2016 Elsevier Inc. All rights reserved.
Towards a better use of psychoanalytic concepts: a model illustrated using the concept of enactment.
Bohleber, Werner; Fonagy, Peter; Jiménez, Juan Pablo; Scarfone, Dominique; Varvin, Sverre; Zysman, Samuel
2013-06-01
It is well known that there is a lack of consensus about how to decide between competing and sometimes mutually contradictory theories, and how to integrate divergent concepts and theories. In view of this situation, the IPA Project Committee on Conceptual Integration developed a method that allows comparison between different versions of concepts, their underlying theories, and basic assumptions. Only when placed in a frame of reference can similarities and differences be seen in a methodically comprehensible and reproducible way. We used "enactment" to study the problems of comparing concepts systematically, since almost all psychoanalytic schools have developed a conceptualization of it. We assembled a provisional canon of relevant papers chosen from the different schools. The five steps of our method for analyzing the concept of enactment are presented. The first step is the history of the concept; the second, the phenomenology; the third, a methodological analysis of the construction of the concept. In order to compare different conceptualizations we must know the main dimensions of the meaning space of the concept; this is the fourth step. Finally, in step five we discuss whether, and to what extent, an integration of the different versions of enactment is possible. Copyright © 2013 Institute of Psychoanalysis.
A Lyapunov and Sacker–Sell spectral stability theory for one-step methods
Steyer, Andrew J.; Van Vleck, Erik S.
2018-04-13
Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
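The discrete QR technique on which such Lyapunov and Sacker-Sell approximations rest can be sketched in a few lines: propagate a fundamental matrix of x' = A(t)x step by step, re-orthonormalize with a QR factorization, and average the logarithms of the diagonal of R. The test matrix below is an illustrative assumption; since it is upper triangular, its Lyapunov exponents are the time averages of its diagonal entries, about -1 and -2.

    import numpy as np

    def A(t):                      # illustrative nonautonomous test matrix
        return np.array([[-1.0 + 0.5 * np.sin(t), 1.0],
                         [0.0, -2.0 + 0.5 * np.cos(t)]])

    h, T = 2e-3, 100.0
    Q, lyap, t = np.eye(2), np.zeros(2), 0.0
    for _ in range(int(T / h)):
        # One explicit midpoint step for the matrix ODE Y' = A(t) Y.
        Y = Q + h * (A(t + h / 2) @ (Q + 0.5 * h * (A(t) @ Q)))
        Q, R = np.linalg.qr(Y)
        s = np.sign(np.diag(R))
        Q, R = Q * s, s[:, None] * R           # keep diag(R) positive
        lyap += np.log(np.diag(R))
        t += h
    print(lyap / T)                # approximate Lyapunov exponents (-1, -2)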
Assembly and Multiplex Genome Integration of Metabolic Pathways in Yeast Using CasEMBLR.
Jakočiūnas, Tadas; Jensen, Emil D; Jensen, Michael K; Keasling, Jay D
2018-01-01
Genome integration is a vital step for implementing large biochemical pathways to build a stable microbial cell factory. Although traditional strain construction strategies are well established for the model organism Saccharomyces cerevisiae, recent advances in CRISPR/Cas9-mediated genome engineering allow much higher throughput and robustness in terms of strain construction. In this chapter, we describe CasEMBLR, a highly efficient and marker-free genome engineering method for one-step integration of in vivo assembled expression cassettes in multiple genomic sites simultaneously. CasEMBLR capitalizes on the CRISPR/Cas9 technology to generate double-strand breaks in genomic loci, thus prompting native homologous recombination (HR) machinery to integrate exogenously derived homology templates. As proof-of-principle for microbial cell factory development, CasEMBLR was used for one-step assembly and marker-free integration of the carotenoid pathway from 15 exogenously supplied DNA parts into three targeted genomic loci. As a second proof-of-principle, a total of ten DNA parts were assembled and integrated in two genomic loci to construct a tyrosine production strain, and at the same time knocking out two genes. This new method complements and improves the field of genome engineering in S. cerevisiae by providing a more flexible platform for rapid and precise strain building.
Method for improving the limit of detection in a data signal
Synovec, Robert E.; Yueng, Edward S.
1989-10-17
A method for improving the limit of detection for a data set in which experimental noise is uncorrelated along a given abscissa and an analytical signal is correlated to the abscissa, the steps comprising collecting the data set, converting the data set into a data signal including an analytical portion and the experimental noise portion, designating and adjusting a baseline of the data signal to center the experimental noise numerically about a zero reference, and integrating the data signal preserving the corresponding information for each point of the data signal. The steps of the method produce an enhanced integrated data signal which improves the limit of detection of the data signal.
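In outline, the procedure amounts to baseline-centering followed by cumulative integration, as in the following sketch (signal shape, noise level, and baseline window are illustrative assumptions): the uncorrelated noise partially cancels in the running integral while the correlated analytical signal accumulates.

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 2000)
    peak = 0.05 * np.exp(-((x - 5.0) ** 2) / 0.02)          # weak analytical signal
    raw = peak + 0.03 * rng.standard_normal(x.size) + 0.7   # noise plus offset

    baseline = np.median(raw[:200])            # estimated from a noise-only region
    centered = raw - baseline                  # noise centered about zero
    integral = np.cumsum(centered) * (x[1] - x[0])   # running integral
    print(integral.max())                      # peak accumulates; noise cancels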
NASA Astrophysics Data System (ADS)
Dessens, Olivier
2016-04-01
Integrated Assessment Models (IAMs) are used as crucial inputs to policy-making on climate change. These models simulate aspects of the economy and the climate system to deliver future projections and to explore the impact of mitigation and adaptation policies. The IAMs' climate representation is extremely important, as it can have great influence on future political action. The step-function response is a simple climate model recently developed by the UK Met Office and is an alternative method of estimating the climate response to an emission trajectory directly from global climate model step simulations. Good et al. (2013) formulated a method of reconstructing general circulation model (GCM) climate responses to emission trajectories through an idealized experiment; this method, called the "step-response approach", is based on the results of an idealized abrupt CO2 step experiment. TIAM-UCL is a technology-rich model that belongs to the family of partial-equilibrium, bottom-up models, developed at University College London to represent a wide spectrum of energy systems in 16 regions of the globe (Anandarajah et al. 2011). The model uses optimisation functions to obtain cost-efficient solutions in meeting an exogenously defined set of energy-service demands, given certain technological and environmental constraints. Furthermore, it employs linear programming techniques, making the step-function representation of the climate change response well suited to the model's mathematical formulation. For the first time, we have introduced the "step-response approach" developed at the UK Met Office into an IAM, the TIAM-UCL energy system model, and we investigate the main consequences of this modification on the results of the model in terms of climate and energy system responses. The main advantage of this approach (apart from the low computational cost it entails) is that its results are directly traceable to the GCM involved and closely connected to well-known methods of analysing GCMs with step experiments. Acknowledgments: This work is supported by the FP7 HELIX project (www.helixclimate.eu) References: Anandarajah, G., Pye, S., Usher, W., Kesicki, F., & Mcglade, C. (2011). TIAM-UCL Global model documentation. https://www.ucl.ac.uk/energy-models/models/tiam-ucl/tiam-ucl-manual Good, P., Gregory, J. M., Lowe, J. A., & Andrews, T. (2013). Abrupt CO2 experiments as tools for predicting and understanding CMIP5 representative concentration pathway projections. Climate Dynamics, 40(3-4), 1041-1053.
Step-by-step seeding procedure for preparing HKUST-1 membrane on porous α-alumina support.
Nan, Jiangpu; Dong, Xueliang; Wang, Wenjin; Jin, Wanqin; Xu, Nanping
2011-04-19
Metal-organic framework (MOF) membranes have attracted considerable attention because of their striking advantages in small-molecule separation. The preparation of an integrated MOF membrane is still a major challenge. Depositing a uniform seed layer on a support for secondary growth is a main route to obtaining an integrated MOF membrane. A novel seeding method to prepare HKUST-1 (known as Cu₃(btc)₂) membranes on porous α-alumina supports is reported. The in situ production of the seed layer was realized in step-by-step fashion via the coordination of H₃btc and Cu²⁺ on an α-alumina support. The formation process of the seed layer was observed by ultraviolet-visible absorption spectroscopy and atomic force microscopy. An integrated HKUST-1 membrane could be synthesized by secondary hydrothermal growth on the seeded support. The gas permeation performance of the membrane was evaluated. © 2011 American Chemical Society
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Hoggan, Philip
2003-01-01
This review on molecular integrals for large electronic systems (MILES) places the problem of analytical integration over exponential-type orbitals (ETOs) in a historical context. After reference to the pioneering work, particularly by Barnett, Shavitt and Yoshimine, it focuses on recent progress towards rapid and accurate analytic solutions of MILES over ETOs. Software such as the hydrogenlike wavefunction package Alchemy by Yoshimine and collaborators is described. The review focuses on convergence acceleration of these highly oscillatory integrals and in particular it highlights suitable nonlinear transformations. Work by Levin and Sidi is described and applied to MILES. A step-by-step description of progress in the use of nonlinear transformation methods to obtain efficient codes is provided. The recent approach developed by Safouhi is also presented. The current state of the art in this field is summarized to show that ab initio analytical work over ETOs is now a viable option.
One-step methods for the prediction of orbital motion, taking its periodic components into account
NASA Astrophysics Data System (ADS)
Lavrov, K. N.
1988-03-01
The paper examines the design, and analyzes the properties, of implicit one-step integration methods that use trigonometric approximation for ordinary differential equations containing periodic components. With reference to an orbital-motion prediction example, it is shown that the proposed schemes are more efficient in terms of computer memory than Everhart's (1974) approach. The results obtained make it possible to improve Everhart's method.
Numerical solution of second order ODE directly by two point block backward differentiation formula
NASA Astrophysics Data System (ADS)
Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini
2015-12-01
A direct two-point block backward differentiation formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of the integration. The derived method is implemented using a fixed step size, and the numerical results demonstrate the advantage of the direct method as compared to the reduction method.
Macro-fingerprint analysis-through-separation of licorice based on FT-IR and 2DCOS-IR
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Ping; Xu, Changhua; Yang, Yan; Li, Jin; Chen, Tao; Li, Zheng; Cui, Weili; Zhou, Qun; Sun, Suqin; Li, Huifen
2014-07-01
In this paper, a step-by-step analysis-through-separation method under the navigation of a multi-step IR macro-fingerprint (FT-IR integrated with second-derivative IR (SD-IR) and 2DCOS-IR) was developed for comprehensively characterizing the hierarchical chemical fingerprints of licorice, from the entirety down to single active components. The chemical profile variation rules of three parts (flavonoids, saponins and saccharides) in the separation process were holistically revealed, and the number of matching peaks and the correlation coefficients with pure-compound standards increased along the extraction directions. The findings were supported by UPLC results and a verification experiment on the aqueous separation process. It has been demonstrated that the developed multi-step IR macro-fingerprint analysis-through-separation approach could be a rapid, effective and integrated method, not only for objectively providing comprehensive chemical characterization of licorice and all its separated parts, but also for rapidly revealing the global enrichment trend of the active components in the licorice separation process.
NASA Astrophysics Data System (ADS)
Sablik, Thomas; Velten, Jörg; Kummert, Anton
2015-03-01
A novel system for automatic privacy protection in digital media, based on spectral-domain watermarking and JPEG compression, is described in the present paper. In a first step, private areas are detected; the implemented method uses Haar cascades to detect faces, and integral images are used to speed up the calculations and the detection. Multiple detections of one face are combined. Succeeding steps comprise embedding the data into the image as part of JPEG compression using spectral-domain methods and protecting the area of privacy. The embedding process is integrated into and adapted to JPEG compression. A spread-spectrum watermarking method is used to embed the size and position of the private areas into the cover image. Different embedding methods are compared with regard to their robustness. Moreover, the performance of the method on tampered images is presented.
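The integral-image device that makes Haar-cascade features cheap is easy to sketch: after one pass to build a summed-area table, any axis-aligned box sum costs four lookups regardless of the box size.

    import numpy as np

    def integral_image(img):
        S = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return np.pad(S, ((1, 0), (1, 0)))     # zero row/column for the borders

    def box_sum(S, r0, c0, r1, c1):            # rows r0..r1-1, cols c0..c1-1
        return S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]

    img = np.arange(16.0).reshape(4, 4)
    S = integral_image(img)
    print(box_sum(S, 1, 1, 3, 3), img[1:3, 1:3].sum())   # identical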
NASA Astrophysics Data System (ADS)
Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.
2016-08-01
Based on a discussion of the topology of integrated distributed photovoltaic (PV) power generation and energy storage (ES) systems of single or mixed type, this paper analyzes the grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems and proposes a comprehensive evaluation index system. A multi-level fuzzy comprehensive evaluation method based on grey correlation degree is then proposed, and the calculations of the weight matrix and the fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low-voltage distribution network is taken as an example, and some suggestions are made based on the evaluation results.
A multilevel finite element method for Fredholm integral eigenvalue problems
NASA Astrophysics Data System (ADS)
Xie, Hehu; Zhou, Tao
2015-12-01
In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted into a series of integral iterations together with eigenvalue solves on the coarsest mesh. Any existing efficient integration scheme can then be used for the associated integration process. Error estimates are provided, and the computational complexity is analyzed. It is noticed that the total computational work of our method is comparable with a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
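A single-level Nyström discretization conveys the underlying eigenvalue computation (the multigrid acceleration that is the point of the paper is not reproduced): for a covariance kernel on [-1, 1], quadrature turns the Fredholm problem into a symmetric matrix eigenproblem whose leading eigenvalues feed the Karhunen-Loève expansion. The exponential kernel below is an illustrative assumption.

    import numpy as np

    n = 200
    x, w = np.polynomial.legendre.leggauss(n)     # Gauss nodes/weights on [-1, 1]
    K = np.exp(-np.abs(x[:, None] - x[None, :]))  # assumed exponential kernel
    sw = np.sqrt(w)
    A = sw[:, None] * K * sw[None, :]             # symmetrized Nystrom matrix
    lam = np.linalg.eigvalsh(A)[::-1]
    print(lam[:5])                                # leading KL eigenvalues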
A highly accurate boundary integral equation method for surfactant-laden drops in 3D
NASA Astrophysics Data System (ADS)
Sorgentone, Chiara; Tornberg, Anna-Karin
2018-05-01
The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is specifically true at small scales, such as in applications of droplet based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy also when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops also under deformation, specialized quadrature methods for singular and nearly singular integrals that appear in the formulation are evoked and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ochiai, Yoshihiro
Heat-conduction analysis under steady state without heat generation can easily be treated by the boundary element method. However, the case of heat conduction with heat generation can be solved approximately, without a domain integral, by an improved multiple-reciprocity boundary element method. The conventional multiple-reciprocity boundary element method is not suitable for complicated heat generation. In the improved multiple-reciprocity boundary element method, on the other hand, the domain integral in each step is divided into point, line, and area integrals. In order to solve the problem, contour lines of heat generation, which approximate the actual heat generation, are used.
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Turkel, E.
1980-01-01
New methods are introduced for the time integration of the Fourier and Chebyshev methods of solution for dynamic differential equations. These methods are unconditionally stable, even though no matrix inversions are required. Time steps are chosen by accuracy requirements alone. For the Fourier method both leapfrog and Runge-Kutta methods are considered. For the Chebyshev method only Runge-Kutta schemes are tested. Numerical calculations are presented to verify the analytic results. Applications to the shallow water equations are presented.
Dynamical Chaos in the Wisdom-Holman Integrator: Origins and Solutions
NASA Technical Reports Server (NTRS)
Rauch, Kevin P.; Holman, Matthew
1999-01-01
We examine the nonlinear stability of the Wisdom-Holman (WH) symplectic mapping applied to the integration of perturbed, highly eccentric (e ≈ 0.9) two-body orbits. We find that the method is unstable and introduces artificial chaos into the computed trajectories for this class of problems, unless the step size chosen is small enough that periapse is always resolved, in which case the method is generically stable. This 'radial orbit instability' persists even for weakly perturbed systems. Using the Stark problem as a fiducial test case, we investigate the dynamical origin of this instability and argue that the numerical chaos results from the overlap of step-size resonances; interestingly, for the Stark problem many of these resonances appear to be absolutely stable. We similarly examine the robustness of several alternative integration methods: a time-regularized version of the WH mapping suggested by Mikkola; the potential-splitting (PS) method of Duncan, Levison, and Lee; and two original methods incorporating approximations based on Stark motion instead of Keplerian motion. The two-fixed-point problem and a related, more general problem are used to conduct a comparative test of the various methods for several types of motion. Among the algorithms tested, the time-transformed WH mapping is clearly the most efficient and stable method of integrating eccentric, nearly Keplerian orbits in the absence of close encounters. For test particles subject to both high eccentricities and very close encounters, we find an enhanced version of the PS method (incorporating time regularization, force-center switching, and an improved kernel function) to be both economical and highly versatile. We conclude that Stark-based methods are of marginal utility in N-body type integrations. Additional implications for the symplectic integration of N-body systems are discussed.
Structures to Resist the Effects of Accidental Explosions. Volume 3. Principles of Dynamic Analysis
1984-06-01
multi-degree-of-freedom systems) is presented. A step-by-step numerical integration of an element’s motion under dynamic loads using the... structural arrangements; providing closures, and preventing damage to interior portions of structures due to structural motion, shock, and fragment... an element’s motion under dynamic loads utilizing the Acceleration-Impulse-Extrapolation Method or the Average Acceleration Method and design charts
Shorofsky, Stephen R; Peters, Robert W; Rashba, Eric J; Gold, Michael R
2004-02-01
Determination of the defibrillation threshold (DFT) is an integral part of implantable cardioverter defibrillator (ICD) implantation. Two commonly used methods of DFT determination, the step-down method and the binary search method, were compared in 44 patients undergoing ICD testing for standard clinical indications. The step-down protocol used an initial shock of 18 J. The binary search method began with a shock energy of 9 J, and successive shock energies were increased or decreased depending on the success of the previous shock. The DFT was defined as the lowest energy that successfully terminated ventricular fibrillation. The binary search method has the advantage of requiring a predetermined number of shocks, but some have questioned its accuracy. The study found that the mean DFT obtained by the step-down method was 8.2 +/- 5.0 J, whereas by the binary search method it was 8.1 +/- 0.7 J (P = NS). The DFT differed by no more than one step between methods in 32 (71%) of the patients. The number of shocks required to determine the DFT by the step-down method was 4.6 +/- 1.4, whereas, by definition, the binary search method always required three shocks. In conclusion, the binary search method is preferable because it is of comparable efficacy and requires fewer shocks.
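The two protocols can be contrasted with a small sketch (the 3 J step size and the deterministic success model are illustrative assumptions; the abstract fixes only the 18 J and 9 J starting energies and the three-shock budget of the binary search):

    # Energies in joules; shock_succeeds is an idealized stand-in for a
    # real defibrillation test.
    def shock_succeeds(energy, true_dft):
        return energy >= true_dft            # deterministic, no stochasticity

    def step_down(true_dft, start=18.0, step=3.0):
        energy, shocks, dft = start, 0, None
        while energy > 0:
            shocks += 1
            if shock_succeeds(energy, true_dft):
                dft = energy                 # lowest successful energy so far
                energy -= step
            else:
                break
        return dft, shocks

    def binary_search(true_dft):
        lo, hi, energy, dft = 0.0, 18.0, 9.0, None
        for _ in range(3):                   # always exactly three shocks
            if shock_succeeds(energy, true_dft):
                dft, hi = energy, energy     # success: try a lower energy
            else:
                lo = energy                  # failure: go higher
            energy = 0.5 * (lo + hi)
        return dft, 3

    print(step_down(8.0), binary_search(8.0))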
Advanced time integration algorithms for dislocation dynamics simulations of work hardening
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sills, Ryan B.; Aghaei, Amin; Cai, Wei
Efficient time integration is a necessity for dislocation dynamics simulations of work hardening to achieve experimentally relevant strains. In this work, an efficient time integration scheme using a high-order explicit method with time step subcycling and a newly-developed collision detection algorithm are evaluated. First, time integrator performance is examined for an annihilating Frank–Read source, showing the effects of dislocation line collision. The integrator with subcycling is found to significantly out-perform other integration schemes. The performance of the time integration and collision detection algorithms is then tested in a work hardening simulation. The new algorithms show a 100-fold speed-up relative to traditional schemes. As a result, subcycling is shown to improve efficiency significantly while maintaining an accurate solution, and the new collision algorithm allows an arbitrarily large time step size without missing collisions.
NASA Technical Reports Server (NTRS)
Thompson, J. F.; Mcwhorter, J. C.; Siddiqi, S. A.; Shanks, S. P.
1973-01-01
Numerical methods of integration of the equations of motion of a controlled satellite under the influence of gravity-gradient torque are considered. The results of computer experimentation using a number of Runge-Kutta, multi-step, and extrapolation methods for the numerical integration of this differential system are presented, and particularly efficient methods are noted. A large bibliography of numerical methods for initial value problems for ordinary differential equations is presented, and a compilation of Runge-Kutta and multistep formulas is given. Less common numerical integration techniques from the literature are noted for further consideration.
Direct Sensor Orientation of a Land-Based Mobile Mapping System
Rau, Jiann-Yeou; Habib, Ayman F.; Kersting, Ana P.; Chiang, Kai-Wei; Bang, Ki-In; Tseng, Yi-Hsing; Li, Yu-Hua
2011-01-01
A land-based mobile mapping system (MMS) is flexible and useful for the acquisition of road environment geospatial information. It integrates a set of imaging sensors and a position and orientation system (POS). The positioning quality of such systems is highly dependent on the accuracy of the utilized POS. This limitation is the major drawback due to the elevated cost associated with high-end GPS/INS units, particularly the inertial system. The potential accuracy of the direct sensor orientation depends on the architecture and quality of the GPS/INS integration process as well as the validity of the system calibration (i.e., calibration of the individual sensors as well as the system mounting parameters). In this paper, a novel single-step procedure using integrated sensor orientation with relative orientation constraint for the estimation of the mounting parameters is introduced. A comparative analysis between the proposed single-step and the traditional two-step procedure is carried out. Moreover, the estimated mounting parameters using the different methods are used in a direct geo-referencing procedure to evaluate their performance and the feasibility of the implemented system. Experimental results show that the proposed system using single-step system calibration method can achieve high 3D positioning accuracy. PMID:22164015
X-ray simulations method for the large field of view
NASA Astrophysics Data System (ADS)
Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.
2018-03-01
In the standard approach, X-ray simulation is usually limited by the spatial sampling step required to calculate convolution integrals of the Fresnel type. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the integral convolution calculations and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computation of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize postprocessing methods.
Numerical solution methods for viscoelastic orthotropic materials
NASA Technical Reports Server (NTRS)
Gramoll, K. C.; Dillard, D. A.; Brinson, H. F.
1988-01-01
Numerical solution methods for viscoelastic orthotropic materials, specifically fiber-reinforced composite materials, are examined. The methods include classical lamination theory using time increments, direct solution of the Volterra integral, Zienkiewicz's linear Prony series method, and a new method called the Nonlinear Differential Equation Method (NDEM), which uses a nonlinear Prony series. The criteria used for comparison of the various methods include the stability of the solution technique, time step size stability, computer solution time, and computer memory storage. The Volterra integral approach allowed the implementation of higher-order solution techniques but had difficulties with singular and weakly singular compliance functions. The Zienkiewicz solution technique, which requires the viscoelastic response to be modeled by a Prony series, works well for linear viscoelastic isotropic materials and small time steps. The new method, NDEM, uses a modified Prony series which allows nonlinear stress effects to be included and can be used with orthotropic nonlinear viscoelastic materials. The NDEM technique is shown to be accurate and stable for both linear and nonlinear conditions with minimal computer time.
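For the Prony-series ingredient, the standard recursive internal-variable update illustrates why such formulations are cheap: the hereditary integral never has to be re-evaluated from t = 0. The sketch below uses the common exponential recursion (exact when strain varies linearly within a step); the moduli and relaxation times are illustrative assumptions, and the nonlinear NDEM extension is not reproduced.

    import numpy as np

    # Prony-series relaxation modulus:
    #     E(t) = E_inf + sum_i E_i * exp(-t / tau_i)
    E_inf, E_i, tau_i = 1.0, np.array([2.0, 0.5]), np.array([0.1, 10.0])

    def update(h, d_eps, dt):
        a = np.exp(-dt / tau_i)
        return a * h + E_i * (1.0 - a) * (tau_i / dt) * d_eps

    dt, rate = 1e-3, 1.0                    # constant strain-rate loading
    h, eps = np.zeros_like(E_i), 0.0
    for _ in range(1000):
        h = update(h, rate * dt, dt)
        eps += rate * dt
    sigma = E_inf * eps + h.sum()           # stress at t = 1 s
    print(sigma)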
Scaled Runge-Kutta algorithms for handling dense output
NASA Technical Reports Server (NTRS)
Horn, M. K.
1981-01-01
Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
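The report's scaled coefficients are not reproduced here, but the general dense-output idea admits a compact sketch: reuse the derivative evaluations at the two ends of a step to build a cubic Hermite interpolant, giving the solution at any interior point without shrinking the step.

    import numpy as np

    def f(t, y): return -y                     # test problem, y(t) = exp(-t)

    def rk4_step(t, y, h):
        k1 = f(t, y); k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2); k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def dense(t, y0, y1, h, theta):            # cubic Hermite at t + theta*h
        f0, f1 = f(t, y0), f(t + h, y1)
        h00 = 2 * theta**3 - 3 * theta**2 + 1
        h10 = theta**3 - 2 * theta**2 + theta
        h01 = -2 * theta**3 + 3 * theta**2
        h11 = theta**3 - theta**2
        return h00 * y0 + h * h10 * f0 + h01 * y1 + h * h11 * f1

    t, y, h = 0.0, 1.0, 0.2
    y1 = rk4_step(t, y, h)
    print(dense(t, y, y1, h, 0.37), np.exp(-(t + 0.37 * h)))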
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, R.; Harrison, D. E. Jr.
A variable time step integration algorithm for carrying out molecular dynamics simulations of atomic collision cascades is proposed which evaluates the interaction forces only once per time step. The algorithm is tested on some model problems which have exact solutions and is compared against other common methods. These comparisons show that the method has good stability and accuracy. Applications to Ar+ bombardment of Cu and Si show good accuracy and improved speed relative to the original method (D. E. Harrison, W. L. Gay, and H. M. Effron, J. Math. Phys. 10, 1179 (1969)).
Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks.
Shelley, M J; Tao, L
2001-01-01
To avoid the numerical errors associated with resetting the potential following a spike in simulations of integrate-and-fire neuronal networks, Hansel et al. and Shelley independently developed a modified time-stepping method. Their particular scheme consists of second-order Runge-Kutta time-stepping, a linear interpolant to find spike times, and a recalibration of the postspike potential using the spike times. Here we show analytically that such a scheme is second order, discuss the conditions under which efficient, higher-order algorithms can be constructed to treat resets, and develop a modified fourth-order scheme. To support our analysis, we simulate a system of integrate-and-fire conductance-based point neurons with all-to-all coupling. For six-digit accuracy, our modified fourth-order Runge-Kutta scheme needs a time step of Δt = 0.5 × 10⁻³ seconds, whereas achieving comparable accuracy with a recalibrated second-order or a first-order algorithm requires time steps of 10⁻⁵ seconds or 10⁻⁹ seconds, respectively. Furthermore, since the cortico-cortical conductances in standard integrate-and-fire neuronal networks do not depend on the value of the membrane potential, we can attain fourth-order accuracy with computational costs normally associated with second-order schemes.
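The recalibration idea is easy to sketch for a single leaky integrate-and-fire neuron (parameters are illustrative assumptions; the paper's modified fourth-order network scheme is not reproduced): take an RK2 step, locate the threshold crossing with a linear interpolant, reset at the interpolated spike time, and integrate the remainder of the step from there.

    import numpy as np

    # Single leaky integrate-and-fire neuron; v_th, tau, and the drive I
    # are illustrative assumptions.
    v_th, v_reset, tau, I = 1.0, 0.0, 0.02, 60.0
    f = lambda v: (-v + tau * I) / tau

    def rk2(v, dt):                      # midpoint (second-order) step
        return v + dt * f(v + 0.5 * dt * f(v))

    t, v, dt, spikes = 0.0, 0.0, 1e-4, []
    for _ in range(2000):
        v_new = rk2(v, dt)
        if v_new >= v_th:
            theta = (v_th - v) / (v_new - v)       # linear spike-time interpolant
            spikes.append(t + theta * dt)
            v = rk2(v_reset, (1.0 - theta) * dt)   # recalibrate: restart at spike
        else:
            v = v_new
        t += dt
    print(spikes[:3])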
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central-body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions, and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
NASA Astrophysics Data System (ADS)
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided of both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
Bilateral step length estimation using a single inertial measurement unit attached to the pelvis
2012-01-01
Background: The estimation of the spatio-temporal gait parameters is of primary importance in both physical activity monitoring and clinical contexts. A method for estimating step length bilaterally, during level walking, using a single inertial measurement unit (IMU) attached to the pelvis is proposed. In contrast to previous studies, based either on a simplified representation of the human gait mechanics or on a general linear regressive model, the proposed method estimates the step length directly from the integration of the acceleration along the direction of progression. Methods: The IMU was placed at pelvis level, fixed to the subject's belt on the right side. The method was validated using measurements from a stereo-photogrammetric (SP) system as a gold standard on nine subjects walking ten laps along a closed-loop track of about 25 m, varying their speed. For each loop, only the IMU data recorded in a 4 m long portion of the track, included in the calibrated volume of the SP system, were used for the analysis. The method takes advantage of the cyclic nature of gait and requires an accurate determination of the foot contact instances. A combination of a Kalman filter and an optimally filtered direct and reverse integration applied to the IMU signals formed a single novel method (Kalman and Optimally filtered Step length Estimation - KOSE method). A correction of the IMU displacement due to the pelvic rotation occurring in gait was implemented to estimate the step length and the traversed distance. Results: The step length was estimated for all subjects with less than 3% error. The traversed distance was assessed with less than 2% error. Conclusions: The proposed method provided estimates of step length and traversed distance more accurate than any other method applied to measurements obtained from a single IMU that can be found in the literature. In healthy subjects, it is reasonable to expect that errors in traversed distance estimation during daily monitoring activity would be of the same order of magnitude as those presented. PMID:22316235
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
On the performance of exponential integrators for problems in magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Einkemmer, Lukas; Tokman, Mayya; Loffeld, John
2017-02-01
Exponential integrators have been introduced as an efficient alternative to explicit and implicit methods for integrating large stiff systems of differential equations. Over the past decades these methods have been studied theoretically and their performance was evaluated using a range of test problems. While the results of these investigations showed that exponential integrators can provide significant computational savings, the research on validating this hypothesis for large scale systems and understanding what classes of problems can particularly benefit from the use of the new techniques is in its initial stages. Resistive magnetohydrodynamic (MHD) modeling is widely used in studying large scale behavior of laboratory and astrophysical plasmas. In many problems numerical solution of MHD equations is a challenging task due to the temporal stiffness of this system in the parameter regimes of interest. In this paper we evaluate the performance of exponential integrators on large MHD problems and compare them to a state-of-the-art implicit time integrator. Both the variable and constant time step exponential methods of EPIRK-type are used to simulate magnetic reconnection and the Kelvin-Helmholtz instability in plasma. Performance of these methods, which are part of the EPIC software package, is compared to the variable time step variable order BDF scheme included in the CVODE (part of SUNDIALS) library. We study performance of the methods on parallel architectures and with respect to magnitudes of important parameters such as Reynolds, Lundquist, and Prandtl numbers. We find that the exponential integrators provide superior or equal performance in most circumstances and conclude that further development of exponential methods for MHD problems is warranted and can lead to significant computational advantages for large scale stiff systems of differential equations such as MHD.
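The core operation of such methods, the action of a matrix exponential (and related phi-functions) on a vector, can be sketched with SciPy's expm_multiply on the stiff semi-discrete heat equation; the EPIRK stage structure of the paper is not reproduced.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import expm_multiply

    n = 400
    dx = 1.0 / (n + 1)
    A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    x = np.linspace(dx, 1.0 - dx, n)
    u0 = np.sin(np.pi * x)
    # u(t) = expm(t*A) @ u0, evaluated without ever forming expm(t*A):
    u = expm_multiply(A, u0, start=0.0, stop=0.1, num=2, endpoint=True)[-1]
    print(np.abs(u - np.exp(-np.pi**2 * 0.1) * u0).max())  # small; spatial error only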
Time-symmetric integration in astrophysics
NASA Astrophysics Data System (ADS)
Hernandez, David M.; Bertschinger, Edmund
2018-04-01
Calculating the long-term solution of ordinary differential equations, such as those of the N-body problem, is central to understanding a wide range of dynamics in astrophysics, from galaxy formation to planetary chaos. Because generally no analytic solution exists to these equations, researchers rely on numerical methods that are prone to various errors. In an effort to mitigate these errors, powerful symplectic integrators have been employed. But symplectic integrators can be severely limited because they are not compatible with adaptive stepping and thus they have difficulty in accommodating changing time and length scales. A promising alternative is time-reversible integration, which can handle adaptive time-stepping, but the errors due to time-reversible integration in astrophysics are less understood. The goal of this work is to study analytically and numerically the errors caused by time-reversible integration, with and without adaptive stepping. We derive the modified differential equations of these integrators to perform the error analysis. As an example, we consider the trapezoidal rule, a reversible non-symplectic integrator, and show that it gives secular energy error increase for a pendulum problem and for a Hénon-Heiles orbit. We conclude that using reversible integration does not guarantee good energy conservation and that, when possible, use of symplectic integrators is favoured. We also show that time-symmetry and time-reversibility are properties that are distinct for an integrator.
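The trapezoidal-rule experiment described above is easy to reproduce in miniature; the sketch below (illustrative, not the authors' code) integrates the pendulum with the time-reversible, non-symplectic trapezoidal rule, using a fixed-point iteration in place of a full Newton solve, and reports the relative energy drift.

```python
import numpy as np

def pendulum_rhs(y):
    q, p = y
    return np.array([p, -np.sin(q)])     # H = p^2/2 - cos(q)

def trapezoidal_step(y, h, iters=20):
    """One step of the reversible (non-symplectic) trapezoidal rule."""
    y_new = y + h * pendulum_rhs(y)      # explicit Euler predictor
    for _ in range(iters):               # fixed-point solve of the implicit stage
        y_new = y + 0.5 * h * (pendulum_rhs(y) + pendulum_rhs(y_new))
    return y_new

energy = lambda y: 0.5 * y[1] ** 2 - np.cos(y[0])
y = np.array([1.5, 0.0])
e0 = energy(y)
for _ in range(100000):
    y = trapezoidal_step(y, h=0.1)
print("relative energy error:", (energy(y) - e0) / abs(e0))
```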
Finite element implementation of state variable-based viscoplasticity models
NASA Technical Reports Server (NTRS)
Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.
1991-01-01
The implementation of state variable-based viscoplasticity models is made in a general purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's models, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter method appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.
NASA Astrophysics Data System (ADS)
Tisdell, Christopher C.
2017-11-01
This paper presents some critical perspectives regarding pedagogical approaches to the method of reversing the order of integration in double integrals from prevailing educational literature on multivariable calculus. First, we question the message found in popular textbooks that the traditional process of reversing the order of integration is necessary when solving well-known problems. Second, we illustrate that the method of integration by parts can be directly applied to many of the classic pedagogical problems in the literature concerning double integrals, without taking the well-worn steps associated with reversing the order of integration. Third, we examine the benefits and limitations of such a method. In our conclusion, we advocate for integration by parts to be a part of the pedagogical conversation in the learning and teaching of double integral methods; and call for more debate around its use in the learning and teaching of other areas of mathematics. Finally, we emphasize the need for critical approaches in the pedagogy of mathematics more broadly.
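For readers outside this pedagogical debate, the contrast is easy to state concretely; the worked example below is a standard textbook problem of the kind discussed, not one taken from the paper.

```latex
% Classic double integral usually handled by reversing the order of integration:
I = \int_{0}^{1}\!\int_{x}^{1} e^{y^{2}}\,dy\,dx .
% Reversal: the region 0 \le x \le y \le 1 gives
I = \int_{0}^{1}\!\int_{0}^{y} e^{y^{2}}\,dx\,dy
  = \int_{0}^{1} y\,e^{y^{2}}\,dy = \tfrac{1}{2}(e - 1).
% Alternative via integration by parts, with F(x) = \int_{x}^{1} e^{y^{2}}\,dy,
% so F'(x) = -e^{x^{2}} and F(1) = 0:
I = \bigl[\,x\,F(x)\,\bigr]_{0}^{1} - \int_{0}^{1} x\,F'(x)\,dx
  = 0 + \int_{0}^{1} x\,e^{x^{2}}\,dx = \tfrac{1}{2}(e - 1).
```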
On computational methods for crashworthiness
NASA Technical Reports Server (NTRS)
Belytschko, T.
1992-01-01
The evolution of computational methods for crashworthiness and related fields is described and linked with the decreasing cost of computational resources and with improvements in computation methodologies. The latter includes more effective time integration procedures and more efficient elements. Some recent developments in methodologies and future trends are also summarized. These include multi-time step integration (or subcycling), further improvements in elements, adaptive meshes, and the exploitation of parallel computers.
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the type of accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping (GCM) method. Integrations over one mapping step are replaced by third-order sampling-adaptive interpolations. An explicit formula for the interpolation error is derived, and a sampling-adaptive control switches integrations back on where needed to preserve accuracy. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated, with observations of boundary metamorphoses (full to partial and partial to partial) as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
Algorithms and software for nonlinear structural dynamics
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.
1989-01-01
The objective of this research is to develop efficient methods for explicit time integration in nonlinear structural dynamics for computers which utilize both concurrency and vectorization. As a framework for these studies, the program WHAMS, which is described in Explicit Algorithms for the Nonlinear Dynamics of Shells (T. Belytschko, J. I. Lin, and C.-S. Tsay, Computer Methods in Applied Mechanics and Engineering, Vol. 42, 1984, pp 225 to 251), is used. There are two factors which make the development of efficient concurrent explicit time integration programs a challenge in a structural dynamics program: (1) the need for a variety of element types, which complicates the scheduling-allocation problem; and (2) the need for different time steps in different parts of the mesh, which is here called mixed delta t integration, so that a few stiff elements do not reduce the time steps throughout the mesh.
Virot, Matthieu; Tomao, Valérie; Ginies, Christian; Visinoni, Franco; Chemat, Farid
2008-07-04
A green and original alternative procedure for the determination of fats and oils in oleaginous seeds is described. Extractions were carried out using a by-product of the citrus industry, d-limonene, as the extraction solvent instead of hazardous petroleum solvents such as n-hexane. The method proceeds in two steps using microwave energy: extractions are first performed with a microwave-integrated Soxhlet, and the solvent is then removed from the medium by microwave Clevenger distillation. Oils extracted from olive seeds were compared with both conventional Soxhlet and microwave-integrated Soxhlet extraction procedures performed with n-hexane in terms of qualitative and quantitative determination. No significant difference was found between the extracts, allowing us to conclude that the proposed method is effective and valuable.
Monolithic integration of a MOSFET with a MEMS device
Bennett, Reid; Draper, Bruce
2003-01-01
An integrated microelectromechanical system comprises at least one MOSFET interconnected to at least one MEMS device on a common substrate. A method for integrating the MOSFET with the MEMS device comprises fabricating the MOSFET and MEMS device monolithically on the common substrate. Conveniently, the gate insulator, gate electrode, and electrical contacts for the gate, source, and drain can be formed simultaneously with the MEMS device structure, thereby eliminating many process steps and materials. In particular, the gate electrode and electrical contacts of the MOSFET and the structural layers of the MEMS device can be doped polysilicon. Dopant diffusion from the electrical contacts is used to form the source and drain regions of the MOSFET. The thermal diffusion step for forming the source and drain of the MOSFET can comprise one or more of the thermal anneal steps to relieve stress in the structural layers of the MEMS device.
NASA Astrophysics Data System (ADS)
Franco, J. M.; Rández, L.
The construction of new two-step hybrid (TSH) methods of explicit type with symmetric nodes and weights for the numerical integration of orbital and oscillatory second-order initial value problems (IVPs) is analyzed. These methods attain algebraic order eight with a computational cost of six or eight function evaluations per step (it is one of the lowest costs that we know in the literature) and they are optimal among the TSH methods in the sense that they reach a certain order of accuracy with minimal cost per step. The new TSH schemes also have high dispersion and dissipation orders (greater than 8) in order to be adapted to the solution of IVPs with oscillatory solutions. The numerical experiments carried out with several orbital and oscillatory problems show that the new eighth-order explicit TSH methods are more efficient than other standard TSH or Numerov-type methods proposed in the scientific literature.
NASA Astrophysics Data System (ADS)
Igumnov, Leonid; Ipatov, Aleksandr; Belov, Aleksandr; Petrov, Andrey
2015-09-01
The report presents the development of the time-boundary element methodology and a description of the related software based on a stepped method of numerical inversion of the integral Laplace transform in combination with a family of Runge-Kutta methods for analyzing 3-D mixed initial boundary-value problems of the dynamics of inhomogeneous elastic and poro-elastic bodies. The results of the numerical investigation are presented. The investigation methodology is based on direct-approach boundary integral equations of 3-D isotropic linear theories of elasticity and poroelasticity in Laplace transforms. Poroelastic media are described using Biot models with four and five base functions. With the help of the boundary-element method, solutions in time are obtained, using the stepped method of numerically inverting Laplace transform on the nodes of Runge-Kutta methods. The boundary-element method is used in combination with the collocation method, local element-by-element approximation based on the matched interpolation model. The results of analyzing wave problems of the effect of a non-stationary force on elastic and poroelastic finite bodies, a poroelastic half-space (also with a fictitious boundary) and a layered half-space weakened by a cavity, and a half-space with a trench are presented. Excitation of a slow wave in a poroelastic medium is studied, using the stepped BEM-scheme on the nodes of Runge-Kutta methods.
An evaluation of a reagentless method for the determination of total mercury in aquatic life
Haynes, Sekeenia; Gragg, Richard D.; Johnson, Elijah; Robinson, Larry; Orazio, Carl E.
2006-01-01
Multiple treatment (i.e., drying, chemical digestion, and oxidation) steps are often required during preparation of biological matrices for quantitative analysis of mercury; these multiple steps could potentially lead to systematic errors and poor recovery of the analyte. In this study, the Direct Mercury Analyzer (Milestone Inc., Monroe, CT) was utilized to measure total mercury in fish tissue by integrating steps of drying, sample combustion and gold sequestration with successive identification using atomic absorption spectrometry. We also evaluated the differences between the mercury concentrations found in samples that were homogenized and samples with no preparation. These results were confirmed with cold vapor atomic absorbance and fluorescence spectrometric methods of analysis. Finally, total mercury in wild captured largemouth bass (n = 20) was assessed using the Direct Mercury Analyzer to examine internal variability between mercury concentrations in muscle, liver and brain organs. Direct analysis of total mercury measured in muscle tissue was strongly correlated with muscle tissue that was homogenized before analysis (r = 0.81, p < 0.0001). Additionally, results using this integrated method compared favorably (p < 0.05) with conventional cold vapor spectrometry with atomic absorbance and fluorescence detection methods. Mercury concentrations in brain were significantly lower than concentrations in muscle (p < 0.001) and liver (p < 0.05) tissues. This integrated method can measure a wide range of mercury concentrations (0-500 µg) using small sample sizes. Total mercury measurements in this study are comparative to the methods (cold vapor) commonly used for total mercury analysis and are devoid of laborious sample preparation and expensive hazardous waste. © Springer 2006.
NASA Technical Reports Server (NTRS)
Turpin, Jason B.
2004-01-01
One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, usually with the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the quantities of interest (i.e., pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs), either by approximating the spatial derivative terms with numerical techniques or by using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately, the resulting system of ODEs is bound by a time step constraint, so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e., components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Together, the fixed time step constraint invoked by the MOC and the occasional need for extremely small time steps to maintain stability and/or accuracy can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check valve are compared with test data.
NASA Astrophysics Data System (ADS)
Abate, A.; Pressello, M. C.; Benassi, M.; Strigari, L.
2009-12-01
The aim of this study was to evaluate the effectiveness and efficiency in inverse IMRT planning of one-step optimization with the step-and-shoot (SS) technique as compared to traditional two-step optimization using the sliding windows (SW) technique. The Pinnacle IMRT TPS allows both one-step and two-step approaches. The same beam setup for five head-and-neck tumor patients and dose-volume constraints were applied for all optimization methods. Two-step plans were produced converting the ideal fluence with or without a smoothing filter into the SW sequence. One-step plans, based on direct machine parameter optimization (DMPO), had the maximum number of segments per beam set at 8, 10, 12, producing a directly deliverable sequence. Moreover, the plans were generated whether a split-beam was used or not. Total monitor units (MUs), overall treatment time, cost function and dose-volume histograms (DVHs) were estimated for each plan. PTV conformality and homogeneity indexes and normal tissue complication probability (NTCP) that are the basis for improving therapeutic gain, as well as non-tumor integral dose (NTID), were evaluated. A two-sided t-test was used to compare quantitative variables. All plans showed similar target coverage. Compared to two-step SW optimization, the DMPO-SS plans resulted in lower MUs (20%), NTID (4%) as well as NTCP values. Differences of about 15-20% in the treatment delivery time were registered. DMPO generates less complex plans with identical PTV coverage, providing lower NTCP and NTID, which is expected to reduce the risk of secondary cancer. It is an effective and efficient method and, if available, it should be favored over the two-step IMRT planning.
Implicit time accurate simulation of unsteady flow
NASA Astrophysics Data System (ADS)
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
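The pseudo-time device described above can be demonstrated on a scalar model problem: each physical Crank-Nicolson step is converged by marching an inner pseudo-time iteration until the residual vanishes. The sketch below is schematic (explicit pseudo-time marching instead of the paper's quasi-Newton/Gauss-Seidel machinery), with all parameter values hypothetical.

```python
import numpy as np

def dual_time_step(u, h, f, dtau=0.5, tol=1e-12, max_inner=500):
    """One Crank-Nicolson step of u' = f(u), converged in pseudo-time.

    The inner loop marches dv/dtau = -R(v), where
    R(v) = v - u - (h/2) * (f(u) + f(v)) is the Crank-Nicolson residual.
    """
    v = u
    for _ in range(max_inner):
        residual = v - u - 0.5 * h * (f(u) + f(v))
        if abs(residual) < tol:
            break
        v -= dtau * residual
    return v

f = lambda u: -5.0 * u                   # stiff-ish scalar decay
u, h = 1.0, 0.05
for _ in range(20):                      # integrate to t = 1
    u = dual_time_step(u, h, f)
print(u, np.exp(-5.0))                   # Crank-Nicolson vs exact solution
```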
NASA Astrophysics Data System (ADS)
Caplan, R. M.; Mikić, Z.; Linker, J. A.; Lionello, R.
2017-05-01
We explore the performance and advantages/disadvantages of using unconditionally stable explicit super time-stepping (STS) algorithms versus implicit schemes with Krylov solvers for integrating parabolic operators in thermodynamic MHD models of the solar corona. Specifically, we compare the second-order Runge-Kutta Legendre (RKL2) STS method with the implicit backward Euler scheme computed using the preconditioned conjugate gradient (PCG) solver with both a point-Jacobi and a non-overlapping domain decomposition ILU0 preconditioner. The algorithms are used to integrate anisotropic Spitzer thermal conduction and artificial kinematic viscosity at time-steps much larger than classic explicit stability criteria allow. A key component of the comparison is the use of an established MHD model (MAS) to compute a real-world simulation on a large HPC cluster. Special attention is placed on the parallel scaling of the algorithms. It is shown that, for a specific problem and model, the RKL2 method is comparable or surpasses the implicit method with PCG solvers in performance and scaling, but suffers from some accuracy limitations. These limitations, and the applicability of RKL methods are briefly discussed.
Zuo, Peng; Li, XiuJun; Dominguez, Delfina C; Ye, Bang-Ce
2013-10-07
Infectious pathogens often cause serious public health concerns throughout the world. There is an increasing demand for simple, rapid and sensitive approaches for multiplexed pathogen detection. In this paper we have developed a polydimethylsiloxane (PDMS)/paper/glass hybrid microfluidic system integrated with aptamer-functionalized graphene oxide (GO) nano-biosensors for simple, one-step, multiplexed pathogen detection. The paper substrate used in this hybrid microfluidic system facilitated the integration of aptamer biosensors on the microfluidic biochip, and avoided complicated surface treatment and aptamer probe immobilization in a PDMS or glass-only microfluidic system. Lactobacillus acidophilus was used as a bacterium model to develop the microfluidic platform with a detection limit of 11.0 cfu mL(-1). We have also successfully extended this method to the simultaneous detection of two infectious pathogens - Staphylococcus aureus and Salmonella enterica. This method is simple and fast. The one-step 'turn on' pathogen assay in a ready-to-use microfluidic device only takes ~10 min to complete on the biochip. Furthermore, this microfluidic device has great potential in rapid detection of a wide variety of different other bacterial and viral pathogens.
Zuo, Peng; Dominguez, Delfina C.; Ye, Bang-Ce
2014-01-01
Infectious pathogens often cause serious public health concerns throughout the world. There is an increasing demand for simple, rapid and sensitive approaches for multiplexed pathogen detection. In this paper we have developed a polydimethylsiloxane (PDMS)/paper/glass hybrid microfluidic system integrated with aptamer-functionalized graphene oxide (GO) nano-biosensors for simple, one-step, multiplexed pathogen detection. The paper substrate used in this hybrid microfluidic system facilitated the integration of aptamer biosensors on the microfluidic biochip, and avoided complicated surface treatment and aptamer probe immobilization in a PDMS or glass-only microfluidic system. Lactobacillus acidophilus was used as a bacterium model to develop the microfluidic platform with a detection limit of 11.0 cfu mL−1. We have also successfully extended this method to the simultaneous detection of two infectious pathogens - Staphylococcus aureus and Salmonella enterica. This method is simple and fast. The one-step ‘turn on’ pathogen assay in a ready-to-use microfluidic device only takes ~10 min to complete on the biochip. Furthermore, this microfluidic device has great potential in rapid detection of a wide variety of different other bacterial and viral pathogens. PMID:23929394
Modified Runge-Kutta methods for solving ODES. M.S. Thesis
NASA Technical Reports Server (NTRS)
Vanvu, T.
1981-01-01
A class of Runge-Kutta formulas is examined which permit the calculation of an accurate solution anywhere in the interval of integration. This is used in a code which seldom has to reject a step; rather it takes a reduced step if the estimated error is too large. The absolute stability implications of this are examined.
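The reduced-step idea can be sketched with a low-order analogue: an embedded Euler/Heun pair whose quadratic dense output lets the code accept a shortened step, reusing the stages already computed, instead of rejecting and recomputing. This is a simplified stand-in for the thesis' Runge-Kutta formulas, with illustrative names and tolerances.

```python
def heun_dense_step(u, h, f, tol):
    """Error-controlled Heun step with a continuous (dense-output) extension.

    If the Euler/Heun error estimate exceeds tol, the quadratic dense output
    is evaluated at a reduced fraction theta of the step, so no work is lost.
    """
    k1 = f(u)
    k2 = f(u + h * k1)
    err = 0.5 * h * abs(k2 - k1)                      # Heun minus Euler
    theta = 1.0 if err <= tol else min(1.0, 0.9 * (tol / err) ** 0.5)
    u_new = u + h * theta * k1 + 0.5 * h * theta**2 * (k2 - k1)
    return u_new, theta * h

f = lambda u: -u * u                                  # u' = -u^2, u(0) = 1
u, t = 1.0, 0.0
while t < 2.0:
    u, dt = heun_dense_step(u, 0.2, f, tol=1e-4)
    t += dt
print(u, 1.0 / (1.0 + t))                             # exact: u = 1/(1+t)
```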
An Integrated Approach for Gear Health Prognostics
NASA Technical Reports Server (NTRS)
He, David; Bechhoefer, Eric; Dempsey, Paula; Ma, Jinghua
2012-01-01
In this paper, an integrated approach for gear health prognostics using particle filters is presented. The presented method effectively addresses the issues in applying particle filters to gear health prognostics by integrating several new components into a particle filter: (1) data mining based techniques to effectively define the degradation state transition and measurement functions using a one-dimensional health index obtained by whitening transform; (2) an unbiased l-step ahead RUL estimator updated with measurement errors. The feasibility of the presented prognostics method is validated using data from a spiral bevel gear case study.
Hasegawa, Chihiro; Duffull, Stephen B
2018-02-01
Pharmacokinetic-pharmacodynamic systems are often expressed with nonlinear ordinary differential equations (ODEs). While there are numerous methods to solve such ODEs, these methods generally rely on time-stepping solutions (e.g. Runge-Kutta) that need to be matched to the characteristics of the problem at hand. The primary aim of this study was to explore the performance of an inductive approximation which iteratively converts nonlinear ODEs to linear time-varying systems which can then be solved algebraically or numerically. The inductive approximation is applied to three examples, a simple nonlinear pharmacokinetic model with Michaelis-Menten elimination (E1), an integrated glucose-insulin model and an HIV viral load model with recursive feedback systems (E2 and E3, respectively). The secondary aim of this study was to explore the potential advantages of analytically solving linearized ODEs with two examples, again E3 with stiff differential equations and a turnover model of luteinizing hormone with a surge function (E4). The inductive linearization coupled with a matrix exponential solution provided accurate predictions for all examples, with solution times comparable to the matched time-stepping solutions for nonlinear ODEs. The time-stepping solutions however did not perform well for E4, particularly when the surge was approximated by a square wave. In circumstances when either a linear ODE is particularly desirable or the uncertainty in matching the integrator to the ODE system is of potential risk, then the inductive approximation method coupled with an analytical integration method would be an appropriate alternative.
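A scalar sketch of the inductive approximation for an E1-type problem (Michaelis-Menten elimination) follows; each pass freezes the nonlinear denominator at the previous iterate, leaving a linear time-varying ODE that can be propagated exactly interval by interval. Parameter values and the quadrature of the frozen coefficient are illustrative.

```python
import numpy as np

def inductive_mm(c0, vmax, km, t_grid, n_iter=8):
    """Inductive linearization for dC/dt = -vmax * C / (km + C)."""
    c = np.full_like(t_grid, c0)                 # iterate 0: constant profile
    for _ in range(n_iter):
        rate = -vmax / (km + c)                  # frozen coefficient a(t)
        c_new = np.empty_like(c)
        c_new[0] = c0
        for i in range(1, len(t_grid)):
            dt = t_grid[i] - t_grid[i - 1]
            a_mid = 0.5 * (rate[i - 1] + rate[i])         # trapezoidal average of a(t)
            c_new[i] = c_new[i - 1] * np.exp(a_mid * dt)  # exact linear propagation
        c = c_new
    return c

t = np.linspace(0.0, 10.0, 201)
print(inductive_mm(c0=10.0, vmax=2.0, km=1.0, t_grid=t)[-1])
```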
Optimizing How We Teach Research Methods
ERIC Educational Resources Information Center
Cvancara, Kristen E.
2017-01-01
Courses: Research Methods (undergraduate or graduate level). Objective: The aim of this exercise is to optimize the ability for students to integrate an understanding of various methodologies across research paradigms within a 15-week semester, including a review of procedural steps and experiential learning activities to practice each method, a…
NASA Technical Reports Server (NTRS)
Madsen, Niel K.
1992-01-01
Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.
Implicit-Explicit Time Integration Methods for Non-hydrostatic Atmospheric Models
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Guerra, J. E.; Hamon, F. P.; Reynolds, D. R.; Ullrich, P. A.; Woodward, C. S.
2016-12-01
The Accelerated Climate Modeling for Energy (ACME) project is developing a non-hydrostatic atmospheric dynamical core for high-resolution coupled climate simulations on Department of Energy leadership class supercomputers. An important factor in computational efficiency is avoiding the overly restrictive time step size limitations of fully explicit time integration methods due to the stiffest modes present in the model (acoustic waves). In this work we compare the accuracy and performance of different Implicit-Explicit (IMEX) splittings of the non-hydrostatic equations and various Additive Runge-Kutta (ARK) time integration methods. Results utilizing the Tempest non-hydrostatic atmospheric model and the ARKode package show that the choice of IMEX splitting and ARK scheme has a significant impact on the maximum stable time step size as well as solution quality. Horizontally Explicit Vertically Implicit (HEVI) approaches paired with certain ARK methods lead to greatly improved runtimes. With effective preconditioning IMEX splittings that incorporate some implicit horizontal dynamics can be competitive with HEVI results. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-699187
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.
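The path-approximation character emphasized above is easy to show in one dimension: Picard-type iteration updates the entire trajectory at once rather than step by step. In the sketch below, cumulative trapezoidal quadrature stands in for the Chebyshev polynomial machinery of modified Chebyshev-Picard iteration; the example problem is illustrative.

```python
import numpy as np

def picard_iterate(f, x0, t, n_iter=40):
    """Picard iteration x_{k+1}(t) = x0 + int_0^t f(x_k(s)) ds on a fixed grid."""
    x = np.full_like(t, x0)
    for _ in range(n_iter):
        integrand = f(x)
        cumulative = np.concatenate(
            ([0.0], np.cumsum(0.5 * np.diff(t) * (integrand[1:] + integrand[:-1])))
        )
        x = x0 + cumulative              # whole trajectory updated at once
    return x

t = np.linspace(0.0, 1.5, 301)
x = picard_iterate(lambda x: np.cos(x), 0.0, t)   # x' = cos(x), x(0) = 0
print(x[-1], np.arcsin(np.tanh(t[-1])))           # exact: x(t) = gd(t)
```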
Ingham, Richard J; Battilocchio, Claudio; Fitzpatrick, Daniel E; Sliwinski, Eric; Hawkins, Joel M; Ley, Steven V
2015-01-01
Performing reactions in flow can offer major advantages over batch methods. However, laboratory flow chemistry processes are currently often limited to single steps or short sequences due to the complexity involved with operating a multi-step process. Using new modular components for downstream processing, coupled with control technologies, more advanced multi-step flow sequences can be realized. These tools are applied to the synthesis of 2-aminoadamantane-2-carboxylic acid. A system comprising three chemistry steps and three workup steps was developed, having sufficient autonomy and self-regulation to be managed by a single operator. PMID:25377747
NASA Astrophysics Data System (ADS)
Ha, Sanghyun; Park, Junshin; You, Donghyun
2017-11-01
The utility of the computational power of modern Graphics Processing Units (GPUs) is examined for solving the incompressible Navier-Stokes equations, integrated using a semi-implicit fractional-step method. Due to its serial and bandwidth-bound nature, the present choice of numerical methods is considered a good candidate for evaluating the potential of GPUs for solving Navier-Stokes equations with non-explicit time integration. An efficient algorithm is presented for GPU acceleration of the Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Extension to multiple NVIDIA GPUs is implemented using NVLink supported by the Pascal architecture. Performance of the present method is measured on multiple Tesla P100 GPUs and compared with a single-core Xeon E5-2650 v4 CPU in simulations of boundary-layer flow over a flat plate. Supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (Ministry of Science, ICT and Future Planning NRF-2016R1E1A2A01939553, NRF-2014R1A2A1A11049599, and Ministry of Trade, Industry and Energy 201611101000230).
Method for double-sided processing of thin film transistors
Yuan, Hao-Chih; Wang, Guogong; Eriksson, Mark A.; Evans, Paul G.; Lagally, Max G.; Ma, Zhenqiang
2008-04-08
This invention provides methods for fabricating thin film electronic devices with both front- and backside processing capabilities. Using these methods, high temperature processing steps may be carried out during both frontside and backside processing. The methods are well-suited for fabricating back-gate and double-gate field effect transistors, double-sided bipolar transistors and 3D integrated circuits.
A new algorithm for modeling friction in dynamic mechanical systems
NASA Technical Reports Server (NTRS)
Hill, R. E.
1988-01-01
A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods in which the friction effect is assumed a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
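A minimal version of such a predictive friction calculation is sketched below: the step is examined in advance, and if the velocity would cross zero while the applied force cannot overcome static friction, the body is held stuck. The interface and thresholds are hypothetical, not the paper's function block.

```python
def friction_force(v, f_ext, m, dt, f_static, f_coulomb, v_eps=1e-9):
    """Predictive static/Coulomb friction force for one integration step."""
    v_pred = v + dt * f_ext / m              # velocity predicted without friction
    if abs(v) < v_eps or v * v_pred < 0.0:   # at rest, or step crosses zero velocity
        if abs(f_ext) <= f_static:
            return -f_ext                    # stiction: friction cancels applied force
        return -f_coulomb if f_ext > 0.0 else f_coulomb
    return -f_coulomb if v > 0.0 else f_coulomb

# A block that sticks: applied force below the static threshold.
print(friction_force(v=0.0, f_ext=2.0, m=1.0, dt=0.01,
                     f_static=5.0, f_coulomb=4.0))    # -> -2.0
```

Returning exactly -f_ext in the stuck case is what avoids the spurious velocity oscillations that a plain sign(v) model produces near zero velocity with a finite integration interval.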
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
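The integrating-factor idea in item (i) can be shown for a single leaky integrate-and-fire neuron: with conductance and drive frozen over a step, the voltage update is exact and remains stable for large steps even in high-conductance states. This sketch omits the paper's spike-spike corrections and uses hypothetical parameter values.

```python
import numpy as np

def lif_step(v, g_total, i_syn, dt, v_thresh=1.0, v_reset=0.0):
    """Exact integrating-factor step for dv/dt = -g_total * v + i_syn."""
    decay = np.exp(-g_total * dt)
    v_new = v * decay + (i_syn / g_total) * (1.0 - decay)
    spiked = v_new >= v_thresh
    return (v_reset if spiked else v_new), spiked

v, t, spikes = 0.0, 0.0, 0
while t < 100.0:
    v, spiked = lif_step(v, g_total=5.0, i_syn=6.0, dt=0.5)  # stiff, large step
    spikes += int(spiked)
    t += 0.5
print("spike count:", spikes)
```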
NASA Astrophysics Data System (ADS)
Sawall, Mathias; von Harbou, Erik; Moog, Annekathrin; Behrens, Richard; Schröder, Henning; Simoneau, Joël; Steimers, Ellen; Neymeyr, Klaus
2018-04-01
Spectral data preprocessing is an integral and sometimes inevitable part of chemometric analyses. For Nuclear Magnetic Resonance (NMR) spectra a possible first preprocessing step is a phase correction which is applied to the Fourier transformed free induction decay (FID) signal. This preprocessing step can be followed by a separate baseline correction step. Especially if series of high-resolution spectra are considered, then automated and computationally fast preprocessing routines are desirable. A new method is suggested that applies the phase and the baseline corrections simultaneously in an automated form without manual input, which distinguishes this work from other approaches. The underlying multi-objective optimization or Pareto optimization provides improved results compared to consecutively applied correction steps. The optimization process uses an objective function which applies strong penalty constraints and weaker regularization conditions. The new method includes an approach for the detection of zero baseline regions. The baseline correction uses a modified Whittaker smoother. The functionality of the new method is demonstrated for experimental NMR spectra. The results are verified against gravimetric data. The method is compared to alternative preprocessing tools. Additionally, the simultaneous correction method is compared to a consecutive application of the two correction steps.
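A generic (unmodified) Whittaker smoother of the kind the correction builds on can be written in a few lines: minimize the weighted misfit plus a second-difference roughness penalty and solve the resulting sparse linear system. The weights, λ, and the synthetic spectrum below are illustrative, not the authors' variant.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker_smooth(y, weights, lam=1e5):
    """Weighted Whittaker smoother with a second-difference penalty:
    minimize sum(w * (y - z)**2) + lam * sum((D2 z)**2)."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    W = sparse.diags(weights)
    return spsolve((W + lam * D.T @ D).tocsc(), weights * y)

x = np.linspace(0.0, 10.0, 500)
spectrum = np.exp(-((x - 5.0) ** 2) / 0.01) + 0.1 * x   # peak on a drifting baseline
w = np.where(spectrum < np.percentile(spectrum, 80), 1.0, 1e-3)  # down-weight the peak
baseline = whittaker_smooth(spectrum, w)
```

Zero-baseline detection in the paper plays the role assigned here to the hand-made weights: regions identified as baseline get full weight, peak regions almost none.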
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2003-01-01
A variable order method of integrating the structural dynamics equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. When the time variation of the system can be modeled exactly by a polynomial it produces nearly exact solutions for a wide range of time step sizes. Solutions of a model nonlinear dynamic response exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with solutions obtained by established methods.
NASA Technical Reports Server (NTRS)
Rummel, R.; Sjoeberg, L.; Rapp, R. H.
1978-01-01
A numerical method for the determination of gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth and an integration over a cubic interpolatory spline function which approximates the step function obtained from the numerical integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied on geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.
NASA Astrophysics Data System (ADS)
Syamsuri, B. S.; Anwar, S.; Sumarna, O.
2017-09-01
This research aims to develop oxidation-reduction reaction (redox) teaching material using the Four Steps Teaching Material Development (4S TMD) method, which consists of four steps: selection, structuring, characterization and didactical reduction. This paper is the first part of the development of the teaching material and covers the selection and structuring steps. In the selection step, development begins with the redox concepts demanded by the curriculum, continues with fundamental concepts sourced from international textbooks, and ends with the values or skills that can be integrated with the redox concepts. The results of this selection step are the subject matter of the redox concept and the values that can be integrated with it. In the structuring step, three products were developed: a concept map showing the relationships between redox concepts; a macro structure guiding the systematic writing of the teaching material; and multiple representations connecting the macroscopic, submicroscopic, and symbolic levels. The two steps in this first part of the study produced a draft of the teaching material, which was evaluated by an expert lecturer in the field of chemical education to assess its feasibility.
Executing on Integration: The Key to Success in Mergers and Acquisitions.
Bradley, Carol
2016-01-01
Health care mergers and acquisitions require a clearly stated vision and exquisite planning of integration activities to provide the best possible conditions for a successful transaction. During the due diligence process, key steps can be taken to create a shared vision and a plan to inspire confidence and build enthusiasm for all stakeholders. Integration planning should include a defined structure, roles and responsibilities, as well as a method for evaluation.
Sattler, Bernhard; Jochimsen, Thies; Barthel, Henryk; Sommerfeld, Kerstin; Stumpp, Patrick; Hoffmann, Karl-Titus; Gutberlet, Matthias; Villringer, Arno; Kahn, Thomas; Sabri, Osama
2013-02-01
The implementation of hybrid imaging systems requires thorough and anticipatory planning at local and regional levels. For installation of combined positron emission and magnetic resonance imaging systems (PET/MRI), a number of physical and constructional provisions concerning shielding of electromagnetic fields (RF- and high-field) as well as handling of radionuclides have to be met, the latter of which includes shielding for the emitted 511 keV gamma rays. Based on our experiences with a SIEMENS Biograph mMR system, a step-by-step approach is required to allow a trouble-free installation. In this article, we present a proposal for a standardized step-by-step plan to accomplish the installation of a combined PET/MRI system. Moreover, guidelines for the smooth operation of combined PET/MRI in an integrated research and clinical setting will be proposed. Overall, the most important preconditions for the successful implementation of PET/MRI in an integrated research and clinical setting is the interdisciplinary target-oriented cooperation between nuclear medicine, radiology, and all referring and collaborating institutions at all levels of interaction (personnel, imaging protocols, reporting, selection of the data transfer and communication methods).
NASA Astrophysics Data System (ADS)
Bittencourt, Tulio N.; Barry, Ahmabou; Ingraffea, Anthony R.
This paper presents a comparison among stress-intensity factors for mixed-mode two-dimensional problems obtained through three different approaches: displacement correlation, J-integral, and modified crack-closure integral. All mentioned procedures involve only one analysis step and are incorporated in the post-processor page of a finite element computer code for fracture mechanics analysis (FRANC). Results are presented for a closed-form solution problem under mixed-mode conditions. The accuracy of these described methods then is discussed and analyzed in the framework of their numerical results. The influence of the differences among the three methods on the predicted crack trajectory of general problems is also discussed.
Numerical integration of KPZ equation with restrictions
NASA Astrophysics Data System (ADS)
Torres, M. F.; Buceta, R. C.
2018-03-01
In this paper, we introduce a novel integration method for the Kardar–Parisi–Zhang (KPZ) equation. It is known that if, during the discrete integration of the KPZ equation, the nearest-neighbor height-difference exceeds a critical value, instabilities appear and the integration diverges. One way to avoid these instabilities is to replace the KPZ nonlinear term by a function of the same term that depends on a single adjustable parameter which is able to control pillars or grooves growing on the interface. Here, we propose a different integration method which consists of directly limiting the value taken by the KPZ nonlinearity, thereby imposing a restriction rule that is applied in each integration time-step, as if it were the growth rule of a restricted discrete model, e.g. restricted-solid-on-solid (RSOS). Taking the discrete KPZ equation with restrictions to its dimensionless version, the integration depends on three parameters: the coupling constant g, the inverse of the time-step k, and the restriction constant ε, which is chosen to eliminate divergences while keeping all the properties of the continuous KPZ equation. We study in detail the conditions in parameter space that avoid divergences in the 1-dimensional integration and reproduce the scaling properties of the continuous KPZ equation with a particular parameter set. We apply the tested methodology to the d-dimensional case (d = 3, 4) with the purpose of obtaining the growth exponent β, establishing the conditions on the coupling constant g under which we recover values known from other authors, particularly for the RSOS model. This method allows us to infer that d = 4 is not the critical dimension of the KPZ universality class, where the strong-coupling phase disappears.
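A 1-dimensional version of the restricted integration scheme is sketched below; the nonlinearity is clipped at ±ε in each step, in analogy with an RSOS-type growth rule. Prefactors, parameter values, and the noise normalization are illustrative rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def kpz_step(h, g, k, eps):
    """One restricted Euler-Maruyama step of the dimensionless 1-d KPZ equation."""
    dt = 1.0 / k                                       # k is the inverse time-step
    lap = np.roll(h, -1) - 2.0 * h + np.roll(h, 1)     # discrete Laplacian
    grad = 0.5 * (np.roll(h, -1) - np.roll(h, 1))      # centered gradient
    nonlin = np.clip(0.5 * g * grad**2, -eps, eps)     # the restriction rule
    noise = np.sqrt(dt) * rng.standard_normal(h.size)
    return h + dt * (lap + nonlin) + noise

h = np.zeros(1024)
for _ in range(20000):
    h = kpz_step(h, g=10.0, k=50.0, eps=1.0)
print("interface width:", h.std())
```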
A Semi-Implicit, Three-Dimensional Model for Estuarine Circulation
Smith, Peter E.
2006-01-01
A semi-implicit, finite-difference method for the numerical solution of the three-dimensional equations for circulation in estuaries is presented and tested. The method uses a three-time-level, leapfrog-trapezoidal scheme that is essentially second-order accurate in the spatial and temporal numerical approximations. The three-time-level scheme is shown to be preferred over a two-time-level scheme, especially for problems with strong nonlinearities. The stability of the semi-implicit scheme is free from any time-step limitation related to the terms describing vertical diffusion and the propagation of the surface gravity waves. The scheme does not rely on any form of vertical/horizontal mode-splitting to treat the vertical diffusion implicitly. At each time step, the numerical method uses a double-sweep method to transform a large number of small tridiagonal equation systems and then uses the preconditioned conjugate-gradient method to solve a single, large, five-diagonal equation system for the water surface elevation. The governing equations for the multi-level scheme are prepared in a conservative form by integrating them over the height of each horizontal layer. The layer-integrated volumetric transports replace velocities as the dependent variables so that the depth-integrated continuity equation that is used in the solution for the water surface elevation is linear. Volumetric transports are computed explicitly from the momentum equations. The resulting method is mass conservative, efficient, and numerically accurate.
Stability of numerical integration techniques for transient rotor dynamics
NASA Technical Reports Server (NTRS)
Kascak, A. F.
1977-01-01
A finite element model of a rotor bearing system was analyzed to determine the stability limits of the forward, backward, and centered Euler; Runge-Kutta; Milne; and Adams numerical integration techniques. The analysis concludes that the highest frequency mode determines the maximum time step for a stable solution. Thus, the number of mass elements should be minimized. Increasing the damping can sometimes cause numerical instability. For a uniform shaft, with 10 mass elements, operating at approximately the first critical speed, the maximum time step for the Runge-Kutta, Milne, and Adams methods is that which corresponds to approximately 1 degree of shaft movement. This is independent of rotor dimensions.
Lu, Yuhua; Liu, Qian
2018-01-01
We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications. PMID:29515870
Xu, Lang; Lu, Yuhua; Liu, Qian
2018-02-01
We propose a novel method to simulate soft tissue deformation for virtual surgery applications. The method considers the mechanical properties of soft tissue, such as its viscoelasticity, nonlinearity and incompressibility; its speed, stability and accuracy also meet the requirements for a surgery simulator. Modifying the traditional equation for mass spring dampers (MSD) introduces nonlinearity and viscoelasticity into the calculation of elastic force. Then, the elastic force is used in the constraint projection step for naturally reducing constraint potential. The node position is enforced by the combined spring force and constraint conservative force through Newton's second law. We conduct a comparison study of conventional MSD and position-based dynamics for our new integrating method. Our approach enables stable, fast and large step simulation by freely controlling visual effects based on nonlinearity, viscoelasticity and incompressibility. We implement a laparoscopic cholecystectomy simulator to demonstrate the practicality of our method, in which liver and gallbladder deformation can be simulated in real time. Our method is an appropriate choice for the development of real-time virtual surgery applications.
METHOD OF MEASURING THE INTEGRATED ENERGY OUTPUT OF A NEUTRONIC CHAIN REACTOR
Sturm, W.J.
1958-12-01
A method is presented for measuring the integrated energy output of a reactor consisting of the steps of successively irradiating calibrated thin foils of an element, such as gold, which is rendered radioactive by exposure to neutron flux, for periods of time not greater than one-fifth the mean life of the induced radioactivity, and producing an indication of the radioactivity induced in each foil, each foil being introduced into the reactor immediately upon removal of its predecessor.
Front and backside processed thin film electronic devices
Yuan, Hao-Chih; Wang, Guogong; Eriksson, Mark A.; Evans, Paul G.; Lagally, Max G.; Ma, Zhenqiang
2010-10-12
This invention provides methods for fabricating thin film electronic devices with both front- and backside processing capabilities. Using these methods, high temperature processing steps may be carried out during both frontside and backside processing. The methods are well-suited for fabricating back-gate and double-gate field effect transistors, double-sided bipolar transistors and 3D integrated circuits.
A New Sampling Strategy for the Detection of Fecal Bacteria Integrated with USEPA Method 1622/1623
USEPA Method 1622/1623 requires the concentration of Cryptosporidium and Giardia from 10 liters of water samples prior to detection. During this process the supernatant is discarded because it is assumed that most protozoa are retained in the filtration and centrifugation steps....
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of a numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
NASA Astrophysics Data System (ADS)
Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto
2017-10-01
This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark with a fixed number of iterations (iterative) and the operator-splitting (non-iterative) method, is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
Epidermal segmentation in high-definition optical coherence tomography.
Li, Annan; Cheng, Jun; Yow, Ai Ping; Wall, Carolin; Wong, Damon Wing Kee; Tey, Hong Liang; Liu, Jiang
2015-01-01
Epidermis segmentation is a crucial step in many dermatological applications. Recently, high-definition optical coherence tomography (HD-OCT) has been developed and applied to imaging subsurface skin tissues. In this paper, a novel epidermis segmentation method using HD-OCT is proposed in which the epidermis is segmented in three steps: weighted least squares-based pre-processing, graph-based skin surface detection, and local integral projection-based dermal-epidermal junction detection. Using a dataset of five 3D volumes, we found that this method correlates well with the conventional method of manually marking out the epidermis. This method can therefore serve to effectively and rapidly delineate the epidermis for study and clinical management of skin diseases.
Molecular dynamics based enhanced sampling of collective variables with very large time steps.
Chen, Pei-Yang; Tuckerman, Mark E
2018-01-14
Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.
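For contrast with the resonance-free integrators described above, here is a minimal sketch of the standard two-level multiple-time-step (r-RESPA) scheme, whose outer step size is what the resonance phenomena limit; f_slow and f_fast are assumed user-supplied force callables, and this is not the isokinetic Nosé-Hoover method of the paper.

```python
def respa_step(q, p, m, f_slow, f_fast, dt, n_inner):
    """Two-level multiple-time-step (r-RESPA) integrator: half kicks
    from the slow force bracket n_inner velocity-Verlet substeps of
    the fast force taken with the small step dt/n_inner."""
    p = p + 0.5 * dt * f_slow(q)
    h = dt / n_inner
    for _ in range(n_inner):
        p = p + 0.5 * h * f_fast(q)
        q = q + h * p / m
        p = p + 0.5 * h * f_fast(q)
    p = p + 0.5 * dt * f_slow(q)
    return q, p
```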
Quadratic adaptive algorithm for solving cardiac action potential models.
Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing
2016-10-01
An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method adjusts the time step size automatically and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. Copyright © 2016 Elsevier Ltd. All rights reserved.
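A hedged sketch of the quadratic step-selection idea: choose dt so that the predicted change |V'dt + 0.5 V''dt**2| stays near a target change per step. The function and parameter names are illustrative, not the paper's, and the el/tsr machinery is reduced to a simple growth cap.

```python
import numpy as np

def quadratic_step(dV, d2V, dV_max=0.2, dt_min=1e-3, dt_max=1.0,
                   dt_prev=None, growth=2.0):
    """Choose dt from |dV*dt + 0.5*d2V*dt**2| ~ dV_max, where dV and
    d2V are the first and second time derivatives of the membrane
    potential; the growth cap stands in for the paper's tsr idea."""
    a, b = 0.5 * abs(d2V), abs(dV)
    if a < 1e-12:                      # nearly linear segment
        dt = dV_max / max(b, 1e-12)
    else:                              # positive root of a*dt**2 + b*dt - dV_max = 0
        dt = (-b + np.sqrt(b * b + 4.0 * a * dV_max)) / (2.0 * a)
    if dt_prev is not None:
        dt = min(dt, growth * dt_prev)  # limit step growth
    return float(np.clip(dt, dt_min, dt_max))
```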
Getting the message across: using ecological integrity to communicate with resource managers
Mitchell, Brian R.; Tierney, Geraldine L.; Schweiger, E. William; Miller, Kathryn M.; Faber-Langendoen, Don; Grace, James B.
2014-01-01
This chapter describes and illustrates how concepts of ecological integrity, thresholds, and reference conditions can be integrated into a research and monitoring framework for natural resource management. Ecological integrity has been defined as a measure of the composition, structure, and function of an ecosystem in relation to the system’s natural or historical range of variation, as well as perturbations caused by natural or anthropogenic agents of change. Using ecological integrity to communicate with managers requires five steps, often implemented iteratively: (1) document the scale of the project and the current conceptual understanding and reference conditions of the ecosystem, (2) select appropriate metrics representing integrity, (3) define externally verified assessment points (metric values that signify an ecological change or need for management action) for the metrics, (4) collect data and calculate metric scores, and (5) summarize the status of the ecosystem using a variety of reporting methods. While we present the steps linearly for conceptual clarity, actual implementation of this approach may require addressing the steps in a different order or revisiting steps (such as metric selection) multiple times as data are collected. Knowledge of relevant ecological thresholds is important when metrics are selected, because thresholds identify where small changes in an environmental driver produce large responses in the ecosystem. Metrics with thresholds at or just beyond the limits of a system’s range of natural variability can be excellent, since moving beyond the normal range produces a marked change in their values. Alternatively, metrics with thresholds within but near the edge of the range of natural variability can serve as harbingers of potential change. Identifying thresholds also contributes to decisions about selection of assessment points. In particular, if there is a significant resistance to perturbation in an ecosystem, with threshold behavior not occurring until well beyond the historical range of variation, this may provide a scientific basis for shifting an ecological assessment point beyond the historical range. We present two case studies using ongoing monitoring by the US National Park Service Vital Signs program that illustrate the use of an ecological integrity approach to communicate ecosystem status to resource managers. The Wetland Ecological Integrity in Rocky Mountain National Park case study uses an analytical approach that specifically incorporates threshold detection into the process of establishing assessment points. The Forest Ecological Integrity of Northeastern National Parks case study describes a method for reporting ecological integrity to resource managers and other decision makers. We believe our approach has the potential for wide applicability for natural resource management.
Progress in development of HEDP capabilities in FLASH's Unsplit Staggered Mesh MHD solver
NASA Astrophysics Data System (ADS)
Lee, D.; Xia, G.; Daley, C.; Dubey, A.; Gopal, S.; Graziani, C.; Lamb, D.; Weide, K.
2011-11-01
FLASH is a publicly available astrophysical community code designed to solve highly compressible multi-physics reactive flows. We are adding capabilities to FLASH that will make it an open science code for the academic HEDP community. Among many important numerical requirements, we consider the following features to be important components necessary to meet our goals for FLASH as an HEDP open toolset. First, we are developing computationally efficient time-stepping integration methods that overcome the stiffness that arises in the equations describing a physical problem when there are disparate time scales. To this end, we are adding two different time-stepping schemes to FLASH that relax the time step limit when diffusive effects are present: an explicit super-time-stepping algorithm (Alexiades et al. in Com. Num. Mech. Eng. 12:31-42, 1996) and a Jacobian-Free Newton-Krylov implicit formulation. These two methods will be integrated into a robust, efficient, and high-order accurate Unsplit Staggered Mesh MHD (USM) solver (Lee and Deane in J. Comput. Phys. 227, 2009). Second, we have implemented an anisotropic Spitzer-Braginskii conductivity model to treat thermal heat conduction along magnetic field lines. Finally, we are implementing the Biermann Battery term to account for spontaneous generation of magnetic fields in the presence of non-parallel temperature and density gradients.
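A small sketch of the super-time-stepping substep formula of Alexiades et al. (1996), one of the two relaxation schemes mentioned above; the values of N and nu are illustrative choices, not FLASH defaults.

```python
import numpy as np

def sts_substeps(dt_expl, N=10, nu=0.05):
    """Chebyshev substeps of super-time-stepping (Alexiades et al.):
    N inner steps whose sum, one 'superstep', exceeds N*dt_expl while
    the composite update remains stable for the diffusion operator."""
    j = np.arange(1, N + 1)
    return dt_expl / ((nu - 1.0) * np.cos((2 * j - 1) * np.pi / (2 * N)) + 1.0 + nu)

tau = sts_substeps(1e-4)
print(tau.sum() / (10 * 1e-4))   # acceleration factor over plain explicit stepping
```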
NASA Astrophysics Data System (ADS)
Rößler, Thomas; Stein, Olaf; Heng, Yi; Baumeister, Paul; Hoffmann, Lars
2018-02-01
The accuracy of trajectory calculations performed by Lagrangian particle dispersion models (LPDMs) depends on various factors. The optimization of numerical integration schemes used to solve the trajectory equation helps to maximize the computational efficiency of large-scale LPDM simulations. We analyzed global truncation errors of six explicit integration schemes of the Runge-Kutta family, which we implemented in the Massive-Parallel Trajectory Calculations (MPTRAC) advection module. The simulations were driven by wind fields from operational analysis and forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF) at T1279L137 spatial resolution and 3 h temporal sampling. We defined separate test cases for 15 distinct regions of the atmosphere, covering the polar regions, the midlatitudes, and the tropics in the free troposphere, in the upper troposphere and lower stratosphere (UT/LS) region, and in the middle stratosphere. In total, more than 5000 different transport simulations were performed, covering the months of January, April, July, and October for the years 2014 and 2015. We quantified the accuracy of the trajectories by calculating transport deviations with respect to reference simulations using a fourth-order Runge-Kutta integration scheme with a sufficiently fine time step. Transport deviations were assessed with respect to error limits based on turbulent diffusion. Independent of the numerical scheme, the global truncation errors vary significantly between the different regions. Horizontal transport deviations in the stratosphere are typically an order of magnitude smaller compared with the free troposphere. We found that the truncation errors of the six numerical schemes fall into three distinct groups, which mostly depend on the numerical order of the scheme. Schemes of the same order differ little in accuracy, but some methods need less computational time, which gives them an advantage in efficiency. The selection of the integration scheme and the appropriate time step should take into account the typical altitude ranges as well as the total length of the simulations to achieve the most efficient simulations. In summary, we recommend the third-order Runge-Kutta method with a time step of 170 s or the midpoint scheme with a time step of 100 s for efficient simulations of up to 10 days of simulation time for the specific ECMWF high-resolution data set considered in this study. Purely stratospheric simulations can use significantly larger time steps of 800 and 1100 s for the midpoint scheme and the third-order Runge-Kutta method, respectively.
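For concreteness, here are minimal implementations of the two schemes the authors end up recommending, applied to a toy rotational wind field standing in for interpolated ECMWF data; the coefficients are the classical midpoint and Kutta third-order formulas, not MPTRAC code.

```python
import numpy as np

def midpoint_step(x, t, dt, wind):
    """Second-order midpoint step for dx/dt = wind(x, t)."""
    k1 = wind(x, t)
    return x + dt * wind(x + 0.5 * dt * k1, t + 0.5 * dt)

def rk3_step(x, t, dt, wind):
    """Kutta's third-order Runge-Kutta step."""
    k1 = wind(x, t)
    k2 = wind(x + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = wind(x - dt * k1 + 2.0 * dt * k2, t + dt)
    return x + dt * (k1 + 4.0 * k2 + k3) / 6.0

# toy solid-body rotation standing in for interpolated wind data
wind = lambda x, t: np.array([-x[1], x[0]])
x, dt = np.array([1.0, 0.0]), 0.01
for n in range(1000):
    x = rk3_step(x, n * dt, dt, wind)
```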
Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.
Zhou, Pan; Lin, Zhouchen; Zhang, Chao
2016-05-01
Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification, so that the regulated classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method.
Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng
2013-01-01
Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887
Vijayakumar, Supreeta; Conway, Max; Lió, Pietro; Angione, Claudio
2017-05-30
Metabolic modelling has entered a mature phase with dozens of methods and software implementations available to the practitioner and the theoretician. It is not easy for a modeller to be able to see the wood (or the forest) for the trees. Driven by this analogy, we here present a 'forest' of principal methods used for constraint-based modelling in systems biology. This provides a tree-based view of methods available to prospective modellers, also available in interactive version at http://modellingmetabolism.net, where it will be kept updated with new methods after the publication of the present manuscript. Our updated classification of existing methods and tools highlights the most promising in the different branches, with the aim to develop a vision of how existing methods could hybridize and become more complex. We then provide the first hands-on tutorial for multi-objective optimization of metabolic models in R. We finally discuss the implementation of multi-view machine learning approaches in poly-omic integration. Throughout this work, we demonstrate the optimization of trade-offs between multiple metabolic objectives, with a focus on omic data integration through machine learning. We anticipate that the combination of a survey, a perspective on multi-view machine learning and a step-by-step R tutorial should be of interest for both the beginner and the advanced user. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Stability and delay sensitivity of neutral fractional-delay systems.
Xu, Qi; Shi, Min; Wang, Zaihua
2016-08-01
This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is the calculation of the number of unstable characteristic roots, which is described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limit of the integral are given in two concise ways, parameter dependent or parameter independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using rough integral estimation. Meanwhile, the paper shows that for some neutral fractional-delay systems the stability is extremely sensitive to changes in the time delays. Examples are given demonstrating the proposed method as well as the delay sensitivity.
Free energy of steps using atomistic simulations
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
The properties of solid-liquid interfaces are known to play critical roles in solidification processes. Particular importance attaches to the thermodynamic quantities that describe the equilibrium state of these surfaces. For example, in the solid-liquid-vapor heteroepitaxial growth of semiconductor nanowires, the crystal nucleation process on the faceted solid-liquid interface is influenced by the solid-liquid and vapor-solid interfacial free energies, and also by the free energies of the associated steps at these faceted interfaces. Crystal-growth theories and mesoscale simulation methods depend on quantitative information about these properties, which are often poorly characterized by experimental measurements. In this work we propose an extension of the capillary fluctuation method for calculating the free energy of steps on faceted crystal surfaces. From equilibrium atomistic simulations of steps on (111) surfaces of copper we accurately computed the step free energy for different step orientations. We show that the step free energy remains finite at all temperatures up to the melting point and that the results agree with the more well-established method of thermodynamic integration once finite-size effects are taken into account. The research of RF and MA at UC Berkeley was supported by the US National Science Foundation (Grant No. DMR-1105409). TF acknowledges support through a postdoctoral fellowship from the Miller Institute for Basic Research in Science.
Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm
NASA Technical Reports Server (NTRS)
Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.
1991-01-01
The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Delta(omega) = pi/mT for trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced for controlling the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
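A direct-sum sketch of trapezoidal Bromwich inversion with the step Delta(omega) = pi/(mT) follows. The paper evaluates the same sum via (m/2) + 1 sets of FFT or m sets of FHT computations, which this illustration does not reproduce; sigma, N, and the test transform are illustrative assumptions.

```python
import numpy as np

def ilaplace_trap(F, t, T, m=4, N=1024, sigma=None):
    """Trapezoidal-rule Bromwich inversion with frequency step
    dw = pi/(m*T); larger m refines the sampling and hence accuracy."""
    if sigma is None:
        sigma = 4.0 / T                 # contour to the right of all poles
    dw = np.pi / (m * T)
    w = np.arange(1, N) * dw
    vals = F(sigma + 1j * w)[None, :] * np.exp(1j * np.outer(t, w))
    f = (0.5 * np.real(F(sigma + 0j)) + np.real(vals).sum(axis=1)) * dw
    return np.exp(sigma * t) * f / np.pi

t = np.linspace(0.05, 5.0, 50)
f = ilaplace_trap(lambda s: 1.0 / (s + 1.0), t, T=5.0)  # approximately exp(-t)
```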
Rotorcraft Brownout: Advanced Understanding, Control and Mitigation
2008-12-31
...the Gauss-Seidel iterative method. The overall steps of the SIMPLER algorithm can be summarized as: 1. Guess the velocity field, 2. Calculate the momentum... techniques and numerical methods, and the team will begin to develop a methodology that is capable of integrating these solutions and highlighting... rotorcraft design optimization techniques will then be undertaken using the validated computational methods.
NASA Astrophysics Data System (ADS)
Iuga, Virginia; Kifor, Claudiu
2014-12-01
The key to achieving sustainable development lies in customer satisfaction through improved quality, reduced cost, reduced delivery lead times and proper communication. The objective of the lean manufacturing system (LMS) is to identify and eliminate the processes and resources which do not add value to a product. This paper presents a proposal for the further development of integrated management systems in organizations through the implementation of lean shop floor management. In the first part of the paper, a dynamic model of the implementation steps is presented. Furthermore, the paper underlines the importance of implementing a lean culture in parallel with each step of integrating the lean methods and tools. The paper also describes the Toyota philosophy, tools, and the supporting lean culture necessary for implementing an efficient lean system in productive organizations.
NASA Astrophysics Data System (ADS)
Singh, R. A.; Satyanarayana, N.; Kustandi, T. S.; Sinha, S. K.
2011-01-01
Micro/nano-electro-mechanical-systems (MEMS/NEMS) are miniaturized devices built at micro/nanoscales. At these scales, the surface/interfacial forces are extremely strong and they adversely affect the smooth operation and the useful operating lifetimes of such devices. When these forces manifest in severe forms, they lead to material removal and thereby reduce the wear durability of the devices. In this paper, we present a simple, yet robust, two-step surface modification method to significantly enhance the tribological performance of MEMS/NEMS materials. The two-step method involves oxygen plasma treatment of polymeric films and the application of a nanolubricant, namely perfluoropolyether. We apply the two-step method to the two most important MEMS/NEMS structural materials, namely silicon and SU8 polymer. On applying surface modification to these materials, their initial coefficient of friction reduces by ~4-7 times and the steady-state coefficient of friction reduces by ~2.5-3.5 times. Simultaneously, the wear durability of both the materials increases by >1000 times. The two-step method is time effective as each of the steps takes the time duration of approximately 1 min. It is also cost effective as the oxygen plasma treatment is a part of the MEMS/NEMS fabrication process. The two-step method can be readily and easily integrated into MEMS/NEMS fabrication processes. It is anticipated that this method will work for any kind of structural material from which MEMS/NEMS are or can be made.
An arbitrary-order staggered time integrator for the linear acoustic wave equation
NASA Astrophysics Data System (ADS)
Lee, Jaejoon; Park, Hyunseo; Park, Yoonseo; Shin, Changsoo
2018-02-01
We suggest a staggered time integrator whose order of accuracy can arbitrarily be extended to solve the linear acoustic wave equation. A strategy to select the appropriate order of accuracy is also proposed based on the error analysis that quantitatively predicts the truncation error of the numerical solution. This strategy not only reduces the computational cost several times, but also allows us to flexibly set the modelling parameters such as the time step length, grid interval and P-wave speed. It is demonstrated that the proposed method can almost eliminate temporal dispersive errors during long term simulations regardless of the heterogeneity of the media and time step lengths. The method can also be successfully applied to the source problem with an absorbing boundary condition, which is frequently encountered in the practical usage for the imaging algorithms or the inverse problems.
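A minimal second-order staggered leapfrog for the 1D acoustic system, the lowest-order member of the family that the abstract generalizes; grid size, CFL factor, and initial pulse are illustrative.

```python
import numpy as np

# p_t = -c**2 * v_x on integer steps, v_t = -p_x on half steps (1D,
# unit density); v lives on the staggered grid, offset by dx/2 and dt/2
N, L, c = 400, 1.0, 1.0
dx = L / N
dt = 0.5 * dx / c                        # CFL-limited second-order step
x = np.arange(N) * dx
p = np.exp(-((x - 0.5 * L) / 0.05)**2)   # initial pressure pulse
v = np.zeros(N + 1)                      # rigid walls: v[0] = v[-1] = 0

for n in range(800):
    v[1:-1] -= dt / dx * (p[1:] - p[:-1])
    p -= c**2 * dt / dx * (v[1:] - v[:-1])
```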
Bouschen, Werner; Schulz, Oliver; Eikel, Daniel; Spengler, Bernhard
2010-02-01
Matrix preparation techniques such as air spraying or vapor deposition were investigated with respect to lateral migration, integration of analyte into matrix crystals and achievable lateral resolution for the purpose of high-resolution biological imaging. The accessible mass range was found to be beyond 5000 u with sufficient analytical sensitivity. Gas-assisted spraying methods (using oxygen-free gases) provide a good compromise between crystal integration of analyte and analyte migration within the sample. Controlling preparational parameters with this method, however, is difficult. Separation of the preparation procedure into two steps, instead, leads to an improved control of migration and incorporation. The first step is a dry vapor deposition of matrix onto the investigated sample. In a second step, incorporation of analyte into the matrix crystal is enhanced by a controlled recrystallization of matrix in a saturated water atmosphere. With this latter method an effective analytical resolution of 2 microm in the x and y direction was achieved for scanning microprobe matrix-assisted laser desorption/ionization imaging mass spectrometry (SMALDI-MS). Cultured A-498 cells of human renal carcinoma were successfully investigated by high-resolution MALDI imaging using the new preparation techniques. Copyright 2010 John Wiley & Sons, Ltd.
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy", an automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33 992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, as well as 40 new hits, 14.9% of the total, originally false negatives. Ninety-six percent of true negatives were properly recognized too. Web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features, which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.
Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J
2014-01-01
We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that restricts the maximum time step which can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has been typically considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840
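For reference, a generic second-order Adams-Bashforth stepper of the kind compared in the study; rhs is an assumed user-supplied spatial operator (the spectral/hp discretisation itself is not reproduced here).

```python
def ab2(u, rhs, dt, n_steps):
    """Second-order Adams-Bashforth: one new RHS evaluation per step
    (cheaper than RK4 per step, but with a tighter stability limit)."""
    f_prev = rhs(u)
    u = u + dt * f_prev                 # bootstrap with forward Euler
    for _ in range(n_steps - 1):
        f = rhs(u)
        u = u + dt * (1.5 * f - 0.5 * f_prev)
        f_prev = f
    return u
```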
Storybridging: Four steps for constructing effective health narratives
Boeijinga, Anniek; Hoeken, Hans; Sanders, José
2017-01-01
Objective: To develop a practical step-by-step approach to constructing narrative health interventions in response to the mixed results and wide diversity of narratives used in health-related narrative persuasion research. Method: Development work was guided by essential narrative characteristics as well as principles enshrined in the Health Action Process Approach. Results: The ‘storybridging’ method for constructing health narratives is described as consisting of four concrete steps: (a) identifying the stage of change, (b) identifying the key elements, (c) building the story, and (d) pre-testing the story. These steps are illustrated by means of a case study in which an effective narrative health intervention was developed for Dutch truck drivers: a high-risk, underprivileged occupational group. Conclusion: Although time and labour intensive, the Storybridging approach suggests integrating the target audience as an important stakeholder throughout the development process. Implications and recommendations are provided for health promotion targeting truck drivers specifically and for constructing narrative health interventions in general. PMID:29276232
From fatalism to resilience: reducing disaster impacts through systematic investments.
Hill, Harvey; Wiener, John; Warner, Koko
2012-04-01
This paper describes a method for reducing the economic risks associated with predictable natural hazards by enhancing the resilience of national infrastructure systems. The three-step generalised framework is described along with examples. Step one establishes economic baseline growth without the disaster impact. Step two characterises economic growth constrained by a disaster. Step three assesses the economy's resilience to the disaster event when it is buffered by alternative resiliency investments. The successful outcome of step three is a disaster-resistant core of infrastructure systems and social capacity more able to maintain the national economy and development post disaster. In addition, the paper considers ways to achieve this goal in data-limited environments. The framework addresses this challenge by integrating physical and social data of different spatial scales into macroeconomic models. This supports the disaster risk reduction objectives of governments, donor agencies, and the United Nations International Strategy for Disaster Reduction. © 2012 The Author(s). Disasters © Overseas Development Institute, 2012.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally-useful method of estimating the critical time step for linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
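A toy illustration of the implicit-explicit partition, not the authors' algorithm: a 1D heat-conduction mesh in which a coarse region is advanced explicitly at its stable step while a refined region, whose critical explicit step dx_f**2/(2*alpha) would be two orders of magnitude smaller, is advanced implicitly at the same dt; the interface coupling between the two regions is omitted for brevity.

```python
import numpy as np

alpha = 1.0
dx_c, dx_f = 0.02, 0.002          # coarse / refined spacings
dt = 0.4 * dx_c**2 / (2 * alpha)  # stable for the coarse region only

def step_explicit(T, dx):
    """Forward-Euler conduction update, stable for dt < dx**2/(2*alpha)."""
    T = T.copy()
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T

def step_implicit(T, dx):
    """Backward-Euler update, unconditionally stable at the same dt."""
    n, r = len(T), alpha * dt / dx**2
    A = np.eye(n) * (1 + 2 * r) - np.eye(n, k=1) * r - np.eye(n, k=-1) * r
    A[0, :], A[-1, :] = 0.0, 0.0
    A[0, 0] = A[-1, -1] = 1.0     # fixed-temperature boundaries
    return np.linalg.solve(A, T)

T_coarse = step_explicit(np.linspace(1.0, 0.0, 51), dx_c)
T_fine = step_implicit(np.linspace(1.0, 0.0, 51), dx_f)
```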
Automating Guidelines for Clinical Decision Support: Knowledge Engineering and Implementation.
Tso, Geoffrey J; Tu, Samson W; Oshiro, Connie; Martins, Susana; Ashcraft, Michael; Yuen, Kaeli W; Wang, Dan; Robinson, Amy; Heidenreich, Paul A; Goldstein, Mary K
2016-01-01
As utilization of clinical decision support (CDS) increases, it is important to continue the development and refinement of methods to accurately translate the intention of clinical practice guidelines (CPG) into a computable form. In this study, we validate and extend the 13 steps that Shiffman et al. identified for translating CPG knowledge for use in CDS. During an implementation project of ATHENA-CDS, we encoded complex CPG recommendations for five common chronic conditions for integration into an existing clinical dashboard. Major decisions made during the implementation process were recorded and categorized according to the 13 steps. During the implementation period, we categorized 119 decisions and identified 8 new categories required to complete the project. We provide details on an updated model that outlines all of the steps used to translate CPG knowledge into a CDS integrated with existing health information technology.
Symplectic molecular dynamics simulations on specially designed parallel computers.
Borstnik, Urban; Janezic, Dusanka
2005-01-01
We have developed a computer program for molecular dynamics (MD) simulation that implements the Split Integration Symplectic Method (SISM) and is designed to run on specialized parallel computers. The MD integration is performed by the SISM, which analytically treats high-frequency vibrational motion and thus enables the use of longer simulation time steps. The low-frequency motion is treated numerically on specially designed parallel computers, which decreases the computational time of each simulation time step. The combination of these approaches means that less time is required and fewer steps are needed and so enables fast MD simulations. We study the computational performance of MD simulation of molecular systems on specialized computers and provide a comparison to standard personal computers. The combination of the SISM with two specialized parallel computers is an effective way to increase the speed of MD simulations up to 16-fold over a single PC processor.
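A minimal symplectic splitting (velocity Verlet) sketch for orientation; the SISM itself additionally treats the stiff, high-frequency vibrational part analytically, which this generic version does not.

```python
def velocity_verlet(q, p, m, force, dt, n_steps):
    """Symplectic kick-drift-kick integration of dq/dt = p/m,
    dp/dt = force(q)."""
    f = force(q)
    for _ in range(n_steps):
        p = p + 0.5 * dt * f      # half kick
        q = q + dt * p / m        # drift
        f = force(q)
        p = p + 0.5 * dt * f      # half kick
    return q, p

# harmonic-oscillator demo: energy is conserved to O(dt**2)
q, p = velocity_verlet(1.0, 0.0, 1.0, lambda q: -q, 0.05, 1000)
```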
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jinsuo; Guo, Shaoqiang
Pyroprocessing is a promising alternative for the reprocessing of used nuclear fuel (UNF) that uses electrochemical methods. Compared to the hydrometallurgical reprocessing method, pyroprocessing has many advantages such as reduced volume of radioactive waste, simple waste processing, ability to treat refractory material, and compatibility with fast reactor fuel recycle. The key steps of the process are the electro-refining of the spent metallic fuel in the LiCl-KCl eutectic salt, which can be integrated with an electrolytic reduction step for the reprocessing of spent oxide fuels.
Linear stability analysis of detonations via numerical computation and dynamic mode decomposition
NASA Astrophysics Data System (ADS)
Kabanov, Dmitry I.; Kasimov, Aslan R.
2018-03-01
We introduce a new method to investigate linear stability of gaseous detonations that is based on an accurate shock-fitting numerical integration of the linearized reactive Euler equations with a subsequent analysis of the computed solution via the dynamic mode decomposition. The method is applied to the detonation models based on both the standard one-step Arrhenius kinetics and two-step exothermic-endothermic reaction kinetics. Stability spectra for all cases are computed and analyzed. The new approach is shown to be a viable alternative to the traditional normal-mode analysis used in detonation theory.
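A compact sketch of exact dynamic mode decomposition as used in the second stage of the method; the rank truncation r and the snapshot-matrix layout are assumptions of this illustration, not details from the paper.

```python
import numpy as np

def dmd_eigs(X, r=None):
    """Exact DMD: eigenvalues of the best-fit linear map taking
    snapshot columns X[:, :-1] to X[:, 1:]; log(lam)/dt then gives
    the growth rates and frequencies of the stability spectrum."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:                              # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = (U.conj().T @ X2 @ Vh.conj().T) / s   # divide columns by s
    return np.linalg.eigvals(Atilde)
```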
Bayesian functional integral method for inferring continuous data from discrete measurements.
Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul
2012-02-08
Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". An inferred value of inaccessible continuous variables from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data. Attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
2D trajectory estimation during free walking using a tiptoe-mounted inertial sensor.
Sagawa, Koichi; Ohkubo, Kensuke
2015-07-16
An estimation method for a two-dimensional walking trajectory during free walking, such as forward walking, side stepping and backward walking, was investigated using a tiptoe-mounted inertial sensor. The horizontal trajectory of the toe-tip is obtained by double integration of toe-tip acceleration during the moving phase, in which the sensor is rotated before foot-off or after foot-contact, in addition to the swing phase. Special functions that determine the optimum moving phase as the integration duration in each step are developed statistically, using the gait cycle and the resultant angular velocity of dorsi/plantar flexion, pronation/supination and inversion/eversion, so that the difference between the estimated trajectory and the actual one is minimized during free walking at several cadences. To develop the functions, twenty healthy volunteers participated in free walking experiments in which subjects performed forward walking, side stepping to the right, side stepping to the left, and backward walking over 39 m down a straight corridor at several predetermined cadences. To confirm the effect of the developed functions, five healthy subjects participated in a free walking experiment in which each subject walked at normal, fast, and slow velocities, based on their own assessment, around a square course with 7 m sides. The experimentally obtained results of free walking with a combination of forward walking, backward walking, and side stepping indicate that the proposed method produces the walking trajectory with high precision compared with the constant-threshold method, which determines the swing phase using the size of the angular velocity. Copyright © 2015 Elsevier Ltd. All rights reserved.
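A simplified sketch of the drift-suppression idea: double integration restricted to a detected moving phase, with velocity clamped to zero during stance. Here moving is an assumed boolean array; the paper instead derives the optimum phase statistically from the gait cycle and angular velocities.

```python
import numpy as np

def toe_trajectory(acc, moving, dt):
    """Double-integrate one horizontal acceleration component, but
    only over samples flagged as the moving phase; clamping velocity
    to zero during stance suppresses integration drift."""
    vel = np.zeros_like(acc)
    pos = np.zeros_like(acc)
    for k in range(1, len(acc)):
        if moving[k]:
            vel[k] = vel[k-1] + 0.5 * (acc[k-1] + acc[k]) * dt  # trapezoid
        else:
            vel[k] = 0.0          # stance: zero-velocity update
        pos[k] = pos[k-1] + 0.5 * (vel[k-1] + vel[k]) * dt
    return pos
```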
A computational method for sharp interface advection.
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-11-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face-interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source.
An Operator-Integration-Factor Splitting (OIFS) method for Incompressible Flows in Moving Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patel, Saumil S.; Fischer, Paul F.; Min, Misun
In this paper, we present a characteristic-based numerical procedure for simulating incompressible flows in domains with moving boundaries. Our approach utilizes an operator-integration-factor splitting technique to help produce an efficient and stable numerical scheme. Using the spectral element method and an arbitrary Lagrangian-Eulerian formulation, we investigate flows where the convective acceleration effects are non-negligible. Several examples, ranging from laminar to turbulent flows, are considered. Comparisons with a standard, semi-implicit time-stepping procedure illustrate the improved performance of the scheme.
Methods of integrating Islamic values in teaching biology for shaping attitude and character
NASA Astrophysics Data System (ADS)
Listyono; Supardi, K. I.; Hindarto, N.; Ridlo, S.
2018-03-01
Learning is expected to develop the potential of learners to have a spiritual attitude: moral strength, self-control, personality, intelligence, noble character, as well as the skills needed by themselves, society, and the nation. Implementing religious and moral values in learning is an alternative way that is expected to answer this challenge. The solution offered is to integrate Islamic religious material into the teaching of biology. The values embedded in biology teaching materials include practical value, religious value, everyday-life value, socio-political value, and artistic value. In Islamic religious sources (the Qur'an and Hadith), various methods can touch human feelings and souls and generate motivation. Integrating learning with Islamic values can be done by a deductive or inductive approach. Appropriate methods of integration are the amtsal (analogy) method, the hiwar (dialogue) method, the targhib & tarhib (encouragement & warning) method, and the example method (giving a noble role model / good example). The right strategy for integrating Islamic values is outlined in the design of the lesson plan. The integration of Islamic values in the lesson plan will help teachers build students' character, because Islamic values can be implemented in every learning step, so students become accustomed to receiving the character value in this integrated learning.
Fiser, P S; Fairfull, R W
1989-02-01
Ram semen, collected by artificial vagina, was diluted and processed for long-term storage as described by P. S. Fiser, L. Ainsworth, and R. W. Fairfull (Canad. J. Anim. Sci. 62, 425-428, 1982). The concentration of the cryoprotectant, glycerol, was adjusted to 4% in the diluted semen prior to freezing by a one-step addition at 30 degrees C (Method 1), by cooling the semen to 5 degrees C and addition of the glycerol gradually over 30 min (Method 2), by one-step addition of glycerol prior to equilibration for 2 hr (Method 3), or by cooling to 5 degrees C, followed by a holding period of 2 hr at 5 degrees C, and the one-step addition of glycerol just prior to freezing (Method 4). After thawing, the glycerol concentration of the semen was reduced by stepwise dilution from 4 to 0.4% over 15 or 30 min or by a one-step ten-fold dilution. The average post-thaw percentage of motile spermatozoa was significantly lower after addition of glycerol by Method 1 (39.9%) than when the glycerol was added by the other three methods (range, 44.0-46.4% averaged over the glycerol dilution). The average post-thaw percentage of intact acrosomes (61.2%), highest in semen in which the glycerol was added by Method 2, was not significantly different from those in which glycerol was added to semen by Methods 3 and 4, but it was significantly higher than that found in semen in which the glycerol was added by Method 1 (54.4%). However, when averaged over the method of glycerolation, the post-thaw percentage of motile spermatozoa (range, 43.7-44.2%) and the percentage of intact acrosomes (range, 56.8-59.5%) did not differ significantly in semen subjected to gradual decrease in glycerol concentration and diluent osmolality (over 15 and 30 min) or by a one-step, 10-fold dilution. These data indicate that post-thaw survival of spermatozoa can be influenced by the way in which glycerol is added prior to freezing. However, post-thaw spermatozoa motility and acrosomal integrity can be maintained even after a rapid decrease in glycerol concentration such as that which accompanies insemination or dilution of semen for assessment of motility.
Identification of nonlinear normal modes of engineering structures under broadband forcing
NASA Astrophysics Data System (ADS)
Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.
2016-06-01
The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.
Kang, Junsu; Lee, Donghyeon; Heo, Young Jin; Chung, Wan Kyun
2017-11-07
For highly-integrated microfluidic systems, an actuation system is necessary to control the flow; however, the bulk of actuation devices including pumps or valves has impeded the broad application of integrated microfluidic systems. Here, we suggest a microfluidic process control method based on built-in microfluidic circuits. The circuit is composed of a fluidic timer circuit and a pneumatic logic circuit. The fluidic timer circuit is a serial connection of modularized timer units, which sequentially pass high pressure to the pneumatic logic circuit. The pneumatic logic circuit is a NOR gate array designed to control the liquid-controlling process. By using the timer circuit as a built-in signal generator, multi-step processes could be done totally inside the microchip without any external controller. The timer circuit uses only two valves per unit, and the number of process steps can be extended without limitation by adding timer units. As a demonstration, an automation chip has been designed for a six-step droplet treatment, which entails 1) loading, 2) separation, 3) reagent injection, 4) incubation, 5) clearing and 6) unloading. Each process was successfully performed for a pre-defined step-time without any external control device.
Method and apparatus for characterizing propagation delays of integrated circuit devices
NASA Technical Reports Server (NTRS)
Blaes, Brent R. (Inventor); Buehler, Martin G. (Inventor)
1987-01-01
Propagation delay of a signal through a channel is measured by cyclically generating a first step-wave signal for transmission through the channel to a two-input logic element and a second step-wave signal with a controlled delay to the second input terminal of the logic element. The logic element determines which signal is present first at its input terminals and stores a binary signal indicative of that determination for control of the delay of the second signal which is advanced or retarded for the next cycle until both the propagation delayed first step-wave signal and the control delayed step-wave signal are coincident. The propagation delay of the channel is then determined by measuring the time between the first and second step-wave signals out of the controlled step-wave signal generator.
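A loose software analogue of the described feedback loop: the reference step-wave delay is advanced or retarded by one increment per cycle, depending on which edge arrived first, until the two edges coincide. All names are hypothetical and no hardware behavior is modelled.

```python
def measure_delay(channel_delay, dt=1e-12, n_cycles=2000):
    """Successive approximation of a propagation delay: nudge the
    reference delay one increment per cycle toward coincidence with
    the channel-delayed edge; the settled value estimates the delay."""
    ref_delay = 0.0
    for _ in range(n_cycles):
        if ref_delay < channel_delay:
            ref_delay += dt       # reference arrived first: retard it
        else:
            ref_delay -= dt       # channel arrived first: advance it
    return ref_delay              # converges to within one increment
```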
A Planning Approach of Engineering Characteristics Based on QFD-TRIZ Integrated
NASA Astrophysics Data System (ADS)
Liu, Shang; Shi, Dongyan; Zhang, Ying
Traditional QFD planning methods compromise contradictions between engineering characteristics to achieve higher customer satisfaction. However, this compromise trade-off cannot eliminate the contradictions existing among the engineering characteristics, which limits the overall customer satisfaction. QFD (quality function deployment) integrated with TRIZ (the Russian acronym of the theory of inventive problem solving) has recently become an active research topic, because TRIZ can be used to resolve the contradictions between engineering characteristics that form the roof of the HOQ (house of quality). However, the traditional QFD planning approach is not suitable for QFD integrated with TRIZ, because TRIZ requires emphasizing the contradictions between engineering characteristics at the problem-definition stage instead of compromising via trade-offs. Therefore, a new planning approach based on QFD/TRIZ integration is proposed in this paper, which considers the correlation matrix of engineering characteristics and customer satisfaction on the basis of cost. The proposed approach suggests that TRIZ should be applied to resolve the contradictions in the first step, the correlation matrix of engineering characteristics should be amended in the second step, and in the next step the IFR (ideal final result) must be validated; then the planning is executed. An example is used to illustrate the proposed approach. The application indicated that higher customer satisfaction can be achieved and the contradictions between the characteristic parameters are eliminated.
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Stability of mixed time integration schemes for transient thermal analysis
NASA Technical Reports Server (NTRS)
Liu, W. K.; Lin, J. I.
1982-01-01
A current research topic in coupled-field problems is the development of effective transient algorithms that permit different time integration methods with different time steps to be used simultaneously in various regions of the problems. The implicit-explicit approach seems to be very successful in structural, fluid, and fluid-structure problems. This paper summarizes this research direction. A family of mixed time integration schemes, with the capabilities mentioned above, is also introduced for transient thermal analysis. A stability analysis and the computer implementation of this technique are also presented. In particular, it is shown that the mixed time implicit-explicit methods provide a natural framework for the further development of efficient, clean, modularized computer codes.
NASA Technical Reports Server (NTRS)
Billman, Dorrit Owen; Schreckenghost, Debra; Miri, Pardis
2014-01-01
Astronauts will be responsible for executing a much larger body of procedures as human exploration moves further from Earth and Mission Control. Efficient, reliable methods for executing these procedures, including manual, automated, and mixed execution, will be important. Our interface integrates step-by-step instruction with the means for execution. The research reported here compared manual execution using the new system to a system analogous to the manual-only system currently in use on the International Space Station, to assess whether user performance in manual operations would be as good or better with the new system as with the legacy system. The system used also allows flexible automated execution. The system and our data lay the foundation for integrating automated execution into the flow of procedures designed for humans. In our formative study, we found that the speed and accuracy of manual procedure execution were better with the new, integrated interface than with the legacy design.
Antfolk, Maria; Kim, Soo Hyeon; Koizumi, Saori; Fujii, Teruo; Laurell, Thomas
2017-01-01
The incidence of cancer is increasing worldwide and metastatic disease, through the spread of circulating tumor cells (CTCs), is responsible for the majority of the cancer deaths. Accurate monitoring of CTC levels in blood provides clinical information supporting therapeutic decision making, and improved methods for CTC enumeration are needed. Microfluidics has been extensively used for this purpose but most methods require several post-separation processing steps including concentration of the sample before analysis. This induces a high risk of sample loss of the collected rare cells. Here, an integrated system is presented that efficiently eliminates this risk by integrating label-free separation with single cell arraying of the target cell population, enabling direct on-chip tumor cell identification and enumeration. Prostate cancer cells (DU145) spiked into a sample with whole blood concentration of the peripheral blood mononuclear cell (PBMC) fraction were efficiently separated and trapped at a recovery of 76.2 ± 5.9% of the cancer cells and a minute contamination of 0.12 ± 0.04% PBMCs while simultaneously enabling a 20x volumetric concentration. This constitutes a first step towards a fully integrated system for rapid label-free separation and on-chip phenotypic characterization of circulating tumor cells from peripheral venous blood in clinical practice. PMID:28425472
NASA Technical Reports Server (NTRS)
Abdallah, Ayman A.; Barnett, Alan R.; Ibrahim, Omar M.; Manella, Richard T.
1993-01-01
Within the MSC/NASTRAN DMAP (Direct Matrix Abstraction Program) module TRD1, solving physical (coupled) or modal (uncoupled) transient equations of motion is performed using the Newmark-Beta or mode superposition algorithms, respectively. For equations of motion with initial conditions, only the Newmark-Beta integration routine has been available in MSC/NASTRAN solution sequences for solving physical systems and in custom DMAP sequences or alters for solving modal systems. In some cases, one difficulty with using the Newmark-Beta method is that the process of selecting suitable integration time steps for obtaining acceptable results is lengthy. In addition, when very small step sizes are required, a large amount of time can be spent integrating the equations of motion. For certain aerospace applications, a significant time savings can be realized when the equations of motion are solved using an exact integration routine instead of the Newmark-Beta numerical algorithm. In order to solve modal equations of motion with initial conditions and take advantage of efficiencies gained when using uncoupled solution algorithms (like that within TRD1), an exact mode superposition method using MSC/NASTRAN DMAP has been developed and successfully implemented as an enhancement to an existing coupled loads methodology at the NASA Lewis Research Center.
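For a single undamped modal equation with the force held constant over a step, the exact update has a closed form; a generic sketch of such a piecewise-constant-force integrator (not the actual DMAP implementation; the frequency, force, and step size below are arbitrary):

```python
import numpy as np

def exact_modal_step(q, p, w, f, h):
    """Advance q'' + w^2 q = f exactly over a step h, assuming f constant."""
    c, s = np.cos(w * h), np.sin(w * h)
    q_new = q * c + (p / w) * s + (f / w**2) * (1.0 - c)
    p_new = -q * w * s + p * c + (f / w) * s
    return q_new, p_new

# Unlike Newmark-Beta, the step size h can be large without accuracy loss.
q, p = 1.0, 0.0
for _ in range(10):
    q, p = exact_modal_step(q, p, w=2.0, f=0.5, h=0.5)
print(q, p)
```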
A numerical method for computing unsteady 2-D boundary layer flows
NASA Technical Reports Server (NTRS)
Krainer, Andreas
1988-01-01
A numerical method for computing unsteady two-dimensional boundary layers in incompressible laminar and turbulent flows is described and applied to a single airfoil changing its incidence angle in time. The solution procedure adopts a first order panel method with a simple wake model to solve for the inviscid part of the flow, and an implicit finite difference method for the viscous part of the flow. Both procedures integrate in time in a step-by-step fashion, in the course of which each step involves the solution of the elliptic Laplace equation and the solution of the parabolic boundary layer equations. The Reynolds shear stress term of the boundary layer equations is modeled by an algebraic eddy viscosity closure. The location of transition is predicted by an empirical data correlation originating from Michel. Since transition and turbulence modeling are key factors in the prediction of viscous flows, their accuracy will have a dominant influence on the overall results.
Compressed sparse tensor based quadrature for vibrational quantum mechanics integrals
Rai, Prashant; Sargsyan, Khachik; Najm, Habib N.
2018-03-20
A new method for fast evaluation of high dimensional integrals arising in quantum mechanics is proposed. Here, the method is based on sparse approximation of a high dimensional function followed by a low-rank compression. In the first step, we interpret the high dimensional integrand as a tensor in a suitable tensor product space and determine its entries by a compressed sensing based algorithm using only a few function evaluations. Secondly, we implement a rank reduction strategy to compress this tensor in a suitable low-rank tensor format using standard tensor compression tools. This allows representing a high dimensional integrand function as a small sum of products of low dimensional functions. A low dimensional Gauss–Hermite quadrature rule is then used to integrate this low-rank representation, thus alleviating the curse of dimensionality. Finally, numerical tests on synthetic functions, as well as on energy correction integrals for water and formaldehyde molecules, demonstrate the efficiency of this method, which uses very few function evaluations as compared to other integration strategies.
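The payoff of a sum-of-products (CP) representation is that a d-dimensional Gauss–Hermite integral collapses into d one-dimensional quadratures per rank term. A toy sketch (the CP factors here are chosen by hand rather than obtained from compressed sensing and rank reduction as in the paper):

```python
import numpy as np

x, w = np.polynomial.hermite.hermgauss(20)   # nodes/weights for weight e^{-x^2}

# Integrand in CP format: f(x1,x2,x3) = sum_r prod_k g[r][k](x_k), rank 2.
g = [[np.cos, np.sin, np.cos],
     [lambda t: t**2, np.cos, lambda t: np.exp(-t**2)]]

# Factored evaluation: one 1-D quadrature per (rank, dimension) pair.
I_cp = sum(np.prod([w @ gk(x) for gk in term]) for term in g)

# Brute-force check on the full 20^3 tensor-product grid.
X1, X2, X3 = np.meshgrid(x, x, x, indexing="ij")
W = np.einsum("i,j,k->ijk", w, w, w)
f = sum(term[0](X1) * term[1](X2) * term[2](X3) for term in g)
print(I_cp, np.sum(W * f))   # the two results agree
```

The factored evaluation costs O(d * rank * nodes) instead of O(nodes^d), which is the sense in which the curse of dimensionality is alleviated.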
Simulation methods with extended stability for stiff biochemical Kinetics.
Rué, Pau; Villà-Freixa, Jordi; Burrage, Kevin
2010-08-11
With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming increasingly routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, tau, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete event simulation may be extremely expensive, in particular for stiff systems where tau can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called tau-leap approach takes a larger step size by allowing all the reactions to fire, from a Poisson or Binomial distribution, within that step. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as tau grows. In this paper we extend Poisson tau-leap methods to a general class of Runge-Kutta (RK) tau-leap methods. We show that with the proper selection of the coefficients, the variance of the extended tau-leap can be well-behaved, leading to significantly larger step sizes. The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original tau-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
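A minimal Poisson tau-leap step for a toy reaction network (fixed tau and hand-picked rates for illustration; the paper's Runge-Kutta tau-leap variants add higher-order corrections on top of this basic update):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: A -> B (rate k1*A), B -> A (rate k2*B).
V = np.array([[-1,  1],   # row = species A, columns = reactions
              [ 1, -1]])  # row = species B
k1, k2 = 1.0, 0.5
x = np.array([1000, 0])

def propensities(x):
    return np.array([k1 * x[0], k2 * x[1]])

tau = 0.01
for _ in range(500):
    a = propensities(x)
    # Each reaction fires a Poisson number of times within the leap.
    x = x + V @ rng.poisson(a * tau)
    x = np.maximum(x, 0)   # crude guard against negative populations
print(x)
```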
Walther, Cornelia; Kellner, Martin; Berkemeyer, Matthias; Brocard, Cécile; Dürauer, Astrid
2017-10-21
Escherichia coli stores large amounts of highly pure product within inclusion bodies (IBs). To take advantage of this beneficial feature, after cell disintegration, the first step to optimal product recovery is efficient IB preparation. This step is also important in evaluating upstream optimization and process development, due to the potential impact of bioprocessing conditions on product quality and on the nanoscale properties of IBs. Proper IB preparation is often neglected, due to laboratory-scale methods requiring large amounts of materials and labor. Miniaturization and parallelization can accelerate analyses of individual processing steps and provide a deeper understanding of up- and downstream processing interdependencies. Consequently, reproducible, predictive microscale methods are in demand. In the present study, we complemented a recently established high-throughput cell disruption method with a microscale method for preparing purified IBs. This preparation provided results comparable to laboratory-scale IB processing regarding impurity depletion and product loss. Furthermore, with this method, we performed a "design of experiments" study to demonstrate the influence of fermentation conditions on the performance of subsequent downstream steps and product quality. We showed that this approach provided a 300-fold reduction in material consumption for each fermentation condition and a 24-fold reduction in processing time for 24 samples.
Valentijn, Pim P.; Schepman, Sanneke M.; Opheij, Wilfrid; Bruijnzeels, Marc A.
2013-01-01
Introduction Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. Methods The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. Results The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. Discussion The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective. PMID:23687482
NASA Astrophysics Data System (ADS)
Ab. Aziz, Norshakirah; Ahmad, Rohiza; Dhanapal Durai, Dominic
2011-12-01
Limited trust, cooperation and communication have been identified as some of the issues that hinder collaboration among business partners. The same issues also hamper the acceptance of an e-supply chain integrator among organizations in the same industry. On top of that, the huge number of components in the supply chain industry makes it impossible to include the entire set of supply chain components in the integrator. Hence, this study intends to propose a method for identifying "trusted" collaborators for inclusion in an e-supply chain integrator. For the purpose of constructing and validating the method, the Malaysian construction industry is chosen as the case study due to its size and importance to the economy. This paper puts forward the background of the research, relevant literature leading to the formulation of trust value elements, data collection from the Malaysian construction supply chain, and a glimpse of the proposed method for trusted partner selection. Future work is also presented to highlight the next step of this research.
Symbolic programming language in molecular multicenter integral problem
NASA Astrophysics Data System (ADS)
Safouhi, Hassan; Bouferguene, Ahmed
It is well known that in any ab initio molecular orbital (MO) calculation, the major task involves the computation of molecular integrals, among which the computation of three-center nuclear attraction and Coulomb integrals is the most frequently encountered. As the molecular system becomes larger, computation of these integrals becomes one of the most laborious and time-consuming steps in molecular systems calculation. Improvement of the computational methods of molecular integrals would be indispensable to further development in computational studies of large molecular systems. To develop fast and accurate algorithms for the numerical evaluation of these integrals over B functions, we used nonlinear transformations for improving convergence of highly oscillatory integrals. These methods form the basis of new methods for solving various problems that were unsolvable otherwise and have many applications as well. To apply these nonlinear transformations, the integrands should satisfy linear differential equations with coefficients having asymptotic power series in the sense of Poincaré, which in their turn should satisfy some limit conditions. These differential equations are very difficult to obtain explicitly. In the case of molecular integrals, we used a symbolic programming language (MAPLE) to demonstrate that all the conditions required to apply these nonlinear transformation methods are satisfied. Differential equations are obtained explicitly, allowing us to demonstrate that the limit conditions are also satisfied.
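The flavor of nonlinear convergence acceleration can be shown with a much simpler device: take partial integrals of an oscillatory tail between consecutive zeros and apply the nonlinear Shanks transformation (this illustrates the general idea only; the transformations applied to B-function integrals in this work are far more sophisticated, and the sin(x)/x example is an invented stand-in):

```python
import numpy as np
from scipy.integrate import quad

# Partial integrals S_k = int_0^{k*pi} sin(x)/x dx  (exact limit is pi/2).
S = np.cumsum([quad(lambda x: np.sinc(x / np.pi), k * np.pi, (k + 1) * np.pi)[0]
               for k in range(12)])

def shanks(s):
    """One pass of the nonlinear Shanks transformation."""
    return (s[2:] * s[:-2] - s[1:-1] ** 2) / (s[2:] - 2 * s[1:-1] + s[:-2])

t = shanks(shanks(S))
print(S[-1] - np.pi / 2, t[-1] - np.pi / 2)  # accelerated error is far smaller
```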
A Heckman selection model for the safety analysis of signalized intersections
Wong, S. C.; Zhu, Feng; Pei, Xin; Huang, Helai; Liu, Youjun
2017-01-01
Purpose The objective of this paper is to provide a new method for estimating crash rate and severity simultaneously. Methods This study explores a Heckman selection model of the crash rate and severity simultaneously at different levels and a two-step procedure is used to investigate the crash rate and severity levels. The first step uses a probit regression model to determine the sample selection process, and the second step develops a multiple regression model to simultaneously evaluate the crash rate and severity for slight injury/kill or serious injury (KSI), respectively. The model uses 555 observations from 262 signalized intersections in the Hong Kong metropolitan area, integrated with information on the traffic flow, geometric road design, road environment, traffic control and any crashes that occurred during two years. Results The results of the proposed two-step Heckman selection model illustrate the necessity of different crash rates for different crash severity levels. Conclusions A comparison with the existing approaches suggests that the Heckman selection model offers an efficient and convenient alternative method for evaluating the safety performance at signalized intersections. PMID:28732050
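A bare-bones version of the two-step estimator on synthetic data (probit selection equation, then OLS augmented with the inverse Mills ratio; the simulated data and scipy-based probit fit are illustrative assumptions, not the paper's Hong Kong data set):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # outcome regressors
Z = np.column_stack([X, rng.normal(size=n)])            # selection regressors
u, e = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], n).T
sel = (Z @ np.array([0.5, 1.0, 1.0]) + u) > 0           # selection indicator
y = X @ np.array([1.0, 2.0]) + e                        # outcome, observed if sel

# Step 1: probit for the selection process, by maximum likelihood.
def nll(g):
    p = norm.cdf(Z @ g)
    return -(np.log(p[sel]).sum() + np.log(1 - p[~sel]).sum())
g = minimize(nll, np.zeros(Z.shape[1])).x

# Step 2: OLS on the selected sample, augmented with the inverse Mills ratio.
zb = Z[sel] @ g
mills = norm.pdf(zb) / norm.cdf(zb)
Xa = np.column_stack([X[sel], mills])
beta = np.linalg.lstsq(Xa, y[sel], rcond=None)[0]
print(beta)   # last coefficient reflects the error correlation (rho*sigma)
```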
Frazier, Zachary
2012-01-01
Particle-based Brownian dynamics simulations offer the opportunity to not only simulate diffusion of particles but also the reactions between them. They therefore provide an opportunity to integrate varied biological data into spatially explicit models of biological processes, such as signal transduction or mitosis. However, particle-based reaction-diffusion methods are often hampered by the relatively small time step needed for accurate description of the reaction-diffusion framework. Such small time steps often prevent simulation times that are relevant for biological processes. It is therefore of great importance to develop reaction-diffusion methods that tolerate larger time steps while maintaining relatively high accuracy. Here, we provide an algorithm, which detects potential particle collisions prior to a BD-based particle displacement and at the same time rigorously obeys the detailed balance rule of equilibrium reactions. We can show that for reaction-diffusion processes of particles mimicking proteins, the method can increase the typical BD time step by an order of magnitude while maintaining similar accuracy in the reaction-diffusion modelling. PMID:22697237
Automated optical inspection of liquid crystal display anisotropic conductive film bonding
NASA Astrophysics Data System (ADS)
Ni, Guangming; Du, Xiaohui; Liu, Lin; Zhang, Jing; Liu, Juanxiu; Liu, Yong
2016-10-01
Anisotropic conductive film (ACF) bonding is widely used in the liquid crystal display (LCD) industry. It implements circuit connection between screens and flexible printed circuits or integrated circuits. Conductive microspheres in ACF are key factors that influence LCD quality, because the conductive microspheres' quantity and shape deformation rate affect the interconnection resistance. Although this issue has been studied extensively by prior work, quick and accurate methods to inspect the quality of ACF bonding are still missing in the actual production process. We propose a method to inspect ACF bonding effectively by using automated optical inspection. The method has three steps. First, it acquires images of the detection zones using a differential interference contrast (DIC) imaging system. Second, it identifies the conductive microspheres and their shape deformation rate by quantitative analysis of the characteristics of the DIC images. Finally, it inspects ACF bonding using a back-propagation-trained neural network. The results show that the miss rate is lower than 0.1%, and the false inspection rate is lower than 0.05%.
Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.
1998-01-01
BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.
1988-01-01
The goal of this research is to accurately predict the characteristics of the laminar separation bubble and its effects on airfoil performance. To this end, a model of the bubble is under development and will be incorporated in the analysis section of the Eppler and Somers program. As a first step in this direction, an existing bubble model was inserted into the program. It was decided to address the problem of the short bubble before attempting to predict the long bubble. In the second place, an integral boundary-layer method is believed to be more desirable than a finite-difference approach: while the two methods achieve similar prediction accuracy, finite-difference methods tend to involve significantly longer computer run times than integral methods. Finally, as the boundary-layer analysis in the Eppler and Somers program employs the momentum and kinetic energy integral equations, a short-bubble model compatible with these equations is most preferable.
Bai, Xiao-ping; Zhang, Xi-wei
2013-01-01
Selecting construction schemes for a building engineering project is a complex multiobjective optimization decision process in which many indexes must be considered to find the optimum scheme. Aiming at this problem, this paper selects cost, progress, quality, and safety as the four first-order evaluation indexes; uses a quantitative method for the cost index and integrated qualitative-quantitative methodologies for the progress, quality, and safety indexes; and integrates engineering economics, reliability theory, and information entropy theory to present a new evaluation method for building construction projects. Combined with a practical case, this paper also presents the detailed computing processes and steps, including selecting all order indexes, establishing the index matrix, computing the score values of all order indexes, computing the synthesis score, sorting all selected schemes, and performing analysis and decision making. The presented method can offer valuable references for risk computing of building construction projects.
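The information-entropy part of such index weighting is easy to sketch: the entropy of each index column determines its objective weight (illustrative decision matrix; the paper combines this with engineering economics and reliability computations):

```python
import numpy as np

# Rows = candidate construction schemes, columns = cost/progress/quality/safety
# scores (already oriented so that larger is better).
X = np.array([[0.7, 0.8, 0.9, 0.6],
              [0.9, 0.6, 0.7, 0.8],
              [0.6, 0.9, 0.8, 0.7]])

P = X / X.sum(axis=0)                               # normalize each index column
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy per index, in [0,1]
w = (1 - E) / (1 - E).sum()                         # low entropy -> high weight

scores = X @ w                                      # synthesis score per scheme
print(w, scores.argsort()[::-1])                    # weights and scheme ranking
```

Indexes that discriminate strongly between schemes (low entropy) receive large weights; indexes on which all schemes agree carry almost no weight.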
Endpoints for Neural Connectivity Including Neurite Outgrowth, Synapse Formation, and Function
A strategy for alternative methods for developmental neurotoxicity testing (DNT) focuses on assessment of chemical effects on conserved neurodevelopmental processes. The development of the brain is an integrated series of steps from the commitment of embryonic cells to become neu...
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
Here, this paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
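The stiff/nonstiff splitting can be caricatured with a scalar two-rate problem: treat the fast (acoustic-like) term implicitly and the slow (advective) term explicitly, so the step size is set by the slow scale (first-order IMEX Euler with invented rates; the paper uses high-order additive Runge-Kutta methods and a characteristic-space flux splitting):

```python
lam_slow, lam_fast = -1.0, -1000.0     # advective vs acoustic-like scales
dt, u = 0.1, 1.0                       # dt chosen from the slow scale only

for _ in range(50):
    # u_{n+1} = u_n + dt*lam_slow*u_n + dt*lam_fast*u_{n+1}
    u = (u + dt * lam_slow * u) / (1.0 - dt * lam_fast)

print(u)   # stable and decaying, despite dt*|lam_fast| = 100
```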
Public Participation Procedure in Integrated Transport and Green Infrastructure Planning
NASA Astrophysics Data System (ADS)
Finka, Maroš; Ondrejička, Vladimír; Jamečný, Ľubomír; Husár, Milan
2017-10-01
The dialogue among the decision makers and stakeholders is a crucial part of any decision-making process, particularly in the case of integrated transportation planning and planning of green infrastructure, where a multitude of actors is present. Although the theory of public participation is well developed after several decades of research, there is still a lack of practical guidelines due to the specificity of public participation challenges. The paper presents a model of public participation for integrated transport and green infrastructure planning for the international project TRANSGREEN, covering the area of five European countries - Slovakia, Czech Republic, Austria, Hungary and Romania. The challenge of the project is to coordinate the efforts of public actors and NGOs in an international environment in oftentimes precarious projects of transport infrastructure building and development of green infrastructure. The project aims at developing an environmentally friendly and safe international transport network. The proposed public participation procedure consists of five main steps - spread of information (passive), collection of information (consultation), intermediate discussion, engagement and partnership (empowerment). The initial spread of information is a process of communicating with the stakeholders, informing and educating them, and it is based on their willingness to be informed. The methods used in this stage are public displays, newsletters or press releases. The second step, consultation, is based on conveying the opinions of stakeholders to the decision makers. Polls, surveys, public hearings or written responses are examples of the many ways to achieve this objective, and the main principle here is the openness of stakeholders. The third step is intermediate discussion, where all sides are invited to a dialogue using tools such as public meetings, workshops or urban walks. The fourth step is engagement, based on negotiation, arbitration and mediation; the collaborative skill needed here is dealing with conflicts. The final step in the procedure is partnership and empowerment, employing methods such as multi-actor decision making, voting or referenda. The leading principle is cooperation. In this ultimate step, the stakeholders become decision makers themselves, and the success factor is continuous evaluation.
2016-01-01
Background Contributing to health informatics research means using conceptual models that are integrative and explain the research in terms of the two broad domains of health science and information science. However, it can be hard for novice health informatics researchers to find exemplars and guidelines in working with integrative conceptual models. Objectives The aim of this paper is to support the use of integrative conceptual models in research on information and communication technologies in the health sector, and to encourage discussion of these conceptual models in scholarly forums. Methods A two-part method was used to summarize and structure ideas about how to work effectively with conceptual models in health informatics research that included (1) a selective review and summary of the literature of conceptual models; and (2) the construction of a step-by-step approach to developing a conceptual model. Results The seven-step methodology for developing conceptual models in health informatics research explained in this paper involves (1) acknowledging the limitations of health science and information science conceptual models; (2) giving a rationale for one’s choice of integrative conceptual model; (3) explicating a conceptual model verbally and graphically; (4) seeking feedback about the conceptual model from stakeholders in both the health science and information science domains; (5) aligning a conceptual model with an appropriate research plan; (6) adapting a conceptual model in response to new knowledge over time; and (7) disseminating conceptual models in scholarly and scientific forums. Conclusions Making explicit the conceptual model that underpins a health informatics research project can contribute to increasing the number of well-formed and strongly grounded health informatics research projects. This explication has distinct benefits for researchers in training, research teams, and researchers and practitioners in information, health, and other disciplines. PMID:26912288
NASA Astrophysics Data System (ADS)
Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.
2018-05-01
The paper deals with the issue of electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering performance of the body is calculated through the integral equation method; a Fredholm equation of the second kind is used to compute the electric current density. In solving the integral equation by the method of moments, the authors properly treat the singularity of the kernel, and piecewise-constant functions are chosen as basis functions. Within the Kirchhoff integral approach it is possible to obtain the scattered electromagnetic field from the computed electric currents. The sector of observation angles lies in the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All the neurons contained a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the resulting optimized dimensions of the diffractive body. The paper also presents the basic steps of the calculation technique for diffractive bodies, based on the combination of the integral equation and neural network methods.
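A compact sketch of the method of moments with piecewise-constant basis functions and point collocation, applied to a smooth-kernel Fredholm equation of the second kind (a generic 1-D model problem with an invented kernel; the electromagnetic case additionally requires careful treatment of the kernel singularity, as the paper notes):

```python
import numpy as np

# Solve f(x) + int_0^1 K(x,y) f(y) dy = g(x) on a uniform mesh.
N = 200
h = 1.0 / N
y = (np.arange(N) + 0.5) * h             # cell midpoints (collocation points)

K = lambda x, t: np.exp(-np.abs(x - t))  # smooth model kernel
g = lambda x: 1.0 + x

# Piecewise-constant basis + midpoint rule: A_ij = delta_ij + h*K(x_i, y_j).
A = np.eye(N) + h * K(y[:, None], y[None, :])
f = np.linalg.solve(A, g(y))
print(f[:3])
```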
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data are disclosed. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
NASA Astrophysics Data System (ADS)
Yang, Haijian; Sun, Shuyu; Yang, Chao
2017-03-01
Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
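The box constraint 0 ≤ s ≤ 1 can be folded into a nonsmooth root-finding problem Φ(s) = s − clip(s − F(s), 0, 1) = 0 and attacked with a semismooth Newton iteration. A small dense sketch with a made-up residual F (the paper's setting is a finite element two-phase flow system with nonlinear elimination preconditioning):

```python
import numpy as np

c = np.array([0.2, -0.4, 1.8, 0.6])      # chosen so some bounds become active

def F(s):                                # toy nonlinear residual in s
    return s**3 + 0.5 * s - c

def J(s):                                # its (diagonal) Jacobian
    return np.diag(3 * s**2 + 0.5)

s = np.full(4, 0.5)
for _ in range(25):
    inner = s - F(s)
    Phi = s - np.clip(inner, 0.0, 1.0)
    active = (inner < 0.0) | (inner > 1.0)   # bound-active components
    Jphi = J(s)
    Jphi[active] = np.eye(len(s))[active]    # identity rows where active
    s = s - np.linalg.solve(Jphi, Phi)

print(s, F(s))   # s stays in [0,1]; F vanishes on the unconstrained components
```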
Seakeeping with the semi-Lagrangian particle finite element method
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio
2017-07-01
The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
Investigation of ODE integrators using interactive graphics. [Ordinary Differential Equations
NASA Technical Reports Server (NTRS)
Brown, R. L.
1978-01-01
Two FORTRAN programs using an interactive graphic terminal to generate accuracy and stability plots for given multistep ordinary differential equation (ODE) integrators are described. The first treats the fixed stepsize linear case with complex variable solutions, and generates plots to show accuracy and error response to step driving function of a numerical solution, as well as the linear stability region. The second generates an analog to the stability region for classes of non-linear ODE's as well as accuracy plots. Both systems can compute method coefficients from a simple specification of the method. Example plots are given.
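The linear stability region of a multistep method can be drawn directly from its characteristic polynomials via the boundary-locus method, sketched here for the two-step Adams-Bashforth method (a modern matplotlib stand-in for the interactive-terminal plots; the NASA programs additionally produce accuracy plots and nonlinear analogs):

```python
import numpy as np
import matplotlib.pyplot as plt

# AB2: y_{n+2} = y_{n+1} + h*(3/2 f_{n+1} - 1/2 f_n)
# rho(z) = z^2 - z,  sigma(z) = (3z - 1)/2.  On the stability boundary,
# h*lambda = rho(e^{i*theta}) / sigma(e^{i*theta}).
theta = np.linspace(0, 2 * np.pi, 400)
z = np.exp(1j * theta)
hl = (z**2 - z) / ((3 * z - 1) / 2)

plt.plot(hl.real, hl.imag)
plt.axhline(0, lw=0.5); plt.axvline(0, lw=0.5)
plt.title("Boundary locus of AB2 (stability region inside the curve)")
plt.xlabel(r"Re $h\lambda$"); plt.ylabel(r"Im $h\lambda$")
plt.show()
```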
Enabling fast, stable and accurate peridynamic computations using multi-time-step integration
Lindsay, P.; Parks, M. L.; Prakash, A.
2016-04-13
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Lastly, performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
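The subdomain subcycling idea in its crudest form: advance a stiff subdomain with m small explicit steps per large step of the soft subdomain, holding the interface value from the other side frozen within the large step (a two-degree-of-freedom toy with invented stiffnesses, not a peridynamic discretization; the stability of such staggered couplings needs the kind of analysis the paper provides):

```python
# Two unit masses: x1 on a stiff spring (subdomain A), x2 on a soft spring
# (subdomain B), coupled by a spring of stiffness kc.
kA, kB, kc = 400.0, 1.0, 5.0
DT, m = 0.01, 20                 # B's step, and A's substeps per B step
dt = DT / m

x1, v1, x2, v2 = 1.0, 0.0, 0.0, 0.0
for _ in range(1000):
    x2_frozen = x2               # interface value seen by A this large step
    for _ in range(m):           # subcycle the stiff subdomain
        v1 += dt * (-kA * x1 - kc * (x1 - x2_frozen))
        x1 += dt * v1
    v2 += DT * (-kB * x2 - kc * (x2 - x1))
    x2 += DT * v2

print(x1, x2)
```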
Single-step methods for predicting orbital motion considering its periodic components
NASA Astrophysics Data System (ADS)
Lavrov, K. N.
1989-01-01
Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart can be combined with multistep computational schemes that use a priori information on periodic components, yielding implicit single-sequence algorithms that retain the advantages of both. The construction and analysis of the properties of such algorithms are studied, using trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms, but are twice as fast, and yield short-term predictions valid for five to ten orbits with good accuracy, five to six times faster than algorithms based on other methods.
Three-dimensional unstructured grid Euler computations using a fully-implicit, upwind method
NASA Technical Reports Server (NTRS)
Whitaker, David L.
1993-01-01
A method has been developed to solve the Euler equations on a three-dimensional unstructured grid composed of tetrahedra. The method uses an upwind flow solver with a linearized, backward-Euler time integration scheme. Each time step results in a sparse linear system of equations which is solved by an iterative, sparse matrix solver. Local-time stepping, switched evolution relaxation (SER), preconditioning and reuse of the Jacobian are employed to accelerate the convergence rate. Implicit boundary conditions were found to be extremely important for fast convergence. Numerical experiments have shown that convergence rates comparable to that of a multigrid, central-difference scheme are achievable on the same mesh. Results are presented for several grids about an ONERA M6 wing.
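Switched evolution relaxation (SER) is simple to state: grow the pseudo-time step in inverse proportion to the residual norm, so the backward-Euler iteration turns into Newton's method as the residual vanishes. A dense toy version of this pseudo-transient continuation (an invented two-equation system; the paper applies the idea to upwind Euler discretizations with sparse iterative solves):

```python
import numpy as np

def R(u):                               # toy steady-state residual, R(u*) = 0
    return np.array([u[0]**2 + u[1] - 3.0, u[0] - u[1]**2 + 1.0])

def J(u):                               # its Jacobian
    return np.array([[2 * u[0], 1.0], [1.0, -2 * u[1]]])

u = np.array([0.5, 0.5])
dt = 0.1
r_prev = np.linalg.norm(R(u))
for _ in range(30):
    # backward-Euler pseudo-time step: (I/dt + J) du = -R
    du = np.linalg.solve(np.eye(2) / dt + J(u), -R(u))
    u = u + du
    r = np.linalg.norm(R(u))
    dt *= r_prev / max(r, 1e-15)        # SER: residual drops -> dt grows
    r_prev = r

print(u, r)   # dt blows up as r -> 0, recovering pure Newton near the solution
```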
Integration Methodology For Oil-Free Shaft Support Systems: Four Steps to Success
NASA Technical Reports Server (NTRS)
Howard, Samuel A.; DellaCorte, Christopher; Bruckner, Robert J.
2010-01-01
Commercial applications for Oil-Free turbomachinery are slowly becoming a reality. Micro-turbine generators, highspeed electric motors, and electrically driven centrifugal blowers are a few examples of products available in today's commercial marketplace. Gas foil bearing technology makes most of these applications possible. A significant volume of component level research has led to recent acceptance of gas foil bearings in several specialized applications, including those mentioned above. Component tests identifying such characteristics as load carrying capacity, power loss, thermal behavior, rotordynamic coefficients, etc. all help the engineer design foil bearing machines, but the development process can be just as important. As the technology gains momentum and acceptance in a wider array of machinery, the complexity and variety of applications will grow beyond the current class of machines. Following a robust integration methodology will help improve the probability of successful development of future Oil-Free turbomachinery. This paper describes a previously successful four-step integration methodology used in the development of several Oil-Free turbomachines. Proper application of the methods put forward here enable successful design of Oil-Free turbomachinery. In addition when significant design changes or unique machinery are developed, this four-step process must be considered.
SpaceNet: Modeling and Simulating Space Logistics
NASA Technical Reports Server (NTRS)
Lee, Gene; Jordan, Elizabeth; Shishko, Robert; de Weck, Olivier; Armar, Nii; Siddiqi, Afreen
2008-01-01
This paper summarizes the current state of the art in interplanetary supply chain modeling and discusses SpaceNet as one particular method and tool to address space logistics modeling and simulation challenges. Fundamental upgrades to the interplanetary supply chain framework such as process groups, nested elements, and cargo sharing, enabled SpaceNet to model an integrated set of missions as a campaign. The capabilities and uses of SpaceNet are demonstrated by a step-by-step modeling and simulation of a lunar campaign.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
Michlewski, Gracjan; Finnegan, David J.; Elfick, Alistair; Rosser, Susan J.
2017-01-01
Abstract Delivery of DNA to cells and its subsequent integration into the host genome is a fundamental task in molecular biology, biotechnology and gene therapy. Here we describe an IP-free one-step method that enables stable genome integration into either prokaryotic or eukaryotic cells. A synthetic mariner transposon is generated by flanking a DNA sequence with short inverted repeats. When purified recombinant Mos1 or Mboumar-9 transposase is co-transfected with transposon-containing plasmid DNA, it penetrates prokaryotic or eukaryotic cells and integrates the target DNA into the genome. In vivo integrations by purified transposase can be achieved by electroporation, chemical transfection or Lipofection of the transposase:DNA mixture, in contrast to other published transposon-based protocols which require electroporation or microinjection. As in other transposome systems, no helper plasmids are required since transposases are not expressed inside the host cells, thus leading to generation of stable cell lines. Since it does not require electroporation or microinjection, this tool has the potential to be applied for automated high-throughput creation of libraries of random integrants for purposes including gene knock-out libraries, screening for optimal integration positions or safe genome locations in different organisms, selection of the highest production of valuable compounds for biotechnology, and sequencing. PMID:28204586
Atomic-batched tensor decomposed two-electron repulsion integrals
NASA Astrophysics Data System (ADS)
Schmitz, Gunnar; Madsen, Niels Kristian; Christiansen, Ove
2017-04-01
We present a new integral format for 4-index electron repulsion integrals, in which several strategies like the Resolution-of-the-Identity (RI) approximation and other more general tensor-decomposition techniques are combined with an atomic batching scheme. The 3-index RI integral tensor is divided into sub-tensors defined by atom pairs on which we perform an accelerated decomposition to the canonical product (CP) format. In a first step, the RI integrals are decomposed to a high-rank CP-like format by repeated singular value decompositions followed by a rank reduction, which uses a Tucker decomposition as an intermediate step to lower the prefactor of the algorithm. After decomposing the RI sub-tensors (within the Coulomb metric), they can be reassembled to the full decomposed tensor (RC approach) or the atomic batched format can be maintained (ABC approach). In the first case, the integrals are very similar to the well-known tensor hypercontraction integral format, which gained some attraction in recent years since it allows for quartic scaling implementations of MP2 and some coupled cluster methods. On the MP2 level, the RC and ABC approaches are compared concerning efficiency and storage requirements. Furthermore, the overall accuracy of this approach is assessed. Initial test calculations show a good accuracy and that it is not limited to small systems.
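The basic RI structure underlying this format is easy to show: a 3-index tensor B reproduces the 4-index repulsion integrals as a contraction over an auxiliary index (random tensors stand in for real fitted integrals; the atom-pair batching and CP compression are the paper's additions on top of this format):

```python
import numpy as np

n, naux = 8, 30                      # orbital and auxiliary basis sizes
rng = np.random.default_rng(0)
B = rng.normal(size=(naux, n, n))    # stand-in for RI-fitted 3-index integrals

# 4-index ERIs from the 3-index factors: (pq|rs) = sum_Q B[Q,p,q] * B[Q,r,s]
eri = np.einsum("Qpq,Qrs->pqrs", B, B)

# Storage drops from n^4 to naux*n^2 numbers.
print(eri.shape, B.size, n**4)
```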
A computational method for sharp interface advection
Bredmose, Henrik; Jasak, Hrvoje
2016-01-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
Rudd, Michael E.
2014-01-01
Previous work has demonstrated that perceived surface reflectance (lightness) can be modeled in simple contexts in a quantitatively exact way by assuming that the visual system first extracts information about local, directed steps in log luminance, then spatially integrates these steps along paths through the image to compute lightness (Rudd and Zemach, 2004, 2005, 2007). This method of computing lightness is called edge integration. Recent evidence (Rudd, 2013) suggests that human vision employs a default strategy to integrate luminance steps only along paths from a common background region to the targets whose lightness is computed. This implies a role for gestalt grouping in edge-based lightness computation. Rudd (2010) further showed the perceptual weights applied to edges in lightness computation can be influenced by the observer's interpretation of luminance steps as resulting from either spatial variation in surface reflectance or illumination. This implies a role for top-down factors in any edge-based model of lightness (Rudd and Zemach, 2005). Here, I show how the separate influences of grouping and attention on lightness can be modeled in tandem by a cortical mechanism that first employs top-down signals to spatially select regions of interest for lightness computation. An object-based network computation, involving neurons that code for border-ownership, then automatically sets the neural gains applied to edge signals surviving the earlier spatial selection stage. Only the borders that survive both processing stages are spatially integrated to compute lightness. The model assumptions are consistent with those of the cortical lightness model presented earlier by Rudd (2010, 2013), and with neurophysiological data indicating extraction of local edge information in V1, network computations to establish figure-ground relations and border ownership in V2, and edge integration to encode lightness and darkness signals in V4. PMID:25202253
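The core computation is a sum of directed log-luminance steps along a path from the background to the target; a minimal sketch on a 1-D luminance profile (uniform edge weights and invented luminances; the model's gain-setting by border ownership and attention is not represented):

```python
import numpy as np

# 1-D luminance profile: background region followed by two target patches.
regions = [10.0, 40.0, 20.0]            # luminance of successive regions
L = np.repeat(regions, 50)

# Directed steps in log luminance occur only at the region borders.
steps = np.diff(np.log(L))
edges = np.nonzero(steps)[0]

# Lightness of a target = integral of edge steps along the path from the
# common background region to the target (here: left-to-right cumulation).
lightness = {f"region {i+1}": steps[edges[:i]].sum() for i in range(len(regions))}
print(lightness)   # log-lightness relative to the background region
```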
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fath, L., E-mail: lukas.fath@kit.edu; Hochbruck, M., E-mail: marlis.hochbruck@kit.edu; Singh, C.V., E-mail: chandraveer.singh@utoronto.ca
Classical integration methods for molecular dynamics are inherently limited due to resonance phenomena occurring at certain time-step sizes. The mollified impulse method can partially avoid this problem by using appropriate filters based on averaging or projection techniques. However, existing filters are computationally expensive and tedious to implement, since they require either analytical Hessians or the solution of nonlinear systems arising from constraints. In this work we follow a different approach based on corotation for the construction of a new filter for (flexible) biomolecular simulations. The main advantages of the proposed filter are its excellent stability properties and ease of implementation in standard software without Hessians or constraint solves. By simulating multiple realistic examples such as peptide, protein, ice equilibrium and ice–ice friction, the new filter is shown to speed up the computations of long-range interactions by approximately 20%. The proposed filtered integrators allow step sizes as large as 10 fs while keeping the energy drift below 1% over a 50 ps simulation.
Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia
2017-01-01
Introduction Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results Being a study protocol, this article reports on planned and ongoing research. Conclusions Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty to these approaches. PMID:28459355
A CAD Approach to Integrating NDE With Finite Element
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Downey, James; Ghosn, Louis J.; Baaklini, George Y.
2004-01-01
Nondestructive evaluation (NDE) is one of several technologies applied at NASA Glenn Research Center to determine atypical deformities, cracks, and other anomalies experienced by structural components. NDE consists of applying high-quality imaging techniques (such as x-ray imaging and computed tomography (CT)) to discover hidden manufactured flaws in a structure. Efforts are in progress to integrate NDE with the finite element (FE) computational method to perform detailed structural analysis of a given component. This report presents the core outlines for an in-house technical procedure that incorporates this combined NDE-FE interrelation. An example is presented to demonstrate the applicability of this analytical procedure. FE analysis of a test specimen is performed, and the resulting von Mises stresses and the stress concentrations near the anomalies are observed, which indicates the fidelity of the procedure. Additional information elaborating on the steps needed to perform such an analysis is clearly presented in the form of mini step-by-step guidelines.
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force-fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single energy point calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.
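The multiple time step integrator mentioned here is typically an impulse (r-RESPA-style) scheme: cheap fast forces are evaluated every inner step, while expensive long-range forces kick the velocities only once per outer step. A one-degree-of-freedom sketch with made-up force splits (not the authors' FMM-coupled code):

```python
def f_fast(x):  return -100.0 * x        # stiff bonded-like force
def f_slow(x):  return -1.0 * x          # smooth long-range-like force

DT, n_inner = 0.01, 10                   # outer step, inner steps per outer
dt = DT / n_inner

x, v = 1.0, 0.0
for _ in range(1000):
    v += 0.5 * DT * f_slow(x)            # opening half kick from slow forces
    for _ in range(n_inner):             # velocity Verlet on fast forces
        v += 0.5 * dt * f_fast(x)
        x += dt * v
        v += 0.5 * dt * f_fast(x)
    v += 0.5 * DT * f_slow(x)            # closing half kick

print(x, v)                              # energy stays bounded over long runs
```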
Li, Shandong; Xue, Qian; Duh, Jenq-Gong; Du, Honglei; Xu, Jie; Wan, Yong; Li, Qiang; Lü, Yueguang
2014-01-01
RF/microwave soft magnetic films (SMFs) are key materials for the miniaturization and multifunctionalization of monolithic microwave integrated circuits (MMICs) and their components, which demand SMFs with a higher self-biased ferromagnetic resonance frequency fFMR that can be fabricated in an IC-compatible process. However, self-biased metallic SMFs working at the X-band or higher frequencies have rarely been reported, even though there is urgent demand. In this paper, we report an IC-compatible process with two-step superposition to prepare SMFs, where the FeCoB SMFs were deposited on (011) lead zinc niobate–lead titanate substrates using a composition gradient sputtering method. As a result, a giant magnetic anisotropy field of 1498 Oe, 1–2 orders of magnitude larger than that obtained by the conventional magnetic annealing method, and an ultrahigh fFMR of up to 12.96 GHz, reaching the Ku-band, were obtained at zero magnetic bias field in the as-deposited films. These ultrahigh microwave performances can be attributed to the superposition of two effects: uniaxial stress induced by the composition gradient and magnetoelectric coupling. This two-step superposition method paves the way for SMFs to surpass the X-band via two-step or multi-step superposition, in which a variety of magnetic anisotropy field enhancing methods can be combined to attain a higher ferromagnetic resonance frequency. PMID:25491374
Koziel, David; Michaelis, Uwe; Kruse, Tobias
2018-08-01
Endotoxins contaminate proteins that are produced in E. coli. High levels of endotoxin can influence cellular assays and cause severe adverse effects when administered to humans. Thus, endotoxin removal is important in protein purification for academic research and in GMP manufacturing of biopharmaceuticals. Several methods exist to remove endotoxin, but they often require additional downstream-processing steps, decrease protein yield, and are costly. These disadvantages can be avoided by using an integrated endotoxin depletion (iED) wash-step that utilizes Triton X-114 (TX114). In this paper, we show that the iED wash-step is broadly applicable in most commonly used chromatographies: it reduces endotoxin by a factor of 10^3 to 10^6 during NiNTA, MBP, SAC, GST, Protein A, and CEX chromatography, but not during AEX or HIC chromatography. We characterized the iED wash-step using Design of Experiments (DoE) and identified optimal experimental conditions for application scenarios that are relevant to academic research or industrial GMP manufacturing. A single iED wash-step with 0.75% (v/v) TX114 added to the feed and wash buffer can reduce endotoxin levels to below 2 EU/ml or deplete most endotoxin while keeping manufacturing costs as low as possible. This comprehensive characterization enables academia and industry to widely adopt the iED wash-step for routine, efficient, and cost-effective depletion of endotoxin during protein purification at any scale.
Sensitivity of indentation testing to step-off edges and interface integrity in cartilage repair.
Bae, Won C; Law, Amanda W; Amiel, David; Sah, Robert L
2004-03-01
Step-off edges and tissue interfaces are prevalent in cartilage injury such as after intra-articular fracture and reduction, and in focal defects and surgical repair procedures such as osteochondral graft implantation. It would be useful to assess the function of injured or donor tissues near such step-off edges and the extent of integration at material interfaces. The objective of this study was to determine if indentation testing is sensitive to the presence of step-off edges and the integrity of material interfaces, in both in vitro simulated repair samples of bovine cartilage defect filled with fibrin matrix, and in vivo biological repair samples from a goat animal model. Indentation stiffness decreased at locations approaching a step-off edge, a lacerated interface, or an integrated interface in which the distal tissue was relatively soft. The indentation stiffness increased or remained constant when the site of indentation approached an integrated interface in which the distal tissue was relatively stiff or similar in stiffness to the tissue being tested. These results indicate that indentation testing is sensitive to step-off edges and interface integrity, and may be useful for assessing cartilage injury and for following the progression of tissue integration after surgical treatments.
Continuous track paths reveal additive evidence integration in multistep decision making.
Buc Calderon, Cristian; Dewulf, Myrtille; Gevers, Wim; Verguts, Tom
2017-10-03
Multistep decision making pervades daily life, but its underlying mechanisms remain obscure. We distinguish four prominent models of multistep decision making, namely serial stage, hierarchical evidence integration, hierarchical leaky competing accumulation (HLCA), and probabilistic evidence integration (PEI). To empirically disentangle these models, we design a two-step reward-based decision paradigm and implement it in a reaching task experiment. In a first step, participants choose between two potential upcoming choices, each associated with two rewards. In a second step, participants choose between the two rewards selected in the first step. Strikingly, as predicted by the HLCA and PEI models, the first-step decision dynamics were initially biased toward the choice representing the highest sum/mean before being redirected toward the choice representing the maximal reward (i.e., initial dip). Only HLCA and PEI predicted this initial dip, suggesting that first-step decision dynamics depend on additive integration of competing second-step choices. Our data suggest that potential future outcomes are progressively unraveled during multistep decision making.
Microchip integrating magnetic nanoparticles for allergy diagnosis.
Teste, Bruno; Malloggi, Florent; Siaugue, Jean-Michel; Varenne, Anne; Kanoufi, Frederic; Descroix, Stéphanie
2011-12-21
We report on the development of a simple and easy-to-use microchip dedicated to allergy diagnosis. This microchip combines the advantages of homogeneous immunoassays (species diffusion) and heterogeneous immunoassays (easy separation and preconcentration steps). In vitro allergy diagnosis is based on specific Immunoglobulin E (IgE) quantitation; to this end we have developed and integrated magnetic core-shell nanoparticles (MCSNPs) as an IgE capture nanoplatform in a microdevice, taking benefit from both their magnetic and colloidal properties. Integrating such an immunosupport allows the target analyte (IgE) capture to be performed in the colloidal phase, increasing the analyte capture kinetics since both immunological partners are diffusing during the immune reaction. This colloidal approach improves the analyte capture kinetics 1000-fold compared to conventional methods. Moreover, based on the MCSNPs' magnetic properties and on the magnetic chamber we have previously developed, the MCSNPs, and therefore the target, can be confined and preconcentrated within the microdevice prior to the detection step. The MCSNP preconcentration factor achieved was about 35,000, which allows high sensitivity to be reached without catalytic amplification during the detection step. The developed microchip offers many advantages: the analytical procedure is fully integrated on-chip, analyses are performed in a short assay time (20 min), and the sample and reagent consumption is reduced to a few microlitres (5 μL) while a low limit of detection (about 1 ng mL(-1)) can be achieved.
Simultaneous calibration phantom commission and geometry calibration in cone beam CT
NASA Astrophysics Data System (ADS)
Xu, Yuan; Yang, Shuai; Ma, Jianhui; Li, Bin; Wu, Shuyu; Qi, Hongliang; Zhou, Linghong
2017-09-01
Geometry calibration is a vital step for describing the geometry of a cone beam computed tomography (CBCT) system and is a prerequisite for CBCT reconstruction. In current methods, calibration phantom commission and geometry calibration are divided into two independent tasks. Small errors in ball-bearing (BB) positioning in the phantom-making step will severely degrade the quality of phantom calibration. To solve this problem, we propose an integrated method to simultaneously realize geometry phantom commission and geometry calibration. Instead of assuming the accuracy of the geometry phantom, the integrated method treats the BB centers in the phantom as optimized parameters in the workflow. Specifically, an evaluation phantom and the corresponding evaluation contrast index are used to evaluate geometry artifacts for optimizing the BB coordinates in the geometry phantom. After utilizing particle swarm optimization, the CBCT geometry and BB coordinates in the geometry phantom are calibrated accurately and can then be directly used for the next geometry calibration task in other CBCT systems. To evaluate the proposed method, both qualitative and quantitative studies were performed on simulated and realistic CBCT data. The spatial resolution of reconstructed images using dental CBCT can reach up to 15 line pairs cm^-1. The proposed method is also superior to the Wiesent method in experiments. This paper shows that the proposed method is attractive for simultaneous and accurate geometry phantom commission and geometry calibration.
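Where the abstract invokes particle swarm optimization over the BB coordinates, the following minimal sketch shows the generic swarm update that such a calibration could use; the artifact_index callable standing in for the evaluation contrast index, the swarm parameters, and all names are illustrative assumptions rather than the paper's implementation.

import numpy as np

def pso_calibrate(artifact_index, x0, n_particles=30, iters=200,
                  w=0.7, c1=1.5, c2=1.5, spread=0.5):
    # Minimize a scalar artifact/contrast cost over stacked BB coordinates.
    dim = x0.size
    rng = np.random.default_rng(0)
    x = x0 + spread * rng.standard_normal((n_particles, dim))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), np.array([artifact_index(p) for p in x])
    g = pbest[np.argmin(pcost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = x + v
        cost = np.array([artifact_index(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)].copy()                      # global best
    return g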
Automatic bone segmentation in knee MR images using a coarse-to-fine strategy
NASA Astrophysics Data System (ADS)
Park, Sang Hyun; Lee, Soochahn; Yun, Il Dong; Lee, Sang Uk
2012-02-01
Segmentation of bone and cartilage from a three-dimensional knee magnetic resonance (MR) image is a crucial element in monitoring and understanding the development and progress of osteoarthritis. Until now, various segmentation methods have been proposed to separate bone from other tissues, but it remains a challenging problem due to the different modalities of MR images, the low contrast between bone and surrounding tissues, and shape irregularity. In this paper, we present a new fully automatic segmentation method for bone compartments using relevant bone atlases from a training set. To find the relevant bone atlases and obtain the segmentation, a coarse-to-fine strategy is proposed. In the coarse step, the best atlas among the training set and an initial segmentation are simultaneously detected using branch-and-bound tree search. Since the best atlas from the coarse step is not accurately aligned, all atlases from the training set are aligned to the initial segmentation, and the best aligned atlas is selected in the middle step. Finally, in the fine step, segmentation is conducted by adaptively integrating the shape of the best aligned atlas with an appearance prior based on characteristics of local regions. In the experiments, the femur and tibia bones of forty test MR images are segmented by the proposed method using sixty training MR images. Experimental results show that segmentation and registration performance improves from the coarse step toward the fine step, and the proposed method obtains performance comparable to state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin
2017-12-01
The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.
Detection of Hepatocyte Clones Containing Integrated Hepatitis B Virus DNA Using Inverse Nested PCR.
Tu, Thomas; Jilbert, Allison R
2017-01-01
Chronic hepatitis B virus (HBV) infection is a major cause of liver cirrhosis and hepatocellular carcinoma (HCC), leading to ~600,000 deaths per year worldwide. Many of the steps that occur during progression from the normal liver to cirrhosis and/or HCC are unknown. Integration of HBV DNA into random sites in the host cell genome occurs as a by-product of the HBV replication cycle and forms a unique junction between virus and cellular DNA. Analyses of integrated HBV DNA have revealed that HCCs are clonal and imply that they develop from the transformation of hepatocytes, the only liver cell known to be infected by HBV. Integrated HBV DNA has also been shown, at least in some tumors, to cause insertional mutagenesis in cancer driver genes, which may facilitate the development of HCC. Studies of HBV DNA integration in the histologically normal liver have provided additional insight into HBV-associated liver disease, suggesting that hepatocytes with a survival or growth advantage undergo high levels of clonal expansion even in the absence of oncogenic transformation. Here we describe inverse nested PCR (invPCR), a highly sensitive method that allows detection, sequencing, and enumeration of virus-cell DNA junctions formed by the integration of HBV DNA. The invPCR protocol is composed of two major steps: inversion of the virus-cell DNA junction and single-molecule nested PCR. The invPCR method is highly specific and inexpensive and can be tailored to DNA extracted from large or small amounts of liver. This procedure also allows detection of genome-wide random integration of any known DNA sequence and is therefore a useful technique for molecular biology, virology, and genetic research.
A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics
NASA Astrophysics Data System (ADS)
Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno
2017-07-01
In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem. In that case, the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce the much lower-frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also provides excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale fully explicit computation. The energy dissipated at the implicit-explicit interface is well controlled, and the computational time is lower than for a fully explicit simulation.
SENS-5D trajectory and wind-sensitivity calculations for unguided rockets
NASA Technical Reports Server (NTRS)
Singh, R. P.; Huang, L. C. P.; Cook, R. A.
1975-01-01
A computational procedure is described which numerically integrates the equations of motion of an unguided rocket. Three translational and two angular (roll discarded) degrees of freedom are integrated through the final burnout; and then, through impact, only three translational motions are considered. Input to the routine is: initial time, altitude and velocity, vehicle characteristics, and other defined options. Input format has a wide range of flexibility for special calculations. Output is geared mainly to the wind-weighting procedure, and includes summary of trajectory at burnout, apogee and impact, summary of spent-stage trajectories, detailed position and vehicle data, unit-wind effects for head, tail and cross winds, coriolis deflections, range derivative, and the sensitivity curves (the so called F(Z) and DF(Z) curves). The numerical integration procedure is a fourth-order, modified Adams-Bashforth Predictor-Corrector method. This method is supplemented by a fourth-order Runge-Kutta method to start the integration at t=0 and whenever error criteria demand a change in step size.
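The integrator described here, a fourth-order Adams-Bashforth-Moulton predictor-corrector started by fourth-order Runge-Kutta, can be sketched as follows; the step-size change logic driven by error criteria is omitted, and the function names are illustrative.

import numpy as np

def rk4_step(f, t, y, h):
    # Classical RK4, used to start the multistep method and restart it
    # whenever the error criteria demand a change in step size.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, t0, y0, h, n_steps):
    t, y = [t0], [np.asarray(y0, dtype=float)]
    for _ in range(3):                      # RK4 supplies the first three steps
        y.append(rk4_step(f, t[-1], y[-1], h))
        t.append(t[-1] + h)
    F = [f(ti, yi) for ti, yi in zip(t, y)]
    for i in range(3, n_steps):
        # Adams-Bashforth 4-step predictor
        yp = y[i] + h / 24 * (55 * F[i] - 59 * F[i-1] + 37 * F[i-2] - 9 * F[i-3])
        # Adams-Moulton corrector evaluated at the predicted point
        yc = y[i] + h / 24 * (9 * f(t[i] + h, yp) + 19 * F[i] - 5 * F[i-1] + F[i-2])
        y.append(yc)
        t.append(t[i] + h)
        F.append(f(t[-1], y[-1]))
    return np.array(t), np.array(y)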
NASA Technical Reports Server (NTRS)
Amecke, Juergen
1986-01-01
A method for the direct calculation of the wall induced interference velocity in two dimensional flow based on Cauchy's integral formula was derived. This one-step method allows the calculation of the residual corrections and the required wall adaptation for interference-free flow starting from the wall pressure distribution without any model representation. Demonstrated applications are given.
Preconditioned conjugate-gradient methods for low-speed flow calculations
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Ng, Wing-Fai; Liou, Meng-Sing
1993-01-01
An investigation is conducted into the viability of using a generalized Conjugate Gradient-like method as an iterative solver to obtain steady-state solutions of very low-speed fluid flow problems. Low-speed flow at Mach 0.1 over a backward-facing step is chosen as a representative test problem. The unsteady form of the two dimensional, compressible Navier-Stokes equations is integrated in time using discrete time-steps. The Navier-Stokes equations are cast in an implicit, upwind finite-volume, flux split formulation. The new iterative solver is used to solve a linear system of equations at each step of the time-integration. Preconditioning techniques are used with the new solver to enhance the stability and convergence rate of the solver and are found to be critical to the overall success of the solver. A study of various preconditioners reveals that a preconditioner based on the Lower-Upper Successive Symmetric Over-Relaxation iterative scheme is more efficient than a preconditioner based on Incomplete L-U factorizations of the iteration matrix. The performance of the new preconditioned solver is compared with a conventional Line Gauss-Seidel Relaxation (LGSR) solver. Overall speed-up factors of 28 (in terms of global time-steps required to converge to a steady-state solution) and 20 (in terms of total CPU time on one processor of a CRAY-YMP) are found in favor of the new preconditioned solver, when compared with the LGSR solver.
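As a sketch of the preconditioned iterative-solver idea, the following shows conjugate gradients with a simple diagonal (Jacobi) preconditioner on a symmetric positive-definite system; the paper's solver is a generalized CG-like method for the nonsymmetric implicit system with an LU-SSOR preconditioner, so this is illustrative only.

import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
    # Preconditioned conjugate gradient for A x = b with A SPD;
    # M_inv applies the preconditioner, e.g. lambda r: r / np.diag(A).
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # update the search direction
        rz = rz_new
    return x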
NASA Astrophysics Data System (ADS)
Pokhmurska, H.; Maksymovych, O.; Dzyubyk, A.; Dzyubyk, L.
2018-06-01
Methods are developed for calculating the trajectories and growth rates of curvilinear fatigue cracks in isotropic and composite plate structure elements under cyclic loading along straight or curvilinear trajectories. For isotropic and anisotropic materials, the methods are developed on the basis of the force criterion of fracture with the additional application of fatigue fracture diagrams. To find the change in the shape of the cracks during the loading process, a step-by-step method is used. At each stage, the direction of growth of all crack tips and the lengths of their arcs are found on the basis of stress intensity factors determined by the method of singular integral equations. Results of calculations of the growth process of a system of cracks are presented.
A modular modulation method for achieving increases in metabolite production.
Acerenza, Luis; Monzon, Pablo; Ortega, Fernando
2015-01-01
Increasing the production of overproducing strains represents a great challenge. Here, we develop a modular modulation method to determine the key steps for genetic manipulation to increase metabolite production. The method consists of three steps: (i) modularization of the metabolic network into two modules connected by linking metabolites, (ii) change in the activity of the modules using auxiliary rates producing or consuming the linking metabolites in appropriate proportions, and (iii) determination of the key modules and steps to increase production. The mathematical formulation of the method in matrix form shows that it may be applied to metabolic networks of any structure and size, with reactions showing any kind of rate laws. The results are valid for any type of conservation relationships in the metabolite concentrations or interactions between modules. The activity of the modules may, in principle, be changed by any large factor. The method may be applied recursively or combined with other methods devised to perform fine searches in smaller regions. In practice, it is implemented by integrating heterologous reactions or synthetic pathways that produce or consume the linking metabolites into the producer strain. The new procedure may contribute to developing metabolic engineering into a more systematic practice.
Introductory guide to integrated ecological framework.
DOT National Transportation Integrated Search
2014-10-01
This guide introduces the Integrated Ecological Framework (IEF) to Texas Department of Transportation (TxDOT) engineers and planners. The IEF is a step-by-step approach to integrating ecological and transportation planning with the goal of avoiding imp...
Implicit-explicit (IMEX) Runge-Kutta methods for non-hydrostatic atmospheric models
NASA Astrophysics Data System (ADS)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.; Reynolds, Daniel R.; Ullrich, Paul A.; Woodward, Carol S.
2018-04-01
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit-explicit (IMEX) additive Runge-Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit - vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored. The accuracy and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.
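To make the implicit-explicit splitting concrete, here is a first-order IMEX Euler sketch on an ODE system split into a nonstiff term f_ex (treated explicitly) and a stiff linear term L y (treated implicitly, standing in for the acoustic dynamics); the paper itself uses higher-order ARK pairs such as ARS232 and ARS343, so this is a structural illustration only.

import numpy as np

def imex_euler(f_ex, L, y0, h, n_steps):
    # One split step: y_{n+1} = y_n + h f_ex(y_n) + h L y_{n+1},
    # i.e. solve (I - h L) y_{n+1} = y_n + h f_ex(y_n).
    I = np.eye(L.shape[0])
    lhs = I - h * L                 # could be factored once for fixed h
    y = np.asarray(y0, dtype=float)
    out = [y.copy()]
    for _ in range(n_steps):
        y = np.linalg.solve(lhs, y + h * f_ex(y))
        out.append(y.copy())
    return np.array(out)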
Real-time localization of mobile device by filtering method for sensor fusion
NASA Astrophysics Data System (ADS)
Fuse, Takashi; Nagara, Keita
2017-06-01
Most applications on mobile devices require self-localization of the device. Since GPS cannot be used in indoor environments, the positions of mobile devices are estimated autonomously using an IMU. Because this self-localization is based on an IMU of low accuracy, self-localization in indoor environments remains challenging. Self-localization methods using images have been developed, and their accuracy is increasing. This paper develops a self-localization method for indoor environments without GPS by simultaneously integrating the sensors on mobile devices, such as the IMU and cameras. The proposed method consists of observation, forecasting, and filtering. The position and velocity of the mobile device are defined as a state vector. In the self-localization, observations correspond to observation data from the IMU and camera (observation vector), forecasting to the mobile device moving model (system model), and filtering to tracking by inertial surveying with a coplanarity condition and inverse depth model (observation model). Positions of the tracked mobile device are estimated by the system model (forecasting step), which is assumed to be a linear moving model. The estimated positions are then optimized with respect to new observation data based on likelihood (filtering step). The optimization at the filtering step corresponds to estimation of the maximum a posteriori probability. A particle filter is utilized for the calculation through the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment. Through the experiments, the high performance of the method is confirmed.
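A minimal sketch of one forecast-filter cycle of such a particle filter follows; the motion and likelihood callables stand in for the paper's system model (linear moving model) and observation model (inertial surveying, coplanarity, inverse depth), and are assumptions of this sketch.

import numpy as np

def particle_filter_step(particles, weights, motion, likelihood, rng):
    particles = motion(particles, rng)           # forecasting step: system model
    weights = weights * likelihood(particles)    # filtering step: observation model
    weights = weights / weights.sum()
    # systematic resampling when the effective sample size degenerates
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < 0.5 * n:
        positions = (rng.random() + np.arange(n)) / n
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)
    return particles, weights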
Implicit–explicit (IMEX) Runge–Kutta methods for non-hydrostatic atmospheric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, David J.; Guerra, Jorge E.; Hamon, François P.
The efficient simulation of non-hydrostatic atmospheric dynamics requires time integration methods capable of overcoming the explicit stability constraints on time step size arising from acoustic waves. In this work, we investigate various implicit–explicit (IMEX) additive Runge–Kutta (ARK) methods for evolving acoustic waves implicitly to enable larger time step sizes in a global non-hydrostatic atmospheric model. The IMEX formulations considered include horizontally explicit – vertically implicit (HEVI) approaches as well as splittings that treat some horizontal dynamics implicitly. In each case, the impact of solving nonlinear systems in each implicit ARK stage in a linearly implicit fashion is also explored.The accuracymore » and efficiency of the IMEX splittings, ARK methods, and solver options are evaluated on a gravity wave and baroclinic wave test case. HEVI splittings that treat some vertical dynamics explicitly do not show a benefit in solution quality or run time over the most implicit HEVI formulation. While splittings that implicitly evolve some horizontal dynamics increase the maximum stable step size of a method, the gains are insufficient to overcome the additional cost of solving a globally coupled system. Solving implicit stage systems in a linearly implicit manner limits the solver cost but this is offset by a reduction in step size to achieve the desired accuracy for some methods. Overall, the third-order ARS343 and ARK324 methods performed the best, followed by the second-order ARS232 and ARK232 methods.« less
A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates
NASA Astrophysics Data System (ADS)
Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus
2008-12-01
A global model of the atmosphere is presented, governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (elements) and line (element faces) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method no mass matrix inversion is necessary, except in a preprocessing step. The validation of the atmospheric model has been done considering standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented, for which the third-order time-integration method limits the model error. Thus, the time step Δt is restricted by both the CFL-condition and accuracy demands. Conservation of mass was shown up to machine precision and energy conservation converges for both increasing grid resolution and increasing polynomial order k.
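The strong stability-preserving third-order Runge-Kutta time discretization named here has the standard Shu-Osher form, sketched below with L(u) standing for the assembled DG spatial operator (an assumption of this sketch).

import numpy as np

def ssp_rk3_step(L, u, dt):
    # Three convex-combination stages preserve the stability of forward Euler.
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))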
NASA Technical Reports Server (NTRS)
Underwood, Steve; Lvovsky, Oleg
2007-01-01
The International Space Station (ISS) has a Qualification and Acceptance Environmental Test Requirements document, SSP 41172, that includes many environmental tests such as thermal vacuum and cycling, depress/repress, sinusoidal, random, and acoustic vibration, pyro shock, acceleration, humidity, pressure, and electromagnetic interference (EMI)/electromagnetic compatibility (EMC). This document also includes 13 leak test methods for pressure integrity verification of the ISS elements, systems, and components. These leak test methods are well known; however, the test procedure for a specific leak test method shall be written and implemented paying attention to the important procedural steps and details that, if omitted or deviated from, could impact the quality of the final product and affect crew safety. Such procedural steps/details for the different methods include, but are not limited to: sequence of testing, for example, pressurization and submersion steps for Method I (Immersion); stabilization of the mass spectrometer leak detector outputs for Method II (Vacuum Chamber or Bell Jar); proper data processing and taking a conservative approach while making predictions of the on-orbit leakage rate for Method III (Pressure Change); proper calibration of the mass spectrometer leak detector for all the tracer gas (mostly helium) methods such as Method V (Detector Probe), Method VI (Hood), Method VII (Tracer Probe), and Method VIII (Accumulation); and usage of visibility aids for Method I (Immersion), Method IV (Chemical Indicator), Method XII (Foam/Liquid Application), and Method XIII (Hydrostatic/Visual Inspection). While some methods can be used for verification of the total leakage rate requirement (either internal-to-external or external-to-internal), namely Vacuum Chamber, Pressure Decay, Hood, and Accumulation, other methods shall be used only as a pass/fail test for individual joints (e.g., welds, fittings, and plugs) or for troubleshooting purposes (Chemical Indicator, Detector Probe, Tracer Probe, Local Vacuum Chamber, Foam/Liquid Application, and Hydrostatic/Visual Inspection). Deviations from SSP 41172 requirements have led to either retesting of hardware or accepting a risk associated with a potential system or component pressure integrity problem during flight.
Multigrid for hypersonic viscous two- and three-dimensional flows
NASA Technical Reports Server (NTRS)
Turkel, E.; Swanson, R. C.; Vatsa, V. N.; White, J. A.
1991-01-01
The use of a multigrid method with central differencing to solve the Navier-Stokes equations for hypersonic flows is considered. The time dependent form of the equations is integrated with an explicit Runge-Kutta scheme accelerated by local time stepping and implicit residual smoothing. Variable coefficients are developed for the implicit process that removes the diffusion limit on the time step, producing significant improvement in convergence. A numerical dissipation formulation that provides good shock capturing capability for hypersonic flows is presented. This formulation is shown to be a crucial aspect of the multigrid method. Solutions are given for two-dimensional viscous flow over a NACA 0012 airfoil and three-dimensional flow over a blunt biconic.
Large-eddy simulation of a backward facing step flow using a least-squares spectral element method
NASA Technical Reports Server (NTRS)
Chan, Daniel C.; Mittal, Rajat
1996-01-01
We report preliminary results obtained from the large eddy simulation of a backward facing step at a Reynolds number of 5100. The numerical platform is based on a high order Legendre spectral element spatial discretization and a least squares time integration scheme. A non-reflective outflow boundary condition is in place to minimize the effect of downstream influence. Smagorinsky model with Van Driest near wall damping is used for sub-grid scale modeling. Comparisons of mean velocity profiles and wall pressure show good agreement with benchmark data. More studies are needed to evaluate the sensitivity of this method on numerical parameters before it is applied to complex engineering problems.
Variable Step-Size Selection Methods for Implicit Integration Schemes
2005-10-01
The report explores this variable step-size selection method for two problems, the Lotka-Volterra model and the Kepler problem. For the Lotka-Volterra example, a variation of the classical predator-prey system is considered: u' = u^2 v (v - 2), v' = v^2 u (1 - u), written compactly as (u', v') = f(u, v), for t in [0, 50].
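As an illustration of variable step-size selection for an implicit scheme on this system, the following sketch pairs the implicit midpoint rule (solved by fixed-point iteration) with generic step-doubling error control; the controller, tolerances, and names are assumptions of this sketch, not the report's ρk-based selection rule.

import numpy as np

def f(u):
    # The report's Lotka-Volterra variation.
    return np.array([u[0]**2 * u[1] * (u[1] - 2.0),
                     u[1]**2 * u[0] * (1.0 - u[0])])

def implicit_midpoint(u, h, tol=1e-12):
    # Solve u1 = u + h f((u + u1)/2) by fixed-point iteration.
    u1 = u + h * f(u)                       # explicit Euler predictor
    for _ in range(50):
        u_new = u + h * f(0.5 * (u + u1))
        if np.max(np.abs(u_new - u1)) < tol:
            return u_new
        u1 = u_new
    return u1

def integrate(u0, t_end=50.0, h=1e-2, atol=1e-8):
    t, u = 0.0, np.asarray(u0, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        big = implicit_midpoint(u, h)                                 # one step h
        half = implicit_midpoint(implicit_midpoint(u, h / 2), h / 2)  # two steps h/2
        err = np.max(np.abs(big - half))
        if err < atol:
            t, u = t + h, half                                        # accept the step
        h *= min(2.0, max(0.2, 0.9 * (atol / max(err, 1e-16)) ** (1 / 3)))
    return u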
2016-11-01
Defense Intelligence: Additional Steps Could Better Integrate Intelligence Input into DOD's Acquisition of Major Weapon Systems. United States Government Accountability Office, Highlights of GAO-17-10, a report to congressional committees, November 2016. What GAO Found: The Department of Defense (DOD...
Data harmonization of environmental variables: from simple to general solutions
NASA Astrophysics Data System (ADS)
Baume, O.
2009-04-01
European data platforms often contain measurements from different regional or national networks. As standards and protocols - e.g. type of measurement devices, sensors or measurement site classification, laboratory analysis and post-processing methods - vary between networks, discontinuities will appear when mapping the target variable at an international scale. Standardisation is generally a costly solution and does not allow classical statistical analysis of previously reported values. As an alternative, harmonization should be envisaged as an integrated step in mapping procedures across borders. In this paper, several harmonization solutions developed under the INTAMAP FP6 project are presented. The INTAMAP FP6 project is currently developing an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods to web-based implementations. Harmonization is often considered as a pre-processing step in the statistical data analysis workflow. If biases are assessed with little knowledge about the target variable - in particular when no explanatory covariate is integrated - a harmonization procedure along borders or between regionally overlapping networks may be adopted (Skøien et al., 2007). In this case, bias is estimated as the systematic difference between line or local predictions. On the other hand, when covariates can be included in spatial prediction, the harmonization step is integrated in the whole model estimation procedure and, therefore, is no longer an independent pre-processing step of the automatic mapping process (Baume et al., 2007). In this case, bias factors become integrated parameters of the geostatistical model and are estimated alongside the other model parameters. The harmonization methods developed within the INTAMAP project were first applied within the field of radiation, where the European Radiological Data Exchange Platform (EURDEP) - http://eurdep.jrc.ec.europa.eu/ - has been active for all member states for more than a decade (de Cort and de Vries, 1997). This database contains biases because of the different network processes used in data reporting (Bossew et al., 2007). In a comparison study, monthly averaged gamma dose measurements from eight European countries were harmonized using the methods described above. Baume et al. (2008) showed that both methods yield similar results and can detect and remove bias from the EURDEP database. To broaden the potential of the methods developed within the INTAMAP project, another application example taken from soil science is presented in this paper. The Carbon/Nitrogen (C/N) ratio of forest soils is one of the best predictors for evaluating soil functions such as those used in climate change issues. Although soil samples were analyzed according to a common European laboratory method, Carré et al. (2008) concluded that systematic errors are introduced in the measurements due to calibration issues and instability of the samples. The application of the harmonization procedures showed that bias could be adequately removed, although the procedures have difficulty distinguishing real differences from bias.
A framework for simultaneous aerodynamic design optimization in the presence of chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi
Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.
Telschow, Kenneth L.; Siu, Bernard K.
1996-01-01
A method of evaluating integrity of adherence of a conductor bond to a substrate includes: a) impinging a plurality of light sources onto a substrate; b) detecting optical reflective signatures emanating from the substrate from the impinged light; c) determining location of a selected conductor bond on the substrate from the detected reflective signatures; d) determining a target site on the selected conductor bond from the detected reflective signatures; e) optically imparting an elastic wave at the target site through the selected conductor bond and into the substrate; f) optically detecting an elastic wave signature emanating from the substrate resulting from the optically imparting step; and g) determining integrity of adherence of the selected conductor bond to the substrate from the detected elastic wave signature emanating from the substrate. A system is disclosed which is capable of conducting the method.
NASA Astrophysics Data System (ADS)
Dodig, H.
2017-11-01
This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for a near-to-far field transformation (NTFFT), which is a common step in RCS computations. At the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones, and pyramids. The method demonstrates accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.
NASA Astrophysics Data System (ADS)
Fatrias, D.; Kamil, I.; Meilani, D.
2018-03-01
Coordinating business operations with suppliers becomes increasingly important to survive and prosper in a dynamic business environment. A good partnership with suppliers not only increases efficiency but also strengthens corporate competitiveness. With this concern in mind, this study aims to develop a practical approach to multi-criteria supplier evaluation using the combined methods of the Taguchi loss function (TLF), the best-worst method (BWM), and VIse Kriterijumska Optimizacija kompromisno Resenje (VIKOR). A new framework integrating these methods is our main contribution to the supplier evaluation literature. In this integrated approach, a compromised supplier ranking list based on the loss scores of suppliers is obtained using the efficient steps of a pairwise-comparison-based decision making process. Implementation on a case problem with real data from the crumb rubber industry shows the usefulness of the proposed approach. Finally, suitable managerial implications are presented.
NASA Astrophysics Data System (ADS)
Macomber, B.; Woollands, R. M.; Probe, A.; Younes, A.; Bai, X.; Junkins, J.
2013-09-01
Modified Chebyshev Picard Iteration (MCPI) is an iterative numerical method for approximating solutions of linear or non-linear Ordinary Differential Equations (ODEs) to obtain time histories of system state trajectories. Unlike other step-by-step differential equation solvers, the Runge-Kutta family of numerical integrators for example, MCPI approximates long arcs of the state trajectory with an iterative path approximation approach, and is ideally suited to parallel computation. Orthogonal Chebyshev Polynomials are used as basis functions during each path iteration; the integrations of the Picard iteration are then done analytically. Due to the orthogonality of the Chebyshev basis functions, the least square approximations are computed without matrix inversion; the coefficients are computed robustly from discrete inner products. As a consequence of discrete sampling and weighting adopted for the inner product definition, Runge phenomena errors are minimized near the ends of the approximation intervals. The MCPI algorithm utilizes a vector-matrix framework for computational efficiency. Additionally, all Chebyshev coefficients and integrand function evaluations are independent, meaning they can be simultaneously computed in parallel for further decreased computational cost. Over an order of magnitude speedup from traditional methods is achieved in serial processing, and an additional order of magnitude is achievable in parallel architectures. This paper presents a new MCPI library, a modular toolset designed to allow MCPI to be easily applied to a wide variety of ODE systems. Library users will not have to concern themselves with the underlying mathematics behind the MCPI method. Inputs are the boundary conditions of the dynamical system, the integrand function governing system behavior, and the desired time interval of integration, and the output is a time history of the system states over the interval of interest. Examples from the field of astrodynamics are presented to compare the output from the MCPI library to current state-of-practice numerical integration methods. It is shown that MCPI is capable of out-performing the state-of-practice in terms of computational cost and accuracy.
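To make the Picard-Chebyshev structure concrete, here is a minimal scalar sketch under stated assumptions: it uses numpy's least-squares Chebyshev fit at Chebyshev-Gauss nodes rather than the library's inversion-free discrete inner products, handles a single state for brevity, and assumes f is vectorized over the node values.

import numpy as np
from numpy.polynomial import chebyshev as C

def mcpi(f, t0, tf, x0, order=40, iters=30):
    # Picard iteration over the whole arc [t0, tf]:
    #   x_{k+1}(t) = x0 + integral from t0 to t of f(tau, x_k(tau)) dtau
    nodes = np.cos(np.pi * (np.arange(order + 1) + 0.5) / (order + 1))
    t = 0.5 * (tf - t0) * nodes + 0.5 * (tf + t0)   # map [-1, 1] -> [t0, tf]
    x = np.full_like(t, float(x0))                  # initial guess for the path
    for _ in range(iters):
        g = f(t, x) * 0.5 * (tf - t0)               # chain-rule factor of the map
        c = C.chebfit(nodes, g, order)              # fit integrand in Chebyshev basis
        ci = C.chebint(c)                           # integrate the series analytically
        x = x0 + C.chebval(nodes, ci) - C.chebval(-1.0, ci)
    return t, x

# usage: t, x = mcpi(lambda t, x: -x, 0.0, 5.0, 1.0); x approximates exp(-t)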
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
Contemporary Quantitative Methods and "Slow" Causal Inference: Response to Palinkas
ERIC Educational Resources Information Center
Stone, Susan
2014-01-01
This response considers together simultaneously occurring discussions about causal inference in social work and allied health and social science disciplines. It places emphasis on scholarship that integrates the potential outcomes model with directed acyclic graphing techniques to extract core steps in causal inference. Although this scholarship…
Fostering Inclusive Schools & Communities: A Public Relations Guide.
ERIC Educational Resources Information Center
Hammond, Marilyn; And Others
This guide provides instructions on implementing a low-budget public relations (PR) program to improve acceptance and integration of students with disabilities. Sixteen steps for a PR program and the use of multiple methods of publicity are outlined. Topics covered include: using appropriate terminology when writing or talking about disability…
ERIC Educational Resources Information Center
Oskoz, Ana; Elola, Idoia
2016-01-01
This article provides an overview of how digital stories (DSs)--storylines that integrate text, images, and sound--have been used in second-language (L2) contexts. The article first reviews the methodical and planned, albeit non-linear, steps required for successful implementation of DSs in the L2 classroom and then assesses the observed…
Pedometry Methods for Assessing Free-Living Youth
ERIC Educational Resources Information Center
Tudor-Locke, Catrine; McClain, James J.; Hart, Teresa L.; Sisson, Susan B.; Washington, Tracy L.
2009-01-01
The purpose of this review is to integrate and summarize specific measurement topics (instrument and metric choice, validity, reliability, how many and what types of days, reactivity, and data treatment) appropriate to the study of youth physical activity. Research quality pedometers are necessary to aid interpretation of steps per day collected…
Haregu, Tilahun Nigatu; Setswe, Geoffrey; Elliott, Julian; Oldenburg, Brian
2014-01-01
Introduction: Although there are several models of integrated architecture, we still lack models and theories about the integration process of health system responses to HIV/AIDS and NCDs. Objective: The overall purpose of this study is to design an action model, a systematic approach, for the integration of health system responses to HIV/AIDS and NCDs in developing countries. Methods: An iterative and progressive approach of model development using inductive qualitative evidence synthesis techniques was applied. As evidence about integration is spread across different fields, synthesis of evidence from a broad range of disciplines was conducted. Results: An action model of integration having 5 underlying principles, 4 action fields, and a 9-step action cycle is developed. The INTEGRATE model is an acronym of the 9 steps of the integration process: 1) Interrelate the magnitude and distribution of the problems, 2) Navigate the linkage between the problems, 3) Testify individual-level co-occurrence of the problems, 4) Examine the similarities and understand the differences between the response functions, 5) Glance over the health system's environment for integration, 6) Repackage and share evidence in a useable form, 7) Ascertain the plan for integration, 8) Translate the plan into action, 9) Evaluate and monitor the integration. Conclusion: Our model provides a basis for integration of health system responses to HIV/AIDS and NCDs in the context of developing countries. We propose that future empirical work is needed to refine the validity and applicability of the model.
USE OF THE SDO POINTING CONTROLLERS FOR INSTRUMENT CALIBRATION MANEUVERS
NASA Technical Reports Server (NTRS)
Vess, Melissa F.; Starin, Scott R.; Morgenstern, Wendy M.
2005-01-01
During the science phase of the Solar Dynamics Observatory mission, the three science instruments require periodic instrument calibration maneuvers with a frequency of up to once per month. The command sequences for these maneuvers vary in length from a handful of steps to over 200 steps, and individual steps vary in size from 5 arcsec per step to 22.5 degrees per step. Early in the calibration maneuver development, it was determined that the original attitude sensor complement could not meet the knowledge requirements for the instrument calibration maneuvers in the event of a sensor failure. Because the mission must be single fault tolerant, an attitude determination trade study was undertaken to determine the impact of adding an additional attitude sensor versus developing alternative, potentially complex, methods of performing the maneuvers in the event of a sensor failure. To limit the impact to the science data capture budget, these instrument calibration maneuvers must be performed as quickly as possible while maintaining the tight pointing and knowledge required to obtain valid data during the calibration. To this end, the decision was made to adapt a linear pointing controller by adjusting gains and adding an attitude limiter so that it would be able to slew quickly and still achieve steady pointing once on target. During the analysis of this controller, questions arose about the stability of the controller during slewing maneuvers due to the combination of the integral gain, attitude limit, and actuator saturation. Analysis was performed and a method for disabling the integral action while slewing was incorporated to ensure stability. A high fidelity simulation is used to simulate the various instrument calibration maneuvers.
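The gain-adjusted controller described here, with an attitude limiter and integral action disabled while slewing, can be sketched generically as follows; the structure, names, and gains are illustrative assumptions, not the SDO flight algorithm.

import numpy as np

def pointing_control_step(err, err_rate, integ, dt, kp, ki, kd,
                          att_limit, slewing):
    # Limit the attitude error seen by the controller so that large
    # slews command a bounded correction.
    err_lim = np.clip(err, -att_limit, att_limit)
    # Disable integral action while slewing to avoid windup-driven
    # instability; integrate only once near the target.
    if not slewing:
        integ = integ + err_lim * dt
    torque = kp * err_lim + ki * integ + kd * err_rate
    return torque, integ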
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2010-02-21
...RKF45 and Adams variable step-size predictor-corrector methods). While such algorithms are usually used to numerically solve differential... verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive... Contract N00014-09-C-0394. Authors: Dr. Dimitri N. Mavris, Dr. Yongchang Li.
NASA Astrophysics Data System (ADS)
Savin, Andrei V.; Smirnov, Petr G.
2018-05-01
Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used for integration of the equations of motion. The finite-difference scheme is found to be non-conservative, depending on the time step, which is equivalent to the appearance of a purely numerical energy source in the collision process. A compensation method for this source is proposed and tested.
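The scheme in question is commonly used in its velocity form; a minimal velocity-Verlet sketch for m x'' = force(x) follows (names illustrative), where the time-step dependence of the spurious energy change appears when dt under-resolves a stiff contact.

import numpy as np

def velocity_verlet(force, x, v, m, dt, n_steps):
    a = force(x) / m
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2   # position update
        a_new = force(x) / m
        v = v + 0.5 * (a + a_new) * dt       # velocity update with averaged acceleration
        a = a_new
    return x, v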
Advancement of Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Emiley, Mark S.; Agte, Jeremy S.; Sandusky, Robert R., Jr.
2000-01-01
Bi-Level Integrated System Synthesis (BLISS) is a method for optimization of an engineering system, e.g., an aerospace vehicle. BLISS consists of optimizations at the subsystem (module) and system levels to divide the overall large optimization task into sets of smaller ones that can be executed concurrently. In the initial version of BLISS that was introduced and documented in previous publications, analysis in the modules was kept at the early conceptual design level. This paper reports on the next step in the BLISS development in which the fidelity of the aerodynamic drag and structural stress and displacement analyses were upgraded while the method's satisfactory convergence rate was retained.
Real-time, interactive animation of deformable two- and three-dimensional objects
Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.
2003-06-03
A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
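For a linear mass-spring chain, an implicit Euler update of the kind described reduces to one linear solve per step. The sketch below assumes displacement coordinates, a 1-D chain with hypothetical stiffness k and mass m, and omits the patent's angular-momentum correction step.

```python
import numpy as np

def implicit_euler_chain(x, v, dt, k=100.0, m=1.0):
    """One implicit Euler step for a 1-D chain of masses joined by linear
    springs. For linear springs the implicit update reduces to a single
    linear solve: (M + dt^2 K) x_new = M (x + dt v)."""
    n = len(x)
    K = np.zeros((n, n))            # tridiagonal stiffness matrix of the chain
    for i in range(n - 1):
        K[i, i] += k; K[i + 1, i + 1] += k
        K[i, i + 1] -= k; K[i + 1, i] -= k
    M = m * np.eye(n)
    x_new = np.linalg.solve(M + dt**2 * K, M @ (x + dt * v))
    v_new = (x_new - x) / dt        # velocity consistent with implicit step
    return x_new, v_new
```

The implicit solve is what allows the large, stable time steps needed for real-time animation, at the cost of some numerical damping.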
NASA Astrophysics Data System (ADS)
Qu, Zilian; Meng, Yonggang; Zhao, Qian
2015-03-01
This paper proposes a new eddy current method, named the equivalent unit method (EUM), for measuring the thickness of the top copper film of multilayer interconnects in the chemical mechanical polishing (CMP) process, an important step in integrated circuit (IC) manufacturing. The influence of the underlying circuit layers on the eddy current is modeled and treated as an equivalent film thickness. By subtracting this equivalent film component, the accuracy of the thickness measurement of the top copper layer with an eddy current sensor is improved, and the absolute error is 3 nm for sample measurements.
NASA Astrophysics Data System (ADS)
Santi, S. S.; Renanto; Altway, A.
2018-01-01
The energy use system in a production process, in this case heat exchanger networks (HENs), is one element that affects the smoothness and sustainability of the industry itself. Optimizing HENs built from process streams can have a major effect on the economic value of an industry as a whole, so solving design problems with heat integration is an important requirement. In a plant, heat integration can be carried out internally or in combination between process units. However, determining a suitable heat integration technique conventionally requires long calculations and much time. In this paper, we propose an alternative procedure for determining the heat integration technique by investigating 6 hypothetical units using a Pinch Analysis approach with the energy target and the total annual cost target as objective functions. The six hypothetical units, A through F, differ in the location of their process streams relative to the pinch temperature. The result is a potential heat integration (ΔH') formula that trims the conventional procedure from 7 steps to just 3; the preferred heat integration technique is then determined by calculating the potential heat integration (ΔH') between the hypothetical process units. The calculations are implemented in the Matlab programming language.
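For reference, the energy targets mentioned above are conventionally obtained with the standard problem-table cascade of Pinch Analysis. The sketch below implements that textbook calculation with invented stream data; it is not the authors' ΔH' shortcut.

```python
# Streams as (supply T, target T, heat capacity flowrate CP); data invented.
hot  = [(180.0, 60.0, 3.0)]
cold = [(30.0, 150.0, 2.6)]
dTmin = 10.0

def energy_targets(hot, cold, dTmin):
    """Problem-table cascade: returns (min hot utility, min cold utility)."""
    # Shift hot streams down and cold streams up by dTmin/2.
    shifted = [(ts - dTmin / 2, tt - dTmin / 2, cp, 'h') for ts, tt, cp in hot] + \
              [(ts + dTmin / 2, tt + dTmin / 2, cp, 'c') for ts, tt, cp in cold]
    bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
    cascade = [0.0]
    for hi, lo in zip(bounds, bounds[1:]):
        net = 0.0                      # net heat surplus in this interval
        for ts, tt, cp, kind in shifted:
            top, bot = max(ts, tt), min(ts, tt)
            overlap = max(0.0, min(hi, top) - max(lo, bot))
            net += cp * overlap if kind == 'h' else -cp * overlap
        cascade.append(cascade[-1] + net)
    qh = -min(min(cascade), 0.0)       # minimum hot utility
    qc = cascade[-1] + qh              # minimum cold utility
    return qh, qc

print(energy_targets(hot, cold, dTmin))
```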
Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2011-01-01
A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging the pair of regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining a DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of the Northwestern Indiana's vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
Li, Xiaofan; Nie, Qing
2009-07-01
Many applications in materials science involve surface diffusion of elastically stressed solids. Study of singularity formation and long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an axisymmetric, elastically stressed solid evolving by surface diffusion. In this method, the boundary integrals for isotropic elasticity in axisymmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on an explicit representation of the mean curvature, is used to reduce the stability constraint on the time step. To apply this method to a periodic (in the axial direction), axisymmetric, elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinch-off is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
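The temporal scheme builds on the integrating factor idea: the stiff linear part of the evolution is absorbed exactly into an exponential, relaxing the time-step restriction. A first-order scalar sketch for u' = -λu + N(u) follows; λ, N, and the step sizes are illustrative, not the paper's high-order curvature-based scheme.

```python
import numpy as np

def if_euler(u0, lam, N, h, steps):
    """First-order integrating factor method for u' = -lam*u + N(u):
    substituting v = exp(lam*t) * u removes the stiff linear term exactly,
    giving the explicit update u_new = exp(-lam*h) * (u + h * N(u))."""
    u = u0
    for _ in range(steps):
        u = np.exp(-lam * h) * (u + h * N(u))
    return u

# Stiff decay with a mild nonlinearity; stable even though lam*h >> 1.
print(if_euler(1.0, 1e4, lambda u: np.sin(u), 1e-2, 100))
```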
Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut
2014-05-01
Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in the identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Keijsers, Carolina J P W; Segers, Wieke S; de Wildt, Dick J; Brouwers, Jacobus R B J; Keijsers, Loes; Jansen, Paul A F
2015-06-01
The only validated tool for pharmacotherapy education for medical students is the 6-step method of the World Health Organization. It has proven effective in experimental studies with short-term interventions. The generalizability of this effect after implementation in a context-rich medical curriculum was investigated. The pharmacology knowledge and pharmacotherapy skills of cohorts of students from years before, during, and after implementation of a WHO-6-step-based integrated learning programme were tested using a standardized assessment containing 50 items covering knowledge of basic (n = 25) and clinical (n = 24) pharmacology, and pharmacotherapy skills (n = 1 open question). All scores are expressed as a percentage of the maximum score possible per (sub)domain. In total, 1652 students were included between September 2010 and July 2014 (participation rate 89%). The WHO-6-step-based learning programme improved students' knowledge of basic pharmacology (mean score ± SD, 60.6 ± 10.5% vs. 63.4 ± 10.9%, P < 0.01) and clinical or applied pharmacology (63.7 ± 10.4% vs. 67.4 ± 10.3%, P < 0.01), and improved their pharmacotherapy skills (68.8 ± 26.1% vs. 74.6 ± 22.9%, P = 0.02). Moreover, satisfaction with education increased (5.7 ± 1.3 vs. 6.3 ± 1.0 on a 10-point scale, P < 0.01), as did students' confidence in daily practice (from -0.81 ± 0.72 to -0.50 ± 0.79 on a -2 to +2 scale, P < 0.01). The WHO-6-step method was successfully implemented in a medical curriculum. In this observational study, the integrated learning programme had positive effects on students' knowledge of basic and applied pharmacology, improved their pharmacotherapy skills, and increased satisfaction with education and self-confidence in prescribing. Whether this training method leads to better patient care remains to be established. © 2015 The British Pharmacological Society.
An automatic segmentation method of a parameter-adaptive PCNN for medical images.
Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide
2017-09-01
Since the pre-processing and initial segmentation steps in medical images directly affect the final segmentation results for the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate these two steps into one. The method has low computational complexity for different kinds of medical images and high segmentation precision. It comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and right breast, with overall metrics UM of 0.9845, CM of 0.8142, and TM of 0.0726. The algorithm has great potential for accomplishing the pre-processing and initial segmentation steps in various medical images, which is a premise for assisting physicians to detect and diagnose clinical cases.
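A generic simplified PCNN iteration might look like the following sketch. The paper's contribution is deriving the linking strength, threshold magnitude, and offset adaptively from the image; here they are fixed, assumed values for illustration only.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(S, beta=0.3, alpha_e=0.7, V=20.0, n_iter=30):
    """Generic simplified PCNN (SPCNN) pass over a normalized image S in
    [0, 1]. beta, alpha_e, V are assumed constants; the paper sets them
    adaptively per image."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # linking weights, 8 neighbours
    Y = np.zeros_like(S)                          # firing map
    E = np.ones_like(S)                           # dynamic threshold
    fired = np.zeros_like(S, dtype=bool)
    for _ in range(n_iter):
        L = convolve(Y, W, mode='constant')       # linking input from neighbours
        U = S * (1.0 + beta * L)                  # internal activity
        Y = (U > E).astype(float)                 # neurons fire above threshold
        fired |= Y.astype(bool)
        E = np.exp(-alpha_e) * E + V * Y          # decay plus refractory boost
    return fired                                  # pixels that fired at least once

# Usage: seg = spcnn_segment(image.astype(float) / image.max())
```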
Instantaneous Coastline Extraction from LIDAR Point Cloud and High Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Li, Y.; Zhoing, L.; Lai, Z.; Gan, Z.
2018-04-01
A new method for instantaneous waterline extraction is proposed in this paper, which combines point cloud geometry features and image spectral characteristics of the coastal zone. The proposed method consists of the following steps: the Mean Shift algorithm is used to segment the coastal zone of high-resolution remote sensing images into small regions containing semantic information; region features are extracted by integrating the LiDAR data and the image surface area; initial waterlines are extracted by the α-shape algorithm; a region growing algorithm is applied for coastline refinement, with a growth rule integrating the intensity and topography of the LiDAR data; finally, the coastline is smoothed. Experiments are conducted to demonstrate the efficiency of the proposed method.
Disposable world-to-chip interface for digital microfluidics
Van Dam, R. Michael; Shah, Gaurav; Keng, Pei-Yuin
2017-05-16
The present disclosure sets forth microfluidic chip interfaces for use with digital microfluidic processes. Methods and devices according to the present disclosure utilize compact, integrated platforms that interface with a chip upstream and downstream of the reaction, as well as between intermediate reaction steps if needed. In some embodiments these interfaces are automated, including automation of a multiple-reagent process. Various reagent delivery systems and methods are also disclosed.
Unsteady Flow Simulation: A Numerical Challenge
2003-03-01
drive to convergence the numerical unsteady term. The time marching procedure is based on the approximate implicit Newton method for systems of non... computed through analytical derivatives of S. The linear system stemming from equation (3) is solved at each integration step by the same iterative method... significant reduction of memory usage, thanks to the reduced dimensions of the linear system matrix during the implicit marching of the solution. The...
MetaboTools: A comprehensive toolbox for analysis of genome-scale metabolic models
Aurich, Maike K.; Fleming, Ronan M. T.; Thiele, Ines
2016-08-03
Metabolomic data sets provide a direct read-out of cellular phenotypes and are increasingly generated to study biological questions. Previous work, by us and others, revealed the potential of analyzing extracellular metabolomic data in the context of the metabolic model using constraint-based modeling. With the MetaboTools, we make our methods available to the broader scientific community. The MetaboTools consist of a protocol, a toolbox, and tutorials of two use cases. The protocol describes, in a step-wise manner, the workflow of data integration and computational analysis. The MetaboTools comprise the Matlab code required to complete the workflow described in the protocol. Tutorials explain the computational steps for integration of two different data sets and demonstrate a comprehensive set of methods for the computational analysis of metabolic models and stratification thereof into different phenotypes. The presented workflow supports integrative analysis of multiple omics data sets. Importantly, all analysis tools can be applied to metabolic models without performing the entire workflow. Taken together, the MetaboTools constitute a comprehensive guide to the intra-model analysis of extracellular metabolomic data from microbial, plant, or human cells. In conclusion, this computational modeling resource offers a broad set of computational analysis tools for a wide biomedical and non-biomedical research community.
High Precision Edge Detection Algorithm for Mechanical Parts
NASA Astrophysics Data System (ADS)
Duan, Zhenyun; Wang, Ning; Fu, Jingshun; Zhao, Wenhui; Duan, Boqiang; Zhao, Jungui
2018-04-01
High-precision and high-efficiency measurement is becoming an imperative requirement for many mechanical parts. In this study, a subpixel-level edge detection algorithm based on the Gaussian integral model is proposed. For this purpose, the step-edge normal section line Gaussian integral model of the backlight image is constructed, combining the point spread function and the single-step model. The gray values of discrete points on the normal section line of the pixel edge are calculated by surface interpolation, and the coordinate and gray information affected by noise are fitted in accordance with the Gaussian integral model. A precise subpixel edge location is then determined by searching for the mean point. Finally, a gear tooth was measured with an M&M3525 gear measurement center to verify the proposed algorithm. The theoretical analysis and experimental results show that local edge fluctuation is reduced effectively by the proposed method in comparison with existing subpixel edge detection algorithms, and the subpixel edge location accuracy and computation speed are improved. The maximum error of the gear tooth profile total deviation is 1.9 μm compared with the measurement result from the gear measurement center, indicating that the method is reliable enough to meet the requirement of high-precision measurement.
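The "Gaussian integral model" of a step edge blurred by a Gaussian point spread function is an error-function profile, so the subpixel location can be recovered by least-squares fitting. A minimal sketch of that fitting idea follows, using synthetic data and assumed initial guesses; it is not the paper's full interpolation-and-search pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_model(x, a, b, x0, sigma):
    """Ideal step convolved with a Gaussian PSF: an erf-shaped gray ramp."""
    return a + b * erf((x - x0) / (np.sqrt(2.0) * sigma))

def subpixel_edge(x, gray):
    """Fit the erf model to gray values sampled along the edge normal;
    the fitted x0 is the subpixel edge location."""
    p0 = [gray.mean(), (gray.max() - gray.min()) / 2.0,
          x[len(x) // 2], 1.0]                      # rough initial guess
    (a, b, x0, sigma), _ = curve_fit(edge_model, x, gray, p0=p0)
    return x0

x = np.arange(-5.0, 6.0)                            # pixel positions on the normal
gray = edge_model(x, 120.0, 60.0, 0.37, 0.9)        # synthetic noise-free profile
print(subpixel_edge(x, gray))                       # recovers about 0.37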
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
A Versatile Microfluidic Device for Automating Synthetic Biology.
Shih, Steve C C; Goyal, Garima; Kim, Peter W; Koutsoubelis, Nicolas; Keasling, Jay D; Adams, Paul D; Hillson, Nathan J; Singh, Anup K
2015-10-16
New microbes are being engineered that contain the genetic circuitry, metabolic pathways, and other cellular functions required for a wide range of applications such as producing biofuels, biobased chemicals, and pharmaceuticals. Although currently available tools are useful in improving the synthetic biology process, further improvements in physical automation would help to lower the barrier of entry into this field. We present an innovative microfluidic platform for assembling DNA fragments with 10× lower volumes (compared to that of current microfluidic platforms) and with integrated region-specific temperature control and on-chip transformation. Integration of these steps minimizes the loss of reagents and products compared to that with conventional methods, which require multiple pipetting steps. For assembling DNA fragments, we implemented three commonly used DNA assembly protocols on our microfluidic device: Golden Gate assembly, Gibson assembly, and yeast assembly (i.e., TAR cloning, DNA Assembler). We demonstrate the utility of these methods by assembling two combinatorial libraries of 16 plasmids each. Each DNA plasmid is transformed into Escherichia coli or Saccharomyces cerevisiae using on-chip electroporation and further sequenced to verify the assembly. We anticipate that this platform will enable new research that can integrate this automated microfluidic platform to generate large combinatorial libraries of plasmids and will help to expedite the overall synthetic biology process.
Daee, Pedram; Mirian, Maryam S; Ahmadabadi, Majid Nili
2014-01-01
In a multisensory task, human adults integrate information from different sensory modalities (behaviorally, in an optimal Bayesian fashion), while children mostly rely on a single sensory modality for decision making. The reason behind this change of behavior over age, and the process behind learning the required statistics for optimal integration, are still unclear and have not been explained by conventional Bayesian modeling. We propose an interactive multisensory learning framework without making any prior assumptions about the sensory models. In this framework, learning in every modality and in their joint space is done in parallel using a single-step reinforcement learning method. A simple statistical test on confidence intervals on the mean of reward distributions is used to select the most informative source of information among the individual modalities and the joint space. Analyses of the method and the simulation results on a multimodal localization task show that the learning system autonomously starts with sensory selection and gradually switches to sensory integration. This is because relying more on individual modalities (i.e., selection) at early learning steps (childhood) is more rewarding than favoring decisions learned in the joint space, since the smaller state space of each modality results in faster learning in every individual modality. In contrast, after gaining sufficient experience (adulthood), the quality of learning in the joint space matures, while learning in the modalities suffers from insufficient accuracy due to perceptual aliasing. This results in a tighter confidence interval for the joint space and consequently causes a smooth shift from selection to integration. It suggests that sensory selection and integration are emergent behaviors, and both are outputs of a single reward-maximization process; i.e., the transition is not a preprogrammed phenomenon.
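A toy version of the confidence-interval selection rule: each candidate learner (one per modality, plus the joint space) keeps a reward history, and the source whose interval lower bound is highest is trusted. The 1.96 z-value and the reward distributions are assumptions for illustration, not the paper's exact test.

```python
import numpy as np

def pick_source(rewards, z=1.96):
    """Select among learners by comparing confidence intervals on mean
    reward: prefer the candidate with the highest interval lower bound."""
    lows = []
    for r in rewards:                           # one reward history per learner
        r = np.asarray(r, dtype=float)
        half = z * r.std(ddof=1) / np.sqrt(len(r))
        lows.append(r.mean() - half)
    return int(np.argmax(lows))

# Early on, a single modality (many samples, small state space) wins; with
# experience the joint space's interval tightens and takes over.
rng = np.random.default_rng(0)
modality = rng.normal(0.6, 0.2, size=200)       # fast to learn, moderate reward
joint = rng.normal(0.8, 0.2, size=10)           # slow to learn, ultimately better
print(pick_source([modality, joint]))
```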
Bürger, Raimund; Diehl, Stefan; Mejías, Camilo
2016-01-01
The main purpose of the recently introduced Bürger-Diehl simulation model for secondary settling tanks was to resolve spatial discretization problems when both hindered settling and the phenomena of compression and dispersion are included. Straightforward time integration unfortunately means long computational times. The next step in the development is to introduce and investigate time-integration methods for more efficient simulations, but where other aspects such as implementation complexity and robustness are equally considered. This is done for batch settling simulations. The key findings are partly a new time-discretization method and partly its comparison with other specially tailored and standard methods. Several advantages and disadvantages for each method are given. One conclusion is that the new linearly implicit method is easier to implement than another one (semi-implicit method), but less efficient based on two types of batch sedimentation tests.
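A generic linearly implicit Euler step of the kind compared in the paper might look as follows; the actual Bürger-Diehl flux function and its Jacobian are not reproduced here, so f and jac are placeholders supplied by the caller.

```python
import numpy as np

def linearly_implicit_euler(u, t, dt, f, jac):
    """One linearly implicit Euler step for u' = f(t, u): solve
    (I - dt*J) du = dt*f(t, u), then u_new = u + du. Only one linear
    solve per step and no Newton iteration, which is the implementation
    simplicity referred to in the abstract."""
    J = jac(t, u)                                   # Jacobian of f at (t, u)
    du = np.linalg.solve(np.eye(len(u)) - dt * J, dt * f(t, u))
    return u + du
```

Compared with a fully implicit (semi-implicit in the abstract's terminology) step, this trades some stability margin for a fixed, predictable cost per step.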
NASA Astrophysics Data System (ADS)
Zhang, Shuangxi; Jia, Yuesong; Sun, Qizhi
2015-02-01
Webb [1] proposed a method to obtain symplectic integrators for magnetic systems by Taylor expanding the discrete Euler-Lagrange (DEL) equations that result from the variational symplectic method, i.e., from taking the variation of the discrete action [2], and approximating the results to order O(h^2), where h is the time step. In that paper, Webb regarded the integrators obtained by this method as symplectic; in particular, he treated the Boris integrator (BI) as symplectic. However, we have questions about Webb's results. Theoretically, the transformation of phase-space coordinates between two adjacent points induced by a symplectic algorithm should conserve a symplectic 2-form [2-5]. As proved in Refs. [2,3], the transformations induced by the standard symplectic integrator derived from the Hamiltonian and by the variational symplectic integrator (VSI) [2,6] derived from the Lagrangian conserve symplectic 2-forms. However, the O(h^2) approximation of the VSI obtained in that paper does not necessarily conserve a symplectic 2-form, contrary to the claim of [1]. In the next section, we will use BI as an example to support our point and will prove that BI is not a symplectic integrator but one that conserves discrete phase-space volume.
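For concreteness, the Boris integrator discussed here is the standard rotation-based push; a sketch follows (fields passed in explicitly, SI-style units). It conserves phase-space volume, as the abstract argues, although it is not symplectic in general.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Standard Boris step for a charged particle in fields E, B:
    half electric kick, magnetic rotation, half electric kick, drift."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                      # first half electric kick
    t = qmdt2 * B                                # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)     # rotation about B
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + qmdt2 * E                   # second half electric kick
    x_new = x + dt * v_new                       # position drift
    return x_new, v_new
```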
Valentijn, Pim P; Schepman, Sanneke M; Opheij, Wilfrid; Bruijnzeels, Marc A
2013-01-01
Primary care has a central role in integrating care within a health system. However, conceptual ambiguity regarding integrated care hampers a systematic understanding. This paper proposes a conceptual framework that combines the concepts of primary care and integrated care, in order to understand the complexity of integrated care. The search method involved a combination of electronic database searches, hand searches of reference lists (snowball method) and contacting researchers in the field. The process of synthesizing the literature was iterative, to relate the concepts of primary care and integrated care. First, we identified the general principles of primary care and integrated care. Second, we connected the dimensions of integrated care and the principles of primary care. Finally, to improve content validity we held several meetings with researchers in the field to develop and refine our conceptual framework. The conceptual framework combines the functions of primary care with the dimensions of integrated care. Person-focused and population-based care serve as guiding principles for achieving integration across the care continuum. Integration plays complementary roles on the micro (clinical integration), meso (professional and organisational integration) and macro (system integration) level. Functional and normative integration ensure connectivity between the levels. The presented conceptual framework is a first step to achieve a better understanding of the inter-relationships among the dimensions of integrated care from a primary care perspective.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis; del-Castillo-Negrete, Diego; Hauck, Cory D.
2014-09-01
We propose a Lagrangian numerical algorithm for a time-dependent, anisotropic temperature transport equation in magnetized plasmas in the large guide field regime. The approach is based on an analytical integral formal solution of the parallel (i.e., along the magnetic field) transport equation with sources, and it is able to accommodate both local and non-local parallel heat flux closures. The numerical implementation is based on an operator-split formulation, with two straightforward steps: a perpendicular transport step (including sources), and a Lagrangian (field-line integral) parallel transport step. Algorithmically, the first step is amenable to the use of modern iterative methods, while the second step has a fixed cost per degree of freedom (and is therefore scalable). Accuracy-wise, the approach is free from the numerical pollution introduced by the discrete parallel transport term when the perpendicular-to-parallel transport coefficient ratio χ⊥/χ∥ becomes arbitrarily small, and is shown to capture the correct limiting solution when ε = χ⊥L∥²/(χ∥L⊥²) → 0 (with L∥ and L⊥ the parallel and perpendicular diffusion length scales, respectively). Therefore, the approach is asymptotic-preserving. We demonstrate the capabilities of the scheme with several numerical experiments with varying magnetic field complexity in two dimensions, including the case of transport across a magnetic island.
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
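A minimal sketch of the extended phase space construction: phase space is doubled to (q, p, Q, P) with the new Hamiltonian H~(q, Q, p, P) = H(q, P) + H(Q, p), which splits into two exactly integrable halves. Here the two copies are simply averaged at the end in place of the paper's coordinate-mixing transformations, a simplification.

```python
import numpy as np

def extended_leapfrog(q, p, Hq, Hp, dt, steps):
    """Extended phase space leapfrog for an inseparable Hamiltonian H(q, p).
    Hq, Hp are the partial derivatives dH/dq and dH/dp. Each split flow is
    explicit because its Hamiltonian depends only on variables it does not
    move: H_A = H(q, P) updates (Q, p); H_B = H(Q, p) updates (q, P)."""
    Q, P = q.copy(), p.copy()                    # duplicate initial conditions
    def flow_A(h):                               # flow of H(q, P)
        nonlocal p, Q
        Q = Q + h * Hp(q, P)
        p = p - h * Hq(q, P)
    def flow_B(h):                               # flow of H(Q, p)
        nonlocal q, P
        q = q + h * Hp(Q, p)
        P = P - h * Hq(Q, p)
    for _ in range(steps):
        flow_A(0.5 * dt); flow_B(dt); flow_A(0.5 * dt)
    return 0.5 * (q + Q), 0.5 * (p + P)          # naive projection back
```

For a well-posed run the two copies should stay close; the paper's mixing maps are what keep them from drifting apart over long integrations.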
Bayesian Analysis of Evolutionary Divergence with Genomic Data under Diverse Demographic Models.
Chung, Yujin; Hey, Jody
2017-06-01
We present a new Bayesian method for estimating demographic and phylogenetic history using population genomic data. Several key innovations are introduced that allow the study of diverse models within an Isolation-with-Migration framework. The new method implements a 2-step analysis, with an initial Markov chain Monte Carlo (MCMC) phase that samples simple coalescent trees, followed by the calculation of the joint posterior density for the parameters of a demographic model. In step 1, the MCMC sampling phase, the method uses a reduced state space, consisting of coalescent trees without migration paths, and a simple importance sampling distribution without the demography of interest. Once obtained, a single sample of trees can be used in step 2 to calculate the joint posterior density for model parameters under multiple diverse demographic models, without having to repeat MCMC runs. Because migration paths are not included in the state space of the MCMC phase, but rather are handled by analytic integration in step 2 of the analysis, the method is scalable to a large number of loci with excellent MCMC mixing properties. With an implementation of the new method in the computer program MIST, we demonstrate the method's accuracy, scalability, and other advantages using simulated data and DNA sequences of two common chimpanzee subspecies: Pan troglodytes (P. t.) troglodytes and P. t. verus. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Starting with Worldviews: A Five-Step Preparatory Approach to Integrative Interdisciplinary Learning
ERIC Educational Resources Information Center
Augsburg, Tanya; Chitewere, Tendai
2013-01-01
In this article we propose a five-step sequenced approach to integrative interdisciplinary learning in undergraduate gateway courses. Drawing from the literature of interdisciplinarity, transformative learning theory, and theories of reflective learning, we utilize a sequence of five steps early in our respective undergraduate gateway courses to…
Improved method of step length estimation based on inverted pendulum model.
Zhao, Qi; Zhang, Boxue; Wang, Jingjing; Feng, Wenquan; Jia, Wenyan; Sun, Mingui
2017-04-01
Step length estimation is an important issue in areas such as gait analysis, sport training, and pedestrian localization. In this article, we estimate the step length of walking using a waist-worn wearable computer named eButton. Motion sensors within this device record body movement from the trunk instead of the extremities. Two signal-processing techniques are applied in our algorithm design. The direction cosine matrix transforms vertical acceleration from the device coordinates to the topocentric coordinates. Empirical mode decomposition is used to remove the zero- and first-order skew effects resulting from the integration process. Our experimental results show that the algorithm performs well in step length estimation; the improvement contributed by the direction cosine matrix algorithm increases from 1.69% to 3.56% as walking speed increases.
Numerical simulation of pseudoelastic shape memory alloys using the large time increment method
NASA Astrophysics Data System (ADS)
Gu, Xiaojun; Zhang, Weihong; Zaki, Wael; Moumni, Ziad
2017-04-01
The paper presents a numerical implementation of the large time increment (LATIN) method for the simulation of shape memory alloys (SMAs) in the pseudoelastic range. The method was initially proposed as an alternative to the conventional incremental approach for the integration of nonlinear constitutive models. It is adapted here for the simulation of pseudoelastic SMA behavior using the Zaki-Moumni model and is shown to be especially useful in situations where the phase transformation process presents little or no hardening. In these situations, a slight stress variation in a load increment can result in large variations of strain and local state variables, which may lead to difficulties in numerical convergence. In contrast to the conventional incremental method, the LATIN method solves the global equilibrium and local consistency conditions sequentially over the entire loading path. The achieved solution must satisfy the conditions of static and kinematic admissibility and consistency simultaneously after several iterations. The 3D numerical implementation is accomplished using an implicit algorithm and is then used for finite element simulation with the software Abaqus. Computational tests demonstrate the ability of this approach to simulate SMAs presenting flat phase transformation plateaus and subjected to complex loading cases, such as the quasi-static behavior of a stent structure. Some numerical results are contrasted with those obtained using step-by-step incremental integration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wunschel, David S.; Melville, Angela M.; Ehrhardt, Christopher J.
2012-05-17
The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of the castor plant Ricinus communis. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatographic-mass spectrometric (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method and independent of the seed source. In particular, the abundances of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation.
NASA Astrophysics Data System (ADS)
Ha, Sanghyun; Park, Junshin; You, Donghyun
2018-01-01
The computational power of Graphics Processing Units (GPUs) is exploited for solutions of the incompressible Navier-Stokes equations, which are integrated using a semi-implicit fractional-step method. The Alternating Direction Implicit (ADI) and the Fourier-transform-based direct solution methods used in the semi-implicit fractional-step method take advantage of multiple tridiagonal matrices, whose inversion is known as the major bottleneck for acceleration on a typical multi-core machine. A novel implementation of the semi-implicit fractional-step method designed for GPU acceleration of the incompressible Navier-Stokes equations is presented. Aspects of the programming model of Compute Unified Device Architecture (CUDA) that are critical to the bandwidth-bound nature of the present method are discussed in detail. A data layout for efficient use of CUDA libraries is proposed for acceleration of tridiagonal matrix inversion and the fast Fourier transform. OpenMP is employed for concurrent collection of turbulence statistics on a CPU while the Navier-Stokes equations are computed on a GPU. Performance of the present method using CUDA is assessed by comparing the speed of solving three tridiagonal matrices using ADI with the speed of solving one heptadiagonal matrix using a conjugate gradient method. An overall speedup of 20 times is achieved using a Tesla K40 GPU in comparison with a single-core Xeon E5-2660 v3 CPU in simulations of turbulent boundary-layer flow over a flat plate conducted on over 134 million grid points. Enhanced performance of 48 times speedup is reached for the same problem using a Tesla P100 GPU.
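The tridiagonal inversion at the heart of each ADI sweep is the Thomas algorithm; a plain, unbatched CPU sketch follows for reference. The paper's point is that many such independent solves can be batched on the GPU.

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Thomas algorithm for a tridiagonal system. a, b, c, d are length-n
    arrays holding the sub-, main, and super-diagonals and the right-hand
    side; a[0] and c[-1] are ignored."""
    n = len(b)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                        # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):               # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each solve is O(n) but inherently sequential along the line, which is why GPU implementations parallelize across the many independent lines rather than within one.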
Fast and reliable symplectic integration for planetary system N-body problems
NASA Astrophysics Data System (ADS)
Hernandez, David M.
2016-06-01
We apply one of the exactly symplectic integrators, which we call HB15, of Hernandez & Bertschinger, along with the Kepler problem solver of Wisdom & Hernandez, to solve planetary system N-body problems. We compare the method to Wisdom-Holman (WH) methods in the MERCURY software package, the MERCURY switching integrator, and others and find HB15 to be the most efficient method or tied for the most efficient method in many cases. Unlike WH, HB15 solved N-body problems exhibiting close encounters with small, acceptable error, although frequent encounters slowed the code. Switching maps like MERCURY change between two methods and are not exactly symplectic. We carry out careful tests on their properties and suggest that they must be used with caution. We then use different integrators to solve a three-body problem consisting of a binary planet orbiting a star. For all tested tolerances and time steps, MERCURY unbinds the binary after 0 to 25 years. However, in the solutions of HB15, a time-symmetric HERMITE code, and a symplectic Yoshida method, the binary remains bound for >1000 years. The methods' solutions are qualitatively different, despite small errors in the first integrals in most cases. Several checks suggest that the qualitative binary behaviour of HB15's solution is correct. The Bulirsch-Stoer and Radau methods in the MERCURY package also unbind the binary before a time of 50 years, suggesting that this dynamical error is due to a MERCURY bug.
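For orientation, the basic building block shared by these integrators is the symplectic kick-drift-kick leapfrog; a direct-summation sketch in G = 1 units follows. This is a generic map with an O(N²) force loop, not the HB15 algorithm itself.

```python
import numpy as np

def accel(x, m, G=1.0):
    """Pairwise Newtonian accelerations by direct summation."""
    a = np.zeros_like(x)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                r = x[j] - x[i]
                a[i] += G * m[j] * r / np.linalg.norm(r)**3
    return a

def leapfrog(x, v, m, dt, steps):
    """Fixed-step kick-drift-kick leapfrog: symplectic, hence no secular
    energy drift for well-resolved orbits."""
    for _ in range(steps):
        v += 0.5 * dt * accel(x, m)
        x += dt * v
        v += 0.5 * dt * accel(x, m)
    return x, v
```

The fixed step is essential: adapting dt naively breaks symplecticity, which is one reason switching integrators such as the one in MERCURY require the caution described above.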
Concept mapping as a promising method to bring practice into science.
van Bon-Martens, M J H; van de Goor, L A M; Holsappel, J C; Kuunders, T J M; Jacobs-van der Bruggen, M A M; te Brake, J H M; van Oers, J A M
2014-06-01
Concept mapping is a method for developing a conceptual framework of a complex topic for use as a guide to evaluation or planning. In concept mapping, thoughts and ideas are represented in the form of a picture or map, the content of which is determined by a group of stakeholders. This study aimed to explore the suitability of this method as a tool to integrate practical knowledge with scientific knowledge in order to improve theory development as a sound basis for practical decision-making. Following a short introduction to the method of concept mapping, five Dutch studies, serving different purposes and fields in public health, will be described. The aim of these studies was: to construct a theoretical framework for good regional public health reporting; to design an implementation strategy for a guideline for integral local health policy; to guide the evaluation of a local integral approach of overweight and obesity in youth; to guide the construction of a questionnaire to measure the quality of postdisaster psychosocial care; and to conceptualize an integral base for formulation of ambitions and targets for the new youth healthcare programme of a regional health service. The studies showed that concept mapping is a way to integrate practical and scientific knowledge with careful selection of participants that represent the different perspectives. Theory development can be improved through concept mapping; not by formulating new theories, but by highlighting the key issues and defining perceived relationships between topics. In four of the five studies, the resulting concept map was received as a sound basis for practical decision-making. Concept mapping is a valuable method for evidence-based public health policy, and a powerful instrument for facilitating dialogue, coherence and collaboration between researchers, practitioners, policy makers and the public. Development of public health theory was realized by a step-by-step approach, considering both scientific and practical knowledge. However, the external validity of the concept maps in place and time is of importance. Copyright © 2014 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Krylov Deferred Correction Accelerated Method of Lines Transpose for Parabolic Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, Jun; Jingfang, Huang
2008-01-01
In this paper, a new class of numerical methods for the accurate and efficient solution of parabolic partial differential equations is presented. Unlike the traditional method of lines (MoL), the new Krylov deferred correction (KDC) accelerated method of lines transpose (MoL^T) first discretizes the temporal direction using Gaussian-type nodes and spectral integration, and symbolically applies low-order time marching schemes to form a preconditioned elliptic system, which is then solved iteratively using Newton-Krylov techniques such as the Newton-GMRES or Newton-BiCGStab method. Each function evaluation in the Newton-Krylov method is simply one low-order time-stepping approximation of the error, obtained by solving a decoupled system using available fast elliptic equation solvers. Preliminary numerical experiments show that the KDC accelerated MoL^T technique is unconditionally stable, can be spectrally accurate in both temporal and spatial directions, and allows optimal time-step sizes in long-time simulations.
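The Newton-Krylov solve of an implicit stage can be sketched with SciPy's Jacobian-free newton_krylov; the example below applies it to a single backward Euler step on a stiff linear problem. The decay rate, step size, and tolerance are arbitrary illustration values, and this is only the inner-solver idea, not the full KDC machinery.

```python
import numpy as np
from scipy.optimize import newton_krylov

def backward_euler_step(u, dt, rhs):
    """Solve the implicit (backward Euler) stage u_new = u + dt*rhs(u_new)
    with a Jacobian-free Newton-Krylov iteration."""
    residual = lambda u_new: u_new - u - dt * rhs(u_new)
    return newton_krylov(residual, u, f_tol=1e-10)

# Stiff linear decay u' = -1000 u on a few unknowns.
rhs = lambda u: -1000.0 * u
u = np.ones(4)
for _ in range(10):
    u = backward_euler_step(u, 0.1, rhs)
print(u)
```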
Compressible, multiphase semi-implicit method with moment of fluid interface representation
Jemison, Matthew; Sussman, Mark; Arienti, Marco
2014-09-16
A unified method for simulating multiphase flows using an exactly mass, momentum, and energy conserving Cell-Integrated Semi-Lagrangian advection algorithm is presented. The deforming material boundaries are represented using the moment-of-fluid method. Our new algorithm uses a semi-implicit pressure update scheme that asymptotically preserves the standard incompressible pressure projection method in the limit of infinite sound speed. The asymptotically preserving attribute makes the new method applicable to compressible and incompressible flows, including stiff materials, enabling large time steps characteristic of incompressible flow algorithms rather than the small time steps required by explicit methods. Moreover, shocks are captured and material discontinuities are tracked without the aid of any approximate or exact Riemann solvers. As a result, simulations of underwater explosions and fluid jetting in one, two, and three dimensions are presented which illustrate the effectiveness of the new algorithm at efficiently computing multiphase flows containing shock waves and material discontinuities with large “impedance mismatch.”
Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions
NASA Astrophysics Data System (ADS)
Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.
2016-09-01
Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
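The dominant kernel in such methods is the action of a matrix function on a vector. SciPy's expm_multiply applies exp(hA) to a vector without forming the matrix exponential (it uses a scaling-based algorithm rather than Krylov projection, but its role in an exponential integrator is the same). A sketch on a stiff 1-D heat operator, with grid size and step chosen arbitrarily:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

# Second-difference (heat) operator on n interior points of (0, 1).
n, h = 200, 0.01
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) * (n + 1)**2

# One exact linear step u(t + h) = exp(h A) u(t), applied matrix-free.
u = np.sin(np.pi * np.linspace(0.0, 1.0, n))
u_next = expm_multiply(h * A.tocsc(), u)
print(u_next[:3])
```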
Five Steps for Improving Evaluation Reports by Using Different Data Analysis Methods.
ERIC Educational Resources Information Center
Thompson, Bruce
Although methodological integrity is not the sole determinant of the value of a program evaluation, decision-makers do have a right, at a minimum, to be able to expect competent work from evaluators. This paper explores five areas where evaluators might improve methodological practices. First, evaluation reports should reflect the limited…
ERIC Educational Resources Information Center
Bedford, Denise A. D.
2015-01-01
The knowledge life cycle is applied to two core capabilities of library and information science (LIS) education--teaching, and research and development. The knowledge claim validation, invalidation and integration steps of the knowledge life cycle are translated to learning, unlearning and relearning processes. Mixed methods are used to determine…
Targeted Therapies for Myeloma and Metastatic Bone Cancers
2007-02-01
increased efficacy in the targeted microenvironment, and the ultimate opportunity to reverse catastrophic disease processes. Furthermore, targeted... concentrate the resulting nanoparticles using centrifuge concentrator tubes and we have integrated this processing step into our nanoparticle... is unaffected by this slightly altered approach. Furthermore, this modified method avoids a lengthy column separation process that diminishes the...
Empirical methods for modeling landscape change, ecosystem services, and biodiversity
David Lewis; Ralph Alig
2009-01-01
The purpose of this paper is to synthesize recent economics research aimed at integrating discrete-choice econometric models of land-use change with spatially-explicit landscape simulations and quantitative ecology. This research explicitly models changes in the spatial pattern of landscapes in two steps: 1) econometric estimation of parcel-scale transition...
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Recently, digital cameras have been advancing rapidly. However, the shot image differs from the sight image perceived when the same scenery is seen with the naked eye. There are blown-out highlights and crushed blacks in an image that captures scenery with a wide dynamic range, whereas these problems hardly arise in the sight image; they are a contributory cause of the difference between the shot image and the sight image. Blown-out highlights and crushed blacks are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS sensor, and the human visual system: the dynamic range of the shot image is narrower than that of the sight image. In order to solve this problem, we propose an automatic method to decide an effective exposure range based on the superposition of edges. We integrate multi-step exposure images using this method. In addition, we try to erase pseudo-edges using a process that blends exposure values. Afterwards, we obtain a pseudo wide dynamic range image automatically.
Method for network analyzation and apparatus
Bracht, Roger B.; Pasquale, Regina V.
2001-01-01
A portable network analyzer and method having multiple channel transmit and receive capability for real-time monitoring of processes which maintains phase integrity, requires low power, is adapted to provide full vector analysis, provides output frequencies of up to 62.5 MHz and provides fine sensitivity frequency resolution. The present invention includes a multi-channel means for transmitting and a multi-channel means for receiving, both in electrical communication with a software means for controlling. The means for controlling is programmed to provide a signal to a system under investigation which steps consecutively over a range of predetermined frequencies. The resulting received signal from the system provides complete time domain response information by executing a frequency transform of the magnitude and phase information acquired at each frequency step.
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes in an observed object. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The fact that the tracked object does not always appear clearly makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, performed by cropping several frames or all of the frames. The second step tracks the object in the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images; in this research, a single-frame super-resolution technique is proposed for the tracking approach because of its fast computation time. The method used for tracking is CamShift, whose advantages are a simple calculation based on the HSV color histogram and robustness when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely with various backgrounds, shape changes of the object, and under good lighting conditions.
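The tracking step can be reproduced with OpenCV's built-in CamShift. The sketch below assumes a hypothetical input file and initial window, and omits the super-resolution upscaling that the paper inserts before tracking.

```python
import cv2

# Track a window whose HSV hue histogram is taken from the first frame.
cap = cv2.VideoCapture('video.avi')            # hypothetical input file
ok, frame = cap.read()
x, y, w, h = 300, 200, 60, 60                  # assumed initial window
roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, window = cv2.CamShift(back, window, term)  # rotated box, new window
```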
Kuijpers, Niels GA; Chroumpi, Soultana; Vos, Tim; Solis-Escalante, Daniel; Bosman, Lizanne; Pronk, Jack T; Daran, Jean-Marc; Daran-Lapujade, Pascale
2013-01-01
In vivo assembly of overlapping fragments by homologous recombination in Saccharomyces cerevisiae is a powerful method to engineer large DNA constructs. Whereas most in vivo assembly methods reported to date result in circular vectors, stable integrated constructs are often preferred for metabolic engineering as they are required for large-scale industrial application. The present study explores the potential of combining in vivo assembly of large, multigene expression constructs with their targeted chromosomal integration in S. cerevisiae. Combined assembly and targeted integration of a ten-fragment 22-kb construct to a single chromosomal locus was successfully achieved in a single transformation process, but with low efficiency (5% of the analyzed transformants contained the correctly assembled construct). The meganuclease I-SceI was therefore used to introduce a double-strand break at the targeted chromosomal locus, thus to facilitate integration of the assembled construct. I-SceI-assisted integration dramatically increased the efficiency of assembly and integration of the same construct to 95%. This study paves the way for the fast, efficient, and stable integration of large DNA constructs in S. cerevisiae chromosomes. PMID:24028550
NASA Astrophysics Data System (ADS)
Kreis, Karsten; Kremer, Kurt; Potestio, Raffaello; Tuckerman, Mark E.
2017-12-01
Path integral-based methodologies play a crucial role for the investigation of nuclear quantum effects by means of computer simulations. However, these techniques are significantly more demanding than corresponding classical simulations. To reduce this numerical effort, we recently proposed a method, based on a rigorous Hamiltonian formulation, which restricts the quantum modeling to a small but relevant spatial region within a larger reservoir where particles are treated classically. In this work, we extend this idea and show how it can be implemented along with state-of-the-art path integral simulation techniques, including path-integral molecular dynamics, which allows for the calculation of quantum statistical properties, and ring-polymer and centroid molecular dynamics, which allow the calculation of approximate quantum dynamical properties. To this end, we derive a new integration algorithm that also makes use of multiple time-stepping. The scheme is validated via adaptive classical-path-integral simulations of liquid water. Potential applications of the proposed multiresolution method are diverse and include efficient quantum simulations of interfaces as well as complex biomolecular systems such as membranes and proteins.
Etchells, Peter J; Benton, Christopher P; Ludwig, Casimir J H; Gilchrist, Iain D
2011-01-01
A growing number of studies in vision research employ analyses of how perturbations in visual stimuli influence behavior on single trials. Recently, we have developed a method along such lines to assess the time course over which object velocity information is extracted on a trial-by-trial basis in order to produce an accurate intercepting saccade to a moving target. Here, we present a simplified version of this methodology, and use it to investigate how changes in stimulus contrast affect the temporal velocity integration window used when generating saccades to moving targets. Observers generated saccades to one of two moving targets which were presented at high (80%) or low (7.5%) contrast. In 50% of trials, target velocity stepped up or down after a variable interval after the saccadic go signal. The extent to which the saccade endpoint can be accounted for as a weighted combination of the pre- or post-step velocities allows for identification of the temporal velocity integration window. Our results show that the temporal integration window takes longer to peak in the low when compared to high contrast condition. By enabling the assessment of how information such as changes in velocity can be used in the programming of a saccadic eye movement on single trials, this study describes and tests a novel methodology with which to look at the internal processing mechanisms that transform sensory visual inputs into oculomotor outputs.
Collocation and Galerkin Time-Stepping Methods
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2011-01-01
We study the numerical solutions of ordinary differential equations by one-step methods where the solution at t_n is known and that at t_{n+1} is to be calculated. The approaches employed are collocation, continuous Galerkin (CG) and discontinuous Galerkin (DG). Relations among these three approaches are established. A quadrature formula using s evaluation points is employed for the Galerkin formulations. We show that with such a quadrature, the CG method is identical to the collocation method using quadrature points as collocation points. Furthermore, if the quadrature formula is the right Radau one (including t_{n+1}), then the DG and CG methods also become identical, and they reduce to the Radau IIA collocation method. In addition, we present a generalization of DG that yields a method identical to CG and collocation with arbitrary collocation points. Thus, the collocation, CG, and generalized DG methods are equivalent, and the latter two methods can be formulated using the differential instead of integral equation. Finally, all schemes discussed can be cast as s-stage implicit Runge-Kutta methods.
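This equivalence is easy to exercise numerically: SciPy's 'Radau' option of solve_ivp is an implicit Runge-Kutta method of the Radau IIA family, i.e., right-Radau collocation. A one-liner on a stiff test problem (the decay rate and tolerances are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Stiff relaxation toward cos(t), integrated by Radau IIA collocation.
sol = solve_ivp(lambda t, y: -50.0 * (y - np.cos(t)), (0.0, 5.0), [0.0],
                method='Radau', rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])
```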
NASA Astrophysics Data System (ADS)
Wang, Jinting; Lu, Liqiao; Zhu, Fei
2018-01-01
Finite element (FE) analysis is a powerful tool and has been applied by investigators to real-time hybrid simulations (RTHSs). This study focuses on the computational efficiency, including the computational time and accuracy, of numerical integrations in solving the FE numerical substructure in RTHSs. First, sparse matrix storage schemes are adopted to decrease the computational time of the FE numerical substructure; the task execution time (TET) decreases, so that the scale of the numerical substructure model can increase. Subsequently, several commonly used explicit numerical integration algorithms, including the central difference method (CDM), the Newmark explicit method, the Chang method and the Gui-λ method, are comprehensively compared to evaluate their computational time in solving the FE numerical substructure. CDM is better than the other explicit integration algorithms when the damping matrix is diagonal, while the Gui-λ (λ = 4) method is advantageous when the damping matrix is non-diagonal. Finally, the effect of time delay on the computational accuracy of RTHSs is investigated by simulating structure-foundation systems. Simulation results show that the influence of time delay on the displacement response becomes obvious as the mass ratio increases, and delay compensation methods may reduce the relative error of the displacement peak value to less than 5% even with a large time step and large time delay.
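A generic central difference loop for M u'' + C u' + K u = f(t) is sketched below; with diagonal M and C the effective matrix is diagonal and the solve is trivially cheap, matching the reported advantage of CDM. The startup formula and matrices are textbook, not the study's code.

```python
import numpy as np

def cdm(M, C, K, f, u0, v0, dt, steps):
    """Central difference time stepping: solve
    (M/dt^2 + C/(2 dt)) u_{n+1} = f_n - (K - 2M/dt^2) u_n
                                  - (M/dt^2 - C/(2 dt)) u_{n-1}."""
    u_prev = u0 - dt * v0 + 0.5 * dt**2 * np.linalg.solve(
        M, f(0.0) - C @ v0 - K @ u0)             # textbook startup step
    u = u0.copy()
    A = M / dt**2 + C / (2.0 * dt)               # effective system matrix
    out = [u0.copy()]
    for i in range(steps):
        b = (f(i * dt) - K @ u + (2.0 * M / dt**2) @ u
             - (M / dt**2 - C / (2.0 * dt)) @ u_prev)
        u_prev, u = u, np.linalg.solve(A, b)
        out.append(u.copy())
    return np.array(out)
```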
Efficient Integration of Coupled Electrical-Chemical Systems in Multiscale Neuronal Simulations
Brocke, Ekaterina; Bhalla, Upinder S.; Djurfeldt, Mikael; Hellgren Kotaleski, Jeanette; Hanke, Michael
2016-01-01
Multiscale modeling and simulation in neuroscience is gaining scientific attention due to its growing importance and unexplored capabilities. For instance, it can help to acquire better understanding of biological phenomena that have important features at multiple scales of time and space. This includes synaptic plasticity, memory formation and modulation, and homeostasis. There are several ways to organize multiscale simulations depending on the scientific problem and the system to be modeled. One of the possibilities is to simulate different components of a multiscale system simultaneously and exchange data when required. The latter may become a challenging task for several reasons. First, the components of a multiscale system usually span different spatial and temporal scales, such that rigorous analysis of possible coupling solutions is required. Then, the components can be defined by different mathematical formalisms. For certain classes of problems a number of coupling mechanisms have been proposed and successfully used. However, a strict mathematical theory is missing in many cases. Recent work in the field has so far not investigated artifacts that may arise during coupled integration of different approximation methods. Moreover, in neuroscience, the coupling of widely used fixed-step-size numerical solvers may lead to unexpected inefficiency. In this paper we address the question of possible numerical artifacts that can arise during the integration of a coupled system. We develop an efficient strategy to couple the components comprising a multiscale test problem in neuroscience. We introduce an efficient coupling method based on the second-order backward differentiation formula (BDF2) numerical approximation. The method uses adaptive step size integration with an error estimation proposed by Skelboe (2000). The method shows a significant advantage over conventional fixed step size solvers used in neuroscience for similar problems. We explore different coupling strategies that define the organization of computations between system components. We study the importance of an appropriate approximation of exchanged variables during the simulation. The analysis shows a substantial impact of these aspects on the solution accuracy in the application to our multiscale neuroscientific test problem. We believe that the ideas presented in the paper may contribute substantially to the development of a robust and efficient framework for multiscale brain modeling and simulations in neuroscience. PMID:27672364
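A minimal fixed-step BDF2 sketch for a scalar stiff ODE (one Newton solve per step); the paper's method adds Skelboe-style adaptive step-size control and the multirate coupling machinery on top of this basic formula:

```python
import numpy as np

def bdf2(f, dfdy, y0, t0, t1, h):
    """Fixed-step BDF2 for y' = f(t, y):
    y_{n+1} - (4/3) y_n + (1/3) y_{n-1} = (2/3) h f(t_{n+1}, y_{n+1})."""
    ts = np.arange(t0, t1 + h / 2, h)
    ys = np.empty_like(ts); ys[0] = y0
    y = y0                                   # backward-Euler start-up step
    for _ in range(20):
        y -= (y - y0 - h * f(ts[1], y)) / (1 - h * dfdy(ts[1], y))
    ys[1] = y
    for n in range(1, len(ts) - 1):
        y = ys[n]                            # initial Newton guess
        for _ in range(20):
            r = y - 4/3 * ys[n] + 1/3 * ys[n-1] - 2/3 * h * f(ts[n+1], y)
            y -= r / (1 - 2/3 * h * dfdy(ts[n+1], y))
        ys[n+1] = y
    return ts, ys

# Mildly stiff test problem: y' = -50*(y - cos(t))
ts, ys = bdf2(lambda t, y: -50 * (y - np.cos(t)), lambda t, y: -50.0,
              1.0, 0.0, 2.0, 0.02)
```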
Method and apparatus for determining material structural integrity
Pechersky, M.J.
1994-01-01
Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
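A hedged sketch of the mobility computation: drive-point mobility is the ratio of velocity to force spectra, and a damping loss factor can be read off the dominant resonance. Here the half-power (-3 dB) bandwidth is used as a simple stand-in for the resonance dwell evaluation described in the patent, and a single well-separated resonance away from the band edges is assumed:

```python
import numpy as np

def damping_from_mobility(force, velocity, fs):
    """Estimate a damping loss factor from drive-point mobility Y(f) = V(f)/F(f)."""
    F = np.fft.rfft(force)
    V = np.fft.rfft(velocity)
    freqs = np.fft.rfftfreq(len(force), d=1 / fs)
    Y = np.abs(V / F)                       # drive-point mobility magnitude
    i0 = np.argmax(Y)                       # resonance peak (assumed isolated)
    half = Y[i0] / np.sqrt(2)
    lo = np.where(Y[:i0] < half)[0][-1]     # last sub-half-power bin below peak
    hi = i0 + np.where(Y[i0:] < half)[0][0]
    eta = (freqs[hi] - freqs[lo]) / freqs[i0]   # loss factor ~ bandwidth / f_res
    return freqs[i0], eta
```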
Geometric integration in Born-Oppenheimer molecular dynamics.
Odell, Anders; Delin, Anna; Johansson, Börje; Cawkwell, Marc J; Niklasson, Anders M N
2011-12-14
Geometric integration schemes for extended Lagrangian self-consistent Born-Oppenheimer molecular dynamics, including a weak dissipation to remove numerical noise, are developed and analyzed. The extended Lagrangian framework enables the geometric integration of both the nuclear and electronic degrees of freedom. This provides highly efficient simulations that are stable and energy conserving even under incomplete and approximate self-consistent field (SCF) convergence. We investigate three different geometric integration schemes: (1) regular time-reversible Verlet, (2) second-order optimal symplectic, and (3) third-order optimal symplectic. We look at energy conservation, accuracy, and stability as a function of dissipation, integration time step, and SCF convergence. We find that the inclusion of dissipation in the symplectic integration methods gives an efficient damping of numerical noise or perturbations that otherwise may accumulate from finite arithmetic in an otherwise perfectly reversible dynamics.
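A toy sketch of the extended-Lagrangian idea, not the published integrators: an auxiliary electronic variable p is propagated by Verlet in a harmonic well centered on the SCF solution d_scf(p), with a small friction term standing in for the paper's carefully constructed, nearly time-reversible dissipation kernels:

```python
import numpy as np

def xl_verlet(d_scf, p0, nsteps, dt, omega=1.0, gamma=0.02):
    """Verlet propagation of an auxiliary variable p with equation of motion
    p'' = omega^2 * (d_scf(p) - p). The gamma*(p_n - p_{n-1}) term is a crude
    stand-in for the weak dissipation discussed in the abstract."""
    p_prev, p = p0, p0
    traj = [p0]
    for _ in range(nsteps):
        p_next = (2 * p - p_prev + dt**2 * omega**2 * (d_scf(p) - p)
                  - gamma * (p - p_prev))
        p_prev, p = p, p_next
        traj.append(p)
    return np.array(traj)
```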
Huo, Zhiguang; Tseng, George
2017-01-01
Cancer subtypes discovery is the first step to deliver personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration with incorporation of rich existing biological knowledge is essential for deciphering a biological mechanism behind the complex diseases. In this manuscript, we propose an integrative sparse K-means (is-K means) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using an alternating direction method of multiplier (ADMM) will be applied for fast optimization. Simulation and three real applications in breast cancer and leukemia will be used to compare is-K means with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features and computing efficiency. PMID:28959370
A method for real-time generation of augmented reality work instructions via expert movements
NASA Astrophysics Data System (ADS)
Bhattacharya, Bhaskar; Winer, Eliot
2015-03-01
Augmented Reality (AR) offers tremendous potential for a wide range of fields including entertainment, medicine, and engineering. AR allows digital models to be integrated with a real scene (typically viewed through a video camera) to provide useful information in a variety of contexts. The difficulty in authoring and modifying scenes is one of the biggest obstacles to widespread adoption of AR. 3D models must be created, textured, oriented and positioned to create the complex overlays viewed by a user. This often requires using multiple software packages in addition to performing model format conversions. In this paper, a new authoring tool is presented which uses a novel method to capture product assembly steps performed by a user with a depth+RGB camera. Through a combination of computer vision and image processing techniques, each individual step is decomposed into objects and actions. The objects are matched to those in a predetermined geometry library and the actions turned into animated assembly steps. The subsequent instruction set is then generated with minimal user input. A proof of concept is presented to establish the method's viability.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least-squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.
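As an illustration of the first step (simplified to a single spline fit rather than the paper's piecewise least-squares pre-fit), a minimal sketch assuming a known sensor offset c from the neutral axis, so curvature = strain / c, and cantilever boundary conditions w(0) = w'(0) = 0:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def deflection_from_strain(x, strain, c):
    """Fit strain along a fiber with a cubic spline and integrate twice:
    curvature -> slope -> deflection. Both antiderivatives vanish at x[0],
    which matches the cantilever root conditions."""
    curv = CubicSpline(x, strain / c)
    slope = curv.antiderivative()    # first integration: w'(x)
    defl = slope.antiderivative()    # second integration: w(x)
    return slope(x), defl(x)

# Synthetic check: constant curvature kappa gives w(x) = kappa*x^2/2
x = np.linspace(0.0, 1.0, 21)
slope, defl = deflection_from_strain(x, 0.002 * np.ones_like(x), c=0.01)
print(np.allclose(defl, 0.1 * x**2, atol=1e-12))
```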
Kakui, Yasutaka; Sunaga, Tomonari; Arai, Kunio; Dodgson, James; Ji, Liang; Csikász-Nagy, Attila; Carazo-Salas, Rafael; Sato, Masamitsu
2015-01-01
Integration of an external gene into a fission yeast chromosome is useful to investigate the effect of the gene product. An easy way to knock-in a gene construct is use of an integration plasmid, which can be targeted and inserted to a chromosome through homologous recombination. Despite the advantage of integration, construction of integration plasmids is energy- and time-consuming, because there is no systematic library of integration plasmids with various promoters, fluorescent protein tags, terminators and selection markers; therefore, researchers are often forced to make appropriate ones through multiple rounds of cloning procedures. Here, we establish materials and methods to easily construct integration plasmids. We introduce a convenient cloning system based on Golden Gate DNA shuffling, which enables the connection of multiple DNA fragments at once: any kind of promoters and terminators, the gene of interest, in combination with any fluorescent protein tag genes and any selection markers. Each of those DNA fragments, called a ‘module’, can be tandemly ligated in the order we desire in a single reaction, which yields a circular plasmid in a one-step manner. The resulting plasmids can be integrated through standard methods for transformation. Thus, these materials and methods help easy construction of knock-in strains, and this will further increase the value of fission yeast as a model organism. PMID:26108218
Richter, Jack; McFarland, Lela; Bredfeldt, Christine
2012-01-01
Background/Aims Integrating data across systems can be a daunting process. The traditional method of moving data to a common location, mapping fields with different formats and meanings, and performing data cleaning activities to ensure valid and reliable integration across systems can be both expensive and extremely time consuming. As the scope of needed research data increases, the traditional methodology may not be sustainable. Data Virtualization provides an alternative to traditional methods that may reduce the effort required to integrate data across disparate systems. Objective Our goal was to survey new methods in data integration, cloud computing, enterprise data management and virtual data management for opportunities to increase the efficiency of producing VDW and similar data sets. Methods Kaiser Permanente Information Technology (KPIT), in collaboration with the Mid-Atlantic Permanente Research Institute (MAPRI), reviewed methodologies in the burgeoning field of Data Virtualization. We identified potential strengths and weaknesses of new approaches to data integration. For each method, we evaluated its potential application for producing effective research data sets. Results Data Virtualization provides opportunities to reduce the amount of data movement required to integrate data sources on different platforms in order to produce research data sets. Additionally, Data Virtualization also includes methods for managing “fuzzy” matching used to match fields known to have poor reliability such as names, addresses and social security numbers. These methods could improve the efficiency of integrating state and federal data such as patient race, death, and tumors with internal electronic health record data. Discussion The emerging field of Data Virtualization has considerable potential for increasing the efficiency of producing research data sets. An important next step will be to develop a proof of concept project that will help us understand the benefits and drawbacks of these techniques.
NASA Astrophysics Data System (ADS)
Balakina, E. V.; Zotov, N. M.; Fedin, A. P.
2018-02-01
Modeling of the motion of the elastic wheel of a vehicle in real time is used in constructing different models for wheeled-vehicle motion control electronic systems, automobile test-bench simulators, etc. The accuracy and reliability of simulating the parameters of wheel motion in real time when rolling with slip under given road conditions are determined not only by the choice of the model, but also by the inaccuracy and instability of the numerical calculation. It is established that this inaccuracy and instability depend on the size of the integration step and the numerical method being used. The inaccuracy and instability during wheel rolling with slip were analyzed and recommendations for reducing them were developed. It is established that the total allowable range of integration steps is 0.001-0.005 s; the strongest instability is manifested in the calculation of the angular and linear accelerations of the wheel; the weakest instability is manifested in the calculation of the translational velocity of the wheel and the displacement of the wheel center; and the instability is smaller at large slip angles and on more slippery surfaces. A new average-acceleration method is suggested, which makes it possible to significantly reduce (by up to 100%) the instability of the solution in the calculation of all motion parameters of the elastic wheel for different braking conditions and for the entire range of integration steps. The results of this research can be applied to the selection of control algorithms in vehicle motion control electronic systems and in test-bench simulators.
High-Speed Solution of Spacecraft Trajectory Problems Using Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2010-01-01
It has been known for some time that Taylor series (TS) integration is among the most efficient and accurate numerical methods in solving differential equations. However, the full benefit of the method has yet to be realized in calculating spacecraft trajectories, for two main reasons. First, most applications of Taylor series to trajectory propagation have focused on relatively simple problems of orbital motion or on specific problems and have not provided general applicability. Second, applications that have been more general have required use of a preprocessor, which inevitably imposes constraints on computational efficiency. The latter approach includes the work of Berryman et al., who solved the planetary n-body problem with relativistic effects. Their work specifically noted the computational inefficiencies arising from use of a preprocessor and pointed out the potential benefit of manually coding derivative routines. In this Engineering Note, we report on a systematic effort to directly implement Taylor series integration in an operational trajectory propagation code: the Spacecraft N-Body Analysis Program (SNAP). The present Taylor series implementation is unique in that it applies to spacecraft virtually anywhere in the solar system and can be used interchangeably with another integration method. SNAP is a high-fidelity trajectory propagator that includes force models for central body gravitation with N X N harmonics, other body gravitation with N X N harmonics, solar radiation pressure, atmospheric drag (for Earth orbits), and spacecraft thrusting (including shadowing). The governing equations are solved using an eighth-order Runge-Kutta Fehlberg (RKF) single-step method with variable step size control. In the present effort, TS is implemented by way of highly integrated subroutines that can be used interchangeably with RKF. This makes it possible to turn TS on or off during various phases of a mission. Current TS force models include central body gravitation with the J2 spherical harmonic, other body gravitation, thrust, constant atmospheric drag from Earth's atmosphere, and solar radiation pressure for a sphere under constant illumination. The purpose of this Engineering Note is to demonstrate the performance of TS integration in an operational trajectory analysis code and to compare it with a standard method, eighth-order RKF. Results show that TS is 16.6 times faster on average and is more accurate in 87.5% of the cases presented.
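A toy illustration of the TS approach (not SNAP's implementation): for the oscillator x' = v, v' = -x the Taylor coefficients obey a simple recurrence, so high orders and large time steps come cheaply, which is the property the Engineering Note exploits for trajectory propagation:

```python
import numpy as np

def ts_step(x, v, h, order=12):
    """One Taylor-series step for x' = v, v' = -x, using the recurrence
    x_{k+1} = v_k / (k+1), v_{k+1} = -x_k / (k+1) for the series coefficients."""
    xc, vc = [x], [v]
    for k in range(order):
        xc.append(vc[k] / (k + 1))
        vc.append(-xc[k] / (k + 1))
    return (sum(c * h**k for k, c in enumerate(xc)),
            sum(c * h**k for k, c in enumerate(vc)))

x, v, h = 1.0, 0.0, 0.5          # large steps are fine at high order
for _ in range(20):
    x, v = ts_step(x, v, h)
print(x, np.cos(20 * h))          # near machine-precision agreement with cos(t)
```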
Zatzick, Douglas; Rivara, Frederick; Jurkovich, Gregory; Russo, Joan; Trusz, Sarah Geiss; Wang, Jin; Wagner, Amy; Stephens, Kari; Dunn, Chris; Uehara, Edwina; Petrie, Megan; Engel, Charles; Davydow, Dimitri; Katon, Wayne
2011-01-01
Objective To develop and implement a stepped collaborative care intervention targeting PTSD and related co-morbidities to enhance the population impact of early trauma-focused interventions. Method We describe the design and implementation of the Trauma Survivors Outcomes & Support Study (TSOS II). An interdisciplinary treatment development team was comprised of trauma surgical, clinical psychiatric and mental health services “change agents” who spanned the boundaries between front-line trauma center clinical care and acute care policy. Mixed method clinical epidemiologic and clinical ethnographic studies informed the development of PTSD screening and intervention procedures. Results Two-hundred and seven acutely injured trauma survivors with high early PTSD symptom levels were randomized into the study. The stepped collaborative care model integrated care management (i.e., posttraumatic concern elicitation and amelioration, motivational interviewing, and behavioral activation) with cognitive behavioral therapy and pharmacotherapy targeting PTSD. The model was feasibly implemented by front-line acute care MSW and ARNP providers. Conclusions Stepped care protocols targeting PTSD may enhance the population impact of early interventions developed for survivors of individual and mass trauma by extending the reach of collaborative care interventions to acute care medical settings and other non-specialty posttraumatic contexts. PMID:21596205
Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems
NASA Technical Reports Server (NTRS)
Cerro, J. A.; Scotti, S. J.
1991-01-01
Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.
Step-by-step integration for fractional operators
NASA Astrophysics Data System (ADS)
Colinas-Armijo, Natalia; Di Paola, Mario
2018-06-01
In this paper, an approach based on the definition of the Riemann-Liouville fractional operators is proposed in order to provide a different discretisation technique as an alternative to the Grünwald-Letnikov operators. The proposed Riemann-Liouville discretisation consists of performing step-by-step integration based upon the discretisation of the function f(t). It is shown that, as f(t) is discretised as a stepwise or piecewise function, the Riemann-Liouville fractional integral and derivative are governed by operators very similar to the Grünwald-Letnikov operators. In order to show the accuracy and capabilities of the proposed Riemann-Liouville discretisation technique and the Grünwald-Letnikov discrete operators, both techniques have been applied to: unit step functions, exponential functions and sample functions of white noise.
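A minimal sketch of the Grünwald-Letnikov discrete fractional integral (the baseline the paper compares against), applied to the unit step and checked against the exact result t^alpha / Gamma(alpha+1); the scheme is first-order accurate in the grid spacing h:

```python
import numpy as np
from math import gamma

def gl_fractional_integral(f_vals, alpha, h):
    """Grünwald-Letnikov fractional integral of order alpha > 0 on a uniform grid.
    Weights w_k = (-1)^k * binom(-alpha, k) via the recurrence
    w_k = w_{k-1} * (1 - (1 - alpha)/k), w_0 = 1."""
    n = len(f_vals)
    w = np.empty(n); w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1 - (1 - alpha) / k)
    return h**alpha * np.array([w[:j+1][::-1] @ f_vals[:j+1] for j in range(n)])

alpha, h = 0.5, 0.01
t = np.arange(1, 501) * h
num = gl_fractional_integral(np.ones_like(t), alpha, h)
exact = t**alpha / gamma(alpha + 1)       # exact fractional integral of 1
print(np.max(np.abs(num - exact)))        # small O(h) discretisation error
```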
A progress report on estuary modeling by the finite-element method
Gray, William G.
1978-01-01
Various schemes are investigated for finite-element modeling of two-dimensional surface-water flows. The first schemes investigated combine finite-element spatial discretization with split-step time stepping schemes that have been found useful in finite-difference computations. Because of the large number of numerical integrations performed in space and the large sparse matrices solved, these finite-element schemes were found to be economically uncompetitive with finite-difference schemes. A very promising leapfrog scheme is proposed which, when combined with a novel very fast spatial integration procedure, eliminates the need to solve any matrices at all. Additional problems attacked included proper propagation of waves and proper specification of the normal flow-boundary condition. This report indicates work in progress and does not come to a definitive conclusion as to the best approach for finite-element modeling of surface-water problems. The results presented represent findings obtained between September 1973 and July 1976. (Woodard-USGS)
Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition
NASA Technical Reports Server (NTRS)
Kenwright, David; Lane, David
1995-01-01
An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
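The analytic point location that replaces Newton-Raphson iteration reduces to one 3x3 linear solve per tetrahedron; a minimal sketch, where the barycentric weights also serve to interpolate vertex velocities linearly:

```python
import numpy as np

def barycentric(tet, p):
    """Barycentric coordinates of point p in tetrahedron `tet` (4x3 vertex array).
    p lies inside iff all four coordinates are >= 0."""
    T = np.column_stack((tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]))
    l = np.linalg.solve(T, p - tet[0])    # one 3x3 solve, no iteration needed
    return np.array([1 - l.sum(), *l])

tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
w = barycentric(tet, np.array([0.25, 0.25, 0.25]))
print(w, (w >= 0).all())   # inside; w @ vertex_velocities interpolates linearly
```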
NASA Technical Reports Server (NTRS)
Cooke, C. H.; Blanchard, D. K.
1975-01-01
A finite element algorithm for solution of fluid flow problems characterized by the two-dimensional compressible Navier-Stokes equations was developed. The program is intended for viscous compressible high speed flow; hence, primitive variables are utilized. The physical solution was approximated by trial functions which at a fixed time are piecewise cubic on triangular elements. The Galerkin technique was employed to determine the finite-element model equations. A leapfrog time integration is used for marching asymptotically from initial to steady state, with iterated integrals evaluated by numerical quadratures. The nonsymmetric linear systems of equations governing time transition from step-to-step are solved using a rather economical block iterative triangular decomposition scheme. The concept was applied to the numerical computation of a free shear flow. Numerical results of the finite-element method are in excellent agreement with those obtained from a finite difference solution of the same problem.
Investigating the use of a rational Runge Kutta method for transport modelling
NASA Astrophysics Data System (ADS)
Dougherty, David E.
An unconditionally stable explicit time integrator has recently been developed for parabolic systems of equations. This rational Runge Kutta (RRK) method, proposed by Wambecq [1] and Hairer [2], has been applied by Liu et al. [3] to linear heat conduction problems in a time-partitioned solution context. An important practical question is whether the method has application for the solution of (nearly) hyperbolic equations as well. In this paper the RRK method is applied to a nonlinear heat conduction problem, the advection-diffusion equation, and the hyperbolic Buckley-Leverett problem. The method is, indeed, found to be unconditionally stable for the linear heat conduction problem and performs satisfactorily for the nonlinear heat flow case. A heuristic limitation on the utility of RRK for the advection-diffusion equation arises in the Courant number; for the second-order accurate one-step two-stage RRK method, a limiting Courant number of 2 applies. First-order upwinding is not as effective when used with RRK as with Euler one-step methods. The method is found to perform poorly for the Buckley-Leverett problem.
The Crank Nicolson Time Integrator for EMPHASIS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGregor, Duncan Alisdair Odum; Love, Edward; Kramer, Richard Michael Jack
2018-03-01
We investigate the use of implicit time integrators for finite element time domain approximations of Maxwell's equations in vacuum. We discretize Maxwell's equations in time using Crank-Nicolson and in 3D space using compatible finite elements. We solve the system by taking a single step of Newton's method and inverting the Eddy-Current Schur complement allowing for the use of standard preconditioning techniques. This approach also generalizes to more complex material models that can include the Unsplit PML. We present verification results and demonstrate performance at CFL numbers up to 1000.
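For a linear semi-discretization u' = A u (such as the curl-curl system the compatible finite elements produce), Crank-Nicolson amounts to the sketch below; its unconditional stability is what permits the large CFL numbers quoted above. In practice one factorizes or preconditions the left-hand side once rather than re-solving, which is the role of the Schur-complement solve in the report:

```python
import numpy as np

def crank_nicolson(A, u0, dt, nsteps):
    """Crank-Nicolson for u' = A u: (I - dt/2 A) u^{n+1} = (I + dt/2 A) u^n.
    Second-order accurate and unconditionally stable for this linear system."""
    I = np.eye(len(u0))
    lhs = I - 0.5 * dt * A
    rhs = I + 0.5 * dt * A
    u = u0.copy()
    for _ in range(nsteps):
        u = np.linalg.solve(lhs, rhs @ u)
    return u
```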
NASA Technical Reports Server (NTRS)
Ipatov, S. I.; Mather, J. C.
2003-01-01
Using the Bulirsch-Stoer method of integration, we investigated the migration of dust particles under the gravitational influence of all planets, radiation pressure, Poynting-Robertson drag and solar wind drag for β (the ratio of the radiation pressure force to the gravitational force) equal to 0.01, 0.05, 0.1, 0.25, and 0.4. For silicate particles such values of β correspond to diameters of about 40, 9, 4, 2, and 1 microns, respectively [1]. The relative error per integration step was taken to be less than 10^-8. Initial orbits of the particles were close to the orbits of the first numbered main-belt asteroids.
Wunschel, David S; Melville, Angela M; Ehrhardt, Christopher J; Colburn, Heather A; Victry, Kristin D; Antolick, Kathryn C; Wahl, Jon H; Wahl, Karen L
2012-05-07
The investigation of crimes involving chemical or biological agents is infrequent, but presents unique analytical challenges. The protein toxin ricin is encountered more frequently than other agents and is found in the seeds of Ricinus communis, commonly known as the castor plant. Typically, the toxin is extracted from castor seeds utilizing a variety of different recipes that result in varying purity of the toxin. Moreover, these various purification steps can also leave or differentially remove a variety of exogenous and endogenous residual components with the toxin that may indicate the type and number of purification steps involved. We have applied three gas chromatography-mass spectrometry (GC-MS) based analytical methods to measure the variation in seed carbohydrates and castor oil ricinoleic acid, as well as the presence of solvents used for purification. These methods were applied to the same samples prepared using four previously identified toxin preparation methods, starting from four varieties of castor seeds. The individual data sets for seed carbohydrate profiles, ricinoleic acid, or acetone amount each provided information capable of differentiating different types of toxin preparations across seed types. However, the integration of the data sets using multivariate factor analysis provided a clear distinction of all samples based on the preparation method, independent of the seed source. In particular, the abundance of mannose, arabinose, fucose, ricinoleic acid, and acetone were shown to be important differentiating factors. These complementary tools provide a more confident determination of the method of toxin preparation than would be possible using a single analytical method.
Integrating Behavioral Health Support into a Pediatric Setting: What Happens in the Exam Room?
ERIC Educational Resources Information Center
Cuno, Kate; Krug, Laura M.; Umylny, Polina
2015-01-01
This article presents an overview of the Healthy Steps for Young Children (Healthy Steps) program at Montefiore Medical Center, in the Bronx, NY. The authors review the theoretical underpinnings of this national program for the promotion of early childhood mental health. The Healthy Steps program at Montefiore is integrated into outpatient…
Extending the Universal One-Loop Effective Action: heavy-light coefficients
Ellis, Sebastian A. R.; Quevillon, Jérémie; You, Tevong; ...
2017-08-16
The Universal One-Loop Effective Action (UOLEA) is a general expression for the effective action obtained by evaluating in a model-independent way the one-loop expansion of a functional path integral. It can also be used to match UV theories to their low-energy EFTs more efficiently by avoiding redundant steps in the application of functional methods, simplifying the process of obtaining Wilson coefficients of operators up to dimension six. In addition to loops involving only heavy fields, matching may require the inclusion of loops containing both heavy and light particles. Here we use the recently-developed covariant diagram technique to extend the UOLEA to include heavy-light terms which retain the same universal structure as the previously-derived heavy-only terms. As an example of its application, we integrate out a heavy singlet scalar with a linear coupling to a light doublet Higgs. The extension presented here is a first step towards completing the UOLEA to incorporate all possible structures encountered in a covariant derivative expansion of the one-loop path integral.
Hughson, Michael D; Cruz, Thayana A; Carvalho, Rimenys J; Castilho, Leda R
2017-07-01
The pressures to efficiently produce complex biopharmaceuticals at reduced costs are driving the development of novel techniques, such as in downstream processing with straight-through processing (STP). This method involves directly and sequentially purifying a particular target with minimal holding steps. This work developed and compared six different 3-step STP strategies, combining membrane adsorbers, monoliths, and resins, to purify a large, complex, and labile glycoprotein from Chinese hamster ovary cell culture supernatant. The best performing pathway was cation exchange chromatography to hydrophobic interaction chromatography to affinity chromatography with an overall product recovery of up to 88% across the process and significant clearance of DNA and protein impurities. This work establishes a platform and considerations for the development of STP of biopharmaceutical products and highlights its suitability for integration with single-use technologies and continuous production methods.
NASA Astrophysics Data System (ADS)
Baturin, A. P.; Votchel, I. A.
2013-12-01
The problem of simulating asteroid motion has been considered. At present this simulation is performed by means of numerical integration taking into account the perturbations from the planets and the Moon using planetary ephemerides (DE405, DE422, etc.). All these ephemerides contain Chebyshev polynomial coefficients for a large number of equal interpolation intervals. However, the ephemerides have been constructed to keep, at the junctions of adjacent intervals, continuity of only the coordinates and their first derivatives (and only in 16-digit decimal format, corresponding to 64-bit floating-point numbers). The second- and higher-order derivatives have breaks at these junctions. These breaks, if they fall within an integration step, decrease the accuracy of numerical integration. In 34-digit format (128-bit floating-point numbers) the coordinates and their first derivatives also have breaks (at the 15th-16th decimal digit) at the interpolation-interval junctions. Two ways of eliminating the influence of such breaks have been considered. The first is a "smoothing" of the ephemerides so that the planets' coordinates and their derivatives up to some order become continuous at the junctions. The smoothing algorithm is based on conditional least-squares fitting of the Chebyshev polynomial coefficients, the conditions being equality of the coordinates and derivatives up to some order "from the left" and "from the right" at each junction. The algorithm has been applied to smooth the DE430 ephemerides up to the first-order derivatives. The second way is a correction of the integration step so that junctions do not lie within a step and always coincide with its end. This way may be applied only at 16-digit decimal precision because it assumes continuity of the planets' coordinates and their first derivatives. Both ways were applied in forward and backward numerical integration for the asteroids Apophis and 2012 DA14 by means of the 15th- and 31st-order Everhart method at 16- and 34-digit decimal precision, respectively. The DE430 ephemerides (in original and smoothed form) were used for the calculation of perturbations. The results indicate that the integration step correction increases numerical integration accuracy by 3-4 orders of magnitude. If, in addition, the original ephemerides are replaced by the smoothed ones, the accuracy increases by approximately 10 orders of magnitude.
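The step-correction idea fits in a few lines; a sketch for forward integration, assuming junctions every 32 days from a reference epoch (illustrative values, not the actual DE430 interval layout; backward integration is analogous with the previous junction):

```python
def clip_step(t, h, interval=32.0, t0=2451545.0):
    """Shorten a proposed forward step h so the integrator never straddles a
    Chebyshev-interpolation junction; the step then ends exactly on a junction."""
    next_junction = t0 + ((t - t0) // interval + 1) * interval
    return min(h, next_junction - t)
```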
Preserving Simplecticity in the Numerical Integration of Linear Beam Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, Christopher K.
2017-07-01
Presented are mathematical tools and methods for the development of numerical integration techniques that preserve the symplectic condition inherent to mechanics. The intended audience is beam physicists with backgrounds in numerical modeling and simulation, with particular attention to beam optics applications. The paper focuses on Lie methods that are inherently symplectic regardless of the integration accuracy order. Section 2 provides the mathematical tools used in the sequel and necessary for the reader to extend the covered techniques. Section 3 places those tools in the context of charged-particle beam optics; in particular linear beam optics is presented in terms of a Lie algebraic matrix representation. Section 4 presents numerical stepping techniques with particular emphasis on a third-order leapfrog method. Section 5 discusses the modeling of field imperfections with particular attention to the fringe fields of quadrupole focusing magnets. The direct computation of a third order transfer matrix for a fringe field is shown.
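A minimal illustration in one transverse plane (a generic check, not the paper's third-order scheme): a leapfrog drift-kick-drift splitting of a quadrupole yields a transfer matrix that satisfies the symplectic condition M^T J M = J exactly, independent of step size:

```python
import numpy as np

# 2x2 (one transverse plane) drift and thin-lens quadrupole transfer matrices
def drift(L):      return np.array([[1.0, L], [0.0, 1.0]])
def thin_quad(f):  return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

J = np.array([[0.0, 1.0], [-1.0, 0.0]])     # symplectic form

def quad_step(L, k1):
    """Second-order leapfrog splitting: half drift, thin-lens kick, half drift."""
    f = 1.0 / (k1 * L)                        # focal length of the lumped kick
    return drift(L / 2) @ thin_quad(f) @ drift(L / 2)

M = quad_step(0.5, 2.0)
print(np.allclose(M.T @ J @ M, J))            # True: exactly symplectic
```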
Tuominen, Mark; Bal, Mustafa; Russell, Thomas P.; Ursache, Andrei
2007-03-13
Pathways to rapid and reliable fabrication of three-dimensional nanostructures are provided. Simple methods are described for the production of well-ordered, multilevel nanostructures. This is accomplished by patterning block copolymer templates with selective exposure to a radiation source. The resulting multi-scale lithographic template can be treated with post-fabrication steps to produce multilevel, three-dimensional, integrated nanoscale media, devices, and systems.
A pure DNA hydrogel with stable catalytic ability produced by one-step rolling circle amplification.
Huang, Yishun; Xu, Wanlin; Liu, Guoyuan; Tian, Leilei
2017-03-09
A rolling-circle-amplification method was developed to produce DNA hydrogels with horseradish-peroxidase-like catalytic capability. The catalytic hydrogel exhibits highly improved stability at elevated temperatures or during a long-term storage. Integrated with glucose oxidase, the complex hydrogel can be applied to the sensitive and reliable detection of glucose.
Optimising Service Delivery of AAC AT Devices and Compensating AT for Dyslexia.
Roentgen, Uta R; Hagedoren, Edith A V; Horions, Katrien D L; Dalemans, Ruth J P
2017-01-01
To promote successful use of Assistive Technology (AT) supporting Augmentative and Alternative Communication (AAC) and compensating for dyslexia, the last steps of their provision (delivery and instruction, use, maintenance and evaluation) were optimised. In co-creation with all stakeholders, an integral method and tools were developed based on a list of requirements.
ERIC Educational Resources Information Center
Kirschenbaum, Daniel S.; Gierut, Kristen
2013-01-01
Objective: To compare and contrast 5 sets of expert recommendations about the treatment of childhood and adolescent obesity. Method: We reviewed 5 sets of recent expert recommendations: 2007 health care organizations' four stage model, 2007 Canadian clinical practice guidelines, 2008 Endocrine Society recommendations, 2009 seven step model, and…
Floating loop method for cooling integrated motors and inverters using hot liquid refrigerant
Hsu, John S.; Ayers, Curtis W.; Coomer, Chester; Marlino, Laura D.
2007-03-20
A method for cooling vehicle components using the vehicle air conditioning system comprising the steps of: tapping the hot liquid refrigerant of said air conditioning system, flooding a heat exchanger in the vehicle component with said hot liquid refrigerant, evaporating said hot liquid refrigerant into hot vapor refrigerant using the heat from said vehicle component, and returning said hot vapor refrigerant to the hot vapor refrigerant line in said vehicle air conditioning system.
Measurement methods to build up the digital optical twin
NASA Astrophysics Data System (ADS)
Prochnau, Marcel; Holzbrink, Michael; Wang, Wenxin; Holters, Martin; Stollenwerk, Jochen; Loosen, Peter
2018-02-01
The realization of the Digital Optical Twin (DOT), in short the digital representation of the physical state of an optical system, is particularly useful in the context of an automated assembly process for optical systems. During the assembly process, the physical status of the optical system is continuously measured and compared with the digital model. In case of deviations between the physical state and the digital model, the latter is adapted to match the physical state. To reach this goal, in a first step, measurement/characterization technologies have to be identified and evaluated concerning their suitability to generate a precise digital twin of an existing optical system. This paper gives an overview of possible characterization methods and shows first results of evaluated and compared methods (e.g. spot radius, MTF, Zernike polynomials) for creating a DOT. The focus initially lies on the uniqueness of the optimization results as well as on the computational time required for the optimization to reach the characterized system state. Possible sources of error are the measurement accuracy (to characterize the system), the execution time of the measurement, the time needed to map the digital to the physical world (optimization step), as well as the interface possibilities for integrating the measurement tool into an assembly cell. Moreover, it is discussed whether the measurement methods used are suitable for a 'seamless' integration into an assembly cell.
Design, Development and Testing of Web Services for Multi-Sensor Snow Cover Mapping
NASA Astrophysics Data System (ADS)
Kadlec, Jiri
This dissertation presents the design, development and validation of new data integration methods for mapping the extent of snow cover based on open access ground station measurements, remote sensing images, volunteer observer snow reports, and cross country ski track recordings from location-enabled mobile devices. The first step of the data integration procedure includes data discovery, data retrieval, and data quality control of snow observations at ground stations. The WaterML R package developed in this work enables hydrologists to retrieve and analyze data from multiple organizations that are listed in the Consortium of Universities for the Advancement of Hydrologic Sciences Inc (CUAHSI) Water Data Center catalog directly within the R statistical software environment. Using the WaterML R package is demonstrated by running an energy balance snowpack model in R with data inputs from CUAHSI, and by automating uploads of real time sensor observations to CUAHSI HydroServer. The second step of the procedure requires efficient access to multi-temporal remote sensing snow images. The Snow Inspector web application developed in this research enables the users to retrieve a time series of fractional snow cover from the Moderate Resolution Imaging Spectroradiometer (MODIS) for any point on Earth. The time series retrieval method is based on automated data extraction from tile images provided by a Web Map Tile Service (WMTS). The average required time for retrieving 100 days of data using this technique is 5.4 seconds, which is significantly faster than other methods that require the download of large satellite image files. The presented data extraction technique and space-time visualization user interface can be used as a model for working with other multi-temporal hydrologic or climate data WMTS services. The third, final step of the data integration procedure is generating continuous daily snow cover maps. A custom inverse distance weighting method has been developed to combine volunteer snow reports, cross-country ski track reports and station measurements to fill cloud gaps in the MODIS snow cover product. The method is demonstrated by producing a continuous daily time step snow presence probability map dataset for the Czech Republic region. The ability of the presented methodology to reconstruct MODIS snow cover under cloud is validated by simulating cloud cover datasets and comparing estimated snow cover to actual MODIS snow cover. The percent correctly classified indicator showed accuracy between 80 and 90% using this method. Using crowdsourcing data (volunteer snow reports and ski tracks) improves the map accuracy by 0.7-1.2%. The output snow probability map data sets are published online using web applications and web services. Keywords: crowdsourcing, image analysis, interpolation, MODIS, R statistical software, snow cover, snowpack probability, Tethys platform, time series, WaterML, web services, winter sports.
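A minimal sketch of the gap-filling idea in the third step, not the dissertation's exact weighting: cloud-masked (NaN) grid cells are filled by inverse distance weighting of nearby point observations (stations, volunteer reports, ski tracks), with observation coordinates given in pixel units as (col, row):

```python
import numpy as np

def idw_fill(grid, obs_xy, obs_val, power=2.0):
    """Fill NaN cells of a snow-probability grid by inverse distance weighting
    of point observations. obs_xy is an (n, 2) array of (col, row) positions."""
    out = grid.copy()
    ii, jj = np.where(np.isnan(grid))
    for i, j in zip(ii, jj):
        d = np.hypot(obs_xy[:, 0] - j, obs_xy[:, 1] - i)
        w = 1.0 / np.maximum(d, 1e-9) ** power   # guard against zero distance
        out[i, j] = w @ obs_val / w.sum()
    return out
```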
Carraro, Mattia; Park, Albert H; Harrison, Robert V
2016-02-01
Some forms of sensorineural hearing loss involve damage or degenerative changes to the stria vascularis and/or other vascular structures in the cochlea. In animal models, many methods for anatomical assessment of cochlear vasculature exist, each with advantages and limitations. One methodology, corrosion casting, has proved useful in some species; however, in the mouse model this technique is difficult to achieve because digestion of non-vascular tissue results in collapse of the delicate cast specimen. We have developed a partial corrosion cast method that allows visualization of vasculature along much of the cochlear length but maintains some structural integrity of the specimen. We provide a detailed step-by-step description of this novel technique. We give some illustrative examples of the use of the method in mouse models of presbycusis and cytomegalovirus (CMV) infection.
Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore
2011-12-01
Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. The phenomenon in the dental and orthodontic literature of characterizing treatment allocation as random is frequent; however, often the randomization procedures followed are not appropriate. Randomization methods assign, at random, treatment to the trial arms without foreknowledge of allocation by either the participants or the investigators thus reducing selection bias. Randomization entails generation of random allocation, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. Most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons, which make randomization an integral part of solid clinical trial methodology, and presents the main randomization schemes applicable to clinical trials in orthodontics.
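As a concrete example of restricted randomization, a short sketch of permuted-block allocation-list generation (illustrative, not tied to any particular trial); each block contains every arm equally often, which keeps group sizes balanced throughout accrual while the within-block order stays unpredictable:

```python
import random

def permuted_blocks(n, arms=("A", "B"), block_size=4, seed=2011):
    """Generate a restricted randomization list of length n via permuted blocks."""
    rng = random.Random(seed)          # fixed seed so the list is reproducible
    per_arm = block_size // len(arms)
    allocation = []
    while len(allocation) < n:
        block = list(arms) * per_arm   # equal representation within the block
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n]

print(permuted_blocks(12))
```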
NASA Astrophysics Data System (ADS)
Du, Xiaofeng; Song, William; Munro, Malcolm
Web Services, as a new distributed system technology, have been widely adopted by industries in areas such as enterprise application integration (EAI), business process management (BPM), and virtual organisation (VO). However, lack of semantics in the current Web Service standards has been a major barrier to service discovery and composition. In this chapter, we propose an enhanced context-based semantic service description framework (CbSSDF+) that tackles the problem and improves the flexibility of service discovery and the correctness of generated composite services. We also provide an agile transformation method to demonstrate how the various formats of Web Service descriptions on the Web can be managed and renovated step by step into CbSSDF+ based service descriptions without a large amount of engineering work. At the end of the chapter, we evaluate the applicability of the transformation method and the effectiveness of CbSSDF+ through a series of experiments.
Chen, Keping; Blong, Russell; Jacobson, Carol
2003-04-01
This paper develops a GIS-based integrated approach to risk assessment in natural hazards, with reference to bushfires. The challenges for undertaking this approach have three components: data integration, risk assessment tasks, and risk decision-making. First, data integration in GIS is a fundamental step for subsequent risk assessment tasks and risk decision-making. A series of spatial data integration issues within GIS such as geographical scales and data models are addressed. Particularly, the integration of both physical environmental data and socioeconomic data is examined with an example linking remotely sensed data and areal census data in GIS. Second, specific risk assessment tasks, such as hazard behavior simulation and vulnerability assessment, should be undertaken in order to understand complex hazard risks and provide support for risk decision-making. For risk assessment tasks involving heterogeneous data sources, the selection of spatial analysis units is important. Third, risk decision-making concerns spatial preferences and/or patterns, and a multicriteria evaluation (MCE)-GIS typology for risk decision-making is presented that incorporates three perspectives: spatial data types, data models, and methods development. Both conventional MCE methods and artificial intelligence-based methods with GIS are identified to facilitate spatial risk decision-making in a rational and interpretable way. Finally, the paper concludes that the integrated approach can be used to assist risk management of natural hazards, in theory and in practice.
NASA Astrophysics Data System (ADS)
Thiery, Yannick; Reninger, Pierre-Alexandre; Vandromme, Rosalie; Nachbaur, Aude
2017-04-01
Landslide hazard and risk assessment (LHA & LRA) in the French West Indies is a big challenge, because several factors contribute to the high sensitivity of slopes to landslides (complex weathered volcanic grounds, hurricane seasons, heavy land pressure). The initial step is to assess the spatial (and sometimes temporal) probability of failure for a given area, i.e. landslide susceptibility assessment (LSA). LSA can be evaluated by several approaches (knowledge-driven, data-driven, physically based). Physically based approaches calculate a slope stability factor taking into account mechanical, geotechnical, hydrological and hydrogeological parameters. However, the parametrization of these models can be difficult because of a lack of information (soil depths, precipitation records, lithology), sometimes due to difficult ground access, particularly in the French West Indies. Thus, HEM (heliborne electromagnetic survey) appears as a solution to obtain specific information quickly and over large areas. Since 2000, the HEM method has been increasingly used for environmental studies, including geomorphological and hydrogeological studies. In 2013, the French Geological Survey conducted an HEM survey over La Martinique (West Indies). Resistivity contrasts were imaged down to 250-300 meters depth with a horizontal resolution of around 30 m and a vertical resolution between 3 and 8 m. Even if resistivity has no straightforward relationship with the soil mechanical properties that are key parameters for LHA, it provides relevant information on both the thickness and the extension of formations. The aim of this study is to evaluate the contribution of an HEM survey to recognizing landslide-prone areas and landslide-prone formations in a volcanic environment. Once the different formations are defined, they are introduced into a physically based model to assess the susceptibility of slopes for different landslide types under hydrogeological control. The methodology is split into four steps: i. the HEM data are analyzed to assess the location and thickness of lithological and surficial formations, by comparison and correlation with field data and drillings; ii. given the numerous geotechnical parameters required (cohesion, angle of friction, specific bulk unit weight), a sensitivity analysis on representative cross-sections is conducted to obtain the best set of geotechnical parameters adapted to the sites; iii. a geological model integrating the surficial formations and lithology obtained in the first step is built; iv. the geological model is integrated into a physically based model called ALICE® (Assessment of Landslides Induced by Climatic Events) to assess and map the landslide susceptibility of slopes for selected areas. Different simulations are performed, integrating different types of failure (translational and rotational), different resolutions (5 m, 10 m, 25 m) and variations of the groundwater table. For each step, statistical and expert evaluation (calculation of success rates, exchanges between field observations, boreholes and geomorphological features) is conducted, allowing model validation. Finally, although this approach is only a first step, it shows promising results in assessing and forecasting landslide hazard by integration of precipitation thresholds; the contributions and weaknesses of the method are discussed, as well as proposals to improve it.
High-Order Implicit-Explicit Multi-Block Time-stepping Method for Hyperbolic PDEs
NASA Technical Reports Server (NTRS)
Nielsen, Tanner B.; Carpenter, Mark H.; Fisher, Travis C.; Frankel, Steven H.
2014-01-01
This work seeks to explore and improve the current time-stepping schemes used in computational fluid dynamics (CFD) in order to reduce overall computational time. A high-order scheme has been developed using a combination of implicit and explicit (IMEX) time-stepping Runge-Kutta (RK) schemes which increases numerical stability with respect to the time step size, resulting in decreased computational time. The IMEX scheme alone does not yield the desired increase in numerical stability, but when used in conjunction with an overlapping partitioned (multi-block) domain a significant increase in stability is observed. To show this, the Overlapping-Partition IMEX (OP IMEX) scheme is applied to both one-dimensional (1D) and two-dimensional (2D) problems, the nonlinear viscous Burgers' equation and the 2D advection equation, respectively. The method uses two different summation by parts (SBP) derivative approximations, second-order and fourth-order accurate. The Dirichlet boundary conditions are imposed using the Simultaneous Approximation Term (SAT) penalty method. The 6-stage additive Runge-Kutta IMEX time integration schemes are fourth-order accurate in time. An increase in numerical stability 65 times greater than the fully explicit scheme is demonstrated to be achievable with the OP IMEX method applied to the 1D Burgers' equation. Results from the 2D, purely convective, advection equation show stability increases on the order of 10 times the explicit scheme using the OP IMEX method. Also, the domain partitioning method in this work shows potential for breaking the computational domain into manageable sizes such that implicit solutions for full three-dimensional CFD simulations can be computed using direct solving methods rather than the standard iterative methods currently used.
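The stiff/non-stiff splitting idea can be shown with a first-order IMEX step for the 1D advection-diffusion equation (a much simpler cousin of the 6-stage additive RK schemes in the paper): advection is treated explicitly with upwinding, and the stiff diffusion term implicitly:

```python
import numpy as np

def imex_euler(u0, dx, dt, nsteps, a=1.0, nu=0.01):
    """First-order IMEX step for u_t + a u_x = nu u_xx with periodic boundaries;
    explicit upwind advection, implicit diffusion."""
    n = len(u0)
    # periodic second-difference operator for the implicit diffusion solve
    D2 = (-2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
          + np.eye(n, k=n - 1) + np.eye(n, k=-(n - 1))) / dx**2
    A = np.eye(n) - dt * nu * D2
    u = u0.copy()
    for _ in range(nsteps):
        adv = -a * (u - np.roll(u, 1)) / dx     # explicit first-order upwind (a > 0)
        u = np.linalg.solve(A, u + dt * adv)    # implicit diffusion
    return u
```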
SIVA/DIVA- INITIAL VALUE ORDINARY DIFFERENTIAL EQUATION SOLUTION VIA A VARIABLE ORDER ADAMS METHOD
NASA Technical Reports Server (NTRS)
Krogh, F. T.
1994-01-01
The SIVA/DIVA package is a collection of subroutines for the solution of ordinary differential equations. There are versions for single-precision and double-precision arithmetic. The routines are applicable to stiff or nonstiff differential equations of first or second order. SIVA/DIVA requires fewer evaluations of derivatives than other variable-order Adams predictor-corrector methods. There is an option for the direct integration of second-order equations, which can make integration of trajectory problems significantly more efficient. Other capabilities of SIVA/DIVA include: monitoring a user-supplied function which can be separate from the derivative; dynamically controlling the step size; displaying or not displaying output at initial, final, and step-size-change points; saving the estimated local error; and reverse communication, where subroutines return to the user for output or computation of derivatives instead of automatically performing calculations. The user must supply SIVA/DIVA with: 1) the number of equations; 2) initial values for the dependent and independent variables, integration stepsize, error tolerance, etc.; and 3) the driver program and operational parameters necessary for subroutine execution. SIVA/DIVA contains an extensive diagnostic message library should errors occur during execution. SIVA/DIVA is written in FORTRAN 77 for batch execution and is machine independent. It has a central memory requirement of approximately 120K 8-bit bytes. This program was developed in 1983 and last updated in 1987.
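As a hedged illustration of the Adams predictor-corrector family that SIVA/DIVA generalizes (with variable order and step), a fixed-order, fixed-step PECE cycle looks like this; the history seeding and all names are illustrative:

```python
def pece_step(f, t, y, f_prev, dt):
    """One second-order Adams PECE step for y' = f(t, y).
    f_prev is the derivative at the previous mesh point."""
    f_n = f(t, y)
    y_pred = y + dt * (1.5 * f_n - 0.5 * f_prev)      # AB2 predictor
    y_new = y + 0.5 * dt * (f_n + f(t + dt, y_pred))  # AM2 corrector
    return y_new, f_n  # f_n becomes f_prev on the next step

# Example: y' = -y; the history value is seeded with a crude
# backward-Euler estimate of y at t - dt.
f = lambda t, y: -y
t, dt, y = 0.0, 0.1, 1.0
f_prev = f(t - dt, y + dt * y)
for _ in range(10):
    y, f_prev = pece_step(f, t, y, f_prev, dt)
    t += dt
```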
Innovative method and equipment for personalized ventilation.
Kalmár, F
2015-06-01
At the University of Debrecen, a new method and equipment for personalized ventilation has been developed. This equipment makes it possible to change the airflow direction during operation with a time frequency chosen by the user. The developed office desk with integrated air ducts and control system permits ventilation with 100% outdoor air, 100% recirculated air, or a mix of outdoor and recirculated air in a relative proportion set by the user. It was shown that better comfort can be assured in hot environments if the fresh airflow direction is variable. Analyzing the time step of airflow direction changing, it was found that women prefer smaller time steps and their votes related to thermal comfort sensation are higher than men's votes. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Xiao, W; Rank, G H
1989-03-15
The yeast SMR1 gene was used as a dominant resistance-selectable marker for industrial yeast transformation and for targeting integration of an economically important gene at the homologous ILV2 locus. A MEL1 gene, which codes for alpha-galactosidase, was inserted into a dispensable upstream region of SMR1 in vitro; different treatments of the plasmid (pWX813) prior to transformation resulted in 3' end, 5' end and replacement integrations that exhibited distinct integrant structures. One-step replacement within a nonessential region of the host genome generated a stable integration of MEL1 devoid of bacterial plasmid DNA. Using this method, we have constructed several alpha-galactosidase positive industrial Saccharomyces strains. Our study provides a general method for stable gene transfer in most industrial Saccharomyces yeasts, including those used in the baking, brewing (ale and lager), distilling, wine and sake industries, with solely nucleotide sequences of interest. The absence of bacterial DNA in the integrant structure facilitates the commercial application of recombinant DNA technology in the food and beverage industry.
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
An integral equation formulation for rigid bodies in Stokes flow in three dimensions
NASA Astrophysics Data System (ADS)
Corona, Eduardo; Greengard, Leslie; Rachh, Manas; Veerapaneni, Shravan
2017-03-01
We present a new derivation of a boundary integral equation (BIE) for simulating the three-dimensional dynamics of arbitrarily-shaped rigid particles of genus zero immersed in a Stokes fluid, on which are prescribed forces and torques. Our method is based on a single-layer representation and leads to a simple second-kind integral equation. It avoids the use of auxiliary sources within each particle that play a role in some classical formulations. We use a spectrally accurate quadrature scheme to evaluate the corresponding layer potentials, so that only a small number of spatial discretization points per particle are required. The resulting discrete sums are computed in O (n) time, where n denotes the number of particles, using the fast multipole method (FMM). The particle positions and orientations are updated by a high-order time-stepping scheme. We illustrate the accuracy, conditioning and scaling of our solvers with several numerical examples.
Automatic analysis of quantitative NMR data of pharmaceutical compound libraries.
Liu, Xuejun; Kolpak, Michael X; Wu, Jiejun; Leo, Gregory C
2012-08-07
In drug discovery, chemical library compounds are usually dissolved in DMSO at a certain concentration and then distributed to biologists for target screening. Quantitative ¹H NMR (qNMR) is the preferred method for the determination of the actual concentrations of compounds, because the relative single-proton peak areas of two chemical species represent the relative molar concentrations of the two compounds, that is, the compound of interest and a calibrant. Thus, an analyte concentration can be determined using a calibration compound at a known concentration. One particularly time-consuming step in the qNMR analysis of compound libraries is the manual integration of peaks. This report presents an automated method for performing this task without prior knowledge of compound structures and by using an external calibration spectrum. The script for automated integration is fast and adaptable to large-scale data sets, eliminating the need for manual integration in ~80% of the cases.
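A sketch of the core qNMR arithmetic, assuming already-phased, baseline-corrected spectra on a ppm axis; the function names, proton counts and integration bounds are hypothetical, and the paper's script additionally automates the peak picking itself:

```python
import numpy as np
from scipy.integrate import trapezoid

def peak_area(ppm, intensity, lo, hi):
    """Trapezoidal area of a 1H peak between two chemical-shift bounds.
    abs() because ppm axes are conventionally stored in descending order."""
    mask = (ppm >= lo) & (ppm <= hi)
    return abs(trapezoid(intensity[mask], ppm[mask]))

def analyte_conc(area_analyte, n_h_analyte, area_cal, n_h_cal, conc_cal):
    """qNMR ratio: per-proton peak areas scale with molar concentration,
    so the analyte concentration follows from a calibrant of known
    concentration measured under matched acquisition conditions."""
    return conc_cal * (area_analyte / n_h_analyte) / (area_cal / n_h_cal)
```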
Integrated Project Management: A Case Study in Integrating Cost, Schedule, Technical, and Risk Areas
NASA Technical Reports Server (NTRS)
Smith, Greg
2004-01-01
This viewgraph presentation describes a case study as a model for integrated project management. The ISS Program Office (ISSPO) developed replacement fluid filtration cartridges in house for the International Space Station (ISS). The presentation includes a step-by-step procedure and organizational charts for how the fluid filtration problem was approached.
Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak
2016-01-01
Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054
Treating electron transport in MCNP™
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, H.G.
1996-12-31
The transport of electrons and other charged particles is fundamentally different from that of neutrons and photons. A neutron in aluminum slowing down from 0.5 MeV to 0.0625 MeV will have about 30 collisions; a photon will have fewer than ten. An electron with the same energy loss will undergo 10^5 individual interactions. This great increase in computational complexity makes a single-collision Monte Carlo approach to electron transport unfeasible for many situations of practical interest. Considerable theoretical work has been done to develop a variety of analytic and semi-analytic multiple-scattering theories for the transport of charged particles. The theories used in the algorithms in MCNP are the Goudsmit-Saunderson theory for angular deflections, the Landau theory of energy-loss fluctuations, and the Blunck-Leisegang enhancements of the Landau theory. In order to follow an electron through a significant energy loss, it is necessary to break the electron's path into many steps. These steps are chosen to be long enough to encompass many collisions (so that multiple-scattering theories are valid) but short enough that the mean energy loss in any one step is small (for the approximations in the multiple-scattering theories). The energy loss and angular deflection of the electron during each step can then be sampled from probability distributions based on the appropriate multiple-scattering theories. This subsumption of the effects of many individual collisions into single steps that are sampled probabilistically constitutes the "condensed history" Monte Carlo method. This method is exemplified in the ETRAN series of electron/photon transport codes. The ETRAN codes are also the basis for the Integrated TIGER Series, a system of general-purpose, application-oriented electron/photon transport codes. The electron physics in MCNP is similar to that of the Integrated TIGER Series.
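A toy sketch of the condensed-history loop described above, with Gaussian placeholders where a production code such as MCNP samples the Goudsmit-Saunderson and Landau/Blunck-Leisegang distributions; the step sizing and all numbers are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

def condensed_history(e0_mev, e_cut_mev, step_frac=0.05):
    """Toy condensed-history loop: each step lumps many individual
    collisions into one sampled energy loss and one sampled net
    angular deflection. Gaussians stand in for the Landau and
    Goudsmit-Saunderson distributions a real code would sample."""
    e, theta = e0_mev, 0.0
    while e > e_cut_mev:
        de_mean = step_frac * e                    # mean loss for this step
        e -= abs(rng.normal(de_mean, 0.2 * de_mean))
        theta += rng.normal(0.0, np.radians(2.0))  # accumulated deflection
    return theta

print(condensed_history(0.5, 0.0625))
```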
Dynamic response and stability of a gas-lubricated Rayleigh-step pad
NASA Technical Reports Server (NTRS)
Cheng, C.; Cheng, H. S.
1973-01-01
The quasi-static pressure characteristics of a gas-lubricated thrust bearing with shrouded Rayleigh-step pads are determined for a time-varying film thickness. The axial response of the thrust bearing to an axial forcing function or an axial rotor disturbance is investigated by treating the gas film as a spring having nonlinear restoring and damping forces. These forces are related to the film thickness by a power relation. The nonlinear equation of motion in the axial mode is solved by the Ritz-Galerkin method as well as by direct numerical integration. Results of the nonlinear response by both methods are compared with the response based on the linearized equation. Further, the gas-film instability of an infinitely wide Rayleigh-step thrust pad is determined by solving the transient Reynolds equation coupled with the equation of motion of the pad. Results show that the Rayleigh-step geometry is very stable for bearing numbers Λ up to 50. The stability threshold is shown to exist only for ultrahigh values of Λ (equal to or greater than 100), where stability can be achieved by making the mass heavier than the critical mass.
Weare, Jonathan; Dinner, Aaron R.; Roux, Benoît
2016-01-01
A multiple time-step integrator based on a dual Hamiltonian and a hybrid method combining molecular dynamics (MD) and Monte Carlo (MC) is proposed to sample systems in the canonical ensemble. The Dual Hamiltonian Multiple Time-Step (DHMTS) algorithm is based on two similar Hamiltonians: a computationally expensive one that serves as a reference and a computationally inexpensive one to which the workload is shifted. The central assumption is that the difference between the two Hamiltonians is slowly varying. Earlier work has shown that such dual Hamiltonian multiple time-step schemes effectively precondition nonlinear differential equations for dynamics by reformulating them into a recursive root finding problem that can be solved by propagating a correction term through an internal loop, analogous to RESPA. Of special interest in the present context, a hybrid MD-MC version of the DHMTS algorithm is introduced to enforce detailed balance via a Metropolis acceptance criterion and ensure consistency with the Boltzmann distribution. The Metropolis criterion suppresses the discretization errors normally associated with the propagation according to the computationally inexpensive Hamiltonian, treating the discretization error as an external work. Illustrative tests are carried out to demonstrate the effectiveness of the method. PMID:26918826
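A minimal sketch of the hybrid MD-MC acceptance step, assuming the energies under both Hamiltonians are available at the endpoints of a propagated segment; this simplified work-based criterion is an illustration of the idea, not the paper's exact expression:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_accept(h_ref_old, h_ref_new, h_cheap_old, h_cheap_new, beta):
    """Accept or reject a segment propagated under the inexpensive
    Hamiltonian. Testing on the change of (H_ref - H_cheap) treats the
    discretization/model error as external work, so accepted states
    remain consistent with the Boltzmann distribution of the reference
    Hamiltonian."""
    work = (h_ref_new - h_cheap_new) - (h_ref_old - h_cheap_old)
    return rng.random() < min(1.0, np.exp(-beta * work))
```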
Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics
NASA Astrophysics Data System (ADS)
d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.
2018-05-01
Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
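For contrast with the pseudo-symplectic schemes above (which preserve |m| and energy to high order without projection), the naive alternative is a standard RK4 step followed by renormalization; this sketch uses a dimensionless Landau-Lifshitz form with an illustrative damping constant:

```python
import numpy as np

def ll_rhs(m, h_eff, alpha=0.02):
    """Dimensionless Landau-Lifshitz right-hand side: precession about
    the effective field plus transverse damping (alpha illustrative)."""
    mxh = np.cross(m, h_eff)
    return -mxh - alpha * np.cross(m, mxh)

def rk4_step_renormalized(m, h_eff, dt):
    """Classical RK4 step followed by projection back to |m| = 1."""
    k1 = ll_rhs(m, h_eff)
    k2 = ll_rhs(m + 0.5 * dt * k1, h_eff)
    k3 = ll_rhs(m + 0.5 * dt * k2, h_eff)
    k4 = ll_rhs(m + dt * k3, h_eff)
    m_new = m + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return m_new / np.linalg.norm(m_new)

m = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    m = rk4_step_renormalized(m, h_eff=np.array([0.0, 0.0, 1.0]), dt=0.01)
```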
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
Symmetry-preserving contact interaction model for heavy-light mesons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serna, F. E.; Brito, M. A.; Krein, G.
2016-01-22
We use a symmetry-preserving regularization method of ultraviolet divergences in a vector-vector contact interaction model for low-energy QCD. The contact interaction is a representation of the nonperturbative kernels used in Dyson-Schwinger and Bethe-Salpeter equations. The regularization method is based on a subtraction scheme that avoids standard steps in the evaluation of divergent integrals that invariably lead to symmetry violation. Aiming at the study of heavy-light mesons, we have applied the method to the pseudoscalar π and K mesons. We have solved the Dyson-Schwinger equation for the u, d and s quark propagators, and obtained the bound-state Bethe-Salpeter amplitudes in a way that the Ward-Green-Takahashi identities reflecting global symmetries of the model are satisfied for arbitrary routing of the momenta running in loop integrals.
Integral equation methods for vesicle electrohydrodynamics in three dimensions
NASA Astrophysics Data System (ADS)
Veerapaneni, Shravan
2016-12-01
In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
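The stiffness-avoiding idea extends the scalar IMEX sketch given earlier to operators: treat the stiff linear term with one linear solve per step. A generic sketch, assuming a dense matrix L standing in for the spectrally discretized stiff operator; this is not the paper's vesicle solver, only the time-stepping pattern it uses:

```python
import numpy as np

def semi_implicit_step(u, dt, L, nonlinear):
    """First-order IMEX step for u' = L u + N(u): the stiff linear
    operator is treated implicitly (one linear solve per step), the
    nonlinear term explicitly, avoiding the severe explicit dt limit."""
    n = len(u)
    return np.linalg.solve(np.eye(n) - dt * L, u + dt * nonlinear(u))

# Stiff 1D diffusion plus a mild cubic nonlinearity on a fine grid;
# an explicit scheme would need dt on the order of dx**2 here.
n = 64
dx = 1.0 / n
off = np.ones(n - 1)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
u = np.sin(np.pi * np.linspace(0.0, 1.0, n))
u = semi_implicit_step(u, dt=0.01, L=L, nonlinear=lambda v: -v**3)
```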
ERIC Educational Resources Information Center
Lane, Kathleen Lynne; Oakes, Wendy Peia; Jenkins, Abbie; Menzies, Holly Mariah; Kalberg, Jemma Robertson
2014-01-01
Comprehensive, integrated, three-tiered models are context specific and developed by school-site teams according to the core values held by the school community. In this article, the authors provide a step-by-step, team-based process for designing comprehensive, integrated, three-tiered models of prevention that integrate academic, behavioral, and…
A protein interaction network analysis for yeast integral membrane protein.
Shi, Ming-Guang; Huang, De-Shuang; Li, Xue-Ling
2008-01-01
Although the yeast Saccharomyces cerevisiae is the best-characterized single-celled eukaryote, the vast number of protein-protein interactions of its integral membrane proteins has not been characterized experimentally. Here, based on the kernel method of Greedy Kernel Principal Component Analysis plus Linear Discriminant Analysis, we identify 300 protein-protein interactions involving 189 membrane proteins, yielding a highly connected protein-protein interaction network. Furthermore, we study the global topological features of the integral membrane protein network of Saccharomyces cerevisiae. These results give a comprehensive description of the protein-protein interactions of integral membrane proteins and reveal the global topology and robustness of the interactome network at a system level. This work represents an important step towards a comprehensive understanding of yeast protein interactions.
Subgroup conflicts? Try the psychodramatic "double triad method".
Verhofstadt-Denève, Leni M F
2012-04-01
The present article suggests the application of a psychodramatic action method for tackling subgroup conflicts in which the direct dialogue between representatives of two opposing subgroups is prepared step by step through an indirect dialogue strategy within two triads, a strategy known as the Double Triad Method (DTM). In order to achieve integration in the group as a whole, it is important that all the members of both subgroups participate actively during the entire process. The first part of the article briefly explores the theoretical background, with a special emphasis on the Phenomenological-Dialectical Personality Model (Phe-Di PModel). In the second part, the DTM procedure is systematically described through its five action stages, each accompanied with 1) a spatial representation of the consecutive actions, 2) some illustrative statements for each stage, and 3) a theoretical interpretation of the dialectically involved personality dimensions in both protagonists. The article concludes with a discussion and suggestions for more extensive applications of the DTM method, including the question of its relationships to Agazarian's functional subgrouping, psychodrama, and sociodrama.
iMOSFLM: a new graphical interface for diffraction-image processing with MOSFLM
Battye, T. Geoff G.; Kontogiannis, Luke; Johnson, Owen; Powell, Harold R.; Leslie, Andrew G. W.
2011-01-01
iMOSFLM is a graphical user interface to the diffraction data-integration program MOSFLM. It is designed to simplify data processing by dividing the process into a series of steps, which are normally carried out sequentially. Each step has its own display pane, allowing control over parameters that influence that step and providing graphical feedback to the user. Suitable values for integration parameters are set automatically, but additional menus provide a detailed level of control for experienced users. The image display and the interfaces to the different tasks (indexing, strategy calculation, cell refinement, integration and history) are described. The most important parameters for each step and the best way of assessing success or failure are discussed. PMID:21460445
Migration of Dust Particles from Comet 2P Encke
NASA Technical Reports Server (NTRS)
Ipatov, S. I.
2003-01-01
We investigated the migration of dust particles under the gravitational influence of all planets (except Pluto), radiation pressure, Poynting-Robertson drag and solar wind drag for β equal to 0.002, 0.004, 0.01, 0.05, 0.1, 0.2, and 0.4. For silicate particles such values of β correspond to diameters of about 200, 100, 40, 9, 4, 2, and 1 microns, respectively. We used the Bulirsch-Stoer method of integration, and the relative error per integration step was taken to be less than 10^-8. Initial orbits of the particles were close to the orbit of Comet 2P/Encke. We considered particles starting near perihelion (runs denoted as Δt₀ = 0), near aphelion (Δt₀ = 0.5), and also studied initial positions where the comet had moved for P/4 after perihelion passage (such runs are denoted as Δt₀ = 0.25), where P is the period of the comet. The time at which perihelion was passed was varied with a step of 0.1 day for series 'S' and with a step of 1 day for series 'L'. For each β we considered N = 101 particles for 'S' runs and 150 particles for 'L' runs.
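The per-particle force model in such simulations commonly follows the Burns, Lamy & Soter (1979) form: solar gravity reduced by the factor (1 - β) plus Poynting-Robertson drag. A sketch in SI units; solar wind drag, often approximated as an extra fraction of the PR term, is omitted here, and the function name is illustrative:

```python
import numpy as np

GM_SUN = 1.32712440018e20  # m^3 s^-2
C = 299792458.0            # speed of light, m/s

def dust_acceleration(r_vec, v_vec, beta):
    """Solar gravity reduced by the radiation-pressure factor (1 - beta)
    plus Poynting-Robertson drag, in the Burns-Lamy-Soter form."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    gravity = -GM_SUN * (1.0 - beta) / r**2 * rhat
    v_radial = np.dot(v_vec, rhat)
    pr_drag = -beta * GM_SUN / (C * r**2) * (v_radial * rhat + v_vec)
    return gravity + pr_drag
```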
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2015-01-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods of identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognition of nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast the time series of the selected plant variables. In the third step, for identifying the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances of the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to those of the reference, rather than on the values of input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identifications, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to identification of more transients without unfavorable effects are other merits of the proposed identifier.
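A sketch of the ARIMA forecasting stage (the second step) using statsmodels; the synthetic random-walk series and the (2, 1, 1) order are illustrative, not the tuning used for the BNPP variables:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical non-stationary plant variable (a random walk).
y = np.cumsum(np.random.default_rng(2).normal(size=500))

fit = ARIMA(y, order=(2, 1, 1)).fit()  # d=1 is the 'I' differencing step
forecast = fit.forecast(steps=20)      # series fed to the EBP identifier
```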
NASA Technical Reports Server (NTRS)
Barnes, Robert A.; Brown, Steven W.; Lykke, Keith R.; Guenther, Bruce; Xiong, Xiaoxiong (Jack); Butler, James J.
2010-01-01
Traditionally, satellite instruments that measure Earth-reflected solar radiation in the visible and near-infrared wavelength regions have been calibrated for radiance response in a two-step method. In the first step, the spectral response of the instrument is determined using a nearly monochromatic light source, such as a lamp-illuminated monochromator. Such sources only provide a relative spectral response (RSR) for the instrument, since they do not act as calibrated sources of light nor do they typically fill the field-of-view of the instrument. In the second step, the instrument views a calibrated source of broadband light, such as a lamp-illuminated integrating sphere. In the traditional method, the RSR and the sphere spectral radiance are combined and, with the instrument's response, determine the absolute spectral radiance responsivity of the instrument. More recently, an absolute calibration system using widely tunable monochromatic laser systems has been developed. Using these sources, the absolute spectral responsivity (ASR) of an instrument can be determined on a wavelength-by-wavelength basis. From these monochromatic ASRs, the responses of the instrument bands to broadband radiance sources can be calculated directly, eliminating the need for calibrated broadband light sources such as integrating spheres. Here we describe the laser-based calibration and the traditional broadband source-based calibration of the NPP VIIRS sensor, and compare the derived calibration coefficients for the instrument. Finally, we evaluate the impact of the new calibration approach on the on-orbit performance of the sensor.
This report was prepared by the Global Change Research Program (GCRP) in the National Center for Environmental Assessment (NCEA) of the Office of Research and Development (ORD) at the U.S. Environmental Protection Agency (EPA). This draft report is a description of the methods u...
A Three-Step Approach To Model Tree Mortality in the State of Georgia
Qingmin Meng; Chris J. Cieszewski; Roger C. Lowe; Michal Zasada
2005-01-01
Tree mortality is one of the most complex phenomena of forest growth and yield. Many types of factors affect tree mortality, which is considered difficult to predict. This study presents a new systematic approach to simulate tree mortality based on the integration of statistical models and geographical information systems. This method begins with variable preselection...
Management of hazardous medical waste in Croatia.
Marinković, Natalija; Vitale, Ksenija; Janev Holcer, Natasa; Dzakula, Aleksandar; Pavić, Tomo
2008-01-01
This article provides a review of hazardous medical waste production and its management in Croatia. Even though Croatian regulations define all steps in the waste management chain, implementation of those steps is one of the country's greatest issues. Improper practice is evident from the point of waste production to final disposal. The biggest producers of hazardous medical waste are hospitals that do not implement existing legislation, due to the lack of education and funds. Information on quantities, type and flow of medical waste are inadequate, as is sanitary control. We propose an integrated approach to medical waste management based on a hierarchical structure from the point of generation to its disposal. Priority is given to the reduction of the amounts and potential for harm. Where this is not possible, management includes reduction by sorting and separating, pretreatment on site, safe transportation, final treatment and sanitary disposal. Preferred methods should be the least harmful for human health and the environment. Integrated medical waste management could greatly reduce quantities and consequently financial strains. Landfilling is the predominant route of disposal in Croatia, although the authors believe that incineration is the most appropriate method. In a country such as Croatia, a number of small incinerators would be the most economical solution.
NASA Astrophysics Data System (ADS)
Ozhikandathil, Jayan; Badilescu, Simona; Packirisamy, Muthukumaran
2012-10-01
Antibiotics are extensively used in veterinary medicine for the treatment of infectious diseases. The use of antibiotics in the treatment of animals raised for food production has caused public concern, and a rapid screening method became necessary. A novel approach for the detection of antibiotics in milk is reported in this work, using an immunoassay format and the localized surface plasmon resonance (LSPR) property of gold. An antibiotic from the penicillin family, ampicillin, is used for testing. Gold nanostructures deposited on a glass substrate by a novel convective assembly method were heat-treated to form a nanoisland morphology. The Au nanostructures were functionalized and the corresponding antibody was adsorbed from a solution. Solutions with known concentrations of antigen (antibiotic) were subsequently added and the spectral changes were monitored step by step. The Au LSPR band corresponding to the nanoisland structure was found to be suitable for the detection of the antibody-antigen interaction. The detection of ampicillin was successfully demonstrated with the gold nanoislands deposited on a glass substrate. This process was subsequently adapted for the integration of gold nanostructures on a silica-on-silicon waveguide for the purpose of detecting antibiotics.
Integrating medical informatics into the medical undergraduate curriculum.
Khonsari, L S; Fabri, P J
1997-01-01
The advent of healthcare reform and the rapid application of new technologies have resulted in a paradigm shift in medical practice. Integrating medical informatics into the full spectrum of medical education is a vital step toward implementing this new instructional model, a step required for the understanding and practice of modern medicine. We have developed an informatics curriculum, a new educational paradigm, and an intranet-based teaching module which are designed to enhance adult-learning principles, life-long self-education, and evidence-based critical thinking. Thirty-two fourth-year medical students have participated in a one-month, full-time, independent study focused on, but not limited to, four topics: mastering the Windows-based environment, understanding hospital-based information management systems, developing competence in using the internet/intranet and World Wide Web/HTML, and experiencing distance communication and TeleVideo networks. Each student has completed a clinically relevant independent study project utilizing technology mastered during the course. This initial curriculum offering was developed in conjunction with faculty from the College of Medicine, College of Engineering, College of Education, College of Business, College of Public Health, Florida Center of Instructional Technology, James A. Haley Veterans Hospital, Moffitt Cancer Center, Tampa General Hospital, GTE, Westshore Walk-in Clinic (paperless office), and the Florida Engineering Education Delivery System. Our second step toward the distributive integration process was the introduction of medical informatics to first-, second- and third-year medical students. To date, these efforts have focused on undergraduate medical education. Our next step is to offer workshops in informatics to College of Medicine faculty, to residents in postgraduate training programs (GME), and ultimately as a method of distance learning in continuing medical education (CME).
Laser Capture Microdissection of Embryonic Cells and Preparation of RNA for Microarray Assays
Redmond, Latasha C.; Pang, Christopher J.; Dumur, Catherine; Haar, Jack L.; Lloyd, Joyce A.
2014-01-01
In order to compare the global gene expression profiles of different embryonic cell types, it is first necessary to isolate the specific cells of interest. The purpose of this chapter is to provide a step-by-step protocol to perform laser capture microdissection (LCM) on embryo samples and obtain sufficient amounts of high-quality RNA for microarray hybridizations. Using the LCM/microarray strategy on mouse embryo samples has some challenges, because the cells of interest are available in limited quantities. The first step in the protocol is to obtain embryonic tissue, and immediately cryoprotect and freeze it in a cryomold containing Optimal Cutting Temperature freezing media (Sakura Finetek), using a dry ice–isopentane bath. The tissue is then cryosectioned, and the microscope slides are processed to fix, stain, and dehydrate the cells. LCM is employed to isolate specific cell types from the slides, identified under the microscope by virtue of their morphology. Detailed protocols are provided for using the currently available ArcturusXT LCM instrument and CapSure® LCM Caps, to which the selected cells adhere upon laser capture. To maintain RNA integrity, upon removing a slide from the final processing step, or attaching the first cells on the LCM cap, LCM is completed within 20 min. The cells are then immediately recovered from the LCM cap using a denaturing solution that stabilizes RNA integrity. RNA is prepared using standard methods, modified for working with small samples. To ensure the validity of the microarray data, the quality of the RNA is assessed using the Agilent bioanalyzer. Only RNA that is of sufficient integrity and quantity is used to perform microarray assays. This chapter provides guidance regarding troubleshooting and optimization to obtain high-quality RNA from cells of limited availability, obtained from embryo samples by LCM. PMID:24318813
Code of Federal Regulations, 2010 CFR
2010-10-01
... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...
Code of Federal Regulations, 2012 CFR
2012-10-01
... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...
Code of Federal Regulations, 2011 CFR
2011-10-01
... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...
Code of Federal Regulations, 2014 CFR
2014-10-01
... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...
Code of Federal Regulations, 2013 CFR
2013-10-01
... to protect the security and integrity of urine collections? 40.43 Section 40.43 Transportation Office... PROGRAMS Collection Sites, Forms, Equipment and Supplies Used in DOT Urine Collections § 40.43 What steps must operators of collection sites take to protect the security and integrity of urine collections? (a...
Geometrically derived difference formulae for the numerical integration of trajectory problems
NASA Technical Reports Server (NTRS)
Mcleod, R. J. Y.; Sanz-Serna, J. M.
1981-01-01
The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
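For a linear time-invariant system the state-transition-matrix step reduces to a matrix exponential, which is exact at any step size; a sketch of that special case (the method above extends the idea to systems whose time variation is polynomial):

```python
import numpy as np
from scipy.linalg import expm

def stm_step(x, A, dt):
    """Advance x' = A x by the state transition matrix Phi = e^{A dt};
    exact for constant A regardless of the step size dt."""
    return expm(A * dt) @ x

# Harmonic oscillator: the result stays on the unit circle to machine
# precision no matter how large dt is chosen.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([1.0, 0.0])
x = stm_step(x, A, dt=5.0)
```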
Joint Transform Correlation for face tracking: elderly fall detection application
NASA Astrophysics Data System (ADS)
Katz, Philippe; Aron, Michael; Alfalou, Ayman
2013-03-01
In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an application for elderly fall detection. The first reference image is a face detected by means of Haar descriptors, then localized in each new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm. This step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face-tracking step. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
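A sketch of the histogram guard on the reference update, assuming 8-bit grayscale patches; the bin count and acceptance threshold are illustrative, not the values used in the paper:

```python
import numpy as np

def histograms_match(patch_prev, patch_new, bins=32, threshold=0.8):
    """Guard the JTC reference update: accept the new reference patch
    only if its intensity histogram correlates strongly enough with
    the previous one."""
    h_prev, _ = np.histogram(patch_prev, bins=bins, range=(0, 255), density=True)
    h_new, _ = np.histogram(patch_new, bins=bins, range=(0, 255), density=True)
    return np.corrcoef(h_prev, h_new)[0, 1] >= threshold
```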
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
3 Steps to Developing a Tribal Integrated Waste Management Plan (IWMP)
An Integrated Waste Management Plan (IWMP) is the blueprint of a comprehensive waste management program. The steps to developing an IWMP are collect background data, map out the tribal IWMP framework, and write and implement the tribal IWMP.
Study on the Algorithm of Judgment Matrix in Analytic Hierarchy Process
NASA Astrophysics Data System (ADS)
Lu, Zhiyong; Qin, Futong; Jin, Yican
2017-10-01
A new algorithm is proposed for the non-consistent judgment matrix in AHP. First, a primary judgment matrix is generated by pre-ordering the targeted factor set, and a compared matrix is built through the top integral function. Then a relative error matrix is created by comparing the compared matrix with the primary judgment matrix, which is adjusted step by step under the control of the relative error matrix and the matrix's degree of dissimilarity. Lastly, the targeted judgment matrix is generated so as to satisfy the consistency requirement with the least degree of dissimilarity. The feasibility and validity of the proposed method are verified by simulation results.
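For reference, the standard Saaty consistency check that such adjustment schemes aim to satisfy (the paper's own top-integral-function adjustment is not reproduced here); CR < 0.1 is the usual acceptance rule:

```python
import numpy as np

# Saaty's random consistency indices for matrix sizes 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
      7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(a):
    """CI = (lambda_max - n) / (n - 1); CR = CI / RI[n].
    CR < 0.1 is the usual threshold for acceptable consistency."""
    n = a.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(a)))
    return ((lam_max - n) / (n - 1)) / RI[n]

a = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
assert consistency_ratio(a) < 0.1  # nearly consistent judgments
```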
Analysis of High-Throughput ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Daly, Don S.; Zangar, Richard C.
Our research group develops analytical methods and software for the high-throughput analysis of quantitative enzyme-linked immunosorbent assay (ELISA) microarrays. ELISA microarrays differ from DNA microarrays in several fundamental aspects and most algorithms for analysis of DNA microarray data are not applicable to ELISA microarrays. In this review, we provide an overview of the steps involved in ELISA microarray data analysis and how the statistically sound algorithms we have developed provide an integrated software suite to address the needs of each data-processing step. The algorithms discussed are available in a set of open-source software tools (http://www.pnl.gov/statistics/ProMAT).
One-step Ge/Si epitaxial growth.
Wu, Hung-Chi; Lin, Bi-Hsuan; Chen, Huang-Chin; Chen, Po-Chin; Sheu, Hwo-Shuenn; Lin, I-Nan; Chiu, Hsin-Tien; Lee, Chi-Young
2011-07-01
Fabricating a low-cost virtual germanium (Ge) template by epitaxial growth of Ge films on a silicon wafer with a GeₓSi₁₋ₓ (0 < x < 1) graded buffer layer was demonstrated through a facile chemical vapor deposition method, in one step, by decomposing non-hazardous GeO₂ powder under a hydrogen atmosphere without ultra-high-vacuum conditions and then depositing in a low-temperature region. X-ray diffraction analysis shows that the Ge film has an epitaxial relationship along the in-plane direction of Si. The successful growth of epitaxial Ge films on Si substrates demonstrates the feasibility of integrating various functional devices on Ge/Si substrates.
Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems
NASA Technical Reports Server (NTRS)
Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett. A,; Martin, Bryan J.
2004-01-01
Increasing demands on the fidelity of real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous-state systems is presented which applies to these systems, and is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
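A minimal serial sketch of the multi-rate idea, assuming a two-time-scale split with the slow state frozen during the fast substeps; first-order explicit Euler stands in for the paper's multi-step formulas, and all names are illustrative:

```python
def multirate_step(x_slow, x_fast, dt, m, f_slow, f_fast):
    """One serial multi-rate step: the slow state takes a single Euler
    step of size dt while the fast state takes m substeps of dt/m with
    the slow state frozen at its old value."""
    x_slow_new = x_slow + dt * f_slow(x_slow, x_fast)
    h = dt / m
    for _ in range(m):
        x_fast = x_fast + h * f_fast(x_slow, x_fast)
    return x_slow_new, x_fast
```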
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lester, Brian; Scherzinger, William
2017-01-19
Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
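A hedged sketch of the idea using an off-the-shelf trust-region solver in place of the custom scheme above; the toy residual loosely mimics the sharp curvature of a non-quadratic (Hosford-type) yield condition, and all names are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def solve_return_mapping(residual, x0):
    """Solve the return-mapping equations R(x) = 0 with SciPy's
    trust-region-reflective solver instead of plain Newton-Raphson;
    a stand-in for the custom trust-region scheme described above."""
    sol = least_squares(residual, x0, method="trf", xtol=1e-12)
    if not sol.success:
        raise RuntimeError("constitutive update failed to converge")
    return sol.x

# Toy scalar residual with a steep high-order term; Newton from a poor
# starting point can overshoot badly, while the trust region limits
# each update.
r = lambda x: np.array([x[0] ** 9 + x[0] - 1.0])
print(solve_return_mapping(r, np.array([10.0])))
```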
A Multiomics Approach to Identify Genes Associated with Childhood Asthma Risk and Morbidity.
Forno, Erick; Wang, Ting; Yan, Qi; Brehm, John; Acosta-Perez, Edna; Colon-Semidey, Angel; Alvarez, Maria; Boutaoui, Nadia; Cloutier, Michelle M; Alcorn, John F; Canino, Glorisa; Chen, Wei; Celedón, Juan C
2017-10-01
Childhood asthma is a complex disease. In this study, we aim to identify genes associated with childhood asthma through a multiomics "vertical" approach that integrates multiple analytical steps using linear and logistic regression models. In a case-control study of childhood asthma in Puerto Ricans (n = 1,127), we used adjusted linear or logistic regression models to evaluate associations between several analytical steps of omics data, including genome-wide (GW) genotype data, GW methylation, GW expression profiling, cytokine levels, asthma-intermediate phenotypes, and asthma status. At each point, only the top genes/single-nucleotide polymorphisms/probes/cytokines were carried forward for subsequent analysis. In step 1, asthma modified the gene expression-protein level association for 1,645 genes; pathway analysis showed an enrichment of these genes in the cytokine signaling system (n = 269 genes). In steps 2-3, expression levels of 40 genes were associated with intermediate phenotypes (asthma onset age, forced expiratory volume in 1 second, exacerbations, eosinophil counts, and skin test reactivity); of those, methylation of seven genes was also associated with asthma. Of these seven candidate genes, IL5RA was also significant in analytical steps 4-8. We then measured plasma IL-5 receptor α levels, which were associated with asthma age of onset and moderate-severe exacerbations. In addition, in silico database analysis showed that several of our identified IL5RA single-nucleotide polymorphisms are associated with transcription factors related to asthma and atopy. This approach integrates several analytical steps and is able to identify biologically relevant asthma-related genes, such as IL5RA. It differs from other methods that rely on complex statistical models with various assumptions.
NASA Astrophysics Data System (ADS)
Hellmich, S.; Mottola, S.; Hahn, G.; Kührt, E.; Hlawitschka, M.
2014-07-01
Simulations of dynamical processes in planetary systems represent an important tool for studying the orbital evolution of the systems [1--3]. Using modern numerical integration methods, it is possible to model systems containing many thousands of objects over timescales of several hundred million years. However, in general, supercomputers are needed to get reasonable simulation results in acceptable execution times [3]. To exploit the ever-growing computation power of Graphics Processing Units (GPUs) in modern desktop computers, we implemented cuSwift, a library of numerical integration methods for studying long-term dynamical processes in planetary systems. cuSwift can be seen as a re-implementation of the famous SWIFT integrator package written by Hal Levison and Martin Duncan. cuSwift is written in C/CUDA and contains different integration methods for various purposes. So far, we have implemented three algorithms: a 15th-order Radau integrator [4], the Wisdom-Holman Mapping (WHM) integrator [5], and the Regularized Mixed Variable Symplectic (RMVS) Method [6]. These algorithms treat only the planets as mutually gravitationally interacting bodies whereas asteroids and comets (or other minor bodies of interest) are treated as massless test particles which are gravitationally influenced by the massive bodies but do not affect each other or the massive bodies. The main focus of this work is on the symplectic methods (WHM and RMVS) which use a larger time step and thus are capable of integrating many particles over a large time span. As an additional feature, we implemented the non-gravitational Yarkovsky effect as described by M. Brož [7]. With cuSwift, we show that the use of modern GPUs makes it possible to speed up these methods by more than one order of magnitude compared to the single-core CPU implementation, thereby enabling modest workstation computers to perform long-term dynamical simulations. We use these methods to study the influence of the Yarkovsky effect on resonant asteroids. We present first results and compare them with integrations done with the original algorithms implemented in SWIFT in order to assess the numerical precision of cuSwift and to demonstrate the speed-up we achieved using the GPU.
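For orientation, a minimal CPU version of the kind of kick-drift-kick step that such integrators parallelize over test particles is sketched below. The production WHM/RMVS schemes instead use Kepler drifts in canonical heliocentric variables, so this is a structural illustration only.

```python
import numpy as np

G = 1.0  # gravitational constant in code units

def accel(test_pos, planet_pos, planet_mass):
    """Acceleration of each massless test particle from all massive bodies."""
    a = np.zeros_like(test_pos)
    for pos, m in zip(planet_pos, planet_mass):
        d = pos - test_pos                         # (n_test, 3)
        r3 = np.sum(d * d, axis=1) ** 1.5
        a += G * m * d / r3[:, None]
    return a

def leapfrog_step(x, v, planet_pos, planet_mass, dt):
    """Kick-drift-kick: second order and symplectic for a fixed planet field."""
    v = v + 0.5 * dt * accel(x, planet_pos, planet_mass)
    x = x + dt * v
    v = v + 0.5 * dt * accel(x, planet_pos, planet_mass)
    return x, v

# One central mass, one test particle on a circular orbit.
planets = np.array([[0.0, 0.0, 0.0]]); masses = np.array([1.0])
x = np.array([[1.0, 0.0, 0.0]]); v = np.array([[0.0, 1.0, 0.0]])
for _ in range(6283):                              # roughly one orbit at dt=1e-3
    x, v = leapfrog_step(x, v, planets, masses, 1e-3)
print(x)                                           # returns near the starting point
```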
Lewis, Cara C; Scott, Kelli; Marriott, Brigid R
2018-05-16
Tailored implementation approaches are touted as more likely to support the integration of evidence-based practices. However, to our knowledge, few methodologies for tailoring implementations exist. This manuscript will apply a model-driven, mixed methods approach to a needs assessment to identify the determinants of practice, and pilot a modified conjoint analysis method to generate an implementation blueprint using a case example of a cognitive behavioral therapy (CBT) implementation in a youth residential center. Our proposed methodology contains five steps to address two goals: (1) identify the determinants of practice and (2) select and match implementation strategies to address the identified determinants (focusing on barriers). Participants in the case example included mental health therapists and operations staff in two programs of Wolverine Human Services. For step 1, the needs assessment, they completed surveys (clinician N = 10; operations staff N = 58; other N = 7) and participated in focus groups (clinician N = 15; operations staff N = 38) guided by the domains of the Framework for Diffusion [1]. For step 2, the research team conducted mixed methods analyses following the QUAN + QUAL structure for the purpose of convergence and expansion in a connecting process, revealing 76 unique barriers. Step 3 consisted of a modified conjoint analysis. For step 3a, agency administrators prioritized the identified barriers according to feasibility and importance. For step 3b, strategies were selected from a published compilation and rated for feasibility and likelihood of impacting CBT fidelity. For step 4, sociometric surveys informed implementation team member selection and a meeting was held to identify officers and clarify goals and responsibilities. For step 5, blueprints for each of pre-implementation, implementation, and sustainment phases were generated. Forty-five unique strategies were prioritized across the 5 years and three phases representing all nine categories. Our novel methodology offers a relatively low burden collaborative approach to generating a plan for implementation that leverages advances in implementation science including measurement, models, strategy compilations, and methods from other fields.
Clifford, Harry J [Los Alamos, NM
2011-03-22
A method and apparatus for mounting a calibration sphere to a calibration fixture for Coordinate Measurement Machine (CMM) calibration and qualification is described, decreasing the time required for such qualification, thus allowing the CMM to be used more productively. A number of embodiments are disclosed that allow for new and retrofit manufacture to perform as integrated calibration sphere and calibration fixture devices. This invention renders unnecessary the removal of a calibration sphere prior to CMM measurement of calibration features on calibration fixtures, thereby greatly reducing the time spent qualifying a CMM.
R-Based Software for the Integration of Pathway Data into Bioinformatic Algorithms
Kramer, Frank; Bayerlová, Michaela; Beißbarth, Tim
2014-01-01
Putting new findings into the context of available literature knowledge is one approach to deal with the surge of high-throughput data results. Furthermore, prior knowledge can increase the performance and stability of bioinformatic algorithms, for example, methods for network reconstruction. In this review, we examine software packages for the statistical computing framework R, which enable the integration of pathway data for further bioinformatic analyses. Different approaches to integrate and visualize pathway data are identified and packages are stratified concerning their features according to a number of different aspects: data import strategies, the extent of available data, dependencies on external tools, integration with further analysis steps and visualization options are considered. A total of 12 packages integrating pathway data are reviewed in this manuscript. These are supplemented by five R-specific packages for visualization and six connector packages, which provide access to external tools. PMID:24833336
Mousazadeh, Yalda; Jannati, Ali; Jabbari Beiramy, Hossein; AsghariJafarabadi, Mohammad; Ebadi, Ali
2013-01-01
Background: Hospitals, as key actors in health systems, face growing pressures, especially cost cutting, and search for cost-effective approaches to resource management. Downsizing is one of these approaches. This study was conducted to identify the advantages and disadvantages of different methods of hospital downsizing. Methods: The search was conducted in the databases Medlib, SID, PubMed, and Science Direct and with the Google Scholar meta search engine, using the keywords Downsizing, Hospital Downsizing, Hospital Rightsizing, Hospital Restructuring, Staff Downsizing, Hospital Merging, Hospital Reorganization, and their Persian equivalents. The resulting 815 articles were screened and refined step by step. Finally, 27 articles were selected for analysis. Results: Five hospital downsizing methods were identified during the search: reducing the number of employees, reducing the number of beds, outsourcing, integration of hospital units, and combinations of these methods. The most important benefits were cost reduction, increased patient satisfaction, and increased home care and outpatient services. The most important disadvantages included reduced access, a lower rate of hospital admissions, and increased employee workload and dissatisfaction. Conclusion: Each downsizing method has strengths and weaknesses. Using different methods of downsizing according to circumstances, and applying appropriate interventions after implementation, is necessary for improvement. PMID:24688978
López Marzo, Adaris M; Pons, Josefina; Blake, Diane A; Merkoçi, Arben
2013-04-02
Nowadays, the development of systems, devices, or methods that integrate several process steps into one multifunctional step for clinical, environmental, or industrial purposes constitutes a challenge for many ongoing research projects. Here, we present a new integrated paper-based cadmium (Cd(2+)) immunosensing system in lateral flow format, which integrates the sample treatment process with the analyte detection process. The principle of Cd(2+) detection is based on a competitive reaction between the cadmium-ethylenediaminetetraacetic acid-bovine serum albumin-gold nanoparticle (Cd-EDTA-BSA-AuNP) conjugate deposited on the conjugation pad strip and the Cd-EDTA complex formed in the analysis sample, for the same binding sites of the 2A81G5 monoclonal antibody (mAb), which is specific to Cd-EDTA but not to free Cd(2+) and is immobilized onto the test line. This platform operates without any sample pretreatment step for Cd(2+) detection thanks to an extra conjugation pad that ensures Cd(2+) complexation with EDTA and interference masking through ovalbumin (OVA). The detection and quantification limits found for the device were 0.1 and 0.4 ppb, respectively, these being the lowest limits reported up to now for paper-based metal sensors. The accuracy of the device was evaluated by addition of known quantities of Cd(2+) to different drinking water samples and subsequent analysis of the Cd(2+) content. Sample recoveries ranged from 95 to 105%, and the coefficient of variation for the intermediate precision assay was less than 10%. In addition, the results obtained here were compared with those obtained with the well-established inductively coupled plasma emission spectroscopy (ICPES) and by the analysis of certified standard samples.
Gray, Kathleen; Sockolow, Paulina
2016-02-24
Contributing to health informatics research means using conceptual models that are integrative and explain the research in terms of the two broad domains of health science and information science. However, it can be hard for novice health informatics researchers to find exemplars and guidelines in working with integrative conceptual models. The aim of this paper is to support the use of integrative conceptual models in research on information and communication technologies in the health sector, and to encourage discussion of these conceptual models in scholarly forums. A two-part method was used to summarize and structure ideas about how to work effectively with conceptual models in health informatics research that included (1) a selective review and summary of the literature of conceptual models; and (2) the construction of a step-by-step approach to developing a conceptual model. The seven-step methodology for developing conceptual models in health informatics research explained in this paper involves (1) acknowledging the limitations of health science and information science conceptual models; (2) giving a rationale for one's choice of integrative conceptual model; (3) explicating a conceptual model verbally and graphically; (4) seeking feedback about the conceptual model from stakeholders in both the health science and information science domains; (5) aligning a conceptual model with an appropriate research plan; (6) adapting a conceptual model in response to new knowledge over time; and (7) disseminating conceptual models in scholarly and scientific forums. Making explicit the conceptual model that underpins a health informatics research project can contribute to increasing the number of well-formed and strongly grounded health informatics research projects. This explication has distinct benefits for researchers in training, research teams, and researchers and practitioners in information, health, and other disciplines.
Accurate airway segmentation based on intensity structure analysis and graph-cut
NASA Astrophysics Data System (ADS)
Meng, Qier; Kitasaka, Takayuki; Nimura, Yukitaka; Oda, Masahiro; Mori, Kensaku
2016-03-01
This paper presents a novel airway segmentation method based on intensity structure analysis and graph-cut. Airway segmentation is an important step in analyzing chest CT volumes for computerized lung cancer detection, emphysema diagnosis, asthma diagnosis, and pre- and intra-operative bronchoscope navigation. However, obtaining a complete 3-D airway tree structure from a CT volume is quite challenging. Several researchers have proposed automated algorithms based on region growing and machine learning techniques, but these methods fail to detect the peripheral bronchial branches and cause a large amount of leakage. This paper presents a novel approach that permits more accurate extraction of the complex bronchial airway region. Our method is composed of three steps. First, Hessian analysis is utilized to enhance line-like structures in the CT volume, and a multiscale cavity-enhancement filter is then employed to detect cavity-like structures in the enhanced result. In the second step, we utilize a support vector machine (SVM) to construct a classifier for removing the false-positive (FP) regions generated in the first step. Finally, the graph-cut algorithm is utilized to connect all of the candidate voxels to form an integrated airway tree. We applied this method to sixteen 3D chest CT volumes. The results showed that the branch detection rate of this method can reach about 77.7% without leaking into the lung parenchyma areas.
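The first step can be sketched with Gaussian-derivative Hessians and an eigenvalue test for bright line-like structure; the response formula below is an illustrative Frangi-style criterion, not the exact filter of the paper.

```python
import numpy as np
from scipy import ndimage

def hessian_line_filter(volume, sigma=1.0):
    """Enhance bright line-like (tubular) structures via Hessian eigenvalues."""
    # Second derivatives by Gaussian filtering at scale sigma.
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = ndimage.gaussian_filter(volume, sigma, order=order)
    eig = np.linalg.eigvalsh(H.reshape(-1, 3, 3)).reshape(volume.shape + (3,))
    idx = np.argsort(np.abs(eig), axis=-1)            # |l1| <= |l2| <= |l3|
    eig = np.take_along_axis(eig, idx, axis=-1)
    l1, l2, l3 = eig[..., 0], eig[..., 1], eig[..., 2]
    # Bright line: l1 ~ 0 and l2, l3 strongly negative (illustrative criterion).
    return np.where((l2 < 0) & (l3 < 0),
                    np.abs(l2 * l3) / (1.0 + np.abs(l1)), 0.0)

vol = np.zeros((32, 32, 32)); vol[16, 16, :] = 100.0  # a synthetic bright line
print(hessian_line_filter(ndimage.gaussian_filter(vol, 1.0)).max())
```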
A transient response analysis of the space shuttle vehicle during liftoff
NASA Technical Reports Server (NTRS)
Brunty, J. A.
1990-01-01
A transient response method is proposed and formulated for the liftoff analysis of the space shuttle vehicles. It uses a power series approximation with unknown coefficients for the interface forces between the space shuttle and the mobile launch platform. This allows the equations of motion of the two structures to be solved separately, with the unknown coefficients determined at the end of each step by enforcing the interface compatibility conditions between the two structures. Once the unknown coefficients are determined, the total response is computed for that time step. The method is validated by a numerical example of a cantilevered beam and by the liftoff analysis of the space shuttle vehicles. The proposed method is compared to an iterative transient response analysis method used by Martin Marietta for their space shuttle liftoff analysis. It is shown that the proposed method uses less computer time than the iterative method and does not require as small a time step for integration. The space shuttle vehicle model is reduced using two different types of component mode synthesis (CMS) methods, the Lanczos method and the Craig and Bampton CMS method. By varying the cutoff frequency in the Craig and Bampton method, it was shown that the space shuttle interface loads can be computed with reasonable accuracy. Both the Lanczos CMS method and the Craig and Bampton CMS method give similar results. A substantial amount of computer time is saved using the Lanczos CMS method over the Craig and Bampton method. However, when computing a large number of Lanczos vectors, input/output time increased, raising the overall computer time. The application of several liftoff release mechanisms that can be adapted to the proposed method is discussed.
Toward a fully integrated neurostimulator with inductive power recovery front-end.
Mounaïm, Fayçal; Sawan, Mohamad
2012-08-01
In order to investigate new neurostimulation strategies for micturition recovery in spinal-cord-injured patients, custom implantable stimulators are required to carry out chronic animal experiments. However, higher integration of the neurostimulator becomes increasingly necessary for miniaturization purposes, power consumption reduction, and for increasing the number of stimulation channels. As a first step towards total integration, we present in this paper the design of a highly integrated neurostimulator that can be assembled on a 21-mm diameter printed circuit board. The prototype is based on three custom integrated circuits fabricated in high-voltage (HV) CMOS technology, and a low-power small-scale commercially available FPGA. Using a step-down approach where the inductive voltage is left free up to 20 V, the inductive power and data recovery front-end is fully integrated. In particular, the front-end includes a bridge rectifier, a 20-V voltage limiter, an adjustable series regulator (5 to 12 V), a switched-capacitor step-down DC/DC converter (1:3, 1:2, or 2:3 ratio), as well as data recovery. Measurements show that the DC/DC converter achieves more than 86% power efficiency while providing around 3.9 V from a 12-V input at 1-mA load, 1:3 conversion ratio, and 50-kHz switching frequency. With such efficiency, the proposed step-down inductive power recovery topology is more advantageous than its conventional step-up counterpart. Experimental results confirm good overall functionality of the system.
Kellie, John F; Higgs, Richard E; Ryder, John W; Major, Anthony; Beach, Thomas G; Adler, Charles H; Merchant, Kalpana; Knierman, Michael D
2014-07-23
A robust top-down proteomics method is presented for profiling alpha-synuclein species from autopsied human frontal cortex brain tissue from Parkinson's disease (PD) cases and controls. The method was used to test the hypothesis that pathology-associated brain tissue will have a different profile of post-translationally modified alpha-synuclein than the control samples. Validation of the sample processing steps, mass spectrometry-based measurements, and data processing steps was performed. The intact protein quantitation method features extraction and integration of m/z data from each charge state of a detected alpha-synuclein species and fitting of the data to a simple linear model which accounts for concentration and charge state variability. The quantitation method was validated with serial dilutions of intact protein standards. Using the method on the human brain samples, several previously unreported modifications in alpha-synuclein were identified. Low levels of phosphorylated alpha-synuclein were detected in brain tissue fractions enriched for Lewy body pathology and were marginally significant between PD cases and controls (p = 0.03).
Aquifer response to stream-stage and recharge variations. II. Convolution method and applications
Barlow, P.M.; DeSimone, L.A.; Moench, A.F.
2000-01-01
In this second of two papers, analytical step-response functions, developed in the companion paper for several cases of transient hydraulic interaction between a fully penetrating stream and a confined, leaky, or water-table aquifer, are used in the convolution integral to calculate aquifer heads, streambank seepage rates, and bank storage that occur in response to stream-stage fluctuations and basinwide recharge or evapotranspiration. Two computer programs developed on the basis of these step-response functions and the convolution integral are applied to the analysis of hydraulic interaction of two alluvial stream-aquifer systems in the northeastern and central United States. These applications demonstrate the utility of the analytical functions and computer programs for estimating aquifer and streambank hydraulic properties, recharge rates, streambank seepage rates, and bank storage. Analysis of the water-table aquifer adjacent to the Blackstone River in Massachusetts suggests that the very shallow depth of water table and associated thin unsaturated zone at the site cause the aquifer to behave like a confined aquifer (negligible specific yield). This finding is consistent with previous studies that have shown that the effective specific yield of an unconfined aquifer approaches zero when the capillary fringe, where sediment pores are saturated by tension, extends to land surface. Under this condition, the aquifer's response is determined by elastic storage only. Estimates of horizontal and vertical hydraulic conductivity, specific yield, specific storage, and recharge for a water-table aquifer adjacent to the Cedar River in eastern Iowa, determined by the use of analytical methods, are in close agreement with those estimated by use of a more complex, multilayer numerical model of the aquifer. Streambank leakance of the semipervious streambank materials also was estimated for the site. The streambank-leakance parameter may be considered to be a general (or lumped) parameter that accounts not only for the resistance of flow at the river-aquifer boundary, but also for the effects of partial penetration of the river and other near-stream flow phenomena not included in the theoretical development of the step-response functions.
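The convolution idea is compact enough to sketch: the aquifer head is the superposition of stream-stage increments weighted by a step-response function. The exponential step response below is a hypothetical stand-in for the paper's analytical solutions.

```python
import numpy as np

dt = 0.25                                   # days
t = np.arange(0, 60, dt)

# Hypothetical dimensionless step-response function for head at an
# observation well (stands in for the paper's analytical solutions).
step_response = 1.0 - np.exp(-t / 5.0)

# Stream-stage record: a 1 m flood wave between days 10 and 20.
stage = np.where((t > 10) & (t < 20), 1.0, 0.0)

# Superposition: convolve the stage *increments* with the step response.
d_stage = np.diff(stage, prepend=0.0)
head = np.convolve(d_stage, step_response)[: len(t)]
print(head.max())                           # peak head response at the well
```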
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, we establish a three-dimensional landscape model combining DEM and DOM, based on digital photogrammetry, which uses aerial image data to produce the "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic). For buildings and other artificial features of interest to users, we achieve three-dimensional reconstruction of the real features with digital close-range photogrammetry, through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, and image matching. Finally, we combine the three-dimensional background with locally measured real images of these large geographic data and realize the integration of measurable real imagery and the 4D products. The article discusses the whole workflow and technology, achieving three-dimensional reconstruction and the integration of the large-scale three-dimensional landscape and the metric building models.
Latent Heating Retrieval from TRMM Observations Using a Simplified Thermodynamic Model
NASA Technical Reports Server (NTRS)
Grecu, Mircea; Olson, William S.
2003-01-01
A procedure for the retrieval of hydrometeor latent heating from TRMM active and passive observations is presented. The procedure is based on current methods for estimating multiple-species hydrometeor profiles from TRMM observations. The species include cloud water, cloud ice, rain, and graupel (or snow). A three-dimensional wind field is prescribed based on the retrieved hydrometeor profiles, and, assuming a steady state, the sources and sinks in the hydrometeor conservation equations are determined. Then, the momentum and thermodynamic equations, in which the heating and cooling are derived from the hydrometeor sources and sinks, are integrated one step forward in time. The hydrometeor sources and sinks are reevaluated based on the new wind field, and the momentum and thermodynamic equations are integrated one more step. The reevaluation-integration process is repeated until a steady state is reached. The procedure is tested using cloud model simulations. Cloud-model-derived fields are used to synthesize TRMM observations, from which hydrometeor profiles are derived. The procedure is applied to the retrieved hydrometeor profiles, and the latent heating estimates are compared to the actual latent heating produced by the cloud model. Examples of the procedure's application to real TRMM data are also provided.
Variational Algorithms for Test Particle Trajectories
NASA Astrophysics Data System (ADS)
Ellison, C. Leland; Finn, John M.; Qin, Hong; Tang, William M.
2015-11-01
The theory of variational integration provides a novel framework for constructing conservative numerical methods for magnetized test particle dynamics. The retention of conservation laws in the numerical time advance captures the correct qualitative behavior of the long time dynamics. For modeling the Lorentz force system, new variational integrators have been developed that are both symplectic and electromagnetically gauge invariant. For guiding center test particle dynamics, discretization of the phase-space action principle yields multistep variational algorithms, in general. Obtaining the desired long-term numerical fidelity requires mitigation of the multistep method's parasitic modes or applying a discretization scheme that possesses a discrete degeneracy to yield a one-step method. Dissipative effects may be modeled using Lagrange-D'Alembert variational principles. Numerical results will be presented using a new numerical platform that interfaces with popular equilibrium codes and utilizes parallel hardware to achieve reduced times to solution. This work was supported by DOE Contract DE-AC02-09CH11466.
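As a minimal illustration of variational integration, the sketch below uses Störmer-Verlet, which arises from discretizing the action with a simple quadrature; it shows the bounded long-time energy behavior the abstract refers to, though not the gauge-invariant Lorentz-force or guiding-center schemes themselves.

```python
import numpy as np

def verlet(q0, p0, force, dt, n_steps, mass=1.0):
    """Stoermer-Verlet: the one-step method obtained by discretizing the
    action with a trapezoidal quadrature (a variational integrator), hence
    symplectic with bounded long-time energy error."""
    q, p = q0, p0
    traj = [(q, p)]
    for _ in range(n_steps):
        p_half = p + 0.5 * dt * force(q)
        q = q + dt * p_half / mass
        p = p_half + 0.5 * dt * force(q)
        traj.append((q, p))
    return np.array(traj)

# Harmonic oscillator: the energy error stays bounded instead of drifting.
traj = verlet(1.0, 0.0, lambda q: -q, dt=0.1, n_steps=10000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
print(energy.min(), energy.max())   # oscillates near 0.5, no secular drift
```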
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong
2017-12-01
The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, which is of effective guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie points (TPs) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
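The final numerical step can be sketched generically: a sparse, symmetric positive-definite stand-in for the BA normal equations is solved with the conjugate gradient method, avoiding any direct factorization. The matrix below is a random placeholder, not a real photogrammetric system.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

# Hypothetical sparse SPD normal matrix standing in for the BA system N x = b.
n = 2000
A = sp.random(n, n, density=1e-3, random_state=0)
N = (A @ A.T + sp.identity(n) * 10.0).tocsr()   # make it SPD and well conditioned
b = np.ones(n)

x, info = cg(N, b)                              # iterative solve, no factorization
print(info, np.linalg.norm(N @ x - b))          # info == 0 means converged
```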
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
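A minimal sketch of the MPP search, assuming a hypothetical limit-state function in standard normal space: the MPP is the closest point to the origin on g(u) = 0, its distance is the reliability index beta, and FORM approximates the failure probability as Phi(-beta).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical limit state in standard normal space: failure when g(u) <= 0.
g = lambda u: 3.0 - u[0] - 0.5 * u[1] ** 2

# MPP: the point on g(u) = 0 closest to the origin.
res = minimize(lambda u: u @ u,                 # minimize squared distance
               x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g},
               method="SLSQP")
u_star = res.x
beta = np.linalg.norm(u_star)                   # reliability index
print("MPP:", u_star, "beta:", beta, "Pf ~", norm.cdf(-beta))
```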
Hybrid Skyshine Calculations for Complex Neutron and Gamma-Ray Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J. Kenneth
2000-10-15
A two-step hybrid method is described for computationally efficient estimation of neutron and gamma-ray skyshine doses far from a shielded source. First, the energy and angular dependence of radiation escaping into the atmosphere from a source containment is determined by a detailed transport model such as MCNP. Then, an effective point source with this energy and angular dependence is used in the integral line-beam method to transport the radiation through the atmosphere up to 2500 m from the source. An example spent-fuel storage cask is analyzed with this hybrid method and compared to detailed MCNP skyshine calculations.
Method and apparatus for determining material structural integrity
Pechersky, Martin
1996-01-01
A non-destructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis techniques to determine the damping loss factor of a material. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity as a function of time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method. If an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the amount of coil current used in vibrating the magnet. If a reciprocating transducer is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by a force gauge in the reciprocating transducer. Using known vibrational analysis methods, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity measurements. The damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
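For orientation, the sketch below estimates a loss factor from a synthetic drive-point mobility curve using the common half-power-bandwidth relation eta ~ (f2 - f1)/f0; the patent itself derives the loss factor with the resonance dwell method, so this is a simplified stand-in.

```python
import numpy as np

# Synthetic drive-point mobility |V/F| of a single-DOF resonator with
# hysteretic (structural) damping.
f = np.linspace(50.0, 150.0, 20001)             # Hz
f0, eta_true, m = 100.0, 0.02, 1.0              # resonance, loss factor, mass
w, w0 = 2 * np.pi * f, 2 * np.pi * f0
mobility = np.abs(1j * w / (m * (w0**2 * (1 + 1j * eta_true) - w**2)))

# Half-power bandwidth: eta ~ (f2 - f1) / f0 at the -3 dB points.
peak = mobility.argmax()
half = mobility[peak] / np.sqrt(2.0)
above = np.where(mobility >= half)[0]
f1, f2 = f[above[0]], f[above[-1]]
print("estimated loss factor:", (f2 - f1) / f[peak])   # ~0.02
```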
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
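The E-step/M-step structure can be sketched as a kernel-weighted mean-shift update softly blended with an associated detection; the blending weight, kernel, and synthetic pixel samples below are illustrative, not the paper's full likelihood model.

```python
import numpy as np

def mean_shift_update(pos, samples, weights, bandwidth):
    """One kernel-weighted mean-shift step toward high-likelihood pixels."""
    d2 = np.sum((samples - pos) ** 2, axis=1)
    k = weights * np.exp(-0.5 * d2 / bandwidth**2)   # Gaussian kernel x likelihood
    return (k[:, None] * samples).sum(0) / k.sum()

def track(pos, samples, weights, detection, alpha=0.3, bandwidth=10.0, iters=20):
    """EM-like loop: the E-step re-weights pixels near the current estimate,
    the M-step moves to the weighted mean, blended with a detection."""
    for _ in range(iters):
        shift = mean_shift_update(pos, samples, weights, bandwidth)
        pos = (1 - alpha) * shift + alpha * detection  # soft association
    return pos

rng = np.random.default_rng(1)
person = rng.normal([120.0, 80.0], 5.0, size=(300, 2))  # pixels on the target
w = np.ones(len(person))
print(track(np.array([100.0, 100.0]), person, w, detection=np.array([118.0, 82.0])))
```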
Adaptive Integration of Nonsmooth Dynamical Systems
2017-10-11
Describes a controlled time-stepping method for interactively designing running robots; notes that existing simulation libraries have failed at simulating robotic manipulation. [1] John Shepherd, Samuel Zapolsky, and Evan M. Drumwright, "Fast multi-body..."
Integration of enabling methods for the automated flow preparation of piperazine-2-carboxamide.
Ingham, Richard J; Battilocchio, Claudio; Hawkins, Joel M; Ley, Steven V
2014-01-01
Here we describe the use of a new open-source software package and a Raspberry Pi(®) computer for the simultaneous control of multiple flow chemistry devices and its application to a machine-assisted, multi-step flow preparation of pyrazine-2-carboxamide - a component of Rifater(®), used in the treatment of tuberculosis - and its reduced derivative piperazine-2-carboxamide.
High-speed extended-term time-domain simulation for online cascading analysis of power system
NASA Astrophysics Data System (ADS)
Fu, Chuan
A high-speed extended-term (HSET) time domain simulator (TDS), intended to become a part of an energy management system (EMS), has been newly developed for use in online extended-term dynamic cascading analysis of power systems. HSET-TDS includes the following attributes for providing situational awareness of high-consequence events: (i) online analysis, including n-1 and n-k events, (ii) ability to simulate both fast and slow dynamics for 1-3 hours in advance, (iii) inclusion of rigorous protection-system modeling, (iv) intelligence for corrective action identification, storage, and fast retrieval, and (v) high-speed execution. Very fast online computational capability is the most desired attribute of this simulator. Based on the process of solving the differential-algebraic equations describing the dynamics of a power system, HSET-TDS seeks computational efficiency at each of the following hierarchical levels: (i) hardware, (ii) strategies, (iii) integration methods, (iv) nonlinear solvers, and (v) linear solver libraries. This thesis first describes the Hammer-Hollingsworth 4 (HH4) implicit integration method. Like the trapezoidal rule, HH4 is symmetrically A-stable, but it possesses greater high-order precision (h^4) than the trapezoidal rule. Such precision enables larger integration steps and therefore improves simulation efficiency for variable step size implementations. This thesis provides the underlying theory on which we advocate use of HH4 over other numerical integration methods for power system time-domain simulation. Second, motivated by the need to perform high-speed extended-term time domain simulation for online purposes, this thesis presents principles for designing numerical solvers of differential-algebraic systems associated with power system time-domain simulation, including DAE construction strategies (Direct Solution Method), integration methods (HH4), nonlinear solvers (Very Dishonest Newton), and linear solvers (SuperLU). We have implemented a design appropriate for HSET-TDS, and we compare it to various solvers, including the commercial-grade PSSE program, with respect to computational efficiency and accuracy, using as examples the New England 39-bus system, an expanded 8775-bus system, and the 13029-bus PJM system. Third, we have explored a stiffness-decoupling method, intended to be part of a parallel design of time domain simulation software for supercomputers. The stiffness-decoupling method combines the advantages of implicit methods (A-stability) and explicit methods (less computation). With the new stiffness detection method proposed herein, the stiffness can be captured. An expanded 975-bus system is used to test simulation efficiency. Finally, several parallel strategies for supercomputer deployment to simulate power system dynamics are proposed and compared. Design A partitions the task via scale with the stiffness-decoupling method, waveform relaxation, and a parallel linear solver. Design B partitions the task via the time axis using a highly precise integration method, the Kuntzmann-Butcher method of order 8 (KB8). The strategy of partitioning events is designed to partition the whole simulation along the time axis through a simulated sequence of cascading events. Among the strategies proposed, the strategy of partitioning cascading events is recommended, since the sub-tasks for each processor are totally independent and therefore minimum communication time is needed.
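The "Very Dishonest Newton" idea named above is simple to sketch: inside an implicit (here trapezoidal) step, the Newton iteration matrix is factorized once and reused, and is refreshed only when convergence degrades. The toy linear system and refresh rule below are illustrative, not the thesis implementation.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def trapezoidal_vdhn(f, jac, y0, dt, n_steps, tol=1e-10):
    """Implicit trapezoidal rule with a 'very dishonest' Newton solver:
    the iteration matrix is factorized once and reused across iterations
    and steps until convergence degrades."""
    y, n = y0.copy(), len(y0)
    lu = lu_factor(np.eye(n) - 0.5 * dt * jac(y))        # factor once
    for _ in range(n_steps):
        y_new, fy = y.copy(), f(y)
        for _ in range(30):
            res = y_new - y - 0.5 * dt * (fy + f(y_new))
            if np.linalg.norm(res) < tol:
                break
            y_new -= lu_solve(lu, res)                   # reuse stale factorization
        else:                                            # too many iterations:
            lu = lu_factor(np.eye(n) - 0.5 * dt * jac(y_new))  # refresh Jacobian
        y = y_new
    return y

# Toy stiff linear system y' = A y.
A = np.array([[-1.0, 100.0], [0.0, -100.0]])
print(trapezoidal_vdhn(lambda y: A @ y, lambda y: A, np.array([1.0, 1.0]), 0.01, 200))
```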
Single-Cell RT-PCR in Microfluidic Droplets with Integrated Chemical Lysis.
Kim, Samuel C; Clark, Iain C; Shahi, Payam; Abate, Adam R
2018-01-16
Droplet microfluidics can identify and sort cells using digital reverse transcription polymerase chain reaction (RT-PCR) signals from individual cells. However, current methods require multiple microfabricated devices for enzymatic cell lysis and PCR reagent addition, making the process complex and prone to failure. Here, we describe a new approach that integrates all components into a single device. The method enables controlled exposure of isolated single cells to a high pH buffer, which lyses cells and inactivates reaction inhibitors but can be instantly neutralized with RT-PCR buffer. Using our chemical lysis approach, we distinguish individual cells' gene expression with data quality equivalent to more complex two-step workflows. Our system accepts cells and produces droplets ready for amplification, making single-cell droplet RT-PCR faster and more reliable.
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1986-01-01
The design of a very fast, automatic black-box code for homogeneous, gas-phase chemical kinetics problems requires an understanding of the physical and numerical sources of computational inefficiency. Some major sources reviewed in this report are stiffness of the governing ordinary differential equations (ODE's) and its detection, choice of appropriate method (i.e., integration algorithm plus step-size control strategy), nonphysical initial conditions, and too frequent evaluation of thermochemical and kinetic properties. Specific techniques are recommended (and some advised against) for improving or overcoming the identified problem areas. It is argued that, because reactive species increase exponentially with time during induction, and all species exhibit asymptotic, exponential decay with time during equilibration, exponential-fitted integration algorithms are inherently more accurate for kinetics modeling than classical, polynomial-interpolant methods for the same computational work. But current codes using the exponential-fitted method lack the sophisticated stepsize-control logic of existing black-box ODE solver codes, such as EPISODE and LSODE. The ultimate chemical kinetics code does not exist yet, but the general characteristics of such a code are becoming apparent.
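One member of the exponential-fitted family the report advocates is the exponential Euler step y+ = y + h*phi1(hJ)*f(y), which is exact for linear problems and stable at steps far beyond the fast time scale. The dense-matrix sketch below omits the step-size-control logic the report calls for.

```python
import numpy as np
from scipy.linalg import expm, solve

def exp_euler_step(f, J, y, h):
    """Exponential Euler: y+ = y + h*phi1(h*J) f(y), with
    phi1(Z) = Z^{-1}(e^Z - I); exact when f(y) = J y + const."""
    Z = h * J(y)
    phi1 = solve(Z, expm(Z) - np.eye(len(y)))   # fine while Z is nonsingular
    return y + h * phi1 @ f(y)

# Stiff linear kinetics-like system: a fast mode decays, a slow mode follows.
A = np.array([[-1000.0, 0.0], [1.0, -0.1]])
f = lambda y: A @ y
y = np.array([1.0, 0.0])
for _ in range(10):                             # h far above the fast time scale
    y = exp_euler_step(f, lambda y_: A, y, h=1.0)
print(y)   # stable and exact for this linear problem, unlike explicit Euler
```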
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
An RGB-D camera allows the capture of depth and color information at high data rates, and this makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters after ICP, the 3D scene from the RGB images can be accurately registered to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
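The projection relationship in the first step can be sketched as the standard pinhole back-projection of a depth image into 3D camera coordinates; the intrinsics below are hypothetical values, not the instrument's calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to 3-D camera coordinates, the
    projection relation that ties depth pixels to the RGB texture."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                    # drop invalid (zero) depths

depth = np.full((480, 640), 2.0); depth[:10, :10] = 0.0   # synthetic frame
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)
```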
Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz
NASA Astrophysics Data System (ADS)
Vanicat, Matthieu
2018-04-01
We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.
Pseudo-time algorithms for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Swanson, R. C.; Turkel, E.
1986-01-01
A pseudo-time method is introduced to integrate the compressible Navier-Stokes equations to a steady state. This method is a generalization of a method used by Crocco and also by Allen and Cheng. We show that for a simple heat equation this is just a renormalization of the time. For a convection-diffusion equation the renormalization is dependent only on the viscous terms. We implement the method for the Navier-Stokes equations using a Runge-Kutta type algorithm. This permits the time step to be chosen based on the inviscid model only. We also discuss the use of residual smoothing when viscous terms are present.
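A one-dimensional analogue, assuming a standard combined convective-diffusive step limit as a stand-in for the paper's viscous renormalization: a two-stage Runge-Kutta iteration marches a convection-diffusion problem to its steady state.

```python
import numpy as np

# Steady 1-D convection-diffusion: a u_x = nu u_xx on (0,1), u(0)=0, u(1)=1,
# marched to steady state in pseudo-time with a two-stage Runge-Kutta scheme.
n, a, nu = 101, 1.0, 0.05
x = np.linspace(0.0, 1.0, n); dx = x[1] - x[0]
u = x.copy()                                    # initial guess

def residual(u):
    r = np.zeros_like(u)
    r[1:-1] = (-a * (u[2:] - u[:-2]) / (2 * dx)
               + nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2)
    return r

# Combined convective-diffusive pseudo-time step (a stand-in for the paper's
# renormalization; only the second term depends on the viscosity).
dt = 0.8 / (a / dx + 2 * nu / dx ** 2)
for _ in range(30000):
    u_mid = u + 0.5 * dt * residual(u)          # stage 1
    u = u + dt * residual(u_mid)                # stage 2

exact = (np.exp(a * x / nu) - 1.0) / (np.exp(a / nu) - 1.0)
print(np.abs(u - exact).max())                  # small discretization error
```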
General method for extracting the quantum efficiency of dispersive qubit readout in circuit QED
NASA Astrophysics Data System (ADS)
Bultink, C. C.; Tarasinski, B.; Haandbæk, N.; Poletto, S.; Haider, N.; Michalak, D. J.; Bruno, A.; DiCarlo, L.
2018-02-01
We present and demonstrate a general three-step method for extracting the quantum efficiency of dispersive qubit readout in circuit QED. We use active depletion of post-measurement photons and optimal integration weight functions on two quadratures to maximize the signal-to-noise ratio of the non-steady-state homodyne measurement. We derive analytically and demonstrate experimentally that the method robustly extracts the quantum efficiency for arbitrary readout conditions in the linear regime. We use the proven method to optimally bias a Josephson traveling-wave parametric amplifier and to quantify different noise contributions in the readout amplification chain.
Entropy Splitting for High Order Numerical Simulation of Vortex Sound at Low Mach Numbers
NASA Technical Reports Server (NTRS)
Mueller, B.; Yee, H. C.; Mansour, Nagi (Technical Monitor)
2001-01-01
A method of minimizing numerical errors and improving nonlinear stability and accuracy associated with low Mach number computational aeroacoustics (CAA) is proposed. The method consists of two levels. At the governing equation level, we condition the Euler equations in two steps. The first step is to split the inviscid flux derivatives into a conservative and a non-conservative portion that satisfies a so-called generalized energy estimate. This involves the symmetrization of the Euler equations via a transformation of variables that are functions of the physical entropy. Owing to the large disparity of acoustic and stagnation quantities in low Mach number aeroacoustics, the second step is to reformulate the split Euler equations in perturbation form, with the new unknowns being the small changes of the conservative variables with respect to their large stagnation values. At the numerical scheme level, a stable sixth-order central interior scheme with third-order boundary schemes that satisfies the discrete analogue of the integration-by-parts procedure used in the continuous energy estimate (summation-by-parts property) is employed.
Edwards, Jeffrey R; Lambert, Lisa Schurer
2007-03-01
Studies that combine moderation and mediation are prevalent in basic and applied psychology research. Typically, these studies are framed in terms of moderated mediation or mediated moderation, both of which involve similar analytical approaches. Unfortunately, these approaches have important shortcomings that conceal the nature of the moderated and the mediated effects under investigation. This article presents a general analytical framework for combining moderation and mediation that integrates moderated regression analysis and path analysis. This framework clarifies how moderator variables influence the paths that constitute the direct, indirect, and total effects of mediated models. The authors empirically illustrate this framework and give step-by-step instructions for estimation and interpretation. They summarize the advantages of their framework over current approaches, explain how it subsumes moderated mediation and mediated moderation, and describe how it can accommodate additional moderator and mediator variables, curvilinear relationships, and structural equation models with latent variables. (c) 2007 APA, all rights reserved.
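The core computation can be sketched with two regressions on simulated data: the first-stage path is moderated through an interaction term, and the indirect effect a(z)*b is then evaluated at low and high moderator values. Coefficients and data below are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                                 # predictor
z = rng.normal(size=n)                                 # moderator
m = 0.5 * x + 0.3 * x * z + rng.normal(size=n)         # mediator model
y = 0.4 * m + 0.2 * x + rng.normal(size=n)             # outcome model

# Path a (x -> m), moderated by z via the x*z interaction.
Xa = sm.add_constant(np.column_stack([x, z, x * z]))
fit_a = sm.OLS(m, Xa).fit()

# Path b (m -> y) and the direct effect, controlling for x.
Xb = sm.add_constant(np.column_stack([m, x]))
fit_b = sm.OLS(y, Xb).fit()

# Indirect effect a(z) * b at low/high moderator values (+/- 1 SD).
for z0 in (-1.0, 1.0):
    a_z = fit_a.params[1] + fit_a.params[3] * z0
    print(f"z = {z0:+.0f} SD: indirect effect = {a_z * fit_b.params[1]:.3f}")
```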
Molecular dynamics at low time resolution.
Faccioli, P
2010-10-28
The internal dynamics of macromolecular systems is characterized by widely separated time scales, ranging from fractions of picoseconds to nanoseconds. In ordinary molecular dynamics simulations, the elementary time step Δt used to integrate the equations of motion needs to be chosen much smaller than the shortest time scale in order not to cut off physical effects. We show that in systems obeying the overdamped Langevin equation, it is possible to systematically correct for such discretization errors. This is done by analytically averaging out the fast molecular dynamics occurring at time scales smaller than Δt, using a renormalization-group-based technique. Such a procedure gives rise to a time-dependent, calculable correction to the diffusion coefficient. The resulting effective Langevin equation describes by construction the same long-time dynamics, but has a lower time resolution power, hence it can be integrated using larger time steps Δt. We illustrate and validate this method by studying the diffusion of a point particle in a one-dimensional toy model and the denaturation of a protein.
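Schematically, the scheme amounts to an Euler-Maruyama integration of the overdamped Langevin equation in which the bare diffusion coefficient is replaced by a Δt-dependent effective one; the value of D_eff below is a hypothetical placeholder marking where the paper's calculable correction would enter, not the derived formula.

```python
import numpy as np

def euler_maruyama(x0, force, D, dt, n_steps, rng):
    """Overdamped Langevin: dx = D*force(x)*dt + sqrt(2*D*dt)*dW
    (temperature absorbed into the force for brevity)."""
    x = np.full(2000, x0, dtype=float)              # an ensemble of walkers
    for _ in range(n_steps):
        x += D * force(x) * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=x.size)
    return x

force = lambda x: -4.0 * x**3 + 4.0 * x             # double-well potential x^4 - 2x^2
rng = np.random.default_rng(0)

# Reference: fine time step with the bare diffusion coefficient.
x_fine = euler_maruyama(1.0, force, D=1.0, dt=1e-4, n_steps=10000, rng=rng)

# Coarse time step: the paper derives a calculable, Delta-t-dependent effective
# diffusion coefficient; D_eff = 0.98 here is a hypothetical placeholder value.
x_coarse = euler_maruyama(1.0, force, D=0.98, dt=1e-3, n_steps=1000, rng=rng)
print(x_fine.mean(), x_coarse.mean())
```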
Martínková, Ludmila; Chmátal, Martin
2016-10-01
The aim of this study was to design an effective method for the bioremediation of coking wastewaters, specifically for the concurrent elimination of their highly toxic components - cyanide and phenols. Almost full degradation of free cyanide (0.32-20 mM; 8.3-520 mg L(-1)) in the model and the real coking wastewaters was achieved by using a recombinant cyanide hydratase in the first step. The removal of cyanide, a strong inhibitor of tyrosinase, enabled an effective degradation of phenols by this enzyme in the second step. Phenol (16.5 mM, 1,552 mg L(-1)) was completely removed from a real coking wastewater within 20 h, and cresols (5.0 mM, 540 mg L(-1)) were removed by 66% under the same conditions. The integration of cyanide hydratase and tyrosinase opens up new possibilities for the bioremediation of wastewaters with complex pollution. Copyright © 2016 Elsevier Ltd. All rights reserved.
Fostering supportive learning environments in long-term care: the case of WIN A STEP UP.
Craft Morgan, Jennifer; Haviland, Sara B; Woodside, M Allyson; Konrad, Thomas R
2007-01-01
The education of direct care workers (DCWs) is key to improving job quality and the quality of care in long-term care (LTC). This paper describes the successful integration of a supervisory training program into a continuing education intervention (WIN A STEP UP) for DCWs, identifies the factors that appear to influence the integration of the learning into practice, and discusses the implications for educators. The WIN A STEP UP program achieved its strongest results when the DCW curriculum was paired with Coaching Supervision. Attention to pre-training, training and post-training conditions is necessary to successfully integrate learning into practice in LTC.
Optimal subinterval selection approach for power system transient stability simulation
Kim, Soobae; Overbye, Thomas J.
2015-10-21
Power system transient stability analysis requires an appropriate integration time step to avoid numerical instability as well as to reduce computational demands. For fast system dynamics, which vary more rapidly than what the time step covers, a fraction of the time step, called a subinterval, is used. However, the optimal value of this subinterval is not easily determined because analysis of the system dynamics might be required. This selection is usually made from engineering experience, and perhaps trial and error. This paper proposes an optimal subinterval selection approach for power system transient stability analysis, which is based on modal analysis using a single machine infinite bus (SMIB) system. Fast system dynamics are identified with the modal analysis, and the SMIB system is used focusing on fast local modes. An appropriate subinterval time step from the proposed approach can reduce the computational burden and achieve accurate simulation responses as well. As a result, the performance of the proposed method is demonstrated with the GSO 37-bus system.
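The selection logic can be sketched on a classical SMIB model: linearize the swing dynamics, read the fastest oscillatory mode off the eigenvalues, and size the subinterval as a fraction of its period. The machine parameters and the 20-points-per-cycle rule below are illustrative, not the paper's procedure.

```python
import numpy as np

# Classical SMIB machine, linearized about the operating angle delta0.
M, D_damp, Pmax, delta0 = 0.05, 0.02, 2.0, np.deg2rad(30.0)   # illustrative
Ks = Pmax * np.cos(delta0)                   # synchronizing coefficient

# State-space [d_delta, d_omega]:  x' = A x
A = np.array([[0.0, 1.0],
              [-Ks / M, -D_damp / M]])
eigs = np.linalg.eigvals(A)

# The fastest mode sets the subinterval: a fraction of its oscillation period.
f_max = np.abs(eigs.imag).max() / (2.0 * np.pi)
subinterval = 1.0 / (20.0 * f_max)           # e.g., 20 points per cycle
print(f"fastest mode: {f_max:.2f} Hz -> subinterval ~ {subinterval*1000:.2f} ms")
```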
Evaluation of atomic pressure in the multiple time-step integration algorithm.
Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu
2017-04-15
In molecular dynamics (MD) calculations, reduction in calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, the RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), enables reductions in calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints. It is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to both (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference value with the STS, whereas pressures calculated using the conventional ad hoc equations deviated from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
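For context, a minimal r-RESPA step looks as follows: a half kick from the slow force brackets several velocity-Verlet substeps under the fast force. The one-dimensional force split is a toy example, and the sketch omits the pressure evaluation that is the article's actual contribution.

```python
import numpy as np

def respa_step(x, v, f_fast, f_slow, dt, n_inner, mass=1.0):
    """One r-RESPA step: slow-force half kick, n_inner velocity-Verlet
    substeps under the fast force, then the closing slow-force half kick."""
    v += 0.5 * dt * f_slow(x) / mass
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / mass
        x += h * v
        v += 0.5 * h * f_fast(x) / mass
    v += 0.5 * dt * f_slow(x) / mass
    return x, v

# Toy split: a stiff bond-like force (fast) plus a soft background force (slow).
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -0.5 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt=0.05, n_inner=10)
print(x, v)
```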
NASA Astrophysics Data System (ADS)
Portegies Zwart, Simon; Boekholt, Tjarda
2014-04-01
The conservation of energy, linear momentum, and angular momentum are important drivers of our physical understanding of the evolution of the universe. These quantities are also conserved in Newton's laws of motion under gravity. Numerical integration of the associated equations of motion is extremely challenging, in particular due to the steady growth of numerical errors (by round-off and discrete time-stepping) and the exponential divergence between two nearby solutions. As a result, numerical solutions to the general N-body problem are intrinsically questionable. Using brute force integrations to arbitrary numerical precision, we demonstrate empirically that ensembles of different realizations of resonant three-body interactions produce statistically indistinguishable results. Although individual solutions using common integration methods are notoriously unreliable, we conjecture that an ensemble of approximate three-body solutions accurately represents an ensemble of true solutions, so long as the energy during integration is conserved to better than 1/10. We therefore provide an independent confirmation that previous work on self-gravitating systems can actually be trusted, irrespective of the intrinsically chaotic nature of the N-body problem.
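A small sketch of the kind of check involved, assuming arbitrary planar initial conditions: integrate a three-body system tightly and test whether the relative energy drift stays below the 1/10 threshold quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = np.array([1.0, 1.0, 1.0])                 # equal masses, G = 1

def deriv(t, s):
    r, v = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += m[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([v.ravel(), a.ravel()])

def energy(s):
    r, v = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
    kin = 0.5 * np.sum(m[:, None] * v ** 2)
    pot = sum(-m[i] * m[j] / np.linalg.norm(r[i] - r[j])
              for i in range(3) for j in range(i + 1, 3))
    return kin + pot

s0 = np.array([-1.0, 0.0, 1.0, 0.0, 0.0, 0.5,    # positions (arbitrary)
               0.1, 0.2, -0.1, 0.2, 0.0, -0.4])  # velocities (arbitrary)
sol = solve_ivp(deriv, (0.0, 20.0), s0, rtol=1e-10, atol=1e-12)
drift = abs(energy(sol.y[:, -1]) - energy(s0)) / abs(energy(s0))
print("relative energy error:", drift, "< 1/10:", drift < 0.1)
```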
The preparation method of terahertz monolithic integrated device
NASA Astrophysics Data System (ADS)
Zhang, Cong; Su, Bo; He, Jingsuo; Zhang, Hongfei; Wu, Yaxiong; Zhang, Shengbo; Zhang, Cunlin
2018-01-01
The terahertz monolithic integrated device integrates the pumping area for terahertz generation, the detection area for terahertz reception, and the metal waveguide for terahertz transmission on the same substrate. The terahertz generation and detection devices use a photoconductive antenna structure; the metal waveguide uses a microstrip line structure. The evanescent terahertz-bandwidth electric field extending above the terahertz transmission line interacts with, and is modified by, overlaid dielectric samples, thus enabling the characteristic vibrational absorption resonances in the sample to be probed. In this device structure, since the semiconductor substrate of the photoconductive antenna is located between the strip conductor and the dielectric layer of the microstrip line, the semiconductor substrate cannot be grown on the dielectric layer directly. How to prepare the semiconductor substrate of the photoconductive antenna and how to bond it to the dielectric layer of the microstrip line is therefore a key step in fabricating the terahertz monolithic integrated device. To solve this critical problem, the epitaxial wafer structures of the two semiconductor substrates are given, and the substrates are transferred to the desired substrate by two methods, respectively.
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain, from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of the fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
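The frequency-to-time step can be illustrated as follows, with a toy transfer function H(ω) standing in for the boundary-integral Helmholtz solution (all names and numbers are assumptions): H is computed once, after which the response to any incident pulse follows from a single inverse FFT.

```python
import numpy as np

n, dt = 1024, 1e-3
t = np.arange(n) * dt
s = np.exp(-((t - 0.1) / 0.01)**2)        # incident Gaussian pulse s(t)
S = np.fft.rfft(s)                        # pulse spectrum S(w)
w = 2 * np.pi * np.fft.rfftfreq(n, dt)
H = 0.5 * np.exp(-1j * w * 0.2)           # stand-in scatterer response H(w)
scattered = np.fft.irfft(H * S, n)        # time-domain scattered signal

# A different incident pulse reuses the same H(w) at no extra solver cost:
s2 = np.exp(-((t - 0.15) / 0.02)**2)
scattered2 = np.fft.irfft(H * np.fft.rfft(s2), n)
```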
Analytical Approaches to Verify Food Integrity: Needs and Challenges.
Stadler, Richard H; Tran, Lien-Anh; Cavin, Christophe; Zbinden, Pascal; Konings, Erik J M
2016-09-01
A brief overview of the main analytical approaches and practices to determine food authenticity is presented, also addressing the food supply chain and future requirements to mitigate food fraud more effectively. Food companies are introducing procedures and mechanisms that allow them to identify vulnerabilities in their food supply chain under the umbrella of a food fraud prevention management system. A key step and first line of defense is thorough supply chain mapping and full transparency, assessing the likelihood of fraudsters penetrating the chain at any point. More vulnerable chains, such as those where ingredients and/or raw materials are purchased through traders or auctions, may require a higher degree of sampling, testing, and surveillance. Access to analytical tools is therefore pivotal, requiring continuous development and possibly sophistication in identifying chemical markers, data acquisition, and modeling. Significant progress in portable technologies is already evident today, for instance in the rapid testing now available at the agricultural level. In the near future, consumers may also have the ability to scan products in stores or at home to authenticate labels and food content. For food manufacturers, targeted analytical methods complemented by untargeted approaches are end control measures at the factory gate when the material is delivered. In essence, testing for food adulterants is an integral part of routine QC, ideally tailored to the risks in the individual markets and/or geographies or supply chains. The development of analytical methods is a first step in verifying the compliance and authenticity of food materials. A next, more challenging step is the successful establishment of global consensus reference methods, as exemplified by the AOAC Stakeholder Panel on Infant Formula and Adult Nutritionals initiative, an approach that could also be applied to methods for contaminants and adulterants in food. The food industry has taken these many challenges aboard, working closely with all stakeholders and continuously communicating on progress in a fully transparent manner.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu; Visvikis, Dimitris; Fernandez, Philippe
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied on PET reconstructed images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly, with the Lucy–Richardson deconvolution algorithm applied to the current estimation of the image at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background, estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a wavelet-based denoising in the reconstruction process to better correct for PVE. Future work includes further evaluations of the proposed method on clinical datasets and the use of improved PSF models.
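For orientation, a plain (post-reconstruction) Richardson–Lucy update is shown below; the paper's contribution is to apply this multiplicative update to the image estimate inside each OSEM iteration, with a wavelet denoising step that this sketch omits.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=20, eps=1e-12):
    """Iterative deconvolution: est <- est * [(image / (est * psf)) * psf_flipped]."""
    est = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blur = fftconvolve(est, psf, mode='same')
        est = est * fftconvolve(image / np.maximum(blur, eps),
                                psf_flip, mode='same')
    return est
```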
A study of the displacement of a Wankel rotary engine
NASA Astrophysics Data System (ADS)
Beard, J. E.; Pennock, G. R.
1993-03-01
The volumetric displacement of a Wankel rotary engine is a function of the trochoid ratio and the pin size ratio, assuming the engine has a unit depth and the number of lobes is specified. The mathematical expression which defines the displacement contains a function which can be evaluated directly and a normal elliptic integral of the second kind which does not have an explicit solution. This paper focuses on the contribution of the elliptic integral to the total displacement of the engine. The influence of the elliptic integral is shown to account for as much as 20 percent of the total displacement, depending on the trochoid ratio and the pin size ratio. Two numerical integration techniques are compared in the paper, namely, the trapezoidal rule and Simpson's 1/3 rule. The bounds on the error associated with each numerical method are analyzed. The results indicate that the numerical method has a minimal effect on the accuracy of the calculated displacement for a practical number of integration steps. The paper also evaluates the influence of manufacturing tolerances on the calculated displacement and the actual displacement. Finally, a numerical example of the common three-lobed Wankel rotary engine is included for illustrative purposes.
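The quadrature comparison can be reproduced for a complete elliptic integral of the second kind, E(m) = ∫₀^{π/2} √(1 − m sin²θ) dθ; this sketch uses scipy's ellipe only as the reference value, and the modulus m = 0.5 is an arbitrary example.

```python
import numpy as np
from scipy.special import ellipe   # closed-form reference value

def E_trapezoid(m, n=8):
    th = np.linspace(0.0, np.pi / 2, n + 1)
    return np.trapz(np.sqrt(1.0 - m * np.sin(th)**2), th)

def E_simpson(m, n=8):             # n must be even
    th = np.linspace(0.0, np.pi / 2, n + 1)
    f = np.sqrt(1.0 - m * np.sin(th)**2)
    h = th[1] - th[0]
    return h / 3 * (f[0] + f[-1] + 4 * f[1:-1:2].sum() + 2 * f[2:-1:2].sum())

for n in (4, 8, 16):   # both rules converge fast for this smooth integrand
    print(n, E_trapezoid(0.5, n) - ellipe(0.5), E_simpson(0.5, n) - ellipe(0.5))
```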
Microfluidic devices for sample preparation and rapid detection of foodborne pathogens.
Kant, Krishna; Shahbazi, Mohammad-Ali; Dave, Vivek Priy; Ngo, Tien Anh; Chidambara, Vinayaka Aaydha; Than, Linh Quyen; Bang, Dang Duong; Wolff, Anders
2018-03-10
Rapid detection of foodborne pathogens at an early stage is imperative for preventing the outbreak of foodborne diseases, known as serious threats to human health. Conventional bacterial culturing methods for foodborne pathogen detection are time-consuming, laborious, and have poor pathogen diagnosis capabilities. This has prompted researchers to call the current status of detection approaches into question and to leverage new technologies for superior pathogen sensing outcomes. Novel strategies mainly rely on incorporating all the steps from sample preparation to detection in miniaturized devices for online monitoring of pathogens with high accuracy and sensitivity in a time-saving and cost-effective manner. Lab-on-chip is a blossoming area in diagnostics, which exploits different mechanical and biological techniques to detect very low concentrations of pathogens in food samples. This is achieved through streamlining the sample handling and concentrating procedures, which subsequently reduces human errors and enhances the accuracy of the sensing methods. Integration of sample preparation techniques into these devices can effectively minimize the impact of complex food matrices on pathogen diagnosis and improve the limits of detection. Integration of pathogen-capturing bio-receptors on microfluidic devices is a crucial step, which can facilitate recognition abilities in harsh chemical and physical conditions, offering a great commercial benefit to the food-manufacturing sector. This article reviews recent advances in the current state of the art of sample preparation and concentration from food matrices, with a focus on bacterial capturing methods and sensing technologies, along with their advantages and limitations when integrated into microfluidic devices for online rapid detection of pathogens in foods and food production lines. Copyright © 2018. Published by Elsevier Inc.
Efficient evaluation of the material response of tissues reinforced by statistically oriented fibres
NASA Astrophysics Data System (ADS)
Hashlamoun, Kotaybah; Grillo, Alfio; Federico, Salvatore
2016-10-01
For several classes of soft biological tissues, modelling complexity is in part due to the arrangement of the collagen fibres. In general, the arrangement of the fibres can be described by defining, at each point in the tissue, the structure tensor (i.e. the tensor product of the unit vector of the local fibre arrangement by itself) and a probability distribution of orientation. In this approach, assuming that the fibres do not interact with each other, the overall contribution of the collagen fibres to a given mechanical property of the tissue can be estimated by means of an averaging integral, over the set of all possible directions in space, of the constitutive function describing the mechanical property under study. Except for the particular case of fibre constitutive functions that are polynomial in the transversely isotropic invariants of the deformation, the averaging integral cannot be evaluated directly in a single calculation because, in general, the integrand depends on deformation and on fibre orientation in a non-separable way. The problem is thus, in a sense, analogous to that of integrating a function of two variables that cannot be split into the product of two functions, each depending on only one of the variables. Although numerical schemes can be used to evaluate the integral at each deformation increment, this is computationally expensive. With the purpose of containing computational costs, this work proposes approximation methods that are based on the direct integrability of polynomial functions and that do not require the step-by-step evaluation of the averaging integrals. Three different methods are proposed: (a) a Taylor expansion of the fibre constitutive function in the transversely isotropic invariants of the deformation; (b) a Taylor expansion of the fibre constitutive function in the structure tensor; (c) for the case of a fibre constitutive function having a polynomial argument, an approximation in which the directional average of the constitutive function is replaced by the constitutive function evaluated at the directional average of the argument. Each of the proposed methods approximates the averaged constitutive function in such a way that it is multiplicatively decomposed into the product of a function of the deformation only and a function of the structure tensors only. In order to assess the accuracy of these methods, we evaluate the constitutive functions of the elastic potential and the Cauchy stress, for a biaxial test, under different conditions, i.e. different fibre distributions and different ratios of the nominal strains in the two directions. The results are then compared against those obtained with an averaging method available in the literature, as well as against the integration made at each increment of deformation.
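A brute-force version of the averaging integral (the step-by-step evaluation the proposed approximations avoid) might look as follows for an orientation density that is axisymmetric about e3; the toy fibre potential, concentration parameter, and deformation are assumptions for illustration.

```python
import numpy as np

def directional_average(psi, C, rho, n=400):
    """Directional average <psi> over fibre directions; axisymmetry about
    e3 reduces the surface integral to the polar angle theta."""
    theta = np.linspace(1e-6, np.pi - 1e-6, n)
    nvec = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)])
    I4 = np.einsum('it,ij,jt->t', nvec, C, nvec)   # I4 = n . C n
    w = rho(theta) * np.sin(theta)                 # spherical area element
    return np.trapz(w * psi(I4), theta) / np.trapz(w, theta)

psi = lambda I4: 0.5 * 10.0 * (I4 - 1.0)**2        # toy fibre potential
rho = lambda th: np.exp(5.0 * np.cos(2.0 * th))    # fibres clustered near e3
C = np.diag([1.1**2, 0.95**2, 1.2**2])             # toy right Cauchy-Green
print(directional_average(psi, C, rho))
```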
Analysis of mixed-mode crack propagation using the boundary integral method
NASA Technical Reports Server (NTRS)
Mendelson, A.; Ghosn, L. J.
1986-01-01
Crack propagation in a rotating inner raceway of a high speed roller bearing is analyzed using the boundary integral equation method. The model consists of an edge crack in a plate under tension, upon which varying Hertzian stress fields are superimposed. A computer program for the boundary integral equation method was written using quadratic elements to determine the stress and displacement fields for discrete roller positions. Mode I and Mode II stress intensity factors and crack extension forces G_θθ (energy release rate due to the tensile opening mode) and G_rθ (energy release rate due to the shear displacement mode) were computed. These calculations permit determination of the crack growth angle for which the change in the crack extension forces is maximum. The crack driving force was found to be the alternating mixed-mode loading that occurs with each passage of the most heavily loaded roller. The crack is predicted to propagate in a step-like fashion alternating between radial and inclined segments, and this pattern was observed experimentally. The maximum changes ΔG_θθ and ΔG_rθ of the crack extension forces are found to be good measures of the crack propagation rate and direction.
NASA Astrophysics Data System (ADS)
Richter, Martin; Fingerhut, Benjamin P.
2017-06-01
The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to a substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of a hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath-assisted long range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer, and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-09-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques of Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
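At its core, the integration is a consensus ranking over many per-model quality scores; a minimal sketch follows, with simple z-score averaging standing in for MULTICOM's actual combination of its 14 methods and random scores as placeholder data.

```python
import numpy as np

def consensus_rank(scores):
    """scores: (n_models, n_methods), larger is better. Z-normalize each
    method so no single score scale dominates, then average and rank."""
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    return np.argsort(z.mean(axis=1))[::-1]        # best model first

scores = np.random.default_rng(0).random((50, 14))  # placeholder QA scores
print(consensus_rank(scores)[:5])
```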
An Ejector Air Intake Design Method for a Novel Rocket-Based Combined-Cycle Rocket Nozzle
NASA Astrophysics Data System (ADS)
Waung, Timothy S.
Rocket-based combined-cycle (RBCC) vehicles have the potential to reduce launch costs through the use of several different air breathing engine cycles, which reduce fuel consumption. The rocket-ejector cycle, in which air is entrained into an ejector section by the rocket exhaust, is used at flight speeds below Mach 2. This thesis develops a design method for an air intake geometry around a novel RBCC rocket nozzle for the rocket-ejector engine cycle. The design method consists of a geometry creation step, in which a three-dimensional intake geometry is generated, and a simple flow analysis step, which predicts the air intake mass flow rate. The air intake geometry is created using the rocket nozzle geometry and eight primary input parameters. The input parameters are selected to give the user significant control over the air intake shape. The flow analysis step uses an inviscid panel method and an integral boundary layer method to estimate the air mass flow rate through the intake geometry. Intake mass flow rate is used as a performance metric since it directly affects the amount of thrust a rocket-ejector can produce. The design method results for the air intake operating at several different points along the subsonic portion of the Ariane 4 flight profile are found to underpredict the mass flow rate by up to 8.6% when compared to three-dimensional computational fluid dynamics simulations for the same air intake.
Probability genotype imputation method and integrated weighted lasso for QTL identification.
Demetrashvili, Nino; Van den Heuvel, Edwin R; Wit, Ernst C
2013-12-30
Many QTL studies have two common features: (1) there is often missing marker information, and (2) among the many markers involved in the biological process, only a few are causal. In statistics, the second issue falls under the headings "sparsity" and "causal inference". The goal of this work is to develop a two-step statistical methodology for QTL mapping for markers with binary genotypes. The first step introduces a novel imputation method for missing genotypes. The outcomes of the proposed imputation method are probabilities, which serve as weights in the second step, namely a weighted lasso. The sparse phenotype inference is employed to select a set of predictive markers for the trait of interest. Simulation studies validate the proposed methodology under a wide range of realistic settings, in which it outperforms alternative imputation and variable selection methods. The methodology was applied to an Arabidopsis experiment containing 69 markers for 165 recombinant inbred lines of an F8 generation. The results confirm previously identified regions; however, several new markers are also found. On the basis of the inferred ROC behavior, these markers show good potential for being real, especially for the germination trait Gmax. Our imputation method shows higher accuracy in terms of sensitivity and specificity compared to an alternative imputation method. Also, the proposed weighted lasso outperforms commonly practiced multiple regression as well as the traditional lasso and the adaptive lasso with three weighting schemes. This means that under realistic missing data settings this methodology can be used for QTL identification.
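The weighted lasso of the second step can be emulated with a standard lasso via column rescaling, since minimizing ||y − Xb||² + α Σ_j w_j |b_j| is an ordinary lasso in the rescaled columns X_j / w_j. A sketch under the assumption that marker j's penalty weight is the inverse of its imputation probability, so uncertain markers are penalized more; the synthetic data are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_lasso(X, y, w, alpha=0.1):
    """Per-coefficient penalty weights w_j via rescaling: fit a plain
    lasso on X_j / w_j and recover b_j = btilde_j / w_j."""
    fit = Lasso(alpha=alpha).fit(X / w, y)
    return fit.coef_ / w

rng = np.random.default_rng(1)
X = rng.integers(0, 2, (165, 69)).astype(float)   # binary marker genotypes
beta = np.zeros(69); beta[[5, 20, 44]] = [1.5, -1.0, 2.0]
y = X @ beta + rng.normal(0, 0.5, 165)
p = rng.uniform(0.6, 1.0, 69)                     # imputation probabilities
print(np.flatnonzero(np.abs(weighted_lasso(X, y, 1.0 / p)) > 0.1))
```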
Integration of living cells into nanostructures using non-conventional self-assembly
NASA Astrophysics Data System (ADS)
Carnes, Eric C.
Patternable cell immobilization is an essential feature of any solid-state device designed for interrogating or exploiting living cells. Immobilized cells must remain viable in a robust matrix that promotes fluidic connectivity between the cells and their environment while retaining the ability to establish and maintain necessary chemical gradients. A suitable inorganic matrix can be constructed via evaporation-induced self-assembly of nanostructured silica, in which phospholipids are used in place of traditional surfactant structure-directing agents in order to enhance cell viability and to create a coherent interface between the cell and the surrounding three-dimensional nanostructure. We have used this technique to develop two distinct cell encapsulation processes: cell-directed assembly and cell-directed integration. Cell-directed assembly is a one-step procedure that provides superior viability of immobilized cells by encouraging cells to interact with the developing host matrix. Limitations of this system include low viability for some cell types due to exposure to solvents and stresses, as well as a lack of control over the developing host nanostructure. Cell-directed integration addresses these shortcomings by introducing a two-step process in which cells become encapsulated in a pre-formed silica matrix. The validity of each encapsulation method has been demonstrated with Gram-positive and Gram-negative bacteria, yeast, and mammalian cells. The ability of the immobilized cells to establish relevant gradients of ions or signaling molecules, a key feature of these systems, has been characterized. Additionally, extension of cell encapsulation to address lingering questions in cell biology is addressed. We have also adapted these immobilization processes to be compatible with a variety of patterning strategies having tailorable properties. Widely available photolithography techniques, as well as direct aerosol deposition, have been adapted to provide methods for obtaining both positive and negative transfer of desired cell patterns. Multi-step lithography is also used to create a highly functional system allowing spatial control of not only cells but also media and other molecules of interest.
Kelly, John F; Kaminer, Yifrah; Kahler, Christopher W; Hoeppner, Bettina; Yeterian, Julie; Cristello, Julie V; Timko, Christine
2017-12-01
The integration of 12-Step philosophy and practices is common in adolescent substance use disorder (SUD) treatment programs, particularly in North America. However, although numerous experimental studies have tested 12-Step facilitation (TSF) treatments among adults, no studies have tested TSF-specific treatments for adolescents. We tested the efficacy of a novel integrated TSF. Explanatory, parallel-group, randomized clinical trial comparing 10 sessions of either motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT; n = 30) or a novel integrated TSF (iTSF; n = 29), with follow-up assessments at 3, 6 and 9 months following treatment entry. Out-patient addiction clinic in the United States. Adolescents [n = 59; mean age = 16.8 (1.7) years; range = 14-21; 27% female; 78% white]. The iTSF integrated 12-Step with motivational and cognitive-behavioral strategies, and was compared with state-of-the-art MET/CBT for SUD. Primary outcome: percentage days abstinent (PDA); secondary outcomes: 12-Step attendance, substance-related consequences, longest period of abstinence, proportion abstinent/mostly abstinent, psychiatric symptoms. Primary outcome: PDA was not significantly different across treatments [b = 0.08, 95% confidence interval (CI) = -0.08 to 0.24, P = 0.33; Bayes' factor = 0.28]. During treatment, iTSF patients had substantially greater 12-Step attendance, but this advantage declined thereafter (b = -0.87; 95% CI = -1.67 to 0.07, P = 0.03). iTSF did show a significant advantage at all follow-up points for substance-related consequences (b = -0.42; 95% CI = -0.80 to -0.04, P < 0.05; effect size range d = 0.26-0.71). Other secondary outcomes did not differ significantly between treatments, but effect sizes tended to favor iTSF. Throughout the entire sample, greater 12-Step meeting attendance was associated significantly with longer abstinence during (r = 0.39, P = 0.008), and early following (r = 0.30, P = 0.049), treatment. Compared with motivational enhancement therapy/cognitive-behavioral therapy (MET/CBT), in terms of abstinence, a novel integrated 12-Step facilitation treatment for adolescent substance use disorder (iTSF) showed no greater benefits, but showed benefits in terms of 12-Step attendance and substance-related consequences. Given the widespread use of combinations of 12-Step, MET and CBT in adolescent community out-patient settings in North America, iTSF may provide an integrated evidence-based option that is compatible with existing practices. © 2017 Society for the Study of Addiction.
Developments in the formulation and delivery of spray dried vaccines.
Kanojia, Gaurav; Have, Rimko Ten; Soema, Peter C; Frijlink, Henderik; Amorij, Jean-Pierre; Kersten, Gideon
2017-10-03
Spray drying is a promising method for the stabilization of vaccines, which are usually formulated as liquids. Typically, vaccine stability is improved by spray drying in the presence of a range of excipients. Unlike freeze drying, there is no freezing step involved, and thus the damage related to this step is avoided. The advantage of spray drying lies in its ability to engineer particles to desired requirements, which can be used in various vaccine delivery methods and routes. Although several spray dried vaccines have shown encouraging preclinical results, the number of vaccines that have been tested in clinical trials is limited, indicating a relatively new area of vaccine stabilization and delivery. This article reviews the current status of spray dried vaccine formulations and delivery methods. In particular, it discusses the impact of process stresses on vaccine integrity, the application of excipients in spray drying of vaccines, process and formulation optimization strategies based on Design of Experiment approaches, as well as opportunities for future application of spray dried vaccine powders for vaccine delivery.
A spectral approach for discrete dislocation dynamics simulations of nanoindentation
NASA Astrophysics Data System (ADS)
Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei
2018-07-01
We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two-step approach. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator, and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space using different types of indenters. An example of a dislocation dynamics nanoindentation simulation with a complex initial microstructure is presented.
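The spectral core — mapping a surface pressure field to the half-space response mode by mode — can be sketched for the normal surface deflection using the standard periodic half-space kernel ũ(k) = 2 p̃(k)/(E*|k|); the grid, the choice of output field, and the absence of the iterative contact search are simplifying assumptions of this sketch.

```python
import numpy as np

def surface_deflection(p, dx, e_star):
    """Normal deflection of an elastic half-space under a periodic
    pressure cell p, computed mode by mode with FFTs."""
    ny, nx = p.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, dx)
    K = np.hypot(*np.meshgrid(kx, ky))
    K[0, 0] = 1.0                       # avoid dividing the zero mode
    uh = 2.0 * np.fft.fft2(p) / (e_star * K)
    uh[0, 0] = 0.0                      # deflection fixed up to a constant
    return np.fft.ifft2(uh).real

# toy load: Gaussian pressure patch on a 256x256 periodic cell
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)
u = surface_deflection(np.exp(-(X**2 + Y**2) / 0.05),
                       dx=x[1] - x[0], e_star=1.0)
```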
Dispersive shock waves in systems with nonlocal dispersion of Benjamin-Ono type
NASA Astrophysics Data System (ADS)
El, G. A.; Nguyen, L. T. K.; Smyth, N. F.
2018-04-01
We develop a general approach to the description of dispersive shock waves (DSWs) for a class of nonlinear wave equations with a nonlocal Benjamin-Ono type dispersion term involving the Hilbert transform. Integrability of the governing equation is not a prerequisite for the application of this method, which represents a modification of the DSW fitting method previously developed for dispersive-hydrodynamic systems of Korteweg-de Vries (KdV) type (i.e. those reducible to the KdV equation in the weakly nonlinear, long wave, unidirectional approximation). The developed method is applied to the Calogero-Sutherland dispersive hydrodynamics, for which the classification of all solution types arising from the Riemann step problem is constructed and the key physical parameters (DSW edge speeds, lead soliton amplitude, intermediate shelf level) of all but one solution type are obtained in terms of the initial step data. The analytical results are shown to be in excellent agreement with results of direct numerical simulations.
Warburton, William K.; Momayezi, Michael
2006-06-20
A method and apparatus for processing step-like output signals (primary signals) generated by non-ideal, for example, nominally single-pole ("N-1P") devices. An exemplary method includes creating a set of secondary signals by directing the primary signal along a plurality of signal paths to a signal summation point; summing the secondary signals reaching the signal summation point after propagating along the signal paths to provide a summed signal; performing a filtering or delaying operation in at least one of said signal paths so that the secondary signals reaching said summing point have a defined time correlation with respect to one another; applying a set of weighting coefficients to the secondary signals propagating along said signal paths; and performing a capturing operation after any filtering or delaying operations so as to provide a weighted signal sum value as a measure of the integrated area QgT of the input signal.
Advantages of InGaN/GaN multiple quantum wells with two-step grown low temperature GaN cap layers
NASA Astrophysics Data System (ADS)
Zhu, Yadan; Lu, Taiping; Zhou, Xiaorun; Zhao, Guangzhou; Dong, Hailiang; Jia, Zhigang; Liu, Xuguang; Xu, Bingshe
2017-11-01
Two-step grown low temperature GaN cap layers (LT-cap) are employed to improve the optical and structural properties of InGaN/GaN multiple quantum wells (MQWs). The first LT-cap layer is grown in a nitrogen atmosphere, while a small hydrogen flow is added to the carrier gas during the growth of the second LT-cap layer. High-resolution X-ray diffraction results indicate that the two-step growth method can improve the interface quality of MQWs. Room temperature photoluminescence (PL) tests show about a two-fold enhancement in integrated PL intensity, only a 25 meV blue-shift in peak energy, and an almost unchanged line width. On the basis of temperature-dependent PL characteristics analysis, it is concluded that the first and the second LT-cap layers play different roles during the growth of MQWs. The first LT-cap layer acts as a protective layer, which protects the quantum well from the serious indium loss and interface roughening resulting from hydrogen over-etching. The hydrogen gas employed in the second LT-cap layer favors reducing defect density and indium segregation. Consequently, interface/surface and optical properties are improved by adopting the two-step growth method.
NASA Astrophysics Data System (ADS)
Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.
1990-08-01
In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
Kasiri, Keyvan; Kazemi, Kamran; Dehghani, Mohammad Javad; Helfroush, Mohammad Sadegh
2013-01-01
In this paper, we present a new semi-automatic brain tissue segmentation method based on a hybrid hierarchical approach that combines a brain atlas as a priori information and a least-squares support vector machine (LS-SVM). The method consists of three steps. In the first two steps, the skull is removed and the cerebrospinal fluid (CSF) is extracted. These two steps are performed using FMRIB's automated segmentation tool integrated in the FSL software (FSL-FAST), developed at the Oxford Centre for Functional MRI of the Brain (FMRIB). Then, in the third step, the LS-SVM is used to segment grey matter (GM) and white matter (WM). The training samples for the LS-SVM are selected from the registered brain atlas. The voxel intensities and spatial positions are selected as the two feature groups for training and testing. The SVM, as a powerful discriminator, is able to handle nonlinear classification problems; however, it cannot provide posterior probabilities. Thus, we use a sigmoid function to map the SVM output into probabilities. The proposed method is used to segment CSF, GM and WM from simulated magnetic resonance imaging (MRI) data generated with the Brainweb MRI simulator and real data provided by the Internet Brain Segmentation Repository. The semi-automatically segmented brain tissues were evaluated by comparison with the corresponding ground truth. The Dice and Jaccard similarity coefficients, sensitivity and specificity were calculated for the quantitative validation of the results. The quantitative results show that the proposed method segments brain tissues accurately with respect to the corresponding ground truth. PMID:24696800
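The sigmoid mapping from SVM decision values to posterior probabilities is, in spirit, Platt scaling; below is a minimal maximum-likelihood fit in the logistic convention (the exact parameterization used in the paper may differ, and the toy data are placeholders).

```python
import numpy as np
from scipy.optimize import minimize

def fit_sigmoid(decision_values, labels):
    """Fit P(y=1|f) = 1 / (1 + exp(-(A f + B))) by minimizing the
    logistic negative log-likelihood over (A, B)."""
    f = np.asarray(decision_values, float)
    y = np.asarray(labels, float)
    def nll(params):
        z = params[0] * f + params[1]
        return np.sum(np.logaddexp(0.0, z) - y * z)
    A, B = minimize(nll, x0=[1.0, 0.0]).x
    return lambda fv: 1.0 / (1.0 + np.exp(-(A * fv + B)))

# toy decision values: positives centered at +1, negatives at -1
rng = np.random.default_rng(0)
f = np.concatenate([rng.normal(1, 1, 200), rng.normal(-1, 1, 200)])
y = np.concatenate([np.ones(200), np.zeros(200)])
prob = fit_sigmoid(f, y)
print(prob(0.0), prob(2.0))
```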
The iso-response method: measuring neuronal stimulus integration with closed-loop experiments
Gollisch, Tim; Herz, Andreas V. M.
2012-01-01
Throughout the nervous system, neurons integrate high-dimensional input streams and transform them into an output of their own. This integration of incoming signals involves filtering processes and complex non-linear operations. The shapes of these filters and non-linearities determine the computational features of single neurons and their functional roles within larger networks. A detailed characterization of signal integration is thus a central ingredient to understanding information processing in neural circuits. Conventional methods for measuring single-neuron response properties, such as reverse correlation, however, are often limited by the implicit assumption that stimulus integration occurs in a linear fashion. Here, we review a conceptual and experimental alternative that is based on exploring the space of those sensory stimuli that result in the same neural output. As demonstrated by recent results in the auditory and visual system, such iso-response stimuli can be used to identify the non-linearities relevant for stimulus integration, disentangle consecutive neural processing steps, and determine their characteristics with unprecedented precision. Automated closed-loop experiments are crucial for this advance, allowing rapid search strategies for identifying iso-response stimuli during experiments. Prime targets for the method are feed-forward neural signaling chains in sensory systems, but the method has also been successfully applied to feedback systems. Depending on the specific question, “iso-response” may refer to a predefined firing rate, single-spike probability, first-spike latency, or other output measures. Examples from different studies show that substantial progress in understanding neural dynamics and coding can be achieved once rapid online data analysis and stimulus generation, adaptive sampling, and computational modeling are tightly integrated into experiments. PMID:23267315
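In the simplest deterministic setting, a closed-loop iso-response search reduces to bisection along rays in a two-dimensional stimulus space; the quadratic toy response and target below are assumptions, and a real experiment would replace respond() with an online measurement plus adaptive sampling.

```python
import numpy as np

def iso_response_curve(respond, target, angles, r_max=10.0, tol=1e-3):
    """For each mixing angle, bisect the stimulus strength r until the
    response to (r cos a, r sin a) reaches the target output measure;
    assumes the response grows monotonically along each ray."""
    pts = []
    for a in angles:
        lo, hi = 0.0, r_max
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if respond(mid * np.cos(a), mid * np.sin(a)) < target:
                lo = mid
            else:
                hi = mid
        r = 0.5 * (lo + hi)
        pts.append((r * np.cos(a), r * np.sin(a)))
    return np.array(pts)

respond = lambda s1, s2: s1**2 + s2**2            # toy quadratic integrator
curve = iso_response_curve(respond, target=4.0,
                           angles=np.linspace(0.0, np.pi / 2, 9))
print(curve)    # points should trace the circle s1^2 + s2^2 = 4
```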
Integration of mask and silicon metrology in DFM
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Mito, Hiroaki; Sugiyama, Akiyuki; Toyoda, Yasutaka
2009-03-01
We have developed a highly integrated method of mask and silicon metrology. The method adopts a metrology management system based on DBM (Design Based Metrology): highly accurate contouring created by the edge detection algorithms used in mask CD-SEM and silicon CD-SEM. We have verified the high accuracy, stability, and reproducibility in integration experiments; the accuracy is comparable with that of mask and silicon CD-SEM metrology. In this report, we introduce the experimental results and the application. As shrinkage of design rules for semiconductor devices advances, OPC (Optical Proximity Correction) becomes aggressively dense in RET (Resolution Enhancement Technology). However, from the viewpoint of DFM (Design for Manufacturability), the cost of data processing for advanced MDP (Mask Data Preparation) and mask production is a problem. Such a trade-off between RET and mask production is a big issue in the semiconductor market, especially in the mask business. Looking at the silicon device production process, information sharing is not completely organized between the design section and the production section. Design data created with OPC and MDP should be linked to process control in production, but design data and process control data are optimized independently. Thus, we provide a DFM solution: advanced integration of mask metrology and silicon metrology. The system we propose here is composed of the following. 1) Design based recipe creation: patterns on the design data are specified for metrology. This step is fully automated since it is interfaced with hot spot coordinate information detected by various verification methods. 2) Design based image acquisition: images of mask and silicon are acquired automatically by a recipe based on the pattern design of the CD-SEM. It is a robust automated step because a wide range of design data is used for the image acquisition. 3) Contour profiling and GDS data generation: an image profiling process is applied to the acquired image based on the profiling method of the field-proven CD metrology algorithm. The detected edges are then converted to GDSII format, a standard format for design data, and utilized in various DFM systems such as simulation. Namely, by integrating pattern shapes of mask and silicon formed during a manufacturing process into GDSII format, it becomes possible to bridge highly accurate pattern profile information over to the design field of various EDA systems. These steps are fully integrated into the design data and automated. Bi-directional cross-probing between mask data and process control data is enabled by linking them. This method is a solution for total optimization that covers design, MDP, mask production, and silicon device production, and is therefore regarded as a strategic DFM approach in semiconductor metrology.
[Design method of convex master gratings for replicating flat-field concave gratings].
Zhou, Qian; Li, Li-Feng
2009-08-01
Flat-field concave diffraction gratings are the key devices of portable grating spectrometers, with the advantage of integrating dispersion, focusing and flat-field correction in a single device. Such a grating directly determines the quality of a spectrometer. The two most important performances determining the quality of the spectrometer are the spectral image quality and the diffraction efficiency. The diffraction efficiency of a grating depends mainly on its groove shape, but it has long been a problem to obtain a uniform, predetermined groove shape across the whole concave grating area, because the incident angle of the ion beam is restricted by the curvature of the concave substrate; this severely limits the diffraction efficiency and restricts the application of concave gratings. The authors present a two-step method for designing convex gratings, which are made holographically with two exposure point sources placed behind a plano-convex transparent glass substrate, to solve this problem. The convex gratings are intended to be used as the master gratings for making aberration-corrected flat-field concave gratings. To achieve high spectral image quality for the replicated concave gratings, the refraction effect at the planar back surface and the extra optical path lengths through the substrate thickness experienced by the two divergent recording beams are considered during optimization. This two-step method combines the optical-path-length function method and the ZEMAX software to complete the optimization with a high success rate and high efficiency. In the first step, the optical-path-length function method is used without considering the refraction effect to get an approximate optimization result. In the second step, the approximate result of the first step is used as the initial value for ZEMAX to complete the optimization including the refraction effect. An example design problem was considered. The simulation results of ZEMAX proved that the spectral image quality of a replicated concave grating is comparable with that of a directly recorded concave grating.
NASA Technical Reports Server (NTRS)
Kim, Sang-Wook
1988-01-01
A velocity-pressure integrated, mixed-interpolation, Galerkin finite element method for the Navier-Stokes equations is presented. In the method, the velocity variables are interpolated using complete quadratic shape functions and the pressure is interpolated using linear shape functions. For the two-dimensional case, the pressure is defined on a triangular element which is contained inside the complete biquadratic element for the velocity variables; for the three-dimensional case, the pressure is defined on a tetrahedral element which is again contained inside the complete tri-quadratic element. Thus the pressure is discontinuous across the element boundaries. Example problems considered include a cavity flow for Reynolds numbers of 400 through 10,000, a laminar backward-facing step flow, and a laminar flow in a square duct of strong curvature. The computational results compared favorably with those of finite difference methods as well as with available experimental data. A finite element computer program for incompressible, laminar flows is presented.
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to identify automatically the foot of the continental slope (FOS) based on the integrated analysis of topographic profiles. Based on the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles, then calculates the second derivative of both the original profiles and the D-P profiles. Seven steps are proposed to simplify the original profiles. Meanwhile, multiple identification criteria are proposed to determine the FOS points, including the gradient, water depth, and second-derivative values of data points, as well as the concavity/convexity, continuity, and segmentation of the topographic profiles. This method can comprehensively and intelligently analyze the topographic profiles and their derived slopes, second derivatives, and D-P profiles, and on this basis it can analyze the essential properties of every single data point in the profile. Furthermore, removal of the concave points of the curve is proposed, together with six FOS judgment criteria.
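The simplification ingredient is the classical Douglas-Peucker algorithm; a compact recursive version for (distance, depth) profile points is sketched below, with the tolerance eps and the toy shelf-slope-rise profile left as assumptions.

```python
import numpy as np

def douglas_peucker(pts, eps):
    """Simplify a polyline pts of shape (n, 2); keeps the farthest point
    from the chord whenever its perpendicular distance exceeds eps."""
    p0, p1 = pts[0], pts[-1]
    dx, dy = p1 - p0
    d = np.abs(dx * (pts[:, 1] - p0[1]) - dy * (pts[:, 0] - p0[0]))
    d = d / np.hypot(dx, dy)               # assumes p0 != p1
    i = int(np.argmax(d))
    if d[i] > eps:
        left = douglas_peucker(pts[:i + 1], eps)
        right = douglas_peucker(pts[i:], eps)
        return np.vstack([left[:-1], right])
    return np.array([p0, p1])

# toy profile: shelf, slope, rise; the simplified profile keeps the knees
x = np.linspace(0.0, 100.0, 500)
z = np.piecewise(x, [x < 30, (x >= 30) & (x < 60), x >= 60],
                 [-0.2, lambda x: -0.2 - 0.1 * (x - 30), -3.2])
print(douglas_peucker(np.column_stack([x, z]), eps=0.05))
```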
Systems and methods for knowledge discovery in spatial data
Obradovic, Zoran; Fiez, Timothy E.; Vucetic, Slobodan; Lazarevic, Aleksandar; Pokrajac, Dragoljub; Hoskinson, Reed L.
2005-03-08
Systems and methods are provided for knowledge discovery in spatial data, as well as for optimizing recipes used in spatial environments such as may be found in precision agriculture. A spatial data analysis and modeling module is provided which allows users to interactively and flexibly analyze and mine spatial data. The spatial data analysis and modeling module applies spatial data mining algorithms through a number of steps. The data loading and generation module obtains or generates spatial data and allows for basic partitioning. The inspection module provides basic statistical analysis. The preprocessing module smoothes and cleans the data and allows for basic manipulation of the data. The partitioning module provides for more advanced data partitioning. The prediction module applies regression and classification algorithms on the spatial data. The integration module enhances prediction methods by combining and integrating models. The recommendation module provides the user with site-specific recommendations as to how to optimize a recipe for a spatial environment, such as a fertilizer recipe for an agricultural field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saygin, H.; Hebert, A.
The calculation of a dilution cross section σ̄_e is the most important step in self-shielding formalisms based on the equivalence principle. If a dilution cross section that accurately characterizes the physical situation can be calculated, it can then be used for calculating the effective resonance integrals and obtaining accurate self-shielded cross sections. A new technique for the calculation of equivalent cross sections based on the formalism of Riemann integration in the resolved energy domain is proposed. This new method is compared to the generalized Stammler method, which is also based on an equivalence principle, for a two-region cylindrical cell and for a small pressurized water reactor assembly in two dimensions. The accuracy of each computing approach is assessed using reference results obtained from a fine-group slowing-down code named CESCOL. It is shown that the proposed method leads to slightly better performance than the generalized Stammler approach.
Split Space-Marching Finite-Volume Method for Chemically Reacting Supersonic Flow
NASA Technical Reports Server (NTRS)
Rizzi, Arthur W.; Bailey, Harry E.
1976-01-01
A space-marching finite-volume method employing a nonorthogonal coordinate system and using a split differencing scheme for calculating steady supersonic flow over aerodynamic shapes is presented. It is a second-order-accurate mixed explicit-implicit procedure that solves the inviscid adiabatic and nondiffusive equations for chemically reacting flow in integral conservation-law form. The relationship between the finite-volume and differential forms of the equations is examined and the relative merits of each discussed. The method admits initial Cauchy data situated on any arbitrary surface and integrates them forward along a general curvilinear coordinate, distorting and deforming the surface as it advances. The chemical kinetics term is split from the convective terms which are themselves dimensionally split, thereby freeing the fluid operators from the restricted step size imposed by the chemical reactions and increasing the computational efficiency. The accuracy of this splitting technique is analyzed, a sufficient stability criterion is established, a representative flow computation is discussed, and some comparisons are made with another method.
2012-01-01
Background Optimization of the clinical care process by integration of evidence-based knowledge is one of the active components in care pathways. When studying the impact of a care pathway by using a cluster-randomized design, standardization of the care pathway intervention is crucial. This methodology paper describes the development of the clinical content of an evidence-based care pathway for in-hospital management of chronic obstructive pulmonary disease (COPD) exacerbation in the context of a cluster-randomized controlled trial (cRCT) on care pathway effectiveness. Methods The clinical content of a care pathway for COPD exacerbation was developed based on recognized process design and guideline development methods. Subsequently, based on the COPD case study, a generalized eight-step method was designed to support the development of the clinical content of an evidence-based care pathway. Results A set of 38 evidence-based key interventions and a set of 24 process and 15 outcome indicators were developed in eight different steps. Nine Belgian multidisciplinary teams piloted both the set of key interventions and indicators. The key intervention set was judged by the teams as being valid and clinically applicable. In addition, the pilot study showed that the indicators were feasible for the involved clinicians and patients. Conclusions The set of 38 key interventions and the set of process and outcome indicators were found to be appropriate for the development and standardization of the clinical content of the COPD care pathway in the context of a cRCT on pathway effectiveness. The developed eight-step method may facilitate multidisciplinary teams caring for other patient populations in designing the clinical content of their future care pathways. PMID:23190552
Integrated Modeling Tools for Thermal Analysis and Applications
NASA Technical Reports Server (NTRS)
Milman, Mark H.; Needels, Laura; Papalexandris, Miltiadis
1999-01-01
Integrated modeling of spacecraft systems is a rapidly evolving area in which multidisciplinary models are developed to design and analyze spacecraft configurations. These models are especially important in the early design stages, where rapid trades between subsystems can substantially impact design decisions. Integrated modeling is one of the cornerstones of two of NASA's planned missions in the Origins Program -- the Next Generation Space Telescope (NGST) and the Space Interferometry Mission (SIM). Common modeling tools for control design and opto-mechanical analysis have recently emerged and are becoming increasingly widely used. A discipline that has been somewhat less integrated, but is nevertheless of critical concern for high precision optical instruments, is thermal analysis and design. A major factor contributing to this mild estrangement is that the modeling philosophies and objectives for structural and thermal systems typically do not coincide. Consequently, the tools that are used in these disciplines suffer a degree of incompatibility, each having developed along its own evolutionary path. Although standard thermal tools have worked relatively well in the past, integration with other disciplines requires revisiting modeling assumptions and solution methods. Over the past several years we have been developing a MATLAB based integrated modeling tool called IMOS (Integrated Modeling of Optical Systems), which integrates many aspects of the structural, optical, control and dynamical analysis disciplines. Recent efforts have included developing a thermal modeling and analysis capability, which is the subject of this article. Currently, the IMOS thermal suite contains steady state and transient heat equation solvers, and the ability to set up the linear conduction network from an IMOS finite element model. The IMOS code generates linear conduction elements associated with plates and beams/rods of the thermal network directly from the finite element structural model. Conductances for temperature-varying materials are accommodated. This capability both streamlines the process of developing the thermal model from the finite element model and makes the structural and thermal models compatible in the sense that each structural node is associated with a thermal node. This is particularly useful when the purpose of the analysis is to predict structural deformations due to thermal loads. The steady state solver uses a restricted step size Newton method, and the transient solver is an adaptive step size implicit method applicable to general differential algebraic systems. Temperature dependent conductances and capacitances are accommodated by the solvers. In addition to discussing the modeling and solution methods, applications are presented where the thermal modeling is "in the loop" with sensitivity analysis, optimization and optical performance, drawn from our experiences with the Space Interferometry Mission (SIM) and the Next Generation Space Telescope (NGST).
Intracranial Cortical Responses during Visual–Tactile Integration in Humans
Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric
2014-01-01
Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the sum of the unisensory responses with the bimodal response to identify multisensory responses. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo-parieto-occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
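Step one of such a metric — comparing the bimodal response with the additive unisensory prediction — can be written compactly; the trial-averaged evoked measure, the crude per-timepoint test, and the sign convention (negative interaction terms pointing to suppression) are assumptions of this sketch.

```python
import numpy as np
from scipy import stats

def interaction_term(uni_a, uni_b, bimodal):
    """Inputs of shape (n_trials, n_timepoints). Returns the per-timepoint
    difference between the bimodal response and the sum of the unisensory
    responses, plus a per-timepoint p-value for that difference."""
    additive = uni_a.mean(axis=0) + uni_b.mean(axis=0)
    diff = bimodal.mean(axis=0) - additive
    _, p = stats.ttest_1samp(bimodal - additive, 0.0, axis=0)
    return diff, p
```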
Elfakey, Walyeldin Em; Al-Ghamdi, Ahmed H
2016-01-01
The Faculty of Medicine, Al-Baha University (FMBU), is a newly established medical school that implements a community-oriented and integrated system-based curriculum, suitable for both medical students and serving the needs of the local community. The aim of this study is to describe the steps that were followed to plan, design, and implement an endocrinology and endocrine surgery module (EESM) for the fourth-year medical students, as an example of how system-based modules are designed at FMBU. Ten questions based on Harden's methodology were asked in order to design, plan, and implement the endocrinology and endocrine surgery module. The module committee determined the needs of the module and accordingly stated its aims and objectives. The module planners selected the relevant contents, teaching methods, and assessment strategies and organized them. After addressing each of the ten questions, the results indicated the need, aim, objectives, and contents for the endocrinology and endocrine surgery module at FMBU. The implementation strategies were chosen according to the SPICES model. The teaching methods and the assessment strategies were selected and arranged. The module is well communicated at all levels, and the module committee made every effort to create a productive teaching environment. The module is well managed and follows the hierarchy of FMBU. Implementing Harden's ten-step methodology resulted in an integrated module of endocrinology and endocrine surgery in which related disciplines and systems were merged and medical and surgical endocrine topics were included.
NASA Astrophysics Data System (ADS)
Schmitt, R.; Niggemann, C.; Mersmann, C.
2008-04-01
Fibre-reinforced plastics (FRP) are particularly suitable for components where light-weight structures with advanced mechanical properties are required, e.g. for aerospace parts. Nevertheless, many manufacturing processes for FRP include manual production steps without integrated quality control. A vital step in the process chain is the lay-up of the textile preform, as it greatly affects the geometry and the mechanical performance of the final part. In order to automate FRP production, an inline machine vision system is needed for closed-loop control of the preform lay-up. This work describes the development of a novel laser light-section sensor for optical inspection of textile preforms and its integration and validation in a machine vision prototype. The proposed method aims at determining the contour position of each textile layer through edge scanning. The scanning route is automatically derived in a preliminary step using texture analysis algorithms. As the sensor output, a distinct stage profile is computed from the acquired greyscale image. The contour position is determined with sub-pixel accuracy using a novel algorithm based on a non-linear least-squares fit to a sigmoid function. The whole contour position is generated through data fusion of the measured edge points. The proposed method provides robust process automation for FRP production, improving process quality and reducing the scrap quota. Hence, the range of economically feasible FRP products can be increased and new market segments with cost-sensitive products can be addressed.
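A minimal sketch of the sub-pixel edge localization step, assuming a 1D greyscale profile across the edge; the sigmoid parameterization and initial guesses are illustrative, not the authors' algorithm.

```python
# Sketch: sub-pixel edge position from a least-squares sigmoid fit to a
# 1D greyscale profile; the edge is taken at the inflection point x0.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, lo, hi, x0, w):
    return lo + (hi - lo) / (1.0 + np.exp(-(x - x0) / w))

def subpixel_edge(profile):
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.min(), profile.max(), profile.size / 2.0, 1.0]  # rough guess
    popt, _ = curve_fit(sigmoid, x, profile, p0=p0)
    return popt[2]  # x0: edge position with sub-pixel resolution
```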
Vandenbussche, Pierre-Yves; Cormont, Sylvie; André, Christophe; Daniel, Christel; Delahousse, Jean; Charlet, Jean; Lepage, Eric
2013-01-01
Objective: This study shows the evolution of a biomedical observation dictionary within the Assistance Publique-Hôpitaux de Paris (AP-HP), the largest European university hospital group. The different steps are detailed as follows: the dictionary creation, the mapping to Logical Observation Identifiers Names and Codes (LOINC), the integration into a multiterminological management platform and, finally, the implementation in the health information system. Methods: AP-HP decided to create a biomedical observation dictionary named AnaBio, to map it to LOINC and to maintain the mapping. A management platform based on methods used for knowledge engineering has been put in place. It aims at integrating AnaBio within the health information system and improving both the quality and stability of the dictionary. Results: This new management platform is now active in AP-HP. The AnaBio dictionary is shared by 120 laboratories and currently includes 50 000 codes. The mapping implementation to LOINC reaches 40% of the AnaBio entries and uses 26% of LOINC records. The results of our work validate the choice made to develop a local dictionary aligned with LOINC. Discussion and Conclusions: This work constitutes a first step towards a wider use of the platform. The next step will support the entire biomedical production chain, from the clinician prescription, through laboratory tests tracking in the laboratory information system, to the communication of results and their use for decision support and biomedical research. In addition, the increase in the mapping implementation to LOINC ensures the interoperability allowing communication with other international health institutions. PMID:23635601
Boundary-element modelling of dynamics in external poroviscoelastic problems
NASA Astrophysics Data System (ADS)
Igumnov, L. A.; Litvinchuk, S. Yu; Ipatov, A. A.; Petrov, A. N.
2018-04-01
A problem of a spherical cavity in porous media is considered. The porous media are assumed to be isotropic poroelastic or isotropic poroviscoelastic. The poroviscoelastic formulation is treated as a combination of Biot's theory of poroelasticity and the elastic-viscoelastic correspondence principle. The viscoelastic models considered are Kelvin–Voigt, the standard linear solid, and a model with a weakly singular kernel. The boundary fields are studied with the boundary element method, using the direct approach. The numerical scheme is based on the collocation method, a regularized boundary integral equation, and a Radau-type time-stepping scheme.
Stochastic, real-space, imaginary-time evaluation of third-order Feynman-Goldstone diagrams
NASA Astrophysics Data System (ADS)
Willow, Soohaeng Yoo; Hirata, So
2014-01-01
A new, alternative set of interpretation rules of Feynman-Goldstone diagrams for many-body perturbation theory is proposed, which translates diagrams into algebraic expressions suitable for direct Monte Carlo integrations. A vertex of a diagram is associated with a Coulomb interaction (rather than a two-electron integral) and an edge with the trace of a Green's function in real space and imaginary time. With these, 12 diagrams of third-order many-body perturbation (MP3) theory are converted into 20-dimensional integrals, which are then evaluated by a Monte Carlo method. It uses redundant walkers for convergence acceleration and a weight function for importance sampling in conjunction with the Metropolis algorithm. The resulting Monte Carlo MP3 method has low-rank polynomial size dependence of the operation cost, a negligible memory cost, and a naturally parallel computational kernel, while reproducing the correct correlation energies of small molecules within a few mEh after 10^6 Monte Carlo steps.
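The sampling strategy can be illustrated generically (this is not the MC-MP3 code, and the 20-dimensional integrals are replaced by a 1D toy integral): draw samples from a normalized weight function w with the Metropolis algorithm and average f/w.

```python
# Toy sketch of Metropolis importance sampling for an integral I = \int f dx,
# estimated as E_w[f/w] with samples drawn from the weight w.
import numpy as np

rng = np.random.default_rng(0)

def w(x):   # normalized sampling weight: standard normal density
    return np.exp(-0.5 * x * x) / np.sqrt(2.0 * np.pi)

def f(x):   # integrand chosen so the exact integral is 1: f = x^2 * w
    return x * x * w(x)

x, samples = 0.0, []
for _ in range(100_000):
    x_try = x + rng.normal(scale=1.5)              # random-walk proposal
    if rng.random() < min(1.0, w(x_try) / w(x)):   # Metropolis accept/reject
        x = x_try
    samples.append(f(x) / w(x))                    # ratio estimator I = E_w[f/w]
print(np.mean(samples))   # -> approximately 1.0
```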
Effect of ITE and nozzle exit cone erosion on specific impulse of solid rocket motors
NASA Astrophysics Data System (ADS)
Smith-Kent, Randall; Ridder, Jeffrey P.; Loh, Hai-Tien; Abel, Ralph
1993-06-01
Specific impulse loss due to the use of a slowly eroding integral throat entrance, or a throat insert, with a faster eroding nozzle exit cone is studied. It is suggested that an oblique shock wave produced by step-off erosion results in a loss of specific impulse. This is studied by use of a shock-capturing CFD method. The shock loss predictions for first-stage Peacekeeper and Castor 25 motors are found to match the trends of the test data. This work suggests that a previously unaccounted-for loss mechanism should be considered in the specific impulse prediction procedure for nozzles with step-off exit cone erosion.
A Bluetooth/PDR Integration Algorithm for an Indoor Positioning System.
Li, Xin; Wang, Jian; Liu, Chunyan
2015-09-25
This paper proposes two schemes for indoor positioning that fuse Bluetooth beacons and a pedestrian dead reckoning (PDR) technique to provide meter-level positioning without additional infrastructure. For the PDR approach, a more effective multi-threshold step detection algorithm is used to improve the positioning accuracy. Taking into account pedestrians' different walking patterns, such as walking or running, this paper makes a comparative analysis of multiple step length calculation models to determine a linear computation model and the relevant parameters. In consideration of the deviation between the real heading and the value of the orientation sensor, a heading estimation method with real-time compensation is proposed, based on a Kalman filter with map geometry information. The corrected heading can inhibit the accumulation of positioning error and improve the positioning accuracy of PDR. Moreover, this paper implements two positioning approaches integrating Bluetooth and PDR. One is a PDR-based positioning method with map matching and position correction through Bluetooth; it requires neither heavy computation nor high maintenance costs. The other is a fusion calculation method that uses the pedestrian's moving status (direct movement or making a turn) to adaptively determine the noise parameters in an Extended Kalman Filter (EKF) system. This method works very well in eliminating various phenomena, including the "go and back" phenomenon caused by the instability of the Bluetooth-based positioning system and the "cross-wall" phenomenon due to the accumulated errors of the PDR algorithm. Experiments performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus showed that the proposed scheme can reliably achieve 2-meter precision.
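A highly simplified PDR sketch follows, assuming threshold values, step-length coefficients, and function names that are placeholders rather than the paper's calibrated parameters.

```python
# Simplified PDR sketch: threshold step detection on acceleration magnitude,
# a linear step-length model, and dead-reckoned position updates.
import numpy as np

def detect_steps(acc_mag, thr_hi=11.0, thr_lo=9.0):
    """Indices where |a| rises above thr_hi after having dropped below thr_lo
    (a two-threshold scheme to avoid double-counting one step)."""
    steps, armed = [], True
    for i, a in enumerate(acc_mag):
        if armed and a > thr_hi:
            steps.append(i); armed = False
        elif a < thr_lo:
            armed = True
    return steps

def step_length(freq_hz, a=0.35, b=0.15):
    """Linear step-length model L = a*f + b; coefficients are placeholders."""
    return a * freq_hz + b

def dead_reckon(pos, heading_rad, length):
    """Advance the 2D position by one step along the (corrected) heading."""
    return (pos[0] + length * np.cos(heading_rad),
            pos[1] + length * np.sin(heading_rad))
```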
Buckingham, Christopher D; Adams, Ann; Vail, Laura; Kumar, Ashish; Ahmed, Abu; Whelan, Annie; Karasouli, Eleni
2015-10-01
To develop a decision support system (DSS), myGRaCE, that integrates service user (SU) and practitioner expertise about mental health and associated risks of suicide, self-harm, harm to others, self-neglect, and vulnerability. The intention is to help SUs assess and manage their own mental health collaboratively with practitioners. Requirements for myGRaCE were elicited and implemented through an iterative process of interviews, focus groups, and agile software development with 115 SUs. Findings highlight a shared understanding of mental health risk between SUs and practitioners that can be integrated within a single model. However, important differences were revealed in SUs' preferred process of assessing risks and safety, which are reflected in the distinctive interface, navigation, tool functionality, and language developed for myGRaCE. A challenge was how to provide flexible access without overwhelming and confusing users. The methods show that practitioner expertise can be reformulated in a format that simultaneously captures SU expertise, providing a tool highly valued by SUs. A stepped process adds necessary structure to the assessment, each step with its own feedback and guidance. The GRiST web-based DSS (www.egrist.org) links and integrates myGRaCE self-assessments with GRiST practitioner assessments to support collaborative and self-managed healthcare.
Vibration control by limiting the maximum axial forces in space trusses
NASA Technical Reports Server (NTRS)
Chawla, Vikas; Utku, Senol; Wada, Ben K.
1993-01-01
Proposed here is a method of vibration control based on limiting the maximum axial forces in the active members of an adaptive truss. The actuators simulate elastic rigid-plastic behavior and consume the vibrational energy as work. The method is applicable to both statically determinate and statically indeterminate truss structures. However, for energy-efficient control of statically indeterminate trusses, extra actuators may be provided on the redundant bars. An energy formulation relating the various control parameters is derived to obtain an estimate of the control time. Since the simulation of elastic rigid-plastic behavior requires a piecewise linear control law, a general analytical solution is not possible. Numerical simulation by step-by-step integration is performed to simulate the control of an example truss structure. The problems of application to statically indeterminate trusses and optimal actuator placement are identified for future work.
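The energy-consuming, force-limited actuator idea can be sketched on a single degree of freedom (a toy stand-in for the truss, with invented parameters): the active member opposes the velocity with a force saturated at F_max, removing vibrational energy as work through step-by-step integration.

```python
# Toy single-DOF sketch of vibration damping by a force-limited member.
import numpy as np

m, k, F_max, dt = 1.0, 100.0, 2.0, 1e-3
x, v = 0.1, 0.0                       # initial displacement, at rest
for n in range(20000):                # step-by-step (semi-implicit Euler) integration
    F_act = -F_max * np.sign(v)       # saturated actuator force opposing motion
    v += dt * (-k * x + F_act) / m
    x += dt * v
energy = 0.5 * m * v**2 + 0.5 * k * x**2
# Energy decays toward a small residual set by the F_max/k dead band.
print(f"residual energy: {energy:.2e}")
```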
A multistage gene normalization system integrating multiple effective methods.
Li, Lishuang; Liu, Shanshan; Li, Lihua; Fan, Wenting; Huang, Degen; Zhou, Huiwei
2013-01-01
Gene/protein recognition and normalization is an important preliminary step for many biological text mining tasks. In this paper, we present a multistage gene normalization system which consists of four major subtasks: pre-processing, dictionary matching, ambiguity resolution and filtering. For the first subtask, we apply the gene mention tagger developed in our earlier work, which achieves an F-score of 88.42% on the BioCreative II GM testing set. In the stage of dictionary matching, the exact matching and approximate matching between gene names and the EntrezGene lexicon have been combined. For the ambiguity resolution subtask, we propose a semantic similarity disambiguation method based on Munkres' Assignment Algorithm. At the last step, a filter based on Wikipedia has been built to remove the false positives. Experimental results show that the presented system can achieve an F-score of 90.1%, outperforming most of the state-of-the-art systems.
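A minimal sketch of casting ambiguity resolution as an assignment problem, in the spirit of the paper's use of Munkres' algorithm (scipy's linear_sum_assignment is used as a stand-in; the mentions, candidates, and similarity scores are fabricated).

```python
# Sketch: resolve gene-mention ambiguity as a maximum-similarity assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

mentions = ["p53", "BRCA1-like", "erk"]
candidates = ["TP53", "BRCA1", "MAPK1", "NBR1"]
sim = np.array([[0.9, 0.1, 0.0, 0.1],   # semantic similarity, mention x candidate
                [0.1, 0.8, 0.0, 0.6],
                [0.0, 0.1, 0.7, 0.0]])
rows, cols = linear_sum_assignment(-sim)  # negate to maximize total similarity
for r, c in zip(rows, cols):
    print(mentions[r], "->", candidates[c], f"(score {sim[r, c]:.1f})")
```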
Accelerating Time Integration for the Shallow Water Equations on the Sphere Using GPUs
Archibald, R.; Evans, K. J.; Salinger, A.
2015-06-01
The push towards larger and larger computational platforms has made it possible for climate simulations to resolve climate dynamics across multiple spatial and temporal scales. This direction in climate simulation has created a strong need to develop scalable time-stepping methods capable of accelerating throughput on high performance computing. This study details the recent advances in the implementation of implicit time stepping of the spectral element dynamical core within the United States Department of Energy (DOE) Accelerated Climate Model for Energy (ACME) on graphical processing unit (GPU) based machines. We demonstrate how solvers in the Trilinos project are interfaced with ACME and GPU kernels to increase the computational speed of the residual calculations in the implicit time stepping method for the atmosphere dynamics. We demonstrate the optimization gains and data structure reorganization that facilitate the performance improvements.
Geometrically derived difference formulae for the numerical integration of trajectory problems
NASA Technical Reports Server (NTRS)
Mcleod, R. J. Y.; Sanz-Serna, J. M.
1982-01-01
An initial value problem for the autonomous system of ordinary differential equations dy/dt = f(y), where y is a vector, is considered. In a number of practical applications the interest lies in obtaining the curve traced by the solution y. These applications include the computation of trajectories in mechanical problems. The term 'trajectory problem' is employed to refer to these cases. Lambert and McLeod (1979) have introduced a method involving local rotation of the axes in the y-plane for the two-dimensional case. The present investigation continues the study of difference schemes specifically derived for trajectory problems. A simple geometrical way of constructing such methods is presented, and the local accuracy of the schemes is investigated. A circularly exact, fixed-step predictor-corrector algorithm is defined, and a variable-step version of a circularly exact algorithm is presented.
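For contrast with the circularly exact schemes discussed here, a generic fixed-step predictor-corrector (Heun's method) for tracing the solution curve of dy/dt = f(y) looks like the sketch below; it is second-order accurate but, unlike the schemes of the paper, does not reproduce circles exactly.

```python
# Generic fixed-step predictor-corrector (Heun) for tracing dy/dt = f(y).
import numpy as np

def heun_trajectory(f, y0, h, n_steps):
    ys = [np.asarray(y0, dtype=float)]
    for _ in range(n_steps):
        y = ys[-1]
        y_pred = y + h * f(y)                        # predictor (explicit Euler)
        ys.append(y + 0.5 * h * (f(y) + f(y_pred)))  # corrector (trapezoidal)
    return np.array(ys)

# Circular test trajectory: f rotates y, so the exact orbit is a circle.
traj = heun_trajectory(lambda y: np.array([-y[1], y[0]]), [1.0, 0.0], 0.05, 400)
```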
Analytical Prediction of Damage Growth in Notched Composite Panels Loaded in Axial Compression
NASA Technical Reports Server (NTRS)
Ambur, Damodar R.; McGowan, David M.; Davila, Carlos G.
1999-01-01
A progressive failure analysis method based on shell elements is developed for the computation of damage initiation and growth in stiffened thick-skin stitched graphite-epoxy panels loaded in axial compression. The analysis method involves a step-by-step simulation of material degradation based on ply-level failure mechanisms. High computational efficiency derives from the use of superposed layers of shell elements to model each ply orientation in the laminate. Multiple integration points through the thickness are used to obtain the correct bending effects without the need for ply-by-ply evaluations of the state of the material. The analysis results are compared with experimental results for three stiffened panels with notches oriented at 0, 15, and 30 degrees to the panel width dimension. A parametric study is performed to investigate the damage growth retardation characteristics of the Kevlar stitch lines in the panel.
NASA Astrophysics Data System (ADS)
Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han
2017-11-01
Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, the existing techniques and methodologies for degradation measurement still face challenges, such as lack of appropriate degradation indicator, insufficient accuracy, and poor capability to track the data fluctuation. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for the dynamic update of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measuring accuracy and data fluctuation tracing, in comparison with other conventional methods.
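A bare-bones sketch of the grey-model component follows: fit GM(1,1) to a moving window of degradation indices and extrapolate. The Markov state correction, adaptive window sizing, and combination models of the paper are omitted, and the sample data are invented.

```python
# Sketch: GM(1,1) forecast over a moving window of degradation indices.
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    """Fit GM(1,1) to the window x0 and predict n_ahead future values."""
    x1 = np.cumsum(x0)                                # accumulated series (AGO)
    z = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing/grey coefficients
    k = np.arange(len(x0), len(x0) + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
    return x1_hat - x1_prev                           # inverse AGO

window = np.array([10.2, 10.9, 11.8, 12.4, 13.5])     # latest degradation indices
print(gm11_forecast(window, n_ahead=2))
```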
Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method
NASA Astrophysics Data System (ADS)
Taitano, William; Knoll, Dana; Chacon, Luis
2009-11-01
The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we advance the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
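The core idea, converging an implicit time step without ever forming the Jacobian, can be sketched with scipy's newton_krylov on a toy nonlinear system (this stands in for, and is far simpler than, a Vlasov solver).

```python
# Sketch: one fully implicit (backward Euler) time step solved Jacobian-free
# with a Newton-Krylov method; f is a toy nonlinear right-hand side.
import numpy as np
from scipy.optimize import newton_krylov

def f(u):   # cubic damping plus a discrete diffusion stencil (periodic)
    return -u**3 + np.roll(u, 1) - 2.0 * u + np.roll(u, -1)

def implicit_euler_step(u_n, dt):
    residual = lambda u: u - u_n - dt * f(u)   # F(u_{n+1}) = 0
    return newton_krylov(residual, u_n)        # Jacobian never formed explicitly

u = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
for _ in range(10):
    u = implicit_euler_step(u, dt=0.1)
```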
Ceramic honeycomb structures and the method thereof
NASA Technical Reports Server (NTRS)
Riccitiello, Salvatore R. (Inventor); Cagliostro, Domenick E. (Inventor)
1987-01-01
The subject invention pertains to a method of producing an improved composite-composite honeycomb structure for aircraft or aerospace use. Specifically, the subject invention relates to a method for the production of a lightweight ceramic-ceramic composite honeycomb structure, which method comprises: (1) pyrolyzing a loosely woven fabric/binder having a honeycomb shape and having a high char yield and geometric integrity after pyrolysis at between about 700 and 1,100 C; (2) substantially evenly depositing at least one layer of ceramic material on the pyrolyzed fabric/binder of step (1); (3) recovering the coated ceramic honeycomb structure; (4) removing the pyrolyzed fabric/binder of the structure of step (3) by slow pyrolysis at between 700 and 1,000 C in an atmosphere of about 2 to 5% oxygen by volume for between about 0.5 and 5 hr; and (5) substantially evenly depositing on and within the rigid hollow honeycomb structure at least one additional layer of the same or a different ceramic material by chemical vapor deposition and chemical vapor infiltration. The honeycomb-shaped ceramic articles have enhanced physical properties and are useful in aircraft and aerospace applications.
Preoperative Planning in Orthopaedic Surgery. Current Practice and Evolving Applications.
Atesok, Kivanc; Galos, David; Jazrawi, Laith M; Egol, Kenneth A
2015-12-01
Preoperative planning is an essential prerequisite for the success of orthopaedic procedures. Traditionally, the exercise has involved writing down a step-by-step "blueprint" of the surgical procedure. Preoperative planning of the technical aspects of the orthopaedic procedure has been performed on hardcopy radiographs using various methods, such as copying the radiographic image onto tracing paper to practice the planned interventions. This method has become less practical due to variability in radiographic magnification and the increasing implementation of digital imaging systems. Advances in technology, along with recognition of the importance of surgical safety protocols, have resulted in widespread changes in orthopaedic preoperative planning approaches. Nowadays, perioperative "briefings" have gained particular importance, and novel planning methods have started to be integrated into orthopaedic practice. These methods include using software that enables surgeons to perform preoperative planning on digital radiographs and to construct 3D digital models or prototypes of various orthopaedic pathologies from a patient's CT scans to practice on preoperatively. Evidence to date suggests that preoperative planning and briefings are effective means of favorably influencing the outcomes of orthopaedic procedures.
Aguirre-Junco, Angel-Ricardo; Colombet, Isabelle; Zunino, Sylvain; Jaulent, Marie-Christine; Leneveut, Laurence; Chatellier, Gilles
2004-01-01
The initial step in the computerization of guidelines is the specification of knowledge from the prose text of the guidelines. We describe a method of knowledge specification based on a structured and systematic analysis of the text, allowing detailed specification of a decision tree. We use decision tables to validate the decision algorithm, and decision trees to specify and represent this algorithm, along with elementary messages of recommendation. Editing tools are also necessary to facilitate the process of validation and the workflow between the expert physicians who validate the specified knowledge and the computer scientists who encode it in a guideline model. Applied to eleven different guidelines issued by an official agency, the method allows quick and valid computerization and integration into a larger decision support system called EsPeR (Personalized Estimate of Risks). The quality of the text guidelines, however, still needs further improvement. The method used for computerization could help define a framework usable at the initial step of guideline development, in order to produce guidelines ready for electronic implementation.
Semi-implicit integration factor methods on sparse grids for high-dimensional systems
NASA Astrophysics Data System (ADS)
Wang, Dongyong; Chen, Weitao; Nie, Qing
2015-07-01
Numerical methods for partial differential equations in high-dimensional spaces are often limited by the curse of dimensionality. Though the sparse grid technique, based on a one-dimensional hierarchical basis through tensor products, is popular for handling challenges such as those associated with spatial discretization, the stability conditions on time step size due to temporal discretization, such as those associated with high-order derivatives in space and stiff reactions, remain. Here, we incorporate the sparse grids with the implicit integration factor method (IIF) that is advantageous in terms of stability conditions for systems containing stiff reactions and diffusions. We combine IIF, in which the reaction is treated implicitly and the diffusion is treated explicitly and exactly, with various sparse grid techniques based on the finite element and finite difference methods and a multi-level combination approach. The overall method is found to be efficient in terms of both storage and computational time for solving a wide range of PDEs in high dimensions. In particular, the IIF with the sparse grid combination technique is flexible and effective in solving systems that may include cross-derivatives and non-constant diffusion coefficients. Extensive numerical simulations in both linear and nonlinear systems in high dimensions, along with applications of diffusive logistic equations and Fokker-Planck equations, demonstrate the accuracy, efficiency, and robustness of the new methods, indicating potential broad applications of the sparse grid-based integration factor method.
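A small 1D sketch of the first-order implicit integration factor (IIF1) idea, under periodic boundary conditions: diffusion is propagated exactly through a matrix exponential, while the stiff reaction is treated implicitly; because the reaction is local, the implicit solve decouples into scalar Newton iterations at each grid point. The logistic reaction and all parameters below are illustrative, and the sparse-grid machinery of the paper is omitted.

```python
# Sketch of IIF1 for u_t = D u_xx + f(u) on a periodic 1D grid:
#   u^{n+1} = e^{L dt} u^n + dt f(u^{n+1}),  solved pointwise by Newton.
import numpy as np
from scipy.linalg import expm

N, D, dt, h = 64, 1.0, 0.05, 1.0 / 64
L = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
L[0, -1] = L[-1, 0] = 1.0                 # periodic Laplacian stencil
E = expm(D * dt / h**2 * L)               # exact diffusion propagator

f = lambda u: u * (1.0 - u)               # logistic reaction
fp = lambda u: 1.0 - 2.0 * u              # its derivative

u = 0.5 + 0.1 * np.sin(2 * np.pi * np.arange(N) / N)
for step in range(100):
    v = E @ u                             # exact diffusion part
    u_new = v.copy()
    for it in range(20):                  # pointwise scalar Newton for the reaction
        u_new -= (u_new - v - dt * f(u_new)) / (1.0 - dt * fp(u_new))
    u = u_new
```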
Eaton, William P.; Staple, Bevan D.; Smith, James H.
2000-01-01
A microelectromechanical (MEM) capacitance pressure sensor integrated with electronic circuitry on a common substrate and a method for forming such a device are disclosed. The MEM capacitance pressure sensor includes a capacitance pressure sensor formed at least partially in a cavity etched below the surface of a silicon substrate and adjacent circuitry (CMOS, BiCMOS, or bipolar circuitry) formed on the substrate. By forming the capacitance pressure sensor in the cavity, the substrate can be planarized (e.g. by chemical-mechanical polishing) so that a standard set of integrated circuit processing steps can be used to form the electronic circuitry (e.g. using an aluminum or aluminum-alloy interconnect metallization).
Variational methods for direct/inverse problems of atmospheric dynamics and chemistry
NASA Astrophysics Data System (ADS)
Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena
2013-04-01
We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes is provided in the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve the problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. Such an approach allows us to consider separately, for example, the space-time problems of atmospheric chemistry within decomposition schemes for the integral identity sum analogs of the variational principle, at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of these adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices which arise in traditional implicit schemes for ODE solution. This is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved. For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of the chemical substance concentrations and possess the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion and transformation problems are direct (without iterations). The work is partially supported by Programs No 4 of the Presidium RAS and No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
References
[1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330.
[2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341.
[3] Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
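For a single species with production P and destruction D·c, the integrating-factor idea reduces to a closed-form, positivity-preserving update; the following sketch (with invented rate values, and P, D frozen over the step) shows the flavor of how Jacobian construction and inversion are avoided.

```python
# Sketch: exact integrating-factor update for c' = P - D*c over one step.
import numpy as np

def if_step(c, P, D, dt):
    """Multiplying by e^{D t} turns the ODE into an integral relation whose
    exact solution over dt is this update; it keeps c positive by construction."""
    return c * np.exp(-D * dt) + (P / D) * (1.0 - np.exp(-D * dt))

c = 1.0
for _ in range(100):
    c = if_step(c, P=0.5, D=2.0, dt=0.1)
print(c)   # relaxes toward the equilibrium P/D = 0.25
```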
Laser surface structuring of AZ31 Mg alloy for controlled wettability.
Gökhan Demir, Ali; Furlan, Valentina; Lecis, Nora; Previtali, Barbara
2014-06-01
Structured surfaces exhibit functional properties that can enhance the performance of a bioimplant in terms of biocompatibility, adhesion, or corrosion behavior. In order to tailor the surface properties, chemical and physical methods can be used in a sequence of many steps. On the other hand, laser surface processing can provide a single-step solution to achieve the designated surface function with simpler equipment and high repeatability. This work details the single-step laser surface structuring, based on remelting, of AZ31, a biocompatible and biodegradable Mg alloy. The surfaces are characterized in terms of topography, chemistry, and physical integrity, and the effective change in the surface wetting behavior is demonstrated. The results imply great potential for local or complete surface structuring of medical implants for functionalization through flexible positioning of the laser beam.
Moazzez, Behrang; O'Brien, Stacey M.; Merschrod S., Erika F.
2013-01-01
We present and analyze a method to improve the morphology and mechanical properties of gold thin films for use in optical sensors or other settings where good adhesion of gold to a substrate is of importance and where controlled topography/roughness is key. To improve the adhesion of thermally evaporated gold thin films, we introduce a gold deposition step on SU-8 photoresist prior to UV exposure but after the pre-bake step of SU-8 processing. Shrinkage and distribution of residual stresses, which occur during cross-linking of the SU-8 polymer layer in the post-exposure baking step, are responsible for the higher adhesion of the top gold film to the post-deposition cured SU-8 sublayer. The SU-8 underlayer can also be used to tune the resulting gold film morphology. Our promoter-free protocol is easily integrated with existing sensor microfabrication processes. PMID:23760086
Design, Fabrication, Characterization and Modeling of Integrated Functional Materials
2014-10-01
Anodic aluminum oxide (AAO) membranes were fabricated from high-purity aluminum foil (99.999%) by an electrochemical route using a controlled two-step anodization ... deposition of Fe and Co in anodized alumina templates. We used commercially prepared AAO templates which had pore diameters of 100 nm (300 nm) ... a thermal decomposition method. The final product was suspended in high-purity hexane to create a ferrofluid. Custom highly ordered anodic aluminum ...
Integration of enabling methods for the automated flow preparation of piperazine-2-carboxamide
Ingham, Richard J; Battilocchio, Claudio; Hawkins, Joel M
2014-01-01
Here we describe the use of a new open-source software package and a Raspberry Pi® computer for the simultaneous control of multiple flow chemistry devices, and its application to a machine-assisted, multi-step flow preparation of pyrazine-2-carboxamide – a component of Rifater®, used in the treatment of tuberculosis – and its reduced derivative piperazine-2-carboxamide. PMID:24778715
Multibody Parachute Flight Simulations for Planetary Entry Trajectories Using "Equilibrium Points"
NASA Technical Reports Server (NTRS)
Raiszadeh, Ben
2003-01-01
A method has been developed to reduce the numerical stiffness and computer CPU requirements of high fidelity multibody flight simulations involving parachutes for planetary entry trajectories. Typical parachute entry configurations consist of entry bodies suspended from a parachute, connected by flexible lines. To accurately calculate line forces and moments, the simulations need to keep track of the point where the flexible lines meet (the confluence point). In previous multibody parachute flight simulations, the confluence point has been modeled as a point mass. Using a point mass for the confluence point tends to make the simulation numerically stiff, because its mass is typically much less than the main rigid body masses. One solution for stiff differential equations is to use a very small integration time step; however, this results in large computer CPU requirements. In the method described in this paper, the need for a mass at the confluence point has been eliminated. Instead, the confluence point is modeled as an "equilibrium point", calculated at every integration step as the point at which the sum of all line forces is zero (static equilibrium). The use of this equilibrium point has the advantage of both reducing the numerical stiffness of the simulations and eliminating the dynamical equations associated with vibration of a lumped mass on a high-tension string.
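A sketch of computing such an equilibrium point at one integration step, assuming simple elastic, tension-only lines (the geometry, stiffnesses, and rest lengths are invented; the flight code's line model is certainly richer).

```python
# Sketch: find the confluence point where the sum of line forces vanishes,
# replacing the point mass of earlier simulations with a static solve.
import numpy as np
from scipy.optimize import fsolve

anchors = np.array([[0.0, 0.0, 0.0],    # entry-body attachment points
                    [1.0, 0.0, 0.0],
                    [0.5, 1.0, 0.0],
                    [0.5, 0.5, 2.0]])   # parachute riser attachment
k = np.array([1e4, 1e4, 1e4, 5e4])      # line stiffnesses
L0 = np.array([0.5, 0.5, 0.5, 1.0])     # unstretched line lengths

def net_force(p):
    F = np.zeros(3)
    for a, ki, l0 in zip(anchors, k, L0):
        d = a - p
        L = np.linalg.norm(d)
        F += ki * max(L - l0, 0.0) * d / L   # taut lines pull; slack lines don't
    return F

p_eq = fsolve(net_force, x0=np.array([0.5, 0.4, 0.5]))  # static equilibrium point
```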
Advanced Ceramic Technology for Space Applications at NASA MSFC
NASA Technical Reports Server (NTRS)
Alim, Mohammad A.
2003-01-01
The ceramic processing technology using conventional methods is applied to the making of state-of-the-art ceramics known as smart ceramics, intelligent ceramics, or electroceramics. The sol-gel and wet chemical processing routes are excluded in this investigation, considering the economic aspect and the proportionate benefit of the resulting product. The use of ceramic ingredients in making coatings or devices employing a vacuum coating unit is also excluded. Based on the present information, it is anticipated that conventional processing methods provide ceramics performing identically to those processed by chemical routes. This is possible when the sintering temperature, heating and cooling ramps, peak (sintering) temperature, soak (hold) time, etc. are treated as variable parameters. In addition, an optional calcination step prior to the sintering operation remains a vital variable parameter. These variable parameters constitute a sintering profile, which yields a sintered product. Owing to the calcination step in conjunction with the variables of the sintering profile, it is also possible to obtain identical products from more than one sintering profile. Overall, the state-of-the-art ceramic technology is evaluated for potential applications in thermal and electrical insulation coatings, microelectronics and integrated circuits, discrete and integrated devices, etc., in the space program.
A review on machine learning principles for multi-view biological data integration.
Li, Yifeng; Wu, Fang-Xiang; Ngom, Alioune
2018-03-01
Driven by high-throughput sequencing techniques, modern genomic and clinical studies are in strong need of integrative machine learning models to make better use of vast volumes of heterogeneous information in the deep understanding of biological systems and the development of predictive models. How data from multiple sources (called multi-view data) are incorporated in a learning system is a key step for successful analysis. In this article, we provide a comprehensive review of omics and clinical data integration techniques, from a machine learning perspective, for various analyses such as prediction, clustering, dimension reduction, and association. We show that Bayesian models are able to use prior information and model measurements with various distributions; tree-based methods can either build a tree with all features or collectively make a final decision based on trees learned from each view; kernel methods fuse the similarity matrices learned from individual views into a final similarity matrix or learning model; network-based fusion methods are capable of inferring direct and indirect associations in a heterogeneous network; matrix factorization models have the potential to learn interactions among features from different views; and a range of deep neural networks can be integrated in multi-modal learning for capturing the complex mechanisms of biological systems.
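As one concrete flavor of the kernel-fusion idea mentioned above, here is a sketch with synthetic data: build one similarity matrix per view and average them before feeding a kernel learner. The equal weighting and the downstream learner are arbitrary choices for illustration.

```python
# Sketch: multi-view integration by kernel (similarity matrix) fusion.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
view1 = rng.normal(size=(100, 20))        # e.g., expression-like features
view2 = rng.normal(size=(100, 5))         # e.g., clinical-like features

K = 0.5 * rbf_kernel(view1) + 0.5 * rbf_kernel(view2)   # fused similarity matrix
labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(K)
```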
NASA Astrophysics Data System (ADS)
Fatt Siew, Tuck; Döll, Petra
2015-04-01
Transdisciplinary approaches are useful for supporting integrated land and water management. However, implementing the approach in practice to facilitate the co-production of usable socio-hydrological (and -ecological) knowledge among scientists and stakeholders is challenging. It requires appropriate methods to bring individuals with diverse interests and needs together and to integrate their knowledge, in order to generate shared understanding, identify common goals, and develop actionable management strategies. In particular, the approach and methods need to be adapted to local political and socio-cultural conditions. To demonstrate how knowledge co-production and integration can be done in practice, we present a transdisciplinary approach that has been implemented and adapted to support land and water management, taking ecosystem services into account, in an arid region in northwestern China. Our approach comprises three steps: (1) stakeholder analysis and interdisciplinary knowledge integration, (2) elicitation of the perspectives of scientists and stakeholders, scenario development, and identification of management strategies, and (3) evaluation of knowledge integration and social learning. The adapted approach has enabled interdisciplinary and cross-sectoral communication among scientists and stakeholders. Furthermore, applying a combination of participatory methods, including actor modeling, Bayesian network modeling, and participatory scenario development, has contributed to integrating the system, target, and transformation knowledge of the involved stakeholders. Whether the identified management strategies will be realized remains unknown, because other important and representative decision makers were not involved in the transdisciplinary research process. The contribution of our transdisciplinary approach to social learning also still needs to be assessed.
Evaluating Computer Integration in the Elementary School: A Step-by-Step Guide.
ERIC Educational Resources Information Center
Mowe, Richard
This handbook was written to enable elementary school educators to conduct formative evaluations of their computer integrated instruction (CII) programs in minimum time. CII is defined as the use of computer software, such as word processing, database, and graphics programs, to help students solve problems or work more productively. The first…
Stewardship of Integrity in Scientific Communication.
Albertine, Kurt H
2018-06-14
Integrity in the pursuit of discovery through application of the scientific method, and in reporting the results, is an obligation for each of us as scientists. We cannot let the value of science be diminished, because discovering knowledge is vital to understanding ourselves and our impacts on the earth. We support the value of science by our stewardship of integrity in the conduct, training, reporting, and proposing of scientific investigation. The players who bear these responsibilities are authors, reviewers, editors, and readers. Each role has to be played with vigilance for ethical behavior, including compliance with regulations for the protection of study subjects, the use of select agents and biohazards, regulations on the use of stem cells, resource sharing, posting datasets to public repositories, etc. The positive take-home message is that the scientific community is taking steps to protect the integrity of science.