Multiple methods integration for structural mechanics analysis and design
NASA Technical Reports Server (NTRS)
Housner, J. M.; Aminpour, M. A.
1991-01-01
A new research area of multiple methods integration is proposed for joining diverse methods of structural mechanics analysis which interact with one another. Three categories of multiple methods are defined: those in which a physical interface is well defined; those in which a physical interface is not well defined but is selected; and those in which the interface is a mathematical transformation. Two fundamental integration procedures are presented that can be extended to integrate various methods (e.g., finite element, Rayleigh-Ritz, Galerkin, and integral methods) with one another. Since the finite element method will likely be the major method to be integrated, its enhanced robustness under element distortion is also examined and a new robust shell element is demonstrated.
Method and system of integrating information from multiple sources
Alford, Francine A.; Brinkerhoff, David L.
2006-08-15
A system and method of integrating information from multiple sources in a document centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.
Integrating Multiple Teaching Methods into a General Chemistry Classroom.
ERIC Educational Resources Information Center
Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella
1998-01-01
Four different methods of teaching--cooperative learning, class discussions, concept maps, and lectures--were integrated into a freshman-level general chemistry course to compare students' levels of participation. Findings support the idea that multiple modes of learning foster the metacognitive skills necessary for mastering general chemistry.…
Integrating Multiple Teaching Methods into a General Chemistry Classroom
NASA Astrophysics Data System (ADS)
Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella
1998-02-01
In addition to the traditional lecture format, three other teaching strategies (class discussions, concept maps, and cooperative learning) were incorporated into a freshman level general chemistry course. Student perceptions of their involvement in each of the teaching methods, as well as their perceptions of the utility of each method were used to assess the effectiveness of the integration of the teaching strategies as received by the students. Results suggest that each strategy serves a unique purpose for the students and increased student involvement in the course. These results indicate that the multiple teaching strategies were well received by the students and that all teaching strategies are necessary for students to get the most out of the course.
On a New Simple Method for Evaluation of Certain Multiple Definite Integrals
ERIC Educational Resources Information Center
Sen Gupta, I.; Debnath, L.
2006-01-01
This paper deals with a simple method of evaluation of certain multiple definite integrals. This is followed by two main theorems concerning multiple definite integrals. Some examples of applications are given.
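As an illustration only (the paper's own method and theorems are not reproduced here), a multiple definite integral can be evaluated as an iterated one by nesting one-dimensional quadratures, here with the midpoint rule:

```python
# Hypothetical illustration: evaluating a double definite integral by
# iterated one-dimensional midpoint-rule quadrature (Fubini's theorem).
# The integrand f(x, y) = x * y over [0, 1] x [0, 1] has exact value 1/4.

def midpoint_quad(f, a, b, n=200):
    """Midpoint-rule approximation of the 1D integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def double_integral(f, ax, bx, ay, by, n=200):
    """Iterated quadrature: integrate over y for fixed x, then over x."""
    inner = lambda x: midpoint_quad(lambda y: f(x, y), ay, by, n)
    return midpoint_quad(inner, ax, bx, n)

result = double_integral(lambda x, y: x * y, 0.0, 1.0, 0.0, 1.0)
print(result)  # close to 0.25
```

The midpoint rule is exact for integrands linear in each variable, so this test case carries essentially no discretization error.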
NASA Astrophysics Data System (ADS)
Tang, Xiaojun
2016-04-01
The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.
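Gegenbauer-Gauss points specialize, for Gegenbauer parameter λ = 1/2, to the familiar Legendre-Gauss points. As a hedged sketch of that special case only (not the paper's multiple-interval integration matrices), the nodes and weights can be computed by Newton iteration on the Legendre three-term recurrence:

```python
import math

def gauss_legendre(n):
    """Gauss-Legendre nodes and weights on [-1, 1] via Newton iteration
    on the Legendre polynomial P_n (the λ = 1/2 Gegenbauer case)."""
    nodes, weights = [], []
    for i in range(1, n + 1):
        # Chebyshev-based initial guess for the i-th root of P_n.
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))
        for _ in range(100):
            # Evaluate P_n(x) and P_{n-1}(x) by the three-term recurrence.
            p0, p1 = 1.0, x
            for k in range(2, n + 1):
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1.0)  # derivative P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-14:
                break
        nodes.append(x)
        weights.append(2.0 / ((1.0 - x * x) * dp * dp))
    return nodes, weights

nodes, weights = gauss_legendre(8)
# An 8-point rule integrates exp(x) on [-1, 1] essentially exactly.
approx = sum(w * math.exp(x) for x, w in zip(nodes, weights))
print(approx)  # close to e - 1/e ≈ 2.3504
```

An n-point rule of this kind is exact for polynomials of degree up to 2n - 1, which is the property the pseudospectral integration matrices build on.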
Error and timing analysis of multiple time-step integration methods for molecular dynamics
NASA Astrophysics Data System (ADS)
Han, Guowen; Deng, Yuefan; Glimm, James; Martyna, Glenn
2007-02-01
Molecular dynamics simulations of biomolecules performed using multiple time-step integration methods are hampered by resonance instabilities. We analyze the properties of a simple 1D linear system integrated with the symplectic reference-system propagator MTS (r-RESPA) technique, following earlier work by others. A closed-form expression for the time-step-dependent Hamiltonian which corresponds to r-RESPA integration of the model is derived. This permits us to present an analytic formula for the dependence of the integration accuracy on the short-range force cutoff range. A detailed analysis of the force decomposition for the standard Ewald summation method is then given, as the Ewald method is a good candidate to achieve high scaling on modern massively parallel machines. We test the new analysis on a realistic system, a protein in water. Under Langevin dynamics with a weak friction coefficient (ζ = 1 ps⁻¹) to maintain temperature control and using the SHAKE algorithm to freeze out high-frequency vibrations, we show that the 5 fs resonance barrier present when all degrees of freedom are unconstrained is postponed to ≈12 fs. An iso-error boundary with respect to the short-range cutoff range and multiple time-step size agrees well with the analytical results, which are valid due to the dominance of the high-frequency modes in determining integrator accuracy. Using r-RESPA to treat the long-range interactions results in a 6× increase in efficiency for the decomposition described in the text.
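For readers unfamiliar with r-RESPA, a minimal single-particle sketch (assumed for illustration; it reproduces none of the paper's analysis) applies a weak slow force with the outer time step while integrating a stiff fast force with inner substeps:

```python
# Toy r-RESPA-style multiple time-step integrator for a 1D particle
# feeling a stiff "fast" harmonic force and a weak "slow" harmonic
# force. The slow force is applied with the outer step dt; the fast
# force is integrated with n_inner velocity-Verlet substeps.
# All constants are invented for this sketch.

K_FAST, K_SLOW, MASS = 100.0, 1.0, 1.0

def f_fast(x):
    return -K_FAST * x

def f_slow(x):
    return -K_SLOW * x

def respa_step(x, v, dt, n_inner=10):
    v += 0.5 * dt * f_slow(x) / MASS      # half kick from slow force
    h = dt / n_inner
    for _ in range(n_inner):              # inner velocity-Verlet loop
        v += 0.5 * h * f_fast(x) / MASS
        x += h * v
        v += 0.5 * h * f_fast(x) / MASS
    v += 0.5 * dt * f_slow(x) / MASS      # closing half kick
    return x, v

def energy(x, v):
    return 0.5 * MASS * v * v + 0.5 * (K_FAST + K_SLOW) * x * x

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05)
print(abs(energy(x, v) - e0) / e0)  # small relative energy drift
```

Because the splitting is symplectic, the energy error stays bounded as long as the inner step resolves the fast frequency; resonance instabilities of the kind analyzed in the paper appear when the outer step approaches half the fast period.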
Musick, Charles R.; Critchlow, Terence; Ganesh, Madhaven; Slezak, Tom; Fidelis, Krzysztof
2006-12-19
A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
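The "get"/"set" behavior described above can be sketched as follows; all class and attribute names here are hypothetical, not taken from the disclosed system:

```python
# Hypothetical sketch of the translation-library idea: a "get" accessor
# that calls a transformation method to derive the value of an attribute
# when it is missing from the warehouse record.

class GeneRecord:
    def __init__(self, start=None, end=None, length=None):
        self._data = {"start": start, "end": end, "length": length}

    def set(self, name, value):
        self._data[name] = value

    def get(self, name):
        value = self._data.get(name)
        if value is None:
            value = self._derive(name)   # transformation for missing values
            self._data[name] = value
        return value

    def _derive(self, name):
        # Example transformation: sequence length from its coordinates.
        if name == "length" and self._data["start"] is not None \
                and self._data["end"] is not None:
            return self._data["end"] - self._data["start"] + 1
        raise KeyError(f"cannot derive attribute {name!r}")

record = GeneRecord(start=100, end=250)   # 'length' missing at load time
print(record.get("length"))  # derived on demand: 151
```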
Statistical Methods for Integrating Multiple Types of High-Throughput Data
Xie, Yang; Ahn, Chul
2011-01-01
Large-scale sequencing, copy number, mRNA, and protein data hold great promise for biomedical research, while posing great challenges to data management and data analysis. Integrating different types of high-throughput data from diverse sources can increase the statistical power of data analysis and provide deeper biological understanding. This chapter uses two biomedical research examples to illustrate why there is an urgent need to develop reliable and robust methods for integrating heterogeneous data. We then introduce and review some recently developed statistical methods for integrative analysis for both statistical inference and classification purposes. Finally, we present some useful public-access databases and program code to facilitate integrative analysis in practice. PMID:20652519
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display on a single video display terminal of multiple physiological measurements is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive the data, calculate data products such as index of engagement and heart rate, and display the data in a graphical format simultaneously on a single video display terminal. In addition, live video showing the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
Reichardt, Jens; Reichardt, Susanne
2006-04-20
A method is presented that permits the determination of the cloud effective particle size from Raman- or Rayleigh-integration temperature measurements; it exploits the dependence of the multiple-scattering contributions to the lidar signals from heights above the cloud on the particle size of the cloud. Independent temperature information is needed for the determination of size. By use of Raman-integration temperatures, the technique is applied to cirrus measurements. The magnitude of the multiple-scattering effect and the above-cloud lidar signal strength limit the method's range of applicability to cirrus optical depths from 0.1 to 0.5. Our work implies that records of stratospheric temperature obtained with lidar may be affected by multiple scattering in clouds up to heights of 30 km and beyond.
NASA Astrophysics Data System (ADS)
Uhde, Britta; Hahn, W. Andreas; Griess, Verena C.; Knoke, Thomas
2015-08-01
Multi-criteria decision analysis (MCDA) is a decision aid frequently used in the field of forest management planning. It includes the evaluation of multiple criteria, such as the production of timber and non-timber forest products and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared to methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches allow MCDA to be combined with other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world is facing increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.
Ayyadurai, V A Shiva; Dewey, C Forbes
2011-03-01
A grand challenge of computational systems biology is to create a molecular pathway model of the whole cell. Current approaches involve merging smaller molecular pathway models' source codes to create a large monolithic model (computer program) that runs on a single computer. Such a larger model is difficult, if not impossible, to maintain given ongoing updates to the source codes of the smaller models. This paper describes a new system called CytoSolve that dynamically integrates computations of smaller models that can run in parallel across different machines without the need to merge the source codes of the individual models. This approach is demonstrated on the classic Epidermal Growth Factor Receptor (EGFR) model of Kholodenko. The EGFR model is split into four smaller models and each smaller model is distributed on a different machine. Results from four smaller models are dynamically integrated to generate identical results to the monolithic EGFR model running on a single machine. The overhead for parallel and dynamic computation is approximately twice that of a monolithic model running on a single machine. The CytoSolve approach provides a scalable method since smaller models may reside on any computer worldwide, where the source code of each model can be independently maintained and updated.
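The dynamic-integration idea can be caricatured with a toy controller that advances a shared state by summing rate contributions from independent submodels; this is an assumed sketch, not CytoSolve's actual messaging protocol:

```python
# Assumed sketch: a controller advances a shared species vector by asking
# each independent submodel only for its rate contribution, then summing
# them, so the submodels' source codes never need to be merged. The
# kinetics here are invented, not from the EGFR model.

def submodel_decay(state):       # d[A]/dt contribution: A degrades
    return {"A": -0.5 * state["A"]}

def submodel_production(state):  # d[A]/dt contribution: constant source
    return {"A": 0.2}

def integrate(submodels, state, dt, steps):
    for _ in range(steps):
        totals = {k: 0.0 for k in state}
        for model in submodels:          # each could run on its own machine
            for k, rate in model(state).items():
                totals[k] += rate
        state = {k: state[k] + dt * totals[k] for k in state}
    return state

final = integrate([submodel_decay, submodel_production], {"A": 1.0},
                  dt=0.01, steps=2000)
print(final["A"])  # approaches the steady state 0.2 / 0.5 = 0.4
```

Only the species vector crosses submodel boundaries at each step, which mirrors the paper's point that each model's source code can be maintained independently.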
2012-01-01
Background The Hedgehog signaling pathway is one of the signaling pathways that are very important to embryonic development. The participation of inhibitors in the Hedgehog signaling pathway can control cell growth and death, and the search for novel inhibitors of the pathway's functioning is in great demand. In fact, effective inhibitors could provide efficient therapies for a wide range of malignancies, and targeting this pathway in cells represents a promising new paradigm for cell growth and death control. Current research mainly focuses on the synthesis of inhibitors derived from cyclopamine, which bind specifically to the Smo protein and can be used for cancer therapy. While quantitative structure-activity relationship (QSAR) studies have been performed for these compounds among different cell lines, none of them have achieved acceptable results in the prediction of activity values of new compounds. In this study, we proposed a novel collaborative QSAR model for inhibitors of the Hedgehog signaling pathway by integrating information from multiple cell lines. Such a model is expected to substantially improve on QSAR from single cell lines, and to provide useful clues for developing clinically effective inhibitors and modifications of parent lead compounds targeting the Hedgehog signaling pathway. Results In this study, we have presented: (1) a collaborative QSAR model, which integrates information among multiple cell lines to boost the QSAR results, rather than modeling only a single cell line. Our experiments have shown that the performance of our model is significantly better than single-cell-line QSAR methods; and (2) an efficient feature selection strategy under such a collaborative environment, which can derive the features commonly important to all given cell lines, while simultaneously showing their specific contributions to a specific cell line. Based on feature selection results, we have
Wang, Jingtao; Liu, Jinxia; Han, Junjie; Guan, Jing
2013-02-08
A boundary integral method is developed to investigate the effects of inner droplets and asymmetry of internal structures on the rheology of two-dimensional multiple emulsion particles with arbitrary numbers of layers and droplets within each layer. Under a modest extensional flow, the increase in the number of layers and inner droplets, and the collision among inner droplets, subject the particle to stronger shears. In addition, the coalescence or release of inner droplets changes the internal structure of the multiple emulsion particles. Since the rheology of such particles is sensitive to internal structures and their change, modeling them as core-shell particles to obtain the viscosity equation of a single particle should be modified by introducing the time-dependent volume fraction Φ(t) of the core instead of the fixed Φ. An asymmetric internal structure induces an oriented contact and merging of the outer and inner interfaces. The start time of the interface merging is controlled by adjusting the viscosity ratio and enhancing the asymmetry, which is promising for the controlled release of inner droplets through hydrodynamics for targeted drug delivery.
NASA Astrophysics Data System (ADS)
Kawata, Masaaki; Mikami, Masuhiro
A canonical molecular dynamics (MD) simulation was accelerated by using an efficient implementation of the multiple-timestep integrator algorithm combined with the periodic fast multipole method (MEFMM) for both Coulombic and van der Waals interactions. Although a significant reduction in computational cost had been obtained previously by using the integrated method in which the MEFMM was used only to calculate Coulombic interactions (Kawata, M., and Mikami, M., 2000, J. Comput. Chem., in press), the extension of this method to include van der Waals interactions yielded further acceleration of the overall MD calculation by a factor of about two. Compared with conventional methods, such as the velocity-Verlet algorithm combined with the Ewald method (timestep of 0.25 fs), the speedup from using the extended integrated method amounted to a factor of 500 for a 100 ps simulation. Therefore, the extended method substantially reduces the computational effort of large-scale MD simulations.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
NASA Astrophysics Data System (ADS)
Chang, Xin
This dissertation proposal is concerned with the use of fast and broadband full-wave electromagnetic methods for modeling high-speed interconnects (e.g., vertical vias and horizontal traces) and passive components (e.g., decoupling capacitors) in PCB and package structures, 3D ICs, die-level packaging, and SIW-based devices, to effectively model the signal integrity (SI) and power integrity (PI) aspects of these designs. The main contribution of this thesis is a novel methodology that hybridizes a fast full-wave method based on the Foldy-Lax multiple scattering equations, method of moments (MoM) based 1D technology, geometry decomposition based on mode decoupling, and cavity-mode expansions to model and simulate electromagnetic scattering effects for irregular power/ground planes, multiple vias, and traces, enabling fast and accurate link-level simulation of multilayer electronic structures. In the modeling details, the interior massively coupled multiple-via problem is modeled mostly analytically by using the Foldy-Lax multiple scattering equations. The dyadic Green's functions of the magnetic field are expressed in terms of waveguide modes in the vertical direction and vector cylindrical wave expansions or cavity-mode expansions in the horizontal direction, combined with 2D MoM realized by 1D technology. For the incident field in the case of vias in an arbitrarily shaped antipad in a finite large cavity/waveguide, the exciting and scattering field coefficients are calculated based on a transformation which converts surface integration of magnetic surface currents in the antipad into 1D line integration of surface charges on the vias and on the ground plane. The geometry decomposition method is applied to model and integrate both the vertical and horizontal interconnects/traces in arbitrarily shaped power/ground planes. Moreover, a new form of the multiple scattering equations is derived for solving coupling effects among mixed metallic
Cacha, L A; Parida, S; Dehuri, S; Cho, S-B; Poznanski, R R
2016-12-01
The huge number of voxels in fMRI over time poses a major challenge to effective analysis. Fast, accurate, and reliable classifiers are required for estimating the decoding accuracy of brain activities. Although machine-learning classifiers seem promising, individual classifiers have their own limitations. To address this limitation, the present paper proposes a method based on an ensemble of neural networks to analyze fMRI data for cognitive state classification across multiple subjects. The fuzzy integral (FI) approach has been employed as an efficient tool for combining different classifiers. The FI approach led to the development of a classifier ensemble technique that performs better than any single classifier by reducing misclassification, bias, and variance. The proposed method successfully classified the different cognitive states for multiple subjects with high classification accuracy. Comparison of the performance of the ensemble neural network method with that of the individual neural networks strongly points toward the usefulness of the proposed method.
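A fuzzy integral combines classifier confidences against a fuzzy measure of classifier-subset importance. The following Sugeno-integral sketch is illustrative only; the measure values and classifier names are invented, not taken from the paper:

```python
# Simplified sketch of fusing per-class confidences from several
# classifiers with a Sugeno fuzzy integral. The fuzzy measure g is
# hand-set here; a real FI ensemble would learn it from data.

def sugeno_integral(scores, measure):
    """scores: {classifier: confidence in [0, 1]};
    measure: maps frozenset of classifiers to its importance in [0, 1]."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    best = 0.0
    subset = set()
    for clf in ranked:
        subset.add(clf)
        # min(weakest confidence admitted so far, importance of the subset)
        best = max(best, min(scores[clf], measure[frozenset(subset)]))
    return best

measure = {
    frozenset({"nn1"}): 0.4,
    frozenset({"nn2"}): 0.3,
    frozenset({"nn3"}): 0.3,
    frozenset({"nn1", "nn2"}): 0.7,
    frozenset({"nn1", "nn3"}): 0.6,
    frozenset({"nn2", "nn3"}): 0.5,
    frozenset({"nn1", "nn2", "nn3"}): 1.0,
}

# Confidences of three networks that a scan shows one cognitive state.
scores = {"nn1": 0.9, "nn2": 0.6, "nn3": 0.2}
print(sugeno_integral(scores, measure))  # 0.6
```

Unlike simple averaging, the measure lets agreement between particular classifiers count for more than the sum of their individual importances.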
Johnson, Alicia S; Selimovic, Asmira; Martin, R Scott
2011-11-01
This paper describes the use of epoxy-encapsulated electrodes to integrate microchip-based electrophoresis with electrochemical detection. Devices with various electrode combinations can easily be developed. This includes a palladium decoupler with a downstream working electrode material of either gold, mercury/gold, platinum, glassy carbon, or a carbon fiber bundle. Additional device components such as the platinum wires for the electrophoresis separation and the counter electrode for detection can also be integrated into the epoxy base. The effect of the decoupler configuration was studied in terms of the separation performance, detector noise, and the ability to analyze samples of a high ionic strength. The ability of both glassy carbon and carbon fiber bundle electrodes to analyze a complex mixture was demonstrated. It was also shown that a PDMS-based valving microchip can be used along with the epoxy-embedded electrodes to integrate microdialysis sampling with microchip electrophoresis and electrochemical detection, with the microdialysis tubing also being embedded in the epoxy substrate. This approach enables one to vary the detection electrode material as desired in a manner where the electrodes can be polished and modified as is done with electrochemical flow cells used in liquid chromatography.
Multiple-stage integrating accelerometer
Devaney, H.F.
1984-06-27
An accelerometer assembly is provided for use in activating a switch in response to multiple acceleration pulses in series. The accelerometer includes a housing forming a chamber. An inertial mass or piston is slidably disposed in the chamber and spring biased toward a first or reset position. A damping system is also provided to damp piston movement in response to first and subsequent acceleration pulses. Additionally, a cam, including a Z-shaped slot, and cooperating follower pin slidably received therein are mounted to the piston and the housing. The middle or cross-over leg of the Z-shaped slot cooperates with the follower pin to block or limit piston movement and prevent switch activation in response to a lone acceleration pulse. The switch of the assembly is only activated after two or more separate acceleration pulses are sensed and the piston reaches the end of the chamber opposite the reset position.
Improving Inferences from Multiple Methods.
ERIC Educational Resources Information Center
Shotland, R. Lance; Mark, Melvin M.
1987-01-01
Multiple evaluation methods (MEMs) can cause an inferential challenge, although there are strategies to strengthen inferences. Practical and theoretical issues involved in the use by social scientists of MEMs, three potential problems in drawing inferences from MEMs, and short- and long-term strategies for alleviating these problems are outlined.…
Method for deploying multiple spacecraft
NASA Technical Reports Server (NTRS)
Sharer, Peter J. (Inventor)
2007-01-01
A method for deploying multiple spacecraft is disclosed. The method can be used in a situation where a first celestial body is being orbited by a second celestial body. The spacecraft are loaded onto a single spaceship that contains the multiple spacecraft, and the spaceship is launched from the second celestial body towards a third celestial body. The spacecraft are separated from each other while en route to the third celestial body. Each of the spacecraft is then subjected to the gravitational field of the third celestial body, and each of the spacecraft assumes a different, independent orbit about the first celestial body. In those situations where the spacecraft are launched from Earth, the Sun can act as the first celestial body, the Earth can act as the second celestial body, and the Moon can act as the third celestial body.
Vertically Integrated Multiple Nanowire Field Effect Transistor.
Lee, Byung-Hyun; Kang, Min-Ho; Ahn, Dae-Chul; Park, Jun-Young; Bang, Tewook; Jeon, Seung-Bae; Hur, Jae; Lee, Dongil; Choi, Yang-Kyu
2015-12-09
A vertically integrated multiple-channel-based field-effect transistor (FET) with the highest number of nanowires reported to date is demonstrated on a bulk silicon substrate without the use of wet etching. The driving current is increased 5-fold due to the inherent vertically stacked five-level nanowires, showing the feasibility of high-performance transistors based on three-dimensional integration. The developed fabrication process, which is simple and reproducible, is used to create multiple stiction-free and uniformly sized nanowires with the aid of the one-route all-dry etching process (ORADEP). Furthermore, the proposed FET is revamped to create nonvolatile memory with the adoption of a charge-trapping layer for enhanced practicality. Thus, this research suggests an ultimate design for end-of-the-roadmap devices to overcome the limits of scaling.
Accelerated Adaptive Integration Method
2015-01-01
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (bromocyclohexane) and the more complex biomolecule thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
ERIC Educational Resources Information Center
Dadelo, Stanislav; Turskis, Zenonas; Zavadskas, Edmundas Kazimieras; Kacerauskas, Tomas; Dadeliene, Ruta
2016-01-01
To maximize the effectiveness of a decision, it is necessary to support decision-making with integrated methods. It can be assumed that subjective evaluation (considering only absolute values) is only remotely connected with the evaluation of real processes. Therefore, relying solely on these values in process management decision-making would be a…
Functional integral approach for multiplicative stochastic processes.
Arenas, Zochil González; Barci, Daniel G
2010-05-01
We present a functional formalism to derive a generating functional for correlation functions of a multiplicative stochastic process represented by a Langevin equation. We deduce a path integral over a set of fermionic and bosonic variables without performing any time discretization. The usual prescriptions to define the Wiener integral appear in our formalism in the definition of Green's functions in the Grassmann sector of the theory. We also study nonperturbative constraints imposed by Becchi-Rouet-Stora (BRS) symmetry and supersymmetry on correlation functions. We show that the specific prescription to define the stochastic process is wholly contained in the tadpole diagrams. Therefore, in a supersymmetric theory, the stochastic process is uniquely defined, since tadpole contributions cancel at all orders of perturbation theory.
Integrated management of multiple reservoir field developments
Lyons, S.L.; Chan, H.M.; Harper, J.L.; Boyett, B.A.; Dowson, P.R.; Bette, S.
1995-10-01
This paper consists of two sections. The authors first describe the coupling of a pipeline network model to a reservoir simulator and then the application of this new simulator to optimize the production strategy of two Mobil field developments. Mobil's PEGASUS simulator is an integrated all-purpose reservoir simulator that handles black-oil, compositional, faulted, and naturally fractured reservoirs. The authors have extended the simulator to simultaneously model multiple reservoirs coupled with surface pipeline networks and processes. This allows them to account for the effects of geology, well placement, and surface production facilities on well deliverability in a fully integrated fashion. They have also developed a gas contract allocation system that takes the user-specified constraints, target rates, and swing factors and automatically assigns rates to the individual wells of each reservoir. This algorithm calculates the overall deliverability and automatically reduces the user-specified target rates to meet the deliverability constraints. The algorithm and solution technique are described. This enhanced simulator has been applied to model a Mobil field development in the Southern Gas Basin, offshore United Kingdom, which consists of three separate gas reservoirs connected via a pipeline network. The simulator allowed the authors to accurately determine the impact on individual reservoir and total field performance of varying the development timing of these reservoirs. Several development scenarios are shown to illustrate the capabilities of PEGASUS. Another application of this technology is in the field developments in North Sumatra, Indonesia. Here the objective is to economically optimize the development of multiple fields to feed the PT Arun LNG facility. Consideration of a range of gas compositions, well productivities, and facilities constraints in an integrated fashion results in improved management of these assets. Model specifics are discussed.
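The allocation step described above can be sketched as a cap-and-scale rule; the well names, rates, and units below are invented for illustration and do not reflect the PEGASUS implementation:

```python
# Hypothetical sketch of rate allocation: cap each well at its
# deliverability, then scale down pro rata if the capped total still
# exceeds the field contract limit. Numbers are illustrative only.

def allocate(targets, deliverability, field_limit):
    """targets and deliverability: {well: rate}; returns allocated rates."""
    capped = {w: min(r, deliverability[w]) for w, r in targets.items()}
    total = sum(capped.values())
    if total <= field_limit:
        return capped
    scale = field_limit / total          # pro rata reduction factor
    return {w: r * scale for w, r in capped.items()}

targets = {"W1": 50.0, "W2": 40.0, "W3": 30.0}          # target rates
deliverability = {"W1": 45.0, "W2": 40.0, "W3": 15.0}   # well limits
rates = allocate(targets, deliverability, field_limit=80.0)
print(rates, sum(rates.values()))  # total meets the 80.0 field limit
```

A production allocation system would layer swing factors and contract constraints on top of this, but the cap-then-scale core is the same shape as the automatic target-rate reduction the abstract describes.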
Predicting Protein Function via Semantic Integration of Multiple Networks.
Yu, Guoxian; Fu, Guangyuan; Wang, Jun; Zhu, Hailong
2016-01-01
Determining the biological functions of proteins is one of the key challenges in the post-genomic era. The rapidly accumulating large volumes of proteomic and genomic data have driven the development of computational models for automatically predicting protein function at large scale. Recent approaches focus on integrating multiple heterogeneous data sources, and they often get better results than methods that use a single data source alone. In this paper, we investigate how to integrate multiple biological data sources with the biological knowledge, i.e., Gene Ontology (GO), for protein function prediction. We propose a method, called SimNet, to Semantically integrate multiple functional association Networks derived from heterogeneous data sources. SimNet first utilizes GO annotations of proteins to capture the semantic similarity between proteins and introduces a semantic kernel based on this similarity. Next, SimNet constructs a composite network, obtained as a weighted summation of individual networks, and aligns the composite network with the kernel to determine the weights assigned to individual networks. Then it applies a network-based classifier on the composite network to predict protein function. Experimental results on heterogeneous proteomic data sources of Yeast, Human, Mouse, and Fly show that SimNet not only achieves better (or comparable) results than other related competitive approaches but also takes much less time. The Matlab codes of SimNet are available at https://sites.google.com/site/guoxian85/simnet.
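The core fusion idea, weighting each network by its agreement with a semantic kernel and summing, can be sketched in a few lines. This is a simplified stand-in for SimNet's actual optimization (the alignment score and normalization below are assumptions):

```python
import numpy as np

# Simplified sketch of semantic network fusion: weight each functional
# association network W_i by its normalized Frobenius alignment with a
# semantic (GO-derived) kernel K, then form the weighted composite.

def fuse_networks(networks, K):
    scores = np.array([max((W * K).sum(), 0.0) /
                       (np.linalg.norm(W) * np.linalg.norm(K))
                       for W in networks])
    w = scores / scores.sum()                    # normalized weights
    composite = sum(wi * Wi for wi, Wi in zip(w, networks))
    return composite, w
```

A network that agrees more with the semantic kernel receives a larger weight, so noisy data sources contribute less to the composite network fed to the classifier.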
Multiple network interface core apparatus and method
Underwood, Keith D [Albuquerque, NM; Hemmert, Karl Scott [Albuquerque, NM
2011-04-26
A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.
SPARSE INTEGRATIVE CLUSTERING OF MULTIPLE OMICS DATA SETS
Wang, Sijian; Mo, Qianxing
2012-01-01
High resolution microarrays and second-generation sequencing platforms are powerful tools to investigate genome-wide alterations in DNA copy number, methylation, and gene expression associated with a disease. An integrated genomic profiling approach measuring multiple omics data types simultaneously in the same set of biological samples would render an integrated data resolution that would not be available with any single data type. In this study, we use penalized latent variable regression methods for joint modeling of multiple omics data types to identify common latent variables that can be used to cluster patient samples into biologically and clinically relevant disease subtypes. We consider lasso (Tibshirani, 1996), elastic net (Zou and Hastie, 2005), and fused lasso (Tibshirani et al., 2005) methods to induce sparsity in the coefficient vectors, revealing important genomic features that have significant contributions to the latent variables. An iterative ridge regression is used to compute the sparse coefficient vectors. In model selection, a uniform design (Fang and Wang, 1994) is used to seek “experimental” points that scattered uniformly across the search domain for efficient sampling of tuning parameter combinations. We compared our method to sparse singular value decomposition (SVD) and penalized Gaussian mixture model (GMM) using both real and simulated data sets. The proposed method is applied to integrate genomic, epigenomic, and transcriptomic data for subtype analysis in breast and lung cancer data sets. PMID:24587839
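The "iterative ridge regression" device for computing sparse coefficients can be illustrated on the plain lasso case. This is a generic sketch of the technique named in the abstract, not the authors' multi-omics code; the majorization |b| ≈ b²/|b_old| turns each iteration into a ridge solve:

```python
import numpy as np

# Lasso-type sparse coefficients via iteratively reweighted ridge:
# the L1 penalty lam*|b_j| is majorized by lam*b_j^2/|b_j_old|, so each
# iteration solves (X'X + lam*diag(1/|b_old|)) b = X'y. Coefficients
# driven toward zero get ever-larger ridge penalties and stay sparse.

def lasso_by_iterative_ridge(X, y, lam=1.0, n_iter=50, eps=1e-8):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(n_iter):
        D = np.diag(lam / (np.abs(beta) + eps))
        beta = np.linalg.solve(X.T @ X + D, X.T @ y)
    return beta
```

In the paper this device is applied inside the penalized latent-variable model rather than to a single regression, but the shrinkage mechanism is the same.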
Multiple protocol fluorometer and method
Kolber, Zbigniew S.; Falkowski, Paul G.
2000-09-19
A multiple protocol fluorometer measures photosynthetic parameters of phytoplankton and higher plants using actively stimulated fluorescence protocols. The measured parameters include spectrally-resolved functional and optical absorption cross sections of PSII, extent of energy transfer between reaction centers of PSII, F0 (minimal), Fm (maximal) and Fv (variable) components of PSII fluorescence, photochemical and non-photochemical quenching, size of the plastoquinone (PQ) pool, and the kinetics of electron transport between Qa and the PQ pool and between the PQ pool and PSI. The multiple protocol fluorometer, in one embodiment, is equipped with an excitation source having a controlled spectral output range between 420 nm and 555 nm and capable of generating flashlets having a duration of 0.125-32 µs, an interval between 0.5 µs and 2 seconds, and peak optical power of up to 2 W/cm². The excitation source is also capable of generating, simultaneous with the flashlets, a controlled continuous background illumination.
Integrating Multiple Intelligences in EFL/ESL Classrooms
ERIC Educational Resources Information Center
Bas, Gokhan
2008-01-01
This article deals with the integration of the theory of Multiple Intelligences in EFL/ESL classrooms. After a brief presentation of the theory of multiple intelligences, its integration into English classrooms is examined. Intelligence types in MI Theory were discussed and some possible application ways of these intelligence types…
An efficient method for multiple sequence alignment
Kim, J.; Pramanik, S.
1994-12-31
Multiple sequence alignment has been a useful method in the study of molecular evolution and sequence-structure relationships. This paper presents a new method for multiple sequence alignment based on the simulated annealing technique. Dynamic programming has been widely used to find an optimal alignment. However, dynamic programming has several limitations in obtaining an optimal alignment: it requires long computation time and cannot accommodate certain types of cost functions. We describe the detailed mechanisms of simulated annealing for the multiple sequence alignment problem. It is shown that simulated annealing can be an effective approach to overcome the limitations of dynamic programming in the multiple sequence alignment problem.
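The approach can be made concrete with a toy annealer. The move set, cooling schedule, and sum-of-pairs score below are simplistic assumptions for illustration, not the paper's exact formulation:

```python
import math
import random

# Toy simulated-annealing aligner: an alignment is a list of equal-length
# gapped sequences; a move swaps one gap with another position in the same
# sequence; worse alignments are accepted with the Metropolis probability
# so the search can escape local optima that trap greedy methods.

def sp_score(alignment):
    # sum-of-pairs column score: +1 per matching residue pair per column
    total = 0
    for col in zip(*alignment):
        residues = [c for c in col if c != '-']
        total += sum(a == b for i, a in enumerate(residues)
                     for b in residues[i + 1:])
    return total

def anneal(seqs, steps=20000, t0=2.0, seed=1):
    rng = random.Random(seed)
    width = max(len(s) for s in seqs) + 1              # room for gaps
    align = [list(s + '-' * (width - len(s))) for s in seqs]
    cur = sp_score(align)
    best, best_align = cur, [row[:] for row in align]
    for k in range(steps):
        t = t0 * (1.0 - k / steps) + 1e-9              # linear cooling
        row = rng.randrange(len(align))
        gaps = [j for j, c in enumerate(align[row]) if c == '-']
        g, j = rng.choice(gaps), rng.randrange(width)
        align[row][g], align[row][j] = align[row][j], align[row][g]
        new = sp_score(align)
        if new >= cur or rng.random() < math.exp((new - cur) / t):
            cur = new
            if cur > best:
                best, best_align = cur, [r[:] for r in align]
        else:                                          # undo rejected move
            align[row][g], align[row][j] = align[row][j], align[row][g]
    return [''.join(r) for r in best_align], best
```

Unlike dynamic programming, nothing here restricts the form of the score function, which is exactly the flexibility the paper emphasizes.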
Geometric integrators for multiple time-scale simulation
NASA Astrophysics Data System (ADS)
Jia, Zhidong; Leimkuhler, Ben
2006-05-01
In this paper, we review and extend recent research on averaging integrators for multiple time-scale simulation such as are needed for physical N-body problems including molecular dynamics, materials modelling and celestial mechanics. A number of methods have been proposed for direct numerical integration of multiscale problems with special structure, such as the mollified impulse method (Garcia-Archilla, Sanz-Serna and Skeel 1999 SIAM J. Sci. Comput. 20 930-63) and the reversible averaging method (Leimkuhler and Reich 2001 J. Comput. Phys. 171 95-114). Features of problems of interest, such as thermostatted coarse-grained molecular dynamics, require extension of the standard framework. At the same time, in some applications the computation of averages plays a crucial role, but the available methods have deficiencies in this regard. We demonstrate that a new approach based on the introduction of shadow variables, which mirror physical variables, holds promise for broadening the usefulness of multiscale methods and enhancing the accuracy, or simplifying the computation, of averages. The shadow variables must be computed from an auxiliary equation. While a geometric integrator in the extended space is possible, in practice we observe enhanced long-term energy behaviour only through use of a variant of the method which controls drift of the shadow variables using dissipation and sacrifices the formal geometric properties such as time-reversibility and volume preservation in the enlarged phase space, stabilizing the corresponding properties in the physical variables. The method is applied to a gravitational three-body problem as well as a partially thermostatted model problem for a dilute gas of diatomic molecules.
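The baseline these methods build on is the impulse-style multiple-time-step integrator: the slow force is applied as an outer kick around an inner loop of fast-force steps. The sketch below shows that skeleton only (the mollification and averaging refinements of the cited papers are omitted):

```python
# Minimal impulse-style multiple-time-step integrator (r-RESPA skeleton):
# outer half-kicks from the slow force bracket an inner velocity-Verlet
# loop under the fast force, so the expensive slow force is evaluated
# only once per outer step.

def mts_step(q, p, fast_f, slow_f, dt, n_inner):
    p = p + 0.5 * dt * slow_f(q)            # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):                # inner velocity Verlet (fast force)
        p = p + 0.5 * h * fast_f(q)
        q = q + h * p
        p = p + 0.5 * h * fast_f(q)
    p = p + 0.5 * dt * slow_f(q)            # outer half-kick
    return q, p
```

On a toy two-scale oscillator this conserves energy well as long as the outer step stays clear of the fast-period resonances, which is precisely the regime the mollified variants are designed to enlarge.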
Building a cognitive map by assembling multiple path integration systems.
Wang, Ranxiao Frances
2016-06-01
Path integration and cognitive mapping are two of the most important mechanisms for navigation. Path integration is a primitive navigation system which computes a homing vector based on an animal's self-motion estimation, while a cognitive map is an advanced spatial representation containing richer spatial information about the environment that is persistent and can be used to guide flexible navigation to multiple locations. Most theories of navigation conceptualize them as two distinct, independent mechanisms, although the path integration system may provide useful information for the construction of cognitive maps. This paper demonstrates a fundamentally different scenario, where a cognitive map is constructed in three simple steps by assembling multiple path integrators and extending their basic features. The fact that a collection of path integration systems can be turned into a cognitive map suggests the possibility that cognitive maps may have evolved directly from the path integration system.
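A path integrator in its simplest form just sums self-motion estimates into one displacement, whose negation is the homing vector. This minimal sketch (an illustration, not the paper's model) shows the primitive the paper proposes to assemble into a map:

```python
# Minimal path integrator: self-motion estimates are accumulated into a
# single displacement from home; the homing vector is its negation.

class PathIntegrator:
    def __init__(self):
        self.x = 0.0
        self.y = 0.0

    def move(self, dx, dy):       # one self-motion estimate
        self.x += dx
        self.y += dy

    def homing_vector(self):      # points from current position to home
        return (-self.x, -self.y)
```

Running one such integrator per remembered location, each anchored at a different "home", already yields a set of vectors to multiple targets, which is the direction of the paper's assembly argument.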
Perturbative Methods in Path Integration
NASA Astrophysics Data System (ADS)
Johnson-Freyd, Theodore Paul
This dissertation addresses a number of related questions concerning perturbative "path" integrals. Perturbative methods are one of the few successful ways physicists have worked with (or even defined) these infinite-dimensional integrals, and it is important for mathematicians to check that they are correct. Chapter 0 provides a detailed introduction. We take a classical approach to path integrals in Chapter 1. Following standard arguments, we posit a Feynman-diagrammatic description of the asymptotics of the time-evolution operator for the quantum mechanics of a charged particle moving nonrelativistically through a curved manifold under the influence of an external electromagnetic field. We check that our sum of Feynman diagrams has all desired properties: it is coordinate-independent and well-defined without ultraviolet divergences, it satisfies the correct composition law, and it satisfies Schrodinger's equation thought of as a boundary-value problem in PDE. Path integrals in quantum mechanics and elsewhere in quantum field theory are almost always of the shape ∫ f e^s for some functions f (the "observable") and s (the "action"). In Chapter 2 we step back to analyze integrals of this type more generally. Integration by parts provides algebraic relations between the values of ∫ (-) e^s for different inputs, which can be packaged into a Batalin--Vilkovisky-type chain complex. Using some simple homological perturbation theory, we study the version of this complex that arises when f and s are taken to be polynomial functions, and power series are banished. We find that in such cases, the entire scheme-theoretic critical locus (complex points included) of s plays an important role, and that one can uniformly (but noncanonically) integrate out in a purely algebraic way the contributions to the integral from all "higher modes," reducing ∫ f e^s to an integral over the critical locus. This may help explain the presence of analytic continuation in questions like the
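In finite dimensions, the integration-by-parts relations behind this complex take a concrete form (a sketch under simplifying integrability assumptions; the chapter's actual algebraic setting is richer):

```latex
% For polynomial f and s on R^n with e^s suitably integrable,
% integration by parts gives, for each coordinate x_i,
\int_{\mathbb{R}^n} \Bigl( \frac{\partial f}{\partial x_i}
    + f \, \frac{\partial s}{\partial x_i} \Bigr) e^{s} \,\mathrm{d}x \;=\; 0 .
% Hence observables of the form \partial_i f + f\,\partial_i s integrate to
% zero; quotienting by such terms is the degree-zero piece of the
% Batalin--Vilkovisky-type complex, and iterating the relations reduces
% \int f\, e^{s} to contributions from the critical locus \{\mathrm{d}s = 0\}.
```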
Integral methodological pluralism in science education research: valuing multiple perspectives
NASA Astrophysics Data System (ADS)
Davis, Nancy T.; Callihan, Laurie P.
2013-09-01
This article examines the multiple methodologies used in educational research and proposes a model that includes all of them as contributing to understanding educational contexts and research from multiple perspectives. The model, based on integral theory (Wilber in a theory of everything. Shambhala, Boston,
Multiple crossbar network: Integrated supercomputing framework
Hoebelheinrich, R. )
1989-01-01
At Los Alamos National Laboratory, site of one of the world's most powerful scientific supercomputing facilities, a prototype network for an environment that links supercomputers and workstations is being developed. Driven by a need to provide graphics data at movie rates across a network from a Cray supercomputer to a Sun scientific workstation, the network is called the Multiple Crossbar Network (MCN). It is intended to be a coarsely grained, loosely coupled, general-purpose interconnection network that will vastly increase the speed at which supercomputers communicate with each other in large networks. The components of the network are described, as well as work done in collaboration with vendors who are interested in providing commercial products. 9 refs.
Lutken, Carol; Macelloni, Leonardo; D'Emidio, Marco; Dunbar, John; Higley, Paul
2015-01-31
detect short-term changes within the hydrates system, identify relationships/impacts of local oceanographic parameters on the hydrates system, and improve our understanding of how seafloor instability is affected by hydrates-driven changes. A 2009 DCR survey of MC118 demonstrated that we could image resistivity anomalies to a depth of 75m below the seafloor in water depths of 1km. We reconfigured this system to operate autonomously on the seafloor in a pre-programmed mode, for periods of months. We designed and built a novel seafloor lander and deployment capability that would allow us to investigate the seafloor at potential deployment sites and deploy instruments only when conditions met our criteria. This lander held the DCR system, controlling computers, and battery power supply, as well as instruments to record oceanographic parameters. During the first of two cruises to the study site, we conducted resistivity surveying, selected a monitoring site, and deployed the instrumented lander and DCR, centered on what appeared to be the most active locations within the site, programmed to collect a DCR profile, weekly. After a 4.5-month residence on the seafloor, the team recovered all equipment. Unfortunately, several equipment failures occurred prior to recovery of the instrument packages. Prior to the failures, however, two resistivity profiles were collected together with oceanographic data. Results show, unequivocally, that significant changes can occur in both hydrate volume and distribution during time periods as brief as one week. Occurrences appear to be controlled by both deep and near-surface structure. Results have been integrated with seismic data from the area and show correspondence in space of hydrate and structures, including faults and gas chimneys.
A Fuzzy Logic Framework for Integrating Multiple Learned Models
Hartog, Bobi Kai Den
1999-03-01
The Artificial Intelligence field of Integrating Multiple Learned Models (IMLM) explores ways to combine results from sets of trained programs. Aroclor Interpretation is an ill-conditioned problem in which trained programs must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to the scenarios. We developed a general-purpose IMLM solution, the Combiner, and applied it to Aroclor Interpretation. The Combiner's first step, Scenario Identification (SI), learns rules from very sparse, synthetic training data consisting of results from a suite of trained programs called Methods. SI produces fuzzy belief weights for each scenario by approximately matching the rules. The Combiner's second step, Aroclor Presence Detection (AP), classifies each of three Aroclors as present or absent in a sample. The third step, Aroclor Quantification (AQ), produces quantitative values for the concentration of each Aroclor in a sample. AP and AQ use automatically learned empirical biases for each of the Methods in each scenario. Through fuzzy logic, AP and AQ combine the scenario weights, these per-scenario biases, and the Methods' results to determine results for a sample.
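The final fusion step can be sketched as a bias-corrected, belief-weighted average. This is a hedged reconstruction of the idea only; the field names, the per-scenario confidence factor, and the exact combination rule are assumptions, not the Combiner's actual logic:

```python
# Illustrative fuzzy combination: each Method's raw result is corrected by
# its learned per-scenario bias, then averaged with weights equal to
# (scenario belief) x (per-scenario confidence in that Method).

def combine(results, scenario_weights, bias, confidence):
    """results[m]: raw value from Method m.
    scenario_weights[s]: fuzzy belief in scenario s.
    bias[s][m], confidence[s][m]: learned per-scenario corrections."""
    num = den = 0.0
    for s, ws in scenario_weights.items():
        for m, r in results.items():
            w = ws * confidence[s][m]
            num += w * (r - bias[s][m])
            den += w
    return num / den
```

Because the scenario beliefs are fuzzy rather than crisp, a sample that partially matches several scenarios blends the corrections learned for each of them.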
Integrating Multiple Criteria Evaluation and GIS in Ecotourism: a Review
NASA Astrophysics Data System (ADS)
Mohd, Z. H.; Ujang, U.
2016-09-01
The concept of 'eco-tourism' has been increasingly heard in recent decades. Ecotourism is environmentally responsible travel intended to appreciate natural experiences and cultures. Ecotourism should have low impact on the environment and must contribute to the prosperity of local residents. This article reviews the use of Multiple Criteria Evaluation (MCE) and Geographic Information Systems (GIS) in ecotourism. Multiple criteria evaluation is mostly used for land suitability analysis or to fulfill specific objectives based on the various attributes that exist in the selected area. To support environmental decision making, GIS is applied to display and analyse the data through the Analytic Hierarchy Process (AHP). Integration between MCE and GIS tools is important for determining the relative weights of the criteria objectively. With the MCE method, the conflict between recreation and conservation can be resolved so as to minimize environmental and human impact. Most studies show that GIS-based AHP is a strong and effective multi-criteria evaluation approach for tourism planning, which can aid the development of the ecotourism industry effectively.
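The AHP weighting step the review refers to is standard: criterion weights are the principal eigenvector of a pairwise-comparison matrix, checked with a consistency ratio. A generic sketch (not tied to any particular study in the review):

```python
import numpy as np

# Standard AHP weighting: given a reciprocal pairwise-comparison matrix A
# (A[i][j] = how much more important criterion i is than j), the weights
# are the normalized principal eigenvector; the consistency ratio CR
# flags incoherent judgments (CR > 0.1 is the usual rejection rule).

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}

def ahp_weights(A):
    A = np.asarray(A, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = len(A)
    ci = (vals[k].real - n) / (n - 1)          # consistency index
    cr = ci / RI[n] if RI.get(n) else 0.0      # consistency ratio
    return w, cr
```

In a GIS workflow these weights then multiply the normalized criterion layers cell by cell to produce the suitability surface.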
Lamp method and apparatus using multiple reflections
MacLennan, Donald A.; Turner, Brian; Kipling, Kent
1999-01-01
A method wherein the light in a sulfur or selenium lamp is reflected through the fill a multiplicity of times to convert ultraviolet radiation to visible. A light emitting device comprised of an electrodeless envelope which bears a light reflecting covering around a first portion which does not crack due to differential thermal expansion and which has a second portion which comprises a light transmissive aperture.
Lamp method and apparatus using multiple reflections
MacLennan, D.A.; Turner, B.; Kipling, K.
1999-05-11
A method wherein the light in a sulfur or selenium lamp is reflected through the fill a multiplicity of times to convert ultraviolet radiation to visible is disclosed. A light emitting device comprised of an electrodeless envelope which bears a light reflecting covering around a first portion which does not crack due to differential thermal expansion and which has a second portion which comprises a light transmissive aperture. 20 figs.
Content Integration across Multiple Documents Reduces Memory for Sources
ERIC Educational Resources Information Center
Braasch, Jason L. G.; McCabe, Rebecca M.; Daniel, Frances
2016-01-01
The current experiments systematically examined semantic content integration as a mechanism for explaining source inattention and forgetting when reading-to-remember multiple texts. For all 3 experiments, degree of semantic overlap was manipulated amongst messages provided by various information sources. In Experiment 1, readers' source…
An Alternative Method for Multiplication of Rhotrices. Classroom Notes
ERIC Educational Resources Information Center
Sani, B.
2004-01-01
In this article, an alternative multiplication method for rhotrices is proposed. The modified method establishes some relationships between rhotrices and matrices: it has a direct relationship with matrix multiplication, and so rhotrices under this multiplication procedure…
Multiple Model Methods for Cost Function Based Multiple Hypothesis Trackers
2006-03-01
MHT’s Gaussian mixture with Multiple Model Adaptive Estimators (MMAEs) or Interacting Multiple Model (IMM) estimators, and replacing the elemental…
Multiple frequency method for operating electrochemical sensors
Martin, Louis P [San Ramon, CA
2012-05-15
A multiple frequency method for operating a sensor to measure a parameter of interest using calibration information, including the steps of: exciting the sensor at a first frequency to provide a first sensor response; exciting the sensor at a second frequency to provide a second sensor response; using the second sensor response and the calibration information to produce a calculated concentration of the interfering parameters; and using the first sensor response, the calculated concentration of the interfering parameters, and the calibration information to measure the parameter of interest.
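The two-frequency correction can be sketched as follows. The patent does not specify the calibration form, so the linear responses and coefficient names below are assumptions made purely for illustration:

```python
# Illustrative two-frequency correction: the second frequency is chosen to
# respond mainly to the interferent, whose estimated concentration is then
# subtracted out of the first-frequency response. Linear calibration
# (r1 = a1*param + c1*interf, r2 = b2*interf) is an assumed model.

def measure(r1, r2, cal):
    c_interf = r2 / cal['b2']                       # from second frequency
    return (r1 - cal['c1'] * c_interf) / cal['a1']  # corrected parameter
```

With the interferent's contribution estimated independently at the second frequency, the first-frequency reading can be corrected before converting it to the parameter of interest.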
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
HMC algorithm with multiple time scale integration and mass preconditioning
NASA Astrophysics Data System (ADS)
Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.
2006-01-01
We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
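The multiple time scale integration at the heart of this variant is a nested leapfrog: the cheap, preconditioned part of the force is integrated on a fine step while the expensive correction part is applied as outer kicks. The quadratic toy forces below stand in for the two terms of the Hasenbusch-split action (a sketch, not lattice code):

```python
# Two-level leapfrog for one HMC trajectory segment: force f1 (cheap,
# e.g. from the preconditioner term) is integrated with n_inner fine
# steps inside each coarse step that applies force f2 (expensive
# correction term) as half-kicks. The scheme stays time-reversible,
# as HMC's Metropolis accept step requires.

def nested_leapfrog(q, p, f1, f2, tau, n_outer, n_inner):
    dt = tau / n_outer
    for _ in range(n_outer):
        p = p + 0.5 * dt * f2(q)
        h = dt / n_inner
        for _ in range(n_inner):
            p = p + 0.5 * h * f1(q)
            q = q + h * p
            p = p + 0.5 * h * f1(q)
        p = p + 0.5 * dt * f2(q)
    return q, p
```

Reversibility (integrating forward, flipping momentum, and integrating again returns the starting point) is what makes the scheme valid inside HMC regardless of the step sizes chosen.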
Automatic numerical integration methods for Feynman integrals through 3-loop
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.
2015-05-01
We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
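The "integration plus extrapolation" strategy can be shown on a toy regulated integral. The papers use QUADPACK/ParInt with epsilon-algorithm extrapolation; here a Gauss-Legendre rule and a polynomial fit stand in for both (an illustration of the idea, not the authors' method):

```python
import numpy as np

# Evaluate a regulated integral I(eps) at a geometric sequence of
# regulator values and extrapolate to eps -> 0 by fitting a polynomial
# in eps and reading off its constant term.

def integral01(f, n=64):
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = 0.5 * (x + 1.0), 0.5 * w            # map [-1, 1] -> [0, 1]
    return float(np.sum(w * f(x)))

def extrapolate_to_zero(f, eps_seq, deg=3):
    vals = [integral01(lambda x: f(x, e)) for e in eps_seq]
    return np.polyfit(eps_seq, vals, deg)[-1]  # constant term ~ limit
```

For UV-divergent cases the same machinery is instead used to separate the divergent terms in the regulator from the finite remainder, which is the numerical regularization the abstract describes.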
Case studies: Soil mapping using multiple methods
NASA Astrophysics Data System (ADS)
Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald
2010-05-01
Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, a description of soil state and dynamics is required as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods show a different quality of information. By applying diverse methods we want to figure out which methods, or combination of methods, will give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints to a variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful
NEXT Propellant Management System Integration With Multiple Ion Thrusters
NASA Technical Reports Server (NTRS)
Sovey, James S.; Soulas, George C.; Herman, Daniel A.
2011-01-01
As a critical part of the NEXT test validation process, a multiple-string integration test was performed on the NEXT propellant management system and ion thrusters. The objectives of this test were to verify that the PMS is capable of providing stable flow control to multiple thrusters operating over the NEXT system throttling range and to demonstrate to potential users that the NEXT PMS is ready for transition to flight. A test plan was developed for the sub-system integration test for verification of PMS and thruster system performance and functionality requirements. Propellant management system calibrations were checked during the single and multi-thruster testing. The low pressure assembly total flow rates to the thruster(s) were within 1.4 percent of the calibrated support equipment flow rates. The inlet pressures to the main, cathode, and neutralizer ports of Thruster PM1R were measured as the PMS operated in 1-thruster, 2-thruster, and 3-thruster configurations. It was found that the inlet pressures to Thruster PM1R for 2-thruster and 3-thruster operation as well as single thruster operation with the PMS compare very favorably, indicating that flow rates to Thruster PM1R were similar in all cases. Characterizations of discharge losses, accelerator grid current, and neutralizer performance were performed as more operating thrusters were added to the PMS. There were no variations in these parameters as thrusters were throttled and single and multiple thruster operations were conducted. The propellant management system power consumption was measured at a fixed voltage to the DCIU and a fixed thermal throttle temperature of 75 °C. The total power consumed by the PMS was 10.0, 17.9, and 25.2 W, respectively, for single, 2-thruster, and 3-thruster operation. These sub-system integration tests of the PMS, the DCIU Simulator, and multiple thrusters addressed, in part, the NEXT PMS and propulsion system performance and functionality requirements.
Cao, D-S; Xiao, N; Li, Y-J; Zeng, W-B; Liang, Y-Z; Lu, A-P; Xu, Q-S; Chen, AF
2015-01-01
Identifying potential adverse drug reactions (ADRs) is critically important for drug discovery and public health. Here we developed a multiple evidence fusion (MEF) method for the large-scale prediction of drug ADRs that can handle both approved drugs and novel molecules. MEF is based on similarity reference by collaborative filtering, and integrates multiple similarity measures from various data types, taking advantage of the complementarity in the data. We used MEF to integrate drug-related and ADR-related data from multiple levels, including the network structural data formed by known drug–ADR relationships, for predicting likely unknown ADRs. On cross-validation, it obtains high sensitivity and specificity, substantially outperforming existing methods that utilize single or a few data types. We validated our predictions by their overlap with drug–ADR associations that are known in databases. The proposed computational method could be used for complementary hypothesis generation and rapid analysis of potential drug–ADR interactions. PMID:26451329
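The collaborative-filtering core can be sketched in a few lines. This is a minimal stand-in for MEF (which fuses many similarity types with learned weights; here two matrices are simply averaged, an assumption for illustration):

```python
import numpy as np

# Similarity-based collaborative filtering for drug-ADR scores: fuse the
# available drug-drug similarity matrices, then score each (drug, ADR)
# pair as the similarity-weighted vote of drugs with known associations.
# Works for novel molecules too, since only their similarities are needed.

def predict_scores(sim_list, known):
    S = np.mean(sim_list, axis=0)          # fuse similarity matrices
    np.fill_diagonal(S, 0.0)               # exclude self-similarity
    denom = S.sum(axis=1, keepdims=True) + 1e-12
    return (S @ known) / denom             # rows: drugs, cols: ADRs
```

A drug most similar to drugs carrying a given ADR receives the highest score for that ADR, which is the "similarity reference" intuition in the abstract.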
Comparison of photopeak integration methods
NASA Astrophysics Data System (ADS)
Kennedy, G.
1990-12-01
Several methods for the calculation of gamma-ray photopeak areas have been compared for the case of a small peak on a high Compton background. 980 similar spectra were accumulated with a germanium detector using a weak 137Cs source to produce a peak at 662 keV on a Compton background generated by a 60Co source. A computer program was written to calculate the area of the 662 keV peak using the total- and partial-peak-area methods, a modification of Sterlinski's method, Loska's method and least-squares fitting of Gaussian peak shapes with linear and quadratic background. The precision attained was highly dependent on the number of channels used to estimate the background, and the best precision, about 9.5%, was obtained with the partial-peak-area method, the modified Sterlinski method and least-squares fitting with variable peak position, fixed peak width and linear background. The methods were also evaluated for their sensitivity to uncertainty in the peak centroid position. Considering precision, ease of use, reliability and universal applicability, the total-peak-area method using several channels for background estimation and the least-squares-fitting method are recommended.
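The recommended total-peak-area method is simple enough to state directly: sum the counts over the peak region, subtract a linear background estimated from several channels on each side, and propagate Poisson counting uncertainty (a generic sketch of the method named in the paper):

```python
import numpy as np

# Total-peak-area method: net area = gross counts in [lo, hi] minus a
# background level estimated from n_bg channels flanking each side.
# The variance combines the Poisson variance of the gross counts with
# that of the scaled background estimate.

def total_peak_area(counts, lo, hi, n_bg):
    left = counts[lo - n_bg:lo]
    right = counts[hi + 1:hi + 1 + n_bg]
    width = hi - lo + 1
    bg_per_ch = (left.mean() + right.mean()) / 2.0
    gross = counts[lo:hi + 1].sum()
    net = gross - width * bg_per_ch
    var = gross + (width / (2.0 * n_bg)) ** 2 * (left.sum() + right.sum())
    return net, np.sqrt(var)
```

As the paper notes, the achievable precision depends strongly on `n_bg`, the number of channels used for the background estimate.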
Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.
ERIC Educational Resources Information Center
Smith, Kent W.; Sasaki, M. S.
1979-01-01
A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative terms of independent variables are entered. The method is not a ridge regression solution. (JKS)
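The note does not spell out its method here, but the problem it targets is easy to demonstrate: a raw predictor and its product term are highly correlated, and mean-centering (the textbook remedy this literature discusses) removes most of that correlation. A small illustration, not the authors' specific procedure:

```python
import numpy as np

# Demonstration of multiplicative-term multicollinearity: with nonzero
# means, x and x*z are strongly correlated; after mean-centering both
# variables, the correlation between x_c and x_c*z_c nearly vanishes.

rng = np.random.default_rng(42)
x = rng.normal(5.0, 1.0, 1000)      # nonzero mean drives the collinearity
z = rng.normal(3.0, 1.0, 1000)
r_raw = np.corrcoef(x, x * z)[0, 1]
xc, zc = x - x.mean(), z - z.mean()
r_centered = np.corrcoef(xc, xc * zc)[0, 1]
```

For independent x and z the theoretical correlation of x with xz is E[z]·Var(x)/sd(xz), which is large when the means are far from zero and essentially zero after centering.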
Can the meaning of multiple words be integrated unconsciously?
van Gaal, Simon; Naccache, Lionel; Meuwese, Julia D. I.; van Loon, Anouk M.; Leighton, Alexandra H.; Cohen, Laurent; Dehaene, Stanislas
2014-01-01
What are the limits of unconscious language processing? Can language circuits process simple grammatical constructions unconsciously and integrate the meaning of several unseen words? Using behavioural priming and electroencephalography (EEG), we studied a specific rule-based linguistic operation traditionally thought to require conscious cognitive control: the negation of valence. In a masked priming paradigm, two masked words were successively (Experiment 1) or simultaneously presented (Experiment 2), a modifier (‘not’/‘very’) and an adjective (e.g. ‘good’/‘bad’), followed by a visible target noun (e.g. ‘peace’/‘murder’). Subjects indicated whether the target noun had a positive or negative valence. The combination of these three words could either be contextually consistent (e.g. ‘very bad - murder’) or inconsistent (e.g. ‘not bad - murder’). EEG recordings revealed that grammatical negations could unfold partly unconsciously, as reflected in similar occipito-parietal N400 effects for conscious and unconscious three-word sequences forming inconsistent combinations. However, only conscious word sequences elicited P600 effects, later in time. Overall, these results suggest that multiple unconscious words can be rapidly integrated and that an unconscious negation can automatically ‘flip the sign’ of an unconscious adjective. These findings not only extend the limits of subliminal combinatorial language processes, but also highlight how consciousness modulates the grammatical integration of multiple words. PMID:24639583
Integrated control system and method
Wang, Paul Sai Keat; Baldwin, Darryl; Kim, Myoungjin
2013-10-29
An integrated control system for use with an engine connected to a generator providing electrical power to a switchgear is disclosed. The engine receives gas produced by a gasifier. The control system includes an electronic controller associated with the gasifier, engine, generator, and switchgear. A gas flow sensor monitors a gas flow from the gasifier to the engine through an engine gas control valve and provides a gas flow signal to the electronic controller. A gas oversupply sensor monitors a gas oversupply from the gasifier and provides an oversupply signal indicative of gas not provided to the engine. A power output sensor monitors a power output of the switchgear and provides a power output signal. The electronic controller changes the gas production of the gasifier and the power output rating of the switchgear based on the gas flow signal, the oversupply signal, and the power output signal.
Chen, Jia-Jin; Wang, Jia-Yi; Li, Li-Chun; Lin, Jing; Yang, Kai; Ma, Zhi-Guo; Xu, Zong-Huan
2012-03-01
In this study, an index system for the integrated risk evaluation of multiple disasters affecting Longyan production in Fujian Province was constructed, based on an analysis of the major environmental factors affecting Longyan growth and yield, and from the viewpoints of the potential hazard of disaster-causing factors, the vulnerability of the hazard-affected body, and the disaster prevention and mitigation capability of Longyan growth regions in the Province. In addition, an integrated evaluation model of multiple disasters was established to evaluate the risks of the major agro-meteorological disasters affecting Longyan yield, based on yearly meteorological data, Longyan planting area and yield, and other socio-economic data for the Longyan growth region in Fujian, using the integral weights of risk indices determined by AHP and entropy weight coefficient methods. In the Province, the Longyan growth regions with a light integrated risk of multiple disasters were distributed in the low-elevation coastal counties south of Changle (except Dongshan County). The regions with severe and more severe integrated risk were mainly in Zhangping of Longyan; Dongshan, Pinghe, Nanjing, and Hua'an of Zhangzhou; Yongchun and Anxi of Quanzhou; the northern mountainous areas of Putian and Xianyou; Minqing, Minhou, Luoyuan, and the mountainous areas of Fuzhou; and Fu'an, Xiapu, and the mountainous areas of Ningde. Among these, the regions with severe integrated risk were in Dongshan, Zhangping, and other high-altitude mountainous areas, and the regions with moderate integrated risk were distributed in the other areas of the Province.
Robust rotational-velocity-Verlet integration methods
NASA Astrophysics Data System (ADS)
Rozmanov, Dmitri; Kusalik, Peter G.
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions, but it is not quaternion-specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared with existing rotational integrators. The proposed integrators perform at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
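The abstract's quaternion-based orientation update can be illustrated generically. The sketch below is not Svanberg's leap-frog scheme or the authors' velocity-Verlet variants; it is the basic exact-rotation quaternion step for a constant angular velocity, which such integrators build on:

```python
import numpy as np

# A generic quaternion step for rigid-body orientation (a sketch, not the
# paper's algorithm): for constant angular velocity w over a small step dt,
# the exact update is a quaternion product with the rotation exp(w*dt/2).
def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def step(q, w, dt):
    theta = np.linalg.norm(w) * dt           # rotation angle this step
    if theta == 0.0:
        return q
    axis = w / np.linalg.norm(w)
    dq = np.concatenate(([np.cos(theta/2)], np.sin(theta/2) * axis))
    return quat_mul(q, dq)

q = np.array([1.0, 0.0, 0.0, 0.0])           # identity orientation
w = np.array([0.0, 0.0, 2*np.pi])            # one full turn per unit time
for _ in range(1000):
    q = step(q, w, 1e-3)                      # integrate t = 0 .. 1
# after a full revolution the orientation is back to +/- identity,
# and the unit norm of q is preserved to rounding error
```

Using the exact per-step rotation (rather than a truncated series) keeps the quaternion norm conserved, which is one of the qualitative properties robust rotational integrators aim to preserve.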
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
Boundary integral equation methods (BIM), or simply integral methods (IM), in the context of optical design and simulation are rigorous electromagnetic methods that solve the Helmholtz or Maxwell equations on the boundary (the surface or interface between two materials) for scattering and/or diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer-generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods were among the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity, mainly owing to the good convergence of integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods of D. Maystre at the Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods, such as differential and modal methods, suffered from unstable behaviour and slow convergence compared with BIMs for metallic gratings in TM polarization from the beginning until the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at the Karl Weierstrass Institute in Berlin under a contract with the Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non
Early Gnathostome Phylogeny Revisited: Multiple Method Consensus
Qiao, Tuo; King, Benedict; Long, John A.; Ahlberg, Per E.; Zhu, Min
2016-01-01
A series of recent studies recovered consistent phylogenetic scenarios of jawed vertebrates, such as the paraphyly of placoderms with respect to crown gnathostomes, and antiarchs as the sister group of all other jawed vertebrates. However, some of the phylogenetic relationships within the group have remained controversial, such as the positions of Entelognathus, ptyctodontids, and the Guiyu-lineage that comprises Guiyu, Psarolepis and Achoania. The revision of the dataset in a recent study reveals a modified phylogenetic hypothesis, which shows that some of these phylogenetic conflicts stemmed from a few inadvertent miscodings. The interrelationships of early gnathostomes are addressed based on a combined new dataset with 103 taxa and 335 characters, which is the most comprehensive morphological dataset constructed to date. This dataset is investigated in a phylogenetic context using maximum parsimony (MP), Bayesian inference (BI) and maximum likelihood (ML) approaches in an attempt to explore the consensus and incongruence between the hypotheses of early gnathostome interrelationships recovered from different methods. Our findings consistently corroborate the paraphyly of placoderms, all ‘acanthodians’ as a paraphyletic stem group of chondrichthyans, Entelognathus as a stem gnathostome, and the Guiyu-lineage as stem sarcopterygians. The incongruence using different methods is less significant than the consensus, and mainly relates to the positions of the placoderm Wuttagoonaspis, the stem chondrichthyan Ramirosuarezia, and the stem osteichthyan Lophosteus—the taxa that are either poorly known or highly specialized in character complement. Given the different performance of each phylogenetic approach, our study provides an empirical case that multiple phylogenetic analyses of morphological data are mutually complementary rather than redundant. PMID:27649538
Dissociating conflict adaptation from feature integration: a multiple regression approach.
Notebaert, Wim; Verguts, Tom
2007-10-01
Congruency effects are typically smaller after incongruent than after congruent trials. One explanation is in terms of higher levels of cognitive control after detection of conflict (conflict adaptation; e.g., M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, & J. D. Cohen, 2001). An alternative explanation for these results is based on feature repetition and/or integration effects (e.g., B. Hommel, R. W. Proctor, & K.-P. Vu, 2004; U. Mayr, E. Awh, & P. Laurey, 2003). Previous attempts to dissociate feature integration from conflict adaptation focused on a particular subset of the data in which feature transitions were held constant (J. G. Kerns et al., 2004) or in which congruency transitions were held constant (C. Akcay & E. Hazeltine, in press), but this has a number of disadvantages. In this article, the authors present a multiple regression solution for this problem and discuss its possibilities and pitfalls.
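The regression logic the abstract describes can be sketched on simulated trials (hypothetical data and effect sizes, not the authors' dataset): regress trial-level reaction times jointly on current congruency, previous congruency, their interaction (the conflict-adaptation term), and a feature-repetition regressor, so adaptation is estimated while feature repetition is statistically controlled rather than removed by trial selection:

```python
import numpy as np

# Simulated trial data: binary regressors for current/previous congruency and
# feature repetition; the true conflict-adaptation effect is the interaction.
rng = np.random.default_rng(1)
n = 2000
cur = rng.integers(0, 2, n)      # 1 = current trial incongruent
prev = rng.integers(0, 2, n)     # 1 = previous trial incongruent
rep = rng.integers(0, 2, n)      # 1 = stimulus feature repeats

true_adapt = -15.0               # smaller congruency effect after conflict
rt = (500 + 40*cur - 10*prev + true_adapt*cur*prev
      - 20*rep + rng.normal(0, 30, n))

# Joint least-squares fit: intercept, main effects, interaction, repetition.
X = np.column_stack([np.ones(n), cur, prev, cur*prev, rep])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
adapt_estimate = beta[3]         # interaction coefficient = conflict adaptation
```

Because feature repetition enters as its own regressor, its variance is partialled out instead of discarding the trials on which it varies, which is the advantage the authors claim over subset-based analyses.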
Research in Mathematics Education: Multiple Methods for Multiple Uses
ERIC Educational Resources Information Center
Battista, Michael; Smith, Margaret S.; Boerst, Timothy; Sutton, John; Confrey, Jere; White, Dorothy; Knuth, Eric; Quander, Judith
2009-01-01
Recent federal education policies and reports have generated considerable debate about the meaning, methods, and goals of "scientific research" in mathematics education. Concentrating on the critical problem of determining which educational programs and practices reliably improve students' mathematics achievement, these policies and reports focus…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-12
... From the Federal Register Online via the Government Publishing Office INTERNATIONAL TRADE COMMISSION Certain Integrated Circuit Packages Provided with Multiple Heat- Conducting Paths and Products... integrated circuit packages provided with multiple heat-conducting paths and products containing same...
Integrated presentation of ecological risk from multiple stressors
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-01-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic. PMID:27782171
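The prevalence idea can be sketched with a toy Monte Carlo model (entirely hypothetical endpoint and stressor functions, far simpler than the authors' ecological simulations): for each environmental scenario, run many stochastic simulations of an endpoint under combined chemical and ecological stress, and report the fraction of runs in which the endpoint falls below a protection threshold:

```python
import random

# Toy endpoint model: food availability scales the baseline, chemical stress
# reduces it, and a random factor stands in for environmental variability.
def simulate_endpoint(chem_stress, food, rng):
    base = 100.0 * food
    effect = 1.0 - 0.8 * chem_stress
    return base * effect * rng.uniform(0.8, 1.2)

# "Prevalence" = fraction of stochastic runs below the protection threshold.
def prevalence(chem_stress, food, threshold=60.0, runs=5000, seed=0):
    rng = random.Random(seed)
    below = sum(simulate_endpoint(chem_stress, food, rng) < threshold
                for _ in range(runs))
    return below / runs

low_risk = prevalence(chem_stress=0.1, food=1.0)   # mild stress, good food
high_risk = prevalence(chem_stress=0.5, food=0.8)  # combined stressors
```

A prevalence plot would sweep such scenarios over a grid of conditions; the point of the framework is that the combined-stressor scenario yields a higher prevalence than either stressor alone would suggest.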
Integrated presentation of ecological risk from multiple stressors
NASA Astrophysics Data System (ADS)
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-10-01
Method and systems for collecting data from multiple fields of view
NASA Technical Reports Server (NTRS)
Schwemmer, Geary K. (Inventor)
2002-01-01
Systems and methods for processing light from multiple fields (48, 54, 55) of view without excessive machinery for scanning optical elements. In an exemplary embodiment of the invention, multiple holographic optical elements (41, 42, 43, 44, 45), integrated on a common film (4), diffract and project light from respective fields of view.
EMERGY METHODS: VALUABLE INTEGRATED ASSESSMENT TOOLS
NHEERL's Atlantic Ecology Division is investigating emergy methods as tools for integrated assessment in several projects evaluating environmental impacts, policies, and alternatives for remediation and intervention. Emergy accounting is a methodology that provides a quantitative...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... COMMISSION Certain Integrated Circuit Packages Provided With Multiple Heat- Conducting Paths and Products.... International Trade Commission has received a complaint entitled Certain Integrated Circuit Packages Provided... sale within the United States after importation of certain integrated circuit packages provided...
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods.
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
PMID:24279835
Methods for biological data integration: perspectives and challenges
Gligorijević, Vladimir; Pržulj, Nataša
2015-01-01
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development. PMID:26490630
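The non-negative matrix factorization (NMF) building block the review highlights can be sketched minimally. This is the classic Lee-Seung multiplicative-update scheme on a single toy matrix; the integrative methods surveyed factorize several relational matrices jointly, which this sketch does not attempt:

```python
import numpy as np

# Minimal NMF sketch: factor a non-negative matrix X ~ W @ H with
# multiplicative updates, which keep both factors non-negative by construction.
rng = np.random.default_rng(2)
X = rng.random((20, 15))                 # toy non-negative data matrix
k = 4                                    # latent dimension
W = rng.random((20, k)) + 0.1
H = rng.random((k, 15)) + 0.1

def frob_err(X, W, H):
    return np.linalg.norm(X - W @ H)

err0 = frob_err(X, W, H)
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update H, then W; the small
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # epsilon guards against 0/0
err1 = frob_err(X, W, H)                     # reconstruction error decreases
```

In the integrative setting, shared factor matrices tie the decompositions of different data types together, which is what lets heterogeneous networks be mined collectively.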
Erlangga, Mokhammad Puput
2015-04-16
Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal; multiple reflections are one kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic data and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the move-out difference between them. However, where the move-out difference is too small, the Radon filter is not sufficient to attenuate the multiples, and it also produces artifacts on the gathers. In addition to the Radon filter, we used the Wave Equation Multiple Elimination (WEMR) method to attenuate the long-period multiple reflections. The WEMR method attenuates long-period multiples based on wave-equation inversion: from the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is then used to eliminate the multiples. Because WEMR does not depend on the move-out difference, it can be applied to seismic data with a small move-out difference, such as the Mentawai data, where the small move-out difference is caused by the limited far offset of only 705 meters. We compared the multiple-attenuated stacked real data after the Radon filter and after the WEMR process, and conclude that the WEMR method attenuates long-period multiple reflections on the real (Mentawai) seismic data more effectively than the Radon filter method.
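The move-out discrimination behind the Radon filter can be illustrated with a toy slant stack (linear τ-p transform). All numbers below are made up; this is a schematic of the principle, not the processing flow applied to the Mentawai data:

```python
import numpy as np

# Build a small synthetic gather with two linear events: a flat "primary"
# and a sloping "multiple". A slant stack maps each to its own slowness p,
# so one could be muted in the tau-p domain.
nt, nx, dt = 200, 24, 0.004            # samples, traces, sample interval (s)
offsets = np.arange(nx) * 25.0         # trace offsets (m)
gather = np.zeros((nt, nx))
for ix, off in enumerate(offsets):
    for slope, t0 in [(0.0, 0.2), (0.0008, 0.3)]:  # (slowness s/m, intercept s)
        gather[int(round((t0 + slope * off) / dt)), ix] += 1.0

p_values = np.linspace(0.0, 0.0012, 13)            # trial slownesses (s/m)
taup = np.zeros((nt, len(p_values)))
for ip, p in enumerate(p_values):
    for ix, off in enumerate(offsets):
        shift = int(round(p * off / dt))           # slant each trace, then stack
        if shift < nt:
            taup[:nt - shift, ip] += gather[shift:, ix]

primary_peak = taup[:, 0].max()        # flat event stacks coherently at p = 0
multiple_peak = taup[:, 8].max()       # sloping event stacks at p = 0.0008 s/m
```

Each event reaches its full stacked amplitude only at its own slowness; when the two slownesses nearly coincide (a small move-out difference, as in the Mentawai data), the peaks overlap and this separation fails, which is the limitation WEMR avoids.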
An integrated modelling framework for neural circuits with multiple neuromodulators
Vemana, Vinith
2017-01-01
Neuromodulators are endogenous neurochemicals that regulate biophysical and biochemical processes, which control brain function and behaviour, and are often the targets of neuropharmacological drugs. Neuromodulator effects are generally complex, partly owing to the involvement of broad innervation, co-release of neuromodulators, complex intra- and extrasynaptic mechanisms, the existence of multiple receptor subtypes, and high interconnectivity within the brain. In this work, we propose an efficient yet sufficiently realistic computational neural modelling framework to study some of these complex behaviours. Specifically, we propose a novel dynamical neural circuit model that integrates the effective neuromodulator-induced currents based on various experimental data (e.g. electrophysiology, neuropharmacology and voltammetry). The model can incorporate multiple interacting brain regions, including neuromodulator sources, simulates efficiently, and is easily extendable to large-scale brain models, e.g. for neuroimaging purposes. As an example, we model a network of mutually interacting neural populations in the lateral hypothalamus, dorsal raphe nucleus and locus coeruleus, which are major sources of the neuromodulators orexin/hypocretin, serotonin and norepinephrine/noradrenaline, respectively, and which play significant roles in regulating many physiological functions. We demonstrate that such a model can provide predictions of systemic drug effects of the popular antidepressants (e.g. reuptake inhibitors), neuromodulator antagonists or their combinations. Finally, we developed user-friendly graphical user interface software for model simulation and visualization for both fundamental sciences and pharmacological studies. PMID:28100828
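The flavor of such circuit models can be conveyed with a schematic firing-rate sketch (entirely hypothetical parameters, far simpler than the paper's framework): two mutually connected populations whose coupling gain is scaled by a neuromodulator level, integrated with forward Euler:

```python
import math

# Sigmoidal activation, common in firing-rate models.
def f(x):
    return 1.0 / (1.0 + math.exp(-x))

# Two coupled populations r1, r2 with time constant tau; the neuromodulator
# level scales the effective coupling weights (a stand-in for an induced
# current). Forward-Euler integration to steady state.
def simulate(mod_level, steps=5000, dt=0.001, tau=0.02):
    r1 = r2 = 0.0
    w12 = 2.0 * mod_level
    w21 = 1.5 * mod_level
    for _ in range(steps):
        dr1 = (-r1 + f(w12 * r2 + 0.5)) / tau
        dr2 = (-r2 + f(w21 * r1 + 0.2)) / tau
        r1 += dt * dr1
        r2 += dt * dr2
    return r1, r2

low = simulate(mod_level=0.2)    # weak neuromodulatory drive
high = simulate(mod_level=1.0)   # strong drive raises both steady states
```

A drug such as a reuptake inhibitor would be modelled by shifting the effective mod_level, and the framework's value is in predicting how such shifts propagate through several interacting source regions at once.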
Using Multiple Ontologies to Integrate Complex Biological Data
Petri, Victoria; Pasko, Dean; Bromberg, Susan; Wu, Wenhua; Chen, Jiali; Nenasheva, Nataliya; Kwitek, Anne; Twigger, Simon; Jacob, Howard
2005-01-01
The strength of the rat as a model organism lies in its utility in pharmacology, biochemistry and physiology research. Data resulting from such studies are difficult to represent in databases, and the creation of user-friendly data mining tools has proved difficult. The Rat Genome Database has developed a comprehensive ontology-based data structure and annotation system to integrate physiological data along with environmental and experimental factors, as well as genetic and genomic information. RGD uses multiple ontologies to integrate complex biological information from the molecular level to the whole organism, and to develop data mining and presentation tools. This approach allows RGD to indicate not only the phenotypes seen in a strain but also the specific values under each diet and atmospheric condition, as well as gender differences. Harnessing the power of ontologies in this way allows the user to gather and filter data in a customized fashion, so that a researcher can retrieve all phenotype readings for which high hypoxia is a factor. Utilizing the same data structure for expression data, pathways and biological processes, RGD will provide a comprehensive research platform which allows users to investigate the conditions under which biological processes are altered and to elucidate the mechanisms of disease. PMID:18629202
Multiple Integrated Complementary Healing Approaches: Energetics & Light for bone.
Gray, Michael G; Lackey, Brett R; Patrick, Evelyn F; Gray, Sandra L; Hurley, Susan G
2016-01-01
A synergistic-healing strategy that combines molecular targeting within a system-wide perspective is presented as the Multiple Integrated Complementary Healing Approaches: Energetics And Light (MICHAEL). The basis of the MICHAEL approach is the realization that environmental, nutritional and electromagnetic factors form a regulatory framework involved in bone and nerve healing. The interactions of light, energy, and nutrition with neural, hormonal and cellular pathways will be presented. Energetic therapies including electrical, low-intensity pulsed ultrasound and light based treatments affect growth, differentiation and proliferation of bone and nerve and can be utilized for their healing benefits. However, the benefits of these therapies can be impaired by the absence of nutritional, hormonal and organismal factors. For example, lack of sleep, disrupted circadian rhythms and vitamin-D deficiency can impair healing. Molecular targets, such as the Wnt pathway, protein kinase B and glucocorticoid signaling systems can be modulated by nutritional components, including quercetin, curcumin and Mg(2+) to enhance the healing process. The importance of water and water-regulation will be presented as an integral component. The effects of exercise and acupuncture on bone healing will also be discussed within the context of the MICHAEL approach.
Integrated molecular profiling of SOD2 expression in multiple myeloma.
Hurt, Elaine M; Thomas, Suneetha B; Peng, Benjamin; Farrar, William L
2007-05-01
Reactive oxygen species are known to be involved in several cellular processes, including cell signaling. SOD2 is a key enzyme in the conversion of reactive oxygen species and has been implicated in a host of disease states, including cancer. Using an integrated, whole-cell approach encompassing epigenetics, genomics, and proteomics, we have defined the role of SOD2 in multiple myeloma. We show that the SOD2 promoter is methylated in several cell lines and there is a correlative decrease in expression. Furthermore, myeloma patient samples have decreased SOD2 expression compared with healthy donors. Overexpression of SOD2 results in decreased proliferation and altered sensitivity to 2-methoxyestradiol-induced DNA damage and apoptosis. Genomic profiling revealed regulation of 65 genes, including genes involved in tumorigenesis, and proteomic analysis identified activation of the JAK/STAT pathway. Analysis of nearly 400 activated transcription factors identified 31 transcription factors with altered DNA binding activity, including XBP1, NFAT, forkhead, and GAS binding sites. Integration of data from our gestalt molecular analysis has defined a role for SOD2 in cellular proliferation, JAK/STAT signaling, and regulation of several transcription factors.
Tools and Models for Integrating Multiple Cellular Networks
Gerstein, Mark
2015-11-06
In this grant, we have systematically investigated the integrated networks which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source, are available for download from GitHub, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding their dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
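One simple way to expose hierarchy in a regulatory network can be sketched as below. This is not the authors' published algorithm; it is a standard topological layering (Kahn-style traversal) that assigns each node the length of the longest regulatory path reaching it, so master regulators sit at level 0 and downstream targets at deeper levels. It assumes an acyclic toy network:

```python
from collections import defaultdict

# Hypothetical toy regulatory edges: regulator -> target.
edges = [("tf_top", "tf_mid1"), ("tf_top", "tf_mid2"),
         ("tf_mid1", "gene_a"), ("tf_mid2", "gene_a"), ("tf_mid2", "gene_b")]

children = defaultdict(list)
indeg = defaultdict(int)
nodes = set()
for u, v in edges:
    children[u].append(v)
    indeg[v] += 1
    nodes.update((u, v))

# Longest-path layering: a node's level is finalized once all of its
# regulators have been processed.
level = {n: 0 for n in nodes}
queue = [n for n in nodes if indeg[n] == 0]     # master regulators
while queue:
    u = queue.pop()
    for v in children[u]:
        level[v] = max(level[v], level[u] + 1)
        indeg[v] -= 1
        if indeg[v] == 0:
            queue.append(v)
```

Real regulatory networks contain feedback loops, so published hierarchy methods must first break or otherwise handle cycles, which this sketch does not address.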
An advanced Gibbs-Duhem integration method: theory and applications.
van 't Hof, A; Peters, C J; de Leeuw, S W
2006-02-07
The conventional Gibbs-Duhem integration method is very convenient for the prediction of phase equilibria of both pure components and mixtures. However, it turns out to be inefficient. The method requires a number of lengthy simulations to predict the state conditions at which phase coexistence occurs, and this number is not known at the outset of the numerical integration process. Furthermore, the molecular configurations generated during the simulations are merely used to predict the coexistence condition and not the liquid- and vapor-phase densities and mole fractions at coexistence. In this publication, an advanced Gibbs-Duhem integration method is presented that overcomes the above-mentioned disadvantages and inefficiency. The advanced method is a combination of Gibbs-Duhem integration and multiple-histogram reweighting. Application of multiple-histogram reweighting enables the substitution of the unknown number of simulations by a fixed and predetermined number. The advanced method has a retroactive nature; a current simulation improves the predictions of previously computed coexistence points as well. The advanced Gibbs-Duhem integration method has been applied to the prediction of vapor-liquid equilibria of a number of binary mixtures. The method turned out to be very convenient, much faster than the conventional method, and provided smooth simulation results. As the employed force fields perfectly predict pure-component vapor-liquid equilibria, the binary simulations were very well suited for testing the performance of different sets of combining rules. Employing Lorentz-Hudson-McCoubrey combining rules for interactions between unlike molecules, as opposed to Lorentz-Berthelot combining rules for all interactions, considerably improved the agreement between experimental and simulated data.
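The numerical core of conventional Gibbs-Duhem integration, stepping along a coexistence line by integrating a Clapeyron-type ODE with a predictor-corrector scheme, can be sketched on a toy system where the answer is known analytically. In the real method the right-hand side comes from simulation averages; here it is the Clausius-Clapeyron relation with an assumed constant enthalpy of vaporization:

```python
import math

# Integrate d(ln p)/dT = dH / (R T^2) along the coexistence line with a
# trapezoidal predictor-corrector step, starting from a known point.
R, dH = 8.314, 30000.0                 # J/(mol K), J/mol (assumed values)

def rhs(T):
    return dH / (R * T * T)

T0, p0 = 300.0, 1.0e5                  # known starting coexistence point
lnp, T, dT = math.log(p0), T0, 0.5
while T < 350.0 - 1e-9:
    pred = rhs(T)                      # predictor slope at the current T
    corr = rhs(T + dT)                 # corrector slope at the new T
    lnp += dT * 0.5 * (pred + corr)    # trapezoidal update of ln p
    T += dT

p_numeric = math.exp(lnp)
# exact solution for constant dH, for comparison
p_exact = p0 * math.exp(-dH / R * (1/350.0 - 1/300.0))
```

The inefficiency the paper targets is that, in simulation, each evaluation of the slope requires a full molecular simulation at the current state point; histogram reweighting lets a fixed set of simulations supply slopes everywhere along the line.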
Principles and methods of integrative genomic analyses in cancer.
Kristensen, Vessela N; Lingjærde, Ole Christian; Russnes, Hege G; Vollan, Hans Kristian M; Frigessi, Arnoldo; Børresen-Dale, Anne-Lise
2014-05-01
Combined analyses of molecular data, such as DNA copy-number alteration, mRNA and protein expression, point to biological functions and molecular pathways being deregulated in multiple cancers. Genomic, metabolomic and clinical data from various solid cancers and model systems are emerging and can be used to identify novel patient subgroups for tailored therapy and monitoring. The integrative genomics methodologies that are used to interpret these data require expertise in different disciplines, such as biology, medicine, mathematics, statistics and bioinformatics, and they can seem daunting. The objectives, methods and computational tools of integrative genomics that are available to date are reviewed here, as is their implementation in cancer research.
Multiple Shooting-Local Linearization method for the identification of dynamical systems
NASA Astrophysics Data System (ADS)
Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.
2016-08-01
The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm results in a well-established method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives with no more computational cost than that involved in the integration of the ODE. The numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameter values under different scenarios of noisy data.
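The key idea, integrating the ODE and its parameter derivatives together and feeding them to a Gauss-Newton update, can be sketched on a toy decay model. This is a single-trajectory simplification (no multiple-shooting segments, plain Euler rather than the LL scheme), with made-up values, just to show the sensitivity-equation mechanics:

```python
import math

# Toy model dx/dt = -k x; the sensitivity s = dx/dk obeys ds/dt = -x - k s,
# and both are integrated together so the Jacobian comes at no extra ODE cost.
def integrate(k, x0, t_obs, dt=1e-3):
    xs, ss = [], []
    x, s, t = x0, 0.0, 0.0
    for t_target in t_obs:
        while t < t_target - 1e-12:
            x, s = x + dt * (-k * x), s + dt * (-x - k * s)
            t += dt
        xs.append(x)
        ss.append(s)
    return xs, ss

k_true, x0 = 0.7, 2.0
t_obs = [0.5, 1.0, 1.5, 2.0]
data = [x0 * math.exp(-k_true * t) for t in t_obs]   # noise-free observations

k = 0.3                                              # poor initial guess
for _ in range(20):                                  # Gauss-Newton iterations
    xs, ss = integrate(k, x0, t_obs)
    r = [d - x for d, x in zip(data, xs)]            # residuals
    num = sum(si * ri for si, ri in zip(ss, r))
    den = sum(si * si for si in ss)
    k += num / den                                   # 1-D Gauss-Newton step
```

Multiple shooting would additionally introduce the state at each segment start as an unknown with continuity constraints, which stabilizes the fit for long or chaotic trajectories.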
Methods of geometrical integration in accelerator physics
NASA Astrophysics Data System (ADS)
Andrianov, S. N.
2016-12-01
In this paper we consider a method of geometric integration for the long-term evolution of a particle beam in cyclic accelerators, based on the matrix representation of the particle evolution operator. This method allows us to calculate the corresponding beam evolution in terms of two-dimensional matrices, including nonlinear effects. The ideology of geometric integration introduces into the computational algorithms the amendments necessary to preserve the qualitative properties of maps presented in the form of truncated series generated by the evolution operator. This formalism extends to both polarized and intense beams. Examples of practical applications are described.
Differential temperature integrating diagnostic method and apparatus
Doss, James D.; McCabe, Charles W.
1976-01-01
A method and device for detecting the presence of breast cancer in women by integrating the temperature difference between the temperature of a normal breast and that of a breast having a malignant tumor. The breast-receiving cups of a brassiere are each provided with thermally conductive material next to the skin, with a thermistor attached to the thermally conductive material in each cup. The thermistors are connected to adjacent arms of a Wheatstone bridge. Unbalance currents in the bridge are integrated with respect to time by means of an electrochemical integrator. In the absence of a tumor, both breasts maintain substantially the same temperature, and the bridge remains balanced. If a tumor is present in one breast, a higher temperature in that breast unbalances the bridge and the electrochemical cells integrate the temperature difference with respect to time.
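A digital analogue of the electrochemical integrator described in this patent abstract is a simple trapezoidal integral of the temperature difference over time, flagged against a threshold. The readings and threshold below are made-up illustrative values, not clinical ones.

```python
def integrated_difference(times, t_left, t_right):
    """Trapezoidal integral of (t_left - t_right) over time, in deg-C*hours."""
    total = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        d0 = t_left[i - 1] - t_right[i - 1]
        d1 = t_left[i] - t_right[i]
        total += 0.5 * (d0 + d1) * dt
    return total

hours = [0, 1, 2, 3, 4]
balanced = integrated_difference(hours, [36.8] * 5, [36.8] * 5)   # same temperature
elevated = integrated_difference(hours, [37.4] * 5, [36.8] * 5)   # persistent +0.6 C
THRESHOLD = 1.0    # deg-C*hours, arbitrary for this sketch
flag = elevated > THRESHOLD
```

Integrating over time, rather than comparing instantaneous readings, is what makes the scheme insensitive to brief, symmetric temperature fluctuations.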
The United States Marine Corps Data Collaboration Requirements: Retrieving and Integrating Data from Multiple Databases (thesis)
Pamela J
2004-03-01
A rapid and reliable strategy for chromosomal integration of gene(s) with multiple copies
Gu, Pengfei; Yang, Fan; Su, Tianyuan; Wang, Qian; Liang, Quanfeng; Qi, Qingsheng
2015-01-01
Direct optimization of the metabolic pathways on the chromosome requires tools that can fine tune the overexpression of a desired gene or optimize the combination of multiple genes. Although plasmid-dependent overexpression has been used for this task, fundamental issues concerning its genetic stability and operational repeatability have not been addressed. Here, we describe a rapid and reliable strategy for chromosomal integration of gene(s) with multiple copies (CIGMC), which uses the flippase from the yeast 2-μm plasmid. Using green fluorescence protein as a model, we verified that the fluorescent intensity was in accordance with the integration copy number of the target gene. When a narrow-host-range replicon, R6K, was used in the integrative plasmid, the maximum integrated copy number of Escherichia coli reached 15. Applying the CIGMC method to optimize the overexpression of single or multiple genes in amino acid biosynthesis, we successfully improved the product yield and stability of the production. As a flexible strategy, CIGMC can be used in various microorganisms other than E. coli. PMID:25851494
Multiple tag labeling method for DNA sequencing
Mathies, R.A.; Huang, X.C.; Quesada, M.A.
1995-07-25
A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.
Multiple tag labeling method for DNA sequencing
Mathies, Richard A.; Huang, Xiaohua C.; Quesada, Mark A.
1995-01-01
A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in said lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels.
Students' Use of "Look Back" Strategies in Multiple Solution Methods
ERIC Educational Resources Information Center
Lee, Shin-Yi
2016-01-01
The purpose of this study was to investigate the relationship between both 9th-grade and 1st-year undergraduate students' use of "look back" strategies and problem solving performance in multiple solution methods, the difference in their use of look back strategies and problem solving performance in multiple solution methods, and the…
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.
2011-01-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses. PMID:21488652
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high-scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improved sensitivity in differential expression analyses.
Evaluation of Scheduling Methods for Multiple Runways
NASA Technical Reports Server (NTRS)
Bolender, Michael A.; Slater, G. L.
1996-01-01
Several scheduling strategies are analyzed in order to determine the most efficient means of scheduling aircraft when multiple runways are operational and the airport is operating at different utilization rates. The study compares simulation data for two and three runway scenarios to results from queuing theory for an M/D/n queue. The direction taken, however, is not to do a steady-state, or equilibrium, analysis since this is not the case during a rush period at a typical airport. Instead, a transient analysis of the delay per aircraft is performed. It is shown that the scheduling strategy that reduces the delay depends upon the density of the arrival traffic. For light traffic, scheduling aircraft to their preferred runways is sufficient; however, as the arrival rate increases, it becomes more important to separate traffic by weight class. Significant delay reduction is realized when aircraft that belong to the heavy and small weight classes are sent to separate runways with large aircraft put into the 'best' landing slot.
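The transient (non-equilibrium) analysis mentioned in the abstract above can be illustrated with a direct simulation of an M/D/n queue: Poisson arrivals, deterministic service, n servers, with per-customer delay recorded over a finite "rush period" rather than taken from a steady-state formula. All rates below are illustrative, not the study's airport data.

```python
import random

random.seed(2)
lam = 1.5          # arrivals per minute during the rush (Poisson process)
service = 1.2      # deterministic service time, minutes (the "D" in M/D/n)
n_servers = 2
horizon = 120.0    # length of the rush period, minutes

t, free_at = 0.0, [0.0] * n_servers   # next-free time for each server
delays = []
while True:
    t += random.expovariate(lam)      # exponential inter-arrival times
    if t > horizon:
        break
    k = min(range(n_servers), key=lambda i: free_at[i])  # earliest-free server
    start = max(t, free_at[k])
    delays.append(start - t)          # time this customer waits for a server
    free_at[k] = start + service

mean_delay = sum(delays) / len(delays)
```

With utilization lam * service / n_servers = 0.9, the queue never reaches equilibrium within a two-hour rush, which is exactly why the paper argues for transient rather than steady-state analysis.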
A Method for Comparing Completely Standardized Solutions in Multiple Groups.
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2000-01-01
Outlines a method for comparing completely standardized solutions in multiple groups. The method is based on a correlation structure analysis of equal-size samples and uses the correlation distribution theory implemented in the structural equation modeling program RAMONA. (SLD)
Enhanced performance for the interacting multiple model estimator with integrated multiple filters
NASA Astrophysics Data System (ADS)
Sabordo, Madeleine G.; Aboutanios, Elias
2015-05-01
In this paper, we propose a new approach to target visibility for the Interacting Multiple Model (IMM) algorithm. We introduce the IMM Integrated Multiple Filters (IMF) to selectively engage a filter appropriate for the gated clutter density at each time step and investigate five model sets that model the dynamic motion of a manoeuvring target. The model sets are incorporated into the IMM-IMF tracker to estimate the behaviour of the target. We employ the Dynamic Error Spectrum (DES) to assess the effectiveness of the tracker with the target visibility concept incorporated and to compare the performance of the model sets in enhancing tracking performance. Results show that the new version of target visibility significantly improves the performance of the tracker. Simulation results also demonstrate that the 2CV-CA-2CT model set is the most robust, at the cost of computational resources, while the CV-CA model is the fastest tracker but the least robust in terms of performance. These results assist decision makers and researchers in choosing appropriate models for IMM trackers. Augmenting the capability of the tracker improves the ability of the platform to identify possible threats and, consequently, enhances situational awareness.
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.
1990-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
Implicit integration methods for dislocation dynamics
NASA Astrophysics Data System (ADS)
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-03-01
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
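One ingredient of the abstract above, an implicit time step resolved by Newton's method, can be shown on a scalar stiff test problem. The sketch below uses the implicit trapezoidal rule (the second-order baseline the paper compares against) with a step size far beyond the explicit stability limit; the test equation and all constants are illustrative, not a dislocation dynamics force model.

```python
import math

# Stiff test problem y' = lam * (cos(t) - y); for large lam the solution
# hugs cos(t) after a fast initial transient.
lam = 1000.0
f = lambda t, y: lam * (math.cos(t) - y)
dfdy = -lam                     # Jacobian df/dy, constant for this problem

def trapezoidal_step(t, y, h, tol=1e-12):
    """Solve y_new = y + h/2 * (f(t, y) + f(t+h, y_new)) by Newton iteration."""
    y_new = y                   # initial guess
    for _ in range(50):
        g = y_new - y - 0.5 * h * (f(t, y) + f(t + h, y_new))
        dg = 1.0 - 0.5 * h * dfdy
        step = g / dg
        y_new -= step
        if abs(step) < tol:
            break
    return y_new

# h = 0.01 is five times the explicit-Euler stability limit 2/lam = 0.002,
# yet the implicit step remains stable and accurate.
t, y, h = 0.0, 0.0, 0.01
for _ in range(200):
    y = trapezoidal_step(t, y, h)
    t += h
err = abs(y - math.cos(t))      # solution should track cos(t) closely
```

Higher-order implicit Runge-Kutta methods, as explored in the paper, follow the same pattern but require one such nonlinear solve per stage.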
Collaborative Teaching of an Integrated Methods Course
ERIC Educational Resources Information Center
Zhou, George; Kim, Jinyoung; Kerekes, Judit
2011-01-01
With an increasing diversity in American schools, teachers need to be able to collaborate in teaching. University courses are widely considered as a stage to demonstrate or model the ways of collaboration. To respond to this call, three authors team taught an integrated methods course at an urban public university in the city of New York.…
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
Bioluminescent bioreporter integrated circuit detection methods
Simpson, Michael L.; Paulus, Michael J.; Sayler, Gary S.; Applegate, Bruce M.; Ripp, Steven A.
2005-06-14
Disclosed are monolithic bioelectronic devices comprising a bioreporter and an OASIC. These bioluminescent bioreporter integrated circuit are useful in detecting substances such as pollutants, explosives, and heavy-metals residing in inhospitable areas such as groundwater, industrial process vessels, and battlefields. Also disclosed are methods and apparatus for detection of particular analytes, including ammonia and estrogen compounds.
Impaired functional integration in multiple sclerosis: a graph theory study.
Rocca, Maria A; Valsasina, Paola; Meani, Alessandro; Falini, Andrea; Comi, Giancarlo; Filippi, Massimo
2016-01-01
The aim of this study was to explore the topological organization of functional brain network connectivity in a large cohort of multiple sclerosis (MS) patients and to assess whether its disruption contributes to disease clinical manifestations. Graph theoretical analysis was applied to resting state fMRI data from 246 MS patients and 55 matched healthy controls (HC). Functional connectivity between 116 cortical and subcortical brain regions was estimated using a bivariate correlation analysis. Global network properties (network degree, global efficiency, hierarchy, path length and assortativity) were abnormal in MS patients vs HC, and contributed to distinguishing cognitively impaired MS patients (34%) from HC, but not the main MS clinical phenotypes. Compared to HC, MS patients also showed: (1) a loss of hubs in the superior frontal gyrus, precuneus and anterior cingulum in the left hemisphere; (2) a different lateralization of basal ganglia hubs (mostly located in the left hemisphere in HC, and in the right hemisphere in MS patients); and (3) a formation of hubs, not seen in HC, in the left temporal pole and cerebellum. MS patients also experienced a decreased nodal degree in the bilateral caudate nucleus and right cerebellum. Such a modification of regional network properties contributed to the cognitive impairment and phenotypic variability of MS. An impairment of global integration (likely to reflect a reduced competence in information exchange between distant brain areas) occurs in MS and is associated with cognitive deficits. A regional redistribution of network properties contributes to the cognitive status and phenotypic variability of these patients.
Hamilton, Chris A; Hendrixson, Brent E; Brewer, Michael S; Bond, Jason E
2014-02-01
The North American tarantula genus Aphonopelma provides one of the greatest challenges to species delimitation and downstream identification in spiders because traditional morphological characters appear ineffective for evaluating limits of intra- and interspecific variation in the group. We evaluated the efficacy of numerous molecular-based approaches to species delimitation within Aphonopelma based upon the most extensive sampling of theraphosids to date, while also investigating the sensitivity of randomized taxon sampling on the reproducibility of species boundaries. Mitochondrial DNA (cytochrome c oxidase subunit I) sequences were sampled from 682 specimens spanning the genetic, taxonomic, and geographic breadth of the genus within the United States. To assess the effects of random taxon sampling, we compared traditional Neighbor-Joining with three modern quantitative species delimitation approaches (ABGD, P ID(Liberal), and GMYC). Our findings reveal remarkable consistency and congruence across various approaches and sampling regimes, while highlighting highly divergent outcomes in GMYC. Our investigation allowed us to integrate methodologies into an efficient, consistent, and more effective general methodological workflow for estimating species boundaries within the mygalomorph spider genus Aphonopelma. Taken alone, these approaches are not particularly useful, especially in the absence of prior knowledge of the focal taxa. Only through the incorporation of multiple lines of evidence, employed in a hypothesis-testing framework, can the identification and delimitation of confident species boundaries be determined. A key point in studying closely related species, and perhaps one of the most important aspects of DNA barcoding, is to combine a sampling strategy that broadly identifies the extent of genetic diversity across the distributions of the species of interest and incorporates previous knowledge into the "species equation" (morphology, molecules, and natural history).
Fidelity of the Integrated Force Method Solution
NASA Technical Reports Server (NTRS)
Hopkins, Dale; Halford, Gary; Coroneos, Rula; Patnaik, Surya
2002-01-01
The theory of strain compatibility of the solid mechanics discipline had remained incomplete since St. Venant's 'strain formulation' in 1876. We have addressed the compatibility condition both in the continuum and in the discrete system. This has led to the formulation of the Integrated Force Method. A dual Integrated Force Method with displacement as the primal variable has also been formulated. A modest finite element code (IFM/Analyzers) based on the IFM theory has been developed. For a set of standard test problems the IFM results were compared with the stiffness method solutions and the MSC/Nastran code. For these problems IFM outperformed the existing methods. Superior IFM performance is attributed to simultaneous compliance with the equilibrium equations and the compatibility conditions. The MSC/Nastran organization expressed reluctance to accept the high fidelity IFM solutions. This report discusses the solutions to the examples. No inaccuracy was detected in the IFM solutions. A stiffness method code can, with a small programming effort, be improved to reap the many IFM benefits when implemented with the IFMD elements. Dr. Halford conducted a peer review on the Integrated Force Method. The reviewers' response is included.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2012-05-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2013-10-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2008-06-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Belaineh, Getachew; Sumner, David; Carter, Edward; Clapp, David
2013-01-01
Potential evapotranspiration (PET) and reference evapotranspiration (RET) data are usually critical components of hydrologic analysis. Many different equations are available to estimate PET and RET. Most of these equations, such as the Priestley-Taylor and Penman-Monteith methods, rely on detailed meteorological data collected at ground-based weather stations. Few weather stations collect enough data to estimate PET or RET using one of the more complex evapotranspiration equations. Currently, satellite data integrated with ground meteorological data are used with one of these evapotranspiration equations to accurately estimate PET and RET. For periods before the last few decades, however, the historical reconstructions of PET and RET needed for many hydrologic analyses are limited by the paucity of satellite data and of some types of ground data. Air temperature stands out as the most generally available meteorological ground data type over the last century. Temperature-based approaches used with readily available historical temperature data offer the potential for long period-of-record PET and RET historical reconstructions. A challenge is the inconsistency between the more accurate, but more data intensive, methods appropriate for more recent periods and the less accurate, but less data intensive, methods appropriate to the more distant past. In this study, multiple methods are harmonized in a seamless reconstruction of historical PET and RET by quantifying and eliminating the biases of the simple Hargreaves-Samani method relative to the more complex and accurate Priestley-Taylor and Penman-Monteith methods. This harmonization process is used to generate long-term, internally consistent, spatiotemporal databases of PET and RET.
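The harmonization idea in the abstract above can be sketched as a simple bias correction: over an overlap period where both a temperature-based estimate and a reference estimate exist, quantify the bias of the simple method, then apply that correction to the earlier period where only the simple method is available. All numbers below are synthetic, and the multiplicative bias model is a deliberate simplification of the study's procedure.

```python
import statistics

# Overlap period: both methods available (mm/day, synthetic values).
hs_overlap = [3.0, 3.5, 4.0, 4.5]   # stand-in for simple Hargreaves-Samani estimates
pm_overlap = [3.6, 4.2, 4.8, 5.4]   # stand-in for reference Penman-Monteith estimates

# Multiplicative bias of the simple method relative to the reference.
bias = statistics.mean(p / h for p, h in zip(pm_overlap, hs_overlap))

# Earlier period: only the simple method can be computed from temperature data.
hs_historical = [2.5, 3.0, 3.2]
ret_reconstructed = [bias * v for v in hs_historical]
```

In practice such corrections are usually estimated seasonally (e.g. per month) and per location, so that the reconstructed series is internally consistent with the reference method wherever the two overlap.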
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
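The time-scale separation described above follows the standard reversible multiple-time-step (r-RESPA-style) pattern: the slowly varying force is applied as half-kicks around an inner loop that integrates the fast force with a smaller step. The sketch below uses toy harmonic forces as stand-ins for the paper's ab initio fast/slow splitting.

```python
# Two toy force components on one particle: a stiff spring (fast) and a
# weak spring (slow); constants are illustrative.
k_fast, k_slow, m = 100.0, 1.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x

def respa(x, v, dt_outer, n_inner, n_steps):
    """Outer half-kicks with the slow force around an inner velocity-Verlet
    loop that integrates the fast force with step dt_outer / n_inner."""
    dt_inner = dt_outer / n_inner
    for _ in range(n_steps):
        v += 0.5 * dt_outer * f_slow(x) / m      # half kick, slow force
        for _ in range(n_inner):                 # inner loop, fast force
            v += 0.5 * dt_inner * f_fast(x) / m
            x += dt_inner * v
            v += 0.5 * dt_inner * f_fast(x) / m
        v += 0.5 * dt_outer * f_slow(x) / m      # half kick, slow force
    return x, v

x, v = respa(1.0, 0.0, dt_outer=0.05, n_inner=10, n_steps=100)
# Total energy should stay near its initial value 0.5*(k_fast+k_slow) = 50.5.
energy = 0.5 * m * v * v + 0.5 * (k_fast + k_slow) * x * x
```

The payoff is that the expensive slow force is evaluated only once per outer step, which is where the speedups quoted in the abstract come from when the slow component dominates the cost.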
Method and apparatus for controlling multiple motors
Jones, Rollin G.; Kortegaard, Bert L.; Jones, David F.
1987-01-01
A method and apparatus are provided for simultaneously controlling a plurality of stepper motors. Addressing circuitry generates address data for each motor in a periodic address sequence. Memory circuits respond to the address data for each motor by accessing a corresponding memory location containing a first operational data set functionally related to a direction for moving the motor, speed data, and rate of speed change. First logic circuits respond to the first data set to generate a motor step command. Second logic circuits respond to the command from the first logic circuits to generate a third data set for replacing the first data set in memory with a current operational motor status, which becomes the first data set when the motor is next addressed.
Package for integrated optic circuit and method
Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.
1998-08-04
A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package. 6 figs.
Package for integrated optic circuit and method
Kravitz, Stanley H.; Hadley, G. Ronald; Warren, Mial E.; Carson, Richard F.; Armendariz, Marcelino G.
1998-01-01
A structure and method for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package.
A Method for Obtaining Integrable Couplings
NASA Astrophysics Data System (ADS)
Zhang, Yu-Sen; Chen, Wei; Liao, Bo; Gong, Xin-Bo
2006-06-01
By making use of the vector product in R3, a commuting operation is introduced so that R3 becomes a Lie algebra. The resulting loop algebra tilde-R3 is presented, from which the well-known AKNS hierarchy is produced. Again, via applying the superposition of the commuting operations of the Lie algebra, a commuting operation in R6 is constructed so that R6 becomes a Lie algebra. Thanks to the corresponding loop algebra tilde-R6 of the Lie algebra R6, the integrable coupling of the AKNS system is obtained. The method presented in this paper is rather simple and can be used to work out integrable coupling systems of the other known integrable hierarchies of soliton equations.
Recursive integral method for transmission eigenvalues
NASA Astrophysics Data System (ADS)
Huang, Ruihao; Struthers, Allan A.; Sun, Jiguang; Zhang, Ruming
2016-12-01
Transmission eigenvalue problems arise from inverse scattering theory for inhomogeneous media. These non-selfadjoint problems are numerically challenging because of a complicated spectrum. In this paper, we propose a novel recursive contour integral method for matrix eigenvalue problems from finite element discretizations of transmission eigenvalue problems. The technique tests (using an approximate spectral projection) if a region contains eigenvalues. Regions that contain eigenvalues are subdivided and tested recursively until eigenvalues are isolated with a specified precision. The method is fully parallel and requires no a priori spectral information. Numerical examples show the method is effective and robust.
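The recursive contour test is easy to sketch for a plain matrix eigenproblem. The toy below is an illustrative simplification, not the authors' implementation: it uses a dense resolvent, covers a real search interval with circles, and applies a trapezoid rule on each contour. A circle's eigenvalue count is the contour integral of the resolvent trace; regions that test positive are bisected until each eigenvalue is isolated.

```python
import numpy as np

def eig_count(A, center, radius, n_quad=64):
    """Approximate (1/(2*pi*i)) * contour integral of tr((zI - A)^-1) dz
    over a circle, i.e. the number of eigenvalues enclosed."""
    n = A.shape[0]
    acc = 0.0 + 0.0j
    for t in 2.0 * np.pi * np.arange(n_quad) / n_quad:
        z = center + radius * np.exp(1j * t)
        acc += np.trace(np.linalg.inv(z * np.eye(n) - A)) * np.exp(1j * t)
    return ((radius / n_quad) * acc).real

def isolate(A, a, b, tol=1e-6):
    """Recursively bisect [a, b] (one circle per subinterval) until every
    region that tests positive has width below tol."""
    c, r = 0.5 * (a + b), 0.5 * (b - a)
    m = int(round(eig_count(A, c, r)))
    if m == 0:
        return []
    if 2.0 * r < tol:
        return [c] * m                # m eigenvalues isolated near c
    return isolate(A, a, c, tol) + isolate(A, c, b, tol)

A = np.diag([1.0, 2.0, 4.0])          # known spectrum, for the demo only
roots = isolate(A, 0.0, 5.0, tol=1e-5)
```

Because the integrand is periodic, the trapezoid rule converges exponentially once no eigenvalue sits close to a contour; an eigenvalue landing exactly on a subdivision boundary would need a small perturbation that this toy omits.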
Relationship between Multiple Regression and Selected Multivariable Methods.
ERIC Educational Resources Information Center
Schumacker, Randall E.
The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…
A parallel multiple path tracing method based on OptiX for infrared image generation
NASA Astrophysics Data System (ADS)
Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu
2015-12-01
Infrared image generation technology is being widely used in infrared imaging system performance evaluation, battlefield environment simulation and military personnel training, which require a more physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX was proposed to solve the problem, which can not only increase computational efficiency compared to serial ray tracing using CPU, but also produce relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed and thus a multiple path tracing method based on OptiX was developed. Furthermore, the Monte Carlo integration was employed to solve the radiation transfer equation, in which the importance sampling method was applied to accelerate the integral convergent rate. After that, the framework of the simulation platform and its sensor effects simulation diagram were given. Finally, the results showed that the method could generate relatively accurate radiation images if a precise importance sampling method was available.
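The variance payoff of importance sampling is easy to see on a classic toy case of the rendering integral: irradiance under constant incoming radiance, E = ∫ L cosθ dω = πL over the hemisphere. The sketch below is generic Monte Carlo, not the paper's OptiX pipeline, and the constant-radiance scene is an assumption for the demo; it compares uniform hemisphere sampling with cosine-weighted sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N = 1.0, 20000        # constant incoming radiance and sample count (assumed)

# Uniform hemisphere sampling: pdf = 1/(2*pi); cos(theta) is uniform on (0, 1]
cos_u = 1.0 - rng.random(N)
est_uniform = np.mean(L * cos_u * 2.0 * np.pi)

# Cosine-weighted importance sampling: pdf = cos(theta)/pi, so every sample
# contributes L*cos/pdf = pi*L exactly -- zero variance for this flat scene
cos_c = np.sqrt(1.0 - rng.random(N))
est_cosine = np.mean(L * cos_c * np.pi / cos_c)
```

Both estimators target πL ≈ 3.1416; the uniform one fluctuates around it, while the cosine-weighted one hits it to machine precision, which is the "precise importance sampling" effect the abstract refers to in miniature.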
Multistep Methods for Integrating the Solar System
1988-07-01
Technical Report 1055: Multistep Methods for Integrating the Solar System. Panayotis A. Skordos, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. Describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, supported by the Advanced Research Projects
Generating nonlinear FM chirp radar signals by multiple integrations
Doerry, Armin W [Albuquerque, NM
2011-02-01
A phase component of a nonlinear frequency modulated (NLFM) chirp radar pulse can be produced by performing digital integration operations over a time interval defined by the pulse width. Each digital integration operation includes applying to a respectively corresponding input parameter value a respectively corresponding number of instances of digital integration.
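The cascaded digital integration can be sketched with cumulative sums: integrate a chirp-rate profile once to get instantaneous frequency, and again to get phase. The sample rate, pulse width, and constant-rate profile below are assumptions for the demo, not the patent's parameters; a constant rate simply reproduces an ordinary linear-FM chirp, which makes a convenient sanity check against the closed form phase(t) ≈ pi*k*t^2.

```python
import numpy as np

fs = 1.0e6                         # sample rate, Hz (assumed)
T = 1.0e-3                         # pulse width, s (assumed)
t = np.arange(int(T * fs)) / fs
rate = np.full(t.shape, 5.0e9)     # chirp-rate profile, Hz/s (assumed constant)

freq = np.cumsum(rate) / fs                   # 1st integration: frequency, Hz
phase = 2.0 * np.pi * np.cumsum(freq) / fs    # 2nd integration: phase, rad
pulse = np.exp(1j * phase)                    # complex baseband chirp pulse
```

Replacing `rate` with a time-varying profile yields a nonlinear FM chirp from the same two integrations, which is the essence of the claimed method.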
NASA Astrophysics Data System (ADS)
Ochman, M.; Riemann, T.
Feynman integrals may be represented by the Mathematica packages AMBRE and MB as multiple Mellin-Barnes integrals. With the Mathematica package MBsums these Mellin-Barnes integrals are transformed into multiple sums.
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European-style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; then the average of independent samples of this random variable is used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with the Sobol sequence, one of the best low-discrepancy sequences, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is of interest to know which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
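A minimal sketch of the comparison for a one-dimensional European call: map low-discrepancy or pseudorandom uniforms through the inverse normal CDF, average discounted payoffs, and check against the closed-form Black-Scholes price. The parameter values and the base-2 van der Corput sequence (standing in for the paper's Sobol points) are assumptions for the demo.

```python
import numpy as np
from statistics import NormalDist

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # example parameters (assumed)

def bs_call():
    """Closed-form Black-Scholes call price, used as the reference value."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    nd = NormalDist().cdf
    return S0 * nd(d1) - K * np.exp(-r * T) * nd(d2)

def mc_price(u):
    """Average discounted payoff over uniforms u via inverse-CDF sampling."""
    inv = NormalDist().inv_cdf
    z = np.array([inv(min(max(x, 1e-12), 1.0 - 1e-12)) for x in u])
    s_t = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(s_t - K, 0.0).mean()

def van_der_corput(n):
    """Base-2 radical inverse: a simple 1-D low-discrepancy sequence."""
    out = np.empty(n)
    for i in range(n):
        f, base, k = 0.0, 0.5, i + 1
        while k:
            f += base * (k & 1)
            k >>= 1
            base *= 0.5
        out[i] = f
    return out

n = 1 << 14
crude = mc_price(np.random.default_rng(0).random(n))   # pseudorandom uniforms
qmc = mc_price(van_der_corput(n))                      # low-discrepancy uniforms
```

With the same number of samples the low-discrepancy estimate typically lands much closer to the closed-form price than the crude one, which is the effect the paper quantifies with Sobol points and lattice rules.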
Multiple Methods: Research Methods in Education Projects at NSF
ERIC Educational Resources Information Center
Suter, Larry E.
2005-01-01
Projects on science and mathematics education research supported by the National Science Foundation (US government) rarely employ a single method of study. Studies of educational practices that use experimental design are very rare. The most common research method is the case study method and the second most common is some form of experimental…
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication and the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of making quantitative conductivity image reconstruction effectively for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples have been demonstrated to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
Thermoelastic analysis of multiple defects with the extended finite element method
NASA Astrophysics Data System (ADS)
Jia, Honggang; Nie, Yufeng
2016-12-01
In this paper, the extended finite element method (XFEM) is adopted to analyze the interaction between a single macroscopic inclusion and a single macroscopic crack, as well as that between multiple macroscopic or microscopic defects, under thermal/mechanical load. The effects of different shapes of multiple inclusions on the material thermomechanical response are investigated, and the level set method is coupled with XFEM to analyze the interaction of multiple defects. Further, the discretized extended finite element approximations for thermoelastic problems of multiple defects under displacement or temperature fields are given. Also, the interfaces of cracks or materials are represented by level set functions, which allows the mesh to be generated without conforming to crack or material interfaces. Moreover, stress intensity factors of cracks are obtained by the interaction integral method or the M-integral method, and the stress/strain/stiffness fields are simulated in the case of multiple cracks or multiple inclusions. Finally, some numerical examples are provided to demonstrate the accuracy of our proposed method.
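The level-set bookkeeping behind "the mesh need not conform" can be shown in a toy form (the circular inclusion, grid, and sizes below are invented for the demo, and real XFEM adds enrichment functions on top): the interface is a signed-distance function, and any element whose nodal level-set values change sign is flagged as cut by the interface.

```python
import numpy as np

# Signed-distance level set of a circular inclusion (center and radius assumed)
xc, yc, R = 0.5, 0.5, 0.25
phi = lambda x, y: np.hypot(x - xc, y - yc) - R

# Uniform quad grid that does NOT conform to the interface
nx = ny = 10
xs = np.linspace(0.0, 1.0, nx + 1)
ys = np.linspace(0.0, 1.0, ny + 1)
node_phi = phi(xs[None, :], ys[:, None])    # level set sampled at every node

cut = []                                    # elements crossed by the interface
for j in range(ny):
    for i in range(nx):
        corners = node_phi[j:j + 2, i:i + 2]
        if corners.min() < 0.0 < corners.max():   # sign change => cut element
            cut.append((i, j))
```

In XFEM these flagged elements are the ones that receive enrichment degrees of freedom, so the background mesh itself never has to follow the inclusion or crack geometry.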
ERIC Educational Resources Information Center
Lee, Hee-Sun; Liu, Ou Lydia; Linn, Marcia C.
2011-01-01
This study explores measurement of a construct called knowledge integration in science using multiple-choice and explanation items. We use construct and instructional validity evidence to examine the roles multiple-choice and explanation items play in measuring students' knowledge integration ability. For construct validity, we analyze item…
Information Integration in Multiple Cue Judgment: A Division of Labor Hypothesis
ERIC Educational Resources Information Center
Juslin, Peter; Karlsson, Linnea; Olsson, Henrik
2008-01-01
There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations.…
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable in improving disease gene identification. It is not only due to the fact that disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also due to the fact that gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and each leave-one-out experiment takes only about 12.54 seconds on average. This is better than many existing algorithms. PMID:26399620
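The scoring backbone, a binary logistic regression over network-derived feature vectors whose fitted probabilities play the role of per-gene posteriors, can be sketched on synthetic data. The features, labels, gradient-ascent fit, and rank-based AUC below are invented for illustration and are not the paper's F2/F3 construction.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "network proximity" features: disease genes (y = 1) score higher
n_genes = 400
y = (rng.random(n_genes) < 0.25).astype(float)
X = rng.normal(loc=y[:, None], scale=1.0, size=(n_genes, 2))
X = np.hstack([np.ones((n_genes, 1)), X])        # intercept column

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Plain gradient ascent on the logistic log-likelihood
w = np.zeros(3)
for _ in range(2000):
    p = sigmoid(X @ w)
    w += 0.01 * X.T @ (y - p) / n_genes

post = sigmoid(X @ w)    # fitted probability ~ candidate-gene posterior score

def auc(scores, labels):
    """Rank-based AUC: P(score of a positive > score of a negative)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Ranking candidate genes by `post` and summarizing with AUC mirrors the leave-one-out evaluation the abstract reports, just without the biological networks.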
Cardiac power integral: a new method for monitoring cardiovascular performance.
Rimehaug, Audun E; Lyng, Oddveig; Nordhaug, Dag O; Løvstakken, Lasse; Aadahl, Petter; Kirkeby-Garstad, Idar
2013-11-01
Cardiac power (PWR) is the continuous product of flow and pressure in the proximal aorta. Our aim was to validate the PWR integral as a marker of left ventricular energy transfer to the aorta, by comparing it to stroke work (SW) under multiple different loading and contractility conditions in subjects without obstructions in the left ventricular outflow tract. Six pigs under general anesthesia were equipped with transit-time flow probes on their proximal aortas and Millar micromanometer catheters in their descending aortas to measure PWR, and Leycom conductance catheters in their left ventricles to measure SW. The PWR integral was calculated as the time integral of PWR per cardiac cycle. SW was calculated as the area encompassed by the pressure-volume loop (PV loop). The relationship between the PWR integral and SW was tested during extensive mechanical and pharmacological interventions that affected the loading conditions and myocardial contractility. The PWR integral displayed a strong correlation with SW in all pigs (R^2 > 0.95, P < 0.05) under all conditions, using a linear model. Regression analysis and Bland-Altman plots also demonstrated a stable relationship. A mixed linear analysis indicated that the slope of the SW-to-PWR-integral relationship was similar among all six animals, whereas loading and contractility conditions tended to affect the slope. The PWR integral followed SW and appeared to be a promising parameter for monitoring the energy transferred from the left ventricle to the aorta. This conclusion motivates further studies to determine whether the PWR integral can be evaluated using less invasive methods, such as echocardiography combined with a radial artery catheter.
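The definition reduces to simple numerics: sample aortic pressure and flow over one beat, multiply pointwise to get instantaneous power, and integrate over the cycle. The sketch below uses a trapezoidal rule and synthetic waveforms (illustrative shapes and magnitudes, not the porcine measurements).

```python
import numpy as np

def pwr_integral(t, pressure, flow):
    """Time integral of instantaneous power PWR(t) = pressure(t) * flow(t)
    over one cardiac cycle (joules when inputs are Pa, m^3/s, and s)."""
    p = pressure * flow
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t)))   # trapezoid rule

# Synthetic one-beat waveforms (assumed for the demo, not measured data)
t = np.linspace(0.0, 0.8, 801)                                # 0.8 s cycle
pressure = 12000.0 + 4000.0 * np.sin(2 * np.pi * t / 0.8)     # Pa (~90-120 mmHg)
flow = np.maximum(0.0, 4e-4 * np.sin(2 * np.pi * t / 0.8))    # m^3/s, systole only
energy = pwr_integral(t, pressure, flow)                      # J per beat
```

For these waveforms the integral evaluates in closed form to about 1.54 J per beat, so the numeric routine is easy to sanity-check before pointing it at real pressure and flow traces.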
Integrating stakeholder values with multiple attributes to quantify watershed performance
NASA Astrophysics Data System (ADS)
Shriver, Deborah M.; Randhir, Timothy O.
2006-08-01
Integrating stakeholder values into the process of quantifying impairment of ecosystem functions is an important aspect of watershed assessment and planning. This study develops a classification and prioritization model to assess potential impairment in watersheds. A systematic evaluation of a broad set of abiotic, biotic, and human indicators of watershed structure and function was used to identify the level of degradation at a subbasin scale. Agencies and communities can use the method to effectively target and allocate resources to areas of greatest restoration need. The watershed performance measure (WPM) developed in this study is composed of three major components: (1) hydrologic processes (water quantity and quality), (2) biodiversity at a species scale (core and priority habitat for rare and endangered species and species richness) and landscape scale (impacts of fragmentation), and (3) urban impacts as assessed in the built environment (effective impervious area) and population effects (densities and density of toxic waste sites). Simulation modeling using the Soil and Water Assessment Tool (SWAT), monitoring information, and spatial analysis with GIS were used to assess each criterion in developing this model. Weights for attributes of potential impairment were determined through the use of the attribute prioritization procedure with a panel of expert stakeholders. This procedure uses preselected attributes and corresponding stakeholder values and is data intensive. The model was applied to all subbasins of the Chicopee River Watershed of western Massachusetts, an area with a mixture of rural, heavily forested lands, suburban, and urbanized areas. Highly impaired subbasins in one community were identified using this methodology and evaluated for principal forms of degradation and potential restoration policies and BMPs. This attribute-based prioritization method could be used in identifying baselines, prioritization policies, and adaptive community
Integration Strategies for Learners with Severe Multiple Disabilities.
ERIC Educational Resources Information Center
Eichinger, Joanne; Woltman, Sheila
1993-01-01
This article reports the experiences of one school district as it moved from serving students with severe disabilities in segregated programs to a full inclusion model. Year one focused on getting started, planning, and beginning integration efforts and year two on implementation of a structured peer integration program. Applicability of the full…
Integrated Force Method for Indeterminate Structures
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Halford, Gary R.; Patnaik, Surya N.
2008-01-01
Two methods of solving indeterminate structural-mechanics problems have been developed as products of research on the theory of strain compatibility. In these methods, stresses are considered to be the primary unknowns (in contrast to strains and displacements being considered as the primary unknowns in some prior methods). One of these methods, denoted the integrated force method (IFM), makes it possible to compute stresses, strains, and displacements with high fidelity by use of modest finite-element models that entail relatively small amounts of computation. The other method, denoted the completed Beltrami-Michell formulation (CBMF), enables direct determination of stresses in an elastic continuum with general boundary conditions, without the need to first calculate displacements as in traditional methods. The equilibrium equation, the compatibility condition, and the material law are the three fundamental concepts of the theory of structures. For almost 150 years, it has been commonly supposed that the theory is complete. However, until now, the understanding of the compatibility condition remained incomplete, and the compatibility condition was confused with the continuity condition. Furthermore, the compatibility condition as applied to structures in its previous incomplete form was inconsistent with the strain formulation in elasticity.
Methods of Genomic Competency Integration in Practice
Jenkins, Jean; Calzone, Kathleen A.; Caskey, Sarah; Culp, Stacey; Weiner, Marsha; Badzek, Laurie
2015-01-01
Purpose Genomics is increasingly relevant to health care, necessitating support for nurses to incorporate genomic competencies into practice. The primary aim of this project was to develop, implement, and evaluate a year-long genomic education intervention that trained, supported, and supervised institutional administrator and educator champion dyads to increase nursing capacity to integrate genomics through assessments of program satisfaction and institutional achieved outcomes. Design Longitudinal study of 23 Magnet Recognition Program® Hospitals (21 intervention, 2 controls) participating in a 1-year new competency integration effort aimed at increasing genomic nursing competency and overcoming barriers to genomics integration in practice. Methods Champion dyads underwent genomic training consisting of one in-person kick-off training meeting followed by monthly education webinars. Champion dyads designed institution-specific action plans detailing objectives, methods or strategies used to engage and educate nursing staff, timeline for implementation, and outcomes achieved. Action plans focused on a minimum of seven genomic priority areas: champion dyad personal development; practice assessment; policy content assessment; staff knowledge needs assessment; staff development; plans for integration; and anticipated obstacles and challenges. Action plans were updated quarterly, outlining progress made as well as inclusion of new methods or strategies. Progress was validated through virtual site visits with the champion dyads and chief nursing officers. Descriptive data were collected on all strategies or methods utilized, and timeline for achievement. Descriptive data were analyzed using content analysis. Findings The complexity of the competency content and the uniqueness of social systems and infrastructure resulted in a significant variation of champion dyad interventions. Conclusions Nursing champions can facilitate change in genomic nursing capacity through
Curran, Patrick J.; Hussong, Andrea M.; Cai, Li; Huang, Wenjing; Chassin, Laurie; Sher, Kenneth J.; Zucker, Robert A.
2010-01-01
There are a number of significant challenges encountered when studying development over an extended period of time, including subject attrition, changing measurement structures across groups and developmental periods, and the need to invest substantial time and money. Integrative data analysis is an emerging set of methodologies that overcomes many of the challenges of single sample designs through the pooling of data drawn from multiple existing developmental studies. This approach is characterized by a host of advantages, but this also introduces several new complexities that must be addressed prior to broad adoption by developmental researchers. In this paper we focus on methods for fitting measurement models and creating scale scores using data drawn from multiple longitudinal studies. We present findings from the analysis of repeated measures of internalizing symptomatology that were pooled from three existing developmental studies. We describe and demonstrate each step in the analysis and we conclude with a discussion of potential limitations and directions for future research. PMID:18331129
Curriculum Integration in Arts Education: Connecting Multiple Art Forms through the Idea of "Space"
ERIC Educational Resources Information Center
Bautista, Alfredo; Tan, Liang See; Ponnusamy, Letchmi Devi; Yau, Xenia
2016-01-01
Arts integration research has focused on documenting how the teaching of specific art forms can be integrated with "core" academic subject matters (e.g. science, mathematics and literacy). However, the question of how the teaching of multiple art forms themselves can be integrated in schools remains to be explored by educational…
Solution methods for very highly integrated circuits.
Nong, Ryan; Thornquist, Heidi K.; Chen, Yao; Mei, Ting; Santarelli, Keith R.; Tuminaro, Raymond Stephen
2010-12-01
While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds and a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which has not been proven before. Progress on preconditioning power grid systems using multi-grid techniques will be presented as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator
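One family of MOR techniques for linear time-invariant systems is projection onto a Krylov subspace, which matches moments of the transfer function about an expansion point. The sketch below is a generic one-sided (Galerkin) projection; the random system, expansion point, and order are invented for the demo, and production circuit MOR adds the structure preservation, passivity, and error bounds discussed in the report.

```python
import numpy as np

def krylov_mor(A, b, c, s0, k):
    """Galerkin projection of x' = A x + b u, y = c x onto the order-k
    Krylov space of (s0*I - A)^{-1} b; matches k moments of H(s) at s0."""
    n = len(b)
    M = np.linalg.solve(s0 * np.eye(n) - A, np.eye(n))
    K = np.empty((n, k))
    K[:, 0] = M @ b
    for j in range(1, k):
        K[:, j] = M @ K[:, j - 1]
    V, _ = np.linalg.qr(K)                    # orthonormal basis of the space
    return V.T @ A @ V, V.T @ b, c @ V        # reduced (Ar, br, cr)

def tf(A, b, c, s):
    """Transfer function H(s) = c (s I - A)^{-1} b."""
    return c @ np.linalg.solve(s * np.eye(len(b)) - A, b)

rng = np.random.default_rng(3)
n = 60
A = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # loosely stable system
b, c = rng.standard_normal(n), rng.standard_normal(n)
Ar, br, cr = krylov_mor(A, b, c, s0=1.0, k=8)
```

The order-8 macromodel reproduces the order-60 transfer function essentially exactly at the expansion point and closely nearby, which is the speedup-versus-fidelity trade the report's techniques refine.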
Wang, Jinlian; Zuo, Yiming; Liu, Lun; Man, Yangao; Tadesse, Mahlet G.; Ressom, Habtom W
2014-01-01
Background Prediction of functional modules is indispensable for detecting protein deregulation in human complex diseases such as cancer. Bayesian network (BN) is one of the most commonly used models to integrate heterogeneous data from multiple sources such as protein domain, interactome, functional annotation, genome-wide gene expression, and the literature. Methods and Results In this paper, we present a BN classifier that is customized to: 1) increase the ability to integrate diverse information from different sources, 2) effectively predict protein-protein interactions, 3) infer aberrant networks with scale-free and small world properties, and 4) group molecules into functional modules or pathways based on the primary function and biological features. Application of this model on discovering protein biomarkers of hepatocelluar carcinoma (HCC) leads to the identification of functional modules that provide insights into the mechanism of the development and progression of HCC. These functional modules include cell cycle deregulation, increased angiogenesis (e.g., vascular endothelial growth factor, blood vessel morphogenesis), oxidative metabolic alterations, and aberrant activation of signaling pathways involved in cellular proliferation, survival, and differentiation. Conclusion The discoveries and conclusions derived from our customized BN classifier are consistent with previously published results. The proposed approach for determining BN structure facilitates the integration of heterogeneous data from multiple sources to elucidate the mechanisms of complex diseases. PMID:24736851
Manservisi, Fabiana; Marquillas, Clara Babot; Buscaroli, Annalisa; Huff, James; Lauriola, Michelina; Mandrioli, Daniele; Manservigi, Marco; Panzacchi, Simona; Silbergeld, Ellen K.; Belpoggi, Fiorella
2016-01-01
Background: For nearly five decades long-term studies in rodents have been the accepted benchmark for assessing chronic long-term toxic effects, particularly carcinogenicity, of chemicals. The European Food Safety Authority (EFSA) and the World Health Organization (WHO) have pointed out that the current set of internationally utilized test methods capture only some of the potential adverse effects associated with exposures to these agents over the lifetime. Objectives: In this paper, we propose the adaption of the carcinogenicity bioassay to integrate additional protocols for comprehensive long-term toxicity assessment that includes developmental exposures and long-term outcomes, capable of generating information on a broad spectrum of different end points. Discussion: An integrated study design based on a stepwise process is described that includes the priority end points of the Economic Co-operation and Development and the National Toxicology Program guidelines on carcinogenicity and chronic toxicity and developmental and reproductive toxicity. Integrating a comprehensive set of relevant toxicological end points in a single protocol represents an opportunity to optimize animal use in accordance with the 3Rs (replacement, reduction and refinement). This strategy has the potential to provide sufficient data on multiple windows of susceptibility of specific interest for risk assessments and public health decision-making by including prenatal, lactational, neonatal exposures and evaluating outcomes over the lifespan. Conclusion: This integrated study design is efficient in that the same generational cohort of rats used for evaluating long-term outcomes can be monitored in satellite parallel experiments to measure biomarkers and other parameters related to system-specific responses including metabolic alterations and endocrine disturbances. Citation: Manservisi F, Babot Marquillas C, Buscaroli A, Huff J, Lauriola M, Mandrioli D, Manservigi M, Panzacchi S, Silbergeld
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the
2013-01-01
Background Immunohistochemistry (IHC) is a well-established method for the analysis of protein expression in tissue specimens and constitutes one of the most common methods performed in pathology laboratories worldwide. However, IHC is a multi-layered method based on subjective estimations, and differences in staining and interpretation have been observed between facilities, suggesting that the analysis of proteins in tissue would benefit from protocol optimization and standardization. Here we describe how the emerging and operator-independent tool of real-time immunohistochemistry (RT-IHC) reveals a time-resolved description of antibody interacting with target protein in formalin-fixed paraffin-embedded tissue. The aim was to understand the technical aspects of RT-IHC with regard to generalization of the concept and the extent to which it can be considered a quantitative method. Results Three different antibodies labeled with fluorescent or radioactive labels were applied to nine different tissue samples from either human or mouse, and the results of all RT-IHC analyses distinctly show that the method is generally applicable. The collected binding curves showed that the majority of the antibody-antigen interactions did not reach equilibrium within 3 hours, suggesting that standardized protocols for immunohistochemistry are sometimes inadequately optimized. The impact of tissue size and thickness, as well as the position of the section on the glass petri dish, was assessed so that practical details could be further elucidated for this emerging technique. Size and location were found to affect signal magnitude to a larger extent than thickness, but the signal from all measurements was still sufficient to trace the curvature. The curvature, representing the kinetics of the interaction, was independent of thickness, size and position and may be a promising parameter for the evaluation of e.g. biopsy sections of different sizes. Conclusions It was found that RT-IHC can be used
ERIC Educational Resources Information Center
Crawford, Carrie L.
1990-01-01
Reviews literature on hypnosis, imagery, and metaphor as applied to the treatment and integration of those with multiple personality disorder (MPD) and dissociative states. Considers diagnostic criteria of MPD; explores current theories of etiology and treatment; and suggests specific examples of various clinical methods of treatment using…
Integrative Data Analysis: The Simultaneous Analysis of Multiple Data Sets
ERIC Educational Resources Information Center
Curran, Patrick J.; Hussong, Andrea M.
2009-01-01
There are both quantitative and methodological techniques that foster the development and maintenance of a cumulative knowledge base within the psychological sciences. Most noteworthy of these techniques is meta-analysis, which allows for the synthesis of summary statistics drawn from multiple studies when the original data are not available.…
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Method for measuring multiple scattering corrections between liquid scintillators
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-07-01
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Balliu, Brunilda; Tsonaka, Roula; Boehringer, Stefan; Houwing-Duistermaat, Jeanine
2015-03-01
Integrative omics, the joint analysis of outcome and multiple types of omics data, such as genomics, epigenomics, and transcriptomics data, constitutes a promising approach for powerful and biologically relevant association studies. These studies often employ a case-control design, and often include nonomics covariates, such as age and gender, that may modify the underlying omics risk factors. An open question is how to best integrate multiple omics and nonomics information to maximize statistical power in case-control studies that ascertain individuals based on the phenotype. Recent work on integrative omics has used prospective approaches, modeling case-control status conditional on omics and nonomics risk factors. Compared to univariate approaches, jointly analyzing multiple risk factors with a prospective approach increases power in nonascertained cohorts. However, these prospective approaches often lose power in case-control studies. In this article, we propose a novel statistical method for integrating multiple omics and nonomics factors in case-control association studies. Our method is based on a retrospective likelihood function that models the joint distribution of omics and nonomics factors conditional on case-control status. The new method provides accurate control of Type I error rate and has increased efficiency over prospective approaches in both simulated and real data.
Zhao, Dong; Su, Baiquan; Chen, Guowen; Liao, Hongen
2015-04-20
In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees using integral photography (IP) and multiple semitransparent mirrors. IP combined with polyhedron-shaped multiple semitransparent mirrors is used to achieve a 360 degree viewable floating three-dimensional (3D) autostereoscopic display, having the advantage of being able to be viewed by several observers from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax property. Multiple semitransparent mirrors reflect corresponding IP images, and the reflected IP images are situated around the center of the polyhedron-shaped display device for producing the floating display. The spatial reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes for producing such displays and performed two sets of experiments to evaluate the feasibility of the method described above. The results of our experiments showed that our approach can achieve a floating autostereoscopic display viewable from the surrounding area. Moreover, it is shown that the proposed method can facilitate the continuous viewpoint of a whole 360 degree display without flipping.
Parallel methods for dynamic simulation of multiple manipulator systems
NASA Technical Reports Server (NTRS)
Mcmillan, Scott; Sadayappan, P.; Orin, David E.
1993-01-01
In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with a discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.
New compensation method for bulk optical sensors with multiple birefringences.
Lee, K S
1989-06-01
The dielectric tensor of an anisotropic crystal is presented in a form that includes the effects of multiple perturbations. To study electromagnetic wave propagation in anisotropic crystals subject to various influences, the perturbed dielectric tensor is substituted into Maxwell's equation. Then, a 2 x 2 transmission matrix formalism, based on a normal-mode approach, is extended to anisotropic crystals possessing multiple birefringences to develop compensation schemes for ac optical sensors employing the crystal. It is shown that a new compensation method utilizing two analyzers can eliminate the effects of both unwanted linear birefringences and unwanted circular birefringences on the stability of the ac bulk polarimetric optical sensor. The conditions (here referred to as the quenching condition) in which the compensation method becomes important are also derived for both the voltage (or electric field) and current (or magnetic field) sensors.
Identifying multiple submissions in Internet research: preserving data integrity.
Bowen, Anne M; Daniel, Candice M; Williams, Mark L; Baird, Grayson L
2008-11-01
Internet-based sexuality research with hidden populations has become increasingly popular. Respondent anonymity may encourage participation and lower social desirability, but associated disinhibition may promote multiple submissions, especially when incentives are offered. The goal of this study was to identify the usefulness of different variables for detecting multiple submissions from repeat responders and to explore incentive effects. The data included 1,900 submissions from a three-session Internet intervention with a pretest and three post-test questionnaires. Participants were men who have sex with men and incentives were offered to rural participants for completing each questionnaire. The final number of submissions included 1,273 "unique", 132 first submissions by "repeat responders" and 495 additional submissions by the "repeat responders" (N = 1,900). Four categories of repeat responders were identified: "infrequent" (2-5 submissions), "persistent" (6-10 submissions), "very persistent" (11-30 submissions), and "hackers" (more than 30 submissions). Internet Protocol (IP) addresses, user names, and passwords were the most useful for identifying "infrequent" repeat responders. "Hackers" often varied their IP address and identifying information to prevent easy identification, but investigating the data for small variations in IP, using reverse telephone lookup, and patterns across usernames and passwords were helpful. Incentives appeared to play a role in stimulating multiple submissions, especially from the more sophisticated "hackers". Finally, the web is ever evolving and it will be necessary to have good programmers and staff who evolve as fast as "hackers".
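The count-based categories above translate directly into a screening pass keyed on IP address. The sketch below is a minimal illustration with hypothetical field and function names; as the study notes, robust screening must also compare usernames, passwords, and near-miss IP variations:

```python
from collections import Counter

def classify_repeaters(submissions):
    """Group submissions by IP address and label each IP with the
    study's count-based repeat-responder categories."""
    counts = Counter(s["ip"] for s in submissions)
    labels = {}
    for ip, n in counts.items():
        if n == 1:
            labels[ip] = "unique"
        elif n <= 5:
            labels[ip] = "infrequent"
        elif n <= 10:
            labels[ip] = "persistent"
        elif n <= 30:
            labels[ip] = "very persistent"
        else:
            labels[ip] = "hacker"
    return labels
```

Exact-match grouping like this only catches the "infrequent" end of the spectrum; the more sophisticated repeat responders described in the abstract defeat it by rotating identifiers.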
Satellite attitude prediction by multiple time scales method
NASA Technical Reports Server (NTRS)
Tao, Y. C.; Ramnath, R.
1975-01-01
An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, because the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this separation is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth pointing dual spin satellite, is designed in terms of the slow equation.
Multiple integral representation for the trigonometric SOS model with domain wall boundaries
NASA Astrophysics Data System (ADS)
Galleas, W.
2012-05-01
Using the dynamical Yang-Baxter algebra we derive a functional equation for the partition function of the trigonometric SOS model with domain wall boundary conditions. The solution of the equation is given in terms of a multiple contour integral.
Partial-Credit Scoring Methods for Multiple-Choice Tests.
ERIC Educational Resources Information Center
Frary, Robert B.
1989-01-01
Multiple-choice response and scoring methods that attempt to determine an examinee's degree of knowledge about each item in order to produce a total test score are reviewed. There is apparently little advantage to such schemes; however, they may have secondary benefits such as providing feedback to enhance learning. (SLD)
Methods for the Joint Meta-Analysis of Multiple Tests
ERIC Educational Resources Information Center
Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.
2014-01-01
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…
Accelerating Ab Initio Path Integral Simulations via Imaginary Multiple-Timestepping.
Cheng, Xiaolu; Herr, Jonathan D; Steele, Ryan P
2016-04-12
This work investigates the use of multiple-timestep schemes in imaginary time for computationally efficient ab initio equilibrium path integral simulations of quantum molecular motion. In the simplest formulation, only every nth path integral replica is computed at the target level of electronic structure theory, whereas the remaining low-level replicas still account for nuclear motion quantum effects with a more computationally economical theory. Motivated by recent developments for multiple-timestep techniques in real-time classical molecular dynamics, both 1-electron (atomic-orbital basis set) and 2-electron (electron correlation) truncations are shown to be effective. Structural distributions and thermodynamic averages are tested for representative analytic potentials and ab initio molecular examples. Target quantum chemistry methods include density functional theory and second-order Møller-Plesset perturbation theory, although any level of theory is formally amenable to this framework. For a standard two-level splitting, computational speedups of 1.6-4.0x are observed when using a 4-fold reduction in time slices; an 8-fold reduction is feasible in some cases. Multitiered options further reduce computational requirements and suggest that quantum mechanical motion could potentially be obtained at a cost not significantly different from the cost of classical simulations.
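The two-level splitting can be illustrated with a toy evaluation of the replica potentials. This is a minimal sketch with hypothetical function names; in a real simulation the cheap harmonic spring terms between replicas would be evaluated exactly, and the "low-level" surface would itself be an economical electronic-structure method rather than an analytic function:

```python
def mixed_level_path_potential(beads, v_low, v_high, n):
    """Evaluate the path-integral potential with the target (high-level)
    theory on every n-th replica and the economical (low-level) theory
    on the remaining replicas, as in a two-level splitting."""
    return sum(v_high(q) if i % n == 0 else v_low(q)
               for i, q in enumerate(beads))
```

With n = 4, only a quarter of the replicas require the expensive electronic-structure call, which is the source of the practical acceleration the abstract reports.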
ERIC Educational Resources Information Center
Daniel, Shannon M.
2015-01-01
In this self-study, the author reflects on her implementation of empathetic, critical integrations of multiple perspectives (ECI), which she designed to afford preservice teachers the opportunity to discuss and collectively reflect upon the oft-diverging multiple perspectives, values, and practices they experience during their practicum (Daniel,…
Characterizing lentic freshwater fish assemblages using multiple sampling methods
Fischer, Jesse R.; Quist, Michael
2014-01-01
Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated no appreciable benefit over relatively few gears (e.g., up to four) used in optimal seasons. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
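The two-detector cross-correlation at the heart of the method can be sketched with NumPy. This is a minimal, hypothetical illustration of the signal-processing step only; in the actual apparatus the correlated quantity is the speckle intensity at two spatially or angularly separated detectors, where the single-scattering component is correlated between detectors while multiple-scattering contributions are suppressed:

```python
import numpy as np

def detector_cross_correlation(sig_a, sig_b):
    """Normalized time cross-correlation of two detector intensity
    traces. Components common to both detectors (single scattering)
    survive; uncorrelated components (multiple scattering) average out."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    return np.correlate(a, b, mode="full") / (len(a) * a.std() * b.std())
```

Each such correlation yields one point on the composite single/multiple scattering curve; repeating it at several detector separations traces out the curve described in the abstract.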
Students' integration of multiple representations in a titration experiment
NASA Astrophysics Data System (ADS)
Kunze, Nicole M.
A complete understanding of a chemical concept is dependent upon a student's ability to understand the microscopic or particulate nature of the phenomenon and integrate the microscopic, symbolic, and macroscopic representations of the phenomenon. Acid-base chemistry is a general chemistry topic requiring students to understand the topics of chemical reactions, solutions, and equilibrium presented earlier in the course. In this study, twenty-five student volunteers from a second semester general chemistry course completed two interviews. The first interview was completed prior to any classroom instruction on acids and bases. The second interview took place after classroom instruction, a prelab activity consisting of a titration calculation worksheet, a titration computer simulation, or a microscopic level animation of a titration, and two microcomputer-based laboratory (MBL) titration experiments. During the interviews, participants were asked to define and describe acid-base concepts and in the second interview they also drew the microscopic representations of four stages in an acid-base titration. An analysis of the data showed that participants had integrated the three representations of an acid-base titration to varying degrees. While some participants showed complete understanding of acids, bases, titrations, and solution chemistry, other participants showed several alternative conceptions concerning strong acid and base dissociation, the formation of titration products, and the dissociation of soluble salts. Before instruction, participants' definitions of acid, base, and pH were brief and consisted of descriptive terms. After instruction, the definitions were more scientific and reflected the definitions presented during classroom instruction.
2012-01-01
Background Protein-protein interactions (PPIs) play crucial roles in virtually every aspect of cellular function within an organism. Over the last decade, the development of novel high-throughput techniques has resulted in enormous amounts of data and provided valuable resources for studying protein interactions. However, these high-throughput protein interaction data are often associated with high false positive and false negative rates. It is therefore highly desirable to develop scalable methods to identify these errors from the computational perspective. Results We have developed a robust computational technique for assessing the reliability of interactions and predicting new interactions by combining manifold embedding with multiple information integration. Validation of the proposed method was performed with extensive experiments on densely-connected and sparse PPI networks of yeast respectively. Results demonstrate that the interactions ranked top by our method have high functional homogeneity and localization coherence. Conclusions Our proposed method achieves better performance than existing methods, whether assessing or predicting protein interactions. Furthermore, our method is general enough to work over a variety of PPI networks, irrespective of whether the network is densely connected or sparse. Therefore, the proposed algorithm is a much more promising method to detect both false positive and false negative interactions in PPI networks. PMID:22595000
Exercise in multiple sclerosis -- an integral component of disease management
2012-01-01
Multiple sclerosis (MS) is the most common chronic inflammatory disorder of the central nervous system (CNS) in young adults. The disease causes a wide range of symptoms depending on the localization and characteristics of the CNS pathology. In addition to drug-based immunomodulatory treatment, both drug-based and non-drug approaches are established as complementary strategies to alleviate existing symptoms and to prevent secondary diseases. In particular, physical therapy like exercise and physiotherapy can be customized to the individual patient's needs and has the potential to improve the individual outcome. However, high quality systematic data on physical therapy in MS are rare. This article summarizes the current knowledge on the influence of physical activity and exercise on disease-related symptoms and physical restrictions in MS patients. Other treatment strategies such as drug treatments or cognitive training were deliberately excluded for the purposes of this article. PMID:22738091
Mafuba, Kay; Gates, Bob
2012-12-01
This paper explores and advocates the use of sequential multiple methods as a contemporary strategy for undertaking research. Sequential multiple methods involve the use of results obtained through one data collection method to determine the direction and implementation of subsequent stages of a research project (Morse, 1991; Morgan, 1998). This paper will also explore the significance of how triangulating research at the epistemological, theoretical and methodological levels could enhance research. Finally, the paper evaluates the significance of sequential multiple methods in learning disability nursing research practice.
A method for interactive specification of multiple-block topologies
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Mccann, Karen M.
1991-01-01
A method is presented for dealing with the vast amount of topological and other data which must be specified to generate a multiple-block computational grid. Specific uses of the graphical capabilities of a powerful scientific workstation are described which reduce the burden on the user of collecting and formatting such large amounts of data. A program to implement this method, 3DPREP, is described. A plotting transformation algorithm, some useful software tools, notes on programming, and a database organization are also presented. Example grids developed using the method are shown.
Galerkin projection methods for solving multiple related linear systems
Chan, T.F.; Ng, M.; Wan, W.L.
1996-12-31
We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
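The seed-and-project procedure described above can be sketched in a few lines of NumPy for the special case where all systems share one symmetric positive definite matrix A. Function names are hypothetical, not the authors' code, and this is a minimal illustration rather than a production solver:

```python
import numpy as np

def cg_seed(A, b, tol=1e-10, maxit=500):
    """Solve the seed system A x = b with CG, collecting the search
    directions that span the generated Krylov subspace."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    dirs = []
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        dirs.append(p.copy())
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, np.column_stack(dirs)

def project_onto_seed_space(A, b, V):
    """Galerkin projection: find x in span(V) with V^T (b - A x) = 0."""
    y = np.linalg.solve(V.T @ A @ V, V.T @ b)
    return V @ y
```

When the right-hand sides are close to the seed's, the projected solution already has a small residual, which is why only a few restarts (new seeds) are needed in practice.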
Integrating regional conservation priorities for multiple objectives into national policy
Beger, Maria; McGowan, Jennifer; Treml, Eric A.; Green, Alison L.; White, Alan T.; Wolff, Nicholas H.; Klein, Carissa J.; Mumby, Peter J.; Possingham, Hugh P.
2015-01-01
Multinational conservation initiatives that prioritize investment across a region invariably navigate trade-offs among multiple objectives. It seems logical to focus where several objectives can be achieved efficiently, but such multi-objective hotspots may be ecologically inappropriate, or politically inequitable. Here we devise a framework to facilitate a regionally cohesive set of marine-protected areas driven by national preferences and supported by quantitative conservation prioritization analyses, and illustrate it using the Coral Triangle Initiative. We identify areas important for achieving six objectives to address ecosystem representation, threatened fauna, connectivity and climate change. We expose trade-offs between areas that contribute substantially to several objectives and those meeting one or two objectives extremely well. Hence there are two strategies to guide countries choosing to implement regional goals nationally: multi-objective hotspots and complementary sets of single-objective priorities. This novel framework is applicable to any multilateral or global initiative seeking to apply quantitative information in decision making. PMID:26364769
Measuring multiple residual-stress components using the contour method and multiple cuts
Prime, Michael B; Swenson, Hunter; Pagliaro, Pierluigi; Zuccarello, Bernardo
2009-01-01
The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.
Multiple-Time Step Ab Initio Molecular Dynamics Based on Two-Electron Integral Screening.
Fatehi, Shervin; Steele, Ryan P
2015-03-10
A multiple-timestep ab initio molecular dynamics scheme based on varying the two-electron integral screening method used in Hartree-Fock or density functional theory calculations is presented. Although screening is motivated by numerical considerations, it is also related to separations in the length- and timescales characterizing forces in a molecular system: Loose thresholds are sufficient to describe fast motions over short distances, while tight thresholds may be employed for larger length scales and longer times, leading to a practical acceleration of ab initio molecular dynamics simulations. Standard screening approaches can lead, however, to significant discontinuities in (and inconsistencies between) the energy and gradient when the screening threshold is loose, making them inappropriate for use in dynamics. To remedy this problem, a consistent window-screening method that smooths these discontinuities is devised. Further algorithmic improvements reuse electronic-structure information within the dynamics step and enhance efficiency relative to a naïve multiple-timestepping protocol. The resulting scheme is shown to realize meaningful reductions in the cost of Hartree-Fock and B3LYP simulations of a moderately large system, the protonated sarcosine/glycine dipeptide embedded in a 19-water cluster.
Shi, Hua; Liu, Hu-Chen; Li, Ping; Xu, Xue-Guo
2017-01-01
With increased worldwide awareness of environmental issues, healthcare waste (HCW) management has received much attention from both researchers and practitioners over the past decade. The task of selecting the optimum treatment technology for HCWs is a challenging decision making problem involving conflicting evaluation criteria and multiple stakeholders. In this paper, we develop an integrated decision making framework based on cloud model and MABAC method for evaluating and selecting the best HCW treatment technology from a multiple stakeholder perspective. The introduced framework deals with uncertain linguistic assessments of alternatives by using interval 2-tuple linguistic variables, determines decision makers' relative weights based on the uncertainty and divergence degrees of every decision maker, and obtains the ranking of all HCW disposal alternatives with the aid of an extended MABAC method. Finally, an empirical example from Shanghai, China, is provided to illustrate the feasibility and effectiveness of the proposed approach. Results indicate that the proposed methodology is more suitable and effective for handling the HCW treatment technology selection problem in a vague and uncertain information environment.
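Although the paper extends MABAC with cloud models and interval 2-tuple linguistic variables, the underlying crisp MABAC ranking step can be sketched as follows. The data and function name are illustrative, and the linguistic/uncertainty extensions are omitted:

```python
import numpy as np

def mabac_rank(X, weights, benefit):
    """Rank alternatives with crisp MABAC: min-max normalize the
    decision matrix, weight it, form the border approximation area,
    and score each alternative by its total distance from the border."""
    X = np.asarray(X, dtype=float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    # benefit criteria: larger is better; cost criteria: smaller is better
    T = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    V = weights * (T + 1.0)                   # weighted normalized matrix
    G = V.prod(axis=0) ** (1.0 / X.shape[0])  # border approximation area
    return (V - G).sum(axis=1)                # higher score = better
```

An alternative whose weighted values sit above the border on every criterion scores positively and ranks ahead of alternatives sitting below it.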
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, that hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
Lidar Tracking of Multiple Fluorescent Tracers: Method and Field Test
NASA Technical Reports Server (NTRS)
Eberhard, Wynn L.; Willis, Ron J.
1992-01-01
Past research and applications have demonstrated the advantages and usefulness of lidar detection of a single fluorescent tracer to track air motions. Earlier researchers performed an analytical study that showed good potential for lidar discrimination and tracking of two or three different fluorescent tracers at the same time. The present paper summarizes the multiple fluorescent tracer method, discusses its expected advantages and problems, and describes our field test of this new technique.
Assessing District Energy Systems Performance Integrated with Multiple Thermal Energy Storages
NASA Astrophysics Data System (ADS)
Rezaie, Behnaz
The goal of this study is to examine various energy resources in district energy (DE) systems and then DE system performance development by means of multiple thermal energy storage (TES) application. This study sheds light on areas not yet investigated in detail. Throughout the research, major components of the heat plant, energy suppliers of the DE systems, and TES characteristics are separately examined; integration of various configurations of the multiple TESs in the DE system is then analysed. In the first part of the study, various sources of energy are compared, in a consistent manner, financially and environmentally. The TES performance is then assessed from various aspects. Then, TES(s) and DE systems with several sources of energy are integrated, and are investigated as a heat process centre. The most efficient configurations of the multiple TESs integrated with the DE system are investigated. Some of the findings of this study are applied to an actual DE system. The outcomes of this study provide insight for researchers and engineers who work in this field, as well as policy makers and project managers who are decision-makers. The accomplishments of the study are original developments of TESs and DE systems. As an original development, the Enviro-Economic Function is introduced to balance the economic and environmental aspects of energy resource technologies in DE systems; various configurations of multiple TESs, including series, parallel, and general grid, are developed. The related functions developed are the discharge temperature and energy of the TES, and the energy and exergy efficiencies of the TES. The instantaneous charging and discharging behavior of the TES is also investigated to obtain the charging temperature, the maximum charging temperature, the charging energy flow, maximum heat flow capacity, the discharging temperature, the minimum charging temperature, the discharging energy flow, the maximum heat flow capacity, and performance
Integrating Stratification and Information Approaches for Multiple Constrained CAT.
ERIC Educational Resources Information Center
Leung, Chi-Keung; Chang, Hua-Hua; Hau, Kit-Tai
It is widely believed that item selection methods using the maximum information approach (MI) can maintain high efficiency in trait estimation by repeatedly choosing highly discriminating (alpha) items. However, the consequence is that they lead to an extremely skewed item exposure distribution in which items with high alpha values become overly…
Integration of multiple research disciplines on the International Space Station
NASA Technical Reports Server (NTRS)
Penley, N. J.; Uri, J.; Sivils, T.; Bartoe, J. D.
2000-01-01
The International Space Station will provide an extremely high-quality, long-duration microgravity environment for the conduct of research. In addition, the ISS offers a platform for performing observations of Earth and Space from a high-inclination orbit, outside of the Earth's atmosphere. This unique environment and observational capability offers the opportunity for advancement in a diverse set of research fields. Many of these disciplines do not relate to one another, and present widely differing approaches to study, as well as different resource and operational requirements. Significant challenges exist to ensure the highest quality research return for each investigation. Requirements from different investigations must be identified, clarified, integrated and communicated to ISS personnel in a consistent manner. Resources such as power, crew time, etc. must be apportioned to allow the conduct of each investigation. Decisions affecting research must be made at the strategic level as well as at a very detailed execution level. The timing of the decisions can range from years before an investigation to real-time operations. The international nature of the Space Station program adds to the complexity. Each participating country must be assured that their interests are represented during the entire planning and operations process. A process for making decisions regarding research planning, operations, and real-time replanning is discussed. This process ensures adequate representation of all research investigators. It provides a means for timely decisions, and it includes a means to ensure that all ISS International Partners have their programmatic interests represented. © 2000 Published by Elsevier Science Ltd. All rights reserved.
Propagation error minimization method for multiple structural displacement monitoring system
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Shin, Jae-Uk; Myung, Hyun
2013-04-01
In the previous study, a visually servoed paired structured light system (ViSP), which is composed of two sides facing each other (each with one or two lasers, a 2-DOF manipulator, a camera, and a screen), has been proposed. The lasers project their parallel beams to the screen on the opposite side, and the 6-DOF relative displacement between the two sides is estimated by calculating the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive civil structures such as long-span bridges or high-rise buildings, the whole area should be divided into multiple partitions and a ViSP module placed in each partition in a cascaded manner. In other words, the movement of the entire structure can be monitored by multiplying the estimated displacements from multiple ViSP modules. In the multiplication, however, there is a major problem: the displacement estimation error is propagated throughout the multiple modules. To solve this problem, a propagation error minimization method (PEMM), which uses a Newton-Raphson formulation inspired by the error back-propagation algorithm, is proposed. In this method, the propagation error at the last module is calculated and then the estimated displacement from the ViSP at each partition is updated in reverse order by using the proposed PEMM that minimizes the propagation error. To verify the performance of the proposed method, various simulations and experimental tests have been performed. The results show that the propagation error is significantly reduced after applying PEMM.
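The correction idea can be illustrated with a 1-D toy: per-module displacement estimates are combined (here by simple addition rather than 6-DOF transform multiplication), the residual against a known reference at the last module is computed, and the module estimates are updated in reverse order until the residual vanishes. All values are hypothetical and this is only a sketch of the back-propagation-style iteration, not the paper's Newton-Raphson formulation.

```python
# Hypothetical per-partition displacements and noisy module estimates.
true_disp = [1.0, 2.0, 1.5, 0.5]
est = [1.1, 1.9, 1.7, 0.4]
ref_total = sum(true_disp)        # known displacement at the last module

eta = 0.1                         # update gain
for _ in range(200):
    residual = ref_total - sum(est)   # propagation error at the last module
    if abs(residual) < 1e-9:
        break
    # distribute a fraction of the residual over modules, in reverse order
    for i in reversed(range(len(est))):
        est[i] += eta * residual / len(est)

corrected_total = sum(est)
```

Each sweep shrinks the residual by the factor (1 - eta), so the combined displacement converges to the reference while the per-module estimates stay close to their measured values.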
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
An Integrated Approach for Accessing Multiple Datasets through LANCE
NASA Astrophysics Data System (ADS)
Murphy, K. J.; Teague, M.; Conover, H.; Regner, K.; Beaumont, B.; Masuoka, E.; Vollmer, B.; Theobald, M.; Durbin, P.; Michael, K.; Boller, R. A.; Schmaltz, J. E.; Davies, D.; Horricks, K.; Ilavajhala, S.; Thompson, C. K.; Bingham, A.
2011-12-01
The NASA/GSFC Land Atmospheres Near-real time Capability for EOS (LANCE) provides imagery for approximately 40 data products from MODIS, AIRS, AMSR-E and OMI to support the applications community in the study of a variety of phenomena. Thirty-six of these products are available within 2.5 hours of observation at the spacecraft. The data set includes the population density data provided by the EOSDIS Socio-Economic Data and Applications Center (SEDAC). The purpose of this paper is to describe the variety of tools that have been developed by LANCE to support user access to the imagery. The long-standing Rapid Response system has been integrated into LANCE and is a major vehicle for the distribution of the imagery to end users. There are presently approximately 10,000 anonymous users per month accessing this imagery. The products are grouped into 14 applications categories such as Smoke Plumes, Pollution, Fires, and Agriculture, and the selection of any category will make relevant subsets of the 40 products available as possible overlays in an interactive Web Client utilizing the Web Mapping Service (WMS) to support user investigations (http://lance2.modaps.eosdis.nasa.gov/wms/). For example, selecting Severe Storms will include 6 products for MODIS, OMI, AIRS, and AMSR-E plus the SEDAC population density data. The client and WMS were developed using open-source technologies such as OpenLayers and MapServer and provide uniform, browser-based access to data products. All overlays are downloadable in PNG, JPEG, or GeoTIFF form up to 200 MB per request. The WMS was beta-tested with the user community and substantial performance improvements were made through the use of such techniques as tile-caching. LANCE established a partnership with the Physical Oceanography Distributed Active Archive Center (PO DAAC) to develop an alternative presentation for the 40 data products known as the State of the Earth (SOTE). This provides a Google Earth-based interface to the products grouped in
Differential operator multiplication method for fractional differential equations
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Ying, Yuping; Lian, Yanping; Lin, Stephen; Yang, Yibo; Wagner, Gregory J.; Liu, Wing Kam
2016-11-01
Fractional derivatives play a very important role in modeling physical phenomena involving long-range correlation effects. However, they raise challenges of computational cost and memory storage requirements when solved using currently well-developed numerical methods. In this paper, the differential operator multiplication method is proposed to address these issues by considering a reaction-advection-diffusion equation with a fractional derivative in time. The linear fractional differential equation is transformed into an integer-order differential equation by the proposed method, which can fundamentally fix the aforementioned issues for select fractional differential equations. In such a transform, special attention should be paid to the initial conditions for the resulting differential equation of higher integer order. Through numerical experiments, we verify the proposed method for both fractional ordinary differential equations and partial differential equations.
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
An integrated modeling method for wind turbines
NASA Astrophysics Data System (ADS)
Fadaeinedjad, Roohollah
To study the interaction of the electrical, mechanical, and aerodynamic aspects of a wind turbine, a detailed model that considers all these aspects must be used. A drawback of many studies in the area of wind turbine simulation is that either a very simple mechanical model is used with a detailed electrical model, or vice versa. Hence the interactions between electrical and mechanical aspects of wind turbine operation are not accurately taken into account. In this research, it will be shown that a combination of different simulation packages, namely TurbSim, FAST, and Simulink, can be used to model the aerodynamic, mechanical, and electrical aspects of a wind turbine in detail. In this thesis, after a review of some wind turbine concepts and software tools, a simulation structure is proposed for studying wind turbines that integrates the mechanical and electrical components of a wind energy conversion device. Based on the simulation structure, a comprehensive model for a three-bladed variable speed wind turbine with a doubly-fed induction generator is developed. Using the model, the impact of a voltage sag on the wind turbine tower vibration is investigated under various operating conditions such as power system short circuit level, mechanical parameters, and wind turbine operating conditions. It is shown how an electrical disturbance can cause more sustained tower vibrations under high-speed and turbulent wind conditions, which may disrupt the operation of the pitch control system. A similar simulation structure is used to model a two-bladed fixed speed wind turbine with an induction generator. An extension of the concept is introduced by adding a diesel generator system. The model is utilized to study the impact of the aeroelastic aspects of the wind turbine (i.e., tower shadow, wind shear, yaw error, turbulence, and mechanical vibrations) on the power quality of a stand-alone wind-diesel system. Furthermore, an IEEE standard flickermeter model is implemented in a
Field Evaluation of Personal Sampling Methods for Multiple Bioaerosols
Wang, Chi-Hsun; Chen, Bean T.; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine
2015-01-01
Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols. PMID:25799419
Lattice Boltzmann equation method for multiple immiscible continuum fluids.
Spencer, T J; Halliday, I; Care, C M
2010-12-01
This paper generalizes the two-component algorithm of Sec. , extending it, in Sec. , to describe N>2 mutually immiscible fluids in the isothermal continuum regime. Each fluid has an independent interfacial tension. While retaining all its computational advantages, we remove entirely the empiricism associated with contact behavior in our previous multiple immiscible fluid models [M. M. Dupin, Phys. Rev. E 73, 055701(R) (2006); Med. Eng. Phys. 28, 13 (2006)] while solidifying the physical foundations. Moreover, the model relies upon a fluid-fluid segregation which is simpler, computationally faster, more free of artifacts (i.e., the interfacial microcurrent), and upon an interface-inducing force distribution which is analytic. The method is completely symmetric between any number of immiscible fluids and stable over a wide range of directly input interfacial tensions. We present data on the steady-state properties of the multiple interface model, which are in good agreement with theory [R. E. Johnson and S. S. Sadhal, Annu. Rev. Fluid Mech. 17, 289 (1985)], specifically on the shapes of multidrop systems. Section is an analysis of the kinetic and continuum-scale descriptions of the underlying two-component lattice Boltzmann model for immiscible fluids, extendable to more than two immiscible fluids. This extension requires (i) the use of a more local kinetic equation perturbation which is (ii) free from a reliance on measured interfacial curvature. It should be noted that viewed simply as a two-component method, the continuum algorithm is inferior to our previous methods, reported by Lishchuk [Phys. Rev. E 67, 036701 (2003)] and Halliday [Phys. Rev. E 76, 026708 (2007)]. Greater stability and parameter range is achieved in multiple drop simulations by using the forced multi-relaxation-time lattice Boltzmann method developed, along with (for completeness) a forced exactly incompressible Bhatnagar-Gross-Krook lattice Boltzmann model, in the Appendix. These appended schemes
Integrated Multiple “-omics” Data Reveal Subtypes of Hepatocellular Carcinoma
Liu, Gang; Dong, Chuanpeng; Liu, Lei
2016-01-01
Hepatocellular carcinoma is one of the most heterogeneous cancers, as reflected by its multiple grades and difficulty to subtype. In this study, we integrated copy number variation, DNA methylation, mRNA, and miRNA data with the developed “cluster of cluster” method and classified 256 HCC samples from TCGA (The Cancer Genome Atlas) into five major subgroups (S1-S5). We observed that this classification was associated with specific mutations and protein expression, and we detected that each subgroup had distinct molecular signatures. The subclasses were associated not only with survival but also with clinical observations. S1 was characterized by bulk amplification on 8q24, TP53 mutation, low lipid metabolism, highly expressed onco-proteins, attenuated tumor suppressor proteins and a worse survival rate. S2 and S3 were characterized by telomere hypomethylation and a low expression of TERT and DNMT1/3B. Compared to S2, S3 was associated with less copy number variation and some good prognosis biomarkers, including CRP and CYP2E1. In contrast, the mutation rate of CTNNB1 was higher in S3. S4 was associated with bulk amplification and various molecular characteristics at different biological levels. In summary, we classified the HCC samples into five subgroups using multiple “-omics” data. Each subgroup had a distinct survival rate and molecular signature, which may provide information about the pathogenesis of subtypes in HCC. PMID:27806083
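The integration step can be illustrated with a toy "cluster of clusters": each omics layer is clustered independently, the per-layer labels are stacked into one integrated profile per sample, and samples are regrouped on those profiles. The samples, layers, and labels below are invented, and the real method uses consensus clustering over the layer-wise partitions rather than exact profile matching.

```python
# Hypothetical samples and per-layer cluster assignments.
samples = ["s1", "s2", "s3", "s4"]
layer_labels = {
    "methylation": [0, 0, 1, 1],
    "mrna":        [0, 0, 1, 1],
    "mirna":       [1, 0, 1, 1],
}

# Stack the layer labels into one integrated profile per sample.
profiles = [tuple(layer_labels[k][i] for k in sorted(layer_labels))
            for i in range(len(samples))]

# Regroup: here, samples sharing an identical integrated profile form a
# subtype (a stand-in for the second-level clustering step).
subtypes = {}
for s, p in zip(samples, profiles):
    subtypes.setdefault(p, []).append(s)
```

In this toy run, s3 and s4 agree across all three layers and fall into one subtype, while s1 is split off by its miRNA label alone.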
Solution of elastoplastic torsion problem by boundary integral method
NASA Technical Reports Server (NTRS)
Mendelson, A.
1975-01-01
The boundary integral method was applied to the elastoplastic analysis of the torsion of prismatic bars, and the results are compared with those obtained by the finite difference method. Although fewer unknowns were used, very good accuracy was obtained with the boundary integral method. Both simply and multiply connected bodies can be handled with equal ease.
Robust control of multiple integrators subject to input saturation and disturbance
NASA Astrophysics Data System (ADS)
Ding, Shihong; Zheng, Wei Xing
2015-04-01
This paper is concerned with the problem of robust stabilisation of multiple-integrator systems subject to input saturation and disturbance, from the viewpoint of state feedback and output feedback. First of all, without considering the disturbance, a backstepping-like method in conjunction with a series of saturation functions with different saturation levels is employed to design a nested-saturation-based state-feedback controller with pre-chosen parameters. On this basis, taking the disturbance into account, a sliding mode disturbance observer (DOB) is adopted to estimate the states and the disturbance. Then, by combining the above state-feedback controller and the estimated states together, a composite controller with disturbance compensation is developed. With the removal of the non-increasing restriction on the saturation levels, the controller design becomes very flexible and the convergence performance of the closed-loop system is much improved. Meanwhile, with the aid of the estimated values from the DOB, we obtain not only the output-feedback control scheme but also a better disturbance rejection property for the closed-loop system. A simulation example of a triple-integrator system is presented to substantiate the usefulness of the proposed technique.
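A nested-saturation state feedback can be sketched for the simplest case, a double integrator, using a Teel-style law u = -sat(x2 + sat(x1 + x2)). The saturation levels and simulation settings below are illustrative and the disturbance observer of the paper is omitted; this only shows bounded control driving the states to the origin.

```python
def sat(v, level):
    """Symmetric saturation to [-level, level]."""
    return max(-level, min(level, v))

# Double integrator: x1' = x2, x2' = u, |u| <= 0.5, Euler-simulated.
x1, x2 = 2.0, -1.0
dt = 0.01
for _ in range(60000):
    # nested saturations with illustrative levels 0.25 (inner), 0.5 (outer)
    u = -sat(x2 + sat(x1 + x2, 0.25), 0.5)
    x1 += dt * x2
    x2 += dt * u
```

Near the origin both saturations are inactive and the law reduces to the linear feedback u = -(x1 + 2 x2), so the states decay smoothly despite the hard input bound far from the origin.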
A Comparison of Multiple-Event Location Methods
NASA Astrophysics Data System (ADS)
Engdahl, E. R.; Rodi, W.; Bergman, E. A.; Waldhauser, F.; Pavlis, G. L.; Israelsson, H.; Dewey, J. W.
2003-12-01
Multiple-event location methods solve jointly for the location parameters (hypocenters and origin times) of seismic events in a cluster and travel-time corrections at the stations recording the events. This paper reports some preliminary comparisons of five such methods that have been developed over the years: hypocentral decomposition (HDC), double differencing (DD), progressive multiple-event location (PMEL), joint hypocenter determination (JHD), and a recently developed algorithm based on grid search (GMEL). We have applied each method to two adjacent earthquake clusters in Turkey: 33 events from the 17 Aug 1999 Izmit earthquake sequence and 41 events from the 12 Nov 1999 Duzce sequence. Previously, Engdahl and Bergman (2001) had applied HDC to these clusters using Pn and teleseismic P arrival times from NEIC and ground-truth (local network) locations for a few of the events. Their data set comprised approximately 3500 arrivals at 640 stations for the Izmit cluster and 3200 arrivals at 600 stations for Duzce. We applied the other multiple-event location methods to the same set of phase picks, using the same phase identifications and fixed event depths that were used in the HDC analysis. While the five algorithms are quite different in their computational approach, our initial results indicate that the methods yield quite similar relative event locations when they are applied with the same data and assumptions. However, they resolve the trade-off between the centroid location of a cluster and station corrections differently, and they also differ in how they use ground-truth information to constrain this trade-off and obtain absolute event locations. The locations relative to the cluster centroids generally agreed to within 5 km, though differences were on the order of 10 km in some instances. This may have to do with the different schemes for weighting data used by the different methods, which cannot always be equalized between methods. To test this hypothesis, we applied GMEL with
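The joint estimation and its trade-off can be shown in a stripped-down toy: arrival times are modeled as origin time plus station correction only (no hypocenters or travel-time physics), and alternating least squares recovers both, with the mean correction pinned to zero to resolve the centroid/correction ambiguity the abstract mentions. All numbers are hypothetical.

```python
# Synthetic noise-free arrivals: t[i][j] = origin[i] + corr[j].
true_origin = [0.0, 2.0, 5.0]          # three events
true_corr = [0.3, -0.1, -0.2, 0.0]     # four stations (mean zero)
t = [[o + c for c in true_corr] for o in true_origin]

origin = [0.0] * 3
corr = [0.0] * 4
for _ in range(50):                    # alternating least squares
    for i in range(3):                 # origins given station corrections
        origin[i] = sum(t[i][j] - corr[j] for j in range(4)) / 4
    for j in range(4):                 # corrections given origins
        corr[j] = sum(t[i][j] - origin[i] for i in range(3)) / 3
    # remove the trade-off: a constant can shift between origins and
    # corrections, so pin the mean correction to zero
    mean_c = sum(corr) / 4
    corr = [c - mean_c for c in corr]
    origin = [o + mean_c for o in origin]
```

Without the mean-zero constraint, any constant could migrate between the origin times and the station corrections; real codes resolve this with ground-truth events instead.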
Power-efficient method for IM-DD optical transmission of multiple OFDM signals.
Effenberger, Frank; Liu, Xiang
2015-05-18
We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.
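A quadratic soft clipper of the general kind referred to above can be sketched as a transfer curve that is linear for small inputs, quadratic in a transition band, and saturated beyond it. The breakpoints and exact curve below are chosen for the sketch and are not taken from the paper.

```python
def soft_clip(x, a=0.5):
    """Normalized quadratic soft clipper (illustrative form):
    identity for |x| <= a, quadratic blend for a < |x| < 2 - a,
    hard-saturated at +/-1 beyond. Continuous with continuous slope."""
    s = 1.0 if x >= 0 else -1.0
    m = abs(x)
    if m <= a:
        return x                      # linear region
    if m >= 2.0 - a:
        return s                      # saturation region
    # quadratic segment: matches value and unit slope at m = a,
    # reaches +/-1 with zero slope at m = 2 - a
    return s * (1.0 - (2.0 - a - m) ** 2 / (4.0 * (1.0 - a)))
```

Compared with hard clipping, the zero-slope junction at saturation suppresses the high-order distortion products, which is what makes soft clipping attractive for intensity-modulated transmitters.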
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating and manufacturing uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important both to maintain near-optimal performance levels at off-design operating conditions, and to ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design, wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore, efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks) deals with methodology for solving multiple-objective optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The
Methods for radiation detection and characterization using a multiple detector probe
Akers, Douglas William; Roybal, Lyle Gene
2014-11-04
Apparatuses, methods, and systems relating to radiological characterization of environments are disclosed. Multi-detector probes with a plurality of detectors in a common housing may be used to substantially concurrently detect a plurality of different radiation activities and types. Multiple multi-detector probes may be used in a down-hole environment to substantially concurrently detect radioactive activity and contents of a buried waste container. Software may process, analyze, and integrate the data from the different multi-detector probes and the different detector types therein to provide source location and integrated analysis as to the source types and activity in the measured environment. Further, the integrated data may be used to compensate for differential density effects and the effects of radiation shielding materials within the volume being measured.
Integrated navigation method based on inertial navigation system and Lidar
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi
2016-04-01
An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment was considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on the analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of a traditional integrated navigation method and the DR method, the proposed integrated navigation method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment error were effectively compensated, and the new integrated navigation method is valid.
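The fusion idea can be sketched in one dimension: the INS dead-reckons position from a biased velocity, and periodic Lidar position fixes correct the accumulated drift through a Kalman filter. All numbers are hypothetical, the Lidar fix is taken as noise-free to keep the example deterministic, and the paper's additional IMU scale-factor and misalignment error states are omitted.

```python
# 1-D INS/Lidar fusion sketch with a scalar Kalman filter.
dt, q, r = 0.1, 0.01, 0.25        # step, process noise var, measurement var
true_pos, vel, bias = 0.0, 1.0, 0.05   # 0.05 m/s velocity error -> drift
est, P = 0.0, 1.0                 # filter estimate and covariance

for k in range(200):
    true_pos += vel * dt
    est += (vel + bias) * dt      # INS prediction with biased velocity
    P += q
    if k % 10 == 0:               # Lidar position fix every 10 steps
        z = true_pos              # noise-free measurement for this sketch
        K = P / (P + r)           # Kalman gain
        est += K * (z - est)
        P *= (1.0 - K)

err = abs(est - true_pos)
```

Without the fixes the bias alone would accumulate 1.0 m of error over the 20 s run; the periodic corrections keep the residual error an order of magnitude smaller.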
A multiple-phenotype imputation method for genetic studies.
Dahl, Andrew; Iotchkova, Valentina; Baud, Amelie; Johansson, Åsa; Gyllensten, Ulf; Soranzo, Nicole; Mott, Richard; Kranis, Andreas; Marchini, Jonathan
2016-04-01
Genetic association studies have yielded a wealth of biological discoveries. However, these studies have mostly analyzed one trait and one SNP at a time, thus failing to capture the underlying complexity of the data sets. Joint genotype-phenotype analyses of complex, high-dimensional data sets represent an important way to move beyond simple genome-wide association studies (GWAS) with great potential. The move to high-dimensional phenotypes will raise many new statistical problems. Here we address the central issue of missing phenotypes in studies with any level of relatedness between samples. We propose a multiple-phenotype mixed model and use a computationally efficient variational Bayesian algorithm to fit the model. On a variety of simulated and real data sets from a range of organisms and trait types, we show that our method outperforms existing state-of-the-art methods from the statistics and machine learning literature and can boost signals of association.
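The core of phenotype imputation can be illustrated in the simplest bivariate-normal case: a missing phenotype is replaced by its conditional expectation given a correlated observed phenotype. The means, variances, and correlation below are hypothetical, and the paper's method goes much further, fitting a multiple-phenotype mixed model with variational Bayes and accounting for sample relatedness.

```python
# Hypothetical bivariate phenotype model.
mu = [0.0, 0.0]     # phenotype means
var = [1.0, 1.0]    # phenotype variances
rho = 0.8           # correlation between the two phenotypes

def impute_second(y1):
    """Conditional mean E[y2 | y1] for a bivariate normal."""
    return mu[1] + rho * (var[1] / var[0]) ** 0.5 * (y1 - mu[0])

# A sample observed for phenotype 1 but missing phenotype 2.
imputed = impute_second(1.5)
```

The stronger the correlation, the more information the observed trait carries about the missing one, which is why joint modeling of many phenotypes can boost association signals.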
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple time stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also allow it to beat the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo lead to improved stability of MTS and allow for achieving larger step sizes in the simulation of complex systems.
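The MTS ingredient alone can be sketched as an r-RESPA-style impulse integrator on a toy system: the force is split into a fast and a slow part, the fast part is integrated with a small inner step, and the slow part is applied as half-kicks around each outer step. The spring constants and step sizes are illustrative, and the Monte Carlo acceptance step and shadow Hamiltonian of MTS-GSHMC are omitted.

```python
# Harmonic oscillator with a split force: F = -(k_fast + k_slow) * q.
k_fast, k_slow = 25.0, 1.0
dt, n_inner, steps = 0.02, 5, 1000
q, p = 1.0, 0.0                  # position, momentum (unit mass)

for _ in range(steps):
    p += 0.5 * dt * (-k_slow * q)        # half kick from the slow force
    h = dt / n_inner
    for _ in range(n_inner):             # inner velocity Verlet, fast force
        p += 0.5 * h * (-k_fast * q)
        q += h * p
        p += 0.5 * h * (-k_fast * q)
    p += 0.5 * dt * (-k_slow * q)        # half kick from the slow force

energy = 0.5 * p * p + 0.5 * (k_fast + k_slow) * q * q
```

Because the scheme is symplectic, the total energy oscillates around its initial value (here 13.0) without secular drift, even though the expensive slow force is evaluated five times less often than the fast one.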
NASA Astrophysics Data System (ADS)
Carr, M. C.; Baker, G. S.; Herrmann, N.; Yerka, S.; Angst, M.
2008-12-01
The objectives of this project are to (1) utilize quantitative integration of multiple geophysical techniques, (2) determine geophysical anomalies that may indicate locations of various archaeological structures, and (3) develop techniques for quantifying causes of uncertainty. Two sites are used to satisfy these objectives. The first, representing a site with unknown target features, is an archaeological site on the Tennessee River floodplain. The area is divided into 437 (20 x 20 m) plots with 0.5 m spacing where magnetic gradiometry profiles were collected in a zig-zag pattern, resulting in 350 km of line data. Once anomalies are identified in the magnetics data, potential excavation sites for archaeological features are determined and other geophysical techniques are utilized to gain confidence in choosing which anomalies to excavate. Several grids are resurveyed using Ground Penetrating Radar (GPR) and EM-31 with a 0.25 m spacing in a grid pattern. A quantitative method of integrating data into one comprehensive set is developed, enhancing interpretation because each geophysical technique utilized within this study produced a unique response to noise and the targets. Spatial visualization software is used to interpolate irregularly spaced XYZ data into a regularly spaced grid and display the geophysical data in 3D representations. Once all data are exported from each individual instrument, grid files are created for quantitative merging of the data and to create grid-based maps including contour, image, shaded relief, and surface maps. Statistics were calculated from anomaly classification in the data and excavated features present. To study this methodology in a more controlled setting, a second site is used. This site is analogous to the first in that it is along the Tennessee River floodplain on the same bedrock units. However, this analog site contains known targets (previously buried and accurately located), including their size, shape, and orientation. Four
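The interpolation step (irregular XYZ samples onto a regular grid) can be sketched with inverse-distance weighting, one of the standard gridding schemes spatial visualization packages offer. The abstract does not say which algorithm was used, so the function and parameters below are assumptions for illustration:

```python
def idw_grid(points, values, xs, ys, power=2.0, eps=1e-12):
    """Interpolate scattered (x, y) -> value samples onto a regular grid
    using inverse-distance weighting. points is a list of (x, y) tuples,
    values the measurements; xs and ys define the output grid nodes."""
    grid = []
    for y in ys:
        row = []
        for x in xs:
            num = den = 0.0
            for (px, py), v in zip(points, values):
                d2 = (x - px) ** 2 + (y - py) ** 2
                if d2 < eps:                  # node coincides with a sample
                    num, den = v, 1.0
                    break
                w = 1.0 / d2 ** (power / 2)   # weight ~ 1 / distance^power
                num += w * v
                den += w
            row.append(num / den)
        grid.append(row)
    return grid

# Four corner samples of a 1 x 1 plot, gridded at 0.5 spacing.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
vals = [0.0, 1.0, 1.0, 2.0]
print(idw_grid(pts, vals, [0.0, 0.5, 1.0], [0.0, 0.5, 1.0]))
```

Once each instrument's data are on the same grid, quantitative merging reduces to per-node arithmetic on the co-registered arrays.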
Lattice Boltzmann equation method for multiple immiscible continuum fluids
NASA Astrophysics Data System (ADS)
Spencer, T. J.; Halliday, I.; Care, C. M.
2010-12-01
This paper generalizes the two-component algorithm of Sec. , extending it, in Sec. , to describe N>2 mutually immiscible fluids in the isothermal continuum regime. Each fluid has an independent interfacial tension. While retaining all its computational advantages, we remove entirely the empiricism associated with contact behavior in our previous multiple immiscible fluid models [M. M. Dupin et al., Phys. Rev. E 73, 055701(R) (2006)10.1103/PhysRevE.73.055701; Med. Eng. Phys. 28, 13 (2006)10.1016/j.medengphy.2005.04.015] while solidifying the physical foundations. Moreover, the model relies upon a fluid-fluid segregation which is simpler, computationally faster, freer of artifacts (i.e., the interfacial microcurrent), and upon an interface-inducing force distribution which is analytic. The method is completely symmetric between any number of immiscible fluids and stable over a wide range of directly input interfacial tensions. We present data on the steady-state properties of the multiple-interface model, which are in good agreement with theory [R. E. Johnson and S. S. Sadhal, Annu. Rev. Fluid Mech. 17, 289 (1985)10.1146/annurev.fl.17.010185.001445], specifically on the shapes of multidrop systems. Section is an analysis of the kinetic and continuum-scale descriptions of the underlying two-component lattice Boltzmann model for immiscible fluids, extendable to more than two immiscible fluids. This extension requires (i) the use of a more local kinetic equation perturbation which is (ii) free from a reliance on measured interfacial curvature. It should be noted that, viewed simply as a two-component method, the continuum algorithm is inferior to our previous methods, reported by Lishchuk [Phys. Rev. E 67, 036701 (2003)]10.1103/PhysRevE.67.036701 and Halliday [Phys. Rev. E 76, 026708 (2007)]10.1103/PhysRevE.76.026708. Greater stability and parameter range is achieved in multiple drop simulations by using the forced multi-relaxation-time lattice Boltzmann method developed
NASA Astrophysics Data System (ADS)
Choudhury, A. Ghose; Guha, Partha; Khanra, Barun
2009-10-01
The Darboux integrability method is particularly useful to determine first integrals of nonplanar autonomous systems of ordinary differential equations, whose associated vector fields are polynomials. In particular, we obtain first integrals for a variant of the generalized Raychaudhuri equation, which has appeared in string inspired modern cosmology.
Technology Integration in a One-to-One Laptop Initiative: A Multiple Case Study Analysis
ERIC Educational Resources Information Center
Jones, Marsha B.
2013-01-01
The purpose of this multiple case study analysis was to examine teachers' experiences and perceptions in order to understand what actions and interactions supported or inhibited technology integration during a one-to-one laptop initiative. This research sought to gain teachers' perspectives on the challenges and successes they faced as classroom…
ERIC Educational Resources Information Center
Rega, Bonney
Noting that linguistic and mathematical/logical are the two kinds of intelligences the educational system encourages and that the educational system, as well as science in general, tends to neglect the nonverbal form of intellect, this paper describes Howard Gardner's multiple intelligences theory and Peter Kline's theory of integrative learning…
The Effect of Sensory Integration Treatment on Children with Multiple Disabilities.
ERIC Educational Resources Information Center
Din, Feng S.; Lodato, Donna M.
Six children with multiple disabilities (ages 5 to 8) participated in this evaluation of the effect of sensory integration treatment on sensorimotor function and academic learning. The children had cognitive abilities ranging from sub-average to significantly sub-average, three were non-ambulatory, one had severe behavioral problems, and each…
Multiplicity and Self-Identity: Trauma and Integration in Shirley Mason's Art
ERIC Educational Resources Information Center
Thompson, Geoffrey
2011-01-01
This viewpoint appeared in its original form as the catalogue essay that accompanied the exhibition "Multiplicity and Self-Identity: Trauma and Integration in Shirley Mason's Art," curated by the author for Gallery 2110, Sacramento, CA, and the 2010 Annual Conference of the American Art Therapy Association. The exhibition featured 17 artworks by…
Multiple proviral integration events after virological synapse-mediated HIV-1 spread
Russell, Rebecca A.; Martin, Nicola; Mitar, Ivonne; Jones, Emma; Sattentau, Quentin J.
2013-08-15
HIV-1 can move directly between T cells via virological synapses (VS). Although aspects of the molecular and cellular mechanisms underlying this mode of spread have been elucidated, the outcomes for infection of the target cell remain incompletely understood. We set out to determine whether HIV-1 transfer via VS results in productive, high-multiplicity HIV-1 infection. We found that HIV-1 cell-to-cell spread resulted in nuclear import of multiple proviruses into target cells as seen by fluorescence in-situ hybridization. Proviral integration into the target cell genome was significantly higher than that seen in a cell-free infection system, and consequent de novo viral DNA and RNA production in the target cell detected by quantitative PCR increased over time. Our data show efficient proviral integration across VS, implying the probability of multiple integration events in target cells that drive productive T cell infection. Highlights: • Cell-to-cell HIV-1 infection delivers multiple vRNA copies to the target cell. • Cell-to-cell infection results in productive infection of the target cell. • Cell-to-cell transmission is more efficient than cell-free HIV-1 infection. • Suggests a mechanism for recombination in cells infected with multiple viral genomes.
Integration of Multiple Genomic and Phenotype Data to Infer Novel miRNA-Disease Associations
Zhou, Meng; Cheng, Liang; Yang, Haixiu; Wang, Jing; Sun, Jie; Wang, Zhenzhen
2016-01-01
MicroRNAs (miRNAs) play an important role in the development and progression of human diseases. The identification of disease-associated miRNAs will be helpful for understanding the molecular mechanisms of diseases at the post-transcriptional level. Based on different types of genomic data sources, computational methods for miRNA-disease association prediction have been proposed. However, individual source of genomic data tends to be incomplete and noisy; therefore, the integration of various types of genomic data for inferring reliable miRNA-disease associations is urgently needed. In this study, we present a computational framework, CHNmiRD, for identifying miRNA-disease associations by integrating multiple genomic and phenotype data, including protein-protein interaction data, gene ontology data, experimentally verified miRNA-target relationships, disease phenotype information and known miRNA-disease connections. The performance of CHNmiRD was evaluated by experimentally verified miRNA-disease associations, which achieved an area under the ROC curve (AUC) of 0.834 for 5-fold cross-validation. In particular, CHNmiRD displayed excellent performance for diseases without any known related miRNAs. The results of case studies for three human diseases (glioblastoma, myocardial infarction and type 1 diabetes) showed that all of the top 10 ranked miRNAs having no known associations with these three diseases in existing miRNA-disease databases were directly or indirectly confirmed by our latest literature mining. All these results demonstrated the reliability and efficiency of CHNmiRD, and it is anticipated that CHNmiRD will serve as a powerful bioinformatics method for mining novel disease-related miRNAs and providing a new perspective into molecular mechanisms underlying human diseases at the post-transcriptional level. CHNmiRD is freely available at http://www.bio-bigdata.com/CHNmiRD. PMID:26849207
Integration of Multiple Genomic and Phenotype Data to Infer Novel miRNA-Disease Associations.
Shi, Hongbo; Zhang, Guangde; Zhou, Meng; Cheng, Liang; Yang, Haixiu; Wang, Jing; Sun, Jie; Wang, Zhenzhen
2016-01-01
MicroRNAs (miRNAs) play an important role in the development and progression of human diseases. The identification of disease-associated miRNAs will be helpful for understanding the molecular mechanisms of diseases at the post-transcriptional level. Based on different types of genomic data sources, computational methods for miRNA-disease association prediction have been proposed. However, individual source of genomic data tends to be incomplete and noisy; therefore, the integration of various types of genomic data for inferring reliable miRNA-disease associations is urgently needed. In this study, we present a computational framework, CHNmiRD, for identifying miRNA-disease associations by integrating multiple genomic and phenotype data, including protein-protein interaction data, gene ontology data, experimentally verified miRNA-target relationships, disease phenotype information and known miRNA-disease connections. The performance of CHNmiRD was evaluated by experimentally verified miRNA-disease associations, which achieved an area under the ROC curve (AUC) of 0.834 for 5-fold cross-validation. In particular, CHNmiRD displayed excellent performance for diseases without any known related miRNAs. The results of case studies for three human diseases (glioblastoma, myocardial infarction and type 1 diabetes) showed that all of the top 10 ranked miRNAs having no known associations with these three diseases in existing miRNA-disease databases were directly or indirectly confirmed by our latest literature mining. All these results demonstrated the reliability and efficiency of CHNmiRD, and it is anticipated that CHNmiRD will serve as a powerful bioinformatics method for mining novel disease-related miRNAs and providing a new perspective into molecular mechanisms underlying human diseases at the post-transcriptional level. CHNmiRD is freely available at http://www.bio-bigdata.com/CHNmiRD.
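The AUC of 0.834 reported for CHNmiRD has a simple probabilistic reading: it is the probability that a known (positive) miRNA-disease pair is scored above a random negative pair. That definition can be computed directly by pairwise comparison; the scores below are made-up illustrative values, not CHNmiRD output:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive example is ranked above a
    negative one, computed over all positive/negative pairs (ties = 1/2).
    Equivalent to the area under the ROC curve."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))  # 8/9 ≈ 0.889
```

In a 5-fold cross-validation, this statistic is computed on each held-out fold's scores and the fold values are averaged.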
Missing data methods in Mendelian randomization studies with multiple instruments.
Burgess, Stephen; Seaman, Shaun; Lawlor, Debbie A; Casas, Juan P; Thompson, Simon G
2011-11-01
Mendelian randomization studies typically have low power. Where there are several valid candidate genetic instruments, precision can be gained by using all the instruments available. However, sporadically missing genetic data can offset this gain. The authors describe 4 Bayesian methods for imputing the missing data based on a missing-at-random assumption: multiple imputation, single nucleotide polymorphism (SNP) imputation, latent variables, and haplotype imputation. These methods are demonstrated in a simulation study and then applied to estimate the causal relation between C-reactive protein and each of fibrinogen and coronary heart disease, based on 3 SNPs in British Women's Heart and Health Study participants assessed at baseline between May 1999 and June 2000. A complete-case analysis based on all 3 SNPs was found to be more precise than analyses using any 1 SNP alone. Precision is further improved by using any of the 4 proposed missing data methods; the improvement is equivalent to about a 25% increase in sample size. All methods gave similar results, which were apparently not overly sensitive to violation of the missing-at-random assumption. Programming code for the analyses presented is available online.
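The first of the four approaches, multiple imputation, can be sketched in miniature: draw each missing genotype from the observed distribution (the missing-at-random assumption), estimate the quantity of interest on each completed data set, and pool the estimates. This is a generic illustration with toy data, not the authors' Bayesian implementation; the simple slope stands in for the instrument-outcome association:

```python
import random
import statistics

def multiple_imputation_slope(y, g, m=20, seed=1):
    """Multiple imputation for sporadically missing genotypes (g coded
    0/1/2, None = missing) in a regression of outcome y on g. Each of the
    m completed data sets fills missing values by sampling from the
    observed genotype distribution; the m slope estimates are pooled by
    averaging (Rubin's rule for the point estimate)."""
    rng = random.Random(seed)
    observed = [gi for gi in g if gi is not None]
    slopes = []
    for _ in range(m):
        gc = [gi if gi is not None else rng.choice(observed) for gi in g]
        mg, my = statistics.fmean(gc), statistics.fmean(y)
        sxy = sum((a - mg) * (b - my) for a, b in zip(gc, y))
        sxx = sum((a - mg) ** 2 for a in gc)
        slopes.append(sxy / sxx)
    return statistics.fmean(slopes)

# Toy data: outcome generated from the true genotype, then 20% hidden.
true_g = [0, 1, 2] * 10
y = [0.5 * t for t in true_g]
g = [None if i % 5 == 0 else t for i, t in enumerate(true_g)]
print(multiple_imputation_slope(y, g))  # near the true slope 0.5
```

Pooling across imputations is what recovers the precision lost to a complete-case analysis, which discards every participant with any missing SNP.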
Thermally integrated staged methanol reformer and method
Skala, Glenn William; Hart-Predmore, David James; Pettit, William Henry; Borup, Rodney Lynn
2001-01-01
A thermally integrated two-stage methanol reformer including a heat exchanger and first and second reactors colocated in a common housing in which a gaseous heat transfer medium circulates to carry heat from the heat exchanger into the reactors. The heat transfer medium comprises principally hydrogen, carbon dioxide, methanol vapor and water vapor formed in a first stage reforming reaction. A small portion of the circulating heat transfer medium is drawn off and reacted in a second stage reforming reaction which substantially completes the reaction of the methanol and water remaining in the drawn-off portion. Preferably, a PrOx reactor will be included in the housing upstream of the heat exchanger to supplement the heat provided by the heat exchanger.
System and method for inventorying multiple remote objects
Carrender, Curtis L.; Gilbert, Ronald W.
2007-10-23
A system and method of inventorying multiple objects utilizing a multi-level or a chained radio frequency identification system. The system includes a master tag and a plurality of upper level tags and lower level tags associated with respective objects. The upper and lower level tags communicate with each other and the master tag so that reading of the master tag reveals the presence and absence of upper and lower level tags. In the chained RF system, the upper and lower level tags communicate locally with each other in a manner so that more remote tags that are out of range of some of the upper and lower level tags have their information relayed through adjacent tags to the master tag and thence to a controller.
System and method for inventorying multiple remote objects
Carrender, Curtis L.; Gilbert, Ronald W.
2009-12-29
A system and method of inventorying multiple objects utilizing a multi-level or a chained radio frequency identification system. The system includes a master tag and a plurality of upper level tags and lower level tags associated with respective objects. The upper and lower level tags communicate with each other and the master tag so that reading of the master tag reveals the presence and absence of upper and lower level tags. In the chained RF system, the upper and lower level tags communicate locally with each other in a manner so that more remote tags that are out of range of some of the upper and lower level tags have their information relayed through adjacent tags to the master tag and thence to a controller.
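The chained-RFID idea described above amounts to a graph traversal: reading the master tag reveals every tag reachable through the relay chain of in-range neighbors. A minimal sketch, with the graph representation (tag -> tags within radio range) as an assumption for illustration:

```python
def inventory(master, in_range):
    """Depth-first relay through in-range neighbors: returns the set of all
    tags whose information can reach the master through adjacent tags.
    in_range maps each tag id to the tag ids within its radio range."""
    seen, stack = set(), [master]
    while stack:
        tag = stack.pop()
        if tag not in seen:
            seen.add(tag)                          # tag reports its presence
            stack.extend(in_range.get(tag, []))    # and relays its neighbors
    return seen

# Chain: master <-> A <-> B <-> C; C is out of the master's direct range
# but its information is relayed through B and A.
links = {"master": ["A"], "A": ["master", "B"], "B": ["A", "C"], "C": ["B"]}
print(sorted(inventory("master", links)))
```

Absence detection follows by comparing the returned set against the expected manifest of tags.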
Treatment of domain integrals in boundary element methods
Nintcheu Fata, Sylvain
2012-01-01
A systematic and rigorous technique to calculate domain integrals without a volume-fitted mesh has been developed and validated in the context of a boundary element approximation. In the proposed approach, a domain integral involving a continuous or weakly-singular integrand is first converted into a surface integral by means of straight-path integrals that intersect the underlying domain. Then, the resulting surface integral is carried out either via analytic integration over boundary elements or by use of standard quadrature rules. This domain-to-boundary integral transformation is derived from an extension of the fundamental theorem of calculus to higher dimension, and the divergence theorem. In establishing the method, it is shown that the higher-dimensional version of the first fundamental theorem of calculus corresponds to the well-known Poincaré lemma. The proposed technique can be employed to evaluate integrals defined over simply- or multiply-connected domains with Lipschitz boundaries which are embedded in a Euclidean space of arbitrary but finite dimension. Combined with the singular treatment of surface integrals that is widely available in the literature, this approach can also be utilized to effectively deal with boundary-value problems involving non-homogeneous source terms by way of a collocation or a Galerkin boundary integral equation method using only the prescribed surface discretization. Sample problems associated with the three-dimensional Poisson equation and featuring the Newton potential are successfully solved by a constant element collocation method to validate this study.
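The domain-to-boundary conversion can be seen in miniature via the divergence theorem: choosing F with div F = f turns the domain integral of f into a boundary integral of F·n. For f = 1 on the unit disk, F = (x/2, y/2) works and the boundary integral recovers the disk's area. This is only the underlying identity, not the paper's straight-path construction:

```python
import math

def boundary_integral(n=1000):
    """Evaluate the domain integral of f = 1 over the unit disk as a
    boundary integral of F·n with F = (x/2, y/2), since div F = 1.
    Midpoint quadrature over the unit circle; result should equal pi."""
    total = 0.0
    ds = 2 * math.pi / n                       # arc-length element
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        x, y = math.cos(t), math.sin(t)        # point on the unit circle
        nx, ny = x, y                          # outward unit normal
        fx, fy = x / 2, y / 2                  # F with div F = 1
        total += (fx * nx + fy * ny) * ds      # accumulate F·n ds
    return total

print(boundary_integral())  # ≈ pi, the disk's area
```

The paper's contribution is precisely a systematic way to construct such boundary representations for general integrands and Lipschitz domains, without a volume mesh.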
Higher order time integration methods for two-phase flow
NASA Astrophysics Data System (ADS)
Kees, Christopher E.; Miller, Cass T.
Time integration methods that adapt in both the order of approximation and time step have been shown to provide efficient solutions to Richards' equation. In this work, we extend the same method of lines approach to solve a set of two-phase flow formulations and address some mass conservation issues from the previous work. We analyze these formulations and the nonlinear systems that result from applying the integration methods, placing particular emphasis on their index, range of applicability, and mass conservation characteristics. We conduct numerical experiments to study the behavior of the numerical models for three test problems. We demonstrate that higher order integration in time is more efficient than standard low-order methods for a variety of practical grids and integration tolerances, that the adaptive scheme successfully varies the step size in response to changing conditions, and that mass balance can be maintained efficiently using variable-order integration and an appropriately chosen numerical model formulation.
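The adaptive machinery the abstract relies on can be illustrated in its simplest form: an error estimator drives the step size up or down to hold the local error near a tolerance (here a fixed-order Heun/Euler pair rather than the variable-order method-of-lines integrator the authors use; names and constants are assumptions):

```python
import math

def adaptive_heun(f, t0, y0, t_end, tol=1e-6):
    """Adaptive step-size integration of y' = f(t, y): Heun's method with
    an embedded Euler step as local-error estimator. The step shrinks on
    rejection and grows when the error is comfortably below tol."""
    t, y, h = t0, y0, (t_end - t0) / 100
    while t < t_end:
        h = min(h, t_end - t)                  # do not overshoot the end
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_heun = y + h * (k1 + k2) / 2         # second-order candidate
        err = abs(y_heun - (y + h * k1))       # Heun vs Euler difference
        if err <= tol or h < 1e-12:
            t, y = t + h, y_heun               # accept the step
        # Standard step-size controller, clipped to [0.2x, 2x].
        h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-30)) ** 0.5))
    return y

# Test problem y' = -y, y(0) = 1, so y(1) = exp(-1).
print(adaptive_heun(lambda t, y: -y, 0.0, 1.0, 1.0))
```

Production variable-order codes add order selection on top of this step-size control, which is where the efficiency gains reported above come from.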
NASA Astrophysics Data System (ADS)
McGillivary, P. A.; Borges de Sousa, J.; Martins, R.; Rajan, K.
2012-12-01
Autonomous platforms are increasingly used as components of Integrated Ocean Observing Systems and oceanographic research cruises. Systems deployed can include gliders or propeller-driven autonomous underwater vessels (AUVs), autonomous surface vessels (ASVs), and unmanned aircraft systems (UAS). Prior field campaigns have demonstrated successful communication, sensor data fusion and visualization for studies using gliders and AUVs. However, additional requirements exist for incorporating ASVs and UASs into ship operations. Optimally integrating these systems into research vessel data management and operational planning systems involves addressing three key issues: real-time field data availability, platform coordination, and data archiving for later analysis. A fleet of AUVs, ASVs and UAS deployed from a research vessel is best operated as a system integrated with the ship, provided communications among them can be sustained. For this purpose, Disruption-Tolerant Networking (DTN) software protocols for operation in communication-challenged environments help ensure reliable high-bandwidth communications. Additionally, system components need to have considerable onboard autonomy, namely adaptive sampling capabilities using their own onboard sensor data stream analysis. We discuss Oceanographic Decision Support System (ODSS) software currently used for situational awareness and planning onshore; in the near future, event detection and response will be coordinated among multiple vehicles. Results from recent field studies from oceanographic research vessels using AUVs, ASVs and UAS, including the Rapid Environmental Picture (REP-12) cruise, are presented describing methods and results for use of multi-vehicle communication and deliberative control networks, adaptive sampling with single and multiple platforms, issues relating to data management and archiving, and finally challenges that remain in addressing these technological issues. Significantly, the
Inaccuracy in the treatment of multiple-order diffraction by secondary-edge-source methods.
Summers, Jason E
2013-06-01
Existing secondary-edge-source methods based on the Biot-Tolstoy solution for diffraction from an infinite wedge compute multiple-order diffraction by cascading the integration over secondary sources used to determine first-order diffraction from the edge. It is demonstrated here that this approach errs in some important cases because it neglects slope-diffraction contributions. This error is illustrated by considering the case of an infinite slit in a thin, hard screen. Comparisons with measurements for this case and analytical solutions for the case of a circular aperture in a thin, hard screen are used as a basis to gauge the magnitude of the error.
Criteria for quantitative and qualitative data integration: mixed-methods research methodology.
Lee, Seonah; Smith, Carrol A M
2012-05-01
Many studies have emphasized the need for and importance of a mixed-methods approach in the evaluation of clinical information systems. However, those studies offered no criteria to guide the integration of multiple data sets. Integrating different data sets actualizes the paradigm that a mixed-methods approach espouses; thus, criteria are required that give the right direction for integrating quantitative and qualitative data. The first author used a set of criteria organized from a literature search on the integration of multiple data sets in mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data from a previous mixed-methods study, integration of quantitative and qualitative data was achieved in a systematic manner, which helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
Until today, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple-hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations: first, to estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process; second, to explore the influence of landscape discretization and parameterization from multiple datasets and user decisions; third, to employ several numerical solvers for the integration of the governing ordinary differential equations and study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
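The explicit-versus-implicit trade-off mentioned above is easy to demonstrate on a stiff test equation: explicit Euler is cheap per step but unstable once the step leaves its stability region, while implicit Euler remains stable at any step size (at the cost of solving an equation per step, closed-form here). The test problem and step sizes are illustrative choices, not the study's hydrological ODEs:

```python
def explicit_euler(lam, h, n):
    """Explicit Euler for y' = lam * y, y(0) = 1: stable only if |1 + h*lam| <= 1."""
    y = 1.0
    for _ in range(n):
        y += h * lam * y
    return y

def implicit_euler(lam, h, n):
    """Implicit Euler for the same problem: y_new = y + h*lam*y_new, solved
    in closed form; unconditionally stable for lam < 0."""
    y = 1.0
    for _ in range(n):
        y = y / (1 - h * lam)
    return y

# Stiff problem y' = -50 y over [0, 1] with h = 0.05: |1 + h*lam| = 1.5 > 1,
# so the explicit solution oscillates and grows while the implicit one decays.
print(explicit_euler(-50.0, 0.05, 20))   # blows up
print(implicit_euler(-50.0, 0.05, 20))   # decays toward 0
```

For non-stiff processes, however, a small-step explicit method can beat an implicit one in wall-clock time, which matches the study's observation that simple explicit solvers sometimes suffice.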
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years will address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; D) quantify and justify the reliability estimate for systems developed using various methods.
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
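The patent does not specify which nonlinear measures are used; a representative example from chaotic time-series analysis is the correlation sum C(r), the fraction of delay-vector pairs closer than r, whose scaling with r underlies the correlation dimension. A minimal sketch with an illustrative logistic-map series:

```python
def correlation_sum(series, r, dim=2):
    """Correlation sum C(r) over delay vectors of embedding dimension dim:
    the fraction of distinct vector pairs within Chebyshev distance r.
    One standard nonlinear measure from chaotic time-series analysis."""
    pts = [tuple(series[i:i + dim]) for i in range(len(series) - dim + 1)]
    n = len(pts)
    close = sum(1 for i in range(n) for j in range(i + 1, n)
                if max(abs(a - b) for a, b in zip(pts[i], pts[j])) < r)
    return 2.0 * close / (n * (n - 1))

# Nonlinear test data: the chaotic logistic map x -> 4x(1 - x).
x, series = 0.3, []
for _ in range(200):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
print(correlation_sum(series, 0.05), correlation_sum(series, 0.5))
```

Tracking time-serial trends in such measures across monitoring windows, and comparing them between states, is the detection step the patent describes.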
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process monitor nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Integrated management of thesis using clustering method
NASA Astrophysics Data System (ADS)
Astuti, Indah Fitri; Cahyadi, Dedy
2017-02-01
The thesis is one of the major requirements for a student pursuing a bachelor degree. In fact, finishing the thesis involves a long process including consultation, writing the manuscript, conducting the chosen method, seminar scheduling, searching for references, and the appraisal process by the board of mentors and examiners. Unfortunately, most students find it hard to match all the lecturers' free time to sit together in a seminar room in order to examine the thesis. Therefore, the seminar scheduling process should be the top priority to be solved. A manual mechanism for this task no longer fulfills the need. People on campus, including students, staff, and lecturers, demand a system in which all the stakeholders can interact with each other and manage the thesis process without conflicts in their timetables. A branch of computer science named Management Information Systems (MIS) could be a breakthrough in dealing with thesis management. This research applies a method called clustering to distinguish certain categories using mathematical formulas. A system is then developed along with the method to create a well-managed tool providing some main facilities such as seminar scheduling, consultation and review process, thesis approval, assessment process, and also a reliable database of theses. The database plays an important role for present and future purposes.
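The abstract names clustering without specifying the algorithm; plain k-means (Lloyd's algorithm) is a common choice and illustrates the idea. The feature vectors here are made-up 2-D points standing in for whatever thesis attributes the system actually clusters:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's k-means on a list of numeric tuples: alternately assign each
    point to its nearest center, then move each center to its group mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            groups[i].append(p)
        centers = [tuple(sum(c) / len(g) for c in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two obvious groups of thesis feature vectors (illustrative data).
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
centers, groups = kmeans(pts, 2)
print(sorted(centers))
```

In the described system, each cluster would map to a category (e.g., a pool of theses sharing examiners or topics) that the scheduler can treat as a unit.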
Damping identification in frequency domain using integral method
NASA Astrophysics Data System (ADS)
Guo, Zhiwei; Sheng, Meiping; Ma, Jiangang; Zhang, Wulin
2015-03-01
A new method for damping identification of linear systems in the frequency domain is presented, using the frequency response function (FRF) with an integral method. The FRF curve is first transformed to another type of frequency-related curve by changing the representations of the horizontal and vertical axes. For the newly constructed frequency-related curve, integration is conducted and the area formed under the new curve is used to determine the damping. Three different integral-based methods are proposed in this paper, called the FDI-1, FDI-2 and FDI-3 methods, respectively. For a single-degree-of-freedom (Sdof) system, the relation between the integrated area and the loss factor is derived theoretically for each method. Numerical simulation and experimental results show that the proposed integral methods have high precision and strong noise resistance, and are very stable in repeated measurements. Among the three integral methods, the FDI-3 method is the most recommended because of its higher accuracy and simpler algorithm. The new methods are limited to linear systems in which modes are well separated; for systems with closely spaced modes, a mode decomposition process should be conducted first.
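The abstract does not give the FDI-1..3 formulas, but the setting can be illustrated with the textbook half-power bandwidth identification of the loss factor from an Sdof FRF, eta ≈ (f2 - f1)/fn at the 1/sqrt(2) level; the FRF model and parameter values are assumptions:

```python
import math

def sdof_frf_mag(f, fn, eta):
    """|FRF| of a single-dof system with hysteretic damping (loss factor
    eta) and natural frequency fn, normalized to unit static response."""
    r = (f / fn) ** 2
    return 1.0 / math.sqrt((1 - r) ** 2 + eta ** 2)

# Half-power bandwidth identification (a standard frequency-domain method,
# not the paper's FDI formulas): eta ~ (f2 - f1) / fn where |FRF| drops to
# peak / sqrt(2).
eta_true, fn = 0.02, 10.0
fs = [fn * (0.9 + 0.0001 * i) for i in range(2001)]   # 9..11 Hz sweep
mags = [sdof_frf_mag(f, fn, eta_true) for f in fs]
half = max(mags) / math.sqrt(2)
above = [f for f, m in zip(fs, mags) if m >= half]
eta_est = (above[-1] - above[0]) / fn
print(eta_est)  # close to the true loss factor 0.02
```

Integral methods like FDI trade this two-point bandwidth reading for an area under a transformed curve, which is why they are reported to be more robust to measurement noise.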
Monitoring gray wolf populations using multiple survey methods
Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.
2013-01-01
The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any 1 survey method.
Neural Network Emulation of the Integral Equation Model with Multiple Scattering
Pulvirenti, Luca; Ticconi, Francesca; Pierdicca, Nazzareno
2009-01-01
The Integral Equation Model with multiple scattering (IEMM) represents a well-established method that provides a theoretical framework for the scattering of electromagnetic waves from rough surfaces. A critical aspect is the long computational time required to run such a complex model. To deal with this problem, a neural network technique is proposed in this work. In particular, we have adopted neural networks to reproduce the backscattering coefficients predicted by IEMM at L- and C-bands, thus making reference to presently operative satellite radar sensors, i.e., the SAR aboard ERS-2 and ASAR aboard ENVISAT (C-band), and PALSAR aboard ALOS (L-band). The neural network-based model has been designed for radar observations of both flat and tilted surfaces, in order to make it applicable to hilly terrain as well. The assessment of the proposed approach has been carried out by comparing neural network-derived backscattering coefficients with IEMM-derived ones, using databases different from those employed to train the networks. The outcomes seem to prove the feasibility of relying on a neural network approach to efficiently and reliably approximate an electromagnetic model of surface scattering. PMID:22408496
Sandroff, Brian M.; Pula, John H.; Motl, Robert W.
2013-01-01
Background. Retinal nerve fiber layer thickness (RNFLT) and total macular volume (TMV) represent markers of neuroaxonal degeneration within the anterior visual pathway that might correlate with ambulation in persons with multiple sclerosis (MS). Objective. This study examined the associations between RNFLT and TMV with ambulatory parameters in MS. Methods. Fifty-eight MS patients underwent a neurological examination for generation of an expanded disability status scale (EDSS) score and measurement of RNFLT and TMV using optical coherence tomography (OCT). Participants completed the 6-minute walk (6MW) and the timed 25-foot walk (T25FW). The associations were examined using generalized estimating equation models that accounted for within-patient, inter-eye correlations, and controlled for disease duration, EDSS score, and age. Results. RNFLT was not significantly associated with 6MW (P = 0.99) or T25FW (P = 0.57). TMV was significantly associated with 6MW (P = 0.023) and T25FW (P = 0.005). The coefficients indicated that unit differences in 6MW (100 feet) and T25FW (1 second) were associated with 0.040 and −0.048 unit differences in TMV (mm3), respectively. Conclusion. Integrity of the anterior visual pathway, particularly TMV, might represent a noninvasive measure of neuroaxonal degeneration that is correlated with ambulatory function in MS. PMID:23864950
Exponential Methods for the Time Integration of Schroedinger Equation
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We are interested in exploiting the special structure of this equation, and therefore examine the symmetry, symplecticity and approximation of invariants of the proposed methods, which allows integration over long times with reasonable accuracy. Computational efficiency is also our aim, so we perform numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
Method for distinguishing multiple targets using time-reversal acoustics
Berryman, James G.
2004-06-29
A method for distinguishing multiple targets using time-reversal acoustics. Time-reversal acoustics uses an iterative process to determine the optimum signal for locating a strongly reflecting target in a cluttered environment. An acoustic array sends a signal into a medium, and then receives the returned/reflected signal. This returned/reflected signal is then time-reversed and sent back into the medium again, and again, until the signal being sent and received is no longer changing. At that point, the array has isolated the largest eigenvalue/eigenvector combination and has effectively determined the location of a single target in the medium (the one that is most strongly reflecting). After the largest eigenvalue/eigenvector combination has been determined, to determine the location of other targets, instead of sending back the same signals, the method sends back these time reversed signals, but half of them will also be reversed in sign. There are various possibilities for choosing which half to do sign reversal. The most obvious choice is to reverse every other one in a linear array, or as in a checkerboard pattern in 2D. Then, a new send/receive, send-time reversed/receive iteration can proceed. Often, the first iteration in this sequence will be close to the desired signal from a second target. In some cases, orthogonalization procedures must be implemented to assure the returned signals are in fact orthogonal to the first eigenvector found.
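The iterative send/receive/time-reverse loop described above is, in essence, a power iteration on the array's round-trip scattering operator, and the final orthogonalization step is a Gram-Schmidt projection. A minimal stdlib-only sketch; the 3-element response matrix and the alternating sign-reversal pattern are illustrative assumptions, not values from the patent:

```python
import math

def power_iteration(apply_medium, signal, iters=50):
    """Send/receive/time-reverse until the signal stops changing; this
    converges to the dominant eigenvector of the round-trip operator,
    i.e. the signature of the most strongly reflecting target."""
    for _ in range(iters):
        returned = apply_medium(signal)
        norm = math.sqrt(sum(x * x for x in returned))
        signal = [x / norm for x in returned]  # renormalize each round trip
    return signal

# Hypothetical symmetric round-trip matrix for a 3-element array.
M = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

def apply_medium(s):
    return [sum(M[i][j] * s[j] for j in range(3)) for i in range(3)]

v1 = power_iteration(apply_medium, [1.0, 0.0, 0.0])

# Second target: flip the sign of every other array element (the linear-array
# choice described above), then orthogonalize against v1 (Gram-Schmidt).
flipped = [v1[0], -v1[1], v1[2]]
dot = sum(a * b for a, b in zip(flipped, v1))
v2_start = [a - dot * b for a, b in zip(flipped, v1)]
```

Here `v1` plays the role of the strongest target's eigenvector, and the sign-flipped, orthogonalized `v2_start` is a starting signal biased toward a second, weaker target.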
NASA Astrophysics Data System (ADS)
Platteter, Dale G.; Cheek, Tom F., Jr.
1988-12-01
A description is given of the radiation improvements obtained by fabricating bipolar integrated circuits on oxygen-implanted silicon-on-insulator substrates that were manufactured with multiple (low-dose) implants. Bipolar 74ALS00 gates fabricated on these substrates showed an improvement in total-dose and dose-rate radiation response over identical circuits fabricated in bulk silicon. Defects in SIMOX material were reduced by over four orders of magnitude. The results demonstrate that bipolar devices fabricated on multiple-implant SIMOX substrates can compete with conventional dielectric isolation for many radiation-hardened system applications.
A Rationale for Mixed Methods (Integrative) Research Programmes in Education
ERIC Educational Resources Information Center
Niaz, Mansoor
2008-01-01
Recent research shows that research programmes (quantitative, qualitative and mixed) in education are not displaced (as suggested by Kuhn) but rather lead to integration. The objective of this study is to present a rationale for mixed methods (integrative) research programs based on contemporary philosophy of science (Lakatos, Giere, Cartwright,…
Multiple trim magnets, or "magic fingers," for insertion device field integral correction
Hoyer, E.; Marks, S.; Pipersky, P.; Schlueter, R.
1995-02-01
Multiple trim magnets (MTMs), also known as "magic fingers," are an arrangement of magnets for reducing integrated magnetic-field errors in insertion devices. The idea is to use transverse arrays of permanent magnets, hence the name "multiple trim magnets," above and below the midplane, to correct both normal and skew longitudinal magnetic-field integral errors in a device. MTMs are typically installed at the ends of an ID. Adjustments are made by changing either the size, position, or orientation of each trim magnet. Application of the MTMs to the ALS undulators reduced both the normal and skew longitudinal field integral errors, over the entire 20 mm × 60 mm "good field region" of the beam aperture, by as much as an order of magnitude. The requirements included corrections of field and gradients outside the multipole convergence radius. Additionally, these trim magnet arrays provided correction of the linear component of the integrated field gradients for particles with trajectories not parallel to the nominal beam axis. The MTM concept, design, construction, tests that demonstrated feasibility, and magnetic-field integral reduction of ALS undulators are presented.
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
NASA Astrophysics Data System (ADS)
Rao, Gottipaty N.; Karpf, Andreas
2011-05-01
We report on the development of a new sensor for NO2 with ultrahigh sensitivity of detection. This has been accomplished by combining off-axis integrated cavity output spectroscopy (OA-ICOS), which can provide path lengths of the order of several km in a small-volume cell, with multiple line integrated absorption spectroscopy (MLIAS), in which we integrate the absorption spectra over a large number of rotational-vibrational transitions of the molecular species to further improve the sensitivity. Employing an external cavity tunable quantum cascade laser operating in the 1601 - 1670 cm-1 range and a high-finesse optical cavity, the absorption spectra of NO2 over 100 transitions in the R-band have been recorded. From the observed linear relationship between integrated absorption and NO2 concentration, we report an effective detection sensitivity of 10 ppt for NO2. To the best of our knowledge, this is among the most sensitive levels of detection of NO2 to date. A sensitive NO2 sensor will be helpful for monitoring ambient air quality and combustion emissions from automobiles, power plants, and aircraft, and for the detection of nitrate-based explosives, which are commonly used in improvised explosive devices (IEDs). Additionally, such a sensor would be valuable for the study of the complex chemical reactions that occur in the atmosphere, resulting in the formation of photochemical smog, tropospheric ozone, and acid rain.
Coupling equivalent plate and finite element formulations in multiple-method structural analyses
NASA Technical Reports Server (NTRS)
Giles, Gary L.; Norwood, Keith
1994-01-01
A coupled multiple-method analysis procedure for use late in conceptual design or early in preliminary design of aircraft structures is described. Using this method, aircraft wing structures are represented with equivalent plate models, and structural details such as engine/pylon structure, landing gear, or a 'stick' model of a fuselage are represented with beam finite element models. These two analysis methods are implemented in an integrated multiple-method formulation that involves the assembly and solution of a combined set of linear equations. The corresponding solution vector contains coefficients of the polynomials that describe the deflection of the wing and also the components of translations and rotations at the joints of the beam members. Two alternative approaches for coupling the methods are investigated; one using transition finite elements and the other using Lagrange multipliers. The coupled formulation is applied to the static analysis and vibration analysis of a conceptual design model of a fighter aircraft. The results from the coupled method are compared with corresponding results from an analysis in which the entire model is composed of finite elements.
Scientific concepts and applications of integrated discrete multiple organ co-culture technology
Gayathri, Loganathan; Dhanasekaran, Dharumadurai; Akbarsha, Mohammad A.
2015-01-01
Over several decades, animals have been used as models to investigate human-specific drug toxicity, but the outcomes are not always reliably extrapolated to humans in vivo. An appropriate in vitro human-based experimental system that incorporates in vivo parameters is required for the evaluation of multiple organ interaction, multiple organ/organ-specific toxicity, and metabolism of xenobiotic compounds, so as to avoid the use of animals for toxicity testing. One such versatile in vitro technology in which human primary cells can be used is integrated discrete multiple organ co-culture (IdMOC). The IdMOC system adopts a wells-within-a-well concept that facilitates co-culture of cells from different organs in a discrete manner, separately in their respective media in smaller inner wells, which are then interconnected by an overlay of a universal medium in the large containing well. This novel in vitro approach mimics the in vivo situation to a great extent, employing cells from multiple organs that are physically separated but interconnected by a medium that mimics the systemic circulation and provides for multiple organ interaction. Applications of IdMOC include assessment of multiple organ toxicity, drug distribution, organ-specific toxicity, screening of anticancer drugs, metabolic cytotoxicity, etc. PMID:25969651
Modeling of multiple-optical-axis pattern-integrated interference lithography systems.
Sedivy, Donald E; Gaylord, Thomas K
2014-06-01
The image quality and collimation in a multiple-optical-axis pattern-integrated interference lithography system are evaluated for an elementary optical system composed of single-element lenses. Image quality and collimation are individually and jointly optimized for these lenses. Example images for a jointly optimized system are simulated using a combination of ray tracing and Fourier analysis. Even with these nonoptimized components, reasonable fidelity is shown to be possible.
Huff, Markus; Papenmeier, Frank
2013-01-14
In multiple-object tracking, participants can track several moving objects among identical distractors. It has recently been shown that the human visual system uses motion information in order to keep track of targets (St. Clair et al., Journal of Vision, 10(4), 1-13). Texture on the surface of an object that moved in the opposite direction to the object itself impaired tracking performance. In this study, we examined the temporal interval at which texture motion and object motion are integrated in dynamic scenes. In two multiple-object tracking experiments, we manipulated the texture motion on the objects: the texture either moved in the same direction as the objects, in the opposite direction, or alternated between the same and opposite direction at varying intervals. In Experiment 1, we show that the integration of object motion and texture motion can take place at intervals as short as 100 ms. In Experiment 2, we show that there is a linear relationship between the proportion of opposite texture motion and tracking performance. We suggest that texture motion might cause shifts in perceived object locations, thus influencing tracking performance.
The eye in hand: predicting others' behavior by integrating multiple sources of information
Pezzulo, Giovanni; Costantini, Marcello
2015-01-01
The ability to predict the outcome of other beings' actions confers significant adaptive advantages. Experiments have shown that human action observation can use multiple information sources, but it is currently unknown how they are integrated and how conflicts between them are resolved. To address this issue, we designed an action observation paradigm requiring the integration of multiple, potentially conflicting sources of evidence about the action target: the actor's gaze direction, hand preshape, and arm trajectory, and their availability and relative uncertainty in time. In two experiments, we analyzed participants' action prediction ability by using eye tracking and behavioral measures. The results show that the information provided by the actor's gaze affected participants' explicit predictions. However, results also show that gaze information was disregarded as soon as information on the actor's hand preshape was available, and this latter information source had widespread effects on participants' prediction ability. Furthermore, as the action unfolded in time, participants relied increasingly on the arm movement source, showing sensitivity to its increasing informativeness. Therefore, the results suggest that the brain forms a robust estimate of the actor's motor intention by integrating multiple sources of information. However, when informative motor cues such as a preshaped hand with a given grip are available and might help in selecting action targets, people tend to capitalize on such motor cues, thus becoming more accurate and faster in inferring the object to be manipulated by the other's hand. PMID:25568158
NASA Astrophysics Data System (ADS)
Alqurashi, Muwaffaq; Wang, Jinling
2015-03-01
For positioning, navigation and timing (PNT) purposes, GNSS or GNSS/INS integration is utilised to provide real-time solutions. However, any potential sensor failures or faulty measurements due to malfunctions of sensor components or harsh operating environments may cause unsatisfactory estimation of PNT parameters. The inability to immediately detect faulty measurements or sensor component failures will reduce the overall performance of the system. Real-time detection and identification of faulty measurements is therefore required to make the system more accurate and reliable for applications that need real-time solutions, such as real-time mapping for safety or emergency purposes. Consequently, it is necessary to implement an online fault detection and isolation (FDI) algorithm, a statistics-based approach to detect and identify multiple faults. However, further investigation of the performance of FDI under multiple-fault scenarios is still required. In this paper, the performance of the FDI method under multiple-fault scenarios is evaluated, e.g., for two, three and four faults in the GNSS and GNSS/INS measurements under different conditions of visible satellites and satellite geometry. In addition, reliability (e.g., minimal detectable bias, MDB) and separability (correlation coefficients between fault detection statistics) measures are investigated to assess the capability of the FDI method. A performance analysis of the FDI method is conducted under geometric constraints to show the importance of the FDI method in terms of fault detectability and separability for robust positioning and navigation in real-time applications.
NASA Astrophysics Data System (ADS)
Xie, Guizhong; Zhang, Dehai; Zhang, Jianming; Meng, Fannian; Du, Wenliao; Wen, Xiaoyu
2016-12-01
As a widely used numerical method, the boundary element method (BEM) is efficient for computer aided engineering (CAE). However, boundary integrals with near singularity need to be calculated accurately and efficiently to successfully apply BEM to CAE analysis of thin bodies. In this paper, the distance in the denominator of the fundamental solution is first rewritten in an equivalent form using an approximate expansion, and the original sinh method is revised into a new form considering the minimum distance and the approximate expansion. Second, the acquisition of the projection point by the Newton-Raphson method is introduced. We acquire the nearest point between the source point and the element edge by solving a cubic equation if the projection point lies outside the element, where boundary integrals with near singularity appear. Finally, the subtriangles of the local coordinate space are mapped into the integration space and the sinh method is applied in the integration space. The revised sinh method can be performed directly on the integration element. A verification test of our method is proposed. Results demonstrate that our method is effective for regularizing boundary integrals with near singularity.
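The core idea of the sinh method mentioned above can be illustrated on a 1-D nearly singular integral: substituting x = a + b·sinh(u), where a is the projection point and b the minimum distance, turns the peaked kernel into a smooth one. A stdlib-only sketch; the Simpson quadrature and the specific kernel are illustrative choices, not the paper's BEM implementation:

```python
import math

def simpson(f, lo, hi, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

# Nearly singular kernel on [-1, 1]: the source point projects to x = a
# at a small minimum distance b (the thin-body situation).
a, b = 0.3, 1e-3
f = lambda x: 1.0 / ((x - a) ** 2 + b ** 2)
exact = (math.atan((1 - a) / b) - math.atan((-1 - a) / b)) / b

naive = simpson(f, -1.0, 1.0, 40)   # same point count, misjudges the peak badly

# sinh transformation: x = a + b*sinh(u) gives dx/((x-a)^2 + b^2) = du/(b*cosh(u)),
# a smooth integrand that ordinary quadrature handles easily.
u_lo = math.asinh((-1 - a) / b)
u_hi = math.asinh((1 - a) / b)
g = lambda u: 1.0 / (b * math.cosh(u))
sinh_val = simpson(g, u_lo, u_hi, 40)
```

With the same 40 subintervals, the untransformed quadrature is off by an order of magnitude while the sinh-transformed one recovers the integral to a fraction of a percent, which is the regularizing effect the abstract describes.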
An integrated lean-methods approach to hospital facilities redesign.
Nicholas, John
2012-01-01
Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
PMID:25685841
Modulation of C. elegans Touch Sensitivity Is Integrated at Multiple Levels
Chen, Xiaoyin
2014-01-01
Sensory systems can adapt to different environmental signals. Here we identify four conditions that modulate anterior touch sensitivity in Caenorhabditis elegans after several hours and demonstrate that such sensory modulation is integrated at multiple levels to produce a single output. Prolonged vibration involving integrin signaling directly sensitizes the touch receptor neurons (TRNs). In contrast, hypoxia, the dauer state, and high salt reduce touch sensitivity by preventing the release of long-range neuroregulators, including two insulin-like proteins. Integration of these latter inputs occurs at upstream neurohormonal cells and at the insulin signaling cascade within the TRNs. These signals and those from integrin signaling converge to modulate touch sensitivity by regulating AKT kinases and DAF-16/FOXO. Thus, activation of either the integrin or insulin pathways can compensate for defects in the other pathway. This modulatory system integrates conflicting signals from different modalities, and adapts touch sensitivity to both mechanical and non-mechanical conditions. PMID:24806678
Understanding Physiology in the Continuum: Integration of Information from Multiple -Omics Levels
Kamisoglu, Kubra; Acevedo, Alison; Almon, Richard R.; Coyle, Susette; Corbett, Siobhan; Dubois, Debra C.; Nguyen, Tung T.; Jusko, William J.; Androulakis, Ioannis P.
2017-01-01
In this paper, we discuss approaches for integrating biological information reflecting diverse physiologic levels. In particular, we explore statistical and model-based methods for integrating transcriptomic, proteomic and metabolomics data. Our case studies reflect responses to a systemic inflammatory stimulus and in response to an anti-inflammatory treatment. Our paper serves partly as a review of existing methods and partly as a means to demonstrate, using case studies related to human endotoxemia and response to methylprednisolone (MPL) treatment, how specific questions may require specific methods, thus emphasizing the non-uniqueness of the approaches. Finally, we explore novel ways for integrating -omics information with PKPD models, toward the development of more integrated pharmacology models. PMID:28289389
Comparison of time integration methods for the evolution of galaxies
NASA Astrophysics Data System (ADS)
Degraaf, W.
In the simulation of the evolution of elliptical galaxies, Leap-Frog is currently the most frequently used time integration method. The question is whether other methods perform better than this classical method. Improvements may also be expected from the use of variable step lengths. We compare Leap-Frog with several other methods, namely a fourth-order Nystrom method, a symplectic method, and the DOPRI5 and DOPRI8 methods. DOPRI uses variable steps of its own accord; for the other methods we construct a variable-step procedure ourselves. The comparison of the methods is carried out on three Hamiltonian test problems.
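For reference, the Leap-Frog scheme that the comparison starts from can be sketched in a few lines. In kick-drift-kick form it is second order and symplectic, which is why its energy error stays bounded over long orbital integrations. The Kepler-potential test orbit below is an illustrative stand-in for a galactic potential, not one of the paper's test problems:

```python
import math

def accel(x, y):
    """Point-mass (Kepler) acceleration with GM = 1 -- a crude stand-in
    for a galactic potential."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def leapfrog(pos, vel, dt, steps):
    """Kick-drift-kick Leap-Frog: second order and symplectic, so the
    energy error oscillates instead of drifting secularly."""
    (x, y), (vx, vy) = pos, vel
    ax, ay = accel(x, y)
    for _ in range(steps):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
        x += dt * vx; y += dt * vy                 # drift
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay   # half kick
    return (x, y), (vx, vy)

def energy(pos, vel):
    (x, y), (vx, vy) = pos, vel
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

p0, v0 = (1.0, 0.0), (0.0, 1.0)                    # circular orbit, E = -0.5
p1, v1 = leapfrog(p0, v0, dt=0.01, steps=10000)    # roughly 16 orbital periods
drift = abs(energy(p1, v1) - energy(p0, v0))
```

After ten thousand fixed steps the energy error remains tiny; a non-symplectic method of the same order would typically show a steadily growing drift here, which is the trade-off the comparison in the abstract is probing.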
Explicit Integration of Extremely Stiff Reaction Networks: Partial Equilibrium Methods
Guidry, Mike W; Billings, J. J.; Hix, William Raphael
2013-01-01
In two preceding papers [1,2] we have shown that, when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium, and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that plausibly can deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that algebraically stabilized explicit methods may offer alternatives to implicit integration of even extremely stiff systems, and that these methods may permit integration of much larger networks than have been feasible previously in a variety of fields.
NASA Astrophysics Data System (ADS)
Li, D. H.; Zhang, X.; Sze, K. Y.; Liu, Y.
2016-10-01
In this paper, the extended layerwise method (XLWM), which was developed for laminated composite beams with multiple delaminations and transverse cracks (Li et al. in Int J Numer Methods Eng 101:407-434, 2015), is extended to laminated composite plates. Strong and weak discontinuous functions along the thickness direction are adopted to simulate multiple delaminations and interlaminar interfaces, respectively, whilst transverse cracks are modeled by the extended finite element method (XFEM). The interaction integral method and the maximum circumferential tensile criterion are used to calculate the stress intensity factor (SIF) and crack growth angle, respectively. The XLWM for laminated composite plates can accurately predict the displacement and stress fields near the crack tips and delamination fronts. The thickness distribution of SIF, and thus the crack growth angles in different layers, can be obtained. This information cannot be predicted by other existing shell elements enriched by XFEM. Several numerical examples are studied to demonstrate the capabilities of the XLWM in static response analyses, SIF calculations and crack growth predictions.
Laser housing having integral mounts and method of manufacturing same
Herron, Michael Alan; Brickeen, Brian Keith
2004-10-19
A housing adapted to position, support, and facilitate aligning various components, including an optical path assembly, of a laser. In a preferred embodiment, the housing is constructed from a single piece of material and broadly comprises one or more through-holes; one or more cavities; and one or more integral mounts, wherein the through-holes and the cavities cooperate to define the integral mounts. Securement holes machined into the integral mounts facilitate securing components within the integral mounts using set screws, adhesive, or a combination thereof. In a preferred method of making the housing, the through-holes and cavities are first machined into the single piece of material, with at least some of the remaining material forming the integral mounts.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, thermodynamic integration, which has not previously been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling, which conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
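The identity behind thermodynamic integration is log Z = ∫₀¹ E_β[log L] dβ, where the expectation is taken under the power posterior p_β ∝ π·L^β. A toy discrete-parameter sketch in which the expectations can be evaluated exactly (in the paper's setting they come from MCMC path sampling); the prior and likelihood values are illustrative:

```python
import math

# Toy discrete model: prior pi over four parameter values, likelihood L(theta).
pi = [0.1, 0.2, 0.3, 0.4]
L  = [0.5, 2.0, 1.0, 0.1]
logZ_direct = math.log(sum(p * l for p, l in zip(pi, L)))  # brute-force marginal

def expected_loglik(beta):
    """E_beta[log L] under the power posterior p_beta proportional to pi * L**beta."""
    w = [p * l ** beta for p, l in zip(pi, L)]
    return sum(wi * math.log(l) for wi, l in zip(w, L)) / sum(w)

# Thermodynamic integration: log Z = integral over beta in [0, 1] of E_beta[log L].
# Path sampling would estimate each expectation by MCMC at that beta; the tiny
# discrete space lets us evaluate it exactly and quadrature the path integral.
n = 200
betas = [i / n for i in range(n + 1)]
logZ_ti = sum(0.5 * (expected_loglik(betas[i]) + expected_loglik(betas[i + 1]))
              * (1.0 / n) for i in range(n))
```

The trapezoidal sum over the temperature path recovers the directly computed log marginal likelihood, which is exactly the consistency the study exploits when the parameter space is continuous and only samples are available.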
MS-kNN: protein function prediction by integrating multiple data sources
2013-01-01
Background Protein function determination is a key challenge in the post-genomic era. Experimental determination of protein functions is accurate, but time-consuming and resource-intensive. A cost-effective alternative is to use the known information about sequence, structure, and functional properties of genes and proteins to predict functions using statistical methods. In this paper, we describe the Multi-Source k-Nearest Neighbor (MS-kNN) algorithm for function prediction, which finds k-nearest neighbors of a query protein based on different types of similarity measures and predicts its function by weighted averaging of its neighbors' functions. Specifically, we used 3 data sources to calculate the similarity scores: sequence similarity, protein-protein interactions, and gene expressions. Results We report the results in the context of 2011 Critical Assessment of Function Annotation (CAFA). Prior to the CAFA submission deadline, we evaluated our algorithm on 1,302 human test proteins that were represented in all 3 data sources. Using only the sequence similarity information, MS-kNN had term-based Area Under the Curve (AUC) accuracy of Gene Ontology (GO) molecular function predictions of 0.728 when 7,412 human training proteins were used, and 0.819 when 35,622 training proteins from multiple eukaryotic and prokaryotic organisms were used. By aggregating predictions from all three sources, the AUC was further improved to 0.848. A similar result was observed on prediction of GO biological processes. Testing on 595 proteins that were annotated after the CAFA submission deadline showed that overall MS-kNN accuracy was higher than that of the baseline algorithms Gotcha and BLAST, which were based solely on sequence similarity information. Since only 10 of the 595 proteins were represented by all 3 data sources, and 66 by two data sources, the difference between 3-source and one-source MS-kNN was rather small. Conclusions Based on our results, we have several useful insights: (1
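The prediction step described above — find the k nearest neighbors under each similarity source, average their annotations weighted by similarity, then combine across sources — can be sketched as follows. The function name, the toy similarity values, and the equal source weighting are illustrative assumptions, not the CAFA submission's exact scheme:

```python
def ms_knn_predict(query_sims, labels, k=3, source_weights=None):
    """Multi-source kNN sketch: per similarity source, take the k most similar
    training proteins and average their 0/1 function annotations weighted by
    similarity; then average the per-source scores."""
    if source_weights is None:
        source_weights = {s: 1.0 for s in query_sims}
    score = total_w = 0.0
    for source, sims in query_sims.items():
        top = sorted(range(len(sims)), key=lambda i: -sims[i])[:k]
        denom = sum(sims[i] for i in top)
        if denom == 0.0:
            continue                      # this source carries no signal
        score += source_weights[source] * sum(sims[i] * labels[i] for i in top) / denom
        total_w += source_weights[source]
    return score / total_w if total_w else 0.0

# Toy data: 5 training proteins, one GO-term annotation, three sources.
labels = [1, 1, 0, 0, 1]
sims = {
    "sequence":   [0.9, 0.1, 0.8, 0.2, 0.0],
    "ppi":        [0.0, 0.7, 0.1, 0.6, 0.9],
    "expression": [0.5, 0.5, 0.5, 0.0, 0.0],
}
p = ms_knn_predict(sims, labels, k=2)     # score in [0, 1] for the GO term
```

Skipping sources with no signal for a query mirrors the practical situation in the abstract, where most test proteins were covered by only one or two of the three data sources.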
System and method for integrating hazard-based decision making tools and processes
Hodgin, C Reed [Westminster, CO
2012-03-20
A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.
Tuning of PID controllers for integrating systems using direct synthesis method.
Anil, Ch; Padma Sree, R
2015-07-01
A PID controller is designed for various forms of integrating systems with time delay using the direct synthesis method. The method is based on comparing the characteristic equation of the integrating system and PID controller with a filter against the desired characteristic equation. The desired characteristic equation comprises multiple poles placed at the same desired location. The tuning parameter is adjusted to achieve the desired robustness, and tuning rules in terms of process parameters are given for various forms of integrating systems. The tuning parameter can be selected for the desired robustness by specifying the maximum sensitivity (Ms) value. The proposed controller design method is applied to various transfer function models and to the nonlinear model equations of a jacketed CSTR to show its effectiveness and applicability.
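As a rough illustration of the control setting (not the paper's tuning rules), the sketch below simulates a PID loop on an integrating process with dead time, dy/dt = Kp·u(t − θ). The gains are illustrative placeholders chosen only to give a stable response, and derivative action is taken on the measurement to avoid setpoint kick.

```python
import numpy as np

Kp, theta, dt, T = 1.0, 0.5, 0.01, 60.0     # integrating process with dead time
kc, ti, td = 0.5, 10.0, 0.5                 # illustrative PID gains (not tuning-rule values)

n_delay = int(round(theta / dt))
u_buf = [0.0] * n_delay                     # transport-delay buffer for u(t - theta)
y, y_prev, integ, r = 0.0, 0.0, 0.0, 1.0    # setpoint r = 1

for _ in range(int(T / dt)):
    e = r - y
    integ += e * dt
    deriv = -(y - y_prev) / dt              # derivative on measurement (no setpoint kick)
    u = kc * (e + integ / ti + td * deriv)
    y_prev = y
    u_buf.append(u)
    y += dt * Kp * u_buf.pop(0)             # dy/dt = Kp * u(t - theta)
```

Because the plant itself integrates, the loop is type-1 even without integral action; the integral term here mainly rejects load disturbances.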
Application of integrated fluid-thermal-structural analysis methods
NASA Technical Reports Server (NTRS)
Wieting, Allan R.; Dechaumphai, Pramote; Bey, Kim S.; Thornton, Earl A.; Morgan, Ken
1988-01-01
Hypersonic vehicles operate in a hostile aerothermal environment which has a significant impact on their aerothermostructural performance. Significant coupling occurs between the aerodynamic flow field, structural heat transfer, and structural response, creating a multidisciplinary interaction. Interfacing state-of-the-art disciplinary analysis methods is not efficient, hence interdisciplinary analysis methods integrated into a single aerothermostructural analyzer are needed. The NASA Langley Research Center is developing such methods in the Langley Integrated Fluid-Thermal-Structural (LIFTS) analyzer. The evolution and status of LIFTS are reviewed and illustrated through applications.
[Study on plastic film thickness measurement by integral spectrum method].
Qiu, Chao; Sun, Xiao-Gang
2013-01-01
After the concept of a band Lambert law was proposed, band integral transmission was defined and a plastic-film thickness measurement model was built by analyzing the intensity variation as light passes through the film. Polypropylene film samples of different thicknesses were taken as the research object, and their spectral transmission was measured by a spectrometer. The relationship between thickness and band integral transmission was fitted using this model. The feasibility of developing a new broadband plastic-film thickness on-line measurement system based on this method was analyzed using an ideal blackbody at a temperature of 500 K. The experimental results indicate that plastic film thickness can be measured accurately by the integral spectrum method. An on-line thickness measurement system based on this method may solve the problems of systems based on the dual-monochromatic-light contrast method, such as low accuracy and poor universality.
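A minimal numeric sketch of the idea (the absorption spectrum and film parameters are hypothetical, not the paper's data): band integral transmission weights the spectral transmission exp(−α(λ)d) by a 500 K blackbody spectrum, and since the band-averaged transmission decreases monotonically with thickness d, d can be recovered from a single band-integrated measurement by bisection.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
lam = np.linspace(2e-6, 20e-6, 500)                      # 2-20 um measurement band

def planck(lam_m, T):
    """Blackbody spectral radiance (arbitrary overall scale)."""
    return 1.0 / (lam_m**5 * (np.exp(h * c / (lam_m * kB * T)) - 1.0))

weight = planck(lam, 500.0)                              # 500 K source weighting
alpha = 2.0e4 * (1.0 + 0.5 * np.sin(2e6 * np.pi * lam))  # hypothetical absorption, 1/m

def band_transmission(d):
    """Band integral transmission: source-weighted mean of exp(-alpha*d)."""
    T_spec = np.exp(-alpha * d)                          # band Lambert law per wavelength
    return np.sum(T_spec * weight) / np.sum(weight)

d_true = 60e-6                                           # 60 um film
t_meas = band_transmission(d_true)                       # "measured" band transmission

lo, hi = 0.0, 1e-3                                       # thickness bracket in metres
for _ in range(60):                                      # bisection on the monotone curve
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if band_transmission(mid) > t_meas else (lo, mid)
d_est = 0.5 * (lo + hi)
```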
Musoke, David; Miiro, George; Karani, George; Morris, Keith; Kasasa, Simon; Ndejjo, Rawlance; Nakiyingi-Miiro, Jessica; Guwatudde, David; Musoke, Miph Boses
2015-01-01
Background The World Health Organization recommends use of multiple approaches to control malaria. The integrated approach to malaria prevention advocates the use of several malaria prevention methods in a holistic manner. This study assessed perceptions and practices on integrated malaria prevention in Wakiso district, Uganda. Methods A clustered cross-sectional survey was conducted among 727 households from 29 villages using both quantitative and qualitative methods. Assessment was done on awareness of various malaria prevention methods, potential for use of the methods in a holistic manner, and reasons for dislike of certain methods. Households were classified as using integrated malaria prevention if they used at least two methods. Logistic regression was used to test for factors associated with the use of integrated malaria prevention while adjusting for clustering within villages. Results Participants knew of the various malaria prevention methods in the integrated approach, including use of insecticide treated nets (97.5%), removing mosquito breeding sites (89.1%), clearing overgrown vegetation near houses (97.9%), and closing windows and doors early in the evenings (96.4%). If trained, most participants (68.6%) would use all the suggested malaria prevention methods of the integrated approach. Among those who would not use all methods, the main reasons given were the methods being too many (70.2%) and cost (32.0%). Only 33.0% of households were using the integrated approach to prevent malaria. Use of integrated malaria prevention by households was associated with reading newspapers (AOR 0.34; 95% CI 0.22–0.53) and ownership of a motorcycle/car (AOR 1.75; 95% CI 1.03–2.98). Conclusion Although knowledge of malaria prevention methods was high and perceptions of the integrated approach promising, practice of integrated malaria prevention was relatively low. The use of the integrated approach can be improved by promoting use of multiple malaria prevention methods
When Curriculum and Technology Meet: Technology Integration in Methods Courses
ERIC Educational Resources Information Center
Keeler, Christy G.
2008-01-01
Reporting on the results of an action research study, this manuscript provides examples of strategies used to integrate technology into a content methods course. The study used reflective teaching of a social studies methods course at a major Southwestern university in 10 course sections over a four-semester period. In alignment with the research…
A Comparison of Treatment Integrity Assessment Methods for Behavioral Intervention
ERIC Educational Resources Information Center
Koh, Seong A.
2010-01-01
The purpose of this study was to examine the similarity of outcomes from three different treatment integrity (TI) methods, and to identify the method which best corresponded to the assessment of a child's behavior. Six raters were recruited through individual contact via snowball sampling. A modified intervention component list and 19 video clips…
Error Analysis and Calibration Method of a Multiple Field-of-View Navigation System
Shi, Shuai; Zhao, Kaichun; You, Zheng; Ouyang, Chenguang; Cao, Yongkui; Wang, Zhenzhou
2017-01-01
The Multiple Field-of-view Navigation System (MFNS) is a spacecraft subsystem built to realize the autonomous navigation of the Spacecraft Inside Tiangong Space Station. This paper introduces the basics of the MFNS, including its architecture, mathematical model and analysis, and numerical simulation of system errors. According to the performance requirement of the MFNS, the calibration of both intrinsic and extrinsic parameters of the system is assumed to be essential and pivotal. Hence, a novel method based on the geometrical constraints in object space, called checkerboard-fixed post-processing calibration (CPC), is proposed to solve the problem of simultaneously obtaining the intrinsic parameters of the cameras integrated in the MFNS and the transformation between the MFNS coordinate and the cameras’ coordinates. This method utilizes a two-axis turntable, and a prior alignment of the coordinates is needed. Theoretical derivation and practical operation of the CPC method are introduced. The calibration experiment results of the MFNS indicate that the extrinsic parameter accuracy of the CPC reaches 0.1° for each Euler angle and 0.6 mm for each position vector component (1σ). A navigation experiment verifies the calibration result and the performance of the MFNS. The MFNS is found to work properly, and the accuracy of the position vector components and Euler angle reaches 1.82 mm and 0.17° (1σ) respectively. The basic mechanism of the MFNS may be utilized as a reference for the design and analysis of multiple-camera systems. Moreover, the calibration method proposed has practical value owing to its convenience of use and potential for integration into a toolkit. PMID:28327538
Multiple Integration of the Heat-Conduction Equation for a Space Bounded From the Inside
NASA Astrophysics Data System (ADS)
Kot, V. A.
2016-03-01
An N-fold integration of the heat-conduction equation for a space bounded from the inside has been performed using a system of identical equalities with definition of the temperature function by a power polynomial with an exponential factor. It is shown that, in a number of cases, the approximate solutions obtained can be considered exact because their errors amount to hundredths or thousandths of a percent. The proposed method of N-fold integration represents an alternative to classical integral transformations.
Integral Method of Boundary Characteristics in Solving the Stefan Problem: Dirichlet Condition
NASA Astrophysics Data System (ADS)
Kot, V. A.
2016-09-01
The integral method of boundary characteristics is considered as applied to the solution of the Stefan problem with a Dirichlet condition. On the basis of the multiple integration of the heat-conduction equation, a sequence of identical equalities with boundary characteristics in the form of n-fold integrals of the surface temperature has been obtained. It is shown that, in the case where the temperature profile is defined by an exponential polynomial and the Stefan condition is not fulfilled at the moving interphase boundary, the accuracy of solving the Stefan problem with a Dirichlet condition by the integral method of boundary characteristics is higher by several orders of magnitude than the accuracy of other known approximate methods, and that the solutions obtained with fourth- to sixth-degree polynomials are exact in essence. In the accuracy of calculating the position of the interphase boundary, this method surpasses the known numerical methods by many orders of magnitude; in the accuracy of calculating the temperature profile, it is approximately equal to them.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation problem equivalent to a linear program is constructed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving a sequence of linear relaxation problems. Global convergence has been proved, and results on several sample problems and a small random experiment show that the proposed algorithm is feasible and efficient.
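The overall structure of such an algorithm can be sketched as follows. This is a generic interval-bound branch and bound for minimizing a product of two affine functions over a box, not the authors' two-phase linear relaxation; the test instance and tolerance are illustrative.

```python
import heapq
import itertools
import numpy as np

def bb_min_product(c1, d1, c2, d2, lo, hi, tol=1e-4):
    """Minimize (c1.x + d1)*(c2.x + d2) over the box [lo, hi] by branch and bound."""
    def lin_range(c, d, l, h):
        a = d + np.sum(np.where(c > 0, c * l, c * h))   # min of the affine form on the box
        b = d + np.sum(np.where(c > 0, c * h, c * l))   # max of the affine form on the box
        return a, b
    def lower_bound(l, h):
        a, b = lin_range(c1, d1, l, h)
        p, q = lin_range(c2, d2, l, h)
        return min(a * p, a * q, b * p, b * q)          # interval product bound
    def value(x):
        return (c1 @ x + d1) * (c2 @ x + d2)

    best = np.inf
    heap = [(lower_bound(lo, hi), tuple(lo), tuple(hi))]
    while heap:
        lb, l, h = heapq.heappop(heap)
        l, h = np.array(l), np.array(h)
        for corner in itertools.product(*zip(l, h)):    # feasible points give upper bounds
            best = min(best, value(np.array(corner)))
        if lb >= best - tol:                            # prune: box cannot improve best
            continue
        j = int(np.argmax(h - l))                       # branch on the longest edge
        m = 0.5 * (l[j] + h[j])
        for lj, hj in ((l[j], m), (m, h[j])):
            cl, ch = l.copy(), h.copy()
            cl[j], ch[j] = lj, hj
            heapq.heappush(heap, (lower_bound(cl, ch), tuple(cl), tuple(ch)))
    return best

# min (x1 - 0.5)(x2 - 0.5) over [0,1]^2: optimum -0.25 at (0,1) or (1,0)
best = bb_min_product(np.array([1.0, 0.0]), -0.5,
                      np.array([0.0, 1.0]), -0.5,
                      np.array([0.0, 0.0]), np.array([1.0, 1.0]))
```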
A flexible importance sampling method for integrating subgrid processes
Raut, E. K.; Larson, V. E.
2016-01-29
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
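The category-based importance sampling idea can be illustrated with a two-category toy grid box (the area fractions, rate function, and sampling densities below are invented for the example): draws are concentrated in the category that matters, and each draw is reweighted by the ratio of the true category probability to the prescribed sampling density.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy grid box with two subgrid categories (hypothetical area fractions)
p = {"clear": 0.7, "cloud": 0.3}     # true fraction of the box in each category
q = {"clear": 0.2, "cloud": 0.8}     # prescribed sampling density: oversample cloud

def process_rate(cat, x):
    """Hypothetical microphysical rate: zero in clear air, x**2 in cloud."""
    return 0.0 if cat == "clear" else x ** 2

n = 10_000
cats = rng.choice(["clear", "cloud"], size=n, p=[q["clear"], q["cloud"]])
xs = rng.uniform(0.0, 1.0, size=n)   # subgrid variable sampled within a category

# Importance-sampling estimate: weight each draw by p(category)/q(category)
terms = [p[c] / q[c] * process_rate(c, x) for c, x in zip(cats, xs)]
estimate = float(np.mean(terms))
# Exact grid-box mean for this toy setup: 0.3 * E[x^2] = 0.1
```

Oversampling the cloudy category reduces the variance of the estimate because all of the rate's variability lives there.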
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that
NASA Astrophysics Data System (ADS)
Cheng, Anyu; Jiang, Xiao; Li, Yongfu; Zhang, Chao; Zhu, Hao
2017-01-01
This study proposes a traffic flow prediction algorithm based on multiple sources and multiple measures, using chaos theory and the support vector regression method. In particular, first, the chaotic characteristics of traffic flow associated with the speed, occupancy, and flow are identified using the maximum Lyapunov exponent. Then, the phase spaces of the multiple-measure chaotic time series are reconstructed based on phase space reconstruction theory and fused into the same multi-dimensional phase space using Bayesian estimation theory. In addition, a support vector regression (SVR) model is designed to predict the traffic flow. Numerical experiments are performed using data from multiple sources. The results show that, compared with a single measure, the proposed method has better performance for short-term traffic flow prediction in terms of accuracy and timeliness.
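A single-measure version of the pipeline can be sketched with a delay embedding and scikit-learn's SVR. The logistic map stands in for a chaotic traffic series, and the embedding dimension, delay, and SVR settings are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.svm import SVR

def delay_embed(x, dim, tau):
    """Reconstruct the phase space of a scalar series by delay embedding."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy chaotic series (logistic map) standing in for a traffic measure
x = np.empty(500)
x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

dim, tau = 3, 1
X = delay_embed(x, dim, tau)
y = x[(dim - 1) * tau + 1 :]           # one-step-ahead targets
X = X[:-1]                             # align embedded states with targets
model = SVR(kernel="rbf", C=10.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])          # short-term prediction on held-out states
```

In the multi-measure method, several such embeddings (speed, occupancy, flow) would be fused into one state vector before regression.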
Boundary integral equation method for electromagnetic and elastic waves
NASA Astrophysics Data System (ADS)
Chen, Kun
calculating Brillouin diagram in eigenvalue problem and for normal incidence in scattering problem. Thirdly, a high order Nyström method is developed for elastodynamic scattering that features a simple local correction scheme due to a careful choice of basis functions. A novel, simple, and efficient singularity subtraction scheme and a new effective near-singularity subtraction scheme are proposed for performing singular and nearly singular integrals on curvilinear triangular elements. The robustness, high accuracy, and high order convergence of the proposed approaches are demonstrated by numerical results. Finally, the multilevel fast multipole algorithm (MLFMA) is applied to accelerate the proposed Nyström method for solving large scale problems. A formulation that significantly reduces the memory requirements of the MLFMA is also derived. Numerical examples in the frequency domain are first given to show the accuracy and efficiency of the algorithm. By solving at multiple frequencies and performing the inverse Fourier transform, time domain results are also presented that are of interest to ultrasonic non-destructive evaluation.
A bin integral method for solving the kinetic collection equation
NASA Astrophysics Data System (ADS)
Wang, Lian-Ping; Xue, Yan; Grabowski, Wojciech W.
2007-09-01
A new numerical method for solving the kinetic collection equation (KCE) is proposed, and its accuracy and convergence are investigated. The method, herein referred to as the bin integral method with Gauss quadrature (BIMGQ), makes use of two binwise moments, namely, the number and mass concentration in each bin. These two degrees of freedom define an extended linear representation of the number density distribution for each bin following Enukashvily (1980). Unlike previous moment-based methods in which the gain and loss integrals are evaluated for a target bin, the concept of source-bin pair interactions is used to transfer bin moments from source bins to target bins. Collection kernels are treated by bilinear interpolations. All binwise interaction integrals are then handled exactly by Gauss quadrature of various orders. In essence the method combines favorable features in previous spectral moment-based and bin-based pair-interaction (or flux) methods to greatly enhance the logic, consistency, and simplicity in the numerical method and its implementation. Quantitative measures are developed to rigorously examine the accuracy and convergence properties of BIMGQ for both the Golovin kernel and hydrodynamic kernels. It is shown that BIMGQ has a superior accuracy for the Golovin kernel and a monotonic convergence behavior for hydrodynamic kernels. Direct comparisons are also made with the method of Berry and Reinhardt (1974), the linear flux method of Bott (1998), and the linear discrete method of Simmel et al. (2002).
Explicit Integration of Extremely Stiff Reaction Networks: Asymptotic Methods
Guidry, Mike W; Budiardja, R.; Feger, E.; Billings, J. J.; Hix, William Raphael; Messer, O.E.B.; Roche, K. J.; McMahon, E.; He, M.
2013-01-01
We show that, even for extremely stiff systems, explicit integration may compete in both accuracy and speed with implicit methods if algebraic methods are used to stabilize the numerical integration. The stabilizing algebra differs for systems well removed from equilibrium and those near equilibrium. This paper introduces a quantitative distinction between these two regimes and addresses the former case in depth, presenting explicit asymptotic methods appropriate when the system is extremely stiff but only weakly equilibrated. A second paper [1] examines quasi-steady-state methods as an alternative to asymptotic methods in systems well away from equilibrium and a third paper [2] extends these methods to equilibrium conditions in extremely stiff systems using partial equilibrium methods. All three papers present systematic evidence for timesteps competitive with implicit methods. Because explicit methods can execute a timestep faster than an implicit method, our results imply that algebraically stabilized explicit algorithms may offer a means to integration of larger networks than have been feasible previously in various disciplines.
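For a single ODE written in production-loss form dy/dt = F⁺ − k·y, the explicit asymptotic update replaces forward Euler with a form that remains stable for any timestep. A minimal sketch on a toy stiff problem (not one of the paper's reaction networks):

```python
def asymptotic_step(y, f_plus, k, dt):
    """One explicit asymptotic update for dy/dt = f_plus - k*y.

    Stable for arbitrarily large dt, unlike forward Euler, because the loss
    term is treated through the asymptotic (quasi-equilibrium) approximation:
    y_{n+1} = (y_n + dt*f_plus) / (1 + dt*k).
    """
    return (y + dt * f_plus) / (1.0 + dt * k)

# Stiff test problem: dy/dt = 1 - 1000*y  (equilibrium y* = 0.001)
k, f_plus, dt = 1000.0, 1.0, 0.01   # dt*k = 10: forward Euler would diverge
y = 0.0
for _ in range(20):
    y = asymptotic_step(y, f_plus, k, dt)
# y has relaxed to ~0.001 despite dt being 10x the explicit stability limit
```

Forward Euler with the same dt would multiply the error by |1 − dt·k| = 9 each step; the asymptotic update instead contracts it by 1/(1 + dt·k).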
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With advances in technologies for capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face.
NASA Astrophysics Data System (ADS)
Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens
2015-04-01
The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often require new research in which knowledge from diverse disciplines is combined in unified methodological and operational frameworks. Such integrative research to link different knowledge domains faces several practical challenges, and these complexities are further compounded by multiple actors, frequently with conflicting interests, and multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both the probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrates physical process-based models, fuzzy logic, expert involvement, and stochastic simulation within a general framework. Subsequently, the proposed approach is applied to a water management problem in a water-scarce coastal arid region of northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected the aquifer's sustainability, endangering associated socio-economic conditions as well as the traditional social structure. Results from the developed method have provided key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, this approach makes it possible to systematically quantify both the probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool has shown that the decision makers' risk-averse or risk-taking attitude may yield different rankings of the decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.
[The academic education in nursing and multiple-victim incidents: an integrative review].
Salvador, Pétala Tuani Candido de Oliveira; Dantas, Rodrigo Assis Neves; Dantas, Daniele Vieira; Torres, Gilson de Vasconcelos
2012-06-01
The objective of this study is to reflect on the knowledge, competencies, and skills that must be promoted during the academic education of nurses for effective professional practice in a multiple-victim incident (MVI). This is an integrative literature review regarding academic nursing education. The literature survey was performed on the BDENF, LILACS, SciELO, MEDLINE, Web of Knowledge and HighWire Press databases, using the following descriptors: higher education; nursing education; emergency nursing; and mass casualty incidents. The publications permitted considerations regarding the following themes: particularities, competencies, and skills essential to nursing practice in multiple-victim incidents; and the professors' strategies to promote those competencies and skills. The literature analysis demonstrated that nursing education should be configured as a space to develop critical thinking skills, which requires professors to have an eclectic educational background.
Characterization of multiple-bit errors from single-ion tracks in integrated circuits
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Edmonds, L. D.; Smith, L. S.
1989-01-01
The spread of charge induced by an ion track in an integrated circuit and its subsequent collection at sensitive nodal junctions can cause multiple-bit errors. The authors have experimentally and analytically investigated this phenomenon using a 256-kb dynamic random-access memory (DRAM). The effects of different charge-transport mechanisms are illustrated, and two classes of ion-track multiple-bit error clusters are identified. It is demonstrated that ion tracks that hit a junction can affect the lateral spread of charge, depending on the nature of the pull-up load on the junction being hit. Ion tracks that do not hit a junction allow the nearly uninhibited lateral spread of charge.
ePRISM: A case study in multiple proxy and mixed temporal resolution integration
Robinson, Marci M.; Dowsett, Harry J.
2010-01-01
As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (~3.3 to 3.0 Ma), yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records of multiple paleotemperature proxies.
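The warm peak average mentioned above can be illustrated as follows; the ages, SST values, window, and the 25% "peak" fraction are invented for the example, not PRISM data.

```python
import numpy as np

def warm_peak_average(ages, ssts, t_min, t_max, frac=0.25):
    """Average the warmest `frac` of SST estimates inside a time window,
    rather than tying estimates to a single isotope event."""
    in_window = (ages >= t_min) & (ages <= t_max)
    window_ssts = np.sort(ssts[in_window])[::-1]      # warmest first
    n = max(1, int(round(frac * window_ssts.size)))
    return window_ssts[:n].mean()

ages = np.array([3.0, 3.1, 3.2, 3.3, 3.4, 3.5])        # Ma (hypothetical)
ssts = np.array([27.0, 28.5, 26.0, 29.0, 25.5, 28.0])  # deg C (hypothetical)
wpa = warm_peak_average(ages, ssts, 3.0, 3.3)
```

Because the average is taken over a window rather than a single event, records with coarse age control can still contribute estimates.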
NASA Astrophysics Data System (ADS)
Dahlin, K.; Asner, G. P.
2010-12-01
The ability to map plant species distributions has long been one of the key goals of terrestrial remote sensing. Achieving this goal has been challenging, however, due to technical constraints and the difficulty in relating remote observations to ground measurements. Advances in both the types of data that can be collected remotely and in available analytical tools like multiple endmember spectral mixture analysis (MESMA) are allowing for rapid improvements in this field. In 2007 the Carnegie Airborne Observatory (CAO) acquired high resolution lidar and hyperspectral imagery of Jasper Ridge Biological Preserve (Woodside, California). The site contains a mosaic of vegetation types, from grassland to chaparral to evergreen forest. To build a spectral library, 415 GPS points were collected in the field, made up of 44 plant species, six plant categories (for nonphotosynthetic vegetation), and four substrate types. Using the lidar data to select the most illuminated pixels as seen from the aircraft (based on canopy shape and viewing angle), we then reduced the spectral library to only the most fully lit pixels. To identify individual plant species in the imagery, first the hyperspectral data was used to calculate the normalized difference vegetation index (NDVI), and then pixels with an NDVI less than 0.15 were removed from further analysis. The remaining image was stratified into five classes based on vegetation height derived from the lidar data. For each class, a suite of possible endmembers was identified and then three endmember selection procedures (endmember average RMS, minimum average spectral angle, and count based endmember selection) were employed to select the most representative endmembers from each species in each class. Two and three endmember models were then applied and each pixel was assigned a species or plant category based on the highest endmember fraction. To validate the approach, an independent set of 200 points was collected throughout the
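Although the record above is truncated, the per-pixel model selection step it describes can be sketched as follows: each candidate endmember set is fit to the pixel spectrum by nonnegative least squares and the model with the lowest RMSE wins. The five-band spectra are invented, and real MESMA also applies fraction and shade constraints.

```python
import numpy as np
from scipy.optimize import nnls

def best_model(pixel, endmember_sets):
    """Return (name, rmse, fractions) of the lowest-RMSE endmember model."""
    best = None
    for name, E in endmember_sets.items():
        f, rnorm = nnls(E, pixel)                 # nonnegative endmember fractions
        rmse = rnorm / np.sqrt(len(pixel))
        if best is None or rmse < best[1]:
            best = (name, rmse, f)
    return best

shade = np.full(5, 0.05)
spec_a = np.array([0.1, 0.2, 0.3, 0.4, 0.5])      # hypothetical species A spectrum
spec_b = np.array([0.5, 0.4, 0.3, 0.2, 0.1])      # hypothetical species B spectrum
models = {"A": np.column_stack([spec_a, shade]),
          "B": np.column_stack([spec_b, shade])}

pixel = 0.7 * spec_a + 0.3 * shade                # pixel dominated by species A
name, rmse, fractions = best_model(pixel, models)
```

The nonnegativity constraint matters: an unconstrained least-squares fit could match the pixel with a physically meaningless negative fraction.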
Current DOT research on the effect of multiple site damage on structural integrity
NASA Astrophysics Data System (ADS)
Tong, P.; Arin, Kemal; Jeong, David Y.; Greif, R.; Brewer, John C.; Bobo, Stephan N.; Sampath, Sam N.
1992-07-01
Multiple site damage (MSD) is a type of cracking that may be found in aging airplanes and which may adversely affect their continuing airworthiness. The Volpe National Transportation Systems Center has supported the Federal Aviation Administration Technical Center on structural integrity research for the past two and a half years. The work has focused on understanding the behavior of MSD, detection of MSD during airframe inspection, and the avoidance of MSD in future designs. These three elements of the MSD problem are addressed, and a summary of the completed work, the current status, and requirements for future research is provided.
NASA Astrophysics Data System (ADS)
Sakari, Charli; APOGEE Team
2017-01-01
Abundance variations are a common feature of Milky Way globular clusters. The globular clusters in M31 are too distant for detailed abundance studies of their individual stars; however, cluster abundances can be determined through high resolution, integrated light (IL) spectroscopy. In this talk, I discuss how IL abundances can be interpreted in the context of multiple populations. In particular, I will present new infrared abundances of 25 M31 globular clusters, derived from IL spectra from the Apache Point Observatory Galactic Evolution Experiment (APOGEE). These H band spectra allow determinations of C, N, and O from molecular features, and Fe, Na, Mg, Al, Si, Ca, Ti, and K from atomic features. The integrated abundance ratios are then investigated in relation to cluster [Fe/H] and mass.
Liu, Kevin F R
2007-05-01
While pursuing economic development, countries around the world have become aware of the importance of environmental sustainability; therefore, the evaluation of environmental sustainability has become a significant issue. Traditionally, multiple-criteria decision-making (MCDM) was widely used as a way of evaluating environmental sustainability. Recently, several researchers have attempted to implement this evaluation with fuzzy logic, since they recognized the assessment of environmental sustainability as a subjective, intuitive judgment. This paper outlines a new evaluation framework for environmental sustainability, which integrates fuzzy logic into MCDM. The framework consists of 36 structured and 5 unstructured decision points, wherein MCDM handles the former and fuzzy logic serves the latter. With the integrated evaluation framework, the environmental sustainability of 146 countries is calculated, ranked, and clustered, and the evaluation results are very helpful to these countries, as they identify obstacles on the path towards environmental sustainability.
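The hybrid can be sketched in miniature: structured decision points aggregate normalized indicators with MCDM weights, while an unstructured point is scored through a fuzzy membership function. The indicators, weights, blend ratio, and membership breakpoints below are all invented for the illustration.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular fuzzy membership of x in a set with support [a, c] and peak b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def sustainability_score(indicators, weights, expert_rating):
    """MCDM weighted sum of structured indicators (each in [0, 1]), blended
    with a fuzzy score of one unstructured expert judgment (0-10 scale)."""
    structured = float(np.dot(indicators, weights) / np.sum(weights))
    unstructured = tri(expert_rating, 2.0, 7.0, 12.0)   # fuzzy "good" membership
    return 0.8 * structured + 0.2 * unstructured        # illustrative blend

w = np.array([0.5, 0.3, 0.2])                           # criteria weights (illustrative)
country_x = sustainability_score(np.array([0.9, 0.8, 0.7]), w, expert_rating=8.0)
country_y = sustainability_score(np.array([0.4, 0.5, 0.3]), w, expert_rating=4.0)
```

Ranking then follows directly from the scalar scores, as in the paper's country ranking and clustering.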
Position synchronised control of multiple robotic manipulators based on integral sliding mode
NASA Astrophysics Data System (ADS)
Zhao, Dongya; Zhu, Quanmin
2014-03-01
In this study, a new position synchronised control algorithm is developed for multiple robotic manipulator systems. By combining system synchronisation with integral sliding mode control, the proposed approach can stabilise the position tracking of each robotic manipulator while coordinating its motion with the other manipulators. Owing to the integral sliding mode, the proposed approach is insensitive to the lumped system uncertainty throughout the entire process of operation. Further, a perturbation estimator is proposed to reduce the chattering effect. The corresponding stability analysis is presented to lay a foundation both for theoretical understanding of the underlying issues and for safely operating real systems. An illustrative example is bench tested to validate the effectiveness of the proposed approach.
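The core integral sliding mode idea can be illustrated on a single double-integrator "joint" (a deliberate simplification; the paper treats coupled multi-manipulator synchronisation). All gains and the disturbance below are assumed for illustration: the sliding variable augments the position error with its integral, and a signum reaching term rejects a bounded matched disturbance.

```python
# Minimal sketch (assumed gains, not the paper's controller): integral
# sliding mode control of one double-integrator joint tracking a constant
# reference under a bounded matched disturbance |d| <= 0.5 < eta.
import math

def simulate(x0=0.0, v0=0.0, ref=1.0, k=5.0, lam=2.0, ki=1.0,
             eta=2.0, dt=1e-3, steps=8000):
    x, v, ei = x0, v0, 0.0
    for i in range(steps):
        e = x - ref
        ei += e * dt                      # integral of the tracking error
        s = v + lam * e + ki * ei         # integral sliding variable
        d = 0.5 * math.sin(0.01 * i)      # bounded disturbance
        u = -lam * v - ki * e - eta * math.copysign(1.0, s) - k * s
        v += (u + d) * dt                 # double-integrator dynamics
        x += v * dt
    return x

print(simulate())  # settles near the reference 1.0 despite the disturbance
```

On the surface s = 0 the error obeys e'' + lam*e' + ki*e = 0, so the integral term removes steady-state offset while the signum term dominates the disturbance.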
Villoslada, Pablo; Baranzini, Sergio
2012-07-15
New "omic" technologies and their application to systems biology approaches offer new opportunities for biomarker discovery in complex disorders, including multiple sclerosis (MS). Recent studies using massive genotyping, DNA arrays, antibody arrays, proteomics, glycomics, and metabolomics from different tissues (blood, cerebrospinal fluid, brain) have identified many molecules associated with MS, defining both susceptibility and functional targets (e.g., biomarkers). Such discoveries involve many different levels in the complex organizational hierarchy of humans (DNA, RNA, protein, etc.), and integrating these datasets into a coherent model with regard to MS pathogenesis would be a significant step forward. Given the dynamic and heterogeneous nature of MS, validating biomarkers is mandatory. To develop accurate markers of disease prognosis or therapeutic response that are clinically useful, combining molecular, clinical, and imaging data is necessary. Such an integrative approach would pave the way towards better patient care and more effective clinical trials that test new therapies, thus bringing the paradigm of personalized medicine in MS one step closer.
Digital methods of photopeak integration in activation analysis.
NASA Technical Reports Server (NTRS)
Baedecker, P. A.
1971-01-01
A study of the precision attainable by several methods of gamma-ray photopeak integration has been carried out. The 'total peak area' method, the methods proposed by Covell, Sterlinski, and Quittner, and some modifications of these methods have been considered. A modification by Wasson of the total peak area method is considered to be the most advantageous due to its simplicity and the relatively high precision obtainable with this technique. A computer routine for the analysis of spectral data from nondestructive activation analysis experiments employing a Ge(Li) detector-spectrometer system is described.
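The simplest of the compared techniques, total peak area, can be sketched numerically. This is an assumed generic form (not Wasson's specific modification): sum the counts across the photopeak and subtract a linear baseline estimated from wing channels on either side.

```python
# Sketch of the "total peak area" idea: gross counts over a photopeak
# minus a trapezoidal baseline estimated from `wings` channels on each
# side. Channel bounds and wing width are illustrative.

def total_peak_area(counts, lo, hi, wings=3):
    """Net peak area between channels lo..hi (inclusive)."""
    left = sum(counts[lo - wings:lo]) / wings        # mean baseline, left
    right = sum(counts[hi + 1:hi + 1 + wings]) / wings
    n = hi - lo + 1
    gross = sum(counts[lo:hi + 1])
    baseline = n * (left + right) / 2.0              # trapezoidal baseline
    return gross - baseline

# Synthetic spectrum: flat background of 10 counts plus a 100-count peak.
spectrum = [10] * 20
for ch, extra in zip(range(8, 13), [5, 20, 50, 20, 5]):
    spectrum[ch] += extra
print(total_peak_area(spectrum, 8, 12))  # 100.0 net counts
```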
NASA Astrophysics Data System (ADS)
Aimez, Vincent; Paquette, Michel; Beauvais, Jacques; Beerens, Jean; Poole, Philip J.; Charbonneau, N. Sylvain
1998-09-01
A monolithic optoelectronic chip containing multiple emission wavelength laser diodes has been developed. The semiconductor quantum well lasers have Fabry-Perot cavities of 500 micrometers in length. Electrical insulation between individual integrated devices has been achieved by wet etching the top contact layer and by a lift-off of the surface metal contact between the different lasers. The electroluminescence peak emission has been shifted over a 25 nm range for the integrated laser diodes and over 74 nm for discrete devices. Blueshifting of the emission wavelength has been achieved by quantum well intermixing, using an industrial low energy ion implanter to generate point defects and a rapid thermal annealer to promote interdiffusion of the barrier and quantum well atoms during the recrystallization anneal. Phosphorus ions were implanted with an energy of 360 keV into precisely defined regions of the heterostructure, with SiO2 serving as a masking material. Thus reference and intermixed regions were integrated on a single component. Integrated and discrete laser diodes have been assessed in terms of threshold currents and emission wavelengths.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Scott, S.
2013-12-01
While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into
Functional integration between brain regions at rest occurs in multiple-frequency bands.
Gohel, Suril R; Biswal, Bharat B
2015-02-01
Studies of resting-state fMRI have shown that blood oxygen level dependent (BOLD) signals giving rise to temporal correlation across voxels (or regions) are dominated by low-frequency fluctuations in the range of ∼ 0.01-0.1 Hz. These low-frequency fluctuations have been further divided into multiple distinct frequency bands (slow-5 and -4) based on earlier neurophysiological studies, though the low sampling frequency of fMRI (∼ 0.5 Hz) has substantially limited the exploration of other known frequency bands of neurophysiological origins (slow-3, -2, and -1). In this study, we used resting-state fMRI data acquired from 21 healthy subjects at a higher sampling frequency of 1.5 Hz to assess the presence of resting-state functional connectivity (RSFC) across multiple frequency bands: slow-5 to slow-1. The effect of different frequency bands on spatial extent and connectivity strength for known resting-state networks (RSNs) was also evaluated. RSNs were derived using independent component analysis and seed-based correlation. Commonly known RSNs, such as the default mode, the fronto-parietal, the dorsal attention, and the visual networks, were consistently observed at multiple frequency bands. Significant inter-hemispheric connectivity was observed between each seed and its contralateral brain region across all frequency bands, though the overall spatial extent of seed-based correlation maps decreased in the slow-2 and slow-1 frequency bands. These results suggest that functional integration between brain regions at rest occurs over multiple frequency bands and RSFC is a multiband phenomenon. These results also suggest that further investigation of the BOLD signal in multiple frequency bands for related cognitive processes should be undertaken.
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1998-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto are discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
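The signal path described above (acceleration integrated to velocity, compared against a selected trip point) can be sketched in a few lines. Sample rate, signals, and the trip threshold below are illustrative, not the patent's circuit values.

```python
# Hedged sketch of the accelerometer signal path: trapezoidal integration
# of sampled acceleration to velocity, then a trip-point check.

def integrate_to_velocity(accel, dt):
    """Trapezoidal integration of acceleration samples -> velocity samples."""
    v, out = 0.0, [0.0]
    for a0, a1 in zip(accel, accel[1:]):
        v += 0.5 * (a0 + a1) * dt
        out.append(v)
    return out

def tripped(velocity, trip_point):
    """True if any velocity sample exceeds the selected trip point."""
    return any(abs(v) > trip_point for v in velocity)

accel = [0.0, 1.0, 1.0, 1.0, 0.0]   # m/s^2, sampled at dt = 0.1 s
vel = integrate_to_velocity(accel, 0.1)
print(vel[-1], tripped(vel, 0.25))  # final velocity 0.3 m/s -> trip fires
```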
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculi use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have mathematical exact solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations, i.e. in strength of materials the bending moment may be computed in some discrete points using the graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the surface of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of this work is to introduce our studies on the calculus of integrals over transverse section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute the calculi in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’ shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By ‘simple’ shape or ‘basic’ shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions and for which the according calculus is carried out using an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of ‘basic’ shapes include rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
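The signed "basic shape" algebra described above can be sketched directly: a section is a list of (sign, area, centroid height) entries, with sign -1 for subtracted shapes, from which area and first moment follow. The shapes and dimensions below are illustrative, not the authors' library.

```python
# Illustrative sketch of the signed basic-shape algebra: section area and
# centroid from added/subtracted rectangles and ellipses.
import math

def rect(b, h, yc, sign=+1):
    return (sign, b * h, yc)

def ellipse(a, b, yc, sign=+1):
    return (sign, math.pi * a * b, yc)

def section_properties(shapes):
    area = sum(s * A for s, A, _ in shapes)
    first_moment = sum(s * A * yc for s, A, yc in shapes)
    return area, first_moment / area   # total area, centroid height

# A 4 x 10 rectangular section with a unit circular hole centred at y = 7.
shapes = [rect(4.0, 10.0, 5.0), ellipse(1.0, 1.0, 7.0, sign=-1)]
A, ybar = section_properties(shapes)
print(round(A, 4), round(ybar, 4))  # hole pulls the centroid below y = 5
```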
An Integrated Approach to Research Methods and Capstone
ERIC Educational Resources Information Center
Postic, Robert; McCandless, Ray; Stewart, Beth
2014-01-01
In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…
Integrating Methods and Materials: Developing Trainees' Reading Skills.
ERIC Educational Resources Information Center
Jarvis, Jennifer
1987-01-01
Explores issues arising from a research project which studied ways of meeting the reading needs of trainee primary school teachers (from Malawi and Tanzania) of English as a foreign language. Topics discussed include: the classroom teaching situation; teaching "quality"; and integration of materials and methods. (CB)
A Modeling Method of Multiple Targets Assignment under Multiple UAVs’ Cooperation
NASA Astrophysics Data System (ADS)
Wang, Q. H.; Wan, G.; Cao, X. F.; Xie, L. X.
2017-03-01
Aiming at multiple UAVs' cooperation in complex environments, this paper presents a detailed analysis of the target assignment model. Firstly, three basic situations are discussed according to the quantitative relationship between the UAVs and the targets. Then, in order to make the target model more practical, the probability of UAV damage is also taken into consideration. Next, the basic particle swarm optimization algorithm, which shows good performance in efficiency and convergence, is adopted to solve the model. Finally, a three-dimensional environment is simulated to verify the model. Simulation results show that the model is practical and close to the actual environment.
NASA Astrophysics Data System (ADS)
Sica, R. J.; Haefele, A.
2014-12-01
The measurement of temperature in the middle atmosphere with Rayleigh-scatter lidars is an important technique for assessing atmospheric change. Current retrieval schemes for these temperatures have several shortcomings which can be overcome using an optimal estimation method (OEM). OEMs are applied to the retrieval of temperature from Rayleigh-scatter lidar measurements using both single and multiple channel measurements. Forward models are presented that completely characterize the measurement and allow the simultaneous retrieval of temperature, dead time and background. The method allows a full uncertainty budget to be obtained on a per profile basis that includes, in addition to the statistical uncertainties, the smoothing error and uncertainties due to Rayleigh extinction, ozone absorption, the lidar constant, nonlinearity in the counting system, variation of the Rayleigh-scatter cross section with altitude, pressure, acceleration due to gravity and the variation of mean molecular mass with altitude. The vertical resolution of the temperature profile is found at each height, and a quantitative determination is made of the maximum height to which the retrieval is valid. A single temperature profile can be retrieved from measurements with multiple channels that cover different height ranges, vertical resolutions and even different detection methods. The OEM employed is shown to give robust estimates of temperature consistent with previous methods, while requiring minimal computational time. This demonstrated success of lidar temperature retrievals using an OEM opens new possibilities in atmospheric science for measurement integration between active and passive remote sensing instruments. We are currently working on extending our method to simultaneously retrieve water vapour and temperature using Raman-scatter lidar measurements.
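The generic OEM update that underlies such retrievals can be shown in the scalar case (a textbook form, not the authors' lidar forward model): for a linear measurement y = k·x + noise with a Gaussian prior, the retrieval blends prior and measurement by their inverse variances.

```python
# Minimal scalar optimal-estimation sketch (generic OEM, illustrative
# numbers): x_hat = x_a + G * (y - k * x_a), with the gain and posterior
# variance set by the measurement and prior variances.

def oem_scalar(y, k, x_a, s_a, s_e):
    """Retrieve x from y = k*x + noise(var s_e), prior x_a with var s_a."""
    gain = (k / s_e) / (k * k / s_e + 1.0 / s_a)
    x_hat = x_a + gain * (y - k * x_a)
    s_hat = 1.0 / (k * k / s_e + 1.0 / s_a)   # posterior variance
    return x_hat, s_hat

# Prior: 200 K with variance 100; measurement reads 210 with variance 1.
# The retrieval lands close to the (much more precise) measurement.
x_hat, s_hat = oem_scalar(210.0, 1.0, 200.0, 100.0, 1.0)
print(round(x_hat, 3), round(s_hat, 3))
```

The same structure, with vectors and an averaging kernel, yields the per-profile uncertainty budget and vertical resolution the abstract describes.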
NASA Technical Reports Server (NTRS)
Schneider, Harold
1959-01-01
This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i(sup th) volume element or subregion that will make their next collision in the j(sup th) volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P(sub 3) spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
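The algebraic step described above, converting transfer probabilities into collision densities, can be sketched on a toy two-region problem (invented numbers, not the report's slab data): with transfer probabilities P[i][j], the collision densities F satisfy F_j = S_j + Σ_i P[i][j] F_i, solved here by fixed-point iteration.

```python
# Illustrative sketch: collision densities from transfer probabilities in
# a subcritical two-region toy (row sums < 1: leakage plus absorption).

def collision_densities(P, S, iters=200):
    n = len(S)
    F = S[:]                        # start from the first-collision source
    for _ in range(iters):
        F = [S[j] + sum(P[i][j] * F[i] for i in range(n)) for j in range(n)]
    return F

P = [[0.3, 0.2],
     [0.2, 0.3]]
S = [1.0, 0.5]
F = collision_densities(P, S)
print([round(f, 4) for f in F])  # solves F = S + P^T F
```

The exact solution of the 2x2 system is F = (0.8/0.45, 0.55/0.45); the iteration converges because the spectral radius of P is below one.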
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with the Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of the degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and a singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
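One standard device for the "weakly singular integrals" project area is singularity subtraction (a textbook technique consistent with the report's theme, not its exact scheme): split off the singular part analytically so the remainder is smooth enough for ordinary quadrature.

```python
# Sketch of singularity subtraction for a weakly singular integral:
# ∫₀¹ f(x)/√x dx = ∫₀¹ (f(x) − f(0))/√x dx + 2 f(0),
# since ∫₀¹ x^(-1/2) dx = 2; the first integrand is bounded and is
# handled here by the midpoint rule.
import math

def weakly_singular(f, n=20000):
    h = 1.0 / n
    smooth = sum((f((i + 0.5) * h) - f(0.0)) / math.sqrt((i + 0.5) * h)
                 for i in range(n)) * h
    return smooth + 2.0 * f(0.0)

# For f = exp, the exact value is 2 * ∫₀¹ e^(t²) dt ≈ 2.9253.
print(round(weakly_singular(math.exp), 4))
```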
Encrypting three-dimensional information system based on integral imaging and multiple chaotic maps
NASA Astrophysics Data System (ADS)
Xing, Yan; Wang, Qiong-Hua; Xiong, Zhao-Long; Deng, Huan
2016-02-01
An encrypting three-dimensional (3-D) information system based on integral imaging (II) and multiple chaotic maps is proposed. In the encrypting process, the elemental image array (EIA) which represents spatial and angular information of the real 3-D scene is picked up by a microlens array. Subsequently, R, G, and B color components decomposed by the EIA are encrypted using multiple chaotic maps. Finally, these three encrypted components are interwoven to obtain the cipher information. The decryption process implements the reverse operation of the encryption process for retrieving the high-quality 3-D images. Since the encrypted EIA has the data redundancy property due to II, and all parameters of the pickup part are the secret keys of the encrypting system, the system sensitivity on the changes of the plaintext and secret keys can be significantly improved. Moreover, the algorithm based on multiple chaotic maps can effectively enhance the security. A preliminary experiment is carried out, and the experimental results verify the effectiveness, robustness, and security of the proposed system.
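The chaotic-map step can be illustrated with a single map on one color channel (a deliberate toy: the paper uses multiple chaotic maps and interweaves R, G, B). Here a logistic-map keystream XOR-encrypts bytes; the initial value and growth rate play the role of secret keys, and decryption repeats the same operation.

```python
# Toy sketch of chaotic-map encryption (illustrative, not the authors'
# multi-map scheme): logistic-map keystream XORed with one 8-bit channel.

def logistic_keystream(x0, r, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)           # logistic map iteration
        out.append(int(x * 256) % 256)  # quantize each state to a byte
    return out

def xor_channel(channel, x0=0.3141, r=3.99):
    ks = logistic_keystream(x0, r, len(channel))
    return [c ^ k for c, k in zip(channel, ks)]

plain = [12, 200, 47, 255, 0, 88]        # one color channel of an EIA
cipher = xor_channel(plain)
assert xor_channel(cipher) == plain      # XOR with the same keys inverts
print(cipher)
```

A tiny change to x0 or r produces a completely different keystream, which is the key-sensitivity property the abstract emphasizes.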
Rice, Glenn; Teuschler, Linda; MacDonel, Margaret; Butler, Jim; Finster, Molly; Hertzberg, Rick; Harou, Lynne
2007-07-01
Available in abstract form only. Full text of publication follows: As information about environmental contamination has increased in recent years, so has public interest in the combined effects of multiple contaminants. This interest has been highlighted by recent tragedies such as the World Trade Center disaster and Hurricane Katrina. In fact, assessing multiple contaminants, exposures, and effects has long been an issue for contaminated sites, including U.S. Department of Energy (DOE) legacy waste sites. Local citizens have explicitly asked the federal government to account for cumulative risks, with contaminants moving offsite via groundwater flow, surface runoff, and air dispersal being a common emphasis. Multiple exposures range from ingestion and inhalation to dermal absorption and external gamma irradiation. Three types of concerns can lead to cumulative assessments: (1) specific sources or releases - e.g., industrial facilities or accidental discharges; (2) contaminant levels - in environmental media or human tissues; and (3) elevated rates of disease - e.g., asthma or cancer. The specific initiator frames the assessment strategy, including a determination of appropriate models to be used. Approaches are being developed to better integrate a variety of data, extending from environmental to internal co-location of contaminants and combined effects, to support more practical assessments of cumulative health risks. (authors)
Predicted PAR1 inhibitors from multiple computational methods
NASA Astrophysics Data System (ADS)
Wang, Ying; Liu, Jinfeng; Zhu, Tong; Zhang, Lujia; He, Xiao; Zhang, John Z. H.
2016-08-01
Multiple computational approaches are employed in order to find potentially strong binders of PAR1 from two molecular databases: the Specs database containing more than 200,000 commercially available molecules and the traditional Chinese medicine (TCM) database. By combining the use of popular docking scoring functions together with detailed molecular dynamics simulation and protein-ligand free energy calculations, a total of fourteen molecules are found to be potentially strong binders of PAR1. The atomic details of the protein-ligand interactions of these molecules with PAR1 are analyzed to help understand the binding mechanism, which should be very useful in the design of new drugs.
Integration of multiple determinants in the neuronal computation of economic values.
Raghuraman, Anantha P; Padoa-Schioppa, Camillo
2014-08-27
Economic goods may vary on multiple dimensions (determinants). A central conjecture in decision neuroscience is that choices between goods are made by comparing subjective values computed through the integration of all relevant determinants. Previous work identified three groups of neurons in the orbitofrontal cortex (OFC) of monkeys engaged in economic choices: (1) offer value cells, which encode the value of individual offers; (2) chosen value cells, which encode the value of the chosen good; and (3) chosen juice cells, which encode the identity of the chosen good. In principle, these populations could be sufficient to generate a decision. Critically, previous work did not assess whether offer value cells (the putative input to the decision) indeed encode subjective values as opposed to physical properties of the goods, and/or whether offer value cells integrate multiple determinants. To address these issues, we recorded from the OFC while monkeys chose between risky outcomes. Confirming previous observations, three populations of neurons encoded the value of individual offers, the value of the chosen option, and the value-independent choice outcome. The activity of both offer value cells and chosen value cells encoded values defined by the integration of juice quantity and probability. Furthermore, both populations reflected the subjective risk attitude of the animals. We also found additional groups of neurons encoding the risk associated with a particular option, the risky nature of the chosen option, and whether the trial outcome was positive or negative. These results provide substantial support for the conjecture described above and for the involvement of OFC in good-based decisions.
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
ERIC Educational Resources Information Center
Tang, Kok-Sing; Delgado, Cesar; Moje, Elizabeth Birr
2014-01-01
This paper presents an integrative framework for analyzing science meaning-making with representations. It integrates the research on multiple representations and multimodal representations by identifying and leveraging the differences in their units of analysis in two dimensions: timescale and compositional grain size. Timescale considers the…
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
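The thermodynamic-integration identity can be verified on a toy model with a discrete parameter, where every quantity is exact and no MCMC is needed (a simplification of the path sampling the paper describes): ln p(y) = ∫₀¹ E_β[ln L(θ)] dβ, with the expectation under the power posterior ∝ prior(θ)·L(θ)^β.

```python
# Sketch of the thermodynamic-integration identity on a four-point
# discrete parameter space (illustrative numbers): trapezoidal
# integration over beta recovers the directly computed ln p(y).
import math

prior = [0.25, 0.25, 0.25, 0.25]
like = [0.1, 0.4, 0.2, 0.3]          # L(theta_i) for the observed data

def expected_loglike(beta):
    """E[ln L] under the power posterior ∝ prior * L**beta."""
    w = [p * l ** beta for p, l in zip(prior, like)]
    z = sum(w)
    return sum(wi / z * math.log(l) for wi, l in zip(w, like))

betas = [i / 200 for i in range(201)]
vals = [expected_loglike(b) for b in betas]
ti = sum((vals[i] + vals[i + 1]) / 2 * (betas[i + 1] - betas[i])
         for i in range(200))
direct = math.log(sum(p * l for p, l in zip(prior, like)))
print(round(ti, 4), round(direct, 4))  # the two estimates agree closely
```

In real applications each E_β[ln L] is estimated by an MCMC chain at that power coefficient, which is where the computational cost lies.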
Detection method for dissociation of multiple-charged ions
Smith, Richard D.; Udseth, Harold R.; Rockwood, Alan L.
1991-01-01
Dissociations of multiple-charged ions are detected and analyzed by charge-separation tandem mass spectrometry. Analyte molecules are ionized to form multiple-charged parent ions. A particular parent-ion charge state is selected in a first-stage mass spectrometer and its mass-to-charge ratio (M/Z) is detected to determine its mass and charge. The selected parent ions are then dissociated, each into a plurality of fragments including a set of daughter ions each having a mass of at least one molecular weight and a charge of at least one. Sets of daughter ions resulting from the dissociation of one parent ion (sibling ions) vary in number but typically include two to four ions, one or more of them multiply charged. A second-stage mass spectrometer detects the mass-to-charge ratio (m/z) of the daughter ions and a temporal or temporo-spatial relationship among them. This relationship is used to correlate the daughter ions to determine which (m/z) ratios belong to a set of sibling ions. Values of mass and charge of each of the sibling ions are determined simultaneously from their respective (m/z) ratios such that the sibling ion charges are integers and sum to the parent ion charge.
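The final determination step can be sketched as a small integer search (an assumed simple case, not the patent's apparatus): given the parent mass and charge, find integer sibling charges summing to the parent charge whose implied masses add back up to the parent mass.

```python
# Toy sketch of sibling-ion charge assignment: enumerate integer charge
# combinations that sum to the parent charge and check mass conservation.
from itertools import product

def assign_charges(parent_mass, parent_charge, mz_values, tol=0.5):
    n = len(mz_values)
    for charges in product(range(1, parent_charge + 1), repeat=n):
        if sum(charges) != parent_charge:
            continue
        masses = [mz * z for mz, z in zip(mz_values, charges)]
        if abs(sum(masses) - parent_mass) < tol:
            return list(charges), masses
    return None

# Parent ion: mass 3000, charge 5; two daughters measured at
# m/z 500 and m/z ≈ 666.7 (illustrative values).
charges, masses = assign_charges(3000.0, 5, [500.0, 2000.0 / 3.0])
print(charges, [round(m, 1) for m in masses])  # [2, 3] [1000.0, 2000.0]
```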
Pisella, L; Binkofski, F; Lasek, K; Toni, I; Rossetti, Y
2006-01-01
The current dominant view of the visual system is marked by the functional and anatomical dissociation between a ventral stream specialised for perception and a dorsal stream specialised for action. The "double-dissociation" between visual agnosia (VA), a deficit of visual recognition, and optic ataxia (OA), a deficit of visuo-manual guidance, considered as consecutive to ventral and dorsal damage, respectively, has provided the main argument for this dichotomic view. In the first part of this paper, we show that the currently available empirical data do not suffice to support a double-dissociation between OA and VA. In the second part, we review evidence coming from human neuropsychology and monkey data, which cast further doubts on the validity of a simple double-dissociation between perception and action because they argue for a far more complex organisation with multiple parallel visual-to-motor connections: 1. A dorso-dorsal pathway (involving the most dorsal part of the parietal and pre-motor cortices): for immediate visuo-motor control--with OA as typical disturbance. The latest research about OA is reviewed, showing how these patients exhibit deficits restricted to the most direct and fast visuo-motor transformations. We also propose that mild mirror ataxia, consisting of misreaching errors when the contralesional hand is guided to a visual goal through a mirror, could correspond to OA with an isolated "hand effect". 2. A ventral stream-prefrontal pathway (connections from the ventral visual stream to pre-frontal areas, by-passing the parietal areas): for "mediate" control (involving spatial or temporal transpositions [Rossetti, Y., & Pisella, L. (2003). Mediate responses as direct evidence for intention: Neuropsychology of Not to-, Not now- and Not there-tasks. In S. Johnson (Ed.), Cognitive Neuroscience perspectives on the problem of intentional action (pp. 67-105). MIT Press.])--with VA as typical disturbance. Preserved visuo-manual guidance in patients
Metcalf, Jessica L; Prost, Stefan; Nogués-Bravo, David; DeChaine, Eric G; Anderson, Christian; Batra, Persaram; Araújo, Miguel B; Cooper, Alan; Guralnick, Robert P
2014-02-22
One of the grand goals of historical biogeography is to understand how and why species' population sizes and distributions change over time. Multiple types of data drawn from disparate fields, combined into a single modelling framework, are necessary to document changes in a species's demography and distribution, and to determine the drivers responsible for change. Yet truly integrated approaches are challenging and rarely performed. Here, we discuss a modelling framework that integrates spatio-temporal fossil data, ancient DNA, palaeoclimatological reconstructions, bioclimatic envelope modelling and coalescence models in order to statistically test alternative hypotheses of demographic and potential distributional changes for the iconic American bison (Bison bison). Using different assumptions about the evolution of the bioclimatic niche, we generate hypothetical distributional and demographic histories of the species. We then test these demographic models by comparing the genetic signature predicted by serial coalescence against sequence data derived from subfossils and modern populations. Our results supported demographic models that include both climate and human-associated drivers of population declines. This synthetic approach, integrating palaeoclimatology, bioclimatic envelopes, serial coalescence, spatio-temporal fossil data and heterochronous DNA sequences, improves understanding of species' historical biogeography by allowing consideration of both abiotic and biotic interactions at the population level.
NASA Astrophysics Data System (ADS)
Congdon, Peter
2010-03-01
This paper describes a structural equation methodology for obtaining social capital scores for survey subjects from multiple indicators of social support, neighbourhood and trust perceptions, and memberships of organizations. It adjusts for variation that is likely to occur in levels of social capital according to geographic context (e.g. level of area deprivation, geographic region, level of urbanity) and demographic group. Social capital is used as an explanatory factor for psychological distress using data from the 2006 Health Survey for England. A highly significant effect of social capital in reducing the chance of psychiatric caseness is obtained after controlling for other individual and geographic risk factors. Allowing for social capital has considerable effects on the impacts on psychiatric health of other risk factors. In particular, the impact of area deprivation category is much reduced. There is also evidence of significant differentiation in social capital between population categories and geographic contexts.
Method to integrate full particle orbit in toroidal plasmas
NASA Astrophysics Data System (ADS)
Wei, X. S.; Xiao, Y.; Kuley, A.; Lin, Z.
2015-09-01
It is important to integrate the full particle orbit accurately when studying charged particle dynamics in electromagnetic waves with frequency higher than the cyclotron frequency. We have derived a form of the Boris scheme using magnetic coordinates, which can be used effectively to integrate the cyclotron orbit in toroidal geometry over a long period of time. The new method has been verified by a full particle orbit simulation in toroidal geometry without high frequency waves. The full particle orbit calculation recovers the guiding-center banana orbit. This method has better numerical properties than the conventional Runge-Kutta method for conserving particle energy and magnetic moment. The toroidal precession frequency is found to match that from guiding center simulation. Many other important phenomena in the presence of an electric field, such as E × B drift, Ware pinch effect and neoclassical polarization drift are also verified by the full orbit simulation.
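The velocity update at the heart of the Boris scheme can be sketched in a few lines (shown here in plain Cartesian coordinates with a uniform field, not the magnetic-coordinate form derived in the paper):

```python
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """One step of the Boris scheme: half electric kick, exact-norm
    magnetic rotation, half electric kick. With E = 0 the kinetic
    energy is conserved to machine precision, which is the property
    that makes the scheme suitable for long-time orbit integration."""
    v_minus = v + 0.5 * q_over_m * E * dt       # first half electric kick
    t = 0.5 * q_over_m * B * dt                 # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)    # rotation, step 1
    v_plus = v_minus + np.cross(v_prime, s)     # rotation, step 2
    v_new = v_plus + 0.5 * q_over_m * E * dt    # second half electric kick
    return x + v_new * dt, v_new

# gyration in a static magnetic field: |v| stays constant over many steps
x, v = np.zeros(3), np.array([1.0, 0.0, 0.5])
E, B = np.zeros(3), np.array([0.0, 0.0, 2.0])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, 1.0, 0.01)
```

The energy-conservation property follows from the rotation step being an exact rigid rotation of the velocity vector, unlike a Runge-Kutta step, which slowly dissipates or gains energy.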
NASA Astrophysics Data System (ADS)
Wong, Kin-Yiu; Gao, Jiali
2007-12-01
Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiment. We hope that our method could be used by non-path-integral experts or experimentalists as a "black box" for any given system.
Shen, Yao Qing; Burger, Gertraud
2007-01-01
Background: Knowing the subcellular location of proteins provides clues to their function as well as the interconnectivity of biological processes. Dozens of tools are available for predicting protein location in the eukaryotic cell. Each tool performs well on certain data sets, but their predictions often disagree for a given protein. Since the individual tools each have particular strengths, we set out to integrate them in a way that optimally exploits their potential. The method we present here is applicable to various subcellular locations, but tailored for predicting whether or not a protein is localized in mitochondria. Knowledge of the mitochondrial proteome is relevant to understanding the role of this organelle in global cellular processes. Results: In order to develop a method for enhanced prediction of subcellular localization, we integrated the outputs of available localization prediction tools by several strategies, and tested the performance of each strategy with known mitochondrial proteins. The accuracy obtained (up to 92%) surpasses by far the individual tools. The method of integration proved crucial to the performance. For the prediction of mitochondrion-located proteins, integration via a two-layer decision tree clearly outperforms simpler methods, as it allows emphasis of biologically relevant features such as the mitochondrial targeting peptide and transmembrane domains. Conclusion: We developed an approach that enhances the prediction accuracy of mitochondrial proteins by uniting the strength of specialized tools. The combination of machine-learning based integration with biological expert knowledge leads to improved performance. This approach also alleviates the conundrum of how to choose between conflicting predictions. Our approach is easy to implement, and applicable to predicting subcellular locations other than mitochondria, as well as other biological features. For a trial of our approach, we provide a webservice for mitochondrial protein
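A two-layer integration of this kind can be sketched as a simple rule cascade (the tool names, thresholds, and layer rules below are illustrative assumptions, not the authors' trained tree):

```python
def predict_mitochondrial(tool_scores, has_target_peptide, has_tm_domain):
    """Two-layer integration of localization predictors.
    Layer 1 lets biologically salient features (mitochondrial
    targeting peptide, transmembrane domains) decide outright;
    layer 2 falls back to a majority vote over the tools' calls."""
    # layer 1: strong biological evidence short-circuits the vote
    if has_target_peptide and not has_tm_domain:
        return True
    if has_tm_domain and not has_target_peptide:
        return False
    # layer 2: majority vote over tool scores (>= 0.5 counts as "mito")
    votes = sum(1 for score in tool_scores.values() if score >= 0.5)
    return votes > len(tool_scores) / 2

pred = predict_mitochondrial(
    {"toolA": 0.9, "toolB": 0.4, "toolC": 0.7},
    has_target_peptide=False, has_tm_domain=False)
```

The point of the layered structure is that expert knowledge (layer 1) resolves the cases where individual predictors disagree most, while the vote (layer 2) handles the rest.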
Lagerwaard, Frank J. Hoorn, Elles A.P. van der; Verbakel, Wilko; Haasbeek, Cornelis J.A.; Slotman, Ben J.; Senan, Suresh
2009-09-01
Purpose: Volumetric modulated arc therapy (RapidArc [RA]; Varian Medical Systems, Palo Alto, CA) allows for the generation of intensity-modulated dose distributions by use of a single gantry rotation. We used RA to plan and deliver whole-brain radiotherapy (WBRT) with a simultaneous integrated boost in patients with multiple brain metastases. Methods and Materials: Composite RA plans were generated for 8 patients, consisting of WBRT (20 Gy in 5 fractions) with an integrated boost, also 20 Gy in 5 fractions, to brain metastases, and clinically delivered in 3 patients. Summated gross tumor volumes were 1.0 to 37.5 cm³. RA plans were measured in a solid water phantom by use of Gafchromic films (International Specialty Products, Wayne, NJ). Results: Composite RA plans could be generated within 1 hour. Two arcs were needed to deliver the mean of 1,600 monitor units with a mean "beam-on" time of 180 seconds. RA plans showed excellent coverage of the planning target volumes for WBRT and for the boost, with mean volumes receiving at least 95% of the prescribed dose of 100% and 99.8%, respectively. The mean conformity index was 1.36. Composite plans showed much steeper dose gradients outside brain metastases than plans with a conventional summation of WBRT and radiosurgery. Comparison of calculated and measured doses showed a mean gamma for double-arc plans of 0.30, and the area with a gamma larger than 1 was 2%. In-room times for clinical RA sessions were approximately 20 minutes for each patient. Conclusions: RA treatment planning and delivery of integrated plans of WBRT and boosts to multiple brain metastases is a rapid and accurate technique that has a higher conformity index than conventional summation of WBRT and radiosurgery boost.
Multiple cell radiation detector system, and method, and submersible sonde
Johnson, Larry O.; McIsaac, Charles V.; Lawrence, Robert S.; Grafwallner, Ervin G.
2002-01-01
A multiple cell radiation detector includes a central cell having a first cylindrical wall providing a stopping power less than an upper threshold; an anode wire suspended along a cylindrical axis of the central cell; a second cell having a second cylindrical wall providing a stopping power greater than a lower threshold, the second cylindrical wall being mounted coaxially outside of the first cylindrical wall; a first end cap forming a gas-tight seal at first ends of the first and second cylindrical walls; a second end cap forming a gas-tight seal at second ends of the first and second cylindrical walls; and a first group of anode wires suspended between the first and second cylindrical walls.
Yoga as a method of symptom management in multiple sclerosis.
Frank, Rachael; Larimore, Jennifer
2015-01-01
Multiple Sclerosis (MS) is an immune-mediated process in which the body's immune system damages myelin in the central nervous system (CNS). The onset of this disorder typically occurs in young adults, and it is more common among women. Currently, there is no cure and the long-term disease progression makes symptomatic management critical for maintaining quality of life. Several pharmacotherapeutic agents are approved for treatment, but many patients seek complementary and alternative interventions. Reviews have been conducted regarding broad topics such as mindfulness-based interventions for people diagnosed with MS and the impact of yoga on a range of neurological disorders. The objective of the present review is to examine the potential benefits of yoga for individuals with MS and address its use in managing symptoms including pain, mental health, fatigue, spasticity, balance, bladder control, and sexual function.
Material mechanical characterization method for multiple strains and strain rates
Erdman III, Donald L.; Kunc, Vlastimil; Simunovic, Srdjan; Wang, Yanli
2016-01-19
A specimen for measuring a material under multiple strains and strain rates. The specimen including a body having first and second ends and a gage region disposed between the first and second ends, wherein the body has a central, longitudinal axis passing through the first and second ends. The gage region includes a first gage section and a second gage section, wherein the first gage section defines a first cross-sectional area that is defined by a first plane that extends through the first gage section and is perpendicular to the central, longitudinal axis. The second gage section defines a second cross-sectional area that is defined by a second plane that extends through the second gage section and is perpendicular to the central, longitudinal axis and wherein the first cross-sectional area is different in size than the second cross-sectional area.
Smeared star spot location estimation using directional integral method.
Hou, Wang; Liu, Haibo; Lei, Zhihui; Yu, Qifeng; Liu, Xiaochun; Dong, Jing
2014-04-01
Image smearing significantly affects the accuracy of attitude determination of most star sensors. To ensure the accuracy and reliability of a star sensor under image smearing conditions, a novel directional integral method is presented for high-precision star spot location estimation to improve the accuracy of attitude determination. Simulations based on the orbit data of the challenging mini-satellite payload satellite were performed. Simulation results demonstrated that the proposed method exhibits high performance and good robustness, which indicates that the method can be applied effectively.
Li, Albert P
2009-09-01
The application of the Integrated Discrete Multiple Organ Co-culture (IdMOC) system in the evaluation of organ-specific toxicity is reviewed. In vitro approaches to predict in vivo toxicity have met with limited success, mainly because of the complexity of in vivo toxic responses. In vivo properties that are not well-represented in vitro include organ-specific responses, multiple organ metabolism, and multiple organ interactions. The IdMOC system has been developed to address these deficiencies. The system uses a 'wells-within-a-well' concept for the co-culturing of cells or tissue slices from different organs as physically separated (discrete) entities in the small inner wells. These inner wells are nevertheless interconnected (integrated) by overlying culture medium in the large outer containing well. The IdMOC system thereby models the in vivo situation, in which multiple organs are physically separated but interconnected by the systemic circulation, permitting multiple organ interactions. The IdMOC system, with either cells or tissue slices from multiple organs, can be used to evaluate cell type-specific or organ-specific toxicity.
Blood viscosity measurement: an integral method using Doppler ultrasonic profiles
NASA Astrophysics Data System (ADS)
Flaud, P.; Bensalah, A.
2005-12-01
The aim of this work is to present a new indirect and noninvasive method for the measurement of Newtonian blood viscosity. Based on an integral form of the axial Navier-Stokes equation, this method is particularly suited for in vivo investigations using ultrasonic arterial blood velocity profiles. Its main advantage is that it is applicable to periodic as well as non-periodic flows. Moreover, it does not require the classical filtering methods used to enhance the signal-to-noise ratio of physiological signals. This method only requires the velocimetric data measured inside a spatially and temporally optimized zone of the Doppler velocity profiles. The results obtained using numerical simulation as well as in vitro and in vivo experiments prove the effectiveness of the method. It is thus well adapted to the clinical environment as a systematic, quasi on-line method for the measurement of blood viscosity.
Li, Z; Möttönen, J; Sillanpää, M J
2015-12-01
Linear regression-based quantitative trait loci/association mapping methods such as least squares commonly assume normality of residuals. In genetics studies of plants or animals, some quantitative traits may not follow a normal distribution because the data include outlying observations or are collected from multiple sources, and in such cases normal regression methods may lose statistical power to detect quantitative trait loci. In this work, we propose a robust multiple-locus regression approach for analyzing multiple quantitative traits without the normality assumption. In our method, the objective function is the least absolute deviation (LAD), which corresponds to the assumption of multivariate Laplace-distributed residual errors. This distribution has heavier tails than the normal distribution. In addition, we adopt a group LASSO penalty to produce shrinkage estimation of the marker effects and to describe the genetic correlation among phenotypes. Our LAD-LASSO approach is less sensitive to outliers and is more appropriate for the analysis of data with skewed phenotype distributions. Another application of our robust approach is to the missing-phenotype problem in multiple-trait analysis, where missing phenotype items can simply be filled with extreme values and treated as outliers. The efficiency of the LAD-LASSO approach is illustrated on both simulated and real data sets.
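The LAD objective with a group penalty can be written down directly; a minimal sketch follows, using a generic convex optimizer rather than the authors' estimation algorithm (all variable names and the test data are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def lad_group_lasso(X, Y, groups, lam):
    """Minimize sum|Y - XB| + lam * sum_g ||B[g]||_2, where each
    group g collects one marker's coefficients across all traits.
    The L1 loss down-weights outlying phenotypes; the group norm
    shrinks whole markers toward zero across traits jointly."""
    n, p = X.shape
    k = Y.shape[1]
    def objective(b):
        B = b.reshape(p, k)
        lad = np.abs(Y - X @ B).sum()
        penalty = sum(np.linalg.norm(B[g]) for g in groups)
        return lad + lam * penalty
    # Powell handles the non-smooth objective for small problems
    res = minimize(objective, np.zeros(p * k), method="Powell")
    return res.x.reshape(p, k)

# toy data: marker 0 affects both traits, markers 1 and 2 do not
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
B_true = np.array([[2.0, 2.0], [0.0, 0.0], [0.0, 0.0]])
Y = X @ B_true + rng.laplace(size=(60, 2))
B_hat = lad_group_lasso(X, Y, groups=[[0], [1], [2]], lam=1.0)
```

For realistic marker counts one would replace the generic optimizer with a dedicated coordinate-descent or proximal routine, but the objective is the same.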
Villena, Jorge Fernandez; Polimeridis, Athanasios G; Eryaman, Yigitcan; Adalsteinsson, Elfar; Wald, Lawrence L; White, Jacob K; Daniel, Luca
2016-11-01
A fast frequency domain full-wave electromagnetic simulation method is introduced for the analysis of MRI coils loaded with the realistic human body models. The approach is based on integral equation methods decomposed into two domains: 1) the RF coil array and shield, and 2) the human body region where the load is placed. The analysis of multiple coil designs is accelerated by introducing the precomputed magnetic resonance Green functions (MRGFs), which describe how the particular body model used responds to the incident fields from external sources. These MRGFs, which are precomputed once for a given body model, can be combined with any integral equation solver and reused for the analysis of many coil designs. This approach provides a fast, yet comprehensive, analysis of coil designs, including the port S-parameters and the electromagnetic field distribution within the inhomogeneous body. The method solves the full-wave electromagnetic problem for a head array in few minutes, achieving a speed up of over 150 folds with root mean square errors in the electromagnetic field maps smaller than 0.4% when compared to the unaccelerated integral equation-based solver. This enables the characterization of a large number of RF coil designs in a reasonable time, which is a first step toward an automatic optimization of multiple parameters in the design of transmit arrays, as illustrated in this paper, but also receive arrays.
Integration of isothermal amplification methods in microfluidic devices: Recent advances.
Giuffrida, Maria Chiara; Spoto, Giuseppe
2017-04-15
The integration of nucleic acids detection assays in microfluidic devices represents a highly promising approach for the development of convenient, cheap and efficient diagnostic tools for clinical, food safety and environmental monitoring applications. Such tools are expected to operate at the point-of-care and in resource-limited settings. The amplification of the target nucleic acid sequence represents a key step for the development of sensitive detection protocols. The integration in microfluidic devices of the most popular technology for nucleic acids amplifications, polymerase chain reaction (PCR), is significantly limited by the thermal cycling needed to obtain the target sequence amplification. This review provides an overview of recent advances in integration of isothermal amplification methods in microfluidic devices. Isothermal methods, that operate at constant temperature, have emerged as promising alternative to PCR and greatly simplify the implementation of amplification methods in point-of-care diagnostic devices and devices to be used in resource-limited settings. Possibilities offered by isothermal methods for digital droplet amplification are discussed.
Borja, Angel; Bricker, Suzanne B; Dauer, Daniel M; Demetriades, Nicolette T; Ferreira, João G; Forbes, Anthony T; Hutchings, Pat; Jia, Xiaoping; Kenchington, Richard; Carlos Marques, João; Zhu, Changbo
2008-09-01
In recent years, several sets of legislation worldwide (Oceans Act in USA, Australia or Canada; Water Framework Directive or Marine Strategy in Europe, National Water Act in South Africa, etc.) have been developed in order to address ecological quality or integrity, within estuarine and coastal systems. Most such legislation seeks to define quality in an integrative way, by using several biological elements, together with physico-chemical and pollution elements. Such an approach allows assessment of ecological status at the ecosystem level ('ecosystem approach' or 'holistic approach' methodologies), rather than at species level (e.g. mussel biomonitoring or Mussel Watch) or just at chemical level (i.e. quality objectives) alone. Increasing attention has been paid to the development of tools for different physico-chemical or biological (phytoplankton, zooplankton, benthos, algae, phanerogams, fishes) elements of the ecosystems. However, few methodologies integrate all the elements into a single evaluation of a water body. The need for such integrative tools to assess ecosystem quality is very important, both from a scientific and stakeholder point of view. Politicians and managers need information from simple and pragmatic, but scientifically sound methodologies, in order to show to society the evolution of a zone (estuary, coastal area, etc.), taking into account human pressures or recovery processes. These approaches include: (i) multidisciplinarity, inherent in the teams involved in their implementation; (ii) integration of biotic and abiotic factors; (iii) accurate and validated methods in determining ecological integrity; and (iv) adequate indicators to follow the evolution of the monitored ecosystems. While some countries increasingly use the establishment of marine parks to conserve marine biodiversity and ecological integrity, there is awareness (e.g. in Australia) that conservation and management of marine ecosystems cannot be restricted to Marine Protected
An Illustration to Assist in Comparing and Remembering Several Multiplicity Adjustment Methods
ERIC Educational Resources Information Center
Hasler, Mario
2017-01-01
There are many methods, both well-established and new, for adjusting statistical tests for multiplicity. This article provides an illustration that helps lecturers and consultants remember the differences between three important multiplicity adjustment methods and explain them to non-statisticians.
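As a concrete point of comparison between two of the most common adjustments (a generic sketch, not taken from the article, whose contribution is the mnemonic illustration itself): Bonferroni multiplies every p-value by the number of tests, while Holm's step-down procedure is uniformly less conservative:

```python
def bonferroni(pvals):
    """Adjust each p-value by the total number of tests, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm's step-down adjustment: multiply the i-th smallest
    p-value by (m - i), then enforce monotonicity of the adjusted
    values in the sorted order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running
    return adjusted

adj_b = bonferroni([0.01, 0.04, 0.03])  # every p scaled by m = 3
adj_h = holm([0.01, 0.04, 0.03])        # never larger than Bonferroni
```

Holm controls the same family-wise error rate as Bonferroni while rejecting at least as many hypotheses, which is why it is usually preferred when no stronger structure is available.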
The Multiple-Car Method. Exploring Its Use in Driver and Traffic Safety Education. Second Edition.
ERIC Educational Resources Information Center
American Driver and Traffic Safety Education Association, Washington, DC.
Primarily written for school administrators and driver education teachers, this publication presents information on planning and implementing the multiple car method of driver instruction. An introductory section presents a definition of the multiple car method and its history of development. It is defined as an off-street paved area incorporating…
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2004-05-18
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2006-12-12
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Integration of sample analysis method (SAM) for polychlorinated biphenyls
Monagle, M.; Johnson, R.C.
1996-05-01
A completely integrated Sample Analysis Method (SAM) has been tested as part of the Contaminant Analysis Automation program. The SAM system was tested for polychlorinated biphenyl samples using five Standard Laboratory Modules™: two Soxtec™ modules, a high volume concentrator module, a generic materials handling module, and the gas chromatographic module. With over 300 samples completed within the first phase of the validation, recovery and precision data were comparable to manual methods. Based on experience derived from the first evaluation of the automated system, efforts are underway to improve sample recoveries and integrate a sample cleanup procedure. In addition, initial work in automating the extraction of semivolatile samples using this system will also be discussed.
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Prinn, Ronald; Webster, Mort
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
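The idea of propagating a parameter PDF into an ensemble of emissions scenarios can be shown with a toy Monte Carlo (the distributions and the one-line growth model below are illustrative assumptions, not the report's calibrated values):

```python
import random

random.seed(42)

def sample_emissions_path(years=50, e0=10.0):
    """Draw one emissions trajectory: emissions grow with GDP but
    are offset by an autonomous energy efficiency improvement
    (AEEI) rate; each rate is sampled once per scenario from an
    assumed normal PDF."""
    gdp_growth = random.gauss(0.025, 0.010)  # mean 2.5%/yr, sd 1.0%
    aeei = random.gauss(0.010, 0.005)        # mean 1.0%/yr, sd 0.5%
    e, path = e0, []
    for _ in range(years):
        e *= 1.0 + gdp_growth - aeei
        path.append(e)
    return path

# a large ensemble spans the joint uncertainty; a small set of
# representative scenarios can then be picked from its quantiles
ensemble = [sample_emissions_path() for _ in range(1000)]
finals = sorted(path[-1] for path in ensemble)
spread = (finals[50], finals[500], finals[950])  # ~5th/50th/95th pct
```

Selecting scenarios at fixed quantiles of the ensemble is one simple way to "usefully span the joint uncertainty space in a small number of scenarios," as the second objective describes.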
Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry
2016-01-01
Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies.
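The core advantage of a flexible smooth over a straight-line dose-response can be demonstrated in a few lines (a toy polynomial comparison, not the authors' GAMM fit, which additionally relaxes the independence assumption via random effects):

```python
import numpy as np

rng = np.random.default_rng(1)
dose = np.linspace(0.0, 10.0, 80)
# a non-monotone response with a local maximum, the kind of shape
# that binary probit/logistic dose-response models cannot capture
response = np.sin(dose) + 0.2 * dose + rng.normal(0.0, 0.1, dose.size)

def fit_rss(degree):
    """Least-squares polynomial fit; returns residual sum of squares."""
    coeffs = np.polyfit(dose, response, degree)
    return float(np.sum((np.polyval(coeffs, dose) - response) ** 2))

linear_rss = fit_rss(1)   # a straight line misses the local maximum
smooth_rss = fit_rss(5)   # a flexible basis captures the curvature
```

A GAMM replaces the global polynomial with penalized smooth terms and adds random effects for the non-independent replicate wells, but the gain over the linear model comes from the same mechanism shown here.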
Integration of Multiple Data Sources to Simulate the Dynamics of Land Systems.
Deng, Xiangzheng; Su, Hongbo; Zhan, Jinyan
2008-02-04
In this paper we present and develop a new model, which we have called Dynamics of Land Systems (DLS). The DLS model is capable of integrating multiple data sources to simulate the dynamics of a land system. Three main modules are incorporated in DLS: a spatial regression module, to explore the relationship between land uses and influencing factors; a scenario analysis module of the land uses of a region during the simulation period; and a spatial disaggregation module, to allocate land use changes from a regional level to disaggregated grid cells. A case study on Taips County in North China is incorporated in this paper to test the functionality of DLS. The simulation results under the baseline, economic priority and environmental scenarios help to understand the land system dynamics and project near-future land-use trajectories of a region, in order to focus management decisions on land uses and land use planning.
Integrative analysis of multiple diverse omics datasets by sparse group multitask regression
Lin, Dongdong; Zhang, Jigang; Li, Jingyao; He, Hao; Deng, Hong-Wen; Wang, Yu-Ping
2014-01-01
A variety of high-throughput genome-wide assays enable the exploration of genetic risk factors underlying complex traits. Although these studies have remarkable impact on identifying susceptible biomarkers, they suffer from issues such as limited sample size and low reproducibility. Combining individual studies of different genetic levels/platforms has the promise to improve the power and consistency of biomarker identification. In this paper, we propose a novel integrative method, namely sparse group multitask regression, for integrating diverse omics datasets, platforms, and populations to identify risk genes/factors of complex diseases. This method combines multitask learning with sparse group regularization, which will: (1) treat the biomarker identification in each single study as a task and then combine them by multitask learning; (2) group variables from all studies for identifying significant genes; (3) enforce a sparse constraint on groups of variables to overcome the "small sample, but large variables" problem. We introduce two sparse group penalties, sparse group lasso and sparse group ridge, in our multitask model, and provide an effective algorithm for each model. In addition, we propose a significance test for the identification of potential risk genes. Two simulation studies are performed to evaluate the performance of our integrative method by comparing it with a conventional meta-analysis method. The results show that our sparse group multitask method significantly outperforms the meta-analysis method. In an application to our osteoporosis studies, 7 genes are identified as significant genes by our method and are found to have significant effects in three other independent studies used for validation. The most significant gene SOD2 has been identified in our previous osteoporosis study involving the same expression dataset. Several other genes such as TREML2, HTR1E, and GLO1 are shown to be novel susceptible genes for osteoporosis, as confirmed from other
ERIC Educational Resources Information Center
Rimpiläinen, Sanna
2015-01-01
What do different research methods and approaches "do" in practice? The article seeks to discuss this point by drawing upon socio-material research approaches and empirical examples taken from the early stages of an extensive case study on an interdisciplinary project between two multidisciplinary fields of study, education and computer…
Yoshikawa, Miho; Zhang, Ming; Toyota, Koki
2017-01-01
Complete bioremediation of soils containing multiple volatile organic compounds (VOCs) remains a challenge. To explore the possibility of complete bioremediation through integrated anaerobic-aerobic biodegradation, laboratory feasibility tests followed by alternate anaerobic-aerobic and aerobic-anaerobic biodegradation tests were performed. Chlorinated ethylenes, including tetrachloroethylene (PCE), trichloroethylene (TCE), cis-dichloroethylene (cis-DCE), and vinyl chloride (VC), and dichloromethane (DCM) were used for anaerobic biodegradation, whereas benzene, toluene, and DCM were used for aerobic biodegradation tests. Microbial communities involved in the biodegradation tests were analyzed to characterize the major bacteria that may contribute to biodegradation. The results demonstrated that integrated anaerobic-aerobic biodegradation was capable of completely degrading the seven VOCs with initial concentration of each VOC less than 30 mg/L. Benzene and toluene were degraded within 8 days, and DCM was degraded within 20 to 27 days under aerobic conditions when initial oxygen concentrations in the headspaces of test bottles were set to 5.3% and 21.0%. Dehalococcoides sp., generally considered sensitive to oxygen, survived aerobic conditions for 28 days and was activated during the subsequent anaerobic biodegradation. However, degradation of cis-DCE was suppressed after oxygen exposure for more than 201 days, suggesting the loss of viability of Dehalococcoides sp., as they are the only known anaerobic bacteria that can completely biodegrade chlorinated ethylenes to ethylene. Anaerobic degradation of DCM following previous aerobic degradation was complete, and yet-unknown microbes may be involved in the process. The findings may provide a scientific and practical basis for the complete bioremediation of multiple contaminants in situ and a subject for further exploration.
Inclusion of Separation in Integral Boundary Layer Methods
NASA Astrophysics Data System (ADS)
Wallace, Brodie; O'Neill, Charles
2016-11-01
An integral boundary layer (IBL) method coupled with a potential flow solver allows aerodynamic flows to be simulated quickly, so that aircraft geometries can be rapidly designed and optimized. However, most current IBL methods lack the ability to accurately model three-dimensional separated flows. Various IBL equations and closure relations were investigated in an effort to develop an IBL method capable of modeling separation. Solution techniques, including Newton's method and the iterative linear solver GMRES, as well as methods for coupling an IBL method with a potential flow solver, were also investigated. Results for two-dimensional attached flow, as well as methods for extending an IBL method to model three-dimensional separation, are presented. Funding from NSF REU site Grant EEC 1358991 is greatly appreciated.
Linear Multistep Methods for Integrating Reversible Differential Equations
NASA Astrophysics Data System (ADS)
Evans, N. Wyn; Tremaine, Scott
1999-10-01
This paper studies multistep methods for the integration of reversible dynamical systems, with particular emphasis on the planar Kepler problem. It has previously been shown by Cano & Sanz-Serna that reversible linear multisteps for first-order differential equations are generally unstable. Here we report on a subset of these methods, the zero-growth methods, that evade these instabilities. We provide an algorithm for identifying these rare methods. We find and study all zero-growth, reversible multisteps with six or fewer steps. This select group includes two well-known second-order multisteps (the trapezoidal and explicit midpoint methods), as well as three new fourth-order multisteps, one of which is explicit. Variable time steps can be readily implemented without spoiling the reversibility. Tests on Keplerian orbits show that these new reversible multisteps work well on orbits with low or moderate eccentricity, although at least 100 steps per radian are required for stability.
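The explicit midpoint rule named above is the simplest time-reversible two-step method. A sketch of it on the harmonic oscillator (a stand-in for the Kepler problem; the forward-Euler bootstrap for the first step is an illustrative choice, not taken from the paper):

```python
def explicit_midpoint(f, y0, h, nsteps):
    """Two-step explicit midpoint rule y_{k+1} = y_{k-1} + 2h*f(y_k).
    Time-reversible; the first step is bootstrapped with forward Euler."""
    prev = y0
    curr = [a + h * b for a, b in zip(y0, f(y0))]
    for _ in range(nsteps - 1):
        fy = f(curr)
        prev, curr = curr, [a + 2 * h * b for a, b in zip(prev, fy)]
    return curr

# harmonic oscillator q' = p, p' = -q, starting on the unit circle
q, p = explicit_midpoint(lambda y: [y[1], -y[0]], [1.0, 0.0], 0.01, 1000)
```

Because the method is reversible and zero-growth on this oscillatory problem, the energy q^2 + p^2 stays close to its initial value instead of drifting the way a dissipative scheme would.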
NASA Astrophysics Data System (ADS)
Guo, Xixiong; Zhong, Chengwen; Zhuo, Congshan; Cao, Jun
2014-04-01
As a fundamental subject in fluid mechanics, sophisticated cavity flow patterns due to the movement of multiple lids have been routinely analyzed by the computational fluid dynamics community. Unlike previously reported computational studies conducted using more conventional numerical methods, this paper employs the multiple-relaxation-time (MRT) lattice Boltzmann method (LBM) to numerically investigate the two-dimensional cavity flows generated by the movements of two adjacent lids. The obtained MRT-LBM results reveal a number of important bifurcation flow features, such as the symmetry and steadiness of cavity flows at low Reynolds numbers, the multiplicity of stable cavity flow patterns when the Reynolds number exceeds its first critical value, as well as the periodicity of the cavity flow after the second critical Reynolds number is reached. Detailed flow characteristics are reported, including the critical Reynolds numbers, the locations of the vortex centers, and the values of the stream function at the vortex centers. Through systematic comparison against simulation results obtained elsewhere using the lattice Bhatnagar-Gross-Krook model and other numerical schemes, the MRT-LBM approach not only exhibits fairly satisfactory accuracy but also demonstrates remarkable flexibility: the adjustment of its multiple relaxation factors is fully manageable, which particularly suits the need to investigate the multiplicity of flow patterns with complex behaviors.
High-throughput sequencing of multiple amplicons for barcoding and integrative taxonomy
Cruaud, Perrine; Rasplus, Jean-Yves; Rodriguez, Lillian Jennifer; Cruaud, Astrid
2017-01-01
Until now, the potential of NGS for the construction of barcode libraries or integrative taxonomy has been seldom realised. Here, we amplified (two-step PCR) and simultaneously sequenced (MiSeq) multiple markers from hundreds of fig wasp specimens. We also developed a workflow for quality control of the data. Illumina and Sanger sequences accumulated in the past years were compared. Interestingly, primers and PCR conditions used for the Sanger approach did not require optimisation to construct the MiSeq library. After quality controls, 87% of the species (76% of the specimens) had a valid MiSeq sequence for each marker. Importantly, major clusters did not always correspond to the targeted loci. Nine specimens exhibited two divergent sequences (up to 10%). In 95% of the species, MiSeq and Sanger sequences obtained from the same sampling were similar. For the remaining 5%, species were paraphyletic or the sequences clustered into divergent groups on the Sanger + MiSeq trees (>7%). These problematic cases may represent coding NUMTS or heteroplasms. Our results illustrate that Illumina approaches are not artefact-free and confirm that Sanger databases can contain non-target genes. This highlights the importance of quality controls, working with taxonomists and using multiple markers for DNA-taxonomy or species diversity assessment. PMID:28165046
Fuzzy adaptive interacting multiple model nonlinear filter for integrated navigation sensor fusion.
Tseng, Chien-Hao; Chang, Chih-Wen; Jwo, Dah-Jing
2011-01-01
In this paper, the application of the fuzzy interacting multiple model unscented Kalman filter (FUZZY-IMMUKF) approach to integrated navigation processing for a maneuvering vehicle is presented. The unscented Kalman filter (UKF) employs a set of sigma points obtained through deterministic sampling, so that a linearization step is not necessary and the linearization errors of the traditional extended Kalman filter (EKF) are avoided. Nonlinear filters nevertheless suffer, to some extent, from the same problem as the EKF: uncertainty in the process and measurement noise degrades their performance. As a structural adaptation (model switching) mechanism, the interacting multiple model (IMM), which describes a set of switching models, can be utilized to determine an adequate value of the process noise covariance. A fuzzy logic adaptive system (FLAS) is employed to determine the lower and upper bounds of the system noise through a fuzzy inference system (FIS). The resulting sensor fusion strategy can efficiently deal with the nonlinear problem of vehicle navigation. The proposed FUZZY-IMMUKF algorithm shows remarkable improvement in navigation estimation accuracy as compared to relatively conventional approaches such as the UKF and IMMUKF.
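The deterministic sampling at the heart of the UKF can be shown in one dimension, where the unscented transform propagates a Gaussian through a nonlinearity via weighted sigma points instead of a Jacobian. A sketch using the standard alpha/beta/kappa scaling parameters (this is the generic transform, not the paper's full IMM filter):

```python
import math

def unscented_transform_1d(mean, var, g, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinearity g using
    the unscented transform: deterministic sigma points, no linearization."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigmas = [mean, mean + spread, mean - spread]
    wm = [lam / (n + lam), 1 / (2 * (n + lam)), 1 / (2 * (n + lam))]
    # covariance weights correct the central weight for the prior's kurtosis
    wc = [wm[0] + (1 - alpha**2 + beta), wm[1], wm[2]]
    ys = [g(s) for s in sigmas]
    y_mean = sum(w * y for w, y in zip(wm, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(wc, ys))
    return y_mean, y_var
```

For a linear map the transform is exact, which is a quick correctness check; its value over the EKF shows up only for genuinely nonlinear g.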
NASA Astrophysics Data System (ADS)
Dávila, Diana; Tarancón, Albert; Calaza, Carlos; Salleras, Marc; Fernández-Regúlez, Marta; Paulo, Alvaro San; Fonseca, Luis
2013-07-01
Low-dimensional structures have been shown to be promising candidates for enhancing the thermoelectric properties of semiconductors, paving the way for integration of thermoelectric generators into silicon microtechnology. With this aim, dense arrays of well-oriented and size-controlled silicon nanowires (Si NWs) obtained by the chemical vapor deposition (CVD)-vapor-liquid-solid (VLS) mechanism have been implemented into microfabricated structures to develop planar unileg thermoelectric microgenerators (μTEGs). Different low-thermal-mass suspended structures have been designed and microfabricated on silicon-on-insulator (SOI) substrates to operate as microthermoelements using p-type Si NW arrays as the thermoelectric material. To obtain nanowire arrays with effective lengths larger than normally attained by the VLS technique, structures composed of multiple ordered arrays consecutively bridged by transversal microspacers have been fabricated. The successive linkage of multiple Si NW arrays enabled the development of larger temperature differences while preserving good electrical contact. This gives rise to small internal thermoelement resistances, enhancing the performance of the devices as energy harvesters.
An integrated voice and data multiple-access scheme for a land-mobile satellite system
NASA Technical Reports Server (NTRS)
Li, V. O. K.; Yan, T.-Y.
1984-01-01
An analytical study is performed of the satellite requirements for a land mobile satellite system (LMSS). The spacecraft (MSAT-X) would be in GEO and would be compatible with multiple access by mobile radios and antennas and fixed stations. The FCC has received a petition from NASA to reserve the 821-825 and 866-870 MHz frequencies for the LMSS, while communications with fixed earth stations would be in the Ku band. In the original configuration considered, MSAT-X transponders would shift the signal frequencies and perform no onboard processing. Channel use would be governed by an integrated demand-assigned multiple access protocol, which would divide channels into reservation and information channels, overseen by a network management center. Further analyses will cover tradeoffs between data and voice users, the probability of blocking, and the performance impacts of on-board switching and variable bandwidth assignment. Initial calculations indicate that a large traffic volume can be handled with acceptable delays and voice blocking probabilities.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, S.; Smith, J.H.; Sniegowski, J.J.; McWhorter, P.J.
1998-08-25
A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry. 13 figs.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, Stephen; Smith, James H.; Sniegowski, Jeffry J.; McWhorter, Paul J.
1998-01-01
A method for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry.
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
Systematic research to develop and validate methods for the structural sizing of an airframe designed with the use of composite materials and active controls is reported. The research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate a structural sizing and the associated active control system that is optimal with respect to a given merit function and constrained by strength and aeroelasticity requirements.
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents an optimal production and distribution management scheme for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network, and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem whose objective is to minimize the overall cost of the integrated DHS while satisfying the operational constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
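The piecewise linear production-cost approximation used in such MILP formulations can be illustrated with a small evaluator; the breakpoint data below are invented for illustration and are not from the paper:

```python
def piecewise_linear_cost(q, breakpoints):
    """Evaluate a piecewise linear cost curve at output level q.
    breakpoints is a sorted list of (output, cost) pairs, the same data a
    MILP formulation would encode with per-segment variables."""
    q0, c0 = breakpoints[0]
    if q <= q0:
        return c0
    for q1, c1 in breakpoints[1:]:
        if q <= q1:
            # linear interpolation on the active segment
            return c0 + (c1 - c0) * (q - q0) / (q1 - q0)
        q0, c0 = q1, c1
    return c0  # beyond the last breakpoint: clamp at maximum output
```

In the MILP itself each segment gets a continuous weight variable and the convexity/adjacency conditions are enforced with binaries; this function is only the curve those constraints reproduce.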
Ducrotoy, M J; Yahyaoui Azami, H; El Berbri, I; Bouslikhane, M; Fassi Fihri, O; Boué, F; Petavy, A F; Dakkak, A; Welburn, S; Bardosh, K L
2015-12-01
Integrating the control of multiple neglected zoonoses at the community-level holds great potential, but critical data is missing to inform the design and implementation of different interventions. In this paper we present an evaluation of an integrated health messaging intervention, using powerpoint presentations, for five bacterial (brucellosis and bovine tuberculosis) and dog-associated (rabies, cystic echinococcosis and leishmaniasis) zoonotic diseases in Sidi Kacem Province, northwest Morocco. Conducted by veterinary and epidemiology students between 2013 and 2014, this followed a process-based approach that encouraged sequential adaptation of images, key messages, and delivery strategies using auto-evaluation and end-user feedback. We describe the challenges and opportunities of this approach, reflecting on who was targeted, how education was conducted, and what tools and approaches were used. Our results showed that: (1) replacing words with local pictures and using "hands-on" activities improved receptivity; (2) information "overload" easily occurred when disease transmission pathways did not overlap; (3) access and receptivity at schools was greater than at the community-level; and (4) piggy-backing on high-priority diseases like rabies offered an important avenue to increase knowledge of other zoonoses. We conclude by discussing the merits of incorporating our validated education approach into the school curriculum in order to influence long-term behaviour change.
The blackboard model - A framework for integrating multiple cooperating expert systems
NASA Technical Reports Server (NTRS)
Erickson, W. K.
1985-01-01
The use of an artificial intelligence (AI) architecture known as the blackboard model is examined as a framework for designing and building distributed systems requiring the integration of multiple cooperating expert systems (MCXS). Aerospace vehicles provide many examples of potential systems, ranging from commercial and military aircraft to spacecraft such as satellites, the Space Shuttle, and the Space Station. One such system, free-flying, spaceborne telerobots to be used in construction, servicing, inspection, and repair tasks around NASA's Space Station, is examined. The major difficulties found in designing and integrating the individual expert system components necessary to implement such a robot are outlined. The blackboard model, a general expert system architecture which seems to address many of the problems found in designing and building such a system, is discussed. A progress report on a prototype system under development called DBB (Distributed BlackBoard model) is given. The prototype will act as a testbed for investigating the feasibility, utility, and efficiency of MCXS-based designs developed under the blackboard model.
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
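The bias-correction (BC*) methods compared above share an empirical quantile-mapping core: a model value is replaced by the observed value at the same climatological quantile. A bare-bones nearest-rank sketch (real implementations interpolate between quantiles and treat distribution tails more carefully):

```python
def quantile_map(value, model_hist, obs_hist):
    """Empirical quantile mapping: find `value`'s quantile in the model
    climatology, then return the observed value at the same quantile."""
    m = sorted(model_hist)
    o = sorted(obs_hist)
    q = (sum(1 for x in m if x <= value) - 0.5) / len(m)  # empirical CDF
    idx = min(max(int(q * len(o)), 0), len(o) - 1)
    return o[idx]
```

A constant model bias is removed exactly by this mapping, while distribution-shape errors are corrected quantile by quantile.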
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
Support Operators Method for the Diffusion Equation in Multiple Materials
Winters, Andrew R.; Shashkov, Mikhail J.
2012-08-14
A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
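The symmetric, discontinuity-tolerant discretization the abstract describes can be illustrated on the steady limit -(k u')' = 0 with a conductivity jump. A minimal sketch with nodal unknowns, interval-wise conductivity, and a tridiagonal (Thomas) solve; this is an illustration of the flux-balance idea, not the paper's mimetic support-operators scheme:

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by forward elimination / back substitution."""
    n = len(diag)
    diag, rhs = diag[:], rhs[:]
    for i in range(1, n):
        w = sub[i - 1] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    x = [0.0] * n
    x[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - sup[i] * x[i + 1]) / diag[i]
    return x

def steady_diffusion(k, uL=0.0, uR=1.0):
    """Steady 1-D diffusion -(k u')' = 0 on a uniform mesh of len(k)
    intervals; k[i] is the conductivity of interval i.  The flux balance
    at each interior node gives a symmetric tridiagonal system, so a
    material jump needs no special interface treatment."""
    n = len(k)
    diag = [k[i - 1] + k[i] for i in range(1, n)]
    sub = [-k[i] for i in range(1, n - 1)]
    sup = [-k[i] for i in range(1, n - 1)]
    rhs = [0.0] * (n - 1)
    rhs[0] += k[0] * uL       # Dirichlet boundary contributions
    rhs[-1] += k[n - 1] * uR
    return [uL] + thomas(sub, diag, sup, rhs) + [uR]
```

With k = 1 on the left half and k = 4 on the right, the discrete solution reproduces the exact interface value u(0.5) = k2/(k1+k2) because the scheme conserves the flux across the jump.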
Method and apparatus for determining material structural integrity
Pechersky, Martin
1996-01-01
A non-destructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis techniques to determine the damping loss factor of a material. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity as a function of time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method. If an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the amount of coil current used in vibrating the magnet. If a reciprocating transducer is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by a force gauge in the reciprocating transducer. Using known vibrational analysis methods, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity measurements. The damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
Noor-Mohammadi, Samaneh; Pourmir, Azadeh; Johannes, Tyler W
2012-11-01
Recombinant protein expression in the chloroplasts of green algae has recently become more routine; however, the heterologous expression of multiple proteins or complete biosynthetic pathways remains a significant challenge. Here, we show that a modified DNA Assembler approach can be used to rapidly assemble multiple-gene biosynthetic pathways in yeast and then integrate these assembled pathways at a site-specific location in the chloroplast genome of the microalgal species Chlamydomonas reinhardtii. As a proof of concept, this method was used to successfully integrate and functionally express up to three reporter proteins (AphA6, AadA, and GFP) in the chloroplast of C. reinhardtii. An analysis of the relative gene expression of the engineered strains showed significant differences in the mRNA expression levels of the reporter genes and thus highlights the importance of proper promoter/untranslated region selection when constructing a target pathway. This new method represents a useful genetic tool in the construction and integration of complex biochemical pathways into the chloroplast genome of microalgae and should aid current efforts to engineer algae for biofuels production and other desirable natural products.
Efficient Fully Implicit Time Integration Methods for Modeling Cardiac Dynamics
Rose, Donald J.; Henriquez, Craig S.
2013-01-01
Implicit methods are well known to have greater stability than explicit methods for stiff systems, but they often are not used in practice due to perceived computational complexity. This paper applies the Backward Euler method and a second-order one-step two-stage composite backward differentiation formula (C-BDF2) to the monodomain equations arising from mathematically modeling the electrical activity of the heart. The C-BDF2 scheme is an L-stable implicit time integration method and is easily implementable. It uses the simplest Forward Euler and Backward Euler methods as fundamental building blocks. The nonlinear system resulting from application of the Backward Euler method to the monodomain equations is solved for the first time by a nonlinear elimination method, which eliminates local and non-symmetric components by using a Jacobian-free Newton solver, called a Newton-Krylov solver. Unlike other fully implicit methods proposed for the monodomain equations in the literature, the Jacobian of the global system after the nonlinear elimination has much smaller size, is symmetric, and is possibly positive definite, so it can be solved efficiently by standard optimal solvers. Numerical results are presented demonstrating that the C-BDF2 scheme can yield accurate results in less CPU time than explicit methods for both a single patch and spatially extended domains. PMID:19126449
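The stability gap between explicit and implicit integration that motivates the paper already shows up on the scalar stiff test equation y' = -a*y. A sketch of the two building-block methods the C-BDF2 scheme composes (this is the model problem, not the monodomain solver):

```python
def forward_euler(a, y0, h, nsteps):
    # explicit update y <- y + h*(-a*y); the amplification factor is
    # (1 - h*a), so the iteration diverges once h*a > 2
    y = y0
    for _ in range(nsteps):
        y += h * (-a * y)
    return y

def backward_euler(a, y0, h, nsteps):
    # implicit update y_new = y + h*(-a*y_new), solvable in closed form
    # for this linear problem; the factor 1/(1 + h*a) decays for any h > 0
    y = y0
    for _ in range(nsteps):
        y /= 1.0 + h * a
    return y
```

With a = 100 and h = 0.1, forward Euler multiplies the solution by -9 per step and explodes, while backward Euler decays monotonically, mirroring the step-size freedom the paper exploits for the stiff monodomain equations.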
A survey of motif discovery methods in an integrated framework
Sandve, Geir Kjetil; Drabløs, Finn
2006-01-01
Background There has been a growing interest in computational discovery of regulatory elements, and a multitude of motif discovery methods have been proposed. Computational motif discovery has been used with some success in simple organisms like yeast. However, as we move to higher organisms with more complex genomes, more sensitive methods are needed. Several recent methods try to integrate additional sources of information, including microarray experiments (gene expression and ChIP-chip). There is also a growing awareness that regulatory elements work in combination, and that this combinatorial behavior must be modeled for successful motif discovery. However, the multitude of methods and approaches makes it difficult to get a good understanding of the current status of the field. Results This paper presents a survey of methods for motif discovery in DNA, based on a structured and well-defined framework that integrates all relevant elements. Existing methods are discussed according to this framework. Conclusion The survey shows that although no single method takes all relevant elements into consideration, a very large number of different models treating the various elements separately have been tried. Very often the choices that have been made are not explicitly stated, making it difficult to compare different implementations. Also, the tests that have been used are often not comparable. Therefore, a stringent framework and improved test methods are needed to evaluate the different approaches in order to conclude which ones are most promising. Reviewers: This article was reviewed by Eugene V. Koonin, Philipp Bucher (nominated by Mikhail Gelfand) and Frank Eisenhaber. PMID:16600018
Methods and systems for integrating fluid dispensing technology with stereolithography
Medina, Francisco; Wicker, Ryan; Palmer, Jeremy A.; Davis, Don W.; Chavez, Bart D.; Gallegos, Phillip L.
2010-02-09
An integrated system and method of integrating fluid dispensing technologies (e.g., direct-write (DW)) with rapid prototyping (RP) technologies (e.g., stereolithography (SL)) without part registration, comprising: an SL apparatus and a fluid dispensing apparatus further comprising a translation mechanism adapted to translate the fluid dispensing apparatus along the X-, Y-, and Z-axes. The fluid dispensing apparatus comprises: a pressurized fluid container; a valve mechanism adapted to control the flow of fluid from the pressurized fluid container; and a dispensing nozzle adapted to deposit the fluid in a desired location. To aid in calibration, the integrated system includes a laser sensor and a mechanical switch. The method further comprises building a second part layer on top of the fluid deposits and optionally accommodating multi-layered circuitry by incorporating a connector trace. Thus, the present invention is capable of efficiently building single- and multi-material SL fabricated parts embedded with complex three-dimensional circuitry using DW.
2014-01-01
Background As an abstract mapping of the gene regulation in the cell, the gene regulatory network is important to both biological research and practical applications. The reverse engineering of gene regulatory networks from microarray gene expression data is a challenging research problem in systems biology. With the development of biological technologies, multiple time-course gene expression datasets might be collected for a specific gene network under different circumstances. The inference of a gene regulatory network can be improved by integrating these multiple datasets. It is also known that gene expression data may be contaminated with large errors or outliers, which may affect the inference results. Results A novel method, Huber group LASSO, is proposed to infer the same underlying network topology from multiple time-course gene expression datasets while taking robustness to large errors or outliers into account. To solve the optimization problem involved in the proposed method, an efficient algorithm which combines the ideas of auxiliary function minimization and block descent is developed. A stability selection method is adapted to our method to find a network topology consisting of edges with scores. The proposed method is applied to both simulation datasets and real experimental datasets. It shows that Huber group LASSO outperforms the group LASSO in terms of both the areas under the receiver operating characteristic curves and the areas under the precision-recall curves. Conclusions The convergence analysis of the algorithm theoretically shows that the sequence generated by the algorithm converges to the optimal solution of the problem. The simulation and real data examples demonstrate the effectiveness of the Huber group LASSO in integrating multiple time-course gene expression datasets and improving the resistance to large errors or outliers. PMID:25350697
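The robustness mechanism behind the method above is the Huber loss: quadratic for small residuals, linear in the tails, so a single outlier cannot dominate the fit the way it does under squared error. A sketch (the threshold parameter `delta` follows the usual convention and is not a value from the paper):

```python
def huber(r, delta=1.0):
    """Huber loss of a residual r: 0.5*r^2 for |r| <= delta,
    delta*(|r| - 0.5*delta) beyond, so its slope is capped at delta."""
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)
```

For a residual of 3 the squared loss is 4.5 while the Huber loss is 2.5; the gap grows linearly with the outlier's size, which is exactly the bounded-influence property the group LASSO variant inherits.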
Method and apparatus for determining material structural integrity
Pechersky, M.J.
1994-01-01
Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
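The damping estimate described above operates on the drive-point mobility (velocity per unit force) near resonance. A minimal sketch follows, using the half-power bandwidth of a single-degree-of-freedom mobility curve as an illustrative stand-in for the resonance dwell method; the synthetic system and function names are assumptions, not the patented procedure.

```python
import numpy as np

def drive_point_mobility(freqs, m, k, eta):
    # Mobility Y(w) = v/F = i*w / (k*(1 + i*eta) - m*w^2) for a
    # single-DOF system with structural damping loss factor eta.
    w = 2 * np.pi * freqs
    return 1j * w / (k * (1 + 1j * eta) - m * w ** 2)

def loss_factor_half_power(freqs, mobility):
    # Estimate eta from the half-power (-3 dB) bandwidth of |Y|:
    # eta ~ (f2 - f1) / fn for light damping.
    mag = np.abs(mobility)
    i_pk = int(np.argmax(mag))
    half = mag[i_pk] / np.sqrt(2.0)
    lo = i_pk
    while lo > 0 and mag[lo] > half:
        lo -= 1
    hi = i_pk
    while hi < len(mag) - 1 and mag[hi] > half:
        hi += 1
    return (freqs[hi] - freqs[lo]) / freqs[i_pk]
```

On a synthetic plate mode at 100 Hz with eta = 0.02, the bandwidth estimate recovers the loss factor to within a fraction of a percent of critical.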
Integrated Force Method Solution to Indeterminate Structural Mechanics Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.
2004-01-01
Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, based primarily on the equilibrium concept, is well understood. Solving indeterminate problems requires additional compatibility conditions, and their comprehension has not been complete. A solution to an indeterminate problem is traditionally generated by manipulating the equilibrium concept, either by rewriting it in the displacement variables or through the cutting and gap-closing technique of the redundant force method. Improvising the compatibility conditions has made analysis cumbersome. The authors have researched and clarified the compatibility theory, so that solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from forces. IFM equations are manipulated to obtain the Dual Integrated Force Method (IFMD), in which displacement is the primary variable and force is back-calculated. The subject is introduced through the response variables (force, deformation, displacement) and underlying concepts (equilibrium equation, force-deformation relation, deformation-displacement relation, and compatibility condition). Mechanical load, temperature variation, and support settling are equally emphasized. The basic theory is discussed, and a set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.
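The idea of taking forces as primary unknowns, with equilibrium and compatibility assembled on an equal footing, can be sketched for the simplest indeterminate case: a bar fixed at both ends with an axial load at the junction of its two members. This is a toy illustration under assumed names, not the paper's general formulation.

```python
import numpy as np

def ifm_two_member_bar(P, L1, L2, A, E):
    # Integrated-Force-Method-style sketch for a bar fixed at both
    # ends with axial load P at the junction (1 degree of static
    # indeterminacy).  Member forces F1, F2 are the primary unknowns.
    #   Row 1: nodal equilibrium          F1 - F2 = P
    #   Row 2: compatibility (e1 + e2 = 0): (L1/AE) F1 + (L2/AE) F2 = 0
    S = np.array([[1.0, -1.0],
                  [L1 / (A * E), L2 / (A * E)]])
    F = np.linalg.solve(S, np.array([P, 0.0]))
    # Displacement is back-calculated from the forces
    # (elongation of member 1).
    u = F[0] * L1 / (A * E)
    return F, u
```

For P = 10 with member lengths 2 and 3, the solve gives F1 = 6 (tension) and F2 = -4 (compression), satisfying both rows simultaneously, which is the point of carrying equilibrium and compatibility together.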
Integral structural-functional method for characterizing microbial populations
NASA Astrophysics Data System (ADS)
Yakushev, A. V.
2015-04-01
An original integral structural-functional method has been proposed for characterizing microbial communities. The novelty of the approach is the in situ study of microorganisms based on the growth kinetics of microbial associations in liquid nutrient broth media under selective conditions, rather than on the level of taxa or large functional groups. The method involves the analysis of the integral growth model of a periodic culture. The kinetic parameters of such associations reflect their capacity to grow on different media, i.e., their physiological diversity, and the metabolic capacity of the microorganisms for growth on a nutrient medium. Therefore, the obtained parameters are determined by the features of the microbial ecological strategies. Inoculation of a dense medium from the original inoculum allows characterizing the taxonomic composition of the dominants in the soil community. Inoculation from the associations developed on selective media characterizes the composition of syntrophic groups, which fulfill a specific function in nature. This method is of greater information value than the classical methods of inoculation on selective media.
Gregg, Watson W; Rousseaux, Cécile S
2014-01-01
Quantifying change in ocean biology using satellites is a major scientific objective. We document trends globally for the period 1998–2012 by integrating three diverse methodologies: ocean color data from multiple satellites, bias correction methods based on in situ data, and data assimilation to provide a consistent and complete global representation free of sampling biases. The results indicated no significant trend in global pelagic ocean chlorophyll over the 15 year data record. These results were consistent with previous findings that were based on the first 6 years and first 10 years of the SeaWiFS mission. However, all of the Northern Hemisphere basins (north of 10° latitude), as well as the Equatorial Indian basin, exhibited significant declines in chlorophyll. Trend maps showed the local trends and their change in percent per year. These trend maps were compared with several other previous efforts using only a single sensor (SeaWiFS) and more limited time series, showing remarkable consistency. These results suggested the present effort provides a path forward to quantifying global ocean trends using multiple satellite missions, which is essential if we are to understand the state, variability, and possible changes in the global oceans over longer time scales. PMID:26213675
Method and Apparatus for Simultaneous Processing of Multiple Functions
NASA Technical Reports Server (NTRS)
Stoica, Adrian (Inventor); Andrei, Radu (Inventor); Zhu, David (Inventor); Mojarradi, Mohammad Mehdi (Inventor); Vo, Tuan A. (Inventor)
2015-01-01
Electronic logic gates that operate using N logic state levels, where N is greater than 2, and methods of operating such gates. The electronic logic gates operate according to truth tables. At least two input signals each having a logic state that can range over more than two logic states are provided to the logic gates. The logic gates each provide an output signal that can have one of N logic states. Examples of gates described include NAND/NAND gates having two inputs A and B and NAND/NAND gates having three inputs A, B, and C, where A, B and C can take any of four logic states. Systems using such gates are described, and their operation illustrated. Optical logic gates that operate using N logic state levels are also described.
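One common convention in multi-valued logic (an assumption here; the patent's own truth tables are not reproduced) defines NAND as the complement of the minimum of the inputs, which reduces to the familiar Boolean NAND when N = 2:

```python
def mvl_nand(inputs, n=4):
    # Multi-valued NAND under the min/complement convention:
    # output = (N - 1) - min(inputs).  With N = 2 this is Boolean NAND.
    if any(not 0 <= v < n for v in inputs):
        raise ValueError("logic level out of range")
    return (n - 1) - min(inputs)
```

With four logic states (0..3), `mvl_nand([0, 3])` yields 3 and `mvl_nand([3, 3])` yields 0, and the gate accepts three inputs A, B, C just as well as two.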
2012-01-01
Background The increasing prevalence of multiple chronic conditions has accentuated the importance of coordinating and integrating health care services. Patients with better continuity of care (COC) have a lower utilization rate of emergency department (ED) services, lower hospitalization and better care outcomes. Previous COC studies have focused on the care outcome of patients with a single chronic condition or that of physician-patient relationships; few studies have investigated the care outcome of patients with multiple chronic conditions. Using multi-chronic patients as subjects, this study proposes an integrated continuity of care (ICOC) index to verify the association between COC and care outcomes for two scopes of chronic conditions, at physician and medical facility levels. Methods This study used a dataset of 280,840 subjects, obtained from the Longitudinal Health Insurance Database (LHID 2005), compiled by the National Health Research Institutes, of the National Health Insurance Bureau of Taiwan. Principal Component Analysis (PCA) was used to integrate the indices of density, dispersion and sequence into ICOC to measure COC outcomes - the utilization rate of ED services and hospitalization. A Generalized Estimating Equations model was used to verify the care outcomes. Results We discovered that the higher the COC at medical facility level, the lower the utilization rate of ED services and hospitalization for patients; by contrast, the higher the COC at physician level, the higher the utilization rate of ED services (odds ratio > 1; Exp(β) = 2.116) and hospitalization (odds ratio > 1; Exp(β) = 1.688). When only those patients with major chronic conditions with the highest number of medical visits were considered, it was found that the higher the COC at both medical facility and physician levels, the lower the utilization rate of ED services and hospitalization. Conclusions The study shows that ICOC is more stable than single indices and
Accuracy of Multiple Pour Cast from Various Elastomer Impression Methods.
Haralur, Satheesh B; Saad Toman, Majed; Ali Al-Shahrani, Abdullah; Ali Al-Qarni, Abdullah
2016-01-01
An accurate duplicate cast obtained from a single impression reduces professional clinical time, patient inconvenience, and extra material cost. A stainless steel working cast model assembly consisting of two abutments and one pontic area was fabricated. Two sets of six custom aluminum trays each were fabricated, one with a five mm spacer and one with a two mm spacer. The impression methods evaluated during the study were additional silicone putty reline (two steps), heavy-light body (one step), monophase (one step), and polyether (one step). Type IV gypsum casts were poured at intervals of one hour, 12 hours, 24 hours, and 48 hours. The resultant casts were measured with a traveling microscope for comparative dimensional accuracy. The data obtained were subjected to an Analysis of Variance test at a significance level of <0.05. The dies obtained from the two-step putty reline impression technique had a percentage variation in height of -0.36 to -0.97%, while the diameter increased by 0.40-0.90%. The corresponding values for one-step heavy-light body dies, additional silicone monophase dies, and polyether dies were -0.73 to -1.21%, -1.34%, and -1.46% for the height and 0.50-0.80%, 1.20%, and -1.30% for the width, respectively.
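The reported percentages follow from the signed relative deviation of each die dimension from the stainless steel master, (cast - master) / master x 100. A trivial sketch; the dimensions below are hypothetical, not the study's measurements.

```python
def percent_variation(master, cast):
    # Dimensional accuracy as the signed percent deviation of the
    # cast die from the master dimension (negative = undersized).
    return (cast - master) / master * 100.0
```

A die height of 9.9 mm against a 10.0 mm master gives -1.0%, in the same range as the deviations reported above.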
Community Engagement in US Biobanking: Multiplicity of Meaning and Method
Haldeman, Kaaren M.; Cadigan, R. Jean; Davis, Arlene; Goldenberg, Aaron; Henderson, Gail E.; Lassiter, Dragana; Reavely, Erik
2014-01-01
Background/Aims Efforts to improve individual and population health increasingly rely on large scale collections of human biological specimens and associated data. Such collections or “biobanks” are hailed as valuable resources for facilitating translational biomedical research. However, biobanks also raise important ethical considerations, such as whether, how and why biobanks might engage with those who contributed specimens. This paper examines perceptions and practices of community engagement (CE) among individuals who operate six diverse biobanks in the U.S. Methods Twenty-four people from a diverse group of six biobanks were interviewed in-person or via telephone from March-July, 2011. Interview transcripts were coded and analyzed for common themes. Results Emergent themes include how biobank personnel understand “community” and community engagement as it pertains to biobank operations; information regarding the diversity of practices of CE; and the reasons why biobanks conduct CE. Conclusion Despite recommendations from federal agencies to conduct CE, the interpretation of CE varies widely among biobank employees, ultimately affecting how CE is practiced and what goals are achieved. PMID:24556734
Versatile method to generate multiple types of micropatterns.
Segerer, Felix Jakob; Röttgermann, Peter Johan Friedrich; Schuster, Simon; Piera Alberola, Alicia; Zahler, Stefan; Rädler, Joachim Oskar
2016-03-22
Micropatterning techniques have become an important tool for the study of cell behavior in controlled microenvironments. As a consequence, several approaches for the creation of micropatterns have been developed in recent years. However, the diversity of substrates, coatings, and complex patterns used in cell science is so great that no single existing technique is capable of fabricating designs suitable for all experimental conditions. Hence, there is a need for patterning protocols that are flexible with regard to the materials used and compatible with different patterning strategies to create more elaborate setups. In this work, the authors present a versatile approach to micropatterning. The protocol is based on plasma treatment, protein coating, and a poly(L-lysine)-grafted-poly(ethylene glycol) backfill step, and produces homogeneous patterns on a variety of substrates. Protein density within the patterns can be controlled, and density gradients of surface-bound protein can be formed. Moreover, by combining the method with microcontact printing, it is possible to generate patterns composed of three different components within one iteration of the protocol. The technique is simple to implement and should enable cell science labs to create a broad range of complex and highly specialized microenvironments.
Remotely variable multiple bore ram system and method
Carnahan, D.A.
1989-05-02
A ram assembly is described for use in a ram type blow out preventer, comprising: a thrust rod defining a longitudinal axis; a ram block body interconnected to the rod and defining a vertical cylindrical lock pin hole normal to the longitudinal axis of the thrust rod; a ram shoe preselected from a plurality of parts thereof having different bores, the shoe having a load shoulder with a vertical cylindrical pin hole extending through the shoulder, and further having pipe faces and a seal disposed between the pipe faces and circumscribing the shoe; and a lock pin means having a collet hole disposed in one end for releasably interconnecting the ram shoe and the ram block when the lock pin means, the lock pin hole, and the pin hole are coaligned in vertical registry with the lock pin means disposed in the lock pin hole and the pin hole. The patent also describes a method using a running tool suspended from a drill string for changing from a first location a pair of ram shoes releasably interconnected by pins to respective rams of a ram type blow out preventer.
Application of the boundary integral method to immiscible displacement problems
Masukawa, J.; Horne, R.N.
1988-08-01
This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
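For reference, Abel's equation relates the measured data g to the unknown f through a weakly singular kernel, g(x) = integral from 0 to x of f(t)/sqrt(x - t) dt. A naive product-integration solver is sketched below: illustrative only, assuming noise-free data, and deliberately not one of the four stable methods compared in the paper, which exist precisely because this direct approach degrades under noise.

```python
import numpy as np

def abel_solve(g, x):
    # Solve g(x) = int_0^x f(t) / sqrt(x - t) dt by product
    # integration: f is taken piecewise constant on each cell and the
    # singular kernel is integrated exactly over each cell, giving a
    # lower-triangular system solved by forward substitution.
    n = len(x) - 1
    f = np.zeros(n)
    for i in range(1, n + 1):
        # exact cell integrals of 1/sqrt(x_i - t)
        w = 2.0 * (np.sqrt(x[i] - x[:i]) - np.sqrt(x[i] - x[1:i + 1]))
        f[i - 1] = (g[i] - w[:-1] @ f[:i - 1]) / w[-1]
    return f
```

For f = 1 the data are g(x) = 2*sqrt(x), and the scheme recovers the constant exactly because the piecewise-constant assumption holds.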
Fourier-sparsity integrated method for complex target ISAR imagery.
Gao, Xunzhang; Liu, Zhen; Chen, Haowen; Li, Xiang
2015-01-26
In the existing sparsity-driven inverse synthetic aperture radar (ISAR) imaging framework, a sparse recovery (SR) algorithm is usually applied to azimuth compression to achieve high resolution in the cross-range direction. For range compression, however, direct application of an SR algorithm is not very effective because the scattering centers resolved in the high resolution range profiles at different view angles always exhibit irregular range cell migration (RCM), especially for complex targets, which will blur the ISAR image. To alleviate the sparse recovery-induced RCM in range compression, a sparsity-driven framework for ISAR imaging named the Fourier-sparsity integrated (FSI) method is proposed in this paper, which can simultaneously achieve better focusing performance in both the range and cross-range domains. Experiments using simulated and real data demonstrate the superiority of the proposed framework over existing sparsity-driven methods and range-Doppler methods.
Application of integrated methods in mapping waste disposal areas
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Papadopoulos, Nikos; Papadopoulos, Ilias; Kouli, Maria; Vallianatos, Filippos; Sarris, Apostolos; Manios, Thrassyvoulos
2007-11-01
An integrated suite of environmental methods was used to characterize the hydrogeological, geological and tectonic regime of the largest waste disposal landfill of Crete Island, the Fodele municipal solid waste site (MSW), to determine the geometry of the landfill (depth and spatial extent of electrically conductive anomalies), to define the anisotropy caused by bedrock fabric fractures and to locate potential zones of electrically conductive contamination. A combination of geophysical methods and chemical analysis was implemented for the characterization and management of the landfill. Five different types of geophysical surveys were performed: (1) 2D electrical resistance tomography (ERT), (2) electromagnetic measurements using very low frequencies (VLF), (3) electromagnetic conductivity (EM31), (4) seismic refraction measurements (SR), and (5) ambient noise measurements (HVSR). The above geophysical methods were used with the aim of studying the subsurface properties of the landfill and to define the exact geometrical characteristics of the site under investigation.
NASA Astrophysics Data System (ADS)
Masychev, Victor I.
2000-11-01
In this research we present the results of evaluating two methods of optical caries diagnostics: PNC-spectral diagnostics and caries detection by laser integral fluorescence. The research was conducted in a dental clinic. The PNC method analyzes parameters of the probing laser radiation and the PNC spectra of stimulated secondary radiation: backscattering and endogenous fluorescence of the bacteria involved in caries. A He-Ne laser ((lambda) = 632.8 nm, 1-2 mW) was used as the source of probing (stimulating) radiation, and a PDA detector was applied for registering the signals received from intact and pathological teeth. The PNC spectra were processed by special algorithms and displayed on a PC monitor. The method of laser integral fluorescence was used for comparison; in this case the integral fluorescence power of human teeth was measured. Diode lasers ((lambda) = 655 nm, 0.1 mW and 630 nm, 1 mW) and a He-Ne laser were applied as sources of probing (stimulating) radiation, a Si photodetector was used for signal registration, and the integral power was shown on a digital indicator. Advantages and disadvantages of the two methods are described. The method of laser integral fluorescence is notable for its simplicity of construction and circuit design. The method of PNC-spectral diagnostics, however, shows considerably greater sensitivity in diagnosing initial caries and the capability to differentiate pathologies at various stages (for example, calculus versus initial caries). Estimation of the spectral characteristics of PNC signals eliminates a number of drawbacks characteristic of detection by laser integral fluorescence (for instance, fluorescent fillings, plaque, calculus, general discolorations, amalgam, and gold fillings being detected as if they were caries).
Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong
2015-02-01
Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate via free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.
NASA Astrophysics Data System (ADS)
Wang, Qiao; Zhou, Wei; Cheng, Yonggang; Ma, Gang; Chang, Xiaolin
2017-04-01
A line integration method (LIM) is proposed to calculate the domain integrals for 3D problems. In the proposed method, the domain integrals are transformed into boundary integrals, and only line integrals on straight lines need to be computed. A background cell structure is applied to further simplify the line integrals and improve the accuracy. The method creates elements only on the boundary, and the integral lines are created from the boundary elements. The procedure is quite suitable for the boundary element method, and we have applied it to 3D situations. Directly applying the method is time-consuming since the complexity of the computational time is O(NM), where N and M are the numbers of nodes and lines, respectively. To overcome this problem, the fast multipole method is used with the LIM for large-scale computation. The numerical results show that the proposed method is efficient and accurate.
Integrating the Lactational Amenorrhea Method into a family planning program in Ecuador.
Wade, K B; Sevilla, F; Labbok, M H
1994-01-01
This paper reports the results of a 12-month implementation study documenting the process of integrating the Lactational Amenorrhea Method (LAM) into a multiple-method family planning service-delivery organization, the Centro Médico de Orientación y Planificación Familiar (CEMOPLAF), in Ecuador. LAM was introduced as a family planning option in four CEMOPLAF clinics. LAM was accepted by 133 breastfeeding women during the program's first five months, representing about one-third of postpartum clients. Seventy-three percent of LAM acceptors were new to any family planning method. Follow-up interviews with a systematic sample of 67 LAM users revealed that the method was generally used correctly. Three pregnancies were reported, none by women who were following LAM as recommended. Service providers' knowledge of LAM resulted in earlier IUD insertions among breastfeeding women. Relationships with other maternal and child health organizations and programs were also established.
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data is fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighboring countries' borderland regions, the data is difficult for any one research institution or government to acquire. Volunteered geographic information (VGI) has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost; VGI is therefore a reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource. But the OSM data model is far different from traditional authoritative geographic information, so the OSM data needs to be converted to the researcher's customized data model. Because the real world changes fast, the converted data also needs to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method is presented for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semiautomatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
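The change-only extraction step, selecting created, modified, or deleted objects that fall inside the borderland research area, can be sketched as a bounding-box filter over diff records. The data layout and function names below are assumptions for illustration, not the authors' system.

```python
def in_bbox(lat, lon, bbox):
    # bbox = (min_lat, min_lon, max_lat, max_lon)
    return bbox[0] <= lat <= bbox[2] and bbox[1] <= lon <= bbox[3]

def select_changes(diff_nodes, bbox):
    # Keep only nodes from a daily diff whose action is one of the
    # OsmChange verbs (create / modify / delete) and whose position
    # lies inside the research area.
    return [n for n in diff_nodes
            if n["action"] in ("create", "modify", "delete")
            and in_bbox(n["lat"], n["lon"], bbox)]
```

In practice the diff records would be parsed from the OsmChange XML of the daily diff file; here plain dicts stand in for those parsed objects.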
ERIC Educational Resources Information Center
Brewe, Eric; Bruun, Jesper; Bearden, Ian G.
2016-01-01
We describe "Module Analysis for Multiple Choice Responses" (MAMCR), a new methodology for carrying out network analysis on responses to multiple choice assessments. This method is used to identify modules of non-normative responses which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual…
Mof-Tree: A Spatial Access Method To Manipulate Multiple Overlapping Features.
ERIC Educational Resources Information Center
Manolopoulos, Yannis; Nardelli, Enrico; Papadopoulos, Apostolos; Proietti, Guido
1997-01-01
Investigates the manipulation of large sets of two-dimensional data representing multiple overlapping features, and presents a new access method, the MOF-tree. Analyzes storage requirements and time with respect to window query operations involving multiple features. Examines both the pointer-based and pointerless MOF-tree representations.…
Mathies, Richard A.; Singhal, Pankaj; Xie, Jin; Glazer, Alexander N.
2002-01-01
This invention relates to a microfabricated capillary electrophoresis chip for detecting multiple redox-active labels simultaneously using a matrix coding scheme and to a method of selectively labeling analytes for simultaneous electrochemical detection of multiple label-analyte conjugates after electrophoretic or chromatographic separation.
Rowat, S C
1998-01-01
The central nervous, immune, and endocrine systems communicate through multiple common messengers. Over evolutionary time, what may be termed integrated defense system(s) (IDS) have developed to coordinate these communications for specific contexts; these include the stress response, acute-phase response, nonspecific immune response, immune response to antigen, kindling, tolerance, time-dependent sensitization, neurogenic switching, and traumatic dissociation (TD). These IDSs are described and their overlap is examined. Three models of disease production are generated: damage, in which IDSs function incorrectly; inadequate/inappropriate, in which IDS response is outstripped by a changing context; and evolving/learning, in which the IDS learned response to a context is deemed pathologic. Mechanisms of multiple chemical sensitivity (MCS) are developed from several IDS disease models. Model 1A is pesticide damage to the central nervous system, overlapping with body chemical burdens, TD, and chronic zinc deficiency; model 1B is benzene disruption of interleukin-1, overlapping with childhood developmental windows and hapten-antigenic spreading; and model 1C is autoimmunity to immunoglobulin-G (IgG), overlapping with spreading to other IgG-inducers, sudden spreading of inciters, and food-contaminating chemicals. Model 2A is chemical and stress overload, including comparison with the susceptibility/sensitization/triggering/spreading model; model 2B is genetic mercury allergy, overlapping with: heavy metals/zinc displacement and childhood/gestational mercury exposures; and model 3 is MCS as evolution and learning. Remarks are offered on current MCS research. Problems with clinical measurement are suggested on the basis of IDS models. Large-sample patient self-report epidemiology is described as an alternative or addition to clinical biomarker and animal testing. PMID:9539008
Investigation of system integration methods for bubble domain flight recorders
NASA Technical Reports Server (NTRS)
Chen, T. T.; Bohning, O. D.
1975-01-01
System integration methods for bubble domain flight recorders are investigated. Bubble memory module packaging and assembly, the control electronics design and construction, field coils, and the permanent magnet bias structure design are studied. A small 60-kbit engineering model was built and tested to demonstrate the feasibility of the bubble recorder. Based on the various studies performed, a projection is made for a 50,000,000-bit prototype recorder. It is estimated that the recorder will occupy 190 cubic inches, weigh 12 pounds, and consume 12 W of power when all four of its tracks are operated in parallel at a 150 kHz data rate.
The biocommunication method: On the road to an integrative biology
Witzany, Guenther
2016-01-01
Although molecular biology, genetics, and related special disciplines have produced a large amount of empirical data, a practical method for the evaluation and overview of current knowledge is far from being realized. The main concepts and narratives in these fields have remained nearly the same for decades, and the more recent empirical data concerning the role of noncoding RNAs and persistent viruses and their defectives do not fit into this scenario. A more innovative approach such as applied biocommunication theory could translate empirical data into a coherent perspective on the functions within and between biological organisms and arguably lead to a sustainable integrative biology. PMID:27195071
Primal and Dual Integrated Force Methods Used for Stochastic Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.
2005-01-01
At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as load (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through a Monte Carlo simulation.
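The perturbation-versus-Monte-Carlo workflow described above can be sketched for a hypothetical single-spring model u = P/k; the numbers and variable names below are illustrative assumptions, not the structural models used at Glenn:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-spring model u = P / k; the integrated force
# method operates on full structural models, this is only a sketch.
P0, sig_P = 1000.0, 50.0   # mean load and its standard deviation
k0, sig_k = 2.0e5, 4.0e3   # mean stiffness and its standard deviation

# First-order perturbation: propagate the input variances through the
# partial derivatives du/dP = 1/k and du/dk = -P/k^2.
u0 = P0 / k0
var_u = (1.0 / k0) ** 2 * sig_P**2 + (P0 / k0**2) ** 2 * sig_k**2

# Monte Carlo verification, mirroring the paper's cross-check.
P = rng.normal(P0, sig_P, 200_000)
k = rng.normal(k0, sig_k, 200_000)
u = P / k
print(u0, var_u**0.5)       # perturbation mean and standard deviation
print(u.mean(), u.std())    # Monte Carlo mean and standard deviation
```

For the small input scatter assumed here, the two estimates of the response scatter agree closely; larger coefficients of variation would expose the higher-order terms that the quadratic perturbation theory captures.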
Method for deposition of a conductor in integrated circuits
Creighton, J. Randall; Dominguez, Frank; Johnson, A. Wayne; Omstead, Thomas R.
1997-01-01
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten.
Method for deposition of a conductor in integrated circuits
Creighton, J.R.; Dominguez, F.; Johnson, A.W.; Omstead, T.R.
1997-09-02
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten.
Method of producing an integral resonator sensor and case
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill V. (Inventor); Challoner, A. Dorian (Inventor); Hayworth, Ken J. (Inventor); Wiberg, Dean V. (Inventor); Yee, Karl Y. (Inventor)
2005-01-01
The present invention discloses an inertial sensor having an integral resonator. A typical sensor comprises a planar mechanical resonator for sensing motion of the inertial sensor and a case for housing the resonator. The resonator and a wall of the case are defined through an etching process. A typical method of producing the resonator includes etching a baseplate, bonding a wafer to the etched baseplate, through-etching the wafer to form a planar mechanical resonator and the wall of the case, and bonding an end cap wafer to the wall to complete the case.
Integration of Boltzmann machine and reverse analysis method
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2015-10-01
The reverse analysis method is a data mining technique for unearthing relationships between data. By extracting the connection strengths of a Hopfield network, we can recover the relationships in data sets. It is well established that the relaxation dynamics of a Hopfield network minimize an associated cost function: the states of the network converge to local minima of this function, so the network performs optimization of a well-defined function. However, there is no guarantee of finding the global minimum. The Boltzmann machine has been introduced to overcome this problem. In this paper, we integrate both approaches to enhance data mining. We limit our work to Horn clauses.
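The stochastic update rule that distinguishes a Boltzmann machine from a deterministic Hopfield relaxation can be sketched as follows; the three-unit weight matrix is a hypothetical illustration, not the Horn-clause-derived network used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric weights with zero diagonal (an assumption for
# illustration; not the clause-derived weights used in the paper).
W = np.array([[0.0, 1.0, -2.0],
              [1.0, 0.0, 1.0],
              [-2.0, 1.0, 0.0]])

def energy(s):
    # Hopfield/Boltzmann energy; relaxation seeks minima of this function.
    return -0.5 * s @ W @ s

def boltzmann_step(s, T):
    # Glauber dynamics: set a random unit to +1 with sigmoid probability
    # of its local field; as T -> 0 this reduces to the Hopfield rule.
    i = rng.integers(len(s))
    p = 1.0 / (1.0 + np.exp(-2.0 * (W[i] @ s) / T))
    s[i] = 1.0 if rng.random() < p else -1.0
    return s

s = np.array([1.0, -1.0, 1.0])          # a high-energy starting state
for T in np.geomspace(4.0, 0.05, 400):  # simple annealing schedule
    s = boltzmann_step(s, T)
print(energy(s))                        # ground-state energy is -2 here
```

Because updates are accepted stochastically at high temperature, the network can escape the local minima that would trap a purely deterministic Hopfield relaxation.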
A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.
NASA Astrophysics Data System (ADS)
Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio
2016-04-01
The assessment of climate change related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another regards the potentially difficult integration of environmental, social and economic disciplines in the multi-risk concept. Finally, the effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first application of the approach, including Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is carried out in the Po river delta in Northern Italy. The approach is based on a bottom-up process involving local stakeholders early in different
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to the development of an integrated design method are briefly discussed with regard to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear quadratic Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing bending natural frequency.
Towards a Better Understanding of CMMI and Agile Integration - Multiple Case Study of Four Companies
NASA Astrophysics Data System (ADS)
Pikkarainen, Minna
The amount of software is increasing in different domains in Europe. This provides the industries in smaller countries good opportunities to work in the international markets. Success in the global markets, however, demands the rapid production of high quality, error free software. Both CMMI and agile methods seem to provide a ready solution for quality and lead time improvements. There is not, however, much empirical evidence available about either 1) how the integration of these two aspects can be done in practice or 2) what it actually demands from assessors and software process improvement groups. The goal of this paper is to increase the understanding of CMMI and agile integration, in particular focusing on the research question: how to use a ‘lightweight’ style of CMMI assessment in agile contexts. This is done via four case studies in which assessments were conducted using the goals of the CMMI integrated project management and collaboration and coordination with relevant stakeholder process areas, and practices from XP and Scrum. The study shows that the use of agile practices may support the fulfilment of the goals of CMMI process areas, but there are still many challenges for the agile teams to be solved within the continuous improvement programs. It also identifies practical advice for assessors and improvement groups to take into consideration when conducting assessments in the context of agile software development.
An affinity-based genome walking method to find transgene integration loci in transgenic genome.
Thirulogachandar, V; Pandey, Prachi; Vaishnavi, C S; Reddy, Malireddy K
2011-09-15
Identifying a good transgenic event from the pool of putative transgenics is crucial for further characterization. In transgenic plants, the transgene can integrate in either single or multiple locations, disrupting endogenes and/or landing in heterochromatin regions and causing positional effects. Apart from this, to protect against the unauthorized use of transgenic plants, the signature of transgene integration for every commercial transgenic event needs to be characterized. Here we show an affinity-based genome walking method, named locus-finding (LF) PCR (polymerase chain reaction), to determine the transgene flanking sequences of rice plants transformed by Agrobacterium tumefaciens. LF PCR includes a primary PCR with a degenerate primer and a transfer DNA (T-DNA)-specific primer, a nested PCR, and a method of enriching the desired amplicons by using a biotin-tagged primer that is complementary to the T-DNA. This enrichment technique separates the single strands of desired amplicons from the off-target amplicons, reducing the template complexity by several orders of magnitude. We analyzed eight transgenic rice plants and found the transgene integration loci in three different chromosomes. The characteristic illegitimate recombination of the Agrobacterium sp. was also observed from the sequenced integration loci. We believe that LF PCR should be an indispensable technique in transgenic analysis.
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
Real-time optical multiple object recognition and tracking system and method
NASA Astrophysics Data System (ADS)
Chao, Tien-Hsin; Liu, Hua Kuang
1987-12-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space- and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
Couple Beads: An integrated method of natural family planning
Mulcaire-Jones, George; Fehring, Richard J.; Bradshaw, Megan; Brower, Karen; Lubega, Gonzaga; Lubega, Paskazia
2016-01-01
Various fertility indicators are used by natural family planning methods to identify the fertile and infertile phases of a woman's menstrual cycle: mucus observations, cycle-day probabilities, basal body temperature readings, and hormonal measures of LH and estrogen. Simplified NFP methods generally make use of a single fertility indicator such as cycle-day probabilities (Standard Days Method) or mucus observations (Billings Ovulation Method). The Couple Bead Method integrates the two simplest fertility indicators, cycle-day probabilities and mucus observations, expanding its applicability to all women, regardless of cycle regularity and length. In determining cycle-day probabilities, the Couple Bead Method relies on a new data set from ultrasound-derived determinants of gestational age that more directly define the day of conception and the fertile window. By using a visual-based system of inexpensive colored beads, the Couple Bead Method can be used by couples of all educational and income levels. Lay Summary: Natural family planning methods provide education in regard to the signs of a woman's body which indicate if she is possibly fertile or not. Two important signs are the day of her menstrual cycle and her observations of bleeding and cervical mucus or dryness. The Couple Bead Method teaches a couple how to observe these signs and chart them with a system of colored beads. The Couple Bead Method can be used by women with regular or irregular cycles. The bead sets are inexpensive and consist of a length of plastic cord, colored “pony beads” and safety pins. PMID:27833183
Sugiura, Motoaki; Wakusawa, Keisuke; Sekiguchi, Atsushi; Sassa, Yuko; Jeong, Hyeonjeong; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta
2009-08-01
Humans extract behaviorally significant meaning from a situation by integrating meanings from multiple components of a complex daily environment. To determine the neural underpinnings of this ability, the authors performed functional magnetic resonance imaging of healthy subjects while the latter viewed naturalistic scenes of two people and an object, including a threatening situation of a person being attacked by an offender with an object. The authors used a two-factorial design: the object was either aversive or nonaversive, and the offender's action was either directed to the person or elsewhere. This allowed the authors to examine the neural response to object aversiveness and person-directed intention separately. A task unrelated to threat was also used to address incidental (i.e., subconscious or unintentional) detection. Assuming individual differences in incidental threat detection, the authors used a functional connectivity analysis using principal components analysis of intersubject variability. The left lateral orbitofrontal cortex and medial prefrontal cortex (MPFC) were specifically activated in response to a threatening situation. The threat-related component of intersubject variability was extracted from these data and showed a significant correlation with personality scores. There was also a correlation between threat-related intersubject variability and activation for object aversiveness in the left temporal pole and lateral orbitofrontal cortex; person-directed intention in the left superior frontal gyrus; threatening situations in the left MPFC; and independently for both factors in the right MPFC. Results demonstrate independent processing of object aversiveness and person-directed intention in the left temporal-orbitofrontal and superior frontal networks, respectively, and their integration into situational meaning in the MPFC.
Integrating Multiple Distribution Models to Guide Conservation Efforts of an Endangered Toad.
Treglia, Michael L; Fisher, Robert N; Fitzgerald, Lee A
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species' ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species' current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.
Integrating multiple distribution models to guide conservation efforts of an endangered toad
Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.
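The integration step described above amounts to overlaying the two thresholded suitability maps; a minimal sketch with hypothetical toy rasters (real inputs would be thresholded Random Forests predictions over the study-area grid, not reproduced here):

```python
import numpy as np

# Hypothetical binary suitability maps (1 = suitable) from two models;
# placeholders for the thresholded Random Forests predictions.
potential = np.array([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 1, 0]])
current   = np.array([[1, 0, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 0]])

# Sites suitable under long-term variables but not under recent
# remotely-sensed conditions: candidates for active management.
restorable = (potential == 1) & (current == 0)

# Potential gain in current habitat if all such sites were restored.
pct_gain = 100.0 * restorable.sum() / current.sum()
print(restorable.sum(), pct_gain)
```

The paper's 67.02% figure is the analogous ratio computed over the real rasters.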
Integration of MicroRNA databases to study MicroRNAs associated with multiple sclerosis.
Angerstein, Charlotte; Hecker, Michael; Paap, Brigitte Katrin; Koczan, Dirk; Thamilarasan, Madhan; Thiesen, Hans-Jürgen; Zettl, Uwe Klaus
2012-06-01
MicroRNAs (miRNAs) are small non-coding RNAs which regulate many genes post-transcriptionally. In various contexts of medical science, miRNAs gained increasing attention over the last few years. Analyzing the functions, interactions and cellular effects of miRNAs is a very complex and challenging task. Many miRNA databases with diverse data contents have been developed. Here, we demonstrate how to integrate their information in a reasonable way on a set of miRNAs that were found to be dysregulated in the blood of patients with multiple sclerosis (MS). Using the miR2Disease database, we retrieved 16 miRNAs associated with MS according to four different studies. We studied the predicted and experimentally validated target genes of these miRNAs, their expression profiles in different blood cell types and brain tissues, the pathways and biological processes affected by these miRNAs as well as their regulation by transcription factors. Only miRNA-mRNA interactions that were predicted by at least seven different prediction algorithms were considered. This resulted in a network of 1,498 target genes. In this network, the MS-associated miRNAs hsa-miR-20a-5p and hsa-miR-20b-5p occurred as central hubs regulating about 500 genes each. Strikingly, many of the putative target genes play a role in T cell activation and signaling, and many have transcription factor activity. The latter suggests that miRNAs often act as regulators of regulators with many secondary effects on gene expression. Our present work provides a guideline on how information of different databases can be integrated in the analysis of miRNAs. Future investigations of miRNAs shall help to better understand the mechanisms underlying different diseases and their treatments.
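The consensus filtering step (keeping only miRNA-target pairs supported by at least seven prediction algorithms) can be sketched as follows, scaled down to three hypothetical algorithms and a threshold of two; the miRNA and gene names are placeholders:

```python
from collections import Counter

# Scaled-down, hypothetical example: three prediction algorithms and a
# consensus threshold of two (the paper used more algorithms and a
# threshold of seven).
predictions_by_algorithm = [
    {("miR-20a", "CDKN1A"), ("miR-20a", "E2F1"), ("miR-20b", "VEGFA")},
    {("miR-20a", "CDKN1A"), ("miR-20b", "VEGFA"), ("miR-20b", "HIF1A")},
    {("miR-20a", "E2F1"), ("miR-20b", "VEGFA")},
]

# Count how many algorithms support each (miRNA, target) pair and keep
# only pairs meeting the consensus threshold.
counts = Counter(pair for preds in predictions_by_algorithm for pair in preds)
consensus = {pair for pair, n in counts.items() if n >= 2}
print(sorted(consensus))
```

The surviving pairs are the edges of the miRNA-target network; in the paper this filtering yielded the network of 1,498 target genes.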
Integrated dementia care in The Netherlands: a multiple case study of case management programmes.
Minkman, Mirella M N; Ligthart, Suzanne A; Huijsman, Robbert
2009-09-01
The number of dementia patients is growing, and they require a variety of services, making integrated care essential for the ability to continue living in the community. Many healthcare systems in developed countries are exploring new approaches for delivering health and social care. The purpose of this study was to describe and analyse a new approach in extensive case management programmes concerned with long-term dementia care in The Netherlands. The focus is on the characteristics, and success and failure factors, of these programmes. A multiple case study was conducted in eight regional dementia care provider networks in The Netherlands. Based on a literature study, a questionnaire was developed for the responsible managers and case managers of the eight case management programmes. During 16 semistructured face-to-face interviews with both respondent groups, a deeper insight into the dementia care programmes was provided. Project documentation for all the cases was studied. The eight programmes were developed independently to improve the quality and continuity of long-term dementia care. The programmes show overlap in terms of their vision, the tasks of case managers, the case management process and the participating partners in the local dementia care networks. Differences concern the targeted dementia patient groups as well as the background of the case managers and their position in the local dementia care provider network. Factors for success concern the expert knowledge of case managers, investment in a strong provider network and coherent conditions for effective inter-organizational cooperation to deliver integrated care. When explored, caregiver and patient satisfaction was high. Further research into the effects on client outcomes, service use and costs is recommended in order to further analyse the impact of this approach in long-term care. To facilitate implementation, with a focus on joint responsibilities of the involved care providers, policy
Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.; Pollard, Colin J.; Warner, Kirstin F.; Remmers, Daniel L.; Sorensen, Daniel N.; Whinnery, LeRoy L.; Phillips, Jason J.; Shelley, Timothy J.; Reyes, Jose A.; Hsu, Peter C.; Reynolds, John G.
2013-03-25
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the proficiency test, and the reasons for these changes are documented in this report. The most significant modifications to standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center (NSWC IHD), Sandia National Laboratories (SNL), and the Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.
NASA Technical Reports Server (NTRS)
Fink, P. W.; Khayat, M. A.; Wilton, D. R.
2005-01-01
It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly-singular integrals. Recently, the authors have introduced the transformation u(x') = sinh^(-1)(x'/sqrt(y'^2 + z^2)) for integrating functions of the form I
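A minimal sketch of how such a sinh-type change of variables cancels a near-singularity, using the model integrand 1/sqrt(x'^2 + z^2); the substitution x' = z*sinh(u) plays the role of the authors' transformation, and the point counts and offsets are illustrative:

```python
import numpy as np

def gauss_legendre(f, a, b, n=16):
    # n-point Gauss-Legendre rule on [a, b].
    x, w = np.polynomial.legendre.leggauss(n)
    xm, xr = 0.5 * (a + b), 0.5 * (b - a)
    return xr * np.sum(w * f(xm + xr * x))

# Near-singular model integrand for an observation point at small
# distance z from the source element.
z = 1e-4
f = lambda x: 1.0 / np.sqrt(x**2 + z**2)
exact = 2.0 * np.arcsinh(1.0 / z)   # analytic value on [-1, 1]

# Direct quadrature misses the sharp peak at x = 0 ...
direct = gauss_legendre(f, -1.0, 1.0)

# ... but under x = z*sinh(u) the Jacobian z*cosh(u) cancels the
# near-singularity exactly, leaving a constant integrand.
g = lambda u: f(z * np.sinh(u)) * z * np.cosh(u)
U = np.arcsinh(1.0 / z)
transformed = gauss_legendre(g, -U, U)

print(direct, transformed, exact)
```

For this model integrand the transformed integrand is identically 1, so a low-order rule is exact to rounding, while direct quadrature with the same number of points is badly in error.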
NASA Astrophysics Data System (ADS)
Naraghi, M. H. N.; Chung, B. T. F.
1982-06-01
A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.
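For reference, a single-step fixed random walk for the steady conduction (Laplace) problem can be sketched as below; the paper's contribution, letting the walker jump several grid points per move, is omitted here, and the grid size and walk count are illustrative:

```python
import random

random.seed(1)

# Single-step fixed random walk for the Laplace equation on a square
# grid with prescribed boundary temperatures.
N = 20  # interior points lie at 1..N-1 in each direction

def boundary_temp(i, j):
    return 100.0 if j == N else 0.0   # top edge held at 100, rest at 0

def walk(i, j):
    # Walk to uniformly chosen neighbors until the boundary is reached;
    # the boundary temperature there is one sample of the solution.
    while 0 < i < N and 0 < j < N:
        di, dj = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        i, j = i + di, j + dj
    return boundary_temp(i, j)

walks = 4000
estimate = sum(walk(N // 2, N // 2) for _ in range(walks)) / walks
print(estimate)   # by symmetry, the value at the center is 25
```

The multiple-step variant precomputes the probability of reaching points several steps away so each move covers more ground, cutting the number of moves per walk.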
Integral equation methods for vesicle electrohydrodynamics in three dimensions
NASA Astrophysics Data System (ADS)
Veerapaneni, Shravan
2016-12-01
In this paper, we develop a new boundary integral equation formulation that describes the coupled electro- and hydro-dynamics of a vesicle suspended in a viscous fluid and subjected to external flow and electric fields. The dynamics of the vesicle are characterized by a competition between the elastic, electric and viscous forces on its membrane. The classical Taylor-Melcher leaky-dielectric model is employed for the electric response of the vesicle and the Helfrich energy model combined with local inextensibility is employed for its elastic response. The coupled governing equations for the vesicle position and its transmembrane electric potential are solved using a numerical method that is spectrally accurate in space and first-order in time. The method uses a semi-implicit time-stepping scheme to overcome the numerical stiffness associated with the governing equations.
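The role of the semi-implicit scheme can be illustrated on a scalar stiff model problem u' = -lam*u + f(t), a toy analogue of the stiff membrane-force term (lam, dt, and f are arbitrary choices, not the paper's formulation): treating the stiff linear part implicitly keeps the step stable where explicit Euler blows up.

```python
import numpy as np

# Stiff scalar model problem u' = -lam*u + f(t).
lam, dt, steps = 1.0e4, 1.0e-2, 100
f = lambda t: np.sin(t)

u_exp, u_imp, t = 1.0, 1.0, 0.0
for _ in range(steps):
    u_exp = u_exp + dt * (-lam * u_exp + f(t))        # fully explicit Euler
    u_imp = (u_imp + dt * f(t)) / (1.0 + dt * lam)    # stiff term implicit
    t += dt

print(abs(u_exp), abs(u_imp))
```

With dt*lam = 100, the explicit amplification factor |1 - dt*lam| = 99 causes catastrophic growth, while the semi-implicit factor 1/(1 + dt*lam) damps the stiff mode unconditionally.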
Finite element methods for integrated aerodynamic heating analysis
NASA Technical Reports Server (NTRS)
Peraire, J.
1990-01-01
Over the past few years finite element based procedures for the solution of high speed viscous compressible flows were developed. The objective of this research is to build upon the finite element concepts which have already been demonstrated and to develop these ideas to produce a method which is applicable to the solution of large scale practical problems. The problems of interest range from three dimensional full vehicle Euler simulations to local analysis of three-dimensional viscous laminar flow. Transient Euler flow simulations involving moving bodies are also to be included. An important feature of the research is to be the coupling of the flow solution methods with thermal/structural modeling techniques to provide an integrated fluid/thermal/structural modeling capability. The progress made towards achieving these goals during the first twelve month period of the research is presented.
The reduced basis method for the electric field integral equation
Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.
2011-06-20
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics, for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation that results in a two-step procedure. The first step consists of a computationally intense assembling of the reduced basis, which needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
The Flux-integral Method for Multidimensional Convection and Diffusion
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1994-01-01
The flux-integral method is a procedure for constructing an explicit, single-step, forward-in-time, conservative, control volume update of the unsteady, multidimensional convection-diffusion equation. The convective plus diffusive flux at each face of a control-volume cell is estimated by integrating the transported variable and its face-normal derivative over the volume swept out by the convecting velocity field. This yields a unique description of the fluxes, whereas other conservative methods rely on nonunique, arbitrary pseudoflux-difference splitting procedures. The accuracy of the resulting scheme depends on the form of the subcell interpolation assumed, given cell-average data. Cellwise constant behavior results in a (very artificially diffusive) first-order convection scheme. Second-order convection-diffusion schemes correspond to cellwise linear (or bilinear) subcell interpolation. Cellwise quadratic subcell interpolants generate a highly accurate convection-diffusion scheme with excellent phase accuracy. Under constant-coefficient conditions, this is a uniformly third-order polynomial interpolation algorithm (UTOPIA).
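The face-flux construction described above can be illustrated in one dimension for pure advection: the flux through a cell face is the average of the subcell interpolant over the distance u*dt swept past that face. This is a hedged sketch under assumptions not stated in the abstract (periodic domain, u > 0, uniform grid), not the authors' multidimensional code; `order=1` uses cellwise-constant interpolation (first-order upwind) and `order=2` cellwise-linear interpolation.

```python
import numpy as np

def flux_integral_advect(q, u, dt, dx, order=2):
    """One explicit, conservative, forward-in-time update of 1-D linear
    advection (u > 0, periodic domain) in the flux-integral spirit: the
    face flux is the mean of the subcell interpolant over the swept
    region of width u*dt behind the face."""
    c = u * dt / dx                  # Courant number, assumed in [0, 1]
    qm = np.roll(q, 1)               # upwind cell value q_{i-1}
    if order == 1:
        face_flux = u * qm           # cellwise-constant interpolant
    else:
        slope = (q - np.roll(q, 2)) / (2 * dx)   # centred slope in cell i-1
        # mean of the linear profile over the swept region behind the face
        face_flux = u * (qm + 0.5 * dx * (1.0 - c) * slope)
    return q - dt / dx * (np.roll(face_flux, -1) - face_flux)
```

Because each face flux is shared by the two adjacent cells, the update telescopes and conserves the cell-average sum exactly, which is the conservation property the abstract emphasizes.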
A spectral boundary integral method for flowing blood cells
NASA Astrophysics Data System (ADS)
Zhao, Hong; Isfahani, Amir H. G.; Olson, Luke N.; Freund, Jonathan B.
2010-05-01
A spectral boundary integral method for simulating large numbers of blood cells flowing in complex geometries is developed and demonstrated. The blood cells are modeled as finite-deformation elastic membranes containing a higher viscosity fluid than the surrounding plasma, but the solver itself is independent of the particular constitutive model employed for the cell membranes. The surface integrals developed for solving the viscous flow, and thereby the motion of the massless membrane, are evaluated using an O(NlogN) particle-mesh Ewald (PME) approach. The cell shapes, which can become highly distorted under physiologic conditions, are discretized with spherical harmonics. The resolution of these global basis functions is, of course, excellent, but more importantly they facilitate an approximate de-aliasing procedure that stabilizes the simulations without adding any numerical dissipation or further restricting the permissible numerical time step. Complex geometry no-slip boundaries are included using a constraint method that is coupled into an implicit system that is solved as part of the time advancement routine. The implementation is verified against solutions for axisymmetric flows reported in the literature, and its accuracy is demonstrated by comparison against exact solutions for relaxing surface deformations. It is also used to simulate flow of blood cells at 30% volume fraction in tubes between 4.9 and 16.9 μm in diameter. For these, it is shown to reproduce the well-known non-monotonic dependence of the effective viscosity on the tube diameter.
Multiple Roles of MYC in Integrating Regulatory Networks of Pluripotent Stem Cells
Fagnocchi, Luca; Zippo, Alessio
2017-01-01
Pluripotent stem cells (PSCs) are defined by their self-renewal potential, which permits their unlimited propagation, and their pluripotency, being able to generate cells of the three embryonic lineages. These properties render PSCs a valuable tool for both basic and medical research. To induce and stabilize the pluripotent state, complex circuitries involving signaling pathways, transcription regulators and epigenetic mechanisms converge on a core transcriptional regulatory network of PSCs, thus determining their cell identity. Among the transcription factors, MYC represents a central hub, which modulates and integrates multiple mechanisms involved both in the maintenance of pluripotency and in cell reprogramming. Indeed, it instructs the PSC-specific cell cycle, metabolism and epigenetic landscape, contributes to limiting exit from pluripotency and modulates signaling cascades affecting the PSC identity. Moreover, MYC extends its regulation of pluripotency by controlling PSC-specific non-coding RNAs. In this report, we review the MYC-controlled networks which support the pluripotent state and discuss how their perturbation could affect cell identity. We further discuss recent findings demonstrating a central role of MYC in triggering epigenetic memory in PSCs, which depends on the establishment of a WNT-centered self-reinforcing circuit. Finally, we comment on the therapeutic implications of the role of MYC in affecting PSCs. Indeed, PSCs are used for both disease and cancer modeling and to derive cells for regenerative medicine. For these reasons, unraveling the MYC-mediated mechanisms in those cells is fundamental to exploiting their full potential and to identifying therapeutic targets. PMID:28217689
An integrated economic model of multiple types and uses of water
NASA Astrophysics Data System (ADS)
Luckmann, Jonas; Grethe, Harald; McDonald, Scott; Orlov, Anton; Siddig, Khalid
2014-05-01
Water scarcity is an increasing problem in many parts of the world, and the management of water has become an important issue on the political economy agenda in many countries. As water is used in most economic activities and the allocation of water is often a complex problem involving different economic agents and sectors, Computable General Equilibrium (CGE) models have proven useful for analyzing water allocation problems, although their adaptation to include water is still relatively undeveloped. This paper provides a description of an integrated water-focused CGE model (STAGE_W) that includes multiple types and uses of water and, for the first time, the reclamation of wastewater as well as the provision of brackish groundwater as separate, independent activities with specific cost structures. The insights provided by the model are illustrated with an application to the Israeli water sector, assuming that freshwater resources available to the economy are cut by 50%. We analyze how the Israeli economy copes with this shock if it reduces potable water supply compared with further investments in the desalination sector. The results demonstrate that the effects on the economy are slightly negative under both scenarios. Counterintuitively, the provision of additional potable water to the economy through desalination does not substantively reduce the negative outcomes. This is mainly due to the high costs of desalination, which are currently subsidized, with the distribution of the negative welfare effect over household groups dependent on how these subsidies are financed.
Optimal multiple-information integration inherent in a ring neural network
NASA Astrophysics Data System (ADS)
Takiyama, Ken
2017-02-01
Although several behavioral experiments have suggested that our neural system integrates multiple sources of information based on the certainty of each type of information in the manner of maximum-likelihood estimation, it is unclear how the maximum-likelihood estimation is implemented in our neural system. Here, I investigate the relationship between maximum-likelihood estimation and a widely used ring-type neural network model that is used as a model of visual, motor, or prefrontal cortices. Without any approximation or ansatz, I analytically demonstrate that the equilibrium of an order parameter in the neural network model exactly corresponds to the maximum-likelihood estimation when the strength of the symmetrical recurrent synaptic connectivity within a neural population is appropriately stronger than that of asymmetrical connectivity, that of local and external inputs, and that of symmetrical or asymmetrical connectivity between different neural populations. In this case, strengths of local and external inputs or those of symmetrical connectivity between different neural populations exactly correspond to the input certainty in maximum-likelihood estimation. Thus, my analysis suggests appropriately strong symmetrical recurrent connectivity as a possible candidate for implementing the maximum-likelihood estimation within our neural system.
Optical matrix-matrix multiplication method demonstrated by the use of a multifocus hololens.
Liang, Y Z; Liu, H K
1984-08-01
A method of optical matrix-matrix multiplication is presented. The feasibility of the method is also experimentally demonstrated by the use of a dichromated-gelatin multifocus holographic lens (hololens). With the specific values of matrices chosen, the average percentage error between the theoretical and experimental data of the elements of the output matrix of the multiplication of some specific pairs of 3 x 3 matrices is 0.4%, which corresponds to an 8-bit accuracy.
Optical matrix-matrix multiplication method demonstrated by the use of a multifocus hololens
NASA Technical Reports Server (NTRS)
Liu, H. K.; Liang, Y.-Z.
1984-01-01
A method of optical matrix-matrix multiplication is presented. The feasibility of the method is also experimentally demonstrated by the use of a dichromated-gelatin multifocus holographic lens (hololens). With the specific values of matrices chosen, the average percentage error between the theoretical and experimental data of the elements of the output matrix of the multiplication of some specific pairs of 3 x 3 matrices is 0.4 percent, which corresponds to an 8-bit accuracy.
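The correspondence quoted in both records between a ~0.4% average error and 8-bit accuracy can be checked with one line of arithmetic: one least-significant bit of an 8-bit quantity is one part in 2^8.

```python
# Relative resolution of one least-significant bit at 8-bit precision.
lsb_percent = 100 / 2**8
print(round(lsb_percent, 3))  # 0.391, i.e. ~0.4%
```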
MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems
NASA Astrophysics Data System (ADS)
Kopecky, Ken; Winer, Eliot
2014-06-01
Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI: recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Building on the results of optimized single event introgression (Peng et al., Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression, Mol Breed, 2013), which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm and ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps of MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding eight events into the female RP and seven into the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity
NASA Astrophysics Data System (ADS)
Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei
2015-12-01
Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
Bornstein, Robert F.
2015-01-01
Recent controversies have illuminated the strengths and limitations of different frameworks for conceptualizing personality pathology (e.g., trait perspectives, categorical models), and stimulated debate regarding how best to diagnose personality disorders (PDs) in DSM-5, and in other diagnostic systems (i.e., the International Classification of Diseases, the Psychodynamic Diagnostic Manual). In this article I argue that regardless of how PDs are conceptualized and which diagnostic system is employed, multi-method assessment must play a central role in PD diagnosis. By complementing self-reports with evidence from other domains (e.g., performance-based tests), a broader range of psychological processes are engaged in the patient, and the impact of self-perception and self-presentation biases may be better understood. By providing the assessor with evidence drawn from multiple modalities, some of which provide converging patterns and some of which yield divergent results, the assessor is compelled to engage this evidence more deeply. The mindful processing that ensues can help minimize the deleterious impact of naturally occurring information processing bias and distortion on the part of the clinician (e.g., heuristics, attribution errors), bringing greater clarity to the synthesis and integration of assessment data. PMID:25856565
Integrating multiple disturbance aspects: management of an invasive thistle, Carduus nutans
Zhang, Rui; Shea, Katriona
2012-01-01
Background and Aims Disturbances occur in most ecological systems, and play an important role in biological invasions. We delimit five key disturbance aspects: intensity, frequency, timing, duration and extent. Few studies address more than one of these aspects, yet interactions and interdependence between aspects may lead to complex outcomes. Methods In a two-cohort experimental study, we examined how multiple aspects (intensity, frequency and timing) of a mowing disturbance regime affect the survival, phenology, growth and reproduction of an invasive thistle Carduus nutans (musk thistle). Key Results Our results show that high intensity and late timing strongly delay flowering phenology and reduce plant survival, capitulum production and plant height. A significant interaction between intensity and timing further magnifies the main effects. Unexpectedly, high frequency alone did not effectively reduce reproduction. However, a study examining only frequency and intensity, and not timing, would have erroneously attributed the importance of timing to frequency. Conclusions We used management of an invasive species as an example to demonstrate the importance of a multiple-aspect disturbance framework. Failure to consider possible interactions, and the inherent interdependence of certain aspects, could result in misinterpretation and inappropriate management efforts. This framework can be broadly applied to improve our understanding of disturbance effects on individual responses, population dynamics and community composition. PMID:22199031
Modeling the Multiple-Antenna Wireless Channel Using Maximum Entropy Methods
NASA Astrophysics Data System (ADS)
Guillaud, M.; Debbah, M.; Moustakas, A. L.
2007-11-01
Analytical descriptions of the statistics of wireless channel models are desirable tools for communication systems engineering. When multiple antennas are available at the transmit and/or the receive side (the Multiple-Input Multiple-Output, or MIMO, case), the statistics of the matrix H representing the gains between the antennas of a transmit and a receive antenna array, and in particular the correlation between its coefficients, are known to be of paramount importance for the design of such systems. However, these characteristics depend on the operating environment, since the electromagnetic propagation paths are dictated by the surroundings of the antenna arrays, and little knowledge about these is available at the time of system design. An approach using the Maximum Entropy principle to derive probability density functions for the channel matrix, based on various degrees of knowledge about the environment, is presented. The general idea is to apply the maximum entropy principle to obtain the distribution of each parameter of interest (e.g. correlation), and then to marginalize them out to obtain the full channel distribution. It was shown in previous works, using sophisticated integrals from statistical physics, that by using the full spatial correlation matrix E{vec(H)vec(H)^H} as the intermediate modeling parameter, this method can yield surprisingly concise channel descriptions. In this case, the joint probability density function is shown to be merely a function of the Frobenius norm of the channel matrix, ||H||_F. In the present paper, we investigate the case where information about the average covariance matrix is available (e.g. through measurements). The maximum entropy distribution of the covariance is derived under this constraint. Furthermore, we consider also the doubly correlated case, where the intermediate modeling parameters are chosen as the transmit- and receive-side channel covariance matrices (respectively E{H^H H} and E{HH^H}). We compare the
Efird, Jimmy Thomas; Nielsen, Susan Searles
2008-12-01
Epidemiological studies commonly test multiple null hypotheses. In some situations it may be appropriate to account for multiplicity using statistical methodology rather than simply interpreting results with greater caution as the number of comparisons increases. Given the one-to-one relationship that exists between confidence intervals and hypothesis tests, we derive a method based upon the Hochberg step-up procedure to obtain multiplicity corrected confidence intervals (CI) for odds ratios (OR) and by analogy for other relative effect estimates. In contrast to previously published methods that explicitly assume knowledge of P values, this method only requires that relative effect estimates and corresponding CI be known for each comparison to obtain multiplicity corrected CI.
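The correction described above builds on the Hochberg step-up procedure. Below is a minimal sketch of the standard Hochberg adjusted p-values, i.e. the building block only; the paper's actual contribution, inverting the procedure to obtain multiplicity-corrected CIs for relative effect estimates without explicit P values, is not reproduced here, and the function name is a hypothetical choice.

```python
def hochberg_adjusted_p(pvals):
    """Hochberg step-up adjusted p-values: with p_(1) <= ... <= p_(m),
    the adjusted value for the i-th smallest is
        min over j >= i of (m - j + 1) * p_(j),  capped at 1,
    so H_(i) is rejected at level alpha iff its adjusted p <= alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [1.0] * m
    running = 1.0
    for rank in range(m - 1, -1, -1):      # sweep from the largest p down
        i = order[rank]
        running = min(running, (m - rank) * pvals[i])
        adjusted[i] = min(running, 1.0)
    return adjusted
```

For example, p-values (0.01, 0.02, 0.03, 0.5) adjust to (0.04, 0.06, 0.06, 0.5): the step-up minimum propagates downward from the largest p-value.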
Integrative assessment of multiple pesticides as risk factors for non-Hodgkin's lymphoma among men
De Roos, A J; Zahm, S; Cantor, K; Weisenburger, D; Holmes, F; Burmeister, L; Blair, A
2003-01-01
Methods: During the 1980s, the National Cancer Institute conducted three case-control studies of NHL in the midwestern United States. These pooled data were used to examine pesticide exposures in farming as risk factors for NHL in men. The large sample size (n = 3417) allowed analysis of 47 pesticides simultaneously, controlling for potential confounding by other pesticides in the model, and adjusting the estimates based on a prespecified variance to make them more stable. Results: Reported use of several individual pesticides was associated with increased NHL incidence, including organophosphate insecticides coumaphos, diazinon, and fonofos, insecticides chlordane, dieldrin, and copper acetoarsenite, and herbicides atrazine, glyphosate, and sodium chlorate. A subanalysis of these "potentially carcinogenic" pesticides suggested a positive trend of risk with exposure to increasing numbers. Conclusion: Consideration of multiple exposures is important in accurately estimating specific effects and in evaluating realistic exposure scenarios. PMID:12937207
Method to manage integration error in the Green-Kubo method
NASA Astrophysics Data System (ADS)
Oliveira, Laura de Sousa; Greaney, P. Alex
2017-02-01
The Green-Kubo method is a commonly used approach for predicting transport properties in a system from equilibrium molecular dynamics simulations. The approach is founded on the fluctuation dissipation theorem and relates the property of interest to the lifetime of fluctuations in its thermodynamic driving potential. For heat transport, the lattice thermal conductivity is related to the integral of the autocorrelation of the instantaneous heat flux. A principal source of error in these calculations is that the autocorrelation function requires a long averaging time to reduce remnant noise. Integrating the noise in the tail of the autocorrelation function becomes conflated with physically important slow relaxation processes. In this paper we present a method to quantify the uncertainty on transport properties computed using the Green-Kubo formulation based on recognizing that the integrated noise is a random walk, with a growing envelope of uncertainty. By characterizing the noise we can choose integration conditions to best trade off systematic truncation error with unbiased integration noise, to minimize uncertainty for a given allocation of computational resources.
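The object whose tail noise the paper analyzes is the running Green-Kubo integral, the cumulative time integral of the heat-flux autocorrelation. A minimal sketch follows, with loudly labeled assumptions: physical prefactors (e.g. V/(k_B*T^2) for thermal conductivity) are omitted, the FFT-based biased autocorrelation estimate is an illustrative choice, and the function name is hypothetical.

```python
import numpy as np

def running_green_kubo(flux, dt):
    """Cumulative time integral of the heat-flux autocorrelation
    function.  Its plateau (before the noisy random-walk tail takes
    over) estimates the transport coefficient, up to prefactors."""
    n = len(flux)
    f = np.asarray(flux, dtype=float) - np.mean(flux)
    spec = np.fft.rfft(f, 2 * n)          # zero-pad to avoid wrap-around
    acf = np.fft.irfft(spec * np.conj(spec))[:n] / n
    return np.cumsum(acf) * dt
```

For uncorrelated noise the ACF tail is itself noise, so its cumulative integral performs a random walk with a growing uncertainty envelope — precisely the truncation-error-versus-integration-noise trade-off the paper quantifies.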
2013-01-01
Background Accurate protein function annotation is a severe bottleneck when utilizing the deluge of high-throughput, next generation sequencing data. Keeping database annotations up-to-date has become a major scientific challenge that requires the development of reliable automatic predictors of protein function. The CAFA experiment provided a unique opportunity to undertake comprehensive 'blind testing' of many diverse approaches for automated function prediction. We report on the methodology we used for this challenge and on the lessons we learnt. Methods Our method integrates into a single framework a wide variety of biological information sources, encompassing sequence, gene expression and protein-protein interaction data, as well as annotations in UniProt entries. The methodology transfers functional categories based on the results from complementary homology-based and feature-based analyses. We generated the final molecular function and biological process assignments by combining the initial predictions in a probabilistic manner, which takes into account the Gene Ontology hierarchical structure. Results We propose a novel scoring function called COmbined Graph-Information Content similarity (COGIC) score for the comparison of predicted functional categories and benchmark data. We demonstrate that our integrative approach provides increased scope and accuracy over both the component methods and the naïve predictors. In line with previous studies, we find that molecular function predictions are more accurate than biological process assignments. Conclusions Overall, the results indicate that there is considerable room for improvement in the field. It still remains for the community to invest a great deal of effort to make automated function prediction a useful and routine component in the toolbox of life scientists. As already witnessed in other areas, community-wide blind testing experiments will be pivotal in establishing standards for the evaluation of
Yeung, Edward S.; Gong, Xiaoyi
2004-09-07
The present invention provides a method of analyzing multiple samples simultaneously by absorption detection. The method comprises: (i) providing a planar array of multiple containers, each of which contains a sample comprising at least one absorbing species, (ii) irradiating the planar array of multiple containers with a light source and (iii) detecting absorption of light with a detection means that is in line with the light source at a distance of at least about 10 times a cross-sectional distance of a container in the planar array of multiple containers. The absorption of light by a sample indicates the presence of an absorbing species in it. The method can further comprise: (iv) measuring the amount of absorption of light detected in (iii), indicating the amount of the absorbing species in the sample. Also provided by the present invention is a system for use in the above method. The system comprises: (i) a light source comprising or consisting essentially of at least one wavelength of light, the absorption of which is to be detected, (ii) a planar array of multiple containers, and (iii) a detection means that is in line with the light source and is positioned in line with and parallel to the planar array of multiple containers at a distance of at least about 10 times a cross-sectional distance of a container.
Lee, Minjung; Dignam, James J.; Han, Junhee
2014-01-01
We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin
2016-09-01
Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Foernsler, Lynda J.
1996-01-01
Checklists are used by the flight crew to properly configure an aircraft for safe flight and to ensure a high level of safety throughout the duration of the flight. In addition, the checklist provides a sequential framework to meet cockpit operational requirements, and it fosters cross-checking of the flight deck configuration among crew members. This study examined the feasibility of integrating multiple checklists for non-normal procedures into a single procedure for a typical transport aircraft. For the purposes of this report, a typical transport aircraft is one that represents a midpoint between early generation aircraft (B-727/737-200 and DC-10) and modern glass cockpit aircraft (B747-400/777 and MD-11). In this report, potential conflicts among non-normal checklist items during multiple failure situations for a transport aircraft are identified and analyzed. The non-normal checklist procedure that would take precedence for each of the identified multiple failure flight conditions is also identified. The rationale behind this research is that potential conflicts among checklist items might exist when integrating multiple checklists for non-normal procedures into a single checklist. As a rule, multiple failures occurring in today's highly automated and redundant system transport aircraft are extremely improbable. In addition, as shown in this analysis, conflicts among checklist items in a multiple failure flight condition are exceedingly unlikely. The possibility of a multiple failure flight condition occurring with a conflict among checklist items is so remote that integration of the non-normal checklists into a single checklist appears to be a plausible option.
Prediction of phosphorylation sites based on the integration of multiple classifiers.
Han, R Z; Wang, D; Chen, Y H; Dong, L K; Fan, Y L
2017-02-23
Phosphorylation is an important part of the post-translational modification of proteins and is essential for many biological activities. Phosphorylation and dephosphorylation can regulate signal transduction, gene expression, and cell cycle control in many cellular processes. Rapidly and correctly identifying new protein phosphorylation sites is extremely important for both basic research and drug discovery. Moreover, abnormal phosphorylation can in some cases serve as a key medical feature related to a disease. The use of computational methods can improve the accuracy of detecting phosphorylation sites, providing predictive guidance for preventing the occurrence of certain diseases and/or choosing the best course of treatment, and can effectively reduce the cost of biological experiments. In this study, flexible neural tree (FNT), particle swarm optimization, and support vector machine algorithms were used to classify data, with secondary encoding according to the physical and chemical properties of amino acids used for feature extraction. Comparison of the classification results obtained from the three classifiers showed that the FNT performed best. The three classifiers were then integrated by majority vote (the minority subordinate to the majority) to obtain the final results. The integrated model showed improved sensitivity (87.41%), specificity (87.60%), and accuracy (87.50%).
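The "minority subordinate to the majority" integration described above is a hard majority vote over the three classifiers. A minimal sketch, with toy stand-in classifiers (the actual FNT, PSO-tuned, and SVM models are not reproduced here):

```python
# Hypothetical sketch: hard majority vote over binary classifiers.
# Any callables mapping a feature vector to 0/1 can be combined this way.

def majority_vote(classifiers, sample):
    """Return the label predicted by most of the classifiers."""
    votes = [clf(sample) for clf in classifiers]
    return 1 if sum(votes) > len(votes) / 2 else 0

# Toy stand-in classifiers for a numeric feature vector.
clf_a = lambda x: 1 if sum(x) > 0 else 0
clf_b = lambda x: 1 if x[0] > 0 else 0
clf_c = lambda x: 0

print(majority_vote([clf_a, clf_b, clf_c], [2.0, -0.5]))  # → 1 (two of three vote 1)
```

The ensemble prediction is correct whenever at least two of the three base classifiers agree on the true label, which is why combining complementary classifiers can beat each individual one.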
Pineda, Silvia; Real, Francisco X; Kogevinas, Manolis; Carrato, Alfredo; Chanock, Stephen J; Malats, Núria; Van Steen, Kristel
2015-12-01
Omics data integration is becoming necessary to investigate the genomic mechanisms involved in complex diseases. During the integration process, many challenges arise, such as data heterogeneity, the small number of individuals relative to the number of parameters, multicollinearity, and the interpretation and validation of results given their complexity and our incomplete knowledge of the underlying biological processes. To overcome some of these issues, innovative statistical approaches are being developed. In this work, we propose a permutation-based method to concomitantly assess significance and correct for multiple testing with the MaxT algorithm. This was applied with penalized regression methods (LASSO and ENET) when exploring relationships between common genetic variants, DNA methylation, and gene expression measured in bladder tumor samples. The overall analysis flow consisted of three steps: (1) SNPs/CpGs were selected for each gene probe within a 1 Mb window upstream and downstream of the gene; (2) LASSO and ENET were applied to assess the association between each expression probe and the selected SNPs/CpGs in three multivariable models (SNP, CpG, and Global models, the latter integrating SNPs and CpGs); and (3) the significance of each model was assessed using the permutation-based MaxT method. We identified 48 genes whose expression levels were significantly associated with both SNPs and CpGs. Importantly, 36 (75%) of them were replicated in an independent data set (TCGA), and the performance of the proposed method was checked with a simulation study. We further support our results with a biological interpretation based on an enrichment analysis. The approach we propose reduces computational time and is flexible and easy to implement when analyzing several types of omics data. Our results highlight the importance of integrating omics data by applying appropriate statistical strategies to discover new insights into the complex genetic mechanisms involved in disease.
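Step (3) builds on the single-step MaxT idea: the adjusted p-value of each feature is the fraction of permutations in which the maximum statistic over all features reaches the observed one. A minimal sketch with a toy covariance statistic (the paper's actual pipeline uses LASSO/ENET model fits as statistics, which this does not reproduce):

```python
import random

def maxt_adjusted_pvalues(X_cols, y, stat_fn, n_perm=500, seed=1):
    """Single-step MaxT adjustment: permute the outcome, record the maximum
    statistic across all features, and compare each observed statistic to
    that null distribution of maxima."""
    observed = [stat_fn(col, y) for col in X_cols]
    rng = random.Random(seed)
    y_perm = list(y)
    null_max = []
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        null_max.append(max(stat_fn(col, y_perm) for col in X_cols))
    return [sum(m >= t for m in null_max) / n_perm for t in observed]

# Toy statistic: absolute covariance between a feature column and the outcome.
def abs_cov(col, y):
    n = len(y)
    mc, my = sum(col) / n, sum(y) / n
    return abs(sum((c - mc) * (v - my) for c, v in zip(col, y)) / n)

y = [0, 0, 0, 0, 1, 1, 1, 1]
X = [[0, 1, 0, 1, 1, 0, 1, 0],                    # noise feature
     [0.1, 0.0, 0.2, 0.1, 0.9, 1.0, 0.8, 0.9]]    # strongly associated feature
pvals = maxt_adjusted_pvalues(X, y, abs_cov)
print(pvals[1] < pvals[0])  # associated feature gets the smaller adjusted p-value
```

Because the maximum is taken over all features in every permutation, the resulting p-values control the familywise error rate while exploiting the correlation structure of the data.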
Apparatus and method for defect testing of integrated circuits
Cole, E.I. Jr.; Soden, J.M.
2000-02-29
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V(DD), to an IC under test and measures a transient voltage component, V(DDT), signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V(DDT) signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V(DDT) signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
Apparatus and method for defect testing of integrated circuits
Cole, Jr., Edward I.; Soden, Jerry M.
2000-01-01
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V_DD, to an IC under test and measures a transient voltage component, V_DDT, signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V_DDT signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V_DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
A method to optimize selection on multiple identified quantitative trait loci
Chakraborty, Reena; Moreau, Laurence; Dekkers, Jack CM
2002-01-01
A mathematical approach was developed to model and optimize selection on multiple known quantitative trait loci (QTL) and polygenic estimated breeding values in order to maximize a weighted sum of responses to selection over multiple generations. The model allows for linkage between QTL with multiple alleles and arbitrary genetic effects, including dominance, epistasis, and gametic imprinting. Gametic phase disequilibrium between the QTL and between the QTL and polygenes is modeled but polygenic variance is assumed constant. Breeding programs with discrete generations, differential selection of males and females and random mating of selected parents are modeled. Polygenic EBV obtained from best linear unbiased prediction models can be accommodated. The problem was formulated as a multiple-stage optimal control problem and an iterative approach was developed for its solution. The method can be used to develop and evaluate optimal strategies for selection on multiple QTL for a wide range of situations and genetic models. PMID:12081805
1992-06-01
Final report for the period June 1991 - May 1992, summarizing the results of the survey conducted for the "Needs Analysis and Requirements Specification for an Integration Toolkit and Methods". Approved for public release; distribution is unlimited. This report has been reviewed by the Office of Public Affairs (ASD/PA) and is releasable to the National Technical Information Service.
Erythrocyte shape classification using integral-geometry-based methods.
Gual-Arnau, X; Herold-García, S; Simó, A
2015-07-01
Erythrocyte shape deformations are related to several important illnesses. In this paper, we focus on one of the most important: Sickle cell disease. This disease causes the hardening or polymerization of the hemoglobin contained in the erythrocytes. The study of this process using digital images of peripheral blood smears can offer useful results in the clinical diagnosis of this illness. In particular, it would be very valuable to find a rapid and reproducible automatic classification method to quantify the number of deformed cells and so gauge the severity of the illness. In this paper, we show the good results obtained in the automatic classification of erythrocytes into normal cells, sickle cells, and cells with other deformations, using a set of functions based on integral-geometry methods, an active contour-based segmentation method, and a k-NN classification algorithm. Blood specimens were obtained from patients with Sickle cell disease. Seventeen peripheral blood smears were obtained for the study, and 45 images of different fields were acquired. A specialist selected the cells to use, determining which cells in the images were normal, elongated, or showed other deformations. Automatic classification with cross-validated error estimates was performed using the proposed descriptors as well as two other functions used in previous studies.
Attachment method for stacked integrated circuit (IC) chips
Bernhardt, Anthony F.; Malba, Vincent
1999-01-01
An attachment method for stacked integrated circuit (IC) chips. The method involves connecting stacked chips, such as DRAM memory chips, to each other and/or to a circuit board. Pads on the individual chips are rerouted to form pads on the side of the chip, after which the chips are stacked on top of each other whereby desired interconnections to other chips or a circuit board can be accomplished via the side-located pads. The pads on the side of a chip are connected to metal lines on a flexible plastic tape (flex) by anisotropically conductive adhesive (ACA). Metal lines on the flex are likewise connected to other pads on chips and/or to pads on a circuit board. In the case of a stack of DRAM chips, pads to corresponding address lines on the various chips may be connected to the same metal line on the flex to form an address bus. This method has the advantage of reducing the number of connections required to be made to the circuit board due to bussing; the flex can accommodate dimensional variation in the alignment of chips in the stack; bonding of the ACA is accomplished at low temperature and is otherwise simpler and less expensive than solder bonding; chips can be bonded to the ACA all at once if the sides of the chips are substantially coplanar, as in the case for stacks of identical chips, such as DRAM.
Attachment method for stacked integrated circuit (IC) chips
Bernhardt, A.F.; Malba, V.
1999-08-03
An attachment method for stacked integrated circuit (IC) chips is disclosed. The method involves connecting stacked chips, such as DRAM memory chips, to each other and/or to a circuit board. Pads on the individual chips are rerouted to form pads on the side of the chip, after which the chips are stacked on top of each other whereby desired interconnections to other chips or a circuit board can be accomplished via the side-located pads. The pads on the side of a chip are connected to metal lines on a flexible plastic tape (flex) by anisotropically conductive adhesive (ACA). Metal lines on the flex are likewise connected to other pads on chips and/or to pads on a circuit board. In the case of a stack of DRAM chips, pads to corresponding address lines on the various chips may be connected to the same metal line on the flex to form an address bus. This method has the advantage of reducing the number of connections required to be made to the circuit board due to bussing; the flex can accommodate dimensional variation in the alignment of chips in the stack; bonding of the ACA is accomplished at low temperature and is otherwise simpler and less expensive than solder bonding; chips can be bonded to the ACA all at once if the sides of the chips are substantially coplanar, as in the case for stacks of identical chips, such as DRAM. 12 figs.
Geometric and Integral Equation Methods for Scattering in Layered Media
NASA Astrophysics Data System (ADS)
Wiskin, James Walter
This dissertation is an extension of the Stenger-Johnson-Borup sinc and Fast Fourier Transform (FFT) based integral equation imaging algorithms to the case of a layered ambient medium. This scenario has medical, geophysical, and nondestructive testing applications. It is also a first step toward incorporating a geometric point of view in forward and inverse scattering. The construction of layered Green's functions and concomitant inverse scattering algorithms for inhomogeneities residing within a layered medium whose layers are known a priori is carried out. Computer simulations and numerical experiments investigate the ill-posedness of inverse scattering in this context. Both 2D and 3D ambient media are considered, and the relationship to the distorted-wave Born approximation is discussed. Noise contamination and attenuation in both the layered background medium and the inhomogeneity are included for realism. Global minimization techniques based on homotopy are introduced and generalized. Concepts from Cartan/Kähler differential geometry play a natural role in understanding homotopy methods of global minimization. These minimization methods have application to biomolecular modelling as well as scattering. Exterior differential forms provide a natural vehicle for extending the results determined here to include shear effects in fully elastic media. It is also shown that the methods developed here can be extended to ambient media with other types of known structure.
NASA Astrophysics Data System (ADS)
Tazik, D.; Roehm, C. L.; Atkin, O.; Ayers, E.; Berukoff, S. J.; Fitzgerald, M.; Held, A. A.; Hinckley, E. S.; Kampe, T. U.; Liddell, M.; Phinn, S. R.; Taylor, J. R.; Thibault, K. M.; Thorpe, A.
2013-12-01
Distributed standardized sensor networks that collect coordinated airborne- and ground-based observations and are coupled with remotely sensed satellite imagery provide unique insight into complex ecological processes and feedbacks across a range of spatio-temporal scales. Measurements and information transfer at and across scales are key challenges in ecohydrology. A combination of approaches, for example, isotopic signatures of leaves, evapotranspiration using micrometeorological techniques, and water stress from remote sensing imagery, will improve our ability to integrate data across spatial scales. The collaboration among science networks such as the National Ecological Observatory Network (NEON) in the US and Terrestrial Ecosystem Research Network (TERN) in Australia will provide data that enable researchers to address complex questions regarding processes operating within and across systems, at site-to-continental scales and beyond. In this talk, we present several examples demonstrating combinations of remotely sensed and ground-based ecohydrological data collected using standardized methodologies across multiple sensor networks. Examples include: 1) determining ecohydrological controls on plant production at plot to regional scales; 2) interpreting atmospheric chemical and isotopic deposition gradients across geographic domains; 3) using the stable isotope signatures of small mammal tissues to track drought dynamics across space and time; 4) mapping water quality characteristics in optically complex waters using remotely sensed imagery and high temporal frequency ground based sensor calibration data and 5) scaling plot and individual plant level vegetation structure estimates to continental scale maps of vegetation and ground cover dynamics. Australian scientists are using TERN's infrastructure for improving Soil-Vegetation-Atmosphere Transfer (SVAT) modeling for Australian conditions by assessing plant photosynthetic and respiration performance across a
Anderson, Kari B.; Halpin, Stephen T.; Johnson, Alicia S.; Martin, R. Scott; Spence, Dana M.
2012-01-01
In Part II of this series describing the use of polystyrene (PS) devices for microfluidic-based cellular assays, various cellular types and detection strategies are employed to determine three fundamental assays often associated with cells. Specifically, using either integrated electrochemical sensing or optical measurements with a standard multi-well plate reader, cellular uptake, production, or release of important cellular analytes are determined on a PS-based device. One experiment involved the fluorescence measurement of nitric oxide (NO) produced within an endothelial cell line following stimulation with ATP. The result was a four-fold increase in NO production (as compared to a control), with this receptor-based mechanism of NO production verifying the maintenance of cell receptors following immobilization onto the PS substrate. The ability to monitor cellular uptake was also demonstrated by optical determination of Ca2+ into endothelial cells following stimulation with the Ca2+ ionophore A23187. The result was a significant increase (42%) in the calcium uptake in the presence of the ionophore, as compared to a control (17%) (p < 0.05). Finally, the release of catecholamines from a dopaminergic cell line (PC 12 cells) was electrochemically monitored, with the electrodes being embedded into the PS-based device. The PC 12 cells had better adherence on the PS devices, as compared to use of PDMS. Potassium-stimulation resulted in the release of 114 ± 11 µM catecholamines, a significant increase (p < 0.05) over the release from cells that had been exposed to an inhibitor (reserpine, 20 ± 2 µM of catecholamines). The ability to successfully measure multiple analytes, generated in different means from various cells under investigation, suggests that PS may be a useful material for microfluidic device fabrication, especially considering the enhanced cell adhesion to PS, its enhanced rigidity/amenability to automation, and its ability to enable a wider range of
Integration of multiple intraguild predator cues for oviposition decisions by a predatory mite
Walzer, Andreas; Schausberger, Peter
2012-01-01
In mutual intraguild predation (IGP), the role of individual guild members is strongly context dependent and, during ontogeny, can shift from an intraguild (IG) prey to a food competitor or to an IG predator. Consequently, recognition of an offspring's predator is more complex for IG than classic prey females. Thus, IG prey females should be able to modulate their oviposition decisions by integrating multiple IG predator cues and by experience. Using a guild of plant-inhabiting predatory mites sharing the spider mite Tetranychus urticae as prey and passing through ontogenetic role shifts in mutual IGP, we assessed the effects of single and combined direct cues of the IG predator Amblyseius andersoni (eggs and traces left by a female on the substrate) on prey patch selection and oviposition behaviour of naïve and IG predator-experienced IG prey females of Phytoseiulus persimilis. The IG prey females preferentially resided in patches without predator cues when the alternative patch contained traces of predator females or the cue combination. Preferential egg placement in patches without predator cues was only apparent in the choice situation with the cue combination. Experience increased the responsiveness of females exposed to the IG predator cue combination, indicated by immediate selection of the prey patch without predator cues and almost perfect oviposition avoidance in patches with the cue combination. We argue that the evolution of the ability of IG prey females to evaluate offspring's IGP risk accurately is driven by the irreversibility of oviposition and the functionally complex relationships between predator guild members. PMID:23264692
The U.S. Environmental Protection Agency recently established the Ecosystem Services Research Program to help formulate methods and models for conducting comprehensive risk assessments that quantify how multiple ecosystem services interact and respond in concert to environmental ...
Lindenmeyer, C.W.
1993-01-26
An apparatus and method to automate the handling of multiple digital tape cassettes for processing by commercially available cassette tape readers and recorders. A removable magazine rack stores a plurality of tape cassettes, and cooperates with a shuttle device that automatically inserts and removes cassettes from the magazine to the reader and vice-versa. Photocells are used to identify and index to the desired tape cassette. The apparatus allows digital information stored on multiple cassettes to be processed without significant operator intervention.
Lindenmeyer, Carl W.
1993-01-01
An apparatus and method to automate the handling of multiple digital tape cassettes for processing by commercially available cassette tape readers and recorders. A removable magazine rack stores a plurality of tape cassettes, and cooperates with a shuttle device that automatically inserts and removes cassettes from the magazine to the reader and vice-versa. Photocells are used to identify and index to the desired tape cassette. The apparatus allows digital information stored on multiple cassettes to be processed without significant operator intervention.
ERIC Educational Resources Information Center
Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee
2009-01-01
Assessment is central to any educational process. The Number Right (NR) scoring method is the conventional scoring method for multiple choice items, where students must pick one option as the correct answer. One point is awarded for a correct response and zero for any other response. However, it has been heavily criticized for guessing and failure…
ERIC Educational Resources Information Center
Große, Cornelia S.
2014-01-01
It is commonly suggested that mathematics teachers present learners with different methods for solving a single problem. This so-called "learning with multiple solution methods" is also recommended from a psychological point of view. However, existing research leaves many questions unanswered, particularly concerning the effects of…
A multiple testing method for hypotheses structured in a directed acyclic graph.
Meijer, Rosa J; Goeman, Jelle J
2015-01-01
We present a novel multiple testing method for testing null hypotheses that are structured in a directed acyclic graph (DAG). The method is a top-down method that strongly controls the familywise error rate and can be seen as a generalization of Meinshausen's procedure for tree-structured hypotheses. Just as Meinshausen's procedure, our proposed method can be used to test for variable importance, only the corresponding variable clusters can be chosen more freely, because the method allows for multiple parent nodes and partially overlapping hypotheses. An important application of our method is in gene set analysis, in which one often wants to test multiple gene sets as well as individual genes for their association with a clinical outcome. By considering the genes and gene sets as nodes in a DAG, our method enables us to test both for significant gene sets as well as for significant individual genes within the same multiple testing procedure. The method will be illustrated by testing Gene Ontology terms for evidence of differential expression in a survival setting and is implemented in the R package cherry.
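The defining constraint of such a top-down procedure is that a node may only be rejected once all of its parents have been rejected. A simplified skeleton (illustrative only; it applies one fixed threshold and omits the alpha-propagation weighting that gives the actual procedure its familywise error control):

```python
# Hypothetical sketch of top-down rejection over a DAG of hypotheses:
# a node becomes testable only after every parent has been rejected.

def topdown_reject(parents, pvalues, alpha):
    """parents: node -> list of parent nodes ([] for roots).
    Returns the set of rejected nodes."""
    rejected = set()
    changed = True
    while changed:
        changed = False
        for node, ps in parents.items():
            if node in rejected:
                continue
            if all(p in rejected for p in ps) and pvalues[node] <= alpha:
                rejected.add(node)
                changed = True
    return rejected

# Gene-set DAG: 'all' covers two gene sets, which share gene 'g2'.
parents = {"all": [], "set1": ["all"], "set2": ["all"],
           "g1": ["set1"], "g2": ["set1", "set2"], "g3": ["set2"]}
pvalues = {"all": 0.001, "set1": 0.004, "set2": 0.2,
           "g1": 0.003, "g2": 0.01, "g3": 0.001}
print(sorted(topdown_reject(parents, pvalues, alpha=0.05)))  # → ['all', 'g1', 'set1']
```

Note how 'g3' is never rejected despite its small p-value, because its parent gene set 'set2' fails: the DAG structure gates which individual hypotheses can be reached.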
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks accurately and effectively on its own. The prevailing solution is to integrate multiple sensors and take advantage of their respective strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic-parameter calibration method for a blade measurement system that integrates different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation achieves rough alignment of the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the generalized Gauss-Markov model is then used to estimate the optimal transformation parameters. Experiments on a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.
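The core of any such extrinsic calibration is a least-squares rigid transform between corresponding points on the calibration artifact as seen by the two sensors. A 2D analogue can be sketched with a closed-form fit (illustrative only; the actual system estimates a 3D transform via a generalized Gauss-Markov adjustment, which this does not reproduce):

```python
import math

# Hypothetical sketch: fit rotation theta and translation (tx, ty) mapping
# points measured by sensor A onto the same points measured by sensor B.

def fit_rigid_2d(src, dst):
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        num += xs * yd - ys * xd   # cross terms -> sin(theta)
        den += xs * xd + ys * yd   # dot terms  -> cos(theta)
    theta = math.atan2(num, den)
    tx = cx_d - (math.cos(theta) * cx_s - math.sin(theta) * cy_s)
    ty = cy_d - (math.sin(theta) * cx_s + math.cos(theta) * cy_s)
    return theta, tx, ty

# Sensor B is rotated 90 degrees and shifted by (1, 2) relative to sensor A.
src = [(0, 0), (1, 0), (0, 1), (2, 2)]
dst = [(1 - y, 2 + x) for x, y in src]
theta, tx, ty = fit_rigid_2d(src, dst)
print(round(math.degrees(theta)), round(tx), round(ty))  # → 90 1 2
```

With noisy correspondences the same centroid-and-angle fit returns the least-squares rotation and translation, which is why a well-designed artifact with unambiguous features is important for accuracy.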
Shin, H-M; McKone, T E; Bennett, D H
2016-11-17
We present a screening-level exposure-assessment method that integrates exposure from all plausible exposure pathways resulting from indoor residential use of cleaning products. The exposure pathways we considered are (i) exposure to a user during product use via inhalation and dermal contact, (ii) exposure to chemical residues left on clothing, (iii) exposure of all occupants to the portion released indoors during use via inhalation and dermal contact, and (iv) exposure of the general population due to down-the-drain disposal via inhalation and ingestion. We use consumer product volatilization models to account for the chemical fractions volatilized to air (f_volatilized) and disposed down the drain (f_down-the-drain) during product use. For each exposure pathway, we use a fate and exposure model to estimate intake rates (iR) in mg/kg/d. Overall, the contribution of the four exposure pathways to the total exposure varies by the type of cleaning activity and with chemical properties. By providing a more comprehensive exposure model and by capturing additional exposures from often-overlooked exposure pathways, our method allows us to compare the relative contributions of various exposure routes and could improve high-throughput exposure assessment for chemicals in cleaning products.
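The aggregate dose is simply the sum of the pathway-specific intake rates. A minimal sketch (pathway names and numbers are illustrative placeholders, not values from the study):

```python
# Hypothetical sketch: aggregate pathway-specific intake rates (mg/kg/day)
# into a total screening-level dose.

def total_intake(pathway_iR):
    """Sum intake rates across all exposure pathways."""
    return sum(pathway_iR.values())

iR = {"user_inhalation_dermal": 1.2e-3,   # (i)  during product use
      "clothing_residue":       4.0e-4,   # (ii) residues on clothing
      "occupants_indoor":       6.0e-4,   # (iii) indoor release, all occupants
      "down_the_drain":         5.0e-5}   # (iv) down-the-drain disposal
print(f"{total_intake(iR):.2e} mg/kg/day")  # → 2.25e-03 mg/kg/day
```

Keeping the pathways as separate entries makes it easy to report each route's relative contribution, which is the comparison the method is designed to support.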
NASA Astrophysics Data System (ADS)
Pinkham, Raymond; Anderson, Daniel F.
1986-08-01
The continuing advancements in integrated circuit technology have placed new burdens on the circuit design engineer, who must rely extensively upon computer simulation to correctly predict circuit behavior. One challenge is to develop better modelling techniques to more accurately deal with complex p-n junction structures often used in modern VLSI designs. This paper presents an easily implemented method for deriving parameters that accurately model the behavior of MOS VLSI structures containing complex p-n junction capacitance components. The methodology is applicable to both planar and laterally diffused junctions, whether formed by direct ion implantation or by diffusion from a finite or infinite source. The theory behind the equations used and the results of applying this new technique are discussed. A flow chart for a fitter program based on the new method is presented and described. The corresponding program, written for the TI-59 scientific programmable calculator, is available. Final model parameters are given and are shown to produce a numerical capacitance model that is accurate to within 2%.
Liu, Zhenwei; Zhang, Huaguang; Zhang, Qingling
2010-11-01
This paper studies the stability problem of a class of recurrent neural networks (RNNs) with multiple delays. By using an augmented matrix-vector transformation for the delays and a novel line integral-type Lyapunov-Krasovskii functional, a less conservative delay-dependent global asymptotic stability criterion is first proposed for RNNs with multiple delays. The obtained stability criterion is easy to check and improves upon existing results. Two numerical examples are then given to verify the effectiveness of the proposed criterion.
Self-Adaptive Filon's Integration Method and Its Application to Computing Synthetic Seismograms
NASA Astrophysics Data System (ADS)
Zhang, Hai-Ming; Chen, Xiao-Fei
2001-03-01
Based on the principle of the self-adaptive Simpson integration method, and by incorporating the 'fifth-order' Filon integration algorithm [Bull. Seism. Soc. Am. 73(1983)913], we have proposed a simple and efficient numerical integration method, i.e., the self-adaptive Filon's integration method (SAFIM), for computing synthetic seismograms at large epicentral distances. With numerical examples, we have demonstrated that the SAFIM is not only accurate but also very efficient. This new integration method is expected to be very useful in seismology, as well as in computing similar oscillatory integrals in other branches of physics.
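The self-adaptive principle SAFIM builds on can be illustrated with plain adaptive Simpson quadrature: bisect an interval until the refined estimate agrees with the coarse one to within the tolerance, then apply a Richardson-style correction. (This sketch uses the ordinary Simpson rule, not Filon's oscillatory quadrature, so it only shows the adaptivity, not the treatment of rapidly oscillating integrands.)

```python
import math

# Sketch of self-adaptive Simpson integration: recursively bisect until the
# two-panel estimate agrees with the one-panel estimate within tolerance.

def adaptive_simpson(f, a, b, tol=1e-9):
    def simpson(x0, x1):
        return (x1 - x0) / 6.0 * (f(x0) + 4.0 * f(0.5 * (x0 + x1)) + f(x1))

    def recurse(x0, x1, whole, tol):
        m = 0.5 * (x0 + x1)
        left, right = simpson(x0, m), simpson(m, x1)
        if abs(left + right - whole) < 15.0 * tol:
            # Richardson extrapolation of the coarse and refined estimates.
            return left + right + (left + right - whole) / 15.0
        return recurse(x0, m, left, tol / 2.0) + recurse(m, x1, right, tol / 2.0)

    return recurse(a, b, simpson(a, b), tol)

print(round(adaptive_simpson(math.sin, 0.0, math.pi), 8))  # → 2.0
```

The adaptivity concentrates function evaluations where the integrand varies fastest, which is what makes the scheme efficient for the slowly and rapidly varying parts of a synthetic-seismogram integrand alike.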
Maury, Jérôme; Germann, Susanne M; Baallal Jacobsen, Simo Abdessamad; Jensen, Niels B; Kildegaard, Kanchana R; Herrgård, Markus J; Schneider, Konstantin; Koza, Anna; Forster, Jochen; Nielsen, Jens; Borodina, Irina
2016-01-01
Saccharomyces cerevisiae is widely used in the biotechnology industry for production of ethanol, recombinant proteins, food ingredients and other chemicals. In order to generate highly producing and stable strains, genome integration of genes encoding metabolic pathway enzymes is the preferred option. However, integration of pathway genes in single or few copies, especially those encoding rate-controlling steps, is often not sufficient to sustain high metabolic fluxes. By exploiting the sequence diversity in the long terminal repeats (LTR) of Ty retrotransposons, we developed a new set of integrative vectors, EasyCloneMulti, that enables multiple and simultaneous integration of genes in S. cerevisiae. By creating vector backbones that combine consensus sequences that aim at targeting subsets of Ty sequences and a quickly degrading selective marker, integrations at multiple genomic loci and a range of expression levels were obtained, as assessed with the green fluorescent protein (GFP) reporter system. The EasyCloneMulti vector set was applied to balance the expression of the rate-controlling step in the β-alanine pathway for biosynthesis of 3-hydroxypropionic acid (3HP). The best 3HP producing clone, with 5.45 g/L of 3HP, produced 11 times more 3HP than the lowest producing clone, which demonstrates the capability of EasyCloneMulti vectors to impact metabolic pathway enzyme activity.
Baallal Jacobsen, Simo Abdessamad; Jensen, Niels B.; Kildegaard, Kanchana R.; Herrgård, Markus J.; Schneider, Konstantin; Koza, Anna; Forster, Jochen; Nielsen, Jens; Borodina, Irina
2016-01-01
Saccharomyces cerevisiae is widely used in the biotechnology industry for production of ethanol, recombinant proteins, food ingredients and other chemicals. In order to generate highly producing and stable strains, genome integration of genes encoding metabolic pathway enzymes is the preferred option. However, integration of pathway genes in single or few copies, especially those encoding rate-controlling steps, is often not sufficient to sustain high metabolic fluxes. By exploiting the sequence diversity in the long terminal repeats (LTR) of Ty retrotransposons, we developed a new set of integrative vectors, EasyCloneMulti, that enables multiple and simultaneous integration of genes in S. cerevisiae. By creating vector backbones that combine consensus sequences that aim at targeting subsets of Ty sequences and a quickly degrading selective marker, integrations at multiple genomic loci and a range of expression levels were obtained, as assessed with the green fluorescent protein (GFP) reporter system. The EasyCloneMulti vector set was applied to balance the expression of the rate-controlling step in the β-alanine pathway for biosynthesis of 3-hydroxypropionic acid (3HP). The best 3HP producing clone, with 5.45 g/L of 3HP, produced 11 times more 3HP than the lowest producing clone, which demonstrates the capability of EasyCloneMulti vectors to impact metabolic pathway enzyme activity. PMID:26934490
A method to visualize the evolution of multiple interacting spatial systems
NASA Astrophysics Data System (ADS)
Heitzler, Magnus; Hackl, Jürgen; Adey, Bryan T.; Iosifescu-Enescu, Ionut; Lam, Juan Carlos; Hurni, Lorenz
2016-07-01
Integrated modeling approaches are increasingly used to simulate the behavior of, and the interaction between, several interdependent systems. They are becoming more and more important in many fields, including, but not limited to, civil engineering, hydrology and climate impact research. When using these approaches, it is beneficial to be able to visualize both the intermediary and the final results of scenario-based analyses conducted in both space and time. This requires appropriate visualization techniques that enable efficient navigation between multiple such scenarios. In recent years, several innovative visualization techniques have been developed for such navigation purposes. These techniques, however, are limited to the representation of one system at a time. Improvements are possible with respect to the ability to visualize the results of multiple scenarios for multiple interdependent spatio-temporal systems. To address this issue, existing multi-scenario navigation techniques based on small multiples and line graphs are extended with multiple system representations and inter-system impact representations. This not only makes it possible to understand the evolution of the systems under consideration but also eases the identification of events in which one system significantly influences another. In addition, the concept of selective branching is described, which removes otherwise redundant information from the visualization by considering the logical and temporal dependencies between these systems. This visualization technique is applied to a risk assessment methodology that determines how different environmental systems (i.e., precipitation, flooding, and landslides) influence each other, and how their impact on civil infrastructure affects society. The results of this work are concepts for improved visualization techniques for multiple interacting spatial systems. The successful validation with domain experts of
Comparison of methods for assessing integrity of equine sperm membranes.
Foster, M L; Love, C C; Varner, D D; Brinsko, S P; Hinrichs, K; Teague, S; Lacaze, K; Blanchard, T L
2011-07-15
Sperm membrane integrity (SMI) is thought to be an important measure of stallion sperm quality. The objective was to compare three methods for evaluating SMI: flow cytometry using SYBR-14/propidium iodide (PI) stain; an automated cell counting device using PI stain; and eosin-nigrosin stain. Raw equine semen was subjected to various treatments containing 20 to 80% seminal plasma in extender, with differing sperm concentrations, to simulate spontaneous loss of SMI. The SMI was assessed immediately, and after 1 and 2 d of cooled storage. Agreement between methods was determined according to Bland-Altman methodology. Eosin-nigrosin staining yielded higher (2%) overall mean values for SMI than did flow cytometry. Flow cytometry yielded higher (6%) overall mean values for SMI than did the automated cell counter. As percentage of membrane-damaged sperm increased, agreement of SMI measurement between methods decreased. When semen contained 50-79% membrane-intact sperm, the 95% limits of agreement between SMI determined by flow cytometry and eosin-nigrosin staining were greater (range = -26.9 to 24.3%; i.e., a 51.2% span) than for SMI determined by flow cytometry and the automated cell counter (range = -3.1 to 17.0%; 20.1% span). When sperm populations contained <50% membrane-intact sperm, the 95% limits of agreement between SMI determined by flow cytometry and eosin-nigrosin staining were greater (range = -35.9 to 19.0%; 54.9% span) than for SMI determined by flow cytometry and the automated cell counter (range = -11.6 to 28.7%; 40.3% span). We concluded that eosin-nigrosin staining assessments of percent membrane-intact sperm agreed less with flow cytometry when <80% of sperm had intact membranes, whereas automated cell counter assessments of percent membrane-intact sperm agreed less with flow cytometry when <30% of sperm had intact membranes.
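The Bland-Altman analysis used in the study above is straightforward to reproduce. A minimal sketch in Python (the paired readings below are hypothetical, not the study's data): the bias is the mean of the paired differences, and the 95% limits of agreement are the bias plus or minus 1.96 standard deviations of the differences.

```python
from statistics import mean, stdev

def bland_altman_limits(a, b):
    """Bland-Altman agreement between two measurement methods.

    Returns (bias, lower limit, upper limit), where bias is the mean
    of the paired differences and the 95% limits of agreement are
    bias +/- 1.96 standard deviations of the differences.
    """
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired SMI readings (% membrane-intact sperm) from two methods
flow = [72, 65, 58, 80, 44, 69]
eosin = [74, 66, 61, 82, 50, 70]
bias, lo, hi = bland_altman_limits(flow, eosin)
```

A wider span between the limits, as reported for the eosin-nigrosin comparison, indicates poorer agreement between the two methods.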
Integrated borehole logging methods for wellhead protection applications
Paillet, Frederick L.; Pedler, W.H.
1996-01-01
Modeling of ground water infiltration and movement in the wellhead area is a critical part of an effective wellhead protection program. Such models depend on an accurate description of the aquifer in the wellhead area so that reliable estimates of contaminant travel times can be used in defining a protection area. Geophysical and hydraulic measurements in boreholes provide one of the most important methods for obtaining the data needed to specify wellhead protection measures. Most effective characterization of aquifers in the wellhead vicinity results when a variety of geophysical and hydraulic measurements are made where geophysical measurements can be calibrated in terms of hydraulic variables, and where measurements are made at somewhat different scales of investigation. The application of multiple geophysical measurements to ground water flow in the wellhead area is illustrated by examples in alluvial, fractured sedimentary, and fractured crystalline rock aquifers. Data obtained from a single test well are useful, but cannot indicate how conductive elements in the aquifer are connected to form large-scale flow paths. Geophysical and hydraulic measurements made in arrays of observation boreholes can provide information about such large-scale flow paths, and are especially useful in specifying aquifer properties in wellhead protection studies.
Kinjo, Akira R.
2016-01-01
The multiple sequence alignment (MSA) of a protein family provides a wealth of information in terms of the conservation pattern of amino acid residues not only at each alignment site but also between distant sites. In order to statistically model the MSA incorporating both short-range and long-range correlations as well as insertions, I have derived a lattice gas model of the MSA based on the principle of maximum entropy. The partition function, obtained by the transfer matrix method with a mean-field approximation, accounts for all possible alignments with all possible sequences. The model parameters for short-range and long-range interactions were determined by a self-consistent condition and by a Gaussian approximation, respectively. Using this model with and without long-range interactions, I analyzed the globin and V-set domains by increasing the “temperature” and by “mutating” a site. The correlations between residue conservation and various measures of the system’s stability indicate that the long-range interactions make the conservation pattern more specific to the structure, and increasingly stabilize better conserved residues. PMID:27924257
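As a hedged illustration of the transfer-matrix idea underlying the partition function above (a toy 1D two-state lattice gas with nearest-neighbour coupling, not the paper's MSA model with insertions and mean-field long-range terms), the transfer-matrix sum can be checked against brute-force enumeration over all configurations:

```python
import itertools
import math

# Toy 1D lattice gas: L sites, occupancy s_i in {0, 1}, energy
# E = -J * sum_i s_i s_{i+1} - mu * sum_i s_i  (units where kT = 1).
J, mu, L = 1.0, 0.5, 6

def brute_force_Z():
    # Sum exp(-E) over all 2^L configurations explicitly.
    Z = 0.0
    for s in itertools.product((0, 1), repeat=L):
        E = -J * sum(s[i] * s[i + 1] for i in range(L - 1)) - mu * sum(s)
        Z += math.exp(-E)
    return Z

def transfer_matrix_Z():
    # T[a][b] carries the bond a-b and the field on site b; the initial
    # vector carries the field on site 1. The chain reduces the 2^L sum
    # to L-1 matrix-vector products.
    T = [[math.exp(J * a * b + mu * b) for b in (0, 1)] for a in (0, 1)]
    v = [math.exp(mu * s) for s in (0, 1)]
    for _ in range(L - 1):
        v = [sum(v[a] * T[a][b] for a in (0, 1)) for b in (0, 1)]
    return sum(v)

Z = transfer_matrix_Z()
```

The same reduction is what makes the model's account of "all possible alignments with all possible sequences" computationally tractable.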
Unsteady aerodynamic simulation of multiple bodies in relative motion: A prototype method
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1989-01-01
A prototype method for time-accurate simulation of multiple aerodynamic bodies in relative motion is presented. The method is general and features unsteady chimera domain decomposition techniques and an implicit approximately factored finite-difference procedure to solve the time-dependent thin-layer Navier-Stokes equations. The method is applied to a set of two- and three-dimensional test problems to establish spatial and temporal accuracy, quantify computational efficiency, and begin to test overall code robustness.
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring, when skunks tend to conserve energy and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be
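A minimal sketch of the single-scale building block of such models: the likelihood of one site's detection history across methods, assuming one survey per method and independent, method-specific detection probabilities (the paper's model additionally estimates occupancy at two spatial scales, which this sketch omits).

```python
from math import prod  # Python 3.8+

def detection_history_prob(psi, p, history):
    """Probability of a detection history at one site.

    psi     : occupancy probability
    p       : per-method detection probabilities, p[m]
    history : 0/1 detections, one entry per method (single survey each)

    If anything was detected, the site must be occupied; an all-zero
    history can arise either from absence or from occupied-but-missed.
    """
    prob_given_occupied = prod(
        p[m] if d else (1.0 - p[m]) for m, d in enumerate(history)
    )
    if any(history):
        return psi * prob_given_occupied
    return psi * prob_given_occupied + (1.0 - psi)

# Hypothetical values: two methods with different detection probabilities
psi, p = 0.6, [0.4, 0.7]
p_detect_both = detection_history_prob(psi, p, [1, 1])
p_all_zero = detection_history_prob(psi, p, [0, 0])
```

Maximizing the product of such terms over sites yields the occupancy and method-specific detection estimates the abstract describes.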
Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng
2014-08-01
Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm in the broadband signal focusing and the modified Gerschgorin disk estimator. The method of classification of multiple signals is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on the multi-platform direction finding and the global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform are built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
NASA Astrophysics Data System (ADS)
Ilhan, I.; Coakley, B. J.
2013-12-01
The Chukchi Edges project was designed to establish the relationship between the Chukchi Shelf and Borderland and indirectly test theories of opening for the Canada Basin. During this cruise, ~5300 km of 2D multi-channel reflection seismic profiles and other geophysical data (swath bathymetry, gravity, magnetics, sonobuoy refraction seismic) were collected from the RV Marcus G. Langseth across the transition between the Chukchi Shelf and Chukchi Borderland, where water depths vary from 30 m to over 3 km. Multiples occur when seismic energy is trapped in a layer and reflected from an acoustic interface more than once. Various kinds of multiples occur during seismic data acquisition, depending on the ray path the seismic energy follows through the layers. One of the most common is the surface-related multiple, which arises from the strong acoustic impedance contrast between air and water: seismic energy reflected from the water surface is trapped within the water column and thus reflects from the seafloor multiple times. Multiples overprint the primary reflections and complicate data interpretation. Both surface-related multiple elimination (SRME) and forward parabolic Radon transform multiple modeling methods were necessary to attenuate the multiples. SRME is applied to shot gathers, starting with near-offset interpolation, estimating the multiples using water depths, and subtracting the modeled multiple from the shot gathers. This method attenuated surface-related multiple energy; however, peg-leg multiples remained in the data. The parabolic Radon transform method minimized the effect of these multiples. This method is applied to normal moveout (NMO) corrected common mid-point (CMP) gathers. The CMP gathers are fitted or modeled with curves estimated from the reference offset, moveout range, and moveout increment parameters. Then, the modeled multiples are subtracted from the data. Preliminary outputs of these two methods show that the surface-related
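A small illustration of why water-column multiples are predictable from water depth (assuming a flat seafloor, zero offset, and a constant water velocity of 1500 m/s; the function and values are illustrative, not the cruise's processing parameters): the n-th order surface-related multiple arrives at (n+1) times the seafloor two-way time.

```python
def multiple_times(water_depth_m, v_water_ms=1500.0, orders=3):
    """Zero-offset arrival times (s) of the seafloor primary and its
    surface-related water-bottom multiples: t_n = (n + 1) * t0, where
    t0 is the two-way travel time to the seafloor."""
    t0 = 2.0 * water_depth_m / v_water_ms
    return [(n + 1) * t0 for n in range(orders + 1)]

# Deep-water end of the survey (~3 km water depth)
times = multiple_times(water_depth_m=3000.0)
```

This predictability is what SRME exploits: the multiple can be modeled from the water depth and subtracted, while peg-leg multiples with other ray paths require the Radon-domain step.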
Keating, Kristina; Slater, Lee; Ntarlagiannis, Dimitris; Williams, Kenneth H.
2015-02-24
This document contains the final report for the project "Integrated Geophysical Measurements for Bioremediation Monitoring: Combining Spectral Induced Polarization, Nuclear Magnetic Resonance and Magnetic Methods" (DE-SC0007049). Executive Summary: Our research aimed to develop borehole measurement techniques capable of monitoring subsurface processes, such as changes in pore geometry and iron/sulfur geochemistry, associated with remediation of heavy metals and radionuclides. Previous work has demonstrated that the geophysical method of spectral induced polarization (SIP) can be used to assess subsurface contaminant remediation; however, SIP signals can be generated from multiple sources, limiting their interpretive value. Integrating multiple geophysical methods, such as nuclear magnetic resonance (NMR) and magnetic susceptibility (MS), with SIP could reduce the ambiguity of interpretation that might result from a single method. Our research effort entails combining measurements from these methods, each sensitive to different mineral forms and/or mineral-fluid interfaces, providing better constraints on changes in subsurface biogeochemical processes and pore geometries and significantly improving our understanding of processes impacting contaminant remediation. The Rifle Integrated Field Research Challenge (IFRC) site was used as a test location for our measurements. The Rifle IFRC site is located at a former uranium ore-processing facility in Rifle, Colorado. Leachate from spent mill tailings has resulted in residual uranium contamination of both groundwater and sediments within the local aquifer. Studies at the site include an ongoing acetate amendment strategy, in which native microbial populations are stimulated by the introduction of carbon intended to alter redox conditions and immobilize uranium. To test the geophysical methods in the field, NMR and MS logging measurements were collected before, during, and after acetate amendment. Next, laboratory NMR, MS, and SIP measurements
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT
Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts in the direction of automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method that particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
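A hedged sketch of the PCA half of such a fusion scheme (the DWT stage is omitted; `pca_fusion_weights` and the toy flattened "images" are illustrative, not the paper's implementation): the principal eigenvector of the 2x2 covariance matrix of the two images supplies the pixel-weighting coefficients.

```python
from statistics import mean

def pca_fusion_weights(img1, img2):
    """Classic PCA pixel-level fusion weights: components of the
    principal eigenvector of the 2x2 covariance matrix of the two
    flattened images, normalized to sum to 1. Assumes the images
    are positively correlated (c12 > 0)."""
    m1, m2 = mean(img1), mean(img2)
    c11 = mean((x - m1) ** 2 for x in img1)
    c22 = mean((y - m2) ** 2 for y in img2)
    c12 = mean((x - m1) * (y - m2) for x, y in zip(img1, img2))
    # Largest eigenvalue of [[c11, c12], [c12, c22]]
    lam = 0.5 * (c11 + c22 + ((c11 - c22) ** 2 + 4 * c12 ** 2) ** 0.5)
    v1, v2 = c12, lam - c11  # eigenvector for the largest eigenvalue
    return v1 / (v1 + v2), v2 / (v1 + v2)

def fuse(img1, img2):
    w1, w2 = pca_fusion_weights(img1, img2)
    return [w1 * x + w2 * y for x, y in zip(img1, img2)]

# Toy flattened "images" (hypothetical pixel rows, not real echo data)
fused = fuse([0, 1, 2, 3], [0, 2, 4, 6])
```

In the full method, weights like these would be applied in the wavelet domain to the overlapping regions before inverse transformation.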
Method of forming a multiple layer dielectric and a hot film sensor therewith
NASA Technical Reports Server (NTRS)
Hopson, Purnell, Jr. (Inventor); Tran, Sang Q. (Inventor)
1990-01-01
The invention is a method of forming a multiple layer dielectric for use in a hot-film laminar separation sensor. The multiple layer dielectric substrate is formed by depositing a first layer of a thermoplastic polymer on an electrically conductive substrate, such as the metal surface of a model to be tested under cryogenic conditions and high Reynolds numbers. Next, a second dielectric layer of fused silica is formed on the first dielectric layer of thermoplastic polymer. A resistive metal film is deposited on selected areas of the multiple layer dielectric substrate to form one or more hot-film sensor elements, to which aluminum electrical circuits deposited upon the multiple layer dielectric substrate are connected.
Phase unwrapping method based on multiple recording distances for digital holographic microscopy
NASA Astrophysics Data System (ADS)
Li, Yan; Xiao, Wen; Pan, Feng; Rong, Lu
2015-07-01
We present a phase unwrapping approach based on multiple recording distances for digital holographic microscopy. It unwraps the ambiguous phase image by synthesizing the continuous components extracted from a set of reconstructed phase images, which are obtained from a series of holograms recorded while shifting the specimen longitudinally in steps larger than the longitudinal correlation length of the coherent noise field. The experimental results demonstrate that the proposed method provides a more accurate calculation and better counteraction of phase noise than the methods proposed in previous research.
Non-destructive testing method and apparatus utilizing phase multiplication holography
Collins, H. Dale; Prince, James M.; Davis, Thomas J.
1984-01-01
An apparatus and method for imaging structural characteristics in test objects using radiation amenable to coherent signal-processing methods. Frequency and phase multiplication of received flaw signals is used to simulate a test wavelength at least one to two orders of magnitude smaller than the actual wavelength. The apparent reduction in wavelength between the illumination and the recording radiation effects a frequency-translated hologram. A hologram constructed with a high synthetic frequency and flaw phase multiplication is similar to a conventional acoustic hologram constructed at the high frequency.
The use of artificial intelligence techniques to improve the multiple payload integration process
NASA Technical Reports Server (NTRS)
Cutts, Dannie E.; Widgren, Brian K.
1992-01-01
A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.
Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory
Meilinger, Tobias; Watanabe, Katsumi
2016-01-01
Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and reacted on the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized the integration within the reference frame of the initial presentation, which was updated later, and from where participants acted respectively. Participants also heavily relied on layout intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions. PMID:27101011
Developing integrated methods to address complex resource and environmental issues
Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.
2016-02-08
Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some
Method for integrating microelectromechanical devices with electronic circuitry
Barron, Carole C.; Fleming, James G.; Montague, Stephen
1999-01-01
A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry on a common substrate. The MEM device can be fabricated within a substrate cavity and encapsulated with a sacrificial material. This allows the MEM device to be annealed and the substrate planarized prior to forming electronic circuitry on the substrate using a series of standard processing steps. After fabrication of the electronic circuitry, the electronic circuitry can be protected by a two-ply protection layer of titanium nitride (TiN) and tungsten (W) during an etch release process whereby the MEM device is released for operation by etching away a portion of a sacrificial material (e.g. silicon dioxide or a silicate glass) that encapsulates the MEM device. The etch release process is preferably performed using a mixture of hydrofluoric acid (HF) and hydrochloric acid (HCl) which reduces the time for releasing the MEM device compared to use of a buffered oxide etchant. After release of the MEM device, the TiN:W protection layer can be removed with a peroxide-based etchant without damaging the electronic circuitry.
Reliable Transition State Searches Integrated with the Growing String Method.
Zimmerman, Paul
2013-07-09
The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
NASA Astrophysics Data System (ADS)
Sica, Robert; Haefele, Alexander
2016-04-01
While the application of optimal estimation methods (OEMs) is well-known for the retrieval of atmospheric parameters from passive instruments, active instruments have typically not employed the OEM. For instance, the measurement of temperature in the middle atmosphere with Rayleigh-scatter lidars is an important technique for assessing atmospheric change. Current retrieval schemes for these temperatures have several shortcomings which can be overcome using an OEM. Forward models have been constructed that fully characterize the measurement and allow the simultaneous retrieval of temperature, dead time and background. The OEM allows a full uncertainty budget to be obtained on a per profile basis that includes, in addition to the statistical uncertainties, the smoothing error and uncertainties due to Rayleigh extinction, ozone absorption, the lidar constant, nonlinearity in the counting system, variation of the Rayleigh-scatter cross section with altitude, pressure, acceleration due to gravity and the variation of mean molecular mass with altitude. The vertical resolution of the temperature profile is found at each height, and a quantitative determination is made of the maximum height to which the retrieval is valid. A single temperature profile can be retrieved from measurements with multiple channels that cover different height ranges, vertical resolutions and even different detection methods. The OEM employed is shown to give robust estimates of temperature consistent with previous methods, while requiring minimal computational time. Retrieval of water vapour mixing ratio from vibrational Raman scattering lidar measurements is another example where an OEM offers a considerable advantage over the standard analysis technique, with the same advantages as discussed above for Rayleigh-scatter temperatures but with an additional benefit. The conversion of the lidar measurement into mixing ratio requires a calibration constant to be employed. Using OEM the calibration
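A minimal sketch of the optimal-estimation idea for a single retrieved parameter (a linear forward model y_i = k_i x + noise with Gaussian prior and noise; the function and the numbers below are hypothetical, not the lidar retrieval itself): the estimate combines the a priori value and the measurements weighted by their precisions, and the posterior variance supplies the per-profile uncertainty the abstract refers to.

```python
def oem_scalar(x_a, var_a, y, k, var_e):
    """Scalar linear optimal estimation.

    x_a, var_a : a priori value and its variance
    y, k       : measurements y_i = k_i * x + noise, and their weights k_i
    var_e      : measurement noise variance (same for all channels)

    Returns the maximum a posteriori estimate and its variance:
    x_hat = (x_a/var_a + sum_i k_i y_i / var_e)
            / (1/var_a + sum_i k_i^2 / var_e)
    """
    precision = 1.0 / var_a + sum(ki * ki for ki in k) / var_e
    numer = x_a / var_a + sum(ki * yi for ki, yi in zip(k, y)) / var_e
    return numer / precision, 1.0 / precision

# Hypothetical temperature retrieval: prior 230 K (variance 225 K^2),
# two channels observing the same state with noise variance 25 K^2
x_hat, var_hat = oem_scalar(x_a=230.0, var_a=225.0,
                            y=[235.0, 233.0], k=[1.0, 1.0], var_e=25.0)
```

The posterior variance is always smaller than both the prior and single-measurement variances, which is the mechanism behind combining multiple channels into one profile.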
NASA Astrophysics Data System (ADS)
Sica, Robert; Haefele, Alexander
2015-04-01
While the application of optimal estimation methods (OEMs) is well-known for the retrieval of atmospheric parameters from passive instruments, active instruments have typically not employed the OEM. For instance, the measurement of temperature in the middle atmosphere with Rayleigh-scatter lidars is an important technique for assessing atmospheric change. Current retrieval schemes for these temperatures have several shortcomings which can be overcome using an OEM. Forward models have been constructed that fully characterize the measurement and allow the simultaneous retrieval of temperature, dead time and background. The OEM allows a full uncertainty budget to be obtained on a per profile basis that includes, in addition to the statistical uncertainties, the smoothing error and uncertainties due to Rayleigh extinction, ozone absorption, the lidar constant, nonlinearity in the counting system, variation of the Rayleigh-scatter cross section with altitude, pressure, acceleration due to gravity and the variation of mean molecular mass with altitude. The vertical resolution of the temperature profile is found at each height, and a quantitative determination is made of the maximum height to which the retrieval is valid. A single temperature profile can be retrieved from measurements with multiple channels that cover different height ranges, vertical resolutions and even different detection methods. The OEM employed is shown to give robust estimates of temperature consistent with previous methods, while requiring minimal computational time. Retrieval of water vapour mixing ratio from vibrational Raman scattering lidar measurements is another example where an OEM offers a considerable advantage over the standard analysis technique, with the same advantages as discussed above for Rayleigh-scatter temperatures but with an additional benefit. The conversion of the lidar measurement into mixing ratio requires a calibration constant to be employed. Using OEM the calibration
Statistical Methods for Magnetic Resonance Image Analysis with Applications to Multiple Sclerosis
NASA Astrophysics Data System (ADS)
Pomann, Gina-Maria
image regression techniques have been shown to have modest performance for assessing the integrity of the blood-brain barrier based on imaging without contrast agents. These models have centered on the problem of cross-sectional classification in which patients are imaged at a single study visit and pre-contrast images are used to predict post-contrast imaging. In this paper, we extend these methods to incorporate historical imaging information, and we find the proposed model to exhibit improved performance. We further develop scan-stratified case-control sampling techniques that reduce the computational burden of local image regression models while respecting the low proportion of the brain that exhibits abnormal vascular permeability. In the third part of this thesis, we present methods to evaluate tissue damage in patients with MS. We propose a lag functional linear model to predict a functional response using multiple functional predictors observed at discrete grids with noise. Two procedures are proposed to estimate the regression parameter functions: (1) a semi-local smoothing approach using generalized cross-validation; and (2) a global smoothing approach using a restricted maximum likelihood framework. Numerical studies are presented to analyze predictive accuracy in many realistic scenarios. We find that the global smoothing approach results in higher predictive accuracy than the semi-local approach. The methods are employed to estimate a measure of tissue damage in patients with MS. In patients with MS, the myelin sheaths around the axons of the neurons in the brain and spinal cord are damaged. The model facilitates the use of commonly acquired imaging modalities to estimate a measure of tissue damage within lesions. The proposed model outperforms the cross-sectional models that do not account for temporal patterns of lesional development and repair.
Integrating multiple technologies to understand the foraging behaviour of Hawaiian monk seals
Littnan, Charles; Halpin, Patrick; Read, Andrew
2017-01-01
The objective of this research was to investigate and describe the foraging behaviour of monk seals in the main Hawaiian Islands. Specifically, our goal was to identify a metric to classify foraging behaviour from telemetry instruments. We deployed accelerometers, seal-mounted cameras and GPS tags on six monk seals during 2012–2014 on the islands of Molokai, Kauai and Oahu. We used pitch, calculated from the accelerometer, to identify search events and thus classify foraging dives. A search event and consequent ‘foraging dive’ occurred when the pitch was greater than or equal to 70° at a depth less than or equal to −3 m. By integrating data from the accelerometers with video and GPS, we were able to ground-truth this classification method and identify environmental variables associated with each foraging dive. We used Bayesian logistic regression to identify the variables that influenced search events. Dive depth, body motion (mean overall dynamic body acceleration during the dive) and proximity to the sea floor were the best predictors of search events for these seals. Search events typically occurred on long, deep dives, with more time spent at the bottom (more than 50% bottom time). We can now identify where monk seals are foraging in the main Hawaiian Islands (MHI) and what covariates influence foraging behaviour in this region. This increased understanding will inform management strategies and supplement outreach and recovery efforts.
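The classification rule from the abstract is simple enough to sketch directly. The thresholds (pitch at least 70° at a depth of at least 3 m below the surface) come from the study; the function names and sample dives are illustrative only.

```python
def is_search_event(pitch_deg, depth_m):
    # The study's rule: pitch >= 70 degrees at depth <= -3 m
    # (surface = 0 m, depths negative)
    return pitch_deg >= 70 and depth_m <= -3

def is_foraging_dive(samples):
    # A dive is classified as a foraging dive if any
    # (pitch, depth) sample within it triggers the rule
    return any(is_search_event(p, d) for p, d in samples)

dive_a = [(20, -1.0), (75, -12.0)]   # steep pitch at depth -> foraging
dive_b = [(40, -8.0), (80, -1.5)]    # steep pitch only near the surface
flags = [is_foraging_dive(dive_a), is_foraging_dive(dive_b)]
```

In the study this flag was then ground-truthed against the seal-mounted camera footage before being related to environmental covariates.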
Patabadige, Damith E W; Mickleburgh, Tom; Ferris, Lorin; Brummer, Gage; Culbertson, Anne H; Culbertson, Christopher T
2016-05-01
The ability to accurately control fluid transport in microfluidic devices is key to developing high-throughput methods for single cell analysis. With pumps external to the microfluidic device, however, making the small, reproducible changes in flow rate needed to optimize lysis and injection is challenging and time-consuming. To improve the throughput and increase the number of cells analyzed, we have integrated previously reported micropumps into a microfluidic device that can increase the cell analysis rate to ∼1000 cells/h and operate for over an hour continuously. In order to increase the flow rates sufficiently to handle cells at a higher throughput, three sets of pumps were multiplexed. These pumps are simple, low-cost, durable, easy to fabricate, and biocompatible, and they provide precise control of the flow rate up to 9.2 nL/s. These devices were used to automatically transport, lyse, and electrophoretically separate T-lymphocyte cells loaded with Oregon Green and 6-carboxyfluorescein. Peak overlap statistics predicted the number of fully resolved single-cell electropherograms seen. In addition, there was no change in the average fluorescent dye peak areas, indicating that the cells remained intact and the dyes did not leak out of the cells over the 1 h analysis time. The cell lysate peak area distribution followed that expected of an asynchronous steady-state population of immortalized cells.
A prediction method for aerodynamic sound produced by multiple elements in air ducts
NASA Astrophysics Data System (ADS)
Mak, C. M.
2005-10-01
A prediction method for aerodynamic sound produced by the interaction of multiple elements in a low-speed flow duct has been developed. As in the previous work of Mak and Yang on two in-duct elements, the concept of partially coherent sound fields is adopted to formulate the sound powers produced by the interaction of multiple in-duct elements at frequencies below and above the cut-on frequency of the lowest transverse duct mode. An interaction factor is then defined through a simple relationship between the sound power due to the interaction of multiple in-duct elements and that due to a single in-duct element. The present study suggests that it is possible to predict the level and spectral distribution of the additional acoustic energy produced by the interaction of multiple in-duct elements. The proposed method can therefore form the basis of a generalized prediction method for aerodynamic sound produced by multiple in-duct elements in a ventilation system.
The initial rise method extended to multiple trapping levels in thermoluminescent materials.
Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E
2011-02-01
The well-known Initial Rise (IR) method is commonly used to determine the activation energy when only a single glow peak is present and analysed in a phosphor material. However, when the glow curve is more complex, showing a wide peak with shoulders, a straightforward application of the IR method is not valid because multiple trapping levels are involved, and the thermoluminescent analysis becomes difficult to perform. The aim of this paper is to extend the IR method to the case of multiple trapping levels; a complex glow curve structure is taken as an example to show that the calculation remains possible. The IR method is applied to minerals extracted from Nopal cactus and Oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level.
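For a single, well-isolated trap, the IR method fits the low-temperature rising edge of the glow peak, where intensity grows as exp(-E/k·T). A minimal sketch of that baseline calculation follows; the synthetic data and function names are illustrative, not from the paper.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def initial_rise_energy(temps_K, intensities):
    """Estimate the trap activation energy E (eV) from the initial rise
    of a glow peak, where ln I = const - E/(k_B*T): the least-squares
    slope of ln I against 1/T equals -E/k_B."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    slope = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
            sum((x - xb) ** 2 for x in xs)
    return -slope * K_B

# Synthetic initial-rise data generated with E = 1.0 eV
temps = [300.0 + 5.0 * i for i in range(10)]
intens = [math.exp(-1.0 / (K_B * t)) for t in temps]
E_est = initial_rise_energy(temps, intens)  # recovers ~1.0 eV
```

The paper's contribution is extending this single-level analysis to glow curves in which several such trapping levels overlap.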
Marucci, Evandro A.; Neves, Leandro A.; Valêncio, Carlo R.; Pinto, Alex R.; Cansian, Adriano M.; de Souza, Rogeria C. G.; Shiyou, Yang; Machado, José M.
2014-01-01
With the advance of genomic research, the number of sequences involved in comparative methods has grown immensely. Among them are methods for similarity calculation, which are used by many bioinformatics applications. Due to the huge amount of data, combining low-complexity methods with parallel computing is becoming desirable. k-mers counting is a very efficient method that yields good biological results. In this work, the development of a parallel algorithm for multiple sequence similarity calculation using the k-mers counting method is proposed. Tests show that the algorithm exhibits very good scalability and a nearly linear speedup; a 12x speedup was obtained on 14 nodes. This algorithm can be used in the parallelization of some multiple sequence alignment tools, such as MAFFT and MUSCLE. PMID:25140318
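A minimal serial sketch of the k-mers counting idea: count k-mers per sequence and compare count vectors. The cosine similarity measure is our choice for illustration, and the parallel decomposition from the paper is not shown.

```python
from collections import Counter
import math

def kmer_counts(seq, k=3):
    # Count all overlapping k-mers of the sequence
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(a, b):
    # Cosine similarity between two k-mer count vectors
    dot = sum(a[m] * b[m] for m in a if m in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

s1 = "ACGTACGTGA"
s2 = "ACGTACGTGA"
s3 = "TTTTTTTTTT"
sim_same = cosine_similarity(kmer_counts(s1), kmer_counts(s2))  # ~1.0
sim_diff = cosine_similarity(kmer_counts(s1), kmer_counts(s3))  # 0.0
```

Because each pairwise comparison is independent, the all-pairs similarity matrix parallelizes naturally, which is what enables the near-linear speedup reported.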
Integrating multiple data sources in species distribution modeling: a framework for data fusion.
Pacifici, Krishna; Reich, Brian J; Miller, David A W; Gardner, Beth; Stauffer, Glenn; Singh, Susheela; McKerrow, Alexa; Collazo, Jaime A
2017-03-01
The last decade has seen a dramatic increase in the use of species distribution models (SDMs) to characterize patterns of species' occurrence and abundance. Efforts to parameterize SDMs often create a tension between the quality and quantity of data available to fit models. Estimation methods that integrate both standardized and non-standardized data types offer a potential solution to the tradeoff between data quality and quantity. Recently, several authors have developed approaches for jointly modeling two sources of data (one of high quality and one of lesser quality). We extend their work by allowing for explicit spatial autocorrelation in occurrence and detection error using a Multivariate Conditional Autoregressive (MVCAR) model, and develop three models that share information in a less direct manner, resulting in more robust performance when the auxiliary data is of lesser quality. We describe these three new approaches ("Shared," "Correlation," "Covariates") for combining data sources and show their use in a case study of the Brown-headed Nuthatch in the Southeastern U.S. and through simulations. All three of the approaches that used the second data source improved out-of-sample predictions relative to a single data source ("Single"). When information in the second data source is of high quality, the Shared model performs the best, but the Correlation and Covariates models also perform well. When the information in the second data source is of lesser quality, the Correlation and Covariates models performed better, suggesting that they are robust alternatives when little is known about auxiliary data collected opportunistically or through citizen scientists. Methods that allow for both data types to be used will maximize the useful information available for estimating species distributions.
An integrated pan-tropical biomass map using multiple reference datasets.
Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon
2016-04-01
We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated AGB stock for the tropics (23.4 N-23.4 S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had a RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha(-1) vs. 21 and 28 Mg ha(-1) for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets.
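The core fusion step, bias removal followed by weighted linear averaging within a stratum, can be sketched as below. In the paper the per-map biases and error weights are estimated per stratum from the reference data; here the inverse-variance weighting scheme and all numbers are illustrative simplifications.

```python
def fuse(map_a, map_b, bias_a, bias_b, var_a, var_b):
    """Fuse two per-pixel AGB estimates: subtract each input map's
    stratum bias, then average with weights inversely proportional to
    the maps' error variances (a common weighting choice, assumed here
    for illustration)."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = []
    for a, b in zip(map_a, map_b):
        a_c = a - bias_a            # bias-corrected pixel values
        b_c = b - bias_b
        fused.append((w_a * a_c + w_b * b_c) / (w_a + w_b))
    return fused

# Two pixels (Mg dry mass/ha), illustrative biases and error variances
fused = fuse([120.0, 200.0], [100.0, 180.0],
             bias_a=10.0, bias_b=-5.0, var_a=400.0, var_b=900.0)
```

With these numbers the weights are in a 9:4 ratio, so the first fused pixel is (9·110 + 4·105)/13 ≈ 108.5.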
Rao, Gottipaty N; Karpf, Andreas
2011-05-01
We report on the development of a new sensor for NO₂ with ultrahigh detection sensitivity. This has been accomplished by combining off-axis integrated cavity output spectroscopy (OA-ICOS), which can provide path lengths of the order of several kilometers in a small-volume cell, with multiple-line integrated absorption spectroscopy (MLIAS), in which we integrate the absorption spectra over a large number of rotational-vibrational transitions of the molecular species to further improve the sensitivity. Employing an external cavity quantum cascade laser operating in the 1601-1670 cm⁻¹ range and a high-finesse optical cavity, the absorption spectra of NO₂ over 100 transitions in the R band have been recorded. From the observed linear relationship between the integrated absorption and the concentration of NO₂, and the standard deviation of the integrated absorption signal, we report an effective detection sensitivity of approximately 28 ppt (parts in 10¹²) for NO₂. To the best of our knowledge, this is among the most sensitive levels of detection of NO₂ to date.
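The MLIAS idea, integrating the absorption signal over many lines and reading a detection limit off the linear calibration, can be sketched as follows. The noise-over-slope convention and all spectra and numbers here are illustrative simplifications, not the paper's data.

```python
def integrated_absorption(wavenumbers, absorbances):
    """Trapezoidal integration of the absorption signal over the full
    set of rotational-vibrational lines in the scanned window."""
    return sum((absorbances[i] + absorbances[i + 1])
               * (wavenumbers[i + 1] - wavenumbers[i]) / 2.0
               for i in range(len(absorbances) - 1))

def detection_limit(concs, areas, sigma):
    """Slope of integrated absorption vs concentration; the smallest
    detectable concentration is estimated as noise / slope (a
    simplified convention for illustration)."""
    n = len(concs)
    cbar, abar = sum(concs) / n, sum(areas) / n
    slope = sum((c - cbar) * (a - abar) for c, a in zip(concs, areas)) / \
            sum((c - cbar) ** 2 for c in concs)
    return sigma / slope

# Synthetic flat band whose height scales linearly with concentration
nu = [1601.0 + 0.5 * i for i in range(5)]   # wavenumber grid, cm^-1
concs = [1.0, 2.0, 3.0, 4.0]                # arbitrary units
areas = [integrated_absorption(nu, [0.25 * c] * 5) for c in concs]
limit = detection_limit(concs, areas, sigma=0.005)
```

Summing over many lines raises the slope of the calibration relative to the noise, which is why integrating over ~100 transitions improves the sensitivity.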
NASA Astrophysics Data System (ADS)
Kapil, V.; VandeVondele, J.; Ceriotti, M.
2016-02-01
The development and implementation of increasingly accurate methods for electronic structure calculations mean that, for many atomistic simulation problems, treating light nuclei as classical particles is now one of the most serious approximations. Even though recent developments have significantly reduced the overhead for modeling the quantum nature of the nuclei, the cost is still prohibitive when combined with advanced electronic structure methods. Here we present how multiple time step integrators can be combined with ring-polymer contraction techniques (effectively, multiple time stepping in imaginary time) to reduce virtually to zero the overhead of modelling nuclear quantum effects, while describing inter-atomic forces at high levels of electronic structure theory. This is demonstrated for a combination of MP2 and semi-local DFT applied to the Zundel cation. The approach can be seamlessly combined with other methods to reduce the computational cost of path integral calculations, such as high-order factorizations of the Boltzmann operator or generalized Langevin equation thermostats.
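A multiple time step (r-RESPA-style) integrator is easy to sketch for a toy force split: the "slow" force stands in for an expensive high-level electronic-structure term evaluated once per outer step, the "fast" force for a cheap reference evaluated many times. The ring-polymer contraction part is omitted, and the split harmonic system is purely illustrative.

```python
def respa_step(x, v, f_slow, f_fast, dt, n_inner, mass=1.0):
    """One r-RESPA multiple-time-step update: half-kick with the slow
    force, n_inner velocity-Verlet sub-steps with the fast force,
    then another slow half-kick."""
    v += 0.5 * dt * f_slow(x) / mass
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / mass
        x += h * v
        v += 0.5 * h * f_fast(x) / mass
    v += 0.5 * dt * f_slow(x) / mass
    return x, v

# Harmonic oscillator with the spring split into a stiff 'fast' part
# and a soft 'slow' part (total spring constant k = 101)
fast = lambda x: -100.0 * x
slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, slow, fast, dt=0.01, n_inner=10)
energy = 0.5 * v * v + 0.5 * 101.0 * x * x   # should stay near 50.5
```

Because the scheme is symplectic, the total energy stays close to its initial value even though the expensive force is evaluated ten times less often.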
NASA Astrophysics Data System (ADS)
Li, M.; Helfrich, S.
2011-12-01
Global snow and ice cover is a key component of the climate and hydrologic system as well as daily weather forecasting. The National Oceanic and Atmospheric Administration (NOAA) has produced a daily northern hemisphere snow and ice cover chart since 1997 through the Interactive Multisensor Snow and Ice Mapping System (IMS). The IMS integrates and visualizes a wide variety of satellite data, as well as derived snow/ice products and surface observations, to provide meteorologists with the ability to interactively prepare the daily northern hemisphere snow and ice cover chart. These products are presently used as operational inputs into several weather prediction models and are applied in climate monitoring. The IMS is currently on its second version (released in 2004) and is scheduled to be upgraded to the third version (V3) in 2013. The IMS V3 will have nearly 40 external inputs as data sources, which fall into five data formats: binary image, HDF file, GeoTIFF image, Shapefile image and ASCII file. With the exception of the GeoTIFF and Shapefile files, which are used directly by IMS, all other types of data are pre-processed to the ENVI image file format and "sectorized" for different areas around the northern hemisphere. The IMS V3 will generate daily snow and ice cover maps in five formats (ASCII, ENVI, GeoTIFF, GIF and GRIB2) and three resolutions (24 km, 4 km and 1 km). In this presentation, the methods for accessing and processing satellite data, model results and surface reports are discussed. All input data, with varying formats and resolutions, are processed to a fixed projection. The visualization methodology for IMS is provided at six resolutions: 48 km, 24 km, 8 km, 4 km, 2 km and 1 km. This work will facilitate the future enhancement of IMS, provide users with an understanding of the software architecture, provide a prospectus on future data sources, and help to preserve the integrity of the long-standing satellite-derived snow and ice
ERIC Educational Resources Information Center
Schira Hagerman, Michelle
2014-01-01
This dissertation study presents an instructional intervention called LINKS: Learning to Integrate InterNet Knowledge Strategically. It reports evidence of the intervention's impact on two variables: (a) ninth graders' use of ten online reading and integration strategies while engaged in dyadic online inquiry on science topics in school, and (b)…
A review of statistical methods for data sets with multiple censoring points
Gilbert, R.O.
1995-07-06
This report reviews and summarizes recent literature on statistical methods for analyzing data sets that are censored by multiple censoring points. This report is organized as follows. Following the introductory comments in Section 2, a brief discussion of detection limits is given in Section 3. Sections 4 and 5 focus on data analysis methods for estimating parameters and testing hypotheses, respectively, when data sets are left censored with multiple censoring points. A list of publications that deal with a variety of other applications for censored data sets is provided in Section 6. Recommendations on future research for developing new or improved tools for statistically analyzing multiple left-censored data sets are provided in Section 7. The list of references is in Section 8.
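As a flavor of the estimation problem the report reviews, the following sketch brackets the sample mean of a left-censored data set with two different detection limits by simple substitution. This is a deliberately crude stand-in for the distributional methods surveyed; the data are illustrative.

```python
def mean_bounds(data):
    """Bounds on the sample mean for left-censored data with multiple
    detection limits. Each observation is (value, censored): a censored
    value lies somewhere in [0, detection limit], so substituting 0 and
    the limit itself brackets the true mean (a simple, conservative
    alternative to parametric MLE approaches)."""
    n = len(data)
    low = sum(0.0 if cens else v for v, cens in data) / n
    high = sum(v for v, cens in data) / n
    return low, high

# Two detection limits (0.5 and 1.0) present in the same data set
obs = [(0.5, True), (1.0, True), (2.0, False), (3.0, False)]
lo, hi = mean_bounds(obs)   # the true mean lies between lo and hi
```

The gap between the bounds grows with the censoring fraction, which is one motivation for the likelihood-based and nonparametric methods the report recommends over naive substitution.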
Utility of network integrity methods in therapeutic target identification
Peng, Qian; Schork, Nicholas J.
2013-01-01
Analysis of the biological gene networks involved in a disease may lead to the identification of therapeutic targets. Such analysis requires exploring network properties, in particular the importance of individual network nodes (i.e., genes). There are many measures that consider the importance of nodes in a network and some may shed light on the biological significance and potential optimality of a gene or set of genes as therapeutic targets. This has been shown to be the case in cancer therapy. A dilemma exists, however, in finding the best therapeutic targets based on network analysis since the optimal targets should be nodes that are highly influential in, but not toxic to, the functioning of the entire network. In addition, cancer therapeutics targeting a single gene often result in relapse since compensatory, feedback and redundancy loops in the network may offset the activity associated with the targeted gene. Thus, multiple genes reflecting parallel functional cascades in a network should be targeted simultaneously, but require the identification of such targets. We propose a methodology that exploits centrality statistics characterizing the importance of nodes within a gene network that is constructed from the gene expression patterns in that network. We consider centrality measures based on both graph theory and spectral graph theory. We also consider the origins of a network topology, and show how different available representations yield different node importance results. We apply our techniques to tumor gene expression data and suggest that the identification of optimal therapeutic targets involving particular genes, pathways and sub-networks based on an analysis of the nodes in that network is possible and can facilitate individualized cancer treatments. The proposed methods also have the potential to identify candidate cancer therapeutic targets that are not thought to be oncogenes but nonetheless play important roles in the functioning of a cancer
Huard, Jérémy; Mueller, Stephanie; Gilles, Ernst D; Klingmüller, Ursula; Klamt, Steffen
2012-01-01
During liver regeneration, quiescent hepatocytes re-enter the cell cycle to proliferate and compensate for lost tissue. Multiple signals including hepatocyte growth factor, epidermal growth factor, tumor necrosis factor α, interleukin-6, insulin and transforming growth factor β orchestrate these responses and are integrated during the G1 phase of the cell cycle. To investigate how these inputs influence DNA synthesis as a measure for proliferation, we established a large-scale integrated logical model connecting multiple signaling pathways and the cell cycle. We constructed our model based upon established literature knowledge, and successively improved and validated its structure using hepatocyte-specific literature as well as experimental DNA synthesis data. Model analyses showed that activation of the mitogen-activated protein kinase and phosphatidylinositol 3-kinase pathways was sufficient and necessary for triggering DNA synthesis. In addition, we identified key species in these pathways that mediate DNA replication. Our model predicted oncogenic mutations that were compared with the COSMIC database, and proposed intervention targets to block hepatocyte growth factor-induced DNA synthesis, which we validated experimentally. Our integrative approach demonstrates that, despite the complexity and size of the underlying interlaced network, logical modeling enables an integrative understanding of signaling-controlled proliferation at the cellular level, and thus can provide intervention strategies for distinct perturbation scenarios at various regulatory levels. PMID:22443451
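The flavor of such a logical model can be sketched as a tiny synchronous Boolean network. The four-node model below is a drastic illustrative reduction, not the paper's network, but it reproduces the stated finding that both the MAPK and PI3K branches must be active for DNA synthesis.

```python
def step(state, rules):
    # Synchronous update: every node recomputed from the previous state
    return {node: rule(state) for node, rule in rules.items()}

# Hypothetical miniature model (names illustrative): DNA synthesis
# requires BOTH the MAPK and the PI3K branch downstream of HGF.
rules = {
    "HGF":  lambda s: s["HGF"],             # input ligand, held fixed
    "MAPK": lambda s: s["HGF"],
    "PI3K": lambda s: s["HGF"],
    "DNA":  lambda s: s["MAPK"] and s["PI3K"],
}

init = {"HGF": True, "MAPK": False, "PI3K": False, "DNA": False}
state = dict(init)
for _ in range(3):                          # iterate to a fixed point
    state = step(state, rules)

# An intervention ('knockout') blocking the PI3K branch prevents DNA synthesis
knockout = dict(rules, PI3K=lambda s: False)
state_ko = dict(init)
for _ in range(3):
    state_ko = step(state_ko, knockout)
```

This is exactly the kind of in-silico perturbation the paper uses to propose intervention targets, only at a vastly larger scale.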
Membrane-type photonic integration of InGaN/GaN multiple-quantum-well diodes and waveguide
NASA Astrophysics Data System (ADS)
Gao, Xumin; Bai, Dan; Cai, Wei; Xu, Yin; Yuan, Jialei; Yang, Yongchao; Zhu, Guixia; Cao, Xun; Zhu, Hongbo; Wang, Yongjin
2017-02-01
We report here a membrane-type integration of InGaN/GaN multiple-quantum-well diodes (MQWDs) with a waveguide to build a highly integrated photonic system on a GaN-on-silicon platform. A suspended MQWD can serve either as a light-emitting diode (LED) or as a photodiode. In the fabricated photonic system, part of the LED emission is coupled into a suspended waveguide; the guided light propagates laterally along the waveguide and is finally sensed by the photodiode. The photonic system can detect the in-plane guided light and external incident light simultaneously. A planar optical communication experiment demonstrates that the proof-of-concept monolithic photonic integration system can achieve in-plane visible light communication. This work paves the way towards novel active electro-optical sensing systems and planar optical communication in the visible range.
NASA Technical Reports Server (NTRS)
Sidi, A.; Israeli, M.
1986-01-01
High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
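The starting point of these methods, the spectral accuracy of the plain trapezoidal rule for smooth periodic integrands (every Euler-Maclaurin correction term vanishes over a full period), is easy to demonstrate; the test integrand here is our own choice.

```python
import math

def trapezoid_periodic(f, n, period=2 * math.pi):
    """Trapezoidal rule over one full period. Because f(0) = f(period),
    the endpoint terms merge and the rule is just a uniform sum; for
    smooth periodic f it converges geometrically fast."""
    h = period / n
    return h * sum(f(i * h) for i in range(n))

# Integral of exp(sin x) over [0, 2*pi] equals 2*pi*I0(1) ~ 7.9549265
f = lambda x: math.exp(math.sin(x))
approx = trapezoid_periodic(f, 16)   # already accurate to ~1e-15
```

The paper's contribution is the harder case: correcting this rule, via the appropriate Euler-Maclaurin expansions, when the periodic integrand is singular.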
An objective method for partitioning the entire flood season into multiple sub-seasons
NASA Astrophysics Data System (ADS)
Chen, Lu; Singh, Vijay P.; Guo, Shenglian; Zhou, Jianzhong; Zhang, Junhong; Liu, Pan
2015-09-01
Information on flood seasonality is required in many practical applications, such as seasonal frequency analysis and reservoir operation. Several statistical methods for identifying flood seasonality have been widely used, such as the directional (DS) and relative frequency (RF) methods. With these methods, however, flood seasons are identified subjectively by visually assessing the temporal distribution of flood occurrences. In this study, a new method is proposed to identify flood seasonality and partition the entire flood season into multiple sub-seasons objectively. A statistical experiment was carried out to evaluate the performance of the proposed method, and the results demonstrated that it performed satisfactorily. The proposed approach was then applied to the Geheyan and Baishan Reservoirs, China, which have different flood regimes. The proposed method performs extremely well for the observed data and is more objective than the traditional methods.
NASA Technical Reports Server (NTRS)
Gaucher, Brian P. (Inventor); Grzyb, Janusz (Inventor); Liu, Duixian (Inventor); Pfeiffer, Ullrich R. (Inventor)
2008-01-01
Apparatus and methods are provided for packaging IC chips together with integrated antenna modules designed to provide a closed EM (electromagnetic) environment for antenna radiators, thereby allowing antennas to be designed independent from the packaging technology.
One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...
29 CFR 4010.12 - Alternative method of compliance for certain sponsors of multiple employer plans.
Code of Federal Regulations, 2010 CFR
2010-07-01
Section 4010.12, Labor Regulations Relating to Labor (Continued), Pension Benefit Guaranty Corporation, Certain Reporting and Disclosure Requirements, Annual Financial and Actuarial Information Reporting: Alternative method of compliance for certain sponsors of multiple employer plans.
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
Learning Multiplication Facts: A Study of Children Taught by Discovery Methods in England.
ERIC Educational Resources Information Center
Steel, Sylvia; Funnell, Elaine
2001-01-01
Examined development of multiplication skills in 8- to 12-year-olds taught by discovery methods. Found a general shift away from less effective strategies across ages 8 to 12, but by 11 years, relatively few used the most effective strategy of retrieval for all operands. Effective strategy development was related to nonverbal reasoning ability and…
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
ERIC Educational Resources Information Center
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
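The regression itself is ordinary least squares on two predictors, (v'+1/2) and (v'+1/2)², which is what Excel's LINEST function computes. A sketch with synthetic constants follows; the numeric values are illustrative, not iodine's actual molecular constants.

```python
def lstsq(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination."""
    p, n = len(X[0]), len(X)
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)]
         for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for r in range(p):                      # forward elimination
        for rr in range(r + 1, p):
            f = A[rr][r] / A[r][r]
            for c in range(r, p):
                A[rr][c] -= f * A[r][c]
            b[rr] -= f * b[r]
    coef = [0.0] * p
    for r in reversed(range(p)):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, p))) / A[r][r]
    return coef

# Vibronic model: nu(v') = T_e + we'*(v'+1/2) - we'xe'*(v'+1/2)^2
# Synthetic constants in cm^-1 (illustrative only)
T_e, we, wexe = 15000.0, 125.0, 0.75
rows, obs = [], []
for v in range(15):
    u = v + 0.5
    rows.append([1.0, u, u * u])
    obs.append(T_e + we * u - wexe * u * u)
fit_Te, fit_we, c2 = lstsq(rows, obs)
fit_wexe = -c2                              # coefficient on u^2 is -we'xe'
```

Regressing band positions on both predictors at once yields the harmonic frequency and the anharmonicity in a single fit, which is the pedagogical point of the procedure.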
Propensity Scores: Method for Matching on Multiple Variables in Down Syndrome Research
ERIC Educational Resources Information Center
Blackford, Jennifer Urbano
2009-01-01
Confounding variables can affect the results from studies of children with Down syndrome and their families. Traditional methods for addressing confounders are often limited, providing control for only a few confounding variables. This study introduces propensity score matching to control for multiple confounding variables. Using Tennessee birth…
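After propensity scores are estimated (typically by logistic regression of group membership on the confounders), the matching step reduces to nearest-neighbour pairing on the score. A greedy 1:1 sketch follows; the scores, IDs, and greedy strategy are illustrative simplifications.

```python
def match(treated, controls):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.
    Each element is (id, propensity); each control is used at most once."""
    available = dict(controls)
    pairs = []
    for tid, ps in sorted(treated, key=lambda t: t[1]):
        best = min(available, key=lambda c: abs(available[c] - ps))
        pairs.append((tid, best))
        del available[best]                 # matching without replacement
    return pairs

treated = [("t1", 0.30), ("t2", 0.70)]
controls = [("c1", 0.28), ("c2", 0.65), ("c3", 0.90)]
pairs = match(treated, controls)   # [('t1', 'c1'), ('t2', 'c2')]
```

Because the single score summarizes all the confounders at once, matching on it balances many variables simultaneously, which is the advantage over matching on each confounder separately.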
Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students
ERIC Educational Resources Information Center
Thai, Liong; Yasin, Mohd. Hanafi Mohd
2016-01-01
Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of the Magic Finger Teaching Method (MFTM) and students' perception of MFTM. The research employs a quasi-experimental, non-equivalent pre-test and post-test control group design. Pre-tests, post-tests and questionnaires were used. As…
Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.
ERIC Educational Resources Information Center
Rowell, R. Kevin
In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
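The procedure can be sketched with a single predictor (real applications use multiple predictors, but the logic is identical): derive an equation on each half-sample, apply it to the other half, and compare the cross-validity correlations with the derivation R. Data and seed below are illustrative.

```python
import random

def fit(xs, ys):
    # One-predictor least squares: y = a + b*x
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    b = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) / \
        sum((x - xb) ** 2 for x in xs)
    return yb - b * xb, b

def corr(us, vs):
    # Pearson correlation between predictions and observed values
    n = len(us)
    ub, vb = sum(us) / n, sum(vs) / n
    cov = sum((u - ub) * (v - vb) for u, v in zip(us, vs))
    su = sum((u - ub) ** 2 for u in us) ** 0.5
    sv = sum((v - vb) ** 2 for v in vs) ** 0.5
    return cov / (su * sv)

def double_cross_validate(xs, ys):
    """Derive a prediction equation on each half-sample and evaluate it
    on the *other* half; the drop of these correlations below the
    derivation-sample R estimates shrinkage."""
    h = len(xs) // 2
    a1, b1 = fit(xs[:h], ys[:h])
    a2, b2 = fit(xs[h:], ys[h:])
    r1 = corr([a2 + b2 * x for x in xs[:h]], ys[:h])
    r2 = corr([a1 + b1 * x for x in xs[h:]], ys[h:])
    return r1, r2

random.seed(1)
xs = [random.uniform(0, 10) for _ in range(40)]
ys = [2.0 * x + 1.0 + random.gauss(0, 1) for x in xs]
r1, r2 = double_cross_validate(xs, ys)   # both high for a stable model
```

If both cross-validity correlations stay close to the derivation R, the equation is judged stable; a large drop signals shrinkage.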
ERIC Educational Resources Information Center
Mekonnen, Adugna K.
2010-01-01
This study develops a multiple measure placement method (MMPM) that is comprised of predictor variables such as ACCUPLACER math (ACCM), ACCUPLACER reading (ACCR), arithmetic diagnostic test (ADT), high school grade point average (HSGPA), high school mathematics performance (HSMP), and duration since last mathematics course taken in high school…
A Method for Imputing Response Options for Missing Data on Multiple-Choice Assessments
ERIC Educational Resources Information Center
Wolkowitz, Amanda A.; Skorupski, William P.
2013-01-01
When missing values are present in item response data, there are a number of ways one might impute a correct or incorrect response to a multiple-choice item. There are significantly fewer methods for imputing the actual response option an examinee may have provided if he or she had not omitted the item either purposely or accidentally. This…
NASA Astrophysics Data System (ADS)
Brewe, Eric; Bruun, Jesper; Bearden, Ian G.
2016-12-01
We describe Module Analysis for Multiple Choice Responses (MAMCR), a new methodology for carrying out network analysis on responses to multiple-choice assessments. This method is used to identify modules of non-normative responses, which can then be interpreted as an alternative to factor analysis. MAMCR allows us to identify conceptual modules in student responses that are more specific than the broad categorization of questions possible with factor analysis, and to incorporate non-normative responses. Thus, this method may prove to have greater utility in helping to modify instruction. In MAMCR the responses to a multiple-choice assessment are first treated as a bipartite student × response network, which is then projected into a response × response network. We then use data reduction and community detection techniques to identify modules of non-normative responses. To illustrate the utility of the method we have analyzed one cohort of postinstruction Force Concept Inventory (FCI) responses. From this analysis, we find nine modules, which we then interpret. The first three modules are Impetus Force, More Force Yields More Results, and Force as Competition or Undistinguished Velocity and Acceleration. This method has a variety of potential uses, particularly in helping classroom instructors use multiple-choice assessments as diagnostic instruments beyond the Force Concept Inventory.
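The first MAMCR step, projecting the bipartite student × response network into a response × response network, is a simple matrix operation. The incidence matrix below is toy data; module identification via community detection would follow on the projected network.

```python
# Sketch of the bipartite projection step in MAMCR.
import numpy as np

# Rows = students, columns = response options (e.g. item1-A, item1-B, ...);
# a 1 means the student selected that response.
B = np.array([[1, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 1, 0]])

# One-mode projection: W[i, j] = number of students who chose both
# response i and response j (edge weights of the response network).
W = B.T @ B
np.fill_diagonal(W, 0)   # drop self-links
print(W)
```

Community detection on the weighted graph W would then group frequently co-occurring non-normative responses into interpretable modules.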
A fast and well-conditioned spectral method for singular integral equations
NASA Astrophysics Data System (ADS)
Slevinsky, Richard Mikael; Olver, Sheehan
2017-03-01
We develop a spectral method for solving univariate singular integral equations over unions of intervals by utilizing Chebyshev and ultraspherical polynomials to reformulate the equations as almost-banded infinite-dimensional systems. This is accomplished by using low-rank approximations for sparse representations of the bivariate kernels. The resulting system can be solved in O(m²n) operations using an adaptive QR factorization, where m is the bandwidth and n is the optimal number of unknowns needed to resolve the true solution. The complexity is reduced to O(mn) operations by pre-caching the QR factorization when the same operator is used for multiple right-hand sides. Stability is proved by showing that the resulting linear operator can be diagonally preconditioned to be a compact perturbation of the identity. Applications considered include the Faraday cage, and acoustic scattering for the Helmholtz and gravity Helmholtz equations, including spectrally accurate numerical evaluation of the far- and near-field solution. The Julia software package SingularIntegralEquations.jl implements our method with a convenient, user-friendly interface.
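The O(mn) reuse idea, factor the operator once and then solve cheaply for many right-hand sides, can be shown in miniature. A dense LU via SciPy stands in here for the paper's cached adaptive QR on almost-banded infinite-dimensional systems; the matrix is a made-up well-conditioned operator, not one arising from a singular integral equation.

```python
# Factorization caching for multiple right-hand sides (illustrative only).
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 50)) + 50 * np.eye(50)  # well-conditioned stand-in
lu = lu_factor(A)                                # pay factorization cost once

for _ in range(3):                               # many right-hand sides
    b = rng.normal(size=50)
    x = lu_solve(lu, b)                          # cheap back-substitution
    assert np.allclose(A @ x, b)
```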
Local discretization method for overdamped Brownian motion on a potential with multiple deep wells
NASA Astrophysics Data System (ADS)
Nguyen, P. T. T.; Challis, K. J.; Jack, M. W.
2016-11-01
We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells into a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that wells deeper than about five times the thermal energy can be associated with a discrete localized state, while shallower wells are better incorporated into the local metastable potentials of neighboring deep wells.
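The target object of such a reduction, a discrete master equation with one state per deep well, looks like the following two-well toy. The Arrhenius-type hopping rates are a standard approximation used here for illustration, not the paper's localized-basis construction.

```python
# Two-well master equation dp/dt = L p with barrier-activated hopping rates.
import numpy as np

kT = 1.0
barriers = np.array([6.0, 7.5])   # barrier heights out of wells 0 and 1 (in kT)
k01 = np.exp(-barriers[0] / kT)   # rate well 0 -> well 1
k10 = np.exp(-barriers[1] / kT)   # rate well 1 -> well 0

# Rate matrix: columns sum to zero (probability conservation).
L = np.array([[-k01,  k10],
              [ k01, -k10]])

# Stationary distribution from detailed balance: p0/p1 = k10/k01.
p = np.array([k10, k01]) / (k10 + k01)
assert np.allclose(L @ p, 0)
print(p)
```

The well with the higher escape barrier (well 1) carries more stationary probability, as expected for the deeper metastable state.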
[Hemostasis management in multiple trauma patients--value of near-patient diagnostic methods].
Jámbor, Csilla; Heindl, Bernhard; Spannagl, Michael; Rolfes, Caroline; Dinges, Gerhard Klaus; Frietsch, Thomas
2009-03-01
Massively transfused multiple trauma patients commonly develop a complex coagulopathy which needs immediate treatment. Near-patient diagnostic methods are available for the management of this coagulopathy and for guiding therapeutic options with blood products and haemostatic drugs: conventional laboratory analysis methods adapted to the point-of-care (POC) situation (blood gas analysis, POC PT, APTT and platelet count), and the complex whole-blood methods used for near-patient coagulation monitoring (thromboelastometry and platelet function analysis). Based on the new guidelines of the German Medical Association for the use of blood and plasma derivatives, interventions with blood products and haemostatic drugs in multiple trauma patients are suggested. The diagnostic value of near-patient methods for coagulation monitoring is discussed.
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
From a breeding standpoint, multiple trait integration (MTI) is a four-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) using backcross breeding, ultimately regaining the performance attributes of the target hybrid along with reliable expression of the value-added traits. In light of the overarching goal of recovering equivalent performance in the finished conversion, this study focuses on the first step of MTI, single event introgression, exploring the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events, incorporating eight events into the female hybrid parent and seven into the male parent. Single event introgression is conducted in parallel streams to convert the recurrent parent (RP) for individual events, with the primary objective of minimizing residual non-recurrent parent (NRP) germplasm, especially in the chromosomal proximity to the event (i.e. linkage drag). In keeping with a defined lower limit of 96.66 % overall RP germplasm recovery (i.e. ≤120 cM NRP germplasm given a genome size of 1,788 cM), a breeding goal for each of the 15 single event conversions was developed: <8 cM of residual NRP germplasm across the genome with ~1 cM in the 20 cM region flanking the event. Using computer simulation, we aimed to identify optimal breeding strategies for single event introgression to achieve this breeding goal, measuring efficiency in terms of the number of backcross generations required, marker data points needed, and total population size across generations. Various selection schemes classified as three-stage, modified two-stage, and combined selection conducted from BC1 through BC3, BC4, or BC5 were compared. The breeding goal was achieved with a selection scheme involving five generations of marker-aided backcrossing, with BC1 through BC3 selected for the event of interest and minimal linkage drag at a population size of 600, and BC4 and BC5 selected for
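For context on the recovery targets above, the textbook expectation for recurrent-parent genome recovery under unselected backcrossing is E[RP] = 1 − (1/2)^(t+1) after generation BCt. The short calculation below computes this baseline for the study's 1,788 cM genome; it is the unselected expectation, not the paper's marker-aided simulation results, which exceed it.

```python
# Expected RP genome fraction and residual NRP germplasm per backcross
# generation, assuming no marker-aided selection.
genome_cM = 1788
for t in range(1, 6):
    rp = 1 - 0.5 ** (t + 1)                 # E[RP] after BCt
    nrp_cM = (1 - rp) * genome_cM           # expected residual NRP germplasm
    print(f"BC{t}: expected RP = {rp:.4%}, residual NRP ~ {nrp_cM:.1f} cM")
```

Even at BC5 the unselected expectation leaves roughly 28 cM of NRP germplasm, which is why marker-aided selection is needed to reach the <8 cM goal.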