Multiple methods integration for structural mechanics analysis and design
NASA Technical Reports Server (NTRS)
Housner, J. M.; Aminpour, M. A.
1991-01-01
A new research area of multiple methods integration is proposed for joining diverse methods of structural mechanics analysis which interact with one another. Three categories of multiple methods are defined: those in which a physical interface is well defined; those in which a physical interface is not well defined but must be selected; and those in which the interface is a mathematical transformation. Two fundamental integration procedures are presented that can be extended to integrate various methods (e.g., finite elements, Rayleigh-Ritz, Galerkin, and integral methods) with one another. Since the finite element method will likely be the major method to be integrated, its enhanced robustness under element distortion is also examined and a new robust shell element is demonstrated.
Method and system of integrating information from multiple sources
Alford, Francine A.; Brinkerhoff, David L.
2006-08-15
A system and method of integrating information from multiple sources in a document centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.
Integrating Multiple Teaching Methods into a General Chemistry Classroom.
ERIC Educational Resources Information Center
Francisco, Joseph S.; Nicoll, Gayle; Trautmann, Marcella
1998-01-01
Four different methods of teaching--cooperative learning, class discussions, concept maps, and lectures--were integrated into a freshman-level general chemistry course to compare students' levels of participation. Findings support the idea that multiple modes of learning foster the metacognitive skills necessary for mastering general chemistry.…
A multistage gene normalization system integrating multiple effective methods.
Li, Lishuang; Liu, Shanshan; Li, Lihua; Fan, Wenting; Huang, Degen; Zhou, Huiwei
2013-01-01
Gene/protein recognition and normalization is an important preliminary step for many biological text mining tasks. In this paper, we present a multistage gene normalization system which consists of four major subtasks: pre-processing, dictionary matching, ambiguity resolution and filtering. For the first subtask, we apply the gene mention tagger developed in our earlier work, which achieves an F-score of 88.42% on the BioCreative II GM testing set. In the stage of dictionary matching, the exact matching and approximate matching between gene names and the EntrezGene lexicon have been combined. For the ambiguity resolution subtask, we propose a semantic similarity disambiguation method based on Munkres' Assignment Algorithm. At the last step, a filter based on Wikipedia has been built to remove the false positives. Experimental results show that the presented system can achieve an F-score of 90.1%, outperforming most of the state-of-the-art systems. PMID:24349160
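The ambiguity-resolution step above pairs gene mentions with candidate identifiers via Munkres' Assignment Algorithm. A minimal sketch of that idea using SciPy's `linear_sum_assignment` (an implementation of the same algorithm); the similarity matrix here is invented for illustration, not the paper's semantic similarity scores:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def disambiguate(similarity):
    """Assign each gene mention to at most one candidate identifier
    by maximizing total similarity (Munkres/Hungarian algorithm).

    similarity[i, j] = similarity between mention i and candidate j.
    """
    # linear_sum_assignment minimizes cost, so negate the similarities.
    rows, cols = linear_sum_assignment(-similarity)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy similarity matrix: 3 mentions x 3 candidate lexicon entries.
sim = np.array([
    [0.9, 0.2, 0.1],
    [0.4, 0.8, 0.3],
    [0.2, 0.5, 0.7],
])
print(disambiguate(sim))  # each mention paired with its best global match
```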
NASA Astrophysics Data System (ADS)
Tang, Xiaojun
2016-04-01
The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.
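The Gegenbauer-Gauss family contains the Legendre-Gauss points as a special case, so the underlying quadrature (not the paper's pseudospectral integration matrices) can be illustrated with NumPy's `leggauss`:

```python
import numpy as np

# Legendre-Gauss nodes and weights are a special case of the
# Gegenbauer-Gauss family used by the pseudospectral methods above.
nodes, weights = np.polynomial.legendre.leggauss(5)

# A 5-point Gauss rule integrates polynomials up to degree 9 exactly.
integral = np.sum(weights * nodes**4)   # = integral of x^4 over [-1, 1]
print(integral)                          # exact value is 2/5
```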
Nakamura, Kunio; Guizard, Nicolas; Fonov, Vladimir S.; Narayanan, Sridar; Collins, D. Louis; Arnold, Douglas L.
2013-01-01
Gray matter atrophy provides important insights into neurodegeneration in multiple sclerosis (MS) and can be used as a marker of neuroprotection in clinical trials. Jacobian integration is a method for measuring volume change that uses integration of the local Jacobian determinants of the nonlinear deformation field registering two images, and is a promising tool for measuring gray matter atrophy. Our main objective was to compare the statistical power of the Jacobian integration method to commonly used methods in terms of the sample size required to detect a treatment effect on gray matter atrophy. We used multi-center longitudinal data from relapsing–remitting MS patients and evaluated combinations of cross-sectional and longitudinal pre-processing with SIENAX/FSL, SPM, and FreeSurfer, as well as the Jacobian integration method. The Jacobian integration method outperformed these other commonly used methods, reducing the required sample size by a factor of 4–5. The results demonstrate the advantage of using the Jacobian integration method to assess neuroprotection in MS clinical trials. PMID:24266007
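Jacobian integration, as described above, amounts to integrating the local Jacobian determinants of the deformation field over the region of interest. A toy sketch under simplifying assumptions (a synthetic uniform expansion on a regular grid, finite-difference Jacobians, no registration step):

```python
import numpy as np

def volume_change(def_field, spacing=1.0):
    """Integrate local Jacobian determinants of a 3-D deformation field
    phi : R^3 -> R^3 to obtain the volume ratio of the deformed region.

    def_field: array of shape (3, nx, ny, nz) giving phi at each voxel.
    """
    # Gradient of each component of phi along each axis -> 3x3 Jacobian.
    J = np.empty(def_field.shape[1:] + (3, 3))
    for i in range(3):
        grads = np.gradient(def_field[i], spacing)
        for j in range(3):
            J[..., i, j] = grads[j]
    detJ = np.linalg.det(J)
    # Mean determinant = (deformed volume) / (original volume).
    return detJ.mean()

# Uniform 5% expansion: phi(x) = 1.05 x, so det J = 1.05**3 everywhere.
x, y, z = np.meshgrid(np.arange(8.0), np.arange(8.0), np.arange(8.0),
                      indexing="ij")
phi = 1.05 * np.stack([x, y, z])
print(volume_change(phi))  # ~1.157625
```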
NASA Astrophysics Data System (ADS)
Kitaoka, Norihide; Hamaguchi, Souta; Nakagawa, Seiichi
To achieve high recognition performance for a wide variety of noise types and over a wide range of signal-to-noise ratios, this paper presents methods for integrating four noise reduction algorithms: spectral subtraction with smoothing in the time direction, temporal-domain SVD-based speech enhancement, GMM-based speech estimation, and KLT-based comb-filtering. We propose two types of combination methods for these noise suppression algorithms: selection of the front-end processor, and combination of results from multiple recognition processes. Recognition results on the CENSREC-1 task show the effectiveness of the proposed methods.
Musick, Charles R.; Critchlow, Terence; Ganesh, Madhaven; Slezak, Tom; Fidelis, Krzysztof
2006-12-19
A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
A graphical model method for integrating multiple sources of genome-scale data
Dvorkin, Daniel; Biehs, Brian; Kechris, Katerina
2016-01-01
Making effective use of multiple data sources is a major challenge in modern bioinformatics. Genome-wide data such as measures of transcription factor binding, gene expression, and sequence conservation, which are used to identify binding regions and genes that are important to major biological processes such as development and disease, can be difficult to use together due to the different biological meanings and statistical distributions of the heterogeneous data types, but each can provide valuable information for understanding the processes under study. Here we present methods for integrating multiple data sources to gain a more complete picture of gene regulation and expression. Our goal is to identify genes and cis-regulatory regions which play specific biological roles. We describe a graphical mixture model approach for data integration, examine the effect of using different model topologies, and discuss methods for evaluating the effectiveness of the models. Model fitting is computationally efficient and produces results which have clear biological and statistical interpretations. The Hedgehog and Dorsal signaling pathways in Drosophila, which are critical in embryonic development, are used as examples. PMID:23934610
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display of multiple physiological measurements on a single video display terminal is provided. A subject is monitored by a plurality of instruments which feed data to a computer programmed to receive the data, calculate data products such as an index of engagement and heart rate, and display the data in graphical format simultaneously on a single video display terminal. In addition, live video representing the view of the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
Reichardt, Jens; Reichardt, Susanne
2006-04-20
A method is presented that permits determination of the cloud effective particle size from Raman- or Rayleigh-integration temperature measurements. The method exploits the dependence of the multiple-scattering contributions to the lidar signals from heights above the cloud on the particle size of the cloud. Independent temperature information is needed for the determination of size. By use of Raman-integration temperatures, the technique is applied to cirrus measurements. The magnitude of the multiple-scattering effect and the above-cloud lidar signal strength limit the method's range of applicability to cirrus optical depths from 0.1 to 0.5. Our work implies that records of stratospheric temperature obtained with lidar may be affected by multiple scattering in clouds up to heights of 30 km and beyond. PMID:16633433
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite-size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres, and cylinders, which are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas
2015-08-01
Multi-criteria decision analysis (MCDA) is a decision aid frequently used in forest management planning. It includes the evaluation of multiple criteria, such as the production of timber and non-timber forest products, and tangible as well as intangible values of ecosystem services (ES). Hence, it is beneficial compared with methods that take a purely financial perspective. Accordingly, MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches combine MCDA with, potentially, other decision-making techniques to make use of their individual benefits, leading to a more holistic view of the actual consequences that come with certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, the scientific world faces increasing challenges regarding the evaluation of ES and the trade-offs between them, for example between provisioning and regulating services. As the preferences of multiple stakeholders are essential to improve the decision process in multi-purpose forestry, participatory and hybrid approaches turn out to be of particular importance. Accordingly, hybrid methods show great potential for becoming most relevant in future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty regarding available information.
Integration of Multiple Field Methods in Characterizing a Field Site with Bayesian Inverse Modeling
NASA Astrophysics Data System (ADS)
Savoy, H.; Dietrich, P.; Osorio-Murillo, C. A.; Kalbacher, T.; Kolditz, O.; Ames, D. P.; Rubin, Y.
2014-12-01
A hydraulic property of a field can be expressed as a space random function (SRF), and the parameters of that SRF can be constrained by the Method of Anchored Distributions (MAD). MAD is a general Bayesian inverse modeling technique that quantifies the uncertainty of SRF parameters by integrating various direct local data along with indirect non-local data. An example is given with a high-resolution 3D aquifer analog with known hydraulic conductivity (K) and porosity (n) at every location. MAD is applied using different combinations of simulated measurements of K, n, and different scales of hydraulic head that represent different field methods. The ln(K) and n SRF parameters are characterized with each of the method combinations to assess the influence of the methods on the SRFs and their implications. The forward modeling equations are solved by the numerical modeling software OpenGeoSys (opengeosys.org) and MAD is applied with the software MAD# (mad.codeplex.com). The inverse modeling results are compared to the aquifer analog for success evaluation. The goal of the study is to show how integrating combinations of multi-scale and multi-type measurements from the field via MAD can be used to reduce the uncertainty in field-scale SRFs, as well as point values, of hydraulic properties.
Numerical integration of a relativistic two-body problem via a multiple scales method
NASA Astrophysics Data System (ADS)
Abouelmagd, Elbaz I.; Elshaboury, S. M.; Selim, H. H.
2016-01-01
We offer an analytical study on the dynamics of a two-body problem perturbed by small post-Newtonian relativistic term. We prove that, while the angular momentum is not conserved, the motion is planar. We also show that the energy is subject to small changes due to the relativistic effect. We also offer a periodic solution to this problem, obtained by a method based on the separation of time scales. We demonstrate that our solution is more general than the method developed in the book by Brumberg (Essential Relativistic Celestial Mechanics, Hilger, Bristol, 1991). The practical applicability of this model may be in studies of the long-term evolution of relativistic binaries (neutron stars or black holes).
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
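The splitting idea behind EMTSS, integrating fast terms with a small step and slow terms with a large one before recombining, can be illustrated on a toy stiff scalar ODE. The scheme below is a generic subcycling sketch, not the UCLA model's implementation:

```python
FAST, SLOW = -100.0, 1.0   # stiff decay term and slow forcing term

def naive_euler(y, dt, steps):
    """Plain forward Euler on y' = FAST*y + SLOW with one time step."""
    for _ in range(steps):
        y = y + dt * (FAST * y + SLOW)
    return y

def subcycled(y, dt, steps, subcycles=20):
    """Integrate the slow term with the large step dt and subcycle the
    fast (gravity-wave-like) term with dt/subcycles, then recombine."""
    h = dt / subcycles
    for _ in range(steps):
        y = y + dt * SLOW           # slow term: one large step
        for _ in range(subcycles):  # fast term: small, stable steps
            y = y + h * FAST * y
    return y

# dt = 0.03 violates the stability limit |1 + dt*FAST| <= 1 for the fast
# term, so plain Euler grows without bound while the subcycled scheme
# stays bounded near the small steady-state value -SLOW/FAST = 0.01.
print(naive_euler(1.0, 0.03, 200))  # grows without bound
print(subcycled(1.0, 0.03, 200))    # stays bounded
```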
Multiple detectors "Influence Method".
Rios, I J; Mayer, R E
2016-05-01
The "Influence Method" is conceived for the absolute determination of a nuclear particle flux in the absence of known detector efficiency and without the need to register coincidences of any kind. This method exploits the influence of the presence of one detector in the count rate of another detector, when they are placed one behind the other and define statistical estimators for the absolute number of incident particles and for the efficiency (Rios and Mayer, 2015a). Its detailed mathematical description was recently published (Rios and Mayer, 2015b) and its practical implementation in the measurement of a moderated neutron flux arising from an isotopic neutron source was exemplified in (Rios and Mayer, 2016). With the objective of further reducing the measurement uncertainties, in this article we extend the method for the case of multiple detectors placed one behind the other. The new estimators for the number of particles and the detection efficiency are herein derived. PMID:26943904
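Under a simplified reading of the two-detector setup (an idealized beam in which detector 1, with efficiency eps, removes the particles it detects before they reach an identical detector 2 behind it), the two count rates alone determine both the efficiency and the absolute particle number. The published estimators are more elaborate, so treat this as a toy model only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized two-detector "influence" setup: N particles hit detector 1
# (efficiency eps); only particles NOT detected there reach detector 2.
N_true, eps_true = 1_000_000, 0.3
c1 = rng.binomial(N_true, eps_true)       # counts in detector 1
c2 = rng.binomial(N_true - c1, eps_true)  # counts in detector 2

# Detector 2 sees the beam attenuated by (1 - eps), so c2/c1 ~ 1 - eps,
# which yields the absolute N without knowing eps beforehand.
eps_hat = 1.0 - c2 / c1
N_hat = c1 / eps_hat
print(eps_hat, N_hat)
```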
NASA Astrophysics Data System (ADS)
Chang, Xin
This dissertation proposal is concerned with the use of fast and broadband full-wave electromagnetic methods for modeling high-speed interconnects (e.g., vertical vias and horizontal traces) and passive components (e.g., decoupling capacitors) in PCB and package structures, 3D ICs, die-level packaging, and SIW-based devices, to effectively model the signal integrity (SI) and power integrity (PI) aspects of designs. The main contribution of this thesis is a novel methodology that hybridizes a fast full-wave method based on the Foldy-Lax multiple scattering equations, method of moments (MoM) based 1D technology, modes-decoupling-based geometry decomposition, and cavity mode expansions, to model and simulate electromagnetic scattering effects for irregular power/ground planes, multiple vias, and traces, enabling fast and accurate link-level simulation of multilayer electronic structures. In the modeling details, the interior massively coupled multiple-via problem is modeled mostly analytically using the Foldy-Lax multiple scattering equations. The dyadic Green's functions of the magnetic field are expressed in terms of waveguide modes in the vertical direction and vector cylindrical wave expansions or cavity mode expansions in the horizontal direction, combined with 2D MoM realized by 1D technology. For the incident field in the case of vias in an arbitrarily shaped antipad in a finite large cavity/waveguide, the exciting and scattering field coefficients are calculated based on a transformation which converts surface integration of magnetic surface currents in the antipad into 1D line integration of surface charges on the vias and on the ground plane. Geometry decomposition is applied to model and integrate both the vertical and horizontal interconnects/traces in arbitrarily shaped power/ground planes. Moreover, a new form of the multiple scattering equations is derived for solving coupling effects among mixed metallic
Integral 3D display using multiple LCDs
NASA Astrophysics Data System (ADS)
Okaichi, Naoto; Miura, Masato; Arai, Jun; Mishina, Tomoyuki
2015-03-01
The quality of the integral 3D images created by a 3D imaging system was improved by combining multiple LCDs to utilize a greater number of pixels than is possible with one LCD. A prototype of the display device was constructed by using four HD LCDs. An integral photography (IP) image displayed by the prototype is four times larger than that reconstructed by a single display. The pixel pitch of the HD display used is 55.5 μm, and the number of elemental lenses is 212 horizontally and 119 vertically. The 3D image pixel count is 25,228, and the viewing angle is 28°. Since this method is extensible, it is possible to display an integral 3D image of higher quality by increasing the number of LCDs. Using this integral 3D display structure makes it possible to make the whole device thinner than a projector-based display system. It is therefore expected to be applied to home television in the future.
The core of the research effort in the Regional Vulnerability Assessment Program (ReVA) is a set of data integration methods ranging from simple overlays to complex multivariate statistics. These methods are described in the EPA publication titled, "Regional Vulnerability Assess...
Multiple-stage integrating accelerometer
Devaney, H.F.
1984-06-27
An accelerometer assembly is provided for use in activating a switch in response to multiple acceleration pulses in series. The accelerometer includes a housing forming a chamber. An inertial mass or piston is slidably disposed in the chamber and spring biased toward a first or reset position. A damping system is also provided to damp piston movement in response to first and subsequent acceleration pulses. Additionally, a cam, including a Z-shaped slot, and cooperating follower pin slidably received therein are mounted to the piston and the housing. The middle or cross-over leg of the Z-shaped slot cooperates with the follower pin to block or limit piston movement and prevent switch activation in response to a lone acceleration pulse. The switch of the assembly is only activated after two or more separate acceleration pulses are sensed and the piston reaches the end of the chamber opposite the reset position.
Interstitial integrals in the multiple-scattering model
Swanson, J.R.; Dill, D.
1982-08-15
We present an efficient method for the evaluation of integrals involving multiple-scattering wave functions over the interstitial region. Transformation of the multicenter interstitial wave functions to a single center representation followed by a geometric projection reduces the integrals to products of analytic angular integrals and numerical radial integrals. The projection function, which has the value 1 in the interstitial region and 0 elsewhere, has a closed-form partial-wave expansion. The method is tested by comparing its results with exact normalization and dipole integrals; the differences are 2% at worst and typically less than 1%. By providing an efficient means of calculating Coulomb integrals, the method allows treatment of electron correlations using a multiple scattering basis set.
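The projection function described above, with value 1 in the interstitial region and 0 elsewhere, is handled analytically in the paper; a crude Monte Carlo analogue using an indicator of the region outside the atomic spheres illustrates the same geometric idea:

```python
import numpy as np

rng = np.random.default_rng(1)

def projection(points, centers, radii):
    """Indicator of the interstitial region: 1 outside every atomic
    (muffin-tin) sphere, 0 inside one -- the role played analytically
    by the closed-form partial-wave projection in the paper."""
    inside = np.zeros(len(points), dtype=bool)
    for c, r in zip(centers, radii):
        inside |= np.linalg.norm(points - c, axis=1) < r
    return (~inside).astype(float)

# Monte Carlo integral of f = 1 over the interstitial part of a unit
# cell containing one sphere of radius 0.3 at its center.
pts = rng.random((200_000, 3))
est = projection(pts, centers=[np.array([0.5, 0.5, 0.5])],
                 radii=[0.3]).mean()
exact = 1.0 - 4.0 / 3.0 * np.pi * 0.3**3   # cell volume minus sphere
print(est, exact)
```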
Applying Quadrature Rules with Multiple Nodes to Solving Integral Equations
Hashemiparast, S. M.; Avazpour, L.
2008-09-01
There are many procedures for the numerical solution of Fredholm integral equations. The main concern in these procedures is the accuracy of the solution. In this paper, we use Gaussian quadrature with multiple nodes to improve the solution of these integral equations. The application of the method is illustrated with some examples, and the related tables are given at the end.
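As a sketch of how Gaussian quadrature turns a Fredholm integral equation into a linear system (the classical Nystrom approach, here with ordinary single-node Gauss-Legendre quadrature rather than the paper's multiple-node rules):

```python
import numpy as np

def fredholm_nystrom(kernel, f, lam, n):
    """Solve u(x) = f(x) + lam * int_0^1 K(x,t) u(t) dt by collocating
    at Gauss-Legendre quadrature nodes (the Nystrom method)."""
    # Map Gauss-Legendre nodes/weights from [-1, 1] to [0, 1].
    t, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (t + 1.0)
    w = 0.5 * w
    # Discrete system: (I - lam * K * diag(w)) u = f at the nodes.
    A = np.eye(n) - lam * kernel(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, f(t))

# Example with separable kernel K(x,t) = x*t, f(x) = x, lam = 1:
# the exact solution is u(x) = 1.5 x.
t, u = fredholm_nystrom(lambda x, s: x * s, lambda x: x, 1.0, 8)
print(np.max(np.abs(u - 1.5 * t)))  # quadrature is exact for this kernel
```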
Improving Inferences from Multiple Methods.
ERIC Educational Resources Information Center
Shotland, R. Lance; Mark, Melvin M.
1987-01-01
Multiple evaluation methods (MEMs) can cause an inferential challenge, although there are strategies to strengthen inferences. Practical and theoretical issues involved in the use by social scientists of MEMs, three potential problems in drawing inferences from MEMs, and short- and long-term strategies for alleviating these problems are outlined.…
Method for deploying multiple spacecraft
NASA Technical Reports Server (NTRS)
Sharer, Peter J. (Inventor)
2007-01-01
A method for deploying multiple spacecraft is disclosed. The method can be used in a situation where a first celestial body is being orbited by a second celestial body. The spacecraft are loaded onto a single spaceship that contains the multiple spacecraft and the spacecraft is launched from the second celestial body towards a third celestial body. The spacecraft are separated from each other while in route to the third celestial body. Each of the spacecraft is then subjected to the gravitational field of the third celestial body and each of the spacecraft assumes a different, independent orbit about the first celestial body. In those situations where the spacecraft are launched from Earth, the Sun can act as the first celestial body, the Earth can act as the second celestial body and the Moon can act as the third celestial body.
Multiplication method for sparse interferometric fringes.
Liu, Cong; Zhang, Xingyi; Zhou, Youhe
2016-04-01
Fringe analysis in interferometry has been of long-standing interest to the academic community. However, processing sparse fringes is always difficult in measurement, especially when the specimen is very small. Through theoretical derivation and experimental measurement, our work demonstrates a new method for fringe multiplication. Theoretically, arbitrary integral-multiple fringe multiplication can be achieved by using the interferogram phase as the parameter. We simulate digital images accordingly and find that not only are the skeleton lines of the multiplied fringes very convenient to extract, but their main frequency can also be easily separated from the DC component. Meanwhile, the experimental results agree well with the theoretical ones in a validation using classical photoelasticity. PMID:27137055
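The core idea, using the interferogram phase as the parameter so that cos(k * phase) yields a k-fold denser fringe pattern, can be sketched on a synthetic 1-D fringe signal (illustrative only; real data require phase extraction first):

```python
import numpy as np

# Sparse interferogram: only 2 fringes across the whole field of view.
x = np.linspace(0.0, 1.0, 1000)
phase = 2.0 * np.pi * 2.0 * x
original = np.cos(phase)

# Phase-parameter fringe multiplication: k = 4 gives 4x fringe density,
# which makes skeleton-line extraction far easier for sparse fringes.
multiplied = np.cos(4.0 * phase)

def count_fringes(signal):
    # Each full fringe period produces two zero crossings; counting
    # sign changes and halving gives the number of fringes.
    return np.sum(np.diff(np.sign(signal)) != 0) // 2

print(count_fringes(original), count_fringes(multiplied))
```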
Multiple ray cluster rendering for interactive integral imaging system.
Jiao, Shaohui; Wang, Xiaoguang; Zhou, Mingcai; Li, Weiming; Hong, Tao; Nam, Dongkyung; Lee, Jin-Ho; Wu, Enhua; Wang, Haitao; Kim, Ji-Yeun
2013-04-22
In this paper, we present an efficient Computer Generated Integral Imaging (CGII) method, called multiple ray cluster rendering (MRCR). Based on the MRCR, an interactive integral imaging system is realized, which provides accurate 3D images that accommodate changing observer positions in real time. The MRCR method can generate all the elemental image pixels within only one rendering pass by ray reorganization of multiple ray clusters and 3D content duplication. It is compatible with various graphic contents, including meshes, point clouds, and medical data. Moreover, a multi-sampling method is embedded in the MRCR method for acquiring anti-aliased 3D image results. To the best of our knowledge, the MRCR method outperforms existing CGII methods in both speed and display quality. Experimental results show that the proposed CGII method can achieve real-time computational speed for large-scale 3D data with about 50,000 points. PMID:23609712
Multiple identities and the integration of personality.
Gregg, G S
1995-09-01
Life-history interviews show narrators to shift among multiple, often contradictory self-representations. This article outlines a model that accounts for how a relatively small set of self-symbols and metaphors can form a grammar-like system that simultaneously defines and integrates multiple identities. Drawing on generative theories from linguistics, anthropology, and music, the model proposes that this system provides a unitary deep structure that can be configured in various arrangements to yield multiple surface structures. Each "surface" identity constructs an individual's emotions and social relations--and what he or she accepts as "Me" and rejects as "not-Me"--into a distinct pattern, with identity per se appearing as a dialogic or fugue-like structure of opposed voices. Study-of-lives interviews conducted by the author in urban America and rural Morocco are used to present the model and to demonstrate the pivotal role played by multistable or "structurally ambiguous" symbols in anchoring reversible self-representations which integrate personality as a system of organized contradiction. The musical analogy is emphasized in order to build a bridge toward current research in cognitive science and toward efforts to formulate a "state integration" theory of personality development. PMID:7562365
Accelerated adaptive integration method.
Kaus, Joseph W; Arrar, Mehrnoosh; McCammon, J Andrew
2014-05-15
Conformational changes that occur upon ligand binding may be too slow to observe on the time scales routinely accessible using molecular dynamics simulations. The adaptive integration method (AIM) leverages the notion that when a ligand is either fully coupled or decoupled, according to λ, barrier heights may change, making some conformational transitions more accessible at certain λ values. AIM adaptively changes the value of λ in a single simulation so that conformations sampled at one value of λ seed the conformational space sampled at another λ value. Adapting the value of λ throughout a simulation, however, does not resolve issues in sampling when barriers remain high regardless of the λ value. In this work, we introduce a new method, called Accelerated AIM (AcclAIM), in which the potential energy function is flattened at intermediate values of λ, promoting the exploration of conformational space as the ligand is decoupled from its receptor. We show, with both a simple model system (Bromocyclohexane) and the more complex biomolecule Thrombin, that AcclAIM is a promising approach to overcome high barriers in the calculation of free energies, without the need for any statistical reweighting or additional processors. PMID:24780083
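The AcclAIM idea of flattening the potential at intermediate coupling values can be caricatured with a toy double-well whose barrier is scaled down most strongly at lam = 0.5; the functional form of the flattening below is invented for illustration and is not taken from the paper:

```python
def double_well(x, barrier=5.0):
    """Toy potential with wells at x = +-1 separated by `barrier`."""
    return barrier * (x**2 - 1.0)**2

def acclaim_potential(x, lam, barrier=5.0, boost=0.8):
    """AcclAIM-style flattening: scale the potential down most strongly
    at intermediate coupling lam (lam = 0 decoupled, lam = 1 coupled),
    so barrier crossings happen while the ligand is partly decoupled."""
    flatten = 1.0 - boost * 4.0 * lam * (1.0 - lam)  # 1 at lam = 0 or 1
    return flatten * double_well(x, barrier)

# Barrier height (value at x = 0 minus the wells at x = +-1) is lowest
# at lam = 0.5 and untouched at the physically meaningful end states.
for lam in (0.0, 0.5, 1.0):
    print(lam, acclaim_potential(0.0, lam) - acclaim_potential(1.0, lam))
```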
Integrated management of multiple reservoir field developments
Lyons, S.L.; Chan, H.M.; Harper, J.L.; Boyett, B.A.; Dowson, P.R.; Bette, S.
1995-10-01
This paper consists of two sections. The authors first describe the coupling of a pipeline network model to a reservoir simulator and then the application of this new simulator to optimize the production strategy of two Mobil field developments. Mobil's PEGASUS simulator is an integrated all-purpose reservoir simulator that handles black-oil, compositional, faulted and naturally fractured reservoirs. The authors have extended the simulator to simultaneously model multiple reservoirs coupled with surface pipeline networks and processes. This allows them to account for the effects of geology, well placement, and surface production facilities on well deliverability in a fully integrated fashion. They have also developed a gas contract allocation system that takes the user-specified constraints, target rates and swing factors and automatically assigns rates to the individual wells of each reservoir. This algorithm calculates the overall deliverability and automatically reduces the user-specified target rates to meet the deliverability constraints. The algorithm and solution technique are described. This enhanced simulator has been applied to model a Mobil field development in the Southern Gas Basin, offshore United Kingdom, which consists of three separate gas reservoirs connected via a pipeline network. The simulator allowed the authors to accurately determine the impact on individual reservoir and total field performance of varying the development timing of these reservoirs. Several development scenarios are shown to illustrate the capabilities of PEGASUS. Another application of this technology is in the field developments in North Sumatra, Indonesia. Here the objective is to economically optimize the development of multiple fields to feed the PT Arun LNG facility. Consideration of a range of gas compositions, well productivities, and facilities constraints in an integrated fashion results in improved management of these assets. Model specifics are discussed.
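The rate-reduction step of such an allocation system can be sketched in a few lines. This is a hedged simplification (uniform scaling of targets followed by per-well deliverability caps); the actual PEGASUS algorithm, its swing-factor handling, and its constraint logic are not described in enough detail here to reproduce, so every name below is illustrative.

```python
def allocate_rates(targets, deliverability):
    """Scale user-specified target rates down so the field total stays
    within total deliverability, then cap each well at its own limit.
    A hedged sketch of a gas-contract allocation step, not the real code."""
    total_cap = sum(deliverability.values())
    total_target = sum(targets.values())
    scale = min(1.0, total_cap / total_target) if total_target else 0.0
    return {w: min(t * scale, deliverability[w]) for w, t in targets.items()}
```

For example, with targets {A: 100, B: 50} against deliverabilities {A: 60, B: 80}, the total target (150) exceeds the total cap (140), so both targets are scaled by 140/150 and well A is then clipped to its 60-unit deliverability.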
Comparing three feedback internal multiple elimination methods
NASA Astrophysics Data System (ADS)
Song, Jiawen; Verschuur, Eric; Chen, Xiaohong
2013-08-01
Multiple reflections have posed a great challenge for current seismic imaging and inversion methods. Compared to surface multiples, internal multiples are more difficult to remove due to poorer move-out discrimination with primaries, leaving wave equation-based prediction and subtraction methods. In this paper, we focus on the comparison of three data-driven internal multiple elimination (IME) methods based on the feedback model: two are well-established prediction-and-subtraction methods using back-propagated data and surface data, referred to as the CFP-based method and the surface-based method, respectively, and the third, an inversion-based method, has recently been extended from estimation of primaries by sparse inversion (EPSI). All three methods are based on the separation of events from above and below a certain level, after which internal multiples are predicted by convolutions and correlations. We begin with a theory review of layer-related feedback IME methods, where implementation steps for each method are discussed and the involved event separation is further analyzed. Then, recursive application of the three IME methods is demonstrated on synthetic data and field data. It shows that the two well-established prediction-and-subtraction methods provide similar primary estimation results, with most of the internal multiples being removed, while multiple leakage and primary distortion have been observed where primaries and internal multiples interfere. In contrast, generalized EPSI provides reduced multiple leakage and better primary restoration, which is of great value for current amplitude-preserving seismic processing. As a main conclusion, with adaptive subtraction avoided, the inversion-based method is more effective than the prediction-and-subtraction methods for internal multiple elimination when primaries and internal multiples overlap. However, the inversion-based method is quite computationally intensive, and more research on
Multiple network interface core apparatus and method
Underwood, Keith D.; Hemmert, Karl Scott
2011-04-26
A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.
Multiple protocol fluorometer and method
Kolber, Zbigniew S.; Falkowski, Paul G.
2000-09-19
A multiple protocol fluorometer measures photosynthetic parameters of phytoplankton and higher plants using actively stimulated fluorescence protocols. The measured parameters include spectrally-resolved functional and optical absorption cross sections of PSII, extent of energy transfer between reaction centers of PSII, F.sub.0 (minimal), F.sub.m (maximal) and F.sub.v (variable) components of PSII fluorescence, photochemical and non-photochemical quenching, size of the plastoquinone (PQ) pool, and the kinetics of electron transport between Q.sub.a and PQ pool and between PQ pool and PSI. The multiple protocol fluorometer, in one embodiment, is equipped with an excitation source having a controlled spectral output range between 420 nm and 555 nm and capable of generating flashlets having a duration of 0.125-32 .mu.s, an interval between 0.5 .mu.s and 2 seconds, and peak optical power of up to 2 W/cm.sup.2. The excitation source is also capable of generating, simultaneous with the flashlets, a controlled continuous, background illumination.
Predicting Protein Function via Semantic Integration of Multiple Networks.
Yu, Guoxian; Fu, Guangyuan; Wang, Jun; Zhu, Hailong
2016-01-01
Determining the biological functions of proteins is one of the key challenges in the post-genomic era. The rapidly accumulating large volumes of proteomic and genomic data drive the development of computational models for automatically predicting protein function at large scale. Recent approaches focus on integrating multiple heterogeneous data sources, and they often get better results than methods that use a single data source alone. In this paper, we investigate how to integrate multiple biological data sources with biological knowledge, i.e., the Gene Ontology (GO), for protein function prediction. We propose a method, called SimNet, to Semantically integrate multiple functional association Networks derived from heterogeneous data sources. SimNet first utilizes GO annotations of proteins to capture the semantic similarity between proteins and introduces a semantic kernel based on the similarity. Next, SimNet constructs a composite network, obtained as a weighted summation of individual networks, and aligns the network with the kernel to get the weights assigned to individual networks. Then, it applies a network-based classifier on the composite network to predict protein function. Experiment results on heterogeneous proteomic data sources of Yeast, Human, Mouse, and Fly show that SimNet not only achieves better (or comparable) results than other related competitive approaches, but also takes much less time. The Matlab codes of SimNet are available at https://sites.google.com/site/guoxian85/simnet. PMID:26800544
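The composite-network step can be illustrated with a small sketch. The weighting below uses plain Frobenius (cosine) kernel alignment between each network and the semantic kernel; SimNet's actual weight optimization is more involved, so treat this purely as an assumed simplification with toy 2x2 networks.

```python
import math

def frob_inner(A, B):
    """Frobenius inner product of two equally sized matrices (nested lists)."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def alignment(W, K):
    """Cosine of the Frobenius inner product between network W and kernel K."""
    return frob_inner(W, K) / math.sqrt(frob_inner(W, W) * frob_inner(K, K))

def composite(networks, K):
    """Weight each network by its (non-negative) alignment with the semantic
    kernel, normalize the weights, and return the weighted-sum network."""
    w = [max(alignment(W, K), 0.0) for W in networks]
    s = sum(w) or 1.0
    w = [v / s for v in w]
    n = len(K)
    C = [[sum(w[i] * networks[i][r][c] for i in range(len(networks)))
          for c in range(n)] for r in range(n)]
    return w, C
```

A network that matches the kernel pattern receives all the weight, while an orthogonal (uninformative) network is zeroed out, which is the intended behavior of alignment-based weighting.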
Convergence and Discriminant: Assessing Multiple Traits Using Multiple Methods
ERIC Educational Resources Information Center
Pae, Hye K.
2012-01-01
Multiple traits of language proficiency as well as test method effects were concurrently analyzed to investigate interrelations of construct validity, convergent validity, and discriminant validity using multitrait-multimethod (MTMM) matrices. A total of 585 test takers' scores were derived from the field test of the "Pearson Test of English…
Integrating Multiple Intelligences in EFL/ESL Classrooms
ERIC Educational Resources Information Center
Bas, Gokhan
2008-01-01
This article deals with the integration of the theory of Multiple Intelligences in EFL/ESL classrooms. In this study, after the theory of multiple intelligences is briefly presented, its integration into English classrooms is discussed. Intelligence types in MI Theory are examined, along with some possible ways of applying these intelligence types…
Complementary and Integrative Medicine - Multiple Languages: MedlinePlus
Code Division Multiple Access system candidate for integrated modular avionics
NASA Astrophysics Data System (ADS)
Mendez, Antonio J.; Gagliardi, Robert M.
1991-02-01
There are government and industry trends towards avionics modularity and integrated avionics. Key requirements implicit in these trends are suitable data communication concepts compatible with the integration concept. In this paper we explore the use of Code Division Multiple Access (CDMA) techniques as an alternative to collision detection and collision avoidance multiple access techniques.
Integrated Instruction: Multiple Intelligences and Technology
ERIC Educational Resources Information Center
McCoog, Ian J.
2007-01-01
Advancements in technology have changed the day to day operation of society. The ways in which we teach and learn have begun the same process. For this reason, we must reexamine instruction. In this article, the author analyzes the changing environment of educational technology and how to incorporate the theory of multiple intelligences. The…
Integrating Learning Styles and Multiple Intelligences.
ERIC Educational Resources Information Center
Silver, Harvey; Strong, Richard; Perini, Matthew
1997-01-01
Multiple-intelligences theory (MI) explores how cultures and disciplines shape human potential. Both MI and learning-style theories reject dominant ideologies of intelligence. Whereas learning styles are concerned with differences in the learning process, MI centers on learning content and products. Blending learning styles and MI theories via…
Integrating Qualitative and Quantitative Evaluation Methods in Substance Abuse Research.
ERIC Educational Resources Information Center
Dennis, Michael L.; And Others
1994-01-01
Some specific opportunities and techniques are described for combining and integrating qualitative and quantitative methods from the design stage of a substance abuse program evaluation through implementation and reporting. The multiple problems and requirements of such an evaluation make integrated methods essential. (SLD)
Building a cognitive map by assembling multiple path integration systems.
Wang, Ranxiao Frances
2016-06-01
Path integration and cognitive mapping are two of the most important mechanisms for navigation. Path integration is a primitive navigation system which computes a homing vector based on an animal's self-motion estimation, while cognitive map is an advanced spatial representation containing richer spatial information about the environment that is persistent and can be used to guide flexible navigation to multiple locations. Most theories of navigation conceptualize them as two distinctive, independent mechanisms, although the path integration system may provide useful information for the integration of cognitive maps. This paper demonstrates a fundamentally different scenario, where a cognitive map is constructed in three simple steps by assembling multiple path integrators and extending their basic features. The fact that a collection of path integration systems can be turned into a cognitive map suggests the possibility that cognitive maps may have evolved directly from the path integration system. PMID:26442503
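The assembly idea can be made concrete with a toy sketch: each landmark anchors its own path integrator, all integrators are updated from the same self-motion signal, and subtracting two integrator states yields the map vector between their anchors. This is an illustrative toy under assumed 2D vector kinematics, not the paper's formal model.

```python
class PathIntegrator:
    """Accumulates self-motion steps; the negated state is the homing vector."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0

    def move(self, dx, dy):
        self.x += dx
        self.y += dy

    def home_vector(self):
        return (-self.x, -self.y)

def anchor_position(pi_a, pi_b):
    """Position of pi_b's anchor expressed in pi_a's anchor frame.

    Running two integrators side by side makes this difference constant,
    which is exactly the map-like relation between the two anchor points.
    """
    return (pi_a.x - pi_b.x, pi_a.y - pi_b.y)
```

For example, start one integrator at the nest, walk to a landmark and start a second integrator there; however the animal moves afterwards, the difference of the two states keeps reporting the fixed nest-to-landmark vector.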
Integral Methodological Pluralism in Science Education Research: Valuing Multiple Perspectives
ERIC Educational Resources Information Center
Davis, Nancy T.; Callihan, Laurie P.
2013-01-01
This article examines the multiple methodologies used in educational research and proposes a model that includes all of them as contributing to understanding educational contexts and research from multiple perspectives. The model, based on integral theory (Wilber in a theory of everything. Shambhala, Boston, 2000) values all forms of research as…
The Effects of Tasks on Integrating Information from Multiple Documents
ERIC Educational Resources Information Center
Cerdan, Raquel; Vidal-Abarca, Eduardo
2008-01-01
The authors examine 2 issues: (a) how students integrate information from multiple scientific documents to describe and explain a physical phenomenon that represents a subset of the information in the documents; and (b) the role of 2 sorts of tasks to achieve this type of integration, either writing an essay on a question requiring integration…
Lutken, Carol; Macelloni, Leonardo; D'Emidio, Marco; Dunbar, John; Higley, Paul
2015-01-31
detect short-term changes within the hydrates system, identify relationships/impacts of local oceanographic parameters on the hydrates system, and improve our understanding of how seafloor instability is affected by hydrates-driven changes. A 2009 DCR survey of MC118 demonstrated that we could image resistivity anomalies to a depth of 75m below the seafloor in water depths of 1km. We reconfigured this system to operate autonomously on the seafloor in a pre-programmed mode, for periods of months. We designed and built a novel seafloor lander and deployment capability that would allow us to investigate the seafloor at potential deployment sites and deploy instruments only when conditions met our criteria. This lander held the DCR system, controlling computers, and battery power supply, as well as instruments to record oceanographic parameters. During the first of two cruises to the study site, we conducted resistivity surveying, selected a monitoring site, and deployed the instrumented lander and DCR, centered on what appeared to be the most active locations within the site, programmed to collect a DCR profile, weekly. After a 4.5-month residence on the seafloor, the team recovered all equipment. Unfortunately, several equipment failures occurred prior to recovery of the instrument packages. Prior to the failures, however, two resistivity profiles were collected together with oceanographic data. Results show, unequivocally, that significant changes can occur in both hydrate volume and distribution during time periods as brief as one week. Occurrences appear to be controlled by both deep and near-surface structure. Results have been integrated with seismic data from the area and show correspondence in space of hydrate and structures, including faults and gas chimneys.
A multiple index integrating different levels of organization.
Cortes, Rui; Hughes, Samantha; Coimbra, Ana; Monteiro, Sandra; Pereira, Vítor; Lopes, Marisa; Pereira, Sandra; Pinto, Ana; Sampaio, Ana; Santos, Cátia; Carrola, João; de Jesus, Joaquim; Varandas, Simone
2016-10-01
Many methods in freshwater biomonitoring tend to be restricted to a few levels of biological organization, limiting the potential spectrum of measurable cause-effect responses to different anthropogenic impacts. We combined distinct organisational levels, covering biological biomarkers (histopathological and biochemical reactions in liver and fish gills), community-based bioindicators (fish guilds, invertebrate metrics/traits and chironomid pupal exuviae) and ecosystem functional indicators (decomposition rates) to assess ecological status at designated Water Framework Directive monitoring sites, covering a gradient of human impact across several rivers in northern Portugal. We used Random Forest to rank the variables that contributed most significantly to successfully predicting the different classes of ecological status and also to provide specific cut levels to discriminate each WFD class based on reference conditions. A total of 59 Biological Quality Elements and functional indicators were determined using this procedure and subsequently applied to develop the integrated Multiple Ecological Level Index (MELI Index), a potentially powerful bioassessment tool. PMID:27344015
A selective integrated tempering method.
Yang, Lijiang; Qin Gao, Yi
2009-12-01
In this paper, based on integrated tempering sampling, we introduce a selective integrated tempering sampling (SITS) method for efficient conformational sampling and thermodynamics calculations for a subsystem of a large one, such as biomolecules solvated in aqueous solutions. By introducing a potential surface scaled with temperature, the sampling over the configuration space of interest (e.g., the solvated biomolecule) is selectively enhanced but the rest of the system (e.g., the solvent) stays largely unperturbed. The applications of this method to biomolecular systems allow highly efficient sampling over both energy and configuration spaces of interest. Compared to the popular and powerful replica exchange molecular dynamics (REMD), the method presented in this paper is significantly more efficient in yielding relevant thermodynamics quantities (such as the potential of mean force for biomolecular conformational changes in aqueous solutions). More importantly, SITS, but not REMD, yielded results consistent with traditional umbrella sampling free energy calculations when an explicit solvent model is used, since SITS avoids sampling the irrelevant phase space (such as boiling water at high temperatures). PMID:19968339
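The core of integrated tempering is an effective potential built from a weighted sum of Boltzmann factors at several temperatures; in the selective variant, the tempering is applied only to the subsystem energy while the environment enters unmodified. A minimal sketch, assuming fixed weights n_k (in practice they are iterated to flatten the energy distribution) and a clean subsystem/environment split:

```python
import math

def its_effective_energy(u, betas, weights, beta0):
    """U_eff = -(1/beta0) * ln( sum_k n_k * exp(-beta_k * u) ),
    evaluated with a max-shift for numerical stability."""
    m = max(-b * u for b in betas)
    s = sum(n * math.exp(-b * u - m) for n, b in zip(weights, betas))
    return -(m + math.log(s)) / beta0

def sits_energy(u_sub, u_env, betas, weights, beta0):
    """Selective variant: temper only the subsystem term; the environment
    energy (e.g., solvent-solvent interactions) is left unperturbed.
    (The real method also treats the coupling term; omitted here.)"""
    return its_effective_energy(u_sub, betas, weights, beta0) + u_env
```

With a single temperature equal to the simulation temperature the effective energy reduces to the bare energy, and including lower beta values (higher temperatures) lowers the effective energy in high-energy regions, which is what enhances barrier crossing.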
Methods for comparing multiple digital PCR experiments.
Burdukiewicz, Michał; Rödiger, Stefan; Sobczyk, Piotr; Menschikowski, Mario; Schierack, Peter; Mackiewicz, Paweł
2016-09-01
The estimated mean copy per partition (λ) is the essential information from a digital PCR (dPCR) experiment because λ can be used to calculate the target concentration in a sample. However, little information is available on how to statistically compare multiple dPCR runs or replicates. The comparison of λ values from several runs is a multiple comparison problem, which can be solved using the binary structure of dPCR data. We propose and evaluate two novel methods based on Generalized Linear Models (GLM) and Multiple Ratio Tests (MRT) for the comparison of digital PCR experiments. We enriched our MRT framework with computation of simultaneous confidence intervals suitable for comparing multiple dPCR runs. The evaluation of both statistical methods supports that MRT is faster and more robust for dPCR experiments performed at large scale. Our theoretical results were confirmed by the analysis of dPCR measurements of dilution series. Both methods were implemented in the dpcR package (v. 0.2) for the open source R statistical computing environment. PMID:27551672
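The quantity λ, and a simple two-run comparison exploiting the binary partition data, can be sketched as follows. The delta-method z-statistic below is a generic textbook approximation, not the paper's GLM or MRT procedures; it only illustrates why λ comparisons reduce to counts of positive partitions.

```python
import math

def lambda_hat(k, n):
    """Mean copies per partition from k positive partitions out of n,
    via the Poisson relation p = 1 - exp(-lambda)."""
    return -math.log(1.0 - k / n)

def lambda_se(k, n):
    """Delta-method standard error of lambda_hat:
    var(p_hat) = p(1-p)/n and d(lambda)/dp = 1/(1-p)."""
    p = k / n
    return math.sqrt(p / (n * (1.0 - p)))

def compare_runs(k1, n1, k2, n2):
    """Approximate z statistic for the difference of two runs' lambdas."""
    d = lambda_hat(k1, n1) - lambda_hat(k2, n2)
    se = math.sqrt(lambda_se(k1, n1) ** 2 + lambda_se(k2, n2) ** 2)
    return d / se
```

For instance, 632 positives out of 1000 partitions corresponds to λ close to 1, and two runs with identical counts yield a z statistic of exactly zero.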
Adaptive wavelet methods - Matrix-vector multiplication
NASA Astrophysics Data System (ADS)
Černá, Dana; Finěk, Václav
2012-12-01
The design of most adaptive wavelet methods for elliptic partial differential equations follows a general concept proposed by A. Cohen, W. Dahmen and R. DeVore in [3, 4]. The essential steps are: transformation of the variational formulation into the well-conditioned infinite-dimensional l2 problem, finding a convergent iteration process for the l2 problem, and finally derivation of its finite-dimensional version which works with an inexact right-hand side and approximate matrix-vector multiplications. In our contribution, we shortly review all these parts and mainly pay attention to approximate matrix-vector multiplications. Effective approximation of matrix-vector multiplications is enabled by an off-diagonal decay of the entries of the wavelet stiffness matrix. We propose here a new approach which better utilizes the actual decay of matrix entries.
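The inexact matrix-vector product can be sketched by dropping entries below a tolerance, which the off-diagonal decay justifies: the skipped mass, and hence the error, is controlled by the decay rate. This flat-threshold toy is an assumed simplification; actual adaptive schemes organize entries into decay bands and truncate band by band.

```python
def approx_matvec(A, x, tol=0.0):
    """y = A x, skipping entries with |A[i][j]| < tol.

    For a stiffness matrix with off-diagonal decay, most small entries sit
    far from the diagonal, so the truncation error stays small."""
    n = len(A)
    y = [0.0] * n
    for i in range(n):
        yi = 0.0
        row = A[i]
        for j in range(n):
            if abs(row[j]) >= tol:
                yi += row[j] * x[j]
        y[i] = yi
    return y
```

On a model matrix with quadratic off-diagonal decay, a tolerance of 0.2 discards all entries more than one position off the diagonal yet perturbs the product only modestly.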
Prioritizing Cancer Therapeutic Small Molecules by Integrating Multiple OMICS Datasets
Lv, Sali; Xu, Yanjun; Chen, Xin; Li, Yan; Li, Ronghong; Wang, Qianghu
2012-01-01
Drug design is crucial for the effective discovery of anti-cancer drugs. The success or failure of drug design often depends on the leading compounds screened in pre-clinical studies. Many efforts, such as in vivo animal experiments and in vitro drug screening, have improved this process, but these methods are usually expensive and laborious. In the post-genomics era, it is possible to seek leading compounds for large-scale candidate small-molecule screening with multiple OMICS datasets. In the present study, we developed a computational method of prioritizing small molecules as leading compounds by integrating transcriptomics and toxicogenomics data. This method provides priority lists for the selection of leading compounds, thereby reducing the time required for drug design. We found 11 known therapeutic small molecules for breast cancer in the top 100 candidates in our list, 2 of which were in the top 10. Furthermore, another 3 of the top 10 small molecules were recorded as closely related to cancer treatment in the DrugBank database. A comparison of the results of our approach with permutation tests and shared gene methods demonstrated that our OMICS data-based method is quite competitive. In addition, we applied our method to a prostate cancer dataset. The results of this analysis indicated that our method surpasses both the shared gene method and random selection. These analyses suggest that our method may be a valuable tool for directing experimental studies in cancer drug design, and we believe this time- and cost-effective computational strategy will be helpful in future studies in cancer therapy. PMID:22917481
From multiple unitarity cuts to the coproduct of Feynman integrals
NASA Astrophysics Data System (ADS)
Abreu, Samuel; Britto, Ruth; Duhr, Claude; Gardi, Einan
2014-10-01
We develop techniques for computing and analyzing multiple unitarity cuts of Feynman integrals, and reconstructing the integral from these cuts. We study the relations among unitarity cuts of a Feynman integral computed via diagrammatic cutting rules, the discontinuity across the corresponding branch cut, and the coproduct of the integral. For single unitarity cuts, these relations are familiar. Here we show that they can be generalized to sequences of unitarity cuts in different channels. Using concrete one- and two-loop scalar integral examples we demonstrate that it is possible to reconstruct a Feynman integral from either single or double unitarity cuts. Our results offer insight into the analytic structure of Feynman integrals as well as a new approach to computing them.
A Fuzzy Logic Framework for Integrating Multiple Learned Models
Bobi Kai Den Hartog
1999-03-01
The Artificial Intelligence field of Integrating Multiple Learned Models (IMLM) explores ways to combine results from sets of trained programs. Aroclor Interpretation is an ill-conditioned problem in which trained programs must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to the scenarios. We developed a general-purpose IMLM solution, the Combiner, and applied it to Aroclor Interpretation. The Combiner's first step, Scenario Identification (SI), learns rules from very sparse, synthetic training data consisting of results from a suite of trained programs called Methods. SI produces fuzzy belief weights for each scenario by approximately matching the rules. The Combiner's second step, Aroclor Presence Detection (AP), classifies each of three Aroclors as present or absent in a sample. The third step, Aroclor Quantification (AQ), produces quantitative values for the concentration of each Aroclor in a sample. Through fuzzy logic, AP and AQ combine the scenario weights, automatically learned empirical biases for each of the Methods in each scenario, and the Methods' results to determine results for a sample.
Research on model of combining multiple neural networks by fuzzy integral-MNNF
NASA Astrophysics Data System (ADS)
Fu, Yue; Chai, Bianfang
2013-03-01
The method of multiple neural network fusion using fuzzy integral (MNNF) presented in this paper improves the detection performance of data mining-based intrusion detection systems. The basic idea of MNNF is to train neural networks separately on distinct feature training datasets, detect TCP/IP data with the different neural networks, and then nonlinearly combine the results from the multiple neural networks by fuzzy integral. The experiment results show that this technique is superior to single neural networks for intrusion detection in terms of classification accuracy. Compared with other combination methods such as majority voting, averaging, and Borda count, the fuzzy integral performs better.
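The combination step can be sketched with a Sugeno fuzzy integral over the classifiers' outputs. The additive measure used below is an assumed simplification of the λ-fuzzy measure that is usually fitted in such systems, and the densities (per-classifier worth) are illustrative values.

```python
def sugeno_integral(scores, densities):
    """Sugeno fuzzy integral of classifier scores for one class.

    scores: each classifier's support for the class, in [0, 1].
    densities: each classifier's worth; the measure of a set of classifiers
    is taken here as the (capped) sum of their densities, a simplification
    of the lambda-fuzzy measure.
    """
    pairs = sorted(zip(scores, densities), key=lambda p: -p[0])
    g, best = 0.0, 0.0
    for h, d in pairs:
        g = min(1.0, g + d)          # measure of the top-ranked classifiers
        best = max(best, min(h, g))  # Sugeno: max over min(score, measure)
    return best
```

Per class, each network's support for that class is fed through the integral and the sample is assigned to the class with the largest integral value.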
Lamp method and apparatus using multiple reflections
MacLennan, D.A.; Turner, B.; Kipling, K.
1999-05-11
A method wherein the light in a sulfur or selenium lamp is reflected through the fill a multiplicity of times to convert ultraviolet radiation to visible is disclosed. A light emitting device comprised of an electrodeless envelope which bears a light reflecting covering around a first portion which does not crack due to differential thermal expansion and which has a second portion which comprises a light transmissive aperture. 20 figs.
Lamp method and apparatus using multiple reflections
MacLennan, Donald A.; Turner, Brian; Kipling, Kent
1999-01-01
A method wherein the light in a sulfur or selenium lamp is reflected through the fill a multiplicity of times to convert ultraviolet radiation to visible. A light emitting device comprised of an electrodeless envelope which bears a light reflecting covering around a first portion which does not crack due to differential thermal expansion and which has a second portion which comprises a light transmissive aperture.
Ergen, Kayra; Kentel, Elcin
2016-01-15
Stream gauges measure the temporal variation of water quantity; thus they are vital in managing water resources. The stream gauge network in Turkey includes a limited number of gauges and often streamflow estimates need to be generated at ungauged locations where reservoirs, small hydropower plants, weirs, etc. are planned. Prediction of streamflows at ungauged locations generally relies on donor gauges where flow is assumed to be similar to that at the ungauged location. Generally, donor stream gauges are selected based on geographical proximity. However, closer stream gauges are not always the most-correlated ones. The Map Correlation Method (MCM) enables development of a map that shows the spatial distribution of the correlation between a selected stream gauge and any other location within the study region. In this study, a new approach which combines MCM with the multiple-source site drainage-area ratio (DAR) method is used to estimate daily streamflows at ungauged catchments in the Western Black Sea Region. Daily streamflows predicted by the combined three-source sites DAR with MCM approach give higher Nash-Sutcliffe Efficiency (NSE) values than those predicted using the nearest stream gauge as the donor stream gauge, for most of the trial cases. Hydrographs and flow duration curves predicted using this approach are usually in better agreement with the observed hydrographs and flow duration curves than those predicted using the nearest catchment. PMID:26520038
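A multiple-source DAR estimate weighted by map-correlation values can be sketched as below. The correlation weights would come from MCM-derived correlation maps; here they are assumed to be given inputs, and the simple normalized-weight combination is an illustrative choice rather than the paper's exact scheme.

```python
def dar_multi_source(area_u, donors):
    """Daily streamflow at an ungauged site by the drainage-area ratio
    method with multiple donor gauges.

    area_u: drainage area of the ungauged catchment.
    donors: list of (donor_area, correlation_weight, flow_series) tuples,
            with all flow series of equal length.
    """
    wsum = sum(w for _, w, _ in donors)
    n = len(donors[0][2])
    est = [0.0] * n
    for area_d, w, q in donors:
        ratio = area_u / area_d  # classic DAR scaling
        for t in range(n):
            est[t] += (w / wsum) * ratio * q[t]
    return est
```

With a single donor of equal area the estimate reproduces the donor's flows, and with several donors each contributes in proportion to its correlation weight.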
Multiple time scale methods in tokamak magnetohydrodynamics
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/(2μ₀), which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
An Alternative Method for Multiplication of Rhotrices. Classroom Notes
ERIC Educational Resources Information Center
Sani, B.
2004-01-01
In this article, an alternative multiplication method for rhotrices is proposed. The method establishes some relationships between rhotrices and matrices. This article has discussed a modified multiplication method for rhotrices. The method has a direct relationship with matrix multiplication, and so rhotrices under this multiplication procedure…
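The relationship to matrix multiplication can be sketched for the 3-dimensional rhotrix <a, b, c, d, e> with heart c: couple the rhotrix to a 2x2 "major" matrix and a 1x1 "minor" matrix (the heart) and multiply the pairs independently. The particular entry layout [[a, b], [d, e]] used below is an assumption for illustration, not necessarily the exact coupling in the article.

```python
def mul3(R, Q):
    """Row-column style product of two 3-dimensional rhotrices, each given
    as a tuple (a, b, c, d, e) with heart c.

    Assumed coupling: major matrix [[a, b], [d, e]] times major matrix,
    heart times heart; the result is repacked as a rhotrix tuple.
    """
    a, b, c, d, e = R
    p, q, r, s, t = Q
    return (a * p + b * s,  # major-matrix product, entry (1,1)
            a * q + b * t,  # entry (1,2)
            c * r,          # hearts multiply
            d * p + e * s,  # entry (2,1)
            d * q + e * t)  # entry (2,2)
```

Because each component is an ordinary matrix product, the operation inherits associativity and an identity element (1, 0, 1, 0, 1) from matrix algebra, which is the point of relating rhotrices to matrices.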
Multiple frequency method for operating electrochemical sensors
Martin, Louis P.
2012-05-15
A multiple frequency method for the operation of a sensor to measure a parameter of interest using calibration information including the steps of exciting the sensor at a first frequency providing a first sensor response, exciting the sensor at a second frequency providing a second sensor response, using the second sensor response at the second frequency and the calibration information to produce a calculated concentration of the interfering parameters, using the first sensor response at the first frequency, the calculated concentration of the interfering parameters, and the calibration information to measure the parameter of interest.
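The two-frequency correction described above can be sketched with a linear response model: the second frequency is assumed sensitive only to the interfering parameter, whose estimated contribution is then removed from the first-frequency response. The sensitivities a1, b1, b2 stand in for the calibration information; this linear model is an assumption for illustration, not the patented algorithm.

```python
def measure(r1, r2, cal):
    """Recover the parameter of interest from two sensor responses.

    Assumed linear model:
        r1 = a1 * x + b1 * y   (first frequency: sees both species)
        r2 = b2 * y            (second frequency: sees the interferent only)
    cal supplies the calibration sensitivities a1, b1, b2.
    """
    a1, b1, b2 = cal["a1"], cal["b1"], cal["b2"]
    y = r2 / b2                # interfering-parameter concentration
    x = (r1 - b1 * y) / a1     # parameter of interest, interference removed
    return x, y
```

For example, with sensitivities a1=2, b1=1, b2=4 and true concentrations x=3, y=2, the responses are r1=8 and r2=8, and the correction recovers both values exactly.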
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
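As an illustration of the general idea, a minimal tricube-kernel smoother (standing in for LOESS; this is not the authors' stepwise procedure, and the data are synthetic) lets a nonparametric sensitivity index be computed as the fraction of output variance explained by a smooth function of each input:

```python
import math, random

def loess_mean(x, y, span=0.15):
    """Locally weighted (tricube kernel) mean smoother - a minimal stand-in
    for the LOESS step of a smoothing-based sensitivity analysis."""
    n = len(x)
    k = max(2, int(span * n))
    fitted = []
    for xi in x:
        d = sorted(abs(xj - xi) for xj in x)[k - 1] or 1e-12  # k-th nearest distance
        w = [(1 - min(abs(xj - xi) / d, 1.0) ** 3) ** 3 for xj in x]
        fitted.append(sum(wi * yi for wi, yi in zip(w, y)) / sum(w))
    return fitted

def sensitivity_index(x, y):
    """Fraction of output variance explained by a smooth function of one input."""
    f = loess_mean(x, y)
    ybar = sum(y) / len(y)
    sst = sum((yi - ybar) ** 2 for yi in y)
    sse = sum((yi - fi) ** 2 for yi, fi in zip(y, f))
    return 1.0 - sse / sst

random.seed(0)
x1 = [random.uniform(-1, 1) for _ in range(200)]
x2 = [random.uniform(-1, 1) for _ in range(200)]
y = [math.sin(3 * a) + 0.1 * random.gauss(0, 1) for a in x1]  # depends on x1 only
s1, s2 = sensitivity_index(x1, y), sensitivity_index(x2, y)
```

The influential input x1 earns a high index; the inert input x2 does not, which a linear-regression-based index would also miss if the dependence were nonmonotonic.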
High integrity carrier phase navigation using multiple civil GPS signals
NASA Astrophysics Data System (ADS)
Jung, Jaewoo
2000-11-01
A navigation system should guide users to their destinations accurately and reliably. Among the many available navigation aids, the Global Positioning System stands out due to its unique capabilities. It is a satellite-based navigation system which covers the entire Earth with horizontal accuracy of 20 meters for stand-alone civil users. Today, the GPS provides only one civil signal, but two more signals will be available in the near future. GPS will provide a second signal at 1227.60 MHz (L2) and a third signal at 1176.45 MHz (Lc), in addition to the current signal at 1575.42 MHz (L1). The focus of this thesis is exploring the possibility of using beat frequencies of these signals to provide navigation aid to users with high accuracy and integrity. To achieve high accuracy, carrier phase differential GPS is used. The integer ambiguity is resolved using the Cascade Integer Resolution (CIR), which is defined in this thesis. The CIR is an instantaneous, geometry-free integer resolution method utilizing beat frequencies of GPS signals. To ensure high integrity, the probability of incorrect integer ambiguity resolution using the CIR is analyzed. The CIR can immediately resolve the Lc integer ambiguity up to 2.4 km from the reference receiver, the Widelane (L1-L2) integer ambiguity up to 22 km, and the Extra Widelane (L2-Lc) integer ambiguity from there on, with probability of incorrect integer resolution of 10^-4. The optimal use of algebraic combinations of multiple GPS signals is also investigated in this thesis. Finally, the gradient of residual differential ionospheric error is estimated to increase performance of the CIR.
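The beat-frequency wavelengths that make cascade-style resolution attractive follow directly from the three carrier frequencies; this is a back-of-envelope sketch, not the CIR algorithm itself (the thesis's "Lc" is the 1176.45 MHz signal, elsewhere called L5):

```python
# Effective "beat" wavelengths of pairwise GPS carrier combinations: cascade
# integer resolution fixes the longest-wavelength ambiguity first, since a
# longer wavelength tolerates more residual error per cycle.
C = 299_792_458.0                                    # speed of light, m/s
F_L1, F_L2, F_LC = 1575.42e6, 1227.60e6, 1176.45e6  # carrier frequencies, Hz

def beat_wavelength(f1, f2):
    """Wavelength corresponding to the difference (beat) of two carriers."""
    return C / abs(f1 - f2)

widelane = beat_wavelength(F_L1, F_L2)        # L1-L2, ~0.86 m
extra_widelane = beat_wavelength(F_L2, F_LC)  # L2-Lc, ~5.86 m
```

The Extra Widelane's ~5.9 m wavelength is what allows ambiguity fixing at long baselines, with each fixed combination then constraining the next, shorter one.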
Downsizing of an integrated tracking unit for multiple applications
NASA Astrophysics Data System (ADS)
Steinway, William J.; Thomas, James E.; Nicoloff, Michael J.; Patz, Mark D.
1997-02-01
This paper presents the specifications, capabilities, and multiple applications of the integrated tracking unit (ITU). The original ITU was developed by Coleman Research Corporation (CRC) for several federal law enforcement agencies over a four-year period and has been used for friendly and unfriendly vehicle and person position tracking. The ITU has been down-sized to reduce its physical size, weight, and power requirements with respect to the first-generation unit. The ITU consists of a global positioning system (GPS) receiver for precise position location and a cellular phone to transmit voice and data to a PC base station with a modem interface. This paper describes the down-sizing of the unit introduced in CRC's 'An Integrated Tracking Unit for Multiple Applications' paper presented at the 1995 Counterdrug Technology Assessment Center's symposium in Nashua, NH, and provides a description of the ITU and tested applications.
Integration of multiple sensor fusion in controller design.
Abdelrahman, Mohamed; Kandasamy, Parameshwaran
2003-04-01
The main focus of this research is to reduce the risk of a catastrophic response of a feedback control system when some of the feedback data from the system sensors are not reliable, while maintaining a reasonable performance of the control system. In this paper a methodology for integrating multiple sensor fusion into the controller design is presented. The multiple sensor fusion algorithm produces, in addition to the estimate of the measurand, a parameter that measures the confidence in the estimated value. This confidence is integrated as a parameter into the controller to produce fast system response when the confidence in the estimate is high, and a slow response when the confidence in the estimate is low. Conditions for the stability of the system with the developed controller are discussed. This methodology is demonstrated on a cupola furnace model. The simulations illustrate the advantages of the new methodology. PMID:12708539
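A minimal sketch of gain scheduling by fusion confidence; the class, names, and gains here are illustrative assumptions, not the paper's controller or its stability analysis:

```python
def confidence_scheduled_gain(k_nominal, confidence, k_floor=0.1):
    """Scale a feedback gain by sensor-fusion confidence in [0, 1]:
    high confidence -> full gain (fast response), low confidence -> a small
    floor gain (slow, cautious response that limits catastrophic actions)."""
    c = min(max(confidence, 0.0), 1.0)
    return k_floor * k_nominal + (1.0 - k_floor) * k_nominal * c

class FusedController:
    """Proportional controller whose gain is scheduled by the confidence
    parameter produced alongside the fused measurement estimate."""
    def __init__(self, k_nominal):
        self.k_nominal = k_nominal

    def control(self, setpoint, estimate, confidence):
        k = confidence_scheduled_gain(self.k_nominal, confidence)
        return k * (setpoint - estimate)

ctrl = FusedController(k_nominal=2.0)
u_confident = ctrl.control(10.0, 8.0, confidence=1.0)  # full-gain response
u_doubtful = ctrl.control(10.0, 8.0, confidence=0.0)   # floor-gain response
```

The same error produces a much smaller control action when the fused estimate is untrustworthy, which is the qualitative behavior the paper describes.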
Deconstructing Calculation Methods, Part 3: Multiplication
ERIC Educational Resources Information Center
Thompson, Ian
2008-01-01
In this third of a series of four articles, the author deconstructs the primary national strategy's approach to written multiplication. The approach to multiplication, as set out on pages 12 to 15 of the primary national strategy's "Guidance paper" "Calculation" (DfES, 2007), is divided into six stages: (1) mental multiplication using…
NASA Astrophysics Data System (ADS)
O'Brien, Dominic; Haas, Harald; Rajbhandari, Sujan; Chun, Hyunchae; Faulkner, Grahame; Cameron, Katherine; Jalajakumari, Aravind V. N.; Henderson, Robert; Tsonev, Dobroslav; Ijaz, Muhammad; Chen, Zhe; Xie, Enyuan; McKendry, Jonathan J. D.; Herrnsdorf, Johannes; Gu, Erdan; Dawson, Martin D.
2015-01-01
Solid state lighting systems typically use multiple Light Emitting Diode (LED) die within a single lamp, and multiple lamps within a coverage space. This infrastructure forms the transmitters for Visible Light Communications (VLC), and the availability of low-cost detector arrays offers the possibility of building Multiple Input Multiple Output (MIMO) transmission systems. Different approaches to optical MIMO are being investigated as part of a UK government funded research programme, `Ultra-Parallel Visible Light Communications' (UPVLC). In this paper we present a brief review of the area and report results from systems that use integrated subsystems developed as part of the project. The scalability of these approaches and future directions will also be discussed.
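For intuition, a 2x2 zero-forcing receiver illustrates the optical MIMO principle: if the LED-to-detector gain matrix is known and invertible, the transmitted intensities can be recovered by inverting it. The channel values below are hypothetical, not measurements from the UP-VLC systems:

```python
def zero_forcing_2x2(H, y):
    """Zero-forcing detection for a 2x2 channel: x_hat = H^{-1} y.
    H[i][j] is the gain from LED j to detector i."""
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    assert abs(det) > 1e-12, "channel matrix must be invertible"
    return [( H[1][1] * y[0] - H[0][1] * y[1]) / det,
            (-H[1][0] * y[0] + H[0][0] * y[1]) / det]

H = [[1.0, 0.3], [0.2, 0.9]]   # cross-talk between two LED/detector pairs
x = [1.0, 0.0]                 # transmitted intensities
y = [H[0][0] * x[0] + H[0][1] * x[1],   # received signals y = H x
     H[1][0] * x[0] + H[1][1] * x[1]]
x_hat = zero_forcing_2x2(H, y)
```

Real systems add noise, so the channel conditioning (how close det is to zero) limits how many parallel streams the detector array can separate.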
Case studies: Soil mapping using multiple methods
NASA Astrophysics Data System (ADS)
Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald
2010-05-01
Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact not only to scientists; decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a subject of active research. Common to all strategies, a description of soil state and dynamics is required as a first step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies for soil mapping performed using multiple geophysical methods. We present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods yield information of different quality. By applying diverse methods we want to determine which methods, or combination of methods, give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints about variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful
Automatic numerical integration methods for Feynman integrals through 3-loop
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.
2015-05-01
We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
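The "iterated integration and extrapolation" strategy can be sketched in miniature with Romberg integration, i.e. trapezoid refinement plus Richardson extrapolation. This illustrates the idea only; it is not QUADPACK's DQAGS:

```python
import math

def romberg(f, a, b, levels=8):
    """Romberg integration: build trapezoid estimates on successively halved
    meshes, then cancel error terms by Richardson extrapolation."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]   # coarsest trapezoid estimate
    h = b - a
    for i in range(1, levels):
        h *= 0.5
        # refine the previous trapezoid estimate using only the new midpoints
        total = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        row = [0.5 * R[i - 1][0] + h * total]
        # Richardson extrapolation: each column removes the next error term
        for j in range(1, i + 1):
            row.append(row[j - 1] + (row[j - 1] - R[i - 1][j - 1]) / (4 ** j - 1))
        R.append(row)
    return R[-1][-1]

val = romberg(math.exp, 0.0, 1.0)   # exact answer: e - 1
```

The same "compute on a sequence of meshes, then extrapolate" pattern is what turns a sequence of regulator-dependent integrals into a finite limit in the numerical-regularization procedure described above.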
Surveillance systems integrating multiple sensors for enhanced situational awareness
NASA Astrophysics Data System (ADS)
Van Anda, J. B.; Van Anda, J. D.
2005-05-01
In the modern world of high value security systems a successful installation requires the sensors to produce more than just good IR images, preprocessed data from these images, imagery in multiple bands fused in intelligent ways with each other and with non imaging information such as Laser ranging is required. This paper describes a system where LW uncooled, color TV, low light level TV, and laser ranging information are fused in a integral Pan and Tilt system to provide a sensor suite with exceptional capabilities for seamlessly integration into an advanced security system. Advances integrated in this system includes the advances sensor suite, sensible symbology for situational awareness in case of operator intervention, parallax and focus tracking through zoom and sensor changes to enhance auto tracking and motion detection algorithms.
Method of descent for integrable lattices
NASA Astrophysics Data System (ADS)
Bogoyavlensky, Oleg
2009-05-01
A method of descent for constructing integrable Hamiltonian systems is introduced. The derived periodic and nonperiodic lattices possess Lax representations with spectral parameter and have plenty of first integrals. Examples of Liouville-integrable four-dimensional Hamiltonian Lotka-Volterra systems are presented.
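What it means numerically for a system to "possess first integrals" can be checked on the familiar two-dimensional Lotka-Volterra system, used here as a simpler stand-in for the four-dimensional lattices of the paper: the quantity H = x - ln x + y - ln y is conserved along exact trajectories, so an accurate integrator should keep it nearly constant.

```python
import math

def lv_rhs(x, y):
    """Classical 2D Lotka-Volterra system: dx/dt = x(1-y), dy/dt = y(x-1)."""
    return x * (1.0 - y), y * (x - 1.0)

def first_integral(x, y):
    """H = x - ln x + y - ln y is a first integral of the system above."""
    return x - math.log(x) + y - math.log(y)

def rk4_step(x, y, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lv_rhs(x, y)
    k2 = lv_rhs(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = lv_rhs(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = lv_rhs(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x, y = 1.5, 0.5
h0 = first_integral(x, y)
for _ in range(1000):
    x, y = rk4_step(x, y, 0.01)
drift = abs(first_integral(x, y) - h0)
```

Liouville-integrable systems carry enough such invariants that the motion is confined to tori; monitoring them, as here, is a standard sanity check on any numerical trajectory.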
Cao, D-S; Xiao, N; Li, Y-J; Zeng, W-B; Liang, Y-Z; Lu, A-P; Xu, Q-S; Chen, AF
2015-01-01
Identifying potential adverse drug reactions (ADRs) is critically important for drug discovery and public health. Here we developed a multiple evidence fusion (MEF) method for the large-scale prediction of drug ADRs that can handle both approved drugs and novel molecules. MEF is based on the similarity reference by collaborative filtering, and integrates multiple similarity measures from various data types, taking advantage of the complementarity in the data. We used MEF to integrate drug-related and ADR-related data from multiple levels, including the network structural data formed by known drug–ADR relationships for predicting likely unknown ADRs. On cross-validation, it obtains high sensitivity and specificity, substantially outperforming existing methods that utilize single or a few data types. We validated our prediction by their overlap with drug–ADR associations that are known in databases. The proposed computational method could be used for complementary hypothesis generation and rapid analysis of potential drug–ADR interactions. PMID:26451329
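A toy sketch of the similarity-fusion idea, with hypothetical data and a plain weighted-neighbour vote; this is far simpler than MEF's collaborative filtering but shows how complementary similarity matrices combine into one prediction:

```python
def fuse_similarities(sims, weights=None):
    """Weighted average of several drug-drug similarity matrices
    (nested lists), one matrix per data type."""
    n = len(sims[0])
    weights = weights or [1.0 / len(sims)] * len(sims)
    return [[sum(w * s[i][j] for w, s in zip(weights, sims)) for j in range(n)]
            for i in range(n)]

def predict_scores(fused, known):
    """Neighbourhood score: drug i's score for ADR a is the similarity-weighted
    vote of the other drugs already known to cause a."""
    n, m = len(fused), len(known[0])
    return [[sum(fused[i][j] * known[j][a] for j in range(n) if j != i) /
             max(sum(fused[i][j] for j in range(n) if j != i), 1e-12)
             for a in range(m)] for i in range(n)]

# Hypothetical toy data: 3 drugs, 2 ADRs; drugs 0 and 1 are highly similar.
sim_chem = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.1], [0.1, 0.1, 1.0]]
sim_target = [[1.0, 0.8, 0.2], [0.8, 1.0, 0.2], [0.2, 0.2, 1.0]]
known = [[1, 0], [0, 0], [0, 1]]   # drug 0 causes ADR 0; drug 2 causes ADR 1
fused = fuse_similarities([sim_chem, sim_target])
scores = predict_scores(fused, known)
```

Drug 1, with no known ADRs, inherits a high score for ADR 0 from its similar neighbour drug 0, which is the mechanism that lets such methods handle novel molecules.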
Integration methods for molecular dynamics
Leimkuhler, B.J.; Reich, S.; Skeel, R.D.
1996-12-31
Classical molecular dynamics simulation of a macromolecule requires the use of an efficient time-stepping scheme that can faithfully approximate the dynamics over many thousands of timesteps. Because these problems are highly nonlinear, accurate approximation of a particular solution trajectory on meaningful time intervals is neither obtainable nor desired, but some restrictions, such as symplecticness, can be imposed on the discretization which tend to imply good long term behavior. The presence of a variety of types and strengths of interatom potentials in standard molecular models places severe restrictions on the timestep for numerical integration used in explicit integration schemes, so much recent research has concentrated on the search for alternatives that possess (1) proper dynamical properties, and (2) a relative insensitivity to the fastest components of the dynamics. We survey several recent approaches. 48 refs., 2 figs.
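The velocity-Verlet scheme, the standard symplectic integrator of molecular dynamics, illustrates the "good long term behavior" the survey refers to: on a harmonic oscillator (a one-atom stand-in for a bond vibration) its energy error stays bounded rather than drifting over many thousands of steps.

```python
def velocity_verlet(x, v, force, dt, steps, mass=1.0):
    """Velocity-Verlet: symplectic, time-reversible, one force call per step."""
    f = force(x)
    for _ in range(steps):
        v += 0.5 * dt * f / mass   # half kick
        x += dt * v                # drift
        f = force(x)
        v += 0.5 * dt * f / mass   # half kick
    return x, v

force = lambda x: -x               # unit harmonic oscillator, omega = 1
x, v = velocity_verlet(1.0, 0.0, force, dt=0.05, steps=10_000)
energy = 0.5 * v * v + 0.5 * x * x   # initial energy was 0.5
```

A non-symplectic scheme of the same order (e.g. classical Runge-Kutta) would show secular energy drift over a run this long; the bounded oscillation seen here is the practical payoff of imposing symplecticness on the discretization.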
NEXT Propellant Management System Integration With Multiple Ion Thrusters
NASA Technical Reports Server (NTRS)
Sovey, James S.; Soulas, George C.; Herman, Daniel A.
2011-01-01
As a critical part of the NEXT test validation process, a multiple-string integration test was performed on the NEXT propellant management system and ion thrusters. The objectives of this test were to verify that the PMS is capable of providing stable flow control to multiple thrusters operating over the NEXT system throttling range and to demonstrate to potential users that the NEXT PMS is ready for transition to flight. A test plan was developed for the sub-system integration test for verification of PMS and thruster system performance and functionality requirements. Propellant management system calibrations were checked during the single and multi-thruster testing. The low pressure assembly total flow rates to the thruster(s) were within 1.4 percent of the calibrated support equipment flow rates. The inlet pressures to the main, cathode, and neutralizer ports of Thruster PM1R were measured as the PMS operated in 1-thruster, 2-thruster, and 3-thruster configurations. The inlet pressures to Thruster PM1R for 2-thruster and 3-thruster operation compared very favorably with those for single thruster operation, indicating that flow rates to Thruster PM1R were similar in all cases. Characterizations of discharge losses, accelerator grid current, and neutralizer performance were performed as more operating thrusters were added to the PMS. There were no variations in these parameters as thrusters were throttled and as single and multiple thruster operations were conducted. The propellant management system power consumption was measured at a fixed voltage to the DCIU and a fixed thermal throttle temperature of 75 °C. The total power consumed by the PMS was 10.0, 17.9, and 25.2 W, respectively, for single, 2-thruster, and 3-thruster operation. These sub-system integration tests of the PMS, the DCIU Simulator, and multiple thrusters addressed, in part, the NEXT PMS and propulsion system performance and functionality requirements.
A Collocation Method for Volterra Integral Equations
NASA Astrophysics Data System (ADS)
Kolk, Marek
2010-09-01
We propose a piecewise polynomial collocation method for solving linear Volterra integral equations of the second kind with logarithmic kernels which, in addition to a diagonal singularity, may have a singularity at the initial point of the interval of integration. An attainable order of the convergence of the method is studied. We illustrate our results with a numerical example.
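A simple relative of such schemes, product trapezoidal quadrature stepped forward node by node, solves a smooth second-kind Volterra equation directly. The paper's logarithmically singular kernels additionally require graded meshes and product rules adapted to the singularity, which this sketch omits:

```python
import math

def volterra_trapezoid(f, K, a, b, n):
    """Solve u(x) = f(x) + int_a^x K(x,t) u(t) dt on a uniform mesh using the
    product trapezoidal rule; each step solves a scalar equation for u(x_i)."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    u = [f(xs[0])]                       # at x = a the integral vanishes
    for i in range(1, n + 1):
        s = 0.5 * K(xs[i], xs[0]) * u[0]
        s += sum(K(xs[i], xs[j]) * u[j] for j in range(1, i))
        rhs = f(xs[i]) + h * s
        # move the implicit endpoint term h/2 * K(x_i, x_i) u_i to the left side
        u.append(rhs / (1.0 - 0.5 * h * K(xs[i], xs[i])))
    return xs, u

# Test equation: u(x) = 1 + int_0^x u(t) dt, whose exact solution is u = e^x.
xs, u = volterra_trapezoid(lambda x: 1.0, lambda x, t: 1.0, 0.0, 1.0, 200)
err = abs(u[-1] - math.e)
```

The second-order convergence of this rule is what higher-degree piecewise polynomial collocation improves upon, provided the mesh is graded to match the kernel singularity.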
Research in Mathematics Education: Multiple Methods for Multiple Uses
ERIC Educational Resources Information Center
Battista, Michael; Smith, Margaret S.; Boerst, Timothy; Sutton, John; Confrey, Jere; White, Dorothy; Knuth, Eric; Quander, Judith
2009-01-01
Recent federal education policies and reports have generated considerable debate about the meaning, methods, and goals of "scientific research" in mathematics education. Concentrating on the critical problem of determining which educational programs and practices reliably improve students' mathematics achievement, these policies and reports focus…
Multiple cue use and integration in pigeons (Columba livia).
Legge, Eric L G; Madan, Christopher R; Spetch, Marcia L; Ludvig, Elliot A
2016-05-01
Encoding multiple cues can improve the accuracy and reliability of navigation and goal localization. Problems may arise, however, if one cue is displaced and provides information which conflicts with other cues. Here we investigated how pigeons cope with cue conflict by training them to locate a goal relative to two landmarks and then varying the amount of conflict between the landmarks. When the amount of conflict was small, pigeons tended to integrate both cues in their search patterns. When the amount of conflict was large, however, pigeons used information from both cues independently. This context-dependent strategy for resolving spatial cue conflict agrees with Bayes optimal calculations for using information from multiple sources. PMID:26908004
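The "Bayes optimal" integration referred to is, for independent Gaussian cues, inverse-variance weighting; a two-landmark example (the numbers are illustrative):

```python
def combine_cues(estimates, variances):
    """Inverse-variance (Bayes-optimal for independent Gaussian cues)
    combination of location estimates; more reliable cues get more weight."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    return mean, 1.0 / total   # combined estimate and its (reduced) variance

# Two landmarks point to slightly conflicting goal locations; the more
# reliable landmark (variance 1.0) dominates the combined estimate.
mean, var = combine_cues([10.0, 12.0], [1.0, 4.0])
```

Note the combined variance (0.8) is smaller than either cue's alone, which is why integrating small conflicts pays off; under large conflicts the single-Gaussian assumption breaks down, consistent with the pigeons switching to using the cues independently.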
Integrated control system and method
Wang, Paul Sai Keat; Baldwin, Darryl; Kim, Myoungjin
2013-10-29
An integrated control system for use with an engine connected to a generator providing electrical power to a switchgear is disclosed. The engine receives gas produced by a gasifier. The control system includes an electronic controller associated with the gasifier, engine, generator, and switchgear. A gas flow sensor monitors a gas flow from the gasifier to the engine through an engine gas control valve and provides a gas flow signal to the electronic controller. A gas oversupply sensor monitors a gas oversupply from the gasifier and provides an oversupply signal indicative of gas not provided to the engine. A power output sensor monitors a power output of the switchgear and provides a power output signal. The electronic controller changes gas production of the gasifier and the power output rating of the switchgear based on the gas flow signal, the oversupply signal, and the power output signal.
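A hypothetical sketch of one step of the control logic described; the signal names, gains, and thresholds are all illustrative assumptions, not details from the patent:

```python
def control_update(gas_flow, oversupply, power_out,
                   flow_target, power_rating, k_gas=0.5, k_pwr=0.2):
    """One illustrative control step: raise or lower gasifier production to
    track the engine's gas demand (penalizing oversupply), and derate the
    switchgear when power output approaches its rating."""
    gas_command = k_gas * (flow_target - gas_flow) - k_gas * oversupply
    rating_command = power_rating
    if power_out > 0.95 * power_rating:
        rating_command = power_rating * (1.0 - k_pwr)   # derate the switchgear
    return gas_command, rating_command

gas_cmd, rating_cmd = control_update(gas_flow=8.0, oversupply=1.0,
                                     power_out=100.0, flow_target=10.0,
                                     power_rating=102.0)
```

The point is only that all three sensed signals (flow, oversupply, power) feed a single coordinated decision, as the claim describes.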
Robust rotational-velocity-Verlet integration methods
NASA Astrophysics Data System (ADS)
Rozmanov, Dmitri; Kusalik, Peter G.
2010-05-01
Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time-consistent positions and momenta. The second method is also formulated in terms of quaternions but is not quaternion-specific and can easily be adapted to any other orientational representation. Both methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
Boundary integral equation methods (BIM) or simply integral methods (IM) in the context of optical design and simulation are rigorous electromagnetic methods solving Helmholtz or Maxwell equations on the boundary (surface or interface of the structures between two materials) for scattering or/and diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases all of the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods are among the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity, mainly owing to the good convergence of the integral methods, especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods such as differential and modal methods suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning to the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at Karl-Weierstrass Institute in Berlin under a contract with Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non
Integrative and regularized principal component analysis of multiple sources of data.
Liu, Binghui; Shen, Xiaotong; Pan, Wei
2016-06-15
Integration of data of disparate types has become increasingly important to enhancing the power for new discoveries by combining complementary strengths of multiple types of data. One application is to uncover tumor subtypes in human cancer research in which multiple types of genomic data are integrated, including gene expression, DNA copy number, and DNA methylation data. In spite of their successes, existing approaches based on joint latent variable models require stringent distributional assumptions and may suffer from unbalanced scales (or units) of different types of data and non-scalability of the corresponding algorithms. In this paper, we propose an alternative based on integrative and regularized principal component analysis, which is distribution-free, computationally efficient, and robust against unbalanced scales. The new method performs dimension reduction simultaneously on multiple types of data, seeking data-adaptive sparsity and scaling. As a result, in addition to feature selection for each type of data, integrative clustering is achieved. Numerically, the proposed method compares favorably against its competitors in terms of accuracy (in identifying hidden clusters), computational efficiency, and robustness against unbalanced scales. In particular, compared with a popular method, the new method was competitive in identifying tumor subtypes associated with distinct patient survival patterns when applied to a combined analysis of DNA copy number, mRNA expression, and DNA methylation data in a glioblastoma multiforme study. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26756854
Integrative Data Analysis: The Simultaneous Analysis of Multiple Data Sets
Curran, Patrick J.; Hussong, Andrea M.
2009-01-01
Both quantitative and methodological techniques exist that foster the development and maintenance of a cumulative knowledge base within the psychological sciences. Most noteworthy of these techniques is meta-analysis which allows for the synthesis of summary statistics drawn from multiple studies when the original data are not available. However, when the original data can be obtained from multiple studies, many advantages stem from the statistical analysis of the pooled data. The authors define integrative data analysis (IDA) as the analysis of multiple data sets that have been pooled into one. Although variants of IDA have been incorporated into other scientific disciplines, the use of these techniques are much less evident in psychology. In this paper the authors present an overview of IDA as it may be applied within the psychological sciences; a discussion of the relative advantages and disadvantages of IDA; a description of analytic strategies for analyzing pooled individual data; and offer recommendations for the use of IDA in practice. PMID:19485623
EMERGY METHODS: VALUABLE INTEGRATED ASSESSMENT TOOLS
NHEERL's Atlantic Ecology Division is investigating emergy methods as tools for integrated assessment in several projects evaluating environmental impacts, policies, and alternatives for remediation and intervention. Emergy accounting is a methodology that provides a quantitative...
Erlangga, Mokhammad Puput
2015-04-16
Separation of signal from noise, whether incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise can remain mixed with the primary signal; multiple reflections are one kind of coherent noise. In this research, we processed both synthetic and real (Mentawai) seismic data to attenuate multiple reflections. Several methods exist for attenuating multiples; one of them is the Radon filter, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, when the moveout difference is too small, the Radon filter cannot sufficiently attenuate the multiples, and it also produces artifacts in the gathers. Besides the Radon filter, we also used the Wave Equation Multiple Elimination (WEMR) method to attenuate long-period multiples. WEMR is based on wave-equation inversion: from the inversion and the seismic wave amplitudes observed at the free surface, we obtain the water-bottom reflectivity, which is then used to eliminate the multiples. Because WEMR does not depend on moveout difference, it can be applied to data with small moveout differences such as the Mentawai data, where the small moveout difference is caused by the limited far offset of only 705 meters. We compared the multiple-free stacked real data after Radon filtering and after WEMR processing, and conclude that the WEMR method attenuates long-period multiple reflections better than the Radon filter on the real (Mentawai) seismic data.
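The moveout-based discrimination underlying the Radon filter can be sketched with a minimal linear tau-p (slant-stack) transform on a synthetic gather. Slownesses here are integer samples per trace, and this toy omits the inverse transform and muting that an actual Radon demultiple requires:

```python
def slant_stack(gather, slownesses, nt):
    """Minimal linear tau-p transform: sum each trace along t = tau + p*x.
    A linear event with slowness p0 focuses at p = p0 in the tau-p domain,
    which is what lets moveout-based (Radon) filtering separate events."""
    nx = len(gather)
    out = []
    for p in slownesses:
        row = []
        for tau in range(nt):
            s = 0.0
            for x in range(nx):
                t = tau + p * x
                if 0 <= t < nt:
                    s += gather[x][t]
            row.append(s)
        out.append(row)
    return out

# Synthetic gather: 10 traces, 50 time samples, one linear event t = 10 + 2*x.
nx, nt, p0 = 10, 50, 2
gather = [[1.0 if t == 10 + p0 * x else 0.0 for t in range(nt)] for x in range(nx)]
taup = slant_stack(gather, slownesses=[0, 1, 2, 3, 4], nt=nt)
peak_p = max(range(5), key=lambda i: max(taup[i]))
```

When primary and multiple slownesses nearly coincide, as in the short-offset Mentawai data, their tau-p focal points overlap and this separation fails, which is exactly the limitation that motivates WEMR.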
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835
Methods for biological data integration: perspectives and challenges
Gligorijević, Vladimir; Pržulj, Nataša
2015-01-01
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development. PMID:26490630
Quadrature rules with multiple nodes for evaluating integrals with strong singularities
NASA Astrophysics Data System (ADS)
Milovanovic, Gradimir V.; Spalevic, Miodrag M.
2006-05-01
We present a method based on the Chakalov-Popoviciu quadrature formula of Lobatto type, a rather general case of quadrature with multiple nodes, for approximating integrals defined by Cauchy principal values or by Hadamard finite parts. As a starting point we use the results obtained by L. Gori and E. Santi (cf. On the evaluation of Hilbert transforms by means of a particular class of Turan quadrature rules, Numer. Algorithms 10 (1995), 27-39; Quadrature rules based on s-orthogonal polynomials for evaluating integrals with strong singularities, Oberwolfach Proceedings: Applications and Computation of Orthogonal Polynomials, ISNM 131, Birkhauser, Basel, 1999, pp. 109-119). We generalize their results by using some of our numerical procedures for stable calculation of the quadrature formula with multiple nodes of Gaussian type and proposed methods for estimating the remainder term in such type of quadrature formulae. Numerical examples, illustrations and comparisons are also shown.
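The core trick behind quadrature for Cauchy principal values can be shown with a small sketch (Python; plain Gauss-Legendre stands in for the multiple-node, s-orthogonal rules of the paper, and `cauchy_pv` is an illustrative name, not from the paper): subtract the singularity so the remainder is smooth, then integrate the singular part in closed form.

```python
import numpy as np

def cauchy_pv(f, c, n=20):
    """Cauchy principal value of PV ∫_{-1}^{1} f(x)/(x - c) dx, -1 < c < 1.

    Singularity subtraction: (f(x) - f(c))/(x - c) is smooth and is
    handled by n-point Gauss-Legendre quadrature; the remaining
    PV ∫ dx/(x - c) has the closed form log((1 - c)/(1 + c)).
    """
    nodes, weights = np.polynomial.legendre.leggauss(n)
    smooth = (f(nodes) - f(c)) / (nodes - c)
    return np.dot(weights, smooth) + f(c) * np.log((1.0 - c) / (1.0 + c))

# Example: PV of e^x / x over [-1, 1] equals 2*Shi(1) ≈ 2.1145
val = cauchy_pv(np.exp, 0.0)
```

The same subtraction idea extends to Hadamard finite-part integrals, where derivatives of f at the singularity are removed as well.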
Multiple Integrated Complementary Healing Approaches: Energetics & Light for bone.
Gray, Michael G; Lackey, Brett R; Patrick, Evelyn F; Gray, Sandra L; Hurley, Susan G
2016-01-01
A synergistic-healing strategy that combines molecular targeting within a system-wide perspective is presented as the Multiple Integrated Complementary Healing Approaches: Energetics And Light (MICHAEL). The basis of the MICHAEL approach is the realization that environmental, nutritional and electromagnetic factors form a regulatory framework involved in bone and nerve healing. The interactions of light, energy, and nutrition with neural, hormonal and cellular pathways will be presented. Energetic therapies including electrical, low-intensity pulsed ultrasound and light based treatments affect growth, differentiation and proliferation of bone and nerve and can be utilized for their healing benefits. However, the benefits of these therapies can be impaired by the absence of nutritional, hormonal and organismal factors. For example, lack of sleep, disrupted circadian rhythms and vitamin-D deficiency can impair healing. Molecular targets, such as the Wnt pathway, protein kinase B and glucocorticoid signaling systems can be modulated by nutritional components, including quercetin, curcumin and Mg(2+) to enhance the healing process. The importance of water and water-regulation will be presented as an integral component. The effects of exercise and acupuncture on bone healing will also be discussed within the context of the MICHAEL approach. PMID:26804592
Tools and Models for Integrating Multiple Cellular Networks
Gerstein, Mark
2015-11-06
In this grant, we have systematically investigated the integrated networks, which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source, and they are available to download from Github, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding its dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore, for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
Integral Deferred Correction methods for scientific computing
NASA Astrophysics Data System (ADS)
Morton, Maureen Marilla
Since high order numerical methods frequently can attain accurate solutions more efficiently than low order methods, we develop and analyze new high order numerical integrators for the time discretization of ordinary and partial differential equations. Our novel methods address some of the issues surrounding high order numerical time integration, such as the difficulty of many popular methods' construction and handling the effects of disparate behaviors produced by different terms in the equations to be solved. We are motivated by the simplicity of how Deferred Correction (DC) methods achieve high order accuracy [72, 27]. DC methods are numerical time integrators that, rather than calculating tedious coefficients for order conditions, instead construct high order accurate solutions by iteratively improving a low order preliminary numerical solution. With each iteration, an error equation is solved, the error decreases, and the order of accuracy increases. Later, DC methods were adjusted to include an integral formulation of the residual, which stabilizes the method. These Spectral Deferred Correction (SDC) methods [25] motivated Integral Deferred Corrections (IDC) methods. Typically, SDC methods are limited to increasing the order of accuracy by one with each iteration due to smoothness properties imposed by the gridspacing. However, under mild assumptions, explicit IDC methods allow for any explicit rth order Runge-Kutta (RK) method to be used within each iteration, and then an order of accuracy increase of r is attained after each iteration [18]. We extend these results to the construction of implicit IDC methods that use implicit RK methods, and we prove analogous results for order of convergence. One means of solving equations with disparate parts is by semi-implicit integrators, handling a "fast" part implicitly and a "slow" part explicitly. We incorporate additive RK (ARK) integrators into the iterations of IDC methods in order to construct new arbitrary order
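The iterate-and-correct structure described above can be sketched for one time step (Python; explicit forward-Euler sweeps on equispaced subnodes, a simplification of the IDC methods in the thesis, which embed higher-order Runge-Kutta integrators in each sweep):

```python
import numpy as np

def idc_step(f, t0, y0, dt, M=4, sweeps=3):
    """One explicit deferred-correction step on [t0, t0 + dt] (sketch).

    A forward-Euler provisional solution on M + 1 equispaced subnodes is
    refined by correction sweeps; each sweep adds the integral of the
    residual (via an interpolatory quadrature matrix) and raises the
    formal order by one, up to the order of the quadrature.
    """
    t = t0 + dt * np.arange(M + 1) / M
    # S[m, j] = integral of the j-th Lagrange basis over [t_m, t_{m+1}]
    S = np.zeros((M, M + 1))
    for j in range(M + 1):
        others = np.delete(t, j)
        p = np.poly1d(np.poly(others) / np.prod(t[j] - others))
        P = p.integ()
        for m in range(M):
            S[m, j] = P(t[m + 1]) - P(t[m])
    # provisional low-order (forward Euler) solution
    y = np.empty(M + 1)
    y[0] = y0
    for m in range(M):
        y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])
    # correction sweeps on the error equation
    for _ in range(sweeps):
        F = np.array([f(tm, ym) for tm, ym in zip(t, y)])
        ynew = np.empty_like(y)
        ynew[0] = y0
        for m in range(M):
            h = t[m + 1] - t[m]
            ynew[m + 1] = ynew[m] + h * (f(t[m], ynew[m]) - F[m]) + S[m] @ F
        y = ynew
    return y[-1]
```

With 3 sweeps the step is formally 4th-order accurate, even though each pass only uses first-order Euler updates.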
A survey of payload integration methods
NASA Technical Reports Server (NTRS)
Engels, R. C.; Harcrow, H. W.
1981-01-01
The most prominent payload integration methods are presented and evaluated. The paper outlines the problem and some of the difficulties encountered when analyzing a coupled booster/payload system. Descriptions of both full-scale and short-cut methods are given together with an assessment of their strengths and weaknesses. Finally, an extensive list of references is included.
Integrity of hypothalamic fibers and cognitive fatigue in multiple sclerosis.
Hanken, Katrin; Eling, Paul; Kastrup, Andreas; Klein, Jan; Hildebrandt, Helmut
2015-01-01
Cognitive fatigue is a common and disabling symptom of multiple sclerosis (MS), but little is known about its pathophysiology. The present study investigated whether the posterior hypothalamus, which is considered the waking center, is associated with MS-related cognitive fatigue. We analyzed the integrity of posterior hypothalamic fibers in 49 patients with relapsing-remitting MS and 14 healthy controls. Diffusion tensor imaging (DTI) parameters were calculated for fibers between the posterior hypothalamus and, respectively, the mesencephalon, pons and prefrontal cortex. In addition, DTI parameters were computed for fibers between the anterior hypothalamus and these regions and for the corpus callosum. Cognitive fatigue was assessed using the Fatigue Scale for Motor and Cognitive Functions. Analyses of variance with repeated measures were performed to investigate the impact of cognitive fatigue on diffusion parameters. Cognitively fatigued patients (75.5%) showed a significantly lower mean axial and radial diffusivity for fibers between the posterior hypothalamus and the mesencephalon than cognitively non-fatigued patients (Group × Target area × Diffusion orientation: F=4.047; p=0.023). For fibers of the corpus callosum, MS patients presented significantly higher axial and radial diffusivity than healthy controls (Group × Diffusion orientation: F=9.904; p<0.001). Depressive mood, used as a covariate, revealed significant interaction effects for anterior hypothalamic fibers (Target area × Diffusion orientation × Depression: F=5.882; p=0.021; Hemisphere × Diffusion orientation × Depression: F=8.744; p=0.008). Changes in integrity of fibers between the posterior hypothalamus and the mesencephalon appear to be associated with MS-related cognitive fatigue. These changes might cause an altered modulation of hypothalamic centers responsible for wakefulness. Furthermore, integrity of anterior hypothalamic fibers might be related to depression in MS. PMID
Efficient integration method for fictitious domain approaches
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2015-10-01
In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results for several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
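A familiar special case of this contour-only integration is computing the area and first moments of a polygon from its boundary alone via Green's theorem (Python sketch; the paper applies the same divergence-theorem idea to FCM cell matrices, not to this toy problem):

```python
def polygon_moments(verts):
    """Area and first moments of a polygon from its boundary only.

    Instance of the divergence/Green theorem trick: the 2-D domain
    integrals of 1, x and y over the polygon are rewritten as line
    integrals over the contour, so only the boundary is traversed.
    verts: list of (x, y) vertices in counter-clockwise order.
    """
    A = Mx = My = 0.0
    n = len(verts)
    for i in range(n):
        x0, y0 = verts[i]
        x1, y1 = verts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        A += cross
        Mx += (x0 + x1) * cross   # contributes to the integral of x
        My += (y0 + y1) * cross   # contributes to the integral of y
    return A / 2.0, Mx / 6.0, My / 6.0
```

For the unit square this returns area 1 and both first moments 0.5, i.e. the centroid (0.5, 0.5) falls out of boundary data alone.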
Multiple Shooting-Local Linearization method for the identification of dynamical systems
NASA Astrophysics Data System (ADS)
Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.
2016-08-01
The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm turns out to be a recognized method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives with no more computational cost than that involved in the integration of the ODE. The numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameter values under different scenarios of noisy data.
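The multiple shooting setup can be sketched on a one-parameter decay model (Python with SciPy; a standard RK45 integrator stands in for the paper's Local Linearization scheme, and all names and defaults are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def fit_decay_multiple_shooting(t_obs, y_obs, n_seg=4):
    """Estimate theta in y' = -theta*y by multiple shooting (sketch).

    Unknowns: theta plus one initial state per shooting segment.
    Residuals: data misfit on each segment plus continuity defects
    that force adjacent segments to join up.
    """
    t_obs = np.asarray(t_obs, float)
    y_obs = np.asarray(y_obs, float)
    # segment k runs from index starts[k] to starts[k + 1]
    starts = [len(t_obs) * k // n_seg for k in range(n_seg)] + [len(t_obs) - 1]

    def residuals(p):
        theta, s = p[0], p[1:]
        res = []
        for k in range(n_seg):
            i0, i1 = starts[k], starts[k + 1]
            ts = t_obs[i0:i1 + 1]
            sol = solve_ivp(lambda t, y: -theta * y, (ts[0], ts[-1]), [s[k]],
                            t_eval=ts, rtol=1e-10, atol=1e-12)
            res.extend(sol.y[0][:-1] - y_obs[i0:i1])      # data misfit
            if k < n_seg - 1:
                res.append(sol.y[0][-1] - s[k + 1])       # continuity defect
            else:
                res.append(sol.y[0][-1] - y_obs[i1])      # final data point
        return np.asarray(res)

    p0 = np.concatenate(([1.0], y_obs[starts[:-1]]))      # crude initial guess
    return least_squares(residuals, p0).x[0]
```

Because each segment is short, the optimizer stays well conditioned even when a single long integration from a poor initial guess would diverge, which is the usual motivation for multiple shooting.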
Upcoming challenges for multiple sequence alignment methods in the high-throughput era
Kemena, Carsten; Notredame, Cedric
2009-01-01
This review focuses on recent trends in multiple sequence alignment tools. It describes the latest algorithmic improvements including the extension of consistency-based methods to the problem of template-based multiple sequence alignments. Some results are presented suggesting that template-based methods are significantly more accurate than simpler alternative methods. The validation of existing methods is also discussed at length with the detailed description of recent results and some suggestions for future validation strategies. The last part of the review addresses future challenges for multiple sequence alignment methods in the genomic era, most notably the need to cope with very large sequences, the need to integrate large amounts of experimental data, the need to accurately align non-coding and non-transcribed sequences and finally, the need to integrate many alternative methods and approaches. Contact: cedric.notredame@crg.es PMID:19648142
Students' Use of "Look Back" Strategies in Multiple Solution Methods
ERIC Educational Resources Information Center
Lee, Shin-Yi
2016-01-01
The purpose of this study was to investigate the relationship between both 9th-grade and 1st-year undergraduate students' use of "look back" strategies and problem solving performance in multiple solution methods, the difference in their use of look back strategies and problem solving performance in multiple solution methods, and the…
Multiple tag labeling method for DNA sequencing
Mathies, R.A.; Huang, X.C.; Quesada, M.A.
1995-07-25
A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.
Multiple tag labeling method for DNA sequencing
Mathies, Richard A.; Huang, Xiaohua C.; Quesada, Mark A.
1995-01-01
A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in said lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radio-isotope labels.
A method for assurance of image integrity in CAD-PACS integration
NASA Astrophysics Data System (ADS)
Zhou, Zheng
2007-03-01
Computer Aided Detection/Diagnosis (CAD) can greatly assist in the clinical decision making process, and therefore, has drawn tremendous research efforts. However, integrating independent CAD workstation results with the clinical diagnostic workflow still remains challenging. We have presented a CAD-PACS integration toolkit that complies with DICOM standard and IHE profiles. One major issue in CAD-PACS integration is the security of the images used in CAD post-processing and the corresponding CAD result images. In this paper, we present a method for assuring the integrity of both DICOM images used in CAD post-processing and the CAD image results that are in BMP or JPEG format. The method is evaluated in a PACS simulator that simulates clinical PACS workflow. It can also be applied to multiple CAD applications that are integrated with the PACS simulator. The successful development and evaluation of this method will provide a useful approach for assuring image integrity of the CAD-PACS integration in clinical diagnosis.
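One common building block for such integrity assurance is a content digest; the sketch below (Python; a plain SHA-256 hash, which only approximates the paper's DICOM-oriented signature scheme) shows the basic check:

```python
import hashlib

def integrity_digest(image_bytes):
    """Content digest for an image object (sketch; the paper's actual
    scheme works with DICOM security profiles, which this plain
    SHA-256 hash only stands in for)."""
    return hashlib.sha256(image_bytes).hexdigest()

def verify_integrity(image_bytes, recorded_digest):
    """True iff the image is bit-for-bit unchanged since the digest
    was recorded; any alteration of the bytes changes the hash."""
    return integrity_digest(image_bytes) == recorded_digest
```

In a CAD-PACS workflow the digest would be recorded when the CAD result is produced and re-checked when the image is later retrieved for diagnosis.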
Evaluation of Scheduling Methods for Multiple Runways
NASA Technical Reports Server (NTRS)
Bolender, Michael A.; Slater, G. L.
1996-01-01
Several scheduling strategies are analyzed in order to determine the most efficient means of scheduling aircraft when multiple runways are operational and the airport is operating at different utilization rates. The study compares simulation data for two and three runway scenarios to results from queuing theory for an M/D/n queue. The direction taken, however, is not to do a steady-state, or equilibrium, analysis since this is not the case during a rush period at a typical airport. Instead, a transient analysis of the delay per aircraft is performed. It is shown that the scheduling strategy that reduces the delay depends upon the density of the arrival traffic. For light traffic, scheduling aircraft to their preferred runways is sufficient; however, as the arrival rate increases, it becomes more important to separate traffic by weight class. Significant delay reduction is realized when aircraft that belong to the heavy and small weight classes are sent to separate runways with large aircraft put into the 'best' landing slot.
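The transient (non-equilibrium) delay analysis can be mimicked with a small event-driven simulation (Python; Poisson arrivals, deterministic service, n runways, starting empty; the parameters and names are illustrative, not the paper's model):

```python
import random

def mdn_transient_delay(lam, service, n_servers, horizon, seed=1):
    """Transient M/D/n delay sketch: Poisson arrivals at rate lam,
    deterministic service time, n identical servers (runways),
    starting from an empty system. Returns the mean wait-in-queue
    per aircraft over the horizon, mimicking a rush-period analysis
    rather than a steady-state one."""
    rng = random.Random(seed)
    free_at = [0.0] * n_servers   # earliest time each runway is free
    t, waits = 0.0, []
    while True:
        t += rng.expovariate(lam)                      # next arrival
        if t > horizon:
            break
        i = min(range(n_servers), key=lambda k: free_at[k])
        start = max(t, free_at[i])                     # wait if all busy
        waits.append(start - t)
        free_at[i] = start + service
    return sum(waits) / len(waits) if waits else 0.0
```

Raising the arrival rate toward saturation makes the mean delay climb sharply, the regime in which the paper finds weight-class segregation pays off.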
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
... COMMISSION Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products... the sale within the United States after importation of certain integrated circuit packages provided... integrated circuit packages provided with multiple heat-conducting paths and products containing same...
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.
2011-01-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses. PMID:21488652
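The notion of model-based error-rate control mentioned above can be illustrated with a generic estimator (Python; this is not MSblender's actual algorithm): given posterior probabilities that PSMs are correct, the expected FDR of an accepted set is the mean posterior error.

```python
def fdr_at_threshold(posteriors, threshold):
    """Model-based FDR sketch (illustrative, not MSblender's estimator):
    for PSMs accepted at `threshold`, the expected false discovery rate
    is the average posterior error probability (1 - p) of the set."""
    accepted = [p for p in posteriors if p >= threshold]
    if not accepted:
        return 0.0
    return sum(1.0 - p for p in accepted) / len(accepted)
```

Sweeping the threshold and reporting the largest accepted set whose estimated FDR stays below a target (say 1%) is the usual way such scores are turned into identification lists.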
Differential temperature integrating diagnostic method and apparatus
Doss, James D.; McCabe, Charles W.
1976-01-01
A method and device for detecting the presence of breast cancer in women by integrating the temperature difference between the temperature of a normal breast and that of a breast having a malignant tumor. The breast-receiving cups of a brassiere are each provided with thermally conductive material next to the skin, with a thermistor attached to the thermally conductive material in each cup. The thermistors are connected to adjacent arms of a Wheatstone bridge. Unbalance currents in the bridge are integrated with respect to time by means of an electrochemical integrator. In the absence of a tumor, both breasts maintain substantially the same temperature, and the bridge remains balanced. If a tumor is present in one breast, a higher temperature in that breast unbalances the bridge and the electrochemical cells integrate the temperature difference with respect to time.
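The quantity the electrochemical cell accumulates, the time-integral of the temperature difference, can be sketched numerically (Python; trapezoidal rule on sampled readings, standing in for the analog integrator in the patent):

```python
def integrated_delta_t(times, temp_a, temp_b):
    """Trapezoidal time-integral of the temperature difference between
    two sensors (numerical sketch of what the patent's electrochemical
    integrator accumulates in analog form).

    times: sample times (e.g. hours); temp_a, temp_b: readings at
    those times. Returns the integral of (temp_a - temp_b) dt.
    """
    total = 0.0
    for i in range(1, len(times)):
        d0 = temp_a[i - 1] - temp_b[i - 1]
        d1 = temp_a[i] - temp_b[i]
        total += 0.5 * (d0 + d1) * (times[i] - times[i - 1])
    return total
```

A sustained asymmetry accumulates linearly with wearing time, which is why integrating, rather than taking a single snapshot, makes the measurement robust to transient fluctuations.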
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Berke, L.; Gallagher, R. H.
1991-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EEs) are integrated with the global compatibility conditions (CCs) to form the governing set of equations. In IFM the CCs are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
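The idea of stacking equilibrium equations with compatibility conditions into one square system can be shown on the smallest indeterminate example (Python; a two-segment axial bar fixed at both walls, which is an illustration, not one of the report's test structures):

```python
import numpy as np

def ifm_two_bar(P, L1, L2):
    """Integrated Force Method on a minimal indeterminate problem:
    an axial bar fixed at both walls, load P applied at the junction
    of segments of lengths L1 and L2 with equal axial stiffness EA.

    Unknowns are the member forces N1, N2 (tension positive). The one
    equilibrium equation is stacked with the one compatibility
    condition (total elongation zero), giving a square system with
    no choice of redundant forces required."""
    EE = [1.0, -1.0]    # equilibrium at the loaded node: N1 - N2 = P
    CC = [L1, L2]       # compatibility: N1*L1 + N2*L2 = 0 (EA cancels)
    S = np.array([EE, CC])
    return np.linalg.solve(S, np.array([P, 0.0]))   # [N1, N2]
```

The classical answer N1 = P*L2/(L1+L2), N2 = -P*L1/(L1+L2) drops out directly, with the compatibility row playing the role the report assigns to the St. Venant strain formulation.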
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.
1990-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2008-06-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2012-05-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2013-10-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Multiple-analyte fluoroimmunoassay using an integrated optical waveguide sensor.
Plowman, T E; Durstchi, J D; Wang, H K; Christensen, D A; Herron, J N; Reichert, W M
1999-10-01
A silicon oxynitride integrated optical waveguide was used to evanescently excite fluorescence from a multianalyte sensor surface in a rapid, sandwich immunoassay format. Multiple analyte immunoassay (MAIA) results for two sets of three different analytes, one employing polyclonal and the other monoclonal capture antibodies, were compared with results for identical analytes performed in a single-analyte immunoassay (SAIA) format. The MAIA protocol was applied in both phosphate-buffered saline and simulated serum solutions. Point-to-point correlation values between the MAIA and SAIA results varied widely for the polyclonal antibodies (R2 = 0.42-0.98) and were acceptable for the monoclonal antibodies (R2 = 0.93-0.99). Differences in calculated receptor affinities were also evident with polyclonal antibodies, but not so with monoclonal antibodies. Polyclonal antibody capture layers tended to demonstrate departure from ideal receptor-ligand binding while monoclonal antibodies generally displayed monovalent binding. A third set of three antibodies, specific for three cardiac proteins routinely used to categorize myocardial infarction, was also evaluated with the two assay protocols. MAIA responses, over clinically significant ranges for creatine kinase MB, cardiac troponin I, and myoglobin agreed well with responses generated with SAIA protocols (R2 = 0.97-0.99). PMID:10517150
Impaired functional integration in multiple sclerosis: a graph theory study.
Rocca, Maria A; Valsasina, Paola; Meani, Alessandro; Falini, Andrea; Comi, Giancarlo; Filippi, Massimo
2016-01-01
The aim of this study was to explore the topological organization of functional brain network connectivity in a large cohort of multiple sclerosis (MS) patients and to assess whether its disruption contributes to disease clinical manifestations. Graph theoretical analysis was applied to resting state fMRI data from 246 MS patients and 55 matched healthy controls (HC). Functional connectivity between 116 cortical and subcortical brain regions was estimated using a bivariate correlation analysis. Global network properties (network degree, global efficiency, hierarchy, path length and assortativity) were abnormal in MS patients vs HC, and contributed to distinguish cognitively impaired MS patients (34%) from HC, but not the main MS clinical phenotypes. Compared to HC, MS patients also showed: (1) a loss of hubs in the superior frontal gyrus, precuneus and anterior cingulum in the left hemisphere; (2) a different lateralization of basal ganglia hubs (mostly located in the left hemisphere in HC, and in the right hemisphere in MS patients); and (3) a formation of hubs, not seen in HC, in the left temporal pole and cerebellum. MS patients also experienced a decreased nodal degree in the bilateral caudate nucleus and right cerebellum. Such a modification of regional network properties contributed to cognitive impairment and phenotypic variability of MS. An impairment of global integration (likely to reflect a reduced competence in information exchange between distant brain areas) occurs in MS and is associated with cognitive deficits. A regional redistribution of network properties contributes to cognitive status and phenotypic variability of these patients. PMID:25257603
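Global efficiency, one of the metrics used here, is the mean inverse shortest-path length over ordered node pairs; a self-contained sketch (Python, BFS on an unweighted graph; real connectivity analyses typically use weighted correlation matrices):

```python
from collections import deque

def global_efficiency(adj):
    """Global efficiency of an unweighted graph: the mean of 1/d(i, j)
    over all ordered node pairs, where d is the shortest-path length
    (pairs in different components contribute 0).

    adj: dict mapping each node to an iterable of its neighbours.
    """
    nodes = list(adj)
    n = len(nodes)
    total = 0.0
    for s in nodes:
        # breadth-first search for shortest-path lengths from s
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist.values() if d > 0)
    return total / (n * (n - 1)) if n > 1 else 0.0
```

A fully connected clique scores 1.0; longer detours or disconnections pull the score down, which is why the metric is read as a proxy for integration between distant regions.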
Hamilton, Chris A; Hendrixson, Brent E; Brewer, Michael S; Bond, Jason E
2014-02-01
The North American tarantula genus Aphonopelma provides one of the greatest challenges to species delimitation and downstream identification in spiders because traditional morphological characters appear ineffective for evaluating limits of intra- and interspecific variation in the group. We evaluated the efficacy of numerous molecular-based approaches to species delimitation within Aphonopelma based upon the most extensive sampling of theraphosids to date, while also investigating the sensitivity of randomized taxon sampling on the reproducibility of species boundaries. Mitochondrial DNA (cytochrome c oxidase subunit I) sequences were sampled from 682 specimens spanning the genetic, taxonomic, and geographic breadth of the genus within the United States. The effects of random taxon sampling compared traditional Neighbor-Joining with three modern quantitative species delimitation approaches (ABGD, P ID(Liberal), and GMYC). Our findings reveal remarkable consistency and congruence across various approaches and sampling regimes, while highlighting highly divergent outcomes in GMYC. Our investigation allowed us to integrate methodologies into an efficient, consistent, and more effective general methodological workflow for estimating species boundaries within the mygalomorph spider genus Aphonopelma. Taken alone, these approaches are not particularly useful - especially in the absence of prior knowledge of the focal taxa. Only through the incorporation of multiple lines of evidence, employed in a hypothesis-testing framework, can the identification and delimitation of confident species boundaries be determined. A key point in studying closely related species, and perhaps one of the most important aspects of DNA barcoding, is to combine a sampling strategy that broadly identifies the extent of genetic diversity across the distributions of the species of interest and incorporates previous knowledge into the "species equation" (morphology, molecules, and natural history
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-01-20
In dislocation dynamics simulations, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. Here, this paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
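The pairing of an implicit Runge-Kutta integrator with a Newton solve can be sketched in scalar form (Python; the implicit midpoint rule, a 1-stage implicit RK method, rather than the higher-order integrators or the production dislocation-dynamics code the paper targets):

```python
def implicit_midpoint(f, dfdy, y0, t0, t1, n_steps):
    """Implicit midpoint rule (a 1-stage implicit Runge-Kutta method)
    with a scalar Newton solve per step. A minimal sketch of the
    implicit-integrator-plus-nonlinear-solver pairing the paper
    studies; real dislocation dynamics involves large systems."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k = f(t + h / 2, y)          # initial guess for the stage slope
        for _ in range(20):          # Newton on k = f(t + h/2, y + h/2 * k)
            g = k - f(t + h / 2, y + 0.5 * h * k)
            dg = 1.0 - 0.5 * h * dfdy(t + h / 2, y + 0.5 * h * k)
            step = g / dg
            k -= step
            if abs(step) < 1e-14:
                break
        y += h * k
        t += h
    return y
```

Because the stage equation is solved implicitly, the method stays stable on stiff problems at step sizes where an explicit integrator of the same order would blow up, which is the trade the paper quantifies.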
Bioluminescent bioreporter integrated circuit detection methods
Simpson, Michael L.; Paulus, Michael J.; Sayler, Gary S.; Applegate, Bruce M.; Ripp, Steven A.
2005-06-14
Disclosed are monolithic bioelectronic devices comprising a bioreporter and an OASIC. These bioluminescent bioreporter integrated circuits are useful in detecting substances such as pollutants, explosives, and heavy metals residing in inhospitable areas such as groundwater, industrial process vessels, and battlefields. Also disclosed are methods and apparatus for detection of particular analytes, including ammonia and estrogen compounds.
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
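The time-scale separation the authors exploit can be sketched with a generic r-RESPA-style velocity Verlet integrator. Here the fast/slow split is a toy pair of springs standing in for the fragment or range-separated decomposition of the ab initio potential; all parameters are illustrative:

```python
import numpy as np

def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, mass=1.0):
    """One r-RESPA step: the slow force is applied as half kicks at the outer
    time step, while the fast force is integrated with n_inner velocity-Verlet
    substeps in between."""
    dt_inner = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / mass     # outer half kick (slow)
    for _ in range(n_inner):                      # inner loop (fast force only)
        v = v + 0.5 * dt_inner * f_fast(x) / mass
        x = x + dt_inner * v
        v = v + 0.5 * dt_inner * f_fast(x) / mass
    v = v + 0.5 * dt_outer * f_slow(x) / mass     # outer half kick (slow)
    return x, v

# toy split: a stiff spring (fast) plus a weak spring (slow)
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, f_fast, f_slow, dt_outer=0.05, n_inner=10)
energy = 0.5 * v**2 + 0.5 * 101.0 * x**2          # exact value is 50.5
```

Because the scheme is symplectic, the total energy stays bounded near its initial value of 50.5 even though the slow force is evaluated only once per outer step.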
Integration of multiple view plus depth data for free viewpoint 3D display
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji
2014-03-01
This paper proposes a method for constructing a reasonably scaled end-to-end free-viewpoint video system that captures multiple view-plus-depth data, reconstructs three-dimensional polygon models of objects, and displays them in virtual 3D CG spaces. The system consists of a desktop PC and four Kinect sensors. First, multiple view-plus-depth data at four viewpoints are captured by the Kinect sensors simultaneously. Then, the captured data are integrated into point cloud data by using the camera parameters. The obtained point cloud data are sampled into volume data consisting of voxels. Since volume data generated from point cloud data are sparse, the data are made dense by using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data by the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving depth maps is also presented.
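The integration step, turning each sensor's depth map into world-space points via camera parameters, can be sketched as follows. The intrinsics and pose below are illustrative placeholders, not calibration values from the paper:

```python
import numpy as np

def depth_to_points(depth, K, R, t):
    """Back-project a depth map (metres) to world-space points using camera
    intrinsics K and extrinsics (R, t), i.e. X_world = R^T (X_cam - t)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix            # normalized camera-frame rays
    pts_cam = rays * depth.reshape(-1)       # scale each ray by its depth
    pts_world = R.T @ (pts_cam - t.reshape(3, 1))
    return pts_world.T                       # (N, 3) array of points

K = np.array([[525.0, 0.0, 319.5],          # Kinect-like intrinsics (assumed)
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)            # synthetic flat wall 2 m away
cloud = depth_to_points(depth, K, np.eye(3), np.zeros(3))
```

Clouds from several calibrated sensors can then be merged simply by concatenating their outputs, since each is expressed in the same world frame.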
Fidelity of the Integrated Force Method Solution
NASA Technical Reports Server (NTRS)
Hopkins, Dale; Halford, Gary; Coroneos, Rula; Patnaik, Surya
2002-01-01
The theory of strain compatibility of the solid mechanics discipline had remained incomplete since St. Venant's 'strain formulation' in 1876. We have addressed the compatibility condition both in the continuum and in discrete systems. This has led to the formulation of the Integrated Force Method (IFM). A dual Integrated Force Method with displacement as the primal variable has also been formulated. A modest finite element code (IFM/Analyzers) based on the IFM theory has been developed. For a set of standard test problems, the IFM results were compared with stiffness method solutions and with the MSC/Nastran code. For these problems, IFM outperformed the existing methods. The superior IFM performance is attributed to the simultaneous compliance of the equilibrium equations and the compatibility conditions. The MSC/Nastran organization expressed reluctance to accept the high-fidelity IFM solutions. This report discusses the solutions to the examples. No inaccuracy was detected in the IFM solutions. A stiffness method code can, with a small programming effort, be improved to reap the many IFM benefits when implemented with the IFMD elements. Dr. Halford conducted a peer review of the Integrated Force Method. The reviewers' responses are included.
Method and apparatus for controlling multiple motors
Jones, Rollin G.; Kortegaard, Bert L.; Jones, David F.
1987-01-01
A method and apparatus are provided for simultaneously controlling a plurality of stepper motors. Addressing circuitry generates address data for each motor in a periodic address sequence. Memory circuits respond to the address data for each motor by accessing a corresponding memory location containing a first operational data set functionally related to a direction for moving the motor, speed data, and rate of speed change. First logic circuits respond to the first data set to generate a motor step command. Second logic circuits respond to the command from the first logic circuits to generate a third data set for replacing the first data set in memory with a current operational motor status, which becomes the first data set when the motor is next addressed.
The onion method for multiple perturbation theory
NASA Astrophysics Data System (ADS)
Cross, R. J.
1988-04-01
We develop a method of successive approximations for molecular scattering theory. This consists of a recipe for removing from the Schrödinger equation, one by one, the wave functions of a set of approximate solutions. The radial wave function is expressed as a linear combination of the well-behaved and singular solutions of the first approximation, and a set of coupled differential equations is obtained for the coefficients of the approximate solutions. A similar set of coefficients is obtained for the next approximation, and the exact coefficients are expressed in terms of the approximate coefficients to yield a set of second-level coefficients. The process can be continued like peeling off the layers of an onion. At each stage the coupled differential equations for the coefficients are equivalent to the Schrödinger equation. Finally, one can either ignore the remaining coefficients or approximate the coupled equations by a simple perturbation theory.
Orthogonal matrix factorization enables integrative analysis of multiple RNA binding proteins
Stražar, Martin; Žitnik, Marinka; Zupan, Blaž; Ule, Jernej; Curk, Tomaž
2016-01-01
Motivation: RNA binding proteins (RBPs) play important roles in post-transcriptional control of gene expression, including splicing, transport, polyadenylation and RNA stability. To model protein–RNA interactions by considering all available sources of information, it is necessary to integrate the rapidly growing RBP experimental data with the latest genome annotation, gene function, RNA sequence and structure. Such integration is possible by matrix factorization, where current approaches have an undesired tendency to identify only a small number of the strongest patterns with overlapping features. Because protein–RNA interactions are orchestrated by multiple factors, methods that identify discriminative patterns of varying strengths are needed. Results: We have developed an integrative orthogonality-regularized nonnegative matrix factorization (iONMF) to integrate multiple data sources and discover non-overlapping, class-specific RNA binding patterns of varying strengths. The orthogonality constraint halves the effective size of the factor model and outperforms other NMF models in predicting RBP interaction sites on RNA. We have integrated the largest data compendium to date, which includes 31 CLIP experiments on 19 RBPs involved in splicing (such as hnRNPs, U2AF2, ELAVL1, TDP-43 and FUS) and processing of 3’UTR (Ago, IGF2BP). We show that the integration of multiple data sources improves the predictive accuracy of retrieval of RNA binding sites. In our study the key predictive factors of protein–RNA interactions were the position of RNA structure and sequence motifs, RBP co-binding and gene region type. We report on a number of protein-specific patterns, many of which are consistent with experimentally determined properties of RBPs. Availability and implementation: The iONMF implementation and example datasets are available at https://github.com/mstrazar/ionmf. Contact: tomaz.curk@fri.uni-lj.si Supplementary information: Supplementary data are available
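A simplified stand-in for the core idea, nonnegative factorization with an orthogonality penalty on one factor, can be sketched with multiplicative updates. The real iONMF jointly factorizes multiple data matrices; this toy version uses a single matrix, and the update rule and penalty weight are illustrative assumptions:

```python
import numpy as np

def ortho_nmf(X, k, alpha=0.5, n_iter=200, seed=0):
    """Nonnegative factorization X ~ W H with an orthogonality penalty
    alpha * ||H H^T - I||_F^2 on H, fitted by multiplicative updates that
    keep both factors elementwise nonnegative."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1
    H = rng.random((k, m)) + 0.1
    eps = 1e-9
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
        H *= (W.T @ X + alpha * H) / (W.T @ W @ H + alpha * (H @ H.T @ H) + eps)
    return W, H

# two non-overlapping block patterns should come out as near-orthogonal rows of H
X = np.zeros((20, 10))
X[:10, :5] = 1.0
X[10:, 5:] = 1.0
W, H = ortho_nmf(X, k=2)
```

The orthogonality term discourages the factors from sharing features, which is the mechanism the paper credits for recovering non-overlapping, class-specific binding patterns.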
One-step integration of multiple genes into the oleaginous yeast Yarrowia lipolytica.
Gao, Shuliang; Han, Linna; Zhu, Li; Ge, Mei; Yang, Sheng; Jiang, Yu; Chen, Daijie
2014-12-01
Yarrowia lipolytica is an unconventional yeast and is generally recognized as safe (GRAS). It provides a versatile fermentation platform that is used commercially to produce many value-added products. Here we report a multiple-fragment assembly method that allows one-step integration of an entire β-carotene biosynthesis pathway (~11 kb, consisting of four genes) via in vivo homologous recombination into the rDNA locus of the Y. lipolytica chromosome. The highest efficiency was 21%, and the highest production of β-carotene was 2.2 ± 0.3 mg per g dry cell weight. The total procedure was completed in less than one week, as compared to a previously reported sequential gene integration method that required n weeks for n genes. This time-saving method will facilitate synthetic biology, metabolic engineering, and functional genomics studies of Y. lipolytica. PMID:25216641
Numerical methods for engine-airframe integration
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full-potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
A parallel multiple path tracing method based on OptiX for infrared image generation
NASA Astrophysics Data System (ADS)
Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu
2015-12-01
Infrared image generation technology is widely used in infrared imaging system performance evaluation, battlefield environment simulation, and military personnel training, all of which require a more physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX was proposed to solve this problem; it not only increases computational efficiency compared with serial ray tracing on a CPU, but also produces relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed, and a multiple path tracing method based on OptiX was developed accordingly. Furthermore, Monte Carlo integration was employed to solve the radiative transfer equation, with importance sampling applied to accelerate the convergence rate of the integral. After that, the framework of the simulation platform and its sensor-effects simulation diagram were given. Finally, the results showed that the method can generate relatively accurate radiation images when a precise importance sampling method is available.
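The variance-reduction step, importance sampling inside a Monte Carlo integral, can be shown on a one-dimensional toy integral. The paper applies it to the radiative transfer equation; this sketch only illustrates the estimator itself:

```python
import numpy as np

def mc_integrate(f, sampler, pdf, n=100_000, seed=0):
    """Importance-sampled Monte Carlo estimate of the integral of f:
    E_p[f(X)/p(X)] with X drawn from the proposal density p."""
    rng = np.random.default_rng(seed)
    x = sampler(rng, n)
    return np.mean(f(x) / pdf(x))

# integrate f(x) = 3 x^2 on [0, 1] (exact value 1). Sampling from p(x) = 2x
# (inverse CDF: sqrt of a uniform draw) concentrates samples where the
# integrand is large, cutting the variance relative to uniform sampling.
f = lambda x: 3.0 * x**2
est_uniform = mc_integrate(f, lambda rng, n: rng.random(n),
                           lambda x: np.ones_like(x))
est_importance = mc_integrate(f, lambda rng, n: np.sqrt(rng.random(n)),
                              lambda x: 2.0 * x)
```

Both estimators are unbiased; the importance-sampled one simply reaches a given accuracy with fewer samples, which is the point of using it inside a path tracer.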
Package for integrated optic circuit and method
Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.
1998-08-04
A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package.
A Methodology for Multiple Rule System Integration and Resolution Within a Singular Knowledge Base
NASA Technical Reports Server (NTRS)
Kautzmann, Frank N., III
1988-01-01
Expert systems that support knowledge representation by qualitative modeling techniques experience problems when called upon to support integrated views embodying description and explanation, especially when other factors such as multiple causality, competing rule model resolution, and multiple uses of knowledge representation are included. A series of prototypes is being developed to demonstrate the feasibility of automating the processes of systems engineering, design and configuration, and diagnosis and fault management. Such a study involves not only a generic knowledge representation; it must also support multiple views at varying levels of description and interaction between physical elements, systems, and subsystems, and it will involve models of description and explanation for each level. This multiple-model feature requires the development of control methods between rule systems and heuristics on a meta-level for each expert system involved in a larger, integrated class of expert systems. The broadest possible category of interacting expert systems is described, along with a general methodology for the knowledge representation and control of mutually exclusive rule systems.
In silico gene prioritization by integrating multiple data sources.
Chen, Yixuan; Wang, Wenhui; Zhou, Yingyao; Shields, Robert; Chanda, Sumit K; Elston, Robert C; Li, Jing
2011-01-01
Identifying disease genes is crucial to the understanding of disease pathogenesis, and to the improvement of disease diagnosis and treatment. In recent years, many researchers have proposed approaches to prioritize candidate genes by considering the relationship of candidate genes and existing known disease genes, as reflected in other data sources. In this paper, we propose an expandable framework for gene prioritization that can integrate multiple heterogeneous data sources by taking advantage of a unified graphic representation. Gene-gene relationships and gene-disease relationships are then defined based on the overall topology of each network using a diffusion kernel measure. These relationship measures are in turn normalized to derive an overall measure across all networks, which is utilized to rank all candidate genes. Based on the informativeness of available data sources with respect to each specific disease, we also propose an adaptive threshold score to select a small subset of candidate genes for further validation studies. We performed large-scale cross-validation analysis on 110 disease families using three data sources. Results have shown that our approach consistently outperforms two other state-of-the-art programs. A case study using Parkinson disease (PD) identified four candidate genes (UBB, SEPT5, GPR37 and TH) that ranked higher than our adaptive threshold, all of which are involved in the PD pathway. In particular, a very recent study has observed a deletion of TH in a patient with PD, which supports the importance of the TH gene in PD pathogenesis. A web tool has been implemented to assist scientists in their genetic studies. PMID:21731658
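The diffusion kernel scoring step can be sketched on a toy network. The graph, the kernel parameter, and the seed/candidate split below are illustrative, not values from the study:

```python
import numpy as np

def diffusion_kernel(adj, beta=1.0):
    """Diffusion kernel K = exp(-beta * L) for the graph Laplacian L; K[i, j]
    measures network proximity between nodes i and j."""
    L = np.diag(adj.sum(axis=1)) - adj
    w, V = np.linalg.eigh(L)                 # L is symmetric
    return (V * np.exp(-beta * w)) @ V.T

def rank_candidates(adj, seed_idx, candidates, beta=1.0):
    """Score each candidate gene by its summed kernel similarity to the known
    disease genes and return the candidates in decreasing order of score."""
    K = diffusion_kernel(adj, beta)
    scores = K[np.ix_(candidates, seed_idx)].sum(axis=1)
    order = np.argsort(-scores)
    return [candidates[i] for i in order]

# toy path network: gene 1 is adjacent to the known disease gene 0,
# gene 3 is the farthest away
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
ranked = rank_candidates(adj, seed_idx=[0], candidates=[1, 2, 3])
```

In the framework described above, one such kernel score is computed per network and the normalized scores are combined before ranking.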
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple-frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of simultaneous multiple-frequency CSI in the iteration process. As a result, the method is capable of effective quantitative conductivity image reconstruction for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are demonstrated to validate the effectiveness and capacity of the simultaneous multiple-frequency CSI method for a limited array view in VEP.
Generating nonlinear FM chirp radar signals by multiple integrations
Doerry, Armin W.
2011-02-01
A phase component of a nonlinear frequency modulated (NLFM) chirp radar pulse can be produced by performing digital integration operations over a time interval defined by the pulse width. Each digital integration operation includes applying to a respectively corresponding input parameter value a respectively corresponding number of instances of digital integration.
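A simplified numerical version of the idea, obtaining the NLFM phase by digitally integrating a frequency profile over the pulse width, might look like the following. The patented method performs multiple successive integrations; this sketch performs a single frequency-to-phase integration, and the tangent-shaped sweep is an illustrative choice:

```python
import numpy as np

def nlfm_pulse(freq_profile, fs):
    """Generate a complex NLFM chirp: digitally integrate the instantaneous
    frequency profile (cumulative sum over the sample interval) to get the
    phase, then exponentiate."""
    phase = 2.0 * np.pi * np.cumsum(freq_profile) / fs
    return np.exp(1j * phase)

fs = 1e6                          # sample rate, Hz (assumed)
n = 1000                          # 1 ms pulse width
t = np.arange(n) / fs
# tangent-shaped sweep: shallow in the middle, steep at the edges, a common
# NLFM profile used to lower range sidelobes; +/- 50 kHz total deviation
f_inst = 50e3 * np.tan(2.0 * (t / t[-1] - 0.5)) / np.tan(1.0)
pulse = nlfm_pulse(f_inst, fs)
```

The same cumulative-sum operation applied repeatedly turns a chirp-rate profile into frequency and then into phase, which is the chain of integrations the patent describes.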
Scheuermann, Thomas H; Brautigam, Chad A
2015-04-01
Isothermal titration calorimetry (ITC) has become a standard and widely available tool to measure the thermodynamic parameters of macromolecular associations. Modern applications of the method, including global analysis and drug screening, require the acquisition of multiple sets of data; sometimes these data sets number in the hundreds. Therefore, there is a need for quick, precise, and automated means to process the data, particularly at the first step of data analysis, which is commonly the integration of the raw data to yield an interpretable isotherm. Herein, we describe enhancements to an algorithm that previously has been shown to provide an automated, unbiased, and high-precision means to integrate ITC data. These improvements allow for the speedy and precise serial integration of an unlimited number of ITC data sets, and they have been implemented in the freeware program NITPIC, version 1.1.0. We present a comprehensive comparison of the performance of this software against an older version of NITPIC and a current version of Origin, which is commonly used for integration. The new methods recapitulate the excellent performance of the previous versions of NITPIC while speeding it up substantially, and their precision is significantly better than that of Origin. This new version of NITPIC is therefore well suited to the serial integration of many ITC data sets. PMID:25524420
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable for improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high-performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and each leave-one-out experiment takes only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
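The scoring step, a binary logistic regression whose output is combined with a per-gene prior probability, can be sketched as follows. The features, priors, training loop, and the balanced-training assumption behind the odds recalibration are all illustrative, not the paper's exact model:

```python
import numpy as np

def posterior_scores(features, labels, prior, n_iter=500, lr=0.1):
    """Fit a binary logistic regression on candidate-gene feature vectors by
    gradient ascent, then fold in a per-gene prior probability through the
    posterior odds (assumes the classifier was trained on a balanced set)."""
    X = np.c_[np.ones(len(features)), np.asarray(features, float)]
    y = np.asarray(labels, float)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):                  # plain gradient ascent
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    lik = 1.0 / (1.0 + np.exp(-X @ w))       # classifier score per gene
    odds = (prior / (1.0 - prior)) * (lik / (1.0 - lik))
    return odds / (1.0 + odds)

# toy features, e.g. network similarity to known disease genes (illustrative)
scores = posterior_scores(features=[[2.0], [1.8], [0.2], [0.1]],
                          labels=[1, 1, 0, 0],
                          prior=np.array([0.5, 0.5, 0.5, 0.5]))
```

Raising a gene's prior raises its posterior score without retraining the classifier, which is how different prior estimation strategies can be swapped in cheaply.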
Information Integration in Multiple Cue Judgment: A Division of Labor Hypothesis
ERIC Educational Resources Information Center
Juslin, Peter; Karlsson, Linnea; Olsson, Henrik
2008-01-01
There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations.…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... COMMISSION Certain Integrated Circuit Packages Provided With Multiple Heat-Conducting Paths and Products... With Multiple Heat-Conducting Paths and Products Containing Same, DN 2899; the Commission is soliciting... multiple heat-conducting paths and products containing same. The complaint names as respondents...
Integrating stakeholder values with multiple attributes to quantify watershed performance
NASA Astrophysics Data System (ADS)
Shriver, Deborah M.; Randhir, Timothy O.
2006-08-01
Integrating stakeholder values into the process of quantifying impairment of ecosystem functions is an important aspect of watershed assessment and planning. This study develops a classification and prioritization model to assess potential impairment in watersheds. A systematic evaluation of a broad set of abiotic, biotic, and human indicators of watershed structure and function was used to identify the level of degradation at a subbasin scale. Agencies and communities can use the method to effectively target and allocate resources to areas of greatest restoration need. The watershed performance measure (WPM) developed in this study is composed of three major components: (1) hydrologic processes (water quantity and quality), (2) biodiversity at a species scale (core and priority habitat for rare and endangered species and species richness) and landscape scale (impacts of fragmentation), and (3) urban impacts as assessed in the built environment (effective impervious area) and population effects (densities and density of toxic waste sites). Simulation modeling using the Soil and Water Assessment Tool (SWAT), monitoring information, and spatial analysis with GIS were used to assess each criterion in developing this model. Weights for attributes of potential impairment were determined through the use of the attribute prioritization procedure with a panel of expert stakeholders. This procedure uses preselected attributes and corresponding stakeholder values and is data intensive. The model was applied to all subbasins of the Chicopee River Watershed of western Massachusetts, an area with a mixture of rural, heavily forested lands, suburban, and urbanized areas. Highly impaired subbasins in one community were identified using this methodology and evaluated for principal forms of degradation and potential restoration policies and BMPs. This attribute-based prioritization method could be used in identifying baselines, prioritization policies, and adaptive community
Integrated Force Method for Indeterminate Structures
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Halford, Gary R.; Patnaik, Surya N.
2008-01-01
Two methods of solving indeterminate structural-mechanics problems have been developed as products of research on the theory of strain compatibility. In these methods, stresses are considered to be the primary unknowns (in contrast to strains and displacements being considered the primary unknowns in some prior methods). One of these methods, denoted the integrated force method (IFM), makes it possible to compute stresses, strains, and displacements with high fidelity by use of modest finite-element models that entail relatively small amounts of computation. The other method, denoted the completed Beltrami-Michell formulation (CBMF), enables direct determination of stresses in an elastic continuum with general boundary conditions, without the need to first calculate displacements as in traditional methods. The equilibrium equation, the compatibility condition, and the material law are the three fundamental concepts of the theory of structures. For almost 150 years, it has been commonly supposed that the theory is complete. However, until now, the understanding of the compatibility condition remained incomplete, and the compatibility condition was confused with the continuity condition. Furthermore, the compatibility condition as applied to structures in its previous incomplete form was inconsistent with the strain formulation in elasticity.
Methods of Genomic Competency Integration in Practice
Jenkins, Jean; Calzone, Kathleen A.; Caskey, Sarah; Culp, Stacey; Weiner, Marsha; Badzek, Laurie
2015-01-01
Purpose Genomics is increasingly relevant to health care, necessitating support for nurses to incorporate genomic competencies into practice. The primary aim of this project was to develop, implement, and evaluate a year-long genomic education intervention that trained, supported, and supervised institutional administrator and educator champion dyads to increase nursing capacity to integrate genomics through assessments of program satisfaction and institutional achieved outcomes. Design Longitudinal study of 23 Magnet Recognition Program® Hospitals (21 intervention, 2 controls) participating in a 1-year new competency integration effort aimed at increasing genomic nursing competency and overcoming barriers to genomics integration in practice. Methods Champion dyads underwent genomic training consisting of one in-person kick-off training meeting followed by monthly education webinars. Champion dyads designed institution-specific action plans detailing objectives, methods or strategies used to engage and educate nursing staff, timeline for implementation, and outcomes achieved. Action plans focused on a minimum of seven genomic priority areas: champion dyad personal development; practice assessment; policy content assessment; staff knowledge needs assessment; staff development; plans for integration; and anticipated obstacles and challenges. Action plans were updated quarterly, outlining progress made as well as inclusion of new methods or strategies. Progress was validated through virtual site visits with the champion dyads and chief nursing officers. Descriptive data were collected on all strategies or methods utilized, and timeline for achievement. Descriptive data were analyzed using content analysis. Findings The complexity of the competency content and the uniqueness of social systems and infrastructure resulted in a significant variation of champion dyad interventions. Conclusions Nursing champions can facilitate change in genomic nursing capacity through
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. To display a real object in a 360-degree viewing zone, multiple depth cameras are utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by the depth cameras are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model at the angular step of the given anamorphic optic system. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display is an excellent way to display a real object in a 360-degree viewing zone.
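The merging step described above — transforming each depth camera's point cloud into a common frame and concatenating — can be sketched in a few lines of NumPy. The registration step (estimating each camera's pose) is the hard part and is not shown here; the poses below are assumed already known, and all function names are illustrative, not from the paper.

```python
import numpy as np

def transform(points, R, t):
    """Apply a rigid transform (rotation R, translation t) to an Nx3 point array."""
    return points @ R.T + t

def merge_clouds(clouds, poses):
    """clouds: list of Nx3 arrays, each in its own camera frame.
    poses: list of (R, t) camera-to-world transforms, assumed to come
    from a prior registration step. Returns one combined world-frame cloud."""
    return np.vstack([transform(c, R, t) for c, (R, t) in zip(clouds, poses)])
```

A cloud seen by a camera rotated 90 degrees about z and raised by one unit maps point (1, 0, 0) to (0, 1, 1), so the merged model is expressed consistently in a single frame.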
Wang, Jinlian; Zuo, Yiming; Liu, Lun; Man, Yangao; Tadesse, Mahlet G.; Ressom, Habtom W
2014-01-01
Background Prediction of functional modules is indispensable for detecting protein deregulation in human complex diseases such as cancer. Bayesian network (BN) is one of the most commonly used models to integrate heterogeneous data from multiple sources such as protein domain, interactome, functional annotation, genome-wide gene expression, and the literature. Methods and Results In this paper, we present a BN classifier that is customized to: 1) increase the ability to integrate diverse information from different sources, 2) effectively predict protein-protein interactions, 3) infer aberrant networks with scale-free and small world properties, and 4) group molecules into functional modules or pathways based on the primary function and biological features. Application of this model on discovering protein biomarkers of hepatocellular carcinoma (HCC) leads to the identification of functional modules that provide insights into the mechanism of the development and progression of HCC. These functional modules include cell cycle deregulation, increased angiogenesis (e.g., vascular endothelial growth factor, blood vessel morphogenesis), oxidative metabolic alterations, and aberrant activation of signaling pathways involved in cellular proliferation, survival, and differentiation. Conclusion The discoveries and conclusions derived from our customized BN classifier are consistent with previously published results. The proposed approach for determining BN structure facilitates the integration of heterogeneous data from multiple sources to elucidate the mechanisms of complex diseases. PMID:24736851
Curriculum Integration in Arts Education: Connecting Multiple Art Forms through the Idea of "Space"
ERIC Educational Resources Information Center
Bautista, Alfredo; Tan, Liang See; Ponnusamy, Letchmi Devi; Yau, Xenia
2016-01-01
Arts integration research has focused on documenting how the teaching of specific art forms can be integrated with "core" academic subject matters (e.g. science, mathematics and literacy). However, the question of how the teaching of multiple art forms themselves can be integrated in schools remains to be explored by educational…
A Novel Method of Line Detection using Image Integration Method
NASA Astrophysics Data System (ADS)
Lin, Daniel; Sun, Bo
2015-03-01
We developed a novel line detection algorithm based on an image integration method. The Hough transform uses a spatial image gradient to detect lines in an image. This is problematic because, if the image has a region of high noise intensity, the gradient points toward the noisy region. Denoising the image requires a sophisticated noise reduction algorithm, which increases computational complexity. Our algorithm remedies this problem by averaging the pixels around the image region of interest. We were able to detect collagen fiber lines in an image produced by a confocal microscope.
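The core idea — replacing gradient estimates with cheap local pixel averages — is usually implemented with a summed-area (integral) image, which makes every box average a constant-time lookup. A minimal NumPy sketch of that building block follows; the function names and the radius parameter are illustrative, not taken from the paper.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: S[i, j] = sum of img[:i, :j]."""
    s = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    s[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return s

def box_mean(img, r):
    """Mean of each (2r+1)x(2r+1) box, clipped at the image borders.
    Each box sum is four lookups in the integral image."""
    h, w = img.shape
    s = integral_image(img)
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            area = (i1 - i0) * (j1 - j0)
            out[i, j] = (s[i1, j1] - s[i0, j1] - s[i1, j0] + s[i0, j0]) / area
    return out
```

Line detection would then score candidate lines on the averaged image rather than on noisy per-pixel gradients.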
Solution methods for very highly integrated circuits.
Nong, Ryan; Thornquist, Heidi K.; Chen, Yao; Mei, Ting; Santarelli, Keith R.; Tuminaro, Raymond Stephen
2010-12-01
While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds and a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which has not been proven before. Progress on preconditioning power grid systems using multi-grid techniques will be presented as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator.
ERIC Educational Resources Information Center
Crawford, Carrie L.
1990-01-01
Reviews literature on hypnosis, imagery, and metaphor as applied to the treatment and integration of those with multiple personality disorder (MPD) and dissociative states. Considers diagnostic criteria of MPD; explores current theories of etiology and treatment; and suggests specific examples of various clinical methods of treatment using…
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the
Multiple imputation methods for bivariate outcomes in cluster randomised trials.
DiazOrdaz, K; Kenward, M G; Gomes, M; Grieve, R
2016-09-10
Missing observations are common in cluster randomised trials. The problem is exacerbated when modelling bivariate outcomes jointly, as the proportion of complete cases is often considerably smaller than the proportion having either of the outcomes fully observed. Approaches taken to handling such missing data include the following: complete case analysis, single-level multiple imputation that ignores the clustering, multiple imputation with a fixed effect for each cluster and multilevel multiple imputation. We contrasted the alternative approaches to handling missing data in a cost-effectiveness analysis that uses data from a cluster randomised trial to evaluate an exercise intervention for care home residents. We then conducted a simulation study to assess the performance of these approaches on bivariate continuous outcomes, in terms of confidence interval coverage and empirical bias in the estimated treatment effects. Missing-at-random clustered data scenarios were simulated following a full-factorial design. Across all the missing data mechanisms considered, the multiple imputation methods provided estimators with negligible bias, while complete case analysis resulted in biased treatment effect estimates in scenarios where the randomised treatment arm was associated with missingness. Confidence interval coverage was generally in excess of nominal levels (up to 99.8%) following fixed-effects multiple imputation and too low following single-level multiple imputation. Multilevel multiple imputation led to coverage levels of approximately 95% throughout. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26990655
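The contrast between single-level imputation (which ignores clustering) and fixed-effects-per-cluster imputation can be illustrated by the deterministic core of each: filling gaps with a grand mean versus the mean of the observation's own cluster. This is a caricature of a single imputation draw, not the full multiple-imputation procedure (which adds random draws and combines estimates via Rubin's rules); function names are illustrative.

```python
import numpy as np

def impute_single_level(y):
    """Fill missing values (NaN) with the overall mean, ignoring clustering."""
    y = y.copy()
    y[np.isnan(y)] = np.nanmean(y)
    return y

def impute_cluster_means(y, cluster):
    """Fill missing values with the mean of the observation's own cluster —
    the deterministic heart of imputation with a fixed effect per cluster."""
    y = y.copy()
    for c in np.unique(cluster):
        m = (cluster == c)
        y[m & np.isnan(y)] = np.nanmean(y[m])
    return y
```

With two well-separated clusters, the grand-mean fill lands far from either cluster's values, while the cluster-mean fill stays local — the intuition behind the coverage differences reported in the abstract.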
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
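The time-of-flight idea can be sketched as a simple kinematic gate: a pair of hits in two different scintillators is a scatter (crosstalk) candidate when their time separation matches the travel time of a neutron, within some energy window, over the detector spacing. This is an illustrative toy, not the authors' analysis; the geometry, energy window, and function names are assumed.

```python
import math

def neutron_speed_m_per_ns(energy_mev):
    """Non-relativistic neutron speed in m/ns for a kinetic energy in MeV."""
    mass_mev = 939.565        # neutron rest mass, MeV/c^2
    c_m_per_ns = 0.299792458  # speed of light, m/ns
    return c_m_per_ns * math.sqrt(2.0 * energy_mev / mass_mev)

def flag_crosstalk(hits, distance_m, e_min_mev, e_max_mev):
    """hits: list of (detector_id, time_ns) sorted by time.
    Flags consecutive hits in *different* detectors whose time difference
    is consistent with a neutron in [e_min, e_max] crossing distance_m."""
    t_fast = distance_m / neutron_speed_m_per_ns(e_max_mev)  # shortest lag
    t_slow = distance_m / neutron_speed_m_per_ns(e_min_mev)  # longest lag
    flagged = []
    for (d1, t1), (d2, t2) in zip(hits, hits[1:]):
        if d1 != d2 and t_fast <= (t2 - t1) <= t_slow:
            flagged.append(((d1, t1), (d2, t2)))
    return flagged
```

For detectors 0.5 m apart, a 0.5-2 MeV window corresponds to lags of roughly 26-51 ns, so a 36 ns pair is flagged while a 5 ns pair (too fast for a neutron) is not.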
Method for measuring multiple scattering corrections between liquid scintillators
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-07-01
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Integrative Data Analysis: The Simultaneous Analysis of Multiple Data Sets
ERIC Educational Resources Information Center
Curran, Patrick J.; Hussong, Andrea M.
2009-01-01
There are both quantitative and methodological techniques that foster the development and maintenance of a cumulative knowledge base within the psychological sciences. Most noteworthy of these techniques is meta-analysis, which allows for the synthesis of summary statistics drawn from multiple studies when the original data are not available.…
Hu, Yuan-Liang; Chen, Zeng-Ping; Chen, Yao; Shi, Cai-Xia; Yu, Ru-Qin
2016-05-01
In this contribution, a multiplicative effects model for generalized multiple-internal-standard method (MEMGMIS) was proposed to solve the signal instability problem of LC-MS over time. The MEMGMIS model seamlessly integrates the multiple-internal-standard strategy with a multivariate calibration method, and makes full use of all the information carried by multiple internal standards during the quantification of target analytes. Unlike existing methods based on multiple internal standards, MEMGMIS does not require selecting an optimal internal standard for the quantification of a specific analyte from the multiple internal standards used. MEMGMIS was applied to a proof-of-concept model system: the simultaneous quantitative analysis of five edible artificial colorants in two kinds of cocktail drinks. Experimental results demonstrated that MEMGMIS models established on LC-MS data of calibration samples prepared with ultrapure water could provide quite satisfactory concentration predictions for colorants in cocktail samples from their LC-MS data measured 10 days after the LC-MS analysis of the calibration samples. The average relative prediction errors of MEMGMIS models did not exceed 6.0%, considerably better than the corresponding values of commonly used univariate calibration models combined with multiple internal standards. The advantages of good performance and simple implementation render the MEMGMIS model a promising alternative tool in quantitative LC-MS assays. PMID:27072522
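The multiplicative-effects idea behind pooling several internal standards can be illustrated with a toy correction: estimate one per-sample drift factor as the geometric mean of the observed/reference ratios of all internal standards, then divide the analyte signal by it. This is only the underlying intuition, not the MEMGMIS model itself (which is a full multivariate calibration); names and numbers are made up.

```python
import numpy as np

def drift_factor(is_signals, is_reference):
    """Per-sample multiplicative drift, pooled over all internal standards.
    is_signals: (n_samples, n_IS) observed IS intensities.
    is_reference: (n_IS,) IS intensities at calibration time.
    Returns the geometric mean of the observed/reference ratios per sample."""
    ratios = is_signals / is_reference
    return np.exp(np.mean(np.log(ratios), axis=1))

def correct(analyte_signal, is_signals, is_reference):
    """Divide out the pooled drift so signals are comparable to calibration."""
    return analyte_signal / drift_factor(is_signals, is_reference)
```

If every internal standard reads 20% high in one run and 10% low in another, the pooled factors are 1.2 and 0.9, and both runs correct back to the calibration scale.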
Parallel methods for dynamic simulation of multiple manipulator systems
NASA Technical Reports Server (NTRS)
Mcmillan, Scott; Sadayappan, P.; Orin, David E.
1993-01-01
In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
Zhao, Dong; Su, Baiquan; Chen, Guowen; Liao, Hongen
2015-04-20
In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees using integral photography (IP) and multiple semitransparent mirrors. IP combined with polyhedron-shaped multiple semitransparent mirrors is used to achieve a 360-degree viewable floating three-dimensional (3D) autostereoscopic display, which can be viewed by several observers from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax. Multiple semitransparent mirrors reflect the corresponding IP images, and the reflected IP images are situated around the center of the polyhedron-shaped display device to produce the floating display. The spatially reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes and performed two sets of experiments to evaluate the feasibility of the method described above. The results of our experiments showed that our approach can achieve a floating autostereoscopic display viewable from the surrounding area. Moreover, it is shown that the proposed method can facilitate a continuous viewpoint over the whole 360-degree display without flipping. PMID:25969022
Multiple Solution Methods and Multiple Outcomes--Is It a Task for Kindergarten Children?
ERIC Educational Resources Information Center
Tsamir, Pessia; Tirosh, Dina; Tabach, Michal; Levenson, Esther
2010-01-01
Engaging students with multiple solution problems is considered good practice. Solutions to problems consist of the outcomes of the problem as well as the methods employed to reach these outcomes. In this study we analyze the results obtained from two groups of kindergarten children who engaged in one task, the Create an Equal Number Task. This…
ERIC Educational Resources Information Center
Urdan, Tim; Munoz, Chantico
2012-01-01
Multiple methods were used to examine the academic motivation and cultural identity of a sample of college undergraduates. The children of immigrant parents (CIPs, n = 52) and the children of non-immigrant parents (non-CIPs, n = 42) completed surveys assessing core cultural identity, valuing of cultural accomplishments, academic self-concept,…
Satellite attitude prediction by multiple time scales method
NASA Technical Reports Server (NTRS)
Tao, Y. C.; Ramnath, R.
1975-01-01
An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales, fast, slow, etc., and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, since the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device. A nutation damping control loop, using the geomagnetic torque for an earth-pointing dual spin satellite, is designed in terms of the slow equation.
ERIC Educational Resources Information Center
Jennings, Todd, Ed.
Integrative education is defined as education that promotes learning and teaching in nonfragmented ways that embrace notions of holism, complexity, and interconnection. Furthermore, integrative education embraces the links, rather than the divisions, between the academic disciplines (e.g., arts and sciences) and between various subjective and…
Identifying multiple submissions in Internet research: preserving data integrity.
Bowen, Anne M; Daniel, Candice M; Williams, Mark L; Baird, Grayson L
2008-11-01
Internet-based sexuality research with hidden populations has become increasingly popular. Respondent anonymity may encourage participation and lower social desirability, but associated disinhibition may promote multiple submissions, especially when incentives are offered. The goal of this study was to identify the usefulness of different variables for detecting multiple submissions from repeat responders and to explore incentive effects. The data included 1,900 submissions from a three-session Internet intervention with a pretest and three post-test questionnaires. Participants were men who have sex with men and incentives were offered to rural participants for completing each questionnaire. The final number of submissions included 1,273 "unique", 132 first submissions by "repeat responders" and 495 additional submissions by the "repeat responders" (N = 1,900). Four categories of repeat responders were identified: "infrequent" (2-5 submissions), "persistent" (6-10 submissions), "very persistent" (11-30 submissions), and "hackers" (more than 30 submissions). Internet Provider (IP) addresses, user names, and passwords were the most useful for identifying "infrequent" repeat responders. "Hackers" often varied their IP address and identifying information to prevent easy identification, but investigating the data for small variations in IP, using reverse telephone look up, and patterns across usernames and passwords were helpful. Incentives appeared to play a role in stimulating multiple submissions, especially from the more sophisticated "hackers". Finally, the web is ever evolving and it will be necessary to have good programmers and staff who evolve as fast as "hackers". PMID:18240015
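The detection strategy in the abstract — linking submissions that share an IP address or a username — amounts to clustering records over shared identifiers. A small union-find sketch, with illustrative field names (the study also used passwords and fuzzier matching, which are omitted here):

```python
from collections import defaultdict

def group_submissions(records):
    """records: list of dicts with 'id', 'ip', 'username'.
    Links records sharing an IP or a (case-insensitive) username via
    union-find; returns groups of suspected repeat responders (size > 1)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    key_to_ids = defaultdict(list)
    for r in records:
        for key in (('ip', r['ip']), ('user', r['username'].lower())):
            key_to_ids[key].append(r['id'])
    for ids in key_to_ids.values():
        for other in ids[1:]:
            union(ids[0], other)

    groups = defaultdict(list)
    for r in records:
        groups[find(r['id'])].append(r['id'])
    return [g for g in groups.values() if len(g) > 1]
```

Transitive links matter: two submissions from one IP plus a third reusing one of those usernames from a new IP all land in the same group, which is how persistent responders who rotate IPs get caught.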
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.
Multiple integral representation for the trigonometric SOS model with domain wall boundaries
NASA Astrophysics Data System (ADS)
Galleas, W.
2012-05-01
Using the dynamical Yang-Baxter algebra we derive a functional equation for the partition function of the trigonometric SOS model with domain wall boundary conditions. The solution of the equation is given in terms of a multiple contour integral.
NASA Astrophysics Data System (ADS)
Mahmud, K.; Mariethoz, G.; Baker, A.; Sharma, A.
2015-01-01
Hydraulic conductivity is one of the most critical and at the same time one of the most uncertain parameters in many groundwater models. One problem commonly faced is that the data are usually not collected at the same scale as the discretized elements used in a numerical model. Moreover, it is common that different types of hydraulic conductivity measurements, corresponding to different spatial scales, coexist in a studied domain, which have to be integrated simultaneously. Here we address this issue in the context of Image Quilting, one of the recently developed multiple-point geostatistics methods. Based on a training image that represents fine-scale spatial variability, we use the simplified renormalization upscaling method to obtain a series of upscaled training images that correspond to the different scales at which measurements are available. We then apply Image Quilting with such a multiscale training image to be able to incorporate simultaneously conditioning data at several spatial scales of heterogeneity. The realizations obtained satisfy the conditioning data exactly across all scales, but it can come at the expense of a small approximation in the representation of the physical scale relationships. In order to mitigate this approximation, we iteratively apply a kriging-based correction to the finest scale that ensures local conditioning at the coarsest scales. The method is tested on a series of synthetic examples where it gives good results and shows potential for the integration of different measurement methods in real-case hydrogeological models.
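The multiscale training image idea can be sketched with a toy upscaling rule: coarsen a conductivity field by block geometric means (a common simplification of renormalization for mildly heterogeneous media), producing a pyramid of training images, one per measurement scale. This is an illustrative stand-in for the simplified renormalization method cited in the abstract; names are assumed.

```python
import numpy as np

def upscale_geometric(field, factor):
    """Coarsen a conductivity field by block geometric means — a simplified
    stand-in for renormalization upscaling."""
    h, w = field.shape
    assert h % factor == 0 and w % factor == 0
    blocks = field.reshape(h // factor, factor, w // factor, factor)
    return np.exp(np.log(blocks).mean(axis=(1, 3)))

def multiscale_pyramid(training_image, levels):
    """Training image at successively coarser scales (factor 2 per level),
    so conditioning data at each measurement scale has a matching image."""
    pyramid = [training_image]
    for _ in range(levels - 1):
        pyramid.append(upscale_geometric(pyramid[-1], 2))
    return pyramid
```

Each simulated realization can then be conditioned against data at the scale of the corresponding pyramid level, which is the role the upscaled training images play in the authors' Image Quilting workflow.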
Two-Dimensional Integral Combustion for Multiple Phase Flow
1997-05-05
This ANL multiphase two-dimensional combustion computer code solves conservation equations for gaseous species and solid particles (or droplets) of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation for a two-phase reacting flow. Associated submodels include an integral combustion, a two-parameter turbulence, a particle evaporation, and interfacial submodels. A newly developed integral combustion submodel replacing an Arrhenius-type differential reaction submodel is implemented to improve numerical convergence and enhance numerical stability. The two-parameter turbulence submodel is modified for both gas and solid phases. The evaporation submodel treats size dispersion as well as particle evaporation. Interfacial submodels use correlations to model interfacial momentum and energy transfer.
Integration of Multiple Organic Light Emitting Diodes and a Lens for Emission Angle Control
NASA Astrophysics Data System (ADS)
Rahadian, Fanny; Masada, Tatsuya; Fujieda, Ichiro
We propose to integrate a single lens on top of multiple OLEDs. Angular distribution of the light emitted from the lens surface is altered by turning on the OLEDs selectively. We can use such a light source as a backlight for a liquid crystal display to switch its viewing angle range and/or to display multiple images in different directions. Pixel-level integration would allow one to construct an OLED display with a similar emission angle control.
Accelerating Ab Initio Path Integral Simulations via Imaginary Multiple-Timestepping.
Cheng, Xiaolu; Herr, Jonathan D; Steele, Ryan P
2016-04-12
This work investigates the use of multiple-timestep schemes in imaginary time for computationally efficient ab initio equilibrium path integral simulations of quantum molecular motion. In the simplest formulation, only every nth path integral replica is computed at the target level of electronic structure theory, whereas the remaining low-level replicas still account for nuclear motion quantum effects with a more computationally economical theory. Motivated by recent developments for multiple-timestep techniques in real-time classical molecular dynamics, both 1-electron (atomic-orbital basis set) and 2-electron (electron correlation) truncations are shown to be effective. Structural distributions and thermodynamic averages are tested for representative analytic potentials and ab initio molecular examples. Target quantum chemistry methods include density functional theory and second-order Møller-Plesset perturbation theory, although any level of theory is formally amenable to this framework. For a standard two-level splitting, computational speedups of 1.6-4.0x are observed when using a 4-fold reduction in time slices; an 8-fold reduction is feasible in some cases. Multitiered options further reduce computational requirements and suggest that quantum mechanical motion could potentially be obtained at a cost not significantly different from the cost of classical simulations. PMID:26966920
Characterizing lentic freshwater fish assemblages using multiple sampling methods
Fischer, Jesse R.; Quist, Michael
2014-01-01
Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., two to four) used in optimal seasons were not present. Specifically, over 90% of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
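The gear-combination evaluation boils down to a set-cover style question: for each subset size, which combination of gears pools the largest species set? A small sketch of that bookkeeping (the gear names and catches below are invented, not the study's data):

```python
from itertools import combinations

def richness_by_combination(catch, max_gears=4):
    """catch: dict mapping gear name -> set of species it sampled.
    For each combination size k, returns the gear subset whose pooled
    species set is largest, together with that pooled richness."""
    best = {}
    gears = list(catch)
    for k in range(1, min(max_gears, len(gears)) + 1):
        top = max(combinations(gears, k),
                  key=lambda combo: len(set().union(*(catch[g] for g in combo))))
        best[k] = (top, len(set().union(*(catch[g] for g in top))))
    return best
```

Plotting best-richness against k shows the diminishing returns the authors report: once the best two-to-four gears are chosen in their optimal seasons, adding more gears adds few species.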
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
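The core idea of the two-detector scheme, that a component common to both detectors survives cross-correlation while uncorrelated components average out, can be sketched with synthetic signals. The signal model below is an assumption for illustration, not the patent's optical configuration.

```python
# Minimal sketch of the cross-correlation step: time series from two detectors
# share a single-scattering component; each detector adds independent
# (multiple-scattering-like) noise, which cross-correlation suppresses.
import numpy as np

rng = np.random.default_rng(0)
n = 10000
common = rng.normal(size=n)        # component seen identically by both detectors
a = common + rng.normal(size=n)    # detector A: adds independent noise
b = common + rng.normal(size=n)    # detector B: adds independent noise

def norm_cross_corr(x, y):
    """Normalized zero-lag cross-correlation of two time series."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

rho = norm_cross_corr(a, b)   # near 0.5 for equal signal and noise variances
```

Repeating this at several detector separations maps out the composite single/multiple scattering curve described in the abstract.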
ERIC Educational Resources Information Center
Daniel, Shannon M.
2015-01-01
In this self-study, the author reflects on her implementation of empathetic, critical integrations of multiple perspectives (ECI), which she designed to afford preservice teachers the opportunity to discuss and collectively reflect upon the oft-diverging multiple perspectives, values, and practices they experience during their practicum (Daniel,…
Multiple grid method for the calculation of potential flow around three dimensional bodies
NASA Astrophysics Data System (ADS)
Wolff, H.
1982-01-01
The classical approach of representing the solution by means of a doublet distribution on the boundary of the domain is considered. From the boundary condition, a Fredholm integral equation for the doublet distribution, mu, is obtained. The distribution mu is approximated by a piecewise constant function. This numerical method results in a nonsparse system that is solved by a multiple grid iterative process. The convergence rate of this process is discussed and its performance is compared with the Jacobi iterative process. For flow around an ellipsoid, the multiple grid process turns out to be much more efficient than the Jacobi iterative process.
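The Jacobi baseline against which the multiple grid process is compared can be sketched on a small dense system of the kind a piecewise-constant discretization produces. The matrix below is an invented, diagonally dominant stand-in, not the actual discretized integral operator; a multigrid method would accelerate this by attenuating smooth error components on coarser grids.

```python
# Jacobi iteration on a small dense (nonsparse) linear system A x = b.
# The matrix is illustrative: diagonally dominant, so Jacobi converges.
import numpy as np

def jacobi(A, b, iters=200):
    x = np.zeros_like(b)
    D = np.diag(A)             # diagonal part
    R = A - np.diag(D)         # off-diagonal remainder
    for _ in range(iters):
        x = (b - R @ x) / D    # x_{k+1} = D^{-1} (b - R x_k)
    return x

A = np.array([[4.0, 1.0, 0.5],
              [1.0, 5.0, 1.0],
              [0.5, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```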
Energy Simulation of Integrated Multiple-Zone Variable Refrigerant Flow System
Shen, Bo; Rice, C Keith; Baxter, Van D
2013-01-01
We developed a detailed steady-state system model to simulate the performance of an integrated five-zone variable refrigerant flow (VRF) heat pump system. The system is multi-functional, capable of space cooling, space heating, combined space cooling and water heating, and dedicated water heating. Methods were developed to map the VRF performance in each mode, based on the abundant data produced by the equipment system model. The performance maps were used in TRNSYS annual energy simulations. Using TRNSYS, we have successfully set up and run cases for a multiple-split VRF heat pump and dehumidifier combination in 5-zone houses in 5 climates that control indoor dry-bulb temperature and relative humidity. We compared the calculated energy consumption of the VRF heat pump against that of a baseline central air-source heat pump coupled with electric water heating and standalone dehumidifiers. In addition, we investigated multiple control scenarios for the VRF heat pump, i.e., on/off control, variable indoor air flow rate, and different zone temperature setting schedules. The energy savings for the multiple scenarios were assessed.
A method for interactive specification of multiple-block topologies
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Mccann, Karen M.
1991-01-01
A method is presented for dealing with the vast amount of topological and other data which must be specified to generate a multiple-block computational grid. Specific uses of the graphical capabilities of a powerful scientific workstation are described which reduce the burden on the user of collecting and formatting such large amounts of data. A program to implement this method, 3DPREP, is described. A plotting transformation algorithm, some useful software tools, notes on programming, and a database organization are also presented. Example grids developed using the method are shown.
Improved parallel solution techniques for the integral transport matrix method
Zerr, Robert J; Azmy, Yousry Y
2010-11-23
Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to approximately 50% when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner.
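The red-black color-set idea can be shown on the simplest possible setting, a 1D Poisson problem: all "red" (even) unknowns are updated first using their "black" neighbors, then vice versa, so each color sweep is fully parallel. This is a generic illustration of red-black Gauss-Seidel, not the ITMM operators or domain decomposition of the paper.

```python
# Red-black Gauss-Seidel for -u'' = f on [0,1] with u(0) = u(1) = 0.
# Points of one color depend only on the other color, so each half-sweep
# could be executed in parallel -- the property PGS exploits for subdomains.
import numpy as np

def red_black_gs(f, h, sweeps=2000):
    n = len(f)
    u = np.zeros(n)
    for _ in range(sweeps):
        for color in (0, 1):                      # red sweep, then black sweep
            idx = np.arange(1, n - 1)
            idx = idx[idx % 2 == color]
            u[idx] = 0.5 * (u[idx - 1] + u[idx + 1] + h * h * f[idx])
    return u

n = 33
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)   # exact solution: u(x) = sin(pi x)
u = red_black_gs(f, h)
```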
Integrated Dataset of Screening Hits against Multiple Neglected Disease Pathogens
Nwaka, Solomon; Besson, Dominique; Ramirez, Bernadette; Maes, Louis; Matheeussen, An; Bickle, Quentin; Mansour, Nuha R.; Yousif, Fouad; Townson, Simon; Gokool, Suzanne; Cho-Ngwa, Fidelis; Samje, Moses; Misra-Bhattacharya, Shailja; Murthy, P. K.; Fakorede, Foluke; Paris, Jean-Marc; Yeates, Clive; Ridley, Robert; Van Voorhis, Wesley C.; Geary, Timothy
2011-01-01
New chemical entities are desperately needed that overcome the limitations of existing drugs for neglected diseases. Screening a diverse library of 10,000 drug-like compounds against 7 neglected disease pathogens resulted in an integrated dataset of 744 hits. We discuss the prioritization of these hits for each pathogen and the strong correlation observed between compounds active against more than two pathogens and mammalian cell toxicity. Our work suggests that the efficiency of early drug discovery for neglected diseases can be enhanced through a collaborative, multi-pathogen approach. PMID:22247786
Improved Multiple-Coarsening Methods for Sn Discretizations of the Boltzmann Equation
Lee, Barry
2010-06-01
In a recent series of articles, the author presented a multiple-coarsening multigrid method for solving $S_n$ discretizations of the Boltzmann transport equation. This algorithm is applied to an integral equation for the scalar flux or moments. Although this algorithm is very efficient over parameter regimes that describe realistic neutron/photon transport applications, improved methods that can reduce the computational cost are presented in this paper. These improved methods are derived through a careful examination of the frequencies, particularly the near-nullspace, of the integral equation. In the earlier articles, the near-nullspace components were shown to be smooth in angle in the sense that the angular fluxes generated by these components are smooth in angle. In this paper, we present a spatial description of these near-nullspace components. Using the angular description of the earlier papers together with the spatial description reveals the intrinsic space-angle dependence of the integral equation's frequencies. This space-angle dependence is used to determine the appropriate space-angle grids on which to represent and efficiently attenuate the near-nullspace error components. It will be shown that these components can have multiple spatial scales. By using only the appropriate space-angle grids that can represent these spatial scales in the original multiple-coarsening algorithm, an improved algorithm is obtained. Moreover, particularly for anisotropic scattering, recognizing the strong angle dependence of the angular fluxes generated by the high frequencies of the integral equation, another improved multiple-coarsening scheme is derived. Restricting this scheme to the appropriate space-angle grids produces a very efficient method.
Improved Multiple-Coarsening Methods for Sn Discretizations of the Boltzmann Equation
Lee, B
2008-12-01
In a recent series of articles, the author presented a multiple-coarsening multigrid method for solving S_n discretizations of the Boltzmann transport equation. This algorithm is applied to an integral equation for the scalar flux or moments. Although this algorithm is very efficient over parameter regimes that describe realistic neutron/photon transport applications, improved methods that can reduce the computational cost are presented in this paper. These improved methods are derived through a careful examination of the frequencies, particularly the near-nullspace, of the integral equation. In the earlier articles, the near-nullspace components were shown to be smooth in angle in the sense that the angular fluxes generated by these components are smooth in angle. In this paper, we present a spatial description of these near-nullspace components. Using the angular description of the earlier papers together with the spatial description reveals the intrinsic space-angle dependence of the integral equation's frequencies. This space-angle dependence is used to determine the appropriate space-angle grids on which to represent and efficiently attenuate the near-nullspace error components. It will be shown that these components can have multiple spatial scales. By using only the appropriate space-angle grids that can represent these spatial scales in the original multiple-coarsening algorithm, an improved algorithm is obtained. Moreover, particularly for anisotropic scattering, recognizing the strong angle dependence of the angular fluxes generated by the high frequencies of the integral equation, another improved multiple-coarsening scheme is derived. Restricting this scheme to the appropriate space-angle grids produces a very efficient method.
Galerkin projection methods for solving multiple related linear systems
Chan, T.F.; Ng, M.; Wan, W.L.
1996-12-31
We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
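The seed-projection idea for the SPD case can be sketched directly: solve one seed system with CG while saving the direction vectors, then obtain Galerkin approximations for related right-hand sides by projecting onto the saved subspace. The matrix and right-hand sides below are tiny invented examples; a practical implementation would add restarts and reuse A-conjugacy to avoid the explicit projected solve.

```python
# Sketch of single-seed Galerkin projection for A x^(i) = b^(i), A SPD.
import numpy as np

def cg_with_basis(A, b, tol=1e-10):
    """CG on the seed system; also return the direction vectors as columns."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    P = []
    for _ in range(len(b)):            # at most n steps in exact arithmetic
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        P.append(p.copy())
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, np.array(P).T

def galerkin_project(A, P, b):
    """Galerkin approximation: solve (P^T A P) y = P^T b, lift back as P y."""
    y = np.linalg.solve(P.T @ A @ P, P.T @ b)
    return P @ y

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b_seed = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.1, 0.1, 0.0])        # right-hand side close to the seed's

x_seed, P = cg_with_basis(A, b_seed)  # seed solve builds the Krylov basis
x2 = galerkin_project(A, P, b2)       # projected solution for the second system
```

In this 3x3 example the seed Krylov space is the full space, so the projection is exact; in general it only yields a good starting approximation, which is why the method restarts with an unsolved system as the next seed.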
Students' integration of multiple representations in a titration experiment
NASA Astrophysics Data System (ADS)
Kunze, Nicole M.
A complete understanding of a chemical concept is dependent upon a student's ability to understand the microscopic or particulate nature of the phenomenon and integrate the microscopic, symbolic, and macroscopic representations of the phenomenon. Acid-base chemistry is a general chemistry topic requiring students to understand the topics of chemical reactions, solutions, and equilibrium presented earlier in the course. In this study, twenty-five student volunteers from a second semester general chemistry course completed two interviews. The first interview was completed prior to any classroom instruction on acids and bases. The second interview took place after classroom instruction, a prelab activity consisting of a titration calculation worksheet, a titration computer simulation, or a microscopic level animation of a titration, and two microcomputer-based laboratory (MBL) titration experiments. During the interviews, participants were asked to define and describe acid-base concepts and in the second interview they also drew the microscopic representations of four stages in an acid-base titration. An analysis of the data showed that participants had integrated the three representations of an acid-base titration to varying degrees. While some participants showed complete understanding of acids, bases, titrations, and solution chemistry, other participants showed several alternative conceptions concerning strong acid and base dissociation, the formation of titration products, and the dissociation of soluble salts. Before instruction, participants' definitions of acid, base, and pH were brief and consisted of descriptive terms. After instruction, the definitions were more scientific and reflected the definitions presented during classroom instruction.
Promoting return of function in multiple sclerosis: An integrated approach
Gacias, Mar; Casaccia, Patrizia
2013-01-01
Multiple sclerosis is a disease characterized by inflammatory demyelination, axonal degeneration and progressive brain atrophy. Most of the currently available disease-modifying agents have proved very effective in managing the relapse rate; however, progressive neuronal damage continues to occur and leads to the progressive accumulation of irreversible disability. For this reason, any therapeutic strategy aimed at restoration of function must take into account not only immunomodulation, but also axonal protection and new myelin formation. We further highlight the importance of a holistic approach, which considers the variability of therapeutic responsiveness as the result of the interplay between genetic differences and the epigenome, which is in turn affected by gender, age and differences in lifestyle, including diet, exercise, smoking and social interaction. PMID:24363985
Promoting return of function in multiple sclerosis: An integrated approach.
Gacias, Mar; Casaccia, Patrizia
2013-10-01
Multiple sclerosis is a disease characterized by inflammatory demyelination, axonal degeneration and progressive brain atrophy. Most of the currently available disease-modifying agents have proved very effective in managing the relapse rate; however, progressive neuronal damage continues to occur and leads to the progressive accumulation of irreversible disability. For this reason, any therapeutic strategy aimed at restoration of function must take into account not only immunomodulation, but also axonal protection and new myelin formation. We further highlight the importance of a holistic approach, which considers the variability of therapeutic responsiveness as the result of the interplay between genetic differences and the epigenome, which is in turn affected by gender, age and differences in lifestyle, including diet, exercise, smoking and social interaction. PMID:24363985
Exercise in multiple sclerosis -- an integral component of disease management
2012-01-01
Multiple sclerosis (MS) is the most common chronic inflammatory disorder of the central nervous system (CNS) in young adults. The disease causes a wide range of symptoms depending on the localization and characteristics of the CNS pathology. In addition to drug-based immunomodulatory treatment, both drug-based and non-drug approaches are established as complementary strategies to alleviate existing symptoms and to prevent secondary diseases. In particular, physical therapy like exercise and physiotherapy can be customized to the individual patient's needs and has the potential to improve the individual outcome. However, high quality systematic data on physical therapy in MS are rare. This article summarizes the current knowledge on the influence of physical activity and exercise on disease-related symptoms and physical restrictions in MS patients. Other treatment strategies such as drug treatments or cognitive training were deliberately excluded for the purposes of this article. PMID:22738091
Method for high-accuracy multiplicity-correlation measurements
NASA Astrophysics Data System (ADS)
Gulbrandsen, K.; Søgaard, C.
2016-04-01
Multiplicity-correlation measurements provide insight into the dynamics of high-energy collisions. Models describing these collisions need these correlation measurements to tune the strengths of the underlying QCD processes which influence all observables. Detectors, however, often possess limited coverage or reduced efficiency that influence correlation measurements in obscure ways. In this paper, the effects of nonuniform detection acceptance and efficiency on the measurement of multiplicity correlations between two distinct detector regions (termed forward-backward correlations) are derived. An analysis method with such effects built in is developed and subsequently verified using different event generators. The resulting method accounts for acceptance and efficiency in a model-independent manner with high accuracy, thereby shedding light on the relative contributions of the underlying processes to particle production.
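A forward-backward multiplicity correlation of the kind the abstract discusses is, at its simplest, a Pearson coefficient between event-by-event multiplicities in two detector regions. The event model below (a fluctuating common source feeding both regions) is an assumption chosen to induce a nonzero correlation; the paper's contribution, correcting this estimator for nonuniform acceptance and efficiency, is not reproduced here.

```python
# Sketch: forward-backward multiplicity correlation on synthetic events.
# A fluctuating per-event activity (gamma-distributed) drives Poisson
# multiplicities in both regions, so the regions are positively correlated.
import numpy as np

rng = np.random.default_rng(1)
n_events = 50000
activity = rng.gamma(4.0, 5.0, n_events)   # per-event source strength, mean 20
nf = rng.poisson(0.5 * activity)           # forward-region multiplicity
nb = rng.poisson(0.5 * activity)           # backward-region multiplicity

def b_corr(nf, nb):
    """Pearson correlation coefficient between the two multiplicities."""
    cov = np.mean(nf * nb) - nf.mean() * nb.mean()
    return float(cov / np.sqrt(nf.var() * nb.var()))

b = b_corr(nf, nb)
```

For this model the expected coefficient is Var(activity)/4 divided by (mean(activity)/2 + Var(activity)/4), i.e. 25/35 ≈ 0.71; limited acceptance or efficiency in either region would bias this value, which motivates the correction method in the paper.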
Multiple light scattering methods for multiphase flow diagnostics
NASA Astrophysics Data System (ADS)
Estevadeordal, Jordi
2015-11-01
Multiphase flows of gases and liquids containing droplets, bubbles, or particulates present light scattering imaging challenges due to the interference from each phase, such as secondary reflections, extinctions, absorptions, and refractions. These factors often prevent the unambiguous detection of each phase and also produce undesired beam steering. The effects can be especially complex in the presence of dense phases, multispecies flows, and high-pressure environments. This investigation reports new methods for overcoming these effects for quantitative measurements of velocity, density, and temperature fields. The methods are based on light scattering techniques combining Mie and filtered Rayleigh scattering and light extinction analyses and measurements. The optical layout is designed to perform multiple property measurements with improved signal from each phase via laser spectral and polarization characterization, etalon decontamination, and use of multiple wavelengths and imaging detectors.
Measuring multiple residual-stress components using the contour method and multiple cuts
Prime, Michael B; Swenson, Hunter; Pagliaro, Pierluigi; Zuccarello, Bernardo
2009-01-01
The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.
Reis, Ben Y; Mandl, Kenneth D
2003-01-01
Syndromic surveillance systems are being deployed widely to monitor for signals of covert bioterrorist attacks. Regional systems are being established through the integration of local surveillance data across multiple facilities. We studied how different methods of data integration affect outbreak detection performance. We used a simulation relying on a semi-synthetic dataset, introducing simulated outbreaks of different sizes into historical visit data from two hospitals. In one simulation, we introduced the synthetic outbreak evenly into both hospital datasets (aggregate model). In the second, the outbreak was introduced into only one or the other of the hospital datasets (local model). We found that the aggregate model had a higher sensitivity for detecting outbreaks that were evenly distributed between the hospitals. However, for outbreaks that were localized to one facility, maintaining individual models for each location proved to be better. Given the complementary benefits offered by both approaches, the results suggest building a hybrid system that includes both individual models for each location, and an aggregate model that combines all the data. We also discuss options for multi-level signal integration hierarchies. PMID:14728233
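The aggregate-versus-local comparison can be sketched with a simple z-score detector applied either to pooled visit counts or to each hospital separately. The detector, counts, and outbreak size are invented for illustration; the study used semi-synthetic outbreaks injected into real hospital data.

```python
# Sketch: local vs aggregate outbreak detection on synthetic daily counts.
# An outbreak confined to one hospital is diluted in the pooled series,
# so the local model sees a stronger signal -- the effect reported above.
import numpy as np

def zscore(series):
    """z-score of the latest count against the preceding baseline days."""
    baseline = series[:-1]
    return (series[-1] - baseline.mean()) / baseline.std()

rng = np.random.default_rng(2)
h1 = rng.poisson(50, 30).astype(float)   # daily visit counts, hospital 1
h2 = rng.poisson(50, 30).astype(float)   # daily visit counts, hospital 2
h1[-1] += 60.0                           # outbreak localized to hospital 1

z_local = max(zscore(h1), zscore(h2))    # local model: one detector per site
z_aggregate = zscore(h1 + h2)            # aggregate model: pooled counts
```

An evenly distributed outbreak reverses the comparison, since pooling then averages away independent noise, which is why the authors suggest a hybrid system running both models.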
Pursuing the method of multiple working hypotheses for hydrological modeling
NASA Astrophysics Data System (ADS)
Clark, M. P.; Kavetski, D.; Fenicia, F.
2012-12-01
Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in measuring and representing the heterogeneity encountered in natural systems. This presentation advocates using the method of multiple working hypotheses for systematic and stringent testing of model alternatives in hydrology. We discuss how the multiple hypothesis approach provides the flexibility to formulate alternative representations (hypotheses) describing both individual processes and the overall system. When combined with incisive diagnostics to scrutinize multiple model representations against observed data, this provides hydrologists with a powerful and systematic approach for model development and improvement. Multiple hypothesis frameworks also support a broader coverage of the model hypothesis space and hence improve the quantification of predictive uncertainty arising from system and component non-identifiabilities. As part of discussing the advantages and limitations of multiple hypothesis frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting different types of data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from major uncertainties in environmental data, quantifying regional differences in dominant hydrological processes, and the grander challenge of understanding the self-organization and optimality principles that may functionally explain and describe the heterogeneities evident in most environmental systems. We assess recent progress in these research directions, and how new advances are possible using multiple hypothesis
Pursuing the method of multiple working hypotheses for hydrological modeling
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri; Fenicia, Fabrizio
2011-09-01
Ambiguities in the representation of environmental processes have manifested themselves in a plethora of hydrological models, differing in almost every aspect of their conceptualization and implementation. The current overabundance of models is symptomatic of an insufficient scientific understanding of environmental dynamics at the catchment scale, which can be attributed to difficulties in measuring and representing the heterogeneity encountered in natural systems. This commentary advocates using the method of multiple working hypotheses for systematic and stringent testing of model alternatives in hydrology. We discuss how the multiple-hypothesis approach provides the flexibility to formulate alternative representations (hypotheses) describing both individual processes and the overall system. When combined with incisive diagnostics to scrutinize multiple model representations against observed data, this provides hydrologists with a powerful and systematic approach for model development and improvement. Multiple-hypothesis frameworks also support a broader coverage of the model hypothesis space and hence improve the quantification of predictive uncertainty arising from system and component nonidentifiabilities. As part of discussing the advantages and limitations of multiple-hypothesis frameworks, we critically review major contemporary challenges in hydrological hypothesis-testing, including exploiting different types of data to investigate the fidelity of alternative process representations, accounting for model structure ambiguities arising from major uncertainties in environmental data, quantifying regional differences in dominant hydrological processes, and the grander challenge of understanding the self-organization and optimality principles that may functionally explain and describe the heterogeneities evident in most environmental systems. We assess recent progress in these research directions, and how new advances are possible using multiple
Lidar Tracking of Multiple Fluorescent Tracers: Method and Field Test
NASA Technical Reports Server (NTRS)
Eberhard, Wynn L.; Willis, Ron J.
1992-01-01
Past research and applications have demonstrated the advantages and usefulness of lidar detection of a single fluorescent tracer to track air motions. Earlier researchers performed an analytical study that showed good potential for lidar discrimination and tracking of two or three different fluorescent tracers at the same time. The present paper summarizes the multiple fluorescent tracer method, discusses its expected advantages and problems, and describes our field test of this new technique.
Plant aquaporins: membrane channels with multiple integrated functions.
Maurel, Christophe; Verdoucq, Lionel; Luu, Doan-Trung; Santoni, Véronique
2008-01-01
Aquaporins are channel proteins present in the plasma and intracellular membranes of plant cells, where they facilitate the transport of water and/or small neutral solutes (urea, boric acid, silicic acid) or gases (ammonia, carbon dioxide). Recent progress was made in understanding the molecular bases of aquaporin transport selectivity and gating. The present review examines how a wide range of selectivity profiles and regulation properties allows aquaporins to be integrated in numerous functions, throughout plant development, and during adaptations to variable living conditions. Although they play a central role in water relations of roots, leaves, seeds, and flowers, aquaporins have also been linked to plant mineral nutrition and carbon and nitrogen fixation. PMID:18444909
Path integral method for DNA denaturation
NASA Astrophysics Data System (ADS)
Zoli, Marco
2009-04-01
The statistical physics of homogeneous DNA is investigated by the imaginary time path integral formalism. The base pair stretchings are described by an ensemble of paths selected through a macroscopic constraint, the fulfillment of the second law of thermodynamics. The number of paths contributing to the partition function strongly increases around and above a specific temperature Tc∗, whereas the fraction of unbound base pairs grows continuously around and above Tc∗. The latter is identified with the denaturation temperature. Thus, the separation of the two complementary strands appears as a highly cooperative phenomenon displaying a smooth crossover versus T. The thermodynamical properties have been computed in a large temperature range by varying the size of the path ensemble at the lower bound of the range. No significant physical dependence on the system size has been observed. The entropy grows continuously versus T while the specific heat displays a remarkable peak at Tc∗. The location of the peak versus T varies with the stiffness of the anharmonic stacking interaction along the strand. The presented results suggest that denaturation in homogeneous DNA has the features of a second-order phase transition. The method accounts for the cooperative behavior of a very large number of degrees of freedom while the computation time is kept within a reasonable limit.
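The imaginary-time setup can be summarized schematically. The notation below (reduced base-pair mass μ, on-site potential V, inverse temperature β) follows the standard finite-temperature path-integral formalism and is an assumption, not necessarily the paper's exact conventions:

```latex
% Partition function as a sum over base-pair stretching paths x(\tau),
% closed in imaginary time \tau \in [0, \beta]:
Z \sim \oint \mathcal{D}x \, \exp\bigl(-A[x]\bigr),
\qquad
A[x] = \int_0^{\beta} d\tau \,
  \Bigl[ \tfrac{\mu}{2}\,\dot{x}(\tau)^2 + V\bigl(x(\tau)\bigr) \Bigr]
```

The fraction of unbound pairs at a given T is then obtained by counting, within the constrained path ensemble, the paths whose stretching exceeds a fixed threshold.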
Integrating regional conservation priorities for multiple objectives into national policy.
Beger, Maria; McGowan, Jennifer; Treml, Eric A; Green, Alison L; White, Alan T; Wolff, Nicholas H; Klein, Carissa J; Mumby, Peter J; Possingham, Hugh P
2015-01-01
Multinational conservation initiatives that prioritize investment across a region invariably navigate trade-offs among multiple objectives. It seems logical to focus where several objectives can be achieved efficiently, but such multi-objective hotspots may be ecologically inappropriate, or politically inequitable. Here we devise a framework to facilitate a regionally cohesive set of marine-protected areas driven by national preferences and supported by quantitative conservation prioritization analyses, and illustrate it using the Coral Triangle Initiative. We identify areas important for achieving six objectives to address ecosystem representation, threatened fauna, connectivity and climate change. We expose trade-offs between areas that contribute substantially to several objectives and those meeting one or two objectives extremely well. Hence there are two strategies to guide countries choosing to implement regional goals nationally: multi-objective hotspots and complementary sets of single-objective priorities. This novel framework is applicable to any multilateral or global initiative seeking to apply quantitative information in decision making. PMID:26364769
Integrating regional conservation priorities for multiple objectives into national policy
Beger, Maria; McGowan, Jennifer; Treml, Eric A.; Green, Alison L.; White, Alan T.; Wolff, Nicholas H.; Klein, Carissa J.; Mumby, Peter J.; Possingham, Hugh P.
2015-01-01
Multinational conservation initiatives that prioritize investment across a region invariably navigate trade-offs among multiple objectives. It seems logical to focus where several objectives can be achieved efficiently, but such multi-objective hotspots may be ecologically inappropriate, or politically inequitable. Here we devise a framework to facilitate a regionally cohesive set of marine-protected areas driven by national preferences and supported by quantitative conservation prioritization analyses, and illustrate it using the Coral Triangle Initiative. We identify areas important for achieving six objectives to address ecosystem representation, threatened fauna, connectivity and climate change. We expose trade-offs between areas that contribute substantially to several objectives and those meeting one or two objectives extremely well. Hence there are two strategies to guide countries choosing to implement regional goals nationally: multi-objective hotspots and complementary sets of single-objective priorities. This novel framework is applicable to any multilateral or global initiative seeking to apply quantitative information in decision making. PMID:26364769
A method for shipboard treatment of multiple heat casualties.
Sweeney, W B; Krafte-Jacobs, B; Hansen, W; Saldana, M
1992-03-01
A method is presented for the treatment aboard ship of multiple patients afflicted with life-threatening heat illness, using an inflatable life raft cooling system. The potential benefits of this method include: (1) the utilization of readily available materials aboard U.S. Naval vessels; (2) the provision for rapid patient cooling by evaporation while maintaining patient safety and comfort; (3) the ability to treat many patients simultaneously with minimal attendant personnel; and (4) the maintenance of patient access allowing for monitoring and the administration of additional supportive measures. PMID:1603408
Integration over Multiple Timescales in Primary Auditory Cortex
Shamma, Shihab A.
2013-01-01
Speech and other natural vocalizations are characterized by large modulations in their sound envelope. The timing of these modulations contains critical information for discrimination of important features, such as phonemes. We studied how depression of synaptic inputs, a mechanism frequently reported in cortex, can contribute to the encoding of envelope dynamics. Using a nonlinear stimulus-response model that accounted for synaptic depression, we predicted responses of neurons in ferret primary auditory cortex (A1) to stimuli with natural temporal modulations. The depression model consistently performed better than linear and second-order models previously used to characterize A1 neurons, and it produced more biologically plausible fits. To test how synaptic depression can contribute to temporal stimulus integration, we used nonparametric maximum a posteriori decoding to compare the ability of neurons showing and not showing depression to reconstruct the stimulus envelope. Neurons showing evidence for depression reconstructed stimuli over a longer range of latencies. These findings suggest that variation in depression across the cortical population supports a rich code for representing the temporal dynamics of natural sounds. PMID:24305812
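The depression mechanism described above can be sketched with a minimal discrete-time resource-depletion model (a Tsodyks-Markram-style simplification, not the paper's fitted stimulus-response model; all parameter values are invented):

```python
def depressing_response(envelope, release=0.5, tau_rec=10.0, dt=1.0):
    """A synaptic resource pool is depleted by use and recovers slowly,
    so the output emphasises envelope onsets over sustained input."""
    resources = 1.0
    out = []
    for x in envelope:
        response = release * resources * x       # transmitted signal
        out.append(response)
        resources -= response                    # depletion by use
        resources += (1.0 - resources) * dt / tau_rec  # slow recovery
        resources = max(0.0, min(1.0, resources))
    return out

# Step envelope: silence, then a sustained tone.
step = [0.0] * 5 + [1.0] * 20
resp = depressing_response(step)
# The response is largest at the envelope onset and decays toward a lower
# steady state, encoding the timing of the modulation.
```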
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It allows resources to be dynamically and efficiently allocated to any application, and the virtual machines to be tailored to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
Integrating multiple irrigation technologies for overall improvement in irrigation.
Technology Transfer Automated Retrieval System (TEKTRAN)
There are many tools, techniques, and/or schemes to assist producers in irrigation water management and specifically in irrigation scheduling. This paper will highlight several of those but emphasize that several methods should be used simultaneously as an improved or advanced procedure to avoid bia...
Integrating the tools for an individualized prognosis in multiple sclerosis.
Fernández, O
2013-08-15
Clinicians treating multiple sclerosis (MS) patients need biomarkers in order to predict an individualized prognosis for every patient, that is, characteristics that can be measured in an objective manner and that give information about normal or pathological processes, or about the response to a given therapeutic intervention. Pharmacogenetics/genomics in the field of MS can, for now, be considered a promise. In the meanwhile, clinicians should use the information provided by the many clinical epidemiological studies performed to date, which tell us that there are some clinical markers of good prognosis (female sex, young age of onset, optic neuritis or isolated sensory symptoms at onset, long interval between initial and second relapse, no accumulation of disability after five years of disease evolution, normal or near-normal magnetic resonance imaging (MRI) at onset). Some markers in biological samples are considered potential prognostic markers, like IgM and neurofilaments in CSF or antimyelin antibodies and chitinase 3-like 1 in blood (plasma/sera). Baseline MRI lesion number, lesion load and location have been closely associated with a worse evolution, as have MRI measures related to axonal damage (black holes in T1, brain atrophy, grey matter atrophy (GMA) and white matter atrophy (WMA), magnetization transfer measures and intracortical lesions). Functional measures (OCT, evoked potentials) have a potential role in measuring neurodegeneration in MS and could be very useful tools for prognosis. Several mathematical approaches to estimating short-term risk use early clinical and paraclinical biomarkers to predict the evolution of the disease. PMID:23692966
Assessing District Energy Systems Performance Integrated with Multiple Thermal Energy Storages
NASA Astrophysics Data System (ADS)
Rezaie, Behnaz
The goal of this study is to examine various energy resources in district energy (DE) systems and then to develop DE system performance by means of multiple thermal energy storage (TES) applications. This study sheds light on areas not yet investigated in detail. Throughout the research, major components of the heat plant, energy suppliers of the DE systems, and TES characteristics are separately examined; the integration of various configurations of multiple TESs in the DE system is then analysed. In the first part of the study, various sources of energy are compared, in a consistent manner, financially and environmentally. The TES performance is then assessed from various aspects. Then, TES(s) and DE systems with several sources of energy are integrated and investigated as a heat process centre. The most efficient configurations of the multiple TESs integrated with the DE system are investigated. Some of the findings of this study are applied to an actual DE system. The outcomes of this study provide insight for researchers and engineers who work in this field, as well as policy makers and project managers who are decision-makers. The accomplishments of the study are original developments of TESs and DE systems. As an original development, the Enviro-Economic Function, which balances the economic and environmental aspects of energy resource technologies in DE systems, is developed; various configurations of multiple TESs, including series, parallel, and general grid, are developed. The related functions developed are the discharge temperature and energy of the TES, and the energy and exergy efficiencies of the TES. The instantaneous charging and discharging behavior of the TES is also investigated to obtain the charging temperature, the maximum charging temperature, the charging energy flow, maximum heat flow capacity, the discharging temperature, the minimum charging temperature, the discharging energy flow, the maximum heat flow capacity, and performance
Comparison of four methods for aggregating judgments from multiple experts
Booker, J.M.; Picard, R.R.
1991-01-01
This report describes a study that compares four different methods for aggregating expert judgment data given by multiple experts. These experts need not be a random sample of available experts. The experts estimate the same unknown parameter value. Their estimates need not be a representative set of sample values from an underlying distribution whose mean is an unknown parameter, θ. However, it is desired to combine the experts' estimates into a single aggregate estimate that reflects their available knowledge about the unknown parameter. Many different aggregation estimators and methods have been proposed in the literature. However, few have been used, tested, or compared. Four different methods, which have been used or proposed for use in NRC studies, are chosen for this study. The set represents a cross section of the various types of methods. The results of this study do not indicate the use of any one method over another. Methods requiring minimal decision-maker input are sensitive to biases in the experts' responses. For these methods, there is no mechanism to adjust the experts' estimates to account for any known biases in the expert population, such as optimism or pessimism. The results of this study indicate that these methods tend to perform poorly in all but the most ideal cases. Conversely, methods requiring extensive decision-maker inputs are sensitive to misspecification. These methods perform poorly unless complete information is known about all the experts. That is, the decision maker's input parameters must nearly equal the actual values.
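The contrast between minimal-input and decision-maker-weighted aggregation can be sketched with generic estimators (the four NRC methods compared in the report are not specified here; these rules and numbers are purely illustrative):

```python
import statistics

def aggregate(estimates, weights=None, method="mean"):
    """Combine multiple experts' point estimates of the same unknown."""
    if method == "mean":        # equal weight: sensitive to biased experts
        return statistics.fmean(estimates)
    if method == "median":      # robust to a single outlying expert
        return statistics.median(estimates)
    if method == "weighted":    # decision-maker supplied weights:
        total = sum(weights)    # sensitive to misspecified weights
        return sum(w * e for w, e in zip(weights, estimates)) / total
    raise ValueError(method)

experts = [10.0, 12.0, 11.0, 30.0]        # one optimistic outlier
print(aggregate(experts, method="mean"))   # pulled toward the outlier
print(aggregate(experts, method="median")) # resistant to the outlier
```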
Differential operator multiplication method for fractional differential equations
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Ying, Yuping; Lian, Yanping; Lin, Stephen; Yang, Yibo; Wagner, Gregory J.; Liu, Wing Kam
2016-08-01
Fractional derivatives play a very important role in modeling physical phenomena involving long-range correlation effects. However, they raise challenges of computational cost and memory storage requirements when solved using current well developed numerical methods. In this paper, the differential operator multiplication method is proposed to address the issues by considering a reaction-advection-diffusion equation with a fractional derivative in time. The linear fractional differential equation is transformed into an integer order differential equation by the proposed method, which can fundamentally fix the aforementioned issues for select fractional differential equations. In such a transform, special attention should be paid to the initial conditions for the resulting differential equation of higher integer order. Through numerical experiments, we verify the proposed method for both fractional ordinary differential equations and partial differential equations.
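The operator-multiplication idea can be illustrated on a half-order relaxation equation (a hedged sketch, not taken from the paper): multiplying by a conjugate operator raises the fractional order to an integer, as in a difference of squares.

```latex
\[
  \bigl(D_t^{1/2} + 1\bigr)\,u(t) = f(t)
  \quad\xrightarrow{\times\,(D_t^{1/2} - 1)}\quad
  \bigl(D_t - 1\bigr)\,u(t) = \bigl(D_t^{1/2} - 1\bigr) f(t),
\]
```

using $D_t^{1/2} D_t^{1/2} = D_t$, up to the initial-condition terms that, as the abstract cautions, must be carried along into the resulting integer-order equation.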
Integration of multiple research disciplines on the International Space Station
NASA Technical Reports Server (NTRS)
Penley, N. J.; Uri, J.; Sivils, T.; Bartoe, J. D.
2000-01-01
The International Space Station will provide an extremely high-quality, long-duration microgravity environment for the conduct of research. In addition, the ISS offers a platform for performing observations of Earth and Space from a high-inclination orbit, outside of the Earth's atmosphere. This unique environment and observational capability offers the opportunity for advancement in a diverse set of research fields. Many of these disciplines do not relate to one another, and present widely differing approaches to study, as well as different resource and operational requirements. Significant challenges exist to ensure the highest quality research return for each investigation. Requirements from different investigations must be identified, clarified, integrated and communicated to ISS personnel in a consistent manner. Resources such as power, crew time, etc. must be apportioned to allow the conduct of each investigation. Decisions affecting research must be made at the strategic level as well as at a very detailed execution level. The timing of the decisions can range from years before an investigation to real-time operations. The international nature of the Space Station program adds to the complexity. Each participating country must be assured that their interests are represented during the entire planning and operations process. A process for making decisions regarding research planning, operations, and real-time replanning is discussed. This process ensures adequate representation of all research investigators. It provides a means for timely decisions, and it includes a means to ensure that all ISS International Partners have their programmatic interests represented. © 2000 Published by Elsevier Science Ltd. All rights reserved.
Comparison of Multiple Gene Assembly Methods for Metabolic Engineering
NASA Astrophysics Data System (ADS)
Lu, Chenfeng; Mansoorabadi, Karen; Jeffries, Thomas
A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase chain reaction, and a SfiI-based ligation method. SfiI ligation was the only successful approach for assembling large DNA fragments that contained repeated homologous regions. In addition, the SfiI method has been improved over a similar, previously published technique so that it is more flexible and does not require polymerase chain reaction to incorporate adaptors. In the present study, Saccharomyces cerevisiae genes TAL1, TKL1, and PYK1 under control of the 6-phosphogluconate dehydrogenase promoter were successfully ligated together using multiple unique SfiI restriction sites. The desired construct was obtained 65% of the time during vector construction using four-piece ligations. The SfiI method consists of three steps: first, a SfiI linker vector is constructed, whose multiple cloning site is flanked by two three-base linkers matching the neighboring SfiI linkers on SfiI digestion; second, the linkers are attached to the desired genes by cloning them into SfiI linker vectors; third, the genes flanked by the three-base linkers are released by SfiI digestion. In the final step, genes of interest are joined together in a simple one-step ligation.
Field evaluation of personal sampling methods for multiple bioaerosols.
Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine
2015-01-01
Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols. PMID:25799419
NASA Technical Reports Server (NTRS)
Banyukevich, A.; Ziolkovski, K.
1975-01-01
A number of hybrid methods for solving Cauchy problems are described on the basis of an evaluation of advantages of single and multiple-point numerical integration methods. The selection criterion is the principle of minimizing computer time. The methods discussed include the Nordsieck method, the Bulirsch-Stoer extrapolation method, and the method of recursive Taylor-Steffensen power series.
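The core of the Bulirsch-Stoer approach mentioned above, a modified midpoint rule whose error expansion contains only even powers of the step size and therefore extrapolates well, can be sketched as follows (a minimal illustration with a fixed substep count, not the report's adaptive hybrid scheme):

```python
def modified_midpoint(f, t0, y0, H, n):
    """Integrate y' = f(t, y) over one big step H using n midpoint substeps."""
    h = H / n
    z0, z1 = y0, y0 + h * f(t0, y0)
    for i in range(1, n):
        z0, z1 = z1, z0 + 2 * h * f(t0 + i * h, z1)
    return 0.5 * (z0 + z1 + h * f(t0 + H, z1))

def extrapolated_step(f, t0, y0, H):
    """Richardson-extrapolate results for n and 2n substeps: because the
    midpoint rule's error is even in h, the h^2 term cancels exactly."""
    y_n = modified_midpoint(f, t0, y0, H, 4)
    y_2n = modified_midpoint(f, t0, y0, H, 8)
    return y_2n + (y_2n - y_n) / 3.0

# y' = y, y(0) = 1  =>  y(1) = e; extrapolation sharpens both raw results.
approx = extrapolated_step(lambda t, y: y, 0.0, 1.0, 1.0)
```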
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
Enhancing subsurface information from the fusion of multiple geophysical methods
NASA Astrophysics Data System (ADS)
Jafargandomi, A.; Binley, A.
2011-12-01
Characterization of hydrologic systems is a key element in understanding and predicting their behaviour. Geophysical methods especially electrical methods (e.g., electrical resistivity tomography (ERT), induced polarization (IP) and electromagnetic (EM)) are becoming popular for such purpose due to their non-invasive nature, high sensitivity to hydrological parameters and the speed of measurements. However, interrogation of each geophysical method provides only limited information about some of the subsurface parameters. Therefore, in order to achieve a comprehensive picture from the hydrologic system, fusion of multiple geophysical data sets can be beneficial. Although a number of fusion approaches have been proposed in the literature, an aspect that has been generally overlooked is the assessment of information content from each measurement approach. Such an assessment provides useful insight for the design of future surveys. We develop a fusion strategy based on the capability of multiple geophysical methods to provide enough resolution to identify subsurface material parameters and structure. We apply a Bayesian framework to analyse the information in multiple geophysical data sets. In this approach multiple geophysical data sets are fed into a Markov chain Monte Carlo (McMC) inversion algorithm and the information content of the post-inversion result (posterior probability distribution) is quantified. We use Shannon's information measure to quantify the information obtained from the inversion of different combinations of geophysical data sets. In this strategy, information from multiple methods is brought together via introducing joint likelihood function and/or constraining the prior information. We apply the fusion tool to one of the target sites of the EU FP7 project ModelProbe which aims to develop technologies and tools for soil contamination assessment and site characterization. The target site is located close to Trecate (Novara - NW Italy). At this
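The use of Shannon's measure to quantify the information in a posterior distribution can be sketched on synthetic samples (illustrative only; in the study the posteriors come from McMC inversion of combined geophysical data sets):

```python
import math
from collections import Counter

def shannon_entropy(samples, bins=10, lo=0.0, hi=1.0):
    """Shannon entropy (bits) of a histogram of posterior samples; a lower
    entropy means a more peaked posterior, i.e. more information gained
    about the parameter from the data sets that were fused."""
    width = (hi - lo) / bins
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A posterior concentrated near 0.5 is more informative than a flat one.
flat = [i / 100 for i in range(100)]
peaked = [0.45 + 0.001 * i for i in range(100)]
print(shannon_entropy(flat), shannon_entropy(peaked))
```

Comparing this entropy across inversions of different data-set combinations is one way to rank which geophysical methods add the most information.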
Decoding intracranial EEG data with multiple kernel learning method
Schrouff, Jessica; Mourão-Miranda, Janaina; Phillips, Christophe; Parvizi, Josef
2016-01-01
Background: Machine learning models have been successfully applied to neuroimaging data to make predictions about behavioral and cognitive states of interest. While these multivariate methods have greatly advanced the field of neuroimaging, their application to electrophysiological data has been less common, especially in the analysis of human intracranial electroencephalography (iEEG, also known as electrocorticography or ECoG) data, which contains a rich spectrum of signals recorded from a relatively high number of recording sites. New method: In the present work, we introduce a novel approach to determine the contribution of different bandwidths of EEG signal in different recording sites across different experimental conditions using the Multiple Kernel Learning (MKL) method. Comparison with existing method: To validate and compare the usefulness of our approach, we applied this method to an ECoG dataset that was previously analysed and published with univariate methods. Results: Our findings proved the usefulness of the MKL method in detecting changes in the power of various frequency bands during a given task and selecting automatically the most contributory signal in the most contributory site(s) of recording. Conclusions: With a single computation, the contribution of each frequency band in each recording site in the estimated multivariate model can be highlighted, which then allows formulation of hypotheses that can be tested a posteriori with univariate methods if needed. PMID:26692030
An Integrated Approach for Accessing Multiple Datasets through LANCE
NASA Astrophysics Data System (ADS)
Murphy, K. J.; Teague, M.; Conover, H.; Regner, K.; Beaumont, B.; Masuoka, E.; Vollmer, B.; Theobald, M.; Durbin, P.; Michael, K.; Boller, R. A.; Schmaltz, J. E.; Davies, D.; Horricks, K.; Ilavajhala, S.; Thompson, C. K.; Bingham, A.
2011-12-01
The NASA/GSFC Land Atmospheres Near-real time Capability for EOS (LANCE) provides imagery for approximately 40 data products from MODIS, AIRS, AMSR-E and OMI to support the applications community in the study of a variety of phenomena. Thirty-six of these products are available within 2.5 hours of observation at the spacecraft. The data set includes the population density data provided by the EOSDIS Socio-Economic Data and Applications Center (SEDAC). The purpose of this paper is to describe the variety of tools that have been developed by LANCE to support user access to the imagery. The long-standing Rapid Response system has been integrated into LANCE and is a major vehicle for the distribution of the imagery to end users. There are presently approximately 10,000 anonymous users per month accessing this imagery. The products are grouped into 14 applications categories such as Smoke Plumes, Pollution, Fires, Agriculture and the selection of any category will make relevant subsets of the 40 products available as possible overlays in an interactive Web Client utilizing Web Mapping Service (WMS) to support user investigations (http://lance2.modaps.eosdis.nasa.gov/wms/). For example, selecting Severe Storms will include 6 products for MODIS, OMI, AIRS, and AMSR-E plus the SEDAC population density data. The client and WMS were developed using open-source technologies such as OpenLayers and MapServer and provides a uniform, browser-based access to data products. All overlays are downloadable in PNG, JPEG, or GeoTiff form up to 200MB per request. The WMS was beta-tested with the user community and substantial performance improvements were made through the use of such techniques as tile-caching. LANCE established a partnership with Physical Oceanography Distributed Active Archive Center (PO DAAC) to develop an alternative presentation for the 40 data products known as the State of the Earth (SOTE). This provides a Google Earth-based interface to the products grouped in
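A request to a WMS endpoint of this kind can be sketched as below; the base URL is the one quoted above, but the layer name and parameter choices are hypothetical and the live service layout may differ:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=512, height=512):
    """Build a standard WMS 1.1.1 GetMap request for one imagery overlay."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url(
    "http://lance2.modaps.eosdis.nasa.gov/wms/",
    "MODIS_Terra_CorrectedReflectance",  # hypothetical layer name
    (-180, -90, 180, 90),
)
```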
Curran, Patrick J; Hussong, Andrea M; Cai, Li; Huang, Wenjing; Chassin, Laurie; Sher, Kenneth J; Zucker, Robert A
2008-03-01
There are a number of significant challenges researchers encounter when studying development over an extended period of time, including subject attrition, the changing of measurement structures across groups and developmental periods, and the need to invest substantial time and money. Integrative data analysis is an emerging set of methodologies that allows researchers to overcome many of the challenges of single-sample designs through the pooling of data drawn from multiple existing developmental studies. This approach is characterized by a host of advantages, but this also introduces several new complexities that must be addressed prior to broad adoption by developmental researchers. In this article, the authors focus on methods for fitting measurement models and creating scale scores using data drawn from multiple longitudinal studies. The authors present findings from the analysis of repeated measures of internalizing symptomatology that were pooled from three existing developmental studies. The authors describe and demonstrate each step in the analysis and conclude with a discussion of potential limitations and directions for future research. PMID:18331129
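The pooling step at the heart of integrative data analysis can be sketched as follows (study contents and item names are hypothetical; the actual work fits formal measurement models to the pooled items, which this sketch omits):

```python
# Records from three hypothetical studies are merged, keeping only the
# symptom items all studies share, so a common measurement model can then
# be fit to the pooled data.
study1 = [{"study": 1, "sad": 2, "worry": 1, "sleep": 0}]
study2 = [{"study": 2, "sad": 1, "worry": 3}]
study3 = [{"study": 3, "sad": 0, "worry": 0, "appetite": 2}]

def pool(*studies):
    records = [r for s in studies for r in s]
    shared = set.intersection(*(set(r) - {"study"} for r in records))
    return [
        {"study": r["study"], **{k: r[k] for k in sorted(shared)}}
        for r in records
    ]

pooled = pool(study1, study2, study3)
# Every pooled record now carries the same item set ("sad", "worry"),
# with the study label retained so between-study differences can be modeled.
```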
A New Method for Multiple Sperm Cells Tracking
Imani, Yoones; Teyfouri, Niloufar; Ahmadzadeh, Mohammad Reza; Golabbakhsh, Marzieh
2014-01-01
Motion analysis or quality assessment of human sperm cells is of great importance for clinical applications in male infertility. Sperm tracking is quite complex due to cell collision, occlusion and missed detection. The aim of this study is simultaneous tracking of multiple human sperm cells. In the first step of this research, the frame difference algorithm is used for background subtraction. Selecting an appropriate threshold value is a limitation, since the output accuracy is strongly dependent on the selected threshold value. To eliminate this dependency, we propose an improved non-linear diffusion filtering in the time domain. Non-linear diffusion filtering is a smoothing and noise-removing approach that can preserve edges in images. Many sperm cells moving with different speeds in different directions eventually coincide. For multiple tracking over time, an optimal matching strategy is introduced that is based on the optimization of a new cost function. A Hungarian search method is utilized to obtain the best matching for all possible candidates. The results show nearly 3.24% frame-based error on a dataset of videos that contain more than 1 and fewer than 10 sperm cells. Hence the accuracy rate was 96.76%. These results indicate the validity of the proposed algorithm for multiple sperm tracking. PMID:24696807
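The frame-to-frame matching step can be sketched with a brute-force optimal assignment (the paper uses a Hungarian search, which scales to larger cell counts, and a more elaborate cost function; positions here are invented):

```python
from itertools import permutations

def best_matching(prev, curr):
    """Exhaustively minimise total squared displacement between cell
    positions in consecutive frames; a stand-in for Hungarian matching."""
    n = len(curr)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        cost = sum(
            (prev[i][0] - curr[j][0]) ** 2 + (prev[i][1] - curr[j][1]) ** 2
            for i, j in enumerate(perm)
        )
        if cost < best_cost:
            best, best_cost = perm, cost
    return best  # best[i] = index in `curr` assigned to track i

tracks = [(10, 10), (50, 50), (90, 10)]      # positions in frame t
detections = [(52, 48), (11, 12), (88, 11)]  # positions in frame t+1
matching = best_matching(tracks, detections)
```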
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions, and, ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks ) deals with methodology for solving multiple-objective Optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The
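The multiple-objective trade-off at the heart of robust design can be illustrated with a simple Pareto-dominance filter over hypothetical (on-design, off-design) efficiency pairs; the lecture's differential-evolution and neural-network machinery is not reproduced here:

```python
def pareto_front(designs):
    """Keep designs not dominated on any objective (both maximised here)."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o != d)]

# (on-design efficiency, off-design/robust efficiency) for candidate shapes
candidates = [(0.95, 0.70), (0.90, 0.85), (0.85, 0.80), (0.80, 0.90)]
front = pareto_front(candidates)
# The front exposes the trade-off: the best on-design shape is the least
# robust, and vice versa; dominated designs are discarded outright.
```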
Power-efficient method for IM-DD optical transmission of multiple OFDM signals.
Effenberger, Frank; Liu, Xiang
2015-05-18
We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important. PMID:26074605
Solution of elastoplastic torsion problem by boundary integral method
NASA Technical Reports Server (NTRS)
Mendelson, A.
1975-01-01
The boundary integral method was applied to the elastoplastic analysis of the torsion of prismatic bars, and the results are compared with those obtained by the finite difference method. Although fewer unknowns were used, very good accuracy was obtained with the boundary integral method. Both simply and multiply connected bodies can be handled with equal ease.
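For context, the elastic part of the torsion problem reduces to a Poisson equation for the Prandtl stress function, which the sketch below solves by finite differences rather than the abstract's boundary integral (or finite difference) formulations; the square cross-section and unit material constants are assumptions.

```python
import numpy as np

# Prandtl stress function for elastic torsion of a square bar:
# solve  d2(phi)/dx2 + d2(phi)/dy2 = -2*G*theta  with phi = 0 on the boundary.
G, theta, a, n = 1.0, 1.0, 1.0, 41           # shear modulus, twist rate, side, grid
h = a / (n - 1)
phi = np.zeros((n, n))
for _ in range(5000):                        # Jacobi iteration to convergence
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:] +
                              2.0 * G * theta * h * h)
# The torque is twice the integral of phi over the cross-section.
torque = 2.0 * phi.sum() * h * h
```

The computed torque can be checked against the classical result T = 0.1406 G θ a⁴ for a square section, which is the kind of benchmark accuracy comparison the abstract describes.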
Methods for radiation detection and characterization using a multiple detector probe
Akers, Douglas William; Roybal, Lyle Gene
2014-11-04
Apparatuses, methods, and systems relating to radiological characterization of environments are disclosed. Multi-detector probes with a plurality of detectors in a common housing may be used to substantially concurrently detect a plurality of different radiation activities and types. Multiple multi-detector probes may be used in a down-hole environment to substantially concurrently detect radioactive activity and contents of a buried waste container. Software may process, analyze, and integrate the data from the different multi-detector probes and the different detector types therein to provide source location and integrated analysis as to the source types and activity in the measured environment. Further, the integrated data may be used to compensate for differential density effects and the effects of radiation shielding materials within the volume being measured.
Integrative methods for studying cardiac energetics.
Diolez, Philippe; Deschodt-Arsac, Véronique; Calmettes, Guillaume; Gouspillou, Gilles; Arsac, Laurent; Dos Santos, Pierre; Jais, Pierre; Haissaguerre, Michel
2015-01-01
The more recent studies of human pathologies have essentially revealed the complexity of the interactions involved at the different levels of integration in organ physiology. Integrated organ thus reveals functional properties not predictable by underlying molecular events. It is therefore obvious that current fine molecular analyses of pathologies should be fruitfully combined with integrative approaches of whole organ function. It follows an important issue in the comprehension of the link between molecular events in pathologies, and whole organ function/dysfunction is the development of new experimental strategies aimed at the study of the integrated organ physiology. Cardiovascular diseases are a good example as heart submitted to ischemic conditions has to cope both with a decreased supply of nutrients and oxygen, and the necessary increased activity required to sustain whole body-including the heart itself-oxygenation.By combining the principles of control analysis with noninvasive (31)P NMR measurement of the energetic intermediates and simultaneous measurement of heart contractile activity, we developed MoCA (for Modular Control and Regulation Analysis), an integrative approach designed to study in situ control and regulation of cardiac energetics during contraction in intact beating perfused isolated heart (Diolez et al., Am J Physiol Regul Integr Comp Physiol 293(1):R13-R19, 2007). Because it gives real access to integrated organ function, MoCA brings out a new type of information-the "elasticities," referring to internal responses to metabolic changes-that may be a key to the understanding of the processes involved in pathologies. MoCA can potentially be used not only to detect the origin of the defects associated with the pathology, but also to provide the quantitative description of the routes by which these defects-or also drugs-modulate global heart function, therefore opening therapeutic perspectives. This review presents selected examples of the
Efficient implicit integration for finite-strain viscoplasticity with a nested multiplicative split
NASA Astrophysics Data System (ADS)
Shutov, A. V.
2016-07-01
An efficient and reliable stress computation algorithm is presented, which is based on implicit integration of the local evolution equations of multiplicative finite-strain plasticity/viscoplasticity. The algorithm is illustrated by an example involving a combined nonlinear isotropic/kinematic hardening; numerous backstress tensors are employed for a better description of the material behavior. The considered material model exhibits the so-called weak invariance under arbitrary isochoric changes of the reference configuration, and the presented algorithm retains this useful property. Even more: the weak invariance serves as a guide in constructing this algorithm. The constraint of inelastic incompressibility is exactly preserved as well. The proposed method is first-order accurate. Concerning the accuracy of the stress computation, the new algorithm is comparable to the Euler Backward method with a subsequent correction of incompressibility (EBMSC) and the classical exponential method (EM). Regarding the computational efficiency, the new algorithm is superior to the EBMSC and EM. Some accuracy tests are presented using parameters of the aluminum alloy 5754-O and the 42CrMo4 steel. FEM solutions of two boundary value problems using MSC.MARC are presented to show the correctness of the numerical implementation.
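In one dimension, the flavor of an implicit stress update can be sketched with a backward-Euler return mapping for linear Perzyna viscoplasticity. This is a generic textbook scheme under assumed material constants, not the paper's finite-strain multiplicative algorithm with nested split and kinematic hardening.

```python
import numpy as np

def return_map(eps, eps_p, E=200e3, sy=250.0, eta=1e4, dt=1e-3):
    """One backward-Euler (return-mapping) step for 1D Perzyna viscoplasticity."""
    sig_tr = E * (eps - eps_p)               # elastic trial stress
    f_tr = abs(sig_tr) - sy                  # trial yield function
    if f_tr <= 0.0:
        return sig_tr, eps_p                 # elastic step, no update
    dgamma = f_tr / (E + eta / dt)           # closed-form implicit increment
    sgn = np.sign(sig_tr)
    return sig_tr - E * dgamma * sgn, eps_p + dgamma * sgn

# Drive with a monotonic strain ramp; constants are illustrative only.
eps_p, history = 0.0, []
for eps in np.linspace(0.0, 0.01, 200):
    sig, eps_p = return_map(eps, eps_p)
    history.append(sig)
```

After yield, the stress settles above the yield limit by a rate-dependent overstress, the characteristic viscoplastic response an implicit integrator must capture stably.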
Li, Lishuang; Zhang, Panpan; Zheng, Tianfu; Zhang, Hongying; Jiang, Zhenchao; Huang, Degen
2014-01-01
Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have largely been ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining the feature-based kernel, tree kernel and semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) with a dynamic extension strategy to capture richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, showing that our method outperforms most state-of-the-art systems by integrating semantic information. PMID:24622773
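The core multiple-kernel idea, combining feature-based, tree and semantic kernels, can be sketched as a convex combination of Gram matrices; the toy feature vectors, base kernels and weights below are assumptions, and a real system would feed the combined matrix to an SVM as a precomputed kernel.

```python
import numpy as np

def linear_kernel(X, Y):
    return X @ Y.T

def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combine_kernels(kernels, weights):
    """Convex combination of base kernel matrices (weights normalized to sum to 1)."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return sum(wi * K for wi, K in zip(w, kernels))

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))                  # 6 toy instances, 3 features each
K = combine_kernels([linear_kernel(X, X), rbf_kernel(X, X)], [0.4, 0.6])
eigvals = np.linalg.eigvalsh(K)              # PSD check: no negative eigenvalues
```

A convex combination of valid kernels is itself a valid kernel (symmetric, positive semidefinite), which is what makes this style of kernel integration sound for SVM training.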
Multiple ant colony algorithm method for selecting tag SNPs.
Liao, Bo; Li, Xiong; Zhu, Wen; Li, Renfa; Wang, Shulin
2012-10-01
The search for associations between complex diseases and single nucleotide polymorphisms (SNPs) or haplotypes has recently received great attention. Finding a set of tag SNPs for haplotyping in a large number of samples is an important step toward reducing the cost of association studies. It is therefore essential to select tag SNPs with more efficient algorithms. In this paper, we model the problem of selecting tag SNPs as a MINIMUM TEST SET problem and use a multiple ant colony algorithm (MACA) to search for a smaller set of tag SNPs for haplotyping. Experimental results on various datasets show that the running time of our method is less than that of GTagger and MLR. MACA can find the most representative SNPs for haplotyping, so it is more stable and selects fewer tag SNPs than other evolutionary methods (such as GTagger and NSGA-II). Our software is available upon request to the corresponding author. PMID:22480582
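The MINIMUM TEST SET formulation can be made concrete with a greedy baseline of the kind such evolutionary methods are compared against: pick SNPs until every pair of haplotypes is distinguished. The toy haplotype matrix is an assumption for illustration.

```python
from itertools import combinations

# Toy haplotype matrix: rows are haplotypes, columns are SNP alleles (0/1).
haps = [
    (0, 0, 1, 0),
    (0, 1, 1, 0),
    (1, 0, 0, 1),
    (1, 1, 0, 0),
]

def greedy_tag_snps(haps):
    """Greedy MINIMUM TEST SET: add SNPs until every haplotype pair differs."""
    n_snps = len(haps[0])
    pairs = set(combinations(range(len(haps)), 2))
    chosen = []
    while pairs:
        # Pick the SNP that distinguishes the most still-ambiguous pairs.
        best = max(range(n_snps),
                   key=lambda s: sum(haps[i][s] != haps[j][s] for i, j in pairs))
        chosen.append(best)
        pairs = {(i, j) for i, j in pairs if haps[i][best] == haps[j][best]}
    return chosen

tags = greedy_tag_snps(haps)
```

Greedy set cover gives no optimality guarantee, which is the opening an ant-colony search like MACA exploits to find smaller, more stable tag sets.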
Multiple-time-stepping generalized hybrid Monte Carlo methods
NASA Astrophysics Data System (ADS)
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved superior in sampling efficiency to its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multiple-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications not only lead to better performance of GSHMC itself but also allow it to outperform the best-performing methods that use similar force-splitting schemes. In addition, we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using the Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by generalized hybrid Monte Carlo improves the stability of MTS and allows larger step sizes in the simulation of complex systems.
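A minimal sketch of the multiple-time-stepping idea (without the Monte Carlo acceptance layer) is the impulse/r-RESPA splitting: the slow force kicks the momentum at the outer step, while the fast force is integrated with smaller inner steps. The stiff/soft spring constants and step sizes below are assumptions.

```python
import numpy as np

def respa_step(x, v, dt, n_inner, f_slow, f_fast, m=1.0):
    """One impulse (r-RESPA) step: slow half-kick, fast sub-steps, slow half-kick."""
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_inner
    for _ in range(n_inner):                 # velocity Verlet on the fast force
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m
    return x, v

k_fast, k_slow = 100.0, 1.0                  # illustrative stiff/soft springs
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x
x, v = 1.0, 0.0
energy0 = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
for _ in range(2000):
    x, v = respa_step(x, v, dt=0.05, n_inner=10, f_slow=f_slow, f_fast=f_fast)
energy = 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x
```

Because the splitting is symplectic, the energy error stays bounded rather than drifting; in the hybrid Monte Carlo setting this bounded error is exactly what keeps acceptance rates high.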
Integrated navigation method based on inertial navigation system and Lidar
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi
2016-04-01
An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment was considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on this analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of a traditional integrated navigation method and the DR method, the proposed integrated navigation method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment error are effectively compensated by the proposed method, validating the new integrated navigation method.
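The basic INS/Lidar fusion loop can be sketched with a one-dimensional Kalman filter in which the inertial prediction propagates the state and Lidar supplies position fixes. The motion model and noise levels are assumptions, and the paper's scale-factor and misalignment error states are omitted from this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.1, 200
# State [position, velocity]; INS-style prediction, Lidar position measurements.
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity transition
H = np.array([[1.0, 0.0]])                   # Lidar observes position only
Q = np.diag([1e-4, 1e-3])                    # assumed process noise (INS drift)
R = np.array([[0.04]])                       # assumed Lidar variance (sigma = 0.2)
x_true = np.array([0.0, 1.0])
x, P = np.array([0.0, 0.0]), np.eye(2)
for _ in range(steps):
    x_true = F @ x_true
    x, P = F @ x, F @ P @ F.T + Q            # prediction (INS propagation)
    z = H @ x_true + rng.normal(0.0, 0.2, 1) # noisy Lidar fix
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)                  # measurement update
    P = (np.eye(2) - K @ H) @ P

position_error = abs(x[0] - x_true[0])
velocity_error = abs(x[1] - x_true[1])
```

In the paper's formulation the error-state vector is enlarged with IMU scale-factor and misalignment terms, but the predict/update cycle has the same structure.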
Liu, Zhen; Li, Xiaojing; Li, Fengjiao; Zhang, Guangjun
2015-01-12
A single vision sensor cannot measure an entire object because of its limited field of view. Meanwhile, multiple rigidly fixed vision sensors for the dynamic vision measurement of three-dimensional (3D) surface profilometry are complex and sensitive to strong environmental vibrations. To overcome these problems, a novel flexible dynamic measurement method for 3D surface profilometry based on multiple vision sensors is presented in this paper. A raster binocular stereo vision sensor is combined with a wide-field camera to produce a 3D optical probe. Multiple 3D optical probes are arranged around the object being measured, and several planar targets are set up. These planar targets function as mediators to integrate the local 3D data measured by the raster binocular stereo vision sensors into a global coordinate system. The proposed method is not sensitive to strong environmental vibrations, and the positions of the 3D optical probes need not be rigidly fixed during the measurement. The validity of the proposed method is verified in a physical experiment with two 3D optical probes. When the measuring range of the raster binocular stereo vision sensor is about 0.5 m × 0.38 m × 0.4 m and the size of the measured object is about 0.7 m, the accuracy of the proposed method reaches 0.12 mm. The effectiveness of the proposed method in dynamic measurement is further confirmed by measuring rotating fan blades. PMID:25835684
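Integrating each probe's local 3D data into a common frame via shared planar targets amounts to estimating a rigid transform from corresponding points, for example with the Kabsch/SVD method sketched here; the synthetic target points and transform are assumptions.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic shared planar-target points seen in a probe's local frame.
rng = np.random.default_rng(2)
local = rng.normal(size=(8, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
global_pts = local @ R_true.T + t_true       # same points in the global frame
R_est, t_est = kabsch(local, global_pts)
residual = np.abs(local @ R_est.T + t_est - global_pts).max()
```

Because the transform is re-estimated from the targets at measurement time, the probes need not stay rigidly fixed, which is the flexibility the abstract emphasizes.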
The Effect of Sensory Integration Treatment on Children with Multiple Disabilities.
ERIC Educational Resources Information Center
Din, Feng S.; Lodato, Donna M.
Six children with multiple disabilities (ages 5 to 8) participated in this evaluation of the effect of sensory integration treatment on sensorimotor function and academic learning. The children had cognitive abilities ranging from sub-average to significantly sub-average, three were non-ambulatory, one had severe behavioral problems, and each…
Multiplicity and Self-Identity: Trauma and Integration in Shirley Mason's Art
ERIC Educational Resources Information Center
Thompson, Geoffrey
2011-01-01
This viewpoint appeared in its original form as the catalogue essay that accompanied the exhibition "Multiplicity and Self-Identity: Trauma and Integration in Shirley Mason's Art," curated by the author for Gallery 2110, Sacramento, CA, and the 2010 Annual Conference of the American Art Therapy Association. The exhibition featured 17 artworks by…
Multiple proviral integration events after virological synapse-mediated HIV-1 spread
Russell, Rebecca A.; Martin, Nicola; Mitar, Ivonne; Jones, Emma; Sattentau, Quentin J.
2013-08-15
HIV-1 can move directly between T cells via virological synapses (VS). Although aspects of the molecular and cellular mechanisms underlying this mode of spread have been elucidated, the outcomes for infection of the target cell remain incompletely understood. We set out to determine whether HIV-1 transfer via VS results in productive, high-multiplicity HIV-1 infection. We found that HIV-1 cell-to-cell spread resulted in nuclear import of multiple proviruses into target cells as seen by fluorescence in-situ hybridization. Proviral integration into the target cell genome was significantly higher than that seen in a cell-free infection system, and consequent de novo viral DNA and RNA production in the target cell detected by quantitative PCR increased over time. Our data show efficient proviral integration across VS, implying the probability of multiple integration events in target cells that drive productive T cell infection. - Highlights: • Cell-to-cell HIV-1 infection delivers multiple vRNA copies to the target cell. • Cell-to-cell infection results in productive infection of the target cell. • Cell-to-cell transmission is more efficient than cell-free HIV-1 infection. • Suggests a mechanism for recombination in cells infected with multiple viral genomes.
Integration of Multiple Genomic and Phenotype Data to Infer Novel miRNA-Disease Associations.
Shi, Hongbo; Zhang, Guangde; Zhou, Meng; Cheng, Liang; Yang, Haixiu; Wang, Jing; Sun, Jie; Wang, Zhenzhen
2016-01-01
MicroRNAs (miRNAs) play an important role in the development and progression of human diseases. The identification of disease-associated miRNAs will be helpful for understanding the molecular mechanisms of diseases at the post-transcriptional level. Based on different types of genomic data sources, computational methods for miRNA-disease association prediction have been proposed. However, individual sources of genomic data tend to be incomplete and noisy; therefore, the integration of various types of genomic data for inferring reliable miRNA-disease associations is urgently needed. In this study, we present a computational framework, CHNmiRD, for identifying miRNA-disease associations by integrating multiple genomic and phenotype data, including protein-protein interaction data, gene ontology data, experimentally verified miRNA-target relationships, disease phenotype information and known miRNA-disease connections. The performance of CHNmiRD was evaluated by experimentally verified miRNA-disease associations, which achieved an area under the ROC curve (AUC) of 0.834 for 5-fold cross-validation. In particular, CHNmiRD displayed excellent performance for diseases without any known related miRNAs. The results of case studies for three human diseases (glioblastoma, myocardial infarction and type 1 diabetes) showed that all of the top 10 ranked miRNAs having no known associations with these three diseases in existing miRNA-disease databases were directly or indirectly confirmed by our latest literature mining. All these results demonstrated the reliability and efficiency of CHNmiRD, and it is anticipated that CHNmiRD will serve as a powerful bioinformatics method for mining novel disease-related miRNAs and providing a new perspective into molecular mechanisms underlying human diseases at the post-transcriptional level. CHNmiRD is freely available at http://www.bio-bigdata.com/CHNmiRD. PMID:26849207
Speicher, Nora K.; Pfeifer, Nico
2015-01-01
Motivation: Despite ongoing cancer research, available therapies are still limited in quantity and effectiveness, and making treatment decisions for individual patients remains a hard problem. Established subtypes, which help guide these decisions, are mainly based on individual data types. However, the analysis of multidimensional patient data involving the measurements of various molecular features could reveal intrinsic characteristics of the tumor. Large-scale projects accumulate this kind of data for various cancer types, but we still lack the computational methods to reliably integrate this information in a meaningful manner. Therefore, we apply and extend current multiple kernel learning for dimensionality reduction approaches. On the one hand, we add a regularization term to avoid overfitting during the optimization procedure, and on the other hand, we show that one can even use several kernels per data type and thereby alleviate the user from having to choose the best kernel functions and kernel parameters for each data type beforehand. Results: We have identified biologically meaningful subgroups for five different cancer types. Survival analysis has revealed significant differences between the survival times of the identified subtypes, with P values comparable or even better than state-of-the-art methods. Moreover, our resulting subtypes reflect combined patterns from the different data sources, and we demonstrate that input kernel matrices with only little information have less impact on the integrated kernel matrix. Our subtypes show different responses to specific therapies, which could eventually assist in treatment decision making. Availability and implementation: An executable is available upon request. Contact: nora@mpi-inf.mpg.de or npfeifer@mpi-inf.mpg.de PMID:26072491
Modified principal component analysis: an integration of multiple similarity subspace models.
Fan, Zizhu; Xu, Yong; Zuo, Wangmeng; Yang, Jian; Tang, Jinhui; Lai, Zhihui; Zhang, David
2014-08-01
We modify the conventional principal component analysis (PCA) and propose a novel subspace learning framework, modified PCA (MPCA), using multiple similarity measurements. MPCA computes three similarity matrices exploiting the similarity measurements: 1) mutual information; 2) angle information; and 3) Gaussian kernel similarity. We employ the eigenvectors of the similarity matrices to produce new subspaces, referred to as similarity subspaces. A new integrated similarity subspace is then generated using a novel feature selection approach. This approach constructs a vector set, termed a weak machine cell (WMC), which contains an appropriate number of the eigenvectors spanning the similarity subspaces. Combining the wrapper method and the forward selection scheme, MPCA selects one WMC at a time that has a powerful discriminative capability to classify samples. MPCA is very suitable for application scenarios in which the number of training samples is less than the data dimensionality. MPCA outperforms other state-of-the-art PCA-based methods in terms of both classification accuracy and clustering results. In addition, MPCA can be applied to face image reconstruction, and it can use other types of similarity measurements. Extensive experiments on many popular real-world data sets, such as face databases, show that MPCA achieves desirable classification results and has a powerful capability to represent data. PMID:25050950
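The similarity-subspace construction can be sketched as: build one similarity matrix per measurement and keep its leading eigenvectors. The data, kernel width and subspace dimension below are assumptions, and the WMC feature-selection stage of MPCA is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))                 # 20 samples, 5 features

def angle_similarity(X):
    """Cosine-of-angle similarity between samples."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def gaussian_similarity(X, sigma=2.0):
    """Gaussian-kernel similarity between samples."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def similarity_subspace(S, k):
    """Top-k eigenvectors of a symmetric similarity matrix span one subspace."""
    vals, vecs = np.linalg.eigh(S)
    return vecs[:, np.argsort(vals)[::-1][:k]]

U_angle = similarity_subspace(angle_similarity(X), 3)
U_gauss = similarity_subspace(gaussian_similarity(X), 3)
```

Each measurement thus yields its own orthonormal basis; MPCA's contribution is the subsequent wrapper-style selection of eigenvectors from these bases into one integrated subspace.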
Online Guidance Law of Missile Using Multiple Design Point Method
NASA Astrophysics Data System (ADS)
Yamaoka, Seiji; Ueno, Seiya
This paper deals with the design procedure of an online guidance law for future missiles that are required to have agile maneuverability. For this purpose, the authors propose mounting high-power side-thrusters on the missile. The guidance law for such missiles is discussed from the viewpoint of optimal control theory in this paper. A minimum-time problem is solved for the approximated system. From the necessary conditions of the optimal solution, it is derived that bang-bang control is the optimal input. Feedback guidance without iterative calculation is useful for actual systems. The multiple design point method is applied to design the feedback gains and feedforward inputs of the guidance law. The numerical results show the good performance of the proposed guidance law.
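For a double-integrator approximation, the bang-bang minimum-time solution can be checked directly: full thrust to the halfway point, then full reverse thrust, with the switching time known in closed form. The thrust limit and distance below are assumptions, not the paper's missile model.

```python
import numpy as np

# Minimum-time rest-to-rest transfer of a double integrator with |u| <= u_max:
# bang-bang control switches sign once, at t_switch = sqrt(distance / u_max).
u_max, distance = 2.0, 10.0
t_switch = np.sqrt(distance / u_max)         # analytic switching time
dt = 1e-4
x, v, t = 0.0, 0.0, 0.0
while t < 2.0 * t_switch:
    u = u_max if t < t_switch else -u_max    # bang-bang input
    v += u * dt                              # semi-implicit Euler integration
    x += v * dt
    t += dt
```

The simulated trajectory arrives at the target distance with near-zero velocity at t = 2·t_switch, confirming the single-switch bang-bang structure that the necessary conditions predict.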
System and method for inventorying multiple remote objects
Carrender, Curtis L.; Gilbert, Ronald W.
2007-10-23
A system and method of inventorying multiple objects utilizing a multi-level or a chained radio frequency identification system. The system includes a master tag and a plurality of upper level tags and lower level tags associated with respective objects. The upper and lower level tags communicate with each other and the master tag so that reading of the master tag reveals the presence and absence of upper and lower level tags. In the chained RF system, the upper and lower level tags communicate locally with each other in a manner so that more remote tags that are out of range of some of the upper and lower level tags have their information relayed through adjacent tags to the master tag and thence to a controller.
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology, due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, estimate the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, explore the influence of landscape discretization and parameterization from multiple datasets and user decisions. Third, employ several numerical solvers for the integration of the governing ordinary differential equations to study the effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
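The third question, the effect of the numerical solver, can be illustrated on a linear-reservoir ODE, dS/dt = -k·S, where a deliberately large step makes forward Euler produce unphysical negative storage while backward Euler stays positive. The constants are assumptions chosen to expose the difference.

```python
import numpy as np

# Linear reservoir dS/dt = -k*S: explicit vs. implicit Euler at a coarse step.
k, S0, dt, T = 2.0, 100.0, 1.0, 5.0
n = int(T / dt)
S_exp, S_imp = S0, S0
for _ in range(n):
    S_exp = S_exp + dt * (-k * S_exp)        # forward Euler: factor (1 - k*dt)
    S_imp = S_imp / (1.0 + k * dt)           # backward Euler: unconditionally stable
S_exact = S0 * np.exp(-k * T)
```

With k·dt = 2, the explicit update multiplies storage by -1 each step and oscillates to a negative value, while the implicit update decays monotonically toward the exact solution; this is the stability/cost trade-off the study weighs across solvers.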
NASA Astrophysics Data System (ADS)
McGillivary, P. A.; Borges de Sousa, J.; Martins, R.; Rajan, K.
2012-12-01
Autonomous platforms are increasingly used as components of Integrated Ocean Observing Systems and oceanographic research cruises. Systems deployed can include gliders or propeller-driven autonomous underwater vessels (AUVs), autonomous surface vessels (ASVs), and unmanned aircraft systems (UAS). Prior field campaigns have demonstrated successful communication, sensor data fusion and visualization for studies using gliders and AUVs. However, additional requirements exist for incorporating ASVs and UASs into ship operations. Optimally integrating these systems into research vessel data management and operational planning systems involves addressing three key issues: real-time field data availability, platform coordination, and data archiving for later analysis. A fleet of AUVs, ASVs and UAS deployed from a research vessel is best operated as a system integrated with the ship, provided communications among them can be sustained. For this purpose, Disruption Tolerant Networking (DTN) software protocols for operation in communication-challenged environments help ensure reliable high-bandwidth communications. Additionally, system components need considerable onboard autonomy, namely adaptive sampling capabilities using their own onboard sensor data stream analysis. We discuss the Oceanographic Decision Support System (ODSS) software currently used for situational awareness and planning onshore; in the near future, event detection and response will be coordinated among multiple vehicles. Results from recent field studies from oceanographic research vessels using AUVs, ASVs and UAS, including the Rapid Environmental Picture (REP-12) cruise, are presented describing methods and results for use of multi-vehicle communication and deliberative control networks, adaptive sampling with single and multiple platforms, issues relating to data management and archiving, and finally challenges that remain in addressing these technological issues. Significantly, the
NASA Astrophysics Data System (ADS)
Ren, L.; Liu, Q.; Hjörleifsdóttir, V.
2010-12-01
We present a multiple moment-tensor solution of the Dec 26, 2004 Sumatra earthquake based upon adjoint methods. An objective function Φ(m), where m is the multiple-source model, measures the goodness of waveform fit between data and synthetics. The Fréchet derivatives of Φ, of the form δΦ = ∫∫ I(ε†)ij(x, T-t) δṁij(x, t) dV dt, where δṁij is the perturbation to the source model (moment-rate tensor) and I(ε†)ij(x, T-t) denotes the time-integrated adjoint strain tensor, are calculated based upon adjoint methods and spectral-element simulations (SPECFEM3D_GLOBE) in the 3D global earth model S362ANI. Our initial source model is obtained independently by monitoring the time-integrated adjoint strain tensors around the presumed source region. We then utilize the Φ and δΦ calculations in a conjugate-gradient method to iteratively invert for the source model. Our final inversion results show both similarities to and differences from previous source-inversion results based on 1D earth models.
A survey of payload integration methods
NASA Technical Reports Server (NTRS)
Engels, R. C.; Craig, R. R., Jr.; Harcrow, H. W.
1984-01-01
Several full-scale and short-cut methods for analyzing a booster/payload system are presented. Two full-scale techniques are considered: (1) a technique that uses a restrained payload together with a free-booster model, the latter augmented with residual mass and stiffness corrections, and (2) a technique that uses a restrained payload and booster model. Both techniques determine the coupled system modes, which requires the solution of a system eigenvalue problem; the loads are then usually determined via an acceleration approach. A brief description is given of a number of short-cut methods of special interest for Shuttle payload design: structural modification, base drive, and interface impedance methods. Directions for further research and development are suggested.
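The system eigenvalue problem mentioned above can be illustrated with a toy two-degree-of-freedom booster/payload model (the masses and stiffnesses are illustrative, not from any real vehicle):

```python
import numpy as np
from scipy.linalg import eigh

# Minimal sketch (not the full-scale substructuring methods above):
# a lumped booster mass coupled to a payload mass through an
# interface spring. All values are hypothetical.
m_booster, m_payload = 1000.0, 100.0        # kg
k_booster, k_interface = 1e6, 2e5           # N/m

M = np.diag([m_booster, m_payload])
K = np.array([[k_booster + k_interface, -k_interface],
              [-k_interface,            k_interface]])

# Generalized eigenvalue problem K v = w^2 M v -> coupled system modes
w2, modes = eigh(K, M)
freqs_hz = np.sqrt(w2) / (2 * np.pi)
```

The eigenvalues give the coupled-system natural frequencies; modal loads would then follow from the eigenvectors.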
Monitoring gray wolf populations using multiple survey methods
Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.
2013-01-01
The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates, and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach, including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any one survey method.
Treatment of domain integrals in boundary element methods
Nintcheu Fata, Sylvain
2012-01-01
A systematic and rigorous technique to calculate domain integrals without a volume-fitted mesh has been developed and validated in the context of a boundary element approximation. In the proposed approach, a domain integral involving a continuous or weakly-singular integrand is first converted into a surface integral by means of straight-path integrals that intersect the underlying domain. Then, the resulting surface integral is carried out either via analytic integration over boundary elements or by use of standard quadrature rules. This domain-to-boundary integral transformation is derived from an extension of the fundamental theorem of calculus to higher dimension, and the divergence theorem. In establishing the method, it is shown that the higher-dimensional version of the first fundamental theorem of calculus corresponds to the well-known Poincaré lemma. The proposed technique can be employed to evaluate integrals defined over simply- or multiply-connected domains with Lipschitz boundaries which are embedded in a Euclidean space of arbitrary but finite dimension. Combined with the singular treatment of surface integrals that is widely available in the literature, this approach can also be utilized to effectively deal with boundary-value problems involving non-homogeneous source terms by way of a collocation or a Galerkin boundary integral equation method using only the prescribed surface discretization. Sample problems associated with the three-dimensional Poisson equation and featuring the Newton potential are successfully solved by a constant element collocation method to validate this study.
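The underlying identity can be sketched as follows (notation mine, not the paper's specific construction): given an integrand f on a domain Ω ⊂ R^d, a straight-path integral from a fixed point x₀ builds a vector field whose divergence is f, and the divergence theorem then moves the integral to the boundary.

```latex
F(x) = (x - x_0)\int_0^1 s^{\,d-1}\, f\bigl(x_0 + s\,(x - x_0)\bigr)\,ds,
\qquad \nabla\cdot F = f,
\]
\[
\int_\Omega f \, dV \;=\; \int_{\partial\Omega} F\cdot n \, dS .
```

Differentiating under the integral sign shows ∇·F = ∫₀¹ (d/ds)[sᵈ f(x₀ + s(x−x₀))] ds = f(x), which is the Poincaré-lemma-type step the abstract refers to.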
Method for distinguishing multiple targets using time-reversal acoustics
Berryman, James G.
2004-06-29
A method for distinguishing multiple targets using time-reversal acoustics. Time-reversal acoustics uses an iterative process to determine the optimum signal for locating a strongly reflecting target in a cluttered environment. An acoustic array sends a signal into a medium and then receives the returned/reflected signal. This returned/reflected signal is time-reversed and sent back into the medium, again and again, until the signal being sent and received is no longer changing. At that point, the array has isolated the largest eigenvalue/eigenvector combination and has effectively determined the location of a single target in the medium (the one that is most strongly reflecting). After the largest eigenvalue/eigenvector combination has been determined, to locate other targets the method again sends back the time-reversed signals, but with half of them also reversed in sign. There are various possibilities for choosing which half to sign-reverse; the most obvious choices are to reverse every other element in a linear array, or to use a checkerboard pattern for a 2D array. Then a new send/receive, time-reverse/resend iteration can proceed. Often, the first iteration in this sequence will be close to the desired signal from a second target. In some cases, orthogonalization procedures must be implemented to ensure the returned signals are in fact orthogonal to the first eigenvector found.
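The iterate-until-unchanged loop is power iteration on the (Hermitian) time-reversal operator. A hedged sketch with a hypothetical single-frequency array-to-array transfer matrix T (not data from any real experiment):

```python
import numpy as np

def tr_converge(T, s0, n_iter=300, deflate=None):
    """One round trip -- send s, receive r = T s, time-reverse
    (conjugate), resend, receive, time-reverse again -- applies the
    Hermitian operator T^H T, so iterating is power iteration toward
    the strongest reflector's eigenvector.  `deflate` orthogonalizes
    against an already-found vector, mimicking the patent's optional
    orthogonalization step for later targets."""
    s = np.asarray(s0, dtype=complex)
    s /= np.linalg.norm(s)
    for _ in range(n_iter):
        s = T.conj().T @ (T @ s)       # send/receive + time-reversed resend
        if deflate is not None:
            s = s - np.vdot(deflate, s) * deflate
        s /= np.linalg.norm(s)
    return s
```

The first target comes from a generic start vector; a second target can be seeded by flipping the sign of every other element (the half-sign-reversal described above) and deflating against the first eigenvector.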
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time-serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 and beyond). The coming years shall address methodologies to realistically estimate the impact of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
Computational methods for inlet airframe integration
NASA Technical Reports Server (NTRS)
Towne, Charles E.
1988-01-01
Fundamental equations encountered in computational fluid dynamics (CFD), and analyses used for internal flow are introduced. Irrotational flow; Euler equations; boundary layers; parabolized Navier-Stokes equations; and time averaged Navier-Stokes equations are treated. Assumptions made and solution methods are outlined, with examples. The overall status of CFD in propulsion is indicated.
Hollow fiber integrated microfluidic platforms for in vitro Co-culture of multiple cell types.
Huang, Jen-Huang; Harris, Jennifer F; Nath, Pulak; Iyer, Rashi
2016-10-01
This study demonstrates a rapid prototyping approach for fabricating and integrating porous hollow fibers (HFs) into a microfluidic device. Integration of HFs can enhance mass transfer and recapitulate tubular shapes for tissue-engineered environments. We demonstrate the integration of single or multiple HFs, giving users the flexibility to control the total surface area for tissue development. We also present three microfluidic designs that enable different co-culture conditions, such as the ability to co-culture multiple cell types simultaneously on flat and tubular surfaces, or inside the lumen of multiple HFs. Additionally, we introduce a pressurized cell seeding process that allows cells to adhere uniformly to the inner surface of HFs without loss of viability. Co-cultures of lung epithelial cells and microvascular endothelial cells were demonstrated on the different platforms for at least five days. Overall, these platforms provide new opportunities for co-culturing multiple cell types in a single device to reconstruct the native tissue micro-environment for biomedical and tissue engineering research. PMID:27613401
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real-time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
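The core of MMAC is a bank of Kalman filters, one per operating-condition model, whose innovations are used to reweight model probabilities; the control is then a probability-weighted blend. A hedged scalar toy of the probability update (not the F-8C design):

```python
import numpy as np

def mmac_probs(residuals, S, priors):
    """One MMAC probability update: each model's Kalman filter produces
    an innovation (residual) with predicted innovation variance S; the
    model probabilities are reweighted by the Gaussian likelihood of
    the observed innovation.  Scalar toy version, illustrative only."""
    liks = np.exp(-0.5 * residuals**2 / S) / np.sqrt(2 * np.pi * S)
    post = priors * liks
    return post / post.sum()
```

The blended control would then be u = sum_i p_i * u_i, where u_i is the controller designed for model i.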
Shape integral method for magnetospheric shapes. [boundary layer calculations
NASA Technical Reports Server (NTRS)
Michel, F. C.
1979-01-01
A method is developed for calculating the shape of any magnetopause to arbitrarily high precision. The method uses an integral equation which is evaluated for a trial shape. The resulting values of the integral equation as a function of auxiliary variables indicate how close one is to the desired solution. A variational method can then be used to improve the trial shape. Some potential applications are briefly mentioned.
Prakash, Ruchika Shaurya; Snook, Erin M.; Motl, Robert W.; Kramer, Arthur F.
2009-01-01
Alterations in gray and white matter have been well documented in individuals with multiple sclerosis. Severity and extent of such brain tissue damage have been associated with cognitive impairment, disease duration and neurological disability, making quantitative indices of tissue damage important markers of disease progression. In this study, we investigated the association between cardiorespiratory fitness and measures of gray matter atrophy and white matter integrity. Employing a voxel-based approach to analyses of gray matter and white matter, we specifically examined whether higher levels of fitness in multiple sclerosis participants were associated with preserved gray matter volume and integrity of white matter. We found a positive association between cardiorespiratory fitness and regional gray matter volumes and higher focal fractional anisotropy values. Statistical mapping revealed that higher levels of fitness were associated with greater gray matter volume in the midline cortical structures including the medial frontal gyrus, anterior cingulate cortex and the precuneus. Further, we also found that increasing levels of fitness were associated with higher fractional anisotropy in the left thalamic radiation and right anterior corona radiata. Both preserved gray matter volume and white-matter tract integrity were associated with better performance on measures of processing speed. Taken together, these results suggest that fitness exerts a prophylactic influence on the cerebral atrophy observed early on, preserving neuronal integrity in multiple sclerosis, thereby reducing long-term disability. PMID:19560443
Prakash, Ruchika Shaurya; Snook, Erin M; Motl, Robert W; Kramer, Arthur F
2010-06-23
Alterations in gray and white matter have been well documented in individuals with multiple sclerosis. Severity and extent of such brain tissue damage have been associated with cognitive impairment, disease duration and neurological disability, making quantitative indices of tissue damage important markers of disease progression. In this study, we investigated the association between cardiorespiratory fitness and measures of gray matter atrophy and white matter integrity. Employing voxel-based approaches to analysis of gray matter and white matter, we specifically examined whether higher levels of fitness in multiple sclerosis participants were associated with preserved gray matter volume and integrity of white matter. We found a positive association between cardiorespiratory fitness and regional gray matter volumes and higher focal fractional anisotropy values. Statistical mapping revealed that higher levels of fitness were associated with greater gray matter volume in the midline cortical structures including the medial frontal gyrus, anterior cingulate cortex and the precuneus. Further, we also found that increasing levels of fitness were associated with higher fractional anisotropy in the left thalamic radiation and right anterior corona radiata. Both preserved gray matter volume and white matter tract integrity were associated with better performance on measures of processing speed. Taken together, these results suggest that fitness exerts a prophylactic influence on the structural decline observed early on, preserving neuronal integrity in multiple sclerosis, thereby reducing long-term disability. PMID:19560443
Coupling equivalent plate and finite element formulations in multiple-method structural analyses
NASA Technical Reports Server (NTRS)
Giles, Gary L.; Norwood, Keith
1994-01-01
A coupled multiple-method analysis procedure for use late in conceptual design or early in preliminary design of aircraft structures is described. Using this method, aircraft wing structures are represented with equivalent plate models, and structural details such as engine/pylon structure, landing gear, or a 'stick' model of a fuselage are represented with beam finite element models. These two analysis methods are implemented in an integrated multiple-method formulation that involves the assembly and solution of a combined set of linear equations. The corresponding solution vector contains coefficients of the polynomials that describe the deflection of the wing and also the components of translations and rotations at the joints of the beam members. Two alternative approaches for coupling the methods are investigated; one using transition finite elements and the other using Lagrange multipliers. The coupled formulation is applied to the static analysis and vibration analysis of a conceptual design model of a fighter aircraft. The results from the coupled method are compared with corresponding results from an analysis in which the entire model is composed of finite elements.
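The Lagrange-multiplier coupling above leads to a bordered ("saddle-point") linear system. A minimal sketch, with a hypothetical two-DOF example standing in for the plate-plus-beam assembly:

```python
import numpy as np

def solve_coupled(K, f, C, g):
    """Couple two discretizations by enforcing interface constraints
    C u = g with Lagrange multipliers, i.e. solve the bordered system
        [K  C^T] [u]   [f]
        [C   0 ] [l] = [g]
    Here K is the block-diagonal stiffness of both models (e.g. plate
    polynomial coefficients plus beam DOFs) and each row of C ties a
    pair of matching interface displacements.  Illustrative sketch,
    not the paper's implementation."""
    n, m = K.shape[0], C.shape[0]
    A = np.block([[K, C.T], [C, np.zeros((m, m))]])
    sol = np.linalg.solve(A, np.concatenate([f, g]))
    return sol[:n], sol[n:]   # displacements, interface forces
```

For two springs (stiffnesses 2 and 3) tied together at the interface and loaded by a unit force, the constrained solution gives equal displacements and the multiplier recovers the interface force.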
Sandroff, Brian M.; Pula, John H.; Motl, Robert W.
2013-01-01
Background. Retinal nerve fiber layer thickness (RNFLT) and total macular volume (TMV) represent markers of neuroaxonal degeneration within the anterior visual pathway that might correlate with ambulation in persons with multiple sclerosis (MS). Objective. This study examined the associations between RNFLT and TMV with ambulatory parameters in MS. Methods. Fifty-eight MS patients underwent a neurological examination for generation of an expanded disability status scale (EDSS) score and measurement of RNFLT and TMV using optical coherence tomography (OCT). Participants completed the 6-minute walk (6MW) and the timed 25-foot walk (T25FW). The associations were examined using generalized estimating equation models that accounted for within-patient, inter-eye correlations, and controlled for disease duration, EDSS score, and age. Results. RNFLT was not significantly associated with 6MW (P = 0.99) or T25FW (P = 0.57). TMV was significantly associated with 6MW (P = 0.023) and T25FW (P = 0.005). The coefficients indicated that unit differences in 6MW (100 feet) and T25FW (1 second) were associated with 0.040 and −0.048 unit differences in TMV (mm3), respectively. Conclusion. Integrity of the anterior visual pathway, particularly TMV, might represent a noninvasive measure of neuroaxonal degeneration that is correlated with ambulatory function in MS. PMID:23864950
Damping identification in frequency domain using integral method
NASA Astrophysics Data System (ADS)
Guo, Zhiwei; Sheng, Meiping; Ma, Jiangang; Zhang, Wulin
2015-03-01
A new method for damping identification of linear systems in the frequency domain is presented, using the frequency response function (FRF) with an integral method. The FRF curve is first transformed into another frequency-related curve by changing the representations of the horizontal and vertical axes. For the newly constructed curve, an integral is computed and the area under the curve is used to determine the damping. Three different integral-based methods are proposed, called the FDI-1, FDI-2 and FDI-3 methods. For a single-degree-of-freedom (SDOF) system, the relation between integrated area and loss factor is derived theoretically for each method. Numerical simulation and experimental results show that the proposed integral methods have high precision, strong noise resistance, and are very stable in repeated measurements. Among the three, the FDI-3 method is most recommended because of its higher accuracy and simpler algorithm. The new methods are limited to linear systems with well-separated modes; for systems with closely spaced modes, a mode decomposition step should be conducted first.
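The flavor of damping-from-integrated-area identification can be illustrated with a closed-form SDOF result (this uses the standard viscous-SDOF area identity, not necessarily the paper's FDI-1/2/3 formulas):

```python
import numpy as np

# For a viscously damped SDOF system, m x'' + c x' + k x = F, the
# receptance H(w) = 1 / (k - m w^2 + i c w) satisfies the area identity
#     integral_{-inf}^{inf} |H(w)|^2 dw = pi / (c k),
# so integrating the measured |H|^2 curve yields the damping directly.
# Parameters below are illustrative.
m, c, k = 1.0, 0.8, 400.0                    # mass, damping, stiffness
w = np.linspace(-400.0, 400.0, 800_001)      # wide band, fine grid
dw = w[1] - w[0]
H = 1.0 / (k - m * w**2 + 1j * c * w)
area = np.sum(np.abs(H)**2) * dw             # numerical integral
c_est = np.pi / (k * area)                   # invert the area identity
eta_est = c_est * np.sqrt(k / m) / k         # loss factor at resonance
```

The estimate improves with bandwidth and grid density; for this system (natural frequency 20 rad/s, loss factor 0.04) the recovered damping matches the true value to well under 1%.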
Exponential Methods for the Time Integration of Schroedinger Equation
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We are interested in exploiting the special structure of this equation; therefore, we examine the symmetry, symplecticity and invariant-approximation properties of the proposed methods, which allow integration to long times with reasonable accuracy. Computational efficiency is also our aim, so we perform numerical computations to compare the methods considered, concluding that explicit Lawson schemes projected onto the norm of the solution are an efficient tool for integrating this equation.
A Rationale for Mixed Methods (Integrative) Research Programmes in Education
ERIC Educational Resources Information Center
Niaz, Mansoor
2008-01-01
Recent research shows that research programmes (quantitative, qualitative and mixed) in education are not displaced (as suggested by Kuhn) but rather lead to integration. The objective of this study is to present a rationale for mixed methods (integrative) research programs based on contemporary philosophy of science (Lakatos, Giere, Cartwright,…
Methods for the joint meta-analysis of multiple tests.
Trikalinos, Thomas A; Hoaglin, David C; Small, Kevin M; Terrin, Norma; Schmid, Christopher H
2014-12-01
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests' true-positive rates (TPRs) and between their false-positive rates (FPRs) (induced because tests are applied to the same participants), and allow for between-study correlations between TPRs and FPRs (such as those induced by threshold effects). We estimate models in the Bayesian setting. We demonstrate using a meta-analysis of screening for Down syndrome with two tests: shortened humerus (arm bone), and shortened femur (thigh bone). Separate and joint meta-analyses yielded similar TPR and FPR estimates. For example, the summary TPR for a shortened humerus was 35.3% (95% credible interval (CrI): 26.9, 41.8%) versus 37.9% (27.7, 50.3%) with joint versus separate meta-analysis. Joint meta-analysis is more efficient when calculating comparative accuracy: the difference in the summary TPRs was 0.0% (-8.9, 9.5%; TPR higher for shortened humerus) with joint versus 2.6% (-14.7, 19.8%) with separate meta-analyses. Simulation and empirical analyses are needed to refine the role of the proposed methodology. PMID:26052954
NASA Astrophysics Data System (ADS)
Rao, Gottipaty N.; Karpf, Andreas
2011-05-01
We report on the development of a new sensor for NO2 with ultrahigh sensitivity of detection. This has been accomplished by combining off-axis integrated cavity output spectroscopy (OA-ICOS), which can provide path lengths of the order of several km in a small-volume cell, with multiple line integrated absorption spectroscopy (MLIAS), in which we integrate the absorption spectra over a large number of rotational-vibrational transitions of the molecular species to further improve the sensitivity. Employing an external-cavity tunable quantum cascade laser operating in the 1601-1670 cm^-1 range and a high-finesse optical cavity, the absorption spectra of NO2 over 100 transitions in the R-band have been recorded. From the observed linear relationship between integrated absorption and NO2 concentration, we report an effective detection sensitivity of 10 ppt for NO2. To the best of our knowledge, this is among the most sensitive levels of detection of NO2 to date. A sensitive NO2 sensor will be helpful for monitoring ambient air quality and combustion emissions from automobiles, power plants and aircraft, and for the detection of nitrate-based explosives (commonly used in improvised explosive devices (IEDs)). Additionally, such a sensor would be valuable for the study of the complex chemical reactions that occur in the atmosphere, resulting in the formation of photochemical smog, tropospheric ozone and acid rain.
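The linear integrated-absorption-versus-concentration relationship is the Beer-Lambert calibration step; a hedged sketch with synthetic calibration data (all numbers illustrative, not the instrument's):

```python
import numpy as np

# Synthetic MLIAS-style calibration: absorbance integrated over many
# rovibrational lines grows linearly with concentration (Beer-Lambert),
# so a linear fit recovers the sensitivity and a 3-sigma detection limit.
conc_ppb = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])   # NO2, ppb
slope_true, noise = 3.2e-3, 5e-5                       # hypothetical response
rng = np.random.default_rng(0)
integrated_abs = slope_true * conc_ppb + rng.normal(0, noise, conc_ppb.size)

slope, intercept = np.polyfit(conc_ppb, integrated_abs, 1)
detection_limit_ppb = 3 * noise / slope                # 3-sigma limit
```

Integrating over ~100 transitions raises the effective slope relative to a single line, which is what pushes the detection limit down to the ppt range reported above.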
A rainfall design method for spatial flood risk assessment: considering multiple flood sources
NASA Astrophysics Data System (ADS)
Jiang, X.; Tatano, H.
2015-08-01
Information about the spatial distribution of flood risk is important for integrated urban flood risk management. Focusing on urban areas, spatial flood risk assessment must reflect all risk information derived from multiple flood sources: rivers, drainage, coastal flooding etc. that may affect the area. However, conventional flood risk assessment deals with each flood source independently, which leads to an underestimation of flood risk in the floodplain. Even in floodplains that have no risk from coastal flooding, flooding from river channels and inundation caused by insufficient drainage capacity should be considered simultaneously. For integrated flood risk management, it is necessary to establish a methodology to estimate flood risk distribution across a floodplain. In this paper, a rainfall design method for spatial flood risk assessment, which considers the joint effects of multiple flood sources, is proposed. The concept of critical rainfall duration determined by the concentration time of flooding is introduced to connect response characteristics of different flood sources with rainfall. A copula method is then adopted to capture the correlation of rainfall amount with different critical rainfall durations. Rainfall events are designed taking advantage of the copula structure of correlation and marginal distribution of rainfall amounts within different critical rainfall durations. A case study in the Otsu River Basin, Osaka prefecture, Japan was conducted to demonstrate this methodology.
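The copula step above can be sketched with a Gaussian copula and gamma marginals (an illustrative choice; the paper fits its own copula and marginal distributions from data):

```python
import numpy as np
from scipy import stats

# Sample correlated rainfall amounts for two critical durations, e.g.
# a short duration driving drainage flooding and a long one driving
# river flooding.  Dependence and marginal parameters are hypothetical.
rho = 0.7                                    # assumed dependence
rng = np.random.default_rng(42)
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=10_000)
u = stats.norm.cdf(z)                        # copula samples in [0,1]^2
rain_short = stats.gamma.ppf(u[:, 0], a=2.0, scale=10.0)   # mm / 1 h
rain_long = stats.gamma.ppf(u[:, 1], a=3.0, scale=30.0)    # mm / 24 h
```

Each sampled pair is one designed rainfall event whose short- and long-duration amounts are consistent with the assumed dependence structure, so both flood sources can be driven jointly.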
Scientific concepts and applications of integrated discrete multiple organ co-culture technology
Gayathri, Loganathan; Dhanasekaran, Dharumadurai; Akbarsha, Mohammad A.
2015-01-01
Over several decades, animals have been used as models to investigate human-specific drug toxicity, but the outcomes are not always reliably extrapolated to humans in vivo. Appropriate in vitro human-based experimental systems that include in vivo parameters are required for the evaluation of multiple organ interaction, multiple organ/organ-specific toxicity, and metabolism of xenobiotic compounds, to avoid the use of animals for toxicity testing. One such versatile in vitro technology in which human primary cells can be used is integrated discrete multiple organ co-culture (IdMOC). The IdMOC system adopts a wells-within-a-well concept that facilitates co-culture of cells from different organs in a discrete manner, each in its respective medium in smaller inner wells, which are then interconnected by an overlay of a universal medium in the larger containing well. This novel in vitro approach mimics the in vivo situation to a great extent, employing cells from multiple organs that are physically separated but interconnected by a medium that mimics the systemic circulation and provides for multiple organ interaction. Applications of IdMOC include assessment of multiple organ toxicity, drug distribution, organ-specific toxicity, screening of anticancer drugs, metabolic cytotoxicity, etc. PMID:25969651
Sternberg, Zohara
2016-03-01
Thought to be an autoimmune inflammatory CNS disease, multiple sclerosis (MS) involves multiple pathologies with heterogeneous clinical presentations. An impaired neurovisceral integration of cardiovascular modulation, indicated by sympathetic and parasympathetic autonomic nervous system (ANS) dysfunction, is among common MS clinical presentations. ANS dysfunction could not only enhance MS inflammatory and neurodegenerative processes, but can also lead to clinical symptoms such as depression, fatigue, sleep disorder, migraine, osteoporosis, and cerebral hemodynamic impairments. Therefore, factors influencing ANS functional activities, in one way or another, will have a significant impact on MS disease course. This review describes the genetic and epigenetic factors, and their interactions with a number of environmental factors contributing to the neurovisceral integration of cardiovascular modulation, with a focus on MS. Future studies should investigate the improvement in cardiovascular ANS function, as a strategy for preventing and minimizing MS-related morbidities, and improving patients' quality of life. PMID:26502224
NASA Astrophysics Data System (ADS)
Alqurashi, Muwaffaq; Wang, Jinling
2015-03-01
For positioning, navigation and timing (PNT) purposes, GNSS or GNSS/INS integration is utilised to provide real-time solutions. However, potential sensor failures or faulty measurements due to malfunctioning sensor components or harsh operating environments may cause unsatisfactory estimation of the PNT parameters. The inability to immediately detect faulty measurements or sensor component failures reduces the overall performance of the system. Real-time detection and identification of faulty measurements is therefore required to make the system more accurate and reliable for applications that need real-time solutions, such as real-time mapping for safety or emergency purposes. Consequently, it is necessary to implement an online fault detection and isolation (FDI) algorithm, a statistic-based approach to detect and identify multiple faults. However, further investigation of the performance of FDI under multiple-fault scenarios is still required. In this paper, the performance of the FDI method under multiple-fault scenarios is evaluated, e.g., for two, three and four faults in the GNSS and GNSS/INS measurements, under different conditions of satellite visibility and satellite geometry. In addition, reliability (e.g., minimal detectable bias, MDB) and separability (correlation coefficients between fault detection statistics) measures are investigated to assess the capability of the FDI method. A performance analysis of the FDI method is conducted under geometric constraints to show the importance of the FDI method in terms of fault detectability and separability for robust positioning and navigation in real-time applications.
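A hedged sketch of the statistic-based detection/identification idea on a linear measurement model (a Baarda-style w-test on least-squares residuals; the paper's algorithm and thresholds may differ):

```python
import numpy as np

def fdi_w_tests(A, y, sigma):
    """Fault detection on a linear model y = A x + e, e ~ N(0, sigma^2 I):
    least-squares residuals are standardized into w-statistics; a |w_i|
    above a threshold (e.g. 3.29 for alpha = 0.001) flags measurement i
    as faulty.  Illustrative sketch, not the paper's exact algorithm."""
    x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    v = y - A @ x_hat                                        # residuals
    Qv = np.eye(len(y)) - A @ np.linalg.inv(A.T @ A) @ A.T   # cofactor of v
    return v / (sigma * np.sqrt(np.diag(Qv)))
```

The diagonal of Qv also underlies the MDB: measurements with little redundancy have small Qv entries and hence large minimal detectable biases.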
The eye in hand: predicting others' behavior by integrating multiple sources of information
Pezzulo, Giovanni; Costantini, Marcello
2015-01-01
The ability to predict the outcome of other beings' actions confers significant adaptive advantages. Experiments have established that human action observation can use multiple information sources, but it is currently unknown how they are integrated and how conflicts between them are resolved. To address this issue, we designed an action observation paradigm requiring the integration of multiple, potentially conflicting sources of evidence about the action target: the actor's gaze direction, hand preshape, and arm trajectory, and their availability and relative uncertainty in time. In two experiments, we analyzed participants' action prediction ability using eye tracking and behavioral measures. The results show that the information provided by the actor's gaze affected participants' explicit predictions. However, results also show that gaze information was disregarded as soon as information on the actor's hand preshape was available, and this latter information source had widespread effects on participants' prediction ability. Furthermore, as the action unfolded in time, participants relied increasingly on the arm movement source, showing sensitivity to its increasing informativeness. Therefore, the results suggest that the brain forms a robust estimate of the actor's motor intention by integrating multiple sources of information. However, when informative motor cues such as a preshaped hand with a given grip are available and might help in selecting action targets, people tend to capitalize on such motor cues, becoming more accurate and faster in inferring the object to be manipulated by the other's hand. PMID:25568158
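A standard computational model of this kind of multi-cue integration is precision-weighted Bayesian fusion, in which each cue votes with weight inversely proportional to its uncertainty (a common modeling choice, not the paper's own analysis; the numbers below are hypothetical):

```python
import numpy as np

def fuse_cues(means, variances):
    """Precision-weighted fusion of independent Gaussian cues: the
    posterior mean is the reliability-weighted average and the posterior
    variance is smaller than any single cue's variance."""
    w = 1.0 / np.asarray(variances)
    mu = np.sum(w * np.asarray(means)) / np.sum(w)
    return mu, 1.0 / np.sum(w)
```

For example, fusing a noisy early gaze cue (mean 0, variance 4) with a precise hand-preshape cue (mean 1, variance 0.25) yields an estimate dominated by the preshape cue, mirroring the behavioral pattern reported above.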
Choi, Hyunmo; Oh, Eunkyoo
2016-01-01
As sessile organisms, plants must be able to adapt to the environment. Plants respond to the environment by adjusting their growth and development, which is mediated by sophisticated signaling networks that integrate multiple environmental and endogenous signals. Recently, increasing evidence has shown that a bHLH transcription factor PIF4 plays a major role in the multiple signal integration for plant growth regulation. PIF4 is a positive regulator in cell elongation and its activity is regulated by various environmental signals, including light and temperature, and hormonal signals, including auxin, gibberellic acid and brassinosteroid, both transcriptionally and post-translationally. Moreover, recent studies have shown that the circadian clock and metabolic status regulate endogenous PIF4 level. The PIF4 transcription factor cooperatively regulates the target genes involved in cell elongation with hormone-regulated transcription factors. Therefore, PIF4 is a key integrator of multiple signaling pathways, which optimizes growth in the environment. This review will discuss our current understanding of the PIF4-mediated signaling networks that control plant growth. PMID:27432188
The eye in hand: predicting others' behavior by integrating multiple sources of information.
Ambrosini, Ettore; Pezzulo, Giovanni; Costantini, Marcello
2015-04-01
The ability to predict the outcome of other beings' actions confers significant adaptive advantages. Experiments have assessed that human action observation can use multiple information sources, but it is currently unknown how they are integrated and how conflicts between them are resolved. To address this issue, we designed an action observation paradigm requiring the integration of multiple, potentially conflicting sources of evidence about the action target: the actor's gaze direction, hand preshape, and arm trajectory, and their availability and relative uncertainty in time. In two experiments, we analyzed participants' action prediction ability by using eye tracking and behavioral measures. The results show that the information provided by the actor's gaze affected participants' explicit predictions. However, results also show that gaze information was disregarded as soon as information on the actor's hand preshape was available, and this latter information source had widespread effects on participants' prediction ability. Furthermore, as the action unfolded in time, participants relied increasingly more on the arm movement source, showing sensitivity to its increasing informativeness. Therefore, the results suggest that the brain forms a robust estimate of the actor's motor intention by integrating multiple sources of information. However, when informative motor cues such as a preshaped hand with a given grip are available and might help in selecting action targets, people tend to capitalize on such motor cues, thus turning out to be more accurate and fast in inferring the object to be manipulated by the other's hand. PMID:25568158
Integrated Research Methods for Applied Urban Hydrogeology of Karst Sites
NASA Astrophysics Data System (ADS)
Epting, J.; Romanov, D. K.; Kaufmann, G.; Huggenberger, P.
2008-12-01
Theories describing the evolution of karst systems are mainly based on conceptual models. Although these models rest on fundamental and well-established physical and chemical principles that allow important processes to be studied from initial small-scale fracture networks to the mature karst, systems for monitoring the evolution of karst phenomena are rare. Integrated process-oriented investigation methods are presented, comprising the combination of multiple data sources (lithostratigraphic information from boreholes, extensive groundwater monitoring, dye tracer tests, geophysics) with high-resolution numerical groundwater modeling and model simulations of karstification below the dam. Subsequently, different scenarios were used to evaluate the future development of the groundwater flow regime, the karstification processes, and possible remediation measures. The approach presented assists in optimizing investigation methods, including measurement and monitoring technologies with predictive character for similar subsidence problems within karst environments in urban areas.
Comparison of Integrated Analysis Methods for Two Model Scenarios
NASA Technical Reports Server (NTRS)
Amundsen, Ruth M.
1999-01-01
Integrated analysis methods have the potential to substantially decrease the time required for analysis modeling. Integration with computer aided design (CAD) software can also allow a model to be more accurate by facilitating import of exact design geometry. However, the integrated method utilized must sometimes be tailored to the specific modeling situation, in order to make the process most efficient. Two cases are presented here that illustrate different processes used for thermal analysis on two different models. These examples are used to illustrate how the requirements, available input, expected output, and tools available all affect the process selected by the analyst for the most efficient and effective analysis.
NASA Astrophysics Data System (ADS)
Li, D. H.; Zhang, X.; Sze, K. Y.; Liu, Y.
2016-07-01
In this paper, the extended layerwise method (XLWM), which was developed for laminated composite beams with multiple delaminations and transverse cracks (Li et al. in Int J Numer Methods Eng 101:407-434, 2015), is extended to laminated composite plates. The strong and weak discontinuous functions along the thickness direction are adopted to simulate multiple delaminations and interlaminar interfaces, respectively, whilst transverse cracks are modeled by the extended finite element method (XFEM). The interaction integral method and the maximum circumferential tensile criterion are used to calculate the stress intensity factor (SIF) and the crack growth angle, respectively. The XLWM for laminated composite plates can accurately predict the displacement and stress fields near crack tips and delamination fronts. The thickness distribution of the SIF, and thus the crack growth angles in different layers, can be obtained. This information cannot be predicted using other existing shell elements enriched by XFEM. Several numerical examples are studied to demonstrate the capabilities of the XLWM in static response analyses, SIF calculations and crack growth predictions.
NASA Astrophysics Data System (ADS)
Xie, Guizhong; Zhang, Dehai; Zhang, Jianming; Meng, Fannian; Du, Wenliao; Wen, Xiaoyu
2016-07-01
As a widely used numerical method, the boundary element method (BEM) is efficient for computer aided engineering (CAE). However, boundary integrals with near singularity need to be calculated accurately and efficiently to implement BEM successfully for CAE analysis of thin bodies. In this paper, the distance in the denominator of the fundamental solution is first recast into an equivalent form using an approximate expansion, and the original sinh method is revised into a new form considering the minimum distance and the approximate expansion. Second, the acquisition of the projection point by the Newton-Raphson method is introduced. We acquire the nearest point between the source point and the element edge by solving a cubic equation if the projection point lies outside the element, where boundary integrals with near singularity appear. Finally, the subtriangles of the local coordinate space are mapped into the integration space and the sinh method is applied there. The revised sinh method can be performed directly on the integration element. A verification test of our method is proposed. Results demonstrate that our method is effective for regularizing boundary integrals with near singularity.
An integrated lean-methods approach to hospital facilities redesign.
Nicholas, John
2012-01-01
Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach. PMID:22671435
Modulation of C. elegans touch sensitivity is integrated at multiple levels.
Chen, Xiaoyin; Chalfie, Martin
2014-05-01
Sensory systems can adapt to different environmental signals. Here we identify four conditions that modulate anterior touch sensitivity in Caenorhabditis elegans after several hours and demonstrate that such sensory modulation is integrated at multiple levels to produce a single output. Prolonged vibration involving integrin signaling directly sensitizes the touch receptor neurons (TRNs). In contrast, hypoxia, the dauer state, and high salt reduce touch sensitivity by preventing the release of long-range neuroregulators, including two insulin-like proteins. Integration of these latter inputs occurs at upstream neurohormonal cells and at the insulin signaling cascade within the TRNs. These signals and those from integrin signaling converge to modulate touch sensitivity by regulating AKT kinases and DAF-16/FOXO. Thus, activation of either the integrin or insulin pathways can compensate for defects in the other pathway. This modulatory system integrates conflicting signals from different modalities, and adapts touch sensitivity to both mechanical and non-mechanical conditions. PMID:24806678
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in practice is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated model. Unlike the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process that identifies the most effective diagnostic test to perform next. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified. PMID:25685841
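The two ingredients described in this abstract, a Bayesian posterior over failure modes and an evidence-selection step that picks the most informative next test, can be sketched as follows. All failure modes, symptoms, and probabilities below are hypothetical placeholders, not the paper's transformer model, and the test-selection criterion (expected entropy reduction) is one common choice, not necessarily the authors'.

```python
import math

# Hypothetical priors and P(symptom present | failure mode) values:
modes = {"winding_fault": 0.2, "overheating": 0.5, "partial_discharge": 0.3}
likelihood = {
    "gas_acetylene": {"winding_fault": 0.7, "overheating": 0.2, "partial_discharge": 0.6},
    "hot_spot":      {"winding_fault": 0.3, "overheating": 0.9, "partial_discharge": 0.2},
    "noise":         {"winding_fault": 0.4, "overheating": 0.3, "partial_discharge": 0.8},
}

def posterior(prior, evidence):
    """Bayes update, assuming symptoms are conditionally independent given the mode."""
    post = dict(prior)
    for symptom, present in evidence.items():
        for m in post:
            p = likelihood[symptom][m]
            post[m] *= p if present else (1.0 - p)
    z = sum(post.values())
    return {m: v / z for m, v in post.items()}

def entropy(p):
    return -sum(v * math.log(v) for v in p.values() if v > 0)

def best_next_test(prior, evidence, candidates):
    """Pick the unobserved symptom whose outcome most reduces expected entropy."""
    base = posterior(prior, evidence)
    scores = {}
    for s in candidates:
        p_pos = sum(base[m] * likelihood[s][m] for m in base)  # P(symptom present)
        h = 0.0
        for outcome, w in ((True, p_pos), (False, 1.0 - p_pos)):
            if w > 0:
                h += w * entropy(posterior(prior, {**evidence, s: outcome}))
        scores[s] = entropy(base) - h
    return max(scores, key=scores.get)

post = posterior(modes, {"gas_acetylene": True})
nxt = best_next_test(modes, {"gas_acetylene": True}, ["hot_spot", "noise"])
```

Each new piece of evidence reshapes the posterior, and the selection step replaces a fixed one-shot test battery with a targeted next measurement.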
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free space that do not belong to any object surface in 3D space. If not removed, off-focus areas adversely affect high-level analysis of a 3D object, including classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be assessed by analyzing its statistical variance over its corresponding samples, which are captured in the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method assumes that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points greatly improves the overall computational speed compared with using a CPU. PMID:24921860
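The variance-based focus/off-focus classification described above can be sketched in a few lines. This is a CPU toy version of the idea only (the paper runs it per pixel in parallel on a GPU), and the sample values and threshold are invented.

```python
import statistics

def classify_depth_pixels(samples_per_pixel, var_threshold):
    """Label each reconstructed pixel 'focus' (low variance across its
    elemental-image samples) or 'off-focus' (high variance)."""
    labels = []
    for samples in samples_per_pixel:
        v = statistics.pvariance(samples)
        labels.append("focus" if v <= var_threshold else "off-focus")
    return labels

# Intensity samples one reconstructed 3D point collects from several elemental images:
on_surface = [118, 120, 119, 121, 120]   # consistent -> object surface at this depth
free_space = [40, 200, 90, 15, 170]      # inconsistent -> point in free space
labels = classify_depth_pixels([on_surface, free_space], var_threshold=25.0)
```

A point on a real surface is seen consistently by the elemental images, so its sample variance is small; a free-space point mixes unrelated rays and its variance is large.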
Explicit Integration of Extremely Stiff Reaction Networks: Partial Equilibrium Methods
Guidry, Mike W; Billings, J. J.; Hix, William Raphael
2013-01-01
In two preceding papers [1,2] we have shown that, when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium. Explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that algebraically stabilized explicit methods may offer alternatives to implicit integration of even extremely stiff systems, and that these methods may permit integration of much larger networks than have been feasible previously in a variety of fields.
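The explicit asymptotic approximation referenced above (developed in the preceding papers [1,2]) can be illustrated on a single stiff equation dy/dt = F+ - k*y. The sketch below uses invented rate values and is not the partial-equilibrium scheme itself; it only shows how the algebraic stabilization keeps an explicit update stable at step sizes where standard explicit Euler blows up.

```python
def euler_step(y, dt, f_plus, k):
    """Standard explicit Euler: unstable once k*dt > 2."""
    return y + dt * (f_plus - k * y)

def asymptotic_step(y, dt, f_plus, k):
    """Algebraically stabilized explicit update; its fixed point is the
    equilibrium f_plus/k, and it is stable for any k*dt."""
    return y + dt * (f_plus - k * y) / (1.0 + k * dt)

f_plus, k, dt = 1.0, 1.0e4, 0.01     # k*dt = 100: a very stiff step
y_euler = y_asym = 0.5
for _ in range(50):
    y_euler = euler_step(y_euler, dt, f_plus, k)
    y_asym = asymptotic_step(y_asym, dt, f_plus, k)
# y_asym relaxes to the equilibrium f_plus/k = 1e-4; y_euler diverges
```

The asymptotic update contracts toward the equilibrium by a factor 1/(1 + k*dt) per step, which is what lets these explicit schemes take steps far beyond the usual stability limit.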
System and method for integrating hazard-based decision making tools and processes
Hodgin, C. Reed
2012-03-20
A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.
NASA Astrophysics Data System (ADS)
Taylor, Ted L.; Makimura, Eri
2007-03-01
Micron Technology, Inc., explores the challenges of defining specific wafer sampling scenarios for users of multiple integrated metrology modules within a Tokyo Electron Limited (TEL) CLEAN TRACK TM LITHIUS TM. With the introduction of integrated metrology (IM) into the photolithography coater/developer, users face the challenge of determining what type of data to collect to adequately monitor the photolithography tools and the manufacturing process. Photolithography coaters/developers have a metrology block that is capable of integrating three metrology modules into the standard wafer flow. Taking into account the complexity of multiple metrology modules and varying across-wafer sampling plans per metrology module, users must optimize the module wafer sampling to obtain their desired goals. Users must also understand the complexity of the coater/developer handling systems that deliver wafers to each module. Coater/developer systems typically process wafers sequentially through each module to ensure consistent processing. In these systems, the first wafer must process through a module before the next wafer can process through it, and the first wafer must return to the cassette before the second wafer can return to the cassette. IM modules within this type of system can reduce throughput and limit flexible wafer selections. Finally, users must have the ability to select specific wafer samplings for each IM module. This case study explores how to optimize wafer sampling plans and how to identify limitations arising from the complexity of multiple integrated modules, to ensure maximum metrology throughput without impact to the productivity of processing wafers through the photolithography cell (litho cell).
Tuning of PID controllers for integrating systems using direct synthesis method.
Anil, Ch; Padma Sree, R
2015-07-01
A PID controller is designed for various forms of integrating systems with time delay using the direct synthesis method. The method is based on comparing the characteristic equation of the integrating system and PID controller with a filter against the desired characteristic equation. The desired characteristic equation comprises multiple poles placed at the same desired location. The tuning parameter is adjusted so as to achieve the desired robustness. Tuning rules in terms of process parameters are given for various forms of integrating systems. The tuning parameter can be selected for the desired robustness by specifying the Ms value. The proposed controller design method is applied to various transfer function models and to the nonlinear model equations of a jacketed CSTR to show its effectiveness and applicability. PMID:25800952
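The core matching step of direct synthesis (equate the actual closed-loop characteristic equation to a desired one with repeated poles) can be shown on the simplest case: a delay-free integrating plant G(s) = K/s under PI control. This is a textbook-style illustration of the matching idea only, not the paper's tuning rules, which cover time delay, PID with a filter, and Ms-based robustness; the numbers are invented.

```python
def direct_synthesis_pi(K, a):
    """For plant K/s with PI controller Kp + Ki/s, the closed-loop
    characteristic equation is s^2 + K*Kp*s + K*Ki.  Matching it to the
    desired (s + a)^2 (a double pole at -a) gives the gains directly."""
    Kp = 2.0 * a / K
    Ki = a * a / K
    return Kp, Ki

K, a = 0.5, 2.0
Kp, Ki = direct_synthesis_pi(K, a)

# Quick closed-loop unit-step check with explicit Euler integration:
y, ie, dt = 0.0, 0.0, 0.001
for _ in range(10000):                    # simulate 10 s
    e = 1.0 - y                           # unit setpoint
    ie += e * dt                          # integral of error
    y += K * (Kp * e + Ki * ie) * dt      # integrating plant: y' = K * u
final = y
```

With both closed-loop poles at -a, the output settles to the setpoint with no offset; moving a trades speed against robustness, which is the role the paper's tuning parameter plays.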
NASA Astrophysics Data System (ADS)
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-01
Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
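The path-sampling identity behind thermodynamic integration, ln Z = integral from 0 to 1 of E under the power posterior p_beta of ln L, d(beta), where p_beta is proportional to prior times L^beta, can be checked on a toy conjugate model whose marginal likelihood is known exactly. The sketch below replaces the paper's MCMC sampling with deterministic grid sums so the result is reproducible; the model (a single Gaussian datum with a Gaussian prior) is an invented example, not one of the study's cases.

```python
import math

# Toy conjugate model: theta ~ N(0, 1), one datum Y = 1 with Y | theta ~ N(theta, 1),
# so the exact marginal is Y ~ N(0, 2).
Y = 1.0
thetas = [-8.0 + 0.002 * i for i in range(8001)]          # dense grid on [-8, 8]
log_L = [-0.5 * math.log(2 * math.pi) - 0.5 * (Y - t) ** 2 for t in thetas]
prior = [math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi) for t in thetas]

def mean_log_lik(beta):
    """E[ln L] under the power posterior p_beta proportional to prior * L^beta."""
    w = [p * math.exp(beta * ll) for p, ll in zip(prior, log_L)]
    z = sum(w)
    return sum(wi * ll for wi, ll in zip(w, log_L)) / z

# Thermodynamic integration: trapezoid rule over the beta path from prior to posterior.
betas = [i / 100.0 for i in range(101)]
vals = [mean_log_lik(b) for b in betas]
log_Z_ti = sum(0.5 * (vals[i] + vals[i + 1]) * 0.01 for i in range(100))

log_Z_exact = -0.5 * math.log(4 * math.pi) - 0.25          # ln N(1; 0, 2)
```

The beta = 0 end of the path averages over the prior and the beta = 1 end over the posterior, which is exactly the "samples generated gradually from the prior to the posterior parameter space" described in the abstract.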
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear programming is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
Laser housing having integral mounts and method of manufacturing same
Herron, Michael Alan; Brickeen, Brian Keith
2004-10-19
A housing adapted to position, support, and facilitate aligning various components, including an optical path assembly, of a laser. In a preferred embodiment, the housing is constructed from a single piece of material and broadly comprises one or more through-holes; one or more cavities; and one or more integral mounts, wherein the through-holes and the cavities cooperate to define the integral mounts. Securement holes machined into the integral mounts facilitate securing components within the integral mounts using set screws, adhesive, or a combination thereof. In a preferred method of making the housing, the through-holes and cavities are first machined into the single piece of material, with at least some of the remaining material forming the integral mounts.
Application of integrated fluid-thermal-structural analysis methods
NASA Technical Reports Server (NTRS)
Wieting, Allan R.; Dechaumphai, Pramote; Bey, Kim S.; Thornton, Earl A.; Morgan, Ken
1988-01-01
Hypersonic vehicles operate in a hostile aerothermal environment which has a significant impact on their aerothermostructural performance. Significant coupling occurs between the aerodynamic flow field, structural heat transfer, and structural response creating a multidisciplinary interaction. Interfacing state-of-the-art disciplinary analysis methods is not efficient, hence interdisciplinary analysis methods integrated into a single aerothermostructural analyzer are needed. The NASA Langley Research Center is developing such methods in an analyzer called LIFTS (Langley Integrated Fluid-Thermal-Structural) analyzer. The evolution and status of LIFTS is reviewed and illustrated through applications.
A Comparison of Treatment Integrity Assessment Methods for Behavioral Intervention
ERIC Educational Resources Information Center
Koh, Seong A.
2010-01-01
The purpose of this study was to examine the similarity of outcomes from three different treatment integrity (TI) methods, and to identify the method which best corresponded to the assessment of a child's behavior. Six raters were recruited through individual contact via snowball sampling. A modified intervention component list and 19 video clips…
When Curriculum and Technology Meet: Technology Integration in Methods Courses
ERIC Educational Resources Information Center
Keeler, Christy G.
2008-01-01
Reporting on the results of an action research study, this manuscript provides examples of strategies used to integrate technology into a content methods course. The study used reflective teaching of a social studies methods course at a major Southwestern university in 10 course sections over a four-semester period. In alignment with the research…
An Integrated Calculation Method to Predict Arc Behavior
NASA Astrophysics Data System (ADS)
Li, Xingwen; Chen, Degui
The precision of the magnetic field calculation is crucial for predicting arc behavior using a magnetohydrodynamic (MHD) model. An integrated calculation method is proposed to couple the calculation of the magnetic field and the fluid dynamics based on the commercial software ANSYS and FLUENT, which is especially beneficial for taking into account the existence of ferromagnetic parts. An example concerning an air arc is presented using the method.
Multiple-aperture speckle method applied to local displacement measurements
NASA Astrophysics Data System (ADS)
Ángel, Luciano; Tebaldi, Myrian; Bolognini, Néstor
2007-06-01
The goal of this work is to analyze the measurement capability of the modified speckle photography technique that uses different multiple-aperture pupils in a multiple-exposure scheme. In particular, the rotation case is considered. A point-wise analysis procedure is utilized to obtain the fringes required to access the local displacement measurements. The proposed arrangement allows several fringe systems, each associated with a different rotation, to be displayed simultaneously in the Fourier plane. We experimentally verified that the local displacement measurements can be determined with high precision and accuracy.
Methods for integration site distribution analyses in animal cell genomes
Ciuffi, Angela; Ronen, Keshet; Brady, Troy; Malani, Nirav; Wang, Gary; Berry, Charles C.; Bushman, Frederic D.
2014-01-01
The question of where retroviral DNA becomes integrated in chromosomes is important for understanding (i) the mechanisms of viral growth, (ii) devising new anti-retroviral therapy, (iii) understanding how genomes evolve, and (iv) developing safer methods for gene therapy. With the completion of genome sequences for many organisms, it has become possible to study integration targeting by cloning and sequencing large numbers of host–virus DNA junctions, then mapping the host DNA segments back onto the genomic sequence. This allows statistical analysis of the distribution of integration sites relative to the myriad types of genomic features that are also being mapped onto the sequence scaffold. Here we present methods for recovering and analyzing integration site sequences. PMID:19038346
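The last step described in this abstract, mapping recovered integration sites onto genomic features and comparing their distribution with a random expectation, can be sketched with toy coordinates. Everything below is invented for illustration (real pipelines align cloned host-virus junction sequences to a genome assembly and use curated annotation); the point is only the interval-lookup and observed-versus-control comparison.

```python
import bisect
import random

genes = [(1000, 5000), (8000, 9000), (12000, 20000)]   # toy annotated intervals
genome_len = 30000
starts = [s for s, _ in genes]                          # sorted interval starts

def in_gene(pos):
    """Is this integration site inside an annotated interval? (half-open ends)"""
    i = bisect.bisect_right(starts, pos) - 1
    return i >= 0 and pos < genes[i][1]

def in_gene_fraction(sites):
    return sum(in_gene(p) for p in sites) / len(sites)

random.seed(7)
# Stand-in for mapped host-virus junctions, biased toward gene-rich coordinates:
observed_sites = [random.randint(1000, 20000) for _ in range(500)]
# Matched random control sites drawn uniformly from the whole genome:
matched_random = [random.randint(0, genome_len - 1) for _ in range(500)]
obs_frac = in_gene_fraction(observed_sites)
ctrl_frac = in_gene_fraction(matched_random)
```

Comparing the observed in-feature fraction against matched random controls is the basic statistical move that lets targeting preferences (e.g., integration favoring transcription units) be quantified.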
Multiple Integration of the Heat-Conduction Equation for a Space Bounded From the Inside
NASA Astrophysics Data System (ADS)
Kot, V. A.
2016-03-01
An N-fold integration of the heat-conduction equation for a space bounded from the inside has been performed using a system of identical equalities, with the temperature function defined by a power polynomial with an exponential factor. It is shown that, in a number of cases, the approximate solutions obtained can be considered exact, because their errors amount to hundredths or thousandths of a percent. The proposed method of N-fold integration represents an alternative to classical integral transformations.
NASA Astrophysics Data System (ADS)
Ren, L.; Liu, Q.
2012-12-01
We present a multiple moment-tensor solution for the December 26, 2004 Sumatra earthquake based upon adjoint methods. An objective function Φ that measures the goodness of waveform fit between data and synthetics is minimized. Synthetics are calculated by spectral-element simulations (SPECFEM3D_GLOBE) in the 3D global earth model S362ANI to reduce the effect of heterogeneous structures. The Fréchet derivatives of Φ take the form δΦ = ∫_T ∫_V ε†_ij(x, T−t) δṁ_ij(x, t) d³x dt, where δm_ij is the perturbation of the moment density function and ε†_ij(x, T−t) denotes the time-integrated adjoint strain tensor; they are calculated with adjoint methods implemented in SPECFEM3D_GLOBE. Our initial source model is obtained by monitoring the time-integrated adjoint strain tensors in the vicinity of the presumed source region. Source model parameters are then iteratively updated by a preconditioned conjugate-gradient method utilizing the calculated Φ and δΦ values. Our final inversion results show both similarities to and differences from previous source inversion results based on 1D background models.
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that
A flexible importance sampling method for integrating subgrid processes
NASA Astrophysics Data System (ADS)
Raut, E. K.; Larson, V. E.
2016-01-01
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
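The category-based importance sampling described above can be reduced to a two-category toy (the paper's SILHS uses eight categories such as "precipitation and cloud" or "precipitation but no cloud"). The integrand, domain split, and sample allocation below are all invented; the sketch only shows how prescribing a higher sample density in the active category, with the usual p/q weighting, keeps the grid-box average unbiased while cutting sampling noise.

```python
import random

def process_rate(x):
    """Toy subgrid process rate: nonzero only in a narrow 'rain evaporation' region."""
    return 10.0 if x >= 0.9 else 0.0
# True grid-box average over [0, 1]: 0.1 * 10.0 = 1.0

def importance_estimate(n, p_cat_b=0.8, seed=0):
    """Two-category importance sampler: category A = [0, 0.9), B = [0.9, 1.0].
    A fraction p_cat_b of the sample points is prescribed to land in B."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        if rng.random() < p_cat_b:            # oversample the active category
            x = rng.uniform(0.9, 1.0)
            q = p_cat_b / 0.1                 # sampling density at x
        else:
            x = rng.uniform(0.0, 0.9)
            q = (1.0 - p_cat_b) / 0.9
        total += process_rate(x) / q          # target density p(x) = 1 on [0, 1]
    return total / n

est = importance_estimate(2000)
```

Uniform Monte Carlo would waste 90% of its evaluations where the rate is zero; concentrating points in the active category and reweighting recovers the same average with far fewer useful function calls, which is the efficiency argument the abstract makes.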
Explicit Integration of Extremely Stiff Reaction Networks: Asymptotic Methods
Guidry, Mike W; Budiardja, R.; Feger, E.; Billings, J. J.; Hix, William Raphael; Messer, O.E.B.; Roche, K. J.; McMahon, E.; He, M.
2013-01-01
We show that, even for extremely stiff systems, explicit integration may compete in both accuracy and speed with implicit methods if algebraic methods are used to stabilize the numerical integration. The stabilizing algebra differs for systems well removed from equilibrium and those near equilibrium. This paper introduces a quantitative distinction between these two regimes and addresses the former case in depth, presenting explicit asymptotic methods appropriate when the system is extremely stiff but only weakly equilibrated. A second paper [1] examines quasi-steady-state methods as an alternative to asymptotic methods in systems well away from equilibrium, and a third paper [2] extends these methods to equilibrium conditions in extremely stiff systems using partial equilibrium methods. All three papers present systematic evidence for timesteps competitive with implicit methods. Because explicit methods can execute a timestep faster than an implicit method, our results imply that algebraically stabilized explicit algorithms may offer a means of integrating larger networks than has previously been feasible in various disciplines.
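The flavor of algebraic stabilization the abstract describes can be sketched for a single stiff equation dy/dt = F - k*y (one "species" with creation rate F and destruction rate k*y; the equation and all values are illustrative, not taken from the paper):

```python
# An explicit asymptotic update versus standard explicit Euler on a
# stiff linear equation. Illustrative values only.
F, k = 1.0, 1.0e6               # stiff: destruction timescale 1/k = 1e-6 s
dt = 1.0e-3                     # 500x beyond the explicit limit 2/k

def euler_step(y):
    return y + dt * (F - k * y)             # standard explicit Euler

def asymptotic_step(y):
    # Asymptotic update: the destruction term is folded into the
    # denominator algebraically, so the step stays explicit but stable.
    return (y + dt * F) / (1.0 + dt * k)

y_e = y_a = 0.0
for _ in range(10):
    y_e = euler_step(y_e)
    y_a = asymptotic_step(y_a)
# y_a relaxes stably toward the equilibrium F/k = 1e-6; y_e diverges.
```

The asymptotic step is pure algebra per timestep, which is why such schemes can beat implicit solvers on wall-clock time even at comparable timestep sizes.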
NASA Astrophysics Data System (ADS)
Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens
2015-04-01
The planning and implementation of effective water resources management strategies need an assessment of multiple (physical, environmental, and socio-economic) issues, and often require new research in which knowledge from diverse disciplines is combined in a unified methodological and operational framework. Such integrative research to link different knowledge domains faces several practical challenges. These complexities are further compounded by multiple actors, frequently with conflicting interests, and by multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrates physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. The proposed approach is then applied to a water management problem in a water-scarce coastal arid region of northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected the aquifer's sustainability, endangering the associated socio-economic conditions as well as the traditional social structure. Results from the developed method provide key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, the approach enables systematic quantification of both the probabilistic and the fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool shows that the decision makers' risk-averse or risk-taking attitudes may yield different rankings of the decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.
ePRISM: A case study in multiple proxy and mixed temporal resolution integration
Robinson, Marci M.; Dowsett, Harry J.
2010-01-01
As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (~3.3 to 3.0 Ma), yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records of multiple paleotemperature proxies.
Characterization of multiple-bit errors from single-ion tracks in integrated circuits
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Edmonds, L. D.; Smith, L. S.
1989-01-01
The spread of charge induced by an ion track in an integrated circuit and its subsequent collection at sensitive nodal junctions can cause multiple-bit errors. The authors have experimentally and analytically investigated this phenomenon using a 256-kb dynamic random-access memory (DRAM). The effects of different charge-transport mechanisms are illustrated, and two classes of ion-track multiple-bit error clusters are identified. It is demonstrated that ion tracks that hit a junction can affect the lateral spread of charge, depending on the nature of the pull-up load on the junction being hit. Ion tracks that do not hit a junction allow the nearly uninhibited lateral spread of charge.
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance of technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. PMID:26677817
NASA Astrophysics Data System (ADS)
Dahlin, K.; Asner, G. P.
2010-12-01
The ability to map plant species distributions has long been one of the key goals of terrestrial remote sensing. Achieving this goal has been challenging, however, due to technical constraints and the difficulty in relating remote observations to ground measurements. Advances in both the types of data that can be collected remotely and in available analytical tools like multiple endmember spectral mixture analysis (MESMA) are allowing for rapid improvements in this field. In 2007 the Carnegie Airborne Observatory (CAO) acquired high resolution lidar and hyperspectral imagery of Jasper Ridge Biological Preserve (Woodside, California). The site contains a mosaic of vegetation types, from grassland to chaparral to evergreen forest. To build a spectral library, 415 GPS points were collected in the field, made up of 44 plant species, six plant categories (for nonphotosynthetic vegetation), and four substrate types. Using the lidar data to select the most illuminated pixels as seen from the aircraft (based on canopy shape and viewing angle), we then reduced the spectral library to only the most fully lit pixels. To identify individual plant species in the imagery, first the hyperspectral data was used to calculate the normalized difference vegetation index (NDVI), and then pixels with an NDVI less than 0.15 were removed from further analysis. The remaining image was stratified into five classes based on vegetation height derived from the lidar data. For each class, a suite of possible endmembers was identified and then three endmember selection procedures (endmember average RMS, minimum average spectral angle, and count based endmember selection) were employed to select the most representative endmembers from each species in each class. Two and three endmember models were then applied and each pixel was assigned a species or plant category based on the highest endmember fraction. To validate the approach, an independent set of 200 points was collected throughout the
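The NDVI masking step described above is simple enough to sketch; the (red, NIR) reflectance values below are invented for illustration:

```python
# NDVI = (NIR - Red) / (NIR + Red); pixels below 0.15 are dropped from
# further analysis, as in the workflow described in the abstract.
def ndvi(red, nir):
    return (nir - red) / (nir + red)

# (red, nir) reflectance pairs: vegetation, bare soil, water-like pixel
pixels = [(0.05, 0.40), (0.30, 0.35), (0.08, 0.09)]
kept = [p for p in pixels if ndvi(*p) >= 0.15]   # keep vegetated pixels
# Only the first pixel (NDVI ~ 0.78) survives the 0.15 threshold.
```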
Liu, Kevin F R
2007-05-01
While pursuing economic development, countries around the world have become aware of the importance of environmental sustainability; therefore, the evaluation of environmental sustainability has become a significant issue. Traditionally, multiple-criteria decision-making (MCDM) was widely used as a way of evaluating environmental sustainability. Recently, several researchers have attempted to implement this evaluation with fuzzy logic, since they recognized the assessment of environmental sustainability as a subjective, intuitive judgment. This paper outlines a new evaluation framework for environmental sustainability, which integrates fuzzy logic into MCDM. The framework consists of 36 structured and 5 unstructured decision points, wherein MCDM is used to handle the former and fuzzy logic serves for the latter. With the integrated evaluation framework, the environmental sustainability of 146 countries is calculated, ranked and clustered, and the evaluation results are very helpful to these countries in identifying their obstacles to environmental sustainability. PMID:17377731
[Treatment methods in multiple hepatic, pulmonary and pleural hydatidosis].
Galie, N; Stoica, R; Cadar, G; Posea, R; Brânzea, R
2001-01-01
The modern treatment of hepato-pleuro-pulmonary hydatidosis is based on surgical excision and medical treatment with drugs such as mebendazole, albendazole or praziquantel. We present 23 patients with multiple hepato-pleuro-pulmonary hydatidosis operated on in the last 4 years. Surgical excision in pleuro-pulmonary hydatidosis is adapted to the cyst topography, aiming to remove the intact cysts and to close the remaining cavities. PMID:11374380
Gain enhancement methods for printed circuit antennas through multiple superstrates
NASA Astrophysics Data System (ADS)
Yang, H. Y.; Alexopoulos, Nicolaos G.
1987-07-01
Reciprocity and a transmission line model are used to determine the radiation properties of printed circuit antennas (PCA's) in a multilayered material configuration. It is demonstrated that extremely high directive gain may result at any scan angle, with practical materials, if the thickness of the substrate and multiple superstrate layers is chosen properly. This model is also used to analyze the radiation characteristics of printed circuit antennas in inhomogeneous substrates.
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Scott, S.
2013-12-01
While there has been a convergence towards a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers that aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents the work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed in support of the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases are used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into
Entropy-based method to evaluate the data integrity
NASA Astrophysics Data System (ADS)
Peng, Xu; Tianyu, Ma; Yongjie, Jin
2006-12-01
The projection stage of single-photon emission computed tomography (SPECT) is discussed to analyze the characteristics of information transmission and to evaluate data integrity. Information is transferred from the source to the detector in the photon-emitting process. In the projection stage, the integrity of the projection data can be assessed by an information entropy, namely the conditional entropy, which represents the average uncertainty about the source object given the projection data. Simulations were performed to study projection data of emission computed tomography with a pinhole collimator. Several types of collimators were treated. Results demonstrate that the conditional entropy reflects the data integrity and indicates how the algorithms are matched or mismatched to the geometry. A new method for assessing data integrity is thus devised to help decision makers improve the quality of image reconstruction.
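A toy illustration of conditional entropy as an integrity measure (this is not the paper's SPECT model; both joint distributions below are invented): the less uncertainty about the source remains after observing the data, the higher the data integrity.

```python
import math

def conditional_entropy(joint):
    """joint[s][d] = P(source=s, data=d); returns H(S|D) in bits."""
    n_d = len(joint[0])
    p_d = [sum(row[d] for row in joint) for d in range(n_d)]
    h = 0.0
    for row in joint:
        for d, p_sd in enumerate(row):
            if p_sd > 0.0:
                h -= p_sd * math.log2(p_sd / p_d[d])
    return h

# Perfect projection: each source state maps to a distinct data value.
perfect = [[0.5, 0.0], [0.0, 0.5]]
# Degenerate projection: the data carry no information about the source.
useless = [[0.25, 0.25], [0.25, 0.25]]
print(conditional_entropy(perfect))   # 0.0: no residual uncertainty
print(conditional_entropy(useless))   # 1.0: one full bit of uncertainty
```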
NASA Astrophysics Data System (ADS)
Sica, R. J.; Haefele, A.
2014-12-01
The measurement of temperature in the middle atmosphere with Rayleigh-scatter lidars is an important technique for assessing atmospheric change. Current schemes for retrieving these temperatures have several shortcomings, which can be overcome using an optimal estimation method (OEM). OEMs are applied to the retrieval of temperature from Rayleigh-scatter lidar measurements using both single- and multiple-channel measurements. Forward models are presented that completely characterize the measurement and allow the simultaneous retrieval of temperature, dead time and background. The method allows a full uncertainty budget to be obtained on a per-profile basis that includes, in addition to the statistical uncertainties, the smoothing error and uncertainties due to Rayleigh extinction, ozone absorption, the lidar constant, nonlinearity in the counting system, variation of the Rayleigh-scatter cross section with altitude, pressure, acceleration due to gravity and the variation of mean molecular mass with altitude. The vertical resolution of the temperature profile is found at each height, and a quantitative determination is made of the maximum height to which the retrieval is valid. A single temperature profile can be retrieved from measurements with multiple channels that cover different height ranges, vertical resolutions and even different detection methods. The OEM employed is shown to give robust estimates of temperature consistent with previous methods, while requiring minimal computational time. This demonstrated success of lidar temperature retrievals using an OEM opens new possibilities in atmospheric science for measurement integration between active and passive remote sensing instruments. We are currently working on extending our method to simultaneously retrieve water vapour and temperature using Raman-scatter lidar measurements.
NASA Technical Reports Server (NTRS)
Schneider, Harold
1959-01-01
This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i-th volume element or subregion that will make their next collision in the j-th volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher-order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P3 spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
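The transfer-probability idea can be sketched with a two-subregion toy model (all numbers are invented): iterating the successive-collision densities converges to the solution of the linear simultaneous equations mentioned in the abstract.

```python
# P[i][j] is the probability that a neutron colliding in subregion i
# makes its next collision in subregion j; rows sum to less than 1
# because of absorption and leakage.
P = [[0.4, 0.2],
     [0.2, 0.4]]
source = [1.0, 0.0]          # first-collision density from the source

density = source[:]          # current collision generation
total = source[:]            # accumulated collision density
for _ in range(50):          # follow successive collision generations
    density = [sum(density[i] * P[i][j] for i in range(2))
               for j in range(2)]
    total = [t + d for t, d in zip(total, density)]
# 'total' converges to the solution of the linear simultaneous equations
# c_j = s_j + sum_i c_i * P[i][j], here (1.875, 0.625).
```

For a thick, weakly absorbing slab the generations decay slowly, which is exactly why the abstract switches from this successive-collisions iteration to solving the linear system directly.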
NASA Astrophysics Data System (ADS)
Pearson, L. W.; Whitaker, R. A.
1991-02-01
The transverse-aperture/integral-equation method provides a means of computing diffraction coefficients at blunt edges of a broad class of stratified layers, including sheet-anisotropy models for conducting composites. This paper concentrates on the application of the method when the material profile comprises layers of homogeneous, potentially lossy material. The method proceeds from defining an artificial aperture perpendicular to a semi-infinite, planar, stratified region and passing through the terminal edge of the region. An integral equation is formulated over this infinite-extent aperture, and the solution to the integral equation represents the influence of the edge. The kernel in the integral equation is a weighted sum of the Green functions for the respective half spaces lying on either side of the aperture plane. The vector wave equation is separable in each of these half spaces, resulting in Green functions that are expressible analytically. The Green function for the stratified half space is stated in terms of a Sommerfeld-type integral.
Comparison of Four Methods for Weighting Multiple Predictors.
ERIC Educational Resources Information Center
Aamodt, Michael G.; Kimbrough, Wilson W.
1985-01-01
Four methods were used to weight predictors associated with a Resident Assistant job: (1) rank order weights; (2) unit weights; (3) critical incident weights; and (4) regression weights. A cross-validation was also done. Most weighting methods were highly related. No method was superior in terms of protection from validity shrinkage. (GDC)
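Two of the four schemes are easy to sketch on invented predictor scores; the high correlation between the resulting composites mirrors the finding that most weighting methods were highly related.

```python
# Unit weights (every predictor weighted 1) versus rank-order weights
# (assumed importance ranks 3 > 2 > 1). Scores are hypothetical.
def composite(scores, weights):
    return [sum(w * x for w, x in zip(weights, row)) for row in scores]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# rows: candidates; columns: three hypothetical predictor scores
scores = [[4, 3, 5], [2, 4, 3], [5, 5, 4], [1, 2, 2], [3, 1, 4]]
unit = composite(scores, [1, 1, 1])
ranked = composite(scores, [3, 2, 1])
r = pearson(unit, ranked)   # close to 1: both schemes order the
                            # candidates almost identically
```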
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1998-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
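A software sketch of the signal chain the patent describes in hardware: integrate a sampled acceleration signal to velocity, then compare the velocity amplitude to a selected trip point. The sample rate, vibration parameters and trip point below are invented for illustration.

```python
import math

FS = 1000.0                        # sample rate, Hz
F_VIB = 50.0                       # vibration frequency, Hz
A_PEAK = 10.0                      # acceleration amplitude, m/s^2
TRIP_POINT = 0.03                  # velocity alert threshold, m/s

accel = [A_PEAK * math.sin(2 * math.pi * F_VIB * i / FS)
         for i in range(1000)]

# trapezoidal integration of acceleration to velocity
vel, v = [], 0.0
for i in range(1, len(accel)):
    v += 0.5 * (accel[i - 1] + accel[i]) / FS
    vel.append(v)

v_amp = (max(vel) - min(vel)) / 2  # velocity amplitude
alert = v_amp > TRIP_POINT         # digitally compatible alert output
# analytic check: peak velocity = A_PEAK / (2*pi*F_VIB) ~ 0.032 m/s
```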
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1996-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
Predicted PAR1 inhibitors from multiple computational methods
NASA Astrophysics Data System (ADS)
Wang, Ying; Liu, Jinfeng; Zhu, Tong; Zhang, Lujia; He, Xiao; Zhang, John Z. H.
2016-08-01
Multiple computational approaches are employed in order to find potentially strong binders of PAR1 from two molecular databases: the Specs database containing more than 200,000 commercially available molecules and the traditional Chinese medicine (TCM) database. By combining the use of popular docking scoring functions together with detailed molecular dynamics simulation and protein-ligand free energy calculations, a total of fourteen molecules are found to be potentially strong binders of PAR1. The atomic details of the protein-ligand interactions of these molecules with PAR1 are analyzed to help understand the binding mechanism, which should be very useful in the design of new drugs.
Structure of the EGF receptor transactivation circuit integrates multiple signals with cell context
Joslin, Elizabeth J.; Shankaran, Harish; Opresko, Lee K.; Bollinger, Nikki; Lauffenburger, Douglas A.; Wiley, H. S.
2010-05-10
Transactivation of the epidermal growth factor receptor (EGFR) has been proposed to be a mechanism by which a variety of cellular inputs can be integrated into a single signaling pathway, but the regulatory topology of this important system is unclear. To understand the transactivation circuit, we first created a “non-binding” reporter for ligand shedding. We then quantitatively defined how signals from multiple agonists were integrated both upstream and downstream of the EGFR into the extracellular signal regulated kinase (ERK) cascade in human mammary epithelial cells. We found that transactivation is mediated by a recursive autocrine circuit where ligand shedding drives EGFR-stimulated ERK that in turn drives further ligand shedding. The time from shedding to ERK activation is fast (<5 min) whereas the recursive feedback is slow (>15 min). Simulations showed that this delay in positive feedback greatly enhanced system stability and robustness. Our results indicate that the transactivation circuit is constructed so that the magnitude of ERK signaling is governed by the sum of multiple direct inputs, while recursive, autocrine ligand shedding controls signal duration.
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculations use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which there are direct calculus relations; i.e., in strength of materials the bending moment may be computed at discrete points using the graphical integration of the shear-force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of this work is to introduce our studies of the calculus of integrals over transverse-section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods to execute such calculations in structural studies. Thus, we define a Boolean algebra which operates with 'simple'-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every 'simple' shape (-1 for the shapes to be subtracted). By 'simple' or 'basic' shape we mean either shapes for which there are direct calculus relations, or domains whose frontiers are approximated by known functions, with the corresponding calculus carried out by an algorithm. The 'basic' shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of 'basic' shapes include rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain-triangularization methods suggested that another 'basic' shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
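The signed 'simple shape' algebra can be sketched as follows (the class and parameter names are ours, not the authors'): a composite cross-section is a list of basic shapes, each carrying a sign (+1 added, -1 subtracted), and section integrals become signed sums over the list.

```python
class Rect:
    """Axis-aligned rectangle of width w, height h, centroid height yc,
    carrying a sign: +1 for added material, -1 for a cut-out."""
    def __init__(self, w, h, yc, sign=+1):
        self.w, self.h, self.yc, self.sign = w, h, yc, sign
    def area(self):
        return self.sign * self.w * self.h
    def first_moment(self):           # integral of y dA about y = 0
        return self.area() * self.yc

def section_properties(shapes):
    a = sum(s.area() for s in shapes)
    q = sum(s.first_moment() for s in shapes)
    return a, q / a                   # total area and centroid height

# 100x200 rectangle with a 60x100 hole, both centered at y = 100:
area, ybar = section_properties([Rect(100, 200, yc=100),
                                 Rect(60, 100, yc=100, sign=-1)])
print(area, ybar)   # 14000 100.0
```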
An Integrated Approach to Research Methods and Capstone
ERIC Educational Resources Information Center
Postic, Robert; McCandless, Ray; Stewart, Beth
2014-01-01
In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…
A Five-Year Journey: Integrating Teacher Education Methods Courses.
ERIC Educational Resources Information Center
Wright, Eileen; And Others
1996-01-01
Describes one college's program requiring preservice elementary educators to take their methods courses in an integrated block during one semester before student teaching, noting pitfalls of and advantages to this network of classes and reporting data collected from cooperating classroom teachers who subsequently had these student teachers in…
Detection method for dissociation of multiple-charged ions
Smith, Richard D.; Udseth, Harold R.; Rockwood, Alan L.
1991-01-01
Dissociations of multiple-charged ions are detected and analyzed by charge-separation tandem mass spectrometry. Analyte molecules are ionized to form multiple-charged parent ions. A particular parent ion charge state is selected in a first-stage mass spectrometer and its mass-to-charge ratio (M/Z) is detected to determine its mass and charge. The selected parent ions are then dissociated, each into a plurality of fragments including a set of daughter ions each having a mass of at least one molecular weight and a charge of at least one. Sets of daughter ions resulting from the dissociation of one parent ion (sibling ions) vary in number but typically include two to four ions, one or more multiply-charged. A second-stage mass spectrometer detects the mass-to-charge ratio (m/z) of the daughter ions and a temporal or temporo-spatial relationship among them. This relationship is used to correlate the daughter ions to determine which (m/z) ratios belong to a set of sibling ions. Values of mass and charge of each of the sibling ions are determined simultaneously from their respective (m/z) ratios such that the sibling ion charges are integers and sum to the parent ion charge.
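The final determination step, finding integer sibling charges that sum to the parent charge while conserving mass, can be sketched as a small search (all values below are hypothetical and the proton mass is neglected for simplicity):

```python
from itertools import product

def assign_charges(parent_mass, parent_z, mz_values, tol=0.5):
    """Search integer charge assignments for a correlated sibling set:
    charges must sum to the parent charge, and the implied masses
    (z * m/z) must sum to the parent mass within a tolerance."""
    for charges in product(range(1, parent_z + 1), repeat=len(mz_values)):
        if sum(charges) != parent_z:
            continue
        masses = [z * mz for z, mz in zip(charges, mz_values)]
        if abs(sum(masses) - parent_mass) < tol:
            return list(charges), masses
    return None

# hypothetical parent of mass 10000 Da and charge 5+, two sibling ions:
charges, masses = assign_charges(10000.0, 5, [3000.0, 500.0])
print(charges, masses)   # [3, 2] [9000.0, 1000.0]
```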
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
Rice, Glenn; Teuschler, Linda; MacDonel, Margaret; Butler, Jim; Finster, Molly; Hertzberg, Rick; Harou, Lynne
2007-07-01
Available in abstract form only. Full text of publication follows: As information about environmental contamination has increased in recent years, so has public interest in the combined effects of multiple contaminants. This interest has been highlighted by recent tragedies such as the World Trade Center disaster and Hurricane Katrina. In fact, assessing multiple contaminants, exposures, and effects has long been an issue for contaminated sites, including U.S. Department of Energy (DOE) legacy waste sites. Local citizens have explicitly asked the federal government to account for cumulative risks, with contaminants moving offsite via groundwater flow, surface runoff, and air dispersal being a common emphasis. Multiple exposures range from ingestion and inhalation to dermal absorption and external gamma irradiation. Three types of concerns can lead to cumulative assessments: (1) specific sources or releases - e.g., industrial facilities or accidental discharges; (2) contaminant levels - in environmental media or human tissues; and (3) elevated rates of disease - e.g., asthma or cancer. The specific initiator frames the assessment strategy, including a determination of appropriate models to be used. Approaches are being developed to better integrate a variety of data, extending from environmental to internal co-location of contaminants and combined effects, to support more practical assessments of cumulative health risks. (authors)
Encrypting three-dimensional information system based on integral imaging and multiple chaotic maps
NASA Astrophysics Data System (ADS)
Xing, Yan; Wang, Qiong-Hua; Xiong, Zhao-Long; Deng, Huan
2016-02-01
An encrypting three-dimensional (3-D) information system based on integral imaging (II) and multiple chaotic maps is proposed. In the encrypting process, the elemental image array (EIA) which represents spatial and angular information of the real 3-D scene is picked up by a microlens array. Subsequently, the R, G, and B color components decomposed from the EIA are encrypted using multiple chaotic maps. Finally, these three encrypted components are interwoven to obtain the cipher information. The decryption process implements the reverse operation of the encryption process for retrieving the high-quality 3-D images. Since the encrypted EIA has the data redundancy property due to II, and all parameters of the pickup part are secret keys of the encrypting system, the system's sensitivity to changes in the plaintext and secret keys can be significantly improved. Moreover, the algorithm based on multiple chaotic maps can effectively enhance the security. A preliminary experiment is carried out, and the experimental results verify the effectiveness, robustness, and security of the proposed system.
NASA Astrophysics Data System (ADS)
Congdon, Peter
2010-03-01
This paper describes a structural equation methodology for obtaining social capital scores for survey subjects from multiple indicators of social support, neighbourhood and trust perceptions, and memberships of organizations. It adjusts for variation that is likely to occur in levels of social capital according to geographic context (e.g. level of area deprivation, geographic region, level of urbanity) and demographic group. Social capital is used as an explanatory factor for psychological distress using data from the 2006 Health Survey for England. A highly significant effect of social capital in reducing the chance of psychiatric caseness is obtained after controlling for other individual and geographic risk factors. Allowing for social capital has considerable effects on the impacts on psychiatric health of other risk factors. In particular, the impact of area deprivation category is much reduced. There is also evidence of significant differentiation in social capital between population categories and geographic contexts.
The Boundary Integral Equation Method for Porous Media Flow
NASA Astrophysics Data System (ADS)
Anderson, Mary P.
Just as groundwater hydrologists are breathing sighs of relief after the exertions of learning the finite element method, a new technique has reared its nodes—the boundary integral equation method (BIEM) or the boundary equation method (BEM), as it is sometimes called. As Liggett and Liu put it in the preface to The Boundary Integral Equation Method for Porous Media Flow, “Lately, the Boundary Integral Equation Method (BIEM) has emerged as a contender in the computation Derby.” In fact, in July 1984, the 6th International Conference on Boundary Element Methods in Engineering will be held aboard the Queen Elizabeth II, en route from Southampton to New York. These conferences are sponsored by the Department of Civil Engineering at Southampton College (UK), whose members are proponents of BIEM. The conferences have featured papers on applications of BIEM to all aspects of engineering, including flow through porous media. Published proceedings are available, as are textbooks on application of BIEM to engineering problems. There is even a 10-minute film on the subject.
NASA Technical Reports Server (NTRS)
Boldman, D. R.; Schmidt, J. F.; Ehlers, R. C.
1972-01-01
An empirical modification of an existing integral energy turbulent boundary layer method is proposed in order to improve the estimates of local heat transfer in converging-diverging nozzles and consequently, provide better assessments of the total or integrated heat transfer. The method involves the use of a modified momentum-heat analogy which includes an acceleration term comprising the nozzle geometry and free stream velocity. The original and modified theories are applied to heat transfer data from previous studies which used heated air in 30 deg - 15 deg, 45 deg - 15 deg, and 60 deg - 15 deg water-cooled nozzles.
Signed Decomposition Method for Scalar Multiplication in Elliptic Curve Cryptography
NASA Astrophysics Data System (ADS)
Said, M. R. M.; Mohamed, M. A.; Atan, K. A. Mohd; Zulkarnain, Z. Ahmad
2010-11-01
Addition chains address the computability constraints of large-number arithmetic. In elliptic curve cryptography, point arithmetic on an elliptic curve can be reduced to repeated addition and doubling operations. Based on this idea, various methods have been proposed; recently, a decomposition method based on prime decomposition was put forward. This method uses a pre-generated set of rules to calculate an addition chain for n. Although the method has advantages over others in some cases, there is still room for improvement. We develop an enhanced version, called the signed decomposition method, which takes a rule from the decomposition method as its input. We also generalize the idea of a prime rule to an integer rule, and improve the original add rule of the decomposition method by allowing subtraction of terms, thereby optimizing its original form. The results show not only an improvement over the decomposition method but consistent superiority over preceding methods. Furthermore, keeping the secret key in the form of a rule adds extra security to the message under communication.
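The benefit of allowing subtraction can be seen with the standard non-adjacent form (NAF), a signed binary decomposition with digits in {-1, 0, 1}. This is a generic illustration of why signed decompositions shorten chains, not the authors' rule-based method: on a curve, each nonzero digit costs one point addition (or subtraction, which is equally cheap), so fewer nonzero digits means fewer operations.

```python
def naf(k):
    """Non-adjacent form of a positive integer k: signed binary digits in
    {-1, 0, 1}, least-significant first, with no two adjacent nonzeros."""
    digits = []
    while k > 0:
        if k % 2:
            d = 2 - (k % 4)   # choose +1 or -1 so that (k - d) % 4 == 0
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

def scalar_mult_cost(k):
    """Point additions needed by plain binary vs NAF double-and-add."""
    binary_adds = bin(k).count('1') - 1
    naf_adds = sum(1 for d in naf(k) if d) - 1
    return binary_adds, naf_adds

# 63 = 0b111111 needs 5 additions in binary, but 63 = 64 - 1 needs only 1:
print(scalar_mult_cost(63))  # (5, 1)
```

Runs of ones in the binary expansion, the worst case for plain double-and-add, collapse to a single subtraction in the signed form, which is the same effect the signed decomposition method exploits at the level of rules.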
The finite element method: Is weighted volume integration essential?
NASA Astrophysics Data System (ADS)
Narasimhan, T. N.
In developing finite element equations for steady state and transient diffusion-type processes, weighted volume integration is generally assumed to be an intrinsic requirement. It is shown that such finite element equations can be developed directly and with ease on the basis of the elementary notion of a surface integral. Although weighted volume integration is mathematically correct, the algebraic equations stemming from it are no more informative than those derived directly on the basis of a surface integral. An interesting upshot is that the derivation based on surface integration does not require knowledge of a partial differential equation, yet is logically rigorous. It is commonly stated that weighted volume integration of the differential equation helps one carry out analyses of errors, convergence and existence, and that weighted volume integration is therefore preferable. It is suggested that because the direct derivation is logically consistent, numerical solutions emanating from it must be testable for accuracy and internal consistency in ways whose style may differ from the classical procedures of error and convergence analysis. In addition to simplifying the teaching of the finite element method, the thoughts presented in this paper may lead to establishing the finite element method independently in its own right, rather than as a surrogate of the differential equation. The purpose of this paper is not to espouse any one particular way of formulating the finite element equations. Rather, it is one of introspection. The desire is to critically examine our traditional way of doing things and inquire whether alternate approaches may reveal new and interesting insights.
ERIC Educational Resources Information Center
Tang, Kok-Sing; Delgado, Cesar; Moje, Elizabeth Birr
2014-01-01
This paper presents an integrative framework for analyzing science meaning-making with representations. It integrates the research on multiple representations and multimodal representations by identifying and leveraging the differences in their units of analysis in two dimensions: timescale and compositional grain size. Timescale considers the…
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for linear quadrilateral elements is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.
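The critical-time-step constraint that motivates mixing implicit and explicit updates can be illustrated with the simplest possible case, an explicit forward Euler step of the 1-D heat equation. This is a generic finite-difference sketch, not the paper's coupled thermal-structural finite element scheme:

```python
def explicit_heat_step(u, alpha, dx, dt):
    """One explicit (forward Euler) step of the 1-D heat equation with
    fixed end temperatures.  Stable only when dt <= dx**2 / (2 * alpha):
    beyond that critical step the solution oscillates and blows up, which
    is exactly the limit an implicit update would remove."""
    r = alpha * dt / dx**2
    return [u[0]] + [u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

# A hot spot diffusing along a rod, stepped exactly at the critical dt:
u = [0.0] * 4 + [1.0] + [0.0] * 4
dt_crit = 0.1**2 / (2 * 1.0)      # dx = 0.1, alpha = 1
for _ in range(200):
    u = explicit_heat_step(u, 1.0, 0.1, dt_crit)
```

In a mixed method, stiff thermal regions with small element sizes (hence tiny critical steps) would be advanced implicitly while the rest of the mesh keeps the cheap explicit update.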
Multiple cell radiation detector system, and method, and submersible sonde
Johnson, Larry O.; McIsaac, Charles V.; Lawrence, Robert S.; Grafwallner, Ervin G.
2002-01-01
A multiple cell radiation detector includes a central cell having a first cylindrical wall providing a stopping power less than an upper threshold; an anode wire suspended along a cylindrical axis of the central cell; a second cell having a second cylindrical wall providing a stopping power greater than a lower threshold, the second cylindrical wall being mounted coaxially outside of the first cylindrical wall; a first end cap forming a gas-tight seal at first ends of the first and second cylindrical walls; a second end cap forming a gas-tight seal at second ends of the first and second cylindrical walls; and a first group of anode wires suspended between the first and second cylindrical walls.
Material mechanical characterization method for multiple strains and strain rates
Erdmand, III, Donald L.; Kunc, Vlastimil; Simunovic, Srdjan; Wang, Yanli
2016-01-19
A specimen for measuring a material under multiple strains and strain rates. The specimen including a body having first and second ends and a gage region disposed between the first and second ends, wherein the body has a central, longitudinal axis passing through the first and second ends. The gage region includes a first gage section and a second gage section, wherein the first gage section defines a first cross-sectional area that is defined by a first plane that extends through the first gage section and is perpendicular to the central, longitudinal axis. The second gage section defines a second cross-sectional area that is defined by a second plane that extends through the second gage section and is perpendicular to the central, longitudinal axis and wherein the first cross-sectional area is different in size than the second cross-sectional area.
Classification accuracy across multiple tests following item method directed forgetting.
Goernert, Phillip N; Widner, Robert L; Otani, Hajime
2007-09-01
We investigated recall of line-drawing pictures paired at study with an instruction either to remember (TBR items) or to forget (TBF items). Across three 7-minute tests, net recall (items reported independent of accuracy in instructional designation) and correctly classified recall (recall conditional on correct instructional designation) showed directed forgetting. That is, for both measures, recall of TBR items always exceeded recall of TBF items. Net recall for both item types increased across tests at comparable levels showing hypermnesia. However, across tests, correct classification of both item types decreased at comparable levels. Collectively, hypermnesia as measured by net recall is possible for items from multiple sets, but at the cost of accurate source information. PMID:17676551
Yoga as a method of symptom management in multiple sclerosis
Frank, Rachael; Larimore, Jennifer
2015-01-01
Multiple Sclerosis (MS) is an immune-mediated process in which the body's immune system damages myelin in the central nervous system (CNS). The onset of this disorder typically occurs in young adults, and it is more common among women. Currently, there is no cure and the long-term disease progression makes symptomatic management critical for maintaining quality of life. Several pharmacotherapeutic agents are approved for treatment, but many patients seek complementary and alternative interventions. Reviews have been conducted regarding broad topics such as mindfulness-based interventions for people diagnosed with MS and the impact of yoga on a range of neurological disorders. The objective of the present review is to examine the potential benefits of yoga for individuals with MS and address its use in managing symptoms including pain, mental health, fatigue, spasticity, balance, bladder control, and sexual function. PMID:25983675
Metcalf, Jessica L.; Prost, Stefan; Nogués-Bravo, David; DeChaine, Eric G.; Anderson, Christian; Batra, Persaram; Araújo, Miguel B.; Cooper, Alan; Guralnick, Robert P.
2014-01-01
One of the grand goals of historical biogeography is to understand how and why species' population sizes and distributions change over time. Multiple types of data drawn from disparate fields, combined into a single modelling framework, are necessary to document changes in a species's demography and distribution, and to determine the drivers responsible for change. Yet truly integrated approaches are challenging and rarely performed. Here, we discuss a modelling framework that integrates spatio-temporal fossil data, ancient DNA, palaeoclimatological reconstructions, bioclimatic envelope modelling and coalescence models in order to statistically test alternative hypotheses of demographic and potential distributional changes for the iconic American bison (Bison bison). Using different assumptions about the evolution of the bioclimatic niche, we generate hypothetical distributional and demographic histories of the species. We then test these demographic models by comparing the genetic signature predicted by serial coalescence against sequence data derived from subfossils and modern populations. Our results supported demographic models that include both climate and human-associated drivers of population declines. This synthetic approach, integrating palaeoclimatology, bioclimatic envelopes, serial coalescence, spatio-temporal fossil data and heterochronous DNA sequences, improves understanding of species' historical biogeography by allowing consideration of both abiotic and biotic interactions at the population level. PMID:24403338
Method for analyzing radiation sensitivity of integrated circuits
NASA Technical Reports Server (NTRS)
Gauthier, M. K.; Stanley, A. G. (Inventor)
1979-01-01
A method is described for analyzing the radiation sensitivity of an integrated circuit to identify its sensitive components, by applying a narrow radiation beam to portions of the circuit. The circuit is operated under normal bias conditions during the application of radiation in a dosage that is likely to cause malfunction of at least some transistors, while the circuit is monitored for failure of the irradiated transistor. When a radiation sensitive transistor is found, the radiation beam is further narrowed and, using a fresh integrated circuit, a very narrow beam is applied to different parts of the transistor, such as its junctions, to locate the points of greatest sensitivity.
Method to integrate full particle orbit in toroidal plasmas
NASA Astrophysics Data System (ADS)
Wei, X. S.; Xiao, Y.; Kuley, A.; Lin, Z.
2015-09-01
It is important to integrate the full particle orbit accurately when studying charged particle dynamics in electromagnetic waves with frequency higher than the cyclotron frequency. We have derived a form of the Boris scheme using magnetic coordinates, which can be used effectively to integrate the cyclotron orbit in toroidal geometry over a long period of time. The new method has been verified by a full particle orbit simulation in toroidal geometry without high frequency waves. The full particle orbit calculation recovers the guiding center banana orbit. This method has better numerical properties than the conventional Runge-Kutta method for conserving particle energy and magnetic moment. The toroidal precession frequency is found to match that from guiding center simulation. Many other important phenomena in the presence of an electric field, such as E × B drift, Ware pinch effect and neoclassical polarization drift are also verified by the full orbit simulation.
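A minimal Cartesian Boris push shows why the scheme conserves energy so well over long runs: the magnetic substep is an exact-magnitude rotation of the velocity, so with E = 0 the speed is preserved to machine precision. The paper's contribution is the magnetic-coordinate formulation; this is only the textbook Cartesian sketch.

```python
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(x, v, E, B, q_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half
    electric kick.  q_m is the charge-to-mass ratio.  The rotation step
    changes only the direction of v, never its magnitude."""
    h = 0.5 * q_m * dt
    v_minus = [vi + h * Ei for vi, Ei in zip(v, E)]
    t = [h * Bi for Bi in B]
    s = [2.0 * ti / (1.0 + sum(tj * tj for tj in t)) for ti in t]
    v_prime = [vm + c for vm, c in zip(v_minus, cross(v_minus, t))]
    v_plus = [vm + c for vm, c in zip(v_minus, cross(v_prime, s))]
    v_new = [vp + h * Ei for vp, Ei in zip(v_plus, E)]
    x_new = [xi + vi * dt for xi, vi in zip(x, v_new)]
    return x_new, v_new

# A particle gyrating in a uniform field B = (0, 0, 1) with no E field:
x, v = [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]
for _ in range(1000):
    x, v = boris_push(x, v, [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], 1.0, 0.1)
```

A Runge-Kutta integrator, by contrast, slowly spirals the speed up or down because each step only approximates the rotation; that secular drift is what the Boris structure avoids.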
A Renormalisation Group Method. I. Gaussian Integration and Normed Algebras
NASA Astrophysics Data System (ADS)
Brydges, David C.; Slade, Gordon
2015-05-01
This paper is the first in a series devoted to the development of a rigorous renormalisation group method for lattice field theories involving boson fields, fermion fields, or both. Our immediate motivation is a specific model, involving both boson and fermion fields, which arises as a representation of the continuous-time weakly self-avoiding walk. In this paper, we define normed algebras suitable for a renormalisation group analysis, and develop methods for performing analysis on these algebras. We also develop the theory of Gaussian integration on these normed algebras, and prove estimates for Gaussian integrals. The concepts and results developed here provide a foundation for the continuation of the method presented in subsequent papers in the series.
The Multiple-Car Method. Exploring Its Use in Driver and Traffic Safety Education. Second Edition.
ERIC Educational Resources Information Center
American Driver and Traffic Safety Education Association, Washington, DC.
Primarily written for school administrators and driver education teachers, this publication presents information on planning and implementing the multiple car method of driver instruction. An introductory section presents a definition of the multiple car method and its history of development. It is defined as an off-street paved area incorporating…
Face recognition using fuzzy integral and wavelet decomposition method.
Kwak, Keun-Chang; Pedrycz, Witold
2004-08-01
In this paper, we develop a method for recognizing face images by combining wavelet decomposition, the Fisherface method, and the fuzzy integral. The proposed approach comprises four main stages. The first stage uses the wavelet decomposition that helps extract intrinsic features of face images. As a result of this decomposition, we obtain four subimages (namely approximation, horizontal, vertical, and diagonal detailed images). The second stage of the approach concerns the application of the Fisherface method to these four decompositions. The choice of the Fisherface method in this setting is motivated by its insensitivity to large variation in light direction, face pose, and facial expression. The last two stages concern the aggregation of the individual classifiers by means of the fuzzy integral. Both the Sugeno and the Choquet types of fuzzy integral are considered as aggregation methods. In the experiments we use n-fold cross-validation to assure high consistency of the produced classification outcomes. The experimental results obtained for the Chungbuk National University (CNU) and Yale University face databases reveal that the approach presented in this paper yields better classification performance in comparison to the results obtained by other classifiers. PMID:15462434
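The Choquet-type aggregation can be sketched compactly: classifier scores are sorted and successive increments are weighted by a fuzzy measure of the set of classifiers still "above" that level. The uniform additive measure used in the example is only a sanity check (it reduces the integral to a plain average); the paper would fit a non-additive measure reflecting each classifier's reliability.

```python
from itertools import combinations

def choquet_integral(scores, measure):
    """Choquet integral of classifier scores with respect to a fuzzy
    measure, given as a dict mapping frozensets of classifier indices to
    values in [0, 1] (with measure of the full set equal to 1).  Scores
    are sorted ascending; each increment is weighted by the measure of
    the classifiers whose score is at least that level."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    remaining = frozenset(range(len(scores)))
    total, prev = 0.0, 0.0
    for i in order:
        total += (scores[i] - prev) * measure[remaining]
        prev = scores[i]
        remaining = remaining - {i}
    return total

# Sanity check: the additive uniform measure g(S) = |S|/n reduces the
# Choquet integral to the arithmetic mean of the scores.
n = 3
measure = {frozenset(c): len(c) / n
           for k in range(n + 1) for c in combinations(range(n), k)}
value = choquet_integral([0.2, 0.9, 0.5], measure)
```

With a non-additive measure, coalitions of classifiers can be worth more (or less) than the sum of their parts, which is exactly what makes the fuzzy integral a richer aggregator than a weighted average.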
Lagerwaard, Frank J. Hoorn, Elles A.P. van der; Verbakel, Wilko; Haasbeek, Cornelis J.A.; Slotman, Ben J.; Senan, Suresh
2009-09-01
Purpose: Volumetric modulated arc therapy (RapidArc [RA]; Varian Medical Systems, Palo Alto, CA) allows for the generation of intensity-modulated dose distributions by use of a single gantry rotation. We used RA to plan and deliver whole-brain radiotherapy (WBRT) with a simultaneous integrated boost in patients with multiple brain metastases. Methods and Materials: Composite RA plans were generated for 8 patients, consisting of WBRT (20 Gy in 5 fractions) with an integrated boost, also 20 Gy in 5 fractions, to brain metastases, and clinically delivered in 3 patients. Summated gross tumor volumes were 1.0 to 37.5 cm³. RA plans were measured in a solid water phantom by use of Gafchromic films (International Specialty Products, Wayne, NJ). Results: Composite RA plans could be generated within 1 hour. Two arcs were needed to deliver the mean of 1,600 monitor units with a mean 'beam-on' time of 180 seconds. RA plans showed excellent coverage of the planning target volume for WBRT and the planning target volume for the boost, with mean volumes receiving at least 95% of the prescribed dose of 100% and 99.8%, respectively. The mean conformity index was 1.36. Composite plans showed much steeper dose gradients outside brain metastases than plans with a conventional summation of WBRT and radiosurgery. Comparison of calculated and measured doses showed a mean gamma for double-arc plans of 0.30, and the area with a gamma larger than 1 was 2%. In-room times for clinical RA sessions were approximately 20 minutes for each patient. Conclusions: RA treatment planning and delivery of integrated plans of WBRT and boosts to multiple brain metastases is a rapid and accurate technique that has a higher conformity index than conventional summation of WBRT and radiosurgery boost.
Adaptive frequency estimation by MUSIC (Multiple Signal Classification) method
NASA Astrophysics Data System (ADS)
Karhunen, Juha; Nieminen, Esko; Joutsensalo, Jyrki
During the last years, the eigenvector-based method called MUSIC has become very popular for estimating the frequencies of sinusoids in additive white noise. Adaptive realizations of the MUSIC method are studied using simulated data. Several of the adaptive realizations seem to give results in practice that are as good as those of the nonadaptive standard realization. The only exceptions are instantaneous gradient type algorithms, which need considerably more samples to achieve comparable performance. A new method is proposed for constructing initial estimates of the signal subspace. The method often dramatically improves the performance of instantaneous gradient type algorithms. The new signal subspace estimate can also be used to define a frequency estimator directly, or to simplify eigenvector computation.
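The nonadaptive standard realization the adaptive variants are compared against can be sketched directly: estimate a correlation matrix from sliding snapshots, split its eigenvectors into signal and noise subspaces, and score candidate frequencies by the inverse projection onto the noise subspace. A textbook sketch (the paper's adaptive algorithms instead update the subspace estimate recursively):

```python
import numpy as np

def music_spectrum(x, n_sines, m, freqs):
    """MUSIC pseudospectrum for real sinusoids in white noise.  Builds an
    m x m sample correlation from sliding length-m snapshots of x, takes
    the noise subspace from its eigendecomposition (2 signal dimensions
    per real sinusoid), and scores each candidate frequency by the
    inverse squared projection of its steering vector onto that subspace."""
    snaps = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = snaps.T @ snaps / len(snaps)
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    noise = vecs[:, : m - 2 * n_sines]     # columns spanning the noise subspace
    k = np.arange(m)
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * k)     # steering vector at frequency f
        spectrum.append(1.0 / (np.linalg.norm(noise.conj().T @ a) ** 2 + 1e-12))
    return np.array(spectrum)

# One sinusoid at normalized frequency 0.1 in 200 noise-free samples:
n = np.arange(200)
x = np.sin(2 * np.pi * 0.1 * n)
freqs = np.linspace(0.02, 0.48, 47)
peak = freqs[int(np.argmax(music_spectrum(x, 1, 8, freqs)))]
```

At the true frequency the steering vector lies (nearly) in the signal subspace, so its noise-subspace projection collapses and the pseudospectrum peaks sharply, which is what gives MUSIC resolution beyond the periodogram.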
[An integrated segmentation method for 3D ultrasound carotid artery].
Yang, Xin; Wu, Huihui; Liu, Yang; Xu, Hongwei; Liang, Huageng; Cai, Wenjuan; Fang, Mengjie; Wang, Yujie
2013-07-01
An integrated segmentation method for 3D ultrasound carotid artery was proposed. The 3D ultrasound image was sliced into transverse, coronal and sagittal 2D images at the carotid bifurcation point. Then, the three images were processed respectively, and the carotid artery contours and thickness were obtained. This method seeks to overcome the disadvantages of current computer-aided diagnosis methods, such as high computational complexity and easily introduced subjective errors. The proposed method can obtain overall carotid artery information rapidly, accurately and completely, and could be translated into clinical use for atherosclerosis diagnosis and prevention. PMID:24195385
Wang, Li; Tu, Zhidong; Sun, Fengzhu
2009-01-01
Background The recently developed RNA interference (RNAi) technology has created an unprecedented opportunity which allows the function of individual genes in whole organisms or cell lines to be interrogated at genome-wide scale. However, multiple issues, such as off-target effects or low efficacies in knocking down certain genes, have produced RNAi screening results that are often noisy and that potentially yield both high rates of false positives and false negatives. Therefore, integrating RNAi screening results with other information, such as protein-protein interaction (PPI), may help to address these issues. Results By analyzing 24 genome-wide RNAi screens interrogating various biological processes in Drosophila, we found that RNAi positive hits were significantly more connected to each other when analyzed within a protein-protein interaction network, as opposed to random cases, for nearly all screens. Based on this finding, we developed a network-based approach to identify false positives (FPs) and false negatives (FNs) in these screening results. This approach relied on a scoring function, which we termed NePhe, to integrate information obtained from both PPI network and RNAi screening results. Using a novel rank-based test, we compared the performance of different NePhe scoring functions and found that diffusion kernel-based methods generally outperformed others, such as direct neighbor-based methods. Using two genome-wide RNAi screens as examples, we validated our approach extensively from multiple aspects. We prioritized hits in the original screens that were more likely to be reproduced by the validation screen and recovered potential FNs whose involvements in the biological process were suggested by previous knowledge and mutant phenotypes. Finally, we demonstrated that the NePhe scoring system helped to biologically interpret RNAi results at the module level. Conclusion By comprehensively analyzing multiple genome-wide RNAi screens, we conclude that
Evaluating the Accuracy and Efficiency of Multiple Sequence Alignment Methods
Pervez, Muhammad Tariq; Babar, Masroor Ellahi; Nadeem, Asif; Aslam, Muhammad; Awan, Ali Raza; Aslam, Naeem; Hussain, Tanveer; Naveed, Nasir; Qadri, Salman; Waheed, Usman; Shoaib, Muhammad
2014-01-01
A comparison of the 10 most popular Multiple Sequence Alignment (MSA) tools, namely, MUSCLE, MAFFT (L-INS-i), MAFFT (FFT-NS-2), T-Coffee, ProbCons, SATe, Clustal Omega, Kalign, Multalin, and Dialign-TX is presented. We also focused on the significance of some implementations embedded in the algorithm of each tool. Based on 10 simulated trees of different numbers of taxa generated by R, 400 known alignments and sequence files were constructed using indel-Seq-Gen. A total of 4000 test alignments were generated to study the effect of sequence length, indel size, deletion rate, and insertion rate. Results showed that alignment quality was highly dependent on the number of deletions and insertions in the sequences and that the sequence length and indel size had a weaker effect. Overall, ProbCons was consistently at the top of the list of the evaluated MSA tools. SATe, being slightly less accurate, was 529.10% faster than ProbCons and 236.72% faster than MAFFT (L-INS-i). Among other tools, Kalign and MUSCLE achieved the highest sum of pairs. We also considered BAliBASE benchmark datasets, and the results relative to BAliBASE- and indel-Seq-Gen-generated alignments were consistent in most cases. PMID:25574120
NASA Astrophysics Data System (ADS)
Hu, Yanxia; Yang, Xiaozhong
2006-08-01
A method for obtaining first integrals and integrating factors of n-th order autonomous systems is proposed. The search for first integrals and integrating factors can be reduced to the search for a class of invariant manifolds of the systems. Finally, the proposed method is applied to Euler-Poisson equations (gyroscope system), and the fourth first integral of the system in general Kovalevskaya case can be obtained.
ERIC Educational Resources Information Center
Rimpiläinen, Sanna
2015-01-01
What do different research methods and approaches "do" in practice? The article seeks to discuss this point by drawing upon socio-material research approaches and empirical examples taken from the early stages of an extensive case study on an interdisciplinary project between two multidisciplinary fields of study, education and computer…
NASA Astrophysics Data System (ADS)
Brown, Craig J.; Sameoto, Jessica A.; Smith, Stephen J.
2012-08-01
The establishment of multibeam echosounders (MBES) as a mainstream tool in ocean mapping has facilitated integrative approaches toward nautical charting, benthic habitat mapping, and seafloor geotechnical surveys. The inherent bathymetric and backscatter information generated by MBES enables marine scientists to present highly accurate bathymetric data with a spatial resolution closely matching that of terrestrial mapping. Furthermore, developments in data collection and processing of MBES backscatter, combined with the quality of the co-registered depth information, have resulted in the increasing preferential use of multibeam technology over conventional sidescan sonar for the production of benthic habitat maps. A range of post-processing approaches can generate customized map products to meet multiple ocean management needs, thus extracting maximum value from a single survey data set. Based on recent studies over German Bank off SW Nova Scotia, Canada, we show how primary MBES bathymetric and backscatter data, along with supplementary data (i.e. in situ video and stills), were processed using a variety of methods to generate a series of maps. Methods conventionally used for classification of multi-spectral data were tested for classification of the MBES data set to produce a map summarizing broad bio-physical characteristics of the seafloor (i.e. a benthoscape map), which is of value for use in many aspects of marine spatial planning. A species-specific habitat map for the sea scallop Placopecten magellanicus was also generated from the MBES data by applying a Species Distribution Modeling (SDM) method to spatially predict habitat suitability, which offers tremendous promise for use in fisheries management. In addition, we explore the challenges of incorporating benthic community data into maps based on species information derived from a large number of seafloor photographs. Through the process of applying multiple methods to generate multiple maps for
Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry
2016-01-01
Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in-vitro alternative to in-vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels, yet, these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies. PMID:27110941
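The contrast the abstract draws between rigid linear models and flexible smoothers can be illustrated with a toy non-monotonic dose-response. This is only a sketch: the data are synthetic, and the degree-4 polynomial stands in for the penalized spline smooths of a real GAMM (which would be fit with a package such as R's mgcv or Python's pygam).

```python
import numpy as np

# Toy non-monotonic dose-response of the kind described above: a smooth
# curve with a local maximum that a straight line cannot capture.
dose = np.linspace(0.0, 10.0, 50)
response = np.sin(dose / 3.0) + 0.05 * dose

# Classical linear dose-response fit.
lin_fit = np.polyval(np.polyfit(dose, response, deg=1), dose)
lin_sse = float(np.sum((lin_fit - response) ** 2))

# Flexible smoother: a degree-4 polynomial as a crude stand-in for the
# spline smooths a GAMM would use.
smooth_fit = np.polyval(np.polyfit(dose, response, deg=4), dose)
smooth_sse = float(np.sum((smooth_fit - response) ** 2))
```

The flexible fit tracks the local maximum the linear model misses, which is the motivation for moving beyond probit/logistic forms for such endpoints.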
Williams, Lee; McBain, Heidi
2006-07-01
As the field of family therapy has evolved, there has been growing recognition as to the importance of gender in family therapy. To prepare the next generation of family therapists adequately, it is important that they recognize the many and complex ways in which gender permeates their work. In this article we present an integrative model to help educators teach family therapists about gender issues. The model examines how gender influences clinical work on multiple levels, including contextual levels such as society and the marriage and family therapy field. The model also acknowledges how gender can influence individuals, including clients, therapists, and supervisors. Finally, the model attempts to capture the complexity of how gender can impact the relational dynamics between two or more individuals. PMID:16933441
The continuous end-state comfort effect: weighted integration of multiple biases.
Herbort, Oliver; Butz, Martin V
2012-05-01
The grasp orientation when grasping an object is frequently aligned in anticipation of the intended rotation of the object (end-state comfort effect). We analyzed grasp orientation selection in a continuous task to determine the mechanisms underlying the end-state comfort effect. Participants had to grasp a box by a circular handle-which allowed for arbitrary grasp orientations-and then had to rotate the box by various angles. Experiments 1 and 2 revealed both that the rotation's direction considerably determined grasp orientations and that end-postures varied considerably. Experiments 3 and 4 further showed that visual stimuli and initial arm postures biased grasp orientations if the intended rotation could be easily achieved. The data show that end-state comfort but also other factors determine grasp orientation selection. A simple mechanism that integrates multiple weighted biases can account for the data. PMID:21499901
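The "weighted integration of multiple biases" mechanism can be sketched as a weighted circular mean of the orientations each bias proposes. The weights and angles below are illustrative, not the values fitted in the study.

```python
import math

def integrate_biases(angles_deg, weights):
    """Weighted circular mean of candidate grasp orientations (degrees).

    Each bias (intended rotation, visual stimulus, initial arm posture)
    proposes an orientation; the selected orientation is their
    weight-averaged compromise. Circular averaging via unit vectors
    avoids wrap-around artifacts at 0/360 degrees.
    """
    x = sum(w * math.cos(math.radians(a)) for a, w in zip(angles_deg, weights))
    y = sum(w * math.sin(math.radians(a)) for a, w in zip(angles_deg, weights))
    return math.degrees(math.atan2(y, x)) % 360.0

# The intended-rotation bias dominates; visual and posture biases pull
# the grasp orientation weakly toward their own preferred angles.
grasp = integrate_biases([300.0, 10.0, 20.0], [0.6, 0.2, 0.2])
```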
Borja, Angel; Bricker, Suzanne B; Dauer, Daniel M; Demetriades, Nicolette T; Ferreira, João G; Forbes, Anthony T; Hutchings, Pat; Jia, Xiaoping; Kenchington, Richard; Carlos Marques, João; Zhu, Changbo
2008-09-01
In recent years, several sets of legislation worldwide (Oceans Act in USA, Australia or Canada; Water Framework Directive or Marine Strategy in Europe, National Water Act in South Africa, etc.) have been developed in order to address ecological quality or integrity, within estuarine and coastal systems. Most such legislation seeks to define quality in an integrative way, by using several biological elements, together with physico-chemical and pollution elements. Such an approach allows assessment of ecological status at the ecosystem level ('ecosystem approach' or 'holistic approach' methodologies), rather than at species level (e.g. mussel biomonitoring or Mussel Watch) or just at chemical level (i.e. quality objectives) alone. Increasing attention has been paid to the development of tools for different physico-chemical or biological (phytoplankton, zooplankton, benthos, algae, phanerogams, fishes) elements of the ecosystems. However, few methodologies integrate all the elements into a single evaluation of a water body. The need for such integrative tools to assess ecosystem quality is very important, both from a scientific and stakeholder point of view. Politicians and managers need information from simple and pragmatic, but scientifically sound methodologies, in order to show to society the evolution of a zone (estuary, coastal area, etc.), taking into account human pressures or recovery processes. These approaches include: (i) multidisciplinarity, inherent in the teams involved in their implementation; (ii) integration of biotic and abiotic factors; (iii) accurate and validated methods in determining ecological integrity; and (iv) adequate indicators to follow the evolution of the monitored ecosystems. While some countries increasingly use the establishment of marine parks to conserve marine biodiversity and ecological integrity, there is awareness (e.g. in Australia) that conservation and management of marine ecosystems cannot be restricted to Marine Protected
Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji
2016-01-01
Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computerized tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies was proposed in this paper. Firstly, in order to avoid noise, the input CT slice was smoothed using the guided filter. Then, the smoothed slice was transformed into a binary image using an optimized threshold. Next, a region growing strategy was employed to extract thorax regions. Then, lung regions were segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour was then smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region was automatically segmented from a CT slice. The proposed method was validated on a CT database of 23 scans comprising 883 2D slices (roughly 38 slices per scan), by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segmented lung regions in CT slices.
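The threshold-then-grow portion of the pipeline can be sketched on a toy array; this simplified version uses a hard-coded threshold and 4-connected growing in place of the optimized threshold, guided filtering, and random-walk refinement described above.

```python
import numpy as np
from collections import deque

def region_grow(mask, seed):
    """4-connected region growing from a seed: a simplified stand-in for
    the thorax-extraction step of the pipeline described above."""
    grown = np.zeros_like(mask, dtype=bool)
    q = deque([seed])
    while q:
        r, c = q.popleft()
        if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                and mask[r, c] and not grown[r, c]):
            grown[r, c] = True
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grown

# Toy "CT slice": two bright blobs. Thresholding gives a binary image;
# region growing keeps only the blob connected to the seed.
slice_ = np.zeros((8, 8))
slice_[1:4, 1:4] = 1.0   # blob A (seeded)
slice_[5:7, 5:7] = 1.0   # blob B (disconnected)
binary = slice_ > 0.5    # stand-in for the 'optimized threshold' step
thorax = region_grow(binary, seed=(2, 2))
```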
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2004-05-18
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
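The comparator logic described in the patent can be modeled behaviorally in software: find the channel whose sampled amplitude both exceeds the noise floor and beats every other channel. This is a sketch of the circuit's function, not its implementation.

```python
def winner_channel(amplitudes, noise_floor):
    """Behavioral model of the readout logic above: compare each sampled
    channel amplitude against a noise-floor threshold and against one
    another, and report the index of the highest qualifying channel
    (None if every channel is below the floor)."""
    best = None
    for ch, amp in enumerate(amplitudes):
        if amp > noise_floor and (best is None or amp > amplitudes[best]):
            best = ch
    return best
```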
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2006-12-12
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes which exceeds the noise floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
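Empirical quantile mapping is the building block shared by several of the compared methods (e.g. the QM reordering step in BCCAQ and the quantile mapping inside BCSD). The sketch below is schematic and does not reproduce any specific package; each model value is replaced by the observed value at the same empirical quantile.

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical quantile mapping: build matching quantile tables for the
    model and observed distributions, then map each input value through
    them by linear interpolation."""
    quantiles = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model, quantiles)
    obs_q = np.quantile(obs, quantiles)
    return np.interp(values, model_q, obs_q)

# Model output runs 2 degrees too warm; quantile mapping removes the bias.
rng = np.random.default_rng(0)
obs = rng.normal(10.0, 3.0, 5000)
model = obs + 2.0                 # perfectly correlated, biased "model"
corrected = quantile_map(model, obs, model)
```

Because the synthetic bias here is a pure shift, the mapping recovers the observations almost exactly; real model biases are distribution-dependent, which is why quantile (rather than mean) correction matters.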
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Prinn, Ronald; Webster, Mort
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
Review and Research of the Neutron Source Multiplication Method in Nuclear Critical Safety
Shi Yongqian; Zhu Qingfu; Tao He
2005-01-15
The paper first briefly reviews the neutron source multiplication method and then presents an experimental study that shows that the parameter measured by the neutron source multiplication method actually is a subcritical effective neutron multiplication factor k{sub s} with an external neutron source, not the effective neutron multiplication factor k{sub eff}. The parameters k{sub s} and k{sub eff} have been researched for a nuclear critical safety experiment assembly using a uranium solution. The parameter k{sub s} was measured by the source multiplication method, while the parameter k{sub eff} was measured by the power-raising period method. The relationship between k{sub eff} and k{sub s} is discussed and their effects on nuclear safety are mentioned.
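The measurement principle rests on the textbook relation that, with a constant external source, the detector count rate scales as C ∝ 1/(1 − k_s). Given a reference configuration with known subcritical k and count rate, the unknown state's k_s follows from the count-rate ratio; the function and variable names below are ours, not the paper's.

```python
def k_s_from_count_rates(c_ref, k_ref, c_meas):
    """Source multiplication method: solve
        c_meas / c_ref = (1 - k_ref) / (1 - k_s)
    for the subcritical multiplication factor k_s of the measured state,
    given a reference state with known k_ref and count rate c_ref."""
    return 1.0 - (1.0 - k_ref) * c_ref / c_meas
```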
Track fitting with multiple scattering: A new method
NASA Astrophysics Data System (ADS)
Billoir, Pierre
1984-08-01
An analytical calculation of the variance is performed, in some simple case, for standard least-squares estimators of track parameters (accounting for independent measurement errors only); comparison is made with optimal estimators (accounting also for scattering errors, correlated between one point and the following ones). A new method is proposed for optimal estimation: the points measured on the track are included backwards, one by one, in the fitting algorithm, and the scattering is handled locally at each step. The feasibility of the method is shown on real events, for which the geometrical resolution is improved. The algorithm is very flexible and allows fast programming; moreover, the computation time is merely proportional to the number of measured points, in contrast to the other optimal estimators.
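The point-by-point inclusion with local scattering handling is, in modern terms, a Kalman-filter-style recursion. The scalar sketch below tracks a single parameter and is only an illustration of the mechanism, not the paper's full track model.

```python
def progressive_fit(measurements, meas_var, scatter_var):
    """Include measured points one by one, inflating the state variance
    by the local multiple-scattering variance before each update, then
    blending in the new point with the usual least-squares gain.
    Cost is linear in the number of points, as noted above."""
    est, var = measurements[0], meas_var
    for z in measurements[1:]:
        var += scatter_var                  # scattering handled locally
        gain = var / (var + meas_var)       # weight of the new point
        est += gain * (z - est)
        var *= (1.0 - gain)
    return est, var

# With no scattering the recursion reduces to the running mean; with
# strong scattering it follows the most recent points.
est_no_scatter, _ = progressive_fit([1.0, 2.0, 3.0], 1.0, 0.0)
est_scatter, _ = progressive_fit([1.0, 2.0, 3.0], 1.0, 100.0)
```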
Support Operators Method for the Diffusion Equation in Multiple Materials
Winters, Andrew R.; Shashkov, Mikhail J.
2012-08-14
A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
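The symmetry and positive-definiteness properties can be demonstrated on a small 1D assembly. This sketch uses harmonic-mean face conductivities to handle the discontinuity and simple Dirichlet ends; it illustrates the property the support-operators derivation guarantees, but is not the paper's exact discretization.

```python
import numpy as np

def diffusion_matrix(k_cells, h):
    """Assemble the 1D diffusion operator on a uniform mesh with face
    conductivities taken as harmonic means of adjacent cell values
    (conservative across material jumps). Zero-Dirichlet boundary faces
    use the end-cell conductivity. The result is symmetric positive
    definite, mirroring the discrete adjoint construction above."""
    n = len(k_cells)
    interior = 2.0 * k_cells[:-1] * k_cells[1:] / (k_cells[:-1] + k_cells[1:])
    faces = np.concatenate(([k_cells[0]], interior, [k_cells[-1]]))
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (faces[i] + faces[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -faces[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -faces[i + 1] / h**2
    return A

# Discontinuous conductivity: k jumps from 1 to 100 mid-domain.
k = np.array([1.0, 1.0, 1.0, 100.0, 100.0, 100.0])
A = diffusion_matrix(k, h=0.1)
```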
Sinkhole Imaging With Multiple Geophysical Methods in Covered Karst Terrain
NASA Astrophysics Data System (ADS)
Weiss, M.
2005-05-01
A suite of geophysical surveys was run at the Geopark at the University of South Florida campus in Tampa in an attempt to determine the degree to which methods could image a collapsed sinkhole with a diameter of ~4m and maximum depth of ~2.5m. Geologically, the Geopark is part of a covered karst terrane, with collapsed sinkholes filled in by overlying unconsolidated sand separated from the weathered limestone beneath by a clayey sand layer. The sinkholes are hydrologically significant as they may serve as sites of concentrated recharge. The methods used during the study include: refraction seismics, resistivity, electromagnetics (TEM and EM), and ground penetrating radar (GPR). Geophysical data are compared against cores. The resistivity, GPR, and seismic refraction profiles yield remarkably consistent images of the clayey sand layer. EM-31 data revealed regional trends in subsurface geology, but could not delineate specific sinkhole features with the desired resolution.
Automatic generation of hypergeometric identities by the beta integral method
NASA Astrophysics Data System (ADS)
Krattenthaler, C.; Srinivasa Rao, K.
2003-11-01
In this article, hypergeometric identities (or transformations) for p+1Fp series and for Kampé de Fériet series of unit arguments are derived systematically from known transformations of hypergeometric series and products of hypergeometric series, respectively, using the beta integral method in an automated manner, based on the Mathematica package HYP. As a result, we obtain some known identities and some that appear not to have been recorded before in the literature.
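The mechanism of the beta integral method, as we understand it, can be summarized in two lines: it rests on the Euler beta integral and its term-by-term effect on a hypergeometric series.

```latex
% Euler beta integral:
\int_0^1 t^{a-1}(1-t)^{b-1}\,dt \;=\; \frac{\Gamma(a)\,\Gamma(b)}{\Gamma(a+b)} \;=\; B(a,b).

% Term-by-term effect: multiplying a known identity whose series has
% argument scaled by t, so that the k-th term carries t^k, by the weight
% t^{a-1}(1-t)^{b-1} and integrating over [0,1] gives, for each term,
\int_0^1 t^{k+a-1}(1-t)^{b-1}\,dt \;=\; B(a,b)\,\frac{(a)_k}{(a+b)_k},
% i.e. each series gains one numerator parameter a and one denominator
% parameter a+b, turning a {}_{p}F_{p-1} identity into a {}_{p+1}F_{p} one.
```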
Romualdi, Chiara; Trevisan, Silvia; Celegato, Barbara; Costa, Germano; Lanfranchi, Gerolamo
2003-01-01
The variability of results in microarray technology is in part due to the fact that independent scans of a single hybridised microarray give spot images that are not quite the same. To solve this problem and turn it to our advantage, we introduced the approach of multiple scanning and of image integration of microarrays. To this end, we have developed specific software that creates a virtual image that statistically summarises a series of consecutive scans of a microarray. We provide evidence that the use of multiple imaging (i) enhances the detection of differentially expressed genes; (ii) increases the image homogeneity; and (iii) reveals false-positive results such as differentially expressed genes that are detected by a single scan but not confirmed by successive scanning replicates. The increase in the final number of differentially expressed genes detected in a microarray experiment with this approach is remarkable; 50% more for microarrays hybridised with targets labelled by reverse transcriptase, and 200% more for microarrays developed with the tyramide signal amplification (TSA) technique. The results have been confirmed by semi-quantitative RT–PCR tests. PMID:14627839
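The core statistical idea, that summarising consecutive scans suppresses scan-to-scan noise, can be shown with a pixel-wise mean on synthetic data. The software described above builds a more elaborate statistical summary; this is only the simplest instance of the principle.

```python
import numpy as np

rng = np.random.default_rng(42)
true_spot = np.full((16, 16), 100.0)        # idealized spot intensities

# Five consecutive scans of the same hybridised array differ by noise.
scans = [true_spot + rng.normal(0.0, 10.0, true_spot.shape)
         for _ in range(5)]

# The 'virtual image' summarising the series: here, a pixel-wise mean.
virtual = np.mean(scans, axis=0)

single_err = float(np.abs(scans[0] - true_spot).mean())
virtual_err = float(np.abs(virtual - true_spot).mean())
```

Averaging n scans shrinks the noise standard deviation by roughly a factor of sqrt(n), which is why spots near the detection limit become resolvable and single-scan false positives are exposed.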
An integrated voice and data multiple-access scheme for a land-mobile satellite system
NASA Technical Reports Server (NTRS)
Li, V. O. K.; Yan, T.-Y.
1984-01-01
An analytical study is performed of the satellite requirements for a land mobile satellite system (LMSS). The spacecraft (MSAT-X) would be in GEO and would be compatible with multiple access by mobile radios and antennas and fixed stations. The FCC has received a petition from NASA to reserve the 821-825 and 866-870 MHz frequencies for the LMSS, while communications with fixed earth stations would be in the Ku band. In the original configuration considered, the MSAT-X transponders would alter signal frequencies and do no processing. Channel use would be governed by an integrated demand-assigned, multiple access protocol, which would divide channels into reservation and information channels, governed by a network management center. Further analyses will cover tradeoffs between data and voice users, probability of blocking, and the performance impacts of on-board switching and variable bandwidth assignment. Initial calculations indicate that a large traffic volume can be handled with acceptable delays and voice blocking probabilities.
Walzer, Andreas; Schausberger, Peter
2013-01-01
Intraguild (IG) prey is commonly confronted with multiple IG predator species. However, the IG predation (IGP) risk for prey is not only dependent on the predator species, but also on inherent (intraspecific) characteristics of a given IG predator such as its life-stage, sex or gravidity and the associated prey needs. Thus, IG prey should have evolved the ability to integrate multiple IG predator cues, which should allow both inter- and intraspecific threat-sensitive anti-predator responses. Using a guild of plant-inhabiting predatory mites sharing spider mites as prey, we evaluated the effects of single and combined cues (eggs and/or chemical traces left by a predator female on the substrate) of the low risk IG predator Neoseiulus californicus and the high risk IG predator Amblyseius andersoni on time, distance and path shape parameters of the larval IG prey Phytoseiulus persimilis. IG prey discriminated between traces of the low and high risk IG predator, with and without additional presence of their eggs, indicating interspecific threat-sensitivity. The behavioural changes were manifest in distance moved, activity and path shape of IG prey. The cue combination of traces and eggs of the IG predators conveyed other information than each cue alone, allowing intraspecific threat-sensitive responses by IG prey apparent in changed velocities and distances moved. We argue that graded responses to single and combined IG predator cues are adaptive due to minimization of acceptance errors in IG prey decision making. PMID:23750040
Schleier III, Jerome J.; Marshall, Lucy A.; Davis, Ryan S.
2015-01-01
Decision analysis often considers multiple lines of evidence during the decision making process. Researchers and government agencies have advocated for quantitative weight-of-evidence approaches in which multiple lines of evidence can be considered when estimating risk. Therefore, we utilized Bayesian Markov Chain Monte Carlo to integrate several human-health risk assessment, biomonitoring, and epidemiology studies that have been conducted for two common insecticides (malathion and permethrin) used for adult mosquito management to generate an overall estimate of risk quotient (RQ). The utility of the Bayesian inference for risk management is that the estimated risk represents a probability distribution from which the probability of exceeding a threshold can be estimated. The mean RQs after all studies were incorporated were 0.4386, with a variance of 0.0163 for malathion and 0.3281 with a variance of 0.0083 for permethrin. After taking into account all of the evidence available on the risks of ULV insecticides, the probability that malathion or permethrin would exceed a level of concern was less than 0.0001. Bayesian estimates can substantially improve decisions by allowing decision makers to estimate the probability that a risk will exceed a level of concern by considering seemingly disparate lines of evidence. PMID:25648367
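For the normal-normal case, the Bayesian integration of several lines of evidence has a closed form: precision-weighted averaging, from which the exceedance probability follows directly. The sketch below is a stand-in for the MCMC used in the study, and the per-study numbers are hypothetical, not the study's data.

```python
import math

def combine_normal(estimates):
    """Precision-weighted combination of per-study (mean, variance)
    pairs: the exact posterior for a normal-normal model, and a
    closed-form analogue of the MCMC integration described above."""
    prec = sum(1.0 / v for _, v in estimates)
    mean = sum(m / v for m, v in estimates) / prec
    return mean, 1.0 / prec

def prob_exceeds(mean, var, threshold):
    """P(RQ > threshold) under the combined normal posterior."""
    z = (threshold - mean) / math.sqrt(var)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Hypothetical lines of evidence (risk assessment, biomonitoring,
# epidemiology) for one insecticide, each as (mean RQ, variance).
posterior_mean, posterior_var = combine_normal(
    [(0.5, 0.04), (0.4, 0.02), (0.45, 0.08)])
risk = prob_exceeds(posterior_mean, posterior_var, threshold=1.0)
```

Note how the combined variance is smaller than any single study's, which is exactly the decision-making benefit the abstract describes.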
Multiple-mode Lamb wave scattering simulations using 3D elastodynamic finite integration technique.
Leckey, Cara A C; Rogge, Matthew D; Miller, Corey A; Hinders, Mark K
2012-02-01
We have implemented three-dimensional (3D) elastodynamic finite integration technique (EFIT) simulations to model Lamb wave scattering for two flaw-types in an aircraft-grade aluminum plate, a rounded rectangle flat-bottom hole and a disbond of the same shape. The plate thickness and flaws explored in this work include frequency-thickness regions where several Lamb wave modes exist and sometimes overlap in phase and/or group velocity. For the case of the flat-bottom hole the depth was incrementally increased to explore progressive changes in multiple-mode Lamb wave scattering due to the damage. The flat-bottom hole simulation results have been compared to experimental data and are shown to provide key insight for this well-defined experimental case by explaining unexpected results in experimental waveforms. For the rounded rectangle disbond flaw, which would be difficult to implement experimentally, we found that Lamb wave behavior differed significantly from the flat-bottom hole flaw. Most of the literature in this field is restricted to low frequency-thickness regions due to difficulties in interpreting data when multiple modes exist. We found that benchmarked 3D EFIT simulations can yield an understanding of scattering behavior for these higher frequency-thickness regions and in cases that would be difficult to set up experimentally. Additionally, our results show that 2D simulations would not have been sufficient for modeling the complicated scattering that occurred. PMID:21908011
The Testing Methods and Gender Differences in Multiple-Choice Assessment
NASA Astrophysics Data System (ADS)
Ng, Annie W. Y.; Chan, Alan H. S.
2009-10-01
This paper provides a comprehensive review of the multiple-choice assessment in the past two decades for facilitating people to conduct effective testing in various subject areas. It was revealed that a variety of multiple-choice test methods viz. conventional multiple-choice, liberal multiple-choice, elimination testing, confidence marking, probability testing, and order-of-preference scheme are available for use in assessing subjects' knowledge and decision ability. However, the best multiple-choice test method for use has not yet been identified. The review also indicated that the existence of gender differences in multiple-choice task performance might be due to the test area, instruction/scoring condition, and item difficulty.
Multiple sclerosis, an autoimmune inflammatory disease: prospects for its integrative management.
Kidd, P M
2001-12-01
Multiple sclerosis (MS) is aptly named for the many scars it produces in the brain and spinal cord. A sometimes fatal, often debilitating disease, MS features autoimmune inflammatory attack against the myelin insulation of neurons. Thymus derived (T) cells sensitized against myelin self-antigens secrete tumor necrosis factor, cytokines, prostaglandins, and other inflammatory mediators that strip away the myelin and sometimes destroy the axons. Familial and twin inheritance studies indicate MS is mildly heritable. No single MS locus has been identified, but an HLA haplotype has been implicated. Unique geographic distribution of the disease is best attributed to some combination of vitamin D abnormality and dietary patterns. No pharmaceutical or other therapies exist that confer prolonged remission on MS, and obvious interrelationships between toxic, infectious, and dietary factors make a persuasive case for integrative management. The time-proven MS diet meticulously keeps saturated fats low, includes three fish meals per week, and eliminates allergenic foods. Dietary supplementation for MS minimally requires potent vitamin supplementation, along with the thiol antioxidants, the anti-inflammatory omega-3 fatty acids, and adaptogenic phytonutrients. Gut malabsorption and dysbiosis can be corrected using digestive enzymes and probiotics. Long-term hyperbaric oxygen therapy can slow or remit the disease. Transdermal histamine offers promise, and adenosine monophosphate may sometimes benefit. Chronic viruses and other infectious load must be aggressively treated and exercise should maintain muscle tone and balance. Early intervention with integrative modalities has the potential to make MS a truly manageable disease. PMID:11804546
NASA Astrophysics Data System (ADS)
Wittig, V.; Yang, X.; Jain, A.
2008-12-01
Independent changes in atmospheric carbon dioxide, tropospheric ozone, nitrogen deposition and climate change directly impact terrestrial productivity. Less well understood are the interactive effects of these globally changing factors on terrestrial productivity and the resultant impact on rising atmospheric carbon dioxide concentrations. This study uses the Integrated Science Assessment Model (ISAM) to quantify the impacts of these multiple global changes on terrestrial productivity and further, to project how these changes feedback on atmospheric carbon dioxide concentrations via respiratory carbon fluxes. The ISAM is modified to include a mechanistic model of leaf photosynthesis including the sensitivity of leaf photosynthesis to tropospheric ozone. Leaf-level photosynthetic carbon gain is scaled to the canopy with a sun-shade microclimate model to estimate the gross primary productivity of major biomes comprised of representative plant functional types. The modified carbon cycle in ISAM is coupled to a detailed model of the terrestrial nitrogen cycle therefore providing the integrated modeling framework required to assess the interactive effects of rising carbon dioxide, tropospheric ozone, nitrogen deposition and climate change on global productivity.
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents an optimal production and distribution management system for structural and operational optimization of the integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem where the objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. Piecewise linear formulation of the production cost function and stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show the increase of energy efficiency due to the introduction of the present optimal management system.
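The piecewise linear cost approximation can be illustrated by direct evaluation; in the actual MILP each segment would get its own bounded variable so that the solver, rather than this loop, allocates production across segments. Segment breakpoints and slopes below are illustrative.

```python
def piecewise_cost(q, breakpoints, slopes, base):
    """Evaluate a convex piecewise-linear production cost: fill each
    segment up to its breakpoint at that segment's marginal cost until
    the production level q is reached.
    breakpoints: cumulative segment upper bounds (increasing)
    slopes: marginal cost per unit within each segment
    base: fixed cost incurred when the unit runs."""
    cost, prev = base, 0.0
    for b, s in zip(breakpoints, slopes):
        seg = min(q, b) - prev
        if seg <= 0:
            break
        cost += s * seg
        prev = b
    return cost

# Marginal cost rises with load: segments up to 50, 100, and 150 MW.
c = piecewise_cost(120.0, [50.0, 100.0, 150.0], [10.0, 12.0, 15.0],
                   base=200.0)
```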
Del Boccio, Piero; Rossi, Claudia; di Ioia, Maria; Cicalini, Ilaria; Sacchetta, Paolo; Pieragostino, Damiana
2016-04-01
Personalized medicine is the science of individualized prevention and therapy. In the last decade, advances in high-throughput approaches have allowed the development of proteomic and metabolomic studies evaluating the association of genetic and phenotypic variability with disease sensitivity and analgesic response. These considerations have particular value in the case of multiple sclerosis (MuS), a multifactorial disease with high heterogeneity in clinical course and treatment response. In this review, we report and update on proteomic and metabolomic studies searching for new candidate biomarkers in MuS, and on the difficulties in their clinical application. We focus especially on the description of both "omics" approaches that, once integrated, may synergistically describe pathophysiological conditions. To test this assumption, we reconstructed the interactions between proteins and metabolites described in the literature as potential biomarkers for MuS, and a pathway analysis of these molecules was performed. The result demonstrated a strong convergence of proteomic and metabolomic findings in this field, while also revealing a paucity of available tools for integrating "omics" approaches. In conclusion, the integration of metabolomics and proteomics may allow a more complete characterization of such a heterogeneous disease, providing further insights into personalized healthcare. PMID:27061322
The blackboard model - A framework for integrating multiple cooperating expert systems
NASA Technical Reports Server (NTRS)
Erickson, W. K.
1985-01-01
The use of an artificial intelligence (AI) architecture known as the blackboard model is examined as a framework for designing and building distributed systems requiring the integration of multiple cooperating expert systems (MCXS). Aerospace vehicles provide many examples of potential systems, ranging from commercial and military aircraft to spacecraft such as satellites, the Space Shuttle, and the Space Station. One such system, free-flying, spaceborne telerobots to be used in construction, servicing, inspection, and repair tasks around NASA's Space Station, is examined. The major difficulties found in designing and integrating the individual expert system components necessary to implement such a robot are outlined. The blackboard model, a general expert system architecture which seems to address many of the problems found in designing and building such a system, is discussed. A progress report on a prototype system under development called DBB (Distributed BlackBoard model) is given. The prototype will act as a testbed for investigating the feasibility, utility, and efficiency of MCXS-based designs developed under the blackboard model.
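The control cycle of a blackboard system can be sketched minimally: independent knowledge sources read a shared data store and post partial results, and a controller repeatedly activates whichever source can contribute next. The knowledge sources and data below are invented for illustration and are not the DBB prototype:

```python
# Minimal blackboard sketch: each knowledge source inspects the shared
# blackboard and contributes only when its trigger condition holds.

class Blackboard(dict):
    pass

def perception(bb):
    if "image" in bb and "objects" not in bb:
        bb["objects"] = ["strut", "panel"]      # stand-in for a vision result
        return True
    return False

def planner(bb):
    if "objects" in bb and "plan" not in bb:
        bb["plan"] = [("grasp", o) for o in bb["objects"]]
        return True
    return False

def control_loop(bb, sources):
    # Keep cycling until no knowledge source can make further progress.
    progress = True
    while progress:
        progress = any(src(bb) for src in sources)
    return bb

bb = control_loop(Blackboard(image="camera frame"), [planner, perception])
```

Note that the sources are listed in the "wrong" order yet the loop still converges: opportunistic activation, rather than a fixed pipeline, is what makes the architecture attractive for integrating loosely coupled expert systems.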
Ducrotoy, M J; Yahyaoui Azami, H; El Berbri, I; Bouslikhane, M; Fassi Fihri, O; Boué, F; Petavy, A F; Dakkak, A; Welburn, S; Bardosh, K L
2015-12-01
Integrating the control of multiple neglected zoonoses at the community level holds great potential, but critical data are missing to inform the design and implementation of different interventions. In this paper we present an evaluation of an integrated health messaging intervention, using PowerPoint presentations, for five bacterial (brucellosis and bovine tuberculosis) and dog-associated (rabies, cystic echinococcosis and leishmaniasis) zoonotic diseases in Sidi Kacem Province, northwest Morocco. Conducted by veterinary and epidemiology students between 2013 and 2014, this followed a process-based approach that encouraged sequential adaptation of images, key messages, and delivery strategies using auto-evaluation and end-user feedback. We describe the challenges and opportunities of this approach, reflecting on who was targeted, how education was conducted, and what tools and approaches were used. Our results showed that: (1) replacing words with local pictures and using "hands-on" activities improved receptivity; (2) information "overload" easily occurred when disease transmission pathways did not overlap; (3) access and receptivity at schools were greater than at the community level; and (4) piggy-backing on high-priority diseases like rabies offered an important avenue to increase knowledge of other zoonoses. We conclude by discussing the merits of incorporating our validated education approach into the school curriculum in order to influence long-term behaviour change. PMID:26299194
Method for integrating microelectromechanical devices with electronic circuitry
Montague, S.; Smith, J.H.; Sniegowski, J.J.; McWhorter, P.J.
1998-08-25
A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry. 13 figs.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, Stephen; Smith, James H.; Sniegowski, Jeffry J.; McWhorter, Paul J.
1998-01-01
A method for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry.
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
Systematic research to develop and validate methods for structural sizing of an airframe designed with composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate a structural sizing and associated active control system that is optimal with respect to a given merit function, subject to strength and aeroelasticity constraints.
Method and apparatus for determining material structural integrity
Pechersky, Martin
1996-01-01
A non-destructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis techniques to determine the damping loss factor of a material. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity as a function of time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method. If an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the amount of coil current used in vibrating the magnet. If a reciprocating transducer is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by a force gauge in the reciprocating transducer. Using known vibrational analysis methods, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity measurements. The damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
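One standard way to extract a damping loss factor from a drive-point mobility curve is the half-power bandwidth around a resonance. The sketch below synthesizes the mobility of a single-degree-of-freedom structure with structural damping (mass, stiffness, and loss factor are assumed values, not data from the patent) and recovers the loss factor from the curve alone:

```python
import math

# Synthetic drive-point mobility of a single-DOF structure with structural
# damping: |M(w)| = w / sqrt((k - m w^2)^2 + (eta k)^2).
m, k, eta_true = 1.0, 1.0e6, 0.02       # mass [kg], stiffness [N/m], loss factor
wn = math.sqrt(k / m)

def mobility(w):
    return w / math.hypot(k - m * w * w, eta_true * k)

ws = [wn * (0.9 + 0.2 * i / 20000) for i in range(20001)]
mags = [mobility(w) for w in ws]
peak = max(mags)
i_pk = mags.index(peak)

# Half-power estimate: eta ~ (w2 - w1) / w_peak, where w1 and w2 bracket the
# resonance at |M| = peak / sqrt(2).
half = peak / math.sqrt(2)
i1 = next(i for i in range(i_pk, -1, -1) if mags[i] < half)
i2 = next(i for i in range(i_pk, len(mags)) if mags[i] < half)
eta_est = (ws[i2] - ws[i1]) / ws[i_pk]  # recovers ~0.02
```

The estimate agrees with the loss factor used to synthesize the curve to within O(eta^2), which is why bandwidth-type readings of the mobility plot are a practical route from vibrometer data to a structural-integrity metric.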
Pavan, Ana Carolina; Marroig, Gabriel
2016-10-01
A phylogenetic systematic perspective is instrumental in recovering new species and their evolutionary relationships. The advent of new technologies for molecular and morphological data acquisition and analysis, allied to the integration of knowledge from different areas, such as ecology and population genetics, allows for the emergence of more rigorous, accurate and complete scientific hypothesis on species diversity. Mustached bats (genus Pteronotus) are a good model for the application of this integrative approach. They are a widely distributed and a morphologically homogeneous group, but comprising species with remarkable differences in their echolocation strategy and feeding behavior. The latest systematic review suggested six species with 17 subspecies in Pteronotus. Subsequent studies using discrete morphological characters supported the same arrangement. However, recent papers reported high levels of genetic divergence among conspecific taxa followed by bioacoustic and geographic agreement, suggesting an underestimated diversity in the genus. To date, no study merging genetic evidences and morphometric variation along the entire geographic range of this group has been attempted. Based on a comprehensive sampling including representatives of all current taxonomic units, we attempt to delimit species in Pteronotus through the application of multiple methodologies and hierarchically distinct datasets. The molecular approach includes six molecular markers from three genetic transmission systems; morphological investigations used 41 euclidean distances estimated through three-dimensional landmarks collected from 1628 skulls. The phylogenetic analysis reveals a greater diversity than previously reported, with a high correspondence among the genetic lineages and the currently recognized subspecies in the genus. Discriminant analysis of variables describing size and shape of cranial bones support the rising of the genetic groups to the specific status. Based on
Neocartilage integration in temporomandibular joint discs: physical and enzymatic methods
Murphy, Meghan K.; Arzi, Boaz; Prouty, Shannon M.; Hu, Jerry C.; Athanasiou, Kyriacos A.
2015-01-01
Integration of engineered musculoskeletal tissues with adjacent native tissues presents a significant challenge to the field. Specifically, the avascularity and low cellularity of cartilage elicit the need for additional efforts in improving integration of neocartilage within native cartilage. Self-assembled neocartilage holds significant potential in replacing degenerated cartilage, though its stabilization and integration in native cartilage require further efforts. Physical and enzymatic stabilization methods were investigated in an in vitro model for temporomandibular joint (TMJ) disc degeneration. First, in phase 1, suture, glue and press-fit constructs were compared in TMJ disc intermediate zone defects. In phase 1, suturing enhanced interfacial shear stiffness and strength immediately; after four weeks, a 15-fold increase in stiffness and a ninefold increase in strength persisted over press-fit. Neither suture nor glue significantly altered neocartilage properties. In phase 2, the effects of the enzymatic stabilization regimen composed of lysyl oxidase, CuSO4 and hydroxylysine were investigated. A full factorial design was employed, carrying forward the best physical method from phase 1, suturing. Enzymatic stabilization significantly increased interfacial shear stiffness after eight weeks. Combined enzymatic stabilization and suturing led to a fourfold increase in shear stiffness and threefold increase in strength over press-fit. Histological analysis confirmed the presence of a collagen-rich interface. Enzymatic treatment additionally enhanced neocartilage mechanical properties, yielding a tensile modulus over 6 MPa and compressive instantaneous modulus over 1200 kPa at eight weeks. Suturing enhances stabilization of neocartilage, and enzymatic treatment enhances functional properties and integration of neocartilage in the TMJ disc. Methods developed here are applicable to other orthopaedic soft tissues, including knee meniscus and hyaline articular
Method and Apparatus for Simultaneous Processing of Multiple Functions
NASA Technical Reports Server (NTRS)
Stoica, Adrian (Inventor); Andrei, Radu (Inventor); Zhu, David (Inventor); Mojarradi, Mohammad Mehdi (Inventor); Vo, Tuan A. (Inventor)
2015-01-01
Electronic logic gates that operate using N logic state levels, where N is greater than 2, and methods of operating such gates. The electronic logic gates operate according to truth tables. At least two input signals each having a logic state that can range over more than two logic states are provided to the logic gates. The logic gates each provide an output signal that can have one of N logic states. Examples of gates described include NAND/NAND gates having two inputs A and B and NAND/NAND gates having three inputs A, B, and C, where A, B and C can take any of four logic states. Systems using such gates are described, and their operation illustrated. Optical logic gates that operate using N logic state levels are also described.
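One common way to generalize NAND to N logic levels is to take the complement of the minimum input, mirroring the Boolean identity NAND(a, b) = NOT(AND(a, b)). This is an illustrative convention only; the patent's actual truth tables are not reproduced here:

```python
N = 4  # number of logic levels: states 0..3

def nand(*inputs):
    """Generalized N-state NAND: complement (N-1 minus) of the minimum input."""
    return (N - 1) - min(inputs)

# Two-input truth table over all 16 state pairs, plus a three-input example.
table = {(a, b): nand(a, b) for a in range(N) for b in range(N)}
# nand(3, 3) -> 0, nand(0, 3) -> 3, nand(1, 2, 3) -> 2
```

With N = 2 this reduces exactly to Boolean NAND, so such gates can emulate binary logic while packing log2(N) bits per signal line.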
Amir, Amnon; Zeisel, Amit; Zuk, Or; Elgart, Michael; Stern, Shay; Shamir, Ohad; Turnbaugh, Peter J; Soen, Yoav; Shental, Noam
2013-12-01
The emergence of massively parallel sequencing technology has revolutionized microbial profiling, allowing the unprecedented comparison of microbial diversity across time and space in a wide range of host-associated and environmental ecosystems. Although the high-throughput nature of such methods enables the detection of low-frequency bacteria, these advances come at the cost of sequencing read length, limiting the phylogenetic resolution possible by current methods. Here, we present a generic approach for integrating short reads from large genomic regions, thus enabling phylogenetic resolution far exceeding current methods. The approach is based on a mapping to a statistical model that is later solved as a constrained optimization problem. We demonstrate the utility of this method by analyzing human saliva and Drosophila samples, using Illumina single-end sequencing of a 750 bp amplicon of the 16S rRNA gene. Phylogenetic resolution is significantly extended while reducing the number of falsely detected bacteria, as compared with standard single-region Roche 454 Pyrosequencing. Our approach can be seamlessly applied to simultaneous sequencing of multiple genes providing a higher resolution view of the composition and activity of complex microbial communities. PMID:24214960
Development of Integration and Adjustment Method for Sequential Range Images
NASA Astrophysics Data System (ADS)
Nagara, K.; Fuse, T.
2015-05-01
With the increasingly widespread use of three-dimensional data, the demand for simplified data acquisition is also increasing. The range camera, a simplified sensor, can acquire a dense range image in a single shot; however, its measuring coverage is narrow and its measuring accuracy is limited. The former drawback has been overcome by registering sequential range images; such registration, however, assumes that the point cloud is error-free. In this paper, we develop an integration method for sequential range images with error adjustment of the point cloud. The proposed method consists of the ICP (Iterative Closest Point) algorithm and self-calibration bundle adjustment. The ICP algorithm provides the initial solution for the bundle adjustment. By applying the bundle adjustment, the coordinates of the point cloud are modified and the camera poses are updated. Experiments on real data confirm the effectiveness of the proposed method.
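The core step inside each ICP iteration is estimating the rigid transform that best maps the current correspondences onto each other. A minimal 2-D closed-form version (illustrative; the paper works with full 3-D range images) looks like this:

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rotation + translation mapping src onto dst for known
    point correspondences: the transform-estimation step of an ICP iteration."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Optimal rotation angle from centered point pairs (2-D closed form).
    s_cos = s_sin = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - csx, y - csy, u - cdx, v - cdy
        s_cos += x * u + y * v
        s_sin += x * v - y * u
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

# Recover a known 30-degree rotation plus translation (2, -1).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
th = math.radians(30)
dst = [(math.cos(th) * x - math.sin(th) * y + 2.0,
        math.sin(th) * x + math.cos(th) * y - 1.0) for x, y in src]
theta, tx, ty = rigid_align_2d(src, dst)
```

In full ICP this estimate is recomputed after re-matching closest points each iteration; the paper's contribution is to then hand the ICP result to a self-calibration bundle adjustment so that measurement errors in the points themselves are also adjusted.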
Kim, TaeHyung; Tyndel, Marc S.; Huang, Haiming; Sidhu, Sachdev S.; Bader, Gary D.; Gfeller, David; Kim, Philip M.
2012-01-01
Peptide recognition domains and transcription factors play crucial roles in cellular signaling. They bind linear stretches of amino acids or nucleotides, respectively, with high specificity. Experimental techniques that assess the binding specificity of these domains, such as microarrays or phage display, can retrieve thousands of distinct ligands, providing detailed insight into binding specificity. In particular, the advent of next-generation sequencing has recently increased the throughput of such methods by several orders of magnitude. These advances have helped reveal the presence of distinct binding specificity classes that co-exist within a set of ligands interacting with the same target. Here, we introduce a software system called MUSI that can rapidly analyze very large data sets of binding sequences to determine the relevant binding specificity patterns. Our pipeline provides two major advances. First, it can detect previously unrecognized multiple specificity patterns in any data set. Second, it offers integrated processing of very large data sets from next-generation sequencing machines. The results are visualized as multiple sequence logos describing the different binding preferences of the protein under investigation. We demonstrate the performance of MUSI by analyzing recent phage display data for human SH3 domains as well as microarray data for mouse transcription factors. PMID:22210894
Gregg, Watson W; Rousseaux, Cécile S
2014-01-01
Quantifying change in ocean biology using satellites is a major scientific objective. We document trends globally for the period 1998–2012 by integrating three diverse methodologies: ocean color data from multiple satellites, bias correction methods based on in situ data, and data assimilation to provide a consistent and complete global representation free of sampling biases. The results indicated no significant trend in global pelagic ocean chlorophyll over the 15 year data record. These results were consistent with previous findings that were based on the first 6 years and first 10 years of the SeaWiFS mission. However, all of the Northern Hemisphere basins (north of 10° latitude), as well as the Equatorial Indian basin, exhibited significant declines in chlorophyll. Trend maps showed the local trends and their change in percent per year. These trend maps were compared with several other previous efforts using only a single sensor (SeaWiFS) and more limited time series, showing remarkable consistency. These results suggested the present effort provides a path forward to quantifying global ocean trends using multiple satellite missions, which is essential if we are to understand the state, variability, and possible changes in the global oceans over longer time scales. PMID:26213675
A Low-Cost Method for Multiple Disease Prediction
Bayati, Mohsen; Bhaskar, Sonia; Montanari, Andrea
2015-01-01
Recently, in response to the rising costs of healthcare services, employers that are financially responsible for the healthcare costs of their workforce have been investing in health improvement programs for their employees. A main objective of these so called “wellness programs” is to reduce the incidence of chronic illnesses such as cardiovascular disease, cancer, diabetes, and obesity, with the goal of reducing future medical costs. The majority of these wellness programs include an annual screening to detect individuals with the highest risk of developing chronic disease. Once these individuals are identified, the company can invest in interventions to reduce the risk of those individuals. However, capturing many biomarkers per employee creates a costly screening procedure. We propose a statistical data-driven method to address this challenge by minimizing the number of biomarkers in the screening procedure while maximizing the predictive power over a broad spectrum of diseases. Our solution uses multi-task learning and group dimensionality reduction from machine learning and statistics. We provide empirical validation of the proposed solution using data from two different electronic medical records systems, with comparisons to a statistical benchmark. PMID:26958164
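The trade-off described (fewest biomarkers, broadest predictive coverage) can be illustrated with a greedy selection loop. This is a deliberately simplified stand-in, not the paper's multi-task learning with group dimensionality reduction, and every biomarker score below is invented:

```python
# Greedy panel selection: grow the biomarker panel until every disease
# reaches a target predictive score, adding the marker with the largest
# marginal gain at each step.

scores = {  # hypothetical per-disease predictive value of each biomarker
    "glucose": {"diabetes": 0.9, "cvd": 0.2, "obesity": 0.3},
    "ldl":     {"diabetes": 0.1, "cvd": 0.8, "obesity": 0.2},
    "bmi":     {"diabetes": 0.3, "cvd": 0.3, "obesity": 0.9},
    "crp":     {"diabetes": 0.2, "cvd": 0.4, "obesity": 0.2},
}
diseases = ["diabetes", "cvd", "obesity"]
target = 0.8  # required score per disease

def coverage(panel):
    return {d: max((scores[b][d] for b in panel), default=0.0) for d in diseases}

panel = []
while any(v < target for v in coverage(panel).values()):
    best = max(sorted(set(scores) - set(panel)),
               key=lambda b: sum(min(target, max(coverage(panel)[d], scores[b][d]))
                                 for d in diseases))
    panel.append(best)
# panel covers all three diseases without the redundant marker "crp"
```

Even this toy version captures the economic point of the paper: the marker with weak, overlapping signal is never purchased, shrinking the screening cost while preserving coverage.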
Community Engagement in US Biobanking: Multiplicity of Meaning and Method
Haldeman, Kaaren M.; Cadigan, R. Jean; Davis, Arlene; Goldenberg, Aaron; Henderson, Gail E.; Lassiter, Dragana; Reavely, Erik
2014-01-01
Background/Aims Efforts to improve individual and population health increasingly rely on large scale collections of human biological specimens and associated data. Such collections or “biobanks” are hailed as valuable resources for facilitating translational biomedical research. However, biobanks also raise important ethical considerations, such as whether, how and why biobanks might engage with those who contributed specimens. This paper examines perceptions and practices of community engagement (CE) among individuals who operate six diverse biobanks in the U.S. Methods Twenty-four people from a diverse group of six biobanks were interviewed in-person or via telephone from March-July, 2011. Interview transcripts were coded and analyzed for common themes. Results Emergent themes include how biobank personnel understand “community” and community engagement as it pertains to biobank operations; information regarding the diversity of practices of CE; and the reasons why biobanks conduct CE. Conclusion Despite recommendations from federal agencies to conduct CE, the interpretation of CE varies widely among biobank employees, ultimately affecting how CE is practiced and what goals are achieved. PMID:24556734
Average wavefunction method for multiple scattering theory and applications
Singh, H.
1985-01-01
A general approximation scheme, the average wavefunction approximation (AWM), applicable to scattering of atoms and molecules off multi-center targets, is proposed. The total potential is replaced by a sum of nonlocal, separable interactions. Each term in the sum projects the wave function onto a weighted average in the vicinity of a given scattering center. The resultant solution is an infinite order approximation to the true solution, and choosing the weighting function as the zeroth order solution guarantees agreement with the Born approximation to second order. In addition, the approximation becomes increasingly accurate in the low-energy, long-wavelength limit. A nonlinear, nonperturbative iterative scheme for the wave function is proposed. An extension of the scheme to multichannel scattering suitable for treating inelastic scattering is also presented. The method is applied to elastic scattering of a gas off a solid surface. The formalism is developed for both periodic and disordered surfaces. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of the AWM to accommodate disorder through statistical averaging over cluster configurations is illustrated. Enhanced uniform back scattering is observed with increasing roughness on the surface. Finally, the AWM is applied to atom-molecule scattering.
Methods and systems for integrating fluid dispensing technology with stereolithography
Medina, Francisco; Wicker, Ryan; Palmer, Jeremy A.; Davis, Don W.; Chavez, Bart D.; Gallegos, Phillip L.
2010-02-09
An integrated system and method of integrating fluid dispensing technologies (e.g., direct-write (DW)) with rapid prototyping (RP) technologies (e.g., stereolithography (SL)) without part registration comprising: an SL apparatus and a fluid dispensing apparatus further comprising a translation mechanism adapted to translate the fluid dispensing apparatus along the X-, Y- and Z-axes. The fluid dispensing apparatus comprises: a pressurized fluid container; a valve mechanism adapted to control the flow of fluid from the pressurized fluid container; and a dispensing nozzle adapted to deposit the fluid in a desired location. To aid in calibration, the integrated system includes a laser sensor and a mechanical switch. The method further comprises building a second part layer on top of the fluid deposits and optionally accommodating multi-layered circuitry by incorporating a connector trace. Thus, the present invention is capable of efficiently building single and multi-material SL fabricated parts embedded with complex three-dimensional circuitry using DW.
Mathies, Richard A.; Singhal, Pankaj; Xie, Jin; Glazer, Alexander N.
2002-01-01
This invention relates to a microfabricated capillary electrophoresis chip for detecting multiple redox-active labels simultaneously using a matrix coding scheme and to a method of selectively labeling analytes for simultaneous electrochemical detection of multiple label-analyte conjugates after electrophoretic or chromatographic separation.
Mof-Tree: A Spatial Access Method To Manipulate Multiple Overlapping Features.
ERIC Educational Resources Information Center
Manolopoulos, Yannis; Nardelli, Enrico; Papadopoulos, Apostolos; Proietti, Guido
1997-01-01
Investigates the manipulation of large sets of two-dimensional data representing multiple overlapping features, and presents a new access method, the MOF-tree. Analyzes storage requirements and time with respect to window query operations involving multiple features. Examines both the pointer-based and pointerless MOF-tree representations.…
Integral structural-functional method for characterizing microbial populations
NASA Astrophysics Data System (ADS)
Yakushev, A. V.
2015-04-01
An original integral structural-functional method has been proposed for characterizing microbial communities. The novelty of the approach is the in situ study of microorganisms based on the growth kinetics of microbial associations in liquid nutrient broth media under selective conditions, rather than on the level of taxa or large functional groups. The method involves the analysis of the integral growth model of a periodic culture. The kinetic parameters of such associations reflect their capacity to grow on different media, i.e., their physiological diversity, and the metabolic capacity of the microorganisms for growth on a nutrient medium. Therefore, the obtained parameters are determined by the features of the microbial ecological strategies. Inoculation of a dense medium from the original inoculate allows characterizing the taxonomic composition of the dominants in the soil community. Inoculation from the associations developed on selective media characterizes the composition of syntrophic groups, which fulfill a specific function in nature. This method is of greater information value than the classical methods of inoculation on selective media.
Integrated Force Method Solution to Indeterminate Structural Mechanics Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.
2004-01-01
Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, based primarily on the equilibrium concept, is well understood. Solutions of indeterminate problems require additional compatibility conditions, whose comprehension has not been complete. A solution to an indeterminate problem is generated by manipulating the equilibrium concept, either by rewriting it in the displacement variables or through the cutting and gap-closing technique of the redundant force method. Improvised compatibility has made analysis cumbersome. The authors have researched and elucidated the compatibility theory, so that solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from forces. The IFM equations can be manipulated to obtain the Dual Integrated Force Method (IFMD), in which displacement is the primary variable and force is back-calculated. The subject is introduced through the response variables (force, deformation, displacement) and the underlying concepts (equilibrium equations, force-deformation relations, deformation-displacement relations, and compatibility conditions). Mechanical load, temperature variation, and support settlement are equally emphasized. The basic theory is discussed, and a set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.
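The force-primary idea can be shown on the smallest one-degree-indeterminate problem: a bar fixed at both ends with an axial load P at an interior node. Member forces N1 and N2 are the unknowns, closed by one equilibrium equation and one compatibility equation (total elongation zero). The stiffness and load values are assumed for illustration:

```python
# IFM-style solution sketch: forces first, displacements back-calculated.
k1, k2, P = 2000.0, 1000.0, 300.0   # member stiffnesses [N/m], load [N]

# Equilibrium at the loaded node:   N1 - N2       = P
# Compatibility (fixed ends):       N1/k1 + N2/k2 = 0
a11, a12, b1 = 1.0, -1.0, P
a21, a22, b2 = 1.0 / k1, 1.0 / k2, 0.0

det = a11 * a22 - a12 * a21         # Cramer's rule on the 2x2 system
N1 = (b1 * a22 - a12 * b2) / det    # -> P*k1/(k1+k2) = 200 N (tension)
N2 = (a11 * b2 - b1 * a21) / det    # -> -P*k2/(k1+k2) = -100 N (compression)

# Displacement is back-calculated from force, as in IFM.
u = N1 / k1                          # node displacement = elongation of member 1
```

Both equation types appear on equal footing in the coefficient matrix, which is the point of the method: no cut-redundant bookkeeping, and the displacement u = P/(k1+k2) drops out afterwards from the force solution.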
Method and apparatus for determining material structural integrity
Pechersky, M.J.
1994-01-01
Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
Gonzalez, Ivan; Schmidt, Ivan
2009-06-15
A modular application of the integration by fractional expansion method for evaluating Feynman diagrams is extended to diagrams that contain loop triangle subdiagrams in their geometry. The technique is based in the replacement of this module or subdiagram by its corresponding multiregion expansion (MRE), which in turn is obtained from Schwinger's parametric representation of the diagram. The result is a topological reduction, transforming the triangular loop into an equivalent vertex, which simplifies the search for the MRE of the complete diagram. This procedure has important advantages with respect to considering the parametric representation of the whole diagram: the obtained MRE is reduced, and the resulting hypergeometric series tends to have smaller multiplicity.
A method for optimizing integrated system health management
NASA Astrophysics Data System (ADS)
Jambor, Bruno; Rouch, Robin L.; Eger, George W.; Black, Stephen T.
1996-03-01
The cost of operating the existing fleet of launch vehicles, both expendable and reusable, is too high. The high cost is attributable to two primary sources: people-intensive checkout procedures and delayed launches. The latter drives up the cost of both launch procedures and other launch operations through ripple-down effects. Without significant changes in how the launch vehicle community does business, the next generation of vehicles will be burdened by the same high costs. By integrating system health management into the next generation, Reusable Launch Vehicle (RLV) operations costs can be reduced. A method for optimizing Integrated System Health Management (ISHM) is being developed under a cooperative agreement between NASA and Lockheed Martin Corporation (LMC). This paper describes the work currently underway at LMC. ISHM will be implemented on the prototype vehicle X-33 in order to demonstrate its usefulness for RLVs.
Real-time optical multiple object recognition and tracking system and method
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Liu, Hua Kuang (Inventor)
1987-01-01
The invention relates to an apparatus and associated methods for the optical recognition and tracking of multiple objects in real time. Multiple point spatial filters are employed that pre-define the objects to be recognized at run-time. The system takes the basic technology of a Vander Lugt filter and adds a hololens. The technique replaces time-, space-, and cost-intensive digital techniques. In place of multiple objects, the system can also recognize multiple orientations of a single object. This latter capability has potential for space applications where space and weight are at a premium.
A method to determine integrated steroid levels in wildlife claws.
Matas, Devorah; Keren-Rotem, Tammy; Koren, Lee
2016-05-01
Glucocorticoids act throughout life to regulate numerous physiological and behavioral processes. Their levels are therefore highly labile, reacting to varying conditions and stressors. Hence, measuring glucocorticoids (and other steroids) in wildlife is challenging, and devising methods that are unaffected by the stress of capture and handling should be explored. Here we use the tip of free-ranging chameleons' claws that were cut to allow individual identification, and report a steroids extraction and quantification method. Claw steroids present an integrated level representing the period of claw growth. We found that we could measure corticosterone in small amounts of chameleon claw matrix using commercial EIA kits. Using this method, we learned that in wild male chameleons, claw corticosterone levels were associated with body size. We suggest that claw-testing can potentially provide an ideal matrix for wildlife biomonitoring. PMID:26993343
Comparison of four stable numerical methods for Abel's integral equation
NASA Technical Reports Server (NTRS)
Murio, Diego A.; Mejia, Carlos E.
1991-01-01
The 3-D image reconstruction from cone-beam projections in computerized tomography leads naturally, in the case of radial symmetry, to the study of Abel-type integral equations. If the experimental information is obtained from measured data, on a discrete set of points, special methods are needed in order to restore continuity with respect to the data. A new combined Regularized-Adjoint-Conjugate Gradient algorithm, together with two different implementations of the Mollification Method (one based on a data filtering technique and the other on the mollification of the kernel function) and a regularization by truncation method (initially proposed for 2-D ray sample schemes and more recently extended to 3-D cone-beam image reconstruction) are extensively tested and compared for accuracy and numerical stability as functions of the level of noise in the data.
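The flavor of these stabilized schemes can be illustrated on a small discrete model. The sketch below builds a midpoint-rule Abel projection matrix and inverts it with Tikhonov regularization; this is a generic stabilizer for illustration, not the paper's mollification or adjoint-conjugate-gradient algorithms.

```python
import numpy as np

def abel_forward_matrix(n, dr):
    """Discrete Abel projection g = A @ f, with f sampled at midpoints
    r_j = (j + 0.5) dr and g sampled at y_i = i dr. Taking r_j strictly
    between the y_i grid points keeps the 1/sqrt kernel finite."""
    A = np.zeros((n, n))
    for i in range(n):
        yi = i * dr
        for j in range(i, n):
            rj = (j + 0.5) * dr
            A[i, j] = 2.0 * rj * dr / np.sqrt(rj**2 - yi**2)
    return A

def abel_invert_tikhonov(A, g, lam):
    """Regularized inversion f = (A^T A + lam I)^{-1} A^T g; lam trades
    fidelity for stability against noise in g."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
```

A quick sanity check: a Gaussian profile f(r) = exp(-r^2) has the analytic projection g(y) = sqrt(pi) exp(-y^2), and the regularized inverse should recover f from noiseless data.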
Fourier-sparsity integrated method for complex target ISAR imagery.
Gao, Xunzhang; Liu, Zhen; Chen, Haowen; Li, Xiang
2015-01-01
In existing sparsity-driven inverse synthetic aperture radar (ISAR) imaging framework a sparse recovery (SR) algorithm is usually applied to azimuth compression to achieve high resolution in the cross-range direction. For range compression, however, direct application of an SR algorithm is not very effective because the scattering centers resolved in the high resolution range profiles at different view angles always exhibit irregular range cell migration (RCM), especially for complex targets, which will blur the ISAR image. To alleviate the sparse recovery-induced RCM in range compression, a sparsity-driven framework for ISAR imaging named Fourier-sparsity integrated (FSI) method is proposed in this paper, which can simultaneously achieve better focusing performance in both the range and cross-range domains. Experiments using simulated data and real data demonstrate the superiority of our proposed framework over existing sparsity-driven methods and range-Doppler methods. PMID:25629707
NASA Astrophysics Data System (ADS)
Masychev, Victor I.
2000-11-01
In this research we present the results of an evaluation of two methods of optical caries diagnostics: PNC-spectral diagnostics and caries detection by laser integral fluorescence. The research was conducted in a dental clinic. The PNC method analyzes parameters of the probing laser radiation and PNC spectra of the stimulated secondary radiations: backscattering and endogenous fluorescence of caries-associated bacteria. A He-Ne laser (λ = 632.8 nm, 1-2 mW) was used as the source of probing (stimulating) radiation. A PDA detector was applied for registration of the signals received from intact and pathological teeth. The PNC spectra were processed by special algorithms and displayed on a PC monitor. The method of laser integral fluorescence was used for comparison; in this case the integral fluorescence power of human teeth was measured. Diode lasers (λ = 655 nm, 0.1 mW and 630 nm, 1 mW) and a He-Ne laser were applied as sources of probing (stimulating) radiation, a Si photodetector was used for registration of the signals, and the integral power was shown on a digital indicator. Advantages and disadvantages of both methods are described in this research. The laser integral fluorescence method is notable for its simplicity of construction and circuit design. The PNC-spectral method, however, is considerably more sensitive in diagnosing initial caries and is able to differentiate pathologies at various stages (for example, calculus versus initial caries). Estimating the spectral characteristics of PNC signals eliminates a number of drawbacks characteristic of detection by laser integral fluorescence (for instance, detecting fluorescent fillings, plaque, calculus, general discolorations, amalgam, and gold fillings as if they were caries).
Methods for tracking multiple marine mammals with wide-baseline passive acoustic arrays.
Nosal, Eva-Marie
2013-09-01
Most methods used to track marine mammals with passive acoustics require that time differences of arrivals (TDOAs) are established and are associated between hydrophone pairs. Consequently, multiple animal trackers commonly apply single-animal TDOA localization methods after performing a call separation and/or TDOA association step. When a wide-baseline array is used with multiple animals that make similar calls with short inter-call-intervals, the separation/association step can be challenging and potentially rejects valid TDOAs. This paper extends a model-based TDOA method to deal with multiple-animal datasets in a way that does not require a TDOA association step; animals are separated based on position. Advantageously, false TDOAs (e.g., a direct path associated with a multipath arrival) do not need to be removed. An analogous development is also presented for a model-based time of arrival tracking method. Results from simulations and application to a multiple sperm whale dataset are used to illustrate the multiple-animal methods. Although computationally more demanding than most track-after-association methods because separation is performed in a higher-dimensional space, the methods are computationally tractable and represent a useful new tool in the suite of options available for tracking multiple animals with passive acoustics. PMID:23968035
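For orientation, TDOA estimation and a position-domain search can be sketched as follows. The toy grid search scores candidate positions directly against measured TDOAs, loosely echoing the idea of separating animals by position rather than by a TDOA association step; the hydrophone geometry, sound speed, and signals are invented for illustration.

```python
import numpy as np

def estimate_tdoa(x, y, fs):
    """TDOA (seconds) between two sensor records via the peak of the
    full cross-correlation; positive means y is a delayed copy of x."""
    c = np.correlate(y, x, mode="full")
    lag = int(np.argmax(c)) - (len(x) - 1)
    return lag / fs

def localize_grid(tdoas, pairs, hyd_pos, grid, c=1500.0):
    """Pick the candidate position whose predicted TDOAs best match the
    measured ones in a least-squares sense, so that sources are
    distinguished in position space rather than by TDOA association."""
    best, best_err = None, np.inf
    for p in grid:
        err = 0.0
        for (i, j), t in zip(pairs, tdoas):
            pred = (np.linalg.norm(p - hyd_pos[i])
                    - np.linalg.norm(p - hyd_pos[j])) / c
            err += (t - pred) ** 2
        if err < best_err:
            best, best_err = p, err
    return best
```

With several animals, one would score each grid point against all candidate TDOAs and keep the locally best-supported positions, which is where the model-based machinery of the paper takes over.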
NASA Astrophysics Data System (ADS)
Cheng, Q.
2013-12-01
This paper introduces several techniques recently developed based on the concepts of multiplicative cascade processes and multifractals for processing exploration geochemical and geophysical data, with the aim of recognizing geological features and delineating target areas for undiscovered mineral deposits. From a nonlinear point of view, extreme geo-processes such as cloud formation, rainfall, hurricanes, flooding, landslides, earthquakes, igneous activity, tectonics and mineralization often show a singular property: they may result in anomalous amounts of energy release or mass accumulation that are generally confined to narrow intervals in space or time. The end products of these nonlinear processes have in common that they can be modeled as fractals or multifractals. Here we show that three fundamental scaling concepts in the context of multifractals (singularity, self-similarity and the fractal dimension spectrum) make multifractal theory and methods useful for geochemical and geophysical data processing for the general purpose of geological feature recognition. These methods include: a local singularity analysis based on an area-density (C-A) multifractal model, used as a scaling high-pass filtering technique capable of extracting weak signals caused by buried geological features; a suite of multifractal filtering techniques based on spectrum density-area (S-A) multifractal models, implemented in various domains including the frequency domain, which can be used for unmixing geochemical or geophysical fields according to distinct generalized self-similarities characterized in a given domain; and multiplicative cascade processes for integrating diverse evidential layers of information to predict point events such as the locations of mineral deposits. It is demonstrated by several case studies involving Fe, Sn, Mo-Ag and Mo-W mineral deposits that the singularity method can be utilized to process stream sediment/soil geochemical data and gravity/aeromagnetic data as high
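The concentration-area idea behind these filters is simple to sketch: for each concentration threshold, count the area (here, grid cells) with values at or above it, and inspect the log-log curve for slope breaks that separate background from anomaly. This is a generic illustration, not the paper's implementation.

```python
import numpy as np

def concentration_area(values, n_thresholds=50):
    """Empirical concentration-area (C-A) relation: for each threshold s,
    the area (cell count) with value >= s. Fractal populations plot as
    straight segments on log-log axes; breaks in slope are candidate
    anomaly thresholds."""
    v = np.asarray(values, dtype=float).ravel()
    s = np.geomspace(v.min(), v.max(), n_thresholds)  # positive data assumed
    area = np.array([(v >= t).sum() for t in s])
    return s, area
```

In practice one would plot log(area) against log(s) and fit piecewise lines; the threshold at the fitted break is then used to map anomalous cells.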
Rowat, S C
1998-01-01
The central nervous, immune, and endocrine systems communicate through multiple common messengers. Over evolutionary time, what may be termed integrated defense system(s) (IDS) have developed to coordinate these communications for specific contexts; these include the stress response, acute-phase response, nonspecific immune response, immune response to antigen, kindling, tolerance, time-dependent sensitization, neurogenic switching, and traumatic dissociation (TD). These IDSs are described and their overlap is examined. Three models of disease production are generated: damage, in which IDSs function incorrectly; inadequate/inappropriate, in which IDS response is outstripped by a changing context; and evolving/learning, in which the IDS learned response to a context is deemed pathologic. Mechanisms of multiple chemical sensitivity (MCS) are developed from several IDS disease models. Model 1A is pesticide damage to the central nervous system, overlapping with body chemical burdens, TD, and chronic zinc deficiency; model 1B is benzene disruption of interleukin-1, overlapping with childhood developmental windows and hapten-antigenic spreading; and model 1C is autoimmunity to immunoglobulin-G (IgG), overlapping with spreading to other IgG-inducers, sudden spreading of inciters, and food-contaminating chemicals. Model 2A is chemical and stress overload, including comparison with the susceptibility/sensitization/triggering/spreading model; model 2B is genetic mercury allergy, overlapping with: heavy metals/zinc displacement and childhood/gestational mercury exposures; and model 3 is MCS as evolution and learning. Remarks are offered on current MCS research. Problems with clinical measurement are suggested on the basis of IDS models. Large-sample patient self-report epidemiology is described as an alternative or addition to clinical biomarker and animal testing. PMID:9539008
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed through discriminating each boundary via Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric combining the spatial distribution of the saliency map and the mean saliency in covered window ratio (MSR) is designed. Finally, in order to further improve the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most of the methods based on boundary, and the integrated result outperforms all state-of-the-art methods.
A Method of Integrated Description of Design Information for Reusability
NASA Astrophysics Data System (ADS)
Tsumaya, Akira; Nagae, Masao; Wakamatsu, Hidefumi; Shirase, Keiichi; Arai, Eiji
Much of product design is executed concurrently these days. Such concurrent design requires a method by which various kinds of design information can be shared and reused among designers. However, complete understanding of the design information among designers has been a difficult issue. In this paper, a design process model that makes use of designers' intentions is proposed. A method to combine the design process information and the design object information is also proposed. We introduce how to describe designers' intentions by providing some databases. The Keyword Database consists of ontological data related to design objects/activities. Designers select suitable keyword(s) from the Keyword Database and explain the reasons/ideas behind their design activities in descriptions that use those keyword(s). We also developed an integrated design information management system architecture that uses this method of integrated description with designers' intentions. The system connects the information related to the design process with that related to the design object through designers' intentions, so designers can communicate with each other to understand how others make decisions in design. Designers can also reuse both design process information and design object information through the database management subsystem.
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data is fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers several neighboring countries' borderland regions, the data is difficult for any one research institution or government to obtain. Volunteered Geographic Information (VGI) has been proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost; VGI is therefore one reasonable source of borderland spatial data. OpenStreetMap (OSM) is known as the most successful VGI resource, but the OSM data model is far different from traditional authoritative geographic information, so the OSM data needs to be converted to the scientists' customized data model. Because the real world changes quickly, the converted data also needs to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file is presented, and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
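The change-only extraction step might look like the following minimal sketch, which scans an OsmChange (.osc) diff document and keeps node edits falling inside a study bounding box. Element handling is deliberately simplified; real diffs also carry ways and relations, and deleted elements may omit coordinates.

```python
import xml.etree.ElementTree as ET

def changed_nodes_in_bbox(osc_xml, min_lon, min_lat, max_lon, max_lat):
    """From an OsmChange (.osc) diff document, collect ids of created,
    modified, or deleted nodes inside the study bounding box, mapped to
    the kind of change. Nodes without coordinates are skipped."""
    root = ET.fromstring(osc_xml)
    hits = {}
    for action in root:                      # <create>, <modify>, <delete>
        for node in action.findall("node"):
            lon = float(node.get("lon", "nan"))
            lat = float(node.get("lat", "nan"))
            if min_lon <= lon <= max_lon and min_lat <= lat <= max_lat:
                hits[node.get("id")] = action.tag
    return hits
```

The resulting id-to-action map is the raw material for the change-only information file; a production pipeline would stream-parse the daily diff rather than load it whole.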
A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.
NASA Astrophysics Data System (ADS)
Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio
2016-04-01
The assessment of climate change related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another concerns the potentially difficult integration of environmental, social and economic disciplines in the multi-risk concept. Finally, effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first testing of the approach, including the application of Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is applied in the Po river delta in Northern Italy. The approach is based on a bottom-up process involving local stakeholders early in different
Optical matrix-matrix multiplication method demonstrated by the use of a multifocus hololens
NASA Technical Reports Server (NTRS)
Liu, H. K.; Liang, Y.-Z.
1984-01-01
A method of optical matrix-matrix multiplication is presented. The feasibility of the method is also experimentally demonstrated by the use of a dichromated-gelatin multifocus holographic lens (hololens). With the specific values of matrices chosen, the average percentage error between the theoretical and experimental values of the output-matrix elements for the multiplication of specific pairs of 3 x 3 matrices is 0.4 percent, which corresponds to 8-bit accuracy.
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
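The idea of solving equilibrium and compatibility simultaneously for member forces can be shown on the smallest indeterminate example: a bar fixed at both ends with an axial load at an interior node. The stiffness values below are arbitrary illustrative numbers, and the two-equation system is only a toy instance of the force-method formulation.

```python
import numpy as np

# Two-member bar fixed at both ends, axial load P at the interior node.
# Unknowns are the member forces F1 (left) and F2 (right), tension positive.
k1, k2, P = 2.0e6, 3.0e6, 10.0e3   # member stiffnesses EA/L (N/m), load (N)

# Row 1: node equilibrium   F1 - F2 = P
# Row 2: compatibility      F1/k1 + F2/k2 = 0  (elongations around the
#                           closed load path must sum to zero)
S = np.array([[1.0, -1.0],
              [1.0 / k1, 1.0 / k2]])
rhs = np.array([P, 0.0])
F1, F2 = np.linalg.solve(S, rhs)

u = F1 / k1   # node displacement recovered afterwards from a member force
```

The solution F1 = P k1/(k1 + k2), F2 = -P k2/(k1 + k2), u = P/(k1 + k2) matches what the displacement-based stiffness method gives, illustrating how the force method treats forces as primary unknowns and recovers displacements afterwards.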
Investigation of system integration methods for bubble domain flight recorders
NASA Technical Reports Server (NTRS)
Chen, T. T.; Bohning, O. D.
1975-01-01
System integration methods for bubble domain flight recorders are investigated. Bubble memory module packaging and assembly, the control electronics design and construction, field coils, and permanent magnet bias structure design are studied. A small 60-kbit engineering model was built and tested to demonstrate the feasibility of the bubble recorder. Based on the various studies performed, a projection is made for a 50,000,000-bit prototype recorder. It is estimated that the recorder will occupy 190 cubic inches, weigh 12 pounds, and consume 12 W of power when all four of its tracks are operated in parallel at a 150 kHz data rate.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
The biocommunication method: On the road to an integrative biology.
Witzany, Guenther
2016-01-01
Although molecular biology, genetics, and related special disciplines represent a large amount of empirical data, a practical method for the evaluation and overview of current knowledge is far from being realized. The main concepts and narratives in these fields have remained nearly the same for decades and the more recent empirical data concerning the role of noncoding RNAs and persistent viruses and their defectives do not fit into this scenario. A more innovative approach such as applied biocommunication theory could translate empirical data into a coherent perspective on the functions within and between biological organisms and arguably lead to a sustainable integrative biology. PMID:27195071
Primal and Dual Integrated Force Methods Used for Stochastic Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.
2005-01-01
At the NASA Glenn Research Center, the primal and dual integrated force methods are being extended for the stochastic analysis of structures. The stochastic simulation can be used to quantify the consequence of scatter in stress and displacement response because of a specified variation in input parameters such as load (mechanical, thermal, and support settling loads), material properties (strength, modulus, density, etc.), and sizing design variables (depth, thickness, etc.). All the parameters are modeled as random variables with given probability distributions, means, and covariances. The stochastic response is formulated through a quadratic perturbation theory, and it is verified through a Monte Carlo simulation.
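The perturbation idea can be illustrated at first order on a scalar response u = P/k with a random stiffness k, checking the perturbation moments against a Monte Carlo simulation. The single-variable model is only a stand-in for the structural formulation, and the numbers are invented for illustration.

```python
import numpy as np

def perturbation_stats(k0, sigma_k, P):
    """First-order perturbation of u = P / k about the mean stiffness k0:
    mean(u) ~ P/k0 and std(u) ~ |du/dk| * sigma_k = (P/k0**2) * sigma_k."""
    mean_u = P / k0
    std_u = (P / k0**2) * sigma_k
    return mean_u, std_u

def monte_carlo_stats(k0, sigma_k, P, n, rng):
    """Brute-force check: sample k, propagate through the response."""
    k = rng.normal(k0, sigma_k, n)
    u = P / k
    return u.mean(), u.std()
```

For a small coefficient of variation the two agree closely; the Monte Carlo mean drifts slightly above P/k0 because 1/k is convex, which is exactly the kind of higher-order effect a quadratic perturbation theory captures.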
Method for deposition of a conductor in integrated circuits
Creighton, J.R.; Dominguez, F.; Johnson, A.W.; Omstead, T.R.
1997-09-02
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten. 2 figs.
Method for deposition of a conductor in integrated circuits
Creighton, J. Randall; Dominguez, Frank; Johnson, A. Wayne; Omstead, Thomas R.
1997-01-01
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten.
The Study of Gay-Berne Fluid:. Integral Equations Method
NASA Astrophysics Data System (ADS)
Khordad, Reza; Mohebbi, Mehran; Keshavarzi, Abolla; Poostforush, Ahmad; Ghajari Haghighi, Farnaz
We study a classical fluid of nonspherical molecules. The components of the fluid are ellipsoidal molecules interacting through the Gay-Berne potential model. A method is described which allows the Percus-Yevick (PY) and hypernetted-chain (HNC) integral equation theories to be solved numerically for this fluid. Explicit results are given and comparisons are made with recent Monte Carlo (MC) simulations. It is found that, at a lower cutoff lmax, the HNC and PY closures give significantly different results. At a higher cutoff lmax, the HNC and (approximately) the PY theories are superior in predicting the existence of the phase transition, in qualitative agreement with computer simulation.
The biocommunication method: On the road to an integrative biology
Witzany, Guenther
2016-01-01
Although molecular biology, genetics, and related special disciplines represent a large amount of empirical data, a practical method for the evaluation and overview of current knowledge is far from being realized. The main concepts and narratives in these fields have remained nearly the same for decades and the more recent empirical data concerning the role of noncoding RNAs and persistent viruses and their defectives do not fit into this scenario. A more innovative approach such as applied biocommunication theory could translate empirical data into a coherent perspective on the functions within and between biological organisms and arguably lead to a sustainable integrative biology. PMID:27195071
Integral fill yarn insertion and beatup method using inflatable membrane
NASA Technical Reports Server (NTRS)
Farley, Gary L. (Inventor)
1993-01-01
An apparatus and method for integral fill yarn insertion and beatup are disclosed. A modified rapier contains a channel for holding fill yarn. The channel is covered with a flexible and inflatable boot, and an inflating apparatus for this boot is also attached. Fill yarn is inserted into the channel, and the rapier is extended into a shed formed by warp yarn. Next, the rapier is pushed into the fell of the fabric, and the flexible and inflatable cover inflated, which both pushes the yarn into the fell of the fabric and performs beatup. The rapier is withdrawn and the shed closed to complete one step of the weaving process.
Method of producing an integral resonator sensor and case
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill V. (Inventor); Challoner, A. Dorian (Inventor); Hayworth, Ken J. (Inventor); Wiberg, Dean V. (Inventor); Yee, Karl Y. (Inventor)
2005-01-01
The present invention discloses an inertial sensor having an integral resonator. A typical sensor comprises a planar mechanical resonator for sensing motion of the inertial sensor and a case for housing the resonator. The resonator and a wall of the case are defined through an etching process. A typical method of producing the resonator includes etching a baseplate, bonding a wafer to the etched baseplate, through etching the wafer to form a planar mechanical resonator and the wall of the case and bonding an end cap wafer to the wall to complete the case.
Integration of Boltzmann machine and reverse analysis method
NASA Astrophysics Data System (ADS)
Mamuda, Mamman; Sathasivam, Saratha
2015-10-01
Reverse analysis is essentially a data mining technique for unearthing relationships between data. By extracting the connection strengths of a Hopfield network, we can recover the relationships present in data sets. It is well known that Hopfield networks under certain relaxation schemes have an associated cost function, and that the states of the network converge to local minima of this function; the network thus performs optimization of a well-defined function. However, there is no guarantee of finding the global minimum. The Boltzmann machine has been introduced to overcome this problem. In this paper, we integrate both approaches to enhance data mining. We limit our work to Horn clauses.
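The contrast between deterministic Hopfield descent and stochastic Boltzmann-machine updates can be sketched as follows. The energy function and annealing schedule below are generic illustrations, not the paper's integration with reverse analysis.

```python
import numpy as np

def energy(W, s):
    """Hopfield energy E(s) = -1/2 s^T W s for states s in {-1, +1}^n."""
    return -0.5 * s @ W @ s

def hopfield_descent(W, s, sweeps=10):
    """Asynchronous sign updates: deterministic descent on the energy
    (for symmetric W with zero diagonal), which can stall in a local
    minimum."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

def boltzmann_descent(W, s, T0=2.0, sweeps=50, rng=None):
    """Boltzmann-machine style updates: units turn on with a sigmoid
    probability of their local field at temperature T, and T is annealed
    toward zero, giving a chance of escaping local minima."""
    if rng is None:
        rng = np.random.default_rng(0)
    s = s.copy()
    for t in range(sweeps):
        T = T0 * (1.0 - t / sweeps) + 1e-3
        for i in range(len(s)):
            h = W[i] @ s - W[i, i] * s[i]          # field from other units
            z = np.clip(2.0 * h / T, -60.0, 60.0)  # avoid exp overflow
            p_on = 1.0 / (1.0 + np.exp(-z))
            s[i] = 1.0 if rng.random() < p_on else -1.0
    return s
```

At high temperature the Boltzmann updates are nearly random; as T falls they approach the deterministic Hopfield rule, which is why annealing can hop out of shallow minima that pure descent cannot.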
Towards a Better Understanding of CMMI and Agile Integration - Multiple Case Study of Four Companies
NASA Astrophysics Data System (ADS)
Pikkarainen, Minna
The amount of software is increasing across different domains in Europe. This provides industries in smaller countries good opportunities to work in the international markets. Success in the global markets, however, demands the rapid production of high-quality, error-free software. Both CMMI and agile methods seem to provide a ready solution for quality and lead-time improvements. There is not, however, much empirical evidence available about either 1) how the integration of these two aspects can be done in practice or 2) what it actually demands from assessors and software process improvement groups. The goal of this paper is to increase the understanding of CMMI and agile integration, in particular focusing on the research question: how to use a 'lightweight' style of CMMI assessment in agile contexts. This is done via four case studies in which assessments were conducted using the goals of the CMMI integrated project management and collaboration and coordination with relevant stakeholder process areas, and practices from XP and Scrum. The study shows that the use of agile practices may support the fulfilment of the goals of CMMI process areas, but there are still many challenges for the agile teams to be solved within the continuous improvement programs. It also identifies practical advice for assessors and improvement groups to take into consideration when conducting assessments in the context of agile software development.
Integrating Multiple Distribution Models to Guide Conservation Efforts of an Endangered Toad
Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models. PMID:26125634
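The integration step the authors describe, overlaying a potential-habitat model and a current-habitat model to flag sites for active management, can be sketched as below; the grid values and the 0.5 cutoff are invented for illustration (the paper fits Random Forests to real environmental rasters).

```python
import numpy as np

# Illustrative 0-1 suitability scores over the same grid cells from two
# models: one built from long-term variables (potential habitat) and one
# adding recent remotely-sensed variables (current habitat).
potential = np.array([0.9, 0.7, 0.6, 0.2])
current = np.array([0.8, 0.3, 0.2, 0.1])

threshold = 0.5  # assumed suitability cutoff
is_potential = potential >= threshold
is_current = current >= threshold

# Cells suitable under long-term conditions but not currently suitable
# are the candidates for active management the abstract describes.
management_candidates = is_potential & ~is_current
```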
An integral imaging method for depth extraction with lens array in an optical tweezer system
NASA Astrophysics Data System (ADS)
Wang, Shulu; Liu, Wei-Wei; Wang, Anting; Li, Yinmei; Ming, Hai
2014-10-01
In this paper, a new integral imaging method is proposed for depth extraction in an optical tweezer system. A mutual coherence algorithm for stereo matching is theoretically analyzed and demonstrated to be feasible by virtual simulation. In our design, the optical tweezer technique is combined with integral imaging in a single microscopy system by inserting a lens array into the optical train. On one hand, the optical tweezer subsystem is built on the modulated light field from a solid laser, whose strongly focused beam forms a light trap to capture tiny specimens. On the other hand, through parameter optimization, the microscopic integral imaging subsystem is composed of a microscope objective, a lens array (150x150 array with 0.192 mm unit size and 9 mm focal length) and a single-lens reflex camera (SLR). Pre-magnified by the microscope objective, the specimens form multiple images through the lens array, so a single photograph of the resulting sub-images records perspective views of the specimens. The differences between adjacent sub-images are analyzed for depth extraction with the mutual coherence algorithm. The experimental results show that the axial resolution can reach 1 μm⁻¹ and the lateral resolution can reach 2 μm⁻¹.
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.
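The regulator half of the LQG solution the abstract summarizes reduces, for a scalar plant, to a one-line Riccati solve; the plant and cost weights below are illustrative, not the DAST ARW-II state-space model.

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar continuous-time algebraic Riccati equation
    2*a*P - (b**2 / r)*P**2 + q = 0 for the stabilizing P >= 0,
    then return the regulator gain K = b*P/r (control law u = -K*x)."""
    P = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    K = b * P / r
    return P, K

# Illustrative unstable plant xdot = x + u with unit state/control weights
P, K = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
closed_loop = 1.0 - K  # a - b*K must be negative for stability
```

Sensitivity of the optimum, in the paper's sense, asks how P and K move when a structural parameter (here, `a`) is perturbed, rather than how the closed-loop response moves for fixed gains.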
Methods for integrating optical fibers with advanced aerospace materials
NASA Astrophysics Data System (ADS)
Poland, Stephen H.; May, Russell G.; Murphy, Kent A.; Claus, Richard O.; Tran, Tuan A.; Miller, Mark S.
1993-07-01
Optical fibers are attractive candidates for sensing applications in near-term smart materials and structures, due to their inherent immunity to electromagnetic interference and ground loops, their capability for distributed and multiplexed operation, and their high sensitivity and dynamic range. These same attributes also render optical fibers attractive for avionics busses for fly-by-light systems in advanced aircraft. The integration of such optical fibers with metal and composite aircraft and aerospace materials, however, remains a limiting factor in their successful use in such applications. This paper first details methods for the practical integration of optical fiber waveguides and cable assemblies onto and into materials and structures. Physical properties of the optical fiber and coatings which affect the survivability of the fiber are then considered. Mechanisms for the transfer of the strain from matrix to fiber for sensor and data bus fibers integrated with composite structural elements are evaluated for their influence on fiber survivability, in applications where strain or impact is imparted to the assembly.
A Multiple-Methods Approach to the Investigation of WAIS-R Constructs Employing Cluster Analysis.
ERIC Educational Resources Information Center
Fraboni, Maryann; And Others
1989-01-01
Seven hierarchical clustering methods were applied to the Wechsler Adult Intelligence Scale-Revised (WAIS-R) scores of 121 medical rehabilitation clients to investigate the possibility of method-dependent results and determine the stability of the clusters. This multiple-methods cluster analysis suggests that the underlying constructs of the…
Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.; Pollard, Colin J.; Warner, Kirstin F.; Remmers, Daniel L.; Sorensen, Daniel N.; Whinnery, LeRoy L.; Phillips, Jason J.; Shelley, Timothy J.; Reyes, Jose A.; Hsu, Peter C.; Reynolds, John G.
2013-03-25
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.
Bornstein, Robert F.
2015-01-01
Recent controversies have illuminated the strengths and limitations of different frameworks for conceptualizing personality pathology (e.g., trait perspectives, categorical models), and stimulated debate regarding how best to diagnose personality disorders (PDs) in DSM-5, and in other diagnostic systems (i.e., the International Classification of Diseases, the Psychodynamic Diagnostic Manual). In this article I argue that regardless of how PDs are conceptualized and which diagnostic system is employed, multi-method assessment must play a central role in PD diagnosis. By complementing self-reports with evidence from other domains (e.g., performance-based tests), a broader range of psychological processes are engaged in the patient, and the impact of self-perception and self-presentation biases may be better understood. By providing the assessor with evidence drawn from multiple modalities, some of which provide converging patterns and some of which yield divergent results, the assessor is compelled to engage this evidence more deeply. The mindful processing that ensues can help minimize the deleterious impact of naturally occurring information processing bias and distortion on the part of the clinician (e.g., heuristics, attribution errors), bringing greater clarity to the synthesis and integration of assessment data. PMID:25856565
NASA Astrophysics Data System (ADS)
Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei
2015-12-01
Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems
NASA Astrophysics Data System (ADS)
Kopecky, Ken; Winer, Eliot
2014-06-01
Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
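The server-side abstraction described here, named objects whose samples may arrive from any tracking system, can be sketched as a newest-sample-wins registry; the class, method names, and sample data are hypothetical, not the MetaTracker API.

```python
class TrackerAggregator:
    """Presents samples from multiple tracking systems as one unified
    view keyed by object name, hiding which system produced each sample."""

    def __init__(self):
        self._objects = {}  # name -> (position, source, timestamp)

    def report(self, source, name, position, timestamp):
        """Accept a sample; keep the newest per object name so objects
        transition seamlessly between systems' coverage areas."""
        current = self._objects.get(name)
        if current is None or timestamp >= current[2]:
            self._objects[name] = (position, source, timestamp)

    def position(self, name):
        return self._objects[name][0]

agg = TrackerAggregator()
agg.report("optical", "helmet", (0.0, 1.7, 0.0), timestamp=1.0)
agg.report("inertial", "helmet", (0.1, 1.7, 0.0), timestamp=2.0)
```

A client querying `agg.position("helmet")` sees the inertial sample simply because it is newer, with no knowledge of the underlying hardware.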
Ren, Fangfang; Shi, Qing; Chen, Yongbin; Jiang, Alice; Ip, Y Tony; Jiang, Huaqi; Jiang, Jin
2013-01-01
Intestinal stem cells (ISCs) in the Drosophila adult midgut are essential for maintaining tissue homeostasis, and their proliferation and differentiation speed up in order to meet the demand for replenishing the lost cells in response to injury. Several signaling pathways including JAK-STAT, EGFR and Hippo (Hpo) pathways have been implicated in damage-induced ISC proliferation, but the mechanisms that integrate these pathways have remained elusive. Here, we demonstrate that the Drosophila homolog of the oncoprotein Myc (dMyc) functions downstream of these signaling pathways to mediate their effects on ISC proliferation. dMyc expression in precursor cells is stimulated in response to tissue damage, and dMyc is essential for accelerated ISC proliferation and midgut regeneration. We show that tissue damage caused by dextran sulfate sodium feeding stimulates dMyc expression via the Hpo pathway, whereas bleomycin feeding activates dMyc through the JAK-STAT and EGFR pathways. We provide evidence that dMyc expression is transcriptionally upregulated by multiple signaling pathways, which is required for optimal ISC proliferation in response to tissue damage. We have also obtained evidence that tissue damage can upregulate dMyc expression post-transcriptionally. Finally, we show that a basal level of dMyc expression is required for ISC maintenance, proliferation and lineage differentiation during normal tissue homeostasis. PMID:23896988
A way to integrate multiple block layers for middle of line contact patterning
NASA Astrophysics Data System (ADS)
Kunnen, E.; Demuynck, S.; Brouri, M.; Boemmels, J.; Versluijs, J.; Ryckaert, J.
2015-03-01
It is clear today that further scaling towards smaller dimensions and pitches requires a multitude of additional process steps. Within this work we look for solutions to achieve a middle of line 193i based patterning scheme for N7 logic at a contacted poly pitch of 40-45 nm. At these pitches, trenches can still be printed by means of double patterning. However, they need to be blocked at certain positions because of a limited line end control below 90 nm pitch single print. Based on the 193i patterning abilities, the proposed SRAM (Static Random Access Memory) cell requires 5 blocking layers. Integrating 5 blocking layers is a new challenge since down to N10 one blocking layer was usually sufficient. The difficulty with multiple blocking layers is the removal of the masked parts, especially in cases of overlap. As a solution a novel patterning approach is proposed and tried out on relaxed dimensions (patent pending). The proposed solution is expected not to be sensitive to the number of blocking layers used, and tolerates their overlap. The stack is constructed to be compatible with N7 substrates such as SiGe or P:Si. Experimental results of the stack blocking performance on relaxed pitch will be presented and discussed.
An integrated economic model of multiple types and uses of water
NASA Astrophysics Data System (ADS)
Luckmann, Jonas; Grethe, Harald; McDonald, Scott; Orlov, Anton; Siddig, Khalid
2014-05-01
Water scarcity is an increasing problem in many parts of the world and the management of water has become an important issue on the political economy agenda in many countries. As water is used in most economic activities and the allocation of water is often a complex problem involving different economic agents and sectors, Computable General Equilibrium (CGE) models have been proven useful to analyze water allocation problems, although their adaptation to include water is still relatively undeveloped. This paper provides a description of an integrated water-focused CGE model (STAGE_W) that includes multiple types and uses of water, and for the first time, the reclamation of wastewater as well as the provision of brackish groundwater as separate, independent activities with specific cost structures. The insights provided by the model are illustrated with an application to the Israeli water sector assuming that freshwater resources available to the economy are cut by 50%. We analyze how the Israeli economy copes with this shock if it reduces potable water supply compared with further investments in the desalination sector. The results demonstrate that the effects on the economy are slightly negative under both scenarios. Counterintuitively, the provision of additional potable water to the economy through desalination does not substantively reduce the negative outcomes. This is mainly due to the high costs of desalination, which are currently subsidized, with the distribution of the negative welfare effect over household groups dependent on how these subsidies are financed.
Accelerometer method and apparatus for integral display and control functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1992-01-01
Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. The benefits of a vibration recording and monitoring system with controls and displays readily
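The integrate-and-compare signal chain in the method claim (acceleration integrated to a velocity signal, then checked against a user-selected trip point) can be sketched numerically; the sample values, time step, and trip levels are illustrative, not from the patent.

```python
def velocity_from_accel(accel_samples, dt):
    """Cumulative trapezoidal integration of sampled acceleration -> velocity."""
    v, out = 0.0, []
    prev = accel_samples[0]
    for a in accel_samples[1:]:
        v += 0.5 * (prev + a) * dt
        out.append(v)
        prev = a
    return out

def trip_alert(velocities, trip_point):
    """True once the integrated vibration level exceeds the set trip point."""
    return any(abs(v) > trip_point for v in velocities)

# Illustrative accelerometer samples (m/s^2) at a 0.1 s sample interval
vels = velocity_from_accel([0.0, 1.0, 1.0, 1.0], dt=0.1)
```

In the patented apparatus the corresponding comparisons drive bar-graph segments and alert switches in analog hardware; the sketch just mirrors that logic in software.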
The reduced basis method for the electric field integral equation
Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.
2011-06-20
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI of recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results to optimize single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013) which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm with ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps in MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding of eight events into the female RP and seven in the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity
Yeung, Edward S.; Gong, Xiaoyi
2004-09-07
The present invention provides a method of analyzing multiple samples simultaneously by absorption detection. The method comprises: (i) providing a planar array of multiple containers, each of which contains a sample comprising at least one absorbing species, (ii) irradiating the planar array of multiple containers with a light source and (iii) detecting absorption of light with a detection means that is in line with the light source at a distance of at least about 10 times a cross-sectional distance of a container in the planar array of multiple containers. The absorption of light by a sample indicates the presence of an absorbing species in it. The method can further comprise: (iv) measuring the amount of absorption of light detected in (iii), which indicates the amount of the absorbing species in the sample. Also provided by the present invention is a system for use in the above method. The system comprises: (i) a light source comprising or consisting essentially of at least one wavelength of light, the absorption of which is to be detected, (ii) a planar array of multiple containers, and (iii) a detection means that is in line with the light source and is positioned in line with and parallel to the planar array of multiple containers at a distance of at least about 10 times a cross-sectional distance of a container.
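The detection and measurement steps (iii)-(iv) amount to converting transmitted light intensity into absorbance via the Beer-Lambert law; the function and intensity values below are a generic illustration, not text from the patent.

```python
import math

def absorbance(incident, transmitted):
    """A = -log10(I / I0); higher A means more absorbing species present."""
    return -math.log10(transmitted / incident)

# A sample transmitting 10% of the incident light has absorbance 1.
A = absorbance(incident=100.0, transmitted=10.0)
```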
The Flux-integral Method for Multidimensional Convection and Diffusion
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1994-01-01
The flux-integral method is a procedure for constructing an explicit, single-step, forward-in-time, conservative, control volume update of the unsteady, multidimensional convection-diffusion equation. The convective plus diffusive flux at each face of a control-volume cell is estimated by integrating the transported variable and its face-normal derivative over the volume swept out by the convecting velocity field. This yields a unique description of the fluxes, whereas other conservative methods rely on nonunique, arbitrary pseudoflux-difference splitting procedures. The accuracy of the resulting scheme depends on the form of the subcell interpolation assumed, given cell-average data. Cellwise constant behavior results in a (very artificially diffusive) first-order convection scheme. Second-order convection-diffusion schemes correspond to cellwise linear (or bilinear) subcell interpolation. Cellwise quadratic subcell interpolants generate a highly accurate convection-diffusion scheme with excellent phase accuracy. Under constant-coefficient conditions, this is a uniformly third-order polynomial interpolation algorithm (UTOPIA).
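For cellwise-constant subcell interpolation and a constant positive velocity, the flux-integral construction reduces to the first-order upwind update the abstract calls "very artificially diffusive"; this periodic-domain sketch assumes that simplest case.

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """One conservative, explicit forward-in-time update of cell averages q,
    assuming u > 0 and a periodic domain: the face flux is u times the
    value in the upwind (left) cell."""
    c = u * dt / dx  # Courant number
    return q - c * (q - np.roll(q, 1))

# Advect a single occupied cell half a cell to the right (c = 0.5)
q = np.array([0.0, 1.0, 0.0, 0.0])
q1 = upwind_step(q, u=1.0, dx=1.0, dt=0.5)
```

The spreading of the initial spike over two cells after one step is exactly the artificial diffusion the abstract attributes to cellwise-constant interpolation; the quadratic interpolants (UTOPIA) avoid it at third-order accuracy.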
Non-Bayesian Information Fusion for Integrating Hydrologic and Multiple Sets of Geophysical Data
NASA Astrophysics Data System (ADS)
Ozbek, M. M.; Pinder, G. F.
2005-12-01
Combining geological, geophysical and geohydrological data derived from disparate sources is a cost-effective and scientifically challenging approach to maximizing information on the subsurface. Existing studies are limited in that no universal methods are available for converting geophysical attributes to geohydrological ones, owing to inconsistency in the methods of geophysical data acquisition and interpretation and to the fact that the complementary nature of the geophysical methods is not exploited. Indeed, no single geophysical method is effective in most environmental and subsurface conditions, and all are strongly scenario-dependent. It therefore becomes essential to characterize the information that the individual geophysical methods provide in combination. Our approach explicitly quantifies and integrates into the characterization process the insight of a geophysicist on i) the individual capabilities of the geophysical methods and ii) the meaning of the data they produce when interpreted collectively. A model based upon the mathematics of fuzzy-set-theoretic approximate reasoning and of belief theory is used to address the following problems: 1) the use of geological and hydrogeological knowledge that relates geological conditions to hydrogeological attributes for the creation of a site-specific a priori conductivity field in the presence of a limited amount of borehole data; 2) the use of geophysical knowledge in the solution of the `geophysical data interpretation' problem, defined as the synthesis of data generated by several geophysical methods to infer the true conditions of the soil; and 3) the use of the inferred soil information to condition the a priori conductivity field. The approach is demonstrated through an application using real site data.
A method to optimize selection on multiple identified quantitative trait loci
Chakraborty, Reena; Moreau, Laurence; Dekkers, Jack CM
2002-01-01
A mathematical approach was developed to model and optimize selection on multiple known quantitative trait loci (QTL) and polygenic estimated breeding values in order to maximize a weighted sum of responses to selection over multiple generations. The model allows for linkage between QTL with multiple alleles and arbitrary genetic effects, including dominance, epistasis, and gametic imprinting. Gametic phase disequilibrium between the QTL and between the QTL and polygenes is modeled but polygenic variance is assumed constant. Breeding programs with discrete generations, differential selection of males and females and random mating of selected parents are modeled. Polygenic EBV obtained from best linear unbiased prediction models can be accommodated. The problem was formulated as a multiple-stage optimal control problem and an iterative approach was developed for its solution. The method can be used to develop and evaluate optimal strategies for selection on multiple QTL for a wide range of situations and genetic models. PMID:12081805
An Integrative Method for Accurate Comparative Genome Mapping
Swidan, Firas; Rocha, Eduardo P. C; Shmoish, Michael; Pinter, Ron Y
2006-01-01
We present MAGIC, an integrative and accurate method for comparative genome mapping. Our method consists of two phases: preprocessing for identifying “maximal similar segments,” and mapping for clustering and classifying these segments. MAGIC's main novelty lies in its biologically intuitive clustering approach, which aims towards both calculating reorder-free segments and identifying orthologous segments. In the process, MAGIC efficiently handles ambiguities resulting from duplications that occurred before the speciation of the considered organisms from their most recent common ancestor. We demonstrate both MAGIC's robustness and scalability: the former is asserted with respect to its initial input and with respect to its parameters' values. The latter is asserted by applying MAGIC to distantly related organisms and to large genomes. We compare MAGIC to other comparative mapping methods and provide detailed analysis of the differences between them. Our improvements allow a comprehensive study of the diversity of genetic repertoires resulting from large-scale mutations, such as indels and duplications, including explicitly transposable and phagic elements. The strength of our method is demonstrated by detailed statistics computed for each type of these large-scale mutations. MAGIC enabled us to conduct a comprehensive analysis of the different forces shaping prokaryotic genomes from different clades, and to quantify the importance of novel gene content introduced by horizontal gene transfer relative to gene duplication in bacterial genome evolution. We use these results to investigate the breakpoint distribution in several prokaryotic genomes. PMID:16933978
Integrating Ground System Tools From Multiple Technologies Into a Single System Environment
NASA Technical Reports Server (NTRS)
Ritter, George H.
2004-01-01
With rapid technology changes and new and improved development techniques, it becomes extremely difficult to add capabilities to existing ground systems without wanting to replace the entire system. Replacing entire systems is not usually cost effective, so there is a need to improve systems gradually, without the long development times that introduce risk through large amounts of change. The Marshall Space Flight Center's (MSFC) Payload Operations Integration Center (POIC) ground system provides command, telemetry, and payload planning systems in support of the International Space Station. Our systems have continuously evolved with technology changes due to hardware end-of-life issues and to user requirement changes. As changes have been implemented, we have tried to take advantage of some of the latest technologies while at the same time maintaining certain legacy capabilities that are not cost effective to replace. One of our biggest challenges is to integrate all of these implementations into a single system that is usable, maintainable, and scalable. Another challenge is to provide access to our tools in such a way that users are not aware of all the various implementation methods and tools being used. This approach not only makes our system much more usable, it allows us to continue to migrate capabilities and to add capabilities without impacting system usability. This paper will give an overview of the tools used for MSFC ISS payload operations and show an approach for integrating various technologies into a single environment that is maintainable, flexible, usable, cost effective, and that meets user needs.
NASA Technical Reports Server (NTRS)
Foernsler, Lynda J.
1996-01-01
Checklists are used by the flight crew to properly configure an aircraft for safe flight and to ensure a high level of safety throughout the duration of the flight. In addition, the checklist provides a sequential framework to meet cockpit operational requirements, and it fosters cross-checking of the flight deck configuration among crew members. This study examined the feasibility of integrating multiple checklists for non-normal procedures into a single procedure for a typical transport aircraft. For the purposes of this report, a typical transport aircraft is one that represents a midpoint between early generation aircraft (B-727/737-200 and DC-10) and modern glass cockpit aircraft (B747-400/777 and MD-11). In this report, potential conflicts among non-normal checklist items during multiple failure situations for a transport aircraft are identified and analyzed. The non-normal checklist procedure that would take precedence for each of the identified multiple failure flight conditions is also identified. The rationale behind this research is that potential conflicts among checklist items might exist when integrating multiple checklists for non-normal procedures into a single checklist. As a rule, multiple failures occurring in today's highly automated and redundant system transport aircraft are extremely improbable. In addition, as shown in this analysis, conflicts among checklist items in a multiple failure flight condition are exceedingly unlikely. The possibility of a multiple failure flight condition occurring with a conflict among checklist items is so remote that integration of the non-normal checklists into a single checklist appears to be a plausible option.
Online two-stage association method for robust multiple people tracking
NASA Astrophysics Data System (ADS)
Lv, Jingqin; Fang, Jiangxiong; Yang, Jie
2011-07-01
Robust multiple people tracking is very important for many applications. It is a challenging problem due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses grow longer through particle-filter-based tracking, with detection confidence embedded into the observation model. An examining scheme runs at each frame to check the reliability of tracking. In the second stage, multiple people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which makes the approach suitable for time-critical scenarios. The method is evaluated on the popular CAVIAR dataset. The experimental results show that our two-stage method is robust.
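The second-stage linking step can be sketched as a cost-based assignment over tracklet endpoints. The cost terms below (frame gap plus endpoint distance) and the greedy matching are simplifying assumptions of ours; the paper's appearance and motion affinities would plug into the same structure.

```python
def link_tracklets(tracklets, max_gap=10, max_dist=50.0):
    """Greedy second-stage association: link tracklet ends to tracklet starts.

    tracklets: list of dicts with keys 't0', 't1' (start/end frame) and
    'p0', 'p1' (start/end (x, y) position). The cost here is a hypothetical
    gap-plus-distance term, purely for illustration.
    """
    candidates = []
    for i, a in enumerate(tracklets):
        for j, b in enumerate(tracklets):
            gap = b["t0"] - a["t1"]
            if i == j or gap <= 0 or gap > max_gap:
                continue  # only forward-in-time links within the gap window
            dist = ((a["p1"][0] - b["p0"][0]) ** 2 +
                    (a["p1"][1] - b["p0"][1]) ** 2) ** 0.5
            if dist <= max_dist:
                candidates.append((gap + dist, i, j))
    links, used_tail, used_head = [], set(), set()
    for cost, i, j in sorted(candidates):  # cheapest links first
        if i not in used_tail and j not in used_head:
            links.append((i, j))
            used_tail.add(i)
            used_head.add(j)
    return links
```

Each tracklet tail is matched to at most one tracklet head, so chains of links reconstruct full trajectories.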
Two-Dimensional Integral Reacting Computer Code for Multiple Phase Flows
1997-05-05
ICRKFLO solves conservation equations for gaseous species, droplets, and solid particles of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and dissipation for a three-phase reacting flow. Associated sub-models include integral combustion, two-parameter turbulence, particle melting and evaporation, droplet evaporation, and interfacial submodels. An evolving integral reaction submodel, originally designed for ICOMFLO2 to solve numerical stability problems associated with Arrhenius-type differential reaction submodels, was expanded and enhanced to handle petroleum cracking applications. A two-parameter turbulence submodel accounts for droplet and particle dispersion by gas-phase turbulence, with feedback effects on the gas phase. The evaporation submodel treats not only particle evaporation but also the droplet size distribution shift caused by evaporation. Interfacial submodels correlate momentum and energy transfer between phases. Three major upgrades, adding new capabilities and improved physical modeling, were implemented in ICRKFLO Version 2.0. They are: (1) particle-particle and particle-wall interactions; (2) a two-step process for computing the reaction kinetics for a very large number of chemical reactions within a complex non-isothermal hydrodynamic flow field; and (3) a sectional coupling method combined with a triangular blocked-cell technique for computing reacting multiphase flow systems of complex geometry while preserving the advantages of grid orthogonality.
The value of integrating information from multiple hazards for flood risk management
NASA Astrophysics Data System (ADS)
Castillo-Rodríguez, J. T.; Escuder-Bueno, I.; Altarejos-García, L.; Serrano-Lombillo, A.
2013-07-01
This article presents a methodology for estimating flood risk in urban areas that integrates pluvial flooding, river flooding, and failure of both small and large dams. The first part includes a review of basic concepts and existing methods for flood risk analysis, evaluation, and management. Traditionally, flood risk analyses have focused on specific site studies and qualitative or semi-quantitative approaches. However, in this context, a general methodology to perform a quantitative flood risk analysis including different flood hazards was still required. The second part describes the proposed methodology, which presents an integrated approach combining pluvial flooding, river flooding, and dam failure, as applied to a case study: an urban area located downstream of a dam under construction. The methodology represents an upgrade of the methodological piece developed within the SUFRI project. This article shows how outcomes from flood risk analysis provide better and more complete information to support decision-making on flood risk management by authorities, local entities, and the stakeholders involved.
Photomask design method for pattern-integrated interference lithography
NASA Astrophysics Data System (ADS)
Leibovici, Matthieu C. R.; Gaylord, Thomas K.
2016-01-01
Pattern-integrated interference lithography (PIIL) combines multibeam interference lithography and projection lithography simultaneously to produce two-dimensional (2-D) and three-dimensional (3-D) periodic-lattice-based microstructures in a rapid, single-exposure step. Using a comprehensive PIIL vector model and realistic photolithographic conditions, PIIL exposures for a representative photonic-crystal (PhC) 90 deg bend waveguide are simulated in the volume of the photoresist film. The etched structures in the underlying substrate are estimated as well. Due to the imperfect integration of the photomask within the interference pattern, the interference pattern is locally distorted, thereby impacting the PhC periodic lattice and potentially the device performance. To mitigate these distortions, a photomask optimization method for PIIL is presented in this work. With an improved photomask, pillar-area and pillar-displacement errors in the vicinity of the waveguide are reduced by factors of 3.3 and 2.7, respectively. Furthermore, calculated transmission spectra show that the performance of the PIIL-produced PhC device is as good as that of its idealized equivalent.
Lindenmeyer, Carl W.
1993-01-01
An apparatus and method to automate the handling of multiple digital tape cassettes for processing by commercially available cassette tape readers and recorders. A removable magazine rack stores a plurality of tape cassettes, and cooperates with a shuttle device that automatically inserts and removes cassettes from the magazine to the reader and vice-versa. Photocells are used to identify and index to the desired tape cassette. The apparatus allows digital information stored on multiple cassettes to be processed without significant operator intervention.
Pseudospectral methods for computing the multiple solutions of the Lane-Emden equation
NASA Astrophysics Data System (ADS)
Li, Zhao-xiang; Wang, Zhong-qing
2013-12-01
Based on the Liapunov-Schmidt reduction and symmetry-breaking bifurcation theory, we compute and visualize multiple solutions of the Lane-Emden equation on a square and a disc, using Legendre and Fourier-Legendre pseudospectral methods. Starting from the nontrivial solution branches of the corresponding nonlinear bifurcation problem, we obtain multiple solutions of Lane-Emden equation with various symmetries numerically. Numerical results demonstrate the effectiveness of these approaches.
Lindenmeyer, C.W.
1993-01-26
An apparatus and method to automate the handling of multiple digital tape cassettes for processing by commercially available cassette tape readers and recorders. A removable magazine rack stores a plurality of tape cassettes, and cooperates with a shuttle device that automatically inserts and removes cassettes from the magazine to the reader and vice-versa. Photocells are used to identify and index to the desired tape cassette. The apparatus allows digital information stored on multiple cassettes to be processed without significant operator intervention.
ERIC Educational Resources Information Center
Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee
2009-01-01
Assessment is central to any educational process. Number Right (NR) scoring method is a conventional scoring method for multiple choice items, where students need to pick one option as the correct answer. One point is awarded for the correct response and zero for any other responses. However, it has been heavily criticized for guessing and failure…
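The Number Right rule, and the guessing problem raised against it, are easy to make concrete; the function names below are ours.

```python
def number_right_score(responses, key):
    """Number Right (NR) scoring: one point per correct answer, zero otherwise."""
    return sum(1 for r, k in zip(responses, key) if r == k)

def expected_guessing_score(n_items, n_options):
    """Expected NR score of a student who guesses blindly on every item.

    This is the basis of the guessing criticism: with 4 options per item,
    pure guessing still earns 25% of the marks on average.
    """
    return n_items / n_options
```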
ERIC Educational Resources Information Center
Große, Cornelia S.
2014-01-01
It is commonly suggested to mathematics teachers to present learners different methods in order to solve one problem. This so-called "learning with multiple solution methods" is also recommended from a psychological point of view. However, existing research leaves many questions unanswered, particularly concerning the effects of…
Pineda, Silvia; Real, Francisco X.; Kogevinas, Manolis; Carrato, Alfredo; Chanock, Stephen J.
2015-01-01
Omics data integration is becoming necessary to investigate the genomic mechanisms involved in complex diseases. During the integration process, many challenges arise, such as data heterogeneity, the smaller number of individuals in comparison to the number of parameters, multicollinearity, and interpretation and validation of results due to their complexity and lack of knowledge about biological processes. To overcome some of these issues, innovative statistical approaches are being developed. In this work, we propose a permutation-based method to concomitantly assess significance and correct for multiple testing with the MaxT algorithm. This was applied with penalized regression methods (LASSO and ENET) when exploring relationships between common genetic variants, DNA methylation and gene expression measured in bladder tumor samples. The overall analysis flow consisted of three steps: (1) SNPs/CpGs were selected for each gene expression probe within a 1-Mb window upstream and downstream of the gene; (2) LASSO and ENET were applied to assess the association between each expression probe and the selected SNPs/CpGs in three multivariable models (SNP, CpG, and Global models, the latter integrating SNPs and CpGs); and (3) the significance of each model was assessed using the permutation-based MaxT method. We identified 48 genes whose expression levels were significantly associated with both SNPs and CpGs. Importantly, 36 (75%) of them were replicated in an independent data set (TCGA), and the performance of the proposed method was checked with a simulation study. We further support our results with a biological interpretation based on an enrichment analysis. The approach we propose allows reducing computational time and is flexible and easy to implement when analyzing several types of omics data. Our results highlight the importance of integrating omics data by applying appropriate statistical strategies to discover new insights into the complex genetic mechanisms involved in disease.
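The permutation-based MaxT step (Westfall-Young style family-wise correction) can be sketched generically. Here an absolute-correlation statistic stands in for the penalized-regression coefficients of step (2), purely for illustration:

```python
import random

def abs_corr(x, y):
    """Absolute Pearson correlation; a simple stand-in test statistic."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return abs(sxy) / ((sxx * syy) ** 0.5 or 1.0)

def maxt_pvalues(X, y, stat, n_perm=1000, seed=0):
    """Permutation-based MaxT family-wise-corrected p-values.

    X: list of predictor columns (each a list of values); y: outcome list;
    stat(x, y) -> nonnegative test statistic. The outcome is permuted, the
    statistic recomputed for every predictor, and each observed statistic
    is compared against the null distribution of the *maximum* statistic
    across predictors, which controls the family-wise error rate.
    """
    rng = random.Random(seed)
    observed = [stat(x, y) for x in X]
    exceed = [0] * len(X)
    yp = list(y)
    for _ in range(n_perm):
        rng.shuffle(yp)
        max_null = max(stat(x, yp) for x in X)
        for j, obs in enumerate(observed):
            if max_null >= obs:
                exceed[j] += 1
    # add-one correction keeps p-values strictly positive
    return [(e + 1) / (n_perm + 1) for e in exceed]
```

In the paper's setting, `stat` would be the magnitude of a LASSO or ENET coefficient for the probe-SNP/CpG model rather than a marginal correlation.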
The U.S. Environmental Protection Agency recently established the Ecosystem Services Research Program to help formulate methods and models for conducting comprehensive risk assessments that quantify how multiple ecosystem services interact and respond in concert to environmental ...
Integrating multiple HD video services over tiled display for advanced multi-party collaboration
NASA Astrophysics Data System (ADS)
Han, Sangwoo; Kim, Jaeyoun; Choi, Kiho; Kim, JongWon
2006-10-01
Multi-party collaborative environments based on AG (Access Grid) are extensively utilized for distance learning, e-science, and other distributed global collaboration events. In such environments, A/V media services play an important role in providing QoE (quality of experience) to participants in collaboration sessions. In this paper, in order to support high-quality user experience in the aspect of video services, we design an integration architecture to combine high-quality video services and a high-resolution tiled display service. In detail, the proposed architecture incorporates video services for DV (digital video) and HDV (high-definition digital video) streaming with a display service to provide methods for decomposable decoding/display for a tiled display system. By implementing the proposed architecture on top of AG, we verify that high-quality collaboration among a couple of collaboration sites can be realized over a multicast-enabled network testbed with improved media quality experience.
Wang, Qian; Yang, Can; Gelernter, Joel; Zhao, Hongyu
2015-11-01
Although some existing epidemiological observations and molecular experiments have suggested that brain disorders in the realm of psychiatry may be influenced by immune dysregulation, the degree of genetic overlap between psychiatric disorders and immune disorders has not been well established. We investigated this issue by integrative analysis of genome-wide association studies of 18 complex human traits/diseases (five psychiatric disorders, seven immune disorders, and others) and multiple genome-wide annotation resources (central nervous system genes, immune-related expression quantitative trait loci (eQTL) and DNase I hypersensitive sites from 98 cell lines). We detected pleiotropy in 24 of the 35 psychiatric-immune disorder pairs. The strongest pleiotropy was observed for schizophrenia-rheumatoid arthritis with the MHC region included in the analysis (p = 3.9 x 10(-285)), and schizophrenia-Crohn's disease with the MHC region excluded (p = 1.1 x 10(-36)). Significant enrichment (> 1.4-fold) of immune-related eQTL was observed in four psychiatric disorders. Genomic regions responsible for pleiotropy between psychiatric disorders and immune disorders were detected. The MHC region on chromosome 6 appears to be the most important, with other regions, such as cytoband 1p13.2, also playing significant roles in pleiotropy. We also found that most alleles shared between schizophrenia and Crohn's disease have the same effect direction, with a similar trend found for other disorder pairs, such as bipolar-Crohn's disease. Our results offer a novel bird's-eye view of the genetic relationship and demonstrate strong evidence for pervasive pleiotropy between psychiatric disorders and immune disorders. Our findings might open new routes for prevention and treatment strategies for these disorders based on a new appreciation of the importance of immunological mechanisms in mediating risk of many psychiatric diseases. PMID:26340901
NASA Astrophysics Data System (ADS)
Tazik, D.; Roehm, C. L.; Atkin, O.; Ayers, E.; Berukoff, S. J.; Fitzgerald, M.; Held, A. A.; Hinckley, E. S.; Kampe, T. U.; Liddell, M.; Phinn, S. R.; Taylor, J. R.; Thibault, K. M.; Thorpe, A.
2013-12-01
Distributed standardized sensor networks that collect coordinated airborne- and ground-based observations and are coupled with remotely sensed satellite imagery provide unique insight into complex ecological processes and feedbacks across a range of spatio-temporal scales. Measurements and information transfer at and across scales are key challenges in ecohydrology. A combination of approaches, for example, isotopic signatures of leaves, evapotranspiration using micrometeorological techniques, and water stress from remote sensing imagery, will improve our ability to integrate data across spatial scales. The collaboration among science networks such as the National Ecological Observatory Network (NEON) in the US and Terrestrial Ecosystem Research Network (TERN) in Australia will provide data that enable researchers to address complex questions regarding processes operating within and across systems, at site-to-continental scales and beyond. In this talk, we present several examples demonstrating combinations of remotely sensed and ground-based ecohydrological data collected using standardized methodologies across multiple sensor networks. Examples include: 1) determining ecohydrological controls on plant production at plot to regional scales; 2) interpreting atmospheric chemical and isotopic deposition gradients across geographic domains; 3) using the stable isotope signatures of small mammal tissues to track drought dynamics across space and time; 4) mapping water quality characteristics in optically complex waters using remotely sensed imagery and high temporal frequency ground based sensor calibration data and 5) scaling plot and individual plant level vegetation structure estimates to continental scale maps of vegetation and ground cover dynamics. Australian scientists are using TERN's infrastructure for improving Soil-Vegetation-Atmosphere Transfer (SVAT) modeling for Australian conditions by assessing plant photosynthetic and respiration performance across a
Apparatus and method for defect testing of integrated circuits
Cole, Jr., Edward I.; Soden, Jerry M.
2000-01-01
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V.sub.DD, to an IC under test and measures a transient voltage component, V.sub.DDT, signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V.sub.DDT signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V.sub.DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
Apparatus and method for defect testing of integrated circuits
Cole, E.I. Jr.; Soden, J.M.
2000-02-29
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V(DD), to an IC under test and measures a transient voltage component, V(DDT), signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V(DDT) signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V(DDT) signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
Imaginary time integration method using a quantum lattice gas approach
NASA Astrophysics Data System (ADS)
Oganesov, Armen; Flint, Christopher; Vahala, George; Vahala, Linda; Yepez, Jeffrey; Soe, Min
2016-02-01
By modifying the collision operator in the quantum lattice gas (QLG) algorithm one can develop an imaginary time (IT) integration to determine the ground state solutions of the Schrödinger equation and its variants. These solutions are compared to those found by other methods (in particular the backward-Euler finite-difference scheme and the quantum lattice Boltzmann). In particular, the ground state of the quantum harmonic oscillator is considered as well as bright solitons in the one-dimensional (1D) non-linear Schrödinger equation. The dark solitons in an external potential are then determined. An advantage of the QLG IT algorithm is the avoidance of any real/complex matrix inversion and that its extension to arbitrary dimensions is straightforward.
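An imaginary-time relaxation of the kind compared in the abstract can be sketched with a plain explicit finite-difference step standing in for the QLG collision operator; the grid, time step, and iteration count below are illustrative choices, and the potential is the harmonic oscillator whose exact ground-state energy is 0.5 (in units hbar = m = omega = 1).

```python
import numpy as np

def ground_state_imaginary_time(n=161, L=8.0, dtau=2e-3, steps=20000):
    """Imaginary-time relaxation to the Schrodinger ground state.

    Evolves dpsi/dtau = (1/2) d2psi/dx2 - V psi with an explicit
    finite-difference step and renormalizes after every step; excited
    components decay exponentially faster than the ground state, so an
    arbitrary start converges to the ground state.
    """
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    V = 0.5 * x**2                            # harmonic oscillator potential
    psi = np.exp(-((x - 1.0) ** 2))           # arbitrary starting guess

    def laplacian(f):
        lap = np.zeros_like(f)
        lap[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx**2
        return lap

    for _ in range(steps):
        psi = psi + dtau * (0.5 * laplacian(psi) - V * psi)
        psi /= np.sqrt(np.sum(psi**2) * dx)   # restore unit norm
    energy = np.sum(psi * (-0.5 * laplacian(psi) + V * psi)) * dx
    return x, psi, energy
```

The explicit step requires dtau below dx squared for stability, which is one reason the matrix-inversion-free QLG variant described above is attractive.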
Attachment method for stacked integrated circuit (IC) chips
Bernhardt, Anthony F.; Malba, Vincent
1999-01-01
An attachment method for stacked integrated circuit (IC) chips. The method involves connecting stacked chips, such as DRAM memory chips, to each other and/or to a circuit board. Pads on the individual chips are rerouted to form pads on the side of the chip, after which the chips are stacked on top of each other whereby desired interconnections to other chips or a circuit board can be accomplished via the side-located pads. The pads on the side of a chip are connected to metal lines on a flexible plastic tape (flex) by anisotropically conductive adhesive (ACA). Metal lines on the flex are likewise connected to other pads on chips and/or to pads on a circuit board. In the case of a stack of DRAM chips, pads to corresponding address lines on the various chips may be connected to the same metal line on the flex to form an address bus. This method has the advantage of reducing the number of connections required to be made to the circuit board due to bussing; the flex can accommodate dimensional variation in the alignment of chips in the stack; bonding of the ACA is accomplished at low temperature and is otherwise simpler and less expensive than solder bonding; chips can be bonded to the ACA all at once if the sides of the chips are substantially coplanar, as in the case for stacks of identical chips, such as DRAM.
Attachment method for stacked integrated circuit (IC) chips
Bernhardt, A.F.; Malba, V.
1999-08-03
An attachment method for stacked integrated circuit (IC) chips is disclosed. The method involves connecting stacked chips, such as DRAM memory chips, to each other and/or to a circuit board. Pads on the individual chips are rerouted to form pads on the side of the chip, after which the chips are stacked on top of each other whereby desired interconnections to other chips or a circuit board can be accomplished via the side-located pads. The pads on the side of a chip are connected to metal lines on a flexible plastic tape (flex) by anisotropically conductive adhesive (ACA). Metal lines on the flex are likewise connected to other pads on chips and/or to pads on a circuit board. In the case of a stack of DRAM chips, pads to corresponding address lines on the various chips may be connected to the same metal line on the flex to form an address bus. This method has the advantage of reducing the number of connections required to be made to the circuit board due to bussing; the flex can accommodate dimensional variation in the alignment of chips in the stack; bonding of the ACA is accomplished at low temperature and is otherwise simpler and less expensive than solder bonding; chips can be bonded to the ACA all at once if the sides of the chips are substantially coplanar, as in the case for stacks of identical chips, such as DRAM. 12 figs.
NASA Astrophysics Data System (ADS)
Min, Xiaoyi
This thesis first presents the study of the interaction of electromagnetic waves with three-dimensional heterogeneous, dielectric, magnetic, and lossy bodies by surface integral equation modeling. Based on the equivalence principle, a set of coupled surface integral equations is formulated and then solved numerically by the method of moments. Triangular elements are used to model the interfaces of the heterogeneous body, and vector basis functions are defined to expand the unknown current in the formulation. The validity of this formulation is verified by applying it to concentric spheres for which an exact solution exists. The potential applications of this formulation to a partially coated sphere and a homogeneous human body are discussed. Next, this thesis also introduces an efficient new set of integral equations for treating the scattering problem of a perfectly conducting body coated with a thin magnetically lossy layer. These electric field integral equations and magnetic field integral equations are numerically solved by the method of moments (MoM). To validate the derived integral equations, an alternative method to solve the scattering problem of an infinite circular cylinder coated with a thin magnetic lossy layer has also been developed, based on the eigenmode expansion. Results for the radar cross section and current densities via the MoM and the eigenmode expansion method are compared. The agreement is excellent. The finite difference time domain method is subsequently implemented to solve a metallic object coated with a magnetic thin layer and numerical results are compared with those by the MoM. Finally, this thesis presents an application of the finite-difference time-domain approach to the problem of electromagnetic receiving and scattering by a cavity-backed antenna situated on an infinite conducting plane. This application involves modifications of Yee's model, which applies the difference approximations of field derivatives to differential
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kanae, S.; Seto, S.; Hirabayashi, Y.; Oki, T.
2012-12-01
Bias-correction methods applied to monthly temperature and precipitation data simulated by multiple General Circulation Models (GCMs) are evaluated in this study. Although various methods have been proposed recently, an intercomparison among them using multiple GCM simulations has seldom been reported. Here, five previous methods as well as a proposed new method are compared. Before the comparison, we classified the previous methods. The methods proposed in previous studies can be classified into four types based on the following two criteria: 1) whether the statistics (e.g., mean, standard deviation, or the coefficient of variation) of the future simulation are used in the bias correction; and 2) whether the estimation of cumulative probability is included in the bias correction. The methods that require future statistics depend on the data in the projection period, while those that do not use future statistics do not. The proposed classification can characterize each bias-correction method. These methods are applied to temperature and precipitation simulated by 12 GCMs in the Coupled Model Intercomparison Project (CMIP3) archives. Parameters of each method are calibrated using 1948-1972 observed data and validated for the 1974-1998 period. The methods are then applied to GCM future simulations (2073-2097), and the bias-corrected data are intercompared. For the historical simulation, negligible differences can be found between observed and bias-corrected data. However, the differences in the future simulation are large, depending on the characteristics of each method. The frequency (probability) that the 2073-2097 bias-corrected data exceed the 95th percentile of the 1948-1972 observed data is estimated in order to evaluate the differences among methods. The difference between the proposed method and one of the previous methods is more than 10% in many areas. The differences in bias-corrected data among methods are discussed based on their respective characteristics. The results
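One concrete instance of the type that estimates cumulative probability without using future statistics is empirical quantile mapping. The sketch below uses simple empirical-CDF conventions of our own choosing, not any specific method from the study:

```python
def quantile_map(obs_cal, sim_cal, sim_fut):
    """Empirical quantile-mapping bias correction.

    Each future simulated value is replaced by the observed value found at
    the same cumulative probability that the value has in the
    calibration-period simulation. Only calibration-period statistics are
    used, so the mapping is fixed before the projection period is seen.
    """
    obs_s = sorted(obs_cal)
    sim_s = sorted(sim_cal)
    n = len(sim_s)

    def cdf(v):
        # empirical CDF of the calibration-period simulation
        return sum(1 for s in sim_s if s <= v) / n

    def inv_obs(p):
        # empirical quantile (inverse CDF) of the calibration observations
        i = min(int(p * len(obs_s)), len(obs_s) - 1)
        return obs_s[i]

    return [inv_obs(cdf(v)) for v in sim_fut]
```

For a simulation with a constant bias, this recovers the observed distribution up to the discreteness of the empirical quantiles; parametric variants fit distributions instead of using sorted samples.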
A Multi-Index Integrated Change Detection Method for Updating the National Land Cover Database
NASA Astrophysics Data System (ADS)
Jin, S.; Yang, L.; Xian, G. Z.; Danielson, P.; Homer, C.
2010-12-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.
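The per-pixel index formulas can be sketched as follows. NDVI and NBR are standard band ratios and the change-vector magnitude is a common formulation; the RCV formula is not given in the abstract, so the per-band normalization used here is purely an assumption for illustration:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def nbr(nir, swir2):
    """Normalized Burn Ratio (NIR vs. shortwave infrared)."""
    return (nir - swir2) / (nir + swir2)

def change_vector(bands_t1, bands_t2):
    """Euclidean magnitude of the spectral difference between dates."""
    return sum((b2 - b1) ** 2 for b1, b2 in zip(bands_t1, bands_t2)) ** 0.5

def relative_change_vector(bands_t1, bands_t2):
    """Illustrative stand-in for RCV: the paper's formula is not given
    in the abstract, so here each band difference is normalized by its
    date-1 value before taking the magnitude (an assumption)."""
    return sum(((b2 - b1) / b1) ** 2 for b1, b2 in zip(bands_t1, bands_t2)) ** 0.5
```

In the integrated model, thresholds on several of these indices would be combined to flag change location, with the signs of the differences indicating direction (biomass increase or decrease).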
A Multi-Index Integrated Change detection method for updating the National Land Cover Database
Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin
2010-01-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.
Trigonometrically fitted two step hybrid method for the numerical integration of second order IVPs
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2016-06-01
In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct trigonometrically fitted two step hybrid methods. We apply the new methods to the numerical integration of several test problems.
Hubbell rectangular source integral calculation using a fast Chebyshev wavelets method.
Manai, K; Belkadhi, K
2016-07-01
An integration method based on Chebyshev wavelets is presented and used to calculate the Hubbell rectangular source integral. A study of the convergence and the accuracy of the method was carried out by comparing it to previous studies. PMID:27152913
mulPBA: an efficient multiple protein structure alignment method based on a structural alphabet.
Léonard, Sylvain; Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; Gelly, Jean-Christophe; de Brevern, Alexandre G
2014-04-01
The increasing number of available protein structures requires efficient tools for multiple structure comparison. Indeed, multiple structural alignments are essential for the analysis of function, evolution and architecture of protein structures. For this purpose, we proposed a new web server called multiple Protein Block Alignment (mulPBA). This server implements a method based on a structural alphabet to describe the backbone conformation of a protein chain in terms of dihedral angles. This 'sequence-like' representation enables the use of powerful sequence alignment methods for primary structure comparison, followed by an iterative refinement of the structural superposition. This approach yields alignments superior to most of the rigid-body alignment methods and highly comparable with the flexible structure comparison approaches. We implement this method in a web server designed to perform multiple structure superimpositions on a set of structures given by the user. Outputs are given as both a sequence alignment and superposed 3D structures visualized directly by static images generated by PyMol or through a Jmol applet allowing dynamic interaction. Multiple global quality measures are given. Relatedness between structures is indicated by a distance dendrogram. Superimposed structures in PDB format can also be downloaded, and the results are obtained quickly. The mulPBA server can be accessed at www.dsimb.inserm.fr/dsimb_tools/mulpba/ . PMID:23659291
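The key idea, encoding backbone conformation as a letter string so that sequence-alignment machinery applies, can be illustrated with a toy global aligner (Needleman-Wunsch with arbitrary illustrative scores; mulPBA's actual Protein Block substitution scores and iterative refinement are not reproduced here):

```python
def align(s1, s2, match=2, mismatch=-1, gap=-2):
    """Global (Needleman-Wunsch) alignment of two structural-letter
    strings; returns the two gapped strings and the score."""
    n, m = len(s1), len(s2)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if s1[i - 1] == s2[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    # traceback from the bottom-right corner
    a1, a2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + (
                match if s1[i - 1] == s2[j - 1] else mismatch):
            a1.append(s1[i - 1]); a2.append(s2[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            a1.append(s1[i - 1]); a2.append('-'); i -= 1
        else:
            a1.append('-'); a2.append(s2[j - 1]); j -= 1
    return ''.join(reversed(a1)), ''.join(reversed(a2)), F[n][m]
```

On two short letter strings the aligner inserts a gap opposite the unmatched letter, exactly as it would for amino-acid sequences.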
A new method for conservation planning for the persistence of multiple species.
Nicholson, Emily; Westphal, Michael I; Frank, Karin; Rochester, Wayne A; Pressey, Robert L; Lindenmayer, David B; Possingham, Hugh P
2006-09-01
Although the aim of conservation planning is the persistence of biodiversity, current methods trade off ecological realism at a species level in favour of including multiple species and landscape features. For conservation planning to be relevant, the impact of landscape configuration on population processes and the viability of species needs to be considered. We present a novel method for selecting reserve systems that maximize persistence across multiple species, subject to a conservation budget. We use a spatially explicit metapopulation model to estimate extinction risk, a function of the ecology of the species and the amount, quality and configuration of habitat. We compare our new method with more traditional, area-based reserve selection methods, using a ten-species case study, and find that the expected loss of species is reduced 20-fold. Unlike previous methods, we avoid designating arbitrary weightings between reserve size and configuration; rather, our method is based on population processes and is grounded in ecological theory. PMID:16925654
Antanaviciute, Agne; Watson, Christopher M.; Harrison, Sally M.; Lascelles, Carolina; Crinnion, Laura; Markham, Alexander F.; Bonthron, David T.; Carr, Ian M.
2015-01-01
Motivation: Exome sequencing has become a de facto standard method for Mendelian disease gene discovery in recent years, yet identifying disease-causing mutations among thousands of candidate variants remains a non-trivial task. Results: Here we describe a new variant prioritization tool, OVA (ontology variant analysis), in which user-provided phenotypic information is exploited to infer deeper biological context. OVA combines a knowledge-based approach with a variant-filtering framework. It reduces the number of candidate variants by considering genotype and predicted effect on protein sequence, and scores the remainder on biological relevance to the query phenotype. We take advantage of several ontologies in order to bridge knowledge across multiple biomedical domains and facilitate computational analysis of annotations pertaining to genes, diseases, phenotypes, tissues and pathways. In this way, OVA combines information regarding molecular and physical phenotypes and integrates both human and model organism data to effectively prioritize variants. By assessing performance on both known and novel disease mutations, we show that OVA performs biologically meaningful candidate variant prioritization and can be more accurate than another recently published candidate variant prioritization tool. Availability and implementation: OVA is freely accessible at http://dna2.leeds.ac.uk:8080/OVA/index.jsp Supplementary information: Supplementary data are available at Bioinformatics online. Contact: umaan@leeds.ac.uk PMID:26272982
NASA Astrophysics Data System (ADS)
Mao, Yadan; Luick, John L.
2014-03-01
New mechanisms for stratification and upwelling in the southern Great Barrier Reef (GBR) are identified, and dynamic details of Capricorn Eddy, a transient feature located off the shelf at the southern extremity of the GBR, are revealed using the newly available surface current from High Frequency (HF) radar combined with other remote sensing and mooring data. The HF radar surface currents were used for tidal harmonic analysis and current-wind correlation analysis. These analyses, combined with Sea Surface Temperature (SST) data, mooring data, and altimetry-based geostrophic currents, enabled the effects of forcing from the large-scale oceanic currents (including the East Australian Current (EAC)), wind, and tides in a topographically complex flow regime to be separately identified. Within the indentation region where the width of the shelf abruptly narrows, current is strongly coupled with the EAC. Here strong residual flows, identified on current maps and SST images, fall into three patterns: southward flow, northwestward flow, and an eddy. Multiple data sets shed light on the prerequisite for the formation of the eddy, the reasons for its geometric variation, and its evolution with time. Intrusions of the eddy onto the shelf result in stratification characterized by a significant increase of surface temperature. Upwelling driven by wind or oceanic inflow is shown to cause stratification of previously well-mixed shelf water. The upwelling appears to be associated with equatorward-traveling coastal-trapped waves. The integrative method of analysis embodied here is applicable to other coastal regions with complex circulation.
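Tidal harmonic analysis of the HF-radar surface currents amounts to least-squares fitting sinusoids at known constituent frequencies. A minimal single-constituent sketch (the study's actual analysis would resolve many constituents simultaneously; names and the test period are illustrative):

```python
import numpy as np

def fit_constituent(t_hours, u, period_hours):
    """Least-squares fit of u(t) ~ mean + A*cos(w*t) + B*sin(w*t)
    for one tidal constituent of known period (e.g. M2, 12.42 h).
    Returns the mean flow, constituent amplitude, and phase."""
    w = 2.0 * np.pi / period_hours
    G = np.column_stack([np.ones_like(t_hours),
                         np.cos(w * t_hours),
                         np.sin(w * t_hours)])
    (mean, A, B), *_ = np.linalg.lstsq(G, u, rcond=None)
    return mean, np.hypot(A, B), np.arctan2(B, A)
```

Subtracting the fitted tidal signal from the observed currents leaves the residual flow that can then be correlated with wind and boundary forcing.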
Petersen, Kia Vest; Martinussen, Jan; Jensen, Peter Ruhdal; Solem, Christian
2013-06-01
We present a tool for repetitive, marker-free, site-specific integration in Lactococcus lactis, in which a nonreplicating plasmid vector (pKV6) carrying a phage attachment site (attP) can be integrated into a bacterial attachment site (attB). The novelty of the tool described here is the inclusion of a minimal bacterial attachment site (attB(min)), two mutated loxP sequences (lox66 and lox71) allowing for removal of undesirable vector elements (antibiotic resistance marker), and a counterselection marker (oroP) for selection of loxP recombination on the pKV6 vector. When transformed into L. lactis expressing the phage TP901-1 integrase, pKV6 integrates with high frequency into the chromosome, where it is flanked by attL and attR hybrid attachment sites. After expression of Cre recombinase from a plasmid that is not able to replicate in L. lactis, loxP recombinants can be selected for by using 5-fluoroorotic acid. The introduced attB(min) site can subsequently be used for a second round of integration. To examine if attP recombination was specific to the attB site, integration was performed in strains containing the attB, attL, and attR sites or the attL and attR sites only. Only attP-attB recombination was observed when all three sites were present. In the absence of the attB site, a low frequency of attP-attL recombination was observed. To demonstrate the functionality of the system, the xylose utilization genes (xylABR and xylT) from L. lactis strain KF147 were integrated into the chromosome of L. lactis strain MG1363 in two steps. PMID:23542630
Identification and integration of Picorna-like viruses in multiple insect taxa
Technology Transfer Automated Retrieval System (TEKTRAN)
Virus infection often leads to incorporation of a piece of the virus genetic code into the genome of the host organism, referred to as integration. Determining if the virus has integrated into the host genome provides valuable information needed to monitor disease spread. Detection of integrated vir...
A method to visualize the evolution of multiple interacting spatial systems
NASA Astrophysics Data System (ADS)
Heitzler, Magnus; Hackl, Jürgen; Adey, Bryan T.; Iosifescu-Enescu, Ionut; Lam, Juan Carlos; Hurni, Lorenz
2016-07-01
Integrated modeling approaches are increasingly used to simulate the behavior of, and the interaction between, several interdependent systems. They are becoming more and more important in many fields, including, but not limited to, civil engineering, hydrology and climate impact research. When using these approaches, it is beneficial to be able to visualize both the intermediary and the final results of scenario-based analyses conducted in both space and time. This requires appropriate visualization techniques that enable efficient navigation between multiple such scenarios. In recent years, several innovative visualization techniques have been developed for such navigation purposes. These techniques, however, are limited to the representation of one system at a time. Improvements are possible with respect to the ability to visualize the results related to multiple scenarios for multiple interdependent spatio-temporal systems. To address this issue, existing multi-scenario navigation techniques based on small multiples and line graphs are extended by multiple system representations and inter-system impact representations. This not only makes it possible to understand the evolution of the systems under consideration but also makes it easier to identify events where one system significantly influences another. In addition, the concept of selective branching is described, which removes otherwise redundant information from the visualization by considering the logical and temporal dependencies between these systems. This visualization technique is applied to a risk assessment methodology that determines how different environmental systems (i.e. precipitation, flooding, and landslides) influence each other as well as how their impact on civil infrastructure affects society. The results of this work are concepts for improved visualization techniques for multiple interacting spatial systems. The successful validation with domain experts of
Atomic Calculations with a One-Parameter, Single Integral Method.
ERIC Educational Resources Information Center
Baretty, Reinaldo; Garcia, Carmelo
1989-01-01
Presents an energy function E(p) containing a single integral and one variational parameter, alpha. Represents all two-electron integrals within the local density approximation as a single integral. Identifies this as a simple treatment for use in an introductory quantum mechanics course. (MVL)
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle a complex inspection task accurately and effectively on its own. The prevailing solution is to integrate multiple sensors and take advantage of their respective strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system that integrates different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be moved optimally to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the generalized Gauss-Markov model is then used to estimate the optimal transformation parameters. Experiments on a blade, in which several sampled patches are merged into one point cloud, verify the performance of the proposed method.
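At its core, the extrinsic calibration estimates a rigid transformation between sensor frames from corresponding points on the artifact. The paper's generalized Gauss-Markov estimation is not reproduced here; a simpler SVD-based least-squares rigid-transform sketch (the Kabsch procedure) conveys the idea:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t such that
    R @ P[i] + t ~ Q[i] for corresponding point sets P, Q (N x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

Given at least three non-collinear correspondences, the recovered transform maps one sensor's point cloud into the other's coordinate frame.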
NASA Astrophysics Data System (ADS)
Ilhan, I.; Coakley, B. J.
2013-12-01
The Chukchi Edges project was designed to establish the relationship between the Chukchi Shelf and Borderland and indirectly test theories of opening for the Canada Basin. During this cruise, ~5300 km of 2D multi-channel reflection seismic profiles and other geophysical data (swath bathymetry, gravity, magnetics, sonobuoy refraction seismic) were collected from the RV Marcus G. Langseth across the transition between the Chukchi Shelf and Chukchi Borderland, where the water depths vary from 30 m to over 3 km. Multiples occur when seismic energy is trapped in a layer and reflected from an acoustic interface more than once. Various kinds of multiples occur during seismic data acquisition, depending on the ray-path the seismic energy follows through the layers. One of the most common is the surface-related multiple, which occurs due to the strong acoustic impedance contrast between air and water. Seismic energy reflected from the water surface is trapped within the water column and thus reflects from the seafloor multiple times. Multiples overprint the primary reflections and complicate data interpretation. Both surface-related multiple elimination (SRME) and forward parabolic Radon transform multiple modeling were necessary to attenuate the multiples. SRME is applied to shot gathers, starting with near-offset interpolation, followed by multiple estimation using water depths and subtraction of the modeled multiples from the shot gathers. This method attenuated surface-related multiple energy; however, peg-leg multiples remained in the data. The parabolic Radon transform method minimized the effect of these multiples. It is applied to normal-moveout (NMO) corrected common-midpoint (CMP) gathers. The CMP gathers are modeled with curves estimated from the reference-offset, moveout-range, and moveout-increment parameters. The modeled multiples are then subtracted from the data. Preliminary outputs of these two methods show that the surface related
Unsteady aerodynamic simulation of multiple bodies in relative motion: A prototype method
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1989-01-01
A prototype method for time-accurate simulation of multiple aerodynamic bodies in relative motion is presented. The method is general and features unsteady chimera domain decomposition techniques and an implicit approximately factored finite-difference procedure to solve the time-dependent thin-layer Navier-Stokes equations. The method is applied to a set of two- and three- dimensional test problems to establish spatial and temporal accuracy, quantify computational efficiency, and begin to test overall code robustness.
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, J.D.; Bailey, L.L.; O'Connell, A.F., Jr.; Talancy, N.W.; Grant, E.H.C.; Gilbert, A.T.; Annand, E.M.; Husband, T.P.; Hines, J.E.
2008-01-01
1. Occupancy estimation and modelling based on detection/nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. 2. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. 3. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. 4. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can
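A minimal sketch of the two-scale likelihood for one site and a single survey occasion (a simplification of the authors' full multi-occasion model), with method-specific detection probabilities:

```python
def site_likelihood(psi, theta, p, detections):
    """Probability of one site's detection record under a two-scale
    occupancy model.

    psi        -- large-scale occupancy (species uses the sample unit)
    theta      -- small-scale occupancy (species present at the station)
    p          -- list of method-specific detection probabilities
    detections -- list of 0/1 outcomes, one per detection method
    """
    # probability of the record given local presence
    given_present = 1.0
    for pm, y in zip(p, detections):
        given_present *= pm if y else (1.0 - pm)
    # an all-zero record can also arise from local or unit-level absence
    all_zero = 1.0 if not any(detections) else 0.0
    return psi * (theta * given_present + (1 - theta) * all_zero) \
        + (1 - psi) * all_zero
```

Maximizing the product of such terms over sites yields estimates of psi, theta, and the per-method p values; differences among the fitted p values are the evidence for method-specific detectability described in the abstract.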
Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.
Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng
2014-08-01
Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm in the broadband signal focusing and the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on the multi-platform direction finding and the global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform are built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method. PMID:25173293
Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil
NASA Astrophysics Data System (ADS)
Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng
2014-08-01
Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm in the broadband signal focusing and the modified Gerschgorin disk estimator. The multiple signal classification (MUSIC) method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on the multi-platform direction finding and the global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform are built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
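The direction-finding step rests on the MUSIC algorithm. A generic narrowband MUSIC pseudospectrum for a uniform linear array (a standard textbook form, not the paper's 4 × 4 planar ultrasonic geometry) can be sketched as:

```python
import numpy as np

def music_spectrum(X, n_sources, d_over_lambda=0.5, angles=None):
    """MUSIC pseudospectrum for a uniform linear array.

    X is the (n_sensors, n_snapshots) complex snapshot matrix;
    peaks of the returned spectrum indicate directions of arrival.
    """
    if angles is None:
        angles = np.linspace(-90.0, 90.0, 361)
    n_sensors, n_snap = X.shape
    R = X @ X.conj().T / n_snap              # sample covariance
    _, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]       # noise-subspace eigenvectors
    k = np.arange(n_sensors)
    spectrum = []
    for a in angles:
        # steering vector for a plane wave from angle a
        s = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.radians(a)))
        # steering vectors of true sources are orthogonal to the noise
        # subspace, so the reciprocal projection peaks sharply there
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ s) ** 2)
    return np.asarray(angles), np.asarray(spectrum)
```

With two simulated sources the spectrum shows two sharp peaks at the true arrival angles; the paper then intersects such bearings from multiple platforms to localize the PD sources.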
Balancing multiple constraints in model-data integration: Weights and the parameter block approach
NASA Astrophysics Data System (ADS)
Wutzler, T.; Carvalhais, N.
2014-11-01
Model-data integration (MDI) studies are key to parameterizing ecosystem models that synthesize our knowledge about ecosystem function. The use of diverse data sets, however, results in strongly imbalanced contributions of data streams with model fits favoring the largest data stream. This imbalance poses new challenges in the identification of model deficiencies. A standard approach for balancing is to attribute weights to different data streams in the cost function. However, this may result in overestimation of posterior uncertainty. In this study, we propose an alternative: the parameter block approach. The proposed method enables joint optimization of different blocks, i.e., subsets of the parameters, against particular data streams. This method is applicable when specific parameter blocks are related to processes that are more strongly associated with specific observations, i.e., data streams. A comparison of different approaches using simple artificial examples and the DALEC ecosystem model is presented. The unweighted inversion of a DALEC model variant, where artificial structural errors in photosynthesis calculation had been introduced, failed to reveal the resulting biases in fast processes (e.g., turnover). The posterior bias emerged only in parameters related to slower processes (e.g., carbon allocation) constrained by fewer data sets. On the other hand, when weighted or blocked approaches were used, the introduced biases were revealed, as expected, in parameters of fast processes. Ultimately, with the parameter block approach, the transfer of model error was diminished and at the same time the overestimation of posterior uncertainty associated with weighting was prevented.
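The parameter block idea, optimizing each parameter subset only against the data stream it most strongly controls, can be illustrated on a toy two-stream model (not DALEC; the model form and closed-form updates below are assumptions purely for illustration):

```python
def fit_blocks(x1, y1, x2, y2, n_iter=20):
    """Blockwise model-data integration sketch.

    Toy model: stream 1 predicts y1 = a * x1, stream 2 predicts
    y2 = a * b * x2.  Block {a} is optimized only against stream 1,
    block {b} only against stream 2 (holding a fixed), alternately,
    each by closed-form least squares.
    """
    a, b = 1.0, 1.0
    for _ in range(n_iter):
        # block 1: fit a to data stream 1
        a = sum(xi * yi for xi, yi in zip(x1, y1)) / \
            sum(xi * xi for xi in x1)
        # block 2: fit b to data stream 2, a held fixed
        b = sum(xi * yi for xi, yi in zip(x2, y2)) / \
            (a * sum(xi * xi for xi in x2))
    return a, b
```

Because each block is constrained by its own data stream, a structural error in one stream cannot be compensated by drifting parameters that belong to the other block, which is the diagnostic benefit the abstract describes.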
Integrated method for the measurement of trace atmospheric bases
NASA Astrophysics Data System (ADS)
Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.
2011-09-01
Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.
Integrated method for the measurement of trace nitrogenous atmospheric bases
NASA Astrophysics Data System (ADS)
Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.
2011-12-01
Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.
Nearest neighbor interaction in the Path Integral Renormalization Group method
NASA Astrophysics Data System (ADS)
de Silva, Wasanthi; Clay, R. Torsten
2014-03-01
The Path Integral Renormalization Group (PIRG) method is an efficient numerical algorithm for studying ground state properties of strongly correlated electron systems. The many-body ground state wave function is approximated by an optimized linear combination of Slater determinants which satisfies the variational principle. A major advantage of PIRG is that it does not suffer from the fermion sign problem of quantum Monte Carlo. Results are exact in the noninteracting limit and can be enhanced using space and spin symmetries. Many observables can be calculated using Wick's theorem. PIRG has been used predominantly for the Hubbard model with a single on-site Coulomb interaction U. We describe an extension of PIRG to the extended Hubbard model (EHM) including U and a nearest-neighbor interaction V. The EHM is particularly important in models of charge-transfer solids (organic superconductors) and at 1/4-filling drives a charge-ordered state. The presence of lattice frustration also makes studying these systems difficult. We test the method with comparisons to small clusters and long one dimensional chains, and show preliminary results for a coupled-chain model for the (TMTTF)2X materials. This work was supported by DOE grant DE-FG02-06ER46315.
Non-destructive testing method and apparatus utilizing phase multiplication holography
Collins, H. Dale; Prince, James M.; Davis, Thomas J.
1984-01-01
An apparatus and method for imaging structural characteristics in test objects using radiation amenable to coherent signal processing methods. Frequency and phase multiplication of received flaw signals is used to simulate a test wavelength at least one to two orders of magnitude smaller than the actual wavelength. The apparent reduction in wavelength between the illumination and recording radiation yields a frequency-translation hologram. The hologram constructed with a high synthetic frequency and flaw phase multiplication is similar to a conventional acoustic hologram constructed at the high frequency.
Method of forming a multiple layer dielectric and a hot film sensor therewith
NASA Technical Reports Server (NTRS)
Hopson, Purnell, Jr. (Inventor); Tran, Sang Q. (Inventor)
1990-01-01
The invention is a method of forming a multiple layer dielectric for use in a hot-film laminar separation sensor. The multiple layer dielectric substrate is formed by depositing a first layer of a thermoplastic polymer on an electrically conductive substrate, such as the metal surface of a model to be tested under cryogenic conditions and high Reynolds numbers. Next, a second dielectric layer of fused silica is formed on the first dielectric layer of thermoplastic polymer. A resistive metal film is deposited on selected areas of the multiple layer dielectric substrate to form one or more hot-film sensor elements, to which aluminum electrical circuits deposited on the multiple layer dielectric substrate are connected.
Evaluation of the Hubbell rectangular source integral using Haar wavelets method
NASA Astrophysics Data System (ADS)
Belkadhi, K.; Manai, K.
2016-05-01
The Haar wavelets numerical integration method is presented and used to evaluate the Hubbell rectangular source integral. The convergence of the method is studied to determine the minimum number of iterations required for a desired precision. The Haar wavelet results are then compared with those obtained using other integration methods.
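A minimal sketch of Haar-wavelet quadrature may help: expanding the integrand in the Haar basis and integrating term by term collapses to sampling the integrand at 2M equally spaced midpoints, with M = 2**J at resolution level J. The function name is the author's own, and the paper's exact collocation scheme may differ:

```python
import numpy as np

def haar_integrate(f, a, b, J):
    """Haar-wavelet quadrature of f over [a, b] at resolution level J.

    After term-by-term integration of the Haar expansion, the rule
    reduces to a weighted sum of f at the 2M midpoints, M = 2**J.
    """
    m2 = 2 * 2 ** J
    x = a + (b - a) * (np.arange(1, m2 + 1) - 0.5) / m2
    return (b - a) / m2 * np.sum(f(x))

# example: integral of x**2 over [0, 1] is exactly 1/3;
# raising J refines the approximation (the "iteration number" above)
approx = haar_integrate(lambda x: x ** 2, 0.0, 1.0, 8)
```

Doubling M (incrementing J) reduces the error by roughly a factor of four for smooth integrands, which is the convergence behavior one would study to pick the minimum level for a target precision.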
Baallal Jacobsen, Simo Abdessamad; Jensen, Niels B.; Kildegaard, Kanchana R.; Herrgård, Markus J.; Schneider, Konstantin; Koza, Anna; Forster, Jochen; Nielsen, Jens; Borodina, Irina
2016-01-01
Saccharomyces cerevisiae is widely used in the biotechnology industry for production of ethanol, recombinant proteins, food ingredients and other chemicals. In order to generate high-producing and stable strains, genome integration of genes encoding metabolic pathway enzymes is the preferred option. However, integration of pathway genes in single or few copies, especially those encoding rate-controlling steps, is often not sufficient to sustain high metabolic fluxes. By exploiting the sequence diversity in the long terminal repeats (LTR) of Ty retrotransposons, we developed a new set of integrative vectors, EasyCloneMulti, that enables multiple, simultaneous integration of genes in S. cerevisiae. By creating vector backbones that combine consensus sequences targeting subsets of Ty sequences with a quickly degrading selective marker, integrations at multiple genomic loci and a range of expression levels were obtained, as assessed with the green fluorescent protein (GFP) reporter system. The EasyCloneMulti vector set was applied to balance the expression of the rate-controlling step in the β-alanine pathway for biosynthesis of 3-hydroxypropionic acid (3HP). The best 3HP-producing clone, at 5.45 g·L⁻¹ of 3HP, produced 11 times more 3HP than the lowest-producing clone, which demonstrates the capability of EasyCloneMulti vectors to modulate metabolic pathway enzyme activity. PMID:26934490
Comparison of methods for assessing integrity of equine sperm membranes.
Foster, M L; Love, C C; Varner, D D; Brinsko, S P; Hinrichs, K; Teague, S; Lacaze, K; Blanchard, T L
2011-07-15
Sperm membrane integrity (SMI) is thought to be an important measure of stallion sperm quality. The objective was to compare three methods for evaluating SMI: flow cytometry using SYBR-14/propidium iodide (PI) stain; an automated cell counting device using PI stain; and eosin-nigrosin stain. Raw equine semen was subjected to various treatments containing 20 to 80% seminal plasma in extender, with differing sperm concentrations, to simulate spontaneous loss of SMI. The SMI was assessed immediately, and after 1 and 2 d of cooled storage. Agreement between methods was determined according to Bland-Altman methodology. Eosin-nigrosin staining yielded higher (2%) overall mean values for SMI than did flow cytometry. Flow cytometry yielded higher (6%) overall mean values for SMI than did the automated cell counter. As percentage of membrane-damaged sperm increased, agreement of SMI measurement between methods decreased. When semen contained 50-79% membrane-intact sperm, the 95% limits of agreement between SMI determined by flow cytometry and eosin-nigrosin staining were greater (range = -26.9 to 24.3%; i.e., a 51.2% span) than for SMI determined by flow cytometry and the automated cell counter (range = -3.1 to 17.0%; 20.1% span). When sperm populations contained <50% membrane-intact sperm, the 95% limits of agreement between SMI determined by flow cytometry and eosin-nigrosin staining were greater (range = -35.9 to 19.0%; 54.9% span) than for SMI determined by flow cytometry and the automated cell counter (range = -11.6 to 28.7%; 40.3% span). We concluded that eosin-nigrosin staining assessments of percent membrane-intact sperm agreed less with flow cytometry when <80% of sperm had intact membranes, whereas automated cell counter assessments of percent membrane-intact sperm agreed less with flow cytometry when <30% of sperm had intact membranes. PMID:21496902
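The Bland-Altman limits of agreement quoted above (bias plus or minus 1.96 standard deviations of the pairwise differences, with the span as their width) can be computed with a short sketch; the function name and the toy data are the author's own, not values from the study:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman agreement between two measurement methods.

    Returns (bias, lower limit, upper limit, span), where the 95%
    limits of agreement are bias +/- 1.96 * SD of the differences.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)          # sample SD of the paired differences
    lo, hi = bias - 1.96 * sd, bias + 1.96 * sd
    return bias, lo, hi, hi - lo

# toy paired measurements from two hypothetical methods
bias, lo, hi, span = bland_altman_limits([1, 2, 3, 4], [0, 2, 2, 4])
```

A wider span (as the abstract reports for eosin-nigrosin versus flow cytometry at low membrane integrity) means poorer agreement between the two methods.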
Low-noise multiple watermarks technology based on complex double random phase encoding method
NASA Astrophysics Data System (ADS)
Zheng, Jihong; Lu, Rongwen; Sun, Liujie; Zhuang, Songlin
2010-11-01
Based on the double random phase encoding (DRPE) method, watermarking technology can provide a stable and robust means of protecting the copyright of printed images. However, owing to its linear character, DRPE presents a serious security risk when attacked. In this paper, a complex coding method, in which chaotic encryption based on the logistic map is applied before DRPE coding, is proposed and simulated. The results confirm that the complex method provides better security protection for the watermark. Furthermore, low-noise multiple watermarking is studied, in which multiple watermarks are embedded into one host image and decrypted individually with the corresponding phase keys. Digital simulation and mathematical analysis show that, for the same total embedding weight factor, multiple watermarking significantly improves the signal-to-noise ratio (SNR) of the output image. The complex multiple-watermark method may thus provide robust, stable, and reliable copyright protection with higher image quality.
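The two building blocks named in the abstract, a logistic-map chaotic sequence and DRPE with phase masks in the input and Fourier planes, can be sketched as follows (a minimal illustration with hypothetical names and parameters, not the paper's full watermarking scheme):

```python
import numpy as np

def logistic_seq(x0, r, n):
    # chaotic logistic map x_{k+1} = r * x_k * (1 - x_k), used here
    # to derive reproducible key-dependent phase masks
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = r * x * (1 - x)
        xs[k] = x
    return xs

def drpe_encrypt(img, m1, m2):
    # double random phase encoding: one phase mask in the input
    # plane, a second in the Fourier plane
    return np.fft.ifft2(np.fft.fft2(img * np.exp(2j * np.pi * m1))
                        * np.exp(2j * np.pi * m2))

def drpe_decrypt(ct, m1, m2):
    # decryption applies the conjugate phase masks in reverse order
    return (np.fft.ifft2(np.fft.fft2(ct) * np.exp(-2j * np.pi * m2))
            * np.exp(-2j * np.pi * m1))

seq = logistic_seq(0.3, 3.99, 2 * 64)
m1, m2 = seq[:64].reshape(8, 8), seq[64:].reshape(8, 8)
img = np.random.default_rng(0).random((8, 8))
recovered = drpe_decrypt(drpe_encrypt(img, m1, m2), m1, m2)
```

Only the holder of both chaotic phase keys can invert the encoding; with the correct masks the round trip recovers the host image to floating-point precision.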