Method and system of integrating information from multiple sources
Alford, Francine A.; Brinkerhoff, David L.
2006-08-15
A system and method of integrating information from multiple sources in a document-centric application system. A plurality of application systems are connected through an object request broker to a central repository. The information may then be posted on a webpage. An example of an implementation of the method and system is an online procurement system.
Improving the accuracy of multiple integral evaluation by applying Romberg's method
NASA Astrophysics Data System (ADS)
Zhidkov, E. P.; Lobanov, Yu. Yu.; Rushai, V. D.
2009-02-01
Romberg’s method, which is used to improve the accuracy of one-dimensional integral evaluation, is extended to multiple integrals when they are evaluated using products of composite quadrature formulas. Under certain conditions, the coefficients of the Romberg formula are independent of the integral’s multiplicity, which makes it possible to use a simple evaluation algorithm developed for one-dimensional integrals. As examples, integrals of multiplicity two to six are evaluated by Romberg’s method and the results are compared with those obtained by other methods.
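The idea can be illustrated with a minimal sketch (ours, not the authors' algorithm; all names are assumptions): a composite product-trapezoid rule over a square, refined by successive grid halving and combined with the classical one-dimensional Romberg coefficients, which, as the abstract notes, carry over unchanged under the stated conditions.

```python
import math

def trap2d(f, ax, bx, ay, by, n):
    """Composite trapezoid rule on an (n+1) x (n+1) product grid."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n + 1):
        wi = 0.5 if i in (0, n) else 1.0
        for j in range(n + 1):
            wj = 0.5 if j in (0, n) else 1.0
            total += wi * wj * f(ax + i * hx, ay + j * hy)
    return total * hx * hy

def romberg2d(f, ax, bx, ay, by, levels=5):
    """Richardson-extrapolate successive grid halvings. Because the
    product-trapezoid error expands in even powers of h, the classical
    1-D Romberg coefficients 1/(4^m - 1) apply unchanged."""
    R = [[trap2d(f, ax, bx, ay, by, 2 ** k)] for k in range(levels)]
    for k in range(1, levels):
        for m in range(1, k + 1):
            R[k].append(R[k][m - 1] + (R[k][m - 1] - R[k - 1][m - 1]) / (4 ** m - 1))
    return R[-1][-1]

# Example: the integral of e^(x+y) over [0,1]^2 equals (e - 1)^2.
approx = romberg2d(lambda x, y: math.exp(x + y), 0.0, 1.0, 0.0, 1.0)
exact = (math.e - 1.0) ** 2
```

With five refinement levels the finest grid is only 17 x 17 points, yet the extrapolated value agrees with the exact integral to many digits, which is the accuracy gain the paper exploits.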
NASA Astrophysics Data System (ADS)
Tang, Xiaojun
2016-04-01
The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.
Zhang, Yuan; Cheng, Yue; Ge, Liang; Du, Nan; Jia, Kebin; Zhang, Aidong
2015-01-01
Many clustering methods have been developed to identify functional modules in Protein-Protein Interaction (PPI) networks, but the results are far from satisfactory. To overcome the noise and incompleteness of PPI networks and find more accurate and stable functional modules, we propose an integrative method, the bipartite graph-based Non-negative Matrix Factorisation method (BiNMF), in which we adopt multiple biological data sources as different views that describe PPIs. Specifically, traditional clustering models are adopted as preliminary analyses of the different views of protein functional similarity. The intermediate clustering results are then represented by a bipartite graph, which can comprehensively represent the relationships between proteins and intermediate clusters, and finally overlapping clustering results are obtained. Through extensive experiments we show that our method is superior to baseline methods, and detailed analysis demonstrates the benefits of integrating diverse clustering methods and multiple biological information sources. PMID:26547971
Nakamura, Kunio; Guizard, Nicolas; Fonov, Vladimir S.; Narayanan, Sridar; Collins, D. Louis; Arnold, Douglas L.
2013-01-01
Gray matter atrophy provides important insights into neurodegeneration in multiple sclerosis (MS) and can be used as a marker of neuroprotection in clinical trials. Jacobian integration is a method for measuring volume change that uses integration of the local Jacobian determinants of the nonlinear deformation field registering two images, and is a promising tool for measuring gray matter atrophy. Our main objective was to compare the statistical power of the Jacobian integration method to commonly used methods in terms of the sample size required to detect a treatment effect on gray matter atrophy. We used multi-center longitudinal data from relapsing–remitting MS patients and evaluated combinations of cross-sectional and longitudinal pre-processing with SIENAX/FSL, SPM, and FreeSurfer, as well as the Jacobian integration method. The Jacobian integration method outperformed these other commonly used methods, reducing the required sample size by a factor of 4–5. The results demonstrate the advantage of using the Jacobian integration method to assess neuroprotection in MS clinical trials. PMID:24266007
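The core computation of Jacobian integration can be sketched in a simplified two-dimensional analogue (illustrative only; the paper operates on nonlinear 3-D registration fields from multi-center MRI): the local Jacobian determinant of the deformation field is estimated by finite differences and integrated over a region of interest. The toy affine field and all names below are assumptions.

```python
def jacobian_det_2d(phi_x, phi_y, h):
    """Jacobian determinant of a sampled 2-D deformation field,
    estimated with central differences at interior grid points."""
    ny, nx = len(phi_x), len(phi_x[0])
    det = [[None] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            d_phix_dx = (phi_x[i][j + 1] - phi_x[i][j - 1]) / (2 * h)
            d_phix_dy = (phi_x[i + 1][j] - phi_x[i - 1][j]) / (2 * h)
            d_phiy_dx = (phi_y[i][j + 1] - phi_y[i][j - 1]) / (2 * h)
            d_phiy_dy = (phi_y[i + 1][j] - phi_y[i - 1][j]) / (2 * h)
            det[i][j] = d_phix_dx * d_phiy_dy - d_phix_dy * d_phiy_dx
    return det

# Toy affine "atrophy" field phi(x, y) = (0.9*x, 0.95*y): its Jacobian
# determinant is 0.9 * 0.95 = 0.855 everywhere, i.e. a 14.5% volume loss.
n, h = 11, 0.1
phi_x = [[0.9 * (j * h) for j in range(n)] for i in range(n)]
phi_y = [[0.95 * (i * h) for j in range(n)] for i in range(n)]
det = jacobian_det_2d(phi_x, phi_y, h)
# Averaging det J over the region of interest estimates the volume ratio.
interior = [det[i][j] for i in range(1, n - 1) for j in range(1, n - 1)]
volume_ratio = sum(interior) / len(interior)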
A Nonparametric, Multiple Imputation-Based Method for the Retrospective Integration of Data Sets.
Carrig, Madeline M; Manrique-Vallier, Daniel; Ranby, Krista W; Reiter, Jerome P; Hoyle, Rick H
2015-01-01
Complex research questions often cannot be addressed adequately with a single data set. One sensible alternative to the high cost and effort associated with the creation of large new data sets is to combine existing data sets containing variables related to the constructs of interest. The goal of the present research was to develop a flexible, broadly applicable approach to the integration of disparate data sets that is based on nonparametric multiple imputation and the collection of data from a convenient, de novo calibration sample. We demonstrate proof of concept for the approach by integrating three existing data sets containing items related to the extent of problematic alcohol use and associations with deviant peers. We discuss both necessary conditions for the approach to work well and potential strengths and weaknesses of the method compared to other data set integration approaches.
Musick, Charles R.; Critchlow, Terence; Ganesh, Madhaven; Slezak, Tom; Fidelis, Krzysztof
2006-12-19
A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
Simulating multiple diffraction in imaging systems using a path integration method.
Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Jörg; Urbach, Paul
2016-05-10
We present a method for simulating multiple diffraction in imaging systems based on the Huygens-Fresnel principle. The method accounts for the effects of both aberrations and diffraction and is performed entirely using Monte Carlo ray tracing. We compare the results of this method with those of reference simulations for field propagation through optical systems and for the calculation of point spread functions. The method can accurately model a wide variety of optical systems beyond the exit pupil approximation. PMID:27168302
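The underlying principle can be sketched as follows (a toy one-dimensional-aperture version, not the paper's ray-tracing implementation; all parameters are ours): random secondary-source points in a slit are sampled Monte Carlo fashion and their complex Huygens-wavelet contributions are averaged at an observation point, with parameters chosen so that the first single-slit diffraction minimum is visible.

```python
import cmath
import math
import random

def fresnel_field(half_width, wavelength, x_screen, z, n_samples, seed=1):
    """Huygens-Fresnel by Monte Carlo: average the complex secondary-
    wavelet contributions exp(i*k*r)/r from random source points in a
    1-D slit aperture of the given half-width, observed at (x_screen, z)."""
    rng = random.Random(seed)
    k = 2.0 * math.pi / wavelength
    acc = 0j
    for _ in range(n_samples):
        xs = rng.uniform(-half_width, half_width)
        r = math.hypot(x_screen - xs, z)
        acc += cmath.exp(1j * k * r) / r
    return acc / n_samples

# 100 um slit, 500 nm light, screen 1 m away: the first single-slit
# minimum sits near x = z * wavelength / slit_width = 5 mm.
center = fresnel_field(50e-6, 0.5e-6, 0.0, 1.0, 20000)
minimum = fresnel_field(50e-6, 0.5e-6, 5e-3, 1.0, 20000)
```

The Monte Carlo average converges to the Fresnel diffraction integral, so the field amplitude at the predicted minimum is far below the on-axis amplitude; the residual there is just sampling noise.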
NASA Technical Reports Server (NTRS)
Bogart, Edward H. (Inventor); Pope, Alan T. (Inventor)
2000-01-01
A system for display of multiple physiological measurements on a single video display terminal is provided. A subject is monitored by a plurality of instruments that feed data to a computer programmed to receive the data, calculate data products such as the index of engagement and heart rate, and display the data simultaneously in a graphical format on a single video display terminal. In addition, live video showing the subject and the experimental setup may also be integrated into the single data display. The display may be recorded on a standard video tape recorder for retrospective analysis.
Reichardt, Jens; Reichardt, Susanne
2006-04-20
A method is presented that permits determination of the cloud effective particle size from Raman- or Rayleigh-integration temperature measurements; it exploits the dependence of the multiple-scattering contributions to lidar signals from heights above the cloud on the particle size of the cloud. Independent temperature information is needed for the size determination. Using Raman-integration temperatures, the technique is applied to cirrus measurements. The magnitude of the multiple-scattering effect and the above-cloud lidar signal strength limit the method's range of applicability to cirrus optical depths from 0.1 to 0.5. Our work implies that records of stratospheric temperature obtained with lidar may be affected by multiple scattering in clouds up to heights of 30 km and beyond.
NASA Astrophysics Data System (ADS)
Chen, Duan; Cai, Wei; Zinser, Brian; Cho, Min Hyung
2016-09-01
In this paper, we develop an accurate and efficient Nyström volume integral equation (VIE) method for the Maxwell equations for a large number of 3-D scatterers. The Cauchy Principal Values that arise from the VIE are computed accurately using a finite size exclusion volume together with explicit correction integrals consisting of removable singularities. Also, the hyper-singular integrals are computed using interpolated quadrature formulae with tensor-product quadrature nodes for cubes, spheres, and cylinders, which are frequently encountered in the design of meta-materials. The resulting Nyström VIE method is shown to have high accuracy with a small number of collocation points and demonstrates p-convergence for computing the electromagnetic scattering of these objects. Numerical calculations of multiple scatterers of cubic, spherical, and cylindrical shapes validate the efficiency and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Uhde, Britta; Andreas Hahn, W.; Griess, Verena C.; Knoke, Thomas
2015-08-01
Multi-criteria decision analysis (MCDA) is a decision aid frequently used in forest management planning. It includes the evaluation of multiple criteria, such as the production of timber and non-timber forest products, and tangible as well as intangible values of ecosystem services (ES). It is therefore advantageous compared with methods that take a purely financial perspective, and accordingly MCDA methods are increasingly popular in the wide field of sustainability assessment. Hybrid approaches combine MCDA with, potentially, other decision-making techniques to exploit their individual benefits, leading to a more holistic view of the actual consequences of certain decisions. This review provides a comprehensive overview of hybrid approaches used in forest management planning. Today, science faces increasing challenges in evaluating ES and the trade-offs between them, for example between provisioning and regulating services. Because the preferences of multiple stakeholders are essential to improving the decision process in multi-purpose forestry, participatory and hybrid approaches are of particular importance, and hybrid methods show great potential for future decision making. Based on the review presented here, the development of models for use in planning processes should focus on participatory modeling and the consideration of uncertainty in the available information.
Soil mapping at regional scale using Remote Sensing - integrating multiple research methods
NASA Astrophysics Data System (ADS)
Mulder, V. L.; de Bruin, S.; Schaepman, M. E.
2012-04-01
Initiated by renewed interest in soil resources because of their role in supporting food security and climate change adaptation and mitigation, this research aims to provide a coherent methodology for soil and terrain mapping using remote sensing data. The work particularly addresses data acquisition for extensive areas where information about soils is sparse and resources are limited. The methodology aims to fully exploit data from current missions as well as the Sentinel-2 satellite mission (to be launched in 2014) for delivering soil data. The project aims to establish a coherent methodology in which RS is integrated within each part of the soil mapping process at the regional scale: (1) a sampling method (constrained Latin hypercube sampling) that acquires soil sample data representing the soil variability in the study area under time and budget constraints; (2) retrieval of composite soil mineralogy from spectroscopic data using linear mixing and non-linear methods; (3) soil property prediction at the regional scale using remote sensing data and a small primary data set. Employing regression trees and related methods along with spatial interpolation, this last part integrates the above components and produces soil property maps as well as confidence intervals for them. The methodologies are demonstrated in a 1500 km² study area in northern Morocco offering a combination of landscape diversity, sparse vegetation cover, and limited availability of existing data. With this research, we demonstrate that remote sensing plays a fundamental role in delivering detailed soil data at global and regional scales, as required for research focussing on food security and climate change adaptation and mitigation.
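The unconstrained core of Latin hypercube sampling can be sketched as below (the constrained variant used in the study additionally restricts samples to covariate combinations that actually occur in the area; that constraint is omitted here, and all names are ours): each dimension is split into n equal-probability strata, one point is drawn per stratum, and the columns are shuffled to decorrelate dimensions.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """n samples in [0,1)^dims with exactly one sample in each of the
    n equal-width strata of every dimension; per-dimension shuffling
    breaks correlations between dimensions."""
    rng = random.Random(seed)
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        # one uniformly placed point per stratum [k/n, (k+1)/n)
        points = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(points)
        for i in range(n):
            samples[i][d] = points[i]
    return samples

pts = latin_hypercube(10, 3)
```

Every marginal is guaranteed to cover all ten strata, which is what lets a small, budget-limited sample still represent the full range of each covariate.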
2012-01-01
Background The Hedgehog Signaling Pathway is one of the signaling pathways that are very important to embryonic development. The participation of inhibitors in the Hedgehog Signaling Pathway can control cell growth and death, and the search for novel inhibitors of the pathway is in great demand. In fact, effective inhibitors could provide efficient therapies for a wide range of malignancies, and targeting this pathway in cells represents a promising new paradigm for controlling cell growth and death. Current research mainly focuses on the synthesis of inhibitors derived from cyclopamine, which bind specifically to the Smo protein and can be used for cancer therapy. While quantitative structure-activity relationship (QSAR) studies have been performed for these compounds across different cell lines, none has achieved acceptable results in predicting the activity values of new compounds. In this study, we propose a novel collaborative QSAR model for inhibitors of the Hedgehog Signaling Pathway that integrates information from multiple cell lines. Such a model is expected to substantially improve on single-cell-line QSAR and to provide useful clues for developing clinically effective inhibitors and modifications of parent lead compounds targeting the Hedgehog Signaling Pathway. Results In this study, we have presented: (1) a collaborative QSAR model, which integrates information from multiple cell lines to boost the QSAR results, rather than modeling only a single cell line. Our experiments have shown that the performance of our model is significantly better than single-cell-line QSAR methods; and (2) an efficient feature selection strategy in this collaborative setting, which can derive the features important to all given cell lines while simultaneously showing their specific contributions to a particular cell line. Based on feature selection results, we have
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
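The stability motivation for integrating fast modes with their own smaller step can be illustrated with a toy scalar problem (ours, not the UCLA model's equations; all names are assumptions): a stiff linear "fast" term is sub-cycled inside each large step taken for the slow forcing.

```python
def split_step(y0, t_end, big_dt, sub, lam, slow):
    """Advance dy/dt = -lam*y + slow(t) by operator splitting: the slow
    forcing takes one large step, while the stiff linear (fast) term is
    sub-cycled with `sub` smaller forward-Euler steps."""
    y, t = y0, 0.0
    for _ in range(int(round(t_end / big_dt))):
        y += big_dt * slow(t)            # slow term: one large step
        small = big_dt / sub
        for _ in range(sub):             # fast term: sub-cycled
            y -= lam * y * small
        t += big_dt
    return y

# With lam * big_dt = 5, a plain forward-Euler step on the fast term is
# unstable, but sub-cycling (lam * small = 0.5) keeps the decay stable.
stable = split_step(1.0, 2.0, 0.1, 10, 50.0, lambda t: 0.0)
unstable = split_step(1.0, 2.0, 0.1, 1, 50.0, lambda t: 0.0)
```

The sub-cycled run decays to zero as the true solution does, while the single-step run blows up, which is why the scheme assigns each vertical mode a time step matched to its own frequency.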
S.G. Lee; J.G. Bak; Y.S. Jung; M. Bitter; K.W. Hill; G. Hoelzer; O. Wehrhan; E. Foerster
2003-04-09
This paper describes a new method for the simultaneous measurement of the integrated reflectivity of a crystal for multiple orders of reflection at a predefined Bragg angle. The technique is demonstrated with a mica crystal for Bragg angles of 43°, 47°, and 50°. The measured integrated reflectivity for Bragg reflections up to the 24th order is compared with new theoretical predictions, which are also presented in this paper.
Multiple detectors "Influence Method".
Rios, I J; Mayer, R E
2016-05-01
The "Influence Method" is conceived for the absolute determination of a nuclear particle flux in the absence of known detector efficiency and without the need to register coincidences of any kind. This method exploits the influence of the presence of one detector in the count rate of another detector, when they are placed one behind the other and define statistical estimators for the absolute number of incident particles and for the efficiency (Rios and Mayer, 2015a). Its detailed mathematical description was recently published (Rios and Mayer, 2015b) and its practical implementation in the measurement of a moderated neutron flux arising from an isotopic neutron source was exemplified in (Rios and Mayer, 2016). With the objective of further reducing the measurement uncertainties, in this article we extend the method for the case of multiple detectors placed one behind the other. The new estimators for the number of particles and the detection efficiency are herein derived. PMID:26943904
NASA Astrophysics Data System (ADS)
Chang, Xin
This dissertation proposal concerns the use of fast, broadband full-wave electromagnetic methods for modeling high-speed interconnects (e.g., vertical vias and horizontal traces) and passive components (e.g., decoupling capacitors) in PCB and package structures, in 3D ICs, die-level packaging, and SIW-based devices, to effectively model the signal integrity (SI) and power integrity (PI) aspects of a design. The main contribution of this thesis is a novel methodology that hybridizes a fast full-wave method based on the Foldy-Lax multiple scattering equations, method of moments (MoM) based 1D technology, modes-decoupling based geometry decomposition, and cavity-mode expansions, to model and simulate electromagnetic scattering effects for irregular power/ground planes, multiple vias, and traces, enabling fast and accurate link-level simulation of multilayer electronic structures. In the modeling details, the interior massively coupled multiple-vias problem is modeled mostly analytically using the Foldy-Lax multiple scattering equations. The dyadic Green's functions of the magnetic field are expressed in terms of waveguide modes in the vertical direction and vector cylindrical wave expansions or cavity-mode expansions in the horizontal direction, combined with 2D MoM realized by 1D technology. For the incident field in the case of vias in an arbitrarily shaped antipad in a finite large cavity/waveguide, the exciting and scattered field coefficients are calculated based on a transformation that converts surface integration of the magnetic surface currents in the antipad into 1D line integration of the surface charges on the vias and on the ground plane. The geometry decomposition method is applied to model and integrate both the vertical and horizontal interconnects/traces in arbitrarily shaped power/ground planes. Moreover, a new form of multiple scattering equations is derived for solving coupling effects among mixed metallic
Integrating multiple networks for protein function prediction
2015-01-01
Background High throughput techniques produce multiple functional association networks. Integrating these networks can enhance the accuracy of protein function prediction. Many algorithms have been introduced to generate a composite network, which is obtained as a weighted sum of individual networks. The weight assigned to an individual network reflects its benefit towards the protein functional annotation inference. A classifier is then trained on the composite network for predicting protein functions. However, since these techniques model the optimization of the composite network and the prediction tasks as separate objectives, the resulting composite network is not necessarily optimal for the follow-up protein function prediction. Results We address this issue by modeling the optimization of the composite network and the prediction problems within a unified objective function. In particular, we use a kernel target alignment technique and the loss function of a network based classifier to jointly adjust the weights assigned to the individual networks. We show that the proposed method, called MNet, can achieve a performance that is superior (with respect to different evaluation criteria) to related techniques using the multiple networks of four example species (yeast, human, mouse, and fly) annotated with thousands (or hundreds) of GO terms. Conclusion MNet can effectively integrate multiple networks for protein function prediction and is robust to the input parameters. Supplementary data is available at https://sites.google.com/site/guoxian85/home/mnet. The Matlab code of MNet is available upon request. PMID:25707434
NASA Astrophysics Data System (ADS)
Adamowski, Jan; Fung Chan, Hiu; Prasher, Shiv O.; Ozga-Zielinski, Bogdan; Sliusarieva, Anna
2012-01-01
Daily water demand forecasts are an important component of cost-effective and sustainable management and optimization of urban water supply systems. In this study, a method based on coupling discrete wavelet transforms (WA) and artificial neural networks (ANNs) for urban water demand forecasting applications is proposed and tested. Multiple linear regression (MLR), multiple nonlinear regression (MNLR), autoregressive integrated moving average (ARIMA), ANN and WA-ANN models for urban water demand forecasting at lead times of one day for the summer months (May to August) were developed, and their relative performance was compared using the coefficient of determination, root mean square error, relative root mean square error, and efficiency index. The key variables used to develop and validate the models were daily total precipitation, daily maximum temperature, and daily water demand data from 2001 to 2009 in the city of Montreal, Canada. The WA-ANN models were found to provide more accurate urban water demand forecasts than the MLR, MNLR, ARIMA, and ANN models. The results of this study indicate that coupled wavelet-neural network models are a potentially promising new method of urban water demand forecasting that merit further study.
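The wavelet pre-processing step of a coupled WA-ANN model can be sketched with a one-level Haar transform (the paper does not specify its wavelet; Haar is assumed here purely for illustration, and the demand values are made up): the series is split into a smooth approximation and a detail subseries, each of which would then be forecast by its own model before recombination.

```python
import math

def haar_dwt(series):
    """One level of the Haar discrete wavelet transform: low-pass
    (approximation) and high-pass (detail) coefficients."""
    s = math.sqrt(2.0)
    half = len(series) // 2
    approx = [(series[2 * i] + series[2 * i + 1]) / s for i in range(half)]
    detail = [(series[2 * i] - series[2 * i + 1]) / s for i in range(half)]
    return approx, detail

def haar_idwt(approx, detail):
    """Exact inverse of haar_dwt."""
    s = math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

# Hypothetical daily demand values (in ML/day), length a power of two.
demand = [310.0, 295.0, 330.0, 360.0, 340.0, 325.0, 400.0, 380.0]
approx, detail = haar_dwt(demand)      # smooth trend + fluctuations
reconstructed = haar_idwt(approx, detail)
```

The transform is exactly invertible, so forecasting the subseries separately and summing their reconstructions loses no information about the original series.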
Multiple-stage integrating accelerometer
Devaney, Howard F.
1986-01-01
An accelerometer assembly is provided for use in activating a switch in response to multiple acceleration pulses in series. The accelerometer includes a housing forming a chamber. An inertial mass or piston is slidably disposed in the chamber and spring biased toward a first or reset position. A damping system is also provided to damp piston movement in response to first and subsequent acceleration pulses. Additionally, a cam, including a Z-shaped slot, and cooperating follower pin slidably received therein are mounted to the piston and the housing. The middle or cross-over leg of the Z-shaped slot cooperates with the follower pin to block or limit piston movement and prevent switch activation in response to a lone acceleration pulse. The switch of the assembly is only activated after two or more separate acceleration pulses are sensed and the piston reaches the end of the chamber opposite the reset position.
Multiple-stage integrating accelerometer
Devaney, H.F.
1984-06-27
An accelerometer assembly is provided for use in activating a switch in response to multiple acceleration pulses in series. The accelerometer includes a housing forming a chamber. An inertial mass or piston is slidably disposed in the chamber and spring biased toward a first or reset position. A damping system is also provided to damp piston movement in response to first and subsequent acceleration pulses. Additionally, a cam, including a Z-shaped slot, and cooperating follower pin slidably received therein are mounted to the piston and the housing. The middle or cross-over leg of the Z-shaped slot cooperates with the follower pin to block or limit piston movement and prevent switch activation in response to a lone acceleration pulse. The switch of the assembly is only activated after two or more separate acceleration pulses are sensed and the piston reaches the end of the chamber opposite the reset position.
Interstitial integrals in the multiple-scattering model
Swanson, J.R.; Dill, D.
1982-08-15
We present an efficient method for the evaluation of integrals involving multiple-scattering wave functions over the interstitial region. Transformation of the multicenter interstitial wave functions to a single center representation followed by a geometric projection reduces the integrals to products of analytic angular integrals and numerical radial integrals. The projection function, which has the value 1 in the interstitial region and 0 elsewhere, has a closed-form partial-wave expansion. The method is tested by comparing its results with exact normalization and dipole integrals; the differences are 2% at worst and typically less than 1%. By providing an efficient means of calculating Coulomb integrals, the method allows treatment of electron correlations using a multiple scattering basis set.
Applying Quadrature Rules with Multiple Nodes to Solving Integral Equations
Hashemiparast, S. M.; Avazpour, L.
2008-09-01
There are many procedures for the numerical solution of Fredholm integral equations. The main idea in these procedures is accuracy of the solution. In this paper, we use Gaussian quadrature with multiple nodes to improve the solution of these integral equations. The application of this method is illustrated via some examples, the related tables are given at the end.
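A standard single-node Gauss-Legendre Nyström solver illustrates the general idea of quadrature-based solution of Fredholm equations of the second kind (the paper's quadrature rules with multiple nodes, which also use derivative values, are not reproduced here; the kernel and names below are our assumptions):

```python
# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL_X = [-0.9061798459386640, -0.5384693101056831, 0.0,
        0.5384693101056831, 0.9061798459386640]
GL_W = [0.2369268850561891, 0.4786286704993665, 0.5688888888888889,
        0.4786286704993665, 0.2369268850561891]

def solve_fredholm(kernel, f, a, b):
    """Nystrom method: collocate u(x) = f(x) + int_a^b K(x,t) u(t) dt
    at the quadrature nodes and solve the linear system (I - W*K) u = f."""
    n = len(GL_X)
    x = [(b - a) / 2 * xi + (a + b) / 2 for xi in GL_X]
    w = [(b - a) / 2 * wi for wi in GL_W]
    A = [[(1.0 if i == j else 0.0) - w[j] * kernel(x[i], x[j])
          for j in range(n)] for i in range(n)]
    rhs = [f(xi) for xi in x]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            rhs[r] -= m * rhs[col]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * u[c] for c in range(r + 1, n))
        u[r] = (rhs[r] - s) / A[r][r]
    return x, u

# u(x) = 2x/3 + int_0^1 (x*t) u(t) dt has the exact solution u(x) = x.
xs, us = solve_fredholm(lambda x, t: x * t, lambda x: 2.0 * x / 3.0, 0.0, 1.0)
```

Because the 5-point rule integrates the degree-2 integrand of this example exactly, the discrete solution matches u(x) = x at the nodes to machine precision; richer quadrature rules, such as those with multiple nodes, improve the accuracy attainable for less benign kernels.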
Improving Inferences from Multiple Methods.
ERIC Educational Resources Information Center
Shotland, R. Lance; Mark, Melvin M.
1987-01-01
Multiple evaluation methods (MEMs) can cause an inferential challenge, although there are strategies to strengthen inferences. Practical and theoretical issues involved in the use by social scientists of MEMs, three potential problems in drawing inferences from MEMs, and short- and long-term strategies for alleviating these problems are outlined.…
Vertically Integrated Multiple Nanowire Field Effect Transistor.
Lee, Byung-Hyun; Kang, Min-Ho; Ahn, Dae-Chul; Park, Jun-Young; Bang, Tewook; Jeon, Seung-Bae; Hur, Jae; Lee, Dongil; Choi, Yang-Kyu
2015-12-01
A vertically integrated multiple channel-based field-effect transistor (FET) with the highest number of nanowires yet reported is demonstrated on a bulk silicon substrate without the use of wet etching. The driving current is increased 5-fold due to the inherent vertically stacked five-level nanowires, showing the feasibility of high-performance transistors based on three-dimensional integration. The developed fabrication process, which is simple and reproducible, is used to create multiple stiction-free and uniformly sized nanowires with the aid of the one-route all-dry etching process (ORADEP). Furthermore, the proposed FET is revamped to create nonvolatile memory with the adoption of a charge trapping layer for enhanced practicality. Thus, this research suggests an ultimate design for end-of-the-roadmap devices to overcome the limits of scaling. PMID:26544156
ERIC Educational Resources Information Center
Dadelo, Stanislav; Turskis, Zenonas; Zavadskas, Edmundas Kazimieras; Kacerauskas, Tomas; Dadeliene, Ruta
2016-01-01
To maximize the effectiveness of a decision, it is necessary to support decision-making with integrated methods. It can be assumed that subjective evaluation (considering only absolute values) is only remotely connected with the evaluation of real processes. Therefore, relying solely on these values in process management decision-making would be a…
Integrative prescreening in analysis of multiple cancer genomic studies
2012-01-01
Background In high throughput cancer genomic studies, results from the analysis of single datasets often suffer from a lack of reproducibility because of small sample sizes. Integrative analysis can effectively pool and analyze multiple datasets and provides a cost effective way to improve reproducibility. In integrative analysis, simultaneously analyzing all genes profiled may incur high computational cost. A computationally affordable remedy is prescreening, which fits marginal models, can be conducted in a parallel manner, and has low computational cost. Results An integrative prescreening approach is developed for the analysis of multiple cancer genomic datasets. Simulation shows that the proposed integrative prescreening has better performance than alternatives, particularly including prescreening with individual datasets, an intensity approach and meta-analysis. We also analyze multiple microarray gene profiling studies on liver and pancreatic cancers using the proposed approach. Conclusions The proposed integrative prescreening provides an effective way to reduce the dimensionality in cancer genomic studies. It can be coupled with existing analysis methods to identify cancer markers. PMID:22799431
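The marginal-model prescreening described above can be sketched in a few lines. This is an illustrative reduction (the scoring rule, summing squared per-dataset marginal correlations, is our assumption, not the paper's exact estimator; the function and variable names are ours):

```python
import math

def marginal_corr(xs, ys):
    # Pearson correlation of one gene's expression with the outcome.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def integrative_prescreen(datasets, top_k):
    """datasets: list of (expr, outcome), where expr[gene] is a list of
    per-sample values. Fits a marginal model per gene per dataset and
    pools the evidence by summing squared marginal correlations."""
    genes = datasets[0][0].keys()
    score = {g: sum(marginal_corr(expr[g], y) ** 2 for expr, y in datasets)
             for g in genes}
    return sorted(genes, key=score.get, reverse=True)[:top_k]
```

Because each gene is scored independently, the marginal fits can be run in parallel, which is the computational point the abstract makes.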
Multiple network interface core apparatus and method
Underwood, Keith D.; Hemmert, Karl Scott
2011-04-26
A network interface controller and network interface control method comprising providing a single integrated circuit as a network interface controller and employing a plurality of network interface cores on the single integrated circuit.
Multiple protocol fluorometer and method
Kolber, Zbigniew S.; Falkowski, Paul G.
2000-09-19
A multiple protocol fluorometer measures photosynthetic parameters of phytoplankton and higher plants using actively stimulated fluorescence protocols. The measured parameters include spectrally-resolved functional and optical absorption cross sections of PSII, extent of energy transfer between reaction centers of PSII, F.sub.0 (minimal), F.sub.m (maximal) and F.sub.v (variable) components of PSII fluorescence, photochemical and non-photochemical quenching, size of the plastoquinone (PQ) pool, and the kinetics of electron transport between Q.sub.a and PQ pool and between PQ pool and PSI. The multiple protocol fluorometer, in one embodiment, is equipped with an excitation source having a controlled spectral output range between 420 nm and 555 nm and capable of generating flashlets having a duration of 0.125-32 .mu.s, an interval between 0.5 .mu.s and 2 seconds, and peak optical power of up to 2 W/cm.sup.2. The excitation source is also capable of generating, simultaneous with the flashlets, a controlled continuous, background illumination.
Integrating Multiple Intelligences in EFL/ESL Classrooms
ERIC Educational Resources Information Center
Bas, Gokhan
2008-01-01
This article deals with the integration of the theory of Multiple Intelligences in EFL/ESL classrooms. In this study, after the theory of multiple intelligences was presented shortly, the integration of this theory into English classrooms. Intelligence types in MI Theory were discussed and some possible application ways of these intelligence types…
Building a cognitive map by assembling multiple path integration systems.
Wang, Ranxiao Frances
2016-06-01
Path integration and cognitive mapping are two of the most important mechanisms for navigation. Path integration is a primitive navigation system which computes a homing vector based on an animal's self-motion estimation, while a cognitive map is an advanced spatial representation containing richer spatial information about the environment that is persistent and can be used to guide flexible navigation to multiple locations. Most theories of navigation conceptualize them as two distinct, independent mechanisms, although the path integration system may provide useful information for the integration of cognitive maps. This paper demonstrates a fundamentally different scenario, where a cognitive map is constructed in three simple steps by assembling multiple path integrators and extending their basic features. The fact that a collection of path integration systems can be turned into a cognitive map suggests the possibility that cognitive maps may have evolved directly from the path integration system.
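The primitive system described above, a homing vector accumulated from self-motion, can be sketched minimally (the class and method names are ours, and the paper's three assembly steps are not reproduced here, only the single-integrator building block):

```python
import math

class PathIntegrator:
    """Accumulates self-motion estimates into a homing vector
    (a minimal sketch of one path integration system)."""
    def __init__(self):
        self.x = 0.0
        self.y = 0.0

    def move(self, heading_rad, distance):
        # Update the estimated displacement from one step of self-motion.
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def homing_vector(self):
        # Vector pointing back to the start point, plus its length.
        return (-self.x, -self.y, math.hypot(self.x, self.y))
```

For example, after moving 3 units east and 4 units north, the homing vector points (-3, -4) with length 5.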
Integral Methodological Pluralism in Science Education Research: Valuing Multiple Perspectives
ERIC Educational Resources Information Center
Davis, Nancy T.; Callihan, Laurie P.
2013-01-01
This article examines the multiple methodologies used in educational research and proposes a model that includes all of them as contributing to understanding educational contexts and research from multiple perspectives. The model, based on integral theory (Wilber in a theory of everything. Shambhala, Boston, 2000) values all forms of research as…
Perturbative Methods in Path Integration
NASA Astrophysics Data System (ADS)
Johnson-Freyd, Theodore Paul
This dissertation addresses a number of related questions concerning perturbative "path" integrals. Perturbative methods are one of the few successful ways physicists have worked with (or even defined) these infinite-dimensional integrals, and it is important for mathematicians to check that they are correct. Chapter 0 provides a detailed introduction. We take a classical approach to path integrals in Chapter 1. Following standard arguments, we posit a Feynman-diagrammatic description of the asymptotics of the time-evolution operator for the quantum mechanics of a charged particle moving nonrelativistically through a curved manifold under the influence of an external electromagnetic field. We check that our sum of Feynman diagrams has all desired properties: it is coordinate-independent and well-defined without ultraviolet divergences, it satisfies the correct composition law, and it satisfies Schrödinger's equation thought of as a boundary-value problem in PDE. Path integrals in quantum mechanics and elsewhere in quantum field theory are almost always of the shape ∫ f e^s for some functions f (the "observable") and s (the "action"). In Chapter 2 we step back to analyze integrals of this type more generally. Integration by parts provides algebraic relations between the values of ∫ (-) e^s for different inputs, which can be packaged into a Batalin--Vilkovisky-type chain complex. Using some simple homological perturbation theory, we study the version of this complex that arises when f and s are taken to be polynomial functions, and power series are banished. We find that in such cases, the entire scheme-theoretic critical locus (complex points included) of s plays an important role, and that one can uniformly (but noncanonically) integrate out in a purely algebraic way the contributions to the integral from all "higher modes," reducing ∫ f e^s to an integral over the critical locus. This may help explain the presence of analytic continuation in questions like the
The Effects of Tasks on Integrating Information from Multiple Documents
ERIC Educational Resources Information Center
Cerdan, Raquel; Vidal-Abarca, Eduardo
2008-01-01
The authors examine 2 issues: (a) how students integrate information from multiple scientific documents to describe and explain a physical phenomenon that represents a subset of the information in the documents; and (b) the role of 2 sorts of tasks to achieve this type of integration, either writing an essay on a question requiring integration…
Lutken, Carol; Macelloni, Leonardo; D'Emidio, Marco; Dunbar, John; Higley, Paul
2015-01-31
detect short-term changes within the hydrates system, identify relationships/impacts of local oceanographic parameters on the hydrates system, and improve our understanding of how seafloor instability is affected by hydrates-driven changes. A 2009 DCR survey of MC118 demonstrated that we could image resistivity anomalies to a depth of 75m below the seafloor in water depths of 1km. We reconfigured this system to operate autonomously on the seafloor in a pre-programmed mode, for periods of months. We designed and built a novel seafloor lander and deployment capability that would allow us to investigate the seafloor at potential deployment sites and deploy instruments only when conditions met our criteria. This lander held the DCR system, controlling computers, and battery power supply, as well as instruments to record oceanographic parameters. During the first of two cruises to the study site, we conducted resistivity surveying, selected a monitoring site, and deployed the instrumented lander and DCR, centered on what appeared to be the most active locations within the site, programmed to collect a DCR profile, weekly. After a 4.5-month residence on the seafloor, the team recovered all equipment. Unfortunately, several equipment failures occurred prior to recovery of the instrument packages. Prior to the failures, however, two resistivity profiles were collected together with oceanographic data. Results show, unequivocally, that significant changes can occur in both hydrate volume and distribution during time periods as brief as one week. Occurrences appear to be controlled by both deep and near-surface structure. Results have been integrated with seismic data from the area and show correspondence in space of hydrate and structures, including faults and gas chimneys.
A multiple index integrating different levels of organization.
Cortes, Rui; Hughes, Samantha; Coimbra, Ana; Monteiro, Sandra; Pereira, Vítor; Lopes, Marisa; Pereira, Sandra; Pinto, Ana; Sampaio, Ana; Santos, Cátia; Carrola, João; de Jesus, Joaquim; Varandas, Simone
2016-10-01
Many methods in freshwater biomonitoring tend to be restricted to a few levels of biological organization, limiting the potential spectrum of measurable cause-effect responses to different anthropogenic impacts. We combined distinct organisational levels, covering biological biomarkers (histopathological and biochemical reactions in liver and fish gills), community based bioindicators (fish guilds, invertebrate metrics/traits and chironomid pupal exuviae) and ecosystem functional indicators (decomposition rates) to assess ecological status at designated Water Framework Directive monitoring sites, covering a gradient of human impact across several rivers in northern Portugal. We used Random Forest to rank the variables that contributed most significantly to successfully predicting the different classes of ecological status and also to provide specific cut levels to discriminate each WFD class based on reference condition. A total of 59 Biological Quality Elements and functional indicators were determined using this procedure and subsequently applied to develop the integrated Multiple Ecological Level Index (MELI Index), a potentially powerful bioassessment tool. PMID:27344015
Spatial Interpolation Methods for Integrating Newton's Equation
NASA Astrophysics Data System (ADS)
Gueron, Shay; Shalloway, David
1996-11-01
Numerical integration of Newton's equation in multiple dimensions plays an important role in many fields such as biochemistry and astrophysics. Currently, some of the most important practical questions in these areas cannot be addressed because the large dimensionality of the variable space and complexity of the required force evaluations precludes integration over sufficiently large time intervals. Improving the efficiency of algorithms for this purpose is therefore of great importance. Standard numerical integration schemes (e.g., leap-frog and Runge-Kutta) ignore the special structure of Newton's equation that, for conservative systems, constrains the force to be the gradient of a scalar potential. We propose a new class of "spatial interpolation" (SI) integrators that exploit this property by interpolating the force in space rather than (as with standard methods) in time. Since the force is usually a smoother function of space than of time, this can improve algorithmic efficiency and accuracy. In particular, an SI integrator solves the one- and two-dimensional harmonic oscillators exactly with one force evaluation per step. A simple type of time-reversible SI algorithm is described and tested. Significantly improved performance is achieved on one- and multi-dimensional benchmark problems.
A Fuzzy Logic Framework for Integrating Multiple Learned Models
Bobi Kai Den Hartog
1999-03-01
The Artificial Intelligence field of Integrating Multiple Learned Models (IMLM) explores ways to combine results from sets of trained programs. Aroclor Interpretation is an ill-conditioned problem in which trained programs must operate in scenarios outside their training ranges because it is intractable to train them completely. Consequently, they fail in ways related to the scenarios. We developed a general-purpose IMLM solution, the Combiner, and applied it to Aroclor Interpretation. The Combiner's first step, Scenario Identification (S1), learns rules from very sparse, synthetic training data consisting of results from a suite of trained programs called Methods. S1 produces fuzzy belief weights for each scenario by approximately matching the rules. The Combiner's second step, Aroclor Presence Detection (AP), classifies each of three Aroclors as present or absent in a sample. The third step, Aroclor Quantification (AQ), produces quantitative values for the concentration of each Aroclor in a sample. AP and AQ use automatically learned empirical biases for each of the Methods in each scenario. Through fuzzy logic, AP and AQ combine scenario weights, automatically learned biases for each of the Methods in each scenario, and Methods' results to determine results for a sample.
Integrating Multiple Criteria Evaluation and GIS in Ecotourism: a Review
NASA Astrophysics Data System (ADS)
Mohd, Z. H.; Ujang, U.
2016-09-01
The concept of 'eco-tourism' has been increasingly heard in recent decades. Ecotourism is environmentally responsible travel intended to appreciate nature experiences and cultures. Ecotourism should have a low impact on the environment and must contribute to the prosperity of local residents. This article reviews the use of Multiple Criteria Evaluation (MCE) and Geographic Information Systems (GIS) in ecotourism. Multiple criteria evaluation is mostly used for land suitability analysis, or to fulfill specific objectives based on the various attributes that exist in the selected area. To support the process of environmental decision making, GIS is applied to display and analyse the data through the Analytic Hierarchy Process (AHP). Integration between MCE and GIS tools is important for determining the relative weights of the criteria objectively. With the MCE method, the conflict between recreation and conservation can be resolved so as to minimize environmental and human impact. Most studies show that GIS-based AHP, as a multi-criteria evaluation, is strong and effective in tourism planning and can aid the development of the ecotourism industry.
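The AHP weighting step mentioned above can be sketched as follows. This is the generic power-iteration computation of criterion weights from a reciprocal pairwise-comparison matrix, not code from any reviewed study; the function name is ours.

```python
def ahp_weights(pairwise, iters=100):
    """Derive criterion weights from an AHP reciprocal pairwise-comparison
    matrix by power iteration toward the principal eigenvector."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        # Multiply the matrix by the current weight vector, then normalize.
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w
```

For a consistent matrix where criterion 1 is judged three times as important as criterion 2, i.e. [[1, 3], [1/3, 1]], the weights come out as 0.75 and 0.25.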
Lamp method and apparatus using multiple reflections
MacLennan, D.A.; Turner, B.; Kipling, K.
1999-05-11
A method wherein the light in a sulfur or selenium lamp is reflected through the fill a multiplicity of times to convert ultraviolet radiation to visible is disclosed. A light emitting device comprised of an electrodeless envelope which bears a light reflecting covering around a first portion which does not crack due to differential thermal expansion and which has a second portion which comprises a light transmissive aperture. 20 figs.
Ergen, Kayra; Kentel, Elcin
2016-01-15
Stream gauges measure the temporal variation of water quantity; thus they are vital in managing water resources. The stream gauge network in Turkey includes a limited number of gauges and often streamflow estimates need to be generated at ungauged locations where reservoirs, small hydropower plants, weirs, etc. are planned. Prediction of streamflows at ungauged locations generally relies on donor gauges where flow is assumed to be similar to that at the ungauged location. Generally, donor stream gauges are selected based on geographical proximity. However, closer stream gauges are not always the most-correlated ones. The Map Correlation Method (MCM) enables development of a map that shows the spatial distribution of the correlation between a selected stream gauge and any other location within the study region. In this study, a new approach which combines MCM with the multiple-source site drainage-area ratio (DAR) method is used to estimate daily streamflows at ungauged catchments in the Western Black Sea Region. Daily streamflows predicted by the combined three-source sites DAR with MCM approach give higher Nash-Sutcliffe Efficiency (NSE) values than those predicted using the nearest stream gauge as the donor stream gauge, for most of the trial cases. Hydrographs and flow duration curves predicted using this approach are usually in better agreement with the observed hydrographs and flow duration curves than those predicted using the nearest catchment. PMID:26520038
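A hedged sketch of the combined idea, scaling each donor gauge's flow by the drainage-area ratio and blending multiple donors with correlation-based weights from MCM, is given below. The weighted-average combination rule is our assumption for illustration; the study's exact formulation may differ.

```python
def dar_multi_source(area_ungauged, donors):
    """Multiple-source drainage-area ratio (DAR) estimate.

    donors: list of (flow, drainage_area, weight) tuples, one per donor
    stream gauge; the weight would come from MCM correlation maps.
    Each donor's flow is scaled by the drainage-area ratio, and the
    scaled estimates are combined as a weighted average."""
    wsum = sum(w for _, _, w in donors)
    return sum(w * flow * (area_ungauged / area)
               for flow, area, w in donors) / wsum
```

With a single donor this reduces to the classical DAR method: a 50 km² ungauged catchment downstream of a 100 km² donor gauge reading 10 m³/s gets an estimate of 5 m³/s.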
Multiple time scale methods in tokamak magnetohydrodynamics
Jardin, S.C.
1984-01-01
Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/2μ₀, which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.
An Alternative Method for Multiplication of Rhotrices. Classroom Notes
ERIC Educational Resources Information Center
Sani, B.
2004-01-01
In this article, an alternative multiplication method for rhotrices is proposed. The method establishes some relationships between rhotrices and matrices: it has a direct relationship with matrix multiplication, and so rhotrices under this multiplication procedure…
Multiple frequency method for operating electrochemical sensors
Martin, Louis P.
2012-05-15
A multiple frequency method for the operation of a sensor to measure a parameter of interest using calibration information, including the steps of: exciting the sensor at a first frequency, providing a first sensor response; exciting the sensor at a second frequency, providing a second sensor response; using the second sensor response at the second frequency and the calibration information to produce a calculated concentration of the interfering parameters; and using the first sensor response at the first frequency, the calculated concentration of the interfering parameters, and the calibration information to measure the parameter of interest.
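The claimed steps can be illustrated with a toy linear calibration model. The patent does not specify functional forms; the linear model and the coefficient names (k2, k1i, k1p) are our assumptions.

```python
def measure(resp_f1, resp_f2, cal):
    """Two-frequency measurement sketch with an assumed linear calibration.

    cal holds: k2  - interferent sensitivity at the second frequency,
               k1i - interferent cross-sensitivity at the first frequency,
               k1p - parameter sensitivity at the first frequency."""
    # Step 1: the second-frequency response isolates the interferent.
    interferent = resp_f2 / cal["k2"]
    # Step 2: subtract the interferent's contribution at the first frequency.
    corrected = resp_f1 - cal["k1i"] * interferent
    # Step 3: convert the corrected response to the parameter of interest.
    return corrected / cal["k1p"]
```

For example, with k2 = 2, k1i = 0.5, k1p = 4, a true interferent level of 3 and true parameter of 2 give responses (9.5, 6.0), from which the parameter is recovered exactly.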
High integrity carrier phase navigation using multiple civil GPS signals
NASA Astrophysics Data System (ADS)
Jung, Jaewoo
2000-11-01
A navigation system should guide users to their destinations accurately and reliably. Among the many available navigation aids, the Global Positioning System stands out due to its unique capabilities. It is a satellite-based navigation system which covers the entire Earth with horizontal accuracy of 20 meters for stand-alone civil users. Today, the GPS provides only one civil signal, but two more signals will be available in the near future. GPS will provide a second signal at 1227.60 MHz (L2) and a third signal at 1176.45 MHz (Lc), in addition to the current signal at 1575.42 MHz (L1). The focus of this thesis is exploring the possibility of using beat frequencies of these signals to provide navigation aid to users with high accuracy and integrity. To achieve high accuracy, carrier phase differential GPS is used. The integer ambiguity is resolved using the Cascade Integer Resolution (CIR), which is defined in this thesis. The CIR is an instantaneous, geometry-free integer resolution method utilizing beat frequencies of GPS signals. To ensure high integrity, the probability of incorrect integer ambiguity resolution using the CIR is analyzed. The CIR can immediately resolve the Lc integer ambiguity up to 2.4 km from the reference receiver, the Widelane (L1-L2) integer ambiguity up to 22 km, and the Extra Widelane (L2-Lc) integer ambiguity from there on, with probability of incorrect integer resolution of 10-4. The optimal use of algebraic combinations of multiple GPS signals is also investigated in this thesis. Finally, the gradient of residual differential ionospheric error is estimated to increase the performance of the CIR.
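The staged ambiguity resolution above works because the beat (widelane) combinations have much longer effective wavelengths than the carriers themselves. A small sketch computing them from the frequencies given in the abstract (the abstract's "Lc" at 1176.45 MHz corresponds to what is now called L5):

```python
C = 299_792_458.0  # speed of light, m/s
F_L1, F_L2, F_LC = 1575.42e6, 1227.60e6, 1176.45e6  # carrier frequencies, Hz

def wavelength(f):
    # Carrier wavelength in meters.
    return C / f

def widelane_wavelength(fa, fb):
    # Beat wavelength of two carriers: c / |fa - fb|.
    return C / abs(fa - fb)
```

This gives roughly 0.19 m for the L1 carrier, 0.862 m for the Widelane (L1-L2) and 5.86 m for the Extra Widelane (L2-Lc), which is why each successive combination tolerates a larger differential error before the integer is resolved incorrectly.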
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
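A minimal pure-Python sketch of one such smoothing-based sensitivity measure is shown below: LOESS (locally weighted linear regression with tricube weights) fitted per input, scored by 1 - SSE/SST. This is a simplified single-predictor illustration, not the stepwise multi-predictor procedure of the paper; all names are ours.

```python
def loess_fit(xs, ys, frac=0.5):
    """For each point, fit a weighted least-squares line over its nearest
    neighbours with tricube weights; return the fitted values."""
    n = len(xs)
    k = max(2, int(frac * n))
    fitted = []
    for x0 in xs:
        d = sorted(abs(x - x0) for x in xs)[k - 1] or 1e-12
        w = [(1 - min(abs(x - x0) / d, 1.0) ** 3) ** 3 for x in xs]
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        denom = sw * swxx - swx ** 2
        b = (sw * swxy - swx * swy) / denom if denom else 0.0
        a = (swy - b * swx) / sw
        fitted.append(a + b * x0)
    return fitted

def loess_sensitivity(xs, ys, frac=0.5):
    """1 - SSE/SST of the LOESS fit: the share of output variance this
    input explains, a simple nonparametric sensitivity measure."""
    fitted = loess_fit(xs, ys, frac)
    my = sum(ys) / len(ys)
    sst = sum((y - my) ** 2 for y in ys)
    sse = sum((y - f) ** 2 for y, f in zip(ys, fitted))
    return 1 - sse / sst
```

On a symmetric quadratic relationship, for example, linear correlation is exactly zero while the LOESS score correctly reports a strong input-output relationship, which is the advantage over linear or rank regression the abstract points to.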
Integration of multiple sensor fusion in controller design.
Abdelrahman, Mohamed; Kandasamy, Parameshwaran
2003-04-01
The main focus of this research is to reduce the risk of a catastrophic response of a feedback control system when some of the feedback data from the system sensors are not reliable, while maintaining a reasonable performance of the control system. In this paper a methodology for integrating multiple sensor fusion into the controller design is presented. The multiple sensor fusion algorithm produces, in addition to the estimate of the measurand, a parameter that measures the confidence in the estimated value. This confidence is integrated as a parameter into the controller to produce fast system response when the confidence in the estimate is high, and a slow response when the confidence in the estimate is low. Conditions for the stability of the system with the developed controller are discussed. This methodology is demonstrated on a cupola furnace model. The simulations illustrate the advantages of the new methodology.
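The confidence-blended controller idea can be sketched as follows. Both the toy median-agreement fusion rule and the linear gain blend are our assumptions for illustration; the paper's fusion algorithm and stability conditions are not reproduced here.

```python
def fused_estimate(readings):
    """Toy fusion: average the readings that agree with the median;
    confidence is the fraction of sensors retained (an assumed scheme)."""
    srt = sorted(readings)
    median = srt[len(srt) // 2]
    good = [r for r in readings if abs(r - median) < 0.5]
    return sum(good) / len(good), len(good) / len(readings)

def control_gain(confidence, k_fast=2.0, k_slow=0.2):
    # High confidence -> fast response; low confidence -> cautious response.
    return k_slow + (k_fast - k_slow) * confidence
```

With readings [1.0, 1.1, 0.9, 5.0], the outlier 5.0 is rejected, the estimate is 1.0, confidence is 0.75, and the controller gain is throttled to 1.55 instead of the full 2.0.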
Downsizing of an integrated tracking unit for multiple applications
NASA Astrophysics Data System (ADS)
Steinway, William J.; Thomas, James E.; Nicoloff, Michael J.; Patz, Mark D.
1997-02-01
This paper describes the specifications, capabilities, and multiple applications of the integrated tracking unit (ITU). The original ITU was developed by Coleman Research Corporation (CRC) for several federal law enforcement agencies over a four-year period, and it has been used for friendly and unfriendly vehicle and person position tracking. The ITU has been down-sized to reduce its physical size, weight, and power requirements with respect to the first-generation unit. The ITU consists of a global positioning system (GPS) receiver for precise position location and a cellular phone to transmit voice and data to a PC base station with a modem interface. This paper describes the down-sizing of the unit introduced in CRC's 'An Integrated Tracking Unit for Multiple Applications' paper presented at the 1995 Counterdrug Technology Assessment Center's symposium in Nashua, NH, and provides a description of the ITU and tested applications.
Deconstructing Calculation Methods, Part 3: Multiplication
ERIC Educational Resources Information Center
Thompson, Ian
2008-01-01
In this third of a series of four articles, the author deconstructs the primary national strategy's approach to written multiplication. The approach to multiplication, as set out on pages 12 to 15 of the primary national strategy's "Guidance paper" "Calculation" (DfES, 2007), is divided into six stages: (1) mental multiplication using…
Case studies: Soil mapping using multiple methods
NASA Astrophysics Data System (ADS)
Petersen, Hauke; Wunderlich, Tina; Hagrey, Said A. Al; Rabbel, Wolfgang; Stümpel, Harald
2010-05-01
Soil is a non-renewable resource with fundamental functions like filtering (e.g. water), storing (e.g. carbon), transforming (e.g. nutrients) and buffering (e.g. contamination). Degradation of soils is by now a well-known fact, not only to scientists: decision makers in politics have also accepted it as a serious problem for several environmental aspects. National and international authorities have already worked out preservation and restoration strategies for soil degradation, though how to put these strategies into practice is still a matter of active research. Common to all strategies, however, is that a description of soil state and dynamics is required as a base step. This includes collecting information from soils with methods ranging from direct soil sampling to remote applications. At an intermediate scale, mobile geophysical methods are applied, with the advantage of fast working progress but the disadvantage of site-specific calibration and interpretation issues. In the framework of the iSOIL project we present here some case studies of soil mapping performed using multiple geophysical methods. We will present examples of combined field measurements with EMI, GPR, magnetic and gamma-spectrometric techniques carried out with the mobile multi-sensor system of Kiel University (GER). Depending on soil type and actual environmental conditions, different methods show a different quality of information. By applying diverse methods we want to figure out which methods, or combination of methods, will give the most reliable information concerning soil state and properties. To investigate the influence of varying material we performed mapping campaigns on field sites with sandy, loamy and loessy soils. Classification of measured or derived attributes shows not only the lateral variability but also gives hints to a variation in the vertical distribution of soil material. For all soils, of course, soil water content can be a critical factor concerning a successful
Method of descent for integrable lattices
NASA Astrophysics Data System (ADS)
Bogoyavlensky, Oleg
2009-05-01
A method of descent for constructing integrable Hamiltonian systems is introduced. The derived periodic and nonperiodic lattices possess Lax representations with spectral parameter and have plenty of first integrals. Examples of Liouville-integrable four-dimensional Hamiltonian Lotka-Volterra systems are presented.
NEXT Propellant Management System Integration With Multiple Ion Thrusters
NASA Technical Reports Server (NTRS)
Sovey, James S.; Soulas, George C.; Herman, Daniel A.
2011-01-01
As a critical part of the NEXT test validation process, a multiple-string integration test was performed on the NEXT propellant management system (PMS) and ion thrusters. The objectives of this test were to verify that the PMS is capable of providing stable flow control to multiple thrusters operating over the NEXT system throttling range and to demonstrate to potential users that the NEXT PMS is ready for transition to flight. A test plan was developed for the sub-system integration test for verification of PMS and thruster system performance and functionality requirements. Propellant management system calibrations were checked during the single and multi-thruster testing. The low pressure assembly total flow rates to the thruster(s) were within 1.4 percent of the calibrated support equipment flow rates. The inlet pressures to the main, cathode, and neutralizer ports of Thruster PM1R were measured as the PMS operated in 1-thruster, 2-thruster, and 3-thruster configurations. It was found that the inlet pressures to Thruster PM1R for 2-thruster and 3-thruster operation as well as single thruster operation with the PMS compare very favorably, indicating that flow rates to Thruster PM1R were similar in all cases. Characterizations of discharge losses, accelerator grid current, and neutralizer performance were performed as more operating thrusters were added to the PMS. There were no variations in these parameters as thrusters were throttled and single and multiple thruster operations were conducted. The PMS power consumption was measured at a fixed voltage to the DCIU and a fixed thermal throttle temperature of 75 °C. The total power consumed by the PMS was 10.0, 17.9, and 25.2 W, respectively, for single, 2-thruster, and 3-thruster operation. These sub-system integration tests of the PMS, the DCIU Simulator, and multiple thrusters addressed, in part, the NEXT PMS and propulsion system performance and functionality requirements.
Cao, D-S; Xiao, N; Li, Y-J; Zeng, W-B; Liang, Y-Z; Lu, A-P; Xu, Q-S; Chen, AF
2015-01-01
Identifying potential adverse drug reactions (ADRs) is critically important for drug discovery and public health. Here we developed a multiple evidence fusion (MEF) method for the large-scale prediction of drug ADRs that can handle both approved drugs and novel molecules. MEF is based on the similarity reference by collaborative filtering, and integrates multiple similarity measures from various data types, taking advantage of the complementarity in the data. We used MEF to integrate drug-related and ADR-related data from multiple levels, including the network structural data formed by known drug–ADR relationships for predicting likely unknown ADRs. On cross-validation, it obtains high sensitivity and specificity, substantially outperforming existing methods that utilize single or a few data types. We validated our prediction by their overlap with drug–ADR associations that are known in databases. The proposed computational method could be used for complementary hypothesis generation and rapid analysis of potential drug–ADR interactions. PMID:26451329
Decreasing Multicollinearity: A Method for Models with Multiplicative Functions.
ERIC Educational Resources Information Center
Smith, Kent W.; Sasaki, M. S.
1979-01-01
A method is proposed for overcoming the problem of multicollinearity in multiple regression equations where multiplicative independent terms are entered. The method is not a ridge regression solution. (JKS)
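The abstract does not detail the proposed (non-ridge) procedure, but a standard remedy in the same spirit is mean-centering the predictors before forming the product term, which sharply reduces the correlation between a variable and its multiplicative term. The simulation below is an illustration of that effect under assumed data, not the paper's method.

```python
# Mean-centering before forming the product term x*z: a standard non-ridge
# remedy for multicollinearity in multiplicative regression models.
# (Illustration only; the abstract does not specify the paper's exact method.)
import random

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

random.seed(0)
x = [random.gauss(5, 1) for _ in range(1000)]   # nonzero means inflate collinearity
z = [random.gauss(3, 1) for _ in range(1000)]
xz_raw = [a * b for a, b in zip(x, z)]
mx, mz = sum(x) / len(x), sum(z) / len(z)
xz_centered = [(a - mx) * (b - mz) for a, b in zip(x, z)]

r_raw = abs(corr(x, xz_raw))            # substantial: x nearly collinear with x*z
r_centered = abs(corr(x, xz_centered))  # near zero after centering
```

For independent x ~ N(5,1) and z ~ N(3,1), cov(x, xz) = Var(x)·E[z] is large, while cov(x, (x-μx)(z-μz)) is exactly zero in expectation, which is why centering defuses the collinearity.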
Early Gnathostome Phylogeny Revisited: Multiple Method Consensus
Qiao, Tuo; King, Benedict; Long, John A.; Ahlberg, Per E.; Zhu, Min
2016-01-01
A series of recent studies recovered consistent phylogenetic scenarios of jawed vertebrates, such as the paraphyly of placoderms with respect to crown gnathostomes, and antiarchs as the sister group of all other jawed vertebrates. However, some of the phylogenetic relationships within the group have remained controversial, such as the positions of Entelognathus, ptyctodontids, and the Guiyu-lineage that comprises Guiyu, Psarolepis and Achoania. The revision of the dataset in a recent study reveals a modified phylogenetic hypothesis, which shows that some of these phylogenetic conflicts were sourced from a few inadvertent miscodings. The interrelationships of early gnathostomes are addressed based on a combined new dataset with 103 taxa and 335 characters, which is the most comprehensive morphological dataset constructed to date. This dataset is investigated in a phylogenetic context using maximum parsimony (MP), Bayesian inference (BI) and maximum likelihood (ML) approaches in an attempt to explore the consensus and incongruence between the hypotheses of early gnathostome interrelationships recovered from different methods. Our findings consistently corroborate the paraphyly of placoderms, all ‘acanthodians’ as a paraphyletic stem group of chondrichthyans, Entelognathus as a stem gnathostome, and the Guiyu-lineage as stem sarcopterygians. The incongruence using different methods is less significant than the consensus, and mainly relates to the positions of the placoderm Wuttagoonaspis, the stem chondrichthyan Ramirosuarezia, and the stem osteichthyan Lophosteus—the taxa that are either poorly known or highly specialized in character complement. Given the different performance of each phylogenetic approach, our study provides an empirical case showing that multiple phylogenetic analyses of morphological data are mutually complementary rather than redundant. PMID:27649538
Integrated Methods: Applications in Quantum Chemistry
Irle, Stephen; Morokuma, Keiji
2004-03-31
The authors introduce quantum chemical methods applicable to extended molecular systems or parts of them, describe in short the theory behind integrated methods, and discuss their applications to the most recognizable areas of nanochemistry (fullerenes, nanotubes, and silica-based nanosystems).
Research in Mathematics Education: Multiple Methods for Multiple Uses
ERIC Educational Resources Information Center
Battista, Michael; Smith, Margaret S.; Boerst, Timothy; Sutton, John; Confrey, Jere; White, Dorothy; Knuth, Eric; Quander, Judith
2009-01-01
Recent federal education policies and reports have generated considerable debate about the meaning, methods, and goals of "scientific research" in mathematics education. Concentrating on the critical problem of determining which educational programs and practices reliably improve students' mathematics achievement, these policies and reports focus…
Multiple cue use and integration in pigeons (Columba livia).
Legge, Eric L G; Madan, Christopher R; Spetch, Marcia L; Ludvig, Elliot A
2016-05-01
Encoding multiple cues can improve the accuracy and reliability of navigation and goal localization. Problems may arise, however, if one cue is displaced and provides information which conflicts with other cues. Here we investigated how pigeons cope with cue conflict by training them to locate a goal relative to two landmarks and then varying the amount of conflict between the landmarks. When the amount of conflict was small, pigeons tended to integrate both cues in their search patterns. When the amount of conflict was large, however, pigeons used information from both cues independently. This context-dependent strategy for resolving spatial cue conflict agrees with Bayes optimal calculations for using information from multiple sources.
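The "Bayes optimal" benchmark the authors compare against is standard cue combination: for two independent Gaussian cues, the optimal estimate weights each cue by its inverse variance, and its variance is lower than either cue's alone. The toy numbers below are assumptions for illustration; the paper reports only behavioral comparisons.

```python
# Bayes-optimal combination of two independent Gaussian spatial cues:
# the integrated estimate weights each cue by its reliability (inverse variance).
# The landmark positions and variances here are illustrative assumptions.

def combine_cues(mu1, var1, mu2, var2):
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    w2 = 1 - w1
    mu = w1 * mu1 + w2 * mu2
    var = 1 / (1 / var1 + 1 / var2)  # combined estimate beats either cue alone
    return mu, var

# Small conflict: landmark A places the goal at 10 cm (reliable),
# landmark B at 14 cm (three times noisier).
mu, var = combine_cues(10.0, 1.0, 14.0, 3.0)
```

The integrated estimate (11.0 cm) sits between the cues but closer to the reliable one, matching the "integrate when conflict is small" behavior described in the abstract; under large conflict the independence assumption breaks down and averaging is no longer optimal.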
Chen, Jia-Jin; Wang, Jia-Yi; Li, Li-Chun; Lin, Jing; Yang, Kai; Ma, Zhi-Guo; Xu, Zong-Huan
2012-03-01
In this study, an index system for the integrated risk evaluation of multiple disasters on the Longyan production in Fujian Province was constructed, based on the analysis of the major environmental factors affecting the Longyan growth and yield, and from the viewpoints of potential hazard of disaster-causing factors, vulnerability of hazard-affected body, and disaster prevention and mitigation capability of Longyan growth regions in the Province. In addition, an integrated evaluation model of multiple disasters was established to evaluate the risks of the major agro-meteorological disasters affecting the Longyan yield, based on the yearly meteorological data, Longyan planting area and yield, and other socio-economic data in the Longyan growth region in Fujian, and by using the integral weight of risk indices determined by AHP and entropy weight coefficient methods. In the Province, the Longyan growth regions with light integrated risk of multiple disasters were distributed in the coastal counties (except Dongshan County) with low elevation south of Changle. The regions with severe and more severe integrated risk were mainly in Zhangping of Longyan, Dongshan, Pinghe, Nanjin, and Hua'an of Zhangzhou, Yongchun and Anxi of Quanzhou, north mountainous areas of Putian and Xianyou, Minqing, Minhou, Luoyuan, and mountainous areas of Fuzhou, and Fuan, Xiapu, and mountainous areas of Ninde. Among these, the regions with severe integrated risk were in Dongshan, Zhangping, and other mountainous areas with high altitudes, and the regions with moderate integrated risk were distributed in the other areas of the Province.
Integrated control system and method
Wang, Paul Sai Keat; Baldwin, Darryl; Kim, Myoungjin
2013-10-29
An integrated control system for use with an engine connected to a generator providing electrical power to a switchgear is disclosed. The engine receives gas produced by a gasifier. The control system includes an electronic controller associated with the gasifier, engine, generator, and switchgear. A gas flow sensor monitors a gas flow from the gasifier to the engine through an engine gas control valve and provides a gas flow signal to the electronic controller. A gas oversupply sensor monitors a gas oversupply from the gasifier and provides an oversupply signal indicative of gas not provided to the engine. A power output sensor monitors a power output of the switchgear and provides a power output signal. The electronic controller changes gas production of the gasifier and the power output rating of the switchgear based on the gas flow signal, the oversupply signal, and the power output signal.
Fast integral methods for integrated optical systems simulations: a review
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.
2015-09-01
Boundary integral equation methods (BIM) or simply integral methods (IM) in the context of optical design and simulation are rigorous electromagnetic methods solving Helmholtz or Maxwell equations on the boundary (surface or interface of the structures between two materials) for scattering or/and diffraction purposes. This work is mainly restricted to integral methods for diffracting structures such as gratings, kinoforms, diffractive optical elements (DOEs), micro Fresnel lenses, computer generated holograms (CGHs), holographic or digital phase holograms, periodic lithographic structures, and the like. In most cases all of the mentioned structures have dimensions of thousands of wavelengths in diameter. Therefore, the basic methods necessary for the numerical treatment are locally applied electromagnetic grating diffraction algorithms. Interestingly, integral methods belong to the first electromagnetic methods investigated for grating diffraction. The development started in the mid-1960s for gratings with infinite conductivity and it was mainly due to the good convergence of the integral methods especially for TM polarization. The first integral equation methods (IEM) for finite conductivity were the methods by D. Maystre at Fresnel Institute in Marseille: in 1972/74 for dielectric and metallic gratings, and later for multiprofile and other types of gratings and for photonic crystals. Other methods such as differential and modal methods suffered from unstable behaviour and slow convergence compared to BIMs for metallic gratings in TM polarization from the beginning to the mid-1990s. The first BIM for gratings using a parametrization of the profile was developed at Karl-Weierstrass Institute in Berlin under a contract with Carl Zeiss Jena works in 1984-1986 by A. Pomp, J. Creutziger, and the author. Due to the parametrization, this method was able to deal with any kind of surface grating from the beginning: whether profiles with edges, overhanging non
Integrated presentation of ecological risk from multiple stressors
Goussen, Benoit; Price, Oliver R.; Rendal, Cecilie; Ashauer, Roman
2016-01-01
Current environmental risk assessments (ERA) do not account explicitly for ecological factors (e.g. species composition, temperature or food availability) and multiple stressors. Assessing mixtures of chemical and ecological stressors is needed as well as accounting for variability in environmental conditions and uncertainty of data and models. Here we propose a novel probabilistic ERA framework to overcome these limitations, which focusses on visualising assessment outcomes by constructing and interpreting prevalence plots as a quantitative prediction of risk. Key components include environmental scenarios that integrate exposure and ecology, and ecological modelling of relevant endpoints to assess the effect of a combination of stressors. Our illustrative results demonstrate the importance of regional differences in environmental conditions and the confounding interactions of stressors. Using this framework and prevalence plots provides a risk-based approach that combines risk assessment and risk management in a meaningful way and presents a truly mechanistic alternative to the threshold approach. Even whilst research continues to improve the underlying models and data, regulators and decision makers can already use the framework and prevalence plots. The integration of multiple stressors, environmental conditions and variability makes ERA more relevant and realistic. PMID:27782171
Efficient orbit integration by manifold correction methods.
Fukushima, Toshio
2005-12-01
Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The finally evolved form of the manifold correction methods is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
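The core "correct back onto the manifold" idea can be shown with a deliberately crude Kepler integrator: after each step, rescale the velocity so the orbital energy E = v²/2 − μ/r regains its initial value. This is only a sketch of the principle under an assumed unit-mass circular orbit; the paper's orbital longitude methods are far more elaborate.

```python
# Minimal manifold-correction sketch: after each crude Euler step of a Kepler
# orbit, rescale the speed so the orbital energy E = v^2/2 - mu/r is restored
# to its initial value E0. (Illustrates the correction idea only, not the
# paper's orbital longitude methods; setup values are assumptions.)
import math

mu = 1.0
x, y = 1.0, 0.0
vx, vy = 0.0, 1.0                       # unit circular orbit, E0 = -0.5
E0 = 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)

def euler_step(x, y, vx, vy, h):
    """One (inaccurate) explicit Euler step of the two-body problem."""
    r3 = math.hypot(x, y) ** 3
    return x + h * vx, y + h * vy, vx - h * mu * x / r3, vy - h * mu * y / r3

h = 1e-3
for _ in range(10000):
    x, y, vx, vy = euler_step(x, y, vx, vy, h)
    # correction: choose s so that 0.5*(s*v)^2 - mu/r == E0 exactly
    r = math.hypot(x, y)
    s = math.sqrt(2 * (E0 + mu / r) / (vx * vx + vy * vy))
    vx, vy = s * vx, s * vy

E = 0.5 * (vx * vx + vy * vy) - mu / math.hypot(x, y)
```

Plain Euler would let the energy drift secularly; with the per-step rescaling the energy error stays at rounding level, mirroring the abstract's claim that the correction pins the integrated state to the physical manifold.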
EMERGY METHODS: VALUABLE INTEGRATED ASSESSMENT TOOLS
NHEERL's Atlantic Ecology Division is investigating emergy methods as tools for integrated assessment in several projects evaluating environmental impacts, policies, and alternatives for remediation and intervention. Emergy accounting is a methodology that provides a quantitative...
Erlangga, Mokhammad Puput
2015-04-16
Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Although we have processed the seismic data, coherent noise still mixes with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the moveout difference between them. However, in cases where the moveout difference is too small, the Radon filter method is not enough to attenuate the multiple reflections. The Radon filter also produces artifacts in the gathered data. Besides the Radon filter method, we also used the Wave Equation Multiple Elimination (WEMR) method to attenuate the long-period multiple reflections. The WEMR method can attenuate long-period multiple reflections based on wave equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed at the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the moveout difference to attenuate long-period multiple reflections. Therefore, the WEMR method can be applied to seismic data with a small moveout difference, such as the Mentawai seismic data. The small moveout difference in the Mentawai seismic data is caused by the restricted far offset, which is only 705 meters. We compared the real multiple-free stacked data after Radon filter and WEMR processing. The conclusion is that the WEMR method attenuates long-period multiple reflections more effectively than the Radon filter method on the real (Mentawai) seismic data.
Efficient orbit integration by orbital longitude methods
NASA Astrophysics Data System (ADS)
Fukushima, Toshio
Recently we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to keep rigorously the consistency of some physical relations such as that of the orbital energy, of the orbital angular momentum, or of the Laplace integral of a binary subsystem. This maintenance is done by applying a sort of correction to the integrated variables at every integration step. Typical methods of correction are certain geometric transformation such as the spatial scaling and the spatial rotation, which are commonly used in the comparison of reference frames, or mathematically-reasonable operations such as the modularization of angle variables into the standard domain [-π, π). The finally-evolved form of the manifold correction methods is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In the unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an infinitely long period. In the perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset time of which depends on the type and the magnitude of perturbations. This feature is also realized for highly eccentric orbits by applying the same idea to the KS-regularization. Especially the introduction of a time element greatly enhances the performance of numerical integration of KS-regularized orbits whether the scaling is applied or not.
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the
Multiple Integrated Complementary Healing Approaches: Energetics & Light for bone.
Gray, Michael G; Lackey, Brett R; Patrick, Evelyn F; Gray, Sandra L; Hurley, Susan G
2016-01-01
A synergistic-healing strategy that combines molecular targeting within a system-wide perspective is presented as the Multiple Integrated Complementary Healing Approaches: Energetics And Light (MICHAEL). The basis of the MICHAEL approach is the realization that environmental, nutritional and electromagnetic factors form a regulatory framework involved in bone and nerve healing. The interactions of light, energy, and nutrition with neural, hormonal and cellular pathways will be presented. Energetic therapies including electrical, low-intensity pulsed ultrasound and light based treatments affect growth, differentiation and proliferation of bone and nerve and can be utilized for their healing benefits. However, the benefits of these therapies can be impaired by the absence of nutritional, hormonal and organismal factors. For example, lack of sleep, disrupted circadian rhythms and vitamin-D deficiency can impair healing. Molecular targets, such as the Wnt pathway, protein kinase B and glucocorticoid signaling systems can be modulated by nutritional components, including quercetin, curcumin and Mg(2+) to enhance the healing process. The importance of water and water-regulation will be presented as an integral component. The effects of exercise and acupuncture on bone healing will also be discussed within the context of the MICHAEL approach. PMID:26804592
Achieving Integration in Mixed Methods Designs—Principles and Practices
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-01-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835
Tools and Models for Integrating Multiple Cellular Networks
Gerstein, Mark
2015-11-06
In this grant, we have systematically investigated the integrated networks, which are responsible for the coordination of activity between metabolic pathways in prokaryotes. We have developed several computational tools to analyze the topology of the integrated networks consisting of metabolic, regulatory, and physical interaction networks. The tools are all open-source, and they are available to download from Github, and can be incorporated in the Knowledgebase. Here, we summarize our work as follows. Understanding the topology of the integrated networks is the first step toward understanding its dynamics and evolution. For Aim 1 of this grant, we have developed a novel algorithm to determine and measure the hierarchical structure of transcriptional regulatory networks [1]. The hierarchy captures the direction of information flow in the network. The algorithm is generally applicable to regulatory networks in prokaryotes, yeast and higher organisms. Integrated datasets are extremely beneficial in understanding the biology of a system in a compact manner due to the conflation of multiple layers of information. Therefore for Aim 2 of this grant, we have developed several tools and carried out analysis for integrating system-wide genomic information. To make use of the structural data, we have developed DynaSIN for protein-protein interaction networks with various dynamical interfaces [2]. We then examined the association between network topology and phenotypic effects such as gene essentiality. In particular, we have organized E. coli and S. cerevisiae transcriptional regulatory networks into hierarchies. We then correlated gene phenotypic effects by tinkering with different layers to elucidate which layers were more tolerant to perturbations [3]. In the context of evolution, we also developed a workflow to guide the comparison between different types of biological networks across various species using the concept of rewiring [4]. Furthermore, we have developed
Methods for biological data integration: perspectives and challenges
Gligorijević, Vladimir; Pržulj, Nataša
2015-01-01
Rapid technological advances have led to the production of different types of biological data and enabled construction of complex networks with various types of interactions between diverse biological entities. Standard network data analysis methods were shown to be limited in dealing with such heterogeneous networked data and consequently, new methods for integrative data analyses have been proposed. The integrative methods can collectively mine multiple types of biological data and produce more holistic, systems-level biological insights. We survey recent methods for collective mining (integration) of various types of networked biological data. We compare different state-of-the-art methods for data integration and highlight their advantages and disadvantages in addressing important biological problems. We identify the important computational challenges of these methods and provide a general guideline for which methods are suited for specific biological problems, or specific data types. Moreover, we propose that recent non-negative matrix factorization-based approaches may become the integration methodology of choice, as they are well suited and accurate in dealing with heterogeneous data and have many opportunities for further development. PMID:26490630
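The building block behind the NMF-based integration methods the review favors is the classic multiplicative-update factorisation: V ≈ WH with all factors non-negative. The tiny rank-2 matrix and update count below are assumptions chosen so the sketch runs instantly; real integrative variants factorise several data matrices jointly.

```python
# Minimal non-negative matrix factorisation (Lee-Seung multiplicative updates),
# the core operation behind NMF-based data integration. Toy matrix, rank, and
# iteration count are assumptions for illustration.
import random

def nmf(V, k, iters=200, eps=1e-9):
    random.seed(1)
    n, m = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]

    def matmul(A, B):
        return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        WtV, WtWH = matmul(T(W), V), matmul(T(W), WH)
        # H <- H * (W^T V) / (W^T W H): preserves non-negativity by construction
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(m)]
             for i in range(k)]
        WH = matmul(W, H)
        VHt, WHHt = matmul(V, T(H)), matmul(WH, T(H))
        # W <- W * (V H^T) / (W H H^T)
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H, matmul(W, H)

V = [[1.0, 0.0, 2.0],
     [0.0, 1.0, 1.0],
     [2.0, 0.0, 4.0]]          # non-negative, rank 2 (row 3 = 2 * row 1)
W, H, WH = nmf(V, k=2)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(3) for j in range(3))
```

Because V is exactly rank 2 and non-negative, a rank-2 factorisation can drive the reconstruction error near zero while every factor entry stays non-negative, the interpretability property the review emphasises.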
Efficient orbit integration by orbital longitude methods
NASA Astrophysics Data System (ADS)
Fukushima, T.
2005-09-01
Triggered by the desire to investigate numerically the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to keep rigorously the consistency of some physical relations such as that of the orbital energy, of the orbital angular momentum, or of the Laplace integral of a binary subsystem. This maintenance is done by applying a sort of correction to the integrated variables at every integration step. Typical methods of correction are certain geometric transformation such as the spatial scaling and the spatial rotation, which are commonly used in the comparison of reference frames, or mathematically-reasonable operations such as the modularization of angle variables into the standard domain [-π,π). The finally-evolved form of the manifold correction methods is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In the unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an infinitely long period. In the perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset time of which depends on the type and the magnitude of perturbations. This feature is also realized for highly eccentric orbits by applying the same idea to the KS-regularization. Especially the introduction of time element greatly enhances the performance of numerical integration of KS-regularized orbits whether the scaling is applied or not.
Efficient integration method for fictitious domain approaches
NASA Astrophysics Data System (ADS)
Duczek, Sascha; Gabbert, Ulrich
2015-10-01
In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results to several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
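The dimension-reduction trick is easy to see in 2D: by the divergence theorem (Green's theorem), the area integral ∬ 1 dA equals the contour integral (1/2)∮(x dy − y dx), so only the boundary need be visited. For a polygonal contour this collapses to the shoelace formula. The sketch below illustrates the principle only, not the paper's finite cell quadrature; the example shapes are assumptions.

```python
# Area of a domain from its boundary alone: the divergence theorem reduces
# the 2D integral of 1 over the domain to (1/2) * contour integral of
# (x dy - y dx). For a polygon this is the shoelace formula.
# (Sketch of the dimension-reduction principle, not the paper's FCM scheme.)

def area_by_contour(vertices):
    """Polygon area from a counter-clockwise boundary vertex list."""
    s = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - y0 * x1   # discrete contribution of x dy - y dx
    return 0.5 * s

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
L_shape = [(0, 0), (2, 0), (2, 1), (1, 1), (1, 2), (0, 2)]
a1 = area_by_contour(square)    # 4.0
a2 = area_by_contour(L_shape)   # 3.0 (2x2 square minus the 1x1 corner)
```

No interior points are ever sampled: the cost scales with the boundary, not the domain, which is exactly the saving the article claims over spacetree-based adaptive quadrature.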
Principles and methods of integrative genomic analyses in cancer.
Kristensen, Vessela N; Lingjærde, Ole Christian; Russnes, Hege G; Vollan, Hans Kristian M; Frigessi, Arnoldo; Børresen-Dale, Anne-Lise
2014-05-01
Combined analyses of molecular data, such as DNA copy-number alteration, mRNA and protein expression, point to biological functions and molecular pathways being deregulated in multiple cancers. Genomic, metabolomic and clinical data from various solid cancers and model systems are emerging and can be used to identify novel patient subgroups for tailored therapy and monitoring. The integrative genomics methodologies that are used to interpret these data require expertise in different disciplines, such as biology, medicine, mathematics, statistics and bioinformatics, and they can seem daunting. The objectives, methods and computational tools of integrative genomics that are available to date are reviewed here, as is their implementation in cancer research.
Multiple Shooting-Local Linearization method for the identification of dynamical systems
NASA Astrophysics Data System (ADS)
Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.
2016-08-01
The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm has turned out to be a recognized method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives with no more computational cost than that involved in the integration of the ODE. The numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameter values under different scenarios of noisy data.
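The multiple-shooting idea itself is simple to sketch: integrate the ODE separately on short segments, restart each segment at the observed value so errors cannot accumulate, and choose the parameter that minimizes the segment-end residuals. The sketch below uses the closed-form solution of dx/dt = −θx and a grid search in place of Gauss-Newton and Local Linearization; the data, noise level, and grid are assumptions.

```python
# Multiple-shooting sketch for ODE parameter estimation on dx/dt = -theta * x.
# Each segment is integrated in closed form from the observed starting value,
# and theta is chosen to minimize segment-end residuals. Grid search stands in
# for the Gauss-Newton step; data and noise level are assumptions.
import math
import random

random.seed(2)
theta_true, dt = 0.7, 0.5
times = [i * dt for i in range(11)]
data = [math.exp(-theta_true * t) + random.gauss(0, 0.005) for t in times]

def shooting_cost(theta):
    """Sum of squared continuity defects over all shooting segments."""
    # each segment restarts from the observation, the hallmark of multiple shooting
    return sum((data[i] * math.exp(-theta * dt) - data[i + 1]) ** 2
               for i in range(len(data) - 1))

grid = [i * 0.01 for i in range(1, 201)]      # candidate theta in (0, 2]
theta_hat = min(grid, key=shooting_cost)
```

Restarting every segment at the data keeps the cost surface well behaved even for stiff or chaotic dynamics, which is why multiple shooting is preferred over single shooting for this problem class.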
Upcoming challenges for multiple sequence alignment methods in the high-throughput era
Kemena, Carsten; Notredame, Cedric
2009-01-01
This review focuses on recent trends in multiple sequence alignment tools. It describes the latest algorithmic improvements including the extension of consistency-based methods to the problem of template-based multiple sequence alignments. Some results are presented suggesting that template-based methods are significantly more accurate than simpler alternative methods. The validation of existing methods is also discussed at length with the detailed description of recent results and some suggestions for future validation strategies. The last part of the review addresses future challenges for multiple sequence alignment methods in the genomic era, most notably the need to cope with very large sequences, the need to integrate large amounts of experimental data, the need to accurately align non-coding and non-transcribed sequences and finally, the need to integrate many alternative methods and approaches. Contact: cedric.notredame@crg.es PMID:19648142
Multiple tag labeling method for DNA sequencing
Mathies, R.A.; Huang, X.C.; Quesada, M.A.
1995-07-25
A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in the lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radioisotope labels. 5 figs.
Multiple tag labeling method for DNA sequencing
Mathies, Richard A.; Huang, Xiaohua C.; Quesada, Mark A.
1995-01-01
A DNA sequencing method is described which uses single lane or channel electrophoresis. Sequencing fragments are separated in said lane and detected using a laser-excited, confocal fluorescence scanner. Each set of DNA sequencing fragments is separated in the same lane and then distinguished using a binary coding scheme employing only two different fluorescent labels. Also described is a method of using radio-isotope labels.
Students' Use of "Look Back" Strategies in Multiple Solution Methods
ERIC Educational Resources Information Center
Lee, Shin-Yi
2016-01-01
The purpose of this study was to investigate the relationship between both 9th-grade and 1st-year undergraduate students' use of "look back" strategies and problem solving performance in multiple solution methods, the difference in their use of look back strategies and problem solving performance in multiple solution methods, and the…
Evaluation of Scheduling Methods for Multiple Runways
NASA Technical Reports Server (NTRS)
Bolender, Michael A.; Slater, G. L.
1996-01-01
Several scheduling strategies are analyzed in order to determine the most efficient means of scheduling aircraft when multiple runways are operational and the airport is operating at different utilization rates. The study compares simulation data for two and three runway scenarios to results from queuing theory for an M/D/n queue. The direction taken, however, is not to do a steady-state, or equilibrium, analysis since this is not the case during a rush period at a typical airport. Instead, a transient analysis of the delay per aircraft is performed. It is shown that the scheduling strategy that reduces the delay depends upon the density of the arrival traffic. For light traffic, scheduling aircraft to their preferred runways is sufficient; however, as the arrival rate increases, it becomes more important to separate traffic by weight class. Significant delay reduction is realized when aircraft that belong to the heavy and small weight classes are sent to separate runways with large aircraft put into the 'best' landing slot.
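The M/D/n runway model above can be sketched as a short transient simulation: Poisson arrivals, deterministic runway-occupancy times, and first-free-runway assignment, recording the per-aircraft delay rather than a steady-state average. The rates below are illustrative, not from the study.

```python
import random

def runway_delays(arrival_rate, service_time, n_runways, n_aircraft, seed=1):
    """Transient per-aircraft delay in an M/D/n queue: Poisson arrivals,
    deterministic (runway-occupancy) service, first-free-runway assignment."""
    rng = random.Random(seed)
    free_at = [0.0] * n_runways          # time each runway next becomes free
    t, delays = 0.0, []
    for _ in range(n_aircraft):
        t += rng.expovariate(arrival_rate)   # next arrival epoch
        k = min(range(n_runways), key=lambda i: free_at[i])
        start = max(t, free_at[k])           # wait if the best runway is busy
        delays.append(start - t)
        free_at[k] = start + service_time
    return delays
```

Comparing a light arrival rate with one near capacity shows the transient delay growth the study analyzes during a rush period.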
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
Differential temperature integrating diagnostic method and apparatus
Doss, James D.; McCabe, Charles W.
1976-01-01
A method and device for detecting the presence of breast cancer in women by integrating the temperature difference between the temperature of a normal breast and that of a breast having a malignant tumor. The breast-receiving cups of a brassiere are each provided with thermally conductive material next to the skin, with a thermistor attached to the thermally conductive material in each cup. The thermistors are connected to adjacent arms of a Wheatstone bridge. Unbalance currents in the bridge are integrated with respect to time by means of an electrochemical integrator. In the absence of a tumor, both breasts maintain substantially the same temperature, and the bridge remains balanced. If a tumor is present in one breast, the higher temperature in that breast unbalances the bridge and the electrochemical cells integrate the temperature difference with respect to time.
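A minimal numeric sketch of the bridge-plus-integrator idea, assuming NTC thermistors (warmer side has lower resistance) and illustrative resistance values, not the device's actual ones:

```python
def bridge_unbalance(r_left, r_right, r_ref=10e3, v_exc=1.0):
    """Output of a Wheatstone bridge with the two thermistors in adjacent
    arms; equal resistances (equal breast temperatures) give zero output."""
    return v_exc * (r_left / (r_left + r_ref) - r_right / (r_right + r_ref))

def integrated_unbalance(r_left_series, r_right_series, dt):
    """Discrete stand-in for the electrochemical integrator: the time
    integral of the bridge unbalance signal."""
    return sum(bridge_unbalance(a, b) * dt
               for a, b in zip(r_left_series, r_right_series))
```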
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2012-05-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2008-06-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Methods for monitoring multiple gene expression
Berka, Randy; Bachkirova, Elena; Rey, Michael
2013-10-01
The present invention relates to methods for monitoring differential expression of a plurality of genes in a first filamentous fungal cell relative to expression of the same genes in one or more second filamentous fungal cells using microarrays containing Trichoderma reesei ESTs or SSH clones, or a combination thereof. The present invention also relates to computer readable media and substrates containing such array features for monitoring expression of a plurality of genes in filamentous fungal cells.
Belaineh, Getachew; Sumner, David; Carter, Edward; Clapp, David
2013-01-01
Potential evapotranspiration (PET) and reference evapotranspiration (RET) data are usually critical components of hydrologic analysis. Many different equations are available to estimate PET and RET. Most of these equations, such as the Priestley-Taylor and Penman-Monteith methods, rely on detailed meteorological data collected at ground-based weather stations. Few weather stations collect enough data to estimate PET or RET using one of the more complex evapotranspiration equations. Currently, satellite data integrated with ground meteorological data are used with one of these evapotranspiration equations to accurately estimate PET and RET. For periods before the last few decades, however, the historical reconstructions of PET and RET needed for many hydrologic analyses are limited by the paucity of satellite data and of some types of ground data. Air temperature stands out as the most generally available meteorological ground data type over the last century. Temperature-based approaches used with readily available historical temperature data offer the potential for long period-of-record PET and RET historical reconstructions. A challenge is the inconsistency between the more accurate, but more data intensive, methods appropriate for more recent periods and the less accurate, but less data intensive, methods appropriate to the more distant past. In this study, multiple methods are harmonized in a seamless reconstruction of historical PET and RET by quantifying and eliminating the biases of the simple Hargreaves-Samani method relative to the more complex and accurate Priestley-Taylor and Penman-Monteith methods. This harmonization process is used to generate long-term, internally consistent, spatiotemporal databases of PET and RET.
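The harmonization step can be sketched in its simplest form: fit a multiplicative bias factor on a period when both methods are available, then scale the long simple-method record. The study fits more refined (e.g., seasonally and spatially varying) corrections; this single-scalar version only illustrates the idea.

```python
def bias_factor(simple_estimates, reference_estimates):
    """Multiplicative bias of the simple method (e.g. Hargreaves-Samani)
    relative to the reference (e.g. Penman-Monteith), fit on an overlap
    period when both can be computed."""
    return sum(reference_estimates) / sum(simple_estimates)

def harmonize(simple_historical, factor):
    """Scale the long simple-method record so it is consistent with the
    more accurate but data-hungry reference method."""
    return [factor * x for x in simple_historical]
```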
Integrated force method versus displacement method for finite element analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Berke, Laszlo; Gallagher, Richard H.
1990-01-01
A novel formulation termed the integrated force method (IFM) has been developed in recent years for analyzing structures. In this method all the internal forces are taken as independent variables, and the system equilibrium equations (EE's) are integrated with the global compatibility conditions (CC's) to form the governing set of equations. In IFM the CC's are obtained from the strain formulation of St. Venant, and no choices of redundant load systems have to be made, in contrast to the standard force method (SFM). This property of IFM allows the generation of the governing equation to be automated straightforwardly, as it is in the popular stiffness method (SM). In this report IFM and SM are compared relative to the structure of their respective equations, their conditioning, required solution methods, overall computational requirements, and convergence properties as these factors influence the accuracy of the results. Overall, this new version of the force method produces more accurate results than the stiffness method for comparable computational cost.
Hamilton, Chris A; Hendrixson, Brent E; Brewer, Michael S; Bond, Jason E
2014-02-01
The North American tarantula genus Aphonopelma provides one of the greatest challenges to species delimitation and downstream identification in spiders because traditional morphological characters appear ineffective for evaluating limits of intra- and interspecific variation in the group. We evaluated the efficacy of numerous molecular-based approaches to species delimitation within Aphonopelma based upon the most extensive sampling of theraphosids to date, while also investigating the sensitivity of randomized taxon sampling on the reproducibility of species boundaries. Mitochondrial DNA (cytochrome c oxidase subunit I) sequences were sampled from 682 specimens spanning the genetic, taxonomic, and geographic breadth of the genus within the United States. The effects of random taxon sampling compared traditional Neighbor-Joining with three modern quantitative species delimitation approaches (ABGD, P ID(Liberal), and GMYC). Our findings reveal remarkable consistency and congruence across various approaches and sampling regimes, while highlighting highly divergent outcomes in GMYC. Our investigation allowed us to integrate methodologies into an efficient, consistent, and more effective general methodological workflow for estimating species boundaries within the mygalomorph spider genus Aphonopelma. Taken alone, these approaches are not particularly useful - especially in the absence of prior knowledge of the focal taxa. Only through the incorporation of multiple lines of evidence, employed in a hypothesis-testing framework, can the identification and delimitation of confident species boundaries be determined. A key point in studying closely related species, and perhaps one of the most important aspects of DNA barcoding, is to combine a sampling strategy that broadly identifies the extent of genetic diversity across the distributions of the species of interest and incorporates previous knowledge into the "species equation" (morphology, molecules, and natural history
Multiple time step integrators in ab initio molecular dynamics
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
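The multiple time-step idea can be sketched with the r-RESPA scheme commonly used for such force splittings (the paper's fragment- and range-separation machinery is not shown): slow-force half-kicks bracket several fast-force velocity-Verlet substeps.

```python
def respa_step(x, v, dt_outer, m_inner, fast_force, slow_force, mass=1.0):
    """One r-RESPA outer step for a 1-D particle: half-kicks from the slow
    force bracket m_inner velocity-Verlet substeps driven by the fast force."""
    dt = dt_outer / m_inner
    v += 0.5 * dt_outer * slow_force(x) / mass   # outer half-kick
    for _ in range(m_inner):                     # inner velocity Verlet
        v += 0.5 * dt * fast_force(x) / mass
        x += dt * v
        v += 0.5 * dt * fast_force(x) / mass
    v += 0.5 * dt_outer * slow_force(x) / mass   # outer half-kick
    return x, v
```

With a purely fast harmonic force and no slow component, the integrator reduces to plain velocity Verlet and conserves energy well.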
Implicit integration methods for dislocation dynamics
Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; Hommes, G.; Aubry, S.; Arsenlis, A.
2015-01-20
In dislocation dynamics, strain hardening simulations require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. Current solvers in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically done with the standard second-order trapezoidal method. In addition, both accelerated fixed point and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed point and Newton's method both improve solver performance over the standard fixed point method used for the solution of the nonlinear systems.
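The standard trapezoidal integrator mentioned above, with Newton's method for the nonlinear stage equation, can be sketched on a scalar stiff ODE (illustrative only; dislocation dynamics involves large coupled systems):

```python
def trapezoidal_step(y, dt, f, dfdy, tol=1e-12, max_iter=50):
    """One implicit trapezoidal step for y' = f(y), solving the stage
    equation g(z) = z - y - (dt/2)*(f(y) + f(z)) = 0 by Newton's method."""
    z = y  # predictor: previous value
    for _ in range(max_iter):
        g = z - y - 0.5 * dt * (f(y) + f(z))
        dg = 1.0 - 0.5 * dt * dfdy(z)
        z_next = z - g / dg
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```

On the stiff test problem y' = -1000 y the step stays stable at dt far beyond the explicit stability limit, and for a linear problem Newton converges in a single iteration.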
Bioluminescent bioreporter integrated circuit detection methods
Simpson, Michael L.; Paulus, Michael J.; Sayler, Gary S.; Applegate, Bruce M.; Ripp, Steven A.
2005-06-14
Disclosed are monolithic bioelectronic devices comprising a bioreporter and an OASIC. These bioluminescent bioreporter integrated circuit are useful in detecting substances such as pollutants, explosives, and heavy-metals residing in inhospitable areas such as groundwater, industrial process vessels, and battlefields. Also disclosed are methods and apparatus for detection of particular analytes, including ammonia and estrogen compounds.
Integration of multiple view plus depth data for free viewpoint 3D display
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Yoshida, Yuko; Kawamoto, Tetsuya; Fujii, Toshiaki; Mase, Kenji
2014-03-01
This paper proposes a method for constructing a reasonable scale of end-to-end free-viewpoint video system that captures multiple view and depth data, reconstructs three-dimensional polygon models of objects, and displays them on virtual 3D CG spaces. This system consists of a desktop PC and four Kinect sensors. First, multiple view plus depth data at four viewpoints are captured by Kinect sensors simultaneously. Then, the captured data are integrated into point cloud data by using camera parameters. The obtained point cloud data are sampled to volume data that consist of voxels. Since volume data generated from point cloud data are sparse, those data are made dense by using a global optimization algorithm. The final step is to reconstruct surfaces on the dense volume data by the discrete marching cubes method. Since the accuracy of the depth maps affects the quality of the 3D polygon model, a simple inpainting method for improving depth maps is also presented.
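The point-cloud-to-volume sampling step can be sketched as a simple voxel quantization (the densification and marching-cubes stages that follow in the paper are not shown):

```python
def voxelize(points, voxel_size):
    """Quantize a 3D point cloud into the set of occupied voxel indices,
    i.e. the sparse volume representation prior to densification."""
    return {(int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
            for (x, y, z) in points}
```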
Method and apparatus for controlling multiple motors
Jones, Rollin G.; Kortegaard, Bert L.; Jones, David F.
1987-01-01
A method and apparatus are provided for simultaneously controlling a plurality of stepper motors. Addressing circuitry generates address data for each motor in a periodic address sequence. Memory circuits respond to the address data for each motor by accessing a corresponding memory location containing a first operational data set functionally related to a direction for moving the motor, speed data, and rate of speed change. First logic circuits respond to the first data set to generate a motor step command. Second logic circuits respond to the command from the first logic circuits to generate a third data set for replacing the first data set in memory with a current operational motor status, which becomes the first data set when the motor is next addressed.
Orthogonal matrix factorization enables integrative analysis of multiple RNA binding proteins
Stražar, Martin; Žitnik, Marinka; Zupan, Blaž; Ule, Jernej; Curk, Tomaž
2016-01-01
Motivation: RNA binding proteins (RBPs) play important roles in post-transcriptional control of gene expression, including splicing, transport, polyadenylation and RNA stability. To model protein–RNA interactions by considering all available sources of information, it is necessary to integrate the rapidly growing RBP experimental data with the latest genome annotation, gene function, RNA sequence and structure. Such integration is possible by matrix factorization, where current approaches have an undesired tendency to identify only a small number of the strongest patterns with overlapping features. Because protein–RNA interactions are orchestrated by multiple factors, methods that identify discriminative patterns of varying strengths are needed. Results: We have developed an integrative orthogonality-regularized nonnegative matrix factorization (iONMF) to integrate multiple data sources and discover non-overlapping, class-specific RNA binding patterns of varying strengths. The orthogonality constraint halves the effective size of the factor model and outperforms other NMF models in predicting RBP interaction sites on RNA. We have integrated the largest data compendium to date, which includes 31 CLIP experiments on 19 RBPs involved in splicing (such as hnRNPs, U2AF2, ELAVL1, TDP-43 and FUS) and processing of 3’UTR (Ago, IGF2BP). We show that the integration of multiple data sources improves the predictive accuracy of retrieval of RNA binding sites. In our study the key predictive factors of protein–RNA interactions were the position of RNA structure and sequence motifs, RBP co-binding and gene region type. We report on a number of protein-specific patterns, many of which are consistent with experimentally determined properties of RBPs. Availability and implementation: The iONMF implementation and example datasets are available at https://github.com/mstrazar/ionmf. Contact: tomaz.curk@fri.uni-lj.si Supplementary information: Supplementary data are available
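The unregularized core of such factor models, plain multiplicative-update NMF, can be sketched in a few lines. iONMF adds an orthogonality penalty on the factors and handles multiple data sources jointly; both are omitted in this illustrative sketch.

```python
import random

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nmf(X, k, iters=100, seed=0, eps=1e-9):
    """Lee-Seung multiplicative-update NMF, X ~ W @ H, with nonnegative
    random initialization. The orthogonality regularizer of iONMF is
    not included here."""
    rng = random.Random(seed)
    m, n = len(X), len(X[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        Wt = [list(r) for r in zip(*W)]
        num, den = matmul(Wt, X), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(k)]
        Ht = [list(r) for r in zip(*H)]
        num, den = matmul(X, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(m)]
    return W, H
```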
Fidelity of the Integrated Force Method Solution
NASA Technical Reports Server (NTRS)
Hopkins, Dale; Halford, Gary; Coroneos, Rula; Patnaik, Surya
2002-01-01
The theory of strain compatibility of the solid mechanics discipline had remained incomplete since St. Venant's 'strain formulation' in 1876. We have addressed the compatibility condition both in the continuum and the discrete system. This has led to the formulation of the Integrated Force Method. A dual Integrated Force Method with displacement as the primal variable has also been formulated. A modest finite element code (IFM/Analyzers) based on the IFM theory has been developed. For a set of standard test problems the IFM results were compared with the stiffness method solutions and the MSC/Nastran code. For these problems IFM outperformed the existing methods. Superior IFM performance is attributed to simultaneous compliance of the equilibrium equations and compatibility conditions. MSC/Nastran organization expressed reluctance to accept the high fidelity IFM solutions. This report discusses the solutions to the examples. No inaccuracy was detected in the IFM solutions. A stiffness method code with a small programming effort can be improved to reap the many IFM benefits when implemented with the IFMD elements. Dr. Halford conducted a peer-review on the Integrated Force Method. Reviewers' response is included.
One-step integration of multiple genes into the oleaginous yeast Yarrowia lipolytica.
Gao, Shuliang; Han, Linna; Zhu, Li; Ge, Mei; Yang, Sheng; Jiang, Yu; Chen, Daijie
2014-12-01
Yarrowia lipolytica is an unconventional yeast, and is generally recognized as safe (GRAS). It provides a versatile fermentation platform that is used commercially to produce many added-value products. Here we report a multiple fragment assembly method that allows one-step integration of an entire β-carotene biosynthesis pathway (~11 kb, consisting of four genes) via in vivo homologous recombination into the rDNA locus of the Y. lipolytica chromosome. The highest efficiency was 21%, and the highest production of β-carotene was 2.2 ± 0.3 mg per g dry cell weight. The total procedure was completed in less than one week, as compared to a previously reported sequential gene integration method that required n weeks for n genes. This time-saving method will facilitate synthetic biology, metabolic engineering and functional genomics studies of Y. lipolytica. PMID:25216641
Numerical methods for engine-airframe integration
Murthy, S.N.B.; Paynter, G.C.
1986-01-01
Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integrations, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic, supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.
Relationship between Multiple Regression and Selected Multivariable Methods.
ERIC Educational Resources Information Center
Schumacker, Randall E.
The relationship of multiple linear regression to various multivariate statistical techniques is discussed. The importance of the standardized partial regression coefficient (beta weight) in multiple linear regression as it is applied in path, factor, LISREL, and discriminant analyses is emphasized. The multivariate methods discussed in this paper…
A parallel multiple path tracing method based on OptiX for infrared image generation
NASA Astrophysics Data System (ADS)
Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu
2015-12-01
Infrared image generation technology is being widely used in infrared imaging system performance evaluation, battlefield environment simulation and military personnel training, which require a more physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX was proposed to solve the problem, which can not only increase computational efficiency compared to serial ray tracing using CPU, but also produce relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed and thus a multiple path tracing method based on OptiX was developed. Furthermore, the Monte Carlo integration was employed to solve the radiation transfer equation, in which the importance sampling method was applied to accelerate the integral convergent rate. After that, the framework of the simulation platform and its sensor effects simulation diagram were given. Finally, the results showed that the method could generate relatively accurate radiation images if a precise importance sampling method was available.
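The Monte Carlo estimator with importance sampling can be sketched on a one-dimensional integral (the paper applies it to the radiation transfer equation; the integrand and pdf below are illustrative):

```python
import math
import random

def mc_importance(integrand, pdf, sampler, n, seed=0):
    """Monte Carlo integral estimate: with samples x ~ pdf, the mean of
    integrand(x)/pdf(x) is unbiased; a pdf shaped like the integrand
    (importance sampling) accelerates the convergence."""
    rng = random.Random(seed)
    return sum(integrand(x) / pdf(x)
               for x in (sampler(rng) for _ in range(n))) / n

# Illustrative 1-D example: the integral of 3x^2 over [0,1] equals 1,
# importance-sampled with p(x) = 2x via the inverse CDF x = sqrt(1-U).
est = mc_importance(lambda x: 3.0 * x * x,
                    lambda x: 2.0 * x,
                    lambda rng: math.sqrt(1.0 - rng.random()),
                    n=20000)
```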
Package for integrated optic circuit and method
Kravitz, S.H.; Hadley, G.R.; Warren, M.E.; Carson, R.F.; Armendariz, M.G.
1998-08-04
A structure and method are disclosed for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package. 6 figs.
Package for integrated optic circuit and method
Kravitz, Stanley H.; Hadley, G. Ronald; Warren, Mial E.; Carson, Richard F.; Armendariz, Marcelino G.
1998-01-01
A structure and method for packaging an integrated optic circuit. The package comprises a first wall having a plurality of microlenses formed therein to establish channels of optical communication with an integrated optic circuit within the package. A first registration pattern is provided on an inside surface of one of the walls of the package for alignment and attachment of the integrated optic circuit. The package in one embodiment may further comprise a fiber holder for aligning and attaching a plurality of optical fibers to the package and extending the channels of optical communication to the fibers outside the package. In another embodiment, a fiber holder may be used to hold the fibers and align the fibers to the package. The fiber holder may be detachably connected to the package.
A Methodology for Multiple Rule System Integration and Resolution Within a Singular Knowledge Base
NASA Technical Reports Server (NTRS)
Kautzmann, Frank N., III
1988-01-01
Expert Systems which support knowledge representation by qualitative modeling techniques experience problems, when called upon to support integrated views embodying description and explanation, especially when other factors such as multiple causality, competing rule model resolution, and multiple uses of knowledge representation are included. A series of prototypes are being developed to demonstrate the feasibility of automating the process of systems engineering, design and configuration, and diagnosis and fault management. A study involves not only a generic knowledge representation; it must also support multiple views at varying levels of description and interaction between physical elements, systems, and subsystems. Moreover, it will involve models of description and explanation for each level. This multiple model feature requires the development of control methods between rule systems and heuristics on a meta-level for each expert system involved in an integrated and larger class of expert system. The broadest possible category of interacting expert systems is described along with a general methodology for the knowledge representation and control of mutually exclusive rule systems.
Multiple Methods: Research Methods in Education Projects at NSF
ERIC Educational Resources Information Center
Suter, Larry E.
2005-01-01
Projects on science and mathematics education research supported by the National Science Foundation (US government) rarely employ a single method of study. Studies of educational practices that use experimental design are very rare. The most common research method is the case study method and the second most common is some form of experimental…
Generating nonlinear FM chirp radar signals by multiple integrations
Doerry, Armin W.
2011-02-01
A phase component of a nonlinear frequency modulated (NLFM) chirp radar pulse can be produced by performing digital integration operations over a time interval defined by the pulse width. Each digital integration operation includes applying to a respectively corresponding input parameter value a respectively corresponding number of instances of digital integration.
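The repeated-digital-integration idea can be sketched as follows: integrating a constant chirp-rate derivative twice yields a quadratic (nonlinear) frequency profile, and one more integration, scaled by 2*pi, yields the pulse phase. Parameter values are illustrative, not from the patent.

```python
import math

def cumint(samples, dt):
    """One digital integration: running (rectangular-rule) sum."""
    out, acc = [], 0.0
    for s in samples:
        acc += s * dt
        out.append(acc)
    return out

n, dt = 1000, 1.0e-6                  # 1 ms pulse sampled at 1 MHz
ramp = cumint([2.0e12] * n, dt)       # integrate a constant -> linear df/dt
freq = cumint(ramp, dt)               # integrate again -> quadratic f(t)
phase = [2.0 * math.pi * p for p in cumint(freq, dt)]  # last integration
```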
Cao, Hongbao; Lei, Shufeng; Deng, Hong-Wen; Wang, Yu-Ping
2012-01-01
Various types of genomic data (e.g., SNPs and mRNA transcripts) have been employed to identify risk genes for complex diseases. However, the analysis of these data has largely been performed in isolation. Combining these multiple data for integrative analysis can take advantage of complementary information and thus can have higher power to identify genes (and/or their functions) that would otherwise be impossible with individual data analysis. Due to the different nature, structure, and format of diverse sets of genomic data, multiple genomic data integration is challenging. Here we address the problem by developing a sparse representation based clustering (SRC) method for integrative data analysis. As an example, we applied the SRC method to the integrative analysis of 376821 SNPs in 200 subjects (100 cases and 100 controls) and expression data for 22283 genes in 80 subjects (40 cases and 40 controls) to identify significant genes for osteoporosis (OP). Comparing our results with previous studies, we identified some genes known related to OP risk (e.g., ‘THSD4’, ‘CRHR1’, ‘HSD11B1’, ‘THSD7A’, ‘BMPR1B’ ‘ADCY10’, ‘PRL’, ‘CA8’,’ESRRA’, ‘CALM1’, ‘CALM1’, ‘SPARC’, and ‘LRP1’). Moreover, we uncovered novel osteoporosis susceptible genes (‘DICER1’, ‘PTMA’, etc.) that were not found previously but play functionally important roles in osteoporosis etiology from existing studies. In addition, the SRC method identified genes can lead to higher accuracy for the diagnosis/classification of osteoporosis subjects when compared with the traditional T-test and Fisher-exact test, which further validates the proposed SRC approach for integrative analysis. PMID:22957024
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple-frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions, simulating the effects of a salt-dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver of the 2D volume integral equation for the forward computation. The inversion combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple-frequency CSI in the iteration process. As a result, this method is capable of effectively reconstructing quantitative conductivity images for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples demonstrate the effectiveness and capacity of the simultaneous multiple-frequency CSI method for a limited array view in VEP.
Integrative clustering methods for high-dimensional molecular data
Chalise, Prabhakar; Koestler, Devin C.; Bimali, Milan; Yu, Qing; Fridley, Brooke L.
2014-01-01
High-throughput ‘omic’ data, such as gene expression, DNA methylation, and DNA copy number, have played an instrumental role in furthering our understanding of the molecular basis of human health and disease. Because cells with similar morphological characteristics can exhibit entirely different molecular profiles, and because these discrepancies might further our understanding of patient-level variability in clinical outcomes, there is significant interest in the use of high-throughput ‘omic’ data for the identification of novel molecular subtypes of a disease. While numerous clustering methods have been proposed for identifying molecular subtypes, most were developed for single ‘omic’ data types and may not be appropriate when more than one ‘omic’ data type is collected on study subjects. Given that complex diseases, such as cancer, arise as a result of genomic, epigenomic, transcriptomic, and proteomic alterations, integrative clustering methods for the simultaneous clustering of multiple ‘omic’ data types have great potential to aid in molecular subtype discovery. Traditionally, ad hoc manual data integration has been performed using the results obtained from the clustering of individual ‘omic’ data types on the same set of patient samples. However, such methods often result in inconsistent assignment of subjects to the molecular cancer subtypes. Recently, several methods have been proposed in the literature that offer a rigorous framework for the simultaneous integration of multiple ‘omic’ data types in a single comprehensive analysis. In this paper, we present a systematic review of existing integrative clustering methods. PMID:25243110
Scheuermann, Thomas H.; Brautigam, Chad A.
2014-01-01
Isothermal titration calorimetry (ITC) has become a standard and widely available tool to measure the thermodynamic parameters of macromolecular associations. Modern applications of the method, including global analysis and drug screening, require the acquisition of multiple sets of data; sometimes these data sets number in the hundreds. Therefore, there is a need for quick, precise, and automated means to process the data, particularly at the first step of data analysis, which is commonly the integration of the raw data to yield an interpretable isotherm. Herein, we describe enhancements to an algorithm that previously has been shown to provide an automated, unbiased, and high-precision means to integrate ITC data. These improvements allow for the speedy and precise serial integration of an unlimited number of ITC data sets, and they have been implemented in the freeware program NITPIC, version 1.1.0. We present a comprehensive comparison of the performance of this software against an older version of NITPIC and a current version of Origin, which is commonly used for integration. The new methods recapitulate the excellent performance of the previous versions of NITPIC while speeding it up substantially, and their precision is significantly better than that of Origin. This new version of NITPIC is therefore well suited to the serial integration of many ITC data sets. PMID:25524420
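The integration step that NITPIC automates can be sketched in miniature: subtract an estimated baseline from the recorded power trace and integrate each injection peak, here by the trapezoidal rule, to obtain one heat value per injection (one point of the isotherm). This is a generic illustration on synthetic data, not NITPIC's algorithm or API; all names and parameters are hypothetical.

```python
import math

def integrate_injection(times, power, baseline):
    """Trapezoidal integral of the baseline-subtracted power over one injection.

    times    : sample times (s)
    power    : measured differential power (ucal/s)
    baseline : estimated baseline power (ucal/s), same length as power
    Returns the injection heat in ucal.
    """
    heat = 0.0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        a = power[i - 1] - baseline[i - 1]
        b = power[i] - baseline[i]
        heat += 0.5 * (a + b) * dt
    return heat

# Synthetic injection peak: exponential relaxation above a flat baseline.
times = [0.1 * i for i in range(200)]
power = [0.1 + 2.0 * math.exp(-t / 2.0) for t in times]
baseline = [0.1] * len(times)

q = integrate_injection(times, power, baseline)  # close to the analytic 4.0 ucal
```

The true integral of the synthetic peak is 4.0 ucal, so the sketch can be checked against a known answer; a production tool additionally has to estimate the baseline and peak boundaries from noisy data, which is where NITPIC's value lies.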
Monte Carlo methods for multidimensional integration for European option pricing
NASA Astrophysics Data System (ADS)
Todorov, V.; Dimov, I. T.
2016-10-01
In this paper, we illustrate examples of highly accurate Monte Carlo and quasi-Monte Carlo methods for multiple integrals related to the evaluation of European style options. The idea is that the value of the option is formulated in terms of the expectation of some random variable; the average of independent samples of this random variable is then used to estimate the value of the option. First we obtain an integral representation for the value of the option using the risk-neutral valuation formula. Then, with an appropriate change of variables, we obtain a multidimensional integral over the unit hypercube of the corresponding dimensionality. We then compare a specific type of lattice rule with one of the best low-discrepancy sequences, the Sobol sequence, for numerical integration. Quasi-Monte Carlo methods are compared with adaptive and crude Monte Carlo techniques for solving the problem. The four approaches are completely different, so it is a question of interest which of them outperforms the others for evaluating multidimensional integrals in finance. Some of the advantages and disadvantages of the developed algorithms are discussed.
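The crude Monte Carlo approach described here can be illustrated for a European call under Black-Scholes dynamics: average the discounted payoff over simulated risk-neutral terminal prices, and compare against the closed-form value. A generic sketch with toy parameters, not the authors' implementation.

```python
import math
import random

def mc_european_call(s0, k, r, sigma, t, n, seed=1):
    """Crude Monte Carlo: average discounted payoff over n risk-neutral samples."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for _ in range(n):
        st = s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))  # terminal price
        total += max(st - k, 0.0)                              # call payoff
    return math.exp(-r * t) * total / n

def bs_call(s0, k, r, sigma, t):
    """Closed-form Black-Scholes call value, used as the reference answer."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return s0 * phi(d1) - k * math.exp(-r * t) * phi(d2)

est = mc_european_call(100.0, 100.0, 0.05, 0.2, 1.0, 200_000)
exact = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)  # about 10.45 for these parameters
```

The Monte Carlo error shrinks like n^(-1/2); the quasi-Monte Carlo and lattice-rule methods the abstract compares aim to beat that rate on the same unit-hypercube integral.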
A fast and high performance multiple data integration algorithm for identifying human disease genes
2015-01-01
Background Integrating multiple data sources is indispensable in improving disease gene identification, not only because disease genes associated with similar genetic diseases tend to lie close to one another in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, both their prediction performance and their computational time should be further improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and each leave-one-out experiment takes only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620
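The core calculation, a posterior probability for each candidate gene built from a prior and a logistic-regression score, can be sketched generically: an odds-form Bayes update plus a plain logistic score. The weights and features below are hypothetical placeholders, not the paper's fitted model.

```python
import math

def sigmoid(z):
    """Logistic function, maps a linear score to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def posterior(prior, likelihood_ratio):
    """Bayes update in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = prior / (1.0 - prior) * likelihood_ratio
    return odds / (1.0 + odds)

def gene_score(weights, features, bias=0.0):
    """Binary logistic regression score for one candidate gene's feature vector."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return sigmoid(z)

# A prior of 0.1 combined with evidence 9x more likely under association
# lands at even odds, i.e. a posterior of 0.5.
p = posterior(0.1, 9.0)
s = gene_score([1.0, 1.0], [0.0, 0.0])  # zero features give the neutral score 0.5
```

In the actual algorithm, the likelihood ratio would come from network-derived feature vectors (F2, F3) and the priors from the two estimation strategies the abstract mentions.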
Information Integration in Multiple Cue Judgment: A Division of Labor Hypothesis
ERIC Educational Resources Information Center
Juslin, Peter; Karlsson, Linnea; Olsson, Henrik
2008-01-01
There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations.…
Pre-employment integrity testing across multiple industries.
Fine, Saul
2010-10-01
Despite the robust meta-analytic data available, very little comparative research exists on validities of integrity measures within specific industries. Among a sample of 2456 Israeli job applicants, integrity scores were found to be significantly correlated with self-reported counterproductive work behaviors across eight different industries, with no evidence of adverse impact by gender, age, or national origin. These results are believed to be of practical importance to the diverse organizations administering integrity tests. PMID:21117489
Integrating stakeholder values with multiple attributes to quantify watershed performance
NASA Astrophysics Data System (ADS)
Shriver, Deborah M.; Randhir, Timothy O.
2006-08-01
Integrating stakeholder values into the process of quantifying impairment of ecosystem functions is an important aspect of watershed assessment and planning. This study develops a classification and prioritization model to assess potential impairment in watersheds. A systematic evaluation of a broad set of abiotic, biotic, and human indicators of watershed structure and function was used to identify the level of degradation at a subbasin scale. Agencies and communities can use the method to effectively target and allocate resources to areas of greatest restoration need. The watershed performance measure (WPM) developed in this study is composed of three major components: (1) hydrologic processes (water quantity and quality), (2) biodiversity at a species scale (core and priority habitat for rare and endangered species and species richness) and landscape scale (impacts of fragmentation), and (3) urban impacts as assessed in the built environment (effective impervious area) and population effects (densities and density of toxic waste sites). Simulation modeling using the Soil and Water Assessment Tool (SWAT), monitoring information, and spatial analysis with GIS were used to assess each criterion in developing this model. Weights for attributes of potential impairment were determined through the use of the attribute prioritization procedure with a panel of expert stakeholders. This procedure uses preselected attributes and corresponding stakeholder values and is data intensive. The model was applied to all subbasins of the Chicopee River Watershed of western Massachusetts, an area with a mixture of rural, heavily forested lands, suburban, and urbanized areas. Highly impaired subbasins in one community were identified using this methodology and evaluated for principal forms of degradation and potential restoration policies and BMPs. This attribute-based prioritization method could be used in identifying baselines, prioritization policies, and adaptive community
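The weighted aggregation underlying a composite measure like the WPM can be sketched as a weighted sum of normalised attribute scores. The attribute names, values, and weights below are invented for illustration; the actual weights were elicited from the expert stakeholder panel.

```python
def watershed_performance(attrs, weights):
    """Weighted-sum composite score over normalised (0-1) attribute values.

    attrs   : attribute name -> normalised impairment score in [0, 1]
    weights : attribute name -> stakeholder-derived weight, summing to 1
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must be normalised"
    return sum(weights[k] * attrs[k] for k in weights)

# Hypothetical impairment indicators for one subbasin.
attrs = {"hydrology": 0.6, "biodiversity": 0.3, "urban": 0.8}
weights = {"hydrology": 0.4, "biodiversity": 0.35, "urban": 0.25}
score = watershed_performance(attrs, weights)
```

Ranking subbasins by this score is what lets an agency target restoration resources at the most impaired areas.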
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-06
... COMMISSION Certain Integrated Circuit Packages Provided With Multiple Heat- Conducting Paths and Products... With Multiple Heat-Conducting Paths and Products Containing Same, DN 2899; the Commission is soliciting... multiple heat-conducting paths and products containing same. The complaint names as respondents...
Chen, Tianle; Zeng, Donglin; Wang, Yuanjia
2015-12-01
Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years, and data are collected at multiple visits. Although kernel-based statistical learning methods have proven powerful for a wide range of disease prediction problems, they are well studied only for independent data, not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use a different kernel for each data source, taking advantage of the distinctive features of each data modality, and then optimally combine the kernels across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's disease (the Alzheimer's Disease Neuroimaging Initiative, ADNI), where we explore a unique opportunity to combine imaging and genetic data to predict mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data.
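The multiple kernel idea, one kernel per data modality optimally combined, reduces for fixed weights to a convex combination of Gram matrices. A minimal sketch follows, with an RBF kernel for one modality and a linear kernel for the other; the combination weight beta would normally be learned, but is fixed here for illustration.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def linear_kernel(x, y):
    """Plain inner-product kernel."""
    return sum(a * b for a, b in zip(x, y))

def combined_gram(feats_a, feats_b, beta):
    """Convex combination of per-modality kernels: K = beta*K_a + (1-beta)*K_b.

    feats_a / feats_b : per-subject feature vectors from two data sources
                        (e.g. imaging and genetics), aligned by subject.
    """
    n = len(feats_a)
    gram = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            gram[i][j] = (beta * rbf_kernel(feats_a[i], feats_a[j])
                          + (1.0 - beta) * linear_kernel(feats_b[i], feats_b[j]))
    return gram

g = combined_gram([[0.0], [1.0]], [[1.0], [2.0]], beta=0.5)
```

Because a convex combination of valid kernels is itself a valid kernel, the combined Gram matrix can be dropped into any kernel method (SVM, kernel ridge regression) unchanged.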
Integrated Force Method for Indeterminate Structures
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Halford, Gary R.; Patnaik, Surya N.
2008-01-01
Two methods of solving indeterminate structural-mechanics problems have been developed as products of research on the theory of strain compatibility. In these methods, stresses are considered to be the primary unknowns (in contrast to strains and displacements being considered as the primary unknowns in some prior methods). One of these methods, denoted the integrated force method (IFM), makes it possible to compute stresses, strains, and displacements with high fidelity by use of modest finite-element models that entail relatively small amounts of computation. The other method, denoted the completed Beltrami-Michell formulation (CBMF), enables direct determination of stresses in an elastic continuum with general boundary conditions, without the need to first calculate displacements as in traditional methods. The equilibrium equation, the compatibility condition, and the material law are the three fundamental concepts of the theory of structures. For almost 150 years, it has been commonly supposed that the theory is complete. However, until now, the understanding of the compatibility condition remained incomplete, and the compatibility condition was confused with the continuity condition. Furthermore, the compatibility condition as applied to structures in its previous incomplete form was inconsistent with the strain formulation in elasticity.
Studying morphological integration and modularity at multiple levels: concepts and analysis
Klingenberg, Christian Peter
2014-01-01
Although most studies on integration and modularity have focused on variation among individuals within populations or species, this is not the only level of variation for which integration and modularity exist. Multiple levels of biological variation originate from distinct sources: genetic variation, phenotypic plasticity resulting from environmental heterogeneity, fluctuating asymmetry from random developmental variation and, at the interpopulation or interspecific levels, evolutionary change. The processes that produce variation at all these levels can impart integration or modularity on the covariance structure among morphological traits. In turn, studies of the patterns of integration and modularity can inform about the underlying processes. In particular, the methods of geometric morphometrics offer many advantages for such studies because they can characterize the patterns of morphological variation in great detail and maintain the anatomical context of the structures under study. This paper reviews biological concepts and analytical methods for characterizing patterns of variation and for comparing across levels. Because research comparing patterns across levels has only just begun, there are relatively few results, generalizations are difficult and many biological and statistical questions remain unanswered. Nevertheless, it is clear that research using this approach can take advantage of an abundance of new possibilities that are so far largely unexplored. PMID:25002695
Curriculum Integration in Arts Education: Connecting Multiple Art Forms through the Idea of "Space"
ERIC Educational Resources Information Center
Bautista, Alfredo; Tan, Liang See; Ponnusamy, Letchmi Devi; Yau, Xenia
2016-01-01
Arts integration research has focused on documenting how the teaching of specific art forms can be integrated with "core" academic subject matters (e.g. science, mathematics and literacy). However, the question of how the teaching of multiple art forms themselves can be integrated in schools remains to be explored by educational…
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to do so, multiple depth cameras were utilized to acquire depth information around the object. The 3D point cloud representations of the real object are then reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and the elemental image arrays are generated for the newly synthesized model from the given anamorphic optic system's angular step. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display is an excellent way to display a real object in the 360-degree viewing zone.
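The registration-and-merge step can be sketched as mapping each camera's point cloud into a common frame with a rigid transform and concatenating the results. This is a toy stand-in: the per-camera poses below are assumed known, whereas the paper's registration method estimates them.

```python
import math

def transform(points, angle_deg, offset):
    """Rotate points about the vertical (y) axis, then translate.

    A stand-in for the rigid transform mapping one depth camera's
    coordinate frame into the shared world frame.
    """
    a = math.radians(angle_deg)
    out = []
    for x, y, z in points:
        out.append((x * math.cos(a) + z * math.sin(a) + offset[0],
                    y + offset[1],
                    -x * math.sin(a) + z * math.cos(a) + offset[2]))
    return out

def merge_clouds(clouds, poses):
    """Map each camera's cloud into the common frame and concatenate."""
    merged = []
    for cloud, (angle_deg, offset) in zip(clouds, poses):
        merged.extend(transform(cloud, angle_deg, offset))
    return merged

front = [(0.0, 0.0, 1.0)]   # one point seen by the front camera
back = [(0.0, 0.0, 1.0)]    # the opposite surface, seen by a camera rotated 180 deg
merged = merge_clouds([front, back],
                      [(0.0, (0.0, 0.0, 0.0)), (180.0, (0.0, 0.0, 0.0))])
```

After the transforms, the two views land on opposite sides of the object in one consistent frame, which is the prerequisite for rendering the synthetic model's elemental image arrays.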
Methods of Genomic Competency Integration in Practice
Jenkins, Jean; Calzone, Kathleen A.; Caskey, Sarah; Culp, Stacey; Weiner, Marsha; Badzek, Laurie
2015-01-01
Purpose Genomics is increasingly relevant to health care, necessitating support for nurses to incorporate genomic competencies into practice. The primary aim of this project was to develop, implement, and evaluate a year-long genomic education intervention that trained, supported, and supervised institutional administrator and educator champion dyads to increase nursing capacity to integrate genomics through assessments of program satisfaction and institutional achieved outcomes. Design Longitudinal study of 23 Magnet Recognition Program® Hospitals (21 intervention, 2 controls) participating in a 1-year new competency integration effort aimed at increasing genomic nursing competency and overcoming barriers to genomics integration in practice. Methods Champion dyads underwent genomic training consisting of one in-person kick-off training meeting followed by monthly education webinars. Champion dyads designed institution-specific action plans detailing objectives, methods or strategies used to engage and educate nursing staff, timeline for implementation, and outcomes achieved. Action plans focused on a minimum of seven genomic priority areas: champion dyad personal development; practice assessment; policy content assessment; staff knowledge needs assessment; staff development; plans for integration; and anticipated obstacles and challenges. Action plans were updated quarterly, outlining progress made as well as inclusion of new methods or strategies. Progress was validated through virtual site visits with the champion dyads and chief nursing officers. Descriptive data were collected on all strategies or methods utilized, and timeline for achievement. Descriptive data were analyzed using content analysis. Findings The complexity of the competency content and the uniqueness of social systems and infrastructure resulted in a significant variation of champion dyad interventions. Conclusions Nursing champions can facilitate change in genomic nursing capacity through
Zhang, Chao; Joshi, Trupti; Lin, Guan Ning; Xu, Dong
2008-01-01
Characterising gene function is one of the major challenges of the post-genomic era. Various approaches have been developed to integrate multiple sources of high-throughput data to predict gene function. Most of these approaches are used only for research purposes and have not been implemented as publicly available tools; even among the implemented applications, almost all are web-based 'prediction servers' that have to be managed by specialists. This paper introduces a systematic method for integrating various sources of high-throughput data to predict gene function, analyses our prediction results, and evaluates performance based on the competition for mouse gene function prediction (MouseFunc). A stand-alone Java-based software package, 'GeneFAS', is freely available at http://digbio.missouri.edu/genefas.
Solution methods for very highly integrated circuits.
Nong, Ryan; Thornquist, Heidi K.; Chen, Yao; Mei, Ting; Santarelli, Keith R.; Tuminaro, Raymond Stephen
2010-12-01
While advances in manufacturing enable the fabrication of integrated circuits containing tens-to-hundreds of millions of devices, the time-sensitive modeling and simulation necessary to design these circuits poses a significant computational challenge. This is especially true for mixed-signal integrated circuits where detailed performance analyses are necessary for the individual analog/digital circuit components as well as the full system. When the integrated circuit has millions of devices, performing a full system simulation is practically infeasible using currently available Electrical Design Automation (EDA) tools. The principal reason for this is the time required for the nonlinear solver to compute the solutions of large linearized systems during the simulation of these circuits. The research presented in this report aims to address the computational difficulties introduced by these large linearized systems by using Model Order Reduction (MOR) to (i) generate specialized preconditioners that accelerate the computation of the linear system solution and (ii) reduce the overall dynamical system size. MOR techniques attempt to produce macromodels that capture the desired input-output behavior of larger dynamical systems and enable substantial speedups in simulation time. Several MOR techniques that have been developed under the LDRD on 'Solution Methods for Very Highly Integrated Circuits' will be presented in this report. Among those presented are techniques for linear time-invariant dynamical systems that either extend current approaches or improve the time-domain performance of the reduced model using novel error bounds and a new approach for linear time-varying dynamical systems that guarantees dimension reduction, which has not been proven before. Progress on preconditioning power grid systems using multi-grid techniques will be presented as well as a framework for delivering MOR techniques to the user community using Trilinos and the Xyce circuit simulator
ERIC Educational Resources Information Center
Crawford, Carrie L.
1990-01-01
Reviews literature on hypnosis, imagery, and metaphor as applied to the treatment and integration of those with multiple personality disorder (MPD) and dissociative states. Considers diagnostic criteria of MPD; explores current theories of etiology and treatment; and suggests specific examples of various clinical methods of treatment using…
Integrability: mathematical methods for studying solitary waves theory
NASA Astrophysics Data System (ADS)
Wazwaz, Abdul-Majid
2014-03-01
In recent decades, substantial experimental research efforts have been devoted to linear and nonlinear physical phenomena. In particular, studies of integrable nonlinear equations in solitary waves theory have attracted intensive interest from mathematicians, with the principal goal of fostering the development of new methods, and physicists, who are seeking solutions that represent physical phenomena and to form a bridge between mathematical results and scientific structures. The aim for both groups is to build up our current understanding and facilitate future developments, develop more creative results and create new trends in the rapidly developing field of solitary waves. The notion of the integrability of certain partial differential equations occupies an important role in current and future trends, but a unified rigorous definition of the integrability of differential equations still does not exist. For example, an integrable model in the Painlevé sense may not be integrable in the Lax sense. The Painlevé sense indicates that the solution can be represented as a Laurent series in powers of some function that vanishes on an arbitrary surface with the possibility of truncating the Laurent series at finite powers of this function. The concept of Lax pairs introduces another meaning of the notion of integrability. The Lax pair formulates the integrability of nonlinear equation as the compatibility condition of two linear equations. However, it was shown by many researchers that the necessary integrability conditions are the existence of an infinite series of generalized symmetries or conservation laws for the given equation. The existence of multiple soliton solutions often indicates the integrability of the equation but other tests, such as the Painlevé test or the Lax pair, are necessary to confirm the integrability for any equation. In the context of completely integrable equations, studies are flourishing because these equations are able to describe the
Method for measuring multiple scattering corrections between liquid scintillators
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
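The counting step behind such a crosstalk characterization can be sketched as a coincidence search: flag a detection whenever a different scintillator fires within a short time window, consistent with one neutron scattering from one detector into another. The event list and window below are synthetic and purely illustrative, not the authors' measurement.

```python
def crosstalk_fraction(events, window_ns):
    """Estimate the fraction of detections that are candidate scatter events.

    events    : list of (time_ns, detector_id) tuples, sorted by time
    window_ns : coincidence window; a hit in a *different* detector within
                this window flags the earlier hit as a scatter candidate
    """
    flagged = 0
    for i, (t, det) in enumerate(events):
        j = i + 1
        while j < len(events) and events[j][0] - t <= window_ns:
            if events[j][1] != det:
                flagged += 1
                break
            j += 1
    return flagged / len(events) if events else 0.0

# Five synthetic hits on two detectors; two fall in close cross-detector pairs.
events = [(0, 0), (3, 1), (100, 0), (200, 1), (202, 0)]
frac = crosstalk_fraction(events, window_ns=5)
```

In the actual method the time-of-flight spectra, measured at several energy thresholds, separate true crosstalk from accidental coincidences before such fractions feed the corrected point-model mass reconstruction.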
Method for measuring multiple scattering corrections between liquid scintillators
NASA Astrophysics Data System (ADS)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.; Wurtz, R. E.
2016-07-01
A time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
Integrative Data Analysis: The Simultaneous Analysis of Multiple Data Sets
ERIC Educational Resources Information Center
Curran, Patrick J.; Hussong, Andrea M.
2009-01-01
There are both quantitative and methodological techniques that foster the development and maintenance of a cumulative knowledge base within the psychological sciences. Most noteworthy of these techniques is meta-analysis, which allows for the synthesis of summary statistics drawn from multiple studies when the original data are not available.…
Zhao, Dong; Su, Baiquan; Chen, Guowen; Liao, Hongen
2015-04-20
In this paper, we present a polyhedron-shaped floating autostereoscopic display viewable from 360 degrees using integral photography (IP) and multiple semitransparent mirrors. IP combined with polyhedron-shaped multiple semitransparent mirrors achieves a 360-degree viewable floating three-dimensional (3D) autostereoscopic display, which has the advantage of being viewable by several observers from various viewpoints simultaneously. IP is adopted to generate a 3D autostereoscopic image with full parallax. The multiple semitransparent mirrors reflect the corresponding IP images, and the reflected images are situated around the center of the polyhedron-shaped display device to produce the floating display. The spatially reflected IP images reconstruct a floating autostereoscopic image viewable from 360 degrees. We manufactured two prototypes and performed two sets of experiments to evaluate the feasibility of the method described above. The results showed that our approach can achieve a floating autostereoscopic display viewable from the surrounding area; moreover, the proposed method facilitates a continuous viewpoint over the whole 360-degree display without flipping. PMID:25969022
Parallel methods for dynamic simulation of multiple manipulator systems
NASA Technical Reports Server (NTRS)
Mcmillan, Scott; Sadayappan, P.; Orin, David E.
1993-01-01
In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.
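The temporal parallelism described above comes from multi-point predictor-corrector integrators. As a minimal illustration of the general idea (a serial two-step Adams-Bashforth predictor with a trapezoidal corrector, not the four-point block method used in the paper), consider:

```python
import numpy as np

def abm2(f, t0, y0, h, n_steps):
    """Two-step Adams-Bashforth predictor + trapezoidal corrector.

    Serial toy integrator; the paper's four-point *block* variant instead
    predicts and corrects several future points simultaneously, which is
    what creates the parallel work (and the accuracy trade-off) noted above.
    """
    t = t0
    y_curr = np.atleast_1d(np.asarray(y0, dtype=float))
    f_prev = f(t, y_curr)
    y_curr = y_curr + h * f_prev      # bootstrap second point with one Euler step
    t += h
    for _ in range(n_steps - 1):
        f_curr = f(t, y_curr)
        # Predict with 2-step Adams-Bashforth, evaluate, then correct.
        y_pred = y_curr + h * (1.5 * f_curr - 0.5 * f_prev)
        f_pred = f(t + h, y_pred)
        y_curr = y_curr + 0.5 * h * (f_pred + f_curr)
        t += h
        f_prev = f_curr
    return t, y_curr

# dy/dt = -y, y(0) = 1  =>  y(t) = exp(-t)
t_end, y_end = abm2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

A block method applies the same predictor/corrector formulas to a whole window of future points at once, one point per processor.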
Identifying multiple submissions in Internet research: preserving data integrity.
Bowen, Anne M; Daniel, Candice M; Williams, Mark L; Baird, Grayson L
2008-11-01
Internet-based sexuality research with hidden populations has become increasingly popular. Respondent anonymity may encourage participation and lower social desirability, but associated disinhibition may promote multiple submissions, especially when incentives are offered. The goal of this study was to identify the usefulness of different variables for detecting multiple submissions from repeat responders and to explore incentive effects. The data included 1,900 submissions from a three-session Internet intervention with a pretest and three post-test questionnaires. Participants were men who have sex with men, and incentives were offered to rural participants for completing each questionnaire. The final number of submissions included 1,273 "unique", 132 first submissions by "repeat responders" and 495 additional submissions by the "repeat responders" (N = 1,900). Four categories of repeat responders were identified: "infrequent" (2-5 submissions), "persistent" (6-10 submissions), "very persistent" (11-30 submissions), and "hackers" (more than 30 submissions). Internet Protocol (IP) addresses, user names, and passwords were the most useful for identifying "infrequent" repeat responders. "Hackers" often varied their IP address and identifying information to prevent easy identification, but investigating the data for small variations in IP, using reverse telephone lookup, and examining patterns across usernames and passwords were helpful. Incentives appeared to play a role in stimulating multiple submissions, especially from the more sophisticated "hackers". Finally, the web is ever evolving, and it will be necessary to have good programmers and staff who evolve as fast as "hackers".
Methods for the Joint Meta-Analysis of Multiple Tests
ERIC Educational Resources Information Center
Trikalinos, Thomas A.; Hoaglin, David C.; Small, Kevin M.; Terrin, Norma; Schmid, Christopher H.
2014-01-01
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests'…
Evaluating Multiple Prevention Programs: Methods, Results, and Lessons Learned
ERIC Educational Resources Information Center
Adler-Baeder, Francesca; Kerpelman, Jennifer; Griffin, Melody M.; Schramm, David G.
2010-01-01
Extension faculty and agents/educators are increasingly collaborating with local and state agencies to provide and evaluate multiple, distinct programs, yet there is limited information about measuring outcomes and combining results across similar program types. This article explicates the methods and outcomes of a state-level evaluation of…
On a class of summability methods for multiple Fourier series
Dyachenko, Mikhail I
2013-03-31
The paper shows that the same properties which hold for the classical (C,1)-means are preserved for a sufficiently large class of summability methods for multiple Fourier series involving rectangular partial sums. More precisely, Fourier series of continuous functions are uniformly summable by these methods, and Fourier series of functions from the class L(ln⁺ L)^(m-1)(T^m) are summable almost everywhere. Bibliography: 6 titles.
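The (C,1) behaviour referenced here can be illustrated numerically in one dimension: Fejér (arithmetic) means of the partial sums of a continuous function's Fourier series converge uniformly. The sketch below is one-dimensional only and uses a simple quadrature for the coefficients; the paper's rectangular partial sums of multiple series are the m-dimensional analogue:

```python
import numpy as np

def fourier_partial_sums(f, x, n_max, n_quad=2048):
    """Partial sums S_0..S_{n_max} of the Fourier series of f on [-pi, pi]."""
    t = np.linspace(-np.pi, np.pi, n_quad, endpoint=False)
    ft = f(t)
    S = np.full((n_max + 1, len(x)), ft.mean())      # constant term a_0/2
    for n in range(1, n_max + 1):
        an = 2.0 * (ft * np.cos(n * t)).mean()
        bn = 2.0 * (ft * np.sin(n * t)).mean()
        S[n:] += an * np.cos(n * x) + bn * np.sin(n * x)
    return S

x = np.linspace(-np.pi, np.pi, 201)
S = fourier_partial_sums(np.abs, x, 40)   # f(x) = |x|: continuous, 2*pi-periodic
sigma = S.mean(axis=0)                    # (C,1) mean: average of S_0..S_40
```

Both `S[-1]` and `sigma` approximate |x| uniformly here; the interest of the (C,1) mean is that uniform convergence for continuous f holds even when the raw partial sums fail to converge.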
Multiple integral representation for the trigonometric SOS model with domain wall boundaries
NASA Astrophysics Data System (ADS)
Galleas, W.
2012-05-01
Using the dynamical Yang-Baxter algebra we derive a functional equation for the partition function of the trigonometric SOS model with domain wall boundary conditions. The solution of the equation is given in terms of a multiple contour integral.
Path Integral Monte Carlo Methods for Fermions
NASA Astrophysics Data System (ADS)
Ethan, Ethan; Dubois, Jonathan; Ceperley, David
2014-03-01
In general, Quantum Monte Carlo methods suffer from a sign problem when simulating fermionic systems. This causes the efficiency of a simulation to decrease exponentially with the number of particles and inverse temperature. To circumvent this issue, a nodal constraint is often implemented, restricting the Monte Carlo procedure from sampling paths that cause the many-body density matrix to change sign. Unfortunately, this high-dimensional nodal surface is not a priori known unless the system is exactly solvable, resulting in uncontrolled errors. We will discuss two possible routes to extend the applicability of finite-temperature path integral Monte Carlo. First we extend the regime where signful simulations are possible through a novel permutation sampling scheme. Afterwards, we discuss a method to variationally improve the nodal surface by minimizing a free energy during simulation. Applications of these methods will include both free and interacting electron gases, concluding with discussion concerning extension to inhomogeneous systems. Support from DOE DE-FG52-09NA29456, DE-AC52-07NA27344, LLNL LDRD 10- ERD-058, and the Lawrence Scholar program.
Two-Dimensional Integral Combustion for Multiple Phase Flow
1997-05-05
This ANL multiphase two-dimensional combustion computer code solves conservation equations for gaseous species and solid particles (or droplets) of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation for a two-phase reacting flow. Associated submodels include an integral combustion, a two-parameter turbulence, a particle evaporation, and interfacial submodels. A newly developed integral combustion submodel replacing an Arrhenius-type differential reaction submodel is implemented to improve numerical convergence and enhance numerical stability. The two-parameter turbulence submodel is modified for both gas and solid phases. The evaporation submodel treats size dispersion as well as particle evaporation. Interfacial submodels use correlations to model interfacial momentum and energy transfer.
Multiple time-scale methods in particle simulations of plasmas
Cohen, B.I.
1985-02-14
This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep, while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit moment-equation method, the direct implicit method, orbit averaging, and subcycling.
Characterizing lentic freshwater fish assemblages using multiple sampling methods
Fischer, Jesse R.; Quist, Michael
2014-01-01
Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., to four) used in optimal seasons were not present. Specifically, over 90 % of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.
An Alternative Method for Analyzing Active Multiplicity Data.
Pickrell, M. M.
2005-01-01
The authors have developed a novel method for analyzing active neutron multiplicity data. The conventional method was derived from the standard passive multiplicity equations known as the point model. The approach was to substitute a term consisting of the product of the interrogation source strength, the coupling coefficient, and the sample mass for the product term of the sample mass and the fission rate: Iκ_c·m → F₀·m, where I is the source strength, κ_c is the coupling coefficient, m is the sample mass and F is the fission rate. Note that the sample mass, m, refers to the fissile material (e.g., ²³⁵U) in the active case and fertile material (e.g., ²⁴⁰Pu_eff) in the passive case. In addition, the spontaneous fission multiplicity coefficients, ν_s, were replaced with the induced fission multiplicity coefficients, ν_i. This model has several drawbacks. The most significant is that the coupling coefficient, κ_c, varies significantly with the multiplication. As a consequence, there is not a clear linear relationship between the doubles rate and the sample mass, nor is there a clear linear relationship between the multiplication-corrected doubles rate and the sample mass. This problem has limited the application of active neutron multiplicity counting. They propose here a novel approach to deriving the multiplicity equations. A different substitution is made in the point model equations. The value of alpha is replaced with a new term, alpha-prime: α → α′ ≡ α + κ_p·I/(F·m₀·ν_s1). There are several benefits to this approach, but most significant is that the new coupling coefficient, κ_p, remains constant. In this paper they will establish the general physics justification why this different substitution is appropriate. They will derive the new point model equations for active neutron multiplicity starting from the original
Method and apparatus for fiber optic multiple scattering suppression
NASA Technical Reports Server (NTRS)
Ackerson, Bruce J. (Inventor)
2000-01-01
The instant invention provides a method and apparatus for use in laser induced dynamic light scattering which attenuates the multiple scattering component in favor of the single scattering component. The preferred apparatus utilizes two light detectors that are spatially and/or angularly separated and which simultaneously record the speckle pattern from a single sample. The recorded patterns from the two detectors are then cross correlated in time to produce one point on a composite single/multiple scattering function curve. By collecting and analyzing cross correlation measurements that have been taken at a plurality of different spatial/angular positions, the signal representative of single scattering may be differentiated from the signal representative of multiple scattering, and a near optimum detector separation angle for use in taking future measurements may be determined.
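The underlying idea, cross-correlating two separated detectors so that only the component common to both (single scattering) survives while detector-specific multiple-scattering fluctuations average out, can be sketched with synthetic signals. Everything below is illustrative and not drawn from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Fluctuation shared by both detectors (stand-in for single scattering)
# plus independent noise at each detector (stand-in for multiple scattering,
# which decorrelates between spatially separated detectors).
single = rng.normal(size=n)
det_a = single + 2.0 * rng.normal(size=n)
det_b = single + 2.0 * rng.normal(size=n)

def zero_lag_corr(u, v):
    """Normalized correlation at zero lag."""
    u = u - u.mean()
    v = v - v.mean()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

cross = zero_lag_corr(det_a, det_b)  # only the shared part survives (~1/5 here)
auto = zero_lag_corr(det_a, det_a)   # autocorrelation keeps the local noise too
```

Repeating the cross-correlation at several detector separations, as the patent describes, traces out the composite single/multiple scattering curve.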
ERIC Educational Resources Information Center
Braten, Ivar; Britt, M. Anne; Stromso, Helge I.; Rouet, Jean-Francois
2011-01-01
In present-day knowledge societies, competent reading involves the integration of information from multiple sources into a coherent, meaningful representation of a topic, issue, or situation. This article reviews research and theory concerning the comprehension of multiple textual resources, focusing especially on linkages recently established…
ERIC Educational Resources Information Center
Daniel, Shannon M.
2015-01-01
In this self-study, the author reflects on her implementation of empathetic, critical integrations of multiple perspectives (ECI), which she designed to afford preservice teachers the opportunity to discuss and collectively reflect upon the oft-diverging multiple perspectives, values, and practices they experience during their practicum (Daniel,…
Energy Simulation of Integrated Multiple-Zone Variable Refrigerant Flow System
Shen, Bo; Rice, C Keith; Baxter, Van D
2013-01-01
We developed a detailed steady-state system model to simulate the performance of an integrated five-zone variable refrigerant flow (VRF) heat pump system. The system is multi-functional, capable of space cooling, space heating, combined space cooling and water heating, and dedicated water heating. Methods were developed to map the VRF performance in each mode, based on the abundant data produced by the equipment system model. The performance maps were used in TRNSYS annual energy simulations. Using TRNSYS, we successfully set up and ran cases for a multiple-split VRF heat pump and dehumidifier combination in 5-zone houses in 5 climates that control indoor dry-bulb temperature and relative humidity. We compared the calculated energy consumption of the VRF heat pump against that of a baseline central air-source heat pump coupled with electric water heating and standalone dehumidifiers. In addition, we investigated multiple control scenarios for the VRF heat pump, i.e., on/off control, variable indoor air flow rate, and different zone temperature setting schedules. The energy savings for the multiple scenarios were assessed.
A Crack Growth Evaluation Method for Interacting Multiple Cracks
NASA Astrophysics Data System (ADS)
Kamaya, Masayuki
When stress corrosion cracking or corrosion fatigue occurs, multiple cracks are frequently initiated in the same area. According to Section XI of the ASME Boiler and Pressure Vessel Code, multiple cracks are considered as a single combined crack in crack growth analysis if the specified conditions are satisfied. In crack growth processes, however, no prescription for the interference between multiple cracks is given in this code. The JSME Post-Construction Code, issued in May 2000, prescribes the conditions of crack coalescence in the crack growth process. This study aimed to extend this prescription to more general cases. A simulation model was applied to simulate the crack growth process, taking into account the interference between two cracks. This model made it possible to analyze multiple crack growth behaviors for many cases (e.g., different relative positions and lengths) that could not be studied by experiment alone. Based on these analyses, a new crack growth analysis method was suggested for taking into account the interference between multiple cracks.
Students' integration of multiple representations in a titration experiment
NASA Astrophysics Data System (ADS)
Kunze, Nicole M.
A complete understanding of a chemical concept is dependent upon a student's ability to understand the microscopic or particulate nature of the phenomenon and integrate the microscopic, symbolic, and macroscopic representations of the phenomenon. Acid-base chemistry is a general chemistry topic requiring students to understand the topics of chemical reactions, solutions, and equilibrium presented earlier in the course. In this study, twenty-five student volunteers from a second semester general chemistry course completed two interviews. The first interview was completed prior to any classroom instruction on acids and bases. The second interview took place after classroom instruction, a prelab activity consisting of a titration calculation worksheet, a titration computer simulation, or a microscopic level animation of a titration, and two microcomputer-based laboratory (MBL) titration experiments. During the interviews, participants were asked to define and describe acid-base concepts and in the second interview they also drew the microscopic representations of four stages in an acid-base titration. An analysis of the data showed that participants had integrated the three representations of an acid-base titration to varying degrees. While some participants showed complete understanding of acids, bases, titrations, and solution chemistry, other participants showed several alternative conceptions concerning strong acid and base dissociation, the formation of titration products, and the dissociation of soluble salts. Before instruction, participants' definitions of acid, base, and pH were brief and consisted of descriptive terms. After instruction, the definitions were more scientific and reflected the definitions presented during classroom instruction.
Predicting White Matter Integrity from Multiple Common Genetic Variants
Kohannim, Omid; Jahanshad, Neda; Braskie, Meredith N; Stein, Jason L; Chiang, Ming-Chang; Reese, April H; Hibar, Derrek P; Toga, Arthur W; McMahon, Katie L; de Zubicaray, Greig I; Medland, Sarah E; Montgomery, Grant W; Martin, Nicholas G; Wright, Margaret J; Thompson, Paul M
2012-01-01
Several common genetic variants have recently been discovered that appear to influence white matter microstructure, as measured by diffusion tensor imaging (DTI). Each genetic variant explains only a small proportion of the variance in brain microstructure, so we set out to explore their combined effect on the white matter integrity of the corpus callosum. We measured six common candidate single-nucleotide polymorphisms (SNPs) in the COMT, NTRK1, BDNF, ErbB4, CLU, and HFE genes, and investigated their individual and aggregate effects on white matter structure in 395 healthy adult twins and siblings (age: 20–30 years). All subjects were scanned with 4-tesla 94-direction high angular resolution diffusion imaging. When combined using mixed-effects linear regression, a joint model based on five of the candidate SNPs (COMT, NTRK1, ErbB4, CLU, and HFE) explained ∼6% of the variance in the average fractional anisotropy (FA) of the corpus callosum. This predictive model had detectable effects on FA at 82% of the corpus callosum voxels, including the genu, body, and splenium. Predicting the brain's fiber microstructure from genotypes may ultimately help in early risk assessment, and eventually, in personalized treatment for neuropsychiatric disorders in which brain integrity and connectivity are affected. PMID:22510721
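Combining several weak genetic predictors in one linear model, as in the five-SNP result above, is at heart an ordinary least-squares fit plus a variance-explained (R²) computation. The sketch below uses simulated genotypes and an invented phenotype, not the study's data, and omits the mixed-effects terms the authors used to account for twin/sibling relatedness:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_snps = 400, 5

# Hypothetical minor-allele counts (0, 1, 2) and a phenotype ("FA") in
# which each SNP contributes only a small effect -- all values invented.
G = rng.binomial(2, 0.3, size=(n_subjects, n_snps)).astype(float)
beta_true = np.array([0.10, -0.08, 0.07, 0.05, -0.06])
fa = G @ beta_true + rng.normal(scale=0.5, size=n_subjects)

X = np.column_stack([np.ones(n_subjects), G])   # intercept + genotype columns
coef, *_ = np.linalg.lstsq(X, fa, rcond=None)
resid = fa - X @ coef
r2 = 1.0 - resid.var() / fa.var()               # proportion of variance explained
```

With effect sizes this small, R² lands in the few-percent range, mirroring the ~6% figure reported above; the point of aggregation is that the joint model explains more than any single SNP alone.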
Promoting return of function in multiple sclerosis: An integrated approach
Gacias, Mar; Casaccia, Patrizia
2013-01-01
Multiple sclerosis is a disease characterized by inflammatory demyelination, axonal degeneration and progressive brain atrophy. Most of the currently available disease modifying agents proved to be very effective in managing the relapse rate; however, progressive neuronal damage continues to occur and leads to progressive accumulation of irreversible disability. For this reason, any therapeutic strategy aimed at restoration of function must take into account not only immunomodulation, but also axonal protection and new myelin formation. We further highlight the importance of a holistic approach, which considers the variability of therapeutic responsiveness as the result of the interplay between genetic differences and the epigenome, which is in turn affected by gender, age and differences in lifestyle including diet, exercise, smoking and social interaction. PMID:24363985
Exercise in multiple sclerosis -- an integral component of disease management.
Döring, Andrea; Pfueller, Caspar F; Paul, Friedemann; Dörr, Jan
2011-12-24
Multiple sclerosis (MS) is the most common chronic inflammatory disorder of the central nervous system (CNS) in young adults. The disease causes a wide range of symptoms depending on the localization and characteristics of the CNS pathology. In addition to drug-based immunomodulatory treatment, both drug-based and non-drug approaches are established as complementary strategies to alleviate existing symptoms and to prevent secondary diseases. In particular, physical therapy like exercise and physiotherapy can be customized to the individual patient's needs and has the potential to improve the individual outcome. However, high quality systematic data on physical therapy in MS are rare. This article summarizes the current knowledge on the influence of physical activity and exercise on disease-related symptoms and physical restrictions in MS patients. Other treatment strategies such as drug treatments or cognitive training were deliberately excluded for the purposes of this article.
Mafuba, Kay; Gates, Bob
2012-12-01
This paper explores and advocates the use of sequential multiple methods as a contemporary strategy for undertaking research. Sequential multiple methods involve the use of results obtained through one data collection method to determine the direction and implementation of subsequent stages of a research project (Morse, 1991; Morgan, 1998). This paper will also explore how triangulating research at the epistemological, theoretical and methodological levels could enhance research. Finally, the paper evaluates the significance of sequential multiple methods in learning disability nursing research practice.
A method for interactive specification of multiple-block topologies
NASA Technical Reports Server (NTRS)
Sorenson, Reese L.; Mccann, Karen M.
1991-01-01
A method is presented for dealing with the vast amount of topological and other data which must be specified to generate a multiple-block computational grid. Specific uses of the graphical capabilities of a powerful scientific workstation are described which reduce the burden on the user of collecting and formatting such large amounts of data. A program to implement this method, 3DPREP, is described. A plotting transformation algorithm, some useful software tools, notes on programming, and a database organization are also presented. Example grids developed using the method are shown.
Improved parallel solution techniques for the integral transport matrix method
Zerr, Robert J; Azmy, Yousry Y
2010-11-23
Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution by up to ~50% when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because the sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner.
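A toy version of the red-black coloring idea, applied to the unknowns of a 1-D Poisson problem rather than to transport sub-domains, shows why the scheme parallelizes: each color-set can be updated all at once because its neighbors all belong to the other color:

```python
import numpy as np

def red_black_gauss_seidel(b, h, n_sweeps):
    """Red-black Gauss-Seidel for -u'' = b on [0, 1], u(0) = u(1) = 0.

    Toy analogue of the sub-domain coloring in the PGS algorithm: every
    "red" (odd-index) unknown is updated simultaneously, then every
    "black" (even-index) one, since each color only reads the other color.
    """
    u = np.zeros_like(b)
    for _ in range(n_sweeps):
        for start in (1, 2):                      # red pass, then black pass
            i = np.arange(start, len(u) - 1, 2)
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * b[i])
    return u

n = 65
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
b = np.pi ** 2 * np.sin(np.pi * x)                # exact solution: sin(pi*x)
u = red_black_gauss_seidel(b, h, 5000)
```

In the paper the "unknowns" are whole sub-domains and each color-set is distributed across processors, but the independence argument is the same.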
Galerkin projection methods for solving multiple related linear systems
Chan, T.F.; Ng, M.; Wan, W.L.
1996-12-31
We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single-seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
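A simplified sketch of the single-seed procedure for the SPD case: run CG on the seed system while saving its search directions, then obtain an approximation for a nearby right-hand side by Galerkin projection onto the saved Krylov subspace (restarts and the block extension are omitted):

```python
import numpy as np

def cg_with_directions(A, b, tol=1e-10):
    """Conjugate gradients that also returns its search directions."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    dirs = []
    while np.linalg.norm(r) > tol * np.linalg.norm(b):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        dirs.append(p.copy())
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x, np.array(dirs).T            # columns span the seed Krylov subspace

def galerkin_solve(A, P, b):
    """Galerkin approximation to Ax = b from the subspace spanned by P."""
    y = np.linalg.solve(P.T @ A @ P, P.T @ b)   # small projected system
    return P @ y

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50.0 * np.eye(50)               # symmetric positive definite
b_seed = rng.normal(size=50)
b_near = b_seed + 0.01 * rng.normal(size=50)  # a "close" right-hand side

x_seed, P = cg_with_directions(A, b_seed)
x_near = galerkin_solve(A, P, b_near)         # no new CG iterations needed
```

Because the CG directions are A-orthogonal, the projected matrix P.T @ A @ P is nearly diagonal, so the projection step is cheap; if the remaining residual is too large, the unsolved system becomes the next seed, as described above.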
Improved Multiple-Coarsening Methods for Sn Discretizations of the Boltzmann Equation
Lee, Barry
2010-06-01
In a recent series of articles, the author presented a multiple-coarsening multigrid method for solving S_n discretizations of the Boltzmann transport equation. This algorithm is applied to an integral equation for the scalar flux or moments. Although this algorithm is very efficient over parameter regimes that describe realistic neutron/photon transport applications, improved methods that can reduce the computational cost are presented in this paper. These improved methods are derived through a careful examination of the frequencies, particularly the near-nullspace, of the integral equation. In the earlier articles, the near-nullspace components were shown to be smooth in angle in the sense that the angular fluxes generated by these components are smooth in angle. In this paper, we present a spatial description of these near-nullspace components. Using the angular description of the earlier papers together with the spatial description reveals the intrinsic space-angle dependence of the integral equation's frequencies. This space-angle dependence is used to determine the appropriate space-angle grids to represent and efficiently attenuate the near-nullspace error components on. It will be shown that these components can have multiple spatial scales. By using only the appropriate space-angle grids that can represent these spatial scales in the original multiple-coarsening algorithm, an improved algorithm is obtained. Moreover, particularly for anisotropic scattering, recognizing the strong angle dependence of the angular fluxes generated by the high frequencies of the integral equation, another improved multiple-coarsening scheme is derived. Restricting this scheme to the appropriate space-angle grids produces a very efficient method.
Improved Multiple-Coarsening Methods for Sn Discretizations of the Boltzmann Equation
Lee, B
2008-12-01
In a recent series of articles, the author presented a multiple-coarsening multigrid method for solving S_n discretizations of the Boltzmann transport equation. This algorithm is applied to an integral equation for the scalar flux or moments. Although this algorithm is very efficient over parameter regimes that describe realistic neutron/photon transport applications, improved methods that can reduce the computational cost are presented in this paper. These improved methods are derived through a careful examination of the frequencies, particularly the near-nullspace, of the integral equation. In the earlier articles, the near-nullspace components were shown to be smooth in angle in the sense that the angular fluxes generated by these components are smooth in angle. In this paper, we present a spatial description of these near-nullspace components. Using the angular description of the earlier papers together with the spatial description reveals the intrinsic space-angle dependence of the integral equation's frequencies. This space-angle dependence is used to determine the appropriate space-angle grids to represent and efficiently attenuate the near-nullspace error components on. It will be shown that these components can have multiple spatial scales. By using only the appropriate space-angle grids that can represent these spatial scales in the original multiple-coarsening algorithm, an improved algorithm is obtained. Moreover, particularly for anisotropic scattering, recognizing the strong angle dependence of the angular fluxes generated by the high frequencies of the integral equation, another improved multiple-coarsening scheme is derived. Restricting this scheme to the appropriate space-angle grids produces a very efficient method.
Multiple roles of ATM in monitoring and maintaining DNA integrity
Derheimer, Frederick A; Kastan, Michael B
2010-01-01
The ability of our cells to maintain genomic integrity is fundamental for protection from cancer development. Central to this process is the ability of cells to recognize and repair DNA damage and progress through the cell cycle in a regulated and orderly manner. In addition, protection of chromosome ends through the proper assembly of telomeres prevents loss of genetic information and aberrant chromosome fusions. Cells derived from patients with ataxia-telangiectasia (A-T) show defects in cell cycle regulation, abnormal responses to DNA breakage, and chromosomal end-to-end fusions. The identification and characterization of the ATM (ataxia-telangiectasia, mutated) gene product has provided an essential tool for researchers in elucidating cellular mechanisms involved in cell cycle control, DNA repair, and chromosomal stability. PMID:20580718
Reis, Ben Y; Mandl, Kenneth D
2003-01-01
Syndromic surveillance systems are being deployed widely to monitor for signals of covert bioterrorist attacks. Regional systems are being established through the integration of local surveillance data across multiple facilities. We studied how different methods of data integration affect outbreak detection performance. We used a simulation relying on a semi-synthetic dataset, introducing simulated outbreaks of different sizes into historical visit data from two hospitals. In one simulation, we introduced the synthetic outbreak evenly into both hospital datasets (aggregate model). In the second, the outbreak was introduced into only one or the other of the hospital datasets (local model). We found that the aggregate model had a higher sensitivity for detecting outbreaks that were evenly distributed between the hospitals. However, for outbreaks that were localized to one facility, maintaining individual models for each location proved to be better. Given the complementary benefits offered by both approaches, the results suggest building a hybrid system that includes both individual models for each location, and an aggregate model that combines all the data. We also discuss options for multi-level signal integration hierarchies. PMID:14728233
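The local-versus-aggregate trade-off can be illustrated with a toy z-score calculation (all numbers hypothetical, not from the study): independent day-to-day noise from the two hospitals adds in quadrature in the aggregate series, diluting a signal confined to one facility:

```python
import math

def z_score(excess, baseline_sd):
    """Detection statistic: outbreak excess in units of baseline noise."""
    return excess / baseline_sd

# Hypothetical setup: each hospital's daily counts have noise SD of 10 visits.
sd_one_hospital = 10.0
# Independent noise sources add in quadrature in the combined series.
sd_aggregate = math.sqrt(2.0) * sd_one_hospital

outbreak_excess = 30.0  # extra visits, all occurring at a single hospital

z_local = z_score(outbreak_excess, sd_one_hospital)   # full signal vs one hospital's noise
z_aggregate = z_score(outbreak_excess, sd_aggregate)  # same signal diluted by both hospitals' noise
```

A localized signal keeps its full size in the local series but competes against both facilities' noise in the aggregate, which matches the finding that individual models do better for single-facility outbreaks (and the aggregate model for evenly spread ones).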
Measuring multiple residual-stress components using the contour method and multiple cuts
Prime, Michael B; Swenson, Hunter; Pagliaro, Pierluigi; Zuccarello, Bernardo
2009-01-01
The conventional contour method determines one component of stress over the cross section of a part. The part is cut into two, the contour of the exposed surface is measured, and Bueckner's superposition principle is analytically applied to calculate stresses. In this paper, the contour method is extended to the measurement of multiple stress components by making multiple cuts with subsequent applications of superposition. The theory and limitations are described. The theory is experimentally tested on a 316L stainless steel disk with residual stresses induced by plastically indenting the central portion of the disk. The stress results are validated against independent measurements using neutron diffraction. The theory has implications beyond just multiple cuts. The contour method measurements and calculations for the first cut reveal how the residual stresses have changed throughout the part. Subsequent measurements of partially relaxed stresses by other techniques, such as laboratory x-rays, hole drilling, or neutron or synchrotron diffraction, can be superimposed back to the original state of the body.
Multiple light scattering methods for multiphase flow diagnostics
NASA Astrophysics Data System (ADS)
Estevadeordal, Jordi
2015-11-01
Multiphase flows of gases and liquids containing droplets, bubbles, or particulates present light scattering imaging challenges due to the interference from each phase, such as secondary reflections, extinctions, absorptions, and refractions. These factors often prevent the unambiguous detection of each phase and also produce undesired beam steering. The effects can be especially complex in the presence of dense phases, multispecies flows, and high-pressure environments. This investigation reports new methods for overcoming these effects for quantitative measurements of velocity, density, and temperature fields. The methods are based on light scattering techniques combining Mie and filtered Rayleigh scattering and light extinction analyses and measurements. The optical layout is designed to perform multiple property measurements with improved signal from each phase via laser spectral and polarization characterization, etalon decontamination, and use of multiple wavelengths and imaging detectors.
Integrating regional conservation priorities for multiple objectives into national policy.
Beger, Maria; McGowan, Jennifer; Treml, Eric A; Green, Alison L; White, Alan T; Wolff, Nicholas H; Klein, Carissa J; Mumby, Peter J; Possingham, Hugh P
2015-01-01
Multinational conservation initiatives that prioritize investment across a region invariably navigate trade-offs among multiple objectives. It seems logical to focus where several objectives can be achieved efficiently, but such multi-objective hotspots may be ecologically inappropriate, or politically inequitable. Here we devise a framework to facilitate a regionally cohesive set of marine-protected areas driven by national preferences and supported by quantitative conservation prioritization analyses, and illustrate it using the Coral Triangle Initiative. We identify areas important for achieving six objectives to address ecosystem representation, threatened fauna, connectivity and climate change. We expose trade-offs between areas that contribute substantially to several objectives and those meeting one or two objectives extremely well. Hence there are two strategies to guide countries choosing to implement regional goals nationally: multi-objective hotspots and complementary sets of single-objective priorities. This novel framework is applicable to any multilateral or global initiative seeking to apply quantitative information in decision making. PMID:26364769
Integrating regional conservation priorities for multiple objectives into national policy
Beger, Maria; McGowan, Jennifer; Treml, Eric A.; Green, Alison L.; White, Alan T.; Wolff, Nicholas H.; Klein, Carissa J.; Mumby, Peter J.; Possingham, Hugh P.
2015-01-01
Multinational conservation initiatives that prioritize investment across a region invariably navigate trade-offs among multiple objectives. It seems logical to focus where several objectives can be achieved efficiently, but such multi-objective hotspots may be ecologically inappropriate, or politically inequitable. Here we devise a framework to facilitate a regionally cohesive set of marine-protected areas driven by national preferences and supported by quantitative conservation prioritization analyses, and illustrate it using the Coral Triangle Initiative. We identify areas important for achieving six objectives to address ecosystem representation, threatened fauna, connectivity and climate change. We expose trade-offs between areas that contribute substantially to several objectives and those meeting one or two objectives extremely well. Hence there are two strategies to guide countries choosing to implement regional goals nationally: multi-objective hotspots and complementary sets of single-objective priorities. This novel framework is applicable to any multilateral or global initiative seeking to apply quantitative information in decision making. PMID:26364769
DictyBase 2013: integrating multiple Dictyostelid species.
Basu, Siddhartha; Fey, Petra; Pandit, Yogesh; Dodson, Robert; Kibbe, Warren A; Chisholm, Rex L
2013-01-01
dictyBase (http://dictybase.org) is the model organism database for the social amoeba Dictyostelium discoideum. This contribution provides an update on dictyBase that has been previously presented. During the past 3 years, dictyBase has taken significant strides toward becoming a genome portal for the whole Amoebozoa clade. In its latest release, dictyBase has scaled up to host multiple Dictyostelids, including Dictyostelium purpureum [Sucgang, Kuo, Tian, Salerno, Parikh, Feasley, Dalin, Tu, Huang, Barry et al.(2011) (Comparative genomics of the social amoebae Dictyostelium discoideum and Dictyostelium purpureum. Genome Biol., 12, R20)], Dictyostelium fasciculatum and Polysphondylium pallidum [Heidel, Lawal, Felder, Schilde, Helps, Tunggal, Rivero, John, Schleicher, Eichinger et al. (2011) (Phylogeny-wide analysis of social amoeba genomes highlights ancient origins for complex intercellular communication. Genome Res., 21, 1882-1891)]. The new release includes a new Genome Browser with RNAseq expression, interspecies Basic Local Alignment Search Tool alignments and a unified Basic Local Alignment Search Tool search for cross-species comparisons.
Lidar Tracking of Multiple Fluorescent Tracers: Method and Field Test
NASA Technical Reports Server (NTRS)
Eberhard, Wynn L.; Willis, Ron J.
1992-01-01
Past research and applications have demonstrated the advantages and usefulness of lidar detection of a single fluorescent tracer to track air motions. Earlier researchers performed an analytical study that showed good potential for lidar discrimination and tracking of two or three different fluorescent tracers at the same time. The present paper summarizes the multiple fluorescent tracer method, discusses its expected advantages and problems, and describes our field test of this new technique.
Boundary integral methods for microfluidic problems
NASA Astrophysics Data System (ADS)
Burbidge, Adam
2015-01-01
Microscale experiments of reduced complexity allow one to tease out and examine some of the interesting phenomena that manifest in large hierarchically structured materials which are of general interest across many industries. Recent advances in high speed imaging techniques and post-processing allow experiments to yield small scale information which was previously unavailable, or extremely difficult to obtain. This additional information provides new challenges in terms of theoretical understanding and prediction that requires new tools. We discuss generalised weighted residual numerical methods as a means of solving physically derived systems of PDEs, using the steady Stokes equation as an example. These formulations require integration of arbitrary functions of submanifolds which often will have a lower dimensionality than the parent manifold, leading to cumbersome calculations of the Jacobian determinant. We provide a tensorial view of the transformation, in which the natural element coordinate system is a non-orthogonal frame, and derive an expression for the Jacobian factor in terms of the contravariant metric tensor g^ij. This approach has the additional advantage that it can be extended to yield the local surface curvature, which will be essential for correct implementation of free surface boundaries.
Boundary integral methods for unsaturated flow
Martinez, M.J.; McTigue, D.F.
1990-12-31
Many large simulations may be required to assess the performance of Yucca Mountain as a possible site for the nation's first high-level nuclear waste repository. A boundary integral equation method (BIEM) is described for numerical analysis of quasilinear steady unsaturated flow in homogeneous material. The applicability of the exponential model for the dependence of hydraulic conductivity on pressure head is discussed briefly. This constitutive assumption is at the heart of the quasilinear transformation. Materials which display a wide distribution in pore size are described reasonably well by the exponential. For materials with a narrow range in pore size, the exponential is suitable over more limited ranges in pressure head. The numerical implementation of the BIEM is used to investigate the infiltration from a strip source to a water table. The net infiltration of moisture into a finite-depth layer is well described by results for a semi-infinite layer if αD > 4, where α is the sorptive number and D is the depth to the water table. The distribution of moisture exhibits a similar dependence on αD.
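As a brief sketch of the constitutive model at the heart of the quasilinear transformation (parameter values below are hypothetical): with the exponential conductivity law, the Kirchhoff potential Φ satisfies dΦ/dψ = K(ψ), which is what linearizes the steady flow equation:

```python
import numpy as np

Ks = 1.0e-5   # hypothetical saturated conductivity [m/s]
alpha = 2.0   # sorptive number [1/m]

def K(psi):
    """Exponential conductivity model K(psi) = Ks * exp(alpha * psi), psi <= 0."""
    return Ks * np.exp(alpha * psi)

def kirchhoff(psi):
    """Kirchhoff potential Phi(psi) = Ks / alpha * exp(alpha * psi),
    whose derivative with respect to psi recovers K(psi)."""
    return Ks / alpha * np.exp(alpha * psi)

# Central-difference check that dPhi/dpsi = K(psi) over a range of suctions.
psi = np.linspace(-3.0, 0.0, 31)
h = 1e-6
dPhi = (kirchhoff(psi + h) - kirchhoff(psi - h)) / (2 * h)
```

In the Kirchhoff variable the steady unsaturated flow equation becomes linear in Φ, which is what makes a boundary integral formulation feasible.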
Integrating multiple scientific computing needs via a Private Cloud infrastructure
NASA Astrophysics Data System (ADS)
Bagnasco, S.; Berzano, D.; Brunetti, R.; Lusso, S.; Vallero, S.
2014-06-01
In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
Assessing District Energy Systems Performance Integrated with Multiple Thermal Energy Storages
NASA Astrophysics Data System (ADS)
Rezaie, Behnaz
The goal of this study is to examine various energy resources in district energy (DE) systems and then to improve DE system performance through the application of multiple thermal energy storages (TES). This study sheds light on areas not yet investigated in detail. Throughout the research, major components of the heat plant, energy suppliers of the DE systems, and TES characteristics are separately examined; the integration of various configurations of multiple TESs in the DE system is then analysed. In the first part of the study, various sources of energy are compared, in a consistent manner, financially and environmentally. The TES performance is then assessed from various aspects. Then, TES(s) and DE systems with several sources of energy are integrated and investigated as a heat process centre. The most efficient configurations of the multiple TESs integrated with the DE system are investigated. Some of the findings of this study are applied to an actual DE system. The outcomes of this study provide insight for researchers and engineers who work in this field, as well as for policy makers and project managers who are decision-makers. The accomplishments of the study are original developments of TESs and DE systems. As an original development, the Enviro-Economic Function is introduced to balance the economic and environmental aspects of energy resource technologies in DE systems; various configurations of multiple TESs, including series, parallel, and general grid, are developed. The related functions developed are the discharge temperature and energy of the TES, and the energy and exergy efficiencies of the TES. The instantaneous charging and discharging behaviour of the TES is also investigated to obtain the charging temperature, the maximum charging temperature, the charging energy flow, maximum heat flow capacity, the discharging temperature, the minimum charging temperature, the discharging energy flow, the maximum heat flow capacity, and performance
Differential operator multiplication method for fractional differential equations
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Ying, Yuping; Lian, Yanping; Lin, Stephen; Yang, Yibo; Wagner, Gregory J.; Liu, Wing Kam
2016-11-01
Fractional derivatives play a very important role in modeling physical phenomena involving long-range correlation effects. However, they raise challenges of computational cost and memory storage requirements when solved using current well developed numerical methods. In this paper, the differential operator multiplication method is proposed to address the issues by considering a reaction-advection-diffusion equation with a fractional derivative in time. The linear fractional differential equation is transformed into an integer order differential equation by the proposed method, which can fundamentally fix the aforementioned issues for select fractional differential equations. In such a transform, special attention should be paid to the initial conditions for the resulting differential equation of higher integer order. Through numerical experiments, we verify the proposed method for both fractional ordinary differential equations and partial differential equations.
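As a hedged one-line illustration of the operator-multiplication idea (not the paper's reaction-advection-diffusion example): composing a Caputo half-order equation with the half-order operator once more yields an integer-order equation, provided the composition rule applies for the solution and the inherited initial conditions are treated with the care the authors note:

```latex
% Caputo half-derivative example, assuming the composition rule
% D_t^{1/2} ( D_t^{1/2} u ) = u' holds for the solution u
% (sufficient smoothness/compatibility conditions).
\[
  D_t^{1/2} u(t) = \lambda\, u(t), \qquad u(0) = u_0 ,
\]
\[
  \text{apply } D_t^{1/2}: \qquad
  u'(t) = \lambda\, D_t^{1/2} u(t) = \lambda^{2} u(t), \qquad u(0) = u_0 .
\]
```

The second equation is of integer order and amenable to standard solvers; the price, as the abstract emphasizes, is that the initial conditions of the resulting higher-order problem must be made consistent with the original fractional one.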
Differential operator multiplication method for fractional differential equations
NASA Astrophysics Data System (ADS)
Tang, Shaoqiang; Ying, Yuping; Lian, Yanping; Lin, Stephen; Yang, Yibo; Wagner, Gregory J.; Liu, Wing Kam
2016-08-01
Fractional derivatives play a very important role in modeling physical phenomena involving long-range correlation effects. However, they raise challenges of computational cost and memory storage requirements when solved using current well developed numerical methods. In this paper, the differential operator multiplication method is proposed to address the issues by considering a reaction-advection-diffusion equation with a fractional derivative in time. The linear fractional differential equation is transformed into an integer order differential equation by the proposed method, which can fundamentally fix the aforementioned issues for select fractional differential equations. In such a transform, special attention should be paid to the initial conditions for the resulting differential equation of higher integer order. Through numerical experiments, we verify the proposed method for both fractional ordinary differential equations and partial differential equations.
An Integrated Approach for Accessing Multiple Datasets through LANCE
NASA Astrophysics Data System (ADS)
Murphy, K. J.; Teague, M.; Conover, H.; Regner, K.; Beaumont, B.; Masuoka, E.; Vollmer, B.; Theobald, M.; Durbin, P.; Michael, K.; Boller, R. A.; Schmaltz, J. E.; Davies, D.; Horricks, K.; Ilavajhala, S.; Thompson, C. K.; Bingham, A.
2011-12-01
The NASA/GSFC Land Atmospheres Near-real time Capability for EOS (LANCE) provides imagery for approximately 40 data products from MODIS, AIRS, AMSR-E and OMI to support the applications community in the study of a variety of phenomena. Thirty-six of these products are available within 2.5 hours of observation at the spacecraft. The data set includes the population density data provided by the EOSDIS Socio-Economic Data and Applications Center (SEDAC). The purpose of this paper is to describe the variety of tools that have been developed by LANCE to support user access to the imagery. The long-standing Rapid Response system has been integrated into LANCE and is a major vehicle for the distribution of the imagery to end users. There are presently approximately 10,000 anonymous users per month accessing this imagery. The products are grouped into 14 applications categories such as Smoke Plumes, Pollution, Fires, Agriculture, and the selection of any category will make relevant subsets of the 40 products available as possible overlays in an interactive Web Client utilizing the Web Mapping Service (WMS) to support user investigations (http://lance2.modaps.eosdis.nasa.gov/wms/). For example, selecting Severe Storms will include 6 products from MODIS, OMI, AIRS, and AMSR-E plus the SEDAC population density data. The client and WMS were developed using open-source technologies such as OpenLayers and MapServer and provide uniform, browser-based access to data products. All overlays are downloadable in PNG, JPEG, or GeoTIFF form up to 200MB per request. The WMS was beta-tested with the user community and substantial performance improvements were made through the use of such techniques as tile-caching. LANCE established a partnership with the Physical Oceanography Distributed Active Archive Center (PO DAAC) to develop an alternative presentation for the 40 data products known as the State of the Earth (SOTE). This provides a Google Earth-based interface to the products grouped in
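A GetMap request of the kind such a WMS client issues can be sketched as a query string (the layer name, bounding box, and sizes below are hypothetical, not taken from the LANCE service; a real client uses the layers advertised by the server's GetCapabilities response):

```python
from urllib.parse import urlencode

# Hypothetical parameter values for a WMS 1.1.1 GetMap request.
params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "MODIS_Terra_TrueColor",   # hypothetical layer name
    "SRS": "EPSG:4326",                  # geographic lat/lon
    "BBOX": "-180,-90,180,90",           # whole-globe bounding box
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
}
query = urlencode(params)  # appended to the WMS endpoint after '?'
```

Because every overlay is addressed this way, any WMS-aware client (OpenLayers in the paper's case) can compose and download the product layers uniformly.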
Comparison of Multiple Gene Assembly Methods for Metabolic Engineering
NASA Astrophysics Data System (ADS)
Lu, Chenfeng; Mansoorabadi, Karen; Jeffries, Thomas
A universal, rapid DNA assembly method for efficient multigene plasmid construction is important for biological research and for optimizing gene expression in industrial microbes. Three different approaches to achieve this goal were evaluated. These included creating long complementary extensions using a uracil-DNA glycosylase technique, overlap extension polymerase chain reaction, and a SfiI-based ligation method. SfiI ligation was the only successful approach for assembling large DNA fragments that contained repeated homologous regions. In addition, the SfiI method has been improved over a similar, previously published technique so that it is more flexible and does not require polymerase chain reaction to incorporate adaptors. In the present study, Saccharomyces cerevisiae genes TAL1, TKL1, and PYK1 under control of the 6-phosphogluconate dehydrogenase promoter were successfully ligated together using multiple unique SfiI restriction sites. The desired construct was obtained 65% of the time during vector construction using four-piece ligations. The SfiI method consists of three steps: first, a SfiI linker vector is constructed, whose multiple cloning site is flanked by two three-base linkers matching the neighboring SfiI linkers on SfiI digestion; second, the linkers are attached to the desired genes by cloning them into SfiI linker vectors; third, the genes, flanked by the three-base linkers, are released by SfiI digestion. In the final step, genes of interest are joined together in a simple one-step ligation.
Numerical solution of integral-algebraic equations for multistep methods
NASA Astrophysics Data System (ADS)
Budnikova, O. S.; Bulatov, M. V.
2012-05-01
Systems of Volterra linear integral equations with identically singular matrices in the principal part (called integral-algebraic equations) are examined. Multistep methods for the numerical solution of a selected class of such systems are proposed and justified.
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
Multiple Coarse Grid Multigrid Methods for Solving Elliptic Problems
NASA Technical Reports Server (NTRS)
Xiao, Shengyou; Young, David
1996-01-01
In this paper we describe some classes of multigrid methods for solving large linear systems arising in the solution by finite difference methods of certain boundary value problems involving Poisson's equation on rectangular regions. If parallel computing systems are used, then with standard multigrid methods many of the processors will be idle when one is working at the coarsest grid levels. We describe the use of Multiple Coarse Grid MultiGrid (MCGMG) methods. Here one first constructs a periodic set of equations corresponding to the given system. One then constructs a set of coarse grids such that for each grid corresponding to the grid size h there are four grids corresponding to the grid size 2*h. Multigrid operations such as restriction of residuals and interpolation of corrections are done in parallel at each grid level. For suitable choices of the multigrid operators the MCGMG method is equivalent to the Parallel Superconvergent MultiGrid (PSMG) method of Frederickson and McBryan. The convergence properties of MCGMG methods can be accurately analyzed using spectral methods.
A new method for multiple sperm cells tracking.
Imani, Yoones; Teyfouri, Niloufar; Ahmadzadeh, Mohammad Reza; Golabbakhsh, Marzieh
2014-01-01
Motion analysis and quality assessment of human sperm cells are of great importance for clinical applications in male infertility. Sperm tracking is quite complex due to cell collision, occlusion and missed detection. The aim of this study is the simultaneous tracking of multiple human sperm cells. In the first step of this research, the frame difference algorithm is used for background subtraction. Selecting an appropriate threshold value is difficult, since the output accuracy depends strongly on the chosen threshold. To eliminate this dependency, we propose an improved non-linear diffusion filtering in the time domain. Non-linear diffusion filtering is a smoothing and noise-removing approach that can preserve edges in images. Many sperm cells moving with different speeds in different directions eventually coincide. For multiple tracking over time, an optimal matching strategy is introduced that is based on the optimization of a new cost function. A Hungarian search method is utilized to obtain the best matching among all possible candidates. The results show nearly 3.24% frame-based error on a dataset of videos containing more than 1 and fewer than 10 sperm cells; hence the accuracy rate was 96.76%. These results indicate the validity of the proposed algorithm for performing multiple sperm tracking.
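The frame-to-frame matching step can be sketched with a brute-force optimal assignment over a squared-distance cost (a toy stand-in adequate for the fewer than 10 cells per video reported above; the Hungarian algorithm finds the same optimum in polynomial time):

```python
from itertools import permutations

def match_detections(prev, curr):
    """Optimally assign detections in the current frame to tracks from the
    previous frame by minimising the total squared distance (toy cost function).
    Assumes equal numbers of detections in both frames."""
    n = len(curr)

    def total_cost(perm):
        return sum((px - curr[j][0]) ** 2 + (py - curr[j][1]) ** 2
                   for (px, py), j in zip(prev, perm))

    best = min(permutations(range(n)), key=total_cost)
    return list(enumerate(best))  # (track index, detection index) pairs
```

In practice the cost function would also penalise implausible speeds and handle unequal detection counts (missed detections, occlusion) with dummy entries, but the optimisation core is the same.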
NASA Astrophysics Data System (ADS)
Mahmud, K.; Mariethoz, G.; Baker, A.
2013-12-01
It has been widely demonstrated that the hydraulic conductivity of an aquifer increases with a larger portion of the aquifer tested. This poses a challenge when different hydraulic conductivity measurements coexist in a field study and have to be integrated simultaneously (e.g. core analysis, slug tests and well tests). While the scaling of hydraulic conductivity can be analytically derived in multiGaussian media, there is no general methodology to simultaneously integrate hydraulic conductivity measurements taken at different scales in highly heterogeneous media. Here we address this issue in the context of multiple-point statistics simulations (MPS). In MPS, the spatial continuity is based on a training image (TI) that contains the variability, connectivity, and structural properties of the medium. The key principle of our methodology is to consider the different scales of hydraulic conductivity as joint variables which are simulated together. Based on a TI that represents the fine-scale spatial variability, we use a classical upscaling method to obtain a series of upscaled TIs that correspond to the different scales at which measurements are available. In our case, the renormalization method is used for this upscaling step, but any upscaling method could be employed. Considered together, the different scales obtained are considered a single multi-scale representation of the initial TI, in a similar fashion as the multiscale pyramids used in image processing. We then use recent MPS simulation methods that allow dealing with multivariate TIs to generate conditional realizations of the different scales together. One characteristic of these realizations is that the possible non-linear relationships between the different simulated scales are statistically similar to the relationships observed in the multiscale TI. Therefore these relationships are considered a reasonable approximation of the renormalization results that were used on the TI. Another characteristic of
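The multiscale training-image construction can be sketched as follows (hypothetical positive-valued field, with a blockwise geometric mean standing in for the renormalization upscaling; any upscaling operator could be substituted, as the text notes):

```python
import numpy as np

def upscale_geometric(K, factor=2):
    """Coarsen a positive conductivity field by the blockwise geometric mean,
    a simple stand-in for the renormalization upscaling step."""
    n = K.shape[0] // factor
    m = K.shape[1] // factor
    blocks = K[:n * factor, :m * factor].reshape(n, factor, m, factor)
    return np.exp(np.log(blocks).mean(axis=(1, 3)))

def multiscale_pyramid(ti, levels=3):
    """Stack of progressively upscaled training images, finest first,
    treated jointly as one multivariate (multi-scale) training image."""
    out = [ti]
    for _ in range(levels - 1):
        out.append(upscale_geometric(out[-1]))
    return out
```

Simulating the scales jointly then reproduces, in the realizations, the scale-to-scale relationships present in this pyramid, which is how measurements at different support volumes are conditioned simultaneously.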
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
Fabricating and operating complex systems involves dealing with uncertainty in the relevant variables. In the case of aircraft, flow conditions are subject to change during operation. Efficiency and engine noise may be different from the expected values because of manufacturing tolerances and normal wear and tear. Engine components may have a shorter life than expected because of manufacturing tolerances. In spite of the important effect of operating- and manufacturing-uncertainty on the performance and expected life of the component or system, traditional aerodynamic shape optimization has focused on obtaining the best design given a set of deterministic flow conditions. Clearly it is important to both maintain near-optimal performance levels at off-design operating conditions, and, ensure that performance does not degrade appreciably when the component shape differs from the optimal shape due to manufacturing tolerances and normal wear and tear. These requirements naturally lead to the idea of robust optimal design wherein the concept of robustness to various perturbations is built into the design optimization procedure. The basic ideas involved in robust optimal design will be included in this lecture. The imposition of the additional requirement of robustness results in a multiple-objective optimization problem requiring appropriate solution procedures. Typically the costs associated with multiple-objective optimization are substantial. Therefore efficient multiple-objective optimization procedures are crucial to the rapid deployment of the principles of robust design in industry. Hence the companion set of lecture notes (Single- and Multiple-Objective Optimization with Differential Evolution and Neural Networks ) deals with methodology for solving multiple-objective Optimization problems efficiently, reliably and with little user intervention. Applications of the methodologies presented in the companion lecture to robust design will be included here. The
Robust control of multiple integrators subject to input saturation and disturbance
NASA Astrophysics Data System (ADS)
Ding, Shihong; Zheng, Wei Xing
2015-04-01
This paper is concerned with the problem of robust stabilisation of multiple-integrator systems subject to input saturation and disturbance from the viewpoint of state feedback and output feedback. First of all, without considering the disturbance, a backstepping-like method in conjunction with a series of saturation functions with different saturation levels is employed to design a nested-saturation based state-feedback controller with pre-chosen parameters. On this basis, taking the disturbance into account, a sliding mode disturbance observer (DOB) is adopted to estimate the states and the disturbance. Then, by combining the above state-feedback controller and the estimated states together, a composite controller with disturbance compensation is developed. With the removal of the non-increasing restriction on the saturation levels, the controller design becomes very flexible and the convergence performance of the closed-loop system is much improved. Meanwhile, with the aid of the estimated values by the DOB, we obtain not only the output-feedback control scheme but also better disturbance rejection for the closed-loop system. A simulation example of a triple-integrator system is presented to substantiate the usefulness of the proposed technique.
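The paper's composite controller with a disturbance observer is beyond a short sketch, but the underlying nested-saturation idea can be shown on the classical double-integrator case (Teel-style construction, |u| ≤ 1), which is a simplified analogue of the triple-integrator example, not the paper's controller:

```python
def sat(s, level):
    # saturation function with the given level
    return max(-level, min(level, s))

# Double integrator: x1' = x2, x2' = u, with bounded input |u| <= 1.
x1, x2 = 5.0, -3.0        # large initial condition, saturations active
dt = 1e-3
for _ in range(200_000):  # 200 s of forward-Euler simulation
    u = -sat(x2 + sat(x1 + x2, 0.5), 1.0)   # nested saturations, levels 0.5 and 1
    x1 += dt * x2
    x2 += dt * u
print(x1, x2)
```

Despite the input bound, the nested-saturation law drives the state to the origin; near the origin the saturations are inactive and the loop behaves linearly.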
Moving Mesh Methods in Multiple Dimensions Based on Harmonic Maps
NASA Astrophysics Data System (ADS)
Li, Ruo; Tang, Tao; Zhang, Pingwen
2001-07-01
In practice, there are three types of adaptive methods using the finite element approach, namely the h-method, p-method, and r-method. In the h-method, the overall method contains two parts, a solution algorithm and a mesh selection algorithm. These two parts are independent of each other in the sense that the change of the PDEs will affect the first part only. However, in some of the existing versions of the r-method (also known as the moving mesh method), these two parts are strongly associated with each other and as a result any change of the PDEs will result in the rewriting of the whole code. In this work, we will propose a moving mesh method which also contains two parts, a solution algorithm and a mesh-redistribution algorithm. Our aim is to retain the advantages of the r-method (e.g., the number of nodes remains unchanged) and of the h-method (e.g., the two parts of the code are independent). A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In this work, we will extend Dvinsky's method to provide an efficient solver for the mesh-redistribution algorithm. The key idea is to construct the harmonic map between the physical space and a parameter space by an iteration procedure. Each iteration step is to move the mesh closer to the harmonic map. This procedure is simple and easy to program and also enables us to keep the map harmonic even after long numerical integration times. The numerical schemes are applied to a number of test problems in two dimensions. It is observed that the mesh-redistribution strategy based on the harmonic maps adapts the mesh extremely well to the solution without producing skew elements for multi-dimensional computations.
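The mesh-redistribution idea (keep the node count fixed, move nodes toward regions of large solution variation) has a simple one-dimensional analogue: de Boor-style equidistribution of an arc-length monitor function. This is an illustration of r-refinement in general, not of the paper's harmonic-map solver; the test function and monitor are assumptions.

```python
import numpy as np

# derivative of u(x) = tanh(50*(x - 0.5)), a steep interior layer
up = lambda x: 50.0 / np.cosh(50.0 * (x - 0.5)) ** 2

x = np.linspace(0.0, 1.0, 41)                  # start from a uniform mesh
for _ in range(20):                            # fixed-point equidistribution iteration
    M = np.sqrt(1.0 + up(x) ** 2)              # arc-length monitor function
    Mm = 0.5 * (M[1:] + M[:-1])                # cell averages of the monitor
    s = np.concatenate(([0.0], np.cumsum(Mm * np.diff(x))))  # cumulative integral of M
    s /= s[-1]
    x = np.interp(np.linspace(0.0, 1.0, x.size), s, x)       # invert to equidistribute

h = np.diff(x)
print(h.min(), h.max())   # cells cluster inside the layer near x = 0.5
```

The number of nodes never changes; only their positions move, which is exactly the r-method property the abstract emphasizes.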
Hankin, Benjamin L.
2014-01-01
Depression is a developmental phenomenon. Considerable progress has been made in describing the syndrome, establishing its prevalence and features, providing clues as to its etiology, and developing evidence-based treatment and prevention options. Despite considerable headway in distinct lines of vulnerability research, there is an explanatory gap in the field's ability to more comprehensively explain and predict who is likely to become depressed, when, and why. Still, despite clear success in predicting moderate variance for future depression, especially with empirically rigorous methods and designs, the heterogeneous and multi-determined nature of depression suggests that additional etiologies need to be included to advance knowledge on developmental pathways to depression. This paper advocates for a multiple levels of analysis approach to investigating vulnerability to depression across the lifespan and providing a more comprehensive understanding of its etiology. One example of a multiple levels of analysis model of vulnerabilities to depression is provided that integrates the most accessible, observable factors (e.g., cognitive and temperament risks), intermediate processes and endophenotypes (e.g., information processing biases, biological stress physiology, and neural activation and connectivity), and genetic influences (e.g., candidate genes and epigenetics). Evidence for each of these factors as well as their cross-level integration is provided. Methodological and conceptual considerations important for conducting integrative, multiple levels of depression vulnerability research are discussed. Finally, translational implications for how a multiple levels of analysis perspective may confer additional leverage to reduce the global burden of depression and improve care are considered. PMID:22900513
Uncertainty in streamflow records - a comparison of multiple estimation methods
NASA Astrophysics Data System (ADS)
Kiang, Julie; Gazoorian, Chris; Mason, Robert; Le Coz, Jerome; Renard, Benjamin; Mansanarez, Valentin; McMillan, Hilary; Westerberg, Ida; Petersen-Øverleir, Asgeir; Reitan, Trond; Sikorska, Anna; Siebert, Jan; Coxon, Gemma; Freer, Jim; Belleville, Arnaud; Hauet, Alexandre
2016-04-01
Stage-discharge rating curves are used to relate streamflow discharge to continuously measured river stage readings in order to create a continuous record of streamflow discharge. The stage-discharge relationship is estimated and refined using discrete streamflow gaugings over time, during which both the discharge and stage are measured. The resulting rating curve has uncertainty due to multiple factors including the curve-fitting process, assumptions on the form of the model used, the changeable nature of natural channels, and the approaches used to extrapolate the rating equation beyond available observations. A number of different methods have been proposed for estimating rating curve uncertainty, differing in mathematical rigour, in the assumptions made about the component errors, and in the information required to implement the method at any given site. This study compares several methods that range from simple LOWESS fits to more complicated Bayesian methods that consider hydraulic principles directly. We evaluate these different methods when applied to a single gauging station using the same information (channel characteristics, hydrographs, and streamflow gaugings). We quantify the resultant spread of the stage-discharge curves and compare the level of uncertainty attributed to the streamflow record by the different methods.
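A minimal sketch of the rating-curve fitting and uncertainty idea, not any of the compared methods: fit a power-law rating Q = a(h − h0)^b to synthetic gaugings by log-linear least squares, then bootstrap the gaugings to get a simple spread on the exponent. The "true" parameters, noise level, and the assumption that the cease-to-flow stage h0 is known are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true, h0 = 5.0, 1.6, 0.2             # synthetic "true" rating
h = rng.uniform(0.5, 3.0, 40)                  # discrete gaugings (stage)
Q = a_true * (h - h0) ** b_true * rng.lognormal(0.0, 0.05, h.size)  # 5% scatter

def fit(hh, QQ):
    # log-linear least squares: log Q = log a + b * log(h - h0)
    X = np.vstack([np.ones_like(hh), np.log(hh - h0)]).T
    c = np.linalg.lstsq(X, np.log(QQ), rcond=None)[0]
    return np.exp(c[0]), c[1]

a_hat, b_hat = fit(h, Q)

# Resample the gaugings to estimate the spread of the fitted exponent
idx = rng.integers(0, h.size, size=(500, h.size))
b_boot = np.array([fit(h[i], Q[i])[1] for i in idx])
print(a_hat, b_hat, b_boot.std())
```

The bootstrap spread captures only curve-fitting uncertainty; the methods compared in the study also account for model form, channel change, and extrapolation error.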
Methods for radiation detection and characterization using a multiple detector probe
Akers, Douglas William; Roybal, Lyle Gene
2014-11-04
Apparatuses, methods, and systems relating to radiological characterization of environments are disclosed. Multi-detector probes with a plurality of detectors in a common housing may be used to substantially concurrently detect a plurality of different radiation activities and types. Multiple multi-detector probes may be used in a down-hole environment to substantially concurrently detect radioactive activity and contents of a buried waste container. Software may process, analyze, and integrate the data from the different multi-detector probes and the different detector types therein to provide source location and integrated analysis as to the source types and activity in the measured environment. Further, the integrated data may be used to compensate for differential density effects and the effects of radiation shielding materials within the volume being measured.
Efficient implicit integration for finite-strain viscoplasticity with a nested multiplicative split
NASA Astrophysics Data System (ADS)
Shutov, A. V.
2016-07-01
An efficient and reliable stress computation algorithm is presented, which is based on implicit integration of the local evolution equations of multiplicative finite-strain plasticity/viscoplasticity. The algorithm is illustrated by an example involving a combined nonlinear isotropic/kinematic hardening; numerous backstress tensors are employed for a better description of the material behavior. The considered material model exhibits the so-called weak invariance under arbitrary isochoric changes of the reference configuration, and the presented algorithm retains this useful property. Even more: the weak invariance serves as a guide in constructing this algorithm. The constraint of inelastic incompressibility is exactly preserved as well. The proposed method is first-order accurate. Concerning the accuracy of the stress computation, the new algorithm is comparable to the Euler Backward method with a subsequent correction of incompressibility (EBMSC) and the classical exponential method (EM). Regarding the computational efficiency, the new algorithm is superior to the EBMSC and EM. Some accuracy tests are presented using parameters of the aluminum alloy 5754-O and the 42CrMo4 steel. FEM solutions of two boundary value problems using MSC.MARC are presented to show the correctness of the numerical implementation.
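The paper's finite-strain, weakly invariant algorithm is too involved for a sketch, but the basic structure of implicit stress integration (elastic trial step, yield check, closed-form plastic corrector) can be shown in a standard small-strain 1D return mapping with linear isotropic hardening. All material parameters are illustrative, not the paper's.

```python
# 1D backward-Euler return mapping with linear isotropic hardening
E, H, sy0 = 200e3, 10e3, 250.0     # Young's modulus, hardening modulus, yield stress (MPa)

def update(eps, eps_p, alpha):
    sig_tr = E * (eps - eps_p)                 # elastic trial stress
    f_tr = abs(sig_tr) - (sy0 + H * alpha)     # trial yield function
    if f_tr <= 0.0:
        return sig_tr, eps_p, alpha            # elastic step, trial state accepted
    dgam = f_tr / (E + H)                      # plastic multiplier (closed form in 1D)
    sign = 1.0 if sig_tr > 0 else -1.0
    eps_p += dgam * sign                       # update plastic strain
    alpha += dgam                              # update hardening variable
    return E * (eps - eps_p), eps_p, alpha

eps_p, alpha = 0.0, 0.0
for eps in [0.0005 * k for k in range(1, 11)]:  # monotonic strain ramp to 0.5%
    sig, eps_p, alpha = update(eps, eps_p, alpha)
print(sig, alpha)
```

After yielding, the returned stress sits exactly on the updated yield surface, the defining property of the implicit (backward Euler) update.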
A multiple phenotype imputation method for genetic studies
Dahl, Andrew; Iotchkova, Valentina; Baud, Amelie; Johansson, Åsa; Gyllensten, Ulf; Soranzo, Nicole; Mott, Richard; Kranis, Andreas; Marchini, Jonathan
2016-01-01
Genetic association studies have yielded a wealth of biologic discoveries. However, these have mostly analyzed one trait and one SNP at a time, thus failing to capture the underlying complexity of these datasets. Joint genotype-phenotype analyses of complex, high-dimensional datasets represent an important way to move beyond simple GWAS with great potential. The move to high-dimensional phenotypes will raise many new statistical problems. In this paper we address the central issue of missing phenotypes in studies with any level of relatedness between samples. We propose a multiple phenotype mixed model and use a computationally efficient variational Bayesian algorithm to fit the model. On a variety of simulated and real datasets from a range of organisms and trait types, we show that our method outperforms existing state-of-the-art methods from the statistics and machine learning literature and can boost signals of association. PMID:26901065
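The benefit of borrowing information across correlated phenotypes when imputing missing values can be sketched far more simply than the paper's variational Bayesian mixed model: below, a missing trait is imputed by regression on an observed correlated trait, and beats mean-filling. The simulated traits and correlation are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
y1 = rng.normal(0.0, 1.0, n)                  # fully observed phenotype
y2 = 0.8 * y1 + rng.normal(0.0, 0.6, n)       # correlated second phenotype
miss = rng.random(n) < 0.3                    # 30% of y2 missing
obs = ~miss

# Conditional-mean imputation from the observed correlated trait
beta = np.polyfit(y1[obs], y2[obs], 1)
y2_imp = np.where(miss, np.polyval(beta, y1), y2)

mse_imp = np.mean((y2_imp[miss] - y2[miss]) ** 2)
mse_mean = np.mean((y2[obs].mean() - y2[miss]) ** 2)
print(mse_imp, mse_mean)   # regression imputation clearly beats mean-filling
```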
Multiple-time-stepping generalized hybrid Monte Carlo methods
Escribano, Bruno; Akhmatskaya, Elena; Reich, Sebastian; Azpiroz, Jon M.
2015-01-01
Performance of the generalized shadow hybrid Monte Carlo (GSHMC) method [1], which proved to be superior in sampling efficiency over its predecessors [2-4], molecular dynamics and hybrid Monte Carlo, can be further improved by combining it with multi-time-stepping (MTS) and mollification of slow forces. We demonstrate that these comparatively simple modifications of the method not only lead to better performance of GSHMC itself but also make it possible to outperform the best-performing methods that use similar force-splitting schemes. In addition we show that the same ideas can be successfully applied to the conventional generalized hybrid Monte Carlo method (GHMC). The resulting methods, MTS-GHMC and MTS-GSHMC, provide accurate reproduction of thermodynamic and dynamical properties, exact temperature control during simulation, and computational robustness and efficiency. MTS-GHMC uses a generalized momentum update to achieve weak stochastic stabilization of the molecular dynamics (MD) integrator. MTS-GSHMC adds the use of a shadow (modified) Hamiltonian to filter the MD trajectories in the HMC scheme. We introduce a new shadow Hamiltonian formulation adapted to force-splitting methods. The use of such Hamiltonians improves the acceptance rate of trajectories and has a strong impact on the sampling efficiency of the method. Both methods were implemented in the open-source MD package ProtoMol and were tested on water and protein systems. Results were compared to those obtained using a Langevin Molly (LM) method [5] on the same systems. The test results demonstrate the superiority of the new methods over LM in terms of stability, accuracy and sampling efficiency. This suggests that putting the MTS approach in the framework of hybrid Monte Carlo and using the natural stochasticity offered by the generalized hybrid Monte Carlo improve the stability of MTS and allow larger step sizes in the simulation of complex systems.
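The force-splitting core that MTS-GHMC/MTS-GSHMC build on can be illustrated on a toy system: an impulse (r-RESPA-style) integrator that kicks with the slow force at the outer step and integrates the fast force with inner velocity-Verlet substeps. This is the generic MTS idea only, not the GSHMC machinery; the split harmonic forces and step sizes are assumptions.

```python
# Impulse multiple-time-stepping for a harmonic oscillator with a
# fast/slow force split: F = -k_fast*x (fast) + -k_slow*x (slow).
k_fast, k_slow, m = 100.0, 1.0, 1.0
DT, n_inner = 0.05, 10            # outer step; inner substeps for the fast force
dt = DT / n_inner

def energy(x, p):
    return p * p / (2.0 * m) + 0.5 * (k_fast + k_slow) * x * x

x, p = 1.0, 0.0
E0 = energy(x, p)
drift = 0.0
for _ in range(2000):                      # 100 time units
    p += 0.5 * DT * (-k_slow * x)          # half kick with the slow force
    for _ in range(n_inner):               # velocity Verlet on the fast force
        p += 0.5 * dt * (-k_fast * x)
        x += dt * p / m
        p += 0.5 * dt * (-k_fast * x)
    p += 0.5 * DT * (-k_slow * x)          # second slow half kick
    drift = max(drift, abs(energy(x, p) - E0) / E0)
print(drift)   # bounded energy error, as expected for a symplectic MTS scheme
```

The outer step here stays well away from the impulse-MTS resonance near half the fast period; mollification and shadow Hamiltonians, as in the abstract, are what allow pushing the step size further.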
Integrated navigation method based on inertial navigation system and Lidar
NASA Astrophysics Data System (ADS)
Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi
2016-04-01
An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment was considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on the analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the results of the traditional integrated navigation and DR methods, the proposed method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment error were effectively compensated by the proposed method, confirming the validity of the new integrated navigation method.
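The fusion principle behind INS/Lidar integration, a drifting dead-reckoning prediction corrected by absolute position fixes, can be sketched with a 1D constant-velocity Kalman filter. This is a generic toy model, not the paper's error-state formulation; all noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.1, 400
F = np.array([[1.0, dt], [0.0, 1.0]])         # constant-velocity motion model
Q = np.diag([1e-4, 1e-3])                     # process noise (IMU-like drift)
Hm = np.array([[1.0, 0.0]])                   # Lidar-like position fix
R = np.array([[0.25]])                        # measurement noise variance (std 0.5)

truth = np.array([0.0, 1.0])                  # true state: moving at 1 m/s
xk, P = np.zeros(2), np.eye(2)
err_kf, err_meas = [], []
for _ in range(n):
    truth = F @ truth + rng.multivariate_normal([0.0, 0.0], Q)
    z = Hm @ truth + rng.normal(0.0, 0.5, 1)  # noisy position measurement
    xk = F @ xk                               # predict (dead reckoning)
    P = F @ P @ F.T + Q
    S = Hm @ P @ Hm.T + R                     # update with the position fix
    K = P @ Hm.T @ np.linalg.inv(S)
    xk = xk + K @ (z - Hm @ xk)
    P = (np.eye(2) - K @ Hm) @ P
    err_kf.append(abs(xk[0] - truth[0]))
    err_meas.append(abs(z[0] - truth[0]))
print(np.mean(err_kf), np.mean(err_meas))     # filtered error beats raw fixes
```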
NASA Astrophysics Data System (ADS)
Carr, M. C.; Baker, G. S.; Herrmann, N.; Yerka, S.; Angst, M.
2008-12-01
The objectives of this project are to (1) utilize quantitative integration of multiple geophysical techniques, (2) determine geophysical anomalies that may indicate locations of various archaeological structures, and (3) develop techniques of quantifying causes of uncertainty. Two sites are used to satisfy these objectives. The first, representing a site with unknown target features, is an archaeological site on the Tennessee River floodplain. The area is divided into 437 (20 x 20 m) plots with 0.5 m spacing where magnetic gradiometry profiles were collected in a zig-zag pattern, resulting in 350 km of line data. Once anomalies are identified in the magnetics data, potential excavation sites for archaeological features are determined and other geophysical techniques are utilized to gain confidence in choosing which anomalies to excavate. Several grids are resurveyed using Ground Penetrating Radar (GPR) and EM-31 with a 0.25 m spacing in a grid pattern. A quantitative method of integrating data into one comprehensive set is developed, enhancing interpretation because each geophysical technique utilized within this study produced a unique response to noise and the targets. Spatial visualization software is used to interpolate irregularly spaced XYZ data into a regularly spaced grid and display the geophysical data in 3D representations. Once all data are exported from each individual instrument, grid files are created for quantitative merging of the data and to create grid-based maps including contour, image, shaded relief, and surface maps. Statistics were calculated from anomaly classification in the data and excavated features present. To study this methodology in a more controlled setting, a second site is used. This site is analogous to the first in that it is along the Tennessee River floodplain on the same bedrock units. However, this analog site contains known targets (previously buried and accurately located) including size, shape, and orientation. Four
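The gridding step described above, interpolating irregularly spaced XYZ data onto a regular grid so that different instruments can be merged, can be sketched with k-nearest-neighbour inverse-distance weighting. The survey field and point density below are synthetic assumptions, not the project's data.

```python
import numpy as np

rng = np.random.default_rng(6)
xy = rng.uniform(0.0, 20.0, (300, 2))               # irregular survey points (m)
f = lambda p: np.sin(p[:, 0] / 3.0) + 0.5 * np.cos(p[:, 1] / 4.0)  # synthetic field
vals = f(xy)

# Regular 0.5 m grid over the same 20 x 20 m area
gx, gy = np.meshgrid(np.arange(0.0, 20.5, 0.5), np.arange(0.0, 20.5, 0.5))
grid_pts = np.column_stack([gx.ravel(), gy.ravel()])

# Inverse-distance weighting over the 4 nearest survey points
d = np.linalg.norm(grid_pts[:, None, :] - xy[None, :, :], axis=2)
idx = np.argsort(d, axis=1)[:, :4]
dn = np.take_along_axis(d, idx, axis=1)
wn = 1.0 / (dn + 1e-9)
grid_vals = (wn * vals[idx]).sum(axis=1) / wn.sum(axis=1)

err = np.abs(grid_vals - f(grid_pts))
print(err.mean())   # gridded field tracks the true field closely
```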
Technology Integration in a One-to-One Laptop Initiative: A Multiple Case Study Analysis
ERIC Educational Resources Information Center
Jones, Marsha B.
2013-01-01
The purpose of this multiple case study analysis was to examine teachers' experiences and perceptions in order to understand what actions and interactions supported or inhibited technology integration during a one-to-one laptop initiative. This research sought to gain teachers' perspectives on the challenges and successes they faced as classroom…
The Effect of Sensory Integration Treatment on Children with Multiple Disabilities.
ERIC Educational Resources Information Center
Din, Feng S.; Lodato, Donna M.
Six children with multiple disabilities (ages 5 to 8) participated in this evaluation of the effect of sensory integration treatment on sensorimotor function and academic learning. The children had cognitive abilities ranging from sub-average to significantly sub-average, three were non-ambulatory, one had severe behavioral problems, and each…
STRUCTURE OF THE EGF RECEPTOR TRANSACTIVATION CIRCUIT INTEGRATES MULTIPLE SIGNALS WITH CELL CONTEXT
Joslin, Elizabeth J.; Shankaran, Harish; Opresko, Lee K.; Bollinger, Nikki; Lauffenburger, Douglas A.; Wiley, H. Steven
2012-01-01
Transactivation of the epidermal growth factor receptor (EGFR) is thought to be a process by which a variety of cellular inputs can be integrated into a single signaling pathway through either stimulated proteolysis (shedding) of membrane-anchored EGFR ligands or by modification of the activity of the EGFR. As a first step towards building a predictive model of the EGFR transactivation circuit, we quantitatively defined how signals from multiple agonists were integrated both upstream and downstream of the EGFR to regulate extracellular signal regulated kinase (ERK) activity in human mammary epithelial cells. By using a “non-binding” reporter of ligand shedding, we found that transactivation triggers a positive feedback loop from ERK back to the EGFR such that ligand shedding drives EGFR-stimulated ERK that in turn drives further ligand shedding. Importantly, activated Ras and ERK levels were nearly linear functions of ligand shedding and the effect of multiple, sub-saturating inputs was additive. Simulations showed that ERK-mediated feedback through ligand shedding resulted in a stable steady-state level of activated ERK, but also showed that the extracellular environment can modulate the level of feedback. Our results suggest that the transactivation circuit acts as a context-dependent integrator and amplifier of multiple extracellular signals and that signal integration can effectively occur at multiple points in the EGFR pathway. PMID:20458382
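The claim that a shedding-to-ERK positive feedback loop still settles to a stable steady state can be illustrated with a hypothetical two-variable ODE sketch (not the authors' model): a saturating activation term keeps the feedback from running away. All rate constants below are invented for illustration.

```python
# Toy positive-feedback loop: ligand shedding -> EGFR/ERK -> more shedding,
# with saturating (Michaelis-Menten-like) activation of ERK.
k_act, k_shed, d_erk, d_shed = 2.0, 1.5, 1.0, 1.0

def step(erk, shed, dt=0.01, inp=0.2):
    d_e = k_act * shed / (1.0 + shed) - d_erk * erk   # saturating activation
    d_s = inp + k_shed * erk - d_shed * shed          # feedback drives shedding
    return erk + dt * d_e, shed + dt * d_s

erk, shed = 0.0, 0.0
for _ in range(5000):            # integrate 50 time units to steady state
    erk, shed = step(erk, shed)
print(erk, shed)                 # converges to a finite fixed point
```

Because activation saturates while degradation stays linear, the loop amplifies the input but reaches a stable steady state, the qualitative behavior the simulations in the abstract report.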
Multiplicity and Self-Identity: Trauma and Integration in Shirley Mason's Art
ERIC Educational Resources Information Center
Thompson, Geoffrey
2011-01-01
This viewpoint appeared in its original form as the catalogue essay that accompanied the exhibition "Multiplicity and Self-Identity: Trauma and Integration in Shirley Mason's Art," curated by the author for Gallery 2110, Sacramento, CA, and the 2010 Annual Conference of the American Art Therapy Association. The exhibition featured 17 artworks by…
ERIC Educational Resources Information Center
Rega, Bonney
Noting that linguistic and mathematical/logical are the two kinds of intelligences the educational system encourages, while the nonverbal form of intellect tends to be neglected by education and by science in general, this paper describes Howard Gardner's multiple intelligences theory and Peter Kline's theory of integrative learning…
ERIC Educational Resources Information Center
Waldmann, Michael R.
2007-01-01
In everyday life, people typically observe fragments of causal networks. From this knowledge, people infer how novel combinations of causes they may never have observed together might behave. I report on 4 experiments that address the question of how people intuitively integrate multiple causes to predict a continuously varying effect. Most…
ERIC Educational Resources Information Center
Gil, Laura; Braten, Ivar; Vidal-Abarca, Eduardo; Stromso, Helge I.
2010-01-01
One of the major challenges of a knowledge society is that students as well as other citizens must learn to understand and integrate information from multiple textual sources. Still, task and reader characteristics that may facilitate or constrain such intertextual processes are not well understood by researchers. In this study, we compare the…
Integration of graphene oxide and DNA as a universal platform for multiple arithmetic logic units.
Wang, Kun; Ren, Jiangtao; Fan, Daoqing; Liu, Yaqing; Wang, Erkang
2014-11-28
By a combination of graphene oxide and DNA, a universal platform was developed for integration of multiple logic gates to implement both half adder and half subtractor functions. A constant undefined threshold range between high and low fluorescence output signals was set for all the developed logic gates.
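The logical behavior realized by the graphene oxide/DNA platform, half adder and half subtractor with a fluorescence threshold separating high and low outputs, can be written out as a small Boolean sketch. The threshold value and signal levels here are placeholders, not the paper's measured values.

```python
def to_bit(signal, threshold=0.5):
    # map an analog fluorescence-like output to a logic level
    return 1 if signal > threshold else 0

def half_adder(a, b):
    # sum = XOR, carry = AND
    return a ^ b, a & b

def half_subtractor(a, b):
    # difference = XOR, borrow = (NOT a) AND b
    return a ^ b, (1 - a) & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b), half_subtractor(a, b))
```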
Multiple proviral integration events after virological synapse-mediated HIV-1 spread
Russell, Rebecca A.; Martin, Nicola; Mitar, Ivonne; Jones, Emma; Sattentau, Quentin J.
2013-08-15
HIV-1 can move directly between T cells via virological synapses (VS). Although aspects of the molecular and cellular mechanisms underlying this mode of spread have been elucidated, the outcomes for infection of the target cell remain incompletely understood. We set out to determine whether HIV-1 transfer via VS results in productive, high-multiplicity HIV-1 infection. We found that HIV-1 cell-to-cell spread resulted in nuclear import of multiple proviruses into target cells as seen by fluorescence in-situ hybridization. Proviral integration into the target cell genome was significantly higher than that seen in a cell-free infection system, and consequent de novo viral DNA and RNA production in the target cell detected by quantitative PCR increased over time. Our data show efficient proviral integration across VS, implying the probability of multiple integration events in target cells that drive productive T cell infection.
Highlights:
• Cell-to-cell HIV-1 infection delivers multiple vRNA copies to the target cell.
• Cell-to-cell infection results in productive infection of the target cell.
• Cell-to-cell transmission is more efficient than cell-free HIV-1 infection.
• Suggests a mechanism for recombination in cells infected with multiple viral genomes.
NASA Astrophysics Data System (ADS)
VanMaasdam, Peter J.; Riddle, Jack G.
2003-09-01
The problem of seamless scene integration from multiple 3-dimensional views of a location for surveillance or recognition purposes is one that continues to receive much interest. This technique holds the promise of increased ability to detect concealed targets, as well as better visualization of the scene itself. The process of creating an integrated scene 'model' from multiple range images taken at different views of the scene consists of several basic steps: (1) Matching of scene points across views, (2) Registration of the multiple views to a common reference frame, and (3) Integration of the multiple views into a complete 3D representation (such as a mesh or voxel space). We propose using a technique known as spin-map correlation to compute the initial scene point correspondences between views. This technique has the advantage of being able to perform the registration with minimal knowledge of viewing geometry or viewer location - the only requirement is that there is overlap between views. Registration is performed using the correspondences generated from spin-map matching to seed an Iterative Closest Point (ICP) algorithm. The ICP algorithm grows the list of correspondences and estimates the rigid transformation between the multiple views. Following registration of the disparate views, the surface is represented probabilistically in a voxel space that is then polygonised into a triangular facet model using the well-known marching cubes algorithm. We demonstrate this procedure using LADAR range images of an armored vehicle of interest.
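The registration step described above can be grounded in a small example: once correspondences exist (here from spin-map matching, seeded into ICP), each ICP iteration solves for the best rigid transform between matched point sets, which has a closed-form SVD (Kabsch/Procrustes) solution. The sketch below shows that inner step on synthetic 2D points with known correspondences; it is not the full spin-map/ICP pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.uniform(-1.0, 1.0, (50, 2))                # scene points, view 1
theta = 0.7
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])
t = np.array([2.0, -1.0])
Qv = P @ Rt.T + t                                  # same points seen from view 2

# Kabsch/Procrustes: optimal rigid transform given correspondences
Pc, Qc = P - P.mean(0), Qv - Qv.mean(0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)                # SVD of the cross-covariance
R_est = (U @ Vt).T
if np.linalg.det(R_est) < 0:                       # guard against reflections
    Vt[-1] *= -1
    R_est = (U @ Vt).T
t_est = Qv.mean(0) - R_est @ P.mean(0)
print(np.allclose(R_est, Rt), np.allclose(t_est, t))
```

ICP simply alternates this solve with re-matching closest points until the transform stops changing.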
Integration of Multiple Genomic and Phenotype Data to Infer Novel miRNA-Disease Associations.
Shi, Hongbo; Zhang, Guangde; Zhou, Meng; Cheng, Liang; Yang, Haixiu; Wang, Jing; Sun, Jie; Wang, Zhenzhen
2016-01-01
MicroRNAs (miRNAs) play an important role in the development and progression of human diseases. The identification of disease-associated miRNAs will be helpful for understanding the molecular mechanisms of diseases at the post-transcriptional level. Based on different types of genomic data sources, computational methods for miRNA-disease association prediction have been proposed. However, individual source of genomic data tends to be incomplete and noisy; therefore, the integration of various types of genomic data for inferring reliable miRNA-disease associations is urgently needed. In this study, we present a computational framework, CHNmiRD, for identifying miRNA-disease associations by integrating multiple genomic and phenotype data, including protein-protein interaction data, gene ontology data, experimentally verified miRNA-target relationships, disease phenotype information and known miRNA-disease connections. The performance of CHNmiRD was evaluated by experimentally verified miRNA-disease associations, which achieved an area under the ROC curve (AUC) of 0.834 for 5-fold cross-validation. In particular, CHNmiRD displayed excellent performance for diseases without any known related miRNAs. The results of case studies for three human diseases (glioblastoma, myocardial infarction and type 1 diabetes) showed that all of the top 10 ranked miRNAs having no known associations with these three diseases in existing miRNA-disease databases were directly or indirectly confirmed by our latest literature mining. All these results demonstrated the reliability and efficiency of CHNmiRD, and it is anticipated that CHNmiRD will serve as a powerful bioinformatics method for mining novel disease-related miRNAs and providing a new perspective into molecular mechanisms underlying human diseases at the post-transcriptional level. CHNmiRD is freely available at http://www.bio-bigdata.com/CHNmiRD. PMID:26849207
Some aspects of integral transport method for deep penetration problem
Takahashi, H.
1982-01-01
An analytical expression of the integral transport method for an experimental hole in fission reactors has been developed. This analytical method might still be useful for designing a fusion reactor without requiring large amounts of computer time.
NASA Astrophysics Data System (ADS)
Choudhury, A. Ghose; Guha, Partha; Khanra, Barun
2009-10-01
The Darboux integrability method is particularly useful to determine first integrals of nonplanar autonomous systems of ordinary differential equations, whose associated vector fields are polynomials. In particular, we obtain first integrals for a variant of the generalized Raychaudhuri equation, which has appeared in string inspired modern cosmology.
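The defining property of a first integral, that it stays constant along every trajectory of the vector field, can be checked numerically. The sketch below uses the classical Lotka-Volterra system and its known first integral as a stand-in (not the Raychaudhuri variant studied in the paper), integrating with RK4 and verifying that the integral barely drifts.

```python
import math

alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0

def f(x, y):
    # Lotka-Volterra polynomial vector field
    return x * (alpha - beta * y), y * (delta * x - gamma)

def V(x, y):
    # known first integral: dV/dt = 0 along solutions
    return delta * x - gamma * math.log(x) + beta * y - alpha * math.log(y)

x, y, h = 2.0, 1.0, 1e-3
V0 = V(x, y)
for _ in range(20000):       # RK4 integration for 20 time units
    k1 = f(x, y)
    k2 = f(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = f(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = f(x + h * k3[0], y + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
print(abs(V(x, y) - V0))     # tiny drift, consistent with V being a first integral
```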
System and method for inventorying multiple remote objects
Carrender, Curtis L.; Gilbert, Ronald W.
2009-12-29
A system and method of inventorying multiple objects utilizing a multi-level or a chained radio frequency identification system. The system includes a master tag and a plurality of upper level tags and lower level tags associated with respective objects. The upper and lower level tags communicate with each other and the master tag so that reading of the master tag reveals the presence and absence of upper and lower level tags. In the chained RF system, the upper and lower level tags communicate locally with each other in a manner so that more remote tags that are out of range of some of the upper and lower level tags have their information relayed through adjacent tags to the master tag and thence to a controller.
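The chained-relay behavior described in the patent abstract, where remote tags out of the master's range are reported through adjacent tags, amounts to reachability in a tag-to-tag communication graph. The sketch below uses a hypothetical tag topology (all names invented) and a breadth-first relay from the master to reveal which tags are present and which are absent.

```python
from collections import deque

# Hypothetical tag network: each tag only hears its neighbours, and reads
# must be relayed hop-by-hop back to the master tag.
links = {
    "master": ["upper1", "upper2"],
    "upper1": ["master", "lower1a", "lower1b"],
    "upper2": ["master", "lower2a"],
    "lower1a": ["upper1"], "lower1b": ["upper1"], "lower2a": ["upper2"],
}
expected = {"upper1", "upper2", "lower1a", "lower1b", "lower2a"}

def inventory(links, responding):
    # BFS from the master: a tag is seen only if every relay on its path responds
    seen, queue = set(), deque(["master"])
    while queue:
        tag = queue.popleft()
        for nb in links.get(tag, []):
            if nb in responding and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return seen

present = inventory(links, expected - {"lower1b"})   # lower1b removed/out of range
print(sorted(expected - present))
```

Note the side effect the abstract implies: if an upper-level relay tag is missing, every lower-level tag behind it also disappears from the inventory.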
System and method for inventorying multiple remote objects
Carrender, Curtis L.; Gilbert, Ronald W.
2007-10-23
A system and method of inventorying multiple objects utilizing a multi-level or a chained radio frequency identification system. The system includes a master tag and a plurality of upper level tags and lower level tags associated with respective objects. The upper and lower level tags communicate with each other and the master tag so that reading of the master tag reveals the presence and absence of upper and lower level tags. In the chained RF system, the upper and lower level tags communicate locally with each other in a manner so that more remote tags that are out of range of some of the upper and lower level tags have their information relayed through adjacent tags to the master tag and thence to a controller.
Information integration in multiple cue judgment: a division of labor hypothesis.
Juslin, Peter; Karlsson, Linnea; Olsson, Henrik
2008-01-01
There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations. It is hypothesized that, whereas the additive, independent linear effect of each cue can be explicitly abstracted and integrated by a serial, additive judgment process, a variety of sophisticated task properties, like non-additive cue combination, non-linear relations, and inter-cue correlation, are carried implicitly by exemplar memory. Three experiments investigating the effect of additive versus non-additive cue combination verify the predicted shift in cognitive representations as a function of the underlying combination rule. PMID:17376423
Acoustic scattering by multiple elliptical cylinders using collocation multipole method
NASA Astrophysics Data System (ADS)
Lee, Wei-Ming
2012-05-01
This paper presents the collocation multipole method for the acoustic scattering induced by multiple elliptical cylinders subjected to an incident plane sound wave. To satisfy the Helmholtz equation in the elliptical coordinate system, the scattered acoustic field is formulated in terms of angular and radial Mathieu functions which also satisfy the radiation condition at infinity. The sound-soft or sound-hard boundary condition is satisfied by uniformly collocating points on the boundaries. For the sound-hard or Neumann conditions, the normal derivative of the acoustic pressure is determined by using the appropriate directional derivative without requiring the addition theorem of Mathieu functions. By truncating the multipole expansion, a finite linear algebraic system is derived and the scattered field can then be determined according to the given incident acoustic wave. Once the total field is calculated as the sum of the incident field and the scattered field, the near field acoustic pressure along the scatterers and the far field scattering pattern can be determined. For the acoustic scattering of one elliptical cylinder, the proposed results match well with the analytical solutions. The proposed scattered fields induced by two and three elliptical-cylindrical scatterers are critically compared with those provided by the boundary element method to validate the present method. Finally, the effects of the convexity of an elliptical scatterer, the separation between scatterers and the incident wave number and angle on the acoustic scattering are investigated.
Thermally integrated staged methanol reformer and method
Skala, Glenn William; Hart-Predmore, David James; Pettit, William Henry; Borup, Rodney Lynn
2001-01-01
A thermally integrated two-stage methanol reformer including a heat exchanger and first and second reactors colocated in a common housing in which a gaseous heat transfer medium circulates to carry heat from the heat exchanger into the reactors. The heat transfer medium comprises principally hydrogen, carbon dioxide, methanol vapor and water vapor formed in a first stage reforming reaction. A small portion of the circulating heat transfer medium is drawn off and reacted in a second stage reforming reaction which substantially completes the reaction of the methanol and water remaining in the drawn-off portion. Preferably, a PrOx reactor will be included in the housing upstream of the heat exchanger to supplement the heat provided by the heat exchanger.
NASA Astrophysics Data System (ADS)
McGillivary, P. A.; Borges de Sousa, J.; Martins, R.; Rajan, K.
2012-12-01
Autonomous platforms are increasingly used as components of Integrated Ocean Observing Systems and oceanographic research cruises. Systems deployed can include gliders or propeller-driven autonomous underwater vessels (AUVs), autonomous surface vessels (ASVs), and unmanned aircraft systems (UAS). Prior field campaigns have demonstrated successful communication, sensor data fusion and visualization for studies using gliders and AUVs. However, additional requirements exist for incorporating ASVs and UAS into ship operations. Optimally integrating these systems into research vessel data management and operational planning involves addressing three key issues: real-time field data availability, platform coordination, and data archiving for later analysis. A fleet of AUVs, ASVs and UAS deployed from a research vessel is best operated as a system integrated with the ship, provided communications among them can be sustained. For this purpose, Disruption-Tolerant Networking (DTN) software protocols for operation in communication-challenged environments help ensure reliable high-bandwidth communications. Additionally, system components need considerable onboard autonomy, namely adaptive sampling capabilities based on analysis of their own onboard sensor data streams. We discuss the Oceanographic Decision Support System (ODSS) software currently used for situational awareness and planning onshore; in the near future, event detection and response will be coordinated among multiple vehicles. Results from recent field studies from oceanographic research vessels using AUVs, ASVs and UAS, including the Rapid Environmental Picture (REP-12) cruise, are presented, describing methods and results for the use of multi-vehicle communication and deliberative control networks, adaptive sampling with single and multiple platforms, issues relating to data management and archiving, and finally the challenges that remain in addressing these technological issues.
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2016-04-01
To date, a large number of competing computer models have been developed to understand hydrological processes and to simulate and predict the streamflow dynamics of rivers. This is primarily the result of the lack of a unified theory in catchment hydrology due to insufficient process understanding and uncertainties related to model development and application. Therefore, the goal of this study is to analyze the uncertainty structure of a process-based hydrological catchment model employing a multiple hypotheses approach. The study focuses on three major problems that have received only little attention in previous investigations. First, estimating the impact of model structural uncertainty by employing several alternative representations for each simulated process. Second, exploring the influence of landscape discretization and parameterization derived from multiple datasets and user decisions. Third, employing several numerical solvers for the integration of the governing ordinary differential equations to study their effect on simulation results. The generated ensemble of model hypotheses is then analyzed and the three sources of uncertainty are compared against each other. To ensure consistency and comparability, all model structures and numerical solvers are implemented within a single simulation environment. First results suggest that the selection of a sophisticated numerical solver for the differential equations positively affects simulation outcomes. However, some simple and easy-to-implement explicit methods already perform surprisingly well and need less computational effort than more advanced but time-consuming implicit techniques. There is general evidence that ambiguous and subjective user decisions form a major source of uncertainty and can greatly influence model development and application at all stages.
Monitoring gray wolf populations using multiple survey methods
Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.
2013-01-01
The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and the distribution and estimates of the number of wolf packs and individual wolves reported by the Idaho Department of Fish and Game and the Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates, and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and of individual wolf abundance with reasonable measures of precision. Assumptions of the approach, including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any one survey method.
Hollow fiber integrated microfluidic platforms for in vitro Co-culture of multiple cell types.
Huang, Jen-Huang; Harris, Jennifer F; Nath, Pulak; Iyer, Rashi
2016-10-01
This study demonstrates a rapid prototyping approach for fabricating and integrating porous hollow fibers (HFs) into microfluidic devices. Integration of HFs can enhance mass transfer and recapitulate tubular shapes for tissue-engineered environments. We demonstrate the integration of single or multiple HFs, which gives users the flexibility to control the total surface area for tissue development. We also present three microfluidic designs to enable different co-culture conditions, such as the ability to co-culture multiple cell types simultaneously on flat and tubular surfaces, or inside the lumen of multiple HFs. Additionally, we introduce a pressurized cell seeding process that allows cells to adhere uniformly to the inner surface of HFs without loss of viability. Co-cultures of lung epithelial cells and microvascular endothelial cells were demonstrated on the different platforms for at least five days. Overall, these platforms provide new opportunities for co-culturing multiple cell types in a single device to reconstruct the native tissue micro-environment for biomedical and tissue engineering research. PMID:27613401
Method for distinguishing multiple targets using time-reversal acoustics
Berryman, James G.
2004-06-29
A method for distinguishing multiple targets using time-reversal acoustics. Time-reversal acoustics uses an iterative process to determine the optimum signal for locating a strongly reflecting target in a cluttered environment. An acoustic array sends a signal into a medium and then receives the returned/reflected signal. This returned/reflected signal is time-reversed and sent back into the medium again, and again, until the signal being sent and received is no longer changing. At that point, the array has isolated the largest eigenvalue/eigenvector combination and has effectively determined the location of a single target in the medium (the one that is most strongly reflecting). After the largest eigenvalue/eigenvector combination has been determined, to locate other targets the method again sends back the time-reversed signals, but with half of them also reversed in sign. There are various possibilities for choosing which half to sign-reverse; the most obvious choices are to reverse every other element in a linear array, or a checkerboard pattern in 2D. Then a new send/receive, time-reverse/resend iteration can proceed. Often, the first iteration in this sequence will be close to the desired signal from a second target. In some cases, orthogonalization procedures must be implemented to ensure the returned signals are in fact orthogonal to the first eigenvector found.
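For a static medium, the iterate-until-stable loop described above is mathematically equivalent to power iteration on T^H T, where T is the array transfer matrix, and the sign-flip restart with per-step orthogonalization is a deflation toward the next eigenvector. The sketch below illustrates this with a small synthetic transfer matrix; the array size, reflectivities and propagation vectors are invented for illustration and are not part of the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transfer matrix for an 8-element array and two point
# targets of unequal reflectivity (an illustrative model, not the
# patent's physics): T = sum_k r_k * g_k g_k^T, with g_k the
# propagation vector between the array and target k.
g1, g2 = rng.standard_normal((2, 8))
T = 5.0 * np.outer(g1, g1) + 0.5 * np.outer(g2, g2)

def time_reversal_iteration(T, s, n=500):
    """Send -> receive -> time-reverse -> resend; for a static medium
    each cycle applies T^H T, i.e. power iteration toward the dominant
    eigenvector (generically the strongest reflector)."""
    for _ in range(n):
        s = T.conj().T @ (T @ s)
        s /= np.linalg.norm(s)
    return s

s1 = time_reversal_iteration(T, rng.standard_normal(8))

# Second target: restart from a half-sign-reversed copy of s1 and
# re-orthogonalize against s1 each cycle (the orthogonalization step
# mentioned in the text), which converges to the next eigenvector.
s2 = s1 * np.where(np.arange(8) % 2 == 0, 1.0, -1.0)
for _ in range(500):
    s2 = T.conj().T @ (T @ s2)
    s2 -= (s1 @ s2) * s1
    s2 /= np.linalg.norm(s2)
```

Here `s1` converges to the dominant eigenvector of T^H T and `s2`, kept orthogonal to `s1`, to the next one, mirroring the first- and second-target signals of the method.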
Criteria for quantitative and qualitative data integration: mixed-methods research methodology.
Lee, Seonah; Smith, Carrol A M
2012-05-01
Many studies have emphasized the need for and importance of a mixed-methods approach in the evaluation of clinical information systems. However, those studies offered no criteria to guide the integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach espouses; thus, we require criteria that provide the right direction for integrating quantitative and qualitative data. The first author used a set of criteria organized from a literature search for the integration of multiple data sets in mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed-methods study, integration of quantitative and qualitative data was achieved in a systematic manner, and it helped us obtain a better-organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.
Integrated method for chaotic time series analysis
Hively, L.M.; Ng, E.G.
1998-09-29
Methods and apparatus are disclosed for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated. 8 figs.
Integrated method for chaotic time series analysis
Hively, Lee M.; Ng, Esmond G.
1998-01-01
Methods and apparatus for automatically detecting differences between similar but different states in a nonlinear process by monitoring nonlinear data. Steps include: acquiring the data; digitizing the data; obtaining nonlinear measures of the data via chaotic time series analysis; obtaining time serial trends in the nonlinear measures; and determining by comparison whether differences between similar but different states are indicated.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (failure probabilities of 10^-4 or lower). The coming years shall address methodologies to realistically estimate the impact of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are: A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; B) quantify the impact of these methods on software reliability; C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a given confidence level; and D) quantify and justify the reliability estimate for systems developed using various methods.
NASA Astrophysics Data System (ADS)
Davies, J.; Beven, K.; Rodhe, A.; Nyberg, L.; Bishop, K.
2013-08-01
There is still a need for catchment hydrological and transport models that properly integrate the effects of preferential flows while accounting for differences in velocities and celerities. A modeling methodology is presented here which uses particle tracking methods to simulate both flow and transport in multiple pathways in a single consistent solution. Water fluxes and storages are determined by the volume and density of particles and transport is attained by labeling the particles with information that may be tracked throughout the lifetime of that particle in the catchment. The methodology allows representation of preferential flows through the use of particle velocity distributions, and mixing between pathways can be achieved with pathway transition probabilities. A transferable 3-D modeling methodology is presented for the first time and applied to a unique step-shift isotope experiment that was carried out at the 0.63 ha G1 catchment in Gårdsjön, Sweden. This application highlights the importance of combining flow and transport in hydrological representations, and the importance of pathway velocity distributions and interactions in obtaining a satisfactory representation of the observations.
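The velocity/celerity distinction above can be made concrete with a minimal 1-D sketch: water is a cloud of labeled particles, per-particle velocities are drawn from a two-member mixture standing in for preferential and matrix pathways, and a step shift in the inflow label mimics the isotope experiment. All parameters here are illustrative assumptions, not values fitted to the G1 catchment.

```python
import numpy as np

# Water as labeled particles on a 1-D hillslope; a step shift in the
# inflow label mimics the isotope experiment (illustrative parameters).
rng = np.random.default_rng(1)
n = 50_000
length = 10.0                        # hillslope length (m)
x = rng.uniform(0.0, length, n)      # initial particle positions
label = np.zeros(n)                  # tracer label (0 = pre-shift water)

fast = rng.random(n) < 0.2           # 20% preferential-flow particles
v = np.where(fast, 2.0, 0.1)         # pathway velocities (m/h)

dt, t_end = 0.5, 48.0
out_label = []                       # labeled fraction in the outflow
for _ in range(int(t_end / dt)):
    x += v * dt
    exited = x >= length
    out_label.append(label[exited].mean() if exited.any() else np.nan)
    x[exited] = 0.0                  # recycle exited particles as inflow...
    label[exited] = 1.0              # ...which carries the shifted label

# Outflow volume (velocity) responds immediately, but the labeled
# fraction (tracer celerity) lags: early outflow is old, unlabeled water,
# and the preferential pathway delivers the new label much sooner than
# the matrix pathway.
```

By the end of the 48 h window the fast pathway is fully labeled while the slow pathway still discharges pre-shift water, so the outflow label sits well below 1, the kind of behavior a single-velocity model cannot reproduce.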
An ontology for the integration of multiple genetic disorder data sources.
Gong, P; Qu, W; Feng, D D
2005-01-01
As a huge amount of genetic disorder information is available on the Internet, there is an increasing requirement to integrate these data sources. The integration of genetic disorder data sources provides an important tool in the research of life science, therapeutics, and genetic disease prevention and inhibition. The key challenge of such integration is how to deal with the semantic heterogeneity of multiple information resources. The paper proposes an ontology-based approach to describe and extract the semantics of genetic disorder terminologies and provides a mechanism for sharing and reusing genetic disorder knowledge. According to this unified meta model, heterogeneous genetic disorder data sources can be integrated, and a semantic middleware can reason over the genetic disorder knowledge base to answer the various queries of users and applications.
Shape integral method for magnetospheric shapes. [boundary layer calculations
NASA Technical Reports Server (NTRS)
Michel, F. C.
1979-01-01
A method is developed for calculating the shape of any magnetopause to arbitrarily high precision. The method uses an integral equation which is evaluated for a trial shape. The resulting values of the integral equation as a function of auxiliary variables indicate how close one is to the desired solution. A variational method can then be used to improve the trial shape. Some potential applications are briefly mentioned.
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
Athans, M.; Baram, Y.; Castanon, D.; Dunn, K. P.; Green, C. S.; Lee, W. H.; Sandell, N. R., Jr.; Willsky, A. S.
1979-01-01
The stochastic adaptive control of the NASA F-8C digital-fly-by-wire aircraft using the multiple model adaptive control (MMAC) method is presented. The selection of the performance criteria for the lateral and the longitudinal dynamics, the design of the Kalman filters for different operating conditions, the identification algorithm associated with the MMAC method, the control system design, and simulation results obtained using the real time simulator of the F-8 aircraft at the NASA Langley Research Center are discussed.
Coupling equivalent plate and finite element formulations in multiple-method structural analyses
NASA Technical Reports Server (NTRS)
Giles, Gary L.; Norwood, Keith
1994-01-01
A coupled multiple-method analysis procedure for use late in conceptual design or early in preliminary design of aircraft structures is described. Using this method, aircraft wing structures are represented with equivalent plate models, and structural details such as engine/pylon structure, landing gear, or a 'stick' model of a fuselage are represented with beam finite element models. These two analysis methods are implemented in an integrated multiple-method formulation that involves the assembly and solution of a combined set of linear equations. The corresponding solution vector contains coefficients of the polynomials that describe the deflection of the wing and also the components of translations and rotations at the joints of the beam members. Two alternative approaches for coupling the methods are investigated; one using transition finite elements and the other using Lagrange multipliers. The coupled formulation is applied to the static analysis and vibration analysis of a conceptual design model of a fighter aircraft. The results from the coupled method are compared with corresponding results from an analysis in which the entire model is composed of finite elements.
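Of the two coupling approaches, the Lagrange-multiplier variant leads to a bordered ("saddle-point") system of linear equations: the stiffness matrices of the substructures on the diagonal, the compatibility constraint in the border. A minimal sketch with two toy substructures of two DOFs each follows; the matrices, loads and constraint are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Two toy substructures (think: a plate model and a beam model, each
# reduced to two scalar DOFs), with illustrative stiffnesses and loads.
K1 = np.array([[2.0, -1.0], [-1.0, 2.0]])   # substructure 1 stiffness
K2 = np.array([[3.0, -1.5], [-1.5, 3.0]])   # substructure 2 stiffness
f1 = np.array([0.0, 1.0])
f2 = np.array([0.5, 0.0])

# Compatibility constraint: DOF 1 of substructure 1 equals DOF 0 of
# substructure 2, written as C @ u = 0 and enforced with a multiplier.
C = np.array([[0.0, 1.0, -1.0, 0.0]])

K = np.block([[K1, np.zeros((2, 2))],
              [np.zeros((2, 2)), K2]])
f = np.concatenate([f1, f2])

# Assemble and solve the combined (saddle-point) system:
#   [K  C^T] [u     ]   [f]
#   [C   0 ] [lambda] = [0]
A = np.block([[K, C.T], [C, np.zeros((1, 1))]])
b = np.concatenate([f, [0.0]])
sol = np.linalg.solve(A, b)
u, lam = sol[:4], sol[4]
```

The multiplier `lam` is the interface force tying the two models together; the transition-element alternative would instead modify `K` directly without enlarging the system.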
Damping identification in frequency domain using integral method
NASA Astrophysics Data System (ADS)
Guo, Zhiwei; Sheng, Meiping; Ma, Jiangang; Zhang, Wulin
2015-03-01
A new method for damping identification of linear systems in the frequency domain is presented, based on integrating the frequency response function (FRF). The FRF curve is first transformed into another type of frequency-related curve by changing the representations of the horizontal and vertical axes. For the newly constructed frequency-related curve, integration is performed and the area under the new curve is used to determine the damping. Three different integral-based methods are proposed in this paper, called the FDI-1, FDI-2 and FDI-3 methods, respectively. For a single degree of freedom (Sdof) system, the relation between integrated area and loss factor is derived theoretically for each method. Numerical simulations and experimental results show that the proposed integral methods have high precision, strong noise resistance and are very stable in repeated measurements. Among the three integral methods, the FDI-3 method is the most recommended because of its higher accuracy and simpler algorithm. The new methods are limited to linear systems in which modes are well separated; for systems with closely spaced modes, a mode decomposition process should be conducted first.
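The abstract does not reproduce the FDI-1/FDI-3 area-to-loss-factor formulas themselves, but the underlying principle, that the area under a transformed FRF curve determines the damping, can be illustrated with the classic closed-form result for a viscously damped Sdof system: the integral of |H(w)|^2 over [0, inf) equals pi/(2ck). This sketch is an illustration of the area-based principle, not the paper's specific methods.

```python
import numpy as np

# Sdof receptance H(w) = 1 / (k - m w^2 + i c w).
m, k = 1.0, 100.0            # mass and stiffness (wn = 10 rad/s)
c_true = 0.4                 # viscous damping to be identified
w = np.linspace(1e-3, 400.0, 400_000)   # wide band approximating [0, inf)
H = 1.0 / (k - m * w**2 + 1j * c_true * w)

# Area under the squared-magnitude FRF; by the closed form
# integral_0^inf |H|^2 dw = pi / (2 c k), the damping follows as
# c = pi / (2 k area).
y = np.abs(H)**2
area = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(w))   # trapezoid rule
c_est = np.pi / (2.0 * k * area)
```

The area integral recovers `c_true` to well under 1%, and because it uses the whole curve rather than a few points near the peak, it shares the noise robustness the paper reports for its FDI methods.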
Exponential Methods for the Time Integration of Schroedinger Equation
Cano, B.; Gonzalez-Pachon, A.
2010-09-30
We consider exponential methods of second order in time for integrating the cubic nonlinear Schroedinger equation. We are interested in taking advantage of the special structure of this equation, and therefore we examine the symmetry, symplecticity and invariant-approximation properties of the proposed methods, which allow integration to long times with reasonable accuracy. Computational efficiency is also our aim, so we perform numerical computations to compare the methods considered, and conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.
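As a sketch of the class of schemes discussed, here is a second-order explicit Lawson (integrating-factor midpoint) step for the cubic NLS i u_t + u_xx + 2|u|^2 u = 0, pseudospectral in space, with projection onto the conserved L^2 norm after each step. The grid, step size and soliton test are our own illustrative choices, not the authors' setup.

```python
import numpy as np

# Cubic NLS  i u_t + u_xx + 2|u|^2 u = 0, which has the exact soliton
# u(x, t) = sech(x) exp(i t). Fourier pseudospectral in space.
N, L = 256, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
Lop = -1j * k**2                     # linear part in Fourier space

def nonlinear(uh):
    u = np.fft.ifft(uh)
    return 1j * np.fft.fft(2 * np.abs(u)**2 * u)

u = 1 / np.cosh(x)                   # soliton initial condition
norm0 = np.linalg.norm(u)
uh = np.fft.fft(u)
dt, nsteps = 1e-3, 1000
E = np.exp(Lop * dt)                 # full-step integrating factor
Eh = np.exp(Lop * dt / 2)            # half-step integrating factor

for _ in range(nsteps):
    # Lawson midpoint: explicit RK2 in the integrating-factor variable
    k1 = nonlinear(uh)
    mid = Eh * (uh + 0.5 * dt * k1)
    uh = E * uh + dt * Eh * nonlinear(mid)
    # projection onto the conserved L2 norm of the solution
    u = np.fft.ifft(uh)
    u *= norm0 / np.linalg.norm(u)
    uh = np.fft.fft(u)

# At t = nsteps * dt = 1 the exact solution is sech(x) * exp(1j * t),
# so the amplitude profile should still match sech(x).
err = np.max(np.abs(np.abs(np.fft.ifft(uh)) - 1 / np.cosh(x)))
```

The linear, stiff part is integrated exactly by the exponential factors, the nonlinear part explicitly, and the projection keeps the norm invariant, the combination the abstract singles out as efficient.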
A multilevel finite element method for Fredholm integral eigenvalue problems
NASA Astrophysics Data System (ADS)
Xie, Hehu; Zhou, Tao
2015-12-01
In this work, we propose a multigrid finite element (MFE) method for solving Fredholm integral eigenvalue problems. The main motivation for such studies is to compute the Karhunen-Loève expansions of random fields, which play an important role in applications of uncertainty quantification. In our MFE framework, solving the eigenvalue problem is converted into a series of integral iterations together with an eigenvalue solve on the coarsest mesh. Any existing efficient integration scheme can then be used for the associated integration process. Error estimates are provided, and the computational complexity is analyzed. Notably, the total computational work of our method is comparable with a single integration step on the finest mesh. Several numerical experiments are presented to validate the efficiency of the proposed numerical method.
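For readers unfamiliar with the target problem, a plain single-level Nyström (quadrature) discretization shows what a Fredholm integral eigenvalue problem looks like, here for the Brownian-motion covariance kernel K(x, y) = min(x, y), whose Karhunen-Loève eigenvalues are known in closed form. This one-grid sketch is for orientation only and is not the paper's multigrid algorithm.

```python
import numpy as np

# Discretize  integral_0^1 K(x, y) phi(y) dy = lambda * phi(x)
# with the midpoint rule; for K(x, y) = min(x, y) the exact
# eigenvalues are lambda_n = 1 / ((n - 1/2)^2 * pi^2).
n = 400
x = (np.arange(n) + 0.5) / n            # midpoint quadrature nodes
w = 1.0 / n                             # uniform quadrature weights
K = np.minimum.outer(x, x)

# Eigenvalues of the quadrature-weighted kernel matrix
vals = np.sort(np.linalg.eigvalsh(K * w))[::-1]
exact = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
```

The leading discrete eigenvalues match `exact` to within a fraction of a percent; the point of a multigrid scheme like the paper's is to reach comparable accuracy while confining the dense eigenvalue solve to the coarsest mesh.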
NASA Astrophysics Data System (ADS)
Rao, Gottipaty N.; Karpf, Andreas
2011-05-01
We report on the development of a new sensor for NO2 with ultrahigh sensitivity of detection. This has been accomplished by combining off-axis integrated cavity output spectroscopy (OA-ICOS) (which can provide large path lengths, of the order of several km, in a small-volume cell) with multiple line integrated absorption spectroscopy (MLIAS) (where we integrate the absorption spectra over a large number of rotational-vibrational transitions of the molecular species to further improve the sensitivity). Employing an external cavity tunable quantum cascade laser operating in the 1601 - 1670 cm-1 range and a high-finesse optical cavity, the absorption spectra of NO2 over 100 transitions in the R-band have been recorded. From the observed linear relationship between integrated absorption and concentration of NO2, we report an effective detection sensitivity of 10 ppt for NO2. To the best of our knowledge, this is among the most sensitive levels of detection of NO2 to date. A sensitive sensor for the detection of NO2 will be helpful for monitoring ambient air quality and combustion emissions from automobiles, power plants and aircraft, and for the detection of nitrate-based explosives (commonly used in improvised explosive devices (IEDs)). Additionally, such a sensor would be valuable for the study of the complex chemical reactions that occur in the atmosphere, resulting in the formation of photochemical smog, tropospheric ozone and acid rain.
Methods for the joint meta-analysis of multiple tests.
Trikalinos, Thomas A; Hoaglin, David C; Small, Kevin M; Terrin, Norma; Schmid, Christopher H
2014-12-01
Existing methods for meta-analysis of diagnostic test accuracy focus primarily on a single index test. We propose models for the joint meta-analysis of studies comparing multiple index tests on the same participants in paired designs. These models respect the grouping of data by studies, account for the within-study correlation between the tests' true-positive rates (TPRs) and between their false-positive rates (FPRs) (induced because tests are applied to the same participants), and allow for between-study correlations between TPRs and FPRs (such as those induced by threshold effects). We estimate models in the Bayesian setting. We demonstrate using a meta-analysis of screening for Down syndrome with two tests: shortened humerus (arm bone), and shortened femur (thigh bone). Separate and joint meta-analyses yielded similar TPR and FPR estimates. For example, the summary TPR for a shortened humerus was 35.3% (95% credible interval (CrI): 26.9, 41.8%) versus 37.9% (27.7, 50.3%) with joint versus separate meta-analysis. Joint meta-analysis is more efficient when calculating comparative accuracy: the difference in the summary TPRs was 0.0% (-8.9, 9.5%; TPR higher for shortened humerus) with joint versus 2.6% (-14.7, 19.8%) with separate meta-analyses. Simulation and empirical analyses are needed to refine the role of the proposed methodology.
A Rationale for Mixed Methods (Integrative) Research Programmes in Education
ERIC Educational Resources Information Center
Niaz, Mansoor
2008-01-01
Recent research shows that research programmes (quantitative, qualitative and mixed) in education are not displaced (as suggested by Kuhn) but rather lead to integration. The objective of this study is to present a rationale for mixed methods (integrative) research programs based on contemporary philosophy of science (Lakatos, Giere, Cartwright,…
Scientific concepts and applications of integrated discrete multiple organ co-culture technology
Gayathri, Loganathan; Dhanasekaran, Dharumadurai; Akbarsha, Mohammad A.
2015-01-01
Over several decades, animals have been used as models to investigate human-specific drug toxicity, but the outcomes are not always reliably extrapolated to humans in vivo. An appropriate in vitro human-based experimental system that includes in vivo parameters is required for the evaluation of multiple organ interaction, multiple organ/organ-specific toxicity, and metabolism of xenobiotic compounds, so as to avoid the use of animals for toxicity testing. One such versatile in vitro technology in which human primary cells can be used is integrated discrete multiple organ co-culture (IdMOC). The IdMOC system adopts a wells-within-a-well concept that facilitates co-culture of cells from different organs in a discrete manner, separately in their respective media in the smaller inner wells, which are then interconnected by an overlay of a universal medium in the large containing well. This novel in vitro approach mimics the in vivo situation to a great extent, employing cells from multiple organs that are physically separated but interconnected by a medium that mimics the systemic circulation and provides for multiple organ interaction. Applications of IdMOC include assessment of multiple organ toxicity, drug distribution, organ-specific toxicity, screening of anticancer drugs, metabolic cytotoxicity, etc. PMID:25969651
A rainfall design method for spatial flood risk assessment: considering multiple flood sources
NASA Astrophysics Data System (ADS)
Jiang, X.; Tatano, H.
2015-08-01
Information about the spatial distribution of flood risk is important for integrated urban flood risk management. Focusing on urban areas, spatial flood risk assessment must reflect all risk information derived from multiple flood sources: rivers, drainage, coastal flooding etc. that may affect the area. However, conventional flood risk assessment deals with each flood source independently, which leads to an underestimation of flood risk in the floodplain. Even in floodplains that have no risk from coastal flooding, flooding from river channels and inundation caused by insufficient drainage capacity should be considered simultaneously. For integrated flood risk management, it is necessary to establish a methodology to estimate flood risk distribution across a floodplain. In this paper, a rainfall design method for spatial flood risk assessment, which considers the joint effects of multiple flood sources, is proposed. The concept of critical rainfall duration determined by the concentration time of flooding is introduced to connect response characteristics of different flood sources with rainfall. A copula method is then adopted to capture the correlation of rainfall amount with different critical rainfall durations. Rainfall events are designed taking advantage of the copula structure of correlation and marginal distribution of rainfall amounts within different critical rainfall durations. A case study in the Otsu River Basin, Osaka prefecture, Japan was conducted to demonstrate this methodology.
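The copula step described above can be sketched in a few lines: sample correlated rainfall totals for two hypothetical critical durations (say, a short one governing drainage inundation and a long one governing river flooding) via a Gaussian copula with lognormal margins. The durations, marginal parameters and correlation below are illustrative assumptions, not values from the Otsu River study.

```python
import numpy as np
from scipy import stats

# Gaussian copula over two critical rainfall durations, with lognormal
# marginal distributions of rainfall amount (illustrative parameters).
rng = np.random.default_rng(42)
rho = 0.7                                  # copula correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
u = stats.norm.cdf(z)                      # correlated uniform marginals

rain_1h = stats.lognorm.ppf(u[:, 0], 0.6, scale=20.0)    # mm in 1 h
rain_24h = stats.lognorm.ppf(u[:, 1], 0.5, scale=120.0)  # mm in 24 h

# Joint exceedance of both marginal 99th percentiles under the fitted
# dependence; correlation makes this exceed the independent-case value
# of 0.01**2 = 1e-4, i.e. ignoring dependence underestimates the risk
# of the two flood sources acting together.
q1 = stats.lognorm.ppf(0.99, 0.6, scale=20.0)
q24 = stats.lognorm.ppf(0.99, 0.5, scale=120.0)
p_joint = np.mean((rain_1h > q1) & (rain_24h > q24))
```

Designed rainfall events drawn from this joint structure then drive the response models of the individual flood sources, which is the role the copula plays in the proposed methodology.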
Choi, Hyunmo; Oh, Eunkyoo
2016-01-01
As sessile organisms, plants must be able to adapt to the environment. Plants respond to the environment by adjusting their growth and development, which is mediated by sophisticated signaling networks that integrate multiple environmental and endogenous signals. Recently, increasing evidence has shown that a bHLH transcription factor PIF4 plays a major role in the multiple signal integration for plant growth regulation. PIF4 is a positive regulator in cell elongation and its activity is regulated by various environmental signals, including light and temperature, and hormonal signals, including auxin, gibberellic acid and brassinosteroid, both transcriptionally and post-translationally. Moreover, recent studies have shown that the circadian clock and metabolic status regulate endogenous PIF4 level. The PIF4 transcription factor cooperatively regulates the target genes involved in cell elongation with hormone-regulated transcription factors. Therefore, PIF4 is a key integrator of multiple signaling pathways, which optimizes growth in the environment. This review will discuss our current understanding of the PIF4-mediated signaling networks that control plant growth. PMID:27432188
The eye in hand: predicting others' behavior by integrating multiple sources of information
Pezzulo, Giovanni; Costantini, Marcello
2015-01-01
The ability to predict the outcome of other beings' actions confers significant adaptive advantages. Experiments have shown that human action observation can draw on multiple information sources, but it is currently unknown how they are integrated and how conflicts between them are resolved. To address this issue, we designed an action observation paradigm requiring the integration of multiple, potentially conflicting sources of evidence about the action target: the actor's gaze direction, hand preshape, and arm trajectory, and their availability and relative uncertainty in time. In two experiments, we analyzed participants' action prediction ability by using eye tracking and behavioral measures. The results show that the information provided by the actor's gaze affected participants' explicit predictions. However, results also show that gaze information was disregarded as soon as information on the actor's hand preshape was available, and this latter information source had widespread effects on participants' prediction ability. Furthermore, as the action unfolded in time, participants relied increasingly on the arm movement source, showing sensitivity to its increasing informativeness. Therefore, the results suggest that the brain forms a robust estimate of the actor's motor intention by integrating multiple sources of information. However, when informative motor cues such as a preshaped hand with a given grip are available and might help in selecting action targets, people tend to capitalize on such motor cues, thus becoming more accurate and faster in inferring the object to be manipulated by the other's hand. PMID:25568158
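The uncertainty-weighted cue integration suggested by these results is commonly modeled as precision-weighted averaging of Gaussian cues: each cue contributes in proportion to its reliability (inverse variance). A minimal sketch (all cue estimates and variances below are hypothetical, not the study's data):

```python
# Minimal sketch of precision-weighted cue integration (assumed Gaussian cues).
def integrate_cues(cues):
    """cues: list of (estimate, variance) pairs; returns (fused estimate, fused variance)."""
    precision = sum(1.0 / var for _, var in cues)
    estimate = sum(mu / var for mu, var in cues) / precision
    return estimate, 1.0 / precision

# Early in the action: gaze is informative, hand/arm cues are vague (high variance).
early = integrate_cues([(0.2, 0.1), (0.5, 2.0), (0.5, 2.0)])
# Later: hand preshape and arm trajectory become precise and dominate the estimate.
late = integrate_cues([(0.2, 0.1), (0.8, 0.05), (0.8, 0.2)])
print(early, late)
```

Early on, the fused estimate tracks the low-variance gaze cue; once the motor cues become precise, the fused estimate shifts toward them, mirroring the pattern reported above.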
NASA Astrophysics Data System (ADS)
Alqurashi, Muwaffaq; Wang, Jinling
2015-03-01
For positioning, navigation and timing (PNT) purposes, GNSS or GNSS/INS integration is used to provide real-time solutions. However, sensor failures or faulty measurements, caused by malfunctioning sensor components or harsh operating environments, may degrade the estimation of the PNT parameters. The inability to immediately detect faulty measurements or sensor component failures reduces the overall performance of the system. Real-time detection and identification of faulty measurements is therefore required to make the system more accurate and reliable for applications that need real-time solutions, such as real-time mapping for safety or emergency purposes. Consequently, it is necessary to implement an online fault detection and isolation (FDI) algorithm, a statistics-based approach to detect and identify multiple faults. However, further investigation of the performance of FDI under multiple-fault scenarios is still required. In this paper, the performance of the FDI method under multiple-fault scenarios is evaluated, e.g., for two, three and four faults in the GNSS and GNSS/INS measurements, under different conditions of visible satellites and satellite geometry. In addition, reliability (e.g., MDB) and separability (correlation coefficients between fault detection statistics) measures are investigated to quantify the capability of the FDI method. A performance analysis of the FDI method is conducted under geometric constraints to show its importance in terms of fault detectability and separability for robust positioning and navigation in real-time applications.
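A common statistics-based FDI building block of this kind is the standardized-residual (w-type) test on a least-squares measurement model: each residual is scaled by its redundancy, and an outsized statistic flags a faulty measurement. The sketch below uses a random linear model rather than real GNSS geometry, and the fault size and critical value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative linear measurement model l = A x + e (not a real GNSS geometry).
m, n = 12, 4
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
sigma = 0.1
l = A @ x_true + rng.normal(scale=sigma, size=m)
l[5] += 1.0  # inject a fault (10-sigma bias) into measurement 5

# Least-squares solution and residuals.
Q = np.linalg.inv(A.T @ A)
v = l - A @ (Q @ A.T @ l)

# Cofactor matrix of the residuals; its diagonal holds the redundancy numbers.
Qvv = np.eye(m) - A @ Q @ A.T

# w-test statistic: standardized residual per measurement.
w = v / (sigma * np.sqrt(np.diag(Qvv)))
suspect = int(np.argmax(np.abs(w)))
print(suspect, np.abs(w[suspect]) > 3.29)  # 3.29: two-sided 0.1% critical value
```

The separability issue studied in the paper shows up here as correlation between the w statistics of different measurements: when two statistics are highly correlated, identifying which measurement is faulty becomes ambiguous.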
Improving the Accuracy of the Boundary Integral Method Based on the Helmholtz Integral
NASA Technical Reports Server (NTRS)
Koopmann, G. H.; Brod, K.
1985-01-01
Several recent papers in the literature have been based on various forms of the Helmholtz integral to compute the radiation fields of vibrating bodies. The surface integral form is given. The symbols P, R′, ω, ρ, G, R, V, and S′ denote acoustic pressure, source coordinate, angular frequency, fluid density, Green's function, field coordinate, surface velocity, and body surface, respectively. A discretized form of the surface integral is also given. Solutions to the surface integral are complicated by the singularity of the Green's function at R = R′ and by the uniqueness problem at interior eigenfrequencies of the enclosed space. The use of the interior integral circumvents the singularity problem, since the field points are chosen in the interior space of the vibrating body where a zero-pressure condition exists. The interior integral form is given. The method to improve the accuracy is detailed. Examples of the method are presented for a variety of radiators.
High order integral equation method for diffraction gratings.
Lu, Wangtao; Lu, Ya Yan
2012-05-01
Conventional integral equation methods for diffraction gratings require lattice sum techniques to evaluate quasi-periodic Green's functions. The boundary integral equation Neumann-to-Dirichlet map (BIE-NtD) method in Wu and Lu [J. Opt. Soc. Am. A 26, 2444 (2009)], [J. Opt. Soc. Am. A 28, 1191 (2011)] is a recently developed integral equation method that avoids the quasi-periodic Green's functions and is relatively easy to implement. In this paper, we present a number of improvements for this method, including a revised formulation that is more stable numerically, and more accurate methods for computing tangential derivatives along material interfaces and for matching boundary conditions with the homogeneous top and bottom regions. Numerical examples indicate that the improved BIE-NtD map method achieves a high order of accuracy for in-plane and conical diffractions of dielectric gratings.
NASA Astrophysics Data System (ADS)
Cruse, Thomas A.; Novati, Giorgio
The hypersingular Somigliana identity for the stress tensor is used as the basis for a traction boundary integral equation (BIE) suitable for numerical application to nonplanar cracks and to multiple cracks. The variety of derivations of hypersingular traction BIE formulations is reviewed and extended for this problem class. Numerical implementation is accomplished for piecewise-flat models of curved cracks, using local coordinate system integrations. A nonconforming, triangular boundary element implementation of the integral equations is given. Demonstration problems include several three-dimensional approximations to plane-strain fracture mechanics problems, for which exact or highly accurate numerical solutions exist. In all cases, the use of a piecewise-flat traction BIE implementation is shown to give excellent results.
NASA Astrophysics Data System (ADS)
Xie, Guizhong; Zhang, Dehai; Zhang, Jianming; Meng, Fannian; Du, Wenliao; Wen, Xiaoyu
2016-07-01
As a widely used numerical method, the boundary element method (BEM) is efficient for computer aided engineering (CAE). However, boundary integrals with near singularity need to be calculated accurately and efficiently to implement BEM successfully for CAE analysis of thin bodies. In this paper, the distance in the denominator of the fundamental solution is first redesigned in an equivalent form using an approximate expansion, and the original sinh method is revised into a new form considering the minimum distance and the approximate expansion. Second, the acquisition of the projection point by the Newton-Raphson method is introduced. We acquire the nearest point between the source point and the element edge by solving a cubic equation if the location of the projection point is outside the element, where boundary integrals with near singularity appear. Finally, the subtriangles of the local coordinate space are mapped into the integration space and the sinh method is applied in the integration space. The revised sinh method can be performed directly in the integration element. A verification test of our method is proposed. Results demonstrate that our method is effective for regularizing boundary integrals with near singularity.
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1982-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods over the fully implicit method or the fully explicit method is also demonstrated.
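The trade-off between the two algorithm classes, and the role of the critical time step, can be illustrated on the 1-D heat equation: the explicit (forward Euler) scheme is stable only for dt below dx^2/(2*alpha), while the implicit (backward Euler) scheme is unconditionally stable. A minimal sketch (the mesh and step sizes are arbitrary choices, not from the paper):

```python
import numpy as np

def step_explicit(T, r):
    # One forward-Euler step of the 1-D heat equation (Dirichlet ends fixed at 0).
    T = T.copy()
    T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

def step_implicit(T, r):
    # One backward-Euler step: solve a tridiagonal system, unconditionally stable.
    n = len(T)
    M = np.eye(n)
    for i in range(1, n - 1):
        M[i, i - 1] = -r
        M[i, i] = 1.0 + 2.0 * r
        M[i, i + 1] = -r
    return np.linalg.solve(M, T)

alpha, dx = 1.0, 0.1
dt_crit = dx**2 / (2.0 * alpha)  # explicit stability limit for this mesh

T0 = np.zeros(21)
T0[10] = 1.0  # initial hot spot

results = {}
for label, step, dt in [("explicit-ok", step_explicit, 0.9 * dt_crit),
                        ("explicit-bad", step_explicit, 1.5 * dt_crit),
                        ("implicit", step_implicit, 1.5 * dt_crit)]:
    T = T0.copy()
    for _ in range(200):
        T = step(T, alpha * dt / dx**2)
    results[label] = float(np.max(np.abs(T)))

print(results)  # the explicit run past dt_crit blows up; implicit stays bounded
```

A mixed method applies the cheap explicit update where the local critical time step permits it and the implicit update elsewhere, which is why estimating dt_crit per element matters.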
NASA Astrophysics Data System (ADS)
Li, D. H.; Zhang, X.; Sze, K. Y.; Liu, Y.
2016-10-01
In this paper, the extended layerwise method (XLWM), which was developed for laminated composite beams with multiple delaminations and transverse cracks (Li et al. in Int J Numer Methods Eng 101:407-434, 2015), is extended to laminated composite plates. The strong and weak discontinuous functions along the thickness direction are adopted to simulate multiple delaminations and interlaminar interfaces, respectively, whilst transverse cracks are modeled by the extended finite element method (XFEM). The interaction integral method and the maximum circumferential tensile criterion are used to calculate the stress intensity factor (SIF) and crack growth angle, respectively. The XLWM for laminated composite plates accurately predicts the displacement and stress fields near the crack tips and delamination fronts. The thickness distribution of the SIF, and thus the crack growth angles in different layers, can be obtained. This information cannot be predicted by other existing shell elements enriched by XFEM. Several numerical examples are studied to demonstrate the capabilities of the XLWM in static response analyses, SIF calculations and crack growth predictions.
An integrated lean-methods approach to hospital facilities redesign.
Nicholas, John
2012-01-01
Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach. PMID:22671435
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is gradually acquired and that the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.
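The stepwise evidence accumulation in a Bayesian diagnosis model of this kind can be sketched with a naive Bayes update over candidate failure modes. All modes, symptoms, priors, and likelihoods below are invented for illustration and are not the paper's model:

```python
import numpy as np

# Illustrative (made-up) transformer failure modes and prior probabilities.
modes = ["winding fault", "overheating", "partial discharge"]
prior = np.array([0.2, 0.5, 0.3])

# P(symptom present | failure mode), one entry per diagnostic test.
likelihood = {
    "high C2H2": np.array([0.70, 0.10, 0.60]),
    "high temp": np.array([0.30, 0.90, 0.20]),
}

def update(belief, symptom, present):
    # Bayes rule: multiply by the likelihood of the observed evidence, renormalize.
    p = likelihood[symptom] if present else 1.0 - likelihood[symptom]
    post = belief * p
    return post / post.sum()

# Evidence arrives step by step, as in the multistep diagnosis mechanism.
post = update(prior, "high temp", True)
post = update(post, "high C2H2", False)
print(dict(zip(modes, np.round(post, 3))))
```

In the full method, the next test to perform would be chosen by how much it is expected to sharpen this posterior, which is what lets the dynamic mechanism skip uninformative tests.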
Explicit Integration of Extremely Stiff Reaction Networks: Partial Equilibrium Methods
Guidry, Mike W; Billings, J. J.; Hix, William Raphael
2013-01-01
In two preceding papers [1,2] we have shown that, when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with these fast timescales. The new partial equilibrium methods give an integration scheme that plausibly can deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that algebraically stabilized explicit methods may offer alternatives to implicit integration of even extremely stiff systems, and that these methods may permit integration of much larger networks than have been feasible previously in a variety of fields.
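The core idea of algebraically stabilized explicit integration can be seen on a single stiff linear rate equation dy/dt = F - k*y: the explicit asymptotic update stays stable at time steps far beyond the forward-Euler limit of 2/k. A minimal sketch with illustrative constants (this one-equation toy is not the paper's reaction-network algorithm):

```python
# Stiff linear kinetics: dy/dt = F - k*y, with equilibrium y_eq = F/k.
F, k = 1.0, 1.0e6
y_eq = F / k

def euler(y, dt):
    # Plain forward Euler: unstable once dt exceeds 2/k.
    return y + dt * (F - k * y)

def asymptotic(y, dt):
    # Explicit asymptotic approximation: algebraically stabilized update
    # that relaxes toward F/k regardless of the step size.
    return (y + dt * F) / (1.0 + dt * k)

dt = 1.0e-3   # 500x larger than the 2/k stability limit of forward Euler
y_e, y_a = 0.0, 0.0
for _ in range(50):
    y_e = euler(y_e, dt)
    y_a = asymptotic(y_a, dt)

print("euler:", y_e, " asymptotic:", y_a, " exact equilibrium:", y_eq)
```

The forward-Euler iterate diverges wildly, while the asymptotic iterate converges to the exact equilibrium; the partial equilibrium method extends this kind of algebraic removal of fast timescales to equilibrating reaction groups.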
System and method for integrating hazard-based decision making tools and processes
Hodgin, C. Reed
2012-03-20
A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem that is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and the results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient. PMID:27547676
Tuning of PID controllers for integrating systems using direct synthesis method.
Anil, Ch; Padma Sree, R
2015-07-01
A PID controller is designed for various forms of integrating systems with time delay using the direct synthesis method. The method is based on comparing the characteristic equation of the integrating system and PID controller with a filter to the desired characteristic equation. The desired characteristic equation comprises multiple poles placed at the same desired location. The tuning parameter is adjusted so as to achieve the desired robustness. Tuning rules in terms of process parameters are given for various forms of integrating systems. The tuning parameter can be selected for the desired robustness by specifying the Ms value. The proposed controller design method is applied to various transfer function models and to the nonlinear model equations of a jacketed CSTR to show its effectiveness and applicability.
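The direct synthesis idea (matching the closed-loop characteristic equation to desired pole locations) can be sketched for the simplest case: a delay-free integrating process G(s) = K/s under PI control. Matching the closed loop to a double pole at -1/lambda gives Kc = 2/(K*lambda) and tau_I = 2*lambda. This is an illustrative derivation only, not the paper's tuning rules, which cover time-delay forms and full PID with a filter:

```python
# Integrating process y' = K*u under PI control; direct synthesis gains from
# matching the closed-loop characteristic equation to (s + 1/lam)^2:
#   Kc = 2/(K*lam),  tau_i = 2*lam      (illustrative, delay-free case)
K, lam = 0.5, 2.0
Kc, tau_i = 2.0 / (K * lam), 2.0 * lam

dt, r = 1e-3, 1.0        # simulation step and setpoint
y, integ = 0.0, 0.0
history = []
for _ in range(int(20.0 / dt)):      # simulate 20 time units
    e = r - y
    integ += e * dt
    u = Kc * (e + integ / tau_i)     # PI control law
    y += dt * K * u                  # explicit Euler on the plant y' = K*u
    history.append(y)

print("final value:", history[-1])
```

With both closed-loop poles at -1/lambda = -0.5, the response settles to the setpoint with only a modest overshoot (from the PI zero); shrinking lambda speeds the response at the cost of robustness, which is the Ms trade-off the paper tunes.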
Exoplanet as Third Bodies of Multiple Systems (A Quantization Method)
NASA Astrophysics Data System (ADS)
El Fady Morcos, Abd; Awadalla, Nabil; Hanna, Magdy
2016-07-01
Third bodies in some multiple systems are considered as exoplanets. Many authors have expected that the majority of solar-type stars exist in binary or multiple systems having planets (Duquennoy & Mayor, 1991). Planetary formation in binary systems has become an important issue. In this work, we aim to differentiate between the natures of third bodies: are they stars or planets? The formula derived by Morcos (2013) has been adapted and used to find the quantum numbers of some third bodies of some multiple star systems. The observed data of some triple systems, from space observations and the Interactive Extra-solar Planets Catalog, for W-type and W-subtype contact binaries, W UMa binaries, eclipsing detached binaries and Algol semidetached types, have been used. The expected quantum numbers of the third bodies are obtained. The calculated quantum numbers may be used as an indicator for exoplanets.
Using MACSYMA to drive numerical methods to compute radiation integrals
Clark, B.A.
1986-01-01
Because the emission of thermal radiation is characterized by the Planck emission spectrum, a multigroup solution of the thermal-radiation transport equation demands the calculation of definite integrals of the Planck spectrum. In the past, many approximate methods have been used with varying degrees of accuracy and efficiency. This paper describes how a symbolic algebra package, in this case MACSYMA, is used to develop new methods for accurately and efficiently evaluating multigroup Planck integrals. The advantage of using a symbolic algebra package is that the job of developing the new methods is accomplished more efficiently.
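One classical route to accurate multigroup Planck integrals, of the kind such symbolic derivations can produce, is the exponential-series expansion: the integral of x^3/(e^x - 1) from a to infinity equals the sum over n >= 1 of e^(-n*a) * (a^3/n + 3a^2/n^2 + 6a/n^3 + 6/n^4), and a group integral over [a, b] is the difference of two such tails. A sketch (this standard series is not necessarily the specific method developed in the paper):

```python
import math
import numpy as np

def planck_tail(a, terms=60):
    """Integral of x^3/(e^x - 1) from a to infinity via the exponential series."""
    total = 0.0
    for n in range(1, terms + 1):
        total += math.exp(-n * a) * (a**3 / n + 3 * a**2 / n**2
                                     + 6 * a / n**3 + 6 / n**4)
    return total

def planck_band(a, b):
    """Multigroup Planck integral over the group [a, b]."""
    return planck_tail(a) - planck_tail(b)

# Check 1: the whole-spectrum integral equals pi^4/15.
print(planck_tail(1e-9), math.pi**4 / 15)

# Check 2: compare one group integral against composite Simpson's rule.
x = np.linspace(1.0, 5.0, 4001)
y = x**3 / np.expm1(x)
h = x[1] - x[0]
simpson = h / 3.0 * (y[0] + y[-1] + 4.0 * y[1:-1:2].sum() + 2.0 * y[2:-1:2].sum())
print(planck_band(1.0, 5.0), simpson)
```

The series converges geometrically for group boundaries away from zero, which is what makes it attractive for repeated multigroup evaluations.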
Application of integrated fluid-thermal-structural analysis methods
NASA Technical Reports Server (NTRS)
Wieting, Allan R.; Dechaumphai, Pramote; Bey, Kim S.; Thornton, Earl A.; Morgan, Ken
1988-01-01
Hypersonic vehicles operate in a hostile aerothermal environment which has a significant impact on their aerothermostructural performance. Significant coupling occurs between the aerodynamic flow field, structural heat transfer, and structural response creating a multidisciplinary interaction. Interfacing state-of-the-art disciplinary analysis methods is not efficient, hence interdisciplinary analysis methods integrated into a single aerothermostructural analyzer are needed. The NASA Langley Research Center is developing such methods in an analyzer called LIFTS (Langley Integrated Fluid-Thermal-Structural) analyzer. The evolution and status of LIFTS is reviewed and illustrated through applications.
Multiple Integration of the Heat-Conduction Equation for a Space Bounded From the Inside
NASA Astrophysics Data System (ADS)
Kot, V. A.
2016-03-01
An N-fold integration of the heat-conduction equation for a space bounded from the inside has been performed using a system of identical equalities with definition of the temperature function by a power polynomial with an exponential factor. It is shown that, in a number of cases, the approximate solutions obtained can be considered as exact because their errors comprise hundredths and thousandths of a percent. The method proposed for N-fold integration represents an alternative to classical integral transformations.
Musoke, David; Miiro, George; Karani, George; Morris, Keith; Kasasa, Simon; Ndejjo, Rawlance; Nakiyingi-Miiro, Jessica; Guwatudde, David; Musoke, Miph Boses
2015-01-01
Background The World Health Organization recommends use of multiple approaches to control malaria. The integrated approach to malaria prevention advocates the use of several malaria prevention methods in a holistic manner. This study assessed perceptions and practices on integrated malaria prevention in Wakiso district, Uganda. Methods A clustered cross-sectional survey was conducted among 727 households from 29 villages using both quantitative and qualitative methods. Assessment was done on awareness of various malaria prevention methods, potential for use of the methods in a holistic manner, and reasons for dislike of certain methods. Households were classified as using integrated malaria prevention if they used at least two methods. Logistic regression was used to test for factors associated with the use of integrated malaria prevention while adjusting for clustering within villages. Results Participants knew of the various malaria prevention methods in the integrated approach including use of insecticide treated nets (97.5%), removing mosquito breeding sites (89.1%), clearing overgrown vegetation near houses (97.9%), and closing windows and doors early in the evenings (96.4%). If trained, most participants (68.6%) would use all the suggested malaria prevention methods of the integrated approach. Among those who would not use all methods, the main reasons given were there being too many (70.2%) and cost (32.0%). Only 33.0% of households were using the integrated approach to prevent malaria. Use of integrated malaria prevention by households was associated with reading newspapers (AOR 0.34; 95% CI 0.22-0.53) and ownership of a motorcycle/car (AOR 1.75; 95% CI 1.03-2.98). Conclusion Although knowledge of malaria prevention methods was high and perceptions on the integrated approach promising, practice of integrated malaria prevention was relatively low. The use of the integrated approach can be improved by promoting use of multiple malaria prevention methods.
A Comparison of Treatment Integrity Assessment Methods for Behavioral Intervention
ERIC Educational Resources Information Center
Koh, Seong A.
2010-01-01
The purpose of this study was to examine the similarity of outcomes from three different treatment integrity (TI) methods, and to identify the method which best corresponded to the assessment of a child's behavior. Six raters were recruited through individual contact via snowball sampling. A modified intervention component list and 19 video clips…
When Curriculum and Technology Meet: Technology Integration in Methods Courses
ERIC Educational Resources Information Center
Keeler, Christy G.
2008-01-01
Reporting on the results of an action research study, this manuscript provides examples of strategies used to integrate technology into a content methods course. The study used reflective teaching of a social studies methods course at a major Southwestern university in 10 course sections over a four-semester period. In alignment with the research…
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that
A flexible importance sampling method for integrating subgrid processes
NASA Astrophysics Data System (ADS)
Raut, E. K.; Larson, V. E.
2016-01-01
Numerical models of weather and climate need to compute grid-box-averaged rates of physical processes such as microphysics. These averages are computed by integrating subgrid variability over a grid box. For this reason, an important aspect of atmospheric modeling is spatial integration over subgrid scales. The needed integrals can be estimated by Monte Carlo integration. Monte Carlo integration is simple and general but requires many evaluations of the physical process rate. To reduce the number of function evaluations, this paper describes a new, flexible method of importance sampling. It divides the domain of integration into eight categories, such as the portion that contains both precipitation and cloud, or the portion that contains precipitation but no cloud. It then allows the modeler to prescribe the density of sample points within each of the eight categories. The new method is incorporated into the Subgrid Importance Latin Hypercube Sampler (SILHS). The resulting method is tested on drizzling cumulus and stratocumulus cases. In the cumulus case, the sampling error can be considerably reduced by drawing more sample points from the region of rain evaporation.
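The category-weighted sampling idea can be sketched in one dimension: a process rate that is nonzero only in a small "active" sub-region is estimated by plain Monte Carlo and by importance sampling that deliberately oversamples the active category, with weights correcting for the altered density. The region, rate function, and sampling fractions below are invented for illustration (the paper's SILHS works on eight physically defined categories):

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid-box "process rate": nonzero only in a narrow sub-region, like rain
# evaporation occurring only where there is precipitation but no cloud.
def rate(x):
    return np.where((x > 0.9) & (x < 1.0), 100.0 * (x - 0.9), 0.0)

# True grid-box average over x in [0, 1]: integral of 100*(x-0.9) on [0.9, 1] = 0.5
n = 2000

# Plain Monte Carlo: uniform samples, most of which miss the active region.
x = rng.uniform(0.0, 1.0, n)
plain = rate(x).mean()

# Importance sampling: draw 80% of points from the active category [0.9, 1.0].
half = int(0.8 * n)
xs = np.concatenate([rng.uniform(0.9, 1.0, half), rng.uniform(0.0, 1.0, n - half)])
# Mixture density: q(x) = 0.8*10 + 0.2 inside [0.9, 1], and 0.2 elsewhere.
q = np.where((xs > 0.9) & (xs < 1.0), 0.8 * 10.0 + 0.2, 0.2)
weighted = (rate(xs) / q).mean()

print(plain, weighted)  # both estimate the same average; the weighted one has lower variance
```

Letting the modeler prescribe the per-category sample densities, as SILHS does, amounts to choosing the mixture weights in q so that the scarce but important categories are never starved of sample points.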
Martel, Michelle M.; Schimmack, Ulrich; Nikolas, Molly; Nigg, Joel T.
2015-01-01
The Diagnostic and Statistical Manual of Mental Disorders—Fifth Edition explicitly requires that Attention-Deficit/Hyperactivity Disorder (ADHD) symptoms be apparent across settings, taking into account reports from multiple informants. Yet it provides no guidelines for how information from different raters should be combined in ADHD diagnosis. We examined the validity of different approaches using structural equation modeling (SEM) for multiple-informant data. Participants were 725 children, 6 to 17 years old, and their primary caregivers and teachers, recruited from the community, who completed a thorough research-based diagnostic assessment including a clinician-administered diagnostic interview, standardized parent and teacher rating scales, and cognitive testing. A best-estimate ADHD diagnosis was generated by a diagnostic team. An SEM model demonstrated convergent validity among raters. We found relatively weak symptom-specific agreement among raters, suggesting that a general average scoring algorithm is preferable to symptom-specific scoring algorithms such as the “or” and “and” algorithms. Finally, to illustrate the validity of this approach, we show that averaging makes it possible to reduce the number of items from 18 to 8 without a significant decrease in validity. In conclusion, information from multiple raters increases the validity of ADHD diagnosis, and averaging appears to be the optimal way to integrate information from multiple raters. PMID:25730162
NASA Astrophysics Data System (ADS)
Hagos Subagadis, Yohannes; Schütze, Niels; Grundmann, Jens
2015-04-01
The planning and implementation of effective water resources management strategies require assessment of multiple (physical, environmental, and socio-economic) issues and often call for new research in which knowledge from diverse disciplines is combined in a unified methodological and operational framework. Such integrative research to link different knowledge domains faces several practical challenges. These complexities are further compounded by multiple actors, frequently with conflicting interests, and by multiple uncertainties about the consequences of potential management decisions. A fuzzy-stochastic multiple criteria decision analysis tool was developed in this study to systematically quantify both the probabilistic and fuzzy uncertainties associated with complex hydrosystems management. It integrates physical process-based models, fuzzy logic, expert involvement and stochastic simulation within a general framework. The proposed approach is then applied to a water management problem in a water-scarce coastal arid region of northern Oman, where saltwater intrusion into a coastal aquifer due to excessive groundwater extraction for irrigated agriculture has affected aquifer sustainability, endangering the associated socio-economic conditions as well as the traditional social structure. Results from the developed method provide key decision alternatives which can serve as a platform for negotiation and further exploration. In addition, the approach makes it possible to systematically quantify both the probabilistic and fuzzy uncertainties associated with the decision problem. Sensitivity analysis applied within the developed tool shows that the decision makers' risk-averse and risk-taking attitudes may yield different rankings of decision alternatives. The developed approach can be applied to address the complexities and uncertainties inherent in water resources systems to support management decisions, while serving as a platform for stakeholder participation.
ePRISM: A case study in multiple proxy and mixed temporal resolution integration
Robinson, Marci M.; Dowsett, Harry J.
2010-01-01
As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (~3.3 to 3.0 Ma), yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records from multiple paleotemperature proxies.
[The academic education in nursing and multiple-victim incidents: an integrative review].
Salvador, Pétala Tuani Candido de Oliveira; Dantas, Rodrigo Assis Neves; Dantas, Daniele Vieira; Torres, Gilson de Vasconcelos
2012-06-01
The objective of this study is to reflect on the knowledge, competencies and skills that must be promoted during the academic education of nurses for effective professional practice in the event of a multiple-victim incident (MVI). This is an integrative literature review of academic nursing education. The literature survey was performed on the BDENF, LILACS, SciELO, MEDLINE, Web of Knowledge and HighWire Press databases, using the following descriptors: higher education; nursing education; emergency nursing; and mass casualty incidents. The publications permitted considerations regarding the following themes: particularities of MVIs; competencies and skills essential to nursing practice in multiple-victim incidents; and professors' strategies to promote those competencies and skills. The literature analysis demonstrated that nursing education should be configured as a space to develop critical thinking skills, which requires professors to have an eclectic educational background.
Characterization of multiple-bit errors from single-ion tracks in integrated circuits
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Edmonds, L. D.; Smith, L. S.
1989-01-01
The spread of charge induced by an ion track in an integrated circuit and its subsequent collection at sensitive nodal junctions can cause multiple-bit errors. The authors have experimentally and analytically investigated this phenomenon using a 256-kb dynamic random-access memory (DRAM). The effects of different charge-transport mechanisms are illustrated, and two classes of ion-track multiple-bit error clusters are identified. It is demonstrated that ion tracks that hit a junction can affect the lateral spread of charge, depending on the nature of the pull-up load on the junction being hit. Ion tracks that do not hit a junction allow the nearly uninhibited lateral spread of charge.
Integrated multiple patch-clamp array chip via lateral cell trapping junctions
NASA Astrophysics Data System (ADS)
Seo, J.; Ionescu-Zanetti, C.; Diamond, J.; Lal, R.; Lee, L. P.
2004-03-01
We present an integrated multiple patch-clamp array chip by utilizing lateral cell trapping junctions. The intersectional design of a microfluidic network provides multiple cell addressing and manipulation sites for efficient electrophysiological measurements at a number of patch sites. The patch pores consist of openings in the sidewall of a main fluidic channel, and a membrane patch is drawn into a smaller horizontal channel. This device geometry not only minimizes capacitive coupling between the cell reservoir and the patch channel, but also allows simultaneous optical and electrical measurements of ion channel proteins. Evidence of the hydrodynamic placement of mammalian cells at the patch sites as well as measurements of patch sealing resistance is presented. Device fabrication is based on micromolding of polydimethylsiloxane, thus allowing inexpensive mass production of disposable high-throughput biochips.
A Tri-Factor Model for Integrating Ratings Across Multiple Informants
Bauer, Daniel J.; Howard, Andrea L.; Baldasaro, Ruth E.; Curran, Patrick J.; Hussong, Andrea M.; Chassin, Laurie; Zucker, Robert A.
2014-01-01
Psychologists often obtain ratings for target individuals from multiple informants such as parents or peers. In this paper we propose a tri-factor model for multiple informant data that separates target-level variability from informant-level variability and item-level variability. By leveraging item-level data, the tri-factor model allows for examination of a single trait rated on a single target. In contrast to many psychometric models developed for multitrait-multimethod data, the tri-factor model is predominantly a measurement model. It is used to evaluate item quality in scale development, test hypotheses about sources of target variability (e.g., sources of trait differences) versus informant variability (e.g., sources of rater bias), and generate integrative scores that are purged of the subjective biases of single informants. PMID:24079932
Explicit Integration of Extremely Stiff Reaction Networks: Asymptotic Methods
Guidry, Mike W; Budiardja, R.; Feger, E.; Billings, J. J.; Hix, William Raphael; Messer, O.E.B.; Roche, K. J.; McMahon, E.; He, M.
2013-01-01
We show that, even for extremely stiff systems, explicit integration may compete in both accuracy and speed with implicit methods if algebraic methods are used to stabilize the numerical integration. The stabilizing algebra differs for systems well removed from equilibrium and those near equilibrium. This paper introduces a quantitative distinction between these two regimes and addresses the former case in depth, presenting explicit asymptotic methods appropriate when the system is extremely stiff but only weakly equilibrated. A second paper [1] examines quasi-steady-state methods as an alternative to asymptotic methods in systems well away from equilibrium, and a third paper [2] extends these methods to equilibrium conditions in extremely stiff systems using partial equilibrium methods. All three papers present systematic evidence for timesteps competitive with implicit methods. Because explicit methods can execute a timestep faster than an implicit method, our results imply that algebraically stabilized explicit algorithms may offer a means of integrating larger networks than has previously been feasible in various disciplines.
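For a single species with rate equation dy/dt = F⁺ − k·y, the basic explicit asymptotic update can be sketched as follows. This is a minimal single-equation illustration of the idea; the threshold value and function name are illustrative choices, and the paper's networks couple many species with F⁺ and k recomputed every step.

```python
def asymptotic_step(y, f_plus, k, dt, stiff_threshold=1.0):
    """One explicit step for dy/dt = f_plus - k*y.

    When k*dt is small the equation is not stiff and forward Euler is
    adequate; when k*dt is large, the algebraically stabilized asymptotic
    update relaxes y toward the local equilibrium f_plus/k and remains
    stable for arbitrarily large timesteps.
    """
    if k * dt < stiff_threshold:
        return y + dt * (f_plus - k * y)       # ordinary explicit Euler
    return (y + dt * f_plus) / (1.0 + dt * k)  # asymptotic approximation
```

With f_plus = 0, k = 100 and dt = 1, forward Euler would drive y = 1 to y = -99, while the asymptotic update yields a small positive value; that stability at large k*dt is the stabilization the abstract refers to.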
Selecting an Appropriate Multiple Comparison Technique: An Integration of Monte Carlo Studies.
ERIC Educational Resources Information Center
Myette, Beverly M.; White, Karl R.
Twenty Monte Carlo studies on multiple comparison (MC) techniques were conducted to examine which MC technique was the "method of choice." The results from these studies had several apparent contradictions when different techniques were investigated under varying sample size and variance conditions. Box's coefficient of variance variation and bias…
NASA Astrophysics Data System (ADS)
Dahlin, K.; Asner, G. P.
2010-12-01
The ability to map plant species distributions has long been one of the key goals of terrestrial remote sensing. Achieving this goal has been challenging, however, due to technical constraints and the difficulty in relating remote observations to ground measurements. Advances in both the types of data that can be collected remotely and in available analytical tools like multiple endmember spectral mixture analysis (MESMA) are allowing for rapid improvements in this field. In 2007 the Carnegie Airborne Observatory (CAO) acquired high resolution lidar and hyperspectral imagery of Jasper Ridge Biological Preserve (Woodside, California). The site contains a mosaic of vegetation types, from grassland to chaparral to evergreen forest. To build a spectral library, 415 GPS points were collected in the field, made up of 44 plant species, six plant categories (for nonphotosynthetic vegetation), and four substrate types. Using the lidar data to select the most illuminated pixels as seen from the aircraft (based on canopy shape and viewing angle), we then reduced the spectral library to only the most fully lit pixels. To identify individual plant species in the imagery, first the hyperspectral data was used to calculate the normalized difference vegetation index (NDVI), and then pixels with an NDVI less than 0.15 were removed from further analysis. The remaining image was stratified into five classes based on vegetation height derived from the lidar data. For each class, a suite of possible endmembers was identified and then three endmember selection procedures (endmember average RMS, minimum average spectral angle, and count based endmember selection) were employed to select the most representative endmembers from each species in each class. Two and three endmember models were then applied and each pixel was assigned a species or plant category based on the highest endmember fraction. To validate the approach, an independent set of 200 points was collected throughout the
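For the simplest two-endmember case of the linear mixture models applied above, the per-pixel fraction has a closed-form least-squares solution. A hedged sketch with toy spectra (the function name and vectors are invented; MESMA itself evaluates many candidate endmember combinations per pixel):

```python
def two_endmember_fraction(pixel, e1, e2):
    """Least-squares fraction f of endmember e1 in the linear mixture
    pixel ~ f*e1 + (1 - f)*e2, clamped to the physical range [0, 1]."""
    d = [a - b for a, b in zip(e1, e2)]       # e1 - e2
    r = [p - b for p, b in zip(pixel, e2)]    # pixel - e2
    f = sum(x * y for x, y in zip(r, d)) / sum(x * x for x in d)
    return max(0.0, min(1.0, f))
```

Assigning each pixel to the species with the highest such fraction mirrors the classification step described in the abstract.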
Cortical mechanisms for trans-saccadic memory and integration of multiple object features
Prime, Steven L.; Vesia, Michael; Crawford, J. Douglas
2011-01-01
Constructing an internal representation of the world from successive visual fixations, i.e. separated by saccadic eye movements, is known as trans-saccadic perception. Research on trans-saccadic perception (TSP) has been traditionally aimed at resolving the problems of memory capacity and visual integration across saccades. In this paper, we review this literature on TSP with a focus on research showing that egocentric measures of the saccadic eye movement can be used to integrate simple object features across saccades, and that the memory capacity for items retained across saccades, like visual working memory, is restricted to about three to four items. We also review recent transcranial magnetic stimulation experiments which suggest that the right parietal eye field and frontal eye fields play a key functional role in spatial updating of objects in TSP. We conclude by speculating on possible cortical mechanisms for governing egocentric spatial updating of multiple objects in TSP. PMID:21242142
Integrative methods for analyzing big data in precision medicine.
Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša
2016-03-01
We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With advances in the technologies capturing molecular and medical data, we have entered the era of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods for uncovering personalized information from the big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarker discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face.
Dynamical multiple-time stepping methods for overcoming resonance instabilities.
Chin, Siu A
2004-01-01
Current molecular dynamics simulations of biomolecules using multiple time steps to update the slowly changing force are hampered by instabilities beginning at time steps near the half period of the fastest vibrating mode. These "resonance" instabilities have become a critical barrier preventing the long-time simulation of biomolecular dynamics. Attempts to tame these instabilities by altering the slowly changing force, and efforts to damp them out by Langevin dynamics, do not address the fundamental cause of these instabilities. In this work, we trace the instability to the nonanalytic character of the underlying spectrum and show that a correct splitting of the Hamiltonian, which renders the spectrum analytic, restores stability. The resulting Hamiltonian dictates that in addition to updating the momentum due to the slowly changing force, one must also update the position with a modified mass. Thus multiple-time stepping must be done dynamically.
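For context, the standard impulse (r-RESPA) multiple-time-step integrator whose resonance behavior the paper analyzes can be sketched as follows. This is only the conventional scheme; the paper's corrected splitting with a modified mass is not implemented here, and the parameter names are illustrative.

```python
def respa_step(x, v, slow_force, fast_force, m, dt, n_inner):
    """One outer step of impulse multiple-time stepping: the slowly
    changing force kicks the momentum at the outer half steps, while the
    fast force is integrated by velocity Verlet with step dt/n_inner."""
    h = dt / n_inner
    v += 0.5 * dt * slow_force(x) / m          # outer half kick (slow force)
    for _ in range(n_inner):                   # inner velocity-Verlet loop
        v += 0.5 * h * fast_force(x) / m
        x += h * v
        v += 0.5 * h * fast_force(x) / m
    v += 0.5 * dt * slow_force(x) / m          # closing outer half kick
    return x, v
```

Resonance appears when the outer step dt approaches half the period of the fastest mode; well below that, the scheme conserves energy well.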
NASA Astrophysics Data System (ADS)
Benedict, K. K.; Scott, S.
2013-12-01
While there has been convergence toward a limited number of standards for representing knowledge (metadata) about geospatial (and other) data objects and collections, there exist a variety of community conventions around the specific use of those standards and within specific data discovery and access systems. This combination of limited (but multiple) standards and conventions creates a challenge for system developers who aspire to participate in multiple data infrastructures, each of which may use a different combination of standards and conventions. While Extensible Markup Language (XML) is a shared standard for encoding most metadata, traditional direct XML transformations (XSLT) from one standard to another often result in an imperfect transfer of information due to incomplete mapping from one standard's content model to another. This paper presents work at the University of New Mexico's Earth Data Analysis Center (EDAC) in which a unified data and metadata management system has been developed to support the storage, discovery and access of heterogeneous data products. This system, the Geographic Storage, Transformation and Retrieval Engine (GSTORE) platform, has adopted a polyglot database model in which a combination of relational and document-based databases is used to store both data and metadata, with some metadata stored in a custom XML schema designed as a superset of the requirements for multiple target metadata standards: ISO 19115-2/19139/19110/19119, FGDC CSDGM (both with and without remote sensing extensions) and Dublin Core. Metadata stored within this schema is complemented by additional service, format and publisher information that is dynamically "injected" into produced metadata documents when they are requested from the system. While mapping from the underlying common metadata schema is relatively straightforward, the generation of valid metadata within each target standard is necessary but not sufficient for integration into
Methods for integration of transcriptomic data in genome-scale metabolic models
Kim, Min Kyung; Lun, Desmond S.
2014-01-01
Several computational methods have been developed that integrate transcriptomic data with genome-scale metabolic reconstructions to infer condition-specific, system-wide intracellular metabolic flux distributions. In this mini-review, we describe each of these methods published to date, categorizing them according to four grouping criteria (requirement for multiple gene expression datasets as input, requirement for a threshold to define a gene's high and low expression, requirement for an a priori assumption of an appropriate objective function, and validation of predicted fluxes directly against measured intracellular fluxes). We then recommend which group of methods would be more suitable from a practical perspective. PMID:25379144
Tillib, S. V.; Strizhkov, B. N.; Mirzabekov, A. D.; Biochip Technology Center; Russian Academy of Sciences
2001-05-01
We have developed a method for parallel, independent on-chip amplification and subsequent sequence variation analysis of multiple DNA regions, using a microchip with an array of nanoliter gel pads containing specific sets of tethered primers. The method has three key features. First, the DNA to be amplified is enriched at the gel pads by hybridization with the immobilized primers. Second, different sets of specific primers are immobilized within the various gel pads, and the primers are detached within the gel pads just before the polymerase chain reaction to enhance amplification. A gel pad may contain an additional, permanently immobilized dormant primer that is activated to carry out an allele-specific primer extension reaction to detect mutations. Third, the multiple polymerase chain reactions are confined within nanoliter gel pads covered with and separated from each other by mineral oil. The method was applied to the simultaneous identification of several abundant drug-resistance mutations in three genes of Mycobacterium tuberculosis.
NASA Technical Reports Server (NTRS)
Schneider, Harold
1959-01-01
This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i-th volume element or subregion that will make their next collision in the j-th volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities, from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux, and the remaining region is solved by higher-order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P3 spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
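The successive-collision accumulation described above can be sketched for a toy configuration; the transfer probabilities, scattering probabilities, and first-collision source used in the example are invented numbers, not values computed from slab geometry.

```python
def total_collision_density(first_collisions, transfer, c, tol=1e-12):
    """Accumulate collision densities generation by generation:
    F_j = Q_j + sum_i c[i] * F_i * transfer[i][j], where transfer[i][j]
    is the probability that a neutron scattered in subregion i makes its
    next collision in subregion j, and c[i] is the probability that a
    collision in subregion i is a scatter rather than an absorption."""
    n = len(first_collisions)
    F = list(first_collisions)
    gen = list(first_collisions)          # collision density of current generation
    while max(gen) > tol:
        gen = [sum(c[i] * gen[i] * transfer[i][j] for i in range(n))
               for j in range(n)]
        F = [F[j] + gen[j] for j in range(n)]
    return F
```

For a thick, weakly absorbing slab this series converges slowly, which is exactly why the abstract switches to solving the equivalent set of linear simultaneous equations directly.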
Structure of the EGF receptor transactivation circuit integrates multiple signals with cell context
Joslin, Elizabeth J.; Shankaran, Harish; Opresko, Lee K.; Bollinger, Nikki; Lauffenburger, Douglas A.; Wiley, H. S.
2010-05-10
Transactivation of the epidermal growth factor receptor (EGFR) has been proposed to be a mechanism by which a variety of cellular inputs can be integrated into a single signaling pathway, but the regulatory topology of this important system is unclear. To understand the transactivation circuit, we first created a “non-binding” reporter for ligand shedding. We then quantitatively defined how signals from multiple agonists were integrated both upstream and downstream of the EGFR into the extracellular signal regulated kinase (ERK) cascade in human mammary epithelial cells. We found that transactivation is mediated by a recursive autocrine circuit where ligand shedding drives EGFR-stimulated ERK that in turn drives further ligand shedding. The time from shedding to ERK activation is fast (<5 min) whereas the recursive feedback is slow (>15 min). Simulations showed that this delay in positive feedback greatly enhanced system stability and robustness. Our results indicate that the transactivation circuit is constructed so that the magnitude of ERK signaling is governed by the sum of multiple direct inputs, while recursive, autocrine ligand shedding controls signal duration.
Comparison of Four Methods for Weighting Multiple Predictors.
ERIC Educational Resources Information Center
Aamodt, Michael G.; Kimbrough, Wilson W.
1985-01-01
Four methods were used to weight predictors associated with a Resident Assistant job: (1) rank order weights; (2) unit weights; (3) critical incident weights; and (4) regression weights. A cross-validation was also done. Most weighting methods were highly related. No method was superior in terms of protection from validity shrinkage. (GDC)
Predicted PAR1 inhibitors from multiple computational methods
NASA Astrophysics Data System (ADS)
Wang, Ying; Liu, Jinfeng; Zhu, Tong; Zhang, Lujia; He, Xiao; Zhang, John Z. H.
2016-08-01
Multiple computational approaches are employed in order to find potentially strong binders of PAR1 from two molecular databases: the Specs database, containing more than 200,000 commercially available molecules, and the traditional Chinese medicine (TCM) database. By combining the use of popular docking scoring functions with detailed molecular dynamics simulation and protein-ligand free energy calculations, a total of fourteen molecules are found to be potentially strong binders of PAR1. The atomic details of the protein-ligand interactions of these molecules with PAR1 are analyzed to help understand the binding mechanism, which should be very useful in the design of new drugs.
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1996-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
Accelerometer Method and Apparatus for Integral Display and Control Functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1998-01-01
Method and apparatus for detecting mechanical vibrations and outputting a signal in response thereto is discussed. An accelerometer package having integral display and control functions is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine conditions over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase in amplitude over a selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated.
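The integration stage of the signal chain claimed above (acceleration integrated to a velocity signal, then compared against a selected trip point) reduces to a few lines. A hedged sketch: the trapezoidal rule, sample spacing, and function names are illustrative choices, not the patented circuitry.

```python
def velocity_from_acceleration(samples, dt):
    """Trapezoidal integration of evenly spaced accelerometer samples
    into a running velocity signal (initial velocity taken as zero)."""
    v, out = 0.0, [0.0]
    for a0, a1 in zip(samples, samples[1:]):
        v += 0.5 * (a0 + a1) * dt   # area of one trapezoid
        out.append(v)
    return out

def trip_alert(velocity, trip_point):
    """Digitally compatible alert: True once the vibration velocity
    exceeds the operator-selected trip point."""
    return any(abs(v) > trip_point for v in velocity)
```

A second pass of the same integrator would yield displacement, mirroring the patent's cascade of integration stages feeding the display driver.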
A General Simulation Method for Multiple Bodies in Proximate Flight
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
2003-01-01
Methods of unsteady aerodynamic simulation for an arbitrary number of independent bodies flying in close proximity are considered. A novel method to efficiently detect collision contact points is described. A method to compute body trajectories in response to aerodynamic loads, applied loads, and inter-body collisions is also given. The physical correctness of the methods is verified by comparison to a set of analytic solutions. The methods, combined with a Navier-Stokes solver, are used to demonstrate the possibility of predicting the unsteady aerodynamics and flight trajectories of moving bodies that involve rigid-body collisions.
Rice, Glenn; Teuschler, Linda; MacDonel, Margaret; Butler, Jim; Finster, Molly; Hertzberg, Rick; Harou, Lynne
2007-07-01
Available in abstract form only. As information about environmental contamination has increased in recent years, so has public interest in the combined effects of multiple contaminants. This interest has been highlighted by recent tragedies such as the World Trade Center disaster and Hurricane Katrina. In fact, assessing multiple contaminants, exposures, and effects has long been an issue for contaminated sites, including U.S. Department of Energy (DOE) legacy waste sites. Local citizens have explicitly asked the federal government to account for cumulative risks, with contaminants moving offsite via groundwater flow, surface runoff, and air dispersal being a common emphasis. Multiple exposures range from ingestion and inhalation to dermal absorption and external gamma irradiation. Three types of concerns can lead to cumulative assessments: (1) specific sources or releases, e.g., industrial facilities or accidental discharges; (2) contaminant levels in environmental media or human tissues; and (3) elevated rates of disease, e.g., asthma or cancer. The specific initiator frames the assessment strategy, including a determination of the appropriate models to be used. Approaches are being developed to better integrate a variety of data, extending from environmental to internal co-location of contaminants and combined effects, to support more practical assessments of cumulative health risks. (authors)
Detection method for dissociation of multiple-charged ions
Smith, Richard D.; Udseth, Harold R.; Rockwood, Alan L.
1991-01-01
Dissociations of multiple-charged ions are detected and analyzed by charge-separation tandem mass spectrometry. Analyte molecules are ionized to form multiple-charged parent ions. A particular parent ion charge state is selected in a first-stage mass spectrometer and its mass-to-charge ratio (M/Z) is detected to determine its mass and charge. The selected parent ions are then dissociated, each into a plurality of fragments including a set of daughter ions, each having a mass of at least one molecular weight and a charge of at least one. Sets of daughter ions resulting from the dissociation of one parent ion (sibling ions) vary in number but typically include two to four ions, one or more of them multiply charged. A second-stage mass spectrometer detects the mass-to-charge ratio (m/z) of the daughter ions and a temporal or temporo-spatial relationship among them. This relationship is used to correlate the daughter ions to determine which (m/z) ratios belong to a set of sibling ions. The values of mass and charge of each of the sibling ions are determined simultaneously from their respective (m/z) ratios such that the sibling ion charges are integers and sum to the parent ion charge.
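The final constraint above (integer sibling charges summing to the parent charge, with the implied daughter masses summing to the parent mass) can be illustrated with a brute-force search. All numbers and the function name are invented for the example, neutral losses are assumed negligible, and the patent itself determines charges from the correlated (m/z) ratios rather than necessarily by search.

```python
from itertools import product

def assign_sibling_charges(parent_mz, parent_z, daughter_mzs, mass_tol=0.01):
    """Find positive integer charges for a set of sibling daughter ions
    such that the charges sum to the parent charge and the implied
    daughter masses (m/z times z) sum to the parent mass."""
    parent_mass = parent_mz * parent_z
    for charges in product(range(1, parent_z + 1), repeat=len(daughter_mzs)):
        if sum(charges) != parent_z:
            continue  # sibling charges must sum to the parent charge
        total_mass = sum(mz * z for mz, z in zip(daughter_mzs, charges))
        if abs(total_mass - parent_mass) < mass_tol:
            return list(charges)
    return None
```

For example, a parent at M/Z 200 with charge 5 (mass 1000) dissociating into daughters at m/z 233.33 and 150 is consistent only with charges 3 and 2 (masses 700 and 300).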
Gao, Yan; Li, Peng; Pappas, Dimitri
2013-12-01
In this study, we introduced a novel and convenient approach to culture multiple cells in localized arrays of microfluidic chambers using one-step vacuum actuation. In one device, we integrated 8 individually addressable regions of culture chambers, each only requiring one simple vacuum operation to seed cell lines. Four cell lines were seeded in designated regions in one device via sequential injection with high purity (99.9 %-100 %) and cultured long-term. The on-chip simultaneous culture of HuT 78, Ramos, PC-3 and C166-GFP cells for 48 h was demonstrated with viabilities of 92 %+/-2 %, 94 %+/-4 %, 96 %+/-2 % and 97 %+/-2 %, respectively. The longest culture period for C166-GFP cells in this study was 168 h with a viability of 96 %+/-10 %. Cell proliferation in each individual side channel can be tracked. Mass transport between the main channel and side channels was achieved through diffusion and studied using fluorescein solution. The main advantage of this device is the capability to perform multiple cell-based assays on the same device for better comparative studies. After treating cells with staurosporine or anti-human CD95 for 16 h, the apoptotic cell percentages of HuT 78, CCRF-CEM, PC-3 and Ramos cells were 36 %+/-3 %, 24 %+/-4 %, 12 %+/-2 %, 18 %+/-4 % for staurosporine, and 63 %+/-2 %, 45 %+/-1 %, 3 %+/-3 %, 27 %+/-12 % for anti-human CD95, respectively. With the advantages of enhanced integration, ease of use and fabrication, and flexibility, this device will be suitable for long-term multiple cell monitoring and cell-based assays.
Multistep and Multistage Boundary Integral Methods for the Wave Equation
NASA Astrophysics Data System (ADS)
Banjai, Lehel
2009-09-01
We describe how the time-discretized wave equation in a homogeneous medium can be solved by boundary integral methods. The time discretization can be a multistep, Runge-Kutta, or a more general multistep-multistage method. The resulting convolutional system of boundary integral equations falls within the family of convolution quadratures of Ch. Lubich. In this work our aim is to discuss a new technique for efficiently solving the discrete convolutional system and to present large-scale 3D numerical experiments with a wide range of time discretizations that have not appeared in print up to now. One of the conclusions is that Runge-Kutta methods are often the method of choice even at low accuracy; yet, in connection with hyperbolic problems, BDF (backward difference formulas) have been predominant in the literature on convolution quadrature.
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Various engineering calculations use integral calculus in theoretical models, i.e. analytical and numerical models. For usual problems, integrals have exact mathematical solutions. If the domain of integration is complicated, several methods may be used to calculate the integral. The first idea is to divide the domain into smaller sub-domains for which direct calculus relations exist, e.g. in strength of materials the bending moment may be computed at discrete points by graphical integration of the shear force diagram, which usually has a simple shape. Another example is in mathematics, where the area of a subgraph may be approximated by a set of rectangles or trapezoids used to calculate the definite integral. The goal of the work is to introduce our studies on the calculus of integrals over transverse-section domains, computer-aided solutions, and a generalizing method. The aim of our research is to create general computer-based methods for carrying out such calculations in structural studies. Thus, we define a Boolean algebra which operates with ‘simple’-shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of every ‘simple’ shape (-1 for the shapes to be subtracted). By a ‘simple’ or ‘basic’ shape we mean either a shape for which direct calculus relations exist, or a domain whose frontier is approximated by known functions, with the corresponding calculus carried out by an algorithm. The ‘basic’ shapes are linked to the calculus of the most significant stresses in the section, a refined aspect which needs special attention. Starting from this idea, the libraries of ‘basic’ shapes include rectangles, ellipses and domains whose frontiers are approximated by spline functions. The domain triangularization methods suggested that another ‘basic’ shape to be considered is the triangle. The subsequent phase was to deduce the exact relations for the
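The signed-shape algebra sketched above can be illustrated for composite rectangular sections: a hole is simply a rectangle entered with sign -1, and section properties follow by summation with the parallel-axis theorem. The function names and data layout below are our own illustration, not the authors' library.

```python
def section_props(shapes):
    """Area, centroid height and second moment of area of a composite section
    built from signed rectangles (b, h, yc, sign); sign = -1 marks a shape
    to be subtracted (a hole), in the spirit of the Boolean shape algebra."""
    A = Q = 0.0
    for b, h, yc, sign in shapes:
        a = sign * b * h
        A += a
        Q += a * yc                      # first moment about the reference axis
    ybar = Q / A                         # centroid height of the composite section
    I = 0.0
    for b, h, yc, sign in shapes:
        a = sign * b * h
        # own second moment plus parallel-axis transfer to the centroid
        I += sign * b * h ** 3 / 12.0 + a * (yc - ybar) ** 2
    return A, ybar, I
```

For example, a hollow 100 x 200 box with an 80 x 180 opening (both centred at height 100) is the outer rectangle plus the inner rectangle with sign -1, giving A = 5600 and I = (100*200^3 - 80*180^3)/12 about the centroid.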
An Integrated Approach to Research Methods and Capstone
ERIC Educational Resources Information Center
Postic, Robert; McCandless, Ray; Stewart, Beth
2014-01-01
In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…
Integrating Methods and Materials: Developing Trainees' Reading Skills.
ERIC Educational Resources Information Center
Jarvis, Jennifer
1987-01-01
Explores issues arising from a research project which studied ways of meeting the reading needs of trainee primary school teachers (from Malawi and Tanzania) of English as a foreign language. Topics discussed include: the classroom teaching situation; teaching "quality"; and integration of materials and methods. (CB)
Singularity Preserving Numerical Methods for Boundary Integral Equations
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki (Principal Investigator)
1996-01-01
In the past twelve months (May 8, 1995 - May 8, 1996), under the cooperative agreement with Division of Multidisciplinary Optimization at NASA Langley, we have accomplished the following five projects: a note on the finite element method with singular basis functions; numerical quadrature for weakly singular integrals; superconvergence of degenerate kernel method; superconvergence of the iterated collocation method for Hammerstein equations; and singularity preserving Galerkin method for Hammerstein equations with logarithmic kernel. This final report consists of five papers describing these projects. Each project is preceded by a brief abstract.
Adaptive Transmission Control Method for Communication-Broadcasting Integrated Services
NASA Astrophysics Data System (ADS)
Koto, Hideyuki; Furuya, Hiroki; Nakamura, Hajime
This paper proposes an adaptive transmission control method for massive and intensive telecommunication traffic generated by communication-broadcasting integrated services. The proposed method adaptively controls data transmissions from viewers depending on the congestion states, so that severe congestion can be effectively avoided. Furthermore, it utilizes the broadcasting channel which is not only scalable, but also reliable for controlling the responses from vast numbers of viewers. The performance of the proposed method is evaluated through experiments on a test bed where approximately one million viewers are emulated. The obtained results quantitatively demonstrate the performance of the proposed method and its effectiveness under massive and intensive traffic conditions.
NASA Astrophysics Data System (ADS)
Congdon, Peter
2010-03-01
This paper describes a structural equation methodology for obtaining social capital scores for survey subjects from multiple indicators of social support, neighbourhood and trust perceptions, and memberships of organizations. It adjusts for variation that is likely to occur in levels of social capital according to geographic context (e.g. level of area deprivation, geographic region, level of urbanity) and demographic group. Social capital is used as an explanatory factor for psychological distress using data from the 2006 Health Survey for England. A highly significant effect of social capital in reducing the chance of psychiatric caseness is obtained after controlling for other individual and geographic risk factors. Allowing for social capital has considerable effects on the impacts on psychiatric health of other risk factors. In particular, the impact of area deprivation category is much reduced. There is also evidence of significant differentiation in social capital between population categories and geographic contexts.
Integration of multiple determinants in the neuronal computation of economic values.
Raghuraman, Anantha P; Padoa-Schioppa, Camillo
2014-08-27
Economic goods may vary on multiple dimensions (determinants). A central conjecture in decision neuroscience is that choices between goods are made by comparing subjective values computed through the integration of all relevant determinants. Previous work identified three groups of neurons in the orbitofrontal cortex (OFC) of monkeys engaged in economic choices: (1) offer value cells, which encode the value of individual offers; (2) chosen value cells, which encode the value of the chosen good; and (3) chosen juice cells, which encode the identity of the chosen good. In principle, these populations could be sufficient to generate a decision. Critically, previous work did not assess whether offer value cells (the putative input to the decision) indeed encode subjective values as opposed to physical properties of the goods, and/or whether offer value cells integrate multiple determinants. To address these issues, we recorded from the OFC while monkeys chose between risky outcomes. Confirming previous observations, three populations of neurons encoded the value of individual offers, the value of the chosen option, and the value-independent choice outcome. The activity of both offer value cells and chosen value cells encoded values defined by the integration of juice quantity and probability. Furthermore, both populations reflected the subjective risk attitude of the animals. We also found additional groups of neurons encoding the risk associated with a particular option, the risky nature of the chosen option, and whether the trial outcome was positive or negative. These results provide substantial support for the conjecture described above and for the involvement of OFC in good-based decisions. PMID:25164656
Pavan, Ana Carolina; Marroig, Gabriel
2016-10-01
A phylogenetic systematic perspective is instrumental in recovering new species and their evolutionary relationships. The advent of new technologies for molecular and morphological data acquisition and analysis, allied to the integration of knowledge from different areas, such as ecology and population genetics, allows for the emergence of more rigorous, accurate and complete scientific hypotheses on species diversity. Mustached bats (genus Pteronotus) are a good model for the application of this integrative approach. They are a widely distributed and morphologically homogeneous group, but comprise species with remarkable differences in their echolocation strategy and feeding behavior. The latest systematic review suggested six species with 17 subspecies in Pteronotus. Subsequent studies using discrete morphological characters supported the same arrangement. However, recent papers reported high levels of genetic divergence among conspecific taxa, together with bioacoustic and geographic agreement, suggesting an underestimated diversity in the genus. To date, no study merging genetic evidence and morphometric variation along the entire geographic range of this group has been attempted. Based on a comprehensive sampling including representatives of all current taxonomic units, we attempt to delimit species in Pteronotus through the application of multiple methodologies and hierarchically distinct datasets. The molecular approach includes six molecular markers from three genetic transmission systems; morphological investigations used 41 Euclidean distances estimated through three-dimensional landmarks collected from 1628 skulls. The phylogenetic analysis reveals a greater diversity than previously reported, with a high correspondence among the genetic lineages and the currently recognized subspecies in the genus. Discriminant analysis of variables describing size and shape of cranial bones supports raising the genetic groups to species status. Based on
Nodal integral method for transient heat conduction in a cylinder
Esser, P.D.
1993-01-01
The accuracy and efficiency of nodal solution methods are well established for neutron diffusion in a variety of geometries, as well as for heat transfer and fluid flow in rectangular coordinates. This paper describes the development of a nodal integral method to solve the transient heat conduction equation in cylindrical geometry. Results for a test problem with an analytical solution indicate that the nodal solution provides higher accuracy than a conventional implicit finite difference scheme, while maintaining similar stability characteristics.
Pisella, L; Binkofski, F; Lasek, K; Toni, I; Rossetti, Y
2006-01-01
The current dominant view of the visual system is marked by the functional and anatomical dissociation between a ventral stream specialised for perception and a dorsal stream specialised for action. The "double-dissociation" between visual agnosia (VA), a deficit of visual recognition, and optic ataxia (OA), a deficit of visuo-manual guidance, considered to result from ventral and dorsal damage, respectively, has provided the main argument for this dichotomous view. In the first part of this paper, we show that the currently available empirical data do not suffice to support a double-dissociation between OA and VA. In the second part, we review evidence coming from human neuropsychology and monkey data, which cast further doubts on the validity of a simple double-dissociation between perception and action because they argue for a far more complex organisation with multiple parallel visual-to-motor connections: 1. A dorso-dorsal pathway (involving the most dorsal part of the parietal and pre-motor cortices): for immediate visuo-motor control--with OA as typical disturbance. The latest research about OA is reviewed, showing how these patients exhibit deficits restricted to the most direct and fast visuo-motor transformations. We also propose that mild mirror ataxia, consisting of misreaching errors when the contralesional hand is guided to a visual goal through a mirror, could correspond to OA with an isolated "hand effect". 2. A ventral stream-prefrontal pathway (connections from the ventral visual stream to pre-frontal areas, by-passing the parietal areas): for "mediate" control (involving spatial or temporal transpositions [Rossetti, Y., & Pisella, L. (2003). Mediate responses as direct evidence for intention: Neuropsychology of Not to-, Not now- and Not there-tasks. In S. Johnson (Ed.), Cognitive Neuroscience perspectives on the problem of intentional action (pp. 67-105). MIT Press.])--with VA as typical disturbance. Preserved visuo-manual guidance in patients
Mechanical integrity test methods for Class 2 injection wells
Smith, K.P.
1991-03-01
Mechanical integrity testing of injection wells to ensure that they do not threaten an underground source of drinking water (USDW) is a key component of the Underground Injection Control (UIC) program of the Safe Drinking Water Act of 1974. Approximately 55% of all active injection wells are classified as Class II wells. These wells are used by the oil and gas industry primarily to dispose of waste fluids or to enhance production of hydrocarbons. Mechanical integrity is defined as the absence of significant leaks in the casing, tubing, or packers (internal integrity); and the absence of significant fluid movement into a USDW through cement channels behind the casing (external integrity). A wide variety of mechanical integrity test (MIT) methods have been developed to meet federal and primacy state program requirements. The internal mechanical integrity of standard injection wells can be evaluated by radioactive tracer surveys, standard annular pressure tests, annulus pressure monitoring, and continuous injection pressure versus injection rate monitoring. These tests are listed in order of decreasing reliability and increasing detection limits. 28 refs., 17 figs., 6 tabs.
Adaptive integral method with fast Gaussian gridding for solving combined field integral equations
NASA Astrophysics Data System (ADS)
Bakır, O.; Bağcı, H.; Michielssen, E.
Fast Gaussian gridding (FGG), a recently proposed nonuniform fast Fourier transform algorithm, is used to reduce the memory requirements of the adaptive integral method (AIM) for accelerating the method of moments-based solution of combined field integral equations pertinent to the analysis of scattering from three-dimensional perfectly electrically conducting surfaces. Numerical results that demonstrate the efficiency and accuracy of the AIM-FGG hybrid in comparison to an AIM-accelerated solver, which uses moment matching to project surface sources onto an auxiliary grid, are presented.
ERIC Educational Resources Information Center
Tang, Kok-Sing; Delgado, Cesar; Moje, Elizabeth Birr
2014-01-01
This paper presents an integrative framework for analyzing science meaning-making with representations. It integrates the research on multiple representations and multimodal representations by identifying and leveraging the differences in their units of analysis in two dimensions: timescale and compositional grain size. Timescale considers the…
Metcalf, Jessica L; Prost, Stefan; Nogués-Bravo, David; DeChaine, Eric G; Anderson, Christian; Batra, Persaram; Araújo, Miguel B; Cooper, Alan; Guralnick, Robert P
2014-02-22
One of the grand goals of historical biogeography is to understand how and why species' population sizes and distributions change over time. Multiple types of data drawn from disparate fields, combined into a single modelling framework, are necessary to document changes in a species's demography and distribution, and to determine the drivers responsible for change. Yet truly integrated approaches are challenging and rarely performed. Here, we discuss a modelling framework that integrates spatio-temporal fossil data, ancient DNA, palaeoclimatological reconstructions, bioclimatic envelope modelling and coalescence models in order to statistically test alternative hypotheses of demographic and potential distributional changes for the iconic American bison (Bison bison). Using different assumptions about the evolution of the bioclimatic niche, we generate hypothetical distributional and demographic histories of the species. We then test these demographic models by comparing the genetic signature predicted by serial coalescence against sequence data derived from subfossils and modern populations. Our results supported demographic models that include both climate and human-associated drivers of population declines. This synthetic approach, integrating palaeoclimatology, bioclimatic envelopes, serial coalescence, spatio-temporal fossil data and heterochronous DNA sequences, improves understanding of species' historical biogeography by allowing consideration of both abiotic and biotic interactions at the population level.
Multiple cell radiation detector system, and method, and submersible sonde
Johnson, Larry O.; McIsaac, Charles V.; Lawrence, Robert S.; Grafwallner, Ervin G.
2002-01-01
A multiple cell radiation detector includes a central cell having a first cylindrical wall providing a stopping power less than an upper threshold; an anode wire suspended along a cylindrical axis of the central cell; a second cell having a second cylindrical wall providing a stopping power greater than a lower threshold, the second cylindrical wall being mounted coaxially outside of the first cylindrical wall; a first end cap forming a gas-tight seal at first ends of the first and second cylindrical walls; a second end cap forming a gas-tight seal at second ends of the first and second cylindrical walls; and a first group of anode wires suspended between the first and second cylindrical walls.
Yoga as a method of symptom management in multiple sclerosis
Frank, Rachael; Larimore, Jennifer
2015-01-01
Multiple Sclerosis (MS) is an immune-mediated process in which the body's immune system damages myelin in the central nervous system (CNS). The onset of this disorder typically occurs in young adults, and it is more common among women. Currently, there is no cure and the long-term disease progression makes symptomatic management critical for maintaining quality of life. Several pharmacotherapeutic agents are approved for treatment, but many patients seek complementary and alternative interventions. Reviews have been conducted regarding broad topics such as mindfulness-based interventions for people diagnosed with MS and the impact of yoga on a range of neurological disorders. The objective of the present review is to examine the potential benefits of yoga for individuals with MS and address its use in managing symptoms including pain, mental health, fatigue, spasticity, balance, bladder control, and sexual function. PMID:25983675
Material mechanical characterization method for multiple strains and strain rates
Erdman, III, Donald L.; Kunc, Vlastimil; Simunovic, Srdjan; Wang, Yanli
2016-01-19
A specimen for measuring a material under multiple strains and strain rates. The specimen including a body having first and second ends and a gage region disposed between the first and second ends, wherein the body has a central, longitudinal axis passing through the first and second ends. The gage region includes a first gage section and a second gage section, wherein the first gage section defines a first cross-sectional area that is defined by a first plane that extends through the first gage section and is perpendicular to the central, longitudinal axis. The second gage section defines a second cross-sectional area that is defined by a second plane that extends through the second gage section and is perpendicular to the central, longitudinal axis and wherein the first cross-sectional area is different in size than the second cross-sectional area.
Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; Beerli, Peter; Zeng, Xiankui; Lu, Dan; Tao, Yuezan
2016-02-05
Evaluating marginal likelihood is the most critical and computationally expensive task, when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood or by using Monte Carlo (MC) methods to evaluate arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from prior parameter space (as in arithmetic mean evaluation) or posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to posterior parameter space. This is done through a path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that is recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of their accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improves predictive performance of Bayesian model averaging. As a result, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
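The path-sampling identity behind thermodynamic integration, ln Z = ∫₀¹ E_t[ln L(θ)] dt with the expectation taken under the power posterior p_t ∝ prior x likelihood^t, can be checked on a toy conjugate model where everything is available in closed form. The sketch below (our own illustration, not the study's groundwater application) replaces the MCMC path sampling with exact Gaussian expectations and does the path integral with the trapezoid rule.

```python
import math

def log_marginal_ti(y, n_grid=2001):
    """Thermodynamic integration for an illustrative conjugate model:
    prior theta ~ N(0, 1), one observation y | theta ~ N(theta, 1).
    The power posterior p_t is Gaussian, so the expected log-likelihood
    along the path t in [0, 1] is available in closed form; the path
    integral is evaluated with the trapezoid rule."""
    def expected_loglik(t):
        prec = 1.0 + t                 # precision of the power posterior
        mean = t * y / prec            # mean of the power posterior
        # E_t[(y - theta)^2] = (y - mean)^2 + 1/prec
        return -0.5 * math.log(2.0 * math.pi) - 0.5 * ((y - mean) ** 2 + 1.0 / prec)

    h = 1.0 / (n_grid - 1)
    vals = [expected_loglik(i * h) for i in range(n_grid)]
    return h * (0.5 * vals[0] + sum(vals[1:-1]) + 0.5 * vals[-1])
```

For this model the exact marginal likelihood is N(y; 0, 2), i.e. ln Z = -0.5 ln(4π) - y²/4, so the thermodynamic-integration estimate can be verified directly; in a real application the closed-form expectation would be replaced by an MCMC average at each power coefficient.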
Mixed time integration methods for transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Liu, W. K.
1983-01-01
The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore, mixed time integration methods for transient thermal analysis of structures are being developed. The computer implementation aspects and numerical evaluation of these mixed time implicit-explicit algorithms in thermal analysis of structures are presented. A computationally useful method of estimating the critical time step for the linear quadrilateral element is also given. Numerical tests confirm the stability criterion and accuracy characteristics of the methods. The superiority of these mixed time methods to the fully implicit method or the fully explicit method is also demonstrated.
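The critical-time-step issue that motivates mixed implicit-explicit methods can be illustrated on the 1D heat equation, where explicit Euler is stable only for mesh ratio r = αΔt/Δx² ≤ 1/2. The sketch below is our own toy demonstration (not the paper's quadrilateral-element estimate): just above the limit the stiffest mode grows, just below it every mode decays.

```python
def step_explicit(u, r):
    """One explicit-Euler step of u_t = u_xx on a uniform grid with
    zero Dirichlet boundaries; r = alpha*dt/dx**2 is the mesh ratio."""
    return ([0.0]
            + [u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
               for i in range(1, len(u) - 1)]
            + [0.0])

def max_amp(r, nsteps=400, n=23):
    """Largest amplitude after nsteps, starting from an alternating-sign
    profile that strongly excites the stiffest (highest-frequency) mode."""
    u = [0.0] + [float((-1) ** i) for i in range(1, n - 1)] + [0.0]
    for _ in range(nsteps):
        u = step_explicit(u, r)
    return max(abs(x) for x in u)
```

Running at r = 0.49 the solution decays, while at r = 0.51 it blows up; an implicit (or mixed implicit-explicit) treatment of the stiff part removes this restriction at the cost of a linear solve per step.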
Shen, Yao Qing; Burger, Gertraud
2007-01-01
Background Knowing the subcellular location of proteins provides clues to their function as well as the interconnectivity of biological processes. Dozens of tools are available for predicting protein location in the eukaryotic cell. Each tool performs well on certain data sets, but their predictions often disagree for a given protein. Since the individual tools each have particular strengths, we set out to integrate them in a way that optimally exploits their potential. The method we present here is applicable to various subcellular locations, but tailored for predicting whether or not a protein is localized in mitochondria. Knowledge of the mitochondrial proteome is relevant to understanding the role of this organelle in global cellular processes. Results In order to develop a method for enhanced prediction of subcellular localization, we integrated the outputs of available localization prediction tools by several strategies, and tested the performance of each strategy with known mitochondrial proteins. The accuracy obtained (up to 92%) surpasses by far the individual tools. The method of integration proved crucial to the performance. For the prediction of mitochondrion-located proteins, integration via a two-layer decision tree clearly outperforms simpler methods, as it allows emphasis of biologically relevant features such as the mitochondrial targeting peptide and transmembrane domains. Conclusion We developed an approach that enhances the prediction accuracy of mitochondrial proteins by uniting the strength of specialized tools. The combination of machine-learning based integration with biological expert knowledge leads to improved performance. This approach also alleviates the conundrum of how to choose between conflicting predictions. Our approach is easy to implement, and applicable to predicting subcellular locations other than mitochondria, as well as other biological features. For a trial of our approach, we provide a webservice for mitochondrial protein
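A two-layer integration of individual tool outputs might look like the following sketch. The thresholds and feature rules here are invented for illustration only; they are not the authors' trained decision tree, but they show how a second layer can emphasize biologically relevant features such as the targeting peptide and transmembrane domains over a plain vote.

```python
def integrate_predictions(votes, has_target_peptide, n_tm_domains):
    """Hypothetical two-layer integration of localization predictors.
    Layer 1: fraction of tools voting 'mitochondrial' (votes is a list of 0/1).
    Layer 2: hand-written rules that weight biologically salient features,
    loosely in the spirit of the two-layer decision tree in the abstract."""
    score = sum(votes) / len(votes)
    if has_target_peptide:       # strong positive evidence: accept a weaker vote
        return score >= 0.3
    if n_tm_domains > 2:         # many TM domains suggest a membrane location elsewhere
        return score >= 0.8
    return score >= 0.5          # otherwise fall back to a simple majority
```

The point of the second layer is that conflicting first-layer votes are resolved by expert knowledge rather than averaged away, which is what the abstract reports as the decisive factor for performance.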
Method for analyzing radiation sensitivity of integrated circuits
NASA Technical Reports Server (NTRS)
Gauthier, M. K.; Stanley, A. G. (Inventor)
1979-01-01
A method for analyzing the radiation sensitivity of an integrated circuit is described, in order to identify its sensitive components. A narrow radiation beam is applied to portions of the circuit. The circuit is operated under normal bias conditions during the application of radiation in a dosage that is likely to cause malfunction of at least some transistors, while the circuit is monitored for failure of the irradiated transistor. When a radiation sensitive transistor is found, the radiation beam is further narrowed and, using a fresh integrated circuit, a very narrow beam is applied to different parts of the transistor, such as its junctions, to locate the points of greatest sensitivity.
Method to integrate full particle orbit in toroidal plasmas
NASA Astrophysics Data System (ADS)
Wei, X. S.; Xiao, Y.; Kuley, A.; Lin, Z.
2015-09-01
It is important to integrate the full particle orbit accurately when studying charged particle dynamics in electromagnetic waves with frequency higher than the cyclotron frequency. We have derived a form of the Boris scheme using magnetic coordinates, which can be used effectively to integrate the cyclotron orbit in toroidal geometry over a long period of time. The new method has been verified by a full particle orbit simulation in toroidal geometry without high frequency waves. The full particle orbit calculation recovers the guiding center banana orbit. This method has better numerical properties than the conventional Runge-Kutta method in conserving particle energy and magnetic moment. The toroidal precession frequency is found to match that from guiding center simulation. Many other important phenomena in the presence of an electric field, such as E × B drift, Ware pinch effect and neoclassical polarization drift are also verified by the full orbit simulation.
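The conservation properties mentioned above come from the structure of the Boris scheme itself: a half electric kick, a magnetic rotation that preserves |v| exactly, and a second half kick. The sketch below shows the standard Cartesian form of the push (not the magnetic-coordinate variant derived in the paper); with E = 0 the kinetic energy is conserved to round-off over arbitrarily many steps.

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def boris_push(v, E, B, qm_dt):
    """One velocity update of the standard Boris scheme: half electric kick,
    exact-norm magnetic rotation, half electric kick. qm_dt is (q/m)*dt."""
    h = 0.5 * qm_dt
    v_minus = [v[i] + h * E[i] for i in range(3)]        # first half kick
    t = [h * B[i] for i in range(3)]                     # rotation vector
    t2 = sum(c * c for c in t)
    s = [2.0 * c / (1.0 + t2) for c in t]
    w = cross(v_minus, t)
    v_prime = [v_minus[i] + w[i] for i in range(3)]
    w2 = cross(v_prime, s)
    v_plus = [v_minus[i] + w2[i] for i in range(3)]      # rotated velocity
    return [v_plus[i] + h * E[i] for i in range(3)]      # second half kick
```

Because the rotation step is a pure rotation of v_minus, no secular energy drift accumulates, which is the property a Runge-Kutta integrator lacks over long cyclotron-resolved runs.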
Numerical analysis of Weyl's method for integrating boundary layer equations
NASA Technical Reports Server (NTRS)
Najfeld, I.
1982-01-01
A fast method for accurate numerical integration of the Blasius equation is proposed. It is based on the limit interchange in Weyl's fixed point method formulated as an iterated limit process. Each inner limit represents convergence to a discrete solution. It is shown that the error in a discrete solution admits an asymptotic expansion in even powers of the step size. An extrapolation process is set up to operate on a sequence of discrete solutions to reach the outer limit. Finally, this method is extended to related boundary layer equations.
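Because the discrete-solution error expands in even powers of the step size, the extrapolation step can be sketched as ordinary Richardson extrapolation eliminating h², h⁴, ... in turn. This is a generic sketch of that idea, demonstrated on a central difference (whose error also expands in even powers), not the paper's specific algorithm:

```python
import math

def central_diff(f, x, h):
    # O(h^2) central difference; its error expands in even powers of h
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_even(values, ratio=2.0):
    """Extrapolate values computed with steps h, h/ratio, h/ratio**2, ...

    Assumes the error expands in even powers of the step size, so each
    pass eliminates the next even power (h^2, then h^4, ...).
    """
    table = list(values)
    power = 2
    while len(table) > 1:
        factor = ratio ** power
        table = [(factor * b - a) / (factor - 1.0)
                 for a, b in zip(table, table[1:])]
        power += 2
    return table[0]
```

Three h²-accurate values combine into an h⁶-accurate estimate, which is why a short sequence of cheap discrete solutions can reach high accuracy.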
[An integrated segmentation method for 3D ultrasound carotid artery].
Yang, Xin; Wu, Huihui; Liu, Yang; Xu, Hongwei; Liang, Huageng; Cai, Wenjuan; Fang, Mengjie; Wang, Yujie
2013-07-01
An integrated segmentation method for 3D ultrasound carotid artery images was proposed. The 3D ultrasound image was sliced into transverse, coronal and sagittal 2D images at the carotid bifurcation point. The three images were then processed respectively, and the carotid artery contours and thickness were finally obtained. The proposed approach aims to overcome the disadvantages of current computer-aided diagnosis methods, such as high computational complexity and easily introduced subjective errors. It can obtain the overall carotid artery information rapidly, accurately and completely, and could be translated into clinical use for atherosclerosis diagnosis and prevention. PMID:24195385
Machado-Vieira, Rodrigo; Soeiro-De-Souza, Marcio G.; Richards, Erica M.; Teixeira, Antonio L.; Zarate, Carlos A.
2014-01-01
Objectives This paper reviews the neurobiology of bipolar disorder (BD), particularly findings associated with impaired cellular resilience and plasticity. Methods PubMed/Medline articles and book chapters published over the last 20 years were identified using the following keyword combinations: BD, calcium, cytokines, endoplasmic reticulum (ER), genetics, glucocorticoids, glutamate, imaging, ketamine, lithium, mania, mitochondria, neuroplasticity, neuroprotection, neurotrophic, oxidative stress, plasticity, resilience, and valproate. Results BD is associated with impaired cellular resilience, plasticity, and synaptic function at multiple levels. These findings were partially prevented or even reversed with the use of mood stabilizers, but longitudinal studies associated with clinical outcome remain scarce. Conclusions Evidence consistently suggests that BD involves impaired neural plasticity and cellular resilience at multiple levels. This includes the genetic and intra- and intercellular signalling levels, their impact on brain structure and function, as well as the final translation into behaviour/cognitive changes. Future studies are expected to adopt integrated translational approaches using a variety of methods (e.g., microarray approaches, neuroimaging, genetics, electrophysiology, and the new generation of -omics techniques). These studies will likely focus on more precise diagnoses and a personalized medicine paradigm in order to develop better treatments for those who need them most. PMID:23998912
Evaluating the Accuracy and Efficiency of Multiple Sequence Alignment Methods
Pervez, Muhammad Tariq; Babar, Masroor Ellahi; Nadeem, Asif; Aslam, Muhammad; Awan, Ali Raza; Aslam, Naeem; Hussain, Tanveer; Naveed, Nasir; Qadri, Salman; Waheed, Usman; Shoaib, Muhammad
2014-01-01
A comparison of the 10 most popular Multiple Sequence Alignment (MSA) tools, namely, MUSCLE, MAFFT (L-INS-i), MAFFT (FFT-NS-2), T-Coffee, ProbCons, SATe, Clustal Omega, Kalign, Multalin, and Dialign-TX, is presented. We also focused on the significance of some implementations embedded in the algorithm of each tool. Based on 10 simulated trees with different numbers of taxa generated by R, 400 known alignments and sequence files were constructed using indel-Seq-Gen. A total of 4000 test alignments were generated to study the effect of sequence length, indel size, deletion rate, and insertion rate. Results showed that alignment quality was highly dependent on the number of deletions and insertions in the sequences and that the sequence length and indel size had a weaker effect. Overall, ProbCons was consistently at the top of the list of the evaluated MSA tools. SATe, though slightly less accurate, was 529.10% faster than ProbCons and 236.72% faster than MAFFT (L-INS-i). Among the other tools, Kalign and MUSCLE achieved the highest sum-of-pairs scores. We also considered BAliBASE benchmark datasets, and the results relative to the BAliBASE- and indel-Seq-Gen-generated alignments were consistent in most cases. PMID:25574120
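The sum-of-pairs measure used above to rank Kalign and MUSCLE scores every pair of residues in every aligned column; the particular match/mismatch/gap values below are illustrative, not the benchmark's actual scoring scheme:

```python
def sum_of_pairs(alignment, match=1, mismatch=0, gap=0):
    """Toy sum-of-pairs score over the columns of a gapped alignment.

    alignment: list of equal-length strings, '-' denoting a gap.
    The scoring values are illustrative; real benchmarks typically
    score only pairs that also appear in a reference alignment.
    """
    score = 0
    for col in zip(*alignment):  # iterate over alignment columns
        for i in range(len(col)):
            for j in range(i + 1, len(col)):
                a, b = col[i], col[j]
                if a == '-' or b == '-':
                    score += gap
                elif a == b:
                    score += match
                else:
                    score += mismatch
    return score
```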
ERIC Educational Resources Information Center
Rimpiläinen, Sanna
2015-01-01
What do different research methods and approaches "do" in practice? The article seeks to discuss this point by drawing upon socio-material research approaches and empirical examples taken from the early stages of an extensive case study on an interdisciplinary project between two multidisciplinary fields of study, education and computer…
NASA Astrophysics Data System (ADS)
Hu, Yanxia; Yang, Xiaozhong
2006-08-01
A method for obtaining first integrals and integrating factors of n-th order autonomous systems is proposed. The search for first integrals and integrating factors can be reduced to the search for a class of invariant manifolds of the systems. Finally, the proposed method is applied to the Euler-Poisson equations (gyroscope system), and the fourth first integral of the system in the general Kovalevskaya case can be obtained.
The quantum bouncer by the path integral method
NASA Astrophysics Data System (ADS)
Goodings, D. A.; Szeredi, T.
1991-10-01
The path integral formulation of quantum mechanics in the semiclassical or WKB approximation provides a physically intuitive way of relating a classical system to its quantum analog. A fruitful way of studying quantum chaos is based upon applying the Gutzwiller periodic orbit sum rule, a result derived by the path integral method in the WKB approximation. This provides some motivation for learning about path integral techniques. In this paper a pedagogical example of the path integral formalism is presented in the hope of conveying the basic physical and mathematical concepts. The ``quantum bouncer'' is studied—the quantum version of a particle moving in one dimension above a perfectly reflecting surface while subject to a constant force directed toward the surface. The classical counterpart of this system is a ball bouncing on a floor in a constant gravitational field, collisions with the floor being assumed to be elastic. Path integration is used to derive the energy eigenvalues and eigenfunctions of the quantum bouncer in the WKB or semiclassical approximation. The results are shown to be the same as those obtained by solving the Schrödinger equation in the same approximation.
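The WKB result sketched above can be made explicit. With a hard wall at z = 0 and potential V(z) = mgz, the soft turning point contributes the usual quarter-wave correction, and the quantization condition yields the eigenvalues in closed form (the standard textbook derivation, which the paper recovers by path integration):

```latex
% WKB quantization for V(z) = mgz with a perfectly reflecting wall at z = 0:
\int_0^{E_n/(mg)} \sqrt{2m\,(E_n - mgz)}\;dz
   \;=\; \frac{2\sqrt{2m}\,E_n^{3/2}}{3mg}
   \;=\; \left(n - \tfrac{1}{4}\right)\pi\hbar ,
   \qquad n = 1, 2, \dots

% Solving for the energy levels:
E_n \;=\; \left[\frac{3\pi\left(n - \tfrac{1}{4}\right)}{2}\right]^{2/3}
          \left(\frac{m g^{2} \hbar^{2}}{2}\right)^{1/3}.
```

These agree with the exact eigenvalues, which are fixed by the zeros of the Airy function, to the accuracy expected of the semiclassical approximation.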
An upwind nodal integral method for incompressible fluid flow
Esser, P.D.; Witt, R.J.
1993-05-01
An upwind nodal solution method is developed for the steady, two-dimensional flow of an incompressible fluid. The formulation is based on the nodal integral method, which uses transverse integrations, analytical solutions of the one-dimensional averaged equations, and node-averaged uniqueness constraints to derive the discretized nodal equations. The derivation introduces an exponential upwind bias by retaining the streamwise convection term in the homogeneous part of the transverse-integrated convection-diffusion equation. The method is adapted to the stream function-vorticity form of the Navier-Stokes equations, which are solved over a nonstaggered nodal mesh. A special nodal scheme is used for the Poisson stream function equation to properly account for the exponentially varying vorticity source. Rigorous expressions for the velocity components and the no-slip vorticity boundary condition are derived from the stream function formulation. The method is validated with several benchmark problems. An idealized purely convective flow of a scalar step function indicates that the nodal approximation errors are primarily dispersive, not dissipative, in nature. Results for idealized and actual recirculating driven-cavity flows reveal a significant reduction in false diffusion compared with conventional finite difference techniques.
Wang, Chih-Hung; Chang, Chih-Peng; Lee, Gwo-Bin
2016-12-15
DNA aptamers that can bind specific molecular targets have great potential as probes for microbial diagnostic applications. However, aptamers may change their conformation under different operating conditions, thus affecting their affinity and specificity towards the target molecules. In this study, a new integrated microfluidic system was developed that exploited the predictable change in conformation of a single universal influenza aptamer exposed to differing ion concentrations in order to detect multiple types of the influenza virus. Furthermore, the fluorescent-labeled universal aptamer used in this system could distinguish and detect three different influenza viruses (influenza A H1N1, H3N2, and influenza B) simultaneously within 20 min and therefore has great potential for point-of-care applications requiring rapid diagnosis of influenza viruses.
MR-elastography reveals degradation of tissue integrity in multiple sclerosis.
Wuerfel, Jens; Paul, Friedemann; Beierbach, Bernd; Hamhaber, Uwe; Klatt, Dieter; Papazoglou, Sebastian; Zipp, Frauke; Martus, Peter; Braun, Jürgen; Sack, Ingolf
2010-02-01
In multiple sclerosis (MS), diffuse brain parenchymal damage exceeding focal inflammation is increasingly recognized to be present from the very onset of the disease, and, although occult to conventional imaging techniques, may present a major cause of permanent neurological disability. Subtle tissue alterations significantly influence biomechanical properties given by stiffness and internal friction, which, in organs more accessible than the brain, are traditionally assessed by manual palpation during the clinical exam. The brain, however, is protected from our sense of touch, and thus our current knowledge of cerebral viscoelasticity is very limited. We developed a clinically feasible magnetic resonance elastography setup sensitive to subtle alterations of brain parenchymal biomechanical properties. Investigating 45 MS patients revealed a significant decrease (13%, P<0.001) in cerebral viscoelasticity compared to matched healthy volunteers, indicating widespread degradation of tissue integrity, while structure- and geometry-defining parameters remained unchanged. Cerebral viscoelasticity may represent a novel in vivo marker of neuroinflammatory and neurodegenerative pathology. PMID:19539039
Ingersoll, Thomas; Cole, Stephanie; Madren-Whalley, Janna; Booker, Lamont; Dorsey, Russell; Li, Albert; Salem, Harry
2016-01-01
Integrated Discrete Multiple Organ Co-culture (IDMOC) is emerging as an in vitro alternative to in vivo animal models for pharmacology studies. IDMOC allows dose-response relationships to be investigated at the tissue and organoid levels; yet these relationships often exhibit responses that are far more complex than the binary responses often measured in whole animals. To accommodate departure from binary endpoints, IDMOC requires an expansion of analytic techniques beyond the simple linear probit and logistic models familiar in toxicology. IDMOC dose-responses may be measured at continuous scales, exhibit significant non-linearity such as local maxima or minima, and may include non-independent measures. Generalized additive mixed-modeling (GAMM) provides an alternative description of dose-response that relaxes assumptions of independence and linearity. We compared GAMMs to traditional linear models for describing dose-response in IDMOC pharmacology studies. PMID:27110941
Integrative analysis of multiple diverse omics datasets by sparse group multitask regression.
Lin, Dongdong; Zhang, Jigang; Li, Jingyao; He, Hao; Deng, Hong-Wen; Wang, Yu-Ping
2014-01-01
A variety of high-throughput genome-wide assays enable the exploration of genetic risk factors underlying complex traits. Although these studies have had remarkable impact on identifying susceptible biomarkers, they suffer from issues such as limited sample size and low reproducibility. Combining individual studies of different genetic levels/platforms has the promise to improve the power and consistency of biomarker identification. In this paper, we propose a novel integrative method, namely sparse group multitask regression, for integrating diverse omics datasets, platforms, and populations to identify risk genes/factors of complex diseases. This method combines multitask learning with sparse group regularization, which will: (1) treat the biomarker identification in each single study as a task and then combine them by multitask learning; (2) group variables from all studies for identifying significant genes; (3) enforce a sparse constraint on groups of variables to overcome the "small sample, but large variables" problem. We introduce two sparse group penalties, sparse group lasso and sparse group ridge, in our multitask model, and provide an effective algorithm for each model. In addition, we propose a significance test for the identification of potential risk genes. Two simulation studies are performed to evaluate the performance of our integrative method by comparing it with a conventional meta-analysis method. The results show that our sparse group multitask method significantly outperforms the meta-analysis method. In an application to our osteoporosis studies, 7 genes are identified as significant by our method and are found to have significant effects in three other independent studies used for validation. The most significant gene, SOD2, was identified in our previous osteoporosis study involving the same expression dataset. Several other genes, such as TREML2, HTR1E, and GLO1, are shown to be novel susceptibility genes for osteoporosis, as confirmed from other
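The sparse group penalty at the heart of the model can be written down directly: an l1 term that zeroes individual coefficients plus a group-l2 term that zeroes whole groups. The sketch below uses the common sqrt(group size) group weights, which is an assumption; the paper's exact weighting is not stated here:

```python
import numpy as np

def sparse_group_lasso_penalty(beta, groups, lam1, lam2):
    """Evaluate the sparse group lasso penalty (illustrative sketch).

    beta   : coefficient vector.
    groups : list of index lists partitioning the coefficients.
    lam1   : weight on the element-wise l1 term (within-group sparsity).
    lam2   : weight on the group-l2 term (whole-group sparsity).
    Group weights sqrt(len(g)) are the conventional choice, assumed here.
    """
    l1 = np.sum(np.abs(beta))
    l2 = sum(np.sqrt(len(g)) * np.linalg.norm(beta[g]) for g in groups)
    return lam1 * l1 + lam2 * l2
```

Minimizing a squared-error loss plus this penalty drives some groups (e.g. all measurements of one gene across studies) exactly to zero while still allowing sparsity inside the surviving groups.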
DIVA: an iterative method for building modular integrated models
NASA Astrophysics Data System (ADS)
Hinkel, J.
2005-08-01
Integrated modelling of global environmental change impacts faces the challenge that knowledge from the domains of natural and social science must be integrated. This is complicated by often incompatible terminology and the fact that the interactions between subsystems are usually not fully understood at the start of the project. While a modular modelling approach is necessary to address these challenges, it is not sufficient. The remaining question is how the modelled system should be decomposed into modules. While no generic answer can be given to this question, communication tools can be provided to support the process of modularisation and integration. Along those lines of thought, a method for building modular integrated models was developed within the EU project DINAS-COAST and applied to construct a first model, which assesses the vulnerability of the world's coasts to climate change and sea-level rise. The method focuses on the development of a common language and offers domain experts an intuitive interface for coding their knowledge in the form of modules. However, instead of rigorously defining interfaces between the subsystems at the project's beginning, an iterative model development process is defined and tools to facilitate communication and collaboration are provided. This flexible approach has the advantage that increased understanding about subsystem interactions, gained during the project's lifetime, can immediately be reflected in the model.
Preschoolers and VCRs in the Home: A Multiple Methods Approach.
ERIC Educational Resources Information Center
Krendl, Kathy A.; And Others
1993-01-01
Describes a study of children's use and understanding of videocassette recorders (VCRs) using a combination of research methods: a survey, observations, and interviews with parents and children. Results show the levels of parental support and direct access were critical factors in understanding the child's knowledge about and competence with…
Borja, Angel; Bricker, Suzanne B; Dauer, Daniel M; Demetriades, Nicolette T; Ferreira, João G; Forbes, Anthony T; Hutchings, Pat; Jia, Xiaoping; Kenchington, Richard; Carlos Marques, João; Zhu, Changbo
2008-09-01
In recent years, several sets of legislation worldwide (Oceans Act in USA, Australia or Canada; Water Framework Directive or Marine Strategy in Europe, National Water Act in South Africa, etc.) have been developed in order to address ecological quality or integrity, within estuarine and coastal systems. Most such legislation seeks to define quality in an integrative way, by using several biological elements, together with physico-chemical and pollution elements. Such an approach allows assessment of ecological status at the ecosystem level ('ecosystem approach' or 'holistic approach' methodologies), rather than at species level (e.g. mussel biomonitoring or Mussel Watch) or just at chemical level (i.e. quality objectives) alone. Increasing attention has been paid to the development of tools for different physico-chemical or biological (phytoplankton, zooplankton, benthos, algae, phanerogams, fishes) elements of the ecosystems. However, few methodologies integrate all the elements into a single evaluation of a water body. The need for such integrative tools to assess ecosystem quality is very important, both from a scientific and stakeholder point of view. Politicians and managers need information from simple and pragmatic, but scientifically sound methodologies, in order to show to society the evolution of a zone (estuary, coastal area, etc.), taking into account human pressures or recovery processes. These approaches include: (i) multidisciplinarity, inherent in the teams involved in their implementation; (ii) integration of biotic and abiotic factors; (iii) accurate and validated methods in determining ecological integrity; and (iv) adequate indicators to follow the evolution of the monitored ecosystems. While some countries increasingly use the establishment of marine parks to conserve marine biodiversity and ecological integrity, there is awareness (e.g. in Australia) that conservation and management of marine ecosystems cannot be restricted to Marine Protected
Shi, Zhenghao; Ma, Jiejue; Zhao, Minghua; Liu, Yonghong; Feng, Yaning; Zhang, Ming; He, Lifeng; Suzuki, Kenji
2016-01-01
Accurate lung segmentation is an essential step in developing a computer-aided lung disease diagnosis system. However, because of the high variability of computed tomography (CT) images, it remains a difficult task to accurately segment lung tissue in CT slices using a simple strategy. Motivated by this, a novel CT lung segmentation method based on the integration of multiple strategies was proposed in this paper. Firstly, in order to suppress noise, the input CT slice was smoothed using the guided filter. Then, the smoothed slice was transformed into a binary image using an optimized threshold. Next, a region growing strategy was employed to extract thorax regions. Then, lung regions were segmented from the thorax regions using a seed-based random walk algorithm. The segmented lung contour was then smoothed and corrected with a curvature-based correction method on each axial slice. Finally, with the lung masks, the lung region was automatically segmented from a CT slice. The proposed method was validated on a CT database consisting of 23 scans, comprising 883 2D slices (38 slices per scan), by comparing it to a commonly used lung segmentation method. Experimental results show that the proposed method accurately segmented lung regions in CT slices. PMID:27635395
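The pipeline above (smooth, threshold, extract connected regions) can be sketched in a heavily simplified form with SciPy. Note the guided filter and the seed-based random walk are replaced here by a mean filter and connected-component labelling, so this is a structural sketch of the stages, not the paper's method:

```python
import numpy as np
from scipy import ndimage

def segment_lungs_sketch(slice_hu, smooth_size=3, threshold=-400):
    """Simplified lung-segmentation stages on one CT slice (values in HU).

    Stand-ins: uniform (mean) filter for the guided filter, a fixed HU
    threshold for the optimized threshold, and connected-component
    labelling for region growing / random walk. Illustrative only.
    """
    # 1. smooth to suppress noise
    smoothed = ndimage.uniform_filter(slice_hu.astype(float), size=smooth_size)
    # 2. binarize: air-filled lung tissue is strongly negative in HU
    binary = smoothed < threshold
    # 3. label connected regions and discard any touching the image border
    labels, n = ndimage.label(binary)
    border = (set(labels[0, :]) | set(labels[-1, :])
              | set(labels[:, 0]) | set(labels[:, -1]))
    keep = [lab for lab in range(1, n + 1) if lab not in border]
    return np.isin(labels, keep)
```

A real implementation would follow this with the curvature-based contour correction the paper describes.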
NASA Astrophysics Data System (ADS)
Brown, Craig J.; Sameoto, Jessica A.; Smith, Stephen J.
2012-08-01
The establishment of multibeam echosounders (MBES) as a mainstream tool in ocean mapping has facilitated integrative approaches toward nautical charting, benthic habitat mapping, and seafloor geotechnical surveys. The inherent bathymetric and backscatter information generated by MBES enables marine scientists to present highly accurate bathymetric data with a spatial resolution closely matching that of terrestrial mapping. Furthermore, developments in data collection and processing of MBES backscatter, combined with the quality of the co-registered depth information, have resulted in the increasing preferential use of multibeam technology over conventional sidescan sonar for the production of benthic habitat maps. A range of post-processing approaches can generate customized map products to meet multiple ocean management needs, thus extracting maximum value from a single survey data set. Based on recent studies over German Bank off SW Nova Scotia, Canada, we show how primary MBES bathymetric and backscatter data, along with supplementary data (i.e. in situ video and stills), were processed using a variety of methods to generate a series of maps. Methods conventionally used for classification of multi-spectral data were tested for classification of the MBES data set to produce a map summarizing broad bio-physical characteristics of the seafloor (i.e. a benthoscape map), which is of value for use in many aspects of marine spatial planning. A species-specific habitat map for the sea scallop Placopecten magellanicus was also generated from the MBES data by applying a Species Distribution Modeling (SDM) method to spatially predict habitat suitability, which offers tremendous promise for use in fisheries management. In addition, we explore the challenges of incorporating benthic community data into maps based on species information derived from a large number of seafloor photographs. Through the process of applying multiple methods to generate multiple maps for
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event
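Several of the methods compared above (BCSD, BCCAQ) rely on quantile mapping to bias-correct model output against observations. A generic empirical quantile-mapping step (a sketch of the general technique, not the specific BCCAQ implementation) looks like:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping bias correction (generic sketch).

    Each future model value is located as a quantile of the historical
    model distribution, then replaced by the observed value at that
    same quantile.
    """
    n = len(model_hist)
    # quantile of each future value within the sorted historical model data
    quantiles = np.interp(model_fut,
                          np.sort(model_hist),
                          np.linspace(0.0, 1.0, n))
    # read off the observed distribution at those quantiles
    return np.quantile(obs_hist, quantiles)
```

For a constant additive bias this reduces to subtracting the bias; its real value is that it corrects the whole distribution, including the tails that drive the extremes indices discussed above.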
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2004-05-18
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes that exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input.
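The channel-selection logic described above (locate the largest sampled amplitude that exceeds the noise floor) has a direct software analogue; the sketch below models the selection behavior only, not the analog circuit:

```python
import numpy as np

def readout(amplitudes, noise_floor):
    """Software analogue of the chip's winner-take-all channel selection.

    Returns (channel_index, amplitude) for the largest sample above the
    noise-floor threshold, or None if no channel exceeds it.
    """
    amps = np.asarray(amplitudes, dtype=float)
    above = amps > noise_floor
    if not above.any():
        return None
    # suppress sub-threshold channels before taking the maximum
    idx = int(np.argmax(np.where(above, amps, -np.inf)))
    return idx, amps[idx]
```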
Multi-channel detector readout method and integrated circuit
Moses, William W.; Beuville, Eric; Pedrali-Noy, Marzio
2006-12-12
An integrated circuit which provides multi-channel detector readout from a detector array. The circuit receives multiple signals from the elements of a detector array and compares the sampled amplitudes of these signals against a noise-floor threshold and against one another. A digital signal is generated which corresponds to the location of the highest of these signal amplitudes that exceeds the noise-floor threshold. The digital signal is received by a multiplexing circuit which outputs an analog signal corresponding to the highest of the input signal amplitudes. In addition, a digital control section provides for programmatic control of the multiplexer circuit, amplifier gain, amplifier reset, masking selection, and test circuit functionality on each input thereof.
Schleier III, Jerome J.; Marshall, Lucy A.; Davis, Ryan S.
2015-01-01
Decision analysis often considers multiple lines of evidence during the decision making process. Researchers and government agencies have advocated for quantitative weight-of-evidence approaches in which multiple lines of evidence can be considered when estimating risk. Therefore, we utilized Bayesian Markov Chain Monte Carlo to integrate several human-health risk assessment, biomonitoring, and epidemiology studies that have been conducted for two common insecticides (malathion and permethrin) used for adult mosquito management to generate an overall estimate of risk quotient (RQ). The utility of Bayesian inference for risk management is that the estimated risk represents a probability distribution from which the probability of exceeding a threshold can be estimated. The mean RQ after all studies were incorporated was 0.4386 (variance 0.0163) for malathion and 0.3281 (variance 0.0083) for permethrin. After taking into account all of the evidence available on the risks of ULV insecticides, the probability that malathion or permethrin would exceed a level of concern was less than 0.0001. Bayesian estimates can substantially improve decisions by allowing decision makers to estimate the probability that a risk will exceed a level of concern by considering seemingly disparate lines of evidence. PMID:25648367
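The exceedance-probability idea in this abstract can be illustrated with a minimal sketch. Purely for illustration, assume the posterior for RQ is lognormal with the reported mean and variance (the study used full MCMC posteriors, not this closed form); the probability of exceeding the level of concern (RQ = 1) then follows directly:

```python
import math
from statistics import NormalDist

def exceedance_probability(mean, var, threshold=1.0):
    # Moment-match a lognormal to the posterior mean and variance,
    # then evaluate the tail probability beyond the level of concern.
    sigma2 = math.log(1.0 + var / mean ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    z = (math.log(threshold) - mu) / math.sqrt(sigma2)
    return 1.0 - NormalDist().cdf(z)

p_malathion = exceedance_probability(0.4386, 0.0163)
p_permethrin = exceedance_probability(0.3281, 0.0083)
```

Under this stand-in distribution both probabilities come out well below 5%; the paper's much smaller reported value (< 0.0001) reflects its actual integrated posterior rather than a lognormal fit.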
Romualdi, Chiara; Trevisan, Silvia; Celegato, Barbara; Costa, Germano; Lanfranchi, Gerolamo
2003-01-01
The variability of results in microarray technology is in part due to the fact that independent scans of a single hybridised microarray give spot images that are not quite the same. To solve this problem and turn it to our advantage, we introduced the approach of multiple scanning and of image integration of microarrays. To this end, we have developed specific software that creates a virtual image that statistically summarises a series of consecutive scans of a microarray. We provide evidence that the use of multiple imaging (i) enhances the detection of differentially expressed genes; (ii) increases the image homogeneity; and (iii) reveals false-positive results such as differentially expressed genes that are detected by a single scan but not confirmed by successive scanning replicates. The increase in the final number of differentially expressed genes detected in a microarray experiment with this approach is remarkable; 50% more for microarrays hybridised with targets labelled by reverse transcriptase, and 200% more for microarrays developed with the tyramide signal amplification (TSA) technique. The results have been confirmed by semi-quantitative RT–PCR tests. PMID:14627839
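The virtual-image idea described above can be sketched generically: combining repeated scans of the same spots pixel-wise suppresses independent scanner noise (by roughly the square root of the number of scans). This is an illustrative stand-in, not the authors' software:

```python
from statistics import mean

def virtual_image(scans):
    # scans: list of equal-length pixel-intensity lists, one per scan.
    # The pixel-wise mean yields a summary ("virtual") image in which
    # independent scanner noise is reduced relative to any single scan.
    return [mean(pixels) for pixels in zip(*scans)]

# Three consecutive scans of the same three spots (hypothetical values).
scan_a = [100, 210, 55]
scan_b = [104, 206, 57]
scan_c = [102, 208, 56]
summary = virtual_image([scan_a, scan_b, scan_c])
```

A robust statistic (e.g. the median) could replace the mean to suppress single-scan outliers, which is one way false positives confined to one scan get filtered out.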
Multiple-mode Lamb wave scattering simulations using 3D elastodynamic finite integration technique.
Leckey, Cara A C; Rogge, Matthew D; Miller, Corey A; Hinders, Mark K
2012-02-01
We have implemented three-dimensional (3D) elastodynamic finite integration technique (EFIT) simulations to model Lamb wave scattering for two flaw types in an aircraft-grade aluminum plate: a rounded rectangle flat-bottom hole and a disbond of the same shape. The plate thickness and flaws explored in this work include frequency-thickness regions where several Lamb wave modes exist and sometimes overlap in phase and/or group velocity. For the case of the flat-bottom hole, the depth was incrementally increased to explore progressive changes in multiple-mode Lamb wave scattering due to the damage. The flat-bottom hole simulation results have been compared to experimental data and are shown to provide key insight for this well-defined experimental case by explaining unexpected results in experimental waveforms. For the rounded rectangle disbond flaw, which would be difficult to implement experimentally, we found that Lamb wave behavior differed significantly from the flat-bottom hole flaw. Most of the literature in this field is restricted to low frequency-thickness regions due to difficulties in interpreting data when multiple modes exist. We found that benchmarked 3D EFIT simulations can yield an understanding of scattering behavior for these higher frequency-thickness regions and in cases that would be difficult to set up experimentally. Additionally, our results show that 2D simulations would not have been sufficient for modeling the complicated scattering that occurred. PMID:21908011
Sinkhole Imaging With Multiple Geophysical Methods in Covered Karst Terrain
NASA Astrophysics Data System (ADS)
Weiss, M.
2005-05-01
A suite of geophysical surveys was run at the Geopark at the University of South Florida campus in Tampa in an attempt to determine the degree to which each method could image a collapsed sinkhole with a diameter of ~4 m and a maximum depth of ~2.5 m. Geologically, the Geopark is part of a covered karst terrane, with collapsed sinkholes filled in by overlying unconsolidated sand, separated from the weathered limestone beneath by a clayey sand layer. The sinkholes are hydrologically significant, as they may serve as sites of concentrated recharge. The methods used during the study include refraction seismics, resistivity, electromagnetics (TEM and EM), and ground penetrating radar (GPR). Geophysical data are compared against cores. The resistivity, GPR, and seismic refraction profiles yield remarkably consistent images of the clayey sand layer. EM-31 data revealed regional trends in subsurface geology, but could not delineate specific sinkhole features with the desired resolution.
Support Operators Method for the Diffusion Equation in Multiple Materials
Winters, Andrew R.; Shashkov, Mikhail J.
2012-08-14
A second-order finite difference scheme for the solution of the diffusion equation on non-uniform meshes is implemented. The method allows the heat conductivity to be discontinuous. The algorithm is formulated on a one dimensional mesh and is derived using the support operators method. A key component of the derivation is that the discrete analog of the flux operator is constructed to be the negative adjoint of the discrete divergence, in an inner product that is a discrete analog of the continuum inner product. The resultant discrete operators in the fully discretized diffusion equation are symmetric and positive definite. The algorithm is generalized to operate on meshes with cells which have mixed material properties. A mechanism to recover intermediate temperature values in mixed cells using a limited linear reconstruction is introduced. The implementation of the algorithm is verified and the linear reconstruction mechanism is compared to previous results for obtaining new material temperatures.
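The key property described above - a symmetric positive definite discrete diffusion operator that tolerates discontinuous conductivity - can be illustrated with a minimal 1D steady-state sketch. This is a generic vertex-centered scheme for illustration only, not the paper's support-operators derivation:

```python
def solve_steady_diffusion(k_cells, u_left, u_right):
    # Solve -(k u')' = 0 on a uniform vertex mesh with Dirichlet data.
    # k_cells holds one conductivity per cell, so material jumps sit
    # exactly on mesh vertices; the resulting tridiagonal matrix is SPD.
    n = len(k_cells)                      # n cells -> n + 1 vertices
    a = [0.0] * (n - 1)                   # sub-diagonal
    b = [0.0] * (n - 1)                   # diagonal
    c = [0.0] * (n - 1)                   # super-diagonal
    d = [0.0] * (n - 1)                   # right-hand side
    for i in range(1, n):                 # flux balance at interior vertex i
        kL, kR = k_cells[i - 1], k_cells[i]
        a[i - 1], b[i - 1], c[i - 1] = -kL, kL + kR, -kR
    d[0] += k_cells[0] * u_left           # fold boundary values into RHS
    d[-1] += k_cells[-1] * u_right
    for j in range(1, n - 1):             # Thomas algorithm: forward sweep
        w = a[j] / b[j - 1]
        b[j] -= w * c[j - 1]
        d[j] -= w * d[j - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / b[-1]
    for j in range(n - 3, -1, -1):        # back substitution
        u[j] = (d[j] - c[j] * u[j + 1]) / b[j]
    return [u_left] + u + [u_right]

# Conductivity jumps from 1 to 2 at x = 0.5; the exact interface
# temperature is 2/3, which this scheme reproduces to round-off.
u = solve_steady_diffusion([1.0] * 5 + [2.0] * 5, 0.0, 1.0)
```

Because the flux is discretized with the same per-cell conductivity on both sides of each vertex, the constant-flux piecewise-linear exact solution lies in the discrete solution space, so the jump is captured without smearing.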
Methods for Developing Emissions Scenarios for Integrated Assessment Models
Prinn, Ronald; Webster, Mort
2007-08-20
The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
Computing thermal Wigner densities with the phase integration method
Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Tu, Bill; Johnston, Michael; Hui, Ka-Kit
2008-01-01
Background: Polypharmacy is a common and serious problem in the elderly today. Few solutions have been effective in reducing its incidence. Case summary: An 87-year-old female with a history of osteoarthritis and spinal stenosis presented with a five month history of severe right hip pain. She had been seen by multiple specialists and hospitalized many times. During these encounters, she was prescribed a long list of pain medications. However, these medications did not improve her pain and added to her risk of adverse drug events. After exhausting traditional Western medical therapies, she received a referral to the UCLA Center for East–West Medicine. There, clinicians treated her with a nonpharmacological integrative East-West medicine approach that included acupuncture, dry needling of trigger points, and education on self-acupressure. Her pain began improving and she was able to cut back on analgesic use under physician supervision. Ultimately, she improved to the point where she was able to discontinue all of her pain medications. Symptomatic relief was evidenced by improvement in health-related quality of life (HRQOL). Conclusions: This case study suggests that integrative East–West medicine may have the potential to reduce the incidence of polypharmacy in elderly patients presenting with pain conditions and improve their quality of life. PMID:20428398
Optimal Operation System of the Integrated District Heating System with Multiple Regional Branches
NASA Astrophysics Data System (ADS)
Kim, Ui Sik; Park, Tae Chang; Kim, Lae-Hyun; Yeo, Yeong Koo
This paper presents an optimal production and distribution management system for the structural and operational optimization of an integrated district heating system (DHS) with multiple regional branches. A DHS consists of energy suppliers and consumers, a district heating pipeline network and heat storage facilities in the covered region. In the optimal management system, production of heat and electric power, regional heat demand, electric power bidding and sales, and transport and storage of heat at each regional DHS are taken into account. The optimal management system is formulated as a mixed integer linear programming (MILP) problem where the objective is to minimize the overall cost of the integrated DHS while satisfying the operation constraints of heat units and networks as well as fulfilling heating demands from consumers. A piecewise linear formulation of the production cost function and a stairwise formulation of the start-up cost function are used to approximate the nonlinear cost functions. Evaluation of the total overall cost is based on weekly operations at each district heating branch. Numerical simulations show an increase in energy efficiency due to the introduction of the present optimal management system.
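The piecewise-linear device mentioned above can be sketched outside the MILP context: a nonlinear production cost curve is replaced by straight segments between breakpoints, which is what keeps the optimization model linear. The breakpoint values below are hypothetical, chosen only to illustrate the interpolation:

```python
def pwl_cost(q, q_pts, c_pts):
    # Evaluate a piecewise-linear production cost at load q by
    # interpolating between the (q_pts, c_pts) breakpoints - the same
    # curve an MILP encodes with segment-selection variables.
    segments = zip(zip(q_pts, c_pts), zip(q_pts[1:], c_pts[1:]))
    for (q0, c0), (q1, c1) in segments:
        if q0 <= q <= q1:
            t = (q - q0) / (q1 - q0)
            return c0 + t * (c1 - c0)
    raise ValueError("load outside operating range")

# Hypothetical heat-production cost: cheap base load, pricier peak load.
q_pts = [0.0, 50.0, 100.0]     # heat load
c_pts = [0.0, 60.0, 150.0]     # production cost
```

For a convex cost like this one, the segment slopes increase with load, so an LP solver naturally fills the cheap segment first without needing binary variables.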
The blackboard model - A framework for integrating multiple cooperating expert systems
NASA Technical Reports Server (NTRS)
Erickson, W. K.
1985-01-01
The use of an artificial intelligence (AI) architecture known as the blackboard model is examined as a framework for designing and building distributed systems requiring the integration of multiple cooperating expert systems (MCXS). Aerospace vehicles provide many examples of potential systems, ranging from commercial and military aircraft to spacecraft such as satellites, the Space Shuttle, and the Space Station. One such system, free-flying, spaceborne telerobots to be used in construction, servicing, inspection, and repair tasks around NASA's Space Station, is examined. The major difficulties found in designing and integrating the individual expert system components necessary to implement such a robot are outlined. The blackboard model, a general expert system architecture which seems to address many of the problems found in designing and building such a system, is discussed. A progress report on a prototype system under development called DBB (Distributed BlackBoard model) is given. The prototype will act as a testbed for investigating the feasibility, utility, and efficiency of MCXS-based designs developed under the blackboard model.
Pavan, Ana Carolina; Marroig, Gabriel
2016-10-01
A phylogenetic systematic perspective is instrumental in recovering new species and their evolutionary relationships. The advent of new technologies for molecular and morphological data acquisition and analysis, allied to the integration of knowledge from different areas, such as ecology and population genetics, allows for the emergence of more rigorous, accurate and complete scientific hypotheses on species diversity. Mustached bats (genus Pteronotus) are a good model for the application of this integrative approach. They are a widely distributed and morphologically homogeneous group, but comprise species with remarkable differences in their echolocation strategy and feeding behavior. The latest systematic review suggested six species with 17 subspecies in Pteronotus. Subsequent studies using discrete morphological characters supported the same arrangement. However, recent papers reported high levels of genetic divergence among conspecific taxa, along with bioacoustic and geographic agreement, suggesting an underestimated diversity in the genus. To date, no study merging genetic evidence and morphometric variation along the entire geographic range of this group has been attempted. Based on a comprehensive sampling including representatives of all current taxonomic units, we attempt to delimit species in Pteronotus through the application of multiple methodologies and hierarchically distinct datasets. The molecular approach includes six molecular markers from three genetic transmission systems; morphological investigations used 41 Euclidean distances estimated through three-dimensional landmarks collected from 1628 skulls. The phylogenetic analysis reveals a greater diversity than previously reported, with a high correspondence among the genetic lineages and the currently recognized subspecies in the genus. Discriminant analysis of variables describing the size and shape of cranial bones supports the elevation of the genetic groups to species status. Based on
Method for integrating microelectromechanical devices with electronic circuitry
Montague, S.; Smith, J.H.; Sniegowski, J.J.; McWhorter, P.J.
1998-08-25
A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry. 13 figs.
Method for integrating microelectromechanical devices with electronic circuitry
Montague, Stephen; Smith, James H.; Sniegowski, Jeffry J.; McWhorter, Paul J.
1998-01-01
A method for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry.
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
A systematic research is reported to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate structural sizing and the associated active control system, which is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.
Method and Apparatus for Simultaneous Processing of Multiple Functions
NASA Technical Reports Server (NTRS)
Stoica, Adrian (Inventor); Andrei, Radu (Inventor); Zhu, David (Inventor); Mojarradi, Mohammad Mehdi (Inventor); Vo, Tuan A. (Inventor)
2015-01-01
Electronic logic gates that operate using N logic state levels, where N is greater than 2, and methods of operating such gates. The electronic logic gates operate according to truth tables. At least two input signals each having a logic state that can range over more than two logic states are provided to the logic gates. The logic gates each provide an output signal that can have one of N logic states. Examples of gates described include NAND/NAND gates having two inputs A and B and NAND/NAND gates having three inputs A, B, and C, where A, B and C can take any of four logic states. Systems using such gates are described, and their operation illustrated. Optical logic gates that operate using N logic state levels are also described.
Method and apparatus for determining material structural integrity
Pechersky, Martin
1996-01-01
A non-destructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis techniques to determine the damping loss factor of a material. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity as a function of time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method. If an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the amount of coil current used in vibrating the magnet. If a reciprocating transducer is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by a force gauge in the reciprocating transducer. Using known vibrational analysis methods, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity measurements. The damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
Neocartilage integration in temporomandibular joint discs: physical and enzymatic methods
Murphy, Meghan K.; Arzi, Boaz; Prouty, Shannon M.; Hu, Jerry C.; Athanasiou, Kyriacos A.
2015-01-01
Integration of engineered musculoskeletal tissues with adjacent native tissues presents a significant challenge to the field. Specifically, the avascularity and low cellularity of cartilage elicit the need for additional efforts in improving integration of neocartilage within native cartilage. Self-assembled neocartilage holds significant potential in replacing degenerated cartilage, though its stabilization and integration in native cartilage require further efforts. Physical and enzymatic stabilization methods were investigated in an in vitro model for temporomandibular joint (TMJ) disc degeneration. First, in phase 1, suture, glue and press-fit constructs were compared in TMJ disc intermediate zone defects. In phase 1, suturing enhanced interfacial shear stiffness and strength immediately; after four weeks, a 15-fold increase in stiffness and a ninefold increase in strength persisted over press-fit. Neither suture nor glue significantly altered neocartilage properties. In phase 2, the effects of the enzymatic stabilization regimen composed of lysyl oxidase, CuSO4 and hydroxylysine were investigated. A full factorial design was employed, carrying forward the best physical method from phase 1, suturing. Enzymatic stabilization significantly increased interfacial shear stiffness after eight weeks. Combined enzymatic stabilization and suturing led to a fourfold increase in shear stiffness and threefold increase in strength over press-fit. Histological analysis confirmed the presence of a collagen-rich interface. Enzymatic treatment additionally enhanced neocartilage mechanical properties, yielding a tensile modulus over 6 MPa and compressive instantaneous modulus over 1200 kPa at eight weeks. Suturing enhances stabilization of neocartilage, and enzymatic treatment enhances functional properties and integration of neocartilage in the TMJ disc. Methods developed here are applicable to other orthopaedic soft tissues, including knee meniscus and hyaline articular
Efficient Fully Implicit Time Integration Methods for Modeling Cardiac Dynamics
Rose, Donald J.; Henriquez, Craig S.
2013-01-01
Implicit methods are well known to have greater stability than explicit methods for stiff systems, but they often are not used in practice due to perceived computational complexity. This paper applies the Backward Euler method and a second-order one-step two-stage composite backward differentiation formula (C-BDF2) to the monodomain equations arising from mathematically modeling the electrical activity of the heart. The C-BDF2 scheme is an L-stable implicit time integration method and is easily implementable. It uses the simplest Forward Euler and Backward Euler methods as fundamental building blocks. The nonlinear system resulting from application of the Backward Euler method to the monodomain equations is solved for the first time by a nonlinear elimination method, which eliminates local and non-symmetric components by using a Jacobian-free Newton solver, called a Newton-Krylov solver. Unlike other fully implicit methods proposed for the monodomain equations in the literature, the Jacobian of the global system after the nonlinear elimination has a much smaller size, is symmetric and possibly positive definite, and can be solved efficiently by standard optimal solvers. Numerical results are presented demonstrating that the C-BDF2 scheme can yield accurate results with less CPU time than explicit methods for both a single patch and spatially extended domains. PMID:19126449
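The stability contrast motivating the implicit scheme above can be seen on the scalar test equation y' = λy with λ large and negative - a standard illustration, not the monodomain system itself. Forward Euler multiplies the solution by (1 + λΔt) each step, which exceeds 1 in magnitude once Δt is large; Backward Euler multiplies by 1/(1 - λΔt), which never does:

```python
def forward_euler(lam, y0, dt, steps):
    # Explicit update: y_{n+1} = (1 + lam*dt) * y_n
    y = y0
    for _ in range(steps):
        y += dt * lam * y
    return y

def backward_euler(lam, y0, dt, steps):
    # Implicit update y_{n+1} = y_n + dt*lam*y_{n+1}, solvable in
    # closed form for this linear test equation.
    y = y0
    for _ in range(steps):
        y = y / (1.0 - dt * lam)
    return y

# Stiff decay (lam = -100, dt = 0.1): the exact solution shrinks toward
# zero, Backward Euler follows it, and Forward Euler blows up.
y_exp = forward_euler(-100.0, 1.0, 0.1, 50)
y_imp = backward_euler(-100.0, 1.0, 0.1, 50)
```

For nonlinear systems like the monodomain equations the implicit update is not available in closed form, which is where the paper's Newton-Krylov solver with nonlinear elimination comes in.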
Gregg, Watson W; Rousseaux, Cécile S
2014-01-01
Quantifying change in ocean biology using satellites is a major scientific objective. We document trends globally for the period 1998–2012 by integrating three diverse methodologies: ocean color data from multiple satellites, bias correction methods based on in situ data, and data assimilation to provide a consistent and complete global representation free of sampling biases. The results indicated no significant trend in global pelagic ocean chlorophyll over the 15 year data record. These results were consistent with previous findings that were based on the first 6 years and first 10 years of the SeaWiFS mission. However, all of the Northern Hemisphere basins (north of 10° latitude), as well as the Equatorial Indian basin, exhibited significant declines in chlorophyll. Trend maps showed the local trends and their change in percent per year. These trend maps were compared with several other previous efforts using only a single sensor (SeaWiFS) and more limited time series, showing remarkable consistency. These results suggested the present effort provides a path forward to quantifying global ocean trends using multiple satellite missions, which is essential if we are to understand the state, variability, and possible changes in the global oceans over longer time scales. PMID:26213675
A Low-Cost Method for Multiple Disease Prediction
Bayati, Mohsen; Bhaskar, Sonia; Montanari, Andrea
2015-01-01
Recently, in response to the rising costs of healthcare services, employers that are financially responsible for the healthcare costs of their workforce have been investing in health improvement programs for their employees. A main objective of these so-called “wellness programs” is to reduce the incidence of chronic illnesses such as cardiovascular disease, cancer, diabetes, and obesity, with the goal of reducing future medical costs. The majority of these wellness programs include an annual screening to detect individuals with the highest risk of developing chronic disease. Once these individuals are identified, the company can invest in interventions to reduce the risk of those individuals. However, capturing many biomarkers per employee creates a costly screening procedure. We propose a statistical data-driven method to address this challenge by minimizing the number of biomarkers in the screening procedure while maximizing the predictive power over a broad spectrum of diseases. Our solution uses multi-task learning and group dimensionality reduction from machine learning and statistics. We provide empirical validation of the proposed solution using data from two different electronic medical records systems, with comparisons to a statistical benchmark. PMID:26958164
Community Engagement in US Biobanking: Multiplicity of Meaning and Method
Haldeman, Kaaren M.; Cadigan, R. Jean; Davis, Arlene; Goldenberg, Aaron; Henderson, Gail E.; Lassiter, Dragana; Reavely, Erik
2014-01-01
Background/Aims Efforts to improve individual and population health increasingly rely on large scale collections of human biological specimens and associated data. Such collections or “biobanks” are hailed as valuable resources for facilitating translational biomedical research. However, biobanks also raise important ethical considerations, such as whether, how and why biobanks might engage with those who contributed specimens. This paper examines perceptions and practices of community engagement (CE) among individuals who operate six diverse biobanks in the U.S. Methods Twenty-four people from a diverse group of six biobanks were interviewed in-person or via telephone from March-July, 2011. Interview transcripts were coded and analyzed for common themes. Results Emergent themes include how biobank personnel understand “community” and community engagement as it pertains to biobank operations; information regarding the diversity of practices of CE; and the reasons why biobanks conduct CE. Conclusion Despite recommendations from federal agencies to conduct CE, the interpretation of CE varies widely among biobank employees, ultimately affecting how CE is practiced and what goals are achieved. PMID:24556734
Average wavefunction method for multiple scattering theory and applications
Singh, H.
1985-01-01
A general approximation scheme, the average wavefunction method (AWM), applicable to scattering of atoms and molecules off multi-center targets, is proposed. The total potential is replaced by a sum of nonlocal, separable interactions. Each term in the sum projects the wave function onto a weighted average in the vicinity of a given scattering center. The resultant solution is an infinite order approximation to the true solution, and choosing the weighting function as the zeroth order solution guarantees agreement with the Born approximation to second order. In addition, the approximation becomes increasingly accurate in the low-energy, long-wavelength limit. A nonlinear, nonperturbative iterative scheme for the wave function is proposed. An extension of the scheme to multichannel scattering suitable for treating inelastic scattering is also presented. The method is applied to elastic scattering of a gas off a solid surface. The formalism is developed for both periodic as well as disordered surfaces. Numerical results are presented for atomic clusters on a flat hard wall with a Gaussian-like potential at each atomic scattering site. The effect of relative lateral displacement of two clusters upon the scattering pattern is shown. The ability of the AWM to accommodate disorder through statistical averaging over cluster configurations is illustrated. Enhanced uniform back scattering is observed with increasing roughness on the surface. Finally, the AWM is applied to atom-molecule scattering.
2012-01-01
Background The increasing prevalence of multiple chronic conditions has accentuated the importance of coordinating and integrating health care services. Patients with better continuity of care (COC) have a lower utilization rate of emergency department (ED) services, lower hospitalization and better care outcomes. Previous COC studies have focused on the care outcome of patients with a single chronic condition or that of physician-patient relationships; few studies have investigated the care outcome of patients with multiple chronic conditions. Using multi-chronic patients as subjects, this study proposes an integrated continuity of care (ICOC) index to verify the association between COC and care outcomes for two scopes of chronic conditions, at physician and medical facility levels. Methods This study used a dataset of 280,840 subjects, obtained from the Longitudinal Health Insurance Database (LHID 2005), compiled by the National Health Research Institutes, of the National Health Insurance Bureau of Taiwan. Principal Component Analysis (PCA) was used to integrate the indices of density, dispersion and sequence into ICOC to measure COC outcomes - the utilization rate of ED services and hospitalization. A Generalized Estimating Equations model was used to verify the care outcomes. Results We discovered that the higher the COC at medical facility level, the lower the utilization rate of ED services and hospitalization for patients; by contrast, the higher the COC at physician level, the higher the utilization rate of ED services (odds ratio > 1; Exp(β) = 2.116) and hospitalization (odds ratio > 1; Exp(β) = 1.688). When only those patients with major chronic conditions with the highest number of medical visits were considered, it was found that the higher the COC at both medical facility and physician levels, the lower the utilization rate of ED services and hospitalization. Conclusions The study shows that ICOC is more stable than single indices and
Extrapolation of critical Rayleigh values using static nodal integral methods
Wilson, G.L.; Rydin, R.A.
1988-01-01
The Benard problem is the study of the convective motion of a fluid in a rectangular cavity that is uniformly heated from below. Flow bifurcation in the cavity is a function of the Rayleigh number (Ra). The time-dependent nodal integral method (TDNIM) has been reported previously; its development leads to a set of 11 equations per node. The static nodal integral method (SNIM) was derived from the TDNIM by forcing the dependent variable at adjacent time steps (one of the velocity components or temperature) to take on the node integral average value. The paper summarizes the SNIM calculation of Ra for mesh sizes ranging from 4 x 4 to 24 x 24. The numerical calculation of Ra is within plus or minus one-half unit. The relative errors are calculated based on the obtained extrapolated value of Ra*{sub best} = 2584. The paper also summarizes three-point schemes used with increasingly finer mesh combinations. This approach avoids contaminating the results with a coarse mesh; however, the calculation of n is very sensitive to small changes in the numerical values obtained for Ra*. In this approach, the extrapolated values quickly converge to Ra*{sub e} between 2583 and 2584 with n {approx} 2.0 as desired, and give a best value of Ra*{sub best} = 2584.
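The three-point extrapolation idea can be sketched as below: assuming an error model Ra(h) = Ra* + C·h^n on meshes refined by a fixed ratio r, both the apparent order n and the limit Ra* follow from three successive mesh values. The values here are synthetic, generated from that assumed model with Ra* = 2584, not the SNIM results themselves:

```python
import math

def three_point_extrapolation(ra, r):
    """Given Ra on three meshes with spacing ratio r (coarse to fine),
    assuming Ra(h) = Ra* + C*h**n, return the estimates (Ra*, n)."""
    d1, d2 = ra[0] - ra[1], ra[1] - ra[2]
    n = math.log(d1 / d2) / math.log(r)     # since d1/d2 = r**n
    ra_star = ra[2] - d2 / (r**n - 1)       # remove the leading error term
    return ra_star, n

# Synthetic mesh study with an assumed apparent order n = 2
ra_true, c = 2584.0, 500.0
ra = [ra_true + c * h**2 for h in (1 / 8, 1 / 16, 1 / 32)]
est, n_est = three_point_extrapolation(ra, r=2.0)
```

The sensitivity noted in the abstract is visible here: n enters through a logarithm of a ratio of small differences, so small perturbations in the Ra values shift n noticeably.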
Methods and systems for integrating fluid dispensing technology with stereolithography
Medina, Francisco; Wicker, Ryan; Palmer, Jeremy A.; Davis, Don W.; Chavez, Bart D.; Gallegos, Phillip L.
2010-02-09
An integrated system and method of integrating fluid dispensing technologies (e.g., direct-write (DW)) with rapid prototyping (RP) technologies (e.g., stereolithography (SL)) without part registration comprising: an SL apparatus and a fluid dispensing apparatus further comprising a translation mechanism adapted to translate the fluid dispensing apparatus along the X-, Y- and Z-axes. The fluid dispensing apparatus comprises: a pressurized fluid container; a valve mechanism adapted to control the flow of fluid from the pressurized fluid container; and a dispensing nozzle adapted to deposit the fluid in a desired location. To aid in calibration, the integrated system includes a laser sensor and a mechanical switch. The method further comprises building a second part layer on top of the fluid deposits and optionally accommodating multi-layered circuitry by incorporating a connector trace. Thus, the present invention is capable of efficiently building single and multi-material SL fabricated parts embedded with complex three-dimensional circuitry using DW.
Mathies, Richard A.; Singhal, Pankaj; Xie, Jin; Glazer, Alexander N.
2002-01-01
This invention relates to a microfabricated capillary electrophoresis chip for detecting multiple redox-active labels simultaneously using a matrix coding scheme and to a method of selectively labeling analytes for simultaneous electrochemical detection of multiple label-analyte conjugates after electrophoretic or chromatographic separation.
Integrated Force Method Solution to Indeterminate Structural Mechanics Problems
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Halford, Gary R.
2004-01-01
Strength of materials problems have been classified into determinate and indeterminate problems. Determinate analysis, based primarily on the equilibrium concept, is well understood. Solutions of indeterminate problems require additional compatibility conditions, whose comprehension has not been complete. A solution to an indeterminate problem is generated by manipulating the equilibrium concept, either by rewriting it in the displacement variables or through the cutting and closing-the-gap technique of the redundant force method. Compatibility improvisation has made analysis cumbersome. The authors have researched and clarified the compatibility theory, so that solutions can be generated with equal emphasis on the equilibrium and compatibility concepts. This technique is called the Integrated Force Method (IFM). Forces are the primary unknowns of IFM; displacements are back-calculated from forces. IFM equations are manipulated to obtain the Dual Integrated Force Method (IFMD). Displacement is the primary variable of IFMD, and force is back-calculated. The subject is introduced through response variables (force, deformation, displacement) and underlying concepts (the equilibrium equation, the force-deformation relation, the deformation-displacement relation, and the compatibility condition). Mechanical load, temperature variation, and support settling are equally emphasized. The basic theory is discussed. A set of examples illustrates the new concepts. IFM- and IFMD-based finite element methods are introduced for simple problems.
Integral structural-functional method for characterizing microbial populations
NASA Astrophysics Data System (ADS)
Yakushev, A. V.
2015-04-01
An original integral structural-functional method has been proposed for characterizing microbial communities. The novelty of the approach is the in situ study of microorganisms based on the growth kinetics of microbial associations in liquid nutrient broth media under selective conditions rather than on the level of taxa or large functional groups. The method involves the analysis of the integral growth model of a periodic culture. The kinetic parameters of such associations reflect their capacity to grow on different media, i.e., their physiological diversity, and the metabolic capacity of the microorganisms for growth on a nutrient medium. Therefore, the obtained parameters are determined by the features of the microbial ecological strategies. The inoculation of a dense medium from the original inoculum allows characterizing the taxonomic composition of the dominants in the soil community. The inoculation from the associations developed on selective media characterizes the composition of syntrophic groups, which fulfill a specific function in nature. This method is of greater information value than the classical methods of inoculation on selective media.
Method and apparatus for determining material structural integrity
Pechersky, M.J.
1994-01-01
Disclosed are a nondestructive method and apparatus for determining the structural integrity of materials by combining laser vibrometry with damping analysis to determine the damping loss factor. The method comprises the steps of vibrating the area being tested over a known frequency range and measuring vibrational force and velocity vs time over the known frequency range. Vibrational velocity is preferably measured by a laser vibrometer. Measurement of the vibrational force depends on the vibration method: if an electromagnetic coil is used to vibrate a magnet secured to the area being tested, then the vibrational force is determined by the coil current. If a reciprocating transducer is used, the vibrational force is determined by a force gauge in the transducer. Using vibrational analysis, a plot of the drive point mobility of the material over the preselected frequency range is generated from the vibrational force and velocity data. Damping loss factor is derived from a plot of the drive point mobility over the preselected frequency range using the resonance dwell method and compared with a reference damping loss factor for structural integrity evaluation.
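As a simplified stand-in for the resonance dwell procedure described above, the sketch below estimates a loss factor from a synthetic single-degree-of-freedom drive-point mobility curve using the half-power (-3 dB) bandwidth, for which η = Δf / f_peak. The oscillator parameters are hypothetical and the half-power approach is a common alternative, not necessarily the patented method:

```python
import numpy as np

# Hypothetical SDOF structure: 1 kg, resonance at 100 Hz, loss factor 0.02
m, f_n, eta = 1.0, 100.0, 0.02
k = m * (2 * np.pi * f_n) ** 2
c = eta * np.sqrt(k * m)              # equivalent viscous damping

f = np.linspace(90, 110, 20001)
w = 2 * np.pi * f
mobility = np.abs(1j * w / (k - m * w**2 + 1j * w * c))  # drive-point v/F

# Half-power points: frequencies where |Y| falls to peak / sqrt(2)
i_pk = mobility.argmax()
band = f[mobility >= mobility[i_pk] / np.sqrt(2)]
eta_est = (band[-1] - band[0]) / f[i_pk]
```

For a viscously damped SDOF the half-power bandwidth of the mobility magnitude equals η·f_n exactly, so `eta_est` recovers the assumed 0.02 up to the frequency-grid resolution. A measured loss factor higher than a reference value would then flag degraded structural integrity.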
NASA Astrophysics Data System (ADS)
Liu, Zhijun; Zhang, Liangpei; Liu, Zhenmin; Jiao, Hongbo; Chen, Liqun
2008-12-01
In order to manage the internal resources of the Gulf of Tonkin and integrate multiple-source spatial data, a unified regional plan management system is needed. Data fusion and integration research must be carried out because several difficulties arise in establishing such a system: the various planning and project data formats differ, data standards are not unified, the data are strongly time-dependent, and spatial references are inconsistent. In this article ArcGIS Engine is introduced as the development platform, and key technologies are investigated, such as multiple-source data transformation and fusion, fusion and integration of remote sensing data with DEMs, and integration of plan and project data. Practice shows that the system significantly improves the working efficiency of the Guangxi Gulf of Tonkin Economic Zone Management Committee and remarkably promotes the planning and construction work of the economic zone.
NASA Astrophysics Data System (ADS)
Cheng, Q.
2013-12-01
This paper introduces several recently developed techniques, based on the concepts of multiplicative cascade processes and multifractals, for processing exploration geochemical and geophysical data to recognize geological features and delineate target areas for undiscovered mineral deposits. From a nonlinear point of view, extreme geo-processes such as cloud formation, rainfall, hurricanes, flooding, landslides, earthquakes, igneous activity, tectonics and mineralization often show the singular property that they may result in anomalous amounts of energy release or mass accumulation generally confined to narrow intervals in space or time. The end products of these nonlinear processes have in common that they can be modeled as fractals or multifractals. Here we show that the three fundamental concepts of scaling in the context of multifractals (singularity, self-similarity and the fractal dimension spectrum) make multifractal theory and methods useful for geochemical and geophysical data processing for general purposes of geological feature recognition. These methods include: local singularity analysis based on an area-density (C-A) multifractal model, used as a scaling high-pass filtering technique capable of extracting weak signals caused by buried geological features; a suite of multifractal filtering techniques based on spectrum density-area (S-A) multifractal models, implemented in various domains including the frequency domain, which can be used for unmixing geochemical or geophysical fields according to distinct generalized self-similarities characterized in a certain domain; and multiplicative cascade processes for integration of diverse evidential layers of information for prediction of point events such as the locations of mineral deposits. It is demonstrated by several case studies involving Fe, Sn, Mo-Ag and Mo-W mineral deposits that the singularity method can be utilized to process stream sediment/soil geochemical data and gravity/aeromagnetic data as high
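The local singularity analysis named above rests on the scaling law that, around a singular location in 2-D, window averages behave as mean(ε) ∝ ε^(α-2), so α follows from a log-log regression of window mean against window size. A minimal sketch on a synthetic radial anomaly (α = 1.5 by construction; the field, grid, and window sizes are illustrative assumptions, not the C-A model itself):

```python
import numpy as np

def singularity_index(field, row, col, sizes=(3, 5, 7, 9)):
    """Estimate alpha at (row, col) from the slope of
    log(window mean) vs log(window size): alpha = slope + 2 in 2-D."""
    means = []
    for s in sizes:
        h = s // 2
        means.append(field[row - h:row + h + 1, col - h:col + h + 1].mean())
    slope = np.polyfit(np.log(sizes), np.log(means), 1)[0]
    return slope + 2.0

# Synthetic anomaly: concentration ~ r**(alpha - 2) with alpha = 1.5,
# i.e. an enrichment singularity (alpha < 2) at the grid centre
n = 41
y, x = np.mgrid[:n, :n]
r = np.hypot(y - n // 2, x - n // 2)
field = np.where(r == 0, 1.0, r) ** (1.5 - 2.0)
alpha = singularity_index(field, n // 2, n // 2)
```

Values of α below 2 flag local enrichment (weak-anomaly candidates) and values above 2 flag depletion; mapping α over every grid cell yields the high-pass-filter-like singularity map the abstract refers to.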
Rowat, S C
1998-01-01
The central nervous, immune, and endocrine systems communicate through multiple common messengers. Over evolutionary time, what may be termed integrated defense system(s) (IDS) have developed to coordinate these communications for specific contexts; these include the stress response, acute-phase response, nonspecific immune response, immune response to antigen, kindling, tolerance, time-dependent sensitization, neurogenic switching, and traumatic dissociation (TD). These IDSs are described and their overlap is examined. Three models of disease production are generated: damage, in which IDSs function incorrectly; inadequate/inappropriate, in which IDS response is outstripped by a changing context; and evolving/learning, in which the IDS learned response to a context is deemed pathologic. Mechanisms of multiple chemical sensitivity (MCS) are developed from several IDS disease models. Model 1A is pesticide damage to the central nervous system, overlapping with body chemical burdens, TD, and chronic zinc deficiency; model 1B is benzene disruption of interleukin-1, overlapping with childhood developmental windows and hapten-antigenic spreading; and model 1C is autoimmunity to immunoglobulin-G (IgG), overlapping with spreading to other IgG-inducers, sudden spreading of inciters, and food-contaminating chemicals. Model 2A is chemical and stress overload, including comparison with the susceptibility/sensitization/triggering/spreading model; model 2B is genetic mercury allergy, overlapping with: heavy metals/zinc displacement and childhood/gestational mercury exposures; and model 3 is MCS as evolution and learning. Remarks are offered on current MCS research. Problems with clinical measurement are suggested on the basis of IDS models. Large-sample patient self-report epidemiology is described as an alternative or addition to clinical biomarker and animal testing. PMID:9539008
Fourier-sparsity integrated method for complex target ISAR imagery.
Gao, Xunzhang; Liu, Zhen; Chen, Haowen; Li, Xiang
2015-01-01
In the existing sparsity-driven inverse synthetic aperture radar (ISAR) imaging framework, a sparse recovery (SR) algorithm is usually applied to azimuth compression to achieve high resolution in the cross-range direction. For range compression, however, direct application of an SR algorithm is not very effective because the scattering centers resolved in the high-resolution range profiles at different view angles always exhibit irregular range cell migration (RCM), especially for complex targets, which blurs the ISAR image. To alleviate the sparse recovery-induced RCM in range compression, a sparsity-driven framework for ISAR imaging named the Fourier-sparsity integrated (FSI) method is proposed in this paper, which can simultaneously achieve better focusing performance in both the range and cross-range domains. Experiments using simulated and real data demonstrate the superiority of the proposed framework over existing sparsity-driven methods and range-Doppler methods. PMID:25629707
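The paper's FSI construction is not reproduced here, but the sparse-recovery building block it relies on for azimuth compression can be sketched with plain iterative soft thresholding (ISTA) on a toy model: a sparse cross-range profile observed through a randomly subsampled DFT. The scene, sampling pattern, and parameters are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse scene: 3 scattering centers in a length-128 cross-range profile
n, m = 128, 48
x_true = np.zeros(n)
x_true[[20, 64, 100]] = [1.0, 0.7, 0.5]

# Toy phase history: m randomly chosen DFT rows, plus a little noise
rows = rng.choice(n, size=m, replace=False)
A = np.exp(-2j * np.pi * np.outer(rows, np.arange(n)) / n) / np.sqrt(n)
y = A @ x_true + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))

# ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n, dtype=complex)
for _ in range(500):
    g = x - step * (A.conj().T @ (A @ x - y))    # gradient step
    mag = np.maximum(np.abs(g) - step * lam, 0)  # complex soft threshold
    x = mag * np.exp(1j * np.angle(g))
```

The dominant entries of |x| should land on the true scatterer positions. In a full ISAR chain such recovery runs per range cell, which is exactly where the RCM problem discussed above arises when scatterers migrate across cells between view angles.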
Application of the boundary integral method to immiscible displacement problems
Masukawa, J.; Horne, R.N.
1988-08-01
This paper presents an application of the boundary integral method (BIM) to fluid displacement problems to demonstrate its usefulness in reservoir simulation. A method for solving two-dimensional (2D), piston-like displacement for incompressible fluids with good accuracy has been developed. Several typical example problems with repeated five-spot patterns were solved for various mobility ratios. The solutions were compared with the analytical solutions to demonstrate accuracy. Singularity programming was found to be a major advantage in handling flow in the vicinity of wells. The BIM was found to be an excellent way to solve immiscible displacement problems. Unlike analytic methods, it can accommodate complex boundary shapes and does not suffer from numerical dispersion at the front.
A multi-disciplinary approach for the integrated assessment of multiple risks in delta areas.
NASA Astrophysics Data System (ADS)
Sperotto, Anna; Torresan, Silvia; Critto, Andrea; Marcomini, Antonio
2016-04-01
The assessment of climate change related risks is notoriously difficult due to the complex and uncertain combinations of hazardous events that might happen, the multiplicity of physical processes involved, and the continuous changes and interactions of environmental and socio-economic systems. One important challenge lies in predicting and modelling cascades of natural and man-made hazard events which can be triggered by climate change, encompassing different spatial and temporal scales. Another regards the potentially difficult integration of environmental, social and economic disciplines in the multi-risk concept. Finally, effective interaction between scientists and stakeholders is essential to ensure that multi-risk knowledge is translated into efficient adaptation and management strategies. The assessment is even more complex at the scale of deltaic systems, which are particularly vulnerable to global environmental changes due to the fragile equilibrium between the presence of valuable natural ecosystems and relevant economic activities. Improving our capacity to assess the combined effects of multiple hazards (e.g. sea-level rise, storm surges, reduction in sediment load, local subsidence, saltwater intrusion) is therefore essential to identify timely opportunities for adaptation. A holistic multi-risk approach is here proposed to integrate terminology, metrics and methodologies from different research fields (i.e. environmental, social and economic sciences), thus creating shared knowledge areas to advance multi-risk assessment and management in delta regions. A first test of the approach, including the application of Bayesian network analysis for the assessment of impacts of climate change on key natural systems (e.g. wetlands, protected areas, beaches) and socio-economic activities (e.g. agriculture, tourism), is presented for the Po river delta in Northern Italy. The approach is based on a bottom-up process involving local stakeholders early in different
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2012-12-10
In biomedical research such as the development of vaccines for infectious diseases or cancer, study outcomes measured by an assay or device are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across data sources. We incorporate such adjustment in the main study by comparing and combining independent samples from different laboratories via integration of external data, collected on paired samples from the same two laboratories. We propose the following: (i) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; (ii) comparison of mean assay values between two independent samples in the main study accounting for inter-source measurement error; and (iii) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real enzyme-linked immunosorbent spot assay data generated by two HIV vaccine laboratories.
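The normalization-then-comparison idea can be sketched with a simple linear regression calibration, one concrete special case of conditioning the target-scale measurement on the observed one. All numbers below are synthetic, and the linear two-laboratory error model is an assumption of the sketch rather than the paper's exact estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

# External study: 50 paired specimens assayed by both laboratories.
# Assumed model: lab B reads ~1.3x lab A plus an offset of 5.
truth = rng.normal(100, 15, size=50)
lab_a = truth + rng.normal(0, 4, size=50)
lab_b = 5 + 1.3 * truth + rng.normal(0, 6, size=50)

# Calibration: predict the A-scale value from the observed B value
slope, intercept = np.polyfit(lab_b, lab_a, 1)

# Main study: independent samples, one arm measured per laboratory
main_a = rng.normal(100, 15, size=80) + rng.normal(0, 4, size=80)
main_b = 5 + 1.3 * rng.normal(100, 15, size=80) + rng.normal(0, 6, size=80)
main_b_cal = intercept + slope * main_b      # B readings on A's scale

diff_raw = main_b.mean() - main_a.mean()     # inflated by scale difference
diff_cal = main_b_cal.mean() - main_a.mean() # comparable after calibration
```

Without calibration the two arms differ by roughly the scale artifact; after mapping lab B onto lab A's scale the group difference shrinks toward the true value of zero. Note the calibration never needs the underlying `truth`, mirroring the paper's point that true values are not required in the external data.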
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment in comparing and combining independent samples from different labs via integration of external data, collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditioning on the observed; 2) comparison of mean assay values between two independent samples in the main study accounting for inter-source measurement error; and 3) sample size calculations of the paired-sample study so that hypothesis testing error rates are appropriately controlled in the main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories. PMID:22764070
NASA Astrophysics Data System (ADS)
Masychev, Victor I.
2000-11-01
In this research we present the results of testing two methods of optical caries diagnostics: PNC-spectral diagnostics and caries detection by laser integral fluorescence. The research was conducted in a dental clinic. The PNC-method analyses parameters of the probing laser radiation and the PNC-spectrums of stimulated secondary radiations: backscattering and endogenous fluorescence of caries-involved bacteria. A He-Ne laser ((lambda) = 632.8 nm, 1-2 mW) was used as the source of probing (stimulating) radiation. For registration of signals received from intact and pathological teeth, a PDA-detector was applied. The PNC-spectrums were processed by special algorithms and displayed on a PC monitor. The method of laser integral fluorescence was used for comparison. In this case the integral power of fluorescence of human teeth was measured. As sources of probing (stimulating) radiation, diode lasers ((lambda) = 655 nm, 0.1 mW and 630 nm, 1 mW) and a He-Ne laser were applied. For registration of signals a Si-photodetector was used. Integral power was shown on a digital indicator. Advantages and disadvantages of these methods are described in this research. The method of laser integral power of fluorescence has the advantage of simplicity of construction and of schematic decisions. The method of PNC-spectral diagnostics, however, is characterized by considerably higher sensitivity in the diagnostics of initial caries and by the capability to differentiate pathologies of various stages (for example, calculus/initial caries). Estimation of the spectral characteristics of PNC-signals allows eliminating a number of drawbacks which are characteristic of detection by the method of laser integral fluorescence (for instance, detection of fluorescent fillings, plaques, calculus, discolorations generally, amalgam, and gold fillings as if they were caries).
Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays
NASA Astrophysics Data System (ADS)
Choi, Heejin; Min, Sung-Wook; Jung, Sungyong; Park, Jae-Hyeung; Lee, Byoungho
2003-04-01
In spite of many advantages of integral imaging, the viewing zone in which an observer can see three-dimensional images is limited within a narrow range. Here, we propose a novel method to increase the number of viewing zones by using a dynamic barrier array. We prove our idea by fabricating and locating the dynamic barrier array between a lens array and a display panel. By tilting the barrier array, it is possible to distribute images for each viewing zone. Thus, the number of viewing zones can be increased with an increment of the states of the barrier array tilt.
Identification of Genes for Complex Diseases by Integrating Multiple Types of Genomic Data
Cao, Hongbao; Lei, Shufeng; Deng, Hong-Wen; Wang, Yu-Ping
2014-01-01
Combining multiple types of genomic data for integrative analyses can take advantage of complementary information and thus can have higher power to identify genes/variables that would be impossible to find with individual data analysis. Here we proposed a sparse representation based clustering (SRC) method for integrative data analyses, and applied it to the analysis of 376821 SNPs in 200 subjects (100 cases and 100 controls) and expression data for 22283 genes in 80 subjects (40 cases and 40 controls) to identify significant genes for osteoporosis (OP). Comparing our results with previous studies, we identified some genes known to be related to OP risk, as well as some novel osteoporosis-susceptibility genes (‘DICER1’, ‘PTMA’, etc.) that may function importantly in osteoporosis etiology. In addition, the identified genes lead to higher accuracy in the identification of osteoporosis subjects than the traditional T-test and Fisher exact test, which further validates the proposed SRC approach for integrative analysis. PMID:23367184
Towards a Better Understanding of CMMI and Agile Integration - Multiple Case Study of Four Companies
NASA Astrophysics Data System (ADS)
Pikkarainen, Minna
The amount of software is increasing in different domains in Europe. This provides the industries in smaller countries with good opportunities to work in the international markets. Success in the global markets, however, demands the rapid production of high-quality, error-free software. Both CMMI and agile methods seem to provide a ready solution for quality and lead-time improvements. There is not, however, much empirical evidence available about either 1) how the integration of these two aspects can be done in practice or 2) what it actually demands from assessors and software process improvement groups. The goal of this paper is to increase the understanding of CMMI and agile integration, in particular focusing on the research question: how to use a ‘lightweight’ style of CMMI assessment in agile contexts. This is done via four case studies in which assessments were conducted using the goals of the CMMI integrated project management and collaboration and coordination with relevant stakeholders process areas, and practices from XP and Scrum. The study shows that the use of agile practices may support the fulfilment of the goals of CMMI process areas, but there are still many challenges for agile teams to solve within continuous improvement programs. It also identifies practical advice for assessors and improvement groups to take into consideration when conducting assessments in the context of agile software development.
Cabib, Christopher; Llufriu, Sara; Casanova-Molla, Jordi; Saiz, Albert; Valls-Solé, Josep
2015-03-01
Slowness of voluntary movements in patients with multiple sclerosis (MS) may be due to various factors, including attentional and cognitive deficits, delays in motor conduction time, and impairment of specific central nervous system circuits. In 13 healthy volunteers and 20 mildly disabled, relapsing-remitting MS patients, we examined simple reaction time (SRT) tasks requiring sensorimotor integration in circuits involving the corpus callosum and the brain stem. A somatosensory stimulus was used as the imperative signal (IS), and subjects were requested to react with either the ipsilateral or the contralateral hand (uncrossed vs. crossed SRT). In 33% of trials, a startling auditory stimulus was presented together with the IS, and the percentage reaction time change with respect to baseline SRT trials was measured (StartReact effect). The difference between crossed and uncrossed SRT, which requires interhemispheric conduction, was significantly larger in patients than in healthy subjects (P = 0.021). The StartReact effect, which involves activation of brain stem motor pathways, was reduced significantly in patients with respect to healthy subjects (uncrossed trials: P = 0.015; crossed trials: P = 0.005). In patients, a barely significant correlation was found between SRT delay and conduction abnormalities in motor and sensory pathways (P = 0.02 and P = 0.04, respectively). The abnormalities found specifically in trials reflecting interhemispheric transfer of information, as well as the evidence for reduced subcortical motor preparation, indicate that a delay in reaction time execution in MS patients cannot be explained solely by conduction slowing in motor and sensory pathways but suggest, instead, defective sensorimotor integration mechanisms in at least the two circuits examined.
Integrating Multiple Distribution Models to Guide Conservation Efforts of an Endangered Toad
Treglia, Michael L.; Fisher, Robert N.; Fitzgerald, Lee A.
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species’ ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in the identification of sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species’ current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models. PMID:26125634
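The integration step (contrasting the potential-habitat and current-habitat models to find sites for active management) can be sketched on hypothetical thresholded suitability grids. The surfaces, grid size, and 0.5 threshold are illustrative assumptions, not the Random Forests outputs from the study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical suitability surfaces in [0, 1] on a 100 x 100 grid:
# long-term variables only (potential) vs. with recent remote sensing
# added (current); the extra variables can only lower suitability here.
potential_p = rng.uniform(0, 1, size=(100, 100))
current_p = np.clip(potential_p - rng.uniform(0, 0.4, size=(100, 100)), 0, 1)

threshold = 0.5
potential = potential_p >= threshold
current = current_p >= threshold

# Suitable long-term but not currently: candidates for management
# actions such as vegetation removal
restoration = potential & ~current

# Fractional increase in habitat achievable if all candidates recover
gain = restoration.sum() / current.sum()
```

The ratio `gain` is the analogue of the study's "up to 67.02%" figure: the proportional expansion of current habitat if every potential-but-not-current cell were restored.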
Integrated dementia care in The Netherlands: a multiple case study of case management programmes.
Minkman, Mirella M N; Ligthart, Suzanne A; Huijsman, Robbert
2009-09-01
The number of dementia patients is growing, and they require a variety of services, making integrated care essential for the ability to continue living in the community. Many healthcare systems in developed countries are exploring new approaches for delivering health and social care. The purpose of this study was to describe and analyse a new approach in extensive case management programmes concerned with long-term dementia care in The Netherlands. The focus is on the characteristics, and success and failure factors of these programmes.A multiple case study was conducted in eight regional dementia care provider networks in The Netherlands. Based on a literature study, a questionnaire was developed for the responsible managers and case managers of the eight case management programmes. During 16 semistructured face-to-face interviews with both respondent groups, a deeper insight into the dementia care programmes was provided. Project documentation for all the cases was studied. The eight programmes were developed independently to improve the quality and continuity of long-term dementia care. The programmes show overlap in terms of their vision, tasks of case managers, case management process and the participating partners in the local dementia care networks. Differences concern the targeted dementia patient groups as well as the background of the case managers and their position in the local dementia care provider network. Factors for success concern the expert knowledge of case managers, investment in a strong provider network and coherent conditions for effective inter-organizational cooperation to deliver integrated care. When explored, caregiver and patient satisfaction was high. Further research into the effects on client outcomes, service use and costs is recommended in order to further analyse the impact of this approach in long-term care. To facilitate implementation, with a focus on joint responsibilities of the involved care providers, policy
Integrating Multiple Distribution Models to Guide Conservation Efforts of an Endangered Toad.
Treglia, Michael L; Fisher, Robert N; Fitzgerald, Lee A
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species' ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in identification sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species' current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced modeled suitability of sites, thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data necessary to develop appropriate distribution models.
Integrating Multiple Distribution Models to Guide Conservation Efforts of an Endangered Toad.
Treglia, Michael L; Fisher, Robert N; Fitzgerald, Lee A
2015-01-01
Species distribution models are used for numerous purposes such as predicting changes in species' ranges and identifying biodiversity hotspots. Although implications of distribution models for conservation are often implicit, few studies use these tools explicitly to inform conservation efforts. Herein, we illustrate how multiple distribution models developed using distinct sets of environmental variables can be integrated to aid in identifying sites for use in conservation. We focus on the endangered arroyo toad (Anaxyrus californicus), which relies on open, sandy streams and surrounding floodplains in southern California, USA, and northern Baja California, Mexico. Declines of the species are largely attributed to habitat degradation associated with vegetation encroachment, invasive predators, and altered hydrologic regimes. We had three main goals: 1) develop a model of potential habitat for arroyo toads, based on long-term environmental variables and all available locality data; 2) develop a model of the species' current habitat by incorporating recent remotely-sensed variables and only using recent locality data; and 3) integrate results of both models to identify sites that may be employed in conservation efforts. We used a machine learning technique, Random Forests, to develop the models, focused on riparian zones in southern California. We identified 14.37% and 10.50% of our study area as potential and current habitat for the arroyo toad, respectively. Generally, inclusion of remotely-sensed variables reduced the modeled suitability of sites, and thus many areas modeled as potential habitat were not modeled as current habitat. We propose such sites could be made suitable for arroyo toads through active management, increasing current habitat by up to 67.02%. Our general approach can be employed to guide conservation efforts of virtually any species with sufficient data to develop appropriate distribution models. PMID:26125634
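The modeling step above can be sketched with a generic Random Forests habitat-suitability classifier. The synthetic covariates, presence/absence labels, and suitability threshold below are illustrative assumptions, not the study's actual variables or data.

```python
# Hypothetical Random Forests habitat model: train on presence/absence
# labels against environmental covariates, then map predicted suitability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic site data: columns stand in for environmental variables
# (e.g. elevation, stream slope, canopy cover) -- assumed, not the paper's.
X = rng.random((200, 3))
y = (X[:, 2] < 0.4).astype(int)        # toy rule: presence where cover is low

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Suitability = predicted probability of presence; thresholding it yields a
# binary habitat map, analogous to the potential/current maps in the abstract.
suitability = model.predict_proba(rng.random((50, 3)))[:, 1]
habitat = suitability >= 0.5
print(f"fraction of new sites classified as habitat: {habitat.mean():.2f}")
```

Integrating two such models amounts to comparing the binary maps cell by cell: areas suitable under long-term variables but not under recent remotely-sensed ones are the candidate sites for active management.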
Integral wave-migration method applied to electromagnetic data
Bartel, L.C.
1994-12-31
Migration of the electromagnetic (EM) wave field will be discussed as a solution of the wave equation in which surface magnetic field measurements are the known boundary values. This approach is similar to classical optical diffraction theory. Here, data are taken on an aperture, migrated (extrapolated), and deconvolved with a source function. The EM image is formed when the imaginary part of the Fourier-transformed migrated field at time zero is zero or at least a minimum. The integral formulation for migration is applied to model data for surface magnetic fields calculated for a grounded, vertical electric source (VES). The conductivity structure is determined by comparing the measured migrated fields to calculated migrated fields for a yet-to-be-determined conductivity structure. This comparison results in solving a Fredholm integral equation of the first kind for the conductivity structure. Solutions are obtained using the conjugate gradient method. The imaging method used here is similar to the EM holographic method reported earlier, except that here the magnitudes, as well as the phases, of the extrapolated fields are preserved so that material properties can be determined.
A Dynamic Integration Method for Borderland Database using OSM data
NASA Astrophysics Data System (ADS)
Zhou, X.-G.; Jiang, Y.; Zhou, K.-X.; Zeng, L.
2013-11-01
Spatial data are fundamental to borderland analysis of geography, natural resources, demography, politics, economy, and culture. As the spatial region used in borderland research usually covers the borderland regions of several neighboring countries, the data are difficult for any single research institution or government to obtain. Volunteered Geographic Information (VGI) has proven to be a very successful means of acquiring timely and detailed global spatial data at very low cost, so VGI is a reasonable source of borderland spatial data. OpenStreetMap (OSM) is the best-known and most successful VGI resource, but the OSM data model differs greatly from that of traditional authoritative geographic information, so OSM data must be converted to the researcher's customized data model. Because the real world changes quickly, the converted data also need to be updated. Therefore, a dynamic integration method for borderland data is presented in this paper. In this method, a machine learning mechanism is used to convert the OSM data model to the user data model; a method for selecting the changed objects in the research area over a given period from the OSM whole-world daily diff file is presented; and a change-only information file in the designed form is produced automatically. Based on the rules and algorithms mentioned above, we enabled the automatic (or semi-automatic) integration and updating of the borderland database by programming. The developed system was intensively tested.
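The change-selection step can be illustrated with a toy OsmChange-style diff: keep only the objects whose coordinates fall inside the borderland bounding box. The XML snippet, bounding box, and node-only handling are simplifying assumptions, not the full OSM daily diff format.

```python
# Toy change selection from an OsmChange-style diff: keep nodes inside a
# borderland bounding box, tagged with the kind of change (modify/create/delete).
import xml.etree.ElementTree as ET

diff_xml = """<osmChange>
  <modify><node id="1" lat="40.1" lon="70.2"/></modify>
  <create><node id="2" lat="10.0" lon="20.0"/></create>
  <delete><node id="3" lat="40.5" lon="70.9"/></delete>
</osmChange>"""

BBOX = (39.0, 69.0, 41.0, 71.0)   # (min_lat, min_lon, max_lat, max_lon), assumed

def in_bbox(node):
    lat, lon = float(node.get("lat")), float(node.get("lon"))
    return BBOX[0] <= lat <= BBOX[2] and BBOX[1] <= lon <= BBOX[3]

changes = []
for action in ET.fromstring(diff_xml):          # <modify>, <create>, <delete>
    for node in action.iter("node"):
        if in_bbox(node):
            changes.append((action.tag, node.get("id")))

print(changes)   # [('modify', '1'), ('delete', '3')]
```

A real implementation would also resolve ways and relations to their member nodes before the bounding-box test, since only nodes carry coordinates in OSM.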
NASA Astrophysics Data System (ADS)
Naraghi, M. H. N.; Chung, B. T. F.
1982-06-01
A multiple step fixed random walk Monte Carlo method for solving heat conduction in solids with distributed internal heat sources is developed. In this method, the probability that a walker reaches a point a few steps away is calculated analytically and is stored in the computer. Instead of moving to the immediate neighboring point the walker is allowed to jump several steps further. The present multiple step random walk technique can be applied to both conventional Monte Carlo and the Exodus methods. Numerical results indicate that the present method compares well with finite difference solutions while the computation speed is much faster than that of single step Exodus and conventional Monte Carlo methods.
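For contrast, a minimal single-step fixed random walk for steady conduction (Laplace's equation, no internal source) looks like the following; the paper's multiple-step variant would replace the one-cell move with a precomputed several-step jump. Grid size, walk count, and boundary temperatures are illustrative assumptions.

```python
# Single-step fixed random walk for steady heat conduction on a unit square:
# the temperature at an interior point is the expected boundary temperature
# at the walker's exit point.
import random

N = 20          # grid points per side (boundary at index 0 and N)
WALKS = 2000    # random walks started from the query point

def boundary_temp(i, j):
    # Assumed boundary condition: T = 1 on the top edge, 0 elsewhere.
    return 1.0 if j == N else 0.0

def walk_estimate(i0, j0, rng):
    total = 0.0
    for _ in range(WALKS):
        i, j = i0, j0
        while 0 < i < N and 0 < j < N:          # step until the boundary is hit
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary_temp(i, j)
    return total / WALKS

rng = random.Random(1)
T_mid = walk_estimate(N // 2, N // 2, rng)
print(f"estimated T at center: {T_mid:.3f}")    # analytic value is 0.25 by symmetry
```

The speedup claimed in the abstract comes from tabulating the probability of reaching points several steps away, so each iteration of the inner loop advances the walker much further than one cell.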
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
Method for deposition of a conductor in integrated circuits
Creighton, J. Randall; Dominguez, Frank; Johnson, A. Wayne; Omstead, Thomas R.
1997-01-01
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten.
Method for deposition of a conductor in integrated circuits
Creighton, J.R.; Dominguez, F.; Johnson, A.W.; Omstead, T.R.
1997-09-02
A method is described for fabricating integrated semiconductor circuits and, more particularly, for the selective deposition of a conductor onto a substrate employing a chemical vapor deposition process. By way of example, tungsten can be selectively deposited onto a silicon substrate. At the onset of loss of selectivity of deposition of tungsten onto the silicon substrate, the deposition process is interrupted and unwanted tungsten which has deposited on a mask layer with the silicon substrate can be removed employing a halogen etchant. Thereafter, a plurality of deposition/etch back cycles can be carried out to achieve a predetermined thickness of tungsten. 2 figs.
Investigation of system integration methods for bubble domain flight recorders
NASA Technical Reports Server (NTRS)
Chen, T. T.; Bohning, O. D.
1975-01-01
System integration methods for bubble domain flight recorders are investigated. Bubble memory module packaging and assembly, the control electronics design and construction, field coils, and permanent magnet bias structure design are studied. A small 60-kbit engineering model was built and tested to demonstrate the feasibility of the bubble recorder. Based on the various studies performed, a projection is made for a 50,000,000-bit prototype recorder. It is estimated that the recorder will occupy 190 cubic in., weigh 12 lb, and consume 12 W of power when all four of its tracks are operated in parallel at a 150 kHz data rate.
The biocommunication method: On the road to an integrative biology
Witzany, Guenther
2016-01-01
Although molecular biology, genetics, and related special disciplines represent a large amount of empirical data, a practical method for the evaluation and overview of current knowledge is far from being realized. The main concepts and narratives in these fields have remained nearly the same for decades and the more recent empirical data concerning the role of noncoding RNAs and persistent viruses and their defectives do not fit into this scenario. A more innovative approach such as applied biocommunication theory could translate empirical data into a coherent perspective on the functions within and between biological organisms and arguably lead to a sustainable integrative biology. PMID:27195071
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Method of producing an integral resonator sensor and case
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill V. (Inventor); Challoner, A. Dorian (Inventor); Hayworth, Ken J. (Inventor); Wiberg, Dean V. (Inventor); Yee, Karl Y. (Inventor)
2005-01-01
The present invention discloses an inertial sensor having an integral resonator. A typical sensor comprises a planar mechanical resonator for sensing motion of the inertial sensor and a case for housing the resonator. The resonator and a wall of the case are defined through an etching process. A typical method of producing the resonator includes etching a baseplate, bonding a wafer to the etched baseplate, through etching the wafer to form a planar mechanical resonator and the wall of the case and bonding an end cap wafer to the wall to complete the case.
Optical matrix-matrix multiplication method demonstrated by the use of a multifocus hololens
NASA Technical Reports Server (NTRS)
Liu, H. K.; Liang, Y.-Z.
1984-01-01
A method of optical matrix-matrix multiplication is presented. The feasibility of the method is also experimentally demonstrated by the use of a dichromated-gelatin multifocus holographic lens (hololens). With the specific values of matrices chosen, the average percentage error between the theoretical and experimental data of the elements of the output matrix of the multiplication of some specific pairs of 3 x 3 matrices is 0.4 percent, which corresponds to an 8-bit accuracy.
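The 0.4 percent figure is an element-wise average percentage error between the theoretical product and the optically measured one. The sketch below just reproduces that metric on made-up 3x3 matrices, with the "measured" values simulated by small multiplicative noise; the matrices and noise level are assumptions.

```python
# Average percentage error between a theoretical matrix product and a
# simulated "optical" measurement of it (noise stands in for hololens error).
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(1, 5, (3, 3)).astype(float)
B = rng.integers(1, 5, (3, 3)).astype(float)

C_theory = A @ B
C_measured = C_theory * (1 + 0.004 * rng.standard_normal((3, 3)))  # ~0.4% noise

avg_pct_error = np.mean(np.abs(C_measured - C_theory) / np.abs(C_theory)) * 100
print(f"average percentage error: {avg_pct_error:.2f}%")
```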
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
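A minimal instance of the force method's central idea, solving the equilibrium equation and the compatibility condition simultaneously for member forces: two springs in series with a load at the shared node. Stiffnesses and load are arbitrary illustrative values, not from the paper.

```python
# Integrated force method in miniature: two springs in series (stiffnesses
# k1, k2), load P at the shared node, unknown member forces F1, F2.
import numpy as np

k1, k2, P = 2.0, 3.0, 10.0

# Row 1: node equilibrium        F1 - F2 = P
# Row 2: compatibility of elongations e1 + e2 = F1/k1 + F2/k2 = 0
A = np.array([[1.0, -1.0],
              [1.0 / k1, 1.0 / k2]])
b = np.array([P, 0.0])

F1, F2 = np.linalg.solve(A, b)
print(f"F1 = {F1:.2f}, F2 = {F2:.2f}")   # prints "F1 = 4.00, F2 = -6.00"

# Cross-check against the displacement (stiffness) method: u = P/(k1+k2).
u = P / (k1 + k2)
assert np.isclose(F1, k1 * u) and np.isclose(F2, -k2 * u)
```

The forces come out directly, without first solving for displacements; in the stiffness method they would be recovered only after the displacement solve.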
Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods
Sandstrom, Mary M.; Brown, Geoffrey W.; Preston, Daniel N.; Pollard, Colin J.; Warner, Kirstin F.; Remmers, Daniel L.; Sorensen, Daniel N.; Whinnery, LeRoy L.; Phillips, Jason J.; Shelley, Timothy J.; Reyes, Jose A.; Hsu, Peter C.; Reynolds, John G.
2013-03-25
The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.
An integrated economic model of multiple types and uses of water
NASA Astrophysics Data System (ADS)
Luckmann, Jonas; Grethe, Harald; McDonald, Scott; Orlov, Anton; Siddig, Khalid
2014-05-01
Water scarcity is an increasing problem in many parts of the world and the management of water has become an important issue on the political economy agenda in many countries. As water is used in most economic activities and the allocation of water is often a complex problem involving different economic agents and sectors, Computable General Equilibrium (CGE) models have been proven useful to analyze water allocation problems, although their adaptation to include water is still relatively undeveloped. This paper provides a description of an integrated water-focused CGE model (STAGE_W) that includes multiple types and uses of water, and for the first time, the reclamation of wastewater as well as the provision of brackish groundwater as separate, independent activities with specific cost structures. The insights provided by the model are illustrated with an application to the Israeli water sector assuming that freshwater resources available to the economy are cut by 50%. We analyze how the Israeli economy copes with this shock if it reduces potable water supply compared with further investments in the desalination sector. The results demonstrate that the effects on the economy are slightly negative under both scenarios. Counter intuitively, the provision of additional potable water to the economy through desalination does not substantively reduce the negative outcomes. This is mainly due to the high costs of desalination, which are currently subsidized, with the distribution of the negative welfare effect over household groups dependent on how these subsidies are financed.
NASA Astrophysics Data System (ADS)
Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei
2015-12-01
Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
Bornstein, Robert F.
2015-01-01
Recent controversies have illuminated the strengths and limitations of different frameworks for conceptualizing personality pathology (e.g., trait perspectives, categorical models), and stimulated debate regarding how best to diagnose personality disorders (PDs) in DSM-5, and in other diagnostic systems (i.e., the International Classification of Diseases, the Psychodynamic Diagnostic Manual). In this article I argue that regardless of how PDs are conceptualized and which diagnostic system is employed, multi-method assessment must play a central role in PD diagnosis. By complementing self-reports with evidence from other domains (e.g., performance-based tests), a broader range of psychological processes are engaged in the patient, and the impact of self-perception and self-presentation biases may be better understood. By providing the assessor with evidence drawn from multiple modalities, some of which provide converging patterns and some of which yield divergent results, the assessor is compelled to engage this evidence more deeply. The mindful processing that ensues can help minimize the deleterious impact of naturally occurring information processing bias and distortion on the part of the clinician (e.g., heuristics, attribution errors), bringing greater clarity to the synthesis and integration of assessment data. PMID:25856565
Peng, Ting; Sun, Xiaochun; Mumm, Rita H
2014-01-01
Multiple trait integration (MTI) is a multi-step process of converting an elite variety/hybrid for value-added traits (e.g. transgenic events) through backcross breeding. From a breeding standpoint, MTI involves four steps: single event introgression, event pyramiding, trait fixation, and version testing. This study explores the feasibility of marker-aided backcross conversion of a target maize hybrid for 15 transgenic events in the light of the overall goal of MTI of recovering equivalent performance in the finished hybrid conversion along with reliable expression of the value-added traits. Using the results to optimize single event introgression (Peng et al. Optimized breeding strategies for multiple trait integration: I. Minimizing linkage drag in single event introgression. Mol Breed, 2013) which produced single event conversions of recurrent parents (RPs) with ≤8 cM of residual non-recurrent parent (NRP) germplasm with ~1 cM of NRP germplasm in the 20 cM regions flanking the event, this study focused on optimizing process efficiency in the second and third steps in MTI: event pyramiding and trait fixation. Using computer simulation and probability theory, we aimed to (1) fit an optimal breeding strategy for pyramiding of eight events into the female RP and seven in the male RP, and (2) identify optimal breeding strategies for trait fixation to create a 'finished' conversion of each RP homozygous for all events. In addition, next-generation seed needs were taken into account for a practical approach to process efficiency. Building on work by Ishii and Yonezawa (Optimization of the marker-based procedures for pyramiding genes from multiple donor lines: I. Schedule of crossing between the donor lines. Crop Sci 47:537-546, 2007a), a symmetric crossing schedule for event pyramiding was devised for stacking eight (seven) events in a given RP. Options for trait fixation breeding strategies considered selfing and doubled haploid approaches to achieve homozygosity
Integrating multiple disturbance aspects: management of an invasive thistle, Carduus nutans
Zhang, Rui; Shea, Katriona
2012-01-01
Background and Aims Disturbances occur in most ecological systems, and play an important role in biological invasions. We delimit five key disturbance aspects: intensity, frequency, timing, duration and extent. Few studies address more than one of these aspects, yet interactions and interdependence between aspects may lead to complex outcomes. Methods In a two-cohort experimental study, we examined how multiple aspects (intensity, frequency and timing) of a mowing disturbance regime affect the survival, phenology, growth and reproduction of an invasive thistle Carduus nutans (musk thistle). Key Results Our results show that high intensity and late timing strongly delay flowering phenology and reduce plant survival, capitulum production and plant height. A significant interaction between intensity and timing further magnifies the main effects. Unexpectedly, high frequency alone did not effectively reduce reproduction. However, a study examining only frequency and intensity, and not timing, would have erroneously attributed the importance of timing to frequency. Conclusions We used management of an invasive species as an example to demonstrate the importance of a multiple-aspect disturbance framework. Failure to consider possible interactions, and the inherent interdependence of certain aspects, could result in misinterpretation and inappropriate management efforts. This framework can be broadly applied to improve our understanding of disturbance effects on individual responses, population dynamics and community composition. PMID:22199031
Accelerometer method and apparatus for integral display and control functions
NASA Technical Reports Server (NTRS)
Bozeman, Richard J., Jr. (Inventor)
1992-01-01
Vibration analysis has been used for years to provide a determination of the proper functioning of different types of machinery, including rotating machinery and rocket engines. A determination of a malfunction, if detected at a relatively early stage in its development, will allow changes in operating mode or a sequenced shutdown of the machinery prior to a total failure. Such preventative measures result in less extensive and/or less expensive repairs, and can also prevent a sometimes catastrophic failure of equipment. Standard vibration analyzers are generally rather complex, expensive, and of limited portability. They also usually result in displays and controls being located remotely from the machinery being monitored. Consequently, a need exists for improvements in accelerometer electronic display and control functions which are more suitable for operation directly on machines and which are not so expensive and complex. The invention includes methods and apparatus for detecting mechanical vibrations and outputting a signal in response thereto. The apparatus includes an accelerometer package having integral display and control functions. The accelerometer package is suitable for mounting upon the machinery to be monitored. Display circuitry provides signals to a bar graph display which may be used to monitor machine condition over a period of time. Control switches may be set which correspond to elements in the bar graph to provide an alert if vibration signals increase over the selected trip point. The circuitry is shock mounted within the accelerometer housing. The method provides for outputting a broadband analog accelerometer signal, integrating this signal to produce a velocity signal, integrating and calibrating the velocity signal before application to a display driver, and selecting a trip point at which a digitally compatible output signal is generated. The benefits of a vibration recording and monitoring system with controls and displays readily
An integrated-intensity method for emission spectrographic computer analysis
Thomas, Catharine P.
1975-01-01
An integrated-intensity method has been devised to improve the computer analysis of data by emission spectrography. The area of the intensity profile of a spectral line is approximated by a rectangle whose height is related to the intensity difference between the peak and background of the line and whose width is measured at a fixed transmittance below the apex of the line. The method is illustrated by the determination of strontium in the presence of greater than 10 percent calcium. The Sr 3380.711-Å line, which is unaffected by calcium and which has a linear analytical curve extending from 100-3,000 ppm, has been used to determine strontium in 18 standard reference rocks covering a wide range of geologic materials. Both the accuracy and the precision of the determinations were well within the accepted range for a semiquantitative procedure.
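The rectangle approximation can be sketched as follows: the height is the peak-minus-background intensity and the width is measured at a fixed level below the apex. The Gaussian line profile and the 50% cutoff are illustrative assumptions (the paper measures width at a fixed transmittance on a photographic emulsion).

```python
# Rectangle approximation to the area of a spectral line: height x width,
# with width taken at a fixed level below the line's apex.
import numpy as np

wavelength = np.linspace(-1.0, 1.0, 401)          # offset from line center
background = 0.10
profile = background + 0.90 * np.exp(-(wavelength / 0.2) ** 2)  # synthetic line

peak = profile.max()
height = peak - background                         # peak minus background

# Width at a fixed level below the apex (here 50% of the line height).
level = background + 0.5 * height
above = wavelength[profile >= level]
width = above.max() - above.min()

area_rectangle = height * width
print(f"height={height:.3f}, width={width:.3f}, area~{area_rectangle:.3f}")
```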
The reduced basis method for the electric field integral equation
Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.
2011-06-20
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
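The two-step structure can be sketched for a generic parametrized linear system A(mu)x = b rather than the EFIE/BEM setting itself: a few full solves at training parameters build the reduced basis once (offline), and each new parameter then costs only a tiny projected solve (online). The affine parameter dependence and problem sizes below are assumptions.

```python
# Offline/online split of a reduced basis method for A(mu) x = b, with
# A(mu) = A0 + mu*A1 (assumed affine dependence on the parameter mu).
import numpy as np

rng = np.random.default_rng(0)
n = 200
A0 = np.eye(n)
A1 = rng.standard_normal((n, n)) / n
b = rng.standard_normal(n)

def A(mu):
    return A0 + mu * A1

# Offline (expensive, done once): full solves at training parameters,
# orthonormalized by QR to form the reduced basis V.
snapshots = np.column_stack([np.linalg.solve(A(mu), b) for mu in (0.0, 0.5, 1.0)])
V, _ = np.linalg.qr(snapshots)

# Online (cheap, per query): solve only the 3x3 Galerkin-projected system.
mu = 0.7
x_rb = V @ np.linalg.solve(V.T @ A(mu) @ V, V.T @ b)
err = np.linalg.norm(A(mu) @ x_rb - b) / np.linalg.norm(b)
print(f"relative residual of reduced solution: {err:.2e}")
```

In the scattering application, the online stage evaluates outputs such as the RCS for many wavenumbers, incidence angles, and polarizations at a cost independent of the BEM discretization size.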
Modeling the Multiple-Antenna Wireless Channel Using Maximum Entropy Methods
NASA Astrophysics Data System (ADS)
Guillaud, M.; Debbah, M.; Moustakas, A. L.
2007-11-01
Analytical descriptions of the statistics of wireless channel models are desirable tools for communication systems engineering. When multiple antennas are available at the transmit and/or the receive side (the Multiple-Input Multiple-Output, or MIMO, case), the statistics of the matrix H representing the gains between the antennas of a transmit and a receive antenna array, and in particular the correlation between its coefficients, are known to be of paramount importance for the design of such systems. However these characteristics depend on the operating environment, since the electromagnetic propagation paths are dictated by the surroundings of the antenna arrays, and little knowledge about these is available at the time of system design. An approach using the Maximum Entropy principle to derive probability density functions for the channel matrix, based on various degrees of knowledge about the environment, is presented. The general idea is to apply the maximum entropy principle to obtain the distribution of each parameter of interest (e.g. correlation), and then to marginalize them out to obtain the full channel distribution. It was shown in previous works, using sophisticated integrals from statistical physics, that by using the full spatial correlation matrix E{vec(H)vec(H)^H} as the intermediate modeling parameter, this method can yield surprisingly concise channel descriptions. In this case, the joint probability density function is shown to be merely a function of the Frobenius norm of the channel matrix ||H||_F. In the present paper, we investigate the case where information about the average covariance matrix is available (e.g. through measurements). The maximum entropy distribution of the covariance is derived under this constraint. Furthermore, we consider also the doubly correlated case, where the intermediate modeling parameters are chosen as the transmit- and receive-side channel covariance matrices (respectively E{H^H H} and E{H H^H}). We compare the
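A small sketch of the simplest case mentioned above: with only an average-energy constraint, the maximum-entropy channel has i.i.d. complex Gaussian entries, so its log-density depends on H only through the Frobenius norm. Array dimensions are illustrative.

```python
# Max-entropy MIMO channel under an energy constraint: H ~ i.i.d. CN(0,1),
# so log p(H) = -nt*nr*log(pi) - ||H||_F^2, a function of the norm alone.
import numpy as np

rng = np.random.default_rng(0)
nt, nr = 4, 4    # transmit / receive antenna counts (assumed)
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

fro = np.linalg.norm(H, "fro")
log_density = -nt * nr * np.log(np.pi) - fro**2
print(f"||H||_F = {fro:.3f}, log p(H) = {log_density:.3f}")
```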
The Flux-integral Method for Multidimensional Convection and Diffusion
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Macvean, M. K.; Lock, A. P.
1994-01-01
The flux-integral method is a procedure for constructing an explicit, single-step, forward-in-time, conservative, control volume update of the unsteady, multidimensional convection-diffusion equation. The convective plus diffusive flux at each face of a control-volume cell is estimated by integrating the transported variable and its face-normal derivative over the volume swept out by the convecting velocity field. This yields a unique description of the fluxes, whereas other conservative methods rely on nonunique, arbitrary pseudoflux-difference splitting procedures. The accuracy of the resulting scheme depends on the form of the subcell interpolation assumed, given cell-average data. Cellwise constant behavior results in a (very artificially diffusive) first-order convection scheme. Second-order convection-diffusion schemes correspond to cellwise linear (or bilinear) subcell interpolation. Cellwise quadratic subcell interpolants generate a highly accurate convection-diffusion scheme with excellent phase accuracy. Under constant-coefficient conditions, this is a uniformly third-order polynomial interpolation algorithm (UTOPIA).
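The conservative control-volume update described above can be illustrated by its simplest relative: a single-step, forward-in-time finite-volume scheme with first-order upwind convective flux and central diffusive flux on a periodic 1-D grid. This is only a sketch of the conservative flux-difference form; the flux-integral method itself obtains the face fluxes by integrating over swept volumes with (bi)linear or quadratic subcell interpolation:

```python
import numpy as np

def fv_step(phi, u, D, dx, dt):
    """One conservative finite-volume step for
    d(phi)/dt + u d(phi)/dx = D d2(phi)/dx2 on a periodic grid.

    First-order upwind convection (u > 0 assumed) plus central diffusion:
    a simplified stand-in for the flux-integral face-flux estimate."""
    phi_left = np.roll(phi, 1)                 # phi_{i-1}
    f_conv = u * phi_left                      # convective flux at left face
    f_diff = -D * (phi - phi_left) / dx        # diffusive flux at left face
    flux = f_conv + f_diff                     # total F_{i-1/2}
    # Conservative update: phi_i^{n+1} = phi_i - dt/dx (F_{i+1/2} - F_{i-1/2}).
    return phi - dt / dx * (np.roll(flux, -1) - flux)
```

Because the update is written purely in flux differences, the cell fluxes telescope and the total of phi is conserved to machine precision, which is the property the flux-integral construction guarantees by design.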
Takiguchi, K; Okuno, M; Takahashi, H; Moriwaki, O
2007-04-01
We propose a novel integrated photonic decoder for two-dimensional (time spreading, wavelength hopping) optical code division multiple access. The decoder is composed of multiplexers-demultiplexers, variable delay lines, and a coupler, which processes complementary codes and utilizes balanced detection to reduce unwanted cross-correlation interference. We successfully carried out a 10 Gbit/s transmission that demonstrated its effectiveness.
Lee, Minjung; Dignam, James J.; Han, Junhee
2014-01-01
We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
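The idea of multiply imputing missing failure causes and averaging the resulting cumulative incidence estimates can be sketched with a toy model: impute each unknown cause from the marginal cause distribution among observed failures (a crude missing-at-random model; the paper's imputation is drawn from a richer conditional distribution, with Rubin's rules also used for variances). All names here are ours:

```python
import numpy as np

def cif_cause1(times, events):
    """Nonparametric cumulative incidence for cause 1, as a step function.
    events: 0 = censored, 1 = cause of interest, 2 = competing cause."""
    order = np.argsort(times, kind="stable")
    t, e = times[order], events[order]
    n = len(t)
    surv, cif = 1.0, 0.0
    vals = np.empty(n)
    for i in range(n):
        at_risk = n - i
        if e[i] == 1:
            cif += surv / at_risk        # cause-1 increment
        if e[i] != 0:
            surv *= 1.0 - 1.0 / at_risk  # overall Kaplan-Meier factor
        vals[i] = cif
    return t, vals

def mi_cif(times, events, known, M=20, seed=None):
    """Multiple-imputation point estimate when some failure causes are missing.
    known: boolean mask, False where the cause of a failure is unknown."""
    rng = np.random.default_rng(seed)
    fail = events > 0
    p1 = np.mean(events[fail & known] == 1)  # marginal cause-1 fraction
    curves = []
    for _ in range(M):
        e = events.copy()
        miss = fail & ~known
        e[miss] = np.where(rng.random(miss.sum()) < p1, 1, 2)
        curves.append(cif_cause1(times, e)[1])
    return np.sort(times, kind="stable"), np.mean(curves, axis=0)
```

Averaging the M imputed curves gives the combined point estimate; the asymptotic theory and confidence intervals developed in the paper are not reproduced here.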
Yeung, Edward S.; Gong, Xiaoyi
2004-09-07
The present invention provides a method of analyzing multiple samples simultaneously by absorption detection. The method comprises: (i) providing a planar array of multiple containers, each of which contains a sample comprising at least one absorbing species, (ii) irradiating the planar array of multiple containers with a light source and (iii) detecting absorption of light with a detection means that is in line with the light source at a distance of at least about 10 times a cross-sectional distance of a container in the planar array of multiple containers. The absorption of light by a sample indicates the presence of an absorbing species in it. The method can further comprise: (iv) measuring the amount of absorption of light detected in (iii), indicating the amount of the absorbing species in the sample. Also provided by the present invention is a system for use in the above method. The system comprises: (i) a light source comprising or consisting essentially of at least one wavelength of light, the absorption of which is to be detected, (ii) a planar array of multiple containers, and (iii) a detection means that is in line with the light source and is positioned in line with and parallel to the planar array of multiple containers at a distance of at least about 10 times a cross-sectional distance of a container.

Parametric bootstrap methods for testing multiplicative terms in GGE and AMMI models.
Forkman, Johannes; Piepho, Hans-Peter
2014-09-01
The genotype main effects and genotype-by-environment interaction effects (GGE) model and the additive main effects and multiplicative interaction (AMMI) model are two common models for analysis of genotype-by-environment data. These models are frequently used by agronomists, plant breeders, geneticists and statisticians for analysis of multi-environment trials. In such trials, a set of genotypes, for example, crop cultivars, are compared across a range of environments, for example, locations. The GGE and AMMI models use singular value decomposition to partition genotype-by-environment interaction into an ordered sum of multiplicative terms. This article deals with the problem of testing the significance of these multiplicative terms in order to decide how many terms to retain in the final model. We propose parametric bootstrap methods for this problem. Models with fixed main effects, fixed multiplicative terms and random normally distributed errors are considered. Two methods are derived: a full and a simple parametric bootstrap method. These are compared with the alternatives of using approximate F-tests and cross-validation. In a simulation study based on four multi-environment trials, both bootstrap methods performed well with regard to Type I error rate and power. The simple parametric bootstrap method is particularly easy to use, since it only involves repeated sampling of standard normally distributed values. This method is recommended for selecting the number of multiplicative terms in GGE and AMMI models. The proposed methods can also be used for testing components in principal component analysis.
NASA Technical Reports Server (NTRS)
Foernsler, Lynda J.
1996-01-01
Checklists are used by the flight crew to properly configure an aircraft for safe flight and to ensure a high level of safety throughout the duration of the flight. In addition, the checklist provides a sequential framework to meet cockpit operational requirements, and it fosters cross-checking of the flight deck configuration among crew members. This study examined the feasibility of integrating multiple checklists for non-normal procedures into a single procedure for a typical transport aircraft. For the purposes of this report, a typical transport aircraft is one that represents a midpoint between early generation aircraft (B-727/737-200 and DC-10) and modern glass cockpit aircraft (B747-400/777 and MD-11). In this report, potential conflicts among non-normal checklist items during multiple failure situations for a transport aircraft are identified and analyzed. The non-normal checklist procedure that would take precedence for each of the identified multiple failure flight conditions is also identified. The rationale behind this research is that potential conflicts among checklist items might exist when integrating multiple checklists for non-normal procedures into a single checklist. As a rule, multiple failures occurring in today's highly automated and redundant system transport aircraft are extremely improbable. In addition, as shown in this analysis, conflicts among checklist items in a multiple failure flight condition are exceedingly unlikely. The possibility of a multiple failure flight condition occurring with a conflict among checklist items is so remote that integration of the non-normal checklists into a single checklist appears to be a plausible option.
The value of integrating information from multiple hazards for flood risk management
NASA Astrophysics Data System (ADS)
Castillo-Rodríguez, J. T.; Escuder-Bueno, I.; Altarejos-García, L.; Serrano-Lombillo, A.
2013-07-01
This article presents a methodology for estimating flood risk in urban areas integrating pluvial flooding, river flooding and failure of both small and large dams. The first part includes a review of basic concepts and existing methods on flood risk analysis, evaluation and management. Traditionally, flood risk analyses have focused on specific site studies and qualitative or semi-quantitative approaches. However, in this context, a general methodology to perform a quantitative flood risk analysis including different flood hazards was still required. The second part describes the proposed methodology, which presents an integrated approach combining pluvial flooding, river flooding and dam failure, as applied to a case study: an urban area located downstream of a dam under construction. This methodology represents an upgrade of the methodological piece developed within the SUFRI project. This article shows how outcomes from flood risk analysis provide better and more complete information to inform authorities, local entities and the stakeholders involved in decision-making with regard to flood risk management.
Two-Dimensional Integral Reacting Computer Code for Multiple Phase Flows
1997-05-05
ICRKFLO solves conservation equations for gaseous species, droplets, and solid particles of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy and dissipation for a three-phase reacting flow. Associated sub-models include integral combustion, two-parameter turbulence, particle melting and evaporation, droplet evaporation, and interfacial submodels. An evolving integral reaction submodel, originally designed for ICOMFLO2 to solve numerical stability problems associated with Arrhenius-type differential reaction submodels, was expanded and enhanced to handle petroleum cracking applications. A two-parameter turbulence submodel accounts for droplet and particle dispersion by gas phase turbulence with feedback effects on the gas phase. The evaporation submodel treats not only particle evaporation but the droplet size distribution shift caused by evaporation. Interfacial submodels correlate momentum and energy transfer between phases. Three major upgrades, adding new capabilities and improved physical modeling, were implemented in ICRKFLO Version 2.0. They are: (1) particle-particle and particle-wall interactions; (2) a two-step process for computing the reaction kinetics for a very large number of chemical reactions within a complex non-isothermal hydrodynamic flow field; and (3) a sectional coupling method combined with a triangular blocked cell technique for computing reacting multiphase flow systems of complex geometry while preserving the advantages of grid orthogonality.
Anderson, Kari B.; Halpin, Stephen T.; Johnson, Alicia S.; Martin, R. Scott; Spence, Dana M.
2012-01-01
In Part II of this series describing the use of polystyrene (PS) devices for microfluidic-based cellular assays, various cellular types and detection strategies are employed to determine three fundamental assays often associated with cells. Specifically, using either integrated electrochemical sensing or optical measurements with a standard multi-well plate reader, cellular uptake, production, or release of important cellular analytes are determined on a PS-based device. One experiment involved the fluorescence measurement of nitric oxide (NO) produced within an endothelial cell line following stimulation with ATP. The result was a four-fold increase in NO production (as compared to a control), with this receptor-based mechanism of NO production verifying the maintenance of cell receptors following immobilization onto the PS substrate. The ability to monitor cellular uptake was also demonstrated by optical determination of Ca2+ into endothelial cells following stimulation with the Ca2+ ionophore A20317. The result was a significant increase (42%) in the calcium uptake in the presence of the ionophore, as compared to a control (17%) (p < 0.05). Finally, the release of catecholamines from a dopaminergic cell line (PC 12 cells) was electrochemically monitored, with the electrodes being embedded into the PS-based device. The PC 12 cells had better adherence on the PS devices, as compared to use of PDMS. Potassium-stimulation resulted in the release of 114 ± 11 µM catecholamines, a significant increase (p < 0.05) over the release from cells that had been exposed to an inhibitor (reserpine, 20 ± 2 µM of catecholamines). The ability to successfully measure multiple analytes, generated in different means from various cells under investigation, suggests that PS may be a useful material for microfluidic device fabrication, especially considering the enhanced cell adhesion to PS, its enhanced rigidity/amenability to automation, and its ability to enable a wider range of
Integration of multiple intraguild predator cues for oviposition decisions by a predatory mite
Walzer, Andreas; Schausberger, Peter
2012-01-01
In mutual intraguild predation (IGP), the role of individual guild members is strongly context dependent and, during ontogeny, can shift from an intraguild (IG) prey to a food competitor or to an IG predator. Consequently, recognition of an offspring's predator is more complex for IG than classic prey females. Thus, IG prey females should be able to modulate their oviposition decisions by integrating multiple IG predator cues and by experience. Using a guild of plant-inhabiting predatory mites sharing the spider mite Tetranychus urticae as prey and passing through ontogenetic role shifts in mutual IGP, we assessed the effects of single and combined direct cues of the IG predator Amblyseius andersoni (eggs and traces left by a female on the substrate) on prey patch selection and oviposition behaviour of naïve and IG predator-experienced IG prey females of Phytoseiulus persimilis. The IG prey females preferentially resided in patches without predator cues when the alternative patch contained traces of predator females or the cue combination. Preferential egg placement in patches without predator cues was only apparent in the choice situation with the cue combination. Experience increased the responsiveness of females exposed to the IG predator cue combination, indicated by immediate selection of the prey patch without predator cues and almost perfect oviposition avoidance in patches with the cue combination. We argue that the evolution of the ability of IG prey females to evaluate offspring's IGP risk accurately is driven by the irreversibility of oviposition and the functionally complex relationships between predator guild members. PMID:23264692
NASA Astrophysics Data System (ADS)
Qian, Xiaoliang; Schlick, Tamar
2002-04-01
We develop an efficient multiple-time-step force splitting scheme for particle-mesh-Ewald molecular dynamics simulations. Our method exploits smooth switch functions effectively to regulate direct and reciprocal space terms for the electrostatic interactions. The reciprocal term with the near field contributions removed is assigned to the slow class; the van der Waals and regulated particle-mesh-Ewald direct-space terms, each associated with a tailored switch function, are assigned to the medium class. All other bonded terms are assigned to the fast class. This versatile protocol yields good stability and accuracy for Newtonian algorithms, with temperature and pressure coupling, as well as for Langevin dynamics. Since the van der Waals interactions need not be cut at short distances to achieve moderate speedup, this integrator represents an enhancement of our prior multiple-time-step implementation for microcanonical ensembles. Our work also tests more rigorously the stability of such splitting schemes, in combination with switching methodology. Performance of the algorithms is optimized and tested on liquid water, solvated DNA, and solvated protein systems over 400 ps or longer simulations. With a 6 fs outer time step, we find computational speedup ratios of over 6.5 for Newtonian dynamics, compared with 0.5 fs single-time-step simulations. With modest Langevin damping, an outer time step of up to 16 fs can be used with a speedup ratio of 7.5. Theoretical analyses in our appendices produce guidelines for choosing the Langevin damping constant and show the close relationship among the leapfrog Verlet, velocity Verlet, and position Verlet variants.
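The force-splitting idea above can be reduced to a two-level sketch: slow forces applied as half-kicks at the outer time step, fast forces integrated with velocity Verlet at the inner step (the classic impulse/r-RESPA pattern). The paper's scheme additionally has a medium class and switch functions regulating the Ewald direct/reciprocal split, none of which is reproduced here:

```python
import numpy as np

def respa_step(x, v, f_fast, f_slow, dt_outer, n_inner, m=1.0):
    """One outer step of a two-level multiple-time-step (impulse) integrator.

    Slow forces kick the velocity at the outer step boundaries; fast forces
    are integrated with velocity Verlet at dt_outer / n_inner."""
    dt = dt_outer / n_inner
    v = v + 0.5 * dt_outer * f_slow(x) / m      # slow half-kick
    for _ in range(n_inner):                    # fast inner loop
        v = v + 0.5 * dt * f_fast(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m      # slow half-kick
    return x, v

# Example: one particle with a stiff (fast) and a soft (slow) harmonic force.
f_fast = lambda x: -100.0 * x
f_slow = lambda x: -1.0 * x
```

The outer step must stay safely below the resonance limit set by the fast period, which is exactly the stability issue the paper's switch functions and Langevin damping are designed to push back.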
Lindenmeyer, C.W.
1993-01-26
An apparatus and method to automate the handling of multiple digital tape cassettes for processing by commercially available cassette tape readers and recorders. A removable magazine rack stores a plurality of tape cassettes, and cooperates with a shuttle device that automatically inserts and removes cassettes from the magazine to the reader and vice-versa. Photocells are used to identify and index to the desired tape cassette. The apparatus allows digital information stored on multiple cassettes to be processed without significant operator intervention.
The U.S. Environmental Protection Agency recently established the Ecosystem Services Research Program to help formulate methods and models for conducting comprehensive risk assessments that quantify how multiple ecosystem services interact and respond in concert to environmental ...
ERIC Educational Resources Information Center
Große, Cornelia S.
2014-01-01
It is commonly suggested to mathematics teachers to present learners different methods in order to solve one problem. This so-called "learning with multiple solution methods" is also recommended from a psychological point of view. However, existing research leaves many questions unanswered, particularly concerning the effects of…
ERIC Educational Resources Information Center
Sie Hoe, Lau; Ngee Kiong, Lau; Kian Sam, Hong; Bin Usop, Hasbee
2009-01-01
Assessment is central to any educational process. Number Right (NR) scoring method is a conventional scoring method for multiple choice items, where students need to pick one option as the correct answer. One point is awarded for the correct response and zero for any other responses. However, it has been heavily criticized for guessing and failure…
A Multiple Riccati Equations Rational-Exponent Method and Its Application to the Whitham-Broer-Kaup Equation
NASA Astrophysics Data System (ADS)
Liu, Qing; Wang, Zi-Hua; Jia, Dong-Li
2013-03-01
According to two dependent solutions to a generalized Riccati equation together with the equation itself, a multiple Riccati equations rational-exponent method is proposed and applied to Whitham-Broer-Kaup equation. It shows that this method is a more concise and efficient approach and can uniformly derive many types of combined solutions to nonlinear partial differential equations.
Pineda, Silvia; Real, Francisco X; Kogevinas, Manolis; Carrato, Alfredo; Chanock, Stephen J; Malats, Núria; Van Steen, Kristel
2015-12-01
Omics data integration is becoming necessary to investigate the genomic mechanisms involved in complex diseases. During the integration process, many challenges arise such as data heterogeneity, the smaller number of individuals in comparison to the number of parameters, multicollinearity, and interpretation and validation of results due to their complexity and lack of knowledge about biological processes. To overcome some of these issues, innovative statistical approaches are being developed. In this work, we propose a permutation-based method to concomitantly assess significance and correct by multiple testing with the MaxT algorithm. This was applied with penalized regression methods (LASSO and ENET) when exploring relationships between common genetic variants, DNA methylation and gene expression measured in bladder tumor samples. The overall analysis flow consisted of three steps: (1) SNPs/CpGs were selected per each gene probe within 1Mb window upstream and downstream of the gene; (2) LASSO and ENET were applied to assess the association between each expression probe and the selected SNPs/CpGs in three multivariable models (SNP, CPG, and Global models, the latter integrating SNPs and CPGs); and (3) the significance of each model was assessed using the permutation-based MaxT method. We identified 48 genes whose expression levels were significantly associated with both SNPs and CPGs. Importantly, 36 (75%) of them were replicated in an independent data set (TCGA) and the performance of the proposed method was checked with a simulation study. We further support our results with a biological interpretation based on an enrichment analysis. The approach we propose allows reducing computational time and is flexible and easy to implement when analyzing several types of omics data. Our results highlight the importance of integrating omics data by applying appropriate statistical strategies to discover new insights into the complex genetic mechanisms involved in disease.
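The permutation-based maxT correction at the heart of the workflow can be sketched independently of the penalized fits. Here a cheap per-outcome statistic (maximum absolute predictor-outcome correlation) stands in for the LASSO/ENET model score; the Westfall-Young maxT logic, permuting sample labels and recording the maximum statistic across outcomes, is the same whatever the statistic. All names are ours:

```python
import numpy as np

def maxt_pvalues(X, Y, n_perm=1000, seed=None):
    """Family-wise-error-adjusted p-values by the permutation maxT method.

    X: n x p predictors (e.g. SNPs/CpGs); Y: n x m outcomes (e.g. expression
    probes). Statistic per outcome: max |correlation| with any predictor."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    Xs = (X - X.mean(0)) / X.std(0)
    Ys = (Y - Y.mean(0)) / Y.std(0)

    def stats(Yc):
        return np.abs(Xs.T @ Yc).max(0) / n    # max |corr| per outcome

    t_obs = stats(Ys)
    # Null: permute sample labels, keep only the maximum over all outcomes.
    t_max = np.array([stats(Ys[rng.permutation(n)]).max()
                      for _ in range(n_perm)])
    # Adjusted p-value: share of permutation maxima exceeding each statistic.
    return (1 + (t_max[:, None] >= t_obs).sum(0)) / (1 + n_perm)
```

Because only the permutation maxima are stored, the correction accounts for correlation among tests without enumerating all pairs, which is what keeps the computational cost manageable.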
Apparatus and method for defect testing of integrated circuits
Cole, Jr., Edward I.; Soden, Jerry M.
2000-01-01
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V.sub.DD, to an IC under test and measures a transient voltage component, V.sub.DDT, signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V.sub.DDT signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V.sub.DDT signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
Apparatus and method for defect testing of integrated circuits
Cole, E.I. Jr.; Soden, J.M.
2000-02-29
An apparatus and method for defect and failure-mechanism testing of integrated circuits (ICs) is disclosed. The apparatus provides an operating voltage, V(DD), to an IC under test and measures a transient voltage component, V(DDT), signal that is produced in response to switching transients that occur as test vectors are provided as inputs to the IC. The amplitude or time delay of the V(DDT) signal can be used to distinguish between defective and defect-free (i.e. known good) ICs. The V(DDT) signal is measured with a transient digitizer, a digital oscilloscope, or with an IC tester that is also used to input the test vectors to the IC. The present invention has applications for IC process development, for the testing of ICs during manufacture, and for qualifying ICs for reliability.
Attachment method for stacked integrated circuit (IC) chips
Bernhardt, A.F.; Malba, V.
1999-08-03
An attachment method for stacked integrated circuit (IC) chips is disclosed. The method involves connecting stacked chips, such as DRAM memory chips, to each other and/or to a circuit board. Pads on the individual chips are rerouted to form pads on the side of the chip, after which the chips are stacked on top of each other whereby desired interconnections to other chips or a circuit board can be accomplished via the side-located pads. The pads on the side of a chip are connected to metal lines on a flexible plastic tape (flex) by anisotropically conductive adhesive (ACA). Metal lines on the flex are likewise connected to other pads on chips and/or to pads on a circuit board. In the case of a stack of DRAM chips, pads to corresponding address lines on the various chips may be connected to the same metal line on the flex to form an address bus. This method has the advantage of reducing the number of connections required to be made to the circuit board due to bussing; the flex can accommodate dimensional variation in the alignment of chips in the stack; bonding of the ACA is accomplished at low temperature and is otherwise simpler and less expensive than solder bonding; chips can be bonded to the ACA all at once if the sides of the chips are substantially coplanar, as in the case for stacks of identical chips, such as DRAM. 12 figs.
Attachment method for stacked integrated circuit (IC) chips
Bernhardt, Anthony F.; Malba, Vincent
1999-01-01
An attachment method for stacked integrated circuit (IC) chips. The method involves connecting stacked chips, such as DRAM memory chips, to each other and/or to a circuit board. Pads on the individual chips are rerouted to form pads on the side of the chip, after which the chips are stacked on top of each other whereby desired interconnections to other chips or a circuit board can be accomplished via the side-located pads. The pads on the side of a chip are connected to metal lines on a flexible plastic tape (flex) by anisotropically conductive adhesive (ACA). Metal lines on the flex are likewise connected to other pads on chips and/or to pads on a circuit board. In the case of a stack of DRAM chips, pads to corresponding address lines on the various chips may be connected to the same metal line on the flex to form an address bus. This method has the advantage of reducing the number of connections required to be made to the circuit board due to bussing; the flex can accommodate dimensional variation in the alignment of chips in the stack; bonding of the ACA is accomplished at low temperature and is otherwise simpler and less expensive than solder bonding; chips can be bonded to the ACA all at once if the sides of the chips are substantially coplanar, as in the case for stacks of identical chips, such as DRAM.
Tong, Pan; Coombes, Kevin R.
2012-01-01
Motivation: Identifying genes altered in cancer plays a crucial role in both understanding the mechanism of carcinogenesis and developing novel therapeutics. It is known that there are various mechanisms of regulation that can lead to gene dysfunction, including copy number change, methylation, abnormal expression, mutation and so on. Nowadays, all these types of alterations can be simultaneously interrogated by different types of assays. Although many methods have been proposed to identify altered genes from a single assay, there is no method that can deal with multiple assays accounting for different alteration types systematically. Results: In this article, we propose a novel method, integration using item response theory (integIRTy), to identify altered genes by using item response theory that allows integrated analysis of multiple high-throughput assays. When applied to a single assay, the proposed method is more robust and reliable than conventional methods such as Student’s t-test or the Wilcoxon rank-sum test. When used to integrate multiple assays, integIRTy can identify novel-altered genes that cannot be found by looking at individual assay separately. We applied integIRTy to three public cancer datasets (ovarian carcinoma, breast cancer, glioblastoma) for cross-assay type integration which all show encouraging results. Availability and implementation: The R package integIRTy is available at the web site http://bioinformatics.mdanderson.org/main/OOMPA:Overview. Contact: kcoombes@mdanderson.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23014630
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kanae, S.; Seto, S.; Hirabayashi, Y.; Oki, T.
2012-12-01
Bias-correction methods applied to monthly temperature and precipitation data simulated by multiple General Circulation Models (GCMs) are evaluated in this study. Although various methods have been proposed recently, an intercomparison among them using multiple GCM simulations has seldom been reported. Here, five previous methods as well as a proposed new method are compared. Before the comparison, we classified the previous methods. The methods proposed in previous studies can be classified into four types based on the following two criteria: 1) whether the statistics (e.g. mean, standard deviation, or the coefficient of variation) of the future simulation are used in bias-correction; and 2) whether the estimation of cumulative probability is included in bias-correction. The methods which require future statistics depend on the data in the projection period, while those which do not are independent of it. The proposed classification can characterize each bias-correction method. These methods are applied to temperature and precipitation simulated from 12 GCMs in the Coupled Model Intercomparison Project (CMIP3) archives. Parameters of each method are calibrated by using 1948-1972 observed data and validated for the 1974-1998 period. These methods are then applied to GCM future simulations (2073-2097), and the bias-corrected data are intercompared. For the historical simulation, negligible difference can be found between observed and bias-corrected data. However, the difference in the future simulation is large, depending on the characteristics of each method. The frequency (probability) that the 2073-2097 bias-corrected data exceed the 95th percentile of the 1948-1972 observed data is estimated in order to evaluate the differences among methods. The difference between the proposed method and one of the previous methods is more than 10% in many areas. The differences of bias-corrected data among methods are discussed based on their respective characteristics. The results
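A representative member of the family being compared is empirical quantile mapping: each future model value is assigned its cumulative probability under the historical model distribution and then mapped to the observed value at that probability. This variant uses cumulative-probability estimation but no future-period statistics, placing it in one of the four classes described above (a generic sketch, not a specific method from the comparison):

```python
import numpy as np

def quantile_map(obs_hist, mod_hist, mod_fut):
    """Empirical quantile-mapping bias correction.

    obs_hist: observed values in the calibration period.
    mod_hist: model values in the same period.
    mod_fut:  model values in the projection period to be corrected."""
    mod_sorted = np.sort(mod_hist)
    # Cumulative probability of each future value in the model's climate.
    p = np.searchsorted(mod_sorted, mod_fut, side="right") / (len(mod_sorted) + 1)
    # Map those probabilities onto the observed distribution.
    return np.quantile(obs_hist, p)
```

Values beyond the historical model range are clipped to the tails of the observed distribution, one of the behavioral differences between methods that becomes visible precisely in the exceedance-frequency comparison the abstract describes.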
A Multi-Index Integrated Change detection method for updating the National Land Cover Database
Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin G.
2010-01-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.
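Three of the four ingredients named above (NBR, NDVI, and the change vector) are simple per-pixel band algebra and can be sketched directly; the RCV index and the decision model that integrates the four indices are not reproduced here, and the dictionary-of-bands interface is our own convention:

```python
import numpy as np

def indices(nir, red, swir2):
    """NDVI and NBR from surface reflectance bands (Landsat-style naming)."""
    ndvi = (nir - red) / (nir + red)
    nbr = (nir - swir2) / (nir + swir2)
    return ndvi, nbr

def change_features(date1, date2):
    """Per-pixel change features between two dates: differenced NDVI and NBR
    plus the change-vector magnitude over all bands. date1/date2: dicts of
    band arrays with keys 'nir', 'red', 'swir2'."""
    ndvi1, nbr1 = indices(date1['nir'], date1['red'], date1['swir2'])
    ndvi2, nbr2 = indices(date2['nir'], date2['red'], date2['swir2'])
    stack1 = np.stack([date1[k] for k in ('nir', 'red', 'swir2')])
    stack2 = np.stack([date2[k] for k in ('nir', 'red', 'swir2')])
    cv = np.sqrt(((stack2 - stack1) ** 2).sum(axis=0))  # change-vector magnitude
    # The sign of dNBR/dNDVI carries the change direction the model reports:
    # negative dNBR suggests biomass decrease (e.g. fire, harvest).
    return {'dNDVI': ndvi2 - ndvi1, 'dNBR': nbr2 - nbr1, 'CV': cv}
```

The magnitude (CV) locates change while the signed index differences supply the direction, which is how a multi-index model can separate, say, forest harvest from regeneration.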
A Multi-Index Integrated Change Detection Method for Updating the National Land Cover Database
NASA Astrophysics Data System (ADS)
Jin, S.; Yang, L.; Xian, G. Z.; Danielson, P.; Homer, C.
2010-12-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas will be showcased from the pilot test sites.
Trigonometrically fitted two step hybrid method for the numerical integration of second order IVPs
NASA Astrophysics Data System (ADS)
Monovasilis, Th.; Kalogiratou, Z.; Simos, T. E.
2016-06-01
In this work we consider the numerical integration of second order ODEs where the first derivative is missing. We construct trigonometrically fitted two step hybrid methods. We apply the new methods on the numerical integration of several test problems.
Hubbell rectangular source integral calculation using a fast Chebyshev wavelets method.
Manai, K; Belkadhi, K
2016-07-01
An integration method based on Chebyshev wavelets is presented and used to calculate the Hubbell rectangular source integral. A study of the convergence and the accuracy of the method was carried out by comparing it to previous studies. PMID:27152913
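The abstract does not reproduce the authors' Chebyshev-wavelet quadrature, but the flavor of Chebyshev-based numerical integration can be shown with classical Gauss-Chebyshev quadrature. This is a generic sketch rather than the paper's method; the weight 1/sqrt(1-x^2) built into the rule is cancelled by rescaling each sample.

```python
import math

def gauss_chebyshev(f, n=64):
    """Approximate the integral of f over [-1, 1] using Gauss-Chebyshev nodes.

    Nodes x_k = cos((2k-1)pi/(2n)); multiplying each sample by
    sqrt(1 - x_k^2) cancels the rule's built-in weight function.
    """
    total = 0.0
    for k in range(1, n + 1):
        x = math.cos((2 * k - 1) * math.pi / (2 * n))
        total += f(x) * math.sqrt(1.0 - x * x)
    return math.pi / n * total

# exact value of the integral of x^2 over [-1, 1] is 2/3
approx = gauss_chebyshev(lambda x: x * x)
```

For smooth integrands the error decays with n; the hierarchical refinement offered by a wavelet basis is what the paper adds on top of this kind of Chebyshev machinery.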
Atomic Calculations with a One-Parameter, Single Integral Method.
ERIC Educational Resources Information Center
Baretty, Reinaldo; Garcia, Carmelo
1989-01-01
Presents an energy function E(p) containing a single integral and one variational parameter, alpha. Represents all two-electron integrals within the local density approximation as a single integral. Identifies this as a simple treatment for use in an introductory quantum mechanics course. (MVL)
mulPBA: an efficient multiple protein structure alignment method based on a structural alphabet.
Léonard, Sylvain; Joseph, Agnel Praveen; Srinivasan, Narayanaswamy; Gelly, Jean-Christophe; de Brevern, Alexandre G
2014-04-01
The increasing number of available protein structures requires efficient tools for multiple structure comparison. Indeed, multiple structural alignments are essential for the analysis of function, evolution and architecture of protein structures. For this purpose, we propose a new web server called multiple Protein Block Alignment (mulPBA). This server implements a method based on a structural alphabet to describe the backbone conformation of a protein chain in terms of dihedral angles. This 'sequence-like' representation enables the use of powerful sequence alignment methods for primary structure comparison, followed by an iterative refinement of the structural superposition. This approach yields alignments superior to most rigid-body alignment methods and highly comparable with flexible structure comparison approaches. We implement this method in a web server designed to perform multiple structure superimpositions from a set of structures given by the user. Outputs are given both as a sequence alignment and as superposed 3D structures, visualized directly by static images generated by PyMol or through a Jmol applet allowing dynamic interaction. Multiple global quality measures are given. Relatedness between structures is indicated by a distance dendrogram. Superimposed structures in PDB format can also be downloaded, and the results are obtained quickly. The mulPBA server can be accessed at www.dsimb.inserm.fr/dsimb_tools/mulpba/.
A method to visualize the evolution of multiple interacting spatial systems
NASA Astrophysics Data System (ADS)
Heitzler, Magnus; Hackl, Jürgen; Adey, Bryan T.; Iosifescu-Enescu, Ionut; Lam, Juan Carlos; Hurni, Lorenz
2016-07-01
Integrated modeling approaches are increasingly used to simulate the behavior of, and the interaction between, several interdependent systems. They are becoming more and more important in many fields, including, but not limited to, civil engineering, hydrology and climate impact research. When using these approaches, it is beneficial to be able to visualize both the intermediary and final results of scenario-based analyses conducted in both space and time. This requires appropriate visualization techniques that support efficient navigation between multiple such scenarios. In recent years, several innovative visualization techniques have been developed for such navigation purposes. These techniques, however, are limited to the representation of one system at a time, leaving room for improvement in visualizing the results of multiple scenarios for multiple interdependent spatio-temporal systems. To address this issue, existing multi-scenario navigation techniques based on small multiples and line graphs are extended with multiple system representations and inter-system impact representations. This not only supports understanding of the evolution of the systems under consideration but also eases the identification of events where one system significantly influences another. In addition, the concept of selective branching is described, which removes otherwise redundant information from the visualization by considering the logical and temporal dependencies between these systems. This visualization technique is applied to a risk assessment methodology that determines how different environmental systems (i.e. precipitation, flooding, and landslides) influence each other as well as how their impact on civil infrastructure affects society. The results of this work are concepts for improved visualization techniques for multiple interacting spatial systems. The successful validation with domain experts of
NASA Astrophysics Data System (ADS)
He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu
2014-11-01
Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors each have their own pros and cons, and no single sensor can handle a complex inspection task in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their respective strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-walled features for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position on the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation achieves rough alignment of the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markov model is then used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
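The transformation-estimation step of such a calibration can be sketched in simplified form. The following closed-form 2-D rigid registration (rotation plus translation) between matched point pairs is a stand-in assumption, far simpler than the Generalized Gauss-Markov estimation the paper uses, but it illustrates the same kind of least-squares alignment between two sensors' datasets.

```python
import math

def rigid_align_2d(src, dst):
    """Least-squares rotation and translation mapping src onto dst.

    src, dst: lists of matched (x, y) pairs. Closed-form 2-D Procrustes:
    theta = atan2(sum of cross products, sum of dot products) of the
    centred coordinates; the translation aligns the centroids.
    """
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    s_cross = s_dot = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy          # centred source point
        bx, by = dx - cdx, dy - cdy          # centred destination point
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty

def transform(theta, tx, ty, p):
    # apply the estimated rigid transform to a point
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)
```

For noise-free correspondences the known transform is recovered exactly; with noisy matched patches the same formulas return the least-squares fit.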
Integrated method for the measurement of trace atmospheric bases
NASA Astrophysics Data System (ADS)
Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.
2011-09-01
Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.
Integrated method for the measurement of trace nitrogenous atmospheric bases
NASA Astrophysics Data System (ADS)
Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.
2011-12-01
Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.
Unsteady aerodynamic simulation of multiple bodies in relative motion: A prototype method
NASA Technical Reports Server (NTRS)
Meakin, Robert L.
1989-01-01
A prototype method for time-accurate simulation of multiple aerodynamic bodies in relative motion is presented. The method is general and features unsteady chimera domain decomposition techniques and an implicit approximately factored finite-difference procedure to solve the time-dependent thin-layer Navier-Stokes equations. The method is applied to a set of two- and three- dimensional test problems to establish spatial and temporal accuracy, quantify computational efficiency, and begin to test overall code robustness.
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be
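A minimal sketch of this two-scale likelihood for a single sample unit can make the structure concrete. The parameter names below (psi for large-scale use, theta for small-scale presence at a station, one survey per method per station) are a simplified reading of this model class, not the authors' full formulation.

```python
def unit_likelihood(psi, theta, p, histories):
    """Likelihood of one sample unit's detection data.

    psi:   probability the unit is used by the species (large scale)
    theta: probability of presence at a station, given use (small scale)
    p:     method-specific detection probabilities
    histories: one list of 0/1 detections per station, one entry per method
    """
    prod_stations = 1.0
    any_detection = False
    for y in histories:
        if any(y):
            any_detection = True
        det = 1.0
        for pm, obs in zip(p, y):
            det *= pm if obs else (1.0 - pm)
        # local absence can only explain an all-zero station history
        absent = 0.0 if any(y) else (1.0 - theta)
        prod_stations *= theta * det + absent
    like = psi * prod_stations
    if not any_detection:
        like += 1.0 - psi   # unit not used: all-zero history is then certain
    return like

# two stations, two methods: one detection by method 1 at station 1
like = unit_likelihood(0.8, 0.6, [0.5, 0.3], [[1, 0], [0, 0]])
```

Maximizing the product of such terms over all sample units yields estimates of psi, theta, and the method-specific p values simultaneously, which is what lets the approach compare detection methods.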
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, J.D.; Bailey, L.L.; O'Connell, A.F.; Talancy, N.W.; Grant, E.H.C.; Gilbert, A.T.; Annand, E.M.; Husband, T.P.; Hines, J.E.
2008-01-01
1. Occupancy estimation and modelling based on detection-nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. 2. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. 3. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. 4. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, J.D.; Bailey, L.L.; O'Connell, Jr.; Talancy, N.W.; Campbell, Grant E.H.; Gilbert, A.T.; Annand, E.M.; Husband, T.P.; Hines, J.E.
2008-01-01
1. Occupancy estimation and modelling based on detection-nondetection data provide an effective way of exploring change in a species' distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. 2. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species' use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. 3. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. 4. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can
Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil
NASA Astrophysics Data System (ADS)
Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng
2014-08-01
Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm in the broadband signal focusing and the modified Gerschgorin disk estimator. The method of classification of multiple signals is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on the multi-platform direction finding and the global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform are built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
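The final positioning step can be illustrated with a much simpler stand-in than the authors' focusing and Gerschgorin-based algorithm: a brute-force grid search that minimizes the mismatch between measured and predicted arrival-time differences at the array sensors. The 2-D geometry, sensor coordinates, and the assumed sound speed in oil are illustrative values only.

```python
import math

V = 1413.0  # assumed ultrasound speed in transformer oil, m/s

def tdoas(source, sensors):
    """Arrival-time differences of the source's signal relative to sensor 0."""
    d = [math.dist(source, s) for s in sensors]
    return [(di - d[0]) / V for di in d[1:]]

def locate(sensors, measured, lo=0.0, hi=1.0, step=0.01):
    """Exhaustive grid search for the source minimising squared TDOA residuals."""
    best, best_err = None, float("inf")
    n = int(round((hi - lo) / step)) + 1
    for i in range(n):
        for j in range(n):
            cand = (lo + i * step, lo + j * step)
            err = sum((m - t) ** 2
                      for m, t in zip(measured, tdoas(cand, sensors)))
            if err < best_err:
                best, best_err = cand, err
    return best

sensors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # hypothetical array, m
true_src = (0.30, 0.70)
est = locate(sensors, tdoas(true_src, sensors))
```

With noise-free differences the search lands on the true grid cell; real multi-source PD positioning must first separate the sources' signals, which is where the direction-of-arrival estimation in the paper comes in.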
Baallal Jacobsen, Simo Abdessamad; Jensen, Niels B.; Kildegaard, Kanchana R.; Herrgård, Markus J.; Schneider, Konstantin; Koza, Anna; Forster, Jochen; Nielsen, Jens; Borodina, Irina
2016-01-01
Saccharomyces cerevisiae is widely used in the biotechnology industry for production of ethanol, recombinant proteins, food ingredients and other chemicals. In order to generate highly producing and stable strains, genome integration of genes encoding metabolic pathway enzymes is the preferred option. However, integration of pathway genes in single or few copies, especially those encoding rate-controlling steps, is often not sufficient to sustain high metabolic fluxes. By exploiting the sequence diversity in the long terminal repeats (LTR) of Ty retrotransposons, we developed a new set of integrative vectors, EasyCloneMulti, that enables multiple and simultaneous integration of genes in S. cerevisiae. By creating vector backbones that combine consensus sequences targeting subsets of Ty sequences with a quickly degrading selective marker, integrations at multiple genomic loci and a range of expression levels were obtained, as assessed with the green fluorescent protein (GFP) reporter system. The EasyCloneMulti vector set was applied to balance the expression of the rate-controlling step in the β-alanine pathway for biosynthesis of 3-hydroxypropionic acid (3HP). The best 3HP-producing clone, with 5.45 g/L of 3HP, produced 11 times more 3HP than the lowest-producing clone, which demonstrates the capability of EasyCloneMulti vectors to impact metabolic pathway enzyme activity. PMID:26934490
Non-destructive testing method and apparatus utilizing phase multiplication holography
Collins, H. Dale; Prince, James M.; Davis, Thomas J.
1984-01-01
An apparatus and method for imaging structural characteristics in test objects using radiation amenable to coherent signal processing methods. Frequency and phase multiplication of received flaw signals is used to simulate a test wavelength at least one to two orders of magnitude smaller than the actual wavelength. The apparent reduction in wavelength between the illumination and recording radiation yields a frequency-translated hologram. A hologram constructed with a high synthetic frequency and flaw phase multiplication is similar to a conventional acoustic hologram constructed at the high frequency.
NASA Astrophysics Data System (ADS)
Gangammanavar, Harsha
The contribution of renewable resources to the energy portfolio across the world has been steadily increasing over the past few years. Several studies predict the continuation of this trend in the future leading to large scale integration of renewable resources into energy networks. A principal challenge associated with this is the intermittency and non-dispatchability of the renewable sources. This necessitates incorporation of faster reserves, storage devices and similar services operating alongside the slow ramping conventional generators in the energy network. To maintain the robustness of such a network, there are proposals to require hourly planning for some resources, and sub-hourly planning for others: an hourly scale may be used for conventional generator production levels and a sub-hourly scale for renewable generator levels and/or storage and transmission network utilization. This dissertation will present a multiple time scale stochastic programming formulation of the economic dispatch problem and algorithmic frameworks to tackle it. The first approach highlights the difference between hourly and sub-hourly planning of economic dispatch and uses the two-stage Stochastic Decomposition (SD) algorithm. The second framework combines three principal components: optimization, dynamic control and simulation. The conventional generator decisions are obtained iteratively by solving a regularized linear problem in the first stage of SD. For these first stage decisions, a policy for recommending the dispatch decisions is identified using an Approximate Dynamic Programming based controller. A vector autoregression based simulator is used to provide the sub-hourly wind generation scenarios. The performance of these algorithms was tested on the IEEE model energy networks and the Illinois energy network. The insights gained regarding the benefits of sub-hourly planning and role of operating reserves/storage in energy network with high renewable penetration will be
NASA Astrophysics Data System (ADS)
Dickson, Neil E. M.; Comte, Jean-Christophe; Renard, Philippe; Straubhaar, Julien A.; McKinley, Jennifer M.; Ofterdinger, Ulrich
2015-08-01
The process of accounting for heterogeneity has made significant advances in statistical research, primarily in the framework of stochastic analysis and the development of multiple-point statistics (MPS). Among MPS techniques, the direct sampling (DS) method is tested to determine its ability to delineate heterogeneity from aerial magnetics data in a regional sandstone aquifer intruded by low-permeability volcanic dykes in Northern Ireland, UK. The use of two two-dimensional bivariate training images aids in creating spatial probability distributions of heterogeneities of hydrogeological interest, despite relatively `noisy' magnetics data (i.e. including hydrogeologically irrelevant urban noise and regional geologic effects). These distributions are incorporated into a hierarchy system where previously published density function and upscaling methods are applied to derive regional distributions of equivalent hydraulic conductivity tensor K. Several K models, as determined by several stochastic realisations of MPS dyke locations, are computed within groundwater flow models and evaluated by comparing modelled heads with field observations. Results show a significant improvement in model calibration when compared to a simplistic homogeneous and isotropic aquifer model that does not account for the dyke occurrence evidenced by airborne magnetic data. The best model is obtained when normal and reverse polarity dykes are computed separately within MPS simulations and when a probability threshold of 0.7 is applied. The presented stochastic approach also provides improvement when compared to a previously published deterministic anisotropic model based on the unprocessed (i.e. noisy) airborne magnetics. This demonstrates the potential of coupling MPS to airborne geophysical data for regional groundwater modelling.
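The direct sampling idea can be sketched in one dimension: to simulate each new node, randomly scan the training image for a location whose neighborhood matches the pattern already simulated, and paste the value found there. This toy version (binary facies, tiny neighborhood, hypothetical parameters) illustrates only the principle, not the 2-D bivariate implementation used in the paper.

```python
import random

def direct_sampling_1d(ti, n, neigh=3, threshold=0.0, max_scan=300, seed=7):
    """Toy 1-D direct sampling from a training image (list of facies codes).

    Each new node copies the value that follows the best-matching
    neighborhood found while randomly scanning the training image; the
    scan stops early once the mismatch fraction drops to `threshold`.
    """
    rng = random.Random(seed)
    sim = [ti[rng.randrange(len(ti))]]           # random seed value
    while len(sim) < n:
        m = min(neigh, len(sim))
        pattern = sim[-m:]                       # conditioning data event
        best_val, best_d = ti[0], float("inf")
        for _ in range(max_scan):
            pos = rng.randrange(m, len(ti))      # candidate node in the TI
            cand = ti[pos - m:pos]
            d = sum(a != b for a, b in zip(cand, pattern)) / m
            if d < best_d:
                best_val, best_d = ti[pos], d
            if best_d <= threshold:
                break
        sim.append(best_val)
    return sim

# hypothetical binary training image: sandstone (0) crossed by dykes (1)
ti = [0] * 12 + [1] * 3 + [0] * 10 + [1] * 3 + [0] * 12
sim = direct_sampling_1d(ti, 40)
```

Because values are pasted rather than parameterized, the simulation inherits the training image's pattern statistics (here, thin dyke bands in a sandstone background), which is exactly the property exploited when the training images encode dyke geometries seen in the magnetics data.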
Method for discriminating synchronous multiple lung cancers of the same histological type
Zhou, Xudong; Tian, Long; Fan, Jun; Lai, Yutian; Li, Shuangjiang; Che, Guowei; Huang, Jian
2016-01-01
Abstract With the development of imaging technology, an increasing number of synchronous multiple lung cancers (SMLCs) have been diagnosed in recent years. Patients with >1 tumor are diagnosed with either synchronous multiple primary lung cancers (SMPLCs) or other primary tumors and metastases. Clinical guidelines, histological characteristics, and molecular diagnostics have been used to discriminate SMPLCs from other multiple lung cancers. However, there is still ambiguity in the diagnosis of SMPLCs of the same histological type. We enrolled 24 patients with the same histological type of SMLCs and assessed their status using established clinical guidelines, comprehensive histological subtyping, and molecular analysis. The sum value of the differential microRNA (miRNA) expression profiles (ΔΔCt) with matched tumors was evaluated to discriminate SMPLCs of the same histological type from metastases. Twelve patients with lymph node metastases were included for comparison, and the sum value of the ΔΔCt of 5 miRNAs between primary tumors and lymph node metastases was <9. Patients definitively diagnosed with SMPLCs by integrated analysis were also classified as SMPLCs by miRNA analysis; 6 patients showed conflicting diagnoses by integrated and miRNA analysis and 14 were given the same classification. Analysis of miRNA expression profiles is considered to be a useful tool for discriminating SMPLCs from intrapulmonary metastases. PMID:27495091
Low-noise multiple watermarks technology based on complex double random phase encoding method
NASA Astrophysics Data System (ADS)
Zheng, Jihong; Lu, Rongwen; Sun, Liujie; Zhuang, Songlin
2010-11-01
Based on the double random phase encoding method (DRPE), watermarking technology may provide a stable and robust way to protect the copyright of printed material. However, due to its linear character, DRPE carries a serious security risk when attacked. In this paper, a complex encoding method, which adds chaotic encryption based on the logistic map before the DRPE coding, is presented and simulated. The results confirm that the complex method provides better security protection for the watermark. Furthermore, low-noise multiple watermarking is studied, in which multiple watermarks are embedded into one host print and decrypted individually with the corresponding phase keys. Digital simulation and mathematical analysis show that, for the same total embedding weight factor, multiple watermarking significantly improves the signal-to-noise ratio (SNR) of the output printed image. The complex multiple-watermark method may provide robust, stable, and reliable copyright protection with higher-quality printed images.
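A toy 1-D sketch of the scheme described: a logistic-map chaotic phase mask is applied before classical double random phase encoding (one random phase mask in the signal plane, one in the Fourier plane). The naive DFT, mask seeds, and test signal are illustrative assumptions, not the paper's parameters.

```python
import cmath
import math
import random

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform; fine for a toy-sized signal."""
    n = len(x)
    sgn = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sgn * 2j * math.pi * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def logistic_phase(n, x0=0.37, r=3.99):
    """Chaotic unit-modulus phase mask from the logistic map x <- r*x*(1-x)."""
    x, mask = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        mask.append(cmath.exp(2j * math.pi * x))
    return mask

def encrypt(signal, m1, m2, chaos):
    pre = [s * c * p for s, c, p in zip(signal, chaos, m1)]  # chaos + first mask
    spec = dft(pre)
    # second mask applied in the Fourier plane
    return dft([v * p for v, p in zip(spec, m2)], inverse=True)

def decrypt(cipher, m1, m2, chaos):
    spec = dft(cipher)
    pre = dft([v / p for v, p in zip(spec, m2)], inverse=True)
    return [v / (c * p) for v, c, p in zip(pre, chaos, m1)]

rng = random.Random(1)
n = 8
m1 = [cmath.exp(2j * math.pi * rng.random()) for _ in range(n)]
m2 = [cmath.exp(2j * math.pi * rng.random()) for _ in range(n)]
chaos = logistic_phase(n)
signal = [complex(v) for v in (1, 0, 2, 3, 0, 1, 0, 2)]
recovered = decrypt(encrypt(signal, m1, m2, chaos), m1, m2, chaos)
```

Decryption requires both phase keys and the logistic-map seed; the chaotic pre-coding breaks the linearity that makes plain DRPE vulnerable to known-plaintext attacks.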
Marucci, Evandro A.; Neves, Leandro A.; Valêncio, Carlo R.; Pinto, Alex R.; Cansian, Adriano M.; de Souza, Rogeria C. G.; Shiyou, Yang; Machado, José M.
2014-01-01
With the advance of genomic research, the number of sequences involved in comparative methods has grown immensely. Among them are methods for similarity calculation, which are used by many bioinformatics applications. Due to the huge amount of data, combining low-complexity methods with parallel computing is becoming desirable. k-mers counting is a very efficient method with good biological results. In this work, the development of a parallel algorithm for multiple sequence similarity calculation using the k-mers counting method is proposed. Tests show that the algorithm presents very good scalability and a nearly linear speedup: for 14 nodes, a 12x speedup was obtained. This algorithm can be used in the parallelization of some multiple sequence alignment tools, such as MAFFT and MUSCLE. PMID:25140318
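The serial core of k-mers counting similarity is simple to sketch, which is what makes it attractive for parallelization across sequence pairs. The normalization below (shared k-mer counts over the shorter sequence's k-mer total) is an assumption for illustration, not necessarily the paper's exact measure.

```python
def kmer_counts(seq, k=3):
    # tally every length-k substring of the sequence
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def kmer_similarity(a, b, k=3):
    """Shared k-mer fraction: sum of per-k-mer minimum counts over the
    k-mer total of the shorter sequence (1.0 for identical sequences)."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    shared = sum(min(n, cb.get(kmer, 0)) for kmer, n in ca.items())
    return shared / min(len(a) - k + 1, len(b) - k + 1)
```

Each pairwise score is independent of the others, so an all-against-all similarity matrix parallelizes trivially by distributing sequence pairs across nodes.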
[A factor analysis method for contingency table data with unlimited multiple choice questions].
Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie
2016-02-01
The purpose of this study is to propose a method of factor analysis for analyzing contingency tables developed from the data of unlimited multiple-choice questions. This method assumes that the element of each cell of the contingency table has a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, the standardized difference between the sample and the proportion ratio, is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results showed that this method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological studies using unlimited multiple-choice questions.
The initial rise method extended to multiple trapping levels in thermoluminescent materials.
Furetta, C; Guzmán, S; Ruiz, B; Cruz-Zaragoza, E
2011-02-01
The well-known Initial Rise Method (IR) is commonly used to determine the activation energy when only one glow peak is present and analysed in the phosphor materials. However, when the glow curve is more complex, a wide peak with some shoulders appears in the structure; because multiple trapping levels are then involved, the straightforward application of the Initial Rise Method is not valid and the thermoluminescent analysis becomes difficult to perform. This paper takes a complex glow curve structure as an example and shows that the calculation is still possible using the IR method. The aim of the paper is thus to extend the well-known Initial Rise Method to the case of multiple trapping levels. The IR method is applied to minerals extracted from Nopal cactus and oregano spices because the shape of the thermoluminescent glow curve suggests a trap distribution instead of a single trapping level. PMID:21051238
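The IR calculation itself is a linear fit: on the low-temperature rise of a peak, I(T) is proportional to exp(-E/(kT)), so ln I plotted against 1/T has slope -E/k. A minimal sketch with synthetic data follows; the 1.0 eV trap depth and the temperature range are assumptions for demonstration.

```python
import math

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def initial_rise_energy(temps, intensities):
    """Activation energy (eV) from the initial-rise relation I ~ exp(-E/(kT)).

    Ordinary least squares of ln(I) against 1/T; the slope equals -E/k.
    """
    xs = [1.0 / t for t in temps]
    ys = [math.log(i) for i in intensities]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return -slope * K_B

# synthetic initial-rise data for a hypothetical 1.0 eV trap
E_true = 1.0
temps = [330.0, 335.0, 340.0, 345.0, 350.0]  # K
intensities = [math.exp(-E_true / (K_B * t)) for t in temps]
E_est = initial_rise_energy(temps, intensities)
```

For a distribution of traps, applying this fit repeatedly to successive low-temperature slices of the glow curve yields an energy per slice, which is essentially how the method is extended beyond a single trapping level.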
Marucci, Evandro A; Zafalon, Geraldo F D; Momente, Julio C; Neves, Leandro A; Valêncio, Carlo R; Pinto, Alex R; Cansian, Adriano M; de Souza, Rogeria C G; Shiyou, Yang; Machado, José M
2014-01-01
With the advance of genomic research, the number of sequences involved in comparative methods has grown immensely. Among them are methods for similarity calculation, which are used by many bioinformatics applications. Due to the huge amount of data, combining low-complexity methods with parallel computing is becoming desirable. k-mers counting is a very efficient method with good biological results. In this work, the development of a parallel algorithm for multiple sequence similarity calculation using the k-mers counting method is proposed. Tests show that the algorithm presents very good scalability and a nearly linear speedup: a 12x speedup was obtained on 14 nodes. This algorithm can be used in the parallelization of some multiple sequence alignment tools, such as MAFFT and MUSCLE. PMID:25140318
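As a rough serial sketch of the k-mers counting approach: count overlapping k-mers per sequence and compare the count vectors. The similarity measure here is cosine similarity, an assumption for illustration; the paper's exact formula and its parallel decomposition are not reproduced.

```python
import math
from collections import Counter

def kmer_counts(seq, k=3):
    """Count all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_similarity(a, b, k=3):
    """Cosine similarity between the k-mer count vectors of two sequences:
    1.0 for identical k-mer profiles, 0.0 for disjoint ones."""
    ca, cb = kmer_counts(a, k), kmer_counts(b, k)
    dot = sum(ca[m] * cb[m] for m in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Because each pairwise comparison is independent, an all-against-all similarity matrix parallelizes trivially by distributing sequence pairs across nodes.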
Multiscale renormalization group methods for effective potentials with multiple scalar fields
NASA Astrophysics Data System (ADS)
Wang, Zhi-Wei; Steele, Tom; McKeon, Gerry
2015-04-01
Conformally symmetric scalar extensions of the Standard Model are particularly appealing for revealing the underlying mechanism of electroweak symmetry breaking and for providing dark matter candidates. The Gildener & Weinberg (GW) method is widely used in these models, but is limited to weakly coupled theories. In this talk, multi-scale renormalization group (RG) methods are reviewed and applied to the analysis of the effective potential for radiative symmetry breaking with multiple scalar fields, allowing an extension of the GW method beyond the weak coupling limit. A model containing two interacting real scalar fields is used as an example to illustrate these multi-scale RG methods. Extensions of these multi-scale methods for effective potentials in models containing multiple scalars with O(M) × O(N) symmetry will also be discussed. Research funded by NSERC (Natural Sciences and Engineering Research Council of Canada).
[A factor analysis method for contingency table data with unlimited multiple choice questions].
Toyoda, Hideki; Haiden, Reina; Kubo, Saori; Ikehara, Kazuya; Isobe, Yurie
2016-02-01
The purpose of this study is to propose a factor analysis method for analyzing contingency tables developed from the data of unlimited multiple-choice questions. The method assumes that the element of each cell of the contingency table follows a binomial distribution, and a factor analysis model is applied to the logit of the selection probability. A scree plot and WAIC are used to decide the number of factors, and the standardized residual, i.e., the standardized difference between the sample proportion and the fitted proportion, is used to select items. The proposed method was applied to real product-impression research data on advertised chips and energy drinks. The results showed that the method can be used in conjunction with the conventional factor analysis model and that the extracted factors were fully interpretable, suggesting the usefulness of the proposed method in psychological research using unlimited multiple-choice questions. PMID:26964368
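For illustration, a standardized residual of the kind used for item selection, under a binomial cell model, might be computed as follows (the paper's exact residual definition may differ; this is the textbook form):

```python
import math

def standardized_residual(count, n, p_fit):
    """Standardized residual of a contingency-table cell under a binomial
    model: (sample proportion - fitted proportion) / binomial standard error.
    Large absolute values flag items the model fits poorly."""
    p_obs = count / n
    se = math.sqrt(p_fit * (1.0 - p_fit) / n)
    return (p_obs - p_fit) / se
```

For example, 60 selections out of 100 respondents against a fitted probability of 0.5 gives a residual of 2, a borderline misfit at conventional thresholds.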
Statistical Methods for Magnetic Resonance Image Analysis with Applications to Multiple Sclerosis
NASA Astrophysics Data System (ADS)
Pomann, Gina-Maria
image regression techniques have been shown to have modest performance for assessing the integrity of the blood-brain barrier based on imaging without contrast agents. These models have centered on the problem of cross-sectional classification in which patients are imaged at a single study visit and pre-contrast images are used to predict post-contrast imaging. In this paper, we extend these methods to incorporate historical imaging information, and we find the proposed model to exhibit improved performance. We further develop scan-stratified case-control sampling techniques that reduce the computational burden of local image regression models while respecting the low proportion of the brain that exhibits abnormal vascular permeability. In the third part of this thesis, we present methods to evaluate tissue damage in patients with MS. We propose a lag functional linear model to predict a functional response using multiple functional predictors observed at discrete grids with noise. Two procedures are proposed to estimate the regression parameter functions; 1) a semi-local smoothing approach using generalized cross-validation; and 2) a global smoothing approach using a restricted maximum likelihood framework. Numerical studies are presented to analyze predictive accuracy in many realistic scenarios. We find that the global smoothing approach results in higher predictive accuracy than the semi-local approach. The methods are employed to estimate a measure of tissue damage in patients with MS. In patients with MS, the myelin sheaths around the axons of the neurons in the brain and spinal cord are damaged. The model facilitates the use of commonly acquired imaging modalities to estimate a measure of tissue damage within lesions. The proposed model outperforms the cross-sectional models that do not account for temporal patterns of lesional development and repair.
NASA Astrophysics Data System (ADS)
Sica, Robert; Haefele, Alexander
2016-04-01
While the application of optimal estimation methods (OEMs) is well-known for the retrieval of atmospheric parameters from passive instruments, active instruments have typically not employed the OEM. For instance, the measurement of temperature in the middle atmosphere with Rayleigh-scatter lidars is an important technique for assessing atmospheric change. Current retrieval schemes for these temperatures have several shortcomings which can be overcome using an OEM. Forward models have been constructed that fully characterize the measurement and allow the simultaneous retrieval of temperature, dead time and background. The OEM allows a full uncertainty budget to be obtained on a per profile basis that includes, in addition to the statistical uncertainties, the smoothing error and uncertainties due to Rayleigh extinction, ozone absorption, the lidar constant, nonlinearity in the counting system, variation of the Rayleigh-scatter cross section with altitude, pressure, acceleration due to gravity and the variation of mean molecular mass with altitude. The vertical resolution of the temperature profile is found at each height, and a quantitative determination is made of the maximum height to which the retrieval is valid. A single temperature profile can be retrieved from measurements with multiple channels that cover different height ranges, vertical resolutions and even different detection methods. The OEM employed is shown to give robust estimates of temperature consistent with previous methods, while requiring minimal computational time. Retrieval of water vapour mixing ratio from vibrational Raman scattering lidar measurements is another example where an OEM offers a considerable advantage over the standard analysis technique, with the same advantages as discussed above for Rayleigh-scatter temperatures but with an additional benefit. The conversion of the lidar measurement into mixing ratio requires a calibration constant to be employed. Using OEM the calibration
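The core OEM update is the standard linear maximum a posteriori retrieval (after Rodgers): combine an a priori state with the measurement, weighted by their covariances. A minimal linear sketch; the lidar forward models described above are nonlinear and would be iterated around this update:

```python
import numpy as np

def oem_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation (MAP) retrieval:
    x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a),
    with y the measurement, K the Jacobian of the forward model,
    x_a / S_a the prior state and covariance, S_e the noise covariance."""
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
    A = S_hat @ K.T @ Se_inv @ K                       # averaging kernel
    return x_hat, S_hat, A

# Toy retrieval: direct observation of a 2-element state with small noise.
K = np.eye(2)
x_hat, S_hat, A = oem_retrieval(np.array([1.0, 2.0]), K, np.zeros(2),
                                100.0 * np.eye(2), 1e-6 * np.eye(2))
```

The posterior covariance S_hat supplies the per-profile uncertainty budget, and the averaging kernel A gives the vertical resolution and the height range over which the retrieval is measurement- rather than prior-dominated.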
High-quality slab-based intermixing method for fusion rendering of multiple medical objects.
Kim, Dong-Joon; Kim, Bohyoung; Lee, Jeongjin; Shin, Juneseuk; Kim, Kyoung Won; Shin, Yeong-Gil
2016-01-01
The visualization of multiple 3D objects has been increasingly required for recent applications in medical fields. Due to heterogeneity in data representation or data configuration, it is difficult to efficiently render multiple medical objects in high quality. In this paper, we present a novel intermixing scheme for fusion rendering of multiple medical objects while preserving real-time performance. First, we present an in-slab visibility interpolation method for the representation of subdivided slabs. Second, we introduce virtual zSlab, which extends an infinitely thin boundary (such as polygonal objects) into a slab with a finite thickness. Finally, based on virtual zSlab and in-slab visibility interpolation, we propose a slab-based visibility intermixing method with a newly proposed rendering pipeline. Experimental results demonstrate that the proposed method delivers more effective multiple-object renderings, in terms of rendering quality, than conventional approaches. In addition, the proposed intermixing scheme provides high-quality intermixing results for the visualization of intersecting and overlapping surfaces by resolving aliasing and z-fighting problems. Moreover, two case studies are presented that apply the proposed method to real clinical applications. These case studies demonstrate that the proposed method has the outstanding advantages of rendering independency and reusability.
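Slab intermixing ultimately reduces to compositing visibility-weighted contributions along each viewing ray. A generic front-to-back compositing sketch for one grayscale ray (the paper's in-slab visibility interpolation and virtual zSlab are not reproduced):

```python
def composite_front_to_back(samples):
    """Front-to-back compositing with the 'over' operator along one ray;
    samples are (grayscale color, alpha) pairs ordered front to back."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # accumulate visibility-weighted color
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha >= 0.999:               # early ray termination
            break
    return color, alpha
```

In a slab-based pipeline, each slab contributes one such (color, alpha) pair per ray; interleaving the slabs of different objects in depth order is what intermixes them correctly.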
The use of artificial intelligence techniques to improve the multiple payload integration process
NASA Technical Reports Server (NTRS)
Cutts, Dannie E.; Widgren, Brian K.
1992-01-01
A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component then, in successful mission payload integration is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.
Keating, Kristina; Slater, Lee; Ntarlagiannis, Dimitris; Williams, Kenneth H.
2015-02-24
This document contains the final report for the project "Integrated Geophysical Measurements for Bioremediation Monitoring: Combining Spectral Induced Polarization, Nuclear Magnetic Resonance and Magnetic Methods" (DE-SC0007049). Executive Summary: Our research aimed to develop borehole measurement techniques capable of monitoring subsurface processes, such as changes in pore geometry and iron/sulfur geochemistry, associated with remediation of heavy metals and radionuclides. Previous work has demonstrated that the geophysical method of spectral induced polarization (SIP) can be used to assess subsurface contaminant remediation; however, SIP signals can be generated from multiple sources, limiting their interpretation value. Integrating multiple geophysical methods, such as nuclear magnetic resonance (NMR) and magnetic susceptibility (MS), with SIP could reduce the ambiguity of interpretation that might result from a single method. Our research entails combining measurements from these methods, each sensitive to different mineral forms and/or mineral-fluid interfaces, providing better constraints on changes in subsurface biogeochemical processes and pore geometries and significantly improving our understanding of processes impacting contaminant remediation. The Rifle Integrated Field Research Challenge (IFRC) site was used as a test location for our measurements. The Rifle IFRC site is located at a former uranium ore-processing facility in Rifle, Colorado. Leachate from spent mill tailings has resulted in residual uranium contamination of both groundwater and sediments within the local aquifer. Studies at the site include an ongoing acetate amendment strategy, in which native microbial populations are stimulated by the introduction of carbon intended to alter redox conditions and immobilize uranium. To test the geophysical methods in the field, NMR and MS logging measurements were collected before, during, and after acetate amendment. Next, laboratory NMR, MS, and SIP measurements
Transient 3d contact problems—NTS method: mixed methods and conserving integration
NASA Astrophysics Data System (ADS)
Hesch, Christian; Betsch, Peter
2011-10-01
The present work deals with a new formulation for transient large deformation contact problems. It is well known that one-step implicit time integration schemes for highly non-linear systems fail to conserve the total energy of the system. To deal with this drawback, a mixed method is newly proposed in conjunction with the concept of a discrete gradient. In particular, we reformulate the well-known and widely used node-to-segment methods and establish an energy-momentum scheme. The advocated approach ensures robustness and enhanced numerical stability, as demonstrated in several three-dimensional applications of the proposed algorithm.
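The discrete-gradient idea can be illustrated on a single degree of freedom: replacing V'(q) by the difference quotient (V(q1) − V(q0))/(q1 − q0) in a midpoint-type scheme makes the discrete energy conserved exactly, whatever the time step. A one-dimensional sketch of this concept, not the paper's mixed node-to-segment formulation:

```python
def discrete_gradient_step(q0, p0, dt, V, dV, m=1.0, tol=1e-12):
    """One step of a Gonzalez-type discrete-gradient (energy-momentum)
    integrator for a single degree of freedom.  Using the difference
    quotient (V(q1) - V(q0)) / (q1 - q0) in place of V'(q) conserves the
    discrete energy p^2/(2m) + V(q) exactly, independent of dt."""
    q1, p1 = q0, p0
    for _ in range(100):               # fixed-point iteration on (q1, p1)
        g = (V(q1) - V(q0)) / (q1 - q0) if q1 != q0 else dV(q0)
        p_new = p0 - dt * g
        q_new = q0 + dt * (p0 + p_new) / (2.0 * m)
        if abs(q_new - q1) < tol and abs(p_new - p1) < tol:
            return q_new, p_new
        q1, p1 = q_new, p_new
    return q1, p1

# Harmonic oscillator: energy is preserved to iteration tolerance.
V, dV = lambda x: 0.5 * x * x, lambda x: x
q, p = 1.0, 0.0
for _ in range(100):
    q, p = discrete_gradient_step(q, p, 0.1, V, dV)
drift = abs(0.5 * p * p + V(q) - 0.5)
```

The exact cancellation follows from (p1² − p0²)/2 = −dt·g·(p0 + p1)/2 and V(q1) − V(q0) = g·(q1 − q0) = dt·g·(p0 + p1)/2, which sum to zero by construction.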
Multiple Strategies for Spatial Integration of 2D Layouts within Working Memory
Meilinger, Tobias; Watanabe, Katsumi
2016-01-01
Prior results on the spatial integration of layouts within a room differed regarding the reference frame that participants used for integration. We asked whether these differences also occur when integrating 2D screen views and, if so, what the reasons for this might be. In four experiments we showed that integrating reference frames varied as a function of task familiarity combined with processing time, cues for spatial transformation, and information about action requirements paralleling results in the 3D case. Participants saw part of an object layout in screen 1, another part in screen 2, and reacted on the integrated layout in screen 3. Layout presentations between two screens coincided or differed in orientation. Aligning misaligned screens for integration is known to increase errors/latencies. The error/latency pattern was thus indicative of the reference frame used for integration. We showed that task familiarity combined with self-paced learning, visual updating, and knowing from where to act prioritized the integration within the reference frame of the initial presentation, which was updated later, and from where participants acted respectively. Participants also heavily relied on layout intrinsic frames. The results show how humans flexibly adjust their integration strategy to a wide variety of conditions. PMID:27101011
An integrated pan-tropical biomass map using multiple reference datasets.
Avitabile, Valerio; Herold, Martin; Heuvelink, Gerard B M; Lewis, Simon L; Phillips, Oliver L; Asner, Gregory P; Armston, John; Ashton, Peter S; Banin, Lindsay; Bayol, Nicolas; Berry, Nicholas J; Boeckx, Pascal; de Jong, Bernardus H J; DeVries, Ben; Girardin, Cecile A J; Kearsley, Elizabeth; Lindsell, Jeremy A; Lopez-Gonzalez, Gabriela; Lucas, Richard; Malhi, Yadvinder; Morel, Alexandra; Mitchard, Edward T A; Nagy, Laszlo; Qie, Lan; Quinones, Marcela J; Ryan, Casey M; Ferry, Slik J W; Sunderland, Terry; Laurin, Gaia Vaglio; Gatti, Roberto Cazzolla; Valentini, Riccardo; Verbeeck, Hans; Wijaya, Arief; Willcock, Simon
2016-04-01
We combined two existing datasets of vegetation aboveground biomass (AGB) (Proceedings of the National Academy of Sciences of the United States of America, 108, 2011, 9899; Nature Climate Change, 2, 2012, 182) into a pan-tropical AGB map at 1-km resolution using an independent reference dataset of field observations and locally calibrated high-resolution biomass maps, harmonized and upscaled to 14 477 1-km AGB estimates. Our data fusion approach uses bias removal and weighted linear averaging that incorporates and spatializes the biomass patterns indicated by the reference data. The method was applied independently in areas (strata) with homogeneous error patterns of the input (Saatchi and Baccini) maps, which were estimated from the reference data and additional covariates. Based on the fused map, we estimated AGB stock for the tropics (23.4 N-23.4 S) of 375 Pg dry mass, 9-18% lower than the Saatchi and Baccini estimates. The fused map also showed differing spatial patterns of AGB over large areas, with higher AGB density in the dense forest areas in the Congo basin, Eastern Amazon and South-East Asia, and lower values in Central America and in most dry vegetation areas of Africa than either of the input maps. The validation exercise, based on 2118 estimates from the reference dataset not used in the fusion process, showed that the fused map had a RMSE 15-21% lower than that of the input maps and, most importantly, nearly unbiased estimates (mean bias 5 Mg dry mass ha(-1) vs. 21 and 28 Mg ha(-1) for the input maps). The fusion method can be applied at any scale including the policy-relevant national level, where it can provide improved biomass estimates by integrating existing regional biomass maps as input maps and additional, country-specific reference datasets.
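The bias-removal-plus-weighted-averaging idea can be sketched per stratum as follows. The inverse-variance weights here are an assumption for illustration; the paper estimates biases and error patterns per stratum from the reference data and covariates.

```python
import numpy as np

def fuse_maps(map_a, map_b, bias_a, bias_b, var_a, var_b):
    """Per-stratum sketch of map fusion: subtract each input map's bias
    (estimated against reference data), then take an inverse-variance
    weighted linear average of the two bias-corrected maps."""
    a = np.asarray(map_a, float) - bias_a
    b = np.asarray(map_b, float) - bias_b
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_b)
    return w_a * a + (1.0 - w_a) * b

# Two one-pixel AGB "maps" (Mg/ha) with known biases and equal error variance:
fused = fuse_maps([120.0], [80.0], 20.0, -10.0, 1.0, 1.0)
```

With equal variances the bias-corrected maps are averaged evenly; where one input map is known to be noisier in a stratum, its weight shrinks accordingly.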
Kapil, V; VandeVondele, J; Ceriotti, M
2016-02-01
The development and implementation of increasingly accurate methods for electronic structure calculations mean that, for many atomistic simulation problems, treating light nuclei as classical particles is now one of the most serious approximations. Even though recent developments have significantly reduced the overhead for modeling the quantum nature of the nuclei, the cost is still prohibitive when combined with advanced electronic structure methods. Here we present how multiple time step integrators can be combined with ring-polymer contraction techniques (effectively, multiple time stepping in imaginary time) to reduce virtually to zero the overhead of modelling nuclear quantum effects, while describing inter-atomic forces at high levels of electronic structure theory. This is demonstrated for a combination of MP2 and semi-local DFT applied to the Zundel cation. The approach can be seamlessly combined with other methods to reduce the computational cost of path integral calculations, such as high-order factorizations of the Boltzmann operator or generalized Langevin equation thermostats.
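Multiple time stepping in its simplest (r-RESPA) form integrates the cheap "fast" forces with a small inner step while applying the expensive "slow" forces only once per outer step; ring-polymer contraction applies the same splitting in imaginary time. A one-particle sketch of the splitting, not the paper's MP2/DFT combination:

```python
def respa_step(q, p, dt, n_inner, f_fast, f_slow, m=1.0):
    """One r-RESPA multiple-time-step integration step (1 DOF):
    the expensive slow force is applied as half-kicks once per outer
    step dt; the cheap fast force is integrated with velocity Verlet
    at the smaller step dt / n_inner."""
    p += 0.5 * dt * f_slow(q)          # outer half-kick (slow force)
    h = dt / n_inner
    for _ in range(n_inner):           # inner velocity-Verlet loop
        p += 0.5 * h * f_fast(q)
        q += h * p / m
        p += 0.5 * h * f_fast(q)
    p += 0.5 * dt * f_slow(q)          # outer half-kick (slow force)
    return q, p

# Harmonic oscillator treated entirely as a "fast" force:
q, p = 1.0, 0.0
for _ in range(100):
    q, p = respa_step(q, p, 0.1, 10, lambda x: -x, lambda x: 0.0)
energy = 0.5 * (p * p + q * q)         # stays near the initial 0.5
```

In the ab initio setting, f_slow would be the difference between a high-level force (e.g. MP2) and a cheap reference, so the high-level calculation is needed only once per outer step.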
Calculation of unsteady transonic flows using the integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1978-01-01
The basic integral equations for a harmonically oscillating airfoil in a transonic flow with shock waves are derived; the reduced frequency is assumed to be small. The problems associated with shock wave motion are treated using a strained coordinate system. The integral equation is linear and consists of both line integrals and surface integrals over the flow field which are evaluated by quadrature. This leads to a set of linear algebraic equations that can be solved directly. The shock motion is obtained explicitly by enforcing the condition that the flow is continuous except at a shock wave. Results obtained for both lifting and nonlifting oscillatory flows agree satisfactorily with other accurate results.
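Reducing a linear integral equation to algebraic equations by quadrature is the Nyström method; a generic sketch for a Fredholm equation of the second kind (a stand-in illustration, not the paper's transonic kernel with line and surface integrals):

```python
import numpy as np

def nystrom_solve(kernel, f, a, b, n):
    """Nystrom method: discretize the linear Fredholm equation
    u(x) = f(x) + int_a^b K(x, t) u(t) dt with trapezoidal quadrature,
    yielding the dense linear system (I - W) u = f solved directly."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5                                   # trapezoidal weights
    W = kernel(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(np.eye(n) - W, f(x))

# K(x, t) = x*t with f(x) = 2x/3 has the exact solution u(x) = x.
x, u = nystrom_solve(lambda x, t: x * t, lambda x: 2.0 * x / 3.0, 0.0, 1.0, 201)
```

As in the paper, evaluating the integrals by quadrature turns the problem into a dense linear system that can be solved directly rather than iteratively.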
Developing integrated methods to address complex resource and environmental issues
Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.
2016-02-08
Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some
NASA Astrophysics Data System (ADS)
Hesch, Christian; Betsch, Peter
2011-10-01
The present work deals with the development of an energy-momentum conserving method for unilateral contact constraints and is a direct continuation of a previous work (Hesch and Betsch in Comput Mech 2011, doi: 10.1007/s00466-011-0597-2) dealing with the NTS method. In this work, we introduce the mortar method and a newly developed segmentation process for the consistent integration of the contact interface. For the application of the energy-momentum approach to mortar constraints, we extend an approach based on a mixed formulation to the segment definition of the mortar constraints. The enhanced numerical stability of the newly proposed discretization method is shown in several examples.
Reliable Transition State Searches Integrated with the Growing String Method.
Zimmerman, Paul
2013-07-01
The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
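The mode-locking constraint can be sketched as follows: among the eigenmodes of the TS node Hessian, the saddle search maximizes along the one with the largest overlap with the string tangent. A simplified stand-in for the paper's eigenvector-following optimizer in internal coordinates:

```python
import numpy as np

def ts_search_mode(hessian, tangent):
    """Pick the Hessian eigenmode to follow uphill in a saddle-point
    search: the one with the largest (absolute) overlap with the string
    tangent near the transition state."""
    vals, vecs = np.linalg.eigh(hessian)
    t = tangent / np.linalg.norm(tangent)
    overlaps = np.abs(vecs.T @ t)
    i = int(np.argmax(overlaps))
    return vals[i], vecs[:, i], overlaps[i]

# A saddle with one negative curvature along the first coordinate:
val, vec, overlap = ts_search_mode(np.diag([-1.0, 2.0]), np.array([1.0, 0.0]))
```

Requiring a significant overlap between the maximized eigenmode and the string tangent is what keeps the local search from drifting toward an unrelated saddle point.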
Method for integrating microelectromechanical devices with electronic circuitry
Barron, Carole C.; Fleming, James G.; Montague, Stephen
1999-01-01
A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry on a common substrate. The MEM device can be fabricated within a substrate cavity and encapsulated with a sacrificial material. This allows the MEM device to be annealed and the substrate planarized prior to forming electronic circuitry on the substrate using a series of standard processing steps. After fabrication of the electronic circuitry, the electronic circuitry can be protected by a two-ply protection layer of titanium nitride (TiN) and tungsten (W) during an etch release process whereby the MEM device is released for operation by etching away a portion of a sacrificial material (e.g. silicon dioxide or a silicate glass) that encapsulates the MEM device. The etch release process is preferably performed using a mixture of hydrofluoric acid (HF) and hydrochloric acid (HCl) which reduces the time for releasing the MEM device compared to use of a buffered oxide etchant. After release of the MEM device, the TiN:W protection layer can be removed with a peroxide-based etchant without damaging the electronic circuitry.
Involving Stakeholders in Building Integrated Fisheries Models Using Bayesian Methods
NASA Astrophysics Data System (ADS)
Haapasaari, Päivi; Mäntyniemi, Samu; Kuikka, Sakari
2013-06-01
A participatory Bayesian approach was used to investigate how the views of stakeholders could be utilized to develop models to help understand the Central Baltic herring fishery. In task one, we applied the Bayesian belief network methodology to elicit the causal assumptions of six stakeholders on factors that influence natural mortality, growth, and egg survival of the herring stock in probabilistic terms. We also integrated the expressed views into a meta-model using the Bayesian model averaging (BMA) method. In task two, we used influence diagrams to study qualitatively how the stakeholders frame the management problem of the herring fishery and elucidate what kind of causalities the different views involve. The paper combines these two tasks to assess the suitability of the methodological choices to participatory modeling in terms of both a modeling tool and participation mode. The paper also assesses the potential of the study to contribute to the development of participatory modeling practices. It is concluded that the subjective perspective to knowledge, that is fundamental in Bayesian theory, suits participatory modeling better than a positivist paradigm that seeks the objective truth. The methodology provides a flexible tool that can be adapted to different kinds of needs and challenges of participatory modeling. The ability of the approach to deal with small data sets makes it cost-effective in participatory contexts. However, the BMA methodology used in modeling the biological uncertainties is so complex that it needs further development before it can be introduced to wider use in participatory contexts.
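Bayesian model averaging in its basic form weights each expert model by its prior probability times its marginal likelihood, then averages the model predictions with those weights. A minimal numeric sketch (the paper's BMA over stakeholder belief networks is considerably more involved):

```python
import numpy as np

def bayesian_model_average(log_marginal_likelihoods, priors, predictions):
    """Posterior model weights proportional to prior * marginal likelihood;
    the BMA prediction is the weight-averaged model prediction."""
    w = np.asarray(priors, float) * np.exp(np.asarray(log_marginal_likelihoods, float))
    w = w / w.sum()
    return w, w @ np.asarray(predictions, float)

# Two equally plausible expert models predicting 0 and 1:
w, bma = bayesian_model_average([0.0, 0.0], [0.5, 0.5], [0.0, 1.0])
```

A model whose causal assumptions explain the data three times better than a competitor's would receive three times the weight, so no single stakeholder view dominates the meta-model by default.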
NASA Astrophysics Data System (ADS)
Li, M.; Helfrich, S.
2011-12-01
Global snow and ice cover is a key component of the climate and hydrologic system, as well as of daily weather forecasting. The National Oceanic and Atmospheric Administration (NOAA) has produced a daily northern hemisphere snow and ice cover chart since 1997 through the Interactive Multisensor Snow and Ice Mapping System (IMS). The IMS integrates and visualizes a wide variety of satellite data, as well as derived snow/ice products and surface observations, to provide meteorologists with the ability to interactively prepare the daily northern hemisphere snow and ice cover chart. These products are presently used as operational inputs into several weather prediction models and are applied in climate monitoring. The IMS is currently on its second version (released in 2004) and is scheduled to be upgraded to the third version (V3) in 2013. The IMS V3 will have nearly 40 external inputs as data sources processed by the IMS, which fall into five data formats: binary image, HDF file, GeoTIFF image, Shapefile image, and ASCII file. With the exception of the GeoTIFF and Shapefile files, which are used directly by IMS, all other types of data are pre-processed to ENVI image file format and "sectorized" for different areas around the northern hemisphere. The IMS V3 will generate daily snow and ice cover maps in five formats (ASCII, ENVI, GeoTIFF, GIF, and GRIB2) and three resolutions (24km, 4km, and 1km). In this presentation, the methods for accessing and processing satellite data, model results, and surface reports are discussed. All input data, with varying formats and resolutions, are processed to a fixed projection. The visualization methodology for IMS is provided for six different resolutions: 48km, 24km, 8km, 4km, 2km, and 1km. This work will facilitate the future enhancement of IMS, provide users with an understanding of the software architecture, provide a prospectus on future data sources, and help to preserve the integrity of the long-standing satellite-derived snow and ice
Full Wave Simulation of Integrated Circuits Using Hybrid Numerical Methods
NASA Astrophysics Data System (ADS)
Tan, Jilin
Transmission lines play an important role in digital electronics and in microwave and millimeter-wave circuits. Analysis, modeling, and design of transmission lines are critical to the development of circuitry at the chip, subsystem, and system levels. In the past several decades, at the EM modeling level, the quasi-static approximation has been widely used due to its great simplicity. As clock rates increase, interconnect effects such as signal delay, distortion, dispersion, reflection, and crosstalk limit the performance of microwave systems. Meanwhile, the quasi-static approach loses its validity for some complex system structures. Since successful system design of PCBs, MCMs, and chip packaging relies heavily on computer-aided EM-level modeling and simulation, many new methods, such as the full wave approach, have been developed to guarantee successful design. Many difficulties exist in rigorous EM-level analysis. Some of these include the difficulties in describing the behavior of conductors with finite thickness and finite conductivity, the field singularity, and arbitrary multilayered multi-transmission-line structures. This dissertation concentrates on the full wave study of multi-conductor transmission lines with finite conductivity and finite thickness buried in an arbitrary lossy multilayered environment. Two general approaches have been developed. The first is the integral equation method, in which the dyadic Green's function for arbitrary layered media has been correctly formulated and tested both analytically and numerically. By applying this method, the double-layered high-dielectric-permittivity problem and the heavy dielectric loss problem in multilayered media in CMOS circuit design have been solved. The second approach is the edge element method. In this study, the correct functional for the two-dimensional propagation problem has been successfully constructed in a rigorous way
Comparison of Methods to Trace Multiple Subskills: Is LR-DBN Best?
ERIC Educational Resources Information Center
Xu, Yanbo; Mostow, Jack
2012-01-01
A long-standing challenge for knowledge tracing is how to update estimates of multiple subskills that underlie a single observable step. We characterize approaches to this problem by how they model knowledge tracing, fit its parameters, predict performance, and update subskill estimates. Previous methods allocated blame or credit among subskills…
Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems
NASA Technical Reports Server (NTRS)
1975-01-01
The application of control-theoretic ideas to the design of flight control systems for the F-8 aircraft was investigated. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
ERIC Educational Resources Information Center
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
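The regression behind this approach can be sketched numerically: vibronic band positions follow nu(v') = T + we'(v' + 1/2) - we'xe'(v' + 1/2)^2, so a multiple linear regression on the predictors (v' + 1/2) and (v' + 1/2)^2 recovers the molecular constants. The article uses a function within Microsoft Excel; below, numpy.linalg.lstsq plays the same role. The band positions and constants are synthetic illustrative values, not real iodine data.

```python
import numpy as np

# Upper-state vibrational quantum numbers for a hypothetical band progression
v = np.arange(25, 35)
x = v + 0.5
# Design matrix for the multiple linear regression: columns [1, x, x^2]
A = np.column_stack([np.ones_like(x), x, x**2])

# Synthesize band positions (cm^-1) from illustrative constants:
# nu = T + we*x - wexe*x^2, i.e. coefficients [T, we, -wexe]
T_true, we_true, wexe_true = 15600.0, 125.0, 0.75
nu = A @ np.array([T_true, we_true, -wexe_true])

# Least-squares fit; the sign flip recovers wexe from the x^2 coefficient
coef, *_ = np.linalg.lstsq(A, nu, rcond=None)
T_fit, we_fit, wexe_fit = coef[0], coef[1], -coef[2]
```

With noise-free synthetic data the fit returns the input constants to machine precision; with measured band positions the residuals would give the usual standard errors on T, we', and we'xe'.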
Magic Finger Teaching Method in Learning Multiplication Facts among Deaf Students
ERIC Educational Resources Information Center
Thai, Liong; Yasin, Mohd. Hanafi Mohd
2016-01-01
Deaf students face problems in mastering multiplication facts. This study aims to identify the effectiveness of Magic Finger Teaching Method (MFTM) and students' perception towards MFTM. The research employs a quasi experimental with non-equivalent pre-test and post-test control group design. Pre-test, post-test and questionnaires were used. As…
One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...
Double Cross-Validation in Multiple Regression: A Method of Estimating the Stability of Results.
ERIC Educational Resources Information Center
Rowell, R. Kevin
In multiple regression analysis, where resulting predictive equation effectiveness is subject to shrinkage, it is especially important to evaluate result replicability. Double cross-validation is an empirical method by which an estimate of invariance or stability can be obtained from research data. A procedure for double cross-validation is…
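The double cross-validation procedure can be illustrated in a few lines: split the sample in half, derive a prediction equation on each half, apply each equation to the opposite half, and correlate predicted with observed scores. The two cross-correlations estimate how well the equation replicates. The data below are synthetic, and the half-split scheme is one common variant rather than the exact procedure in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))                       # three predictors
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=1.0, size=n)

def fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def predict(b, X):
    return np.column_stack([np.ones(len(X)), X]) @ b

half = n // 2
b1 = fit(X[:half], y[:half])                      # equation from first half
b2 = fit(X[half:], y[half:])                      # equation from second half

# Cross-validated correlations: each half predicted by the OTHER half's equation
r12 = np.corrcoef(predict(b1, X[half:]), y[half:])[0, 1]
r21 = np.corrcoef(predict(b2, X[:half]), y[:half])[0, 1]
```

Large shrinkage of r12 and r21 relative to the within-sample multiple R would signal an unstable equation; here the two cross-correlations stay close to each other and to the population value.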
Propensity Scores: Method for Matching on Multiple Variables in Down Syndrome Research
ERIC Educational Resources Information Center
Blackford, Jennifer Urbano
2009-01-01
Confounding variables can affect the results from studies of children with Down syndrome and their families. Traditional methods for addressing confounders are often limited, providing control for only a few confounding variables. This study introduces propensity score matching to control for multiple confounding variables. Using Tennessee birth…
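A minimal numerical sketch of propensity score matching, under assumed synthetic data: estimate each unit's probability of group membership from the covariates with a logistic regression, then match each treated unit to the control with the nearest propensity score. The covariates, group model, and gradient-ascent fit are all illustrative stand-ins, not the study's actual variables or software.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))                  # two hypothetical confounders
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]
treated = rng.random(n) < 1 / (1 + np.exp(-logit))

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain gradient-ascent logistic regression with an intercept."""
    A = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-A @ w))
        w += lr * A.T @ (y - p) / len(y)     # mean log-likelihood gradient
    return w

w = fit_logistic(X, treated.astype(float))
scores = 1 / (1 + np.exp(-np.column_stack([np.ones(n), X]) @ w))

# Greedy nearest-neighbor matching on the propensity score
t_idx = np.flatnonzero(treated)
c_idx = np.flatnonzero(~treated)
matches = {i: c_idx[np.argmin(np.abs(scores[c_idx] - scores[i]))]
           for i in t_idx}
```

After matching, outcomes would be compared within matched pairs, so the many confounders are controlled through the single propensity score rather than individually.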
ERIC Educational Resources Information Center
Mekonnen, Adugna K.
2010-01-01
This study develops a multiple measure placement method (MMPM) that is comprised of predictor variables such as ACCUPLACER math (ACCM), ACCUPLACER reading (ACCR), arithmetic diagnostic test (ADT), high school grade point average (HSGPA), high school mathematics performance (HSMP), and duration since last mathematics course taken in high school…
Optimized particle-mesh Ewald/multiple-time step integration for molecular dynamics simulations
NASA Astrophysics Data System (ADS)
Batcho, Paul F.; Case, David A.; Schlick, Tamar
2001-09-01
We develop an efficient multiple time step (MTS) force splitting scheme for biological applications in the AMBER program in the context of the particle-mesh Ewald (PME) algorithm. Our method applies a symmetric Trotter factorization of the Liouville operator based on the position-Verlet scheme to Newtonian and Langevin dynamics. Following a brief review of the MTS and PME algorithms, we discuss performance speedup and the force balancing involved to maximize accuracy, maintain long-time stability, and accelerate computational times. Compared to prior MTS efforts in the context of the AMBER program, advances are possible by optimizing PME parameters for MTS applications and by using the position-Verlet, rather than velocity-Verlet, scheme for the inner loop. Moreover, ideas from the Langevin/MTS algorithm LN are applied to Newtonian formulations here. The algorithm's performance is optimized and tested on water, solvated DNA, and solvated protein systems. We find CPU speedup ratios of over 3 for Newtonian formulations when compared to a 1 fs single-step Verlet algorithm using outer time steps of 6 fs in a three-class splitting scheme; accurate conservation of energies is demonstrated over simulations of length several hundred ps. With modest Langevin forces, we obtain stable trajectories for outer time steps up to 12 fs and corresponding speedup ratios approaching 5. We end by suggesting that modified Ewald formulations, using tailored alternatives to the Gaussian screening functions for the Coulombic terms, may allow larger time steps and thus further speedups for both Newtonian and Langevin protocols; such developments are reported separately.
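The force-splitting idea can be shown on a toy 1D system: a stiff "fast" force integrated with a small inner step via position-Verlet, and a weak "slow" force applied as half-kicks around the inner loop (a symmetric Trotter/impulse splitting in the spirit of the scheme above). This is a sketch with made-up force constants and step sizes, not the AMBER/PME implementation.

```python
import numpy as np

k_fast, k_slow, m = 100.0, 1.0, 1.0          # illustrative force constants, mass
f_fast = lambda x: -k_fast * x               # stiff (fast) harmonic force
f_slow = lambda x: -k_slow * x               # weak (slow) force

def mts_step(x, v, dt_outer, n_inner):
    """One outer MTS step: slow half-kick, position-Verlet inner loop, slow half-kick."""
    dt = dt_outer / n_inner
    v += 0.5 * dt_outer * f_slow(x) / m      # slow half-kick
    for _ in range(n_inner):                  # position-Verlet on the fast force
        x += 0.5 * dt * v                    # half drift
        v += dt * f_fast(x) / m              # fast kick
        x += 0.5 * dt * v                    # half drift
    v += 0.5 * dt_outer * f_slow(x) / m      # slow half-kick
    return x, v

def energy(x, v):
    return 0.5 * m * v**2 + 0.5 * (k_fast + k_slow) * x**2

x, v = 1.0, 0.0
e0 = energy(x, v)
for _ in range(1000):
    x, v = mts_step(x, v, dt_outer=0.06, n_inner=6)
drift = abs(energy(x, v) - e0) / e0          # relative energy error
```

Because the splitting is symplectic, the energy error stays bounded over long runs provided the outer step avoids the impulse-MTS resonances; that resonance limit is the reason the abstract's Newtonian speedups stop near 6 fs outer steps while Langevin damping permits 12 fs.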
Integrated p-n junction InGaN/GaN multiple-quantum-well devices with diverse functionalities
NASA Astrophysics Data System (ADS)
Cai, Wei; Gao, Xumin; Yuan, Wei; Yang, Yongchao; Yuan, Jialei; Zhu, Hongbo; Wang, Yongjin
2016-05-01
We propose, fabricate, and demonstrate integrated p-n junction InGaN/GaN multiple-quantum-well devices with diverse functionalities on a GaN-on-silicon platform. Suspended devices with a common n-contact are realized using a wafer-level process. For the integrated devices, part of the light emitted by a light-emitting diode (LED) is guided in-plane through a suspended waveguide and is sensed by another photodiode. The induced photocurrent is tuned by the LED. The integrated devices can act as two independent LEDs to deliver different signals simultaneously for free-space visible light communication. Furthermore, the suspended devices can be used as two separate photodiodes to detect incident light with a distinct on/off switching performance.
Developing integrated methods to address complex resource and environmental issues
Smith, Kathleen S.; Phillips, Jeffrey D.; McCafferty, Anne E.; Clark, Roger N.
2016-02-08
Introduction: This circular provides an overview of selected activities that were conducted within the U.S. Geological Survey (USGS) Integrated Methods Development Project, an interdisciplinary project designed to develop new tools and conduct innovative research requiring integration of geologic, geophysical, geochemical, and remote-sensing expertise. The project was supported by the USGS Mineral Resources Program, and its products and acquired capabilities have broad applications to missions throughout the USGS and beyond. In addressing challenges associated with understanding the location, quantity, and quality of mineral resources, and in investigating the potential environmental consequences of resource development, a number of field and laboratory capabilities and interpretative methodologies evolved from the project that have applications to traditional resource studies as well as to studies related to ecosystem health, human health, disaster and hazard assessment, and planetary science. New or improved tools and research findings developed within the project have been applied to other projects and activities. Specifically, geophysical equipment and techniques have been applied to a variety of traditional and nontraditional mineral- and energy-resource studies, military applications, environmental investigations, and applied research activities that involve climate change, mapping techniques, and monitoring capabilities. Diverse applied geochemistry activities provide a process-level understanding of the mobility, chemical speciation, and bioavailability of elements, particularly metals and metalloids, in a variety of environmental settings. Imaging spectroscopy capabilities maintained and developed within the project have been applied to traditional resource studies as well as to studies related to ecosystem health, human health, disaster assessment, and planetary science. Brief descriptions of capabilities and laboratory facilities and summaries of some