Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples by near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used to eliminate redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling the complex samples. The results show that MCUVE can both extract effective informative variables and improve the precision of the models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
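A minimal sketch of the general workflow described above, not the authors' exact implementation: variable importance is scored MCUVE-style as the stability of PLS regression coefficients over Monte Carlo subsampling, and a reduced PLSR model is refit on the retained variables. The spectra matrix X and reference values y below are simulated stand-ins.

```python
# MCUVE-style variable selection followed by PLSR (illustrative sketch only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))                              # stand-in NIR spectra
y = X[:, 20] + 0.5 * X[:, 120] + 0.1 * rng.normal(size=80)  # two informative bands

def mcuve_scores(X, y, n_runs=100, frac=0.8, n_components=5):
    """Stability score per variable: |mean(b)| / std(b) of PLS coefficients
    over Monte Carlo subsampling (larger = more informative)."""
    n, p = X.shape
    coefs = np.zeros((n_runs, p))
    for k in range(n_runs):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        pls = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
        coefs[k] = pls.coef_.ravel()
    return np.abs(coefs.mean(axis=0)) / (coefs.std(axis=0) + 1e-12)

scores = mcuve_scores(X, y)
keep = np.argsort(scores)[-20:]          # retain the 20 most stable variables
pls_full = PLSRegression(n_components=5).fit(X, y)
pls_reduced = PLSRegression(n_components=5).fit(X[:, keep], y)
print("retained variable indices:", np.sort(keep))
```

The SPA step (and the LLE-PLSR model itself) would replace or follow the reduction step; they are omitted here for brevity.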
Student Solution Manual for Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics.
Essential Mathematical Methods for the Physical Sciences
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2011-02-01
1. Matrices and vector spaces; 2. Vector calculus; 3. Line, surface and volume integrals; 4. Fourier series; 5. Integral transforms; 6. Higher-order ODEs; 7. Series solutions of ODEs; 8. Eigenfunction methods; 9. Special functions; 10. Partial differential equations; 11. Solution methods for PDEs; 12. Calculus of variations; 13. Integral equations; 14. Complex variables; 15. Applications of complex variables; 16. Probability; 17. Statistics; Appendices; Index.
Adaptive Synchronization of Fractional Order Complex-Variable Dynamical Networks via Pinning Control
NASA Astrophysics Data System (ADS)
Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong
2017-09-01
In this paper, the synchronization of fractional order complex-variable dynamical networks is studied using an adaptive pinning control strategy based on close center degree. Effective criteria for global synchronization of fractional order complex-variable dynamical networks are derived from Lyapunov stability theory. The theoretical analysis shows that, under appropriate conditions, complex-variable dynamical networks can realize global synchronization with the proposed adaptive pinning control method. Meanwhile, we solve the problem of how much coupling strength should be applied to ensure synchronization of the fractional order complex networks. Compared with existing results, the synchronization method in this paper is therefore more general and convenient. The result extends the synchronization condition of real-variable dynamical networks to the complex-valued field, which makes the approach more practical. Finally, two simulation examples show that the derived theoretical results are valid and the proposed adaptive pinning method is effective. Supported by National Natural Science Foundation of China under Grant No. 61201227, National Natural Science Foundation of China Guangdong Joint Fund under Grant No. U1201255, the Natural Science Foundation of Anhui Province under Grant No. 1208085MF93, 211 Innovation Team of Anhui University under Grant Nos. KJTD007A and KJTD001B, and the Chinese Scholarship Council.
Mathematical Methods for Physics and Engineering Third Edition Paperback Set
NASA Astrophysics Data System (ADS)
Riley, Ken F.; Hobson, Mike P.; Bence, Stephen J.
2006-06-01
Prefaces; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics; Index.
Student Solution Manual for Mathematical Methods for Physics and Engineering Third Edition
NASA Astrophysics Data System (ADS)
Riley, K. F.; Hobson, M. P.
2006-03-01
Preface; 1. Preliminary algebra; 2. Preliminary calculus; 3. Complex numbers and hyperbolic functions; 4. Series and limits; 5. Partial differentiation; 6. Multiple integrals; 7. Vector algebra; 8. Matrices and vector spaces; 9. Normal modes; 10. Vector calculus; 11. Line, surface and volume integrals; 12. Fourier series; 13. Integral transforms; 14. First-order ordinary differential equations; 15. Higher-order ordinary differential equations; 16. Series solutions of ordinary differential equations; 17. Eigenfunction methods for differential equations; 18. Special functions; 19. Quantum operators; 20. Partial differential equations: general and particular; 21. Partial differential equations: separation of variables; 22. Calculus of variations; 23. Integral equations; 24. Complex variables; 25. Application of complex variables; 26. Tensors; 27. Numerical methods; 28. Group theory; 29. Representation theory; 30. Probability; 31. Statistics.
Workspace Program for Complex-Number Arithmetic
NASA Technical Reports Server (NTRS)
Patrick, M. C.; Howell, Leonard W., Jr.
1986-01-01
COMPLEX is a workspace program designed to empower APL with complex-number capabilities. Complex-variable methods provide analytical tools invaluable for applications in mathematics, science, and engineering. COMPLEX is written in APL.
NASA Astrophysics Data System (ADS)
Bender, Carl
2017-01-01
The theory of complex variables is extremely useful because it helps to explain the mathematical behavior of functions of a real variable. Complex variable theory also provides insight into the nature of physical theories. For example, it provides a simple and beautiful picture of quantization and it explains the underlying reason for the divergence of perturbation theory. By using complex-variable methods one can generalize conventional Hermitian quantum theories into the complex domain. The result is a new class of parity-time-symmetric (PT-symmetric) theories whose remarkable physical properties have been studied and verified in many recent laboratory experiments.
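A small numerical check of the claim that a non-Hermitian but PT-symmetric Hamiltonian can have a real spectrum, using the standard 2x2 example H = [[r e^{i theta}, s], [s, r e^{-i theta}]] (parameter values are illustrative, not taken from the abstract): its eigenvalues are real whenever s^2 >= r^2 sin^2(theta).

```python
# PT-symmetric but non-Hermitian 2x2 Hamiltonian: real eigenvalues in the
# unbroken phase (s**2 >= (r*sin(theta))**2), complex-conjugate pair otherwise.
import numpy as np

def pt_hamiltonian(r, theta, s):
    return np.array([[r * np.exp(1j * theta), s],
                     [s, r * np.exp(-1j * theta)]])

for s in (2.0, 0.3):                      # unbroken vs. broken PT symmetry
    H = pt_hamiltonian(r=1.0, theta=0.7, s=s)
    print(f"s={s}: Hermitian={np.allclose(H, H.conj().T)}, "
          f"eigenvalues={np.round(np.linalg.eigvals(H), 4)}")
```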
Ribic, C.A.; Miller, T.W.
1998-01-01
We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory-variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable-selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample-size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely to choose the correct model than the other tree-selection rules with a strong relationship and equally important explanatory variables; it was also more likely to do so (1) with weaker relationships and equally important explanatory variables, and (2) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
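A compact sketch of the kind of comparison described, on simulated data with two informative and two noise predictors: a regression tree is pruned with cost-complexity pruning and a one-standard-error rule, and a quadratic OLS fit serves as a simplified stand-in for the stepwise polynomial OLS comparator. None of the settings are taken from the paper.

```python
# Regression tree chosen with a one-standard-error pruning rule vs. polynomial OLS.
# x1 (unimodal effect) and x2 are informative; x3 and x4 are noise.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score, KFold
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
X = rng.uniform(-1, 1, size=(n, 4))
y = np.exp(-4 * X[:, 0] ** 2) + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n)

# Candidate pruning levels from the cost-complexity path of a fully grown tree.
alphas = np.unique(DecisionTreeRegressor(random_state=0)
                   .cost_complexity_pruning_path(X, y).ccp_alphas)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
means, ses = [], []
for a in alphas:
    scores = cross_val_score(DecisionTreeRegressor(ccp_alpha=a, random_state=0),
                             X, y, cv=cv, scoring="neg_mean_squared_error")
    means.append(-scores.mean())
    ses.append(scores.std(ddof=1) / np.sqrt(len(scores)))
means, ses = np.array(means), np.array(ses)

best = means.argmin()
# One-standard-error rule: simplest tree (largest alpha) within 1 SE of the minimum.
alpha_1se = alphas[means <= means[best] + ses[best]].max()
tree = DecisionTreeRegressor(ccp_alpha=alpha_1se, random_state=0).fit(X, y)

ols = LinearRegression().fit(np.hstack([X, X ** 2]), y)   # quadratic OLS comparator
print("variables split on by the pruned tree:",
      sorted(set(int(f) for f in tree.tree_.feature if f >= 0)))
print("OLS |coefficients| (x1..x4, x1^2..x4^2):", np.round(np.abs(ols.coef_), 2))
```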
Multivariate analysis: greater insights into complex systems
USDA-ARS?s Scientific Manuscript database
Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...
Aging and the complexity of cardiovascular dynamics
NASA Technical Reports Server (NTRS)
Kaplan, D. T.; Furman, M. I.; Pincus, S. M.; Ryan, S. M.; Lipsitz, L. A.; Goldberger, A. L.
1991-01-01
Biomedical signals often vary in a complex and irregular manner. Analysis of variability in such signals generally does not address directly their complexity, and so may miss potentially useful information. We analyze the complexity of heart rate and beat-to-beat blood pressure using two methods motivated by nonlinear dynamics (chaos theory). A comparison of a group of healthy elderly subjects with healthy young adults indicates that the complexity of cardiovascular dynamics is reduced with aging. This suggests that complexity of variability may be a useful physiological marker.
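The abstract does not name the two measures; as one plausible example of such a complexity statistic, the sketch below implements approximate entropy (ApEn), a regularity measure associated with one of the authors, with conventional rather than paper-specific parameter choices.

```python
# Approximate entropy (ApEn): lower values indicate more regular signals.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of length-m templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = (dist <= r).mean(axis=1)   # self-matches included, as in the usual definition
        return np.log(counts).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
irregular = rng.normal(size=500)
print("ApEn(sine)  =", round(approximate_entropy(regular), 3))
print("ApEn(noise) =", round(approximate_entropy(irregular), 3))
```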
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach--the Complex Systems-Causal Network (CS-CN) method--designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
Kellogg, Joshua J; Graf, Tyler N; Paine, Mary F; McCune, Jeannine S; Kvalheim, Olav M; Oberlies, Nicholas H; Cech, Nadja B
2017-05-26
A challenge that must be addressed when conducting studies with complex natural products is how to evaluate their complexity and variability. Traditional methods of quantifying a single or a small range of metabolites may not capture the full chemical complexity of multiple samples. Different metabolomics approaches were evaluated to discern how they facilitated comparison of the chemical composition of commercial green tea [Camellia sinensis (L.) Kuntze] products, with the goal of capturing the variability of commercially used products and selecting representative products for in vitro or clinical evaluation. Three metabolomics-related methods--untargeted ultraperformance liquid chromatography-mass spectrometry (UPLC-MS), targeted UPLC-MS, and untargeted, quantitative 1H NMR--were employed to characterize 34 commercially available green tea samples. Of these methods, untargeted UPLC-MS was most effective at discriminating between green tea, green tea supplement, and non-green-tea products. A method using reproduced correlation coefficients calculated from principal component analysis models was developed to quantitatively compare differences among samples. The obtained results demonstrated the utility of metabolomics employing UPLC-MS data for evaluating similarities and differences between complex botanical products.
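The published analysis builds reproduced correlation coefficients from PCA models; the sketch below only illustrates the simpler first step of projecting an untargeted feature table into PCA score space and comparing sample groups, with simulated intensities standing in for UPLC-MS data.

```python
# PCA scores for comparing groups of complex botanical samples (simulated
# feature intensities stand in for an untargeted UPLC-MS feature table).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_features = 500
teas        = rng.lognormal(mean=1.0, sigma=0.3, size=(20, n_features))
supplements = rng.lognormal(mean=1.0, sigma=0.3, size=(10, n_features))
supplements[:, :50] *= 3.0            # supplements enriched in a subset of features

X = np.vstack([teas, supplements])
labels = np.array(["tea"] * 20 + ["supplement"] * 10)

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for lab in ("tea", "supplement"):
    grp = scores[labels == lab]
    print(lab, "mean PC1:", round(grp[:, 0].mean(), 2),
          "mean PC2:", round(grp[:, 1].mean(), 2))
```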
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
The Relation of Finite Element and Finite Difference Methods
NASA Technical Reports Server (NTRS)
Vinokur, M.
1976-01-01
Finite element and finite difference methods are examined in order to bring out their relationship. It is shown that both methods use two types of discrete representations of continuous functions. They differ in that finite difference methods emphasize the discretization of the independent variables, while finite element methods emphasize the discretization of the dependent variables (referred to as functional approximations). An important point is that finite element methods use global piecewise functional approximations, while finite difference methods normally use local functional approximations. A general conclusion is that finite element methods are best designed to handle complex boundaries, while finite difference methods are superior for complex equations. It is also shown that finite volume difference methods possess many of the advantages attributed to finite element methods.
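To make the distinction concrete, here is a minimal finite-difference treatment of the 1-D Poisson problem -u''(x) = f(x) with Dirichlet boundary conditions: the independent variable x is discretized onto a grid and the derivative is replaced by a local difference quotient (a finite-element treatment would instead expand u in piecewise basis functions). This is a generic textbook example, not taken from the report.

```python
# 1-D Poisson problem -u''(x) = f(x), u(0) = u(1) = 0, by central finite differences.
import numpy as np

n = 50                                   # number of interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi ** 2 * np.sin(np.pi * x)       # chosen so the exact solution is u(x) = sin(pi*x)

# Tridiagonal system from the local difference quotient (-u[i-1] + 2u[i] - u[i+1]) / h^2.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2
u = np.linalg.solve(A, f)

print("max error vs. exact solution:", np.abs(u - np.sin(np.pi * x)).max())
```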
Synchronization in node of complex networks consist of complex chaotic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024
2014-07-15
A new synchronization method is investigated for the nodes of complex networks composed of complex chaotic systems. When the complex networks achieve synchronization, different components of the complex state variables synchronize up to different complex scaling functions through a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization of nodes in complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.
Datamining approaches for modeling tumor control probability.
Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D
2010-11-01
Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed that include dose-volume metrics, equivalent uniform dose, mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
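The sketch below mimics only the evaluation protocol described (leave-one-out prediction scored by Spearman rank correlation, SVM kernel model vs. logistic regression) on simulated dose-volume-like covariates; it uses none of the authors' data or tuned models.

```python
# Leave-one-out prediction of a binary tumor-control outcome, scored by Spearman
# rank correlation, comparing an RBF-kernel SVM with logistic regression.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 56
gtv = rng.lognormal(mean=3.0, sigma=0.6, size=n)     # tumor-volume surrogate
v75 = rng.uniform(0, 40, size=n)                     # dose-volume metric surrogate
risk = -0.02 * gtv + 0.08 * v75 + rng.normal(scale=0.5, size=n)
control = (risk > np.median(risk)).astype(int)       # simulated tumor-control outcome
X = np.column_stack([gtv, v75])

models = {
    "logistic": make_pipeline(StandardScaler(), LogisticRegression()),
    "svm-rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
}
for name, model in models.items():
    prob = cross_val_predict(model, X, control, cv=LeaveOneOut(),
                             method="predict_proba")[:, 1]
    rs, _ = spearmanr(prob, control)
    print(f"{name}: leave-one-out Spearman rs = {rs:.2f}")
```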
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.
Hromadka, T.V.; Yen, C.C.; Guymon, G.L.
1985-01-01
The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.
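The CVBEM itself interpolates an analytic function between boundary nodes; as a loosely related toy illustration (not the published node-based algorithm), the sketch below least-squares fits the real part of a complex polynomial to Dirichlet data on the unit circle and reports the residual boundary error, exploiting the fact that the real part of an analytic function is harmonic inside the domain, so a small boundary error implies a good interior approximation.

```python
# Toy analogue of the complex-variable idea behind the CVBEM: fit Re(sum c_k z^k),
# which is harmonic, to prescribed boundary values on the unit circle and measure
# the remaining boundary error.
import numpy as np

m = 200                                                    # boundary collocation points
theta = np.linspace(0, 2 * np.pi, m, endpoint=False)
z = np.exp(1j * theta)
phi_boundary = np.cos(3 * theta) + 0.5 * np.sin(theta)     # prescribed Dirichlet data

degree = 8
# Unknowns are Re(c_k), Im(c_k); note Re(c_k z^k) = Re(c_k) Re(z^k) - Im(c_k) Im(z^k).
powers = np.column_stack([z ** k for k in range(degree + 1)])
A = np.hstack([powers.real, -powers.imag])
coeffs, *_ = np.linalg.lstsq(A, phi_boundary, rcond=None)

boundary_error = np.abs(A @ coeffs - phi_boundary)
print("max boundary error:", boundary_error.max())
```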
NASA Astrophysics Data System (ADS)
Hashemian, Behrooz; Millán, Daniel; Arroyo, Marino
2013-12-01
Collective variables (CVs) are low-dimensional representations of the state of a complex system, which help us rationalize molecular conformations and sample free energy landscapes with molecular dynamics simulations. Given their importance, there is need for systematic methods that effectively identify CVs for complex systems. In recent years, nonlinear manifold learning has shown its ability to automatically characterize molecular collective behavior. Unfortunately, these methods fail to provide a differentiable function mapping high-dimensional configurations to their low-dimensional representation, as required in enhanced sampling methods. We introduce a methodology that, starting from an ensemble representative of molecular flexibility, builds smooth and nonlinear data-driven collective variables (SandCV) from the output of nonlinear manifold learning algorithms. We demonstrate the method with a standard benchmark molecule, alanine dipeptide, and show how it can be non-intrusively combined with off-the-shelf enhanced sampling methods, here the adaptive biasing force method. We illustrate how enhanced sampling simulations with SandCV can explore regions that were poorly sampled in the original molecular ensemble. We further explore the transferability of SandCV from a simpler system, alanine dipeptide in vacuum, to a more complex system, alanine dipeptide in explicit water.
Probabilistic Geoacoustic Inversion in Complex Environments
2015-09-30
Jan Dettmer, School of Earth and Ocean Sciences, University of Victoria, Victoria BC. ... long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must ... project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is
ERIC Educational Resources Information Center
Fuwa, Minori; Kayama, Mizue; Kunimune, Hisayoshi; Hashimoto, Masami; Asano, David K.
2015-01-01
We have explored educational methods for algorithmic thinking for novices and implemented a block programming editor and a simple learning management system. In this paper, we propose a program/algorithm complexity metric specified for novice learners. This metric is based on the variable usage in arithmetic and relational formulas in learner's…
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Ba, Kaixian; Yu, Bin; Cao, Yuan; Zhu, Qixin; Zhao, Hualong
2016-05-01
Each joint of a hydraulic drive quadruped robot is driven by a hydraulic drive unit (HDU), and the contact between the robot foot end and the ground is complex and variable, which inevitably increases the difficulty of force control. In recent years many scholars have studied control methods such as disturbance rejection control, parameter self-adaptive control and impedance control to improve the force control performance of the HDU, but its robustness still needs improvement. Therefore, how to simulate the complex and variable load characteristics of the environment structure, and how to ensure that the HDU retains excellent force control performance under such load characteristics, are the key issues addressed in this paper. The force control system mathematical model of the HDU is established by mechanism modeling, and the theoretical models of a novel force control compensation method and of a load-characteristics simulation method under different environment structures are derived, considering the dynamic characteristics of the load stiffness and load damping under different environment structures. Then, the simulated effects of variable load stiffness and load damping under step and sinusoidal load forces are analyzed experimentally on the HDU force control performance test platform, which provides the foundation for the force control compensation experiments. In addition, optimized PID control parameters are designed so that the HDU has good force control performance with suitable load stiffness and load damping; the force control compensation method is then introduced, and the robustness of the force control system is comparatively analyzed by experiment for several constant load characteristics and for variable load characteristics. The results indicate that, if the load characteristics are known, the force control compensation method presented in this paper compensates effectively for load characteristic variations, i.e., it decreases the effects of these variations on the force control performance and enhances the robustness of the force control system with constant PID parameters, so that a complex online PID parameter tuning method need not be adopted. This research provides a theoretical and experimental foundation for a highly robust force control method for the quadruped robot joints.
Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil
2016-11-17
Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends' preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors.
A variable circular-plot method for estimating bird numbers
R. T. Reynolds; J. M. Scott; R. A. Nussbaum
1980-01-01
A bird census method is presented that is designed for tall, structurally complex vegetation types, and rugged terrain. With this method the observer counts all birds seen or heard around a station, and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and structural complexity of the...
Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Kleb, William L.
2005-01-01
A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
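The key property exploited by such complex-variable formulations is the complex-step derivative: evaluating a real-valued function at a complex argument x + ih yields the derivative from the imaginary part with no subtractive cancellation, so h can be taken extremely small. A minimal, generic illustration (not the paper's scripting machinery):

```python
# Complex-step differentiation: f'(x) ~= Im(f(x + i*h)) / h, accurate to machine
# precision even for tiny h, unlike finite differences.
import numpy as np

def f(x):
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)   # classic test function

x0 = 1.5
complex_step = np.imag(f(x0 + 1e-200j)) / 1e-200
central_diff = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8

num = np.sin(x0) ** 2 * np.cos(x0) - np.cos(x0) ** 2 * np.sin(x0)
den = np.sin(x0) ** 3 + np.cos(x0) ** 3
exact = f(x0) * (1.0 - 1.5 * num / den)                           # analytic derivative

print("complex-step error :", abs(complex_step - exact))
print("central-diff error :", abs(central_diff - exact))
```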
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occur in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.
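A minimal sketch of the low-order expansion idea (a generic first-order cut-HDMR surrogate, not the fuzzy alpha-cut workflow coupled to ADINA): the output is approximated by a reference value plus one-dimensional component functions obtained by varying one input at a time.

```python
# First-order cut-HDMR surrogate: f(x) ~ f0 + sum_i [ f(x_ref with x_i replaced) - f0 ].
import numpy as np
from scipy.interpolate import interp1d

def model(x):                                    # stand-in for an expensive FE model
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.2 * x[2]

x_ref = np.array([0.5, 0.5, 0.5])                # cut (reference) point
f0 = model(x_ref)
grids = [np.linspace(0.0, 1.0, 9) for _ in range(3)]

components = []                                  # one-dimensional component functions
for i, grid in enumerate(grids):
    vals = []
    for xi in grid:
        x = x_ref.copy()
        x[i] = xi
        vals.append(model(x) - f0)
    components.append(interp1d(grid, vals, kind="cubic"))

def hdmr_surrogate(x):
    return f0 + sum(float(comp(xi)) for comp, xi in zip(components, x))

x_test = np.array([0.2, 0.9, 0.7])
print("full model:", model(x_test), " first-order HDMR:", hdmr_surrogate(x_test))
```

Because the test model is additive, the first-order expansion is essentially exact here; interactions would require higher-order HDMR components.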
Messai, Habib; Farman, Muhammad; Sarraj-Laabidi, Abir; Hammami-Semmar, Asma; Semmar, Nabil
2016-01-01
Background. Olive oils (OOs) show high chemical variability due to several factors of genetic, environmental and anthropic types. Genetic and environmental factors are responsible for natural compositions and polymorphic diversification resulting in different varietal patterns and phenotypes. Anthropic factors, however, are at the origin of different blends’ preparation leading to normative, labelled or adulterated commercial products. Control of complex OO samples requires their (i) characterization by specific markers; (ii) authentication by fingerprint patterns; and (iii) monitoring by traceability analysis. Methods. These quality control and management aims require the use of several multivariate statistical tools: specificity highlighting requires ordination methods; authentication checking calls for classification and pattern recognition methods; traceability analysis implies the use of network-based approaches able to separate or extract mixed information and memorized signals from complex matrices. Results. This chapter presents a review of different chemometrics methods applied for the control of OO variability from metabolic and physical-chemical measured characteristics. The different chemometrics methods are illustrated by different study cases on monovarietal and blended OO originated from different countries. Conclusion. Chemometrics tools offer multiple ways for quantitative evaluations and qualitative control of complex chemical variability of OO in relation to several intrinsic and extrinsic factors. PMID:28231172
The QSAR study of flavonoid-metal complexes scavenging ·OH free radical
NASA Astrophysics Data System (ADS)
Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun
2014-10-01
Flavonoid-metal complexes have antioxidant activities. However, quantitative structure-activity relationships (QSAR) between flavonoid-metal complexes and their antioxidant activities have not yet been established. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging the ·OH free radical, we optimised the structures using the Gaussian 03 software package and subsequently calculated and chose 18 quantum chemistry descriptors such as dipole, charge and energy. We then selected, by stepwise linear regression, the quantum chemistry descriptors most important to the IC50 of flavonoid-metal complexes for scavenging the ·OH free radical, and we obtained 4 new variables through principal component analysis. Finally, we built QSAR models with those important quantum chemistry descriptors and the 4 new variables as independent variables and the IC50 as the dependent variable using an Artificial Neural Network (ANN), and we validated the two models using experimental data. These results show that the two models in this paper are reliable and predictive.
Unification of the complex Langevin method and the Lefschetz thimble method
NASA Astrophysics Data System (ADS)
Nishimura, Jun; Shimasaki, Shinji
2018-03-01
Recently there has been remarkable progress in solving the sign problem, which occurs in investigating statistical systems with a complex weight. The two promising methods, the complex Langevin method and the Lefschetz thimble method, share the idea of complexifying the dynamical variables, but their relationship has not been clear. Here we propose a unified formulation, in which the sign problem is taken care of by both the Langevin dynamics and the holomorphic gradient flow. We apply our formulation to a simple model in three different ways and show that one of them interpolates the two methods by changing the flow time.
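As a minimal illustration of the complexification idea (only the complex Langevin half, applied to a solvable one-variable model rather than the unified formulation of the paper): for the complex weight exp(-sigma*z^2/2) the exact average is <z^2> = 1/sigma, and a Langevin flow of the complexified variable driven by real noise reproduces it.

```python
# Complex Langevin for a Gaussian toy model with complex "action" S = sigma*z^2/2.
# The dynamical variable z is complexified, the noise stays real; exact <z^2> = 1/sigma.
import numpy as np

rng = np.random.default_rng(5)
sigma = 1.0 + 1.0j
eps = 0.01                                # Langevin step size
n_steps, n_therm = 500_000, 5_000

z = 0.0 + 0.0j
samples = []
for step in range(n_steps):
    z = z - eps * sigma * z + np.sqrt(2 * eps) * rng.normal()   # drift = -dS/dz, real noise
    if step >= n_therm:
        samples.append(z * z)

print("complex Langevin <z^2>:", np.round(np.mean(samples), 3))
print("exact 1/sigma         :", np.round(1.0 / sigma, 3))
```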
Diversified models for portfolio selection based on uncertain semivariance
NASA Astrophysics Data System (ADS)
Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini
2017-02-01
Since the financial markets are complex, the future security returns are sometimes represented mainly on the basis of experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given according to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of the uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.
NASA Astrophysics Data System (ADS)
Ma, Huanfei; Leng, Siyang; Tao, Chenyang; Ying, Xiong; Kurths, Jürgen; Lai, Ying-Cheng; Lin, Wei
2017-07-01
Data-based and model-free accurate identification of intrinsic time delays and directional interactions is an extremely challenging problem in complex dynamical systems and their networks reconstruction. A model-free method with new scores is proposed to be generally capable of detecting single, multiple, and distributed time delays. The method is applicable not only to mutually interacting dynamical variables but also to self-interacting variables in a time-delayed feedback loop. Validation of the method is carried out using physical, biological, and ecological models and real data sets. Especially, applying the method to air pollution data and hospital admission records of cardiovascular diseases in Hong Kong reveals the major air pollutants as a cause of the diseases and, more importantly, it uncovers a hidden time delay (about 30-40 days) in the causal influence that previous studies failed to detect. The proposed method is expected to be universally applicable to ascertaining and quantifying subtle interactions (e.g., causation) in complex systems arising from a broad range of disciplines.
Li, Ying; Yang, Da-Jian; Chen, Shi-Lin; Chen, Si-Bao; Chan, Albert Sun-Chi
2008-07-09
The aim of the study was to develop and evaluate a new method for the production of puerarin phospholipid complex (PPC) microparticles. The advanced particle formation method, solution enhanced dispersion by supercritical fluids (SEDS), was used for the first time for the preparation of puerarin (Pur), phospholipid (PC) and their complex particles. The effects of the processing variables on PPC particle characteristics were also evaluated. The processing variables included temperature, pressure, solution concentration, the flow rate of supercritical carbon dioxide (SC-CO2) and the relative flow rate of drug solution to CO2. The morphology, particle size and size distribution of the particles were determined. Meanwhile, Pur and phospholipids were separately prepared by the gas antisolvent precipitation (GAS) method, and the solid-state characteristics of particles from the two supercritical methods were compared. Pur formed by GAS was a more orderly, purer crystal, whereas amorphous Pur particles between 0.5 and 1 μm were formed by SEDS. The complex was successfully obtained by SEDS, exhibiting amorphous, partially agglomerated spheres comprised of particles only about 1 μm in size. The SEDS method may be useful for the processing of other pharmaceutical preparations besides phospholipid complex particles. Furthermore, adopting a GAS process to recrystallize pharmaceuticals will provide a highly versatile methodology to generate new polymorphs of drugs in addition to conventional techniques.
NASA Astrophysics Data System (ADS)
Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe
2017-09-01
We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
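A rough sketch of the pole-based idea, simplified relative to the published MSC method (which uses the mean pole position; here only the dominant in-band pole is kept): fit an AR model to a short series, locate the poles whose frequencies fall in an assigned band, and use their distance from the unit circle as a regularity index. Series, sampling rate, band edges and model order are illustrative.

```python
# Band-limited regularity index from AR poles: smaller distance from the unit
# circle means a more regular oscillation in the assigned frequency band.
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) coefficients a with x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return a

def band_pole_distance(x, fs, band, p=10):
    a = ar_fit(np.asarray(x, dtype=float) - np.mean(x), p)
    poles = np.roots(np.concatenate(([1.0], -a)))      # roots of z^p - a1 z^(p-1) - ... - ap
    freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
    in_band = poles[(freqs >= band[0]) & (freqs <= band[1])]
    if in_band.size == 0:
        return np.nan
    return float(1.0 - np.abs(in_band).max())          # distance of the dominant in-band pole

rng = np.random.default_rng(6)
fs = 1.0                                               # roughly one sample per heart beat
t = np.arange(300)
regular = np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.normal(size=t.size)  # 0.1 Hz LF rhythm
white = rng.normal(size=t.size)                                         # no structured rhythm
for name, series in [("regular LF rhythm", regular), ("white noise", white)]:
    print(name, "-> LF pole distance:",
          round(band_pole_distance(series, fs, (0.04, 0.15)), 3))
```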
An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.
Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca
2002-09-01
In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed for example at several points in time and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables may make a straightforward statistical evaluation difficult or even impossible. During recent years methods have been developed allowing convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presentation of the statistical methods an example from dentistry is presented in order to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. Illustration of how the corresponding models can be estimated conveniently via MCMC simulation, in particular, 'Gibbs sampling', using the freely available software BUGS is presented. In addition, how the measurement error may influence the estimates of the corresponding coefficients is explored. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly underestimated if the measurement error is not taken into account ('regression dilution bias'). Markov chain Monte Carlo methods may be of great value to dentists in allowing analysis of data sets which exhibit a wide range of different forms of complexity.
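The closing claim, that ignoring covariate measurement error attenuates the estimated effect ('regression dilution bias'), can be checked with a few lines of simulation; this is a plain ordinary-least-squares illustration, not the Bayesian MCMC measurement-error model fitted in the paper.

```python
# Regression dilution bias: noise in the covariate shrinks the OLS slope toward zero
# by the reliability ratio var(x_true) / var(x_observed).
import numpy as np

rng = np.random.default_rng(7)
n, true_slope = 5000, 2.0
x_true = rng.normal(size=n)
y = 1.0 + true_slope * x_true + rng.normal(scale=0.5, size=n)
x_observed = x_true + rng.normal(scale=1.0, size=n)      # covariate measured with error

slope_clean = np.polyfit(x_true, y, 1)[0]
slope_noisy = np.polyfit(x_observed, y, 1)[0]
reliability = x_true.var() / x_observed.var()

print("slope with error-free covariate:", round(slope_clean, 2))
print("slope with noisy covariate     :", round(slope_noisy, 2))
print("expected attenuated slope      :", round(true_slope * reliability, 2))
```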
Synaptic dynamics contribute to long-term single neuron response fluctuations.
Reinartz, Sebastian; Biro, Istvan; Gal, Asaf; Giugliano, Michele; Marom, Shimon
2014-01-01
Firing rate variability at the single neuron level is characterized by long-memory processes and complex statistics over a wide range of time scales (from milliseconds up to several hours). Here, we focus on the contribution of non-stationary efficacy of the ensemble of synapses--activated in response to a given stimulus--to single neuron response variability. We present and validate a method tailored for controlled and specific long-term activation of a single cortical neuron in vitro via synaptic or antidromic stimulation, enabling a clear separation between two determinants of neuronal response variability: membrane excitability dynamics vs. synaptic dynamics. Applying this method we show that, within the range of physiological activation frequencies, the synaptic ensemble of a given neuron is a key contributor to the neuronal response variability, long-memory processes and complex statistics observed over extended time scales. Synaptic transmission dynamics impact on response variability in stimulation rates that are substantially lower compared to stimulation rates that drive excitability resources to fluctuate. Implications to network embedded neurons are discussed.
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
Effect of spray drying on the properties of amylose-hexadecylammonium chloride inclusion complexes
USDA-ARS?s Scientific Manuscript database
Water soluble amylose-hexadecyl ammonium chloride complexes were prepared from high amylose corn starch and hexadecyl ammonium chloride by excess steam jet cooking. Amylose inclusion complexes were spray dried to determine the viability of spray drying as a production method. The variables tested in...
A rapid and repeatable method to deposit bioaerosols on material surfaces.
Calfee, M Worth; Lee, Sang Don; Ryan, Shawn P
2013-03-01
A simple method for repeatably inoculating surfaces with a precise quantity of aerosolized spores was developed. Laboratory studies were conducted to evaluate the variability of the method within and between experiments, the spatial distribution of spore deposition, the applicability of the method to complex surface types, and the relationship between material surface roughness and spore recoveries. Surface concentrations, as estimated by recoveries from wetted-wipe sampling, were between 5×10³ and 1.5×10⁴ CFU cm⁻² across the entire area (930 cm²) inoculated. Between-test variability (CV) in spore recoveries was 40%, 81%, 66%, and 20% for stainless steel, concrete, wood, and drywall, respectively. Within-test variability was lower, and did not exceed 33%, 47%, 52%, and 20% for these materials. The data demonstrate that this method is repeatable, is effective at depositing spores across a target surface area, and can be used to dose complex materials such as concrete, wood, and drywall. In addition, the data demonstrate that surface sampling recoveries vary by material type, and this variability can partially be explained by the material surface roughness index. This deposition method was developed for use in biological agent detection, sampling, and decontamination studies, but is potentially beneficial to any scientific discipline that investigates surfaces containing aerosol-borne particles. Published by Elsevier B.V.
Hu, Yanzhu; Ai, Xinbo
2016-01-01
Complex network methodology is very useful for exploring complex systems. However, the relationships among the variables in a complex system are usually not clear. Therefore, inferring association networks among variables from their observed data has been a popular research topic. We propose a synthetic method, named small-shuffle partial symbolic transfer entropy spectrum (SSPSTES), for inferring association networks from multivariate time series. The method synthesizes surrogate data, partial symbolic transfer entropy (PSTE) and Granger causality. Proper threshold selection is crucial for common correlation identification methods and is not easy for users. The proposed method can not only identify strong correlation without selecting a threshold but also offers correlation quantification, direction identification and temporal relation identification. The method can be divided into three layers, i.e., the data layer, the model layer and the network layer. In the model layer, the method identifies all possible pair-wise correlations. In the network layer, we introduce a filter algorithm to remove indirect weak correlations and retain strong correlations. Finally, we build a weighted adjacency matrix, with the value of each entry representing the correlation level between pair-wise variables, and then obtain the weighted directed association network. Two numerical simulated data sets, from a linear system and a nonlinear system, are used to illustrate the steps and performance of the proposed approach. The ability of the proposed method is finally demonstrated by an application. PMID:27832153
Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc
2017-01-01
Network Analysis is considered as a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in opposition with the usual Latent Variable models, this article is in favor of the integration of a dynamic system of manifestations. Latent Variables models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising new way to infer psychological attributes, placing psychological attributes in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780
Scaling Linguistic Characterization of Precipitation Variability
NASA Astrophysics Data System (ADS)
Primo, C.; Gutierrez, J. M.
2003-04-01
Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters. However different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic precipitation sequence of observed precipitation is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed in that site during a period representative of the climatology. Then, the more variable the observed weekly precipitation sequences, the more complex the obtained language. To characterize these languages, we first applied the Zipf's method obtaining scaling histograms of rank ordered frequencies. However, to obtain significant exponents, the scaling must be maintained some orders of magnitude, requiring long sequences of daily precipitation which are not available at particular stations. Thus this analysis is not suitable for applications involving particular stations (such as regionalization). Then, we introduce an alternative fractal method applicable to data from local stations. The so-called Chaos-Game method uses Iterated Function Systems (IFS) for graphically representing rainfall languages, in a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian peninsula, when compared with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs for a clustering regionalization method, comparing the resulting clusters.
An outline of graphical Markov models in dentistry.
Helfenstein, U; Steiner, M; Menghini, G
1999-12-01
In the usual multiple regression model there is one response variable and one block of several explanatory variables. In contrast, in reality there may be a block of several possibly interacting response variables one would like to explain. In addition, the explanatory variables may split into a sequence of several blocks, each block containing several interacting variables. The variables in the second block are explained by those in the first block; the variables in the third block by those in the first and the second block etc. During recent years methods have been developed allowing analysis of problems where the data set has the above complex structure. The models involved are called graphical models or graphical Markov models. The main result of an analysis is a picture, a conditional independence graph with precise statistical meaning, consisting of circles representing variables and lines or arrows representing significant conditional associations. The absence of a line between two circles signifies that the corresponding two variables are independent conditional on the presence of other variables in the model. An example from epidemiology is presented in order to demonstrate application and use of the models. The data set in the example has a complex structure consisting of successive blocks: the variable in the first block is year of investigation; the variables in the second block are age and gender; the variables in the third block are indices of calculus, gingivitis and mutans streptococci and the final response variables in the fourth block are different indices of caries. Since the statistical methods may not be easily accessible to dentists, this article presents them in an introductory form. Graphical models may be of great value to dentists in allowing analysis and visualisation of complex structured multivariate data sets consisting of a sequence of blocks of interacting variables and, in particular, several possibly interacting responses in the final block.
ERIC Educational Resources Information Center
MacPherson, Megan K.; Smith, Anne
2013-01-01
Purpose: To investigate the potential effects of increased sentence length and syntactic complexity on the speech motor control of children who stutter (CWS). Method: Participants repeated sentences of varied length and syntactic complexity. Kinematic measures of articulatory coordination variability and movement duration during perceptually…
Applied statistics in agricultural, biological, and environmental sciences.
USDA-ARS?s Scientific Manuscript database
Agronomic research often involves measurement and collection of multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate statistical methods encompass the simultaneous analysis of all random variables measured on each experimental or s...
NASA Astrophysics Data System (ADS)
Jiang, T.; Yue, Y.
2017-12-01
It is well known that mono-frequency directional seismic wave technology can concentrate seismic waves into a beam. However, little work has been done on the method and effect of variable frequency directional seismic waves under complex geological conditions. We studied variable frequency directional wave theory in several respects. Firstly, we studied the relation between the directional parameters and the direction of the main beam. Secondly, we analyzed the parameters that significantly affect the width of the main beam, such as vibrator spacing, wavelet dominant frequency, and number of vibrators. In addition, we studied the characteristics of variable frequency directional seismic waves in typical velocity models. In order to examine the propagation characteristics of directional seismic waves, we designed appropriate parameters according to the character of the directional parameters, which is capable of enhancing the energy in the main beam direction. Further study of the directional seismic wave was carried out from the viewpoint of the power spectrum. The results indicate that the energy intensity in the main beam direction increased 2 to 6 times for a multi-ore-body velocity model. This shows that variable frequency directional seismic technology provides an effective way to strengthen target signals under complex geological conditions. For a concave interface model, we introduced complicated directional seismic technology, which supports multiple main beams, to obtain high quality data. Finally, we applied the 9-element variable frequency directional seismic wave technology to process raw data acquired in an oil-shale exploration area. The results show that the depth of exploration increased 4 times with the directional seismic wave method. Based on the above analysis, we conclude that variable frequency directional seismic wave technology can improve target signals under different geologic conditions and increase exploration depth at little cost. Owing to the inconvenience of hydraulic vibrators in complicated surface areas, we suggest that the combination of a high frequency portable vibrator and the variable frequency directional seismic wave method is an alternative technology to increase the depth of exploration or prospecting.
Burgess, Stephen; Daniel, Rhian M; Butterworth, Adam S; Thompson, Simon G
2015-01-01
Background: Mendelian randomization uses genetic variants, assumed to be instrumental variables for a particular exposure, to estimate the causal effect of that exposure on an outcome. If the instrumental variable criteria are satisfied, the resulting estimator is consistent even in the presence of unmeasured confounding and reverse causation. Methods: We extend the Mendelian randomization paradigm to investigate more complex networks of relationships between variables, in particular where some of the effect of an exposure on the outcome may operate through an intermediate variable (a mediator). If instrumental variables for the exposure and mediator are available, direct and indirect effects of the exposure on the outcome can be estimated, for example using either a regression-based method or structural equation models. The direction of effect between the exposure and a possible mediator can also be assessed. Methods are illustrated in an applied example considering causal relationships between body mass index, C-reactive protein and uric acid. Results: These estimators are consistent in the presence of unmeasured confounding if, in addition to the instrumental variable assumptions, the effects of both the exposure on the mediator and the mediator on the outcome are homogeneous across individuals and linear without interactions. Nevertheless, a simulation study demonstrates that even considerable heterogeneity in these effects does not lead to bias in the estimates. Conclusions: These methods can be used to estimate direct and indirect causal effects in a mediation setting, and have potential for the investigation of more complex networks between multiple interrelated exposures and disease outcomes. PMID:25150977
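A minimal sketch of the regression-based logic on simulated data, assuming linear effects without interactions and valid instruments; the variable names, effect sizes and the Wald-ratio shortcut are illustrative, not the authors' exact estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
g_x = rng.binomial(2, 0.3, n)          # instrument for the exposure
g_m = rng.binomial(2, 0.3, n)          # instrument for the mediator
u = rng.normal(size=n)                 # unmeasured confounder
x = 0.5 * g_x + u + rng.normal(size=n)             # exposure
m = 0.4 * x + 0.5 * g_m + u + rng.normal(size=n)   # mediator
y = 0.3 * x + 0.6 * m + u + rng.normal(size=n)     # outcome

def wald_ratio(instrument, exposure, outcome):
    """IV (Wald ratio) estimate of the effect of exposure on outcome."""
    return np.cov(instrument, outcome)[0, 1] / np.cov(instrument, exposure)[0, 1]

total    = wald_ratio(g_x, x, y)   # total effect of x on y
x_on_m   = wald_ratio(g_x, x, m)   # effect of x on the mediator
m_on_y   = wald_ratio(g_m, m, y)   # effect of the mediator on y
indirect = x_on_m * m_on_y
direct   = total - indirect
print(f"total={total:.2f} indirect={indirect:.2f} direct={direct:.2f}")
# With the simulated coefficients: total ~ 0.3 + 0.4*0.6 = 0.54, indirect ~ 0.24, direct ~ 0.3
```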
Evaluations of Structural Interventions for HIV Prevention: A Review of Approaches and Methods.
Iskarpatyoti, Brittany S; Lebov, Jill; Hart, Lauren; Thomas, Jim; Mandal, Mahua
2018-04-01
Structural interventions alter the social, economic, legal, political, and built environments that underlie processes affecting population health. We conducted a systematic review of evaluations of structural interventions for HIV prevention in low- and middle-income countries (LMICs) to better understand methodological and other challenges and identify effective evaluation strategies. We included 27 peer-reviewed articles on interventions related to economic empowerment, education, and substance abuse in LMICs. Twenty-one evaluations included clearly articulated theories of change (TOCs); 14 of these assessed the TOC by measuring intermediary variables in the causal pathway between the intervention and HIV outcomes. Although structural interventions address complex interactions, no evaluation included methods designed to evaluate complex systems. To strengthen evaluations of structural interventions, we recommend clearly articulating a TOC and measuring intermediate variables between the predictor and outcome. We additionally recommend adapting study designs and analytic methods outside traditional epidemiology to better capture complex results, influences external to the intervention, and unintended consequences.
Dynamical complexity changes during two forms of meditation
NASA Astrophysics Data System (ADS)
Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng
2011-06-01
Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes for heart rate variability (HRV) series during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. The results show that dynamical complexity decreases in meditation states for two forms of meditation. Meanwhile, we detected changes in probability distribution of m-words during meditation and explained this changes using probability distribution of sine function. The base-scale entropy method may be used on a wider range of physiologic signals.
NASA Astrophysics Data System (ADS)
Kim, Ho Sung
2013-12-01
A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.
Explicit resolutions for the complex of several Fueter operators
NASA Astrophysics Data System (ADS)
Bureš, Jarolim; Damiano, Alberto; Sabadini, Irene
2007-02-01
An analogue of the Dolbeault complex is introduced for regular functions of several quaternionic variables and studied by means of two different methods. The first one comes from algebraic analysis (for a thorough treatment see the book [F. Colombo, I. Sabadini, F. Sommen, D.C. Struppa, Analysis of Dirac systems and computational algebra, Progress in Mathematical Physics, Vol. 39, Birkhäuser, Boston, 2004]), while the other one relies on the symmetry of the equations and the methods of representation theory (see [F. Colombo, V. Souček, D.C. Struppa, Invariant resolutions for several Fueter operators, J. Geom. Phys. 56 (2006) 1175-1191; R.J. Baston, Quaternionic Complexes, J. Geom. Phys. 8 (1992) 29-52]). The comparison of the two results allows one to describe the operators appearing in the complex in an explicit form. This description leads to a duality theorem which is the generalization of the classical Martineau-Harvey theorem and which is related to hyperfunctions of several quaternionic variables.
Effects of head-down bed rest on complex heart rate variability: Response to LBNP testing
NASA Technical Reports Server (NTRS)
Goldberger, Ary L.; Mietus, Joseph E.; Rigney, David R.; Wood, Margie L.; Fortney, Suzanne M.
1994-01-01
Head-down bed rest is used to model physiological changes during spaceflight. We postulated that bed rest would decrease the degree of complex physiological heart rate variability. We analyzed continuous heart rate data from digitized Holter recordings in eight healthy female volunteers (age 28-34 yr) who underwent a 13-day 6 deg head-down bed rest study with serial lower body negative pressure (LBNP) trials. Heart rate variability was measured on 4-min data segments using conventional time and frequency domain measures as well as with a new measure of signal 'complexity' (approximate entropy). Data were obtained pre-bed rest (control), during bed rest (day 4 and day 9 or 11), and 2 days post-bed rest (recovery). Tolerance to LBNP was significantly reduced on both bed rest days vs. pre-bed rest. Heart rate variability was assessed at peak LBNP. Heart rate approximate entropy was significantly decreased at day 4 and day 9 or 11, returning toward normal during recovery. Heart rate standard deviation and the ratio of high- to low-frequency power did not change significantly. We conclude that short-term bed rest is associated with a decrease in the complex variability of heart rate during LBNP testing in healthy young adult women. Measurement of heart rate complexity, using a method derived from nonlinear dynamics ('chaos theory'), may provide a sensitive marker of this loss of physiological variability, complementing conventional time and frequency domain statistical measures.
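For readers unfamiliar with approximate entropy, the following is a minimal sketch of the standard Pincus-style definition applied to synthetic RR-interval series; the embedding dimension m = 2 and tolerance r = 0.2 SD are conventional choices and are not taken from the study.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D series.
    r is set to r_factor times the series standard deviation."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        patterns = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all pairs of m-length patterns
        dist = np.max(np.abs(patterns[:, None, :] - patterns[None, :, :]), axis=2)
        c = (dist <= r).mean(axis=1)          # includes self-matches, as in ApEn
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
rr_regular  = 800 + 20 * np.sin(np.arange(300) / 5.0)      # hypothetical low-complexity RR series (ms)
rr_variable = 800 + 50 * rng.standard_normal(300)          # hypothetical high-complexity RR series
print(approximate_entropy(rr_regular), approximate_entropy(rr_variable))
```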
Estimation of Monthly Near Surface Air Temperature Using Geographically Weighted Regression in China
NASA Astrophysics Data System (ADS)
Wang, M. M.; He, G. J.; Zhang, Z. M.; Zhang, Z. J.; Liu, X. G.
2018-04-01
Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. The availability of NSAT with high spatial resolution is deemed necessary for several applications such as hydrology, meteorology and ecology. In this study, a regression-based NSAT mapping method is proposed. This method combines remote sensing variables with geographical variables and uses geographically weighted regression to estimate NSAT. Altitude was selected as the geographical variable, and the remote sensing variables include land surface temperature (LST) and the normalized difference vegetation index (NDVI). The performance of the proposed method was assessed by predicting monthly minimum, mean, and maximum NSAT from point station measurements in China, a domain with a large area, complex topography, and highly variable station density, and the NSAT maps were validated against meteorological observations. Validation results with meteorological data show that the proposed method achieved an accuracy of 1.58 °C. It is concluded that the proposed method for mapping NSAT is operational and has good precision.
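A minimal sketch of a geographically weighted regression prediction at a single target location, assuming a Gaussian distance kernel, an arbitrary bandwidth and synthetic station data; it is not the authors' calibration procedure.

```python
import numpy as np

def gwr_predict(coords, X, y, target_coord, target_x, bandwidth=100.0):
    """Geographically weighted regression prediction at one location:
    weighted least squares with a Gaussian kernel on station-target distance."""
    d = np.linalg.norm(coords - target_coord, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])           # add intercept
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # local coefficients
    return np.concatenate([[1.0], target_x]) @ beta

rng = np.random.default_rng(3)
n = 200
coords = rng.uniform(0, 1000, size=(n, 2))               # station locations (km)
X = np.column_stack([rng.uniform(0, 3000, n),            # elevation (m)
                     rng.uniform(270, 310, n),           # LST (K)
                     rng.uniform(0, 1, n)])              # NDVI
y = 30 - 0.0065 * X[:, 0] + 0.3 * (X[:, 1] - 290) + rng.normal(0, 1, n)   # synthetic NSAT (deg C)
print(gwr_predict(coords, X, y, np.array([500.0, 500.0]), np.array([1500.0, 295.0, 0.5])))
```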
USDA-ARS?s Scientific Manuscript database
Next generation sequencing technologies and improved bioinformatics methods have provided opportunities to study sequence variability in complex polyploid transcriptomes. In this study, we used a diverse panel of twenty-two Arachis accessions representing seven Arachis hypogaea market classes, A-, B...
Specifying and Refining a Complex Measurement Model.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…
A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.
1980-01-01
de Silva [141, and Weisman and Wood [76). A particular direct search algorithm, the simplex method, has been cited for having the potential for...spaced discrete points on a line which makes the direction suitable for an efficient integer search technique based on Fibonacci numbers. Two...defined by a subset of variables. The complex algorithm is particularly well suited for this subspace search for two reasons. First, the complex method
Efficient dual approach to distance metric learning.
Shen, Chunhua; Kim, Junae; Liu, Fayao; Wang, Lei; van den Hengel, Anton
2014-02-01
Distance metric learning is of fundamental interest in machine learning because the employed distance metric can significantly affect the performance of many learning methods. Quadratic Mahalanobis metric learning is a popular approach to the problem, but typically requires solving a semidefinite programming (SDP) problem, which is computationally expensive. The worst case complexity of solving an SDP problem involving a matrix variable of size D×D with O(D) linear constraints is about O(D^6.5) using interior-point methods, where D is the dimension of the input data. Thus, the interior-point methods only practically solve problems exhibiting less than a few thousand variables. Because the number of variables is D(D+1)/2, this implies a limit upon the size of problem that can practically be solved of around a few hundred dimensions. The complexity of the popular quadratic Mahalanobis metric learning approach thus limits the size of problem to which metric learning can be applied. Here, we propose a significantly more efficient and scalable approach to the metric learning problem based on the Lagrange dual formulation of the problem. The proposed formulation is much simpler to implement, and therefore allows much larger Mahalanobis metric learning problems to be solved. The time complexity of the proposed method is roughly O(D^3), which is significantly lower than that of the SDP approach. Experiments on a variety of data sets demonstrate that the proposed method achieves an accuracy comparable with the state of the art, but is applicable to significantly larger problems. We also show that the proposed method can be applied to solve more general Frobenius norm regularized SDP problems approximately.
Multivariate analysis in thoracic research.
Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego
2015-03-01
Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged from the need to analyze large databases and increasingly complex data. Since modeling is the best way to represent knowledge of reality, we should use multivariate statistical methods. Multivariate methods are designed to simultaneously analyze data sets, i.e., the analysis of different variables for each person or object studied. Keep in mind at all times that all variables must be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis and each should be employed according to the type of variables to analyze: dependence, interdependence and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause and effect relationships between variables; there is a wide range of analysis types that we can use.
NASA Astrophysics Data System (ADS)
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2017-04-01
Physically-based modeling is a widespread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is model reduction. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared with the original POD method. We have developed another extension to POD that addresses these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method aims to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
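A minimal sketch of snapshot-based POD with Galerkin projection for a small linear test system standing in for a linear groundwater model; the system matrix, snapshot count and basis size are arbitrary, and the POD-DEIM and left-hand-side extensions discussed above are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n, r = 400, 10                        # full state size, reduced basis size

# A stable linear "full" model dx/dt = A x (placeholder for a linear groundwater model).
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
x0 = rng.standard_normal(n)

# Collect snapshots of the full model with explicit Euler.
dt, steps = 0.01, 200
snapshots = []
x = x0.copy()
for _ in range(steps):
    x = x + dt * (A @ x)
    snapshots.append(x.copy())
S = np.array(snapshots).T             # n x steps snapshot matrix

# POD basis: leading left singular vectors of the snapshot matrix.
Phi, _, _ = np.linalg.svd(S, full_matrices=False)
Phi = Phi[:, :r]

# Galerkin-projected reduced model dz/dt = (Phi^T A Phi) z, with x ~ Phi z.
Ar = Phi.T @ A @ Phi
z = Phi.T @ x0
for _ in range(steps):
    z = z + dt * (Ar @ z)
print("relative reduction error:", np.linalg.norm(Phi @ z - x) / np.linalg.norm(x))
```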
Bayesian dynamical systems modelling in the social sciences.
Ranganathan, Shyam; Spaiser, Viktoria; Mann, Richard P; Sumpter, David J T
2014-01-01
Data arising from social systems is often highly complex, involving non-linear relationships between the macro-level variables that characterize these systems. We present a method for analyzing this type of longitudinal or panel data using differential equations. We identify the best non-linear functions that capture interactions between variables, employing Bayes factor to decide how many interaction terms should be included in the model. This method punishes overly complicated models and identifies models with the most explanatory power. We illustrate our approach on the classic example of relating democracy and economic growth, identifying non-linear relationships between these two variables. We show how multiple variables and variable lags can be accounted for and provide a toolbox in R to implement our approach.
Brown, Samuel M.; Tate, Quinn; Jones, Jason P.; Knox, Daniel; Kuttler, Kathryn G.; Lanspa, Michael; Rondina, Matthew T.; Grissom, Colin K.; Behera, Subhasis; Mathews, V.J.; Morris, Alan
2013-01-01
Introduction: Heart-rate variability reflects autonomic nervous system tone as well as the overall health of the baroreflex system. We hypothesized that loss of complexity in heart-rate variability upon ICU admission would be associated with unsuccessful early resuscitation of sepsis. Methods: We prospectively enrolled patients admitted to ICUs with severe sepsis or septic shock from 2009 to 2011. We studied 30 minutes of EKG, sampled at 500 Hz, at ICU admission and calculated heart-rate complexity via detrended fluctuation analysis. The primary outcome was vasopressor independence at 24 hours after ICU admission. The secondary outcome was 28-day mortality. Results: We studied 48 patients, of whom 60% were vasopressor independent at 24 hours. Five (10%) died within 28 days. The ratio of fractal alpha parameters was associated with both vasopressor independence and 28-day mortality (p=0.04) after controlling for mean heart rate. In the optimal model, SOFA score and the long-term fractal alpha parameter were associated with vasopressor independence. Conclusions: Loss of complexity in heart rate variability is associated with worse outcome early in severe sepsis and septic shock. Further work should evaluate whether complexity of heart rate variability (HRV) could guide treatment in sepsis. PMID:23958243
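A minimal sketch of detrended fluctuation analysis on a synthetic RR series, returning short- and long-term scaling exponents whose ratio parallels the "ratio of fractal alpha parameters" mentioned above; the scale ranges are conventional choices, not necessarily those of the study.

```python
import numpy as np

def dfa_alpha(rr, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n over the given scales."""
    y = np.cumsum(rr - np.mean(rr))           # integrated (profile) series
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)      # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

rng = np.random.default_rng(5)
rr = 800 + 0.1 * np.cumsum(rng.normal(0, 2, 5000)) + rng.normal(0, 10, 5000)  # hypothetical RR series
alpha1 = dfa_alpha(rr, scales=np.arange(4, 17))     # short-term exponent
alpha2 = dfa_alpha(rr, scales=np.arange(16, 65))    # long-term exponent
print(alpha1, alpha2, alpha1 / alpha2)              # ratio of the fractal alpha parameters
```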
The Information Content of Discrete Functions and Their Application in Genetic Data Analysis
Sakhanenko, Nikita A.; Kunert-Graf, James; Galas, David J.
2017-10-13
The complex of central problems in data analysis consists of three components: (1) detecting the dependence of variables using quantitative measures, (2) defining the significance of these dependence measures, and (3) inferring the functional relationships among dependent variables. We have argued previously that an information theory approach allows separation of the detection problem from the inference of functional form problem. We approach here the third component of inferring functional forms based on information encoded in the functions. We present a direct method for classifying the functional forms of discrete functions of three variables represented in data sets. Discrete variables are frequently encountered in data analysis, both as the result of inherently categorical variables and from the binning of continuous numerical variables into discrete alphabets of values. The fundamental question of how much information is contained in a given function is answered for these discrete functions, and their surprisingly complex relationships are illustrated. The all-important effect of noise on the inference of function classes is found to be highly heterogeneous and reveals some unexpected patterns. We apply this classification approach to an important area of biological data analysis, that of inference of genetic interactions. Genetic analysis provides a rich source of real and complex biological data analysis problems, and our general methods provide an analytical basis and tools for characterizing genetic problems and for analyzing genetic data. Finally, we illustrate the functional description and the classes of a number of common genetic interaction modes and also show how different modes vary widely in their sensitivity to noise.
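As a hedged illustration of the kind of quantity involved (not the authors' classification scheme), the sketch below estimates how much information a noisy discrete function of three binary variables carries about its output; the Boolean function and noise level are invented.

```python
import numpy as np

def mutual_information(inputs, output):
    """I(inputs; output) in bits, estimated from joint counts."""
    pairs, counts = np.unique(np.column_stack([inputs, output]), axis=0, return_counts=True)
    p_xy = counts / counts.sum()
    # marginal over the input tuple and over the output value
    _, x_inv = np.unique(pairs[:, :-1], axis=0, return_inverse=True)
    p_x = np.bincount(x_inv, weights=p_xy)
    _, y_inv = np.unique(pairs[:, -1], return_inverse=True)
    p_y = np.bincount(y_inv, weights=p_xy)
    return float(np.sum(p_xy * np.log2(p_xy / (p_x[x_inv] * p_y[y_inv]))))

rng = np.random.default_rng(6)
n = 20_000
x1, x2, x3 = (rng.integers(0, 2, n) for _ in range(3))
f = (x1 ^ x2) | x3                       # an example discrete function of three variables
noise = rng.random(n) < 0.1              # 10% of outputs flipped: the effect of noise
z = np.where(noise, 1 - f, f)
X = np.column_stack([x1, x2, x3])
print("I(X1,X2,X3 ; Z) =", round(mutual_information(X, z), 3), "bits")
```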
NASA Technical Reports Server (NTRS)
Smith, C. B.
1982-01-01
The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity depending on spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper-phase shift limit of 5 pi/2 retrieved an accurate peak area distribution profile. Analytical corrections using both the total number and area improved the inversion.
Path optimization method for the sign problem
NASA Astrophysics Data System (ADS)
Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji
2018-03-01
We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or stochastically sampled. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path, designing it not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
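A toy sketch of the path-optimization idea under stated assumptions: a one-variable Gaussian action with a linear imaginary term is chosen because the optimal constant shift is known analytically (f = -lambda), so a simple grid search over a constant path shift can be checked by eye. This is not the model or the neural-network optimization used in the proceedings.

```python
import numpy as np

lam = 2.0
t = np.linspace(-8, 8, 4001)

def average_phase_factor(c):
    """Average phase factor |<e^{i theta}>| along the shifted path z = t + i*c.
    For S(z) = z**2/2 + i*lam*z the Jacobian of a constant shift is 1."""
    z = t + 1j * c
    w = np.exp(-(z ** 2) / 2 - 1j * lam * z)      # complex Boltzmann weight on the path
    return abs(np.sum(w)) / np.sum(np.abs(w))     # common grid spacing cancels

# Optimize the path parameter c by a simple grid search.
grid = np.linspace(-4, 4, 161)
best = grid[np.argmax([average_phase_factor(c) for c in grid])]
print("original path APF:", round(average_phase_factor(0.0), 3))      # ~ exp(-lam**2/2)
print("optimized shift c =", best, " APF:", round(average_phase_factor(best), 3))  # c ~ -lam, APF ~ 1
```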
Porta, Alberto; Bari, Vlasta; Bassani, Tito; Marchi, Andrea; Tassin, Stefano; Canesi, Margherita; Barbic, Franca; Furlan, Raffaello
2013-01-01
Entropy-based approaches are frequently used to quantify complexity of short-term cardiovascular control from spontaneous beat-to-beat variability of heart period (HP) and systolic arterial pressure (SAP). Among these tools the ones optimizing a critical parameter such as the pattern length are receiving more and more attention. This study compares two entropy-based techniques for the quantification of complexity making use of completely different strategies to optimize the pattern length. Comparison was carried out over HP and SAP variability series recorded from 12 Parkinson's disease (PD) patients without orthostatic hypotension or symptoms of orthostatic intolerance and 12 age-matched healthy control (HC) subjects. Regardless of the method, complexity of cardiovascular control increased in PD group, thus suggesting the early impairment of cardiovascular function.
Classical versus Computer Algebra Methods in Elementary Geometry
ERIC Educational Resources Information Center
Pech, Pavel
2005-01-01
Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
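A small illustration of variable elimination via a lexicographic Groebner basis in SymPy; the parametric-curve example is ours, not taken from the paper.

```python
import sympy as sp

t, x, y = sp.symbols('t x y')

# Implicitize the parametric curve x = t**2, y = t**3 by eliminating t:
# a lexicographic Groebner basis with t ordered first contains the eliminant.
G = sp.groebner([x - t**2, y - t**3], t, x, y, order='lex')
eliminant = [g for g in G.exprs if t not in g.free_symbols]
print(eliminant)   # expected: [x**3 - y**2]
```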
Why significant variables aren't automatically good predictors.
Lo, Adeline; Chernoff, Herman; Zheng, Tian; Lo, Shaw-Hwa
2015-11-10
Thus far, genome-wide association studies (GWAS) have been disappointing in the inability of investigators to use the results of identified, statistically significant variants in complex diseases to make predictions useful for personalized medicine. Why are significant variables not leading to good prediction of outcomes? We point out that this problem is prevalent in simple as well as complex data, in the sciences as well as the social sciences. We offer a brief explanation and some statistical insights on why higher significance cannot automatically imply stronger predictivity and illustrate through simulations and a real breast cancer example. We also demonstrate that highly predictive variables do not necessarily appear as highly significant, thus evading the researcher using significance-based methods. We point out that what makes variables good for prediction versus significance depends on different properties of the underlying distributions. If prediction is the goal, we must lay aside significance as the only selection standard. We suggest that progress in prediction requires efforts toward a new research agenda of searching for a novel criterion to retrieve highly predictive variables rather than highly significant variables. We offer an alternative approach that was not designed for significance, the partition retention method, which was very effective predicting on a long-studied breast cancer data set, by reducing the classification error rate from 30% to 8%.
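A toy simulation of the central point made above: with a large sample, a tiny effect becomes highly significant yet explains almost no variance. The sample size and effect size are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 100_000
x = rng.standard_normal(n)
y = 0.03 * x + rng.standard_normal(n)          # true effect is tiny

slope, intercept, r, p, se = stats.linregress(x, y)
print(f"p-value = {p:.2e}")                    # highly "significant" at this sample size
print(f"variance explained R^2 = {r**2:.4f}")  # ~0.001: nearly useless for prediction
```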
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-11
Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of the successive projections algorithm (SPA) with small sample sizes as well as the lack of association between the selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and is applied to the quantitative prediction of alcohol concentration in liquor using an NIR sensor. In the experiment, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, the proposed EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method can overcome the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near infrared sensor data is clear, which can effectively reduce the number of variables and improve prediction accuracy.
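A minimal sketch of the core successive projections step (greedy selection of minimally collinear wavelength columns) on a synthetic spectral matrix; the bootstrap ensemble and evaluation index of EBSPA are not reproduced here.

```python
import numpy as np

def spa_select(X, k, start=0):
    """Successive projections algorithm: starting from one column, repeatedly add the
    column with the largest norm after projection onto the orthogonal complement of
    the already selected columns (i.e. the least collinear candidate)."""
    selected = [start]
    P = X.copy()
    for _ in range(k - 1):
        v = P[:, selected[-1]]
        # project every column onto the complement of the last selected column
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0               # never re-select a chosen variable
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(8)
spectra = rng.random((60, 200))              # 60 samples x 200 NIR wavelengths (synthetic)
print(spa_select(spectra, k=8))
```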
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
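A hedged sketch of first-order Sobol indices computed with a Saltelli-style pick-and-freeze estimator on a toy release-rule function; the function, dimensionality and sample size are illustrative and unrelated to the actual reservoir models.

```python
import numpy as np

def sobol_first_order(f, d, n=20_000, rng=None):
    """First-order Sobol indices via a pick-and-freeze (Saltelli-type) estimator."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = []
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]                   # replace column i of A with B's column i
        S.append(np.mean(fB * (f(AB) - fA)) / var)
    return np.array(S)

# Toy 6-variable "release rule": only the first two decision variables matter much.
def toy_release(x):
    return 5 * x[:, 0] + 3 * x[:, 1] ** 2 + 0.1 * x[:, 2] + 0.01 * x[:, 3:].sum(axis=1)

print(np.round(sobol_first_order(toy_release, d=6), 3))   # large indices only for x0, x1
```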
[Recurrence plot analysis of HRV for brain ischemia and asphyxia].
Chen, Xiaoming; Qiu, Yihong; Zhu, Yisheng
2008-02-01
Heart rate variability (HRV) is the small beat-to-beat variation in the cycles of the heartbeat, which reflects the balance between sympathetic and vagal activity. Since the nonlinear characteristics of HRV have been confirmed, the recurrence plot method, a nonlinear dynamic analysis method based on complexity, can be used to analyze HRV. The results showed that the recurrence plot structures and some quantitative indices (L-Mean, L-Entr) during asphyxia insult vary significantly compared with those under normal conditions, which offers a new method to monitor brain asphyxia injury.
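A minimal sketch of a recurrence plot and two diagonal-line measures, a mean line length and a line-length entropy standing in for L-Mean and L-Entr; the threshold, minimum line length and synthetic series are assumptions.

```python
import numpy as np

def recurrence_measures(x, eps_factor=0.2, l_min=2):
    """Recurrence matrix of a 1-D series plus mean diagonal line length (L-Mean)
    and Shannon entropy of the diagonal line-length distribution (L-Entr)."""
    x = np.asarray(x, dtype=float)
    eps = eps_factor * x.std()
    R = (np.abs(x[:, None] - x[None, :]) <= eps).astype(int)   # recurrence matrix

    lengths = []
    for k in range(1, len(x)):            # scan diagonals above the main one
        run = 0
        for v in list(np.diagonal(R, offset=k)) + [0]:   # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= l_min:
                    lengths.append(run)
                run = 0
    lengths = np.array(lengths)
    l_mean = lengths.mean() if len(lengths) else 0.0
    _, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum() if len(lengths) else np.array([1.0])
    l_entr = float(-(p * np.log(p)).sum())
    return R, l_mean, l_entr

rng = np.random.default_rng(9)
rr = 800 + 30 * np.sin(np.arange(400) / 8.0) + rng.normal(0, 5, 400)   # hypothetical HRV series
_, l_mean, l_entr = recurrence_measures(rr)
print("L-Mean:", round(l_mean, 2), "L-Entr:", round(l_entr, 2))
```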
A non-linear data mining parameter selection algorithm for continuous variables
Razavi, Marianne; Brady, Sean
2017-01-01
In this article, we propose a new data mining algorithm with which one can both capture the non-linearity in data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that would capture complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method that we present here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. This algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method could offer an optimal and stable model that minimizes the mean square error and variability, while combining all possible subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829
Emsley, Richard; Dunn, Graham; White, Ian R
2010-06-01
Complex intervention trials should be able to answer both pragmatic and explanatory questions in order to test the theories motivating the intervention and help understand the underlying nature of the clinical problem being tested. Key to this is the estimation of direct effects of treatment and indirect effects acting through intermediate variables which are measured post-randomisation. Using psychological treatment trials as an example of complex interventions, we review statistical methods which crucially evaluate both direct and indirect effects in the presence of hidden confounding between mediator and outcome. We review the historical literature on mediation and moderation of treatment effects. We introduce two methods from within the existing causal inference literature, principal stratification and structural mean models, and demonstrate how these can be applied in a mediation context before discussing approaches and assumptions necessary for attaining identifiability of key parameters of the basic causal model. Assuming that there is modification by baseline covariates of the effect of treatment (i.e. randomisation) on the mediator (i.e. covariate by treatment interactions), but no direct effect on the outcome of these treatment by covariate interactions leads to the use of instrumental variable methods. We describe how moderation can occur through post-randomisation variables, and extend the principal stratification approach to multiple group methods with explanatory models nested within the principal strata. We illustrate the new methodology with motivating examples of randomised trials from the mental health literature.
ERIC Educational Resources Information Center
Kim, Ho Sung
2013-01-01
A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…
Variable sensory perception in autism.
Haigh, Sarah M
2018-03-01
Autism is associated with sensory and cognitive abnormalities. Individuals with autism generally show normal or superior early sensory processing abilities compared to healthy controls, but deficits in complex sensory processing. In the current opinion paper, it will be argued that sensory abnormalities impact cognition by limiting the amount of signal that can be used to interpret and interact with the environment. There is a growing body of literature showing that individuals with autism exhibit greater trial-to-trial variability in behavioural and cortical sensory responses. If multiple sensory signals that are highly variable are added together to process more complex sensory stimuli, then this might destabilise later perception and impair cognition. Methods to improve sensory processing have shown improvements in more general cognition. Studies that specifically investigate differences in sensory trial-to-trial variability in autism, and the potential changes in variability before and after treatment, could ascertain whether trial-to-trial variability is a good mechanism to target for treatment in autism. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, which, associated with multiscale analysis, can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the area under the curves was computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and a congestive heart failure data set were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared with classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential for assessing complex physiological time series and deserves further investigation in a wide context.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gashkov, Sergey B; Sergeev, Igor' S
2012-10-31
This work suggests a method for deriving lower bounds for the complexity of polynomials with positive real coefficients implemented by circuits of functional elements over the monotone arithmetic basis {x + y, x · y} ∪ {a · x | a ∈ R₊}. Using this method, several new results are obtained. In particular, we construct examples of polynomials of degree m - 1 in each of the n variables with coefficients 0 and 1 having additive monotone complexity m^((1-o(1))n) and multiplicative monotone complexity m^((1/2-o(1))n) as m^n → ∞. In this form, the lower bounds derived here are sharp. Bibliography: 72 titles.
Measurement of the M² beam propagation factor using a focus-tunable liquid lens.
Niederriter, Robert D; Gopinath, Juliet T; Siemens, Mark E
2013-03-10
We demonstrate motion-free beam quality M² measurements of stigmatic, simple astigmatic, and general astigmatic (twisted) beams using only a focus-tunable liquid lens and a CCD camera. We extend the variable-focus technique to the characterization of general astigmatic beams by measuring the 10 second-order moments of the power density distribution for the twisted beam produced by passage through multimode optical fiber. Our method measures the same M² values as the traditional variable-distance method for a wide range of laser beam sources, including nearly TEM(00) (M²≈1) and general astigmatic multimode beams (M²≈8). The method is simple and compact, with no moving parts or complex apparatus and measurement precision comparable to the standard variable-distance method.
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrices for understanding correlation structure, inverse covariance matrices for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big Data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
Klop, D; Engelbrecht, L
2013-12-01
This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.
TopoSCALE v.1.0: downscaling gridded climate data in complex terrain
NASA Astrophysics Data System (ADS)
Fiddes, J.; Gruber, S.
2014-02-01
Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and the lack of accurate meteorological forcing data at the site or scale at which they are required. Gridded data products produced by atmospheric models can fill this gap; however, they are often not at an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to the topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse rate and optionally disaggregated using a climatology approach. We test the method against unscaled grid-level data and a set of reference methods, using a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to a lack of observations (i.e. remote areas or future periods).
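As a hedged illustration of the core pressure-level step (not the TopoSCALE code itself), the sketch below interpolates a coarse-grid temperature profile to fine-scale DEM elevations; the profile values and elevations are invented.

```python
import numpy as np

# Coarse-grid atmospheric column (hypothetical values): geopotential height (m)
# and air temperature (K) on pressure levels.
level_height = np.array([ 500., 1000., 1500., 2000., 3000., 4000.])
level_temp   = np.array([288.0, 284.5, 281.2, 277.8, 271.0, 264.5])

def downscale_temperature(dem_elevation_m):
    """Interpolate the pressure-level temperature profile to the subgrid DEM elevation."""
    return np.interp(dem_elevation_m, level_height, level_temp)

dem = np.array([620., 1480., 2350., 3100.])     # fine-scale DEM cells (m a.s.l.)
print(downscale_temperature(dem))               # K, decreasing with elevation
```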
Geoelectrical characterisation of basement aquifers: the case of Iberekodo, southwestern Nigeria
NASA Astrophysics Data System (ADS)
Aizebeokhai, Ahzegbobor P.; Oyeyemi, Kehinde D.
2018-03-01
Basement aquifers, which occur within the weathered and fractured zones of crystalline bedrocks, are important groundwater resources in tropical and subtropical regions. The development of basement aquifers is complex owing to their high spatial variability. Geophysical techniques are used to obtain information about the hydrologic characteristics of the weathered and fractured zones of the crystalline basement rocks, which relates to the occurrence of groundwater in the zones. The spatial distributions of these hydrologic characteristics are then used to map the spatial variability of the basement aquifers. Thus, knowledge of the spatial variability of basement aquifers is useful in siting wells and boreholes for optimal and perennial yield. Geoelectrical resistivity is one of the most widely used geophysical methods for assessing the spatial variability of the weathered and fractured zones in groundwater exploration efforts in basement complex terrains. The presented study focuses on combining vertical electrical sounding with two-dimensional (2D) geoelectrical resistivity imaging to characterise the weathered and fractured zones in a crystalline basement complex terrain in southwestern Nigeria. The basement aquifer was delineated, and the nature, extent and spatial variability of the delineated basement aquifer were assessed based on the spatial variability of the weathered and fractured zones. The study shows that a multiple-gradient array for 2D resistivity imaging is sensitive to vertical and near-surface stratigraphic features, which have hydrological implications. The integration of resistivity sounding with 2D geoelectrical resistivity imaging is efficient and enhances near-surface characterisation in basement complex terrain.
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
ERIC Educational Resources Information Center
Kim, Seohyun; Lu, Zhenqiu; Cohen, Allan S.
2018-01-01
Bayesian algorithms have been used successfully in the social and behavioral sciences to analyze dichotomous data particularly with complex structural equation models. In this study, we investigate the use of the Polya-Gamma data augmentation method with Gibbs sampling to improve estimation of structural equation models with dichotomous variables.…
ERIC Educational Resources Information Center
Yardley, Sarah; Brosnan, Caragh; Richardson, Jane; Hays, Richard
2013-01-01
This paper addresses the question "what are the variables influencing social interactions and learning during Authentic Early Experience (AEE)?" AEE is a complex educational intervention for new medical students. Following critique of the existing literature, multiple qualitative methods were used to create a study framework conceptually…
High frequency vibration analysis by the complex envelope vectorization.
Giannini, O; Carcaterra, A; Sestieri, A
2007-06-01
The complex envelope displacement analysis (CEDA) is a procedure for solving high frequency vibration and vibro-acoustic problems, providing the envelope of the physical solution. CEDA is based on a variable transformation mapping the high frequency oscillations into signals of low frequency content and has been successfully applied to one-dimensional systems. However, the extension to plates and vibro-acoustic fields met serious difficulties, so a general revision of the theory was carried out, leading finally to a new method, the complex envelope vectorization (CEV). In this paper the CEV method is described, outlining the merits and limits of the procedure, and a set of applications to vibration and vibro-acoustic problems of increasing complexity is presented.
A simple method to measure the complex permittivity of materials at variable temperatures
NASA Astrophysics Data System (ADS)
Yang, Xiaoqing; Yin, Yang; Liu, Zhanwei; Zhang, Di; Wu, Shiyue; Yuan, Jianping; Li, Lixin
2017-10-01
Measurement of the complex permittivity (CP) of a material at different temperatures in microwave heating applications is difficult and complicated. In this paper a simple and convenient method is employed to measure the CP of a material at variable temperatures. In this method the temperature of a sample is increased experimentally, and a genetic algorithm is used to obtain the formula relating CP to temperature. We chose an agar solution (sample) and a Yangshao reactor (microwave heating system) to validate the reliability and feasibility of this method. The physical parameters of the sample (the heat capacity C_p, density ρ, and thermal conductivity k) are set as constants in the simulation and inversion process. We analyze the influence of the variation of these physical parameters with temperature on the accuracy of the inversion results. It is demonstrated that the variation of these physical parameters has little effect on the inversion results within a certain temperature range.
NASA Astrophysics Data System (ADS)
Chen, L.; Cheng, Y. M.
2018-07-01
In this paper, the complex variable reproducing kernel particle method (CVRKPM) for solving bending problems of isotropic thin plates on elastic foundations is presented. In CVRKPM, a one-dimensional basis function is used to obtain the shape function of a two-dimensional problem. CVRKPM is used to form the approximation function of the deflection of thin plates resting on an elastic foundation; the Galerkin weak form of thin plates on an elastic foundation is employed to obtain the discretized system equations; the penalty method is used to apply the essential boundary conditions; and the Winkler and Pasternak foundation models are used to represent the interface pressure between the plate and the foundation. The corresponding formulae of CVRKPM for thin plates on elastic foundations are then presented in detail. Several numerical examples are given to discuss the efficiency and accuracy of CVRKPM, and the corresponding advantages of the present method are shown.
Behavior of complex mixtures in aquatic environments: a synthesis of PNL ecological research
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fickeisen, D.H.; Vaughan, B.E.
1984-06-01
The term complex mixture has been recently applied to energy-related process streams, products and wastes that typically contain hundreds or thousands of individual organic compounds, like petroleum or synthetic fuel oils; but it is more generally applicable. A six-year program of ecological research has focused on four areas important to understanding the environmental behavior of complex mixtures: physicochemical variables, individual organism responses, ecosystems-level determinations, and metabolism. Of these areas, physicochemical variables and organism responses were intensively studied; system-level determinations and metabolism represent more recent directions. Chemical characterization was integrated throughout all areas of the program, and state-of-the-art methods were applied. 155 references, 35 figures, 4 tables.
Beat to beat variability in cardiovascular variables: noise or music?
NASA Technical Reports Server (NTRS)
Appel, M. L.; Berger, R. D.; Saul, J. P.; Smith, J. M.; Cohen, R. J.
1989-01-01
Cardiovascular variables such as heart rate, arterial blood pressure, stroke volume and the shape of electrocardiographic complexes all fluctuate on a beat to beat basis. These fluctuations have traditionally been ignored or, at best, treated as noise to be averaged out. The variability in cardiovascular signals reflects the homeodynamic interplay between perturbations to cardiovascular function and the dynamic response of the cardiovascular regulatory systems. Modern signal processing techniques provide a means of analyzing beat to beat fluctuations in cardiovascular signals, so as to permit a quantitative, noninvasive or minimally invasive method of assessing closed loop hemodynamic regulation and cardiac electrical stability. This method promises to provide a new approach to the clinical diagnosis and management of alterations in cardiovascular regulation and stability.
Graphic tracings of condylar paths and measurements of condylar angles.
el-Gheriani, A S; Winstanley, R B
1989-01-01
A study was carried out to determine the accuracy of different methods of measuring condylar inclination from graphical recordings of condylar paths. Thirty subjects made protrusive mandibular movements while condylar inclination was recorded on a graph paper card. A mandibular facebow and intraoral central bearing plate facilitated the procedure. The first method proved to be too variable to be of value in measuring condylar angles. The spline curve fitting technique was shown to be accurate, but its use clinically may prove complex. The mathematical method was more practical and overcame the variability of the tangent method. Other conclusions regarding condylar inclination are outlined.
Teacher Stress: Complex Model Building with LISREL. Pedagogical Reports, No. 16.
ERIC Educational Resources Information Center
Tellenback, Sten
This paper presents a complex causal model of teacher stress based on data received from the responses of 1,466 teachers from Malmo, Sweden to a questionnaire. Also presented is a method for treating the model variables as higher-order factors or higher-order theoretical constructs. The paper's introduction presents a brief review of teacher…
ERIC Educational Resources Information Center
Kim, Jeong-eun
2012-01-01
This dissertation investigates optimal conditions for form-focused instruction (FFI) by considering effects of internal (i.e., timing and types of FFI) and external (i.e., complexity and familiarity) variables of FFI when it is offered within a primarily meaning-focused context of adult second language (L2) learning. Ninety-two Korean-speaking…
Genomic Methods for Clinical and Translational Pain Research
Wang, Dan; Kim, Hyungsuk; Wang, Xiao-Min; Dionne, Raymond
2012-01-01
Pain is a complex sensory experience for which the molecular mechanisms are yet to be fully elucidated. Individual differences in pain sensitivity are mediated by a complex network of multiple gene polymorphisms, physiological and psychological processes, and environmental factors. Here, we present the methods for applying unbiased molecular-genetic approaches, genome-wide association study (GWAS), and global gene expression analysis, to help better understand the molecular basis of pain sensitivity in humans and variable responses to analgesic drugs. PMID:22351080
Dietrich, Stefan; Floegel, Anna; Troll, Martina; Kühn, Tilman; Rathmann, Wolfgang; Peters, Anette; Sookthai, Disorn; von Bergen, Martin; Kaaks, Rudolf; Adamski, Jerzy; Prehn, Cornelia; Boeing, Heiner; Schulze, Matthias B; Illig, Thomas; Pischon, Tobias; Knüppel, Sven; Wang-Sattler, Rui; Drogan, Dagmar
2016-10-01
The application of metabolomics in prospective cohort studies is statistically challenging. Given the importance of appropriate statistical methods for the selection of disease-associated metabolites in highly correlated complex data, we combined random survival forest (RSF) with an automated backward elimination procedure that addresses such issues. Our RSF approach was illustrated with data from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, with concentrations of 127 serum metabolites as exposure variables and time to development of type 2 diabetes mellitus (T2D) as the outcome variable. A Cox regression analysis with stepwise selection based on this data set was published recently. The methodological comparison (RSF versus Cox regression) was replicated in two independent cohorts. Finally, the R code for implementing the metabolite selection procedure into the RSF syntax is provided. The application of the RSF approach in EPIC-Potsdam resulted in the identification of 16 incident T2D-associated metabolites, which slightly improved prediction of T2D when used in addition to traditional T2D risk factors and also when used together with classical biomarkers. The identified metabolites partly agreed with previous findings using Cox regression, though RSF selected a higher number of highly correlated metabolites. The RSF method appears to be a promising approach for the identification of disease-associated variables in complex data with time to event as the outcome. The demonstrated RSF approach provides findings comparable to the generally used Cox regression, but it also addresses the problem of multicollinearity and is suitable for high-dimensional data. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
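As a rough illustration of the workflow described above (the authors provide R code; this is not it), the following Python sketch assumes the scikit-survival package is installed and uses hypothetical placeholder arrays X (metabolite concentrations), event, time and metabolite_names. Variables are pruned by the drop in concordance index under column permutation, which is a simplified stand-in for the published ranking criterion.

```python
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

def rsf_backward_elimination(X, event, time, metabolite_names,
                             drop_frac=0.2, min_features=5):
    """Greedy backward elimination around a random survival forest.

    Simplified sketch: rank variables by the loss in concordance index when each
    column is permuted, then drop a fraction of the weakest variables per round.
    """
    rng = np.random.default_rng(0)
    y = Surv.from_arrays(event=event, time=time)
    keep = list(range(X.shape[1]))
    while len(keep) > min_features:
        rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=15,
                                   random_state=0)
        rsf.fit(X[:, keep], y)
        base = rsf.score(X[:, keep], y)            # concordance index
        importance = []
        for j in range(len(keep)):
            Xp = X[:, keep].copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break the j-th variable
            importance.append(base - rsf.score(Xp, y))
        order = np.argsort(importance)             # least important first
        n_drop = max(1, int(drop_frac * len(keep)))
        keep = [keep[i] for i in order[n_drop:]]
    return [metabolite_names[i] for i in keep]
```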
Section Height Determination Methods of the Isotopographic Surface in a Complex Terrain Relief
ERIC Educational Resources Information Center
Syzdykova, Guldana D.; Kurmankozhaev, Azimhan K.
2016-01-01
A new method for determining the vertical interval of isotopographic surfaces on rugged terrain was developed. The method is based on the concept of determining the differentiated size of the vertical interval using spatial-statistical properties inherent in the modal characteristic, the degree of variability of apical heights and the chosen map…
Simulating initial attack with two fire containment models
Romain M. Mees
1985-01-01
Given a variable rate of fireline construction and an elliptical fire growth model, two methods for estimating the required number of resources, time to containment, and the resulting fire area were compared. Five examples illustrate some of the computational differences between the simple and the complex methods. The equations for the two methods can be used and...
Krüger, Melanie; Straube, Andreas; Eggert, Thomas
2017-01-01
In recent years, theory-building in motor neuroscience and our understanding of the synergistic control of the redundant human motor system have profited significantly from the emergence of a range of mathematical approaches for analyzing the structure of movement variability. Approaches such as the Uncontrolled Manifold method or the Noise-Tolerance-Covariance decomposition method make it possible to detect and interpret changes in movement coordination due to, e.g., learning, external task constraints or disease, by analyzing the structure of within-subject, inter-trial movement variability. Whereas mathematical approaches exist to investigate the propagation of movement variability in time for cyclical movements (e.g., locomotion), such as time series analysis, similar approaches are missing for discrete, goal-directed movements, such as reaching. Here, we propose canonical correlation analysis as a suitable method to analyze the propagation of within-subject variability across different time points during the execution of discrete movements. While similar analyses have already been applied to discrete movements with only one degree of freedom (DoF; e.g., Pearson's product-moment correlation), canonical correlation analysis allows the coupling of inter-trial variability to be evaluated across different time points along the movement trajectory for multiple-DoF effector systems, such as the arm. The theoretical analysis is illustrated by empirical data from a study on reaching movements under normal and disturbed proprioception. The results show increased movement duration, decreased movement amplitude, and altered movement coordination under ischemia, which results in a reduced complexity of movement control. Movement endpoint variability is not increased under ischemia. This suggests that healthy adults are able to immediately and efficiently adjust the control of complex reaching movements to compensate for the loss of proprioceptive information. Further, it is shown that, by using canonical correlation analysis, alterations in movement coordination that indicate changes in the control strategy concerning the use of motor redundancy can be detected, which represents an important methodological advance in the context of neuromechanics.
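A minimal sketch of the core computation, assuming scikit-learn is available; `early` and `late` are hypothetical (n_trials × n_DoF) matrices of effector variables (e.g., joint angles) sampled at two time points along the reaching trajectory.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_correlations(early, late, n_components=3):
    """Couple inter-trial variability at two time points of a discrete movement.

    Returns the canonical correlations between the two variable sets;
    n_components must not exceed the smaller number of DoF.
    """
    cca = CCA(n_components=n_components)
    U, V = cca.fit_transform(early, late)        # canonical scores per trial
    return np.array([np.corrcoef(U[:, k], V[:, k])[0, 1]
                     for k in range(n_components)])
```

High canonical correlations would indicate that inter-trial variability at the early time point strongly constrains variability later in the trajectory.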
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
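As a hedged illustration of the Sobol screening step, assuming the SALib package is installed; the 12 "release" decision variables and the toy_objective function are made-up stand-ins for the actual reservoir operation model and objectives used in the study.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-in for a reservoir operation problem: decision variables
# are monthly release fractions, the objective is a made-up smooth function.
problem = {
    "num_vars": 12,
    "names": [f"release_{m}" for m in range(12)],
    "bounds": [[0.0, 1.0]] * 12,
}

def toy_objective(x):
    return np.sin(np.pi * x[0]) + 0.8 * x[1] ** 2 + 0.05 * x[2:].sum()

X = saltelli.sample(problem, 1024)                 # N * (2D + 2) model runs
Y = np.apply_along_axis(toy_objective, 1, X)
Si = sobol.analyze(problem, Y)                     # first-order and total indices
insensitive = [n for n, st in zip(problem["names"], Si["ST"]) if st < 0.01]
print("candidates to fix during problem decomposition:", insensitive)
```

Variables with negligible total-order indices are the ones a sensitivity-informed decomposition would fix, shrinking the search space before the full optimization.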
Application of a data-mining method based on Bayesian networks to lesion-deficit analysis
NASA Technical Reports Server (NTRS)
Herskovits, Edward H.; Gerring, Joan P.
2003-01-01
Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.
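A generic score-based Bayesian-network structure search of this kind can be sketched with the pgmpy package (assumed installed); this is not the authors' algorithm, and the DataFrame of binary lesion/outcome variables is hypothetical.

```python
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

def learn_structure(df: pd.DataFrame):
    """Score-based structure search over categorical variables.

    df: one column per lesion location plus an outcome column such as 'ADHD'
    (all column names hypothetical). Returns the learned edge list.
    """
    search = HillClimbSearch(df)
    model = search.estimate(scoring_method=BicScore(df))
    return list(model.edges())
```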
Automated reverse engineering of nonlinear dynamical systems
Bongard, Josh; Lipson, Hod
2007-01-01
Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated “reverse engineering” approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future. PMID:17553966
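The authors' method is evolutionary symbolic regression; as a loosely related and much simpler alternative (a SINDy-style sparse regression, not the approach above), one can fit finite-difference derivatives against a polynomial term library with scikit-learn. All data arrays are hypothetical.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lasso

def sparse_ode_identification(t, X, degree=3, alpha=1e-3):
    """Fit dx/dt ≈ Theta(x) · w with a sparse linear model.

    t: (n,) sample times; X: (n, d) observed state trajectories.
    Returns one human-readable right-hand-side expression per state variable.
    """
    dXdt = np.gradient(X, t, axis=0)                 # finite-difference derivatives
    poly = PolynomialFeatures(degree)
    theta = poly.fit_transform(X)                    # candidate term library
    names = poly.get_feature_names_out()
    models = []
    for j in range(X.shape[1]):
        w = Lasso(alpha=alpha, max_iter=10000).fit(theta, dXdt[:, j]).coef_
        terms = [f"{c:+.3g}*{n}" for c, n in zip(w, names) if abs(c) > 1e-6]
        models.append(" ".join(terms) or "0")
    return models
```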
Recurrence-plot-based measures of complexity and their application to heart-rate-variability data.
Marwan, Norbert; Wessel, Niels; Meyerfeldt, Udo; Schirdewan, Alexander; Kurths, Jürgen
2002-08-01
The knowledge of transitions between regular, laminar or chaotic behaviors is essential to understand the underlying mechanisms behind complex systems. While several linear approaches are often insufficient to describe such processes, there are several nonlinear methods that, however, require rather long time observations. To overcome these difficulties, we propose measures of complexity based on vertical structures in recurrence plots and apply them to the logistic map as well as to heart-rate-variability data. For the logistic map these measures enable us not only to detect transitions between chaotic and periodic states, but also to identify laminar states, i.e., chaos-chaos transitions. The traditional recurrence quantification analysis fails to detect the latter transitions. Applying our measures to the heart-rate-variability data, we are able to detect and quantify the laminar phases before a life-threatening cardiac arrhythmia occurs thereby facilitating a prediction of such an event. Our findings could be of importance for the therapy of malignant cardiac arrhythmias.
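A minimal NumPy sketch of the vertical-line idea for a scalar series; the recurrence threshold eps and minimum line length vmin are illustrative choices, and no embedding or Theiler window is applied.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix of a scalar series (use a norm for embedded vectors)."""
    d = np.abs(np.asarray(x, float)[:, None] - np.asarray(x, float)[None, :])
    return (d < eps).astype(int)

def laminarity(R, vmin=2):
    """Fraction of recurrent points forming vertical lines of length >= vmin."""
    n = R.shape[0]
    lengths = []
    for j in range(n):
        run = 0
        for i in range(n):
            if R[i, j]:
                run += 1
            else:
                if run:
                    lengths.append(run)
                run = 0
        if run:
            lengths.append(run)
    lengths = np.array(lengths)
    total = lengths.sum()
    return lengths[lengths >= vmin].sum() / total if total else 0.0
```

The recurrence rate is simply R.mean(); the laminarity above is the kind of vertical-structure measure that distinguishes laminar phases from the diagonal-line measures of traditional recurrence quantification analysis.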
Koch, Iris; Reimer, Kenneth J; Bakker, Martine I; Basta, Nicholas T; Cave, Mark R; Denys, Sébastien; Dodd, Matt; Hale, Beverly A; Irwin, Rob; Lowney, Yvette W; Moore, Margo M; Paquin, Viviane; Rasmussen, Pat E; Repaso-Subang, Theresa; Stephenson, Gladys L; Siciliano, Steven D; Wragg, Joanna; Zagury, Gerald J
2013-01-01
Bioaccessibility is a measurement of a substance's solubility in the human gastro-intestinal system, and is often used in the risk assessment of soils. The present study was designed to determine the variability among laboratories using different methods to measure the bioaccessibility of 24 inorganic contaminants in one standardized soil sample, the standard reference material NIST 2710. Fourteen laboratories used a total of 17 bioaccessibility extraction methods. The variability between methods was assessed by calculating the reproducibility relative standard deviations (RSDs), where reproducibility is the sum of within-laboratory and between-laboratory variability. Whereas within-laboratory repeatability was usually better than (<) 15% for most elements, reproducibility RSDs were much higher, indicating more variability, although for many elements they were comparable to typical uncertainties (e.g., 30% in commercial laboratories). For five trace elements of interest, reproducibility RSDs were: arsenic (As), 22-44%; cadmium (Cd), 11-41%; Cu, 15-30%; lead (Pb), 45-83%; and Zn, 18-56%. Only one method variable, pH, was found to correlate significantly with bioaccessibility for aluminum (Al), Cd, copper (Cu), manganese (Mn), Pb and zinc (Zn) but other method variables could not be examined systematically because of the study design. When bioaccessibility results were directly compared with bioavailability results for As (swine and mouse) and Pb (swine), four methods returned results within uncertainty ranges for both elements: two that were defined as simpler (gastric phase only, limited chemicals) and two were more complex (gastric + intestinal phases, with a mixture of chemicals).
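A simplified way to compute a reproducibility RSD of this kind in Python (ignoring unbalanced designs and the formal ISO 5725 variance model); the dictionary of per-laboratory replicate values is hypothetical.

```python
import numpy as np

def reproducibility_rsd(results):
    """Approximate reproducibility RSD (%) from per-lab replicate measurements.

    results: dict mapping lab name -> array of replicate bioaccessibility values (%).
    Reproducibility variance is taken as within-lab plus between-lab variance.
    """
    labs = [np.asarray(v, float) for v in results.values()]
    grand_mean = np.mean(np.concatenate(labs))
    within_var = np.mean([v.var(ddof=1) for v in labs])
    between_var = np.var([v.mean() for v in labs], ddof=1)
    return 100.0 * np.sqrt(within_var + between_var) / grand_mean

# e.g. reproducibility_rsd({"lab_A": [40, 42, 41], "lab_B": [55, 53, 56]})
```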
Robert E. Keane
2013-01-01
Wildland fuelbeds are exceptionally complex, consisting of diverse particles of many sizes, types and shapes with abundances and properties that are highly variable in time and space. This complexity makes it difficult to accurately describe, classify, sample and map fuels for wildland fire research and management. As a result, many fire behaviour and effects software...
Streamflow variability and classification using false nearest neighbor method
NASA Astrophysics Data System (ADS)
Vignesh, R.; Jothiprakash, V.; Sivakumar, B.
2015-12-01
Understanding regional streamflow dynamics and patterns continues to be a challenging problem. The present study introduces the false nearest neighbor (FNN) algorithm, a nonlinear dynamic-based method, to examine the spatial variability of streamflow over a region. The FNN method is a dimensionality-based approach, where the dimension of the time series represents its variability. The method uses phase space reconstruction and nearest neighbor concepts, and identifies false neighbors in the reconstructed phase space. The FNN method is applied to monthly streamflow data monitored over a period of 53 years (1950-2002) in an extensive network of 639 stations in the contiguous United States (US). Since selection of delay time in phase space reconstruction may influence the FNN outcomes, analysis is carried out for five different delay time values: monthly, seasonal, and annual separation of data as well as delay time values obtained using autocorrelation function (ACF) and average mutual information (AMI) methods. The FNN dimensions for the 639 streamflow series are generally identified to range from 4 to 12 (with very few exceptional cases), indicating a wide range of variability in the dynamics of streamflow across the contiguous US. However, the FNN dimensions for a majority of the streamflow series are found to be low (less than or equal to 6), suggesting low level of complexity in streamflow dynamics in most of the individual stations and over many sub-regions. The FNN dimension estimates also reveal that streamflow dynamics in the western parts of the US (including far west, northwestern, and southwestern parts) generally exhibit much greater variability compared to that in the eastern parts of the US (including far east, northeastern, and southeastern parts), although there are also differences among 'pockets' within these regions. These results are useful for identification of appropriate model complexity at individual stations, patterns across regions and sub-regions, interpolation and extrapolation of data, and catchment classification. An attempt is also made to relate the FNN dimensions with catchment characteristics and streamflow statistical properties.
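A compact sketch of the false nearest neighbour fraction for one embedding dimension, using only NumPy/SciPy; the tolerance rtol and the omission of a Theiler window are simplifications of the full algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(x, m, tau, rtol=15.0):
    """Fraction of false nearest neighbours for embedding dimension m and delay tau."""
    x = np.asarray(x, float)
    n = len(x) - m * tau                       # leave room for the (m+1)-th coordinate
    emb = np.column_stack([x[k * tau : k * tau + n] for k in range(m)])
    dist, idx = cKDTree(emb).query(emb, k=2)   # nearest neighbour other than the point
    d_m, nn = dist[:, 1], idx[:, 1]
    extra = np.abs(x[m * tau : m * tau + n] - x[nn + m * tau])
    return float(np.mean(extra / np.maximum(d_m, 1e-12) > rtol))
```

Increasing m until the returned fraction drops near zero gives the FNN dimension used above as the measure of streamflow variability.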
Ioannidis, J P; McQueen, P G; Goedert, J J; Kaslow, R A
1998-03-01
Complex immunogenetic associations of disease involving a large number of gene products are difficult to evaluate with traditional statistical methods and may require complex modeling. The authors evaluated the performance of feed-forward backpropagation neural networks in predicting rapid progression to acquired immunodeficiency syndrome (AIDS) for patients with human immunodeficiency virus (HIV) infection on the basis of major histocompatibility complex variables. Networks were trained on data from patients from the Multicenter AIDS Cohort Study (n = 139) and then validated on patients from the DC Gay cohort (n = 102). The outcome of interest was rapid disease progression, defined as progression to AIDS in <6 years from seroconversion. Human leukocyte antigen (HLA) variables were selected as network inputs with multivariate regression and a previously described algorithm selecting markers with extreme point estimates for progression risk. Network performance was compared with that of logistic regression. Networks with 15 HLA inputs and a single hidden layer of five nodes achieved a sensitivity of 87.5% and specificity of 95.6% in the training set, vs. 77.0% and 76.9%, respectively, achieved by logistic regression. When validated on the DC Gay cohort, networks averaged a sensitivity of 59.1% and specificity of 74.3%, vs. 53.1% and 61.4%, respectively, for logistic regression. Neural networks offer further support to the notion that HIV disease progression may be dependent on complex interactions between different class I and class II alleles and transporters associated with antigen processing variants. The effect in the current models is of moderate magnitude, and more data as well as other host and pathogen variables may need to be considered to improve the performance of the models. Artificial intelligence methods may complement linear statistical methods for evaluating immunogenetic associations of disease.
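A hedged sketch of this kind of comparison using scikit-learn rather than the original software; X (one-hot encoded HLA markers) and y (1 = progression to AIDS in under 6 years) are hypothetical placeholders, and the architecture mirrors only the 5-hidden-node description above.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

def compare_models(X, y):
    """Report sensitivity and specificity of a small neural net vs. logistic regression."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    models = {
        "neural net (5 hidden nodes)": MLPClassifier(hidden_layer_sizes=(5,),
                                                     max_iter=2000, random_state=0),
        "logistic regression": LogisticRegression(max_iter=1000),
    }
    for name, model in models.items():
        pred = model.fit(X_tr, y_tr).predict(X_te)
        sens = recall_score(y_te, pred)                  # sensitivity
        spec = recall_score(y_te, pred, pos_label=0)     # specificity
        print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f}")
```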
Buscema, Massimo; Grossi, Enzo
2008-01-01
We describe here a new mapping method able to uncover connectivity traces among variables by means of an artificial adaptive system, the Auto Contractive Map (AutoCM), which defines the strength of the association of each variable with all the others in a dataset. After the training phase, the weights matrix of the AutoCM represents the map of the main connections between the variables. The example of a gastro-oesophageal reflux disease database illustrates how this new approach can help to re-design the overall structure of factors related to the description of complex, specific diseases.
Riegel, Barbara; Lee, Christopher S; Sochalski, Julie
2010-05-01
Comparing disease management programs and their effects is difficult because of wide variability in program intensity and complexity. The purpose of this effort was to develop an instrument that can be used to describe the intensity and complexity of heart failure (HF) disease management programs. Specific composition criteria were taken from the American Heart Association (AHA) taxonomy of disease management and hierarchically scored to allow users to describe the intensity and complexity of the domains and subdomains of HF disease management programs. The HF Disease Management Scoring Instrument (HF-DMSI) incorporates 6 of the 8 domains from the taxonomy: recipient, intervention content, delivery personnel, method of communication, intensity/complexity, and environment. The 3 intervention content subdomains (education/counseling, medication management, and peer support) are described separately. In this first test of the HF-DMSI, overall intensity (measured as duration) and complexity were rated using an ordinal scoring system. Possible scores reflect a clinical rationale and differ by category, with zero given only if the element could potentially be missing (eg, surveillance by remote monitoring). Content validity was evident as the instrument matches the existing AHA taxonomy. After revision and refinement, 2 authors obtained an inter-rater reliability intraclass correlation coefficient score of 0.918 (confidence interval, 0.880 to 0.944, P<0.001) in their rating of 12 studies. The areas with most variability among programs were delivery personnel and method of communication. The HF-DMSI is useful for describing the intensity and complexity of HF disease management programs.
Quantifying clutter: A comparison of four methods and their relationship to bat detection
Joy M. O’Keefe; Susan C. Loeb; Hoke S. Hill Jr.; J. Drew Lanham
2014-01-01
The degree of spatial complexity in the environment, or clutter, affects the quality of foraging habitats for bats and their detection with acoustic systems. Clutter has been assessed in a variety of ways but there are no standardized methods for measuring clutter. We compared four methods (Visual Clutter, Cluster, Single Variable, and Clutter Index) and related these...
High performance frame synchronization for continuous variable quantum key distribution systems.
Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua
2015-08-24
Considering a practical continuous variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical low-complexity implementation of this method is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.
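The phase-rotation compatibility can be illustrated with a plain cross-correlation search in NumPy; this is only a generic sketch and omits the variable frame length and low-SNR refinements of the proposed scheme. The arrays rx and sync are hypothetical complex sample streams.

```python
import numpy as np

def find_frame_start(rx, sync):
    """Locate a known synchronization frame inside received complex samples.

    numpy.correlate conjugates its second argument, so the magnitude of the
    complex cross-correlation is invariant to a common phase rotation of the
    channel, which is the property exploited for phase-shift-tolerant sync.
    """
    corr = np.abs(np.correlate(rx, sync, mode="valid"))
    return int(np.argmax(corr))
```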
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between the parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, out of six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
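For illustration only (not the exact design used in the study), a two-level factorial design and its main effects can be computed as follows; a real fractional design would keep only a generator-selected subset of the runs.

```python
import numpy as np
from itertools import product

def two_level_design(n_factors):
    """Full two-level factorial design matrix coded as -1/+1."""
    return np.array(list(product((-1, 1), repeat=n_factors)))

def main_effects(design, y):
    """Main effect of each factor = mean response at +1 minus mean response at -1."""
    return np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                     for j in range(design.shape[1])])

# e.g. design = two_level_design(6); run the process model at each row to get y,
# then rank factors by the magnitude of main_effects(design, y).
```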
Forecasting seasonal hydrologic response in major river basins
NASA Astrophysics Data System (ADS)
Bhuiyan, A. M.
2014-05-01
Seasonal precipitation variation due to natural climate variability influences stream flow and the apparent frequency and severity of extreme hydrological conditions such as floods and droughts. To study hydrologic response and understand the occurrence of extreme hydrological events, the relevant forcing variables must be identified. This study attempts to assess and quantify the historical occurrence and context of extreme hydrologic flow events and to quantify the relation between the relevant climate variables. Once identified, the flow data and climate variables are evaluated to identify the primary indicators of hydrologic extreme event occurrence. Existing studies focus on developing basin-scale forecasting techniques based on climate anomalies in El Nino/La Nina episodes linked to global climate. Building on earlier work, the goal of this research is to quantify variations in historical river flows at the seasonal temporal scale and at regional to continental spatial scales. The work identifies and quantifies runoff variability of major river basins and correlates flow with environmental forcing variables such as El Nino, La Nina and the sunspot cycle. These variables are expected to be the primary external natural indicators of inter-annual and inter-seasonal patterns of regional precipitation and river flow. Relations between continental-scale hydrologic flows and external climate variables are evaluated through direct correlations in a seasonal context with environmental indices such as sunspot numbers (SSN), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Methods including stochastic time series analysis and artificial neural networks are developed to represent the seasonal variability evident in the historical records of river flows. River flows are categorized into low, average and high flow levels to evaluate and simulate flow variations under the associated climate variable variations. Results demonstrated that no single method is best suited to represent scenarios leading to extreme flow conditions. For selected flow scenarios, the persistence model performance may be comparable to more complex multivariate approaches, and complex methods did not always improve flow estimation. Overall model performance indicates that including river flows and forcing variables on average improves extreme event forecasting skill. As a means to further refine the flow estimation, an ensemble forecast method is implemented to provide a likelihood-based indication of expected river flow magnitude and variability. Results indicate seasonal flow variations are well captured in the ensemble range; therefore the ensemble approach can often prove efficient in estimating extreme river flow conditions. The discriminant prediction approach, a probabilistic measure to forecast streamflow, is also adopted to assess model performance. Results show the efficiency of the method in terms of representing uncertainties in the forecasts.
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
Calculation of the bending of electromechanical aircraft element made of the carbon fiber
NASA Astrophysics Data System (ADS)
Danilova-Volkovskaya, Galina; Chepurnenko, Anton; Begak, Aleksandr; Savchenko, Andrey
2017-10-01
We consider a method for calculating an orthotropic plate with variable thickness. The solution is performed numerically by the finite element method. The calculation is made for the springs of a hang glider made of carbon fiber. A comparison of the results with those of the Sofistik software package is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil
2016-04-29
We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empiric quality control of intermediate iterates, complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We will show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.
A system of three-dimensional complex variables
NASA Technical Reports Server (NTRS)
Martin, E. Dale
1986-01-01
Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
Recurrence Quantification Analysis of Sentence-Level Speech Kinematics
Tiede, Mark; Riley, Michael A.; Whalen, D. H.
2016-01-01
Purpose Current approaches to assessing sentence-level speech variability rely on measures that quantify variability across utterances and use normalization procedures that alter raw trajectory data. The current work tests the feasibility of a less restrictive nonlinear approach—recurrence quantification analysis (RQA)—via a procedural example and subsequent analysis of kinematic data. Method To test the feasibility of RQA, lip aperture (i.e., the Euclidean distance between lip-tracking sensors) was recorded for 21 typically developing adult speakers during production of a simple utterance. The utterance was produced in isolation and in carrier structures differing just in length or in length and complexity. Four RQA indices were calculated: percent recurrence (%REC), percent determinism (%DET), stability (MAXLINE), and stationarity (TREND). Results Percent determinism (%DET) decreased only for the most linguistically complex sentence; MAXLINE decreased as a function of linguistic complexity but increased for the longer-only sentence; TREND decreased as a function of both length and linguistic complexity. Conclusions This research note demonstrates the feasibility of using RQA as a tool to compare speech variability across speakers and groups. RQA offers promise as a technique to assess effects of potential stressors (e.g., linguistic or cognitive factors) on the speech production system. PMID:27824987
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of the estimation of the latent variable. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and additional data might be less accurate due to having a higher-dimensional parameter. The present paper presents a theoretical analysis of the accuracy of such a model and clarifies which factor has the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Zhang, Xia; Hu, Changqin
2017-09-08
Penicillins are typical of complex ionic samples, which are likely to contain a large number of degradation-related impurities (DRIs) with different polarities and charge properties. It is often a challenge to develop selective and robust high performance liquid chromatography (HPLC) methods for the efficient separation of all DRIs. In this study, an analytical quality by design (AQbD) approach was proposed for stability-indicating method development for cloxacillin. The structures, retention and UV characteristics of penicillins and their impurities were summarized into rules and served as useful prior knowledge. Through quality risk assessment and a screening design, 3 critical process parameters (CPPs) were defined, including 2 mixture variables (MVs) and 1 process variable (PV). A combined mixture-process variable (MPV) design was conducted to evaluate the 3 CPPs simultaneously, and a response surface methodology (RSM) was used to obtain the optimal experimental parameters. A dual gradient elution was performed to change buffer pH, mobile-phase type and strength simultaneously. The design spaces (DSs) were evaluated using Monte Carlo simulation to give the probability of meeting the specifications of the critical quality attributes (CQAs). A Plackett-Burman design was performed to test the robustness around the working points and to decide the normal operating ranges (NORs). Finally, validation was performed following International Conference on Harmonisation (ICH) guidelines. To our knowledge, this is the first study using an MPV design and dual gradient elution to develop HPLC methods and improve separations for complex ionic samples. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Lin, Yu-Cheng; Lin, Yu-Hsuan; Lo, Men-Tzung; Peng, Chung-Kang; Huang, Norden E.; Yang, Cheryl C. H.; Kuo, Terry B. J.
2016-02-01
The complex fluctuations in heart rate variability (HRV) reflect cardiac autonomic modulation and are an indicator of congestive heart failure (CHF). This paper proposes a novel nonlinear approach to HRV investigation, the multi dynamic trend analysis (MDTA) method, based on the empirical mode decomposition algorithm of the Hilbert-Huang transform combined with a variable-sized sliding-window method. Electrocardiographic signal data obtained from the PhysioNet database were used. These data were from subjects with CHF (mean age = 59.4 ± 8.4), an age-matched elderly healthy control group (59.3 ± 10.6), and a healthy young group (30.3 ± 4.8); the HRVs of these subjects were processed using the MDTA method, time domain analysis, and frequency domain analysis. Among all HRV parameters, the MDTA absolute value slope (MDTS) and MDTA deviation (MDTD) exhibited the greatest area under the curve (AUC) of the receiver operating characteristics in distinguishing between the CHF group and the healthy controls (AUC = 1.000) and between the healthy elderly subject group and the young subject group (AUC = 0.834 ± 0.067 for MDTS; 0.837 ± 0.066 for MDTD). The CHF subjects presented with lower MDTA indices than those of the healthy elderly subject group. Furthermore, the healthy elderly subjects exhibited lower MDTA indices than those of the young controls. The MDTA method can adaptively and automatically identify the intrinsic fluctuation on variable temporal and spatial scales when investigating complex fluctuations in the cardiac autonomic regulation effects of aging and CHF.
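Only the sliding-window flavour of this idea is easy to sketch generically; the following NumPy snippet computes mean absolute local trend slopes of an RR-interval series at several window sizes and deliberately omits the EMD decomposition of the published MDTA method. Window sizes and step fraction are illustrative.

```python
import numpy as np

def trend_slopes(rr, window_sizes=(64, 128, 256), step_frac=0.5):
    """Mean absolute local trend slope of an RR-interval series at several scales.

    rr: 1-D array of RR intervals (hypothetical). For each window size, a straight
    line is fit in overlapping windows and the absolute slopes are averaged.
    """
    rr = np.asarray(rr, float)
    out = {}
    for w in window_sizes:
        step = max(1, int(step_frac * w))
        slopes = [np.polyfit(np.arange(w), rr[i:i + w], 1)[0]
                  for i in range(0, len(rr) - w + 1, step)]
        out[w] = float(np.mean(np.abs(slopes)))
    return out
```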
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
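The appended syntax is for Mplus; as a language-neutral illustration of the same Monte Carlo idea, a single-mediator power calculation can be sketched in Python. The path values and the joint-significance decision rule are illustrative choices, not the paper's prescriptions.

```python
import numpy as np
from scipy import stats

def mediation_power(n, a, b, cprime, n_sims=2000, alpha=0.05, seed=0):
    """Monte Carlo power for the indirect effect in a single-mediator model.

    Data are simulated from x -> m -> y with direct effect cprime; the indirect
    effect is declared detected when paths a and b are both significant.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + cprime * x + rng.normal(size=n)
        p_a = stats.linregress(x, m).pvalue
        X = np.column_stack([np.ones(n), m, x])            # y ~ 1 + m + x
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        se = np.sqrt(resid @ resid / (n - 3) * np.linalg.inv(X.T @ X).diagonal())
        p_b = 2 * stats.t.sf(abs(beta[1] / se[1]), df=n - 3)
        hits += (p_a < alpha) and (p_b < alpha)
    return hits / n_sims

# e.g. mediation_power(n=100, a=0.3, b=0.3, cprime=0.1) estimates power for small paths
```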
Complexity reduction of biochemical rate expressions.
Schmidt, Henning; Madsen, Mads F; Danø, Sune; Cedersund, Gunnar
2008-03-15
The current trend in dynamical modelling of biochemical systems is to construct more and more mechanistically detailed and thus complex models. The complexity is reflected in the number of dynamic state variables and parameters, as well as in the complexity of the kinetic rate expressions. However, a greater level of complexity, or level of detail, does not necessarily imply better models, or a better understanding of the underlying processes. Data often does not contain enough information to discriminate between different model hypotheses, and such overparameterization makes it hard to establish the validity of the various parts of the model. Consequently, there is an increasing demand for model reduction methods. We present a new reduction method that reduces complex rational rate expressions, such as those often used to describe enzymatic reactions. The method is a novel term-based identifiability analysis, which is easy to use and allows for user-specified reductions of individual rate expressions in complete models. The method is one of the first methods to meet the classical engineering objective of improved parameter identifiability without losing the systems biology demand of preserved biochemical interpretation. The method has been implemented in the Systems Biology Toolbox 2 for MATLAB, which is freely available from http://www.sbtoolbox2.org. The Supplementary Material contains scripts that show how to use it by applying the method to the example models, discussed in this article.
Stochastic Time Models of Syllable Structure
Shaw, Jason A.; Gafos, Adamantios I.
2015-01-01
Drawing on phonology research within the generative linguistics tradition, stochastic methods, and notions from complex systems, we develop a modelling paradigm linking phonological structure, expressed in terms of syllables, to speech movement data acquired with 3D electromagnetic articulography and X-ray microbeam methods. The essential variable in the models is syllable structure. When mapped to discrete coordination topologies, syllabic organization imposes systematic patterns of variability on the temporal dynamics of speech articulation. We simulated these dynamics under different syllabic parses and evaluated simulations against experimental data from Arabic and English, two languages claimed to parse similar strings of segments into different syllabic structures. Model simulations replicated several key experimental results, including the fallibility of past phonetic heuristics for syllable structure, and exposed the range of conditions under which such heuristics remain valid. More importantly, the modelling approach consistently diagnosed syllable structure proving resilient to multiple sources of variability in experimental data including measurement variability, speaker variability, and contextual variability. Prospects for extensions of our modelling paradigm to acoustic data are also discussed. PMID:25996153
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ju, E-mail: jliu@ices.utexas.edu; Gomez, Hector; Evans, John A.
2013-09-01
We propose a new methodology for the numerical solution of the isothermal Navier–Stokes–Korteweg equations. Our methodology is based on a semi-discrete Galerkin method invoking functional entropy variables, a generalization of classical entropy variables, and a new time integration scheme. We show that the resulting fully discrete scheme is unconditionally stable-in-energy, second-order time-accurate, and mass-conservative. We utilize isogeometric analysis for spatial discretization and verify the aforementioned properties by adopting the method of manufactured solutions and comparing coarse mesh solutions with overkill solutions. Various problems are simulated to show the capability of the method. Our methodology provides a means of constructing unconditionally stable numerical schemes for nonlinear non-convex hyperbolic systems of conservation laws.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.
Hero, Alfred O; Rajaratnam, Bala
2016-01-01
When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
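The screening step at the heart of correlation mining can be sketched in a few lines of NumPy; choosing the threshold rho in the sample-starved regime is exactly the question the framework above addresses and is not handled here.

```python
import numpy as np

def screen_correlations(X, rho):
    """List variable pairs whose sample correlation exceeds rho in magnitude.

    X: (n, p) data matrix with n samples and p variables; in the n << p regime
    the threshold rho controls the false-discovery behaviour of the screen.
    """
    R = np.corrcoef(X, rowvar=False)
    i, j = np.triu_indices_from(R, k=1)
    mask = np.abs(R[i, j]) > rho
    return list(zip(i[mask], j[mask], R[i, j][mask]))
```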
The Aristotle method: a new concept to evaluate quality of care based on complexity.
Lacour-Gayet, François; Clarke, David R
2005-06-01
Evaluation of quality of care is a duty of modern medical practice. A reliable method of quality evaluation, able to compare institutions fairly and inform a patient and his family of the potential risk of a procedure, is clearly needed. It is now well recognized that any method that purports to evaluate quality of care should include a case mix/risk stratification method. No adequate method was available until recently in pediatric cardiac surgery. The Aristotle method is a new concept for the evaluation of quality of care in congenital heart surgery based on the complexity of the surgical procedures. Involving a panel of expert surgeons, the project started in 1999 and included 50 pediatric surgeons from 23 countries. The basic score adjusts the complexity of a given procedure and is calculated as the sum of the potential for mortality, the potential for morbidity and the anticipated technical difficulty. The comprehensive score further adjusts the complexity according to the specific patient characteristics (anatomy, associated procedures, co-morbidity, etc.). The Aristotle method is original in that it introduces several new concepts: the calculated complexity is a constant for a given patient all over the world; complexity is an independent value and risk is a variable depending on the performance; and Performance = Complexity x Outcome. The Aristotle score is a good vector of communication between patients, doctors and insurance companies and may stimulate the quality and the organization of health care in our field and in others.
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple inputs such as MAPKs and CREB regulate multiple outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
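A simplified stand-in for this kind of PLS-plus-backward-elimination analysis, assuming scikit-learn; X (signalling time-course inputs), Y (IEG expression/phenotype outputs) and names are hypothetical placeholders, and variables are pruned by loading magnitude rather than by the authors' exact criterion.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def backward_pls(X, Y, names, n_components=2, min_vars=5):
    """Greedy backward elimination around a PLS regression.

    Repeatedly drop the input variable with the smallest summed |X-weight| and
    keep the subset with the best cross-validated R^2.
    """
    keep = list(range(X.shape[1]))
    best_score, best_subset = -np.inf, list(keep)
    while len(keep) >= max(min_vars, n_components):
        pls = PLSRegression(n_components=n_components)
        score = cross_val_score(pls, X[:, keep], Y, cv=5, scoring="r2").mean()
        if score > best_score:
            best_score, best_subset = score, list(keep)
        pls.fit(X[:, keep], Y)
        weakest = int(np.argmin(np.abs(pls.x_weights_).sum(axis=1)))
        keep.pop(weakest)
    return best_score, [names[i] for i in best_subset]
```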
The process and utility of classification and regression tree methodology in nursing research
Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda
2014-01-01
Aim This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Background Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Design Discussion paper. Data sources English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984–2013. Discussion Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Implications for Nursing Research Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Conclusion Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. PMID:24237048
The process and utility of classification and regression tree methodology in nursing research.
Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda
2014-06-01
This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. © 2013 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.
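For readers unfamiliar with the technique, a brief sketch of a classification tree on synthetic health-related data using scikit-learn; the variables and the interaction built into the outcome are invented for illustration and are not drawn from the paper.

```python
# Illustrative classification tree (CART) on synthetic data; variable names are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
n = 500
age = rng.integers(20, 90, n)
systolic_bp = rng.normal(130, 20, n)
smoker = rng.integers(0, 2, n)
# Outcome depends on an interaction (age x smoking) that an ordinary regression
# would miss unless the interaction term were specified explicitly.
risk = ((age > 65) & (smoker == 1)) | (systolic_bp > 160)
X = np.column_stack([age, systolic_bp, smoker])

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(X, risk)

# The fitted tree reads top-down like the inverted tree diagram the paper describes.
print(export_text(tree, feature_names=["age", "systolic_bp", "smoker"]))
```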
NASA Astrophysics Data System (ADS)
Prashanth, K. N.; Basavaiah, K.
2018-01-01
Two simple and sensitive extraction-free spectrophotometric methods are described for the determination of flunarizine dihydrochloride. The methods are based on the ion-pair complex formation between the nitrogenous compound flunarizine (FNZ), converted from flunarizine dihydrochloride (FNH), and the acidic dye phenol red (PR), in which experimental variables were circumvented. The first method (method A) is based on the formation of a yellow-colored ion-pair complex (1:1 drug:dye) between FNZ and PR in chloroform, which is measured at 415 nm. In the second method (method B), the formed drug-dye ion-pair complex is treated with ethanolic potassium hydroxide in an ethanolic medium, and the resulting base form of the dye is measured at 580 nm. The stoichiometry of the formed ion-pair complex between the drug and dye (1:1) is determined by Job's continuous variations method, and the stability constant of the complex is also calculated. These methods quantify FNZ over the concentration ranges 5.0-70.0 μg/mL in method A and 0.5-7.0 μg/mL in method B. The calculated molar absorptivities are 6.17 × 10³ and 5.5 × 10⁴ L·mol⁻¹·cm⁻¹ for method A and method B, respectively, with corresponding Sandell sensitivity values of 0.0655 and 0.0074 μg/cm². The methods are applied to the determination of FNZ in pure drug and human urine.
An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.
1994-01-01
Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and then specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods by applying classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated by the Reynolds analogy). In a preliminary verification, REMEL was compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The current method compares quite well with these more complex results, and its solutions compare very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable area flow, and a newly developed solution for a combined variable-area duct with friction. These comparisons suggest that the method may offer an alternative to traditional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.
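As one concrete example of the degenerate quasi-one-dimensional solutions mentioned above, a short sketch that inverts the isentropic area-Mach relation for a variable-area duct; this is standard compressible-flow theory and is not code from REMEL itself.

```python
# Invert the isentropic area-Mach relation A/A* = f(M) for a quasi-1D duct.
import numpy as np
from scipy.optimize import brentq

def area_ratio(M, gamma=1.4):
    """A/A* for isentropic flow of a calorically perfect gas."""
    return (1.0 / M) * ((2.0 / (gamma + 1.0)) *
                        (1.0 + 0.5 * (gamma - 1.0) * M**2)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))

def mach_from_area(A_ratio, supersonic=False, gamma=1.4):
    """Solve area_ratio(M) = A_ratio on the subsonic or supersonic branch."""
    lo, hi = (1.0 + 1e-9, 50.0) if supersonic else (1e-6, 1.0 - 1e-9)
    return brentq(lambda M: area_ratio(M, gamma) - A_ratio, lo, hi)

for ar in (1.5, 2.0, 5.0):
    print(f"A/A* = {ar}: subsonic M = {mach_from_area(ar):.3f}, "
          f"supersonic M = {mach_from_area(ar, supersonic=True):.3f}")
```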
New parameters in adaptive testing of ferromagnetic materials utilizing magnetic Barkhausen noise
NASA Astrophysics Data System (ADS)
Pal'a, Jozef; Ušák, Elemír
2016-03-01
A new method of magnetic Barkhausen noise (MBN) measurement and optimization of the measured data processing with respect to non-destructive evaluation of ferromagnetic materials was tested. Using this method, we tried to find out whether it is possible to enhance the sensitivity and stability of measurement results by replacing the traditional MBN parameter (root mean square) with a new parameter. In the tested method, a complex set of MBN data from minor hysteresis loops is measured. Afterward, the MBN data are collected into suitably designed matrices, and the MBN parameters that give the maximum sensitivity to the evaluated variable are sought. The method was verified on plastically deformed steel samples. It was shown that the proposed measuring method and measured data processing improve the sensitivity to the evaluated variable compared with measuring the traditional MBN parameter. Moreover, we found an MBN parameter that is highly resistant to changes of the applied field amplitude and at the same time is noticeably more sensitive to the evaluated variable.
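A simplified sketch of the parameter-search idea: compute several candidate features from each MBN burst and keep the one most correlated with the evaluated variable. The signals below are synthetic, and the feature set is an assumption rather than the parameters used in the paper.

```python
# Toy parameter search over Barkhausen-noise features; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
deformation = np.linspace(0, 10, 12)            # evaluated variable (e.g., plastic strain)
# One synthetic MBN burst per sample; amplitude statistics drift with deformation.
bursts = [rng.normal(0, 1 + 0.05 * d, 4096) * (1 + 0.3 * d * rng.random(4096))
          for d in deformation]

features = {
    "rms":      lambda x: np.sqrt(np.mean(x**2)),               # traditional parameter
    "peak":     lambda x: np.max(np.abs(x)),
    "kurtosis": lambda x: np.mean((x - x.mean())**4) / np.var(x)**2,
    "energy_above_2sigma": lambda x: np.sum(x[np.abs(x) > 2 * x.std()]**2),
}

for name, f in features.items():
    values = np.array([f(b) for b in bursts])
    r = np.corrcoef(values, deformation)[0, 1]
    print(f"{name:>20s}: |r| with evaluated variable = {abs(r):.2f}")
```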
multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows
NASA Astrophysics Data System (ADS)
Turnquist, Brian; Owkes, Mark
2017-11-01
Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph
2012-06-22
Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but also unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.
2012-01-01
Background: Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but also unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. Results: This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. Conclusions: The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding. PMID:22727013
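A small sketch of the redundant mass-balancing idea behind the strategy above: measured specific rates are reconciled by weighted least squares subject to elemental balances, and the size of the balance residual serves as a consistency check. The elemental matrix, measured rates, and variances below are invented for illustration and do not correspond to the strains or media of the study.

```python
# Weighted least-squares data reconciliation under elemental balances (E @ r = 0).
import numpy as np

# Columns: substrate uptake, O2 uptake, biomass, CO2, product (uptake rates negative).
E = np.array([
    [1.0,  0.0, 1.0, 1.0, 1.0],          # carbon balance (C-mol basis)
    [4.0, -4.0, 4.2, 0.0, 4.0],          # degree-of-reduction balance (assumed compositions)
])
r_meas = np.array([-1.00, -0.50, 0.45, 0.51, 0.04])              # measured specific rates
sigma = np.diag(np.array([0.05, 0.04, 0.05, 0.04, 0.01]) ** 2)   # measurement variances

eps = E @ r_meas                                       # balance residuals before reconciliation
P = E @ sigma @ E.T
r_hat = r_meas - sigma @ E.T @ np.linalg.solve(P, eps) # reconciled rates (closed balances)
h = float(eps @ np.linalg.solve(P, eps))               # chi-square-like consistency index

print("reconciled rates:", np.round(r_hat, 3))
print("balance residuals after reconciliation:", np.round(E @ r_hat, 6))
print("consistency index h:", round(h, 3))
```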
Missing data imputation: focusing on single imputation.
Zhang, Zhongheng
2016-01-01
Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and some useful information will be omitted from analysis. Therefore, many imputation methods have been developed to fill this gap. The present article focuses on single imputation. Imputations with mean, median and mode are simple but, like complete case analysis, can bias estimates of the mean and standard deviation. Furthermore, they ignore the relationships with other variables. Regression imputation can preserve the relationship between the variable with missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
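The article demonstrates these ideas in R; below is an equivalent Python sketch contrasting mean imputation with regression-based (iterative) imputation in scikit-learn, using a small synthetic data frame rather than the article's examples.

```python
# Mean imputation vs regression-based (iterative) imputation; synthetic data.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(0)
n = 200
age = rng.normal(50, 10, n)
sbp = 90 + 0.8 * age + rng.normal(0, 5, n)         # correlated with age
sbp_missing = sbp.copy()
sbp_missing[rng.random(n) < 0.3] = np.nan          # 30% missing at random

df = pd.DataFrame({"age": age, "sbp": sbp_missing})

mean_imp = SimpleImputer(strategy="mean").fit_transform(df)
reg_imp = IterativeImputer(random_state=0).fit_transform(df)

mask = df["sbp"].isna().to_numpy()
print("RMSE, mean imputation:      ", np.sqrt(np.mean((mean_imp[mask, 1] - sbp[mask]) ** 2)))
print("RMSE, regression imputation:", np.sqrt(np.mean((reg_imp[mask, 1] - sbp[mask]) ** 2)))
# Mean imputation ignores the age-sbp relationship and shrinks the variance;
# regression-based imputation preserves the relationship with the other variable.
```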
Student-Authored Case Studies as a Learning Tool in Physical Education Teacher Education
ERIC Educational Resources Information Center
Richards, K. Andrew; Hemphill, Michael A.; Templin, Thomas J.; Eubank, Andrew M.
2012-01-01
In order to prepare undergraduate students better for the realities of school life, instructors of some methods courses have started to use case studies for teaching. These cases are used to highlight the complexity and variability of the educational environment. This method of teaching, which has its roots in business, law, and medicine, has…
Resampling and Distribution of the Product Methods for Testing Indirect Effects in Complex Models
ERIC Educational Resources Information Center
Williams, Jason; MacKinnon, David P.
2008-01-01
Recent advances in testing mediation have found that certain resampling methods and tests based on the mathematical distribution of 2 normal random variables substantially outperform the traditional "z" test. However, these studies have primarily focused only on models with a single mediator and 2 component paths. To address this limitation, a…
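A compact illustration of one of the resampling approaches discussed, a percentile bootstrap of the indirect effect a*b in a single-mediator model, using simulated data; it is not the authors' simulation design, and the path coefficients are arbitrary.

```python
# Percentile bootstrap of the indirect effect a*b in a single-mediator model X -> M -> Y.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=n)
M = 0.5 * X + rng.normal(size=n)              # path a = 0.5
Y = 0.4 * M + 0.1 * X + rng.normal(size=n)    # path b = 0.4, direct effect c' = 0.1

def indirect(x, m, y):
    # OLS slopes via least squares: a from M ~ X, b from Y ~ X + M
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)               # resample cases with replacement
    boot[i] = indirect(X[idx], M[idx], Y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(X, M, Y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```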
Simple and practical approach for computing the ray Hessian matrix in geometrical optics.
Lin, Psang Dain
2018-02-01
A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.
Designing Better Scaffolding in Teaching Complex Systems with Graphical Simulations
NASA Astrophysics Data System (ADS)
Li, Na
Complex systems are an important topic in science education today, but they are usually difficult for secondary-level students to learn. Although graphic simulations have many advantages in teaching complex systems, scaffolding is a critical factor for effective learning. This dissertation study was conducted around two complementary research questions on scaffolding: (1) How can we chunk and sequence learning activities in teaching complex systems? (2) How can we help students make connections among system levels across learning activities (level bridging)? With a sample of 123 seventh-graders, this study employed a 3x2 experimental design that factored sequencing methods (independent variable 1; three levels) with level-bridging scaffolding (independent variable 2; two levels) and compared the effectiveness of each combination. The study measured two dependent variables: (1) knowledge integration (i.e., integrating and connecting content-specific normative concepts and providing coherent scientific explanations); (2) understanding of the deep causal structure (i.e., being able to grasp and transfer the causal knowledge of a complex system). The study used a computer-based simulation environment as the research platform to teach the ideal gas law as a system. The ideal gas law is an emergent chemical system that has three levels: (1) experiential macro level (EM) (e.g., an aerosol can explodes when it is thrown into the fire); (2) abstract macro level (AM) (i.e., the relationships among temperature, pressure and volume); (3) micro level (Mi) (i.e., molecular activity). The sequencing methods of these levels were manipulated by changing the order in which they were delivered with three possibilities: (1) EM-AM-Mi; (2) Mi-AM-EM; (3) AM-Mi-EM. The level-bridging scaffolding variable was manipulated on two aspects: (1) inserting inter-level questions among learning activities; (2) two simulations dynamically linked in the final learning activity. Addressing the first research question, the Experiential macro-Abstract macro-Micro (EM-AM-Mi) sequencing method, following the "concrete to abstract" principle, produced better knowledge integration while the Micro-Abstract macro-Experiential macro (Mi-AM-EM) sequencing method, congruent with the causal direction of the emergent system, produced better understanding of the deep causal structure only when level-bridging scaffolding was provided. The Abstract macro-Micro-Experiential macro (AM-Mi-EM) sequencing method produced worse performance in general, because it did not follow the "concrete to abstract" principle, nor did it align with the causal structure of the emergent system. As to the second research question, the results showed that level-bridging scaffolding was important for both knowledge integration and understanding of the causal structure in learning the ideal gas law system.
System and method for modeling and analyzing complex scenarios
Shevitz, Daniel Wolf
2013-04-09
An embodiment of the present invention includes a method for analyzing and solving a possibility tree. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least one variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
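A toy sketch of the branch-pruning idea described in the abstract: each node updates a tracked variable, and a branch is abandoned as soon as the variable leaves a tolerance band. The tree, node programs, and tolerance below are invented for illustration and are not taken from the patent.

```python
# Toy possibility-tree solver: prune a branch once its tracked variable is out of tolerance.

def solve(node, state, tol=(0.0, 10.0), path=()):
    """Depth-first evaluation; returns (path, final state) pairs for valid branches."""
    state = node["update"](state)                  # execute the node's programming
    path = path + (node["name"],)
    if not (tol[0] <= state <= tol[1]):            # variable out of tolerance:
        return []                                  # disable the rest of this branch
    children = node.get("children", [])
    if not children:
        return [(path, state)]                     # valid leaf solution
    solutions = []
    for child in children:
        solutions.extend(solve(child, state, tol, path))
    return solutions                               # aggregate valid tree solutions

tree = {"name": "root", "update": lambda s: s + 1, "children": [
    {"name": "A", "update": lambda s: s * 3, "children": [
        {"name": "A1", "update": lambda s: s + 4},   # (0+1)*3+4 = 7  -> valid
        {"name": "A2", "update": lambda s: s * 5},   # (0+1)*3*5 = 15 -> pruned
    ]},
    {"name": "B", "update": lambda s: s - 2},        # (0+1)-2 = -1   -> pruned
]}

for path, value in solve(tree, state=0):
    print(" -> ".join(path), "final value:", value)
```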
Rebuilding DEMATEL threshold value: an example of a food and beverage information system.
Hsieh, Yi-Fang; Lee, Yu-Cheng; Lin, Shao-Bin
2016-01-01
This study demonstrates how a decision-making trial and evaluation laboratory (DEMATEL) threshold value can be quickly and reasonably determined in the process of combining DEMATEL and decomposed theory of planned behavior (DTPB) models. Models are combined to identify the key factors of a complex problem. This paper presents a case study of a food and beverage information system as an example. The analysis of the example indicates that, given direct and indirect relationships among variables, if a traditional DTPB model only simulates the effects of the variables without considering that the variables will affect the original cause-and-effect relationships among them, then the original DTPB model variables cannot represent a complete relationship. For the food and beverage example, a DEMATEL method was employed to reconstruct a DTPB model and, more importantly, to calculate a reasonable DEMATEL threshold value for determining additional relationships of variables in the original DTPB model. This study is method-oriented, and the depth of investigation into any individual case is limited. Therefore, the methods proposed in various fields of study should ideally be used to identify deeper and more practical implications.
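A brief numerical sketch of the standard DEMATEL steps behind the discussion above: normalize the direct-relation matrix, compute the total-relation matrix, and derive a threshold for keeping relationships. The 4x4 influence matrix and the mean-plus-standard-deviation threshold rule are assumptions for illustration, not the case-study data or the paper's specific threshold construction.

```python
# Standard DEMATEL computation on a hypothetical 4x4 direct-influence matrix.
import numpy as np

A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [2, 1, 0, 3],
              [1, 2, 1, 0]], dtype=float)              # expert-rated direct influences (0-4 scale)

D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalized direct-relation matrix
T = D @ np.linalg.inv(np.eye(len(A)) - D)              # total-relation matrix T = D (I - D)^-1

threshold = T.mean() + T.std()      # one common rule; the paper derives its own threshold
R, C = T.sum(axis=1), T.sum(axis=0)
print("prominence (R + C):", np.round(R + C, 3))
print("relation   (R - C):", np.round(R - C, 3))
print("relationships retained above threshold:")
print((T > threshold).astype(int))
```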
Reconstructing the equilibrium Boltzmann distribution from well-tempered metadynamics.
Bonomi, M; Barducci, A; Parrinello, M
2009-08-01
Metadynamics is a widely used and successful method for reconstructing the free-energy surface of complex systems as a function of a small number of suitably chosen collective variables. This is achieved by biasing the dynamics of the system. The bias acting on the collective variables distorts the probability distribution of the other variables. Here we present a simple reweighting algorithm for recovering the unbiased probability distribution of any variable from a well-tempered metadynamics simulation. We show the efficiency of the reweighting procedure by reconstructing the distribution of the four backbone dihedral angles of alanine dipeptide from two- and even one-dimensional metadynamics simulations. 2009 Wiley Periodicals, Inc.
Assessing the accuracy and stability of variable selection ...
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively add/remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating dataset consists of the good/poor condition of n=1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p=212) of landscape features from the StreamCat dataset. Two types of RF models are compared: a full variable set model with all 212 predictors, and a reduced variable set model selected using a backwards elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors, and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substanti
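A condensed sketch of backwards elimination for a random forest classifier using out-of-bag accuracy, on synthetic data; the 20% drop fraction and stopping point are illustrative choices, not the procedure tuned on the StreamCat predictors.

```python
# Backwards elimination for a random forest using OOB accuracy; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=600, n_features=50, n_informative=8,
                           n_redundant=5, random_state=0)
features = list(range(X.shape[1]))
history = []

while len(features) >= 5:
    rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                random_state=0, n_jobs=-1)
    rf.fit(X[:, features], y)
    history.append((len(features), rf.oob_score_))
    keep = int(len(features) * 0.8)                  # drop the least important ~20% each round
    order = np.argsort(rf.feature_importances_)      # ascending importance
    features = [features[i] for i in order[-keep:]]  # retain the most important predictors

for n_vars, oob in history:
    print(f"{n_vars:3d} predictors -> OOB accuracy {oob:.3f}")
```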
NASA Astrophysics Data System (ADS)
Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana
2013-04-01
The sea ice extent in the Southern Ocean has increased since 1979, but the causes of this expansion have not been firmly identified. In particular, the contribution of internal variability and external forcing to this positive trend has not been fully established. In this region, the lack of observations and the overestimation of internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit the reality. Current GCM decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of the predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large amount of simulations required to test systematically different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, hereinafter called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with that of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. This way, we can work out a suitable methodology to assess the efficiency of the initialization procedures tested. It also allows us to determine the upper limit of improvement that can be expected if more sophisticated initialization methods are used in decadal prediction simulations and if models have an internal variability agreeing with the observed one. Furthermore, since pseudo-observations are available everywhere at any time step, we also analyse the differences between simulations initialized with a complete dataset of pseudo-observations and the ones for which pseudo-observation data are not assimilated everywhere. In a second step, simulations are performed in a realistic framework, i.e. through the use of actual available observations. The same data assimilation methods are tested in order to check if more sophisticated methods can improve the reliability and the accuracy of decadal prediction simulations, even if they are performed with models that overestimate the internal variability of the sea ice extent in the Southern Ocean.
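For readers unfamiliar with the assimilation schemes being compared, a minimal bootstrap particle filter on a scalar toy model is sketched below; it only illustrates the forecast/weighting/resampling cycle and is unrelated to LOVECLIM or to the sea-ice fields themselves. The model persistence and noise parameters are arbitrary.

```python
# Minimal bootstrap particle filter on a scalar AR(1) toy model (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_particles = 50, 500
phi, q, r = 0.9, 0.5, 0.8              # persistence, model noise, observation noise (assumed)

# Synthetic truth and observations
truth = np.zeros(n_steps)
for t in range(1, n_steps):
    truth[t] = phi * truth[t - 1] + rng.normal(0, q)
obs = truth + rng.normal(0, r, n_steps)

particles = rng.normal(0, 1, n_particles)
estimate = np.zeros(n_steps)
for t in range(n_steps):
    particles = phi * particles + rng.normal(0, q, n_particles)   # forecast step
    w = np.exp(-0.5 * ((obs[t] - particles) / r) ** 2) + 1e-12    # likelihood weights
    w /= w.sum()
    estimate[t] = np.sum(w * particles)                           # analysis (weighted mean)
    particles = particles[rng.choice(n_particles, n_particles, p=w)]   # resampling step

print("RMSE of filtered estimate vs truth:", np.sqrt(np.mean((estimate - truth) ** 2)))
```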
NASA Astrophysics Data System (ADS)
Schauberger, Bernhard; Rolinski, Susanne; Müller, Christoph
2016-12-01
Variability of crop yields is detrimental for food security. Under climate change its amplitude is likely to increase, thus it is essential to understand the underlying causes and mechanisms. Crop models are the primary tool to project future changes in crop yields under climate change. A systematic overview of drivers and mechanisms of crop yield variability (YV) can thus inform crop model development and facilitate improved understanding of climate change impacts on crop yields. Yet there is a vast body of literature on crop physiology and YV, which makes a prioritization of mechanisms for implementation in models challenging. Therefore this paper takes on a novel approach to systematically mine and organize existing knowledge from the literature. The aim is to identify important mechanisms lacking in models, which can help to set priorities in model improvement. We structure knowledge from the literature in a semi-quantitative network. This network consists of complex interactions between growing conditions, plant physiology and crop yield. We utilize the resulting network structure to assign relative importance to causes of YV and related plant physiological processes. As expected, our findings confirm existing knowledge, in particular on the dominant role of temperature and precipitation, but also highlight other important drivers of YV. More importantly, our method allows for identifying the relevant physiological processes that transmit variability in growing conditions to variability in yield. We can identify explicit targets for the improvement of crop models. The network can additionally guide model development by outlining complex interactions between processes and by easily retrieving quantitative information for each of the 350 interactions. We show the validity of our network method as a structured, consistent and scalable dictionary of literature. The method can easily be applied to many other research fields.
Novel Approach for Solving the Equation of Motion of a Simple Harmonic Oscillator. Classroom Notes
ERIC Educational Resources Information Center
Gauthier, N.
2004-01-01
An elementary method, based on the use of complex variables, is proposed for solving the equation of motion of a simple harmonic oscillator. The method is first applied to the equation of motion for an undamped oscillator and it is then extended to the more important case of a damped oscillator. It is finally shown that the method can readily be…
Measuring the surgical 'learning curve': methods, variables and competency.
Khan, Nuzhath; Abboudi, Hamid; Khan, Mohammed Shamim; Dasgupta, Prokar; Ahmed, Kamran
2014-03-01
To describe how learning curves are measured and what procedural variables are used to establish a 'learning curve' (LC). To assess whether LCs are a valuable measure of competency. A review of the surgical literature pertaining to LCs was conducted using the Medline and OVID databases. Variables should be fully defined and when possible, patient-specific variables should be used. Trainee's prior experience and level of supervision should be quantified; the case mix and complexity should ideally be constant. Logistic regression may be used to control for confounding variables. Ideally, a learning plateau should reach a predefined/expert-derived competency level, which should be fully defined. When the group splitting method is used, smaller cohorts should be used in order to narrow the range of the LC. Simulation technology and competence-based objective assessments may be used in training and assessment in LC studies. Measuring the surgical LC has potential benefits for patient safety and surgical education. However, standardisation in the methods and variables used to measure LCs is required. Confounding variables, such as participant's prior experience, case mix, difficulty of procedures and level of supervision, should be controlled. Competency and expert performance should be fully defined. © 2013 The Authors. BJU International © 2013 BJU International.
Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture
NASA Technical Reports Server (NTRS)
Gloersen, Per (Inventor)
2004-01-01
An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPC) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPC. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
Alternative Approaches to Evaluation in Empirical Microeconomics
ERIC Educational Resources Information Center
Blundell, Richard; Dias, Monica Costa
2009-01-01
This paper reviews some of the most popular policy evaluation methods in empirical microeconomics: social experiments, natural experiments, matching, instrumental variables, discontinuity design, and control functions. It discusses identification of traditionally used average parameters and more complex distributional parameters. The adequacy,…
On the sensitivity of complex, internally coupled systems
NASA Technical Reports Server (NTRS)
Sobieszczanskisobieski, Jaroslaw
1988-01-01
A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
Adaptive simplification of complex multiscale systems.
Chiavazzo, Eliodoro; Karlin, Ilya
2011-03-01
A fully adaptive methodology is developed for reducing the complexity of large dissipative systems. This represents a significant step toward extracting essential physical knowledge from complex systems, by addressing the challenging problem of a minimal number of variables needed to exactly capture the system dynamics. Accurate reduced description is achieved, by construction of a hierarchy of slow invariant manifolds, with an embarrassingly simple implementation in any dimension. The method is validated with the autoignition of the hydrogen-air mixture where a reduction to a cascade of slow invariant manifolds is observed.
Eticha, Tadele; Kahsay, Getu; Hailu, Teklebrhan; Gebretsadikan, Tesfamichael; Asefa, Fitsum; Gebretsadik, Hailekiros; Thangabalan, Boovizhikannan
2018-01-01
A simple extractive spectrophotometric technique has been developed and validated for the determination of miconazole nitrate in pure and pharmaceutical formulations. The method is based on the formation of a chloroform-soluble ion-pair complex between the drug and bromocresol green (BCG) dye in an acidic medium. The complex showed absorption maxima at 422 nm, and the system obeys Beer's law in the concentration range of 1-30 µg/mL with molar absorptivity of 2.285 × 10⁴ L/mol/cm. The composition of the complex was studied by Job's method of continuous variation, and the results revealed that the mole ratio of drug:BCG is 1:1. Full factorial design was used to optimize the effect of variable factors, and the method was validated based on the ICH guidelines. The method was applied for the determination of miconazole nitrate in real samples.
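A short sketch of Job's method of continuous variation as invoked above: absorbance is recorded for mixtures of constant total concentration, and the mole fraction at the maximum of the Job plot indicates the complex stoichiometry. The data below are simulated for a strongly bound 1:1 complex; they are not the miconazole-BCG measurements, and the concentration, absorptivity, and path length are assumed values.

```python
# Job's method of continuous variation, simulated for a strongly bound 1:1 complex.
import numpy as np

x = np.linspace(0.05, 0.95, 19)     # mole fraction of the drug in the mixture
C_total = 1.0e-4                    # constant total concentration (mol/L), assumed
eps, path = 2.0e4, 1.0              # molar absorptivity (L/mol/cm) and path length (cm), assumed

# For a strongly bound 1:1 complex the complex concentration is limited by the
# minority component, so absorbance ~ eps * l * min(x, 1-x) * C_total.
absorbance = eps * path * np.minimum(x, 1 - x) * C_total

x_max = x[np.argmax(absorbance)]
ratio = x_max / (1 - x_max)
print(f"Job-plot maximum at mole fraction {x_max:.2f} -> drug:dye ratio about {ratio:.1f}:1")
```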
Solving the Inverse-Square Problem with Complex Variables
ERIC Educational Resources Information Center
Gauthier, N.
2005-01-01
The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
Predicting radiotherapy outcomes using statistical learning techniques
NASA Astrophysics Data System (ADS)
El Naqa, Issam; Bradley, Jeffrey D.; Lindsay, Patricia E.; Hope, Andrew J.; Deasy, Joseph O.
2009-09-01
Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to data not seen before. In this work, several types of linear and nonlinear kernels to generate interaction terms and approximate the treatment-response function are evaluated. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal components analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data, in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model variables. These models have the capacity to predict on unseen data. Part of this work was first presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.
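A schematic of the modeling strategy described, a kernel SVM classifier for risk-group discrimination evaluated with leave-one-out cross-validation, run on synthetic features; the kernel choice, features, and endpoint are placeholders, not the institutional or RTOG datasets.

```python
# Kernel SVM risk-group classifier with leave-one-out evaluation; synthetic data.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A nonlinearly separable two-feature space stands in for real prognostic variables.
X, y = make_moons(n_samples=120, noise=0.25, random_state=0)

linear_model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
rbf_model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))

loo = LeaveOneOut()
for name, model in [("linear kernel", linear_model), ("RBF kernel", rbf_model)]:
    acc = cross_val_score(model, X, y, cv=loo).mean()
    print(f"{name}: leave-one-out accuracy = {acc:.3f}")
```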
HDMR methods to assess reliability in slope stability analyses
NASA Astrophysics Data System (ADS)
Kozubal, Janusz; Pula, Wojciech; Vessia, Giovanna
2014-05-01
Stability analyses of complex rock-soil deposits must be tackled by considering the complex structure of discontinuities within the rock mass and the embedded soil layers. These materials are characterized by a high variability in physical and mechanical properties. Thus, to calculate the slope safety factor in stability analyses, two issues must be taken into account: 1) the uncertainties related to the structural setting of the rock-slope mass and 2) the variability in mechanical properties of soils and rocks. High Dimensional Model Representation (HDMR) (Chowdhury et al. 2009; Chowdhury and Rao 2010) can be used to compute the reliability index for complex rock-soil slopes when numerous random variables with high coefficients of variation are considered. HDMR implements the inverse reliability analysis, meaning that the unknown design parameters are sought provided that prescribed reliability index values are attained. This approach uses implicit response functions according to the Response Surface Method (RSM). The simple RSM can be efficiently applied when fewer than four random variables are considered; as the number of variables increases, the efficiency of reliability index estimation decreases due to the great amount of calculations. Therefore, the HDMR method is used to improve the computational accuracy. In this study, sliding mechanisms in the Polish Flysch Carpathian Mountains have been studied by means of HDMR. The southern part of Poland, where the Carpathian Mountains are located, is characterized by a rather complicated sedimentary pattern of flysch rock-soil deposits that can be simplified into three main categories: (1) normal flysch, consisting of adjacent sandstone and shale beds of approximately equal thickness, (2) shale flysch, where shale beds are thicker than adjacent sandstone beds, and (3) sandstone flysch, where the opposite holds. Landslides occur in all flysch deposit types, thus some configurations of possible unstable settings (within fractured rock-soil masses) resulting in sliding mechanisms have been investigated in this study. The reliability index values obtained from the HDMR method have been compared with conventional approaches such as neural networks; the efficiency of HDMR is shown in the case studied. References: Chowdhury R., Rao B.N. and Prasad A.M. 2009. High-dimensional model representation for structural reliability analysis. Commun. Numer. Meth. Engng, 25: 301-337. Chowdhury R. and Rao B. 2010. Probabilistic Stability Assessment of Slopes Using High Dimensional Model Representation. Computers and Geotechnics, 37: 876-884.
An Equation-Free Reduced-Order Modeling Approach to Tropical Pacific Simulation
NASA Astrophysics Data System (ADS)
Wang, Ruiwen; Zhu, Jiang; Luo, Zhendong; Navon, I. M.
2009-03-01
The “equation-free” (EF) method is often used in complex, multi-scale problems. In such cases it is necessary to know the closed form of the required evolution equations for macroscopic variables within some applied fields. Conceptually such equations exist; however, they are not available in closed form. The EF method can bypass this difficulty. This method can obtain macroscopic information by implementing models at a microscopic level. Given an initial macroscopic variable, through lifting we can obtain the associated microscopic variable, which may be evolved using Direct Numerical Simulations (DNS); by restriction we can then obtain the necessary macroscopic information, and by projective integration the desired quantities. In this paper we apply the EF POD-assisted method to the reduced modeling of a large-scale upper ocean circulation in the tropical Pacific domain. The computation cost is reduced dramatically. Compared with the POD method, the method provided more accurate results and it did not require the availability of any explicit equations or the right-hand side (RHS) of the evolution equation.
Integrated geostatistics for modeling fluid contacts and shales in Prudhoe Bay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perez, G.; Chopra, A.K.; Severson, C.D.
1997-12-01
Geostatistics techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques is now available with differing underlying assumptions, complexity, and applications. This paper introduces a novel method of geostatistics to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir. The method integrates reservoir description and surveillance data within the same geostatistical framework. Surveillance logs and shale data are transformed to indicator variables. These variables are used to evaluate vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times and to develop variogram models. Conditional simulation techniques are used to generate multiple three-dimensional (3D) descriptions of gas and shales that provide a measure of uncertainty. These techniques capture the complex 3D distribution of gas-oil contacts through time. The authors compare results of the geostatistical method with conventional techniques as well as with infill wells drilled after the study. Predicted gas-oil contacts and shale distributions are in close agreement with gas-oil contacts observed at infill wells.
Modularity and the spread of perturbations in complex dynamical systems
NASA Astrophysics Data System (ADS)
Kolchinsky, Artemy; Gates, Alexander J.; Rocha, Luis M.
2015-12-01
We propose a method to decompose dynamical systems based on the idea that modules constrain the spread of perturbations. We find partitions of system variables that maximize "perturbation modularity," defined as the autocovariance of coarse-grained perturbed trajectories. The measure effectively separates the fast intramodular from the slow intermodular dynamics of perturbation spreading (in this respect, it is a generalization of the "Markov stability" method of network community detection). Our approach captures variation of modular organization across different system states, time scales, and in response to different kinds of perturbations: aspects of modularity which are all relevant to real-world dynamical systems. It offers a principled alternative to detecting communities in networks of statistical dependencies between system variables (e.g., "relevance networks" or "functional networks"). Using coupled logistic maps, we demonstrate that the method uncovers hierarchical modular organization planted in a system's coupling matrix. Additionally, in homogeneously coupled map lattices, it identifies the presence of self-organized modularity that depends on the initial state, dynamical parameters, and type of perturbations. Our approach offers a powerful tool for exploring the modular organization of complex dynamical systems.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
Variability of hand tremor in rest and in posture--a pilot study.
Rahimi, Fariborz; Bee, Carina; South, Angela; Debicki, Derek; Jog, Mandar
2011-01-01
Previous studies have demonstrated variability in the frequency and amplitude of tremor between subjects and between trials in both healthy individuals and those with disease states. However, to date, few studies have examined the composition of tremor. Efficacy of treatment for tremor using techniques such as Botulinum neurotoxin type A (BoNT A) injection may benefit from a better understanding of tremor variability, but more importantly, tremor composition. In the present study, we evaluated tremor variability and composition in 8 participants with either essential tremor or Parkinson disease tremor using kinematic recording methods. Our preliminary findings suggest that while individual patients may have more intra-trial and intra-task variability, overall, task effect was significant only for the amplitude of tremor. Composition of tremor varied among patients, and the data suggest that tremor composition is complex, involving multiple muscle groups. These results may support the value of kinematic assessment methods and the improved understanding of tremor composition in the management of tremor.
Rosen, G D
2006-06-01
Meta-analysis is a vague descriptor used to encompass very diverse methods of data collection analysis, ranging from simple averages to more complex statistical methods. Holo-analysis is a fully comprehensive statistical analysis of all available data and all available variables in a specified topic, with results expressed in a holistic factual empirical model. The objectives and applications of holo-analysis include software production for prediction of responses with confidence limits, translation of research conditions to praxis (field) circumstances, exposure of key missing variables, discovery of theoretically unpredictable variables and interactions, and planning future research. Holo-analyses are cited as examples of the effects on broiler feed intake and live weight gain of exogenous phytases, which account for 70% of variation in responses in terms of 20 highly significant chronological, dietary, environmental, genetic, managemental, and nutrient variables. Even better future accountancy of variation will be facilitated if and when authors of papers routinely provide key data for currently neglected variables, such as temperatures, complete feed formulations, and mortalities.
An improved partial least-squares regression method for Raman spectroscopy
NASA Astrophysics Data System (ADS)
Momenpour Tehran Monfared, Ali; Anis, Hanan
2017-10-01
It is known that the performance of partial least-squares (PLS) regression analysis can be improved using the backward variable selection method (BVSPLS). In this paper, we further improve the BVSPLS based on a novel selection mechanism. The proposed method is based on sorting the weighted regression coefficients, and then the importance of each variable of the sorted list is evaluated using the root mean square error of prediction (RMSEP) criterion in each iteration step. Our improved BVSPLS (IBVSPLS) method has been applied to leukemia and heparin data sets and led to an improvement in the limit of detection of Raman biosensing ranging from 10% to 43% compared to PLS. Our IBVSPLS was also compared to the jack-knifing (simpler) and genetic algorithm (more complex) methods. Our method was consistently better than the jack-knifing method and showed either a similar or a better performance compared to the genetic algorithm.
Li, Ziyi; Safo, Sandra E; Long, Qi
2017-07-11
Sparse principal component analysis (PCA) is a popular tool for dimensionality reduction, pattern recognition, and visualization of high dimensional data. It has been recognized that complex biological mechanisms occur through concerted relationships of multiple genes working in networks that are often represented by graphs. Recent work has shown that incorporating such biological information improves feature selection and prediction performance in regression analysis, but there has been limited work on extending this approach to PCA. In this article, we propose two new sparse PCA methods called Fused and Grouped sparse PCA that enable incorporation of prior biological information in variable selection. Our simulation studies suggest that, compared to existing sparse PCA methods, the proposed methods achieve higher sensitivity and specificity when the graph structure is correctly specified, and are fairly robust to misspecified graph structures. Application to a glioblastoma gene expression dataset identified pathways that are suggested in the literature to be related with glioblastoma. The proposed sparse PCA methods Fused and Grouped sparse PCA can effectively incorporate prior biological information in variable selection, leading to improved feature selection and more interpretable principal component loadings and potentially providing insights on molecular underpinnings of complex diseases.
Sayago, Ana; Asuero, Agustin G
2006-09-14
A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take into account the transformation used for linearization. The method may be considered a useful alternative to methods in which one variable is involved on both sides of the basic equation (i.e. those of Heller and Schwarzenbach, Likussar and Adsul and Ramanathan). Classical least squares leads in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are: it gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements, and the computation procedure yields the best value of log β11 and its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, with the mean quadratic error being at a minimum in both cases.
The Complex Action Recognition via the Correlated Topic Model
Tu, Hong-bin; Xia, Li-min; Wang, Zheng-wu
2014-01-01
Human complex action recognition is an important research area of action recognition. Among various obstacles to human complex action recognition, one of the most challenging is to deal with self-occlusion, where one body part occludes another one. This paper presents a new method of human complex action recognition, which is based on optical flow and the correlated topic model (CTM). Firstly, a Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable. Secondly, structure from motion (SFM) is used for reconstructing the missing data of point trajectories. Then, key frames are extracted based on motion features from the optical flow, and the ratios of width to height are extracted from the human silhouette. Finally, we use the correlated topic model (CTM) to classify actions. Experiments were performed on the KTH, Weizmann, and UIUC action datasets to test and evaluate the proposed method. The comparative experimental results showed that the proposed method was more effective than the compared methods. PMID:24574920
Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.
Zheng, Song
2015-09-01
In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered with parameters perturbation and delayed impulses. It is assumed that the considered complex-variable chaotic systems have bounded parametric uncertainties together with the state variables on the impulses related to the time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
Control of complex networks requires both structure and dynamics
NASA Astrophysics Data System (ADS)
Gates, Alexander J.; Rocha, Luis M.
2016-04-01
The study of network structure has uncovered signatures of the organization of complex systems. However, there is also a need to understand how to control them; for example, identifying strategies to revert a diseased cell to a healthy state, or a mature cell to a pluripotent state. Two recent methodologies suggest that the controllability of complex systems can be predicted solely from the graph of interactions between variables, without considering their dynamics: structural controllability and minimum dominating sets. We demonstrate that such structure-only methods fail to characterize controllability when dynamics are introduced. We study Boolean network ensembles of network motifs as well as three models of biochemical regulation: the segment polarity network in Drosophila melanogaster, the cell cycle of budding yeast Saccharomyces cerevisiae, and the floral organ arrangement in Arabidopsis thaliana. We demonstrate that structure-only methods both undershoot and overshoot the number of critical variables and misidentify which sets of variables best control the dynamics of these models, highlighting the importance of the actual system dynamics in determining control. Our analysis further shows that the logic of automata transition functions, namely how canalizing they are, plays an important role in the extent to which structure predicts dynamics.
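A tiny Boolean-network sketch of the point being made: whether clamping ("pinning") a node drives the network to a single attractor depends on the update logic, not only on the wiring. The three-node network and both rule sets below are invented for illustration and are not the biochemical models studied in the paper.

```python
# Same wiring, two different update logics: dynamics, not structure, decide controllability.
from itertools import product

def make_update(logic):
    def update(state, clamp=None):
        a, b, c = state
        if logic == "or":
            new = (a, int(a or c), int(a or b))      # canalizing OR rules
        else:
            new = (a, (a + c) % 2, (a + b) % 2)      # non-canalizing XOR rules
        if clamp is not None:                        # pin one node to a fixed value
            idx, val = clamp
            new = new[:idx] + (val,) + new[idx + 1:]
        return new
    return update

def attractors(update, clamp=None):
    found = set()
    for start in product((0, 1), repeat=3):          # exhaust all initial states
        seen, state = [], start
        while state not in seen:
            seen.append(state)
            state = update(state, clamp)
        cycle = seen[seen.index(state):]             # the attractor reached from this start
        found.add(tuple(sorted(cycle)))              # canonical form
    return found

for logic in ("or", "xor"):
    upd = make_update(logic)
    print(f"{logic.upper()} logic, free dynamics: {len(attractors(upd))} attractor(s); "
          f"node A pinned to 1: {len(attractors(upd, clamp=(0, 1)))} attractor(s)")
```

With OR logic, pinning node A collapses the network to a single attractor; with XOR logic on the same wiring it does not, which is the kind of structure-versus-dynamics gap the abstract describes.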
The Cramér-Rao Bounds and Sensor Selection for Nonlinear Systems with Uncertain Observations.
Wang, Zhiguo; Shen, Xiaojing; Wang, Ping; Zhu, Yunmin
2018-04-05
This paper considers the problems of the posterior Cramér-Rao bound and sensor selection for multi-sensor nonlinear systems with uncertain observations. In order to effectively overcome the difficulties caused by uncertainty, we investigate two methods to derive the posterior Cramér-Rao bound. The first method is based on the recursive formula of the Cramér-Rao bound and the Gaussian mixture model. Nevertheless, it needs to compute a complex integral based on the joint probability density function of the sensor measurements and the target state. The computation burden of this method is relatively high, especially in large sensor networks. Inspired by the idea of the expectation maximization algorithm, the second method is to introduce some 0-1 latent variables to deal with the Gaussian mixture model. Since the regularity condition of the posterior Cramér-Rao bound is not satisfied for the discrete uncertain system, we use some continuous variables to approximate the discrete latent variables. Then, a new Cramér-Rao bound can be achieved by a limiting process of the Cramér-Rao bound of the continuous system. It avoids the complex integral, which can reduce the computation burden. Based on the new posterior Cramér-Rao bound, the optimal solution of the sensor selection problem can be derived analytically. Thus, it can be used to deal with sensor selection in large-scale sensor networks. Two typical numerical examples verify the effectiveness of the proposed methods.
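For context, the recursive formula that the first method builds on is the standard posterior Cramér-Rao recursion for nonlinear filtering (after Tichavský et al.); it is reproduced here only for orientation and is not the paper's Gaussian-mixture derivation.

```latex
% Standard recursion for the posterior Fisher information J_k; the bound is
% E[(\hat{x}_k - x_k)(\hat{x}_k - x_k)^T] \succeq J_k^{-1}.
J_{k+1} = D_k^{22} - D_k^{21}\left(J_k + D_k^{11}\right)^{-1} D_k^{12},
\qquad
\begin{aligned}
D_k^{11} &= \mathbb{E}\!\left[-\Delta_{x_k}^{x_k}\log p(x_{k+1}\mid x_k)\right],\\
D_k^{12} &= \mathbb{E}\!\left[-\Delta_{x_k}^{x_{k+1}}\log p(x_{k+1}\mid x_k)\right] = \bigl(D_k^{21}\bigr)^{\mathsf T},\\
D_k^{22} &= \mathbb{E}\!\left[-\Delta_{x_{k+1}}^{x_{k+1}}\log p(x_{k+1}\mid x_k)\right]
           + \mathbb{E}\!\left[-\Delta_{x_{k+1}}^{x_{k+1}}\log p(z_{k+1}\mid x_{k+1})\right].
\end{aligned}
```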
Edge delamination in angle-ply composite laminates, part 5
NASA Technical Reports Server (NTRS)
Wang, S. S.
1981-01-01
A theoretical method was developed for describing the edge delamination stress intensity characteristics in angle-ply composite laminates. The method is based on the theory of anisotropic elasticity. The edge delamination problem is formulated using Lekhnitskii's complex-variable stress potentials and an especially developed eigenfunction expansion method. The method predicts exact orders of the three-dimensional stress singularity in a delamination crack tip region. With the aid of boundary collocation, the method predicts the complete stress and displacement fields in a finite-dimensional, delaminated composite. Fracture mechanics parameters such as the mixed-mode stress intensity factors and associated energy release rates for edge delamination can be calculated explicitly. Solutions are obtained for edge delaminated (theta/-theta/theta/-theta) angle-ply composites under uniform axial extension. Effects of delamination lengths, fiber orientations, lamination and geometric variables are studied.
NASA Technical Reports Server (NTRS)
Pikkujamsa, S. M.; Makikallio, T. H.; Sourander, L. B.; Raiha, I. J.; Puukka, P.; Skytta, J.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.
1999-01-01
BACKGROUND: New methods of R-R interval variability based on fractal scaling and nonlinear dynamics ("chaos theory") may give new insights into heart rate dynamics. The aims of this study were to (1) systematically characterize and quantify the effects of aging from early childhood to advanced age on 24-hour heart rate dynamics in healthy subjects; (2) compare age-related changes in conventional time- and frequency-domain measures with changes in newly derived measures based on fractal scaling and complexity (chaos) theory; and (3) further test the hypothesis that there is loss of complexity and altered fractal scaling of heart rate dynamics with advanced age. METHODS AND RESULTS: The relationship between age and cardiac interbeat (R-R) interval dynamics from childhood to senescence was studied in 114 healthy subjects (age range, 1 to 82 years) by measurement of the slope, beta, of the power-law regression line (log power-log frequency) of R-R interval variability (10(-4) to 10(-2) Hz), approximate entropy (ApEn), short-term (alpha(1)) and intermediate-term (alpha(2)) fractal scaling exponents obtained by detrended fluctuation analysis, and traditional time- and frequency-domain measures from 24-hour ECG recordings. Compared with young adults (<40 years old, n=29), children (<15 years old, n=27) showed similar complexity (ApEn) and fractal correlation properties (alpha(1), alpha(2), beta) of R-R interval dynamics despite lower spectral and time-domain measures. Progressive loss of complexity (decreased ApEn, r=-0.69, P<0.001) and alterations of long-term fractal-like heart rate behavior (increased alpha(2), r=0.63, decreased beta, r=-0.60, P<0.001 for both) were observed thereafter from middle age (40 to 60 years, n=29) to old age (>60 years, n=29). CONCLUSIONS: Cardiac interbeat interval dynamics change markedly from childhood to old age in healthy subjects. Children show complexity and fractal correlation properties of R-R interval time series comparable to those of young adults, despite lower overall heart rate variability. Healthy aging is associated with R-R interval dynamics showing higher regularity and altered fractal scaling consistent with a loss of complex variability.
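For readers unfamiliar with the fractal measures above, the sketch below outlines detrended fluctuation analysis, the algorithm behind the alpha(1) and alpha(2) exponents; the box sizes and the white-noise test series are illustrative assumptions, not the study's 24-hour recordings.

```python
# Minimal detrended fluctuation analysis (DFA) sketch for an R-R interval series.
import numpy as np

def dfa_exponent(rr, box_sizes):
    y = np.cumsum(rr - np.mean(rr))                 # integrated, mean-centred series
    fluctuations = []
    for n in box_sizes:
        n_boxes = len(y) // n
        f2 = []
        for i in range(n_boxes):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)            # local linear trend in each box
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # slope of log F(n) versus log n is the DFA scaling exponent
    slope, _ = np.polyfit(np.log(box_sizes), np.log(fluctuations), 1)
    return slope

rr = np.random.randn(5000)                          # placeholder series (white noise, alpha ~ 0.5)
print(dfa_exponent(rr, box_sizes=[4, 8, 16, 32, 64]))
```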
Yang, Mingjun; Huang, Jing; MacKerell, Alexander D
2015-06-09
Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.
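As a hedged illustration of the exchange step that any Hamiltonian replica exchange scheme (including HREST-BP) relies on, the following sketch shows the generic Metropolis swap test between two replicas at a common temperature; it is not the authors' implementation.

```python
# Generic Metropolis acceptance test for swapping configurations between two
# Hamiltonian replicas at a common inverse temperature beta.
import math
import random

def accept_swap(u_i, u_j, x_i, x_j, beta):
    """u_i, u_j: potential-energy functions of replicas i and j; x_i, x_j: configurations."""
    delta = beta * ((u_i(x_j) + u_j(x_i)) - (u_i(x_i) + u_j(x_j)))
    return delta <= 0.0 or random.random() < math.exp(-delta)
```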
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
NASA Astrophysics Data System (ADS)
Wu, Wei; Xu, An-Ding; Liu, Hong-Bin
2015-01-01
Climate data in gridded format are critical for understanding climate change and its impact on the eco-environment. The aim of the current study is to develop spatial databases for three climate variables (maximum and minimum temperatures, and relative humidity) over a large region with complex topography in southwestern China. Five widely used approaches including inverse distance weighting, ordinary kriging, universal kriging, co-kriging, and thin-plate smoothing spline were tested. Root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE) showed that thin-plate smoothing spline with latitude, longitude, and elevation outperformed the other models. Average RMSE, MAE, and MAPE of the best models were 1.16 °C, 0.74 °C, and 7.38 % for maximum temperature; 0.826 °C, 0.58 °C, and 6.41 % for minimum temperature; and 3.44, 2.28, and 3.21 % for relative humidity, respectively. Spatial datasets of annual and monthly climate variables with 1-km resolution covering the period 1961-2010 were then obtained using the best-performing methods. A comparative study showed that the current outcomes were in good agreement with public datasets. Based on the gridded datasets, changes in temperature variables were investigated across the study area. Future study might be needed to capture the uncertainty induced by environmental conditions through remote sensing and knowledge-based methods.
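A minimal sketch of one of the tested interpolators (inverse distance weighting) together with the RMSE/MAE/MAPE scores used for comparison is given below; the station coordinates and temperatures are random placeholders, not the study's observations.

```python
# Inverse distance weighting and the error metrics used to compare interpolators.
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    d = np.maximum(d, 1e-12)                       # avoid division by zero at stations
    w = 1.0 / d ** power
    return (w @ z_obs) / w.sum(axis=1)

def scores(obs, pred):
    err = pred - obs
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err / obs))) * 100.0
    return rmse, mae, mape

stations = np.random.rand(50, 2) * 100.0                    # assumed station coordinates (km)
tmax = 20.0 + 0.1 * stations[:, 0] + np.random.randn(50)    # assumed maximum temperatures (°C)
grid = np.random.rand(10, 2) * 100.0
print(idw(stations, tmax, grid)[:3])
```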
NASA Astrophysics Data System (ADS)
Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.
2018-03-01
Over the years, mask and wafer defect dispositioning has become an increasingly challenging and time-consuming task. With design rules getting smaller, OPC getting more complex, and scanner illumination taking on free-form shapes, the probability that a user will perform accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is decreasing. The critical challenges of mask defect metrology for small nodes (< 30 nm) were reviewed in [1]. While Critical Dimension (CD) variation measurement is still the method of choice for determining a mask defect's future impact on the wafer, the high complexity of OPCs combined with high variability in pattern shapes poses a challenge for any automated CD variation measurement method. In this study, a novel approach for measurement generalization is presented. CD variation assessment performance is evaluated on multiple different complex shape patterns, and is benchmarked against an existing qualified measurement methodology.
Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques
2012-01-01
Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices that support activities in environments where work is complex, characterized by the interdependence among a large number of variables, understanding how work is done not only takes on even greater importance but also becomes a more difficult task. This study therefore aims to present a method for modeling work in complex systems, one that improves knowledge about the way activities are performed when those activities do not simply unfold by following procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a method that yields a detailed and accurate view of how people perform their tasks, so that information systems can be applied to support work in organizations.
NASA Astrophysics Data System (ADS)
Goodwell, Allison E.; Kumar, Praveen
2017-07-01
Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
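For orientation, the bookkeeping relations that any such partitioning must satisfy are summarized below; the paper's Rescaled Redundancy Rs takes the place of the redundancy term R.

```latex
% Information partitioning: total and single-source mutual information decompose into
% redundant (R), unique (U_1, U_2), and synergistic (S) components.
\begin{aligned}
I(X_1, X_2; Y) &= R + U_1 + U_2 + S,\\
I(X_1; Y) &= R + U_1, \qquad I(X_2; Y) = R + U_2 .
\end{aligned}
```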
Power of data mining methods to detect genetic associations and interactions.
Molinaro, Annette M; Carriero, Nicholas; Bjornson, Robert; Hartge, Patricia; Rothman, Nathaniel; Chatterjee, Nilanjan
2011-01-01
Genetic association studies, thus far, have focused on the analysis of individual main effects of SNP markers. Nonetheless, there is a clear need for modeling epistasis or gene-gene interactions to better understand the biologic basis of existing associations. Tree-based methods have been widely studied as tools for building prediction models based on complex variable interactions. An understanding of the power of such methods for the discovery of genetic associations in the presence of complex interactions is of great importance. Here, we systematically evaluate the power of three leading algorithms: random forests (RF), Monte Carlo logic regression (MCLR), and multifactor dimensionality reduction (MDR). We use the algorithm-specific variable importance measures (VIMs) as statistics and employ permutation-based resampling to generate the null distribution and associated p values. The power of the three algorithms is assessed via simulation studies. Additionally, in a data analysis, we evaluate the associations between individual SNPs in pro-inflammatory and immunoregulatory genes and the risk of non-Hodgkin lymphoma. The power of RF is highest in all simulation models, that of MCLR is similar to RF in half of them, and that of MDR is consistently the lowest. Our study indicates that the power of RF VIMs is most reliable. However, in addition to tuning parameters, the power of RF is notably influenced by the type of variable (continuous vs. categorical) and the chosen VIM. Copyright © 2011 S. Karger AG, Basel.
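The permutation-based resampling described above can be sketched as follows for the random-forest case; the scikit-learn estimator, the simulated SNP matrix and the number of permutations are illustrative assumptions, not the study's implementation.

```python
# Permutation null distribution for a random-forest variable importance measure (VIM):
# the observed importances are compared against importances obtained after permuting
# the phenotype labels, giving a per-SNP p-value.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
snps = rng.integers(0, 3, size=(200, 20))            # 200 subjects x 20 SNPs (0/1/2 genotypes)
y = (snps[:, 0] + snps[:, 1] + rng.normal(size=200) > 2).astype(int)

def importances(x, labels):
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(x, labels)
    return rf.feature_importances_

observed = importances(snps, y)
null = np.array([importances(snps, rng.permutation(y)) for _ in range(100)])
p_values = (null >= observed).mean(axis=0)            # one-sided permutation p-value per SNP
print(p_values[:5])
```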
Optimizing structure of complex technical system by heterogeneous vector criterion in interval form
NASA Astrophysics Data System (ADS)
Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.
2018-05-01
The article examines the methods of development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, in the absence of sufficient knowledge of the parameters and variables for optimizing this structure. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the combined use of methods of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form on the basis of the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values. These membership functions show the degree of proximity of the realization of the designed system to the efficient or Pareto-optimal variants. The study analyzes the example of choosing the optimal variant for a complex system using heterogeneous quality criteria.
Umari, Amjad M.J.; Gorelick, Steven M.
1986-01-01
In the numerical modeling of groundwater solute transport, explicit solutions may be obtained for the concentration field at any future time without computing concentrations at intermediate times. The spatial variables are discretized and time is left continuous in the governing differential equation. These semianalytical solutions have been presented in the literature and involve the eigensystem of a coefficient matrix. This eigensystem may be complex (i.e., have imaginary components) due to the asymmetry created by the advection term in the governing advection-dispersion equation. Previous investigators have either used complex arithmetic to represent a complex eigensystem or chosen large dispersivity values for which the imaginary components of the complex eigenvalues may be ignored without significant error. It is shown here that the error due to ignoring the imaginary components of complex eigenvalues is large for small dispersivity values. A new algorithm that represents the complex eigensystem by converting it to a real eigensystem is presented. The method requires only real arithmetic.
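The idea of trading complex arithmetic for real arithmetic can be illustrated, though not with the authors' algorithm, via the real Schur form, in which a complex-conjugate eigenpair of the advection-dispersion coefficient matrix is represented by a real 2x2 block; the small test matrix below is an assumption for demonstration only.

```python
# exp(At) evaluated entirely in real arithmetic via the real Schur form, cross-checked
# against the complex eigendecomposition route.
import numpy as np
from scipy.linalg import schur, expm

A = np.array([[-1.0, -4.0],
              [ 2.0, -1.0]])                  # real matrix with complex eigenvalues -1 +/- 2.83i

T, Z = schur(A, output='real')                # A = Z T Z^T with real quasi-triangular T
t = 0.5
real_arith = Z @ expm(T * t) @ Z.T            # only real arithmetic used here

w, V = np.linalg.eig(A)                       # complex eigensystem route for comparison
complex_arith = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real
print(np.allclose(real_arith, complex_arith))  # True
```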
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shuai; Xiong, Lihua; Li, Hong-Yi
2015-05-26
Hydrological simulations to delineate the impacts of climate variability and human activities are subjected to uncertainties related to both parameter and structure of the hydrological models. To analyze the impact of these uncertainties on the model performance and to yield more reliable simulation results, a global calibration and multimodel combination method that integrates the Shuffled Complex Evolution Metropolis (SCEM) and Bayesian Model Averaging (BMA) of four monthly water balance models was proposed. The method was applied to the Weihe River Basin (WRB), the largest tributary of the Yellow River, to determine the contribution of climate variability and human activities to runoff changes. The change point, which was used to determine the baseline period (1956-1990) and human-impacted period (1991-2009), was derived using both cumulative curve and Pettitt’s test. Results show that the combination method from SCEM provides more skillful deterministic predictions than the best calibrated individual model, resulting in the smallest uncertainty interval of runoff changes attributed to climate variability and human activities. This combination methodology provides a practical and flexible tool for attribution of runoff changes to climate variability and human activities by hydrological models.
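For orientation, the BMA combination applied to the four water balance models has the generic form below; the notation is standard and not specific to this study.

```latex
% Bayesian Model Averaging: the combined prediction is a posterior-weighted mixture of
% the K individual model predictions (K = 4 monthly water balance models here).
p\!\left(y \mid D\right) = \sum_{k=1}^{K} w_k\, p_k\!\left(y \mid M_k, D\right),
\qquad \sum_{k=1}^{K} w_k = 1, \quad w_k \ge 0 .
```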
Extending Quantum Chemistry of Bound States to Electronic Resonances
NASA Astrophysics Data System (ADS)
Jagau, Thomas-C.; Bravaya, Ksenia B.; Krylov, Anna I.
2017-05-01
Electronic resonances are metastable states with finite lifetime embedded in the ionization or detachment continuum. They are ubiquitous in chemistry, physics, and biology. Resonances play a central role in processes as diverse as DNA radiolysis, plasmonic catalysis, and attosecond spectroscopy. This review describes novel equation-of-motion coupled-cluster (EOM-CC) methods designed to treat resonances and bound states on an equal footing. Built on complex-variable techniques such as complex scaling and complex absorbing potentials that allow resonances to be associated with a single eigenstate of the molecular Hamiltonian rather than several continuum eigenstates, these methods extend electronic-structure tools developed for bound states to electronic resonances. Selected examples emphasize the formal advantages as well as the numerical accuracy of EOM-CC in the treatment of electronic resonances. Connections to experimental observables such as spectra and cross sections, as well as practical aspects of implementing complex-valued approaches, are also discussed.
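For orientation, the textbook relations behind the complex-variable techniques mentioned here are summarized below; the review's EOM-CC formulations build on these but differ in detail.

```latex
% Complex scaling of coordinates, or addition of a complex absorbing potential (CAP),
% turns a resonance into a single discrete complex eigenvalue whose imaginary part
% gives the decay width Gamma.
H(\theta) = \hat T e^{-2i\theta} + \hat V\!\left(r e^{i\theta}\right), \qquad
H(\eta) = \hat H - i\eta \hat W, \qquad
E_{\mathrm{res}} = E_R - \tfrac{i}{2}\Gamma .
```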
Collection Evaluation for Interdisciplinary Fields: A Comprehensive Approach.
ERIC Educational Resources Information Center
Dobson, Cynthia; And Others
1996-01-01
Collection development for interdisciplinary areas is more complex than for traditionally well-defined disciplines, so new evaluation methods are needed. This article identifies variables in interdisciplinary fields and presents a model of their typical information components. Traditional use-centered and materials-centered evaluation methods…
Analytical close-form solutions to the elastic fields of solids with dislocations and surface stress
NASA Astrophysics Data System (ADS)
Ye, Wei; Paliwal, Bhasker; Ougazzaden, Abdallah; Cherkaoui, Mohammed
2013-07-01
The concept of eigenstrain is adopted to derive a general analytical framework to solve the elastic field for 3D anisotropic solids with general defects by considering the surface stress. The formulation shows that the elastic constants and geometrical features of the surface play an important role in determining the elastic fields of the solid. As an application, the analytical closed-form solutions to the stress fields of an infinite isotropic circular nanowire are obtained. The stress fields are compared with the classical solutions and those of the complex variable method. The stress fields from this work demonstrate the impact of the surface stress when the size of the nanowire shrinks, an impact that becomes negligible at the macroscopic scale. Compared with the power series solutions of the complex variable method, the analytical solutions in this work provide a better platform and are more flexible in various applications. More importantly, the proposed analytical framework profoundly improves the study of general 3D anisotropic materials with surface effects.
Ectopic beats in approximate entropy and sample entropy-based HRV assessment
NASA Astrophysics Data System (ADS)
Singh, Butta; Singh, Dilbag; Jaryal, A. K.; Deepak, K. K.
2012-05-01
Approximate entropy (ApEn) and sample entropy (SampEn) are promising techniques for extracting complex characteristics of cardiovascular variability. Ectopic beats, originating from other than the normal site, are artefacts that contribute a serious limitation to heart rate variability (HRV) analysis. Approaches such as deletion and interpolation are currently in use to eliminate the bias produced by ectopic beats. In this study, normal R-R interval time series of 10 healthy and 10 acute myocardial infarction (AMI) patients were analysed by inserting artificial ectopic beats. Then the effects of ectopic beat editing by deletion, degree-zero and degree-one interpolation on ApEn and SampEn have been assessed. Ectopic beat addition (even 2%) led to reduced complexity, resulting in decreased ApEn and SampEn of both healthy and AMI patient data. This reduction has been found to be dependent on the level of ectopic beats. Editing of ectopic beats by the degree-one interpolation method is found to be superior to the other methods.
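A minimal sample entropy sketch of the kind used here is given below; the embedding dimension m = 2 and tolerance r = 0.2 SD follow common HRV practice but are assumptions, as is the placeholder R-R series.

```python
# Sample entropy: SampEn = -ln(A/B), where B counts template matches of length m and
# A counts matches of length m+1 within tolerance r (Chebyshev distance).
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rr = np.random.randn(1000)          # placeholder R-R series
print(sample_entropy(rr))
```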
Gasche, Loïc; Mahévas, Stéphanie; Marchal, Paul
2013-01-01
Ecosystems are usually complex, nonlinear and strongly influenced by poorly known environmental variables. Among these systems, marine ecosystems have high uncertainties: marine populations in general are known to exhibit large levels of natural variability and the intensity of fishing efforts can change rapidly. These uncertainties are a source of risks that threaten the sustainability of both fish populations and fishing fleets targeting them. Appropriate management measures have to be found in order to reduce these risks and decrease sensitivity to uncertainties. Methods have been developed within decision theory that aim at allowing decision making under severe uncertainty. One of these methods is the information-gap decision theory. The info-gap method has started to permeate ecological modelling, with recent applications to conservation. However, these practical applications have so far been restricted to simple models with analytical solutions. Here we implement a deterministic approach based on decision theory in a complex model of the Eastern English Channel. Using the ISIS-Fish modelling platform, we model populations of sole and plaice in this area. We test a wide range of values for ecosystem, fleet and management parameters. From these simulations, we identify management rules controlling fish harvesting that allow reaching management goals recommended by ICES (International Council for the Exploration of the Sea) working groups while providing the highest robustness to uncertainties on ecosystem parameters. PMID:24204873
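For orientation, the info-gap robustness function that this type of analysis evaluates has the generic form below; the notation is standard, with the performance function R and critical level r_c standing in for the study's management goals.

```latex
% Info-gap robustness of a management rule q: the largest uncertainty horizon alpha for
% which performance stays above the critical level r_c for every parameterisation u in
% the uncertainty set U(alpha).
\hat{\alpha}(q, r_c) = \max \left\{ \alpha \ge 0 :
  \min_{u \in\, \mathcal{U}(\alpha)} R(q, u) \ge r_c \right\}.
```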
Climate Change Impacts and Vulnerability Assessment in Industrial Complexes
NASA Astrophysics Data System (ADS)
Lee, H. J.; Lee, D. K.
2016-12-01
Climate change has recently caused frequent natural disasters, such as floods, droughts, and heat waves. Such disasters have also increased industrial damage, and climate change adaptation policies must be established to reduce it. Accurate vulnerability assessment is essential for establishing such policies. Thus, this study aims at establishing a new index to assess the vulnerability level of industrial complexes. Most vulnerability indices have been developed with subjective approaches, such as the Delphi survey and the Analytic Hierarchy Process (AHP). Subjective approaches rely on the knowledge of a few experts, which undermines the reliability of the indices. To alleviate the problem, we have designed a vulnerability index incorporating objective approaches. We have investigated 42 industrial complex sites in the Republic of Korea (ROK). To calculate the weights of the variables, we used the entropy method as an objective method, integrated with the Delphi survey as a subjective method. Finally, we found that our method, integrating both the subjective and the objective approach, could generate workable vulnerability assessments. The integration of the entropy method enables us to assess the vulnerability objectively. Our method will be useful for establishing climate change adaptation policies by reducing the uncertainties of methods based on subjective approaches alone.
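The entropy weighting step can be sketched as follows; the 42-by-6 indicator matrix is a random placeholder standing in for the surveyed industrial-complex indicators.

```python
# Entropy weight method: indicator weights follow from the information entropy of the
# column-normalised indicator matrix (lower entropy -> more discriminating -> larger weight).
import numpy as np

def entropy_weights(indicator_matrix):
    x = np.asarray(indicator_matrix, dtype=float)
    p = x / x.sum(axis=0, keepdims=True)              # column-normalised proportions
    k = 1.0 / np.log(x.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    entropy = -k * plogp.sum(axis=0)                   # e_j in [0, 1]
    divergence = 1.0 - entropy
    return divergence / divergence.sum()               # weights sum to 1

sites = np.random.rand(42, 6) + 0.01                   # 42 sites x 6 indicators (placeholder)
print(entropy_weights(sites))
```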
Optimal control in microgrid using multi-agent reinforcement learning.
Li, Fu-Dong; Wu, Min; He, Yong; Chen, Xin
2012-11-01
This paper presents an improved reinforcement learning method to minimize electricity costs on the premise of satisfying the power balance and generation limits of units in a microgrid in grid-connected mode. Firstly, the microgrid control requirements are analyzed and the objective function of optimal control for the microgrid is proposed. Then, a state variable, "Average Electricity Price Trend", which expresses the most probable transitions of the system, is developed so as to reduce the complexity and randomness of the microgrid, and a multi-agent architecture including agents, state variables, action variables and a reward function is formulated. Furthermore, dynamic hierarchical reinforcement learning, based on the change rate of the key state variable, is established to carry out optimal policy exploration. The analysis shows that the proposed method is beneficial for handling the "curse of dimensionality" problem and speeding up learning in an unknown large-scale world. Finally, the simulation results under JADE (Java Agent Development Framework) demonstrate the validity of the presented method in optimal control for a microgrid in grid-connected mode. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
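As a hedged illustration of the reinforcement-learning machinery such a controller builds on, the sketch below shows a generic tabular Q-learning update with an epsilon-greedy policy; the states, actions and reward are placeholders, not the paper's hierarchical formulation.

```python
# Generic tabular Q-learning update; the dispatch actions are assumed for illustration.
import random
from collections import defaultdict

q = defaultdict(float)                      # Q(s, a) table
alpha, gamma, epsilon = 0.1, 0.95, 0.1
actions = ["charge", "discharge", "idle"]   # assumed unit-dispatch actions

def choose_action(state):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
```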
Protein complex purification from Thermoplasma acidophilum using a phage display library.
Hubert, Agnes; Mitani, Yasuo; Tamura, Tomohiro; Boicu, Marius; Nagy, István
2014-03-01
We developed a novel protein complex isolation method using a single-chain variable fragment (scFv) based phage display library in a two-step purification procedure. We adapted the antibody-based phage display technology which has been developed for single target proteins to a protein mixture containing about 300 proteins, mostly subunits of Thermoplasma acidophilum complexes. T. acidophilum protein specific phages were selected and corresponding scFvs were expressed in Escherichia coli. E. coli cell lysate containing the expressed His-tagged scFv specific against one antigen protein and T. acidophilum crude cell lysate containing intact target protein complexes were mixed, incubated and subjected to protein purification using affinity and size exclusion chromatography steps. This method was confirmed to isolate intact particles of thermosome and proteasome suitable for electron microscopy analysis and provides a novel protein complex isolation strategy applicable to organisms where no genetic tools are available. Copyright © 2013 Elsevier B.V. All rights reserved.
Nonlinear dynamics of cardiovascular ageing
Shiogai, Y.; Stefanovska, A.; McClintock, P.V.E.
2010-01-01
The application of methods drawn from nonlinear and stochastic dynamics to the analysis of cardiovascular time series is reviewed, with particular reference to the identification of changes associated with ageing. The natural variability of the heart rate (HRV) is considered in detail, including the respiratory sinus arrhythmia (RSA) corresponding to modulation of the instantaneous cardiac frequency by the rhythm of respiration. HRV has been intensively studied using traditional spectral analyses, e.g. by Fourier transform or autoregressive methods, and, because of its complexity, has been used as a paradigm for testing several proposed new methods of complexity analysis. These methods are reviewed. The application of time–frequency methods to HRV is considered, including in particular the wavelet transform which can resolve the time-dependent spectral content of HRV. Attention is focused on the cardio-respiratory interaction by introduction of the respiratory frequency variability signal (RFV), which can be acquired simultaneously with HRV by use of a respiratory effort transducer. Current methods for the analysis of interacting oscillators are reviewed and applied to cardio-respiratory data, including those for the quantification of synchronization and direction of coupling. These reveal the effect of ageing on the cardio-respiratory interaction through changes in the mutual modulation of the instantaneous cardiac and respiratory frequencies. Analyses of blood flow signals recorded with laser Doppler flowmetry are reviewed and related to the current understanding of how endothelial-dependent oscillations evolve with age: the inner lining of the vessels (the endothelium) is shown to be of crucial importance to the emerging picture. It is concluded that analyses of the complex and nonlinear dynamics of the cardiovascular system can illuminate the mechanisms of blood circulation, and that the heart, the lungs and the vascular system function as a single entity in dynamical terms. Clear evidence is found for dynamical ageing. PMID:20396667
Nonlinear dynamics of cardiovascular ageing
NASA Astrophysics Data System (ADS)
Shiogai, Y.; Stefanovska, A.; McClintock, P. V. E.
2010-03-01
The application of methods drawn from nonlinear and stochastic dynamics to the analysis of cardiovascular time series is reviewed, with particular reference to the identification of changes associated with ageing. The natural variability of the heart rate (HRV) is considered in detail, including the respiratory sinus arrhythmia (RSA) corresponding to modulation of the instantaneous cardiac frequency by the rhythm of respiration. HRV has been intensively studied using traditional spectral analyses, e.g. by Fourier transform or autoregressive methods, and, because of its complexity, has been used as a paradigm for testing several proposed new methods of complexity analysis. These methods are reviewed. The application of time-frequency methods to HRV is considered, including in particular the wavelet transform which can resolve the time-dependent spectral content of HRV. Attention is focused on the cardio-respiratory interaction by introduction of the respiratory frequency variability signal (RFV), which can be acquired simultaneously with HRV by use of a respiratory effort transducer. Current methods for the analysis of interacting oscillators are reviewed and applied to cardio-respiratory data, including those for the quantification of synchronization and direction of coupling. These reveal the effect of ageing on the cardio-respiratory interaction through changes in the mutual modulation of the instantaneous cardiac and respiratory frequencies. Analyses of blood flow signals recorded with laser Doppler flowmetry are reviewed and related to the current understanding of how endothelial-dependent oscillations evolve with age: the inner lining of the vessels (the endothelium) is shown to be of crucial importance to the emerging picture. It is concluded that analyses of the complex and nonlinear dynamics of the cardiovascular system can illuminate the mechanisms of blood circulation, and that the heart, the lungs and the vascular system function as a single entity in dynamical terms. Clear evidence is found for dynamical ageing.
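For orientation, the traditional spectral HRV step that the review contrasts with the nonlinear measures can be sketched as follows; the resampling rate, band limits and placeholder R-R series are common conventions and assumptions, not data from the review.

```python
# Welch power spectrum of an evenly resampled R-R series and band powers in the
# standard LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) bands.
import numpy as np
from scipy.signal import welch

fs = 4.0                                            # resampling frequency (Hz), common choice
rr = 0.8 + 0.05 * np.random.randn(1200)             # placeholder resampled R-R series (s)
f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=256)

def band_power(f, pxx, lo, hi):
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])

lf = band_power(f, pxx, 0.04, 0.15)
hf = band_power(f, pxx, 0.15, 0.40)
print(f"LF/HF ratio: {lf / hf:.2f}")
```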
Dusing, Stacey C; Izzo, Theresa A.; Thacker, Leroy R.; Galloway, James C
2014-01-01
Background and Aims Postural control differs between infants born preterm and full term at 1–3 weeks of age. It is unclear if differences persist or alter the development of early behaviors. The aim of this longitudinal study was to compare changes in postural control variability during development of head control and reaching in infants born preterm and full term. Methods Eighteen infants born preterm (mean gestational age 28.3±3.1 weeks) were included in this study and compared to existing data from 22 infants born full term. Postural variability was assessed longitudinally using root mean squared displacement and approximate entropy of the center of pressure displacement from birth to 6 months as measures of the magnitude of the variability and complexity of postural control. Behavioral coding was used to quantify development of head control and reaching. Results Group differences were identified in postural complexity during the development of head control and reaching. Infants born preterm used more repetitive and less adaptive postural control strategies than infants born full term. Both groups changed their postural complexity utilized during the development of head control and reaching. Discussion Early postural complexity was decreased in infants born preterm, compared to infants born full term. Commonly used clinical assessments did not identify these early differences in postural control. Altered postural control in infants born preterm influenced ongoing skill development in the first six months of life. PMID:24485170
NASA Technical Reports Server (NTRS)
Schoenauer, W.; Daeubler, H. G.; Glotz, G.; Gruening, J.
1986-01-01
An implicit difference procedure for the solution of equations for a chemically reacting hypersonic boundary layer is described. Difference forms of arbitrary error order in the x and y coordinate plane were used to derive estimates for discretization error. Computational complexity and time were minimized by the use of this difference method and the iteration of the nonlinear boundary layer equations was regulated by discretization error. Velocity and temperature profiles are presented for Mach 20.14 and Mach 18.5; variables are velocity profiles, temperature profiles, mass flow factor, Stanton number, and friction drag coefficient; three figures include numeric data.
Lytwak, Lauren A; Stanley, Julie M; Mejía, Michelle L; Holliday, Bradley J
2010-09-07
A bromo tricarbonyl rhenium(I) complex with a thiophene-functionalized bis(pyrazolyl) pyridine ligand (L), ReBr(L)(CO)₃ (1), has been synthesized and characterized by variable-temperature and COSY 2-D ¹H NMR spectroscopy, single-crystal X-ray diffraction, and photophysical methods. Complex 1 is highly luminescent in both solution and solid-state, consistent with phosphorescence from an emissive ³MLCT excited state with an additional contribution from a LC ³(π→π*) transition. The single-crystal X-ray diffraction structure of the title ligand is also reported.
NASA Astrophysics Data System (ADS)
Tirandaz, Hamed; Karami-Mollaee, Ali
2018-06-01
Chaotic systems demonstrate complex behaviour in their state variables and parameters, which poses challenges for control and synchronisation. This paper presents a new synchronisation scheme based on the integral sliding mode control (ISMC) method for a class of complex chaotic systems with complex unknown parameters. Synchronisation between corresponding states of a class of complex chaotic systems, and convergence of the errors of the system parameters to zero, are studied. The designed feedback control vector and complex unknown parameter vector are analytically achieved based on the Lyapunov stability theory. Moreover, the effectiveness of the proposed methodology is verified by synchronisation of the Chen complex system and the Lorenz complex system as the leader and the follower chaotic systems, respectively. In conclusion, some numerical simulations related to the synchronisation methodology are given to illustrate the effectiveness of the theoretical discussions.
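For orientation only, a generic integral sliding surface of the kind used in ISMC is shown below; the paper's surface, adaptive laws and Lyapunov function act on complex-valued states and parameters and differ in detail.

```latex
% Generic integral sliding surface for the synchronisation error e(t) between follower and
% leader states; the control is chosen so that a Lyapunov function of s decreases, driving
% e(t) to zero.
s(t) = e(t) + \lambda \int_0^{t} e(\tau)\, d\tau, \qquad
V = \tfrac{1}{2}\, \overline{s}^{\mathsf T} s, \qquad \dot V \le -\eta \lVert s \rVert .
```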
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hero, Alfred O.; Rajaratnam, Bala
When can reliable inference be drawn in the ‘‘Big Data’’ context? This article presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large-scale inference. In large-scale data applications like genomics, connectomics, and eco-informatics, the data set is often variable rich but sample starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for ‘‘Big Data.’’ Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; and 3) the purely high-dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high-dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high-dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining
Hero, Alfred O.; Rajaratnam, Bala
2015-01-01
When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700
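The sample-starved regime discussed in these records can be illustrated with a small numerical experiment: even under a null model with no true correlations, a p >> n sample correlation matrix contains many large spurious entries. The dimensions and threshold below are arbitrary assumptions.

```python
# Spurious correlations in the n << p regime under an independent (null) model.
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 2000                                    # "variable rich but sample starved"
x = rng.standard_normal((n, p))                    # null model: independent variables

corr = np.corrcoef(x, rowvar=False)                # p x p sample correlation matrix
off_diag = corr[np.triu_indices(p, k=1)]
threshold = 0.5
print("pairs with |corr| >", threshold, ":", int(np.sum(np.abs(off_diag) > threshold)))
```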
Clifton, Lei; Clifton, David A; Hahn, Clive E W; Farmeryy, Andrew D
2013-01-01
Conventional methods for estimating cardiopulmonary variables usually require complex gas analyzers and the active co-operation of the patient. Therefore, they are not compatible with the crowded environment of the intensive care unit (ICU) or operating theatre, where patient co-operation is typically impossible. However, it is these patients that would benefit the most from accurate estimation of cardiopulmonary variables, because of their critical condition. This paper describes the results of a collaborative development between anesthesiologists and biomedical engineers to create a compact and non-invasive system for the measurement of cardiopulmonary variables such as lung volume, airway dead space volume, and pulmonary blood flow. In contrast with conventional methods, the compact apparatus and non-invasive nature of the proposed method allow it to be used in the ICU, as well as in general clinical settings. We propose the use of a non-invasive method, in which tracer gases are injected into the patient's inspired breath, and the concentration of the tracer gases is subsequently measured. A novel breath-by-breath tidal ventilation model is then used to estimate the value of a patient's cardiopulmonary variables. Experimental results from an artificial lung demonstrate minimal error in the estimation of known parameters using the proposed method. Results from analysis of a cohort of 20 healthy volunteers (within the Oxford University Hospitals NHS Trust) show that the values of estimated cardiopulmonary variables from these subjects lie within the expected ranges. Advantages of this method are that it is non-invasive, compact, portable, and can perform analysis in real time with less than 1 min of acquired respiratory data.
An analysis of relational complexity in an air traffic control conflict detection task.
Boag, Christine; Neal, Andrew; Loft, Shayne; Halford, Graeme S
2006-11-15
Theoretical analyses of air traffic complexity were carried out using the Method for the Analysis of Relational Complexity. Twenty-two air traffic controllers examined static air traffic displays and were required to detect and resolve conflicts. Objective measures of performance included conflict detection time and accuracy. Subjective perceptions of mental workload were assessed by a complexity-sorting task and subjective ratings of the difficulty of different aspects of the task. A metric quantifying the complexity of pair-wise relations among aircraft was able to account for a substantial portion of the variance in the perceived complexity and difficulty of conflict detection problems, as well as reaction time. Other variables that influenced performance included the mean minimum separation between aircraft pairs and the amount of time that aircraft spent in conflict.
Handling Practicalities in Agricultural Policy Optimization for Water Quality Improvements
Bilevel and multi-objective optimization methods are often useful to spatially target agri-environmental policy throughout a watershed. This type of problem is complex and is comprised of a number of practicalities: (i) a large number of decision variables, (ii) at least two inte...
Southern Forestry Smoke Management Guidebook
Hugh E. Mobley [senior compiler]
1976-01-01
A system for predicting and modifying smoke concentrations from prescription fires is introduced. While limited to particulate matter and the more typical southern fuels, the system is for both simple and complex applications. Forestry smoke constituents, variables affecting smoke production and dispersion, and new methods for estimating available fuel are presented....
Integrated Reconfigurable Intelligent Systems (IRIS) for Complex Naval Systems
2010-02-21
RKF45] and Adams Variable Step-Size Predictor-Corrector methods). While such algorithms naturally are usually used to numerically solve differential...verified by yet another function call. Due to their nature, such methods are referred to as predictor-corrector methods. While computationally expensive... Contract Number: N00014-09-C-0394. Authors: Dr. Dimitri N. Mavris, Dr. Yongchang Li.
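The predictor-corrector idea mentioned in the snippet can be sketched as a two-step Adams-Bashforth predictor followed by an Adams-Moulton (trapezoidal) corrector; the test equation and step size are illustrative and unrelated to the IRIS system itself.

```python
# Two-step Adams-Bashforth predictor with an Adams-Moulton (trapezoidal) corrector:
# the explicit formula predicts y_{n+1}, and the implicit formula corrects it using one
# additional function evaluation.
import numpy as np

def abm2(f, t0, y0, h, steps):
    t, y = [t0], [y0]
    y.append(y0 + h * f(t0, y0))                              # bootstrap with one Euler step
    t.append(t0 + h)
    for n in range(1, steps):
        fn, fnm1 = f(t[n], y[n]), f(t[n - 1], y[n - 1])
        y_pred = y[n] + h * (1.5 * fn - 0.5 * fnm1)           # AB2 predictor
        y_corr = y[n] + 0.5 * h * (fn + f(t[n] + h, y_pred))  # AM2 (trapezoidal) corrector
        t.append(t[n] + h)
        y.append(y_corr)
    return np.array(t), np.array(y)

t, y = abm2(lambda t, y: -2.0 * y, 0.0, 1.0, 0.05, 40)
print(y[-1], np.exp(-2.0 * t[-1]))                            # compare with the exact solution
```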
Symmetry reduction and exact solutions of two higher-dimensional nonlinear evolution equations.
Gu, Yongyi; Qi, Jianming
2017-01-01
In this paper, symmetries and symmetry reduction of two higher-dimensional nonlinear evolution equations (NLEEs) are obtained by Lie group method. These NLEEs play an important role in nonlinear sciences. We derive exact solutions to these NLEEs via the [Formula: see text]-expansion method and complex method. Five types of explicit function solutions are constructed, which are rational, exponential, trigonometric, hyperbolic and elliptic function solutions of the variables in the considered equations.
Close-range laser scanning in forests: towards physically based semantics across scales.
Morsdorf, F; Kükenbrink, D; Schneider, F D; Abegg, M; Schaepman, M E
2018-04-06
Laser scanning with its unique measurement concept holds the potential to revolutionize the way we assess and quantify three-dimensional vegetation structure. Modern laser systems used at close range, be it on terrestrial, mobile or unmanned aerial platforms, provide dense and accurate three-dimensional data whose information just waits to be harvested. However, the transformation of such data to information is not as straightforward as for airborne and space-borne approaches, where typically empirical models are built using ground truth of target variables. Simpler variables, such as diameter at breast height, can be readily derived and validated. More complex variables, e.g. leaf area index, need a thorough understanding and consideration of the physical particularities of the measurement process and semantic labelling of the point cloud. Quantified structural models provide a framework for such labelling by deriving stem and branch architecture, a basis for many of the more complex structural variables. The physical information of the laser scanning process is still underused and we show how it could play a vital role in conjunction with three-dimensional radiative transfer models to shape the information retrieval methods of the future. Using such a combined forward and physically based approach will make methods robust and transferable. In addition, it avoids replacing observer bias from field inventories with instrument bias from different laser instruments. Still, an intensive dialogue with the users of the derived information is mandatory to potentially re-design structural concepts and variables so that they profit most of the rich data that close-range laser scanning provides.
NASA Technical Reports Server (NTRS)
Zhang, Yiqiang; Alexander, J. I. D.; Ouazzani, J.
1994-01-01
Free and moving boundary problems require the simultaneous solution of unknown field variables and the boundaries of the domains on which these variables are defined. There are many technologically important processes that lead to moving boundary problems associated with fluid surfaces and solid-fluid boundaries. These include crystal growth, metal alloy and glass solidification, melting and flame propagation. The directional solidification of semi-conductor crystals by the Bridgman-Stockbarger method is a typical example of such a complex process. A numerical model of this growth method must solve the appropriate heat, mass and momentum transfer equations and determine the location of the melt-solid interface. In this work, a Chebyshev pseudospectral collocation method is adapted to the problem of directional solidification. Implementation involves a solution algorithm that combines domain decomposition, a finite-difference preconditioned conjugate minimum residual method and a Picard-type iterative scheme.
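For orientation, the Chebyshev collocation machinery adapted in such a solver can be sketched as follows (the standard differentiation-matrix construction after Trefethen, checked on a smooth function); it is not the authors' solidification code.

```python
# Chebyshev-Gauss-Lobatto points and the spectral differentiation matrix D.
import numpy as np

def cheb(n):
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)           # collocation points on [-1, 1]
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))    # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                         # diagonal from negative row sums
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.exp(x) - np.exp(x))))        # tiny error: spectral accuracy
```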
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which arise widely in many areas, are among the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points that are either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains so as to avoid the complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic-time dynamic optimisation problems are solved and compared in detail against methods reported in the literature. The research results show the effectiveness of the proposed method.
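The terminal-state formula mentioned above rests on Legendre-Gauss quadrature; the sketch below illustrates that step on a scalar test dynamic with a known solution, which is an assumption made purely for illustration.

```python
# Recovering the state at the end of a subdomain [t0, tf] from the state derivative at
# the Legendre-Gauss points via the quadrature weights.
import numpy as np

t0, tf = 0.0, 2.0
tau, w = np.polynomial.legendre.leggauss(8)        # LG nodes and weights on [-1, 1]
t = 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)        # map nodes onto [t0, tf]

x = lambda t: np.sin(t)                            # assumed test trajectory
xdot = lambda t: np.cos(t)                         # its dynamics

x_tf = x(t0) + 0.5 * (tf - t0) * np.sum(w * xdot(t))   # terminal state from LG quadrature
print(x_tf, x(tf))                                      # matches sin(2) to near machine precision
```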
Roussy, Georges; Dichtel, Bernard; Chaabane, Haykel
2003-01-01
By using a new integrated circuit, which is marketed for Bluetooth applications, it is possible to simplify the method of measuring the complex impedance, complex reflection coefficient and complex transmission coefficient in an industrial microwave setup. The Analog Devices circuit AD 8302, which measures gain and phase up to 2.7 GHz, operates with variable-level input signals and is less sensitive to both amplitude and frequency fluctuations of industrial magnetrons than are mixers and AM crystal detectors. Therefore, accurate gain and phase measurements can be performed with low-stability generators. A mechanical setup with an AD 8302 is described; the calibration procedure and its performance are presented.
NASA Astrophysics Data System (ADS)
Oluoch, K.; Marwan, N.; Trauth, M.; Loew, A.; Kurths, J.
2012-04-01
The African continent lies almost entirely within the tropics, and as such its (tropical) climate systems are predominantly governed by the heterogeneous spatial and temporal variability of the Hadley and Walker circulations. The variability of these meridional and zonal circulations leads to intensification or suppression of the intensities, durations and frequencies of the Inter-tropical Convergence Zone (ITCZ) migration, the trade winds, the subtropical high-pressure regions and the continental monsoons. These features play a central role in determining the spatial and temporal variability patterns of African rainfall, yet they and their influence on rainfall are not sufficiently understood. Like many real-world systems, atmospheric-oceanic processes exhibit non-linear properties that can be better explored using non-linear (NL) methods of time-series analysis. Over recent years, the complex network approach has evolved as a powerful new player in understanding the spatio-temporal dynamics and evolution of complex systems. Together with NL techniques, it is continuing to find new applications in many areas of science and technology, including climate research. We would like to use these two powerful methods to understand the spatial structure and dynamics of African rainfall anomaly patterns and extremes. The method of event synchronization (ES), developed by Quiroga et al. (2002) and first applied to climate networks by Malik et al. (2011), looks at correlations with a dynamic time lag and as such is a more intuitive way to correlate a complex and heterogeneous system like a climate network than the fixed time delay most commonly used. On the other hand, the shortcomings of ES are its lack of rigorous test statistics for the significance level of the correlations, and the fact that only the events' time indices are synchronized while all information about how the relative intensities propagate within the network framework is lost. The new method we present is motivated by ES and borrows ideas from signal processing, where a signal is represented by its intensity and frequency. Even though the anomaly signals are not periodic, the idea of phase synchronization is not far-fetched. It brings under one umbrella the traditionally known linear intensity correlation methods, such as Pearson correlation and Spearman's rank, or non-linear ones such as mutual information, together with ES for non-linear temporal synchronization. The intensity correlation is only performed where there is temporal synchronization, and it measures how constant the intensity differences are, in other words how monotonically the two series vary together. The overall measure of correlation and synchronization is the product of the two coefficients. Complex networks constructed by this technique have all the advantages inherent in each of the techniques it borrows from, and are able to uncover many known and unknown dynamical features in the rainfall field or any variable of interest. The main aim of this work is to develop a method that can identify the footprints of coherent or incoherent structures within the ITCZ, the African and Indian monsoons and the ENSO signal on the tropical African continent, and their temporal evolution.
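A simplified, fixed-window variant of the event synchronization measure referred to above is sketched below; the full ES of Quiroga et al. uses a dynamic, locally adapted lag, and the event times here are invented placeholders.

```python
# Simplified event synchronisation: count how often an event in one series occurs within
# +/- tau of an event in the other, normalised by the event counts (fixed tau, not the
# locally adapted lag of the full ES measure).
import numpy as np

def event_sync(ex, ey, tau):
    """ex, ey: sorted arrays of event time indices (e.g. extreme-rainfall days)."""
    def count(a, b):
        c = 0.0
        for t in a:
            d = np.abs(b - t)
            if np.any((d > 0) & (d <= tau)):
                c += 1.0
            elif np.any(d == 0):
                c += 0.5              # simultaneous events shared between both counts
        return c
    q = (count(ex, ey) + count(ey, ex)) / np.sqrt(len(ex) * len(ey))
    return min(q, 1.0)

ex = np.array([5, 40, 77, 120, 200])
ey = np.array([6, 42, 75, 119, 260])
print(event_sync(ex, ey, tau=3))
```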
Complexity analysis of fetal heart rate preceding intrauterine demise.
Schnettler, William T; Goldberger, Ary L; Ralston, Steven J; Costa, Madalena
2016-08-01
Visual non-stress test interpretation lacks the optimal specificity and observer agreement of an ideal screening tool for intrauterine fetal demise (IUFD) syndrome prevention. Computational methods based on traditional heart rate variability have also been of limited value. Complexity analysis probes properties of the dynamics of physiologic signals that are otherwise not accessible and, therefore, might be useful in this context. The objective was to explore the association between fetal heart rate (FHR) complexity analysis and subsequent IUFD. Our specific hypothesis is that the complexity of the fetal heart rate dynamics is lower in the IUFD group compared with controls. This case-control study utilized cases of IUFD at a single tertiary-care center among singleton pregnancies with at least 10 min of continuous electronic FHR monitoring on at least 2 weekly occasions in the 3 weeks immediately prior to fetal demise. Controls delivered a live singleton beyond 35 weeks' gestation and were matched to cases by gestational age, testing indication, and maternal age in a 3:1 ratio. FHR data were analyzed using the multiscale entropy (MSE) method to derive a complexity index. In addition, pNNx, a measure of short-term heart rate variability, which in adults is ascribable primarily to cardiac vagal tone modulation, was also computed. 211 IUFDs occurred during the 9-year period of review, but only 6 met inclusion criteria. The median gestational age at the time of IUFD was 35.5 weeks. Three controls were matched to each case for a total of 24 subjects, and 87 FHR tracings were included for analysis. The median gestational age at the first fetal heart rate tracing was similar between groups (median [1st-3rd quartiles] weeks: IUFD cases: 34.7 (34.4-36.2); controls: 35.3 (34.4-36.1); p=.94). The median complexity of the cases' tracings was significantly less than the controls' (12.44 [8.9-16.77] vs. 17.82 [15.21-22.17]; p<.0001). Furthermore, the cases' median complexity decreased as gestation advanced whereas the controls' median complexity increased over time. However, this difference was not statistically significant [-0.83 (-2.03 to 0.47) vs. 0.14 (-1.25 to 0.94); p=.62]. The degree of short-term variability of FHR tracings, as measured by the pNNx metric, was significantly lower (p<.005) for the controls (1.1 [0.8-1.3]) than the IUFD cases (1.3 [1.1-1.6]). FHR complexity analysis using multiscale entropy may add value to other measures in detecting and monitoring pregnancies at the highest risk for IUFD. The decrease in complexity and short-term variability seen in the IUFD cases may reflect perturbations in neuroautonomic control due to multiple maternal-fetal factors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
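A minimal sketch of the multiscale entropy computation referenced above is shown below, assuming synthetic RR-interval data and common default parameters (m = 2, r = 0.15); tolerance handling and scale choices differ across published implementations, so this is illustrative only.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.15):
    """SampEn(m, r): negative log of the conditional probability that sequences
    matching for m points (within tolerance r*std) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templates) - 1):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += np.count_nonzero(d <= tol)
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.15):
    """Coarse-grain the series at each scale (non-overlapping means) and compute SampEn."""
    x = np.asarray(x, dtype=float)
    mse = []
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)
        mse.append(sample_entropy(coarse, m, r))
    return np.array(mse)

# Toy RR-interval series (hypothetical); a complexity index is commonly taken as the
# sum (area) of SampEn over the chosen scales.
rng = np.random.default_rng(0)
rr = 0.45 + 0.03 * rng.standard_normal(1200)
print(multiscale_entropy(rr).sum())
```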
Evidence of Deterministic Components in the Apparent Randomness of GRBs: Clues of a Chaotic Dynamic
Greco, G.; Rosa, R.; Beskin, G.; Karpov, S.; Romano, L.; Guarnieri, A.; Bartolini, C.; Bedogni, R.
2011-01-01
Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with typical variability time-scales that are remarkably short – as fast as milliseconds. This work aims to investigate the apparent randomness of the GRB time profiles, making extensive use of nonlinear techniques that combine the advanced spectral method of Singular Spectrum Analysis (SSA) with the classical tools provided by chaos theory. Despite their morphological complexity, we detect evidence of a non-stochastic short-term variability during the overall burst duration – seemingly consistent with a chaotic behavior. The phase space portrait of such variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions and usually associated with the birth of a fast-spinning magnetar or accretion of matter onto a newly formed black hole. PMID:22355609
A Hardware Model Validation Tool for Use in Complex Space Systems
NASA Technical Reports Server (NTRS)
Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.
2010-01-01
One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.
Cross-scale modeling of surface temperature and tree seedling establishment in mountain landscapes
Dingman, John; Sweet, Lynn C.; McCullough, Ian M.; Davis, Frank W.; Flint, Alan L.; Franklin, Janet; Flint, Lorraine E.
2013-01-01
Introduction: Estimating surface temperature from above-ground field measurements is important for understanding the complex landscape patterns of plant seedling survival and establishment, processes which occur at heights of only several centimeters. Currently, future climate models predict temperature at 2 m above ground, leaving the ground-surface microclimate not well characterized. Methods: Using a network of field temperature sensors and climate models, a ground-surface temperature method was used to estimate microclimate variability of minimum and maximum temperature. Temperature lapse rates were derived from field temperature sensors and distributed across the landscape, capturing differences in solar radiation and cold-air drainages modeled at a 30-m spatial resolution. Results: The surface temperature estimation method used for this analysis successfully estimated minimum surface temperatures on north-facing, south-facing, valley, and ridgeline topographic settings, and when compared to measured temperatures yielded an R2 of 0.88, 0.80, 0.88, and 0.80, respectively. Maximum surface temperatures generally had slightly more spatial variability than minimum surface temperatures, resulting in R2 values of 0.86, 0.77, 0.72, and 0.79 for north-facing, south-facing, valley, and ridgeline topographic settings. Quasi-Poisson regressions predicting recruitment of Quercus kelloggii (black oak) seedlings from temperature variables were significantly improved using these estimates of surface temperature compared to air temperature modeled at 2 m. Conclusion: Predicting minimum and maximum ground-surface temperatures using a downscaled climate model coupled with temperature lapse rates estimated from field measurements provides a method for modeling temperature effects on plant recruitment. Such methods could be applied to improve projections of species’ range shifts under climate change. Areas of complex topography can provide intricate microclimates that may allow species to redistribute locally as climate changes.
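The core lapse-rate idea can be sketched in a few lines of Python. This is a simplified illustration with hypothetical numbers: a linear lapse-rate shift applied to a fine-resolution elevation grid, omitting the solar radiation and cold-air drainage adjustments used in the study.

```python
import numpy as np

def downscale_tmin(t_coarse, elev_coarse, elev_fine, lapse_rate):
    """Shift a coarse-grid temperature to fine-grid elevations with a lapse rate (deg C per m)."""
    return t_coarse + lapse_rate * (elev_fine - elev_coarse)

# Lapse rate estimated from a pair of field sensors at two elevations (hypothetical values)
t_low, t_high = 4.2, 1.1          # deg C
z_low, z_high = 650.0, 1180.0     # m
lapse = (t_high - t_low) / (z_high - z_low)   # about -0.0058 deg C per m

# Apply to a 30 m DEM tile around a coarse cell centred at 900 m elevation and 3.0 deg C
dem = np.array([[820.0, 905.0, 990.0],
                [860.0, 935.0, 1010.0],
                [900.0, 970.0, 1055.0]])
print(downscale_tmin(3.0, 900.0, dem, lapse))
```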
ERIC Educational Resources Information Center
Lee, Hyeon Woo
2011-01-01
As the technology-enriched learning environments and theoretical constructs involved in instructional design become more sophisticated and complex, a need arises for equally sophisticated analytic methods to research these environments, theories, and models. Thus, this paper illustrates a comprehensive approach for analyzing data arising from…
Tire crumb rubber from recycled tires is widely used as infill material in synthetic turf fields in the United States. Recycled crumb rubber is a complex and potentially variable matrix with many metal, VOC, and SVOC constituents, presenting challenges for characterization and ex...
USDA-ARS?s Scientific Manuscript database
Characterization of complex microbial communities by DNA sequencing has become a standard technique in microbial ecology. Yet, particular features of this approach render traditional methods of community comparison problematic. In particular, a very low proportion of community members are typically ...
Modeling Noisy Data with Differential Equations Using Observed and Expected Matrices
ERIC Educational Resources Information Center
Deboeck, Pascal R.; Boker, Steven M.
2010-01-01
Complex intraindividual variability observed in psychology may be well described using differential equations. It is difficult, however, to apply differential equation models in psychological contexts, as time series are frequently short, poorly sampled, and have large proportions of measurement and dynamic error. Furthermore, current methods for…
Delorme, Arnaud; Miyakoshi, Makoto; Jung, Tzyy-Ping; Makeig, Scott
2014-01-01
With the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasingly interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects. We have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs are represented as color coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability. This study demonstrates new methods for computing and visualizing grand ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute. PMID:25447029
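The ERP-image construction (sort trials, stack them, smooth across neighbouring trials) can be sketched with synthetic data. This is a minimal illustration and not EEGLAB's erpimage() routine; the reaction-time response and all parameter values are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_trials, n_times = 200, 300
times = np.linspace(-0.2, 1.0, n_times)
rt = rng.uniform(0.25, 0.7, n_trials)                       # hypothetical reaction times
epochs = rng.standard_normal((n_trials, n_times))
epochs += 2.0 * np.exp(-((times[None, :] - rt[:, None]) ** 2) / 0.005)  # RT-locked response

order = np.argsort(rt)                                      # stack trials sorted by reaction time
sorted_epochs = epochs[order]

win = 20                                                    # moving-window smoothing across trials
kernel = np.ones(win) / win
smoothed = np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"),
                               0, sorted_epochs)

plt.imshow(smoothed, aspect="auto", origin="lower",
           extent=[times[0], times[-1], 0, n_trials], cmap="RdBu_r")
plt.xlabel("Time (s)"); plt.ylabel("Trials (sorted by RT)")
plt.colorbar(label="Amplitude")
plt.show()
```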
Carletto, Jeferson Schneider; Luciano, Raquel Medeiros; Bedendo, Gizelle Cristina; Carasek, Eduardo
2009-04-06
A hollow fiber renewal liquid membrane (HFRLM) extraction method to determine cadmium(II) in water samples using Flame Atomic Absorption Spectrometry (FAAS) was developed. Ammonium O,O-diethyl dithiophosphate (DDTP) was used to complex cadmium(II) in an acid medium to obtain a neutral hydrophobic complex (ML₂). The organic solvent introduced to the sample extracts this complex from the aqueous solution and carries it across the poly(dimethylsiloxane) (PDMS) membrane, whose walls had previously been filled with the same organic solvent. The organic solvent is solubilized inside the PDMS membrane, leading to a homogeneous phase. The complex is then stripped into the lumen of the membrane where, at higher pH, the Cd-DDTP complex is broken down and cadmium(II) is released into the stripping phase. EDTA was used to complex the cadmium(II), helping to trap the analyte in the stripping phase. A multivariate procedure was used to optimize the studied variables. The optimized variables were: sample (donor phase) pH 3.25, DDTP concentration 0.05% (m/v), stripping (acceptor phase) pH 8.75, EDTA concentration 1.5 × 10⁻² mol L⁻¹, extraction temperature 40 °C, extraction time 40 min, a solvent mixture of n-butyl acetate and hexane (60/40, v/v) with a volume of 100 μL, and addition of ammonium sulfate to saturate the sample. The sample volume used was 20 mL and the stripping volume was 165 μL. The analyte enrichment factor was 120, the limit of detection (LOD) 1.3 μg L⁻¹, the relative standard deviation (RSD) 5.5% and the working linear range 2-30 μg L⁻¹.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. At the same time, input variable selection simplifies the model structure and improves the computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
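A minimal sketch of the general workflow (rank candidate inputs by an information measure, then fit an SVM on the selected subset) is given below. It uses plain mutual-information ranking as a simplified stand-in for the PMI algorithm, and synthetic data in place of the Coriolis flowmeter features.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 10))                 # candidate inputs (synthetic)
y = 2.0 * X[:, 0] + np.sin(3 * X[:, 3]) + 0.5 * X[:, 7] + 0.1 * rng.standard_normal(400)

mi = mutual_info_regression(X, y, random_state=0)  # rank inputs by mutual information with y
selected = np.argsort(mi)[::-1][:3]                # keep the 3 most informative inputs
print("selected inputs:", selected, "MI:", np.round(mi[selected], 3))

score_all = cross_val_score(SVR(C=10.0), X, y, cv=5, scoring="r2").mean()
score_sel = cross_val_score(SVR(C=10.0), X[:, selected], y, cv=5, scoring="r2").mean()
print(f"R2 with all inputs: {score_all:.3f}, with selected inputs: {score_sel:.3f}")
```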
General algebraic method applied to control analysis of complex engine types
NASA Technical Reports Server (NTRS)
Boksenbom, Aaron S; Hood, Richard
1950-01-01
A general algebraic method of attack on the problem of controlling gas-turbine engines having any number of independent variables was utilized employing operational functions to describe the assumed linear characteristics for the engine, the control, and the other units in the system. Matrices were used to describe the various units of the system, to form a combined system showing all effects, and to form a single condensed matrix showing the principal effects. This method directly led to the conditions on the control system for noninteraction so that any setting disturbance would affect only its corresponding controlled variable. The response-action characteristics were expressed in terms of the control system and the engine characteristics. The ideal control-system characteristics were explicitly determined in terms of any desired response action.
An efficient approach to BAC based assembly of complex genomes.
Visendi, Paul; Berkman, Paul J; Hayashi, Satomi; Golicz, Agnieszka A; Bayer, Philipp E; Ruperao, Pradeep; Hurgobin, Bhavna; Montenegro, Juan; Chan, Chon-Kit Kenneth; Staňková, Helena; Batley, Jacqueline; Šimková, Hana; Doležel, Jaroslav; Edwards, David
2016-01-01
There has been an exponential growth in the number of genome sequencing projects since the introduction of next generation DNA sequencing technologies. Genome projects have increasingly involved assembly of whole genome data, which produces inferior assemblies compared to traditional Sanger sequencing of genomic fragments cloned into bacterial artificial chromosomes (BACs). While whole genome shotgun sequencing using next generation sequencing (NGS) is relatively fast and inexpensive, this method is extremely challenging for highly complex genomes, where polyploidy or high repeat content confounds accurate assembly, or where a highly accurate 'gold' reference is required. Several attempts have been made to improve genome sequencing approaches by incorporating NGS methods, with variable success. We present the application of a novel BAC sequencing approach which combines indexed pools of BACs, Illumina paired read sequencing, a sequence assembler specifically designed for complex BAC assembly, and a custom bioinformatics pipeline. We demonstrate this method by sequencing and assembling BAC cloned fragments from bread wheat and sugarcane genomes. We demonstrate that our assembly approach is accurate, robust, cost effective and scalable, with applications for complete genome sequencing in large and complex genomes.
Dennehy, Ellen B; Suppes, Trisha; John Rush, A; Lynn Crismon, M; Witte, B; Webster, J
2004-01-01
The adoption of treatment guidelines for complex psychiatric illness is increasing. Treatment decisions in psychiatry depend on a number of variables, including severity of symptoms, past treatment history, patient preferences, medication tolerability, and clinical response. While patient outcomes may be improved by the use of treatment guidelines, there is no agreed upon standard by which to assess the degree to which clinician behavior corresponds to those recommendations. This report presents a method to assess clinician adherence to the complex multidimensional treatment guideline for bipolar disorder utilized in the Texas Medication Algorithm Project. The steps involved in the development of this system are presented, including the reliance on standardized documentation, defining core variables of interest, selecting criteria for operationalization of those variables, and computerization of the assessment of adherence. The computerized assessment represents an improvement over other assessment methods, which have relied on laborious and costly chart reviews to extract clinical information and to analyze provider behavior. However, it is limited by the specificity of decisions that guided the adherence scoring process. Preliminary findings using this system with 2035 clinical visits conducted for the bipolar disorder module of TMAP Phase 3 are presented. These data indicate that this system of guideline adherence monitoring is feasible.
NASA Astrophysics Data System (ADS)
Vela Vela, Luis; Sanchez, Raul; Geiger, Joachim
2018-03-01
A method is presented to obtain initial conditions for Smoothed Particle Hydrodynamics (SPH) scenarios where arbitrarily complex density distributions and low particle noise are needed. Our method, named ALARIC, tampers with the evolution of the internal variables to obtain a fast and efficient profile evolution towards the desired goal. The result has very low levels of particle noise and constitutes a perfect candidate for studying the equilibrium and stability properties of SPH/SPMHD systems. The method uses the isothermal SPH equations to calculate hydrodynamical forces under the presence of an external fictitious potential and evolves them in time with a 2nd-order symplectic integrator. The proposed method generates tailored initial conditions that in many cases perform better than those based on purely crystalline lattices, since it prevents the appearance of anisotropies.
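The relaxation idea (evolve particles under a fictitious potential with a 2nd-order symplectic integrator until the distribution settles) can be illustrated with a one-dimensional sketch. Assumptions: a purely external, hypothetical potential and a simple velocity damping term; the actual method also evaluates SPH pressure forces from the isothermal SPH equations.

```python
import numpy as np

def fictitious_force(x):
    # Hypothetical potential V(x) = x**4/4 - x**2/2, which concentrates particles near x = +-1
    return -(x ** 3 - x)

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 2000)      # initial particle positions
v = np.zeros_like(x)
dt, damping = 0.05, 0.1

for _ in range(2000):
    v += 0.5 * dt * fictitious_force(x)   # kick
    x += dt * v                           # drift
    v += 0.5 * dt * fictitious_force(x)   # kick (completes the 2nd-order symplectic step)
    v *= (1.0 - damping)                  # dissipate kinetic energy so the profile relaxes

print("fraction of particles near the two potential wells:",
      np.mean(np.abs(np.abs(x) - 1.0) < 0.5))
```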
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems via adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, the learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.
NASA Astrophysics Data System (ADS)
Coban, Mustafa Burak
2018-06-01
A new GdIII coordination complex, {[Gd(2-stp)2(H2O)6]·2(4,4'-bipy)·4(H2O)}, complex 1 (2-stp = 2-sulfoterephthalate anion and 4,4'-bipy = 4,4'-bipyridine), has been synthesized by the hydrothermal method and characterized by elemental analysis, solid-state UV-Vis and FT-IR spectroscopy, single-crystal X-ray diffraction, solid-state photoluminescence and variable-temperature magnetic measurements. The crystal structure determination shows that the GdIII ions are eight-coordinated and adopt a distorted square-antiprismatic geometry. Molecules interacting through intra- and intermolecular (O-H⋯O, O-H⋯N) hydrogen bonds in complex 1 give rise to a 3D hydrogen-bonded structure, and the discrete lattice 4,4'-bipy molecules occupy the channels of the 3D structure. π-π stacking interactions also exist between 4,4'-bipy-4,4'-bipy and 4,4'-bipy-2-stp rings in the 3D structure. Additionally, the solid-state photoluminescence properties of complex 1 at room temperature have been investigated. Under the excitation of UV light (at 349 nm), complex 1 exhibited green emission (at 505 nm) of the GdIII ion in the visible region. Furthermore, variable-temperature magnetic susceptibility and isothermal magnetization as a function of the external magnetic field reveal that complex 1 displays a possible antiferromagnetic interaction.
Tuberous sclerosis complex: Recent advances in manifestations and therapy.
Wataya-Kaneda, Mari; Uemura, Motohide; Fujita, Kazutoshi; Hirata, Haruhiko; Osuga, Keigo; Kagitani-Shimono, Kuriko; Nonomura, Norio
2017-09-01
Tuberous sclerosis complex is an autosomal dominant inherited disorder characterized by generalized involvement and variable manifestations, with a birth incidence of 1:6000. Over the past quarter of a century, significant progress in tuberous sclerosis complex has been made. The two responsible genes, TSC1 and TSC2, which encode hamartin and tuberin, respectively, were discovered in the 1990s, and their functions were elucidated in the 2000s. The hamartin-tuberin complex is involved in the phosphoinositide 3-kinase-protein kinase B-mammalian target of rapamycin signal transduction pathway, and suppresses mammalian target of rapamycin complex 1 activity, which is a center for various functions. Constitutive activation of mammalian target of rapamycin complex 1 causes the variable manifestations of tuberous sclerosis complex. Recently, genetic tests were launched to diagnose tuberous sclerosis complex, and mammalian target of rapamycin complex 1 inhibitors are being used to treat tuberous sclerosis complex patients. As a result of these advances, new diagnostic criteria have been established, and an indispensable new treatment method, a "cross-sectional medical examination system" that involves many experts in tuberous sclerosis complex diagnosis and treatment, was also created. Simultaneously, the frequency of genetic tests and advances in diagnostic technology have resulted in new views on symptoms. The number of tuberous sclerosis complex patients without neural symptoms is increasing, and for these patients, renal manifestations and pulmonary lymphangioleiomyomatosis have become important manifestations. New concepts of tuberous sclerosis complex-associated neuropsychiatric disorders and perivascular epithelioid cell tumors are being created. The present review contains a summary of recent advances, significant manifestations and therapy in tuberous sclerosis complex. © 2017 The Japanese Urological Association.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
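The overfitting-index idea can be illustrated with a simplified sketch (not the authors' exact NOIS algorithm): fit models of increasing complexity both to the real data and to artificially generated spectra that carry no true relationship to the response; the apparent fit to the artificial spectra quantifies the relative overfitting, and cross-validation is shown alongside for comparison. All data here are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 60, 200                                        # few samples, many spectral bands
X = rng.standard_normal((n, p)).cumsum(axis=1)        # smooth, correlated "spectra"
y = X[:, 50] - 0.5 * X[:, 120] + 0.3 * rng.standard_normal(n)
X_art = rng.standard_normal((n, p)).cumsum(axis=1)    # artificial spectra, unrelated to y

for k in (1, 2, 4, 8, 16):
    r2_real = PLSRegression(n_components=k).fit(X, y).score(X, y)
    r2_noise = PLSRegression(n_components=k).fit(X_art, y).score(X_art, y)  # pure overfitting
    r2_cv = cross_val_score(PLSRegression(n_components=k), X, y, cv=5, scoring="r2").mean()
    print(f"components={k:2d}  fit={r2_real:.2f}  noise-fit={r2_noise:.2f}  cv={r2_cv:.2f}")
```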
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC) based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the results of the tests show that the correlation has a very important impact on the results of the sensitivity analysis. The impact of the correlation strength among input variables on the sensitivity analysis is also assessed.
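As a point of reference, the Correlation Ratio Method mentioned above can be sketched directly: generate correlated inputs, evaluate a model, and estimate each first-order ratio Var(E[y|x_i])/Var(y) by binning. Assumptions: Gaussian inputs correlated through a Cholesky factor and a toy stand-in function rather than the actual JCA acoustic model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated inputs via the Cholesky factor of a target correlation matrix (assumed Gaussian)
C = np.array([[1.0, 0.6, 0.2],
              [0.6, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
L = np.linalg.cholesky(C)
X = rng.standard_normal((10000, 3)) @ L.T

# Toy stand-in for the absorption model (hypothetical, not the JCA equations)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

def correlation_ratio(x, y, bins=30):
    """First-order sensitivity eta^2 = Var(E[y|x]) / Var(y), estimated by binning x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    present = [b for b in range(bins) if np.any(idx == b)]
    cond_means = np.array([y[idx == b].mean() for b in present])
    weights = np.array([np.mean(idx == b) for b in present])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

for i in range(3):
    print(f"x{i}: eta^2 = {correlation_ratio(X[:, i], y):.3f}")
```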
NASA Astrophysics Data System (ADS)
Shi, Zhong; Huang, Xuexiang; Hu, Tianjian; Tan, Qian; Hou, Yuzhuo
2016-10-01
Space teleoperation is an important space technology, and human-robot motion similarity can improve the flexibility and intuition of space teleoperation. This paper aims to obtain an appropriate kinematics mapping method of the coupled Cartesian-joint space for space teleoperation. First, the coupled Cartesian-joint similarity principles concerning kinematics differences are defined. Then, a novel weighted augmented Jacobian matrix with a variable coefficient (WAJM-VC) method for kinematics mapping is proposed. The Jacobian matrix is augmented to achieve a global similarity of human-robot motion. A clamping weighted least norm scheme is introduced to achieve local optimizations, and the operating ratio coefficient is variable to pursue similarity in the elbow joint. Similarity in Cartesian space and the property of joint constraint satisfaction are analysed to determine the damping factor and clamping velocity. Finally, a teleoperation system based on human motion capture is established, and the experimental results indicate that the proposed WAJM-VC method can improve the flexibility and intuition of space teleoperation to complete complex space tasks.
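A minimal differential inverse-kinematics sketch with a weighted, damped least-squares pseudo-inverse of an augmented Jacobian is shown below. It is a simplified stand-in for the WAJM-VC formulation: the planar 3-link arm, weights, damping factor and the extra "elbow" row are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

def dls_step(J, dx, W, damping=0.05):
    """One joint-velocity update: dq = W^-1 J^T (J W^-1 J^T + lambda^2 I)^-1 dx."""
    W_inv = np.linalg.inv(W)
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt + damping ** 2 * np.eye(J.shape[0]), dx)

def augmented_jacobian(q, lengths=(0.4, 0.35, 0.25)):
    """Task Jacobian of a 3-link planar arm, augmented with a row that tracks the
    second ('elbow') joint directly, mimicking the motion-similarity objective."""
    l1, l2, l3 = lengths
    s = np.cumsum(q)
    J_task = np.array([
        [-l1*np.sin(q[0]) - l2*np.sin(s[1]) - l3*np.sin(s[2]), -l2*np.sin(s[1]) - l3*np.sin(s[2]), -l3*np.sin(s[2])],
        [ l1*np.cos(q[0]) + l2*np.cos(s[1]) + l3*np.cos(s[2]),  l2*np.cos(s[1]) + l3*np.cos(s[2]),  l3*np.cos(s[2])],
    ])
    J_elbow = np.array([[0.0, 1.0, 0.0]])
    return np.vstack([J_task, J_elbow])

q = np.array([0.3, 0.8, -0.4])
W = np.diag([1.0, 2.0, 1.0])            # penalize motion of the "elbow" joint more strongly
dx = np.array([0.01, -0.02, 0.005])     # desired task-space increment plus elbow increment
dq = dls_step(augmented_jacobian(q), dx, W)
print(np.round(dq, 4))
```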
NASA Astrophysics Data System (ADS)
Naeemullah; Kazi, Tasneem Gul; Afridi, Hassan Imran; Shah, Faheem; Arain, Sadaf Sadia; Arain, Salma Aslam; Panhwar, Abdul Haleem; Arain, Mariam Shahzadi; Samoon, Muhammad Kashif
2016-02-01
An innovative and simple miniaturized solid phase microextraction (M-SPME) method was developed for the preconcentration and determination of silver(I) in fresh and waste water samples. For M-SPME, a micropipette tip packed with activated carbon cloth (ACC) as sorbent was used in a syringe system. The size, morphology and elemental composition of the ACC before and after adsorption of the analyte were characterized by scanning electron microscopy and energy dispersive spectroscopy. The sample solution, treated with a complexing reagent, ammonium pyrrolidine dithiocarbamate (APDC), was drawn into the syringe filled with ACC and dispensed manually for 2 to 10 aspirating/dispensing cycles. Then the Ag complex sorbed on the ACC in the micropipette was quantitatively eluted by drawing and dispensing different concentrations of acids for 2 to 5 aspirating/dispensing cycles. The extracted Ag ions with modifier were injected directly into the electrothermal atomic absorption spectrometer for analysis. The influence of different variables on the extraction efficiency, including the concentration of ligand, pH, sample volume, eluent type, concentration and volume, was investigated. The validity and accuracy of the developed method were checked by the standard addition method. The reliability of the proposed methodology was checked by the relative standard deviation (%RSD), which was found to be < 5%. Under the optimized experimental variables, the limit of detection (LOD) and enhancement factor (EF) were found to be 0.86 ng L⁻¹ and 120, respectively. The proposed method was successfully applied for the determination of trace levels of silver ions in fresh and waste water samples.
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-01-01
Objectives and design: Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Setting: Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. Methods and outcomes: 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SD of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman’s correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Results: Quantitatively, fidelity scores were high for all assessment methods, with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists’ individual scores (69.8%-100%). Both qualitative and quantitative data (from physiotherapist variables) found that physiotherapists’ knowledge (Spearman’s association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (eg, individual needs) and programme-level factors (eg, resources) as additional elements that influenced fidelity. Conclusion: The intervention was delivered with high fidelity. This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. Trial registration number: ISRCTN49875385; Pre-results. PMID:28780544
Carricarte Naranjo, Claudia; Sanchez-Rodriguez, Lazaro M; Brown Martínez, Marta; Estévez Báez, Mario; Machado García, Andrés
2017-07-01
Heart rate variability (HRV) analysis is a relevant tool for the diagnosis of cardiovascular autonomic neuropathy (CAN). To our knowledge, no previous investigation of CAN has assessed the complexity of HRV from an ordinal perspective. Therefore, the aim of this work is to explore the potential of permutation entropy (PE) analysis of HRV complexity for the assessment of CAN. For this purpose, we performed a short-term PE analysis of HRV in healthy subjects and type 1 diabetes mellitus patients, including patients with CAN. Standard HRV indicators were also calculated in the control group. A discriminant analysis was used to select the combination of variables with the best discriminative power between the control and CAN patient groups, as well as for classifying cases. We found that for some specific temporal scales, PE indicators were significantly lower in CAN patients than in controls. In such cases, there were ordinal patterns with high probabilities of occurrence, while others were hardly found. We posit that this behavior occurs due to a decrease of HRV complexity in the diseased system. Discriminant functions based on PE measures or probabilities of occurrence of ordinal patterns provided on average 75% and 96% classification accuracy, respectively. Correlations of PE and HRV measures were found to depend only on the temporal scale, regardless of pattern length. PE analysis at some specific temporal scales seems to provide additional information to that obtained with traditional HRV methods. We concluded that PE analysis of HRV is a promising method for the assessment of CAN. Copyright © 2017 Elsevier Ltd. All rights reserved.
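Permutation entropy itself is compact enough to sketch. Below is a minimal implementation under common conventions (pattern length "order" and an embedding delay that plays the role of the temporal scale); the synthetic RR series are illustrative only.

```python
import numpy as np
from math import factorial
from itertools import permutations

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Shannon entropy of the distribution of ordinal patterns of length `order`
    built from samples spaced `delay` apart (the temporal scale)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        window = x[i:i + order * delay:delay]
        pattern = tuple(int(v) for v in np.argsort(window))
        counts[pattern] += 1
    probs = np.array([c for c in counts.values() if c > 0], dtype=float)
    probs /= probs.sum()
    h = -np.sum(probs * np.log2(probs))
    return h / np.log2(factorial(order)) if normalize else h

# Toy RR-interval series: more regular dynamics give lower PE than irregular ones
rng = np.random.default_rng(0)
t = np.arange(600)
regular = 0.8 + 0.05 * np.sin(2 * np.pi * t / 7) + 0.002 * rng.standard_normal(600)
irregular = 0.8 + 0.05 * rng.standard_normal(600)
print(permutation_entropy(regular), permutation_entropy(irregular))
```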
Reconsidering the evolution of brain, cognition, and behavior in birds and mammals
Willemet, Romain
2013-01-01
Despite decades of research, some of the most basic issues concerning the extraordinarily complex brains and behavior of birds and mammals, such as the factors responsible for the diversity of brain size and composition, are still unclear. This is partly due to a number of conceptual and methodological issues. Determining species and group differences in brain composition requires accounting for the presence of taxon-cerebrotypes and the use of precise statistical methods. The role of allometry in determining brain variables should be revised. In particular, bird and mammalian brains appear to have evolved in response to a variety of selective pressures influencing both brain size and composition. “Brain” and “cognition” are indeed meta-variables, made up of the variables that are ecologically relevant and evolutionarily selected. External indicators of species differences in cognition and behavior are limited by the complexity of these differences. Indeed, behavioral differences between species and individuals are caused by cognitive and affective components. Although intra-species variability forms the basis of species evolution, some of the mechanisms underlying individual differences in brain and behavior appear to differ from those between species. While many issues have persisted over the years because of a lack of appropriate data or methods to test them, several fallacies, particularly those related to the human brain, reflect scientists' preconceptions. The theoretical framework on the evolution of brain, cognition, and behavior in birds and mammals should be reconsidered with these biases in mind. PMID:23847570
Sassi, Roberto; Cerutti, Sergio; Lombardi, Federico; Malik, Marek; Huikuri, Heikki V; Peng, Chung-Kang; Schmidt, Georg; Yamamoto, Yoshiharu
2015-09-01
Following the publication of the Task Force document on heart rate variability (HRV) in 1996, a number of articles have been published to describe new HRV methodologies and their application in different physiological and clinical studies. This document presents a critical review of the new methods. Particular attention has been paid to methodologies that have not been reported in the 1996 standardization document but have been more recently tested in sufficiently sized populations. The following methods were considered: long-range correlation and fractal analysis; short-term complexity; entropy and regularity; and nonlinear dynamical systems and chaotic behaviour. For each of these methods, technical aspects, clinical achievements, and suggestions for clinical application were reviewed. While the novel approaches have contributed to the technical understanding of the signal character of HRV, their success in developing new clinical tools, such as those for the identification of high-risk patients, has been rather limited. Available results obtained in selected populations of patients by specialized laboratories are nevertheless of interest, but new prospective studies are needed. The investigation of new parameters, descriptive of the complex regulation mechanisms of heart rate, has to be encouraged because not all information in the HRV signal is captured by traditional methods. The new technologies could thus provide, after proper validation, additional physiological and clinical meaning. Multidisciplinary dialogue and specialized courses combining clinical cardiology and complex signal processing methods seem warranted for further advances in studies of cardiac oscillations and in the understanding of normal and abnormal cardiac control processes. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2015. For permissions please email: journals.permissions@oup.com.
Integral projection models for finite populations in a stochastic environment.
Vindenes, Yngvild; Engen, Steinar; Saether, Bernt-Erik
2011-05-01
Continuous types of population structure occur when continuous variables such as body size or habitat quality affect the vital parameters of individuals. These structures can give rise to complex population dynamics and interact with environmental conditions. Here we present a model for continuously structured populations with finite size, including both demographic and environmental stochasticity in the dynamics. Using recent methods developed for discrete age-structured models we derive the demographic and environmental variance of the population growth as functions of a continuous state variable. These two parameters, together with the expected population growth rate, are used to define a one-dimensional diffusion approximation of the population dynamics. Thus, a substantial reduction in complexity is achieved as the dynamics of the complex structured model can be described by only three population parameters. We provide methods for numerical calculation of the model parameters and demonstrate the accuracy of the diffusion approximation by computer simulation of specific examples. The general modeling framework makes it possible to analyze and predict future dynamics and extinction risk of populations with various types of structure, and to explore consequences of changes in demography caused by, e.g., climate change or different management decisions. Our results are especially relevant for small populations that are often of conservation concern.
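The one-dimensional diffusion approximation summarized above can be simulated directly once the expected growth rate and the demographic and environmental variances are known. The sketch below uses the standard variance decomposition Var(ΔN | N) = σ_e²N² + σ_d²N with purely illustrative parameter values, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
r, sigma_e2, sigma_d2 = 0.01, 0.005, 0.8     # illustrative growth rate and variances
N0, n_years, n_reps, N_extinct = 50.0, 100, 1000, 1.0

N = np.full(n_reps, N0)
extinct = np.zeros(n_reps, dtype=bool)
for _ in range(n_years):
    drift = r * N                                      # infinitesimal mean
    var = sigma_e2 * N ** 2 + sigma_d2 * N             # environmental + demographic variance
    N = N + drift + np.sqrt(var) * rng.standard_normal(n_reps)
    extinct |= N < N_extinct
    N = np.maximum(N, 0.0)

print("extinction probability within", n_years, "years:", extinct.mean())
```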
Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H
2017-07-01
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables are used or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in using RF to develop predictive models with large environmental data sets.
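A minimal sketch of importance-based backward elimination for a random forest classifier is given below, with synthetic data standing in for the StreamCat predictors and good/poor condition labels. Note the caveat the paper makes explicit: because the elimination here uses all of the data, the reported cross-validation scores are still optimistic; an unbiased estimate requires the selection to be repeated inside each validation fold.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=60, n_informative=8,
                           n_redundant=10, random_state=0)
features = np.arange(X.shape[1])

while len(features) > 5:
    rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                                random_state=0, n_jobs=-1)
    rf.fit(X[:, features], y)
    cv = cross_val_score(rf, X[:, features], y, cv=5).mean()   # refit on the current subset
    print(f"{len(features):2d} features  OOB={rf.oob_score_:.3f}  CV={cv:.3f}")
    # Drop the least important ~20% of the remaining predictors and repeat
    keep = np.argsort(rf.feature_importances_)[int(0.2 * len(features)):]
    features = features[keep]
```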
Sun, Gang; Hoff, Steven J; Zelle, Brian C; Nelson, Minda A
2008-12-01
It is vital to forecast gas and particulate matter concentrations and emission rates (GPCER) from livestock production facilities to assess the impact of airborne pollutants on human health, the ecological environment, and global warming. Modeling source air quality is a complex process because of abundant nonlinear interactions between GPCER and other factors. The objective of this study was to introduce statistical methods and a radial basis function (RBF) neural network to predict daily source air quality in Iowa swine deep-pit finishing buildings. The results show that four variables (outdoor and indoor temperature, animal units, and ventilation rates) were identified as relatively important model inputs using statistical methods. It can be further demonstrated that only two factors, the environment factor and the animal factor, were capable of explaining more than 94% of the total variability after performing principal component analysis. The introduction of fewer uncorrelated variables to the neural network reduces the model structure complexity, minimizes computation cost, and eliminates model overfitting problems. The obtained results of the RBF network prediction were in good agreement with the actual measurements, with values of the correlation coefficient between 0.741 and 0.995 and very low values of systemic performance indexes for all the models. These good results indicate that the RBF network can be trained to model these highly nonlinear relationships. Thus, RBF neural network technology combined with multivariate statistical methods is a promising tool for air pollutant emission modeling.
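The modeling chain (standardize, reduce with principal component analysis, then fit an RBF network) can be sketched with a hand-rolled Gaussian RBF regression whose centers come from k-means and whose output weights are solved by linear least squares. Synthetic data replace the swine-building measurements, and the width heuristic is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 4))        # e.g. temperatures, animal units, ventilation (synthetic)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(500)

Z = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

def rbf_design(Z, centers, width):
    """Gaussian design matrix: one basis function per k-means center."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(Z).cluster_centers_
width = np.mean(np.linalg.norm(Z, axis=1))          # crude global width heuristic (assumption)
Phi = rbf_design(Z, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output-layer weights by least squares

r = np.corrcoef(Phi @ w, y)[0, 1]
print(f"training correlation coefficient: {r:.3f}")
```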
Effects of Topography-driven Micro-climatology on Evaporation
NASA Astrophysics Data System (ADS)
Adams, D. D.; Boll, J.; Wagenbrenner, N. S.
2017-12-01
The effects of spatial-temporal variation of climatic conditions on evaporation in micro-climates are not well defined. Current spatially based remote sensing and modeling of evaporation is limited at high resolutions and over complex topographies. We investigated the effect of topography-driven micro-climatology on evaporation, supported by field measurements and modeling. Fourteen anemometers and thermometers were installed in intersecting transects over the complex topography of the Cook Agronomy Farm, Pullman, WA. WindNinja was used to create 2-D vector maps based on recorded wind observations. Spatial analysis of the vector maps using ArcGIS was performed to analyse wind patterns and variation. Based on the field measurements, wind speed and direction show consequential variability depending on hill-slope location in this complex topography. Wind speed and wind direction varied up to threefold and by more than 45 degrees, respectively, for a given time interval. The use of existing wind models enables prediction of wind variability over the landscape and subsequently of topography-driven evaporation patterns relative to wind. The magnitude of the spatial-temporal variability of wind therefore resulted in variable evaporation rates over the landscape. These variations may contribute to the uneven crop development patterns observed during the late growth stages of the agricultural crops at the study location. The use of hill-slope location indexes and appropriate methods for estimating actual evaporation supports the development of methodologies to better define topography-driven heterogeneity in evaporation. The cumulative effects of spatially variable climatic factors on evaporation are important for quantifying the localized water balance and informing precision farming practices.
Cross-entropy clustering framework for catchment classification
NASA Astrophysics Data System (ADS)
Tongal, Hakan; Sivakumar, Bellie
2017-09-01
There is an increasing interest in catchment classification and regionalization in hydrology, as they are useful for identification of appropriate model complexity and transfer of information from gauged catchments to ungauged ones, among others. This study introduces a nonlinear cross-entropy clustering (CEC) method for classification of catchments. The method specifically considers embedding dimension (m), sample entropy (SampEn), and coefficient of variation (CV) to represent dimensionality, complexity, and variability of the time series, respectively. The method is applied to daily streamflow time series from 217 gauging stations across Australia. The results suggest that a combination of linear and nonlinear parameters (i.e. m, SampEn, and CV), representing different aspects of the underlying dynamics of streamflows, could be useful for determining distinct patterns of flow generation mechanisms within a nonlinear clustering framework. For the 217 streamflow time series, nine hydrologically homogeneous clusters that have distinct patterns of flow regime characteristics and specific dominant hydrological attributes with different climatic features are obtained. Comparison of the results with those obtained using the widely employed k-means clustering method (which results in five clusters, with the loss of some information about the features of the clusters) suggests the superiority of the cross-entropy clustering method. The outcomes from this study provide a useful guideline for employing the nonlinear dynamic approaches based on hydrologic signatures and for gaining an improved understanding of streamflow variability at a large scale.
Synchrosqueezing an effective method for analyzing Doppler radar physiological signals.
Yavari, Ehsan; Rahman, Ashikur; Jia Xu; Mandic, Danilo P; Boric-Lubecke, Olga
2016-08-01
Doppler radar can monitor vital signs wirelessly. Respiratory and heart rates have time-varying behavior. Capturing the rate variability provides crucial physiological information. However, common time-frequency methods fail to detect key information. We investigate the synchrosqueezing method to extract oscillatory components of signals with time-varying spectra. Simulation and experimental results show the potential of the proposed method for analyzing signals with complex time-frequency behavior, such as physiological signals. Respiration and heart signals and their components are extracted with higher resolution and without any pre-filtering or signal conditioning.
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
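The complex-variable (complex-step) approach the title refers to can be illustrated on a scalar function: perturb the design variable along the imaginary axis and take the imaginary part, which avoids the subtractive cancellation that limits finite differences. The function below is a toy stand-in for a flow quantity, not a CFD solver.

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Complex-step derivative: Im(f(x + i*h)) / h, free of subtractive cancellation."""
    return np.imag(f(x + 1j * h)) / h

def central_diff(f, x, h):
    """Second-order central difference, sensitive to the choice of step size."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)   # toy cost function
x0 = 1.5

for h in (1e-2, 1e-6, 1e-10):
    print(f"central difference, h={h:.0e}: {central_diff(f, x0, h):.12f}")
print(f"complex step            : {complex_step(f, x0):.12f}")
```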
Fluid Mechanics and Complex Variable Theory: Getting Past the 19th Century
ERIC Educational Resources Information Center
Newton, Paul K.
2017-01-01
The subject of fluid mechanics is a rich, vibrant, and rapidly developing branch of applied mathematics. Historically, it has developed hand-in-hand with the elegant subject of complex variable theory. The Westmont College NSF-sponsored workshop on the revitalization of complex variable theory in the undergraduate curriculum focused partly on…
Transdimensional Seismic Tomography
NASA Astrophysics Data System (ADS)
Bodin, T.; Sambridge, M.
2009-12-01
In seismic imaging the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography where the model complexity is controlled automatically by the data. In particular we use a variable parameterization consisting of Voronoi cells with mobile geometry, shape and number, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework, which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach, as many potential solutions are generated with variable numbers of cells. Information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates, and experiments with synthetic data suggest they represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. Intriguingly, the Bayesian formulation can be extended to the case where the data uncertainty is itself uncertain. Experiments show that it is possible to recover the data noise estimate while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix-based inversion scheme. The method has also been applied to real data obtained from cross correlation of ambient noise, where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed for 5 s data, together with uncertainty estimates.
Deriving the exact nonadiabatic quantum propagator in the mapping variable representation.
Hele, Timothy J H; Ananth, Nandini
2016-12-22
We derive an exact quantum propagator for nonadiabatic dynamics in multi-state systems using the mapping variable representation, where classical-like Cartesian variables are used to represent both continuous nuclear degrees of freedom and discrete electronic states. The resulting Liouvillian is a Moyal series that, when suitably approximated, can allow for the use of classical dynamics to efficiently model large systems. We demonstrate that different truncations of the exact Liouvillian lead to existing approximate semiclassical and mixed quantum-classical methods and we derive an associated error term for each method. Furthermore, by combining the imaginary-time path-integral representation of the Boltzmann operator with the exact Liouvillian, we obtain an analytic expression for thermal quantum real-time correlation functions. These results provide a rigorous theoretical foundation for the development of accurate and efficient classical-like dynamics to compute observables such as electron transfer reaction rates in complex quantized systems.
From metadynamics to dynamics.
Tiwary, Pratyush; Parrinello, Michele
2013-12-06
Metadynamics is a commonly used and successful enhanced sampling method. By introducing a history-dependent bias that depends on a restricted number of collective variables, it can explore complex free energy surfaces characterized by several metastable states separated by large free energy barriers. Here we extend its scope by introducing a simple yet powerful method for calculating the rates of transition between different metastable states. The method does not rely on previous knowledge of the transition states or reaction coordinates, as long as collective variables are known that can distinguish between the various stable minima in free energy space. We demonstrate that our method recovers the correct escape rates out of these stable states and also preserves the correct sequence of state-to-state transitions, with minimal extra computational effort needed over ordinary metadynamics. We apply the formalism to three different problems and in each case find excellent agreement with the results of long unbiased molecular dynamics runs.
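The rate-recovery step of this approach can be sketched numerically: each step of the biased run is rescaled by exp(V_bias/kT), the bias acting on the collective variable at that moment, which maps the accelerated escape time back to physical time. The bias trace and all numbers below are illustrative, not output from a real metadynamics run.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 2.494           # kJ/mol at roughly 300 K
dt = 2e-6            # ns per MD step (2 fs)
n_steps = 500_000

# Hypothetical instantaneous bias felt by the system, growing as hills are deposited
v_bias = np.minimum(5e-5 * np.arange(n_steps), 20.0) * rng.uniform(0.7, 1.0, n_steps)

alpha = np.exp(v_bias / kT)              # per-step acceleration factors
t_physical = np.sum(dt * alpha)          # rescaled (physical) escape time
print(f"biased run length: {n_steps * dt:.3f} ns")
print(f"estimated unbiased escape time: {t_physical:.1f} ns")
print(f"mean acceleration factor: {alpha.mean():.1f}")
```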
An adaptive gridless methodology in one dimension
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snyder, N.T.; Hailey, C.E.
1996-09-01
Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
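The following one-dimensional sketch illustrates the gridless idea in the abstract under simplifying assumptions (scattered points, a local least-squares Taylor fit for the derivatives, explicit time stepping of the heat equation u_t = u_xx); it is not the cited code, and point counts and step sizes are arbitrary.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, np.pi, 81))   # unevenly spaced "gridless" points
u = np.sin(x)                              # initial field

def derivatives(x, u, n_neighbors=5, order=2):
    """First and second derivatives at every point from a local Taylor fit."""
    d1 = np.empty_like(u)
    d2 = np.empty_like(u)
    for i in range(len(x)):
        nbr = np.argsort(np.abs(x - x[i]))[1:n_neighbors + 1]   # nearest points
        dx = x[nbr] - x[i]
        A = np.column_stack([dx**m / math.factorial(m) for m in range(1, order + 1)])
        coef, *_ = np.linalg.lstsq(A, u[nbr] - u[i], rcond=None)
        d1[i], d2[i] = coef[0], coef[1]
    return d1, d2

dt = 1e-4
for _ in range(100):                 # explicit integration of u_t = u_xx
    _, uxx = derivatives(x, u)
    u = u + dt * uxx
    u[0] = u[-1] = 0.0               # Dirichlet boundary values

exact = np.exp(-100 * dt) * np.sin(x)       # analytic decay for this mode
print("max error vs analytic solution:", np.max(np.abs(u - exact)))
```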
Reduced heart rate variability during sleep in long-duration spaceflight.
Xu, D; Shoemaker, J K; Blaber, A P; Arbeille, P; Fraser, K; Hughson, R L
2013-07-15
Limited data are available to describe the regulation of heart rate (HR) during sleep in spaceflight. Sleep provides a stable supine baseline during preflight Earth recordings for comparison of heart rate variability (HRV) over a wide range of frequencies using linear, complexity, and fractal indicators. The current study investigated the effect of long-duration spaceflight on HR and HRV during sleep in seven astronauts aboard the International Space Station up to 6 mo. Measurements included electrocardiographic waveforms from Holter monitors and simultaneous movement records from accelerometers before, during, and after the flights. HR was unchanged inflight and elevated postflight [59.6 ± 8.9 beats per minute (bpm) compared with preflight 53.3 ± 7.3 bpm; P < 0.01]. Compared with preflight data, HRV indicators from both time domain and power spectral analysis methods were diminished inflight from ultralow to high frequencies and partially recovered to preflight levels after landing. Both inflight and postflight, complexity and fractal properties of HR were not different from preflight properties. Slow fluctuations (<0.04 Hz) in HR presented moderate correlations with movements during sleep, partially accounting for the reduction in HRV. In summary, substantial reduction in HRV was observed with linear, but not with complexity and fractal, methods of analysis. These results suggest that periodic elements that influence regulation of HR through reflex mechanisms are altered during sleep in spaceflight but that underlying system complexity and fractal dynamics were not altered.
Lee, Jonathan K.; Froehlich, David C.
1987-01-01
Published literature on the application of the finite-element method to solving the equations of two-dimensional surface-water flow in the horizontal plane is reviewed in this report. The finite-element method is ideally suited to modeling two-dimensional flow over complex topography with spatially variable resistance. A two-dimensional finite-element surface-water flow model with depth and vertically averaged velocity components as dependent variables allows the user great flexibility in defining geometric features such as the boundaries of a water body, channels, islands, dikes, and embankments. The following topics are reviewed in this report: alternative formulations of the equations of two-dimensional surface-water flow in the horizontal plane; basic concepts of the finite-element method; discretization of the flow domain and representation of the dependent flow variables; treatment of boundary conditions; discretization of the time domain; methods for modeling bottom, surface, and lateral stresses; approaches to solving systems of nonlinear equations; techniques for solving systems of linear equations; finite-element alternatives to Galerkin's method of weighted residuals; techniques of model validation; and preparation of model input data. References are listed in the final chapter.
Machine Learning for Detecting Gene-Gene Interactions
McKinney, Brett A.; Reif, David M.; Ritchie, Marylyn D.; Moore, Jason H.
2011-01-01
Complex interactions among genes and environmental factors are known to play a role in common human disease aetiology. There is a growing body of evidence to suggest that complex interactions are ‘the norm’ and, rather than amounting to a small perturbation to classical Mendelian genetics, interactions may be the predominant effect. Traditional statistical methods are not well suited for detecting such interactions, especially when the data are high dimensional (many attributes or independent variables) or when interactions occur between more than two polymorphisms. In this review, we discuss machine-learning models and algorithms for identifying and characterising susceptibility genes in common, complex, multifactorial human diseases. We focus on the following machine-learning methods that have been used to detect gene-gene interactions: neural networks, cellular automata, random forests, and multifactor dimensionality reduction. We conclude with some ideas about how these methods and others can be integrated into a comprehensive and flexible framework for data mining and knowledge discovery in human genetics. PMID:16722772
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
A practical approach to Sasang constitutional diagnosis using vocal features
2013-01-01
Background Sasang constitutional medicine (SCM) is a type of tailored medicine that divides human beings into four Sasang constitutional (SC) types. Diagnosis of SC types is crucial to proper treatment in SCM. Voice characteristics have been used as an essential clue for diagnosing SC types. In the past, many studies tried to extract quantitative vocal features to make diagnosis models; however, these studies were flawed by limited data collected from one or a few sites, long recording time, and low accuracy. We propose a practical diagnosis model having only a few variables, which decreases model complexity. This, in turn, makes our model appropriate for clinical applications. Methods A total of 2,341 participants’ voice recordings were used in making a SC classification model and to test the generalization ability of the model. Although the voice data consisted of five vowels and two repeated sentences per participant, we used only the sentence part for our study. A total of 21 features were extracted, and an advanced feature selection method, the least absolute shrinkage and selection operator (LASSO), was applied to reduce the number of variables for classifier learning. A SC classification model was developed using multinomial logistic regression via LASSO. Results We compared the proposed classification model to the previous study, which used both sentences and five vowels from the same patient group. The classification accuracies for the test set were 47.9% and 40.4% for males and females, respectively. Our results showed that the proposed method was superior to the previous study in that it required shorter voice recordings, was more applicable to practical use, and had better generalization performance. Conclusions We proposed a practical SC classification method and showed that our model having fewer variables outperformed the model having many variables in the generalization test. We attempted to reduce the number of variables in two ways: 1) the initial number of candidate features was decreased by considering shorter voice recording, and 2) LASSO was introduced for reducing model complexity. The proposed method is suitable for an actual clinical environment. Moreover, we expect it to yield more stable results because of the model’s simplicity. PMID:24200041
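As a rough illustration of the modelling step described above (not the authors' code), the snippet fits an L1-penalised, LASSO-type multinomial logistic regression to a synthetic stand-in for the 21 vocal features and inspects how many coefficients survive the penalty.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 21))        # placeholder for 21 candidate vocal features
y = rng.integers(0, 4, size=600)      # placeholder labels for four SC types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="saga", C=0.5, max_iter=5000),
)
model.fit(X_tr, y_tr)

coef = model.named_steps["logisticregression"].coef_
print("features retained per class:", (np.abs(coef) > 1e-8).sum(axis=1))
print("test accuracy:", model.score(X_te, y_te))
```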
Biopsy variability of lymphocytic infiltration in breast cancer subtypes and the ImmunoSkew score
NASA Astrophysics Data System (ADS)
Khan, Adnan Mujahid; Yuan, Yinyin
2016-11-01
The number of tumour biopsies required for a good representation of tumours has been controversial. An important factor to consider is intra-tumour heterogeneity, which can vary among cancer types and subtypes. Immune cells in particular often display complex infiltrative patterns, however, there is a lack of quantitative understanding of the spatial heterogeneity of immune cells and how this fundamental biological nature of human tumours influences biopsy variability and treatment resistance. We systematically investigate biopsy variability for the lymphocytic infiltrate in 998 breast tumours using a novel virtual biopsy method. Across all breast cancers, we observe a nonlinear increase in concordance between the biopsy and whole-tumour score of lymphocytic infiltrate with increasing number of biopsies, yet little improvement is gained with more than four biopsies. Interestingly, biopsy variability of lymphocytic infiltrate differs considerably among breast cancer subtypes, with the human epidermal growth factor receptor 2-positive (HER2+) subtype having the highest variability. We subsequently identify a quantitative measure of spatial variability that predicts disease-specific survival in HER2+ subtype independent of standard clinical variables (node status, tumour size and grade). Our study demonstrates how systematic methods provide new insights that can influence future study design based on a quantitative knowledge of tumour heterogeneity.
Grosse Frie, Kirstin; Janssen, Christian
2009-01-01
Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described in three dimensions; the eigenvalues add up to a fit of 1.444, which can be interpreted as approximately 50% explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be more substantially interpreted and presented.
Zhang, Xiaoshuai; Xue, Fuzhong; Liu, Hong; Zhu, Dianwen; Peng, Bin; Wiemels, Joseph L; Yang, Xiaowei
2014-12-10
Genome-wide Association Studies (GWAS) are typically designed to identify phenotype-associated single nucleotide polymorphisms (SNPs) individually using univariate analysis methods. Though providing valuable insights into genetic risks of common diseases, the genetic variants identified by GWAS generally account for only a small proportion of the total heritability for complex diseases. To solve this "missing heritability" problem, we implemented a strategy called integrative Bayesian Variable Selection (iBVS), which is based on a hierarchical model that incorporates an informative prior by considering the gene interrelationship as a network. It was applied here to both simulated and real data sets. Simulation studies indicated that the iBVS method was advantageous in its performance with highest AUC in both variable selection and outcome prediction, when compared to Stepwise and LASSO based strategies. In an analysis of a leprosy case-control study, iBVS selected 94 SNPs as predictors, while LASSO selected 100 SNPs. The Stepwise regression yielded a more parsimonious model with only 3 SNPs. The prediction results demonstrated that the iBVS method had comparable performance with that of LASSO, but better than Stepwise strategies. The proposed iBVS strategy is a novel and valid method for Genome-wide Association Studies, with the additional advantage in that it produces more interpretable posterior probabilities for each variable unlike LASSO and other penalized regression methods.
Zhang, Qin; Yao, Quanying
2018-05-01
The dynamic uncertain causality graph (DUCG) is a newly presented framework for uncertain causality representation and probabilistic reasoning. It has been successfully applied to online fault diagnoses of large, complex industrial systems, and to disease diagnoses. This paper extends the DUCG to model more complex cases than what could be previously modeled, e.g., the case in which statistical data are in different groups with or without overlap, and some domain knowledge and actions (new variables with uncertain causalities) are introduced. In other words, this paper proposes to use -mode, -mode, and -mode of the DUCG to model such complex cases and then transform them into either the standard -mode or the standard -mode. In the former situation, if no directed cyclic graph is involved, the transformed result is simply a Bayesian network (BN), and existing inference methods for BNs can be applied. In the latter situation, an inference method based on the DUCG is proposed. Examples are provided to illustrate the methodology.
Research in Reading in English as a Second Language.
ERIC Educational Resources Information Center
Devine, Joanne, Ed.; And Others
This collection of essays, most followed by comments, reflects some aspect of the general theme: reading is a multifaceted, complex, interactive process that involves many subskills and many types of reader as well as text variables. Papers include: "The Eclectic Synergy of Methods of Reading Research" (Ulla Connor); "A View of…
A Practical Method of Policy Analysis by Estimating Effect Size
ERIC Educational Resources Information Center
Phelps, James L.
2011-01-01
The previous articles on class size and other productivity research paint a complex and confusing picture of the relationship between policy variables and student achievement. Missing is a conceptual scheme capable of combining the seemingly unrelated research and dissimilar estimates of effect size into a unified structure for policy analysis and…
Correction of I/Q channel errors without calibration
Doerry, Armin W.; Tise, Bertice L.
2002-01-01
A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
NASA Technical Reports Server (NTRS)
Carleton, O.
1972-01-01
Consideration is given specifically to sixth order elliptic partial differential equations in two independent real variables x, y such that the coefficients of the highest order terms are real constants. It is assumed that the differential operator has distinct characteristics and that it can be factored as a product of second order operators. By analytically continuing into the complex domain and using the complex characteristic coordinates of the differential equation, it is shown that its solutions, u, may be reflected across analytic arcs on which u satisfies certain analytic boundary conditions. Moreover, a method is given whereby one can determine a region into which the solution is extensible. It is seen that this region of reflection is dependent on the original domain of definition of the solution, the arc and the coefficients of the highest order terms of the equation and not on any sufficiently small quantities; i.e., the reflection is global in nature. The method employed may be applied to similar differential equations of order 2n.
Effect of Methamphetamine Dependence on Heart Rate Variability
Henry, Brook L.; Minassian, Arpi; Perry, William
2010-01-01
Background Methamphetamine (METH) is an increasingly popular and highly addictive stimulant associated with autonomic nervous system (ANS) dysfunction, cardiovascular pathology, and neurotoxicity. Heart rate variability (HRV) has been used to assess autonomic function and predict mortality in cardiac disorders and drug intoxication, but has not been characterized in METH use. We recorded HRV in a sample of currently abstinent individuals with a history of METH dependence compared to age- and gender-matched drug-free comparison subjects. Method HRV was assessed using time domain, frequency domain, and nonlinear entropic analyses in 17 previously METH-dependent and 21 drug-free comparison individuals during a 5 minute rest period. Results The METH-dependent group demonstrated significant reduction in HRV, reduced parasympathetic activity, and diminished heartbeat complexity relative to comparison participants. More recent METH use was associated with increased sympathetic tone. Conclusion Chronic METH exposure may be associated with decreased HRV, impaired vagal function, and reduction in heart rate complexity as assessed by multiple methods of analysis. We discuss and review evidence that impaired HRV may be related to the cardiotoxic or neurotoxic effects of prolonged METH use. PMID:21182570
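For readers unfamiliar with the indices named in the two HRV abstracts above, here is a small, hedged example computing common time-domain (SDNN, RMSSD) and frequency-domain (LF/HF) measures from a simulated RR-interval series; band limits follow standard HRV conventions rather than either study's exact pipeline.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

rng = np.random.default_rng(4)
rr = 0.9 + 0.05 * np.sin(2 * np.pi * 0.1 * np.arange(300)) \
         + 0.02 * rng.standard_normal(300)            # RR intervals in seconds

sdnn = rr.std(ddof=1) * 1000.0                        # ms
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000.0   # ms

# Frequency domain: resample the irregular series at 4 Hz, then Welch PSD.
t = np.cumsum(rr)
fs = 4.0
t_even = np.arange(t[0], t[-1], 1.0 / fs)
rr_even = interp1d(t, rr, kind="cubic")(t_even)
f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

df = f[1] - f[0]
lf = pxx[(f >= 0.04) & (f < 0.15)].sum() * df         # low-frequency power
hf = pxx[(f >= 0.15) & (f < 0.40)].sum() * df         # high-frequency power
print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, LF/HF = {lf / hf:.2f}")
```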
The use of generalised additive models (GAM) in dentistry.
Helfenstein, U; Steiner, M; Menghini, G
1997-12-01
Ordinary multiple regression and logistic multiple regression are widely applied statistical methods which allow a researcher to 'explain' or 'predict' a response variable from a set of explanatory variables or predictors. In these models it is usually assumed that quantitative predictors such as age enter linearly into the model. During recent years these methods have been further developed to allow more flexibility in the way explanatory variables 'act' on a response variable. The methods are called 'generalised additive models' (GAM). The rigid linear terms characterising the association between response and predictors are replaced in an optimal way by flexible curved functions of the predictors (the 'profiles'). Plotting the 'profiles' allows the researcher to visualise easily the shape by which predictors 'act' over the whole range of values. The method facilitates detection of particular shapes such as 'bumps', 'U-shapes', 'J-shapes', 'threshold values' etc. Information about the shape of the association is not revealed by traditional methods. The shapes of the profiles may be checked by performing a Monte Carlo simulation ('bootstrapping'). After the presentation of the GAM a relevant case study is presented in order to demonstrate application and use of the method. The dependence of caries in primary teeth on a set of explanatory variables is investigated. Since GAMs may not be easily accessible to dentists, this article presents them in an introductory condensed form. It was thought that a nonmathematical summary and a worked example might encourage readers to consider the methods described. GAMs may be of great value to dentists in allowing visualisation of the shape by which predictors 'act' and obtaining a better understanding of the complex relationships between predictors and response.
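To make the idea of flexible per-predictor 'profiles' concrete, the sketch below approximates a GAM-style fit by expanding each predictor in a spline basis before a logistic regression; this is a stand-in for, not a reproduction of, the GAM machinery discussed above, and the caries-like data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(5)
age = rng.uniform(2, 8, 500)        # hypothetical child age (years)
sugar = rng.uniform(0, 5, 500)      # hypothetical sugar exposures per day
logit = -1.5 + 0.25 * (age - 5) ** 2 + 0.6 * sugar   # toy U-shape plus trend
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))    # caries yes/no

X = np.column_stack([age, sugar])
gam_like = make_pipeline(
    SplineTransformer(n_knots=6, degree=3),   # smooth basis for each predictor
    LogisticRegression(max_iter=2000),
)
gam_like.fit(X, y)

# Profile of one predictor: vary it over its range, hold the other at its mean.
grid = np.linspace(2, 8, 13)
probe = np.column_stack([grid, np.full(grid.size, sugar.mean())])
print(np.round(gam_like.predict_proba(probe)[:, 1], 2))   # the U-shape is visible
```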
NASA Astrophysics Data System (ADS)
Charakopoulos, A. K.; Katsouli, G. A.; Karakasidis, T. E.
2018-04-01
Understanding the underlying processes and extracting detailed characteristics of the spatiotemporal dynamics of the ocean and atmosphere, as well as their interaction, is of significant interest and has not been thoroughly established. The purpose of this study was to examine the performance of two main methodologies for the identification of spatiotemporal underlying dynamic characteristics and patterns among atmospheric and oceanic variables from Seawatch buoys from the Aegean and Ionian Sea, provided by the Hellenic Center for Marine Research (HCMR). The first approach involves the estimation of cross correlation analysis in an attempt to investigate time-lagged relationships; further, in order to identify the direction of interactions between the variables, we performed the Granger causality method. According to the second approach, the time series are converted into complex networks and then the main topological network properties, such as degree distribution, average path length, diameter, modularity and clustering coefficient, are evaluated. Our results show that the proposed complex network analysis of time series can lead to the extraction of hidden spatiotemporal characteristics. Also, our findings indicate a high level of positive and negative correlations and causalities among variables, both from the same buoy and also between buoys from different stations, which cannot be determined from the use of simple statistical measures.
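A compact, hedged sketch of both ingredients of the first approach mentioned above (lagged cross-correlation and a Granger causality test) on two synthetic series standing in for buoy variables; variable names and lag values are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
n = 500
wind = rng.standard_normal(n)
wave = 0.8 * np.roll(wind, 3) + 0.3 * rng.standard_normal(n)  # wave lags wind by 3

# Lagged cross-correlation: the peak location suggests the lead/lag relationship.
lags = list(range(-10, 11))
xcorr = [np.corrcoef(wind[max(0, -k):n - max(0, k)],
                     wave[max(0, k):n - max(0, -k)])[0, 1] for k in lags]
print("lag with largest |correlation|:", lags[int(np.argmax(np.abs(xcorr)))])

# Granger causality: does wind's past help predict wave beyond wave's own past?
data = np.column_stack([wave, wind])       # statsmodels tests column 2 -> column 1
res = grangercausalitytests(data, maxlag=5, verbose=False)
print("p-value of the F-test at lag 3:", res[3][0]["ssr_ftest"][1])
```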
Kim, Sean H. J.; Jackson, Andre J.; Hunt, C. Anthony
2014-01-01
The objective of this study was to develop and explore new, in silico experimental methods for deciphering complex, highly variable absorption and food interaction pharmacokinetics observed for a modified-release drug product. Toward that aim, we constructed an executable software analog of study participants to whom product was administered orally. The analog is an object- and agent-oriented, discrete event system, which consists of grid spaces and event mechanisms that map abstractly to different physiological features and processes. Analog mechanisms were made sufficiently complicated to achieve prespecified similarity criteria. An equation-based gastrointestinal transit model with nonlinear mixed effects analysis provided a standard for comparison. Subject-specific parameterizations enabled each executed analog’s plasma profile to mimic features of the corresponding six individual pairs of subject plasma profiles. All achieved prespecified, quantitative similarity criteria, and outperformed the gastrointestinal transit model estimations. We observed important subject-specific interactions within the simulation and mechanistic differences between the two models. We hypothesize that mechanisms, events, and their causes occurring during simulations had counterparts within the food interaction study: they are working, evolvable, concrete theories of dynamic interactions occurring within individual subjects. The approach presented provides new, experimental strategies for unraveling the mechanistic basis of complex pharmacological interactions and observed variability. PMID:25268237
Ward, Ashleigh L; Lukens, Wayne W; Lu, Connie C; Arnold, John
2014-03-05
A series of actinide-transition metal heterobimetallics has been prepared, featuring thorium, uranium, and cobalt. Complexes incorporating the binucleating ligand N[ο-(NHCH2P(i)Pr2)C6H4]3 with either Th(IV) (4) or U(IV) (5) and a carbonyl bridged [Co(CO)4](-) unit were synthesized from the corresponding actinide chlorides (Th: 2; U: 3) and Na[Co(CO)4]. Irradiation of the resulting isocarbonyls with ultraviolet light resulted in the formation of new species containing actinide-metal bonds in good yields (Th: 6; U: 7); this photolysis method provides a new approach to a relatively unusual class of complexes. Characterization by single-crystal X-ray diffraction revealed that elimination of the bridging carbonyl and formation of the metal-metal bond is accompanied by coordination of a phosphine arm from the N4P3 ligand to the cobalt center. Additionally, actinide-cobalt bonds of 3.0771(5) Å and 3.0319(7) Å for the thorium and uranium complexes, respectively, were observed. The solution-state behavior of the thorium complexes was evaluated using (1)H, (1)H-(1)H COSY, (31)P, and variable-temperature NMR spectroscopy. IR, UV-vis/NIR, and variable-temperature magnetic susceptibility measurements are also reported.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Ashleigh; Lukens, Wayne; Lu, Connie
2014-04-01
A series of actinide-transition metal heterobimetallics has been prepared, featuring thorium, uranium and cobalt. Complexes incorporating the binucleating ligand N[o-(NHCH2PiPr2)C6H4]3 and Th(IV) (4) or U(IV) (5) with a carbonyl bridged [Co(CO)4]- unit were synthesized from the corresponding actinide chlorides (Th: 2; U: 3) and Na[Co(CO)4]. Irradiation of the isocarbonyls with ultraviolet light resulted in the formation of new species containing actinide-metal bonds in good yields (Th: 6; U: 7); this photolysis method provides a new approach to a relatively rare class of complexes. Characterization by single-crystal X-ray diffraction revealed that elimination of the bridging carbonyl is accompanied by coordination of a phosphine arm from the N4P3 ligand to the cobalt center. Additionally, actinide-cobalt bonds of 3.0771(5) Å and 3.0319(7) Å for the thorium and uranium complexes, respectively, were observed. The solution-state behavior of the thorium complexes was evaluated using 1H, 1H-1H COSY, 31P and variable-temperature NMR spectroscopy. IR, UV-Vis/NIR, and variable-temperature magnetic susceptibility measurements are also reported.
Toomey, Elaine; Matthews, James; Hurley, Deirdre A
2017-08-04
Despite an increasing awareness of the importance of fidelity of delivery within complex behaviour change interventions, it is often poorly assessed. This mixed methods study aimed to establish the fidelity of delivery of a complex self-management intervention and explore the reasons for these findings using a convergent/triangulation design. Feasibility trial of the Self-management of Osteoarthritis and Low back pain through Activity and Skills (SOLAS) intervention (ISRCTN49875385), delivered in primary care physiotherapy. 60 SOLAS sessions were delivered across seven sites by nine physiotherapists. Fidelity of delivery of prespecified intervention components was evaluated using (1) audio-recordings (n=60), direct observations (n=24) and self-report checklists (n=60) and (2) individual interviews with physiotherapists (n=9). Quantitatively, fidelity scores were calculated using percentage means and SD of components delivered. Associations between fidelity scores and physiotherapist variables were analysed using Spearman's correlations. Interviews were analysed using thematic analysis to explore potential reasons for fidelity scores. Integration of quantitative and qualitative data occurred at an interpretation level using triangulation. Quantitatively, fidelity scores were high for all assessment methods, with self-report (92.7%) consistently higher than direct observations (82.7%) or audio-recordings (81.7%). There was significant variation between physiotherapists' individual scores (69.8%-100%). Both qualitative and quantitative data (from physiotherapist variables) found that physiotherapists' knowledge (Spearman's association at p=0.003) and previous experience (p=0.008) were factors that influenced their fidelity. The qualitative data also postulated participant-level (eg, individual needs) and programme-level factors (eg, resources) as additional elements that influenced fidelity. The intervention was delivered with high fidelity. This study contributes to the limited evidence regarding fidelity assessment methods within complex behaviour change interventions. The findings suggest a combination of quantitative methods is suitable for the assessment of fidelity of delivery. A mixed methods approach provided a more insightful understanding of fidelity and its influencing factors. ISRCTN49875385; Pre-results. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the widely used chemometric methods in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, a L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by the interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles and is as efficient as the classical global model using chi2 variable selection, with approximately 70% correct classification. The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. In addition, the simple knowledge of the concentration of urinary metabolites classifies the CKD stage of the patients correctly. PMID:27861591
Silva, Luiz Eduardo Virgilio; Lataro, Renata Maria; Castania, Jaci Airton; da Silva, Carlos Alberto Aguiar; Valencia, Jose Fernando; Murta, Luiz Otavio; Salgado, Helio Cesar; Fazan, Rubens; Porta, Alberto
2016-07-01
The analysis of heart rate variability (HRV) by nonlinear methods has been gaining increasing interest due to their ability to quantify the complexity of cardiovascular regulation. In this study, multiscale entropy (MSE) and refined MSE (RMSE) were applied to track the complexity of HRV as a function of time scale in three pathological conscious animal models: rats with heart failure (HF), spontaneously hypertensive rats (SHR), and rats with sinoaortic denervation (SAD). Results showed that HF did not change HRV complexity, although there was a tendency to decrease the entropy in HF animals. On the other hand, SHR group was characterized by reduced complexity at long time scales, whereas SAD animals exhibited a smaller short- and long-term irregularity. We propose that short time scales (1 to 4), accounting for fast oscillations, are more related to vagal and respiratory control, whereas long time scales (5 to 20), accounting for slow oscillations, are more related to sympathetic control. The increased sympathetic modulation is probably the main reason for the lower entropy observed at high scales for both SHR and SAD groups, acting as a negative factor for the cardiovascular complexity. This study highlights the contribution of the multiscale complexity analysis of HRV for understanding the physiological mechanisms involved in cardiovascular regulation. Copyright © 2016 the American Physiological Society.
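A minimal multiscale entropy sketch in the spirit of the analysis above: coarse-grain the series at each scale, then compute sample entropy. Parameters (m = 2, r = 0.15 SD) follow common practice and are assumptions, not the study's exact settings; the input series is synthetic.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.15 * x.std()
    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)              # exclude self-matches
        return np.sum(d <= r)
    b, a = match_count(m), match_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_entropy(x, max_scale=5):
    values = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
        values.append(sample_entropy(coarse))
    return np.array(values)

rr = np.random.default_rng(7).standard_normal(1200)   # stand-in for an RR series
print(np.round(multiscale_entropy(rr), 2))
```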
The macroevolution of size and complexity in insect male genitalia
Rudoy, Andrey
2016-01-01
The evolution of insect male genitalia has received much attention, but there is still a lack of data on the macroevolutionary origin of its extraordinary variation. We used a calibrated molecular phylogeny of 71 of the 150 known species of the beetle genus Limnebius to study the evolution of the size and complexity of the male genitalia in its two subgenera, Bilimneus, with small species with simple genitalia, and Limnebius s.str., with a much larger variation in size and complexity. We reconstructed ancestral values of complexity (perimeter and fractal dimension of the aedeagus) and genital and body size with Bayesian methods. Complexity evolved more in agreement with a Brownian model, although with evidence of weak directional selection to a decrease or increase in complexity in the two subgenera respectively, as measured with an excess of branches with negative or positive change. On the contrary, aedeagus size, the variable with the highest rates of evolution, had a lower phylogenetic signal, without significant differences between the two subgenera in the average change of the individual branches of the tree. Aedeagus size also had a lower correlation with time and no evidence of directional selection. Rather than to directional selection, it thus seems that the higher diversity of the male genitalia in Limnebius s.str. is mostly due to the larger variance of the phenotypic change in the individual branches of the tree for all measured variables. PMID:27114865
Spring, Michael R; Hanusa, Barbara H; Eack, Shaun M; Haas, Gretchen L
2017-01-01
Background eHealth technologies offer great potential for improving the use and effectiveness of treatments for those with severe mental illness (SMI), including schizophrenia and schizoaffective disorder. This potential can be muted by poor design. There is limited research on designing eHealth technologies for those with SMI, others with cognitive impairments, and those who are not technology savvy. We previously tested a design model, the Flat Explicit Design Model (FEDM), to create eHealth interventions for individuals with SMI. Subsequently, we developed the design concept page complexity, defined via the design variables we created of distinct topic areas, distinct navigation areas, and number of columns used to organize contents and the variables of text reading level, text reading ease (a newly added variable to the FEDM), and the number of hyperlinks and number of words on a page. Objective The objective of our study was to report the influence that the 19 variables of the FEDM have on the ability of individuals with SMI to use a website, ratings of a website’s ease of use, and performance on a novel usability task we created termed as content disclosure (a measure of the influence of a homepage’s design on the understanding user’s gain of a website). Finally, we assessed the performance of 3 groups or dimensions we developed that organize the 19 variables of the FEDM, termed as page complexity, navigational simplicity, and comprehensibility. Methods We measured 4 website usability outcomes: ability to find information, time to find information, ease of use, and a user’s ability to accurately judge a website’s contents. A total of 38 persons with SMI (chart diagnosis of schizophrenia or schizoaffective disorder) and 5 mental health websites were used to evaluate the importance of the new design concepts, as well as the other variables in the FEDM. Results We found that 11 of the FEDM’s 19 variables were significantly associated with all 4 usability outcomes. Most other variables were significantly related to 2 or 3 of these usability outcomes. With the 5 tested websites, 7 of the 19 variables of the FEDM overlapped with other variables, resulting in 12 distinct variable groups. The 3 design dimensions had acceptable coefficient alphas. Both navigational simplicity and comprehensibility were significantly related to correctly identifying whether information was available on a website. Page complexity and navigational simplicity were significantly associated with the ability and time to find information and ease-of-use ratings. Conclusions The 19 variables and 3 dimensions (page complexity, navigational simplicity, and comprehensibility) of the FEDM offer evidence-based design guidance intended to reduce the cognitive effort required to effectively use eHealth applications, particularly for persons with SMI, and potentially others, including those with cognitive impairments and limited skills or experience with technology. The new variables we examined (topic areas, navigational areas, columns) offer additional and very simple ways to improve simplicity. PMID:28057610
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
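The snippet below is only meant to illustrate the computational idea of replacing a full n-by-n kernel matrix with a low-rank approximation so that kernel-machine-style fits scale to larger samples; it uses a Nystroem map and ridge regression as stand-ins and is not the fastKM algorithm or its trait models. All data and parameters are invented.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
n, p_snp, p_env = 2000, 50, 3
G = rng.binomial(2, 0.3, size=(n, p_snp)).astype(float)   # genotype block
E = rng.standard_normal((n, p_env))                       # environment block
y = G[:, :5].sum(axis=1) * (1 + 0.5 * E[:, 0]) + rng.standard_normal(n)

# Low-rank feature map for the genetic kernel; a crude G x E term is added by
# multiplying the mapped genetic features with the first environment column.
phi = Nystroem(kernel="rbf", gamma=0.01, n_components=100, random_state=0)
Zg = phi.fit_transform(G)
Z = np.hstack([Zg, E, Zg * E[:, [0]]])

model = Ridge(alpha=1.0).fit(Z, y)
print("in-sample R^2:", round(model.score(Z, y), 3))
```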
NASA Technical Reports Server (NTRS)
Martin, E. Dale
1989-01-01
The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.
Variable Complexity Optimization of Composite Structures
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
2002-01-01
The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.
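As a small illustration of the response-surface idea mentioned above, the sketch fits a quadratic surrogate to a toy 'optimized weight' as a function of two design variables and then queries it cheaply; the weight function and sample sizes are invented, and this is not the grant's actual workflow.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(9)
X = rng.uniform(-1, 1, size=(40, 2))                  # sampled design variables
weight = 5 + 2 * X[:, 0] - X[:, 1] + 1.5 * X[:, 0] * X[:, 1] \
           + 0.8 * X[:, 0] ** 2 + 0.05 * rng.standard_normal(40)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X, weight)                              # quadratic response surface
print("predicted weight at (0.3, -0.2):",
      float(surrogate.predict([[0.3, -0.2]])[0]))
```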
A model for correlating flat plate film cooling effectiveness for rows of round holes
NASA Astrophysics Data System (ADS)
Lecuyer, M. R.; Soechting, F. O.
1985-09-01
An effective method of cooling that has found widespread application in aircraft gas turbines is the injection of a film of cooling air through holes into the hot mainstream gas to provide a buffer layer between the hot gas and the airfoil surface. Film cooling has been extensively investigated and the results have been reported in the literature. However, there is no generalized method reported in the literature to predict the film cooling performance as influenced by the major variables. A generalized film cooling correlation has been developed, utilizing data reported in the literature, for constant velocity and flat plate boundary layer development. This work provides a basic understanding of the complex interaction of the major variables affecting film cooling performance.
Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil
2015-01-01
The medical curriculum is the main tool representing the entire undergraduate medical education. Due to its complexity and multilayered structure it is of limited use to teachers in medical education for quality improvement purposes. In this study we evaluated three visualizations of curriculum data from a pilot course, using teachers from an undergraduate medical program and applying visual analytics methods. We found that visual analytics can be used to positively impact analytical reasoning and decision making in medical education through the realization of variables capable of enhancing human perception and cognition of complex curriculum data. The positive results derived from our small-scale evaluation of a medical curriculum signify the need to expand this method to an entire medical curriculum. As our approach sustains low levels of complexity, it opens a promising new direction in medical education informatics research.
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolizations, the base of symbolic dynamic analysis, are classified as global static and local dynamic approaches, which are combined by joint entropy in our work for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N. symbolic entropy and base-scale entropy, and two local ones, namely the symbolizations of permutation and differential entropy, constitute four double symbolic joint entropies that achieve accurate complexity detection in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of the healthy young have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results prove that double symbolic joint entropy is feasible in nonlinear dynamic complexity analysis.
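A simplified stand-in for the 'double symbolic joint entropy' idea above: combine one global static symbolization (quantile amplitude bins) with one local dynamic symbolization (sign of successive differences) and take the Shannon entropy of the joint symbols. The specific symbolizations and bin counts here are assumptions, not the paper's exact choices.

```python
import numpy as np

def joint_symbolic_entropy(x, n_bins=4):
    x = np.asarray(x, dtype=float)
    # Global static symbols: quantile-based amplitude bins.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    s_static = np.digitize(x[:-1], edges)
    # Local dynamic symbols: sign of the first difference (down / flat / up).
    s_dynamic = np.sign(np.diff(x)).astype(int) + 1
    joint = s_static * 3 + s_dynamic                 # encode the symbol pair
    _, counts = np.unique(joint, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(10)
irregular_series = rng.standard_normal(3000)          # healthy-like irregularity
regular_series = np.sin(0.05 * np.arange(3000))       # CHF-like over-regularity
print(joint_symbolic_entropy(irregular_series), joint_symbolic_entropy(regular_series))
```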
On the genre-fication of music: a percolation approach
NASA Astrophysics Data System (ADS)
Lambiotte, R.; Ausloos, M.
2006-03-01
We analyze web-downloaded data on people sharing their music library. By attributing to each music group usual music genres (Rock, Pop ...), and analysing correlations between music groups of different genres with percolation-idea based methods, we probe the reality of these subdivisions and construct a music genre cartography, with a tree representation. We also discuss an alternative objective way to classify music, that is based on the complex structure of the groups audience. Finally, a link is drawn with the theory of hidden variables in complex networks.
NASA Astrophysics Data System (ADS)
Niroomand, Sona; Khorasani-Motlagh, Mozhgan; Noroozifar, Meissam; Jahani, Shohreh; Moodi, Asieh
2017-02-01
The binding of the lanthanum(III) complex containing 1,10-phenanthroline (phen), [La(phen)3Cl3·OH2], to DNA is investigated by absorption and emission methods. This complex shows a decrease in absorption of the charge transfer band and a decrease in fluorescence when it binds to DNA. Electronic absorption spectroscopy (UV-Vis), fluorescence spectra, iodide quenching experiments, salt effect and viscosity measurements, the ethidium bromide (EB) competition test, circular dichroism (CD) spectra as well as variable temperature experiments indicate that the La(III) complex binds to fish salmon (FS) DNA, presumably via a groove binding mode. The binding constant (Kb) of the La(III) complex with DNA is (2.55 ± 0.02) × 10^6 M^-1. Furthermore, the binding site size, n, the Stern-Volmer constant KSV and the thermodynamic parameters, enthalpy change (ΔH0), entropy change (ΔS0) and Gibbs free energy (ΔG0), are calculated from the relevant fluorescence data and the Van't Hoff equation. The La(III) complex has been screened for its antibacterial activities by the disc diffusion method. Also, in order to supplement the experimental findings, DFT computation and NBO analysis are carried out.
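A worked numerical example of the two fits named above, using invented intensities and equilibrium constants rather than the paper's data: a Stern-Volmer-type linear fit for the quenching constant, and a Van't Hoff fit for ΔH0, ΔS0 and ΔG0.

```python
import numpy as np

# Stern-Volmer: F0/F = 1 + Ksv*[Q], so the slope of a linear fit gives Ksv.
Q = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) * 1e-6        # quencher concentration, mol/L
F = 100.0 / (1 + 2.5e6 * Q) + np.random.default_rng(11).normal(0, 0.3, 5)
Ksv, _ = np.polyfit(Q, F[0] / F, 1)
print(f"Ksv ~ {Ksv:.2e} L/mol")

# Van't Hoff: ln K = -dH/(R*T) + dS/R, linear in 1/T.
R = 8.314                                             # J/(mol K)
T = np.array([288.0, 298.0, 308.0])                   # K
K = np.array([3.1e6, 2.5e6, 2.0e6])                   # assumed binding constants
slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
dH = -slope * R                                       # J/mol
dS = intercept * R                                    # J/(mol K)
dG298 = dH - 298.0 * dS
print(f"dH ~ {dH / 1000:.1f} kJ/mol, dS ~ {dS:.1f} J/(mol K), dG(298 K) ~ {dG298 / 1000:.1f} kJ/mol")
```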
A new spectrophotometric method for determination of EDTA in water using its complex with Mn(III)
NASA Astrophysics Data System (ADS)
Andrade, Carlos Eduardo O.; Oliveira, André F.; Neves, Antônio A.; Queiroz, Maria Eliana L. R.
2016-11-01
EDTA is an important ligand used in many industrial products as well as in agriculture, where it is employed to assist in phytoextraction procedures and the absorption of nutrients by plants. Due to its intensive use and recalcitrance, it is now considered an emerging pollutant in water, so there is great interest in techniques suitable for its monitoring. This work proposes a method based on formation of the Mn(III)-EDTA complex after oxidation of the Mn(II)-EDTA complex by PbO2 immobilized on cyanoacrylate spheres. A design of experiments (DOE) based on the Doehlert matrix was used to determine the optimum conditions of the method, and the influence of the variables was evaluated using a multiple linear regression (MLR) model. The optimized method presented a linear response in the range from 0.77 to 100.0 μmol L^-1, with analytical sensitivity of 7.7 × 10^3 L mol^-1, a coefficient of determination of 0.999, and a limit of detection of 0.23 μmol L^-1. The method was applied using samples fortified at different concentration levels, and the recoveries achieved were between 97.0 and 104.9%.
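The calibration arithmetic reported above can be illustrated as follows, with invented absorbance readings: fit a linear calibration, compute the coefficient of determination, and estimate the limit of detection as 3.3 s_blank/slope (the blank standard deviation is an assumed value).

```python
import numpy as np

conc = np.array([1, 5, 10, 25, 50, 100], dtype=float)        # micromol/L EDTA
absorbance = 7.7e-3 * conc + 0.004 \
             + np.random.default_rng(12).normal(0, 0.002, conc.size)

slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
r2 = 1 - np.sum((absorbance - pred) ** 2) / np.sum((absorbance - absorbance.mean()) ** 2)

s_blank = 0.0005                     # assumed standard deviation of the blank signal
lod = 3.3 * s_blank / slope
print(f"slope = {slope:.2e} AU per umol/L, R^2 = {r2:.4f}, LOD ~ {lod:.2f} umol/L")
```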
Improved Sparse Multi-Class SVM and Its Application for Gene Selection in Cancer Classification
Huang, Lingkang; Zhang, Hao Helen; Zeng, Zhao-Bang; Bushel, Pierre R.
2013-01-01
Background Microarray techniques provide promising tools for cancer diagnosis using gene expression profiles. However, molecular diagnosis based on high-throughput platforms presents great challenges due to the overwhelming number of variables versus the small sample size and the complex nature of multi-type tumors. Support vector machines (SVMs) have shown superior performance in cancer classification due to their ability to handle high dimensional low sample size data. The multi-class SVM algorithm of Crammer and Singer provides a natural framework for multi-class learning. Despite its effective performance, the procedure utilizes all variables without selection. In this paper, we propose to improve the procedure by imposing shrinkage penalties in learning to enforce solution sparsity. Results The original multi-class SVM of Crammer and Singer is effective for multi-class classification but does not conduct variable selection. We improved the method by introducing soft-thresholding type penalties to incorporate variable selection into multi-class classification for high dimensional data. The new methods were applied to simulated data and two cancer gene expression data sets. The results demonstrate that the new methods can select a small number of genes for building accurate multi-class classification rules. Furthermore, the important genes selected by the methods overlap significantly, suggesting general agreement among different variable selection schemes. Conclusions High accuracy and sparsity make the new methods attractive for cancer diagnostics with gene expression data and defining targets of therapeutic intervention. Availability: The source MATLAB code is available from http://math.arizona.edu/~hzhang/software.html. PMID:23966761
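As a hedged illustration of the contrast drawn above, the snippet compares a Crammer-Singer multi-class linear SVM (no variable selection) with an L1-penalised linear SVM that zeroes out most gene coefficients; it uses scikit-learn on synthetic expression data, not the authors' MATLAB implementation or penalty.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(13)
n, p, k = 120, 500, 3                      # samples, genes, tumour classes
X = rng.standard_normal((n, p))
y = rng.integers(0, k, n)
X[:, :10] += y[:, None] * 1.5              # only 10 genes carry class signal

cs = LinearSVC(multi_class="crammer_singer", C=0.1, max_iter=10000).fit(X, y)
l1 = LinearSVC(penalty="l1", dual=False, C=0.05, max_iter=10000).fit(X, y)

print("Crammer-Singer nonzero coefficients:", int((cs.coef_ != 0).sum()))
print("L1-penalised nonzero coefficients:  ", int((l1.coef_ != 0).sum()))
```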
Approximate median regression for complex survey data with skewed response.
Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi
2016-12-01
The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
Approximate Median Regression for Complex Survey Data with Skewed Response
Fraser, Raphael André; Lipsitz, Stuart R.; Sinha, Debajyoti; Fitzmaurice, Garrett M.; Pan, Yi
2016-01-01
Summary The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling and weighting. In this paper, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS) based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. PMID:27062562
Optimal Output of Distributed Generation Based On Complex Power Increment
NASA Astrophysics Data System (ADS)
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind and photovoltaic power, has been widely adopted. These new energy sources are connected to the distribution network in the form of distributed generation and are consumed by local loads. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different power generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment, the essence of which is the analysis of the power grid under steady power flow. After analyzing the results, we obtain the complex scaling function equation between the power supplies; because the coefficients of the equation are based on the impedance parameters of the network, the description of the relation of the variables to the coefficients is more precise. Thus, the method can accurately describe the power increment relationship and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.
A projection gradient method for computing ground state of spin-2 Bose–Einstein condensates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Hanquan, E-mail: hanquan.wang@gmail.com; Yunnan Tongchang Scientific Computing and Data Mining Research Center, Kunming, Yunnan Province, 650221
In this paper, a projection gradient method is presented for computing the ground state of spin-2 Bose–Einstein condensates (BEC). We first propose the general projection gradient method for solving the energy functional minimization problem under multiple constraints, in which the energy functional takes real functions as independent variables. We next extend the method to solve a similar problem, where the energy functional now takes complex functions as independent variables. We finally employ the method in finding the ground state of spin-2 BEC. The key of our method is: by constructing continuous gradient flows (CGFs), the ground state of spin-2 BEC can be computed as the steady state solution of such CGFs. We discretized the CGFs by a conservative finite difference method along with a proper way to deal with the nonlinear terms. We show that the numerical discretization is normalization and magnetization conservative and energy diminishing. Numerical results of the ground state and its energy of spin-2 BEC are reported to demonstrate the effectiveness of the numerical method.
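A greatly simplified sketch of the normalized gradient-flow (projection) idea for a scalar, one-dimensional Gross-Pitaevskii ground state; the spin-2 problem adds coupled components and a magnetization constraint that are not shown, and the grid, trap and interaction strength below are arbitrary.

```python
import numpy as np

nx, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, nx, endpoint=False)
dx = x[1] - x[0]
V = 0.5 * x**2                       # harmonic trap
g = 100.0                            # interaction strength (assumed)
dt = 1e-3

psi = np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize

def laplacian(f):
    return (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2

for _ in range(5000):
    # Gradient (imaginary-time) step on the energy functional ...
    psi = psi - dt * (-0.5 * laplacian(psi) + (V + g * psi**2) * psi)
    # ... then project back onto the normalization constraint.
    psi /= np.sqrt(np.sum(psi**2) * dx)

mu = np.sum(psi * (-0.5 * laplacian(psi) + (V + g * psi**2) * psi)) * dx
print("chemical potential of the computed ground state:", round(float(mu), 3))
```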
Naeemullah; Kazi, Tasneem Gul; Afridi, Hassan Imran; Shah, Faheem; Arain, Sadaf Sadia; Arain, Salma Aslam; Panhwar, Abdul Haleem; Arain, Mariam Shahzadi; Samoon, Muhammad Kashif
2016-02-05
An innovative and simple miniaturized solid phase microextraction (M-SPME) method was developed for preconcentration and determination of silver(I) in fresh and waste water samples. For M-SPME, a micropipette tip packed with activated carbon cloth (ACC) as the sorbent was fitted to a syringe system. The size, morphology and elemental composition of ACC before and after adsorption of the analyte were characterized by scanning electron microscopy and energy dispersive spectroscopy. The sample solution, treated with a complexing reagent, ammonium pyrrolidine dithiocarbamate (APDC), was drawn into the syringe filled with ACC and dispensed manually for 2 to 10 aspirating/dispensing cycles. The Ag complex sorbed on the ACC in the micropipette tip was then quantitatively eluted by drawing and dispensing different concentrations of acids for 2 to 5 aspirating/dispensing cycles. The extracted Ag ions, together with the modifier, were injected directly into an electrothermal atomic absorption spectrometer for analysis. The influence of different variables on the extraction efficiency, including the concentration of ligand, pH, sample volume, eluent type, concentration, and volume, was investigated. The validity and accuracy of the developed method were checked by the standard addition method. The reliability of the proposed methodology was checked by the relative standard deviation (%RSD), which was found to be <5%. Under the optimized experimental variables, the limit of detection (LOD) and enhancement factor (EF) were found to be 0.86 ng L(-1) and 120, respectively. The proposed method was successfully applied for the determination of trace levels of silver ions in fresh and waste water samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Torija, Antonio J; Ruiz, Diego P
2015-02-01
The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
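A compressed illustration of the wrapper-plus-regressor pipeline is sketched below with scikit-learn on invented data: a sequential (wrapper-style) selector wraps an SVR, the SMO-trained SVM regressor analogue in scikit-learn. The feature count, kernel settings, and simulated predictors are assumptions, not the study's variables.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))                 # 25 made-up urban descriptors
y = 60 + 3 * X[:, 0] - 2 * X[:, 3] + X[:, 7] + rng.normal(scale=1.0, size=300)  # LAeq-like target

svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
# wrapper-style feature selection: greedily add features that improve CV performance
selector = SequentialFeatureSelector(svr, n_features_to_select=5, direction="forward", cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)

scores = cross_val_score(svr, X_sel, y, cv=5, scoring="neg_mean_absolute_error")
print("selected features:", np.flatnonzero(selector.get_support()))
print("CV MAE:", -scores.mean())
```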
A variable circular-plot method for estimating bird numbers
Reynolds, R.T.; Scott, J.M.; Nussbaum, R.A.
1980-01-01
A bird census method is presented that is designed for tall, structurally complex vegetation types, and rugged terrain. With this method the observer counts all birds seen or heard around a station, and estimates the horizontal distance from the station to each bird. Count periods at stations vary according to the avian community and structural complexity of the vegetation. The density of each species is determined by inspecting a histogram of the number of individuals per unit area in concentric bands of predetermined widths about the stations, choosing the band (with outside radius x) where the density begins to decline, and summing the number of individuals counted within the circle of radius x and dividing by the area (πx²). Although all observations beyond radius x are rejected with this procedure, coefficients of maximum distance.
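The band-and-radius arithmetic is simple enough to show numerically; in the sketch below the distances and band widths are invented, and the rule used to pick the "decline" radius is only a crude proxy for the histogram inspection described above.

```python
import numpy as np

# Toy illustration of the density rule: bin counts into concentric bands, take the
# radius x where density per unit area begins to decline, and compute
# density = (count within x) / (pi * x**2).
distances = np.array([8, 15, 21, 27, 35, 42, 55, 61, 70, 88, 105, 140.0])  # metres to each bird (made up)
band_edges = np.arange(0, 151, 30)                                          # 30 m concentric bands

counts, _ = np.histogram(distances, bins=band_edges)
band_areas = np.pi * (band_edges[1:]**2 - band_edges[:-1]**2)
band_density = counts / band_areas                  # individuals per m^2 in each band

x = band_edges[np.argmax(band_density) + 1]         # crude proxy for the "decline" radius
density = np.sum(distances <= x) / (np.pi * x**2)
print("chosen radius x:", x, " density (birds per m^2):", density)
```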
Yang, Yi Isaac; Parrinello, Michele
2018-06-12
Collective variables are used often in many enhanced sampling methods, and their choice is a crucial factor in determining sampling efficiency. However, at times, searching for good collective variables can be challenging. In a recent paper, we combined time-lagged independent component analysis with well-tempered metadynamics in order to obtain improved collective variables from metadynamics runs that use lower quality collective variables [ McCarty, J.; Parrinello, M. J. Chem. Phys. 2017 , 147 , 204109 ]. In this work, we extend these ideas to variationally enhanced sampling. This leads to an efficient scheme that is able to make use of the many advantages of the variational scheme. We apply the method to alanine-3 in water. From an alanine-3 variationally enhanced sampling trajectory in which all the six dihedral angles are biased, we extract much better collective variables able to describe in exquisite detail the protein complex free energy surface in a low dimensional representation. The success of this investigation is helped by a more accurate way of calculating the correlation functions needed in the time-lagged independent component analysis and from the introduction of a new basis set to describe the dihedral angles arrangement.
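A bare-bones version of the time-lagged independent component analysis step may clarify how improved collective variables are extracted from a trajectory; the sketch below uses random data in place of (already featurized) dihedral angles and ignores the reweighting that metadynamics or variationally enhanced sampling data would require.

```python
import numpy as np
from scipy.linalg import eigh

def tica(features, lag=50, n_components=2):
    # time-lagged independent component analysis on mean-free features
    x = features - features.mean(axis=0)
    c0 = x[:-lag].T @ x[:-lag] / (len(x) - lag)          # instantaneous covariance
    ct = x[:-lag].T @ x[lag:] / (len(x) - lag)           # time-lagged covariance
    ct = 0.5 * (ct + ct.T)                               # symmetrize
    vals, vecs = eigh(ct, c0)                            # generalized eigenproblem
    order = np.argsort(vals)[::-1]
    return vals[order][:n_components], vecs[:, order][:, :n_components]

traj = np.random.default_rng(1).normal(size=(10000, 6))  # stand-in for 6 dihedral features
eigvals, modes = tica(traj, lag=100)
slow_cvs = (traj - traj.mean(axis=0)) @ modes            # candidate improved collective variables
print(eigvals)
```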
NASA Technical Reports Server (NTRS)
Dadone, L.; Cowan, J.; Mchugh, F. J.
1982-01-01
Deployment of variable camber concepts on helicopter rotors was analytically assessed. It was determined that variable camber could extend the operating range of helicopters, provided that the correct compromise can be obtained between performance/loads gains and mechanical complexity. A number of variable camber concepts were reviewed on a two dimensional basis to determine the usefulness of leading edge, trailing edge and overall camber variation schemes. The most powerful method to vary camber was through the trailing edge flaps undergoing relatively small motions (-5 deg to +15 deg). The aerodynamic characteristics of the NASA/Ames A-1 airfoil with 35% and 50% plain trailing edge flaps were determined by means of current subcritical and transonic airfoil design methods and used by rotor performance and loads analysis codes. The most promising variable camber schedule reviewed was a configuration with a 35% plain flap deployment in an on/off mode near the tip of a blade. Preliminary results show that approximately 11% reduction in power is possible at 192 knots and a rotor thrust coefficient of 0.09. These results indicate a significant potential for expanding the operating envelope of the helicopter. Further investigation into improving the power saving and defining the improvement in the operational envelope of the helicopter is recommended.
Yang, Guanxue; Wang, Lin; Wang, Xiaofan
2017-06-07
Reconstruction of networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking some polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets based on a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method can exploit different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrate sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from the DREAM3 Challenge 4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.
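To make the grouped-sparsity idea concrete, the sketch below expands each candidate driver's lagged values in radial basis functions and applies a generic proximal-gradient group lasso so that whole groups are kept or discarded; it is a textbook stand-in on invented data, not the authors' estimator or the full conditional (multi-node) formulation.

```python
import numpy as np

def rbf_expand(lagged, centers):
    # radial basis expansion of one driver's lagged values (one group of columns)
    return np.exp(-0.5 * (lagged[:, None] - centers[None, :])**2)

def group_lasso(X_groups, y, lam=0.05, n_iter=2000):
    # generic proximal-gradient group lasso: gradient step + block soft-thresholding
    X = np.hstack(X_groups)
    sizes = [g.shape[1] for g in X_groups]
    idx = np.cumsum([0] + sizes)
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2)**2
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y))
        for g in range(len(sizes)):
            block = z[idx[g]:idx[g + 1]]
            norm = np.linalg.norm(block)
            shrink = max(0.0, 1.0 - step * lam * len(y) / max(norm, 1e-12))
            beta[idx[g]:idx[g + 1]] = shrink * block
    return beta, idx

rng = np.random.default_rng(2)
n, centers = 500, np.linspace(-2, 2, 5)
x1, x2 = rng.normal(size=n), rng.normal(size=n)
target = np.tanh(x1[:-1]) + 0.1 * rng.normal(size=n - 1)   # target at time t driven by x1 at t-1 only
groups = [rbf_expand(x1[:-1], centers), rbf_expand(x2[:-1], centers)]
beta, idx = group_lasso(groups, target - target.mean())
print([np.linalg.norm(beta[idx[g]:idx[g + 1]]) for g in range(2)])  # group 0 (the true driver) should dominate
```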
ERIC Educational Resources Information Center
Marcovitz, Alan B., Ed.
Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…
A Nonlinear Model for Gene-Based Gene-Environment Interaction.
Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua
2016-06-04
A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over single variant based approaches, in this work we proposed a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extracted the sparse principal components for SNPs in a gene, then the effect of each principal component was modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systems approach to evaluate gene-based G×E interaction.
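One way to picture the VC-sPCR pipeline is sketched below on simulated genotypes: sparse principal components summarize the SNPs in a gene, and each component's effect is allowed to vary with the environment through a simple polynomial basis. The LD structure, effect sizes, and the polynomial stand-in for the varying-coefficient smoother are all assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import SparsePCA
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, p = 400, 30
hap = rng.binomial(1, 0.5, size=(n, 1))                      # shared haplotype block
snps = rng.binomial(2, 0.3, size=(n, p)).astype(float)
snps[:, :5] = hap + rng.binomial(1, 0.3, size=(n, 5))        # first 5 SNPs in strong LD
E = rng.uniform(-1, 1, n)                                     # environmental exposure
y = (0.5 + 1.2 * E) * snps[:, :5].mean(axis=1) + rng.normal(scale=0.5, size=n)

# sparse PCs of the gene's SNP matrix; the first PC is expected to load mainly on the LD block
pcs = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit_transform(snps - snps.mean(0))

# varying-coefficient stand-in: each PC effect is allowed to change with E via a quadratic basis
design = np.column_stack([np.ones(n), E] +
                         [pcs[:, j] * basis for j in range(2) for basis in (np.ones(n), E, E**2)])
fit = sm.OLS(y, design).fit()
print(fit.params)   # the PC-by-E terms carry the gene-based G x E signal in this toy setup
```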
NASA Astrophysics Data System (ADS)
Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.
2016-09-01
Particle filter methods have been widely used to solve inverse problems with sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate the sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements, and parameters. In this paper the main focus is the combined parameter and state estimation in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, and a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
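Since the abstract describes joint state and parameter estimation with a particle filter, a generic bootstrap filter on a scalar toy model (not the bioheat/hyperthermia model) may help show the mechanics: the static parameter is augmented into the particle state and kept diverse with a small jitter, an assumption of this sketch rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_part = 100, 2000
a_true = 0.8
x_true, y_obs = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = a_true * x_true[t - 1] + rng.normal(scale=0.3)   # hidden state
    y_obs[t] = x_true[t] + rng.normal(scale=0.5)                 # noisy measurement

x = rng.normal(scale=1.0, size=n_part)            # state particles
a = rng.uniform(0.0, 1.0, size=n_part)            # parameter particles (augmented state)
for t in range(1, T):
    a += rng.normal(scale=0.01, size=n_part)      # small jitter keeps parameter particles diverse
    x = a * x + rng.normal(scale=0.3, size=n_part)            # propagate through the model
    w = np.exp(-0.5 * ((y_obs[t] - x) / 0.5)**2)              # Gaussian likelihood weights
    w /= w.sum()
    idx = rng.choice(n_part, size=n_part, p=w)                # resample
    x, a = x[idx], a[idx]

print("estimated a:", a.mean(), " true a:", a_true)
```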
NASA Astrophysics Data System (ADS)
Forootan, Ehsan; Kusche, Jürgen
2016-04-01
Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. References: Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm; Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86(7), 477-497, doi: 10.1007/s00190-011-0532-5.
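Step (i), building a complex data set whose imaginary part carries the temporal rate of variability, is easy to show with a Hilbert transform. The fourth-order-cumulant complex ICA of step (ii) is not available in common Python libraries, so the sketch below substitutes a real-valued FastICA on the stacked real/imaginary representation purely as a crude stand-in, applied to synthetic series rather than gravimetry data.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
t = np.linspace(0, 20, 2000)
sources = np.vstack([np.sin(2 * np.pi * 0.5 * t * (1 + 0.05 * t)),   # non-stationary chirp
                     np.sign(np.sin(2 * np.pi * 0.2 * t))])
mixing = rng.normal(size=(6, 2))
observed = mixing @ sources + 0.05 * rng.normal(size=(6, t.size))    # six "grid point" series

analytic = hilbert(observed, axis=1)                 # complex series: observation + i * Hilbert transform
stacked = np.vstack([analytic.real, analytic.imag])  # real-valued stand-in representation
modes = FastICA(n_components=2, random_state=0).fit_transform(stacked.T)
print(modes.shape)                                   # two extracted (approximate) independent modes
```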
Complexity in Soil Systems: What Does It Mean and How Should We Proceed?
NASA Astrophysics Data System (ADS)
Faybishenko, B.; Molz, F. J.; Brodie, E.; Hubbard, S. S.
2015-12-01
The complex soil systems approach is needed fundamentally for the development of integrated, interdisciplinary methods to measure and quantify the physical, chemical and biological processes taking place in soil, and to determine the role of fine-scale heterogeneities. This presentation is aimed at a review of the concepts and observations concerning complexity and complex systems theory, including terminology, emergent complexity and simplicity, self-organization and a general approach to the study of complex systems using the Weaver (1948) concept of "organized complexity." These concepts are used to provide understanding of complex soil systems, and to develop experimental and mathematical approaches to soil microbiological processes. The results of numerical simulations, observations and experiments are presented that indicate the presence of deterministic chaotic dynamics in soil microbial systems. So what are the implications for the scientists who wish to develop mathematical models in the area of organized complexity or to perform experiments to help clarify an aspect of an organized complex system? The modelers have to deal with coupled systems having at least three dependent variables, and they have to forgo making linear approximations to nonlinear phenomena. The analogous rule for experimentalists is that they need to perform experiments that involve measurement of at least three interacting entities (variables depending on time, space, and each other). These entities could be microbes in soil penetrated by roots. If a process being studied in a soil affects the soil properties, like biofilm formation, then this effect has to be measured and included. The mathematical implications of this viewpoint are examined, and results of numerical solutions to a system of equations demonstrating deterministic chaotic behavior are also discussed using time series and the 3D strange attractors.
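As a reminder of what "at least three coupled dependent variables" buys, the classical Lorenz system (not a soil-specific model) is a minimal example of deterministic chaos and a 3-D strange attractor:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # three interacting variables with nonlinear coupling terms
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

sol = solve_ivp(lorenz, (0, 50), [1.0, 1.0, 1.0], max_step=0.01)
x, y, z = sol.y
print("trajectory length:", sol.t.size)   # x, y, z trace out the strange attractor
```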
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environmental protection. Previous studies have developed many numeric modeling methods and data driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have always been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and the North Yellow Sea of China, and apply the clustering results to evaluate water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, it is the first attempt to apply Mahalanobis distance for coastal water quality assessment.
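The core computation is compact; a minimal sketch on simulated, correlated water-quality variables (not the Bohai/Yellow Sea data) is to feed Mahalanobis pairwise distances, built from the pooled inverse covariance, into ordinary agglomerative clustering:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
cov = np.array([[1.0, 0.8, 0.3],
                [0.8, 1.0, 0.4],
                [0.3, 0.4, 1.0]])          # correlated variables, e.g. nutrient and chlorophyll proxies
samples = np.vstack([rng.multivariate_normal(m, cov, 40) for m in ([0, 0, 0], [3, 3, 1])])

VI = np.linalg.inv(np.cov(samples, rowvar=False))      # inverse covariance for the Mahalanobis metric
d = pdist(samples, metric="mahalanobis", VI=VI)        # pairwise Mahalanobis distances
labels = fcluster(linkage(d, method="average"), t=2, criterion="maxclust")
print(np.bincount(labels))                              # cluster sizes
```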
Prediction of hourly PM2.5 using a space-time support vector regression model
NASA Astrophysics Data System (ADS)
Yang, Wentao; Deng, Min; Xu, Feng; Wang, Hang
2018-05-01
Real-time air quality prediction has been an active field of research in atmospheric environmental science. The existing methods of machine learning are widely used to predict pollutant concentrations because of their enhanced ability to handle complex non-linear relationships. However, because pollutant concentration data, as typical geospatial data, also exhibit spatial heterogeneity and spatial dependence, they may violate the assumptions of independent and identically distributed random variables in most of the machine learning methods. As a result, a space-time support vector regression model is proposed to predict hourly PM2.5 concentrations. First, to address spatial heterogeneity, spatial clustering is executed to divide the study area into several homogeneous or quasi-homogeneous subareas. To handle spatial dependence, a Gauss vector weight function is then developed to determine spatial autocorrelation variables as part of the input features. Finally, a local support vector regression model with spatial autocorrelation variables is established for each subarea. Experimental data on PM2.5 concentrations in Beijing are used to verify whether the results of the proposed model are superior to those of other methods.
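A rough end-to-end sketch of the three steps on simulated monitoring sites is given below; the cluster count, Gaussian bandwidth and covariates are arbitrary assumptions, and KMeans plus a plain SVR stand in for whatever specific spatial clustering and SVR configuration the authors used.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 400
coords = rng.uniform(0, 100, size=(n, 2))                 # monitoring-site locations (km)
met = rng.normal(size=(n, 3))                             # made-up meteorological covariates
pm25 = 50 + 10 * np.sin(coords[:, 0] / 20) + 5 * met[:, 0] + rng.normal(scale=3, size=n)

# step 2: Gaussian-distance-weighted neighbour average as a spatial-autocorrelation feature
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
w = np.exp(-(dist / 15.0) ** 2)
np.fill_diagonal(w, 0.0)
spatial_lag = (w @ pm25) / w.sum(axis=1)

# step 1: spatial clustering into quasi-homogeneous subareas
subarea = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

# step 3: one local SVR per subarea, using meteorology plus the spatial-lag feature
X = np.column_stack([met, spatial_lag])
models = {}
for c in np.unique(subarea):
    m = subarea == c
    models[c] = SVR(C=10.0).fit(X[m], pm25[m])
print({c: round(models[c].score(X[subarea == c], pm25[subarea == c]), 2) for c in models})
```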
Error Propagation Made Easy--Or at Least Easier
ERIC Educational Resources Information Center
Gardenier, George H.; Gui, Feng; Demas, James N.
2011-01-01
Complex error propagation is reduced to formula and data entry into a Mathcad worksheet or an Excel spreadsheet. The Mathcad routine uses both symbolic calculus analysis and Monte Carlo methods to propagate errors in a formula of up to four variables. Graphical output is used to clarify the contributions to the final error of each of the…
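The Monte Carlo half of the approach translates directly to a few lines of code (Python here rather than Mathcad or Excel); the formula and uncertainties below are invented for illustration, with the first-order analytic result shown for comparison.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
a = rng.normal(10.0, 0.2, n)       # value +/- standard uncertainty (made up)
b = rng.normal(4.0, 0.1, n)
c = rng.normal(2.0, 0.05, n)

f = a * b / c                      # the formula of interest
print("Monte Carlo: f =", f.mean(), "+/-", f.std(ddof=1))

# first-order analytic propagation for comparison: relative variances add for products/quotients
rel = np.sqrt((0.2 / 10) ** 2 + (0.1 / 4) ** 2 + (0.05 / 2) ** 2)
print("analytic   : +/-", (10 * 4 / 2) * rel)
```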
ERIC Educational Resources Information Center
Strobl, Carolin; Malley, James; Tutz, Gerhard
2009-01-01
Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and…
USDA-ARS?s Scientific Manuscript database
Large uncertainties for landfill CH4 emissions due to spatial and temporal variabilities remain unresolved by short-term field campaigns and historic GHG inventory models. Using four field methods (aircraft-based mass balance, tracer correlation, vertical radial plume mapping, and static chambers) ...
Collective feature selection to identify crucial epistatic variants.
Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D
2018-01-01
Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, thus leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features. Thus, it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses, and we propose a collective feature selection approach that selects the features in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection. From the top-performing methods, the union of the selected variables, based on a user-defined percentage of variants from each method, is taken forward to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, a collective approach proves more beneficial for selecting variables with epistatic effects, including in low effect size datasets and across different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Through simulation studies, we showed that collective feature selection identifies true positive epistatic variables more frequently than any single feature selection method. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
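The union-of-top-ranks idea itself is simple to sketch; below, three generic scikit-learn selectors (random forest and gradient boosting importances, mutual information) stand in for the specific methods the authors compared, on simulated genotypes with one built-in interacting pair.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(9)
n, p, top_frac = 500, 200, 0.05
geno = rng.binomial(2, 0.3, size=(n, p)).astype(float)
y = ((geno[:, 3] * geno[:, 17]) + rng.normal(scale=1.0, size=n) > 1).astype(int)  # epistatic pair 3 x 17

rankings = {
    "rf": RandomForestClassifier(n_estimators=300, random_state=0).fit(geno, y).feature_importances_,
    "gbm": GradientBoostingClassifier(random_state=0).fit(geno, y).feature_importances_,
    "mi": mutual_info_classif(geno, y, random_state=0),
}
k = int(top_frac * p)
selected = set()
for scores in rankings.values():
    selected |= set(np.argsort(scores)[::-1][:k])        # union of each method's top k variables
print(sorted(selected))
```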
Evrendilek, Fatih
2007-12-12
This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the Jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular, for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
NASA Technical Reports Server (NTRS)
Cahan, Boris D.
1991-01-01
The Iterative Boundary Integral Equation Method (I-BIEM) has been applied to the problem of frequency dispersion at a disk electrode in a finite geometry. The I-BIEM permits the direct evaluation of the AC potential (a complex variable) using complex boundary conditions. The point spacing was made highly nonuniform, to give extremely high resolution in those regions where the variables change most rapidly, i.e., in the vicinity of the edge of the disk. Results are analyzed with respect to IR correction, equipotential surfaces, and reference electrode placement. The current distribution is also examined for a ring-disk configuration, with the ring and the disk at the same AC potential. It is shown that the apparent impedance of the disk is inductive at higher frequencies. The results are compared to analytic calculations from the literature, and usually agree to better than 0.001 percent.
NASA Astrophysics Data System (ADS)
Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar
2011-12-01
This paper extends the recently introduced variable step-size (VSS) approach to the family of adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics; accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms, the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal
NASA Astrophysics Data System (ADS)
Satheeskumaran, S.; Sabrigiriraj, M.
2016-06-01
Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts in the electrocardiogram (ECG) owing to their low computational cost. However, they exhibit a high mean square error (MSE) in noisy environments. The transform domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove the artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing, but they are not flexible. By using field programmable gate arrays, pipelined architectures can be used to enhance the system performance. The pipelined architecture can enhance the operation efficiency of the adaptive filter and save power consumption. This technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
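For intuition about the adaptive cancellation itself (independent of the delayed-LMS pipelining and the VLSI aspects), a toy normalized, and hence effectively variable, step-size LMS noise canceller on a synthetic ECG-like signal is sketched below; the waveform, interference and filter settings are invented.

```python
import numpy as np

fs = 500
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63                    # crude periodic "QRS-like" waveform
noise = 0.5 * np.sin(2 * np.pi * 50 * t + 0.3)             # powerline interference
primary = ecg + noise                                      # contaminated ECG
reference = np.sin(2 * np.pi * 50 * t)                     # reference correlated with the interference

L, mu = 8, 0.05
w = np.zeros(L)
out = np.zeros_like(primary)
for n in range(L, len(t)):
    x = reference[n - L:n][::-1]
    yhat = w @ x                                           # estimate of the interference
    e = primary[n] - yhat                                  # error = cleaned ECG sample
    w += (mu / (x @ x + 1e-6)) * e * x                     # normalized (variable) step-size update
    out[n] = e

print("residual interference power before/after:",
      np.var(primary - ecg), np.var(out[fs:] - ecg[fs:]))
```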
NASA Technical Reports Server (NTRS)
Rosen, Bruce S.
1991-01-01
An upwind three-dimensional volume Navier-Stokes code is modified to facilitate modeling of complex geometries and flow fields represented by proposed National Aerospace Plane concepts. Code enhancements include an equilibrium air model, a generalized equilibrium gas model and several schemes to simplify treatment of complex geometric configurations. The code is also restructured for inclusion of an arbitrary number of independent and dependent variables. This latter capability is intended for eventual use to incorporate nonequilibrium/chemistry gas models, more sophisticated turbulence and transition models, or other physical phenomena which will require inclusion of additional variables and/or governing equations. Comparisons of computed results with experimental data and results obtained using other methods are presented for code validation purposes. Good correlation is obtained for all of the test cases considered, indicating the success of the current effort.
Discrete-Choice Modeling Of Non-Working Women’s Trip-Chaining Activity Based
NASA Astrophysics Data System (ADS)
Hayati, Amelia; Pradono; Purboyo, Heru; Maryati, Sri
2018-05-01
Urban developments in technology and economics are changing the lifestyles of urban societies, and with them the travel demand generated by people's movement needs. Nowadays, urban women, especially in Bandung, West Java, have a high and growing demand for daily travel. They have easy access to personal modes of transportation and the freedom to go anywhere to meet their personal and family needs. This also applies to non-working women, or housewives, in the city of Bandung. More than 50% of women's mobility takes place outside the home, in the form of trip-chaining from leaving to returning home in one day, driven by their complex activities in meeting the needs of family and home care. By contrast, less than 60% of men's mobility is outdoors, and it consists of simple trip-chaining or a single trip. Trip-chaining therefore differs significantly between non-working women and working men, illustrating the mobility patterns of mothers and fathers in a family under an activity-based approach for the same purpose, i.e. family welfare. This study describes how complex the trip-chaining of non-working urban women (housewives) is, using an activity-based approach applied to activities performed outdoors over one week. Socio-economic and household demographic variables serve as the basis for the independent variables affecting family welfare, together with the type, time and duration of activities performed by unemployed housewives. The study aims to examine the interrelationships between activity variables, especially the time of activity and travel, and socio-economic household variables that generate the complexity of women's daily travel. Discrete choice modeling, as developed by Ben-Akiva, Chandra Bhat and others, is used to illustrate the relationship between activity and socio-economic demographic variables based on primary survey data for 466 unemployed housewives in Bandung, West Java. The regression results, obtained with a Seemingly Unrelated Regression approach, show the interrelationships among all variables, including the complexity of housewives' trip-chaining based on their daily activities. Both mandatory and discretionary activities, and the durations of the activities performed in the observed trip chains, are intended to fulfil the welfare of all family members.
ERIC Educational Resources Information Center
Kuiken, Folkert; Vedder, Ineke
2012-01-01
The research project reported in this chapter consists of three studies in which syntactic complexity, lexical variation and fluency appear as dependent variables. The independent variables are task complexity and proficiency level, as the three studies investigate the effect of task complexity on the written and oral performance of L2 learners of…
NASA Astrophysics Data System (ADS)
Köhler, Reinhard
2014-12-01
We have long been used to the domination of qualitative methods in modern linguistics. Indeed, qualitative methods have advantages such as ease of use and wide applicability to many types of linguistic phenomena. However, this shall not overshadow the fact that a great part of human language is amenable to quantification. Moreover, qualitative methods may lead to over-simplification by employing the rigid yes/no scale. When variability and vagueness of human language must be taken into account, qualitative methods will prove inadequate and give way to quantitative methods [1, p. 11]. In addition to such advantages as exactness and precision, quantitative concepts and methods make it possible to find laws of human language which are just like those in natural sciences. These laws are fundamental elements of linguistic theories in the spirit of the philosophy of science [2,3]. Theorization effort of this type is what quantitative linguistics [1,4,5] is devoted to. The review of Cong and Liu [6] has provided an informative and insightful survey of linguistic complex networks as a young field of quantitative linguistics, including the basic concepts and measures, the major lines of research with linguistic motivation, and suggestions for future research.
NASA Astrophysics Data System (ADS)
Prashanth, K. N.; Swamy, N.; Basavaiah, K.
2014-11-01
Three simple and sensitive extraction-free spectrophotometric methods are described for the determination of trifluoperazine dihydrochloride (TFH). The methods are based on ion pair complex formation between the nitrogenous compound trifluoperazine (TFP), converted from trifluoperazine dihydrochloride, and sulfonphthalein dyes, namely, bromocresol green (BCG), bromothymol blue (BTB), and bromophenol blue (BPB) in dichloromethane medium in which all the above experimental variables were circumvented. The colored products are measured at 425 nm in the BCG method, 415 nm in the BTB method, and 420 nm in the BPB method. The stoichiometry of the ion-pair complexes formed between the drug and dye (1:1) was determined by Job's method of continuous variations, and the stability constants of the complexes were also calculated. These methods quantify TFP over the concentration ranges of 1.25-20.0 μg/ml in the BCG method, 1.5-21.0 μg/ml in the BTB method, and 1.5-18.0 μg/ml in the BPB method. The molar absorptivity (l·mol⁻¹·cm⁻¹) and Sandell sensitivity (ng/cm²) were calculated to be 2.06·10⁴ and 0.0197; 1.82·10⁴ and 0.0224; and 2.22·10⁴ and 0.0183 for the BCG, BTB, and BPB methods, respectively. The methods were successfully applied to the determination of TFP in pure drug, pharmaceuticals, and in spiked human urine with good accuracy and precision.
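Job's method of continuous variations, used above to establish the 1:1 stoichiometry, is easy to illustrate numerically; the sketch below uses an invented formation constant and arbitrary concentration units rather than the actual TFP-dye data, and simply shows that the complex concentration (and hence the absorbance, by Beer-Lambert proportionality) peaks at a drug mole fraction of 0.5 for a 1:1 complex.

```python
import numpy as np

x_drug = np.linspace(0.05, 0.95, 19)        # mole fraction of the drug in each mixture
c_total = 1.0                               # constant total concentration (arbitrary units)
K = 1e4                                     # assumed 1:1 formation constant (same units)

cd, cl = x_drug * c_total, (1.0 - x_drug) * c_total
s = cd + cl + 1.0 / K
# mass-action solution for the 1:1 complex concentration (smaller root of the quadratic)
c_complex = (s - np.sqrt(s**2 - 4.0 * cd * cl)) / 2.0
print("mole fraction at maximum absorbance:", x_drug[np.argmax(c_complex)])   # ~0.5 -> 1:1 stoichiometry
```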
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2002-01-01
A variable order method of integrating initial value ordinary differential equations that is based on the state transition matrix has been developed. The method has been evaluated for linear time variant and nonlinear systems of equations. While it is more complex than most other methods, it produces exact solutions at arbitrary time step size when the time variation of the system can be modeled exactly by a polynomial. Solutions to several nonlinear problems exhibiting chaotic behavior have been computed. Accuracy of the method has been demonstrated by comparison with an exact solution and with solutions obtained by established methods.
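The core property the abstract relies on is that, for a linear time-invariant system, the state transition matrix gives an exact update over an arbitrarily large step; a minimal illustration with a damped oscillator is below (the paper's variable-order handling of time-varying and nonlinear terms is not reproduced).

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-4.0, -0.4]])           # damped oscillator: x'' + 0.4 x' + 4 x = 0 in first-order form
h = 0.5                                 # deliberately large step
phi = expm(A * h)                       # state transition matrix over one step

x = np.array([1.0, 0.0])
for _ in range(20):
    x = phi @ x                         # exact propagation, step after step
print(x)
```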
Minimizing Interrater Variability in Staging Sleep by Use of Computer-Derived Features
Younes, Magdy; Hanly, Patrick J.
2016-01-01
Study Objectives: Inter-scorer variability in sleep staging of polysomnograms (PSGs) results primarily from difficulty in determining whether: (1) an electroencephalogram pattern of wakefulness spans > 15 sec in transitional epochs, (2) spindles or K complexes are present, and (3) duration of delta waves exceeds 6 sec in a 30-sec epoch. We hypothesized that providing digitally derived information about these variables to PSG scorers may reduce inter-scorer variability. Methods: Fifty-six PSGs were scored (five-stage) by two experienced technologists, (first manual, M1). Months later, the technologists edited their own scoring (second manual, M2). PSGs were then scored with an automatic system and the same two technologists and an additional experienced technologist edited them, epoch-by-epoch (Edited-Auto). This resulted in seven manual scores for each PSG. The two M2 scores were then independently modified using digitally obtained values for sleep depth and delta duration and digitally identified spindles and K complexes. Results: Percent agreement between scorers in M2 was 78.9 ± 9.0% before modification and 96.5 ± 2.6% after. Errors of this approach were defined as a change in a manual score to a stage that was not assigned by any scorer during the seven manual scoring sessions. Total errors averaged 7.1 ± 3.7% and 6.9 ± 3.8% of epochs for scorers 1 and 2, respectively, and there was excellent agreement between the modified score and the initial manual score of each technologist. Conclusions: Providing digitally obtained information about sleep depth, delta duration, spindles and K complexes during manual scoring can greatly reduce interrater variability in sleep staging by eliminating the guesswork in scoring epochs with equivocal features. Citation: Younes M, Hanly PJ. Minimizing interrater variability in staging sleep by use of computer-derived features. J Clin Sleep Med 2016;12(10):1347–1356. PMID:27448418
Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds
Wellock, Cameron D.; Reeke, George N.
2012-01-01
The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing. PMID:22701474
NASA Astrophysics Data System (ADS)
Wang, Rongxi; Gao, Xu; Gao, Jianmin; Gao, Zhiyong; Kang, Jiani
2018-02-01
As one of the most important approaches for analyzing the mechanism of fault pervasion, fault root cause tracing is a powerful and useful tool for detecting the fundamental causes of faults so as to prevent any further propagation and amplification. To address the problems arising from the lack of systematic and comprehensive integration, a novel information transfer-based, data-driven framework for fault root cause tracing of complex electromechanical systems in the processing industry was proposed, taking into consideration the experience and qualitative analysis of conventional fault root cause tracing methods. Firstly, an improved symbolic transfer entropy method was presented to construct a directed-weighted information model for a specific complex electromechanical system based on the information flow. Secondly, considering the feedback mechanisms in complex electromechanical systems, a method for determining the threshold values of the weights was developed to explore the patterns of fault propagation. Lastly, an iterative method was introduced to identify the fault development process. The fault root cause was traced by analyzing the changes in information transfer between the nodes along the fault propagation pathway. An actual fault root cause tracing application of a complex electromechanical system is used to verify the effectiveness of the proposed framework. A unique fault root cause is obtained regardless of the choice of the initial variable. Thus, the proposed framework can be flexibly and effectively used in fault root cause tracing for complex electromechanical systems in the processing industry, and forms the foundation of system vulnerability analysis and condition prediction, as well as other engineering applications.
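A compact, generic version of symbolic transfer entropy (ordinal-pattern symbolization and a plug-in estimate, not the authors' improved estimator or the network and thresholding steps) is sketched below to show how a directed influence measure between two monitored variables can be obtained from data.

```python
import numpy as np
from collections import Counter

def symbolize(x, dim=3):
    # map each length-`dim` window to its ordinal (rank) pattern index
    patterns = np.array([tuple(np.argsort(x[i:i + dim])) for i in range(len(x) - dim + 1)])
    _, sym = np.unique(patterns, axis=0, return_inverse=True)
    return sym.reshape(-1)

def transfer_entropy(x, y, dim=3):
    # plug-in estimate of TE(x -> y) on the symbol sequences, in bits
    sx, sy = symbolize(x, dim), symbolize(y, dim)
    joint = Counter(zip(sy[1:], sy[:-1], sx[:-1]))        # (y_{t+1}, y_t, x_t)
    pair_yy = Counter(zip(sy[1:], sy[:-1]))
    pair_yx = Counter(zip(sy[:-1], sx[:-1]))
    single_y = Counter(sy[:-1])
    n = len(sy) - 1
    te = 0.0
    for (y1, y0, x0), c in joint.items():
        p_cond_full = c / pair_yx[(y0, x0)]               # p(y_{t+1} | y_t, x_t)
        p_cond_hist = pair_yy[(y1, y0)] / single_y[y0]    # p(y_{t+1} | y_t)
        te += (c / n) * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(10)
x = rng.normal(size=3000)
y = np.roll(x, 1) + 0.3 * rng.normal(size=3000)           # y lags x, so TE x->y should exceed TE y->x
print(transfer_entropy(x, y), transfer_entropy(y, x))
```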
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
van Diest, Mike; Stegenga, Jan; Wörtche, Heinrich J.; Roerdink, Jos B. T. M; Verkerke, Gijsbertus J.; Lamoth, Claudine J. C.
2015-01-01
Background Exergames are becoming an increasingly popular tool for training balance ability, thereby preventing falls in older adults. Automatic, real time, assessment of the user’s balance control offers opportunities in terms of providing targeted feedback and dynamically adjusting the gameplay to the individual user, yet algorithms for quantification of balance control remain to be developed. The aim of the present study was to identify movement patterns, and variability therein, of young and older adults playing a custom-made weight-shifting (ice-skating) exergame. Methods Twenty older adults and twenty young adults played a weight-shifting exergame under five conditions of varying complexity, while multi-segmental whole-body movement data were captured using Kinect. Movement coordination patterns expressed during gameplay were identified using Self Organizing Maps (SOM), an artificial neural network, and variability in these patterns was quantified by computing Total Trajectory Variability (TTvar). Additionally a k Nearest Neighbor (kNN) classifier was trained to discriminate between young and older adults based on the SOM features. Results Results showed that TTvar was significantly higher in older adults than in young adults, when playing the exergame under complex task conditions. The kNN classifier showed a classification accuracy of 65.8%. Conclusions Older adults display more variable sway behavior than young adults, when playing the exergame under complex task conditions. The SOM features characterizing movement patterns expressed during exergaming allow for discriminating between young and older adults with limited accuracy. Our findings contribute to the development of algorithms for quantification of balance ability during home-based exergaming for balance training. PMID:26230655
Permutation importance: a corrected feature importance measure.
Altmann, André; Toloşi, Laura; Sander, Oliver; Lengauer, Thomas
2010-05-15
In life sciences, interpretability of machine learning models is as important as their prediction accuracy. Linear models are probably the most frequently used methods for assessing feature relevance, despite their relative inflexibility. However, in the past years effective estimators of feature relevance have been derived for highly complex or non-parametric models such as support vector machines and RandomForest (RF) models. Recently, it has been observed that RF models are biased in such a way that categorical variables with a large number of categories are preferred. In this work, we introduce a heuristic for normalizing feature importance measures that can correct the feature importance bias. The method is based on repeated permutations of the outcome vector for estimating the distribution of measured importance for each variable in a non-informative setting. The P-value of the observed importance provides a corrected measure of feature importance. We apply our method to simulated data and demonstrate that (i) non-informative predictors do not receive significant P-values, (ii) informative variables can successfully be recovered among non-informative variables and (iii) P-values computed with permutation importance (PIMP) are very helpful for deciding the significance of variables, and therefore improve model interpretability. Furthermore, PIMP was used to correct RF-based importance measures for two real-world case studies. We propose an improved RF model that uses the significant variables with respect to the PIMP measure and show that its prediction accuracy is superior to that of other existing models. R code for the method presented in this article is available at http://www.mpi-inf.mpg.de/~altmann/download/PIMP.R. Contact: altmann@mpi-inf.mpg.de, laura.tolosi@mpi-inf.mpg.de. Supplementary data are available at Bioinformatics online.
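The permutation step at the heart of PIMP is straightforward to sketch with scikit-learn's random forest on simulated data: the outcome is permuted many times, the forest refit, and the null distribution of each variable's importance yields an empirical p-value (the number of permutations and the regression setting here are arbitrary choices, not those of the paper).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)
n, p = 300, 20
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] + X[:, 1] + rng.normal(size=n)           # only the first two features matter

rf = RandomForestRegressor(n_estimators=200, random_state=0)
observed = rf.fit(X, y).feature_importances_             # importances on the real outcome

n_perm = 50
null = np.empty((n_perm, p))
for b in range(n_perm):
    null[b] = rf.fit(X, rng.permutation(y)).feature_importances_   # importances under a permuted outcome

pvals = (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)        # empirical PIMP-style p-values
print(np.round(pvals, 3))
```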
PREDICTING TWO-DIMENSIONAL STEADY-STATE SOIL FREEZING FRONTS USING THE CVBEM.
Hromadka, T.V.
1986-01-01
The complex variable boundary element method (CVBEM) is used instead of a real variable boundary element method because of the modeling error evaluation techniques that have been developed for it. The modeling accuracy is evaluated by the model user in the determination of an approximate boundary upon which the CVBEM provides an exact solution. Although inhomogeneity (and anisotropy) can be included in the CVBEM model, the resulting fully populated matrix system quickly becomes large. Therefore, in this paper the domain is assumed homogeneous and isotropic except for differences in frozen and thawed conduction parameters on either side of the freezing front. The example problems presented were obtained by use of a popular 64K microcomputer (the current version of the program used in this study has the capacity to accommodate 30 nodal points).
Identifying Changes of Complex Flood Dynamics with Recurrence Analysis
NASA Astrophysics Data System (ADS)
Wendi, D.; Merz, B.; Marwan, N.
2016-12-01
Temporal changes in flood hazard systems are known to be difficult to detect and attribute because of multiple drivers that include complex processes that are non-stationary and highly variable. These drivers, such as human-induced climate change, natural climate variability, implementation of flood defenses, river training, or land use change, can act variably across space-time scales and influence or mask each other. Flood time series may show complex behavior that varies over a range of time scales and may cluster in time. Moreover, hydrological time series (e.g., discharge) are often subject to measurement errors, such as rating curve error, especially in the case of extremes where observations are actually derived through extrapolation. This study focuses on the application of recurrence-based data analysis techniques (recurrence plots) for understanding and quantifying spatio-temporal changes in flood hazard in Germany. The recurrence plot is known as an effective tool to visualize the dynamics of phase space trajectories, i.e., trajectories reconstructed from a time series using an embedding dimension and a time delay, and it is known to be effective in analyzing non-stationary and non-linear time series. The sensitivity of recurrence analysis to common measurement errors and noise will also be analyzed and evaluated against conventional methods. The emphasis will be on the identification of characteristic recurrence properties that could associate typical dynamics with certain flood events.
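A minimal recurrence-plot construction (time-delay embedding followed by thresholding of pairwise distances) is sketched below; the embedding dimension, delay, threshold, and the synthetic series are illustrative choices, not values calibrated for discharge records.

```python
import numpy as np

def recurrence_plot(x, dim=3, delay=5, eps=None):
    # time-delay embedding, then mark pairs of phase-space points closer than eps
    m = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + m] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = np.percentile(d, 10)          # fix the recurrence rate at roughly 10%
    return (d <= eps).astype(int)

t = np.linspace(0, 20 * np.pi, 1500)
series = np.sin(t) + 0.3 * np.random.default_rng(12).normal(size=t.size)   # stand-in for a discharge record
R = recurrence_plot(series)
print(R.shape, R.mean())                    # recurrence matrix and realized recurrence rate
```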
Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H
2013-08-01
Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
Antioch, Kathryn M; Walsh, Michael K
2004-06-01
Hospitals throughout the world using funding based on diagnosis-related groups (DRG) have incurred substantial budgetary deficits, despite high efficiency. We identify the limitations of DRG funding that lacks risk (severity) adjustment for State-wide referral services. Methods to risk adjust DRGs are instructive. The average price in casemix funding in the Australian State of Victoria is policy based, not benchmarked. Average cost weights are too low for high-complexity DRGs relating to State-wide referral services such as heart and lung transplantation and trauma. Risk-adjusted specified grants (RASG) are required for five high-complexity respiratory, cardiology and stroke DRGs incurring annual deficits of $3.6 million due to high casemix complexity and government under-funding despite high efficiency. Five stepwise linear regressions, one for each DRG, excluded non-significant variables and assessed heteroskedasticity and multicollinearity. Cost per patient was the dependent variable. Significant independent variables were age, length-of-stay outliers, number of disease types, diagnoses, procedures and emergency status. Diagnosis and procedure severity markers were identified. The methodology and the work of the State-wide Risk Adjustment Working Group can facilitate risk adjustment of DRGs State-wide and for Treasury negotiations for expenditure growth. The Alfred Hospital previously negotiated RASG of $14 million over 5 years for three trauma and chronic DRGs. Some chronic diseases require risk-adjusted capitation funding models for Australian Health Maintenance Organizations as an alternative to casemix funding. The use of Diagnostic Cost Groups can facilitate State and Federal government reform via new population-based risk adjusted funding models that measure health need.
Adaptive Elastic Net for Generalized Methods of Moments.
Caner, Mehmet; Zhang, Hao Helen
2014-01-30
Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique because the estimators lack closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit distribution and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
Quantitative characterization of genetic parts and circuits for plant synthetic biology.
Schaumberg, Katherine A; Antunes, Mauricio S; Kassaw, Tessema K; Xu, Wenlong; Zalewski, Christopher S; Medford, June I; Prasad, Ashok
2016-01-01
Plant synthetic biology promises immense technological benefits, including the potential development of a sustainable bio-based economy through the predictive design of synthetic gene circuits. Such circuits are built from quantitatively characterized genetic parts; however, this characterization is a significant obstacle in work with plants because of the time required for stable transformation. We describe a method for rapid quantitative characterization of genetic plant parts using transient expression in protoplasts and dual luciferase outputs. We observed experimental variability in transient-expression assays and developed a mathematical model to describe, as well as statistical normalization methods to account for, this variability, which allowed us to extract quantitative parameters. We characterized >120 synthetic parts in Arabidopsis and validated our method by comparing transient expression with expression in stably transformed plants. We also tested >100 synthetic parts in sorghum (Sorghum bicolor) protoplasts, and the results showed that our method works in diverse plant groups. Our approach enables the construction of tunable gene circuits in complex eukaryotic organisms.
Adjoint-Based Methodology for Time-Dependent Optimization
NASA Technical Reports Server (NTRS)
Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.
2008-01-01
This paper presents a discrete adjoint method for a broad class of time-dependent optimization problems. The time-dependent adjoint equations are derived in terms of the discrete residual of an arbitrary finite volume scheme which approximates unsteady conservation law equations. Although only the 2-D unsteady Euler equations are considered in the present analysis, this time-dependent adjoint method is applicable to the 3-D unsteady Reynolds-averaged Navier-Stokes equations with minor modifications. The discrete adjoint operators involving the derivatives of the discrete residual and the cost functional with respect to the flow variables are computed using a complex-variable approach, which provides discrete consistency and drastically reduces the implementation and debugging cycle. The implementation of the time-dependent adjoint method is validated by comparing the sensitivity derivative with that obtained by forward mode differentiation. Our numerical results show that O(10) optimization iterations of the steepest descent method are needed to reduce the objective functional by 3-6 orders of magnitude for test problems considered.
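The complex-variable trick for computing the discrete derivatives mentioned above is the complex-step approximation; a two-line illustration on an arbitrary analytic function is given below (not the actual flow-solver Jacobian computation).

```python
import numpy as np

# Complex-step differentiation: for an analytic f, df/dx ~ Im(f(x + i*h)) / h,
# with no subtractive cancellation even for a tiny step h.
def f(x):
    return np.exp(x) * np.sin(x) / (1.0 + x**2)

x0, h = 1.3, 1e-30
complex_step = np.imag(f(x0 + 1j * h)) / h
finite_diff = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6   # central difference for comparison
print(complex_step, finite_diff)                      # complex step is accurate to machine precision
```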
On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Mahmoud, Gamal M.
Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer science, for example, laser systems, control (or chaos suppression), secure communications, and information science. Basic dynamical properties, chaos (hyperchaos) synchronization, chaos control, and the generation of hyperchaotic behavior of these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned lasers and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex, the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are presented as examples. Finally, there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems that need further investigation. Some of these open problems are given.
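As a hedged illustration of the point that each complex variable contributes two real dimensions, the sketch below integrates a Lorenz-like complex-variable system by splitting its complex states into real and imaginary parts; the equations and parameter values are assumptions chosen for illustration, not taken from the article.

```python
# Integrate a complex-variable ODE system by unpacking real and imaginary parts.
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 2.0, 6.0 + 0.3j, 0.8   # illustrative (partly complex) parameters

def rhs(t, u):
    # u packs [Re x, Im x, Re y, Im y, z] for a Lorenz-like complex system
    x = u[0] + 1j * u[1]
    y = u[2] + 1j * u[3]
    z = u[4]
    dx = sigma * (y - x)
    dy = rho * x - y - x * z
    dz = 0.5 * (np.conj(x) * y + x * np.conj(y)).real - beta * z
    return [dx.real, dx.imag, dy.real, dy.imag, dz]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.1, 0.1, 0.1, 0.1], max_step=0.01)
print(sol.y.shape)   # five real variables describe two complex states plus one real state
```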
Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.
2001-01-01
Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.
NASA Astrophysics Data System (ADS)
Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.
2007-05-01
During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is therefore of great help, since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables that would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review in which both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques for assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for retrieving canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on the one hand, and model and user requirements on the other. This could be achieved by integrating imagery with different spatial, temporal, spectral, and angular resolutions, and by fusing optical data with data of different origin, such as LIDAR and radar/microwave.
Structural identifiability of cyclic graphical models of biological networks with latent variables.
Wang, Yulin; Lu, Na; Miao, Hongyu
2016-06-13
Graphical models have long been used to describe biological networks for a variety of important tasks such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacities, complex biological networks are usually only partially observed in experiments, which introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs) with or without latent variables. However, the limited resolution and efficiency of existing approaches calls for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations into binary matrices (called identifiability matrices). Several matrix operations are introduced for identifiability matrix reduction while maintaining system equivalency. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. Also, this method is capable of determining the identifiability of each single parameter and thus has higher resolution than many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has significant potential to be extended to more complex network structures or high-dimensional systems.
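A toy sketch of the symbolic-to-binary conversion step is given below; it only records which parameters appear in which identifiability equation and is far simpler than the reduction rules of the proposed algorithm. The example model and symbol names are assumptions.

```python
# Toy illustration: turn symbolic identifiability equations into a binary matrix
# recording which unknown parameters appear in which equation.
import sympy as sp

a, b, c = sp.symbols("a b c")                  # unknown path coefficients
s12, s13, s23 = sp.symbols("s12 s13 s23")      # observed covariances
equations = [sp.Eq(s12, a), sp.Eq(s13, a * b + c), sp.Eq(s23, b)]

params = [a, b, c]
identifiability_matrix = [
    [1 if p in eq.rhs.free_symbols else 0 for p in params] for eq in equations
]
for row in identifiability_matrix:
    print(row)
# A row with a single 1 pins down that parameter directly; here a and b are
# immediately identifiable, and substituting them makes c identifiable as well.
```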
Estimating trends in the global mean temperature record
NASA Astrophysics Data System (ADS)
Poppick, Andrew; Moyer, Elisabeth J.; Stein, Michael L.
2017-06-01
Given uncertainties in physical theory and numerical climate simulations, the historical temperature record is often used as a source of empirical information about climate change. Many historical trend analyses appear to de-emphasize physical and statistical assumptions: examples include regression models that treat time rather than radiative forcing as the relevant covariate, and time series methods that account for internal variability in nonparametric rather than parametric ways. However, given a limited data record and the presence of internal variability, estimating radiatively forced temperature trends in the historical record necessarily requires some assumptions. Ostensibly empirical methods can also involve an inherent conflict in assumptions: they require data records that are short enough for naive trend models to be applicable, but long enough for long-timescale internal variability to be accounted for. In the context of global mean temperatures, empirical methods that appear to de-emphasize assumptions can therefore produce misleading inferences, because the trend over the twentieth century is complex and the scale of temporal correlation is long relative to the length of the data record. We illustrate here how a simple but physically motivated trend model can provide better-fitting and more broadly applicable trend estimates and can allow for a wider array of questions to be addressed. In particular, the model allows one to distinguish, within a single statistical framework, between uncertainties in the shorter-term vs. longer-term response to radiative forcing, with implications not only for historical trends but also for uncertainties in future projections. We also investigate how the choice of statistical description of internal variability affects the inferred uncertainties. While nonparametric methods may seem to avoid making explicit assumptions, we demonstrate how even misspecified parametric statistical methods, if attuned to the important characteristics of internal variability, can result in more accurate uncertainty statements about trends.
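A minimal sketch of the kind of physically motivated trend model described here, with radiative forcing as the covariate and AR(1) errors representing internal variability, is shown below on synthetic data; the forcing series, noise parameters, and sensitivity are assumptions, not the authors' values.

```python
# Trend regression with forcing (not time) as covariate and parametric AR(1) errors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
years = np.arange(1900, 2021)
forcing = 2.5 * ((years - 1900) / 120.0) ** 1.5     # made-up forcing ramp (W m^-2)

noise = np.zeros(years.size)                         # AR(1) internal variability
for t in range(1, years.size):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(scale=0.08)
temperature = 0.35 * forcing + noise                 # synthetic global-mean anomaly (K)

X = sm.add_constant(forcing)
model = sm.GLSAR(temperature, X, rho=1)              # linear model with AR(1) errors
result = model.iterative_fit(maxiter=10)
print(result.params, result.bse)                     # sensitivity to forcing and its SE
```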
Beyond the SCS curve number: A new stochastic spatial runoff approach
NASA Astrophysics Data System (ADS)
Bartlett, M. S., Jr.; Parolari, A.; McDonnell, J.; Porporato, A. M.
2015-12-01
The Soil Conservation Service curve number (SCS-CN) method is the standard approach in practice for predicting a storm event runoff response. It is popular because of its low parametric complexity and ease of use. However, the SCS-CN method does not describe the spatial variability of runoff and is restricted to certain geographic regions and land use types. Here we present a general theory for extending the SCS-CN method. Our new theory accommodates different event-based models derived from alternative rainfall-runoff mechanisms or distributions of watershed variables, which are the basis of different semi-distributed models such as VIC, PDM, and TOPMODEL. We introduce a parsimonious but flexible description where runoff is initiated by a pure threshold, i.e., saturation excess, that is complemented by fill-and-spill runoff behavior from areas of partial saturation. To facilitate event-based runoff prediction, we derive simple equations for the fraction of the runoff source areas, the probability density function (PDF) describing runoff variability, and the corresponding average runoff value (a runoff curve analogous to the SCS-CN). The benefit of the theory is that it unites the SCS-CN method, VIC, PDM, and TOPMODEL as the same model type but with different assumptions for the spatial distribution of variables and the runoff mechanism. The new multiple-runoff-mechanism description for the SCS-CN enables runoff prediction in geographic regions and site runoff types previously misrepresented by the traditional SCS-CN method. In addition, we show that the VIC, PDM, and TOPMODEL runoff curves may be more suitable than the SCS-CN for different conditions. Lastly, we explore predictions of sediment and nutrient transport by applying the PDF describing runoff variability within our new framework.
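For reference, the traditional SCS-CN event runoff calculation that the paper generalizes can be written in a few lines; the sketch below uses the standard textbook form (metric units, 0.2 initial abstraction ratio) and is not the new stochastic spatial framework.

```python
# Standard SCS-CN event runoff calculation (metric form).
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Event runoff depth (mm) from rainfall depth p_mm and curve number cn."""
    s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
    ia = ia_ratio * s                 # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=60.0, cn=75))   # roughly 15 mm of runoff for this example
```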
NASA Astrophysics Data System (ADS)
Le Maire, P.; Munschy, M.
2017-12-01
Interpretation of marine magnetic anomalies enables accurate global kinematic models to be constructed. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at the time of its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the apparent inclination is estimated qualitatively, by fitting forward models to the magnetic data. We propose a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two-dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of the magnetic data. This complex formulation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them. In the complex plane, the effect of the apparent inclination is simply to rotate the curves, whereas on the standard anomaly display the change in the shape of the anomaly is more complicated. This method is applied to a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma, and the apparent inclination of the magnetization is computed.
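A hedged sketch of the rotation property is given below: a synthetic complex anomaly (purely illustrative, not computed from Talwani's equations) is multiplied by a phase factor, which rotates its curve in the Argand plane while its real part, the conventional anomaly profile, changes shape less transparently.

```python
# Illustrative rotation of a complex anomaly curve in the Argand plane.
import numpy as np

x = np.linspace(-20.0, 20.0, 400)                        # profile coordinate (km)
base = np.exp(-(x / 5.0) ** 2) * (1.0 + 1j * x / 5.0)    # made-up complex anomaly

for inclination_deg in (0.0, 30.0, 60.0):
    rotated = base * np.exp(1j * np.radians(inclination_deg))
    # In the Argand plane, the curve (rotated.real, rotated.imag) is the base curve
    # turned by the inclination angle; the real part alone, which is what a
    # conventional anomaly profile shows, changes shape in a less obvious way.
    print(inclination_deg, rotated.real[:3].round(3))
```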
Blind predictions of protein interfaces by docking calculations in CAPRI.
Lensink, Marc F; Wodak, Shoshana J
2010-11-15
Reliable prediction of the amino acid residues involved in protein-protein interfaces can provide valuable insight into protein function and inform mutagenesis studies and drug design applications. A fast-growing number of methods are being proposed for predicting protein interfaces, using structural information, energetic criteria, or sequence conservation, or by integrating multiple criteria and approaches. Overall, however, their performance remains limited, especially when applied to nonobligate protein complexes, where the individual components are also stable on their own. Here, we evaluate interface predictions derived from protein-protein docking calculations. To this end we measure the overlap between the interfaces in models of protein complexes submitted by 76 participants in CAPRI (Critical Assessment of Predicted Interactions) and those of 46 observed interfaces in 20 CAPRI targets corresponding to nonobligate complexes. Our evaluation considers multiple models for each target interface, submitted by different participants using a variety of docking methods. Although this results in substantial variability in prediction performance across participants and targets, clear trends emerge. Docking methods that perform best in our evaluation predict interfaces with average recall and precision levels of about 60%, for a small majority (60%) of the analyzed interfaces. These levels are significantly higher than those obtained for nonobligate complexes by most extant interface prediction methods. We find furthermore that a sizable fraction (24%) of the interfaces in models ranked as incorrect in the CAPRI assessment are actually correctly predicted (recall and precision ≥50%), and that these models contribute 70% of the correct docking-based interface predictions overall. Our analysis shows that docking methods are much more successful in identifying interfaces than in predicting complexes, and suggests that these methods have excellent potential for addressing the interface prediction challenge. © 2010 Wiley-Liss, Inc.
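The recall/precision scoring used in this evaluation reduces to simple set operations on residue lists; the sketch below uses made-up residue identifiers.

```python
# Recall and precision of a predicted interface against the observed interface.
observed = {"A45", "A46", "A72", "B12", "B13", "B15", "B16"}
predicted = {"A45", "A46", "A47", "B12", "B15", "B90"}

true_positives = observed & predicted
recall = len(true_positives) / len(observed)       # fraction of the real interface found
precision = len(true_positives) / len(predicted)   # fraction of predictions that are right
print(f"recall={recall:.2f} precision={precision:.2f}")
# Under the criterion in the abstract, a prediction with recall and precision >= 0.5
# would count as a correctly predicted interface.
```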
B. Desta Fekedulegn; J.J. Colbert; R.R., Jr. Hicks; Michael E. Schuckers
2002-01-01
The theory and application of principal components regression, a method for coping with multicollinearity among independent variables in analyzing ecological data, is exhibited in detail. A concrete example of the complex procedures that must be carried out in developing a diagnostic growth-climate model is provided. We use tree radial increment data taken from breast...
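A minimal sketch of principal components regression on synthetic collinear predictors (not the tree-ring and climate data of the study) is shown below.

```python
# Principal components regression: regress the response on the leading principal
# components of correlated predictors, then map coefficients back to the originals.
import numpy as np

rng = np.random.default_rng(2)
n = 80
climate = rng.normal(size=(n, 5))
X = np.column_stack([climate[:, 0], climate[:, 0] + 0.1 * rng.normal(size=n),
                     climate[:, 1], climate[:, 2], climate[:, 3]])   # collinear columns
y = 1.2 * climate[:, 0] - 0.7 * climate[:, 1] + rng.normal(scale=0.3, size=n)

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3                                   # keep the leading components
scores = Xc @ Vt[:k].T
gamma, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
beta = Vt[:k].T @ gamma                 # coefficients on the original predictors
print(np.round(beta, 3))
```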
Fire scars reveal variability and dynamics of eastern fire regimes
Richard P. Guyette; Daniel C. Dey; Michael C. Stambaugh; Rose-Marie Muzika
2006-01-01
Fire scar evidence in eastern North America is sparse and complex but shows promise in defining the dynamics of these fire regimes and their influence on ecosystems. We review fire scar data, methods, and limitations, and use this information to identify and examine the factors influencing fire regimes. Fire scar data from studies at more than 40 sites in Eastern North...
Impact of Classroom Design on Teacher Pedagogy and Student Engagement and Performance in Mathematics
ERIC Educational Resources Information Center
Imms, Wesley; Byers, Terry
2017-01-01
A resurgence in interest in classroom and school design has highlighted how little we know about the impact of learning environments on student and teacher performance. This is partly because of a lack of research methods capable of controlling the complex variables inherent to space and education. In a unique study that overcame such difficulties…
A Three-Step Approach To Model Tree Mortality in the State of Georgia
Qingmin Meng; Chris J. Cieszewski; Roger C. Lowe; Michal Zasada
2005-01-01
Tree mortality is one of the most complex phenomena of forest growth and yield. Many types of factors affect tree mortality, which is considered difficult to predict. This study presents a new systematic approach to simulate tree mortality based on the integration of statistical models and geographical information systems. This method begins with variable preselection...
Shape optimization techniques for musical instrument design
NASA Astrophysics Data System (ADS)
Henrique, Luis; Antunes, Jose; Carvalho, Joao S.
2002-11-01
The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. These issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as the searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
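The sketch below illustrates the proposed idea on a deliberately simple stand-in problem: the mass density of a discretized string is parameterized by a few Fourier coefficients, and a global optimizer (scipy's dual annealing, used here in place of the authors' simulated annealing) searches those coefficients so that the computed eigenfrequencies approach target values. The model, targets, and bounds are assumptions.

```python
# Shape optimization on Fourier coefficients rather than local geometric parameters.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import dual_annealing

n = 80
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
h = x[1] - x[0]
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2          # stiffness of a clamped string

def frequencies(coeffs, n_modes=3):
    density = 1.0 + sum(c * np.cos((k + 1) * np.pi * x) for k, c in enumerate(coeffs))
    density = np.clip(density, 0.2, None)              # keep the density physical
    vals = eigh(A, np.diag(density), eigvals_only=True)
    return np.sqrt(vals[:n_modes])

target = np.array([3.3, 6.6, 9.9])                     # targets near the uniform-string modes

def error(coeffs):
    return np.sum((frequencies(coeffs) - target) ** 2)

result = dual_annealing(error, bounds=[(-0.8, 0.8)] * 3, maxiter=200, seed=3)
print(result.x, frequencies(result.x))
```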
Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control
NASA Astrophysics Data System (ADS)
Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo
2017-02-01
The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this end, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders which stem from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.
Variability in Rheumatology day care hospitals in Spain: VALORA study.
Hernández Miguel, María Victoria; Martín Martínez, María Auxiliadora; Corominas, Héctor; Sanchez-Piedra, Carlos; Sanmartí, Raimon; Fernandez Martinez, Carmen; García-Vicuña, Rosario
To describe the variability of Rheumatology day care hospital units (DCHUs) in Spain in terms of structural resources and operating processes. Multicenter descriptive study with data from a self-completed DCHU self-assessment questionnaire based on the DCHU quality standards of the Spanish Society of Rheumatology. Structural resources and operating processes were analyzed and stratified by hospital complexity (regional, general, major and complex). Variability was determined using the coefficient of variation (CV) of the clinically relevant variable that presented statistically significant differences when compared across centers. A total of 89 hospitals (16 autonomous regions and Melilla) were included in the analysis. Of these, 11.2% were regional hospitals, 22.5% general, 27% major and 39.3% complex. A total of 92% of DCHUs were polyvalent. The number of treatments applied, the coordination between DCHUs and hospital pharmacy, and the postgraduate training process were the variables that showed statistically significant differences depending on hospital complexity. The highest rate of rheumatologic treatments was found in complex hospitals (2.97 per 1,000 population), and the lowest in general hospitals (2.01 per 1,000 population). The CV was 0.88 in major hospitals, 0.86 in regional, 0.76 in general, and 0.72 in complex hospitals. There was variability in the number of treatments delivered in DCHUs, being greatest in major hospitals followed by regional centers. Nonetheless, the variability in terms of structure and function does not seem due to differences in center complexity. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
Cabrieto, Jedelyn; Tuerlinckx, Francis; Kuppens, Peter; Grassmann, Mariel; Ceulemans, Eva
2017-06-01
Change point detection in multivariate time series is a complex task since, next to the mean, the correlation structure of the monitored variables may also alter when change occurs. DeCon was recently developed to detect such changes in mean and/or correlation by combining a moving windows approach and robust PCA. However, in the literature, several other methods have been proposed that employ other non-parametric tools: E-divisive, Multirank, and KCP. Since these methods use different statistical approaches, two issues need to be tackled. First, applied researchers may find it hard to appraise the differences between the methods. Second, a direct comparison of the relative performance of all these methods for capturing change points signaling correlation changes is still lacking. Therefore, we present the basic principles behind DeCon, E-divisive, Multirank, and KCP and the corresponding algorithms, to make them more accessible to readers. We further compared their performance through extensive simulations using the settings of Bulteel et al. (Biological Psychology, 98 (1), 29-42, 2014), implying changes in mean and in correlation structure, and those of Matteson and James (Journal of the American Statistical Association, 109 (505), 334-345, 2014), implying different numbers of (noise) variables. KCP emerged as the best method in almost all settings. However, in the case of more than two noise variables, only DeCon performed adequately in detecting correlation changes.
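As a toy illustration of the moving-window idea (this is none of DeCon, E-divisive, Multirank, or KCP), the sketch below tracks the correlation of two simulated variables in a sliding window and flags the largest jump between adjacent windows.

```python
# Toy sliding-window detector for a change in correlation structure.
import numpy as np

rng = np.random.default_rng(4)
n, change = 400, 200
x = rng.normal(size=n)
y = np.where(np.arange(n) < change,
             0.8 * x + 0.6 * rng.normal(size=n),     # correlated segment
             rng.normal(size=n))                      # uncorrelated segment

w = 50
corrs = np.array([np.corrcoef(x[t:t + w], y[t:t + w])[0, 1] for t in range(n - w)])
jumps = np.abs(corrs[w:] - corrs[:-w])                # compare disjoint adjacent windows
print("flagged change near index", int(np.argmax(jumps)) + w)
```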
Rapid-estimation method for assessing scour at highway bridges
Holnbeck, Stephen R.
1998-01-01
A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.
Distributed Coding/Decoding Complexity in Video Sensor Networks
Cordeiro, Paulo J.; Assunção, Pedro
2012-01-01
Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972
Valavanis, Ioannis K; Mougiakakou, Stavroula G; Grimaldi, Keith A; Nikita, Konstantina S
2010-09-08
Obesity is a multifactorial trait, which comprises an independent risk factor for cardiovascular disease (CVD). The aim of the current work is to study the complex etiology underlying obesity and identify genetic variations and/or factors related to nutrition that contribute to its variability. To this end, a set of more than 2300 white subjects who participated in a nutrigenetics study was used. For each subject a total of 63 factors describing genetic variants related to CVD (24 in total), gender, and nutrition (38 in total), e.g. average daily intake in calories and cholesterol, were measured. Each subject was categorized according to body mass index (BMI) as normal (BMI ≤ 25) or overweight (BMI > 25). Two artificial neural network (ANN) based methods were designed and used for the analysis of the available data. These corresponded to i) a multi-layer feed-forward ANN combined with a parameter decreasing method (PDM-ANN), and ii) a multi-layer feed-forward ANN trained by a hybrid method (GA-ANN) which combines genetic algorithms and the popular back-propagation training algorithm. PDM-ANN and GA-ANN were comparatively assessed in terms of their ability to identify, among the initial 63 variables describing genetic variations, nutrition and gender, the most important factors able to classify a subject into one of the BMI-related classes: normal and overweight. The methods were designed and evaluated using appropriate training and testing sets provided by 3-fold cross-validation (3-CV) resampling. Classification accuracy, sensitivity, specificity and area under the receiver operating characteristic curve were utilized to evaluate the resulting predictive ANN models. The most parsimonious set of factors was obtained by the GA-ANN method and included gender, six genetic variations and 18 nutrition-related variables. The corresponding predictive model was characterized by a mean accuracy of 61.46% in the 3-CV testing sets. The ANN-based methods revealed factors that interactively contribute to the obesity trait and provided predictive models with a promising generalization ability. In general, the results showed that ANNs and their hybrids can provide useful tools for the study of complex traits in the context of nutrigenetics.
A new spectrophotometric method for determination of EDTA in water using its complex with Mn(III).
Andrade, Carlos Eduardo O; Oliveira, André F; Neves, Antônio A; Queiroz, Maria Eliana L R
2016-11-05
EDTA is an important ligand used in many industrial products as well as in agriculture, where it is employed to assist in phytoextraction procedures and the absorption of nutrients by plants. Due to its intensive use and recalcitrance, it is now considered an emerging pollutant in water, so there is great interest in techniques suitable for its monitoring. This work proposes a method based on formation of the Mn(III)-EDTA complex after oxidation of the Mn(II)-EDTA complex by PbO2 immobilized on cyanoacrylate spheres. A design of experiments (DOE) based on the Doehlert matrix was used to determine the optimum conditions of the method, and the influence of the variables was evaluated using a multiple linear regression (MLR) model. The optimized method presented a linear response in the range from 0.77 to 100.0 μmol L⁻¹, with analytical sensitivity of 7.7×10³ L mol⁻¹, a coefficient of determination of 0.999, and a limit of detection of 0.23 μmol L⁻¹. The method was applied using samples fortified at different concentration levels, and the recoveries achieved were between 97.0 and 104.9%. Copyright © 2016 Elsevier B.V. All rights reserved.
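The calibration step behind such a spectrophotometric method can be sketched as a simple linear fit with a 3.3·s/slope detection limit; the numbers below are synthetic, not the paper's data.

```python
# Linear calibration of absorbance vs. concentration with a simple LOD estimate.
import numpy as np

conc = np.array([0.0, 5.0, 10.0, 25.0, 50.0, 100.0])                # µmol/L standards
absorbance = np.array([0.002, 0.040, 0.079, 0.195, 0.388, 0.771])   # made-up readings

slope, intercept = np.polyfit(conc, absorbance, 1)
residual_sd = np.std(absorbance - (slope * conc + intercept), ddof=2)
lod = 3.3 * residual_sd / slope                        # common 3.3*s/slope criterion
print(f"slope={slope:.4f} L/µmol, intercept={intercept:.4f}, LOD≈{lod:.2f} µmol/L")
```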
A fast non-contact imaging photoplethysmography method using a tissue-like model
NASA Astrophysics Data System (ADS)
McDuff, Daniel J.; Blackford, Ethan B.; Estepp, Justin R.; Nishidate, Izumi
2018-02-01
Imaging photoplethysmography (iPPG) allows non-contact, concomitant measurement and visualization of peripheral blood flow using just an RGB camera. Most iPPG methods require a window of temporal data and complex computation, which makes real-time measurement and spatial visualization impossible. We present a fast, "window-less", non-contact imaging photoplethysmography method, based on a tissue-like model of the skin, that allows accurate measurement of heart rate and heart rate variability parameters. The error in heart rate estimates is equivalent to state-of-the-art techniques, and the computation is much faster.
Complex Variables throughout the Curriculum
ERIC Educational Resources Information Center
D'Angelo, John P.
2017-01-01
We offer many specific detailed examples, several of which are new, that instructors can use (in lecture or as student projects) to revitalize the role of complex variables throughout the curriculum. We conclude with three primary recommendations: revise the syllabus of Calculus II to allow early introductions of complex numbers and linear…
Xu, Man K.; Gaysina, Darya; Tsonaka, Roula; Morin, Alexandre J. S.; Croudace, Tim J.; Barnett, Jennifer H.; Houwing-Duistermaat, Jeanine; Richards, Marcus; Jones, Peter B.
2017-01-01
Very few molecular genetic studies of personality traits have used longitudinal phenotypic data; therefore, the molecular basis for developmental change and stability of personality remains to be explored. We examined the role of the monoamine oxidase A gene (MAOA) on extraversion and neuroticism from adolescence to adulthood, using modern latent variable methods. A sample of 1,160 male and 1,180 female participants with complete genotyping data was drawn from a British national birth cohort, the MRC National Survey of Health and Development (NSHD). The predictor variable was based on a latent variable representing genetic variations of the MAOA gene measured by three SNPs (rs3788862, rs5906957, and rs979606). Latent phenotype variables were constructed using psychometric methods to represent cross-sectional and longitudinal phenotypes of extraversion and neuroticism measured at ages 16 and 26. In males, the MAOA genetic latent variable (AAG) was associated with a lower extraversion score at age 16 (β = −0.167; CI: −0.289, −0.045; p = 0.007, FDRp = 0.042), as well as a greater increase in extraversion score from 16 to 26 years (β = 0.197; CI: 0.067, 0.328; p = 0.003, FDRp = 0.036). No genetic association was found for neuroticism after adjustment for multiple testing. Although we did not find statistically significant associations after multiple testing correction in females, this result needs to be interpreted with caution due to issues related to X-inactivation in females. The latent variable method is an effective way of modeling phenotype- and genetic-based variances and may therefore improve the methodology of molecular genetic studies of complex psychological traits. PMID:29075213
Vaurio, Rebecca G; Simmonds, Daniel J; Mostofsky, Stewart H
2009-10-01
One of the most consistent findings in children with ADHD is increased moment-to-moment variability in reaction time (RT). The source of increased RT variability can be examined using ex-Gaussian analyses that divide variability into normal and exponential components, and Fast Fourier transform (FFT) analyses that allow for detailed examination of the frequency of responses in the exponential distribution. Prior studies of ADHD using these methods have produced variable results, potentially related to differences in task demand. The present study sought to examine the profile of RT variability in ADHD using two Go/No-go tasks with differing levels of cognitive demand. A total of 140 children (57 with ADHD and 83 typically developing controls), ages 8-13 years, completed both a "simple" Go/No-go task and a more "complex" Go/No-go task with increased working memory load. Repeated measures ANOVA of ex-Gaussian functions revealed that for both tasks, children with ADHD demonstrated increased variability in both the normal/Gaussian (significantly elevated sigma) and the exponential (significantly elevated tau) components. In contrast, FFT analysis of the exponential component revealed a significant task x diagnosis interaction, such that infrequent slow responses in ADHD differed depending on task demand (i.e., for the simple task, increased power in the 0.027-0.074 Hz frequency band; for the complex task, decreased power in the 0.074-0.202 Hz band). The ex-Gaussian findings, revealing increased variability in both the normal (sigma) and exponential (tau) components for the ADHD group, suggest that both impaired response preparation and infrequent "lapses in attention" contribute to increased variability in ADHD. FFT analyses reveal that the periodicity of intermittent lapses of attention in ADHD varies with task demand. The findings provide further support for intra-individual variability as a candidate intermediate endophenotype of ADHD.
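A hedged sketch of the two analyses on simulated reaction times is given below; the ex-Gaussian fit uses scipy's exponnorm parameterization (K = tau/sigma), and the spectral step assumes one response every two seconds, which is an assumption rather than the study's trial timing.

```python
# Ex-Gaussian fit of reaction times plus spectral power in a low-frequency band.
import numpy as np
from scipy.stats import exponnorm
from scipy.signal import welch

rng = np.random.default_rng(5)
n_trials = 512
rt = rng.normal(450.0, 60.0, n_trials) + rng.exponential(120.0, n_trials)  # ms

# scipy's exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma.
K, mu, sigma = exponnorm.fit(rt)
tau = K * sigma
print(f"mu≈{mu:.0f} ms, sigma≈{sigma:.0f} ms, tau≈{tau:.0f} ms")

# Spectral analysis of the trial-by-trial RT series, assuming 0.5 Hz sampling.
freqs, power = welch(rt - rt.mean(), fs=0.5, nperseg=256)
band = (freqs >= 0.027) & (freqs <= 0.074)
print("power in 0.027-0.074 Hz band:", power[band].sum())
```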
NASA Astrophysics Data System (ADS)
Fouad, Geoffrey; Skupin, André; Hope, Allen
2016-04-01
The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions. These results are largely reflective of cross-correlation existing in hydrologic datasets, and highlight the limited predictive power of many traditionally used variables for regional regression. A parsimonious approach including fewer variables chosen based on their connection to streamflow may be more efficient than a data mining approach including many different variables. Future regional regression studies may benefit from having a hydrologic rationale for including different variables and attempting to create new variables related to streamflow.
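A minimal sketch of the regional-regression idea with the simple three-variable set is shown below on synthetic basins; the variables, coefficients, and log-linear form are assumptions for illustration.

```python
# Regional regression of a percentile flow on three basin descriptors.
import numpy as np

rng = np.random.default_rng(6)
n_basins = 300
precip = rng.uniform(300.0, 2500.0, n_basins)      # mean annual precipitation (mm/yr)
pet = rng.uniform(400.0, 1500.0, n_basins)         # potential evapotranspiration (mm/yr)
bfi = rng.uniform(0.1, 0.9, n_basins)              # baseflow index
q50 = np.exp(0.002 * precip - 0.001 * pet + 1.5 * bfi + rng.normal(0, 0.3, n_basins))

# log-linear regression, a common choice for percentile-flow regionalization
X = np.column_stack([np.ones(n_basins), precip, pet, bfi])
coef, *_ = np.linalg.lstsq(X, np.log(q50), rcond=None)
predicted = np.exp(X @ coef)
print("coefficients:", np.round(coef, 4))
```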
Hasanpour, Foroozan; Hadadzadeh, Hassan; Taei, Masoumeh; Nekouei, Mohsen; Mozafari, Elmira
2016-05-01
The analytical performance of a conventional spectrophotometer was improved by coupling an effective dispersive liquid-liquid micro-extraction method with spectrophotometric determination for ultra-trace determination of cobalt. The method was based on the formation of the Co(II)-alpha-benzoin oxime complex and its extraction using a dispersive liquid-liquid micro-extraction technique. During the present work, several important variables such as pH, ligand concentration, and the amount and type of dispersive and extracting solvent were optimized. It was found that the crucial factor for Co(II)-alpha-benzoin oxime complex formation is the pH of the alkaline alcoholic medium. Under the optimized conditions, the calibration graph was linear in the range of 1.0-110 μg L⁻¹ with a detection limit (S/N = 3) of 0.5 μg L⁻¹. Preconcentration of 25 mL of sample gave an enhancement factor of 75. The proposed method was applied to the determination of Co(II) in soil samples.
Geometric multigrid for an implicit-time immersed boundary method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy, Robert D.; Philip, Bobby; Griffith, Boyce E.
2014-10-12
The immersed boundary (IB) method is an approach to fluid-structure interaction that uses Lagrangian variables to describe the deformations and resulting forces of the structure and Eulerian variables to describe the motion and forces of the fluid. Explicit time stepping schemes for the IB method require solvers only for Eulerian equations, for which fast Cartesian grid solution methods are available. Such methods are relatively straightforward to develop and are widely used in practice but often require very small time steps to maintain stability. Implicit-time IB methods permit the stable use of large time steps, but efficient implementations of such methods require significantly more complex solvers that effectively treat both Lagrangian and Eulerian variables simultaneously. Moreover, several different approaches to solving the coupled Lagrangian-Eulerian equations have been proposed, but a complete understanding of this problem is still emerging. This paper presents a geometric multigrid method for an implicit-time discretization of the IB equations. This multigrid scheme uses a generalization of box relaxation that is shown to handle problems in which the physical stiffness of the structure is very large. Numerical examples are provided to illustrate the effectiveness and efficiency of the algorithms described herein. Finally, these tests show that using multigrid as a preconditioner for a Krylov method yields improvements in both robustness and efficiency as compared to using multigrid as a solver. They also demonstrate that with a time step 100–1000 times larger than that permitted by an explicit IB method, the multigrid-preconditioned implicit IB method is approximately 50–200 times more efficient than the explicit method.
A mathematical framework for modelling cambial surface evolution using a level set method
Sellier, Damien; Plank, Michael J.; Harrington, Jonathan J.
2011-01-01
Background and Aims: During their lifetime, tree stems take a series of successive nested shapes. Individual tree growth models traditionally focus on apical growth and architecture. However, cambial growth, which is distributed over a surface layer wrapping the whole organism, equally contributes to plant form and function. This study aims at providing a framework to simulate how organism shape evolves as a result of a secondary growth process that occurs at the cellular scale. Methods: The development of the vascular cambium is modelled as an expanding surface using the level set method. The surface consists of multiple compartments following distinct expansion rules. Growth behaviour can be formulated as a mathematical function of surface state variables and independent variables to describe biological processes. Key Results: The model was coupled to an architectural model and to a forest stand model to simulate cambium dynamics and wood formation at the scale of the organism. The model is able to simulate competition between cambia, surface irregularities and local features. Predicting the shapes associated with arbitrarily complex growth functions does not add complexity to the numerical method itself. Conclusions: Despite their slenderness, it is sometimes useful to conceive of trees as expanding surfaces. The proposed mathematical framework provides a way to integrate through time and space the biological and physical mechanisms underlying cambium activity. It can be used either to test growth hypotheses or to generate detailed maps of wood internal structure. PMID:21470972
Middleton, John; Vaks, Jeffrey E
2007-04-01
Errors of calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for the estimation of calibrator uncertainty with simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of the uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed for optimizing the process while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows for estimation of calibrator uncertainty for optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
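A hedged sketch of Monte Carlo uncertainty propagation through a simple two-step value-transfer chain is shown below; the chain structure and numbers are assumptions, not the paper's value-assignment protocol.

```python
# Monte Carlo propagation of uncertainty through a toy value-transfer chain
# (reference material -> master calibrator -> product calibrator).
import numpy as np

rng = np.random.default_rng(7)
n_sim = 100_000

ref_value = 1.00                                   # assigned value of reference material
ref_rel_u = 0.037                                  # 3.7% relative standard uncertainty
transfer_cv = 0.005                                # per-step measurement CV (assumed)

ref = rng.normal(ref_value, ref_value * ref_rel_u, n_sim)
master = ref * rng.normal(1.0, transfer_cv, n_sim)          # step 1: value transfer
product = master * rng.normal(1.0, transfer_cv, n_sim)      # step 2: value transfer

total_rel_u = product.std() / product.mean()
added_rel_u = np.sqrt(max(total_rel_u ** 2 - ref_rel_u ** 2, 0.0))
print(f"total ≈ {100 * total_rel_u:.2f}%, added by transfer ≈ {100 * added_rel_u:.2f}%")
```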
Bieler, Noah S; Tschopp, Jan P; Hünenberger, Philippe H
2015-06-09
An extension of the λ-local-elevation umbrella-sampling (λ-LEUS) scheme [ Bieler et al. J. Chem. Theory Comput. 2014 , 10 , 3006 ] is proposed to handle the multistate (MS) situation, i.e. the calculation of the relative free energies of multiple physical states based on a single simulation. The key element of the MS-λ-LEUS approach is to use a single coupling variable Λ controlling successive pairwise mutations between the states of interest in a cyclic fashion. The Λ variable is propagated dynamically as an extended-system variable, using a coordinate transformation with plateaus and a memory-based biasing potential as in λ-LEUS. Compared to other available MS schemes (one-step perturbation, enveloping distribution sampling and conventional λ-dynamics) the proposed method presents a number of important advantages, namely: (i) the physical states are visited explicitly and over finite time periods; (ii) the extent of unphysical space required to ensure transitions is kept minimal and, in particular, one-dimensional; (iii) the setup protocol solely requires the topologies of the physical states; and (iv) the method only requires limited modifications in a simulation code capable of handling two-state mutations. As an initial application, the absolute binding free energies of five alkali cations to three crown ethers in three different solvents are calculated. The results are found to reproduce qualitatively the main experimental trends and, in particular, the experimental selectivity of 18C6 for K(+) in water and methanol, which is interpreted in terms of opposing trends along the cation series between the solvation free energy of the cation and the direct electrostatic interactions within the complex.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Xuesong
2012-12-17
Precipitation is an important input variable for hydrologic and ecological modeling and analysis. Next Generation Radar (NEXRAD) can provide precipitation products that cover most of the continental United States at a high resolution of approximately 4 × 4 km². Two major issues concerning the application of NEXRAD data are (1) the lack of a NEXRAD geo-processing and geo-referencing program and (2) bias correction of NEXRAD estimates. In this chapter, a geographic information system (GIS) based software package that can automatically support processing of NEXRAD data for hydrologic and ecological models is presented. Some geostatistical approaches to calibrating NEXRAD data using rain gauge data are introduced, and two case studies on evaluating the accuracy of the NEXRAD Multisensor Precipitation Estimator (MPE) and calibrating MPE with rain-gauge data are presented. The first case study examines the performance of MPE in a mountainous region versus southern plains and in the cold season versus the warm season, as well as the effect of sub-grid variability and temporal scale on NEXRAD performance. From the results of the first case study, the performance of MPE was found to be influenced by complex terrain, frozen precipitation, sub-grid variability, and temporal scale. Overall, the assessment of MPE indicates the importance of removing the bias of the MPE precipitation product before its application, especially in complex mountainous regions. The second case study examines the performance of three MPE calibration methods using rain gauge observations in the Little River Experimental Watershed in Georgia. The comparison results show that no one method performs better than the others in terms of all evaluation coefficients and for all time steps. For practical estimation of precipitation distribution, implementation of multiple methods to predict spatial precipitation is suggested.
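The simplest form of gauge-based radar bias correction, a mean-field bias factor, can be sketched as below; this is an illustration only and not one of the geostatistical calibration methods compared in the chapter. The grid, gauge locations, and bias are synthetic.

```python
# Mean-field bias correction of a radar precipitation field using collocated gauges.
import numpy as np

rng = np.random.default_rng(8)
radar = rng.gamma(shape=2.0, scale=3.0, size=(50, 50))     # radar rainfall grid (mm)

# assumed gauge network: (row, col) cells with collocated gauge readings
gauge_cells = [(5, 7), (12, 30), (25, 25), (40, 10), (44, 41)]
gauge_obs = np.array([radar[r, c] * 1.25 + rng.normal(0, 0.5) for r, c in gauge_cells])

radar_at_gauges = np.array([radar[r, c] for r, c in gauge_cells])
bias = gauge_obs.sum() / radar_at_gauges.sum()             # mean-field bias factor
radar_corrected = radar * bias
print(f"bias factor ≈ {bias:.2f}")
```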
Zhou, Shang-Ming; Lyons, Ronan A.; Brophy, Sinead; Gravenor, Mike B.
2012-01-01
The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data. PMID:23272108
State estimation and prediction using clustered particle filters.
Lee, Yoonsang; Majda, Andrew J
2016-12-20
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
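For orientation, the sketch below implements a standard bootstrap particle filter on a scalar toy model; it shows the basic forecast-weight-resample machinery that the clustered particle filter builds on, but not the clustering, localization, or particle adjustment steps of the paper.

```python
# Standard bootstrap particle filter on a scalar toy model.
import numpy as np

rng = np.random.default_rng(9)
n_steps, n_particles = 100, 500
obs_sd, model_sd = 0.5, 0.3

def step(x):
    return 0.9 * x + 2.0 * np.sin(0.1 * x)        # toy nonlinear dynamics

# simulate a "true" trajectory and noisy observations
truth = np.zeros(n_steps)
for t in range(1, n_steps):
    truth[t] = step(truth[t - 1]) + rng.normal(0, model_sd)
obs = truth + rng.normal(0, obs_sd, n_steps)

particles = rng.normal(0, 1, n_particles)
estimates = []
for t in range(n_steps):
    particles = step(particles) + rng.normal(0, model_sd, n_particles)   # forecast
    weights = np.exp(-0.5 * ((obs[t] - particles) / obs_sd) ** 2) + 1e-300  # likelihood
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)                # resample
    particles = particles[idx]
    estimates.append(particles.mean())

print("RMSE:", np.sqrt(np.mean((np.array(estimates) - truth) ** 2)))
```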
State estimation and prediction using clustered particle filters
Lee, Yoonsang; Majda, Andrew J.
2016-01-01
Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors. PMID:27930332
General description and understanding of the nonlinear dynamics of mode-locked fiber lasers.
Wei, Huai; Li, Bin; Shi, Wei; Zhu, Xiushan; Norwood, Robert A; Peyghambarian, Nasser; Jian, Shuisheng
2017-05-02
As a type of nonlinear system with complexity, mode-locked fiber lasers are known for their complex behaviour. It is a challenging task to understand the fundamental physics behind such complex behaviour, and a unified description for the nonlinear behaviour and the systematic and quantitative analysis of the underlying mechanisms of these lasers have not been developed. Here, we present a complexity science-based theoretical framework for understanding the behaviour of mode-locked fiber lasers by going beyond reductionism. This hierarchically structured framework provides a model with variable dimensionality, resulting in a simple view that can be used to systematically describe complex states. Moreover, research into the attractors' basins reveals the origin of stochasticity, hysteresis and multistability in these systems and presents a new method for quantitative analysis of these nonlinear phenomena. These findings pave the way for dynamics analysis and system designs of mode-locked fiber lasers. We expect that this paradigm will also enable potential applications in diverse research fields related to complex nonlinear phenomena.
Complexity of food preparation and food security status in low-income young women.
Engler-Stringer, Rachel; Stringer, Bernadette; Haines, Ted
2011-01-01
This study was conducted to explore whether preparing more complex meals was associated with higher food security status. This mixed-methods, community-based study involved the use of semistructured interviews to examine the cooking practices of a group of young, low-income women in Montreal. Fifty participants aged 18 to 35 were recruited at 10 locations in five low-income neighbourhoods. Food security status was the main outcome measure and the main exposure variable, "complex food preparation," combined the preparation of three specific food types (soups, sauces, and baked goods) using basic ingredients. Low-income women preparing a variety of meals using basic ingredients at least three times a week were more than twice as likely to be food secure as were women preparing more complex meals less frequently. Women who prepared more complex meals more frequently had higher food security. Whether this means that preparing more complex foods results in greater food security remains unclear, as this was an exploratory study.
Video Game Telemetry as a Critical Tool in the Study of Complex Skill Learning
Thompson, Joseph J.; Blair, Mark R.; Chen, Lihan; Henrey, Andrew J.
2013-01-01
Cognitive science has long shown interest in expertise, in part because prediction and control of expert development would have immense practical value. Most studies in this area investigate expertise by comparing experts with novices. The reliance on contrastive samples in studies of human expertise only yields deep insight into development where differences are important throughout skill acquisition. This reliance may be pernicious where the predictive importance of variables is not constant across levels of expertise. Before the development of sophisticated machine learning tools for data mining larger samples, and indeed, before such samples were available, it was difficult to test the implicit assumption of static variable importance in expertise development. To investigate if this reliance may have imposed critical restrictions on the understanding of complex skill development, we adopted an alternative method, the online acquisition of telemetry data from a common daily activity for many: video gaming. Using measures of cognitive-motor, attentional, and perceptual processing extracted from game data from 3360 Real-Time Strategy players at 7 different levels of expertise, we identified 12 variables relevant to expertise. We show that the static variable importance assumption is false - the predictive importance of these variables shifted as the levels of expertise increased - and, at least in our dataset, that a contrastive approach would have been misleading. The finding that variable importance is not static across levels of expertise suggests that large, diverse datasets of sustained cognitive-motor performance are crucial for an understanding of expertise in real-world contexts. We also identify plausible cognitive markers of expertise. PMID:24058656
Video game telemetry as a critical tool in the study of complex skill learning.
Thompson, Joseph J; Blair, Mark R; Chen, Lihan; Henrey, Andrew J
2013-01-01
Cognitive science has long shown interest in expertise, in part because prediction and control of expert development would have immense practical value. Most studies in this area investigate expertise by comparing experts with novices. The reliance on contrastive samples in studies of human expertise only yields deep insight into development where differences are important throughout skill acquisition. This reliance may be pernicious where the predictive importance of variables is not constant across levels of expertise. Before the development of sophisticated machine learning tools for data mining larger samples, and indeed, before such samples were available, it was difficult to test the implicit assumption of static variable importance in expertise development. To investigate if this reliance may have imposed critical restrictions on the understanding of complex skill development, we adopted an alternative method, the online acquisition of telemetry data from a common daily activity for many: video gaming. Using measures of cognitive-motor, attentional, and perceptual processing extracted from game data from 3360 Real-Time Strategy players at 7 different levels of expertise, we identified 12 variables relevant to expertise. We show that the static variable importance assumption is false--the predictive importance of these variables shifted as the levels of expertise increased--and, at least in our dataset, that a contrastive approach would have been misleading. The finding that variable importance is not static across levels of expertise suggests that large, diverse datasets of sustained cognitive-motor performance are crucial for an understanding of expertise in real-world contexts. We also identify plausible cognitive markers of expertise.
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional time-series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis: it can be applied more generally (including to Poincaré plots with multiple clusters) and more consistently than the conventional measures, and it can address questions regarding potential structure underlying the variability of a data set.
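For readers unfamiliar with the construction, the following minimal sketch builds lagged Poincaré point clouds (RR_n, RR_{n+delay}) for several delays and reports the conventional SD1/SD2 ellipse descriptors at each lag. It does not implement TPV itself, which quantifies the temporal distribution of points; the file name rr_intervals.txt is a hypothetical RR series.

```python
import numpy as np

def poincare_points(rr, delay=1):
    """Point cloud (RR_n, RR_{n+delay}) of a lagged Poincare plot."""
    return rr[:-delay], rr[delay:]

def sd1_sd2(rr, delay=1):
    """Conventional ellipse descriptors of the plot at the chosen lag."""
    x, y = poincare_points(rr, delay)
    d1 = (y - x) / np.sqrt(2.0)   # spread perpendicular to the identity line
    d2 = (y + x) / np.sqrt(2.0)   # spread along the identity line
    return d1.std(ddof=1), d2.std(ddof=1)

rr = np.loadtxt("rr_intervals.txt")        # hypothetical RR interval series (ms)
for lag in (1, 2, 5, 10):                  # multiple time delays, as in the TPV idea
    print(lag, sd1_sd2(rr, lag))
```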
2014-01-01
Background Numerous social factors, generally studied in isolation, have been associated with older adults’ health. Even so, older people’s social circumstances are complex and an approach which embraces this complexity is desirable. Here we investigate many social factors in relation to one another and to survival among older adults using a social ecology perspective to measure social vulnerability among older adults. Methods 2740 adults aged 65 and older were followed for ten years in the Canadian National Population Health Survey (NPHS). Twenty-three individual-level social variables were drawn from the 1994 NPHS and five Enumeration Area (EA)-level variables were abstracted from the 1996 Canadian Census using postal code linkage. Principal Component Analysis (PCA) was used to identify dimensions of social vulnerability. All social variables were summed to create a social vulnerability index which was studied in relation to ten-year mortality. Results The PCA was limited by low variance (47%) explained by emergent factors. Seven dimensions of social vulnerability emerged in the most robust, yet limited, model: social support, engagement, living situation, self-esteem, sense of control, relations with others and contextual socio-economic status. These dimensions showed complex inter-relationships and were situated within a social ecology framework, considering spheres of influence from the individual through to group, neighbourhood and broader societal levels. Adjusting for age, sex, and frailty, increasing social vulnerability measured using the cumulative social vulnerability index was associated with increased risk of mortality over ten years in a Cox regression model (HR 1.04, 95% CI:1.01-1.07, p = 0.01). Conclusions Social vulnerability has important independent influence on older adults’ health though relationships between contributing variables are complex and do not lend themselves well to fragmentation into a small number of discrete factors. A social ecology perspective provides a candidate framework for further study of social vulnerability among older adults. PMID:25129548
Machine learning methods applied on dental fear and behavior management problems in children.
Klingberg, G; Sillén, R; Norén, J G
1999-08-01
The etiologies of dental fear and dental behavior management problems in children were investigated in a database of information on 2,257 Swedish children 4-6 and 9-11 years old. The analyses were performed using computerized inductive techniques within the field of artificial intelligence. The database held information regarding dental fear levels and behavior management problems, which were defined as outcomes, i.e. dependent variables. The attributes, i.e. independent variables, included data on dental health and dental treatments, information about parental dental fear, general anxiety, socioeconomic variables, etc. The data contained both numerical and discrete variables. The analyses were performed using an inductive analysis program (XpertRule Analyser, Attar Software Ltd, Lancashire, UK) that presents the results in a hierarchic diagram called a knowledge tree. The importance of the different attributes is represented by their position in this diagram. The results show that inductive methods are well suited for analyzing multifactorial and complex relationships in large data sets, and are thus a useful complement to multivariate statistical techniques. The knowledge trees for the two outcomes, dental fear and behavior management problems, were very different from each other, suggesting that the two phenomena are not equivalent. Dental fear was found to be more related to non-dental variables, whereas dental behavior management problems seemed connected to dental variables.
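Comparable inductive analyses can be reproduced with a standard decision-tree learner; the sketch below prints a text rendering of the fitted tree, in which attributes near the root play the role of the most important ones in the study's knowledge tree. The file and column names are hypothetical, and scikit-learn's CART implementation is a stand-in for the commercial XpertRule Analyser used in the paper.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical columns standing in for the study's attributes and outcome.
df = pd.read_csv("child_dental_data.csv")
X = pd.get_dummies(df[["parental_fear", "general_anxiety", "n_fillings", "age_group", "ses"]])
y = df["behaviour_management_problem"]

# A shallow tree plays the role of the hierarchic "knowledge tree":
# attributes chosen nearer the root are the more influential ones.
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```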
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
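A minimal way to screen for influential inputs in this spirit is to draw a Monte Carlo sample, run it through the model, and rank inputs by the strength of their rank correlation with the outcome. The toy fate model and the input distributions below are hypothetical placeholders, not the multimedia model analysed in the report.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical input distributions of a toy fate-and-effects model.
inputs = {
    "emission_rate":  rng.lognormal(0.0, 0.5, n),
    "partition_coef": rng.lognormal(1.0, 1.0, n),
    "degradation_k":  rng.uniform(0.01, 0.1, n),
    "river_flow":     rng.normal(100.0, 10.0, n),
}

# Toy outcome standing in for the multimedia model prediction.
out = inputs["emission_rate"] * inputs["partition_coef"] / (
    inputs["degradation_k"] * inputs["river_flow"])

# Rank inputs by |Spearman rho| with the outcome; typically a small subset dominates.
influence = {k: abs(spearmanr(v, out)[0]) for k, v in inputs.items()}
for name, rho in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} |rho| = {rho:.2f}")
```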
Zoning method for environmental engineering geological patterns in underground coal mining areas.
Liu, Shiliang; Li, Wenping; Wang, Qiqing
2018-09-01
Environmental engineering geological patterns (EEGPs) are used to express the trend and intensity of eco-geological environmental change caused by mining in underground coal mining areas, a complex process controlled by multiple factors. A new zoning method for EEGPs was developed based on the variable-weight theory (VWT), where the weights of factors vary with their values. The method was applied to the Yushenfu mining area, Shaanxi, China. First, the mechanism of the EEGPs caused by mining was elucidated, and four types of EEGPs were proposed. Subsequently, 13 key control factors were selected from mining conditions, lithosphere, hydrosphere, ecosphere, and climatic conditions; their thematic maps were constructed using ArcGIS software and remote-sensing technologies. Then, a stimulation-punishment variable-weight model, derived from the partition of the basic evaluation units of the study area, the construction of a partition state-variable-weight vector, and the determination of the variable-weight interval, was built to calculate the variable weights of each factor. On this basis, a zoning mathematical model of EEGPs was established, and the zoning results were analyzed. For comparison, the traditional constant-weight theory (CWT) was also applied to divide the EEGPs. Finally, the zoning results obtained using VWT and CWT were compared. Verification against field investigation indicates that VWT is more accurate and reliable than CWT. The zoning results are consistent with the actual situation and provide a key basis for planning the rational development of coal resources and the protection of the eco-geological environment. Copyright © 2018 Elsevier B.V. All rights reserved.
Shi, Yuan; Lau, Kevin Ka-Lun; Ng, Edward
2017-08-01
Urban air quality is an important determinant of the quality of urban life. Land use regression (LUR) modelling of air quality is essential for conducting health impact assessments but is more challenging in a mountainous, high-density urban scenario due to the complexities of the urban environment. In this study, a total of 21 LUR models are developed for seven kinds of air pollutants (the gaseous air pollutants CO, NO2, NOx, O3 and SO2 and the particulate air pollutants PM2.5 and PM10) with reference to three different time periods (summertime, wintertime and the annual average of 5-year long-term hourly monitoring data from the local air quality monitoring network) in Hong Kong. Under the mountainous high-density urban scenario, we improved the traditional LUR modelling method by incorporating wind availability information into LUR modelling based on surface geomorphometrical analysis. As a result, 269 independent variables were examined to develop the LUR models by using the "ADDRESS" independent variable selection method and stepwise multiple linear regression (MLR). Cross-validation was performed for each resultant model. The results show that wind-related variables are included in most of the resultant models as statistically significant independent variables. Compared with the traditional method, a maximum increase of 20% was achieved in the prediction performance of the annual averaged NO2 concentration level by incorporating wind-related variables into LUR model development. Copyright © 2017 Elsevier Inc. All rights reserved.
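The general shape of such a variable-selection step can be sketched with a cross-validated forward selection over candidate predictors, wind-related ones included. This is not the paper's "ADDRESS" procedure or its exact stepwise MLR; the file name, column names, and stopping threshold are hypothetical.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("lur_predictors.csv")          # hypothetical: one row per monitoring site
y = df["NO2_annual"]
candidates = [c for c in df.columns if c != "NO2_annual"]   # incl. wind-related variables

selected, best_score = [], -np.inf
while candidates:
    # score every remaining candidate when added to the current model
    scores = {c: cross_val_score(LinearRegression(), df[selected + [c]], y,
                                 cv=5, scoring="r2").mean() for c in candidates}
    c_best = max(scores, key=scores.get)
    if scores[c_best] <= best_score + 1e-3:     # stop when no meaningful gain remains
        break
    selected.append(c_best)
    candidates.remove(c_best)
    best_score = scores[c_best]

print("selected variables:", selected, "cross-validated R2:", round(best_score, 2))
```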
NASA Astrophysics Data System (ADS)
Chiu, Hung-Chih; Lin, Yen-Hung; Lo, Men-Tzung; Tang, Sung-Chun; Wang, Tzung-Dau; Lu, Hung-Chun; Ho, Yi-Lwun; Ma, Hsi-Pin; Peng, Chung-Kang
2015-08-01
The hierarchical interaction between electrical signals of the brain and heart is not fully understood. We hypothesized that the complexity of cardiac electrical activity can be used to predict changes in encephalic electricity after stress. Most methods for analyzing the interaction between the heart rate variability (HRV) and electroencephalography (EEG) require a computation-intensive mathematical model. To overcome these limitations and increase the predictive accuracy of human relaxing states, we developed a method to test our hypothesis. In addition to routine linear analysis, multiscale entropy and detrended fluctuation analysis of the HRV were used to quantify nonstationary and nonlinear dynamic changes in the heart rate time series. Short-time Fourier transform was applied to quantify the power of EEG. The clinical, HRV, and EEG parameters of postcatheterization EEG alpha waves were analyzed using change-score analysis and generalized additive models. In conclusion, the complexity of cardiac electrical signals can be used to predict EEG changes after stress.
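For orientation, a naive multiscale sample entropy routine along the lines used in such HRV analyses is sketched below; the tolerance fraction, embedding dimension, and scale range are common defaults rather than the study's settings, and the O(N^2) pairwise distance matrix is only suitable for short segments.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) of a 1-D series; naive O(N^2) implementation for short segments."""
    x = np.asarray(x, float)
    r = r_frac * x.std(ddof=1)
    n_templates = len(x) - m            # same template count for m and m+1

    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(n_templates)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)     # exclude self-matches
        return np.sum(d <= r)

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.nan

def multiscale_entropy(x, scales=range(1, 11)):
    """Coarse-grain the series at each scale and compute SampEn."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s], float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out
```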
Spike-In Normalization of ChIP Data Using DNA-DIG-Antibody Complex.
Eberle, Andrea B
2018-01-01
Chromatin immunoprecipitation (ChIP) is a widely used method to determine the occupancy of specific proteins within the genome, helping to unravel the function and activity of specific genomic regions. In ChIP experiments, normalization of the obtained data by a suitable internal reference is crucial. However, particularly when comparing differently treated samples, such a reference is difficult to identify. Here, a simple method to improve the accuracy and reliability of ChIP experiments with the help of an external reference is described. An artificial molecule, composed of a well-defined digoxigenin (DIG)-labeled DNA fragment in complex with an anti-DIG antibody, is synthesized and added to each chromatin sample before immunoprecipitation. During the ChIP procedure, the DNA-DIG-antibody complex undergoes the same treatments as the chromatin and is therefore purified and quantified together with the chromatin of interest. This external reference compensates for variability during the ChIP routine and improves the similarity between replicates, thereby emphasizing the biological differences between samples.
Chiu, Hung-Chih; Lin, Yen-Hung; Lo, Men-Tzung; Tang, Sung-Chun; Wang, Tzung-Dau; Lu, Hung-Chun; Ho, Yi-Lwun; Ma, Hsi-Pin; Peng, Chung-Kang
2015-01-01
The hierarchical interaction between electrical signals of the brain and heart is not fully understood. We hypothesized that the complexity of cardiac electrical activity can be used to predict changes in encephalic electricity after stress. Most methods for analyzing the interaction between the heart rate variability (HRV) and electroencephalography (EEG) require a computation-intensive mathematical model. To overcome these limitations and increase the predictive accuracy of human relaxing states, we developed a method to test our hypothesis. In addition to routine linear analysis, multiscale entropy and detrended fluctuation analysis of the HRV were used to quantify nonstationary and nonlinear dynamic changes in the heart rate time series. Short-time Fourier transform was applied to quantify the power of EEG. The clinical, HRV, and EEG parameters of postcatheterization EEG alpha waves were analyzed using change-score analysis and generalized additive models. In conclusion, the complexity of cardiac electrical signals can be used to predict EEG changes after stress. PMID:26286628
NASA Astrophysics Data System (ADS)
Reinhardt, Katja; Samimi, Cyrus
2018-01-01
While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database that is indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. A special focus is on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists, which can equally be applied for all pressure levels, or whether different interpolation methods have to be used for the different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and ordinary kriging. Overall, explanatory variables improve the interpolation results.
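The simplest of the deterministic methods mentioned above, inverse distance weighting, can be written in a few lines; the station coordinates and u-component values are hypothetical. Regression-kriging, the best performer in the study, would additionally fit a trend on explanatory variables and krige the residuals, which is beyond this sketch.

```python
import numpy as np

def idw(xy_obs, val_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered values to new locations."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ val_obs) / w.sum(axis=1)

# Hypothetical station coordinates (lon, lat) and u-wind values at one pressure level.
stations = np.array([[70.1, 39.5], [71.3, 40.2], [72.0, 38.9]])
u_obs = np.array([3.2, -1.5, 0.8])
grid = np.array([[70.5, 39.8], [71.8, 39.2]])
print(idw(stations, u_obs, grid))
```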
Toward a methodical framework for comprehensively assessing forest multifunctionality.
Trogisch, Stefan; Schuldt, Andreas; Bauhus, Jürgen; Blum, Juliet A; Both, Sabine; Buscot, François; Castro-Izaguirre, Nadia; Chesters, Douglas; Durka, Walter; Eichenberg, David; Erfmeier, Alexandra; Fischer, Markus; Geißler, Christian; Germany, Markus S; Goebes, Philipp; Gutknecht, Jessica; Hahn, Christoph Zacharias; Haider, Sylvia; Härdtle, Werner; He, Jin-Sheng; Hector, Andy; Hönig, Lydia; Huang, Yuanyuan; Klein, Alexandra-Maria; Kühn, Peter; Kunz, Matthias; Leppert, Katrin N; Li, Ying; Liu, Xiaojuan; Niklaus, Pascal A; Pei, Zhiqin; Pietsch, Katherina A; Prinz, Ricarda; Proß, Tobias; Scherer-Lorenzen, Michael; Schmidt, Karsten; Scholten, Thomas; Seitz, Steffen; Song, Zhengshan; Staab, Michael; von Oheimb, Goddert; Weißbecker, Christina; Welk, Erik; Wirth, Christian; Wubet, Tesfaye; Yang, Bo; Yang, Xuefei; Zhu, Chao-Dong; Schmid, Bernhard; Ma, Keping; Bruelheide, Helge
2017-12-01
Biodiversity-ecosystem functioning (BEF) research has extended its scope from communities that are short-lived or reshape their structure annually to structurally complex forest ecosystems. The establishment of tree diversity experiments poses specific methodological challenges for assessing the multiple functions provided by forest ecosystems. In particular, methodological inconsistencies and nonstandardized protocols impede the analysis of multifunctionality within, and comparability across the increasing number of tree diversity experiments. By providing an overview on key methods currently applied in one of the largest forest biodiversity experiments, we show how methods differing in scale and simplicity can be combined to retrieve consistent data allowing novel insights into forest ecosystem functioning. Furthermore, we discuss and develop recommendations for the integration and transferability of diverse methodical approaches to present and future forest biodiversity experiments. We identified four principles that should guide basic decisions concerning method selection for tree diversity experiments and forest BEF research: (1) method selection should be directed toward maximizing data density to increase the number of measured variables in each plot. (2) Methods should cover all relevant scales of the experiment to consider scale dependencies of biodiversity effects. (3) The same variable should be evaluated with the same method across space and time for adequate larger-scale and longer-time data analysis and to reduce errors due to changing measurement protocols. (4) Standardized, practical and rapid methods for assessing biodiversity and ecosystem functions should be promoted to increase comparability among forest BEF experiments. We demonstrate that currently available methods provide us with a sophisticated toolbox to improve a synergistic understanding of forest multifunctionality. However, these methods require further adjustment to the specific requirements of structurally complex and long-lived forest ecosystems. By applying methods connecting relevant scales, trophic levels, and above- and belowground ecosystem compartments, knowledge gain from large tree diversity experiments can be optimized.
Are Complexity Metrics Reliable in Assessing HRV Control in Obese Patients During Sleep?
Cabiddu, Ramona; Trimer, Renata; Borghi-Silva, Audrey; Migliorini, Matteo; Mendes, Renata G; Oliveira, Antonio D; Costa, Fernando S M; Bianchi, Anna M
2015-01-01
Obesity is associated with cardiovascular mortality. Linear methods, including time domain and frequency domain analysis, are normally applied to the heart rate variability (HRV) signal to investigate autonomic cardiovascular control, whose imbalance might promote cardiovascular disease in these patients. However, given the cardiac activity non-linearities, non-linear methods might provide better insight. HRV complexity was hereby analyzed during wakefulness and different sleep stages in healthy and obese subjects. Given the short duration of each sleep stage, complexity measures, normally extracted from long-period signals, needed to be calculated on short-term signals. Sample entropy, Lempel-Ziv complexity and detrended fluctuation analysis were evaluated and results showed no significant differences among the values calculated over ten-minute signals and longer durations, confirming the reliability of such analysis when performed on short-term signals. Complexity parameters were extracted from ten-minute signal portions selected during wakefulness and different sleep stages on HRV signals obtained from eighteen obese patients and twenty controls. The obese group presented significantly reduced complexity during light and deep sleep, suggesting a deficiency in the control mechanisms integration during these sleep stages. To our knowledge, this study reports for the first time on how the HRV complexity changes in obesity during wakefulness and sleep. Further investigation is needed to quantify altered HRV impact on cardiovascular mortality in obesity.
Are Complexity Metrics Reliable in Assessing HRV Control in Obese Patients During Sleep?
Cabiddu, Ramona; Trimer, Renata; Borghi-Silva, Audrey; Migliorini, Matteo; Mendes, Renata G.; Oliveira Jr., Antonio D.; Costa, Fernando S. M.; Bianchi, Anna M.
2015-01-01
Obesity is associated with cardiovascular mortality. Linear methods, including time domain and frequency domain analysis, are normally applied to the heart rate variability (HRV) signal to investigate autonomic cardiovascular control, whose imbalance might promote cardiovascular disease in these patients. However, given the cardiac activity non-linearities, non-linear methods might provide better insight. HRV complexity was hereby analyzed during wakefulness and different sleep stages in healthy and obese subjects. Given the short duration of each sleep stage, complexity measures, normally extracted from long-period signals, needed to be calculated on short-term signals. Sample entropy, Lempel-Ziv complexity and detrended fluctuation analysis were evaluated and results showed no significant differences among the values calculated over ten-minute signals and longer durations, confirming the reliability of such analysis when performed on short-term signals. Complexity parameters were extracted from ten-minute signal portions selected during wakefulness and different sleep stages on HRV signals obtained from eighteen obese patients and twenty controls. The obese group presented significantly reduced complexity during light and deep sleep, suggesting a deficiency in the control mechanisms integration during these sleep stages. To our knowledge, this study reports for the first time on how the HRV complexity changes in obesity during wakefulness and sleep. Further investigation is needed to quantify altered HRV impact on cardiovascular mortality in obesity. PMID:25893856
ERIC Educational Resources Information Center
Tagarelli, Kaitlyn M.; Ruiz, Simón; Vega, José Luis Moreno; Rebuschat, Patrick
2016-01-01
Second language learning outcomes are highly variable, due to a variety of factors, including individual differences, exposure conditions, and linguistic complexity. However, exactly how these factors interact to influence language learning is unknown. This article examines the relationship between these three variables in language learners.…
Diminished heart rate complexity in adolescent girls: a sign of vulnerability to anxiety disorders?
Fiol-Veny, Aina; De la Torre-Luque, Alejandro; Balle, Maria; Bornas, Xavier
2018-07-01
Diminished heart rate variability has been found to be associated with high anxiety symptomatology. Since adolescence is the period of onset for many anxiety disorders, this study aimed to determine sex- and anxiety-related differences in heart rate variability and complexity in adolescents. We created four groups according to sex and anxiety symptomatology: high-anxiety girls (n = 24) and boys (n = 25), and low-anxiety girls (n = 22) and boys (n = 24) and recorded their cardiac function while they performed regular school activities. A series of two-way (sex and anxiety) MANOVAs were performed on time domain variability, frequency domain variability, and non-linear complexity. We obtained no multivariate interaction effects between sex and anxiety, but highly anxious participants had lower heart rate variability than the low-anxiety group. Regarding sex, girls showed lower heart rate variability and complexity than boys. The results suggest that adolescent girls have a less flexible cardiac system that could be a marker of the girls' vulnerability to developing anxiety disorders.
NASA Astrophysics Data System (ADS)
Roşca, S.; Bilaşco, Ş.; Petrea, D.; Fodorean, I.; Vescan, I.; Filip, S.; Măguţ, F.-L.
2015-11-01
The existence of a large number of GIS models for the identification of landslide occurrence probability makes the selection of a specific one difficult. The present study focuses on the application of two quantitative models: the logistic and the BSA models. The comparative analysis of the results aims at identifying the most suitable model. The territory corresponding to the Niraj Mic Basin (87 km²) is an area characterised by a wide variety of landforms, with diverse morphometric, morphographical and geological characteristics, as well as by a high complexity of land use types where active landslides exist. This is the reason why it represents the test area for applying the two models and for the comparison of the results. The large complexity of input variables is illustrated by 16 factors which were represented as 72 dummy variables, analysed on the basis of their importance within the model structures. The testing of the statistical significance corresponding to each variable reduced the number of dummy variables to 12, which were considered significant for the test area within the logistic model, whereas for the BSA model all the variables were employed. The predictability degree of the models was tested by computing the area under the ROC curve, which indicated a good accuracy (AUROC = 0.86 for the testing area) and predictability of the logistic model (AUROC = 0.63 for the validation area).
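A generic version of the logistic-model step, with dummy-coded conditioning factors and an AUROC check on held-out cells, might look as follows; the file name, column names, and split are hypothetical, and the BSA model is not reproduced here.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("landslide_cells.csv")            # hypothetical: one row per terrain cell
X = pd.get_dummies(df.drop(columns="landslide"))   # dummy-coded conditioning factors
y = df["landslide"]                                # 1 = landslide present, 0 = absent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```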
Design of a WSN for the Sampling of Environmental Variability in Complex Terrain
Martín-Tardío, Miguel A.; Felicísimo, Ángel M.
2014-01-01
In-situ environmental parameter measurements using sensor systems connected to a wireless network have become widespread, but the problem of monitoring large and mountainous areas by means of a wireless sensor network (WSN) is not well resolved. The main reasons for this are: (1) the environmental variability distribution is unknown in the field; (2) without this knowledge, a huge number of sensors would be necessary to ensure the complete coverage of the environmental variability; and (3) WSN design requirements, for example, effective connectivity (intervisibility), limiting distances and controlled redundancy, are usually solved by trial and error. Using temperature as the target environmental variable, we propose: (1) a method to determine the homogeneous environmental classes to be sampled using the digital elevation model (DEM) and geometric simulations and (2) a procedure to determine an effective WSN design in complex terrain in terms of the number of sensors, redundancy, cost and spatial distribution. The proposed methodology, based on geographic information systems and binary integer programming, can be easily adapted to a wide range of applications that need exhaustive and continuous environmental monitoring with high spatial resolution. The results show that the WSN design is perfectly suited to the topography and the technical specifications of the sensors, and provides a complete coverage of the environmental variability in terms of Sun exposure. However, these results still need to be validated in the field and the proposed procedure must be refined. PMID:25412218
Fabre, Michel; Koeck, Jean-Louis; Le Flèche, Philippe; Simon, Fabrice; Hervé, Vincent; Vergnaud, Gilles; Pourcel, Christine
2004-01-01
We have analyzed, using complementary molecular methods, the diversity of 43 strains of “Mycobacterium canettii” originating from the Republic of Djibouti, on the Horn of Africa, from 1998 to 2003. Genotyping by multiple-locus variable-number tandem repeat analysis shows that all the strains belong to a single but very distant group when compared to strains of the Mycobacterium tuberculosis complex (MTBC). Thirty-one strains cluster into one large group with little variability and five strains form another group, whereas the other seven are more diverged. In total, 14 genotypes are observed. The DR locus analysis reveals additional variability, some strains being devoid of a direct repeat locus and others having unique spacers. The hsp65 gene polymorphism was investigated by restriction enzyme analysis and sequencing of PCR amplicons. Four new single nucleotide polymorphisms were discovered. One strain was characterized by three nucleotide changes in 441 bp, creating new restriction enzyme polymorphisms. As no sequence variability was found for hsp65 in the whole MTBC, and as a single point mutation separates M. tuberculosis from the closest “M. canettii” strains, this diversity within “M. canettii” subspecies strongly suggests that it is the most probable source species of the MTBC rather than just another branch of the MTBC. PMID:15243089
Geomatic methods at the service of water resources modelling
NASA Astrophysics Data System (ADS)
Molina, José-Luis; Rodríguez-Gonzálvez, Pablo; Molina, Mª Carmen; González-Aguilera, Diego; Espejo, Fernando
2014-02-01
Acquisition, management and/or use of spatial information are crucial for the quality of water resources studies. In this sense, several geomatic methods arise at the service of water modelling, aiming at the generation of cartographic products, especially in terms of 3D models and orthophotos. They may also perform as tools for problem solving and decision making. However, choosing the right geomatic method is still a challenge in this field. That is mostly due to the complexity of the different applications and variables involved in water resources management. This study aims to provide a guide to best practices in this context through an in-depth review of geomatic methods and an assessment of their suitability for the following study types: Surface Hydrology, Groundwater Hydrology, Hydraulics, Agronomy, Morphodynamics and Geotechnical Processes. This assessment is driven by several decision variables grouped in two categories, classified depending on their nature as geometric or radiometric. As a result, the reader is presented with the best choice or choices of method to use, depending on the type of water resources modelling study in hand.
McCarty, James; Parrinello, Michele
2017-11-28
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
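The time-lagged independent component analysis step can be sketched as a generalized eigenvalue problem on the instantaneous and time-lagged covariance matrices of the collective-variable trajectory. The sketch ignores the reweighting needed for biased (metadynamics) data and assumes a hypothetical trajectory file; the lag time is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import eigh

def tica(X, lag=10):
    """Time-lagged independent component analysis of a (n_frames, n_features) trajectory."""
    X = X - X.mean(axis=0)
    X0, Xt = X[:-lag], X[lag:]
    C0 = (X0.T @ X0) / len(X0)        # instantaneous covariance (assumed positive definite)
    Ct = (X0.T @ Xt) / len(X0)        # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)            # symmetrize for a real spectrum
    evals, evecs = eigh(Ct, C0)       # generalized eigenvalue problem
    order = np.argsort(evals)[::-1]   # slowest modes (largest eigenvalues) first
    return evals[order], evecs[:, order]

traj = np.load("cv_trajectory.npy")   # hypothetical collective-variable time series
evals, modes = tica(traj, lag=50)
slow_cv = traj @ modes[:, 0]          # candidate improved collective variable
```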
NASA Astrophysics Data System (ADS)
McCarty, James; Parrinello, Michele
2017-11-01
In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
Application of effective discharge analysis to environmental flow decision-making
McKay, S. Kyle; Freeman, Mary C.; Covich, A.P.
2016-01-01
Well-informed river management decisions rely on an explicit statement of objectives, repeatable analyses, and a transparent system for assessing trade-offs. These components may then be applied to compare alternative operational regimes for water resource infrastructure (e.g., diversions, locks, and dams). Intra- and inter-annual hydrologic variability further complicates these already complex environmental flow decisions. Effective discharge analysis (developed in studies of geomorphology) is a powerful tool for integrating temporal variability of flow magnitude and associated ecological consequences. Here, we adapt the effectiveness framework to include multiple elements of the natural flow regime (i.e., timing, duration, and rate-of-change) as well as two flow variables. We demonstrate this analytical approach using a case study of environmental flow management based on long-term (60 years) daily discharge records in the Middle Oconee River near Athens, GA, USA. Specifically, we apply an existing model for estimating young-of-year fish recruitment based on flow-dependent metrics to an effective discharge analysis that incorporates hydrologic variability and multiple focal taxa. We then compare three alternative methods of environmental flow provision. Percentage-based withdrawal schemes outcompete other environmental flow methods across all levels of water withdrawal and ecological outcomes.
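In outline, an effectiveness calculation multiplies the frequency of each flow class by its ecological (or geomorphic) effect and looks for the class where the product peaks. The daily-flow file and the logistic recruitment curve below are hypothetical stand-ins for the records and the flow-dependent recruitment model used in the study.

```python
import numpy as np

def effective_discharge(q_daily, effect_fn, bins=50):
    """Flow class whose frequency x effect product (effectiveness) is largest."""
    counts, edges = np.histogram(q_daily, bins=bins)
    freq = counts / counts.sum()                  # frequency of each flow class
    centers = 0.5 * (edges[:-1] + edges[1:])
    effectiveness = freq * effect_fn(centers)     # frequency-weighted effect
    return centers[np.argmax(effectiveness)], centers, effectiveness

q = np.loadtxt("daily_discharge.txt")             # hypothetical multi-decade daily record
recruit = lambda qc: 1.0 / (1.0 + np.exp((qc - 150.0) / 30.0))   # toy recruitment curve
q_eff, centers, eff = effective_discharge(q, recruit)
print("most effective discharge class:", round(q_eff, 1))
```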
Application of Effective Discharge Analysis to Environmental Flow Decision-Making.
McKay, S Kyle; Freeman, Mary C; Covich, Alan P
2016-06-01
Well-informed river management decisions rely on an explicit statement of objectives, repeatable analyses, and a transparent system for assessing trade-offs. These components may then be applied to compare alternative operational regimes for water resource infrastructure (e.g., diversions, locks, and dams). Intra- and inter-annual hydrologic variability further complicates these already complex environmental flow decisions. Effective discharge analysis (developed in studies of geomorphology) is a powerful tool for integrating temporal variability of flow magnitude and associated ecological consequences. Here, we adapt the effectiveness framework to include multiple elements of the natural flow regime (i.e., timing, duration, and rate-of-change) as well as two flow variables. We demonstrate this analytical approach using a case study of environmental flow management based on long-term (60 years) daily discharge records in the Middle Oconee River near Athens, GA, USA. Specifically, we apply an existing model for estimating young-of-year fish recruitment based on flow-dependent metrics to an effective discharge analysis that incorporates hydrologic variability and multiple focal taxa. We then compare three alternative methods of environmental flow provision. Percentage-based withdrawal schemes outcompete other environmental flow methods across all levels of water withdrawal and ecological outcomes.
An evaluation of human factors research for ultrasonic inservice inspection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pond, D.J.; Donohoo, D.T.; Harris, R.V. Jr.
1998-03-01
This work was undertaken to determine if human factors research has yielded information applicable to upgrading requirements in ASME Boiler and Pressure Vessel Code Section XI, improving methods and techniques in Section V, and/or suggesting relevant research. A preference was established for information and recommendations which have become accepted and standard practice. Manual Ultrasonic Testing/Inservice Inspection (UT/ISI) is a complex task subject to influence by dozens of variables. This review frequently revealed equivocal findings regarding effects of environmental variables as well as repeated indications that inspection performance may be more, and more reliably, influenced by the workers' social environment, including managerial practices, than by other situational variables. Also of significance are each inspector's relevant knowledge, skills, and abilities, and determination of these is seen as a necessary first step in upgrading requirements, methods, and techniques as well as in focusing research in support of such programs. While understanding the effects and mediating mechanisms of the variables impacting inspection performance is a worthwhile pursuit for researchers, initial improvements in industrial UT/ISI performance may be achieved by implementing practices already known to mitigate the effects of potentially adverse conditions. 52 refs., 2 tabs.
Random forests for classification in ecology
Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J.
2007-01-01
Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature. © 2007 by the Ecological Society of America.
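A baseline run of this kind of analysis with scikit-learn is shown below: cross-validated accuracy, the out-of-bag estimate, and the built-in variable importances. The file and column names are hypothetical, and the hyperparameters are generic defaults rather than those used in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("species_presence.csv")   # hypothetical presence/absence survey data
X = df.drop(columns="present")
y = df["present"]

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
print("10-fold CV accuracy:", cross_val_score(rf, X, y, cv=10).mean())

rf.fit(X, y)
print("out-of-bag accuracy:", rf.oob_score_)
importance = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head(10))                  # RF's built-in variable importance ranking
```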
NASA Astrophysics Data System (ADS)
Mohamed, Marwa E.; Frag, Eman Y. Z.; Hathoot, Abla A.; Shalaby, Essam A.
2018-01-01
A simple, accurate and robust spectrophotometric method was developed for the determination of the fenoprofen calcium drug (FPC). The proposed method was based on the charge transfer (CT) reaction of the FPC drug (as n-electron donor) with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), 2,4,6-trinitrophenol (picric acid, PA) or 1,2,5,8-tetrahydroxyanthraquinone (Quinalizarin, QZ) (as π-acceptors) to give highly colored charge transfer complexes. Different variables affecting the reaction, such as reagent concentration, temperature and time, were carefully optimized to achieve the highest sensitivity. Beer's law was obeyed over the concentration ranges of 2-60, 0.6-90 and 4-30 μg mL⁻¹ using the DDQ, PA and QZ CT reagents, respectively, with correlation coefficients of 0.9986, 0.9989 and 0.997 and detection limits of 1.78, 0.48 and 2.6 μg mL⁻¹ for the CT reagents in the same order. Elucidation of the chemical structure of the solid CT complexes formed via reaction between the drug under study and the π-acceptors was carried out using elemental and thermal analyses, IR, ¹H NMR and mass spectrometry. X-ray diffraction was used to estimate the crystallinity of the CT complexes. Their biological activities were screened against different bacterial and fungal organisms. The method was applied successfully, with satisfactory results, for the determination of the FPC drug in fenoprofen capsules. The method was validated with respect to linearity, limits of detection and quantification, inter- and intra-day precision, and accuracy. The proposed method gave results comparable with those of the official method.
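The calibration arithmetic behind such figures of merit is a straight-line fit of absorbance against concentration, with detection and quantification limits estimated from the residual scatter and the slope. The absorbance values below are made-up illustrative numbers, not the paper's data.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical calibration points for one charge-transfer complex (concentration in ug/mL).
conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])
absorb = np.array([0.041, 0.100, 0.205, 0.401, 0.812, 1.198])

fit = linregress(conc, absorb)                    # Beer's law: A = slope * C + intercept
resid_sd = np.std(absorb - (fit.slope * conc + fit.intercept), ddof=2)
lod = 3.3 * resid_sd / fit.slope                  # common 3.3*sigma/slope estimate
loq = 10.0 * resid_sd / fit.slope
print(f"r = {fit.rvalue:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")

unknown_abs = 0.350                               # absorbance of a capsule extract (made up)
print("predicted concentration:", (unknown_abs - fit.intercept) / fit.slope, "ug/mL")
```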
Francq, Bernard G; Govaerts, Bernadette
2016-06-30
Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
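To make the contrast concrete, the sketch below computes Bland-Altman limits of agreement and a Deming (errors-in-variables) regression for paired measurements from two methods; the input files are hypothetical, and the correlated-errors extension and the tolerance/predictive intervals proposed in the paper are not reproduced.

```python
import numpy as np

def bland_altman(x, y):
    """Mean difference (bias) and 95% limits of agreement between two methods."""
    diff = y - x
    m, s = diff.mean(), diff.std(ddof=1)
    return m, m - 1.96 * s, m + 1.96 * s

def deming(x, y, lam=1.0):
    """Deming regression slope/intercept; lam = ratio of the two error variances."""
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()

x = np.loadtxt("method_A.txt")   # hypothetical paired measurements
y = np.loadtxt("method_B.txt")
print("bias and limits of agreement:", bland_altman(x, y))
print("Deming slope and intercept:", deming(x, y))
```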
Raimondi, G; Chillemi, S; Michelassi, C; Di Garbo, A; Varanini, M; Legramante, J; Balocchi, R
2002-07-01
Orthostatic intolerance is the most serious symptom of cardiovascular deconditioning induced by microgravity. However, the exact mechanisms underlying these alterations have not been completely clarified. Several methods for studying the time series of systolic arterial pressure and RR interval have been proposed both in the time and in the frequency domain. However, these methods did not produce definitive results. In fact, heart rate and arterial pressure show a complex pattern of global variability, which is likely due to nonlinear feedback involving the autonomic nervous system and to "stochastic" influences. The aim of this study was to evaluate the degree of interdependence between the mechanisms responsible for the variability of SAP and RR signals in subjects exposed to head down (HD). This quantification was achieved by using Mutual Information (MI).
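A basic histogram-based estimate of the mutual information between two beat-to-beat series can be obtained as below; the bin count and the input files are hypothetical choices, and the original study's exact MI estimator may differ.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_information(a, b, bins=16):
    """Histogram-based mutual information (in nats) between two discretized series."""
    a_d = np.digitize(a, np.histogram_bin_edges(a, bins))
    b_d = np.digitize(b, np.histogram_bin_edges(b, bins))
    return mutual_info_score(a_d, b_d)

rr = np.loadtxt("rr_intervals.txt")    # hypothetical beat-to-beat RR series
sap = np.loadtxt("sap_values.txt")     # hypothetical systolic pressure per beat
n = min(len(rr), len(sap))
print("MI(SAP, RR) =", mutual_information(sap[:n], rr[:n]))
```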
Frequency-domain-independent vector analysis for mode-division multiplexed transmission
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Hu, Guijun; Li, Jiao
2018-04-01
In this paper, we propose a demultiplexing method based on the frequency-domain independent vector analysis (FD-IVA) algorithm for a mode-division multiplexing (MDM) system. FD-IVA extends frequency-domain independent component analysis (FD-ICA) from univariate to multivariate variables, and provides an efficient method to eliminate the permutation ambiguity. To verify the performance of the FD-IVA algorithm, a 6×6 MDM system is simulated. The simulation results show that the FD-IVA algorithm has essentially the same bit-error-rate (BER) performance as the FD-ICA algorithm and the frequency-domain least mean squares (FD-LMS) algorithm. Meanwhile, the convergence speed of the FD-IVA algorithm is the same as that of FD-ICA. However, compared with FD-ICA and FD-LMS, FD-IVA has a markedly lower computational complexity.
Baker, Jannah; White, Nicole; Mengersen, Kerrie
2014-11-20
Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using, as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. The choice of imputation method depends upon the application, and the best choice is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods based on the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space-fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For a homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to using the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
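To give a feel for the split-step Fourier idea with a fractional Laplacian (represented by |k|^alpha in Fourier space), here is a crude one-dimensional Strang-splitting sketch. It is only second-order in time, freezes |u|^2 over each nonlinear substep, and uses made-up coefficients, so it illustrates the splitting idea rather than the fourth-order scheme proposed in the paper.

```python
import numpy as np

def fgle_split_step(u0, L=20.0, alpha=1.5, dt=1e-3, steps=2000,
                    rho=1.0, nu=1.0 + 0.5j, kappa=1.0 + 0.2j):
    """Strang split-step Fourier sketch for u_t = rho*u - nu*(-Lap)^(alpha/2) u - kappa*|u|^2 u
    on a periodic 1-D domain of length L (all coefficients are illustrative)."""
    n = len(u0)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)          # angular wavenumbers
    lin = rho - nu * np.abs(k) ** alpha                   # linear symbol incl. fractional Laplacian
    u = u0.astype(complex)
    for _ in range(steps):
        u *= np.exp(-kappa * np.abs(u) ** 2 * 0.5 * dt)   # half nonlinear step (|u|^2 frozen)
        u = np.fft.ifft(np.exp(lin * dt) * np.fft.fft(u)) # full linear step, exact in Fourier space
        u *= np.exp(-kappa * np.abs(u) ** 2 * 0.5 * dt)   # second half nonlinear step
    return u

x = np.linspace(0.0, 20.0, 256, endpoint=False)
u_final = fgle_split_step(np.exp(-(x - 10.0) ** 2) + 0j)
```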
Hall, Molly A; Dudek, Scott M; Goodloe, Robert; Crawford, Dana C; Pendergrass, Sarah A; Peissig, Peggy; Brilliant, Murray; McCarty, Catherine A; Ritchie, Marylyn D
2014-01-01
Environment-wide association studies (EWAS) provide a way to uncover the environmental mechanisms involved in complex traits in a high-throughput manner. Genome-wide association studies have led to the discovery of genetic variants associated with many common diseases but do not take into account the environmental component of complex phenotypes. This EWAS assesses the comprehensive association between environmental variables and the outcome of type 2 diabetes (T2D) in the Marshfield Personalized Medicine Research Project Biobank (Marshfield PMRP). We sought replication in two National Health and Nutrition Examination Surveys (NHANES). The Marshfield PMRP currently uses four tools for measuring environmental exposures and outcome traits: 1) the PhenX Toolkit includes standardized exposure and phenotypic measures across several domains, 2) the Diet History Questionnaire (DHQ) is a food frequency questionnaire, 3) the Measurement of a Person's Habitual Physical Activity scores the level of an individual's physical activity, and 4) electronic health records (EHR) employs validated algorithms to establish T2D case-control status. Using PLATO software, 314 environmental variables were tested for association with T2D using logistic regression, adjusting for sex, age, and BMI in over 2,200 European Americans. When available, similar variables were tested with the same methods and adjustment in samples from NHANES III and NHANES 1999-2002. Twelve and 31 associations were identified in the Marshfield samples at p<0.01 and p<0.05, respectively. Seven and 13 measures replicated in at least one of the NHANES at p<0.01 and p<0.05, respectively, with the same direction of effect. The most significant environmental exposures associated with T2D status included decreased alcohol use as well as increased smoking exposure in childhood and adulthood. The results demonstrate the utility of the EWAS method and survey tools for identifying environmental components of complex diseases like type 2 diabetes. These high-throughput and comprehensive investigation methods can easily be applied to investigate the relation between environmental exposures and multiple phenotypes in future analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Aerts, H; Berbeco, R
2014-06-15
Purpose: PET-based texture features are used to quantify tumor heterogeneity due to their predictive power in treatment outcome. We investigated the sensitivity of texture features to tumor motion by comparing whole body (3D) and respiratory-gated (4D) PET imaging. Methods: Twenty-six patients (34 lesions) received 3D and 4D [F-18]FDG-PET scans before chemo-radiotherapy. The acquired 4D data were retrospectively binned into five breathing phases to create the 4D image sequence. Four texture features (Coarseness, Contrast, Busyness, and Complexity) were computed within the physician-defined tumor volume. The relative difference (δ) in each measure between the 3D- and 4D-PET imaging was calculated. The Wilcoxon signed-rank test (p<0.01) was used to determine if δ was significantly different from zero. The coefficient of variation (CV) was used to determine the variability in the texture features between all 4D-PET phases. The Pearson correlation coefficient was used to investigate the impact of tumor size and motion amplitude on δ. Results: Significant differences (p<<0.01) between 3D and 4D imaging were found for Coarseness, Busyness, and Complexity. The difference for Contrast was not significant (p>0.24). 4D-PET increased Busyness (∼20%) and Complexity (∼20%), and decreased Coarseness (∼10%) and Contrast (∼5%) compared to 3D-PET. Nearly negligible variability (CV=3.9%) was found between the 4D phase bins for Coarseness and Complexity. Moderate variability was found for Contrast and Busyness (CV∼10%). Poor correlation was found between the tumor volume and δ for the texture features (R = −0.34 to 0.34). Motion amplitude had a moderate impact on δ for Contrast and Busyness (R = −0.64 to 0.54) and no impact for Coarseness and Complexity (R = −0.29 to 0.17). Conclusion: Substantial differences in textures were found between 3D and 4D-PET imaging. Moreover, the variability between phase bins for Coarseness and Complexity was negligible, suggesting that similar quantification can be obtained from all phases. Texture features, blurred out by respiratory motion during 3D-PET acquisition, can be better resolved by 4D-PET imaging with any phase.
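For concreteness, the two summary statistics used throughout this abstract can be computed as below. The numbers are invented, and the exact choice of baseline in the denominator of the relative difference δ is an assumption rather than the authors' stated definition.

import numpy as np

coarseness_3d = 0.042
coarseness_4d_phases = np.array([0.037, 0.038, 0.036, 0.038, 0.037])  # five breathing-phase bins

# Relative difference between 3D and phase-averaged 4D values, in percent.
delta = 100.0 * (coarseness_4d_phases.mean() - coarseness_3d) / coarseness_3d
# Coefficient of variation across the 4D phase bins, in percent.
cv = 100.0 * coarseness_4d_phases.std(ddof=1) / coarseness_4d_phases.mean()
print(f"relative difference delta = {delta:.1f}%, CV across phases = {cv:.1f}%")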
Narayanan, Roshni; Nugent, Rebecca; Nugent, Kenneth
2015-10-01
Accreditation Council for Graduate Medical Education guidelines require internal medicine residents to develop skills in the interpretation of medical literature and to understand the principles of research. A necessary component is the ability to understand the statistical methods used and their results, material that is not an in-depth focus of most medical school curricula and residency programs. Given the breadth and depth of the current medical literature and an increasing emphasis on complex, sophisticated statistical analyses, the statistical foundation and education necessary for residents are uncertain. We reviewed the statistical methods and terms used in 49 articles discussed at the journal club in the Department of Internal Medicine residency program at Texas Tech University between January 1, 2013 and June 30, 2013. We collected information on the study type and on the statistical methods used for summarizing and comparing samples, determining the relations between independent variables and dependent variables, and estimating models. We then identified the typical statistics education level at which each term or method is learned. A total of 14 articles came from the Journal of the American Medical Association Internal Medicine, 11 from the New England Journal of Medicine, 6 from the Annals of Internal Medicine, 5 from the Journal of the American Medical Association, and 13 from other journals. Twenty reported randomized controlled trials. Summary statistics included mean values (39 articles), category counts (38), and medians (28). Group comparisons were based on t tests (14 articles), χ2 tests (21), and nonparametric ranking tests (10). The relations between dependent and independent variables were analyzed with simple regression (6 articles), multivariate regression (11), and logistic regression (8). Nine studies reported odds ratios with 95% confidence intervals, and seven analyzed test performance using sensitivity and specificity calculations. These papers used 128 statistical terms and context-defined concepts, including some from data analysis (56), epidemiology-biostatistics (31), modeling (24), data collection (12), and meta-analysis (5). Ten different software programs were used in these articles. Based on usual undergraduate and graduate statistics curricula, 64.3% of the concepts and methods used in these papers required at least a master's degree-level statistics education. The interpretation of the current medical literature can require an extensive background in statistical methods at an education level exceeding the material and resources provided to most medical students and residents. Given the complexity and time pressure of medical education, these deficiencies will be hard to correct, but this project can serve as a basis for developing a curriculum in study design and statistical methods needed by physicians-in-training.
2017-01-01
Background Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Methods Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts' exposure to the parasite's dispersive stages. Results Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length <15 mm) than large molecrabs (>15 mm). Independently of seagull density, large molecrabs carried significantly more parasites than small molecrabs. The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. Conclusions These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation—a characteristic of indirect host-parasite interactions—and subsequent increasing mortality rates over ontogeny underpin size-dependent host-parasite dynamics. PMID:28828270
New Sensitive Kinetic Spectrophotometric Methods for Determination of Omeprazole in Dosage Forms
Mahmoud, Ashraf M.
2009-01-01
New rapid, sensitive, and accurate kinetic spectrophotometric methods were developed, for the first time, to determine omeprazole (OMZ) in its dosage forms. The methods were based on the formation of charge-transfer complexes with both iodine and 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ). The variables that affected the reactions were carefully studied and optimized. The formed complexes and the site of interaction were examined by UV/VIS, IR, and 1H-NMR techniques, and computational molecular modeling. Under optimum conditions, the stoichiometry of the reactions between OMZ and the acceptors was found to be 1 : 1. The order of the reactions and the specific rate constants were determined. The thermodynamics of the complexes were computed and the mechanism of the reactions was postulated. The initial rate and fixed time methods were utilized for the determination of OMZ concentrations. The linear ranges for the proposed methods were 0.10–3.00 and 0.50–25.00 μg mL−1 with the lowest LOD of 0.03 and 0.14 μg mL−1 for iodine and DDQ, respectively. Analytical performance of the methods was statistically validated; RSD was <1.25% for the precision and <1.95% for the accuracy. The proposed methods were successfully applied to the analysis of OMZ in its dosage forms; the recovery was 98.91–100.32% ± 0.94–1.84, and was found to be comparable with that of the reference method. PMID:20140076
Visualizing medium and biodistribution in complex cell culture bioreactors using in vivo imaging.
Ratcliffe, E; Thomas, R J; Stacey, A J
2014-01-01
There is a dearth of technology and methods to aid process characterization, control and scale-up of complex culture platforms that provide niche micro-environments for some stem cell-based products. We have demonstrated a novel use of 3D in vivo imaging systems to visualize medium flow and cell distribution within a complex culture platform (hollow fiber bioreactor) to aid characterization of potential spatial heterogeneity and identify potential routes of bioreactor failure or sources of variability. This can then aid process characterization and control of such systems with a view to scale-up. Two potential sources of variation were observed with multiple bioreactors repeatedly imaged using two different imaging systems: shortcutting of medium between adjacent inlet and outlet ports, with the potential to create medium gradients within the bioreactor, and localization of bioluminescent murine 4T1-luc2 cells upon inoculation, with the potential to create variable seeding densities at different points within the cell growth chamber. The ability of the imaging technique to identify these key operational bioreactor characteristics demonstrates an emerging technique in troubleshooting and engineering optimization of bioreactor performance. © 2013 American Institute of Chemical Engineers.
Some comparisons of complexity in dictionary-based and linear computational models.
Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello
2011-03-01
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so-called variable-basis types, which include neural networks, radial-basis-function, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.
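For orientation, upper bounds of the kind compared above are typically of the Maurey-Jones-Barron form; the statement below is the standard Hilbert-space version, given here only as background and not as the specific bounds derived in the paper.

% If f lies in the closure of the convex hull of a dictionary G in a Hilbert
% space and sup_{g in G} ||g|| <= s_G, then the best approximation by convex
% combinations of n dictionary elements satisfies
\[
  \operatorname{dist}\bigl(f,\operatorname{conv}_n(G)\bigr)
  \;\le\; \frac{\sqrt{s_G^{2}-\lVert f\rVert^{2}}}{\sqrt{n}},
\]
% whereas worst-case errors of linear approximation by any fixed n-dimensional
% subspace admit lower bounds (Kolmogorov widths) that can decay much more
% slowly for the same function classes.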
NASA Astrophysics Data System (ADS)
Gou, Yabin; Ma, Yingzhao; Chen, Haonan; Wen, Yixin
2018-05-01
Quantitative precipitation estimation (QPE) is one of the important applications of weather radars. However, in complex terrain such as the Tibetan Plateau, it is a challenging task to obtain an optimal Z-R relation due to the complex spatial and temporal variability in precipitation microphysics. This paper develops two radar QPE schemes, based respectively on the Reflectivity Threshold (RT) and Storm Cell Identification and Tracking (SCIT) algorithms, using observations from 11 Doppler weather radars and 3264 rain gauges over the Eastern Tibetan Plateau (ETP). These two QPE methodologies are evaluated extensively using four precipitation events that are characterized by different meteorological features. Precipitation characteristics of independent storm cells associated with these four events, as well as the storm-scale differences, are investigated using short-term vertical profile of reflectivity (VPR) clusters. Evaluation results show that the SCIT-based rainfall approach performs better than the simple RT-based method for all precipitation events in terms of score comparison using validation gauge measurements as references. It is also found that the SCIT-based approach can effectively mitigate the local error of radar QPE and represent the precipitation spatiotemporal variability better than the RT-based scheme.
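The Z-R relation at the heart of any radar QPE scheme is a power law Z = a·R^b between radar reflectivity factor Z and rain rate R; a minimal conversion sketch follows. The coefficients (a = 300, b = 1.4, a common convective choice) and the example reflectivities are illustrative; the paper's schemes adjust this behaviour per reflectivity threshold or per identified storm cell rather than using one global relation.

import numpy as np

def rain_rate_mm_per_h(dbz, a=300.0, b=1.4):
    """Convert radar reflectivity in dBZ to rain rate via R = (Z/a)**(1/b)."""
    z_linear = 10.0 ** (dbz / 10.0)          # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

for dbz in (20.0, 35.0, 50.0):
    print(f"{dbz:.0f} dBZ -> {rain_rate_mm_per_h(dbz):.2f} mm/h")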
Termination Patterns of Complex Partial Seizures: An Intracranial EEG Study
Afra, Pegah; Jouny, Christopher C.; Bergey, Gregory K.
2015-01-01
Purpose While seizure onset patterns have been the subject of many reports, there have been few studies of seizure termination. In this study we report the incidence of synchronous and asynchronous termination patterns of partial seizures recorded with intracranial arrays. Methods Data were collected from patients with intractable complex partial seizures undergoing presurgical evaluations with intracranial electrodes. Patients with seizures originating from mesial temporal and neocortical regions were grouped into three groups based on patterns of seizure termination: synchronous only (So), asynchronous only (Ao), or mixed (S/A, with both synchronous and asynchronous termination patterns). Results 88% of the patients in the MT group had seizures with a synchronous pattern of termination exclusively (38%) or mixed (50%). 82% of the NC group had seizures with a synchronous pattern of termination exclusively (52%) or mixed (30%). In the NC group, there was a significant difference in the range of seizure durations between the So and Ao groups, with Ao exhibiting higher variability. Seizures with synchronous termination had low variability in both groups. Conclusions Synchronous seizure termination is a common pattern for complex partial seizures of both mesial temporal and neocortical onset. This may reflect stereotyped network behavior or dynamics at the seizure focus. PMID:26552555
Environmental variability and indicators: a few observations
William F. Laudenslayer
1991-01-01
The environment of the earth is exceedingly complex and variable. Indicator species are used to reduce that complexity and variability to a level that can be more easily understood. In recent years, use of indicators has increased dramatically. For the Forest Service, as an example, regulations that interpret the National Forest Management Act require the use...
[Comparison of predictive models for the selection of high-complexity patients].
Estupiñán-Ramírez, Marcos; Tristancho-Ajamil, Rita; Company-Sancho, María Consuelo; Sánchez-Janáriz, Hilda
2017-08-18
To compare the concordance of complexity weights between Clinical Risk Groups (CRG) and Adjusted Morbidity Groups (AMG). To determine which one is the best predictor of patient admission. To optimise the method used to select the 0.5% of patients of highest complexity that will be included in an intervention protocol. Cross-sectional analytical study in 18 Canary Island health areas; 385,049 citizens were enrolled, using sociodemographic variables from health cards; diagnoses and use of healthcare resources obtained from primary health care electronic records (PCHR) and the basic minimum set of hospital data; the functional status recorded in the PCHR; and the drugs prescribed through the electronic prescription system. The correlation between stratifiers was estimated from these data. The ability of each stratifier to predict patient admissions was evaluated and prediction optimisation models were constructed. Concordance between the stratifiers' complexity weights was strong (rho = 0.735) and the correlation between categories of complexity was moderate (weighted kappa = 0.515). The AMG complexity weight predicts patient admission better than CRG (AUC: 0.696 [0.695-0.697] versus 0.692 [0.691-0.693]). Other predictive variables were added to the AMG weight, with the best AUC (0.708 [0.707-0.708]) obtained by the model composed of AMG weight, sex, age, Pfeiffer and Barthel scales, re-admissions and number of prescribed therapeutic groups. Strong concordance was found between stratifiers, and a higher predictive capacity for admission from AMG, which can be increased by adding other dimensions. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
Extension of optical lithography by mask-litho integration with computational lithography
NASA Astrophysics Data System (ADS)
Takigawa, T.; Gronlund, K.; Wiley, J.
2010-05-01
Wafer lithography process windows can be enlarged by using source mask co-optimization (SMO). Recently, SMO including freeform wafer scanner illumination sources has been developed. Freeform sources are generated by a programmable illumination system using a micro-mirror array or by custom Diffractive Optical Elements (DOE). The combination of freeform sources and complex masks generated by SMO shows an increased wafer lithography process window and reduced MEEF. Full-chip mask optimization using a source optimized by SMO can generate complex masks with small, variably sized sub-resolution assist features (SRAFs). These complex masks create challenges for accurate mask pattern writing and low false-defect inspection. The accuracy of the small, variably sized mask SRAF patterns is degraded by short-range mask process proximity effects. To address the accuracy needed for these complex masks, we developed a highly accurate mask process correction (MPC) capability. It is also difficult to achieve low false-defect inspections of complex masks with conventional mask defect inspection systems. A printability check system, Mask Lithography Manufacturability Check (M-LMC), was developed and integrated with the 199-nm high-NA inspection system, NPI. M-LMC successfully identifies printable defects from the mass of raw defect images collected during the inspection of a complex mask. Long-range mask CD uniformity errors are compensated by scanner dose control. A mask CD uniformity error map obtained by a mask metrology system is used as input data to the scanner. Using this method, wafer CD uniformity is improved. As reviewed above, mask-litho integration technology with computational lithography is becoming increasingly important.
Movie denoising by average of warped lines.
Bertalmío, Marcelo; Caselles, Vicent; Pardo, Alvaro
2007-09-01
Here, we present an efficient method for movie denoising that does not require any motion estimation. The method is based on the well-known fact that averaging several realizations of a random variable reduces the variance. For each pixel to be denoised, we look for close similar samples along the level surface passing through it. With these similar samples, we estimate the denoised pixel. The method to find close similar samples is done via warping lines in spatiotemporal neighborhoods. To that end, we present an algorithm based on a method for epipolar line matching in stereo pairs which has per-line complexity O(N), where N is the number of columns in the image. In this way, when applied to the image sequence, our algorithm is computationally efficient, having a complexity of the order of the total number of pixels. Furthermore, we show that the presented method is unsupervised and is adapted to denoise image sequences with an additive white noise while respecting the visual details on the movie frames. We have also experimented with other types of noise with satisfactory results.
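A quick numerical check of the variance-reduction fact the method rests on: the standard deviation of the average of N independent, identically distributed samples falls off as sigma/sqrt(N). The noise level below is an arbitrary illustration, standing in for the pixel noise being averaged.

import numpy as np

rng = np.random.default_rng(0)
sigma, trials = 20.0, 20000                 # e.g. noise level of a pixel value
for n_samples in (1, 4, 16, 64):
    means = rng.normal(0.0, sigma, (trials, n_samples)).mean(axis=1)
    print(f"N={n_samples:3d}: std of average = {means.std():.2f} "
          f"(theory {sigma / np.sqrt(n_samples):.2f})")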
COVARIANCE ESTIMATION USING CONJUGATE GRADIENT FOR 3D CLASSIFICATION IN CRYO-EM.
Andén, Joakim; Katsevich, Eugene; Singer, Amit
2015-04-01
Classifying structural variability in noisy projections of biological macromolecules is a central problem in Cryo-EM. In this work, we build on a previous method for estimating the covariance matrix of the three-dimensional structure present in the molecules being imaged. Our proposed method allows for incorporation of contrast transfer function and non-uniform distribution of viewing angles, making it more suitable for real-world data. We evaluate its performance on a synthetic dataset and an experimental dataset obtained by imaging a 70S ribosome complex.
A 3-D chimera grid embedding technique
NASA Technical Reports Server (NTRS)
Benek, J. A.; Buning, P. G.; Steger, J. L.
1985-01-01
A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.
Bojan, Mirela; Gerelli, Sébastien; Gioanni, Simone; Pouard, Philippe; Vouhé, Pascal
2011-09-01
The Aristotle Comprehensive Complexity (ACC) and the Risk Adjustment in Congenital Heart Surgery (RACHS-1) scores have been proposed for complexity adjustment in the analysis of outcome after congenital heart surgery. Previous studies found RACHS-1 to be a better predictor of outcome than the Aristotle Basic Complexity score. We compared the ability to predict operative mortality and morbidity between ACC, the latest update of the Aristotle method and RACHS-1. Morbidity was assessed by length of intensive care unit stay. We retrospectively enrolled patients undergoing congenital heart surgery. We modeled each score as a continuous variable, mortality as a binary variable, and length of stay as a censored variable. We compared performance between mortality and morbidity models using likelihood ratio tests for nested models and paired concordance statistics. Among all 1,384 patients enrolled, 30-day mortality rate was 3.5% and median length of intensive care unit stay was 3 days. Both scores strongly related to mortality, but ACC made better prediction than RACHS-1; c-indexes 0.87 (0.84, 0.91) vs 0.75 (0.65, 0.82). Both scores related to overall length of stay only during the first postoperative week, but ACC made better predictions than RACHS-1; U statistic=0.22, p<0.001. No significant difference was noted after adjusting RACHS-1 models on age, prematurity, and major extracardiac abnormalities. The ACC was a better predictor of operative mortality and length of intensive care unit stay than RACHS-1. In order to achieve similar performance, regression models including RACHS-1 need to be further adjusted on age, prematurity, and major extracardiac abnormalities. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dominguez, M.
2017-12-01
Headwater catchments in complex terrain typically exhibit significant variations in microclimatic conditions across slopes. This microclimatic variability, in turn, modifies land surface properties, presumably altering the hydrologic dynamics of these catchments. The extent to which differences in microclimate and land cover dictate the partition of water and energy fluxes within a catchment is still poorly understood. In this study, we assess the effects of aspect, elevation and latitude (the principal factors that define microclimate conditions) on the hydrologic behavior of the hillslopes within catchments with complex terrain. Using a distributed hydrologic model on a number of catchments at different latitudes, where data are available for calibration and validation, we estimate the different components of the water balance to obtain the aridity index (AI = PET/P) and the evaporative index (EI = AET/P) of each slope for a number of years. We use Budyko's curve as a framework to characterize the inter-annual variability in the hydrologic response of the hillslopes in the studied catchments, developing a hydrologic sensitivity index (HSi) based on the relative change in Budyko's curve components (HSi = ΔAI/ΔEI). With this method, when the HSi values of a given hillslope are larger than 1, the hydrologic behavior of that part of the catchment is considered sensitive to changes in climatic conditions, while values approaching 0 would indicate the opposite. We use this approach as a diagnostic tool to discern the effect of aspect, elevation, and latitude on the hydrologic regime of the slopes in complex terrain catchments and to try to explain observed patterns of land cover conditions in these types of catchments.
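A small sketch of the indices defined above, computed from annual precipitation (P), potential evapotranspiration (PET) and actual evapotranspiration (AET) for a single hillslope. The annual values are invented, and taking simple year-to-year differences for ΔAI and ΔEI is an assumption about how the inter-annual changes are formed; in the study these quantities come from a distributed hydrologic model.

import numpy as np

years = {"P":   np.array([800., 650., 900., 700., 750.]),    # precipitation, mm
         "PET": np.array([950., 1000., 900., 980., 960.]),   # potential ET, mm
         "AET": np.array([600., 540., 640., 560., 580.])}    # actual ET, mm

ai = years["PET"] / years["P"]     # aridity index, AI = PET/P
ei = years["AET"] / years["P"]     # evaporative index, EI = AET/P
hsi = np.diff(ai) / np.diff(ei)    # HSi = dAI/dEI between consecutive years
print("AI :", np.round(ai, 2))
print("EI :", np.round(ei, 2))
print("HSi:", np.round(hsi, 2), "->",
      "sensitive" if np.median(np.abs(hsi)) > 1 else "insensitive")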
Environmental variability and acoustic signals: a multi-level approach in songbirds.
Medina, Iliana; Francis, Clinton D
2012-12-23
Among songbirds, growing evidence suggests that acoustic adaptation of song traits occurs in response to habitat features. Despite extensive study, most research supporting acoustic adaptation has only considered acoustic traits averaged for species or populations, overlooking intraindividual variation of song traits, which may facilitate effective communication in heterogeneous and variable environments. Fewer studies have explicitly incorporated sexual selection, which, if strong, may favour variation across environments. Here, we evaluate the prevalence of acoustic adaptation among 44 species of songbirds by determining how environmental variability and sexual selection intensity are associated with song variability (intraindividual and intraspecific) and short-term song complexity. We show that variability in precipitation can explain short-term song complexity among taxonomically diverse songbirds, and that precipitation seasonality and the intensity of sexual selection are related to intraindividual song variation. Our results link song complexity to environmental variability, something previously found for mockingbirds (Family Mimidae). Perhaps more importantly, our results illustrate that individual variation in song traits may be shaped by both environmental variability and strength of sexual selection.
NASA Astrophysics Data System (ADS)
Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric
2015-05-01
Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to better understand teachers' classroom practices as they attempt to navigate myriad variables in the implementation of biology units that include working with computer simulations, and learning about and teaching through complex systems ideas. Sample: Research participants were three high school biology teachers, two females and one male, ranging in teaching experience from six to 16 years. Their teaching contexts also ranged in student achievement from 14-47% advanced science proficiency. Design and methods: We used a holistic multiple case study methodology and collected data during the 2011-2012 school year. Data sources include classroom observations, teacher and student surveys, and interviews. Data analyses and trustworthiness measures were conducted through qualitative mining of data sources and triangulation of findings. Results: We illustrate the characteristics of adaptive expertise of more or less successful teaching and learning when implementing complex systems curricula. We also demonstrate differences between case study teachers in terms of particular variables associated with adaptive expertise. Conclusions: This research contributes to scholarship on practices and professional development needed to better support teachers to teach through a complex systems pedagogical and curricular approach.
Inter- and Intra-method Variability of VS Profiles and VS30 at ARRA-funded Sites
NASA Astrophysics Data System (ADS)
Yong, A.; Boatwright, J.; Martin, A. J.
2015-12-01
The 2009 American Recovery and Reinvestment Act (ARRA) funded geophysical site characterizations at 191 seismographic stations in California and in the central and eastern United States. Shallow boreholes were considered cost- and environmentally-prohibitive, thus non-invasive methods (passive and active surface- and body-wave techniques) were used at these stations. The drawback, however, is that these techniques measure seismic properties indirectly and introduce more uncertainty than borehole methods. The principal methods applied were Array Microtremor (AM), Multi-channel Analysis of Surface Waves (MASW; Rayleigh and Love waves), Spectral Analysis of Surface Waves (SASW), Refraction Microtremor (ReMi), and P- and S-wave refraction tomography. Depending on the apparent geologic or seismic complexity of the site, field crews applied one or a combination of these methods to estimate the shear-wave velocity (VS) profile and calculate VS30, the time-averaged VS to a depth of 30 meters. We study the inter- and intra-method variability of VS and VS30 at each seismographic station where combinations of techniques were applied. For each site, we find both types of variability in VS30 remain insignificant (5-10% difference) despite substantial variability observed in the VS profiles. We also find that reliable VS profiles are best developed using a combination of techniques, e.g., surface-wave VS profiles correlated against P-wave tomography to constrain variables (Poisson's ratio and density) that are key depth-dependent parameters used in modeling VS profiles. The most reliable results are based on surface- or body-wave profiles correlated against independent observations such as material properties inferred from outcropping geology nearby. For example, mapped geology describes station CI.LJR as a hard rock site (VS30 > 760 m/s). However, decomposed rock outcrops were found nearby and support the estimated VS30 of 303 m/s derived from the MASW (Love wave) profile.
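For reference, VS30 is the time-averaged shear-wave velocity over the top 30 meters, VS30 = 30 / sum(h_i / Vs_i), where h_i and Vs_i are the layer thicknesses and velocities of a VS profile. The sketch below computes it for a hypothetical three-layer profile; it is not one of the ARRA-funded site profiles.

def vs30(thickness_m, vs_m_per_s, depth_limit=30.0):
    """Time-averaged shear-wave velocity over the top `depth_limit` meters."""
    travel_time, depth = 0.0, 0.0
    for h, v in zip(thickness_m, vs_m_per_s):
        h_used = min(h, depth_limit - depth)       # clip the layer at 30 m
        travel_time += h_used / v
        depth += h_used
        if depth >= depth_limit:
            break
    if depth < depth_limit:
        raise ValueError("profile shallower than the averaging depth")
    return depth_limit / travel_time

# Hypothetical profile: 5 m at 180 m/s, 10 m at 350 m/s, then 600 m/s below.
print(f"VS30 = {vs30([5.0, 10.0, 20.0], [180.0, 350.0, 600.0]):.0f} m/s")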
Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I
2011-09-26
A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through the incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable-temperature variable-field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange-coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables, which is costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
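A compact, generic sketch of the two ingredients named above: a real-coded genetic algorithm and an adaptive penalty whose coefficient is raised while the best individual keeps violating a constraint and relaxed while it stays feasible (a Bean/Hadj-Alouane-style rule). The toy constrained minimization problem, the operators, the streak length, and the update factors are all illustrative; this is not the paper's VTVH MCD fitting problem or its exact penalty theory.

import numpy as np

rng = np.random.default_rng(0)

def objective(x):                      # minimize a shifted sphere
    return np.sum((x - 1.0) ** 2, axis=-1)

def violation(x):                      # constraint: sum(x) <= 1
    return np.maximum(np.sum(x, axis=-1) - 1.0, 0.0)

def ga_adaptive_penalty(dim=6, pop=60, gens=200, r0=1.0):
    X = rng.uniform(-2.0, 2.0, (pop, dim))
    r = r0                                             # penalty coefficient
    feasible_streak = infeasible_streak = 0
    for _ in range(gens):
        fit = objective(X) + r * violation(X) ** 2     # penalized fitness
        best = X[np.argmin(fit)]
        # Adaptive penalty update based on feasibility of the current best.
        if violation(best[None, :])[0] == 0.0:
            feasible_streak, infeasible_streak = feasible_streak + 1, 0
            if feasible_streak >= 5:
                r = max(r / 1.5, 1e-3)
        else:
            infeasible_streak, feasible_streak = infeasible_streak + 1, 0
            if infeasible_streak >= 5:
                r = min(r * 1.5, 1e6)
        # Tournament selection, blend crossover, Gaussian mutation, elitism.
        idx = rng.integers(0, pop, (pop, 2))
        parents = np.where((fit[idx[:, 0]] < fit[idx[:, 1]])[:, None],
                           X[idx[:, 0]], X[idx[:, 1]])
        mates = parents[rng.permutation(pop)]
        children = parents + rng.uniform(-0.5, 1.5, (pop, 1)) * (mates - parents)
        children += rng.normal(0.0, 0.05, children.shape)
        children[0] = best
        X = children
    return best, objective(best[None, :])[0], violation(best[None, :])[0]

x_best, f_best, v_best = ga_adaptive_penalty()
print("sum(x) =", round(float(x_best.sum()), 3),
      "| best objective =", round(float(f_best), 3),
      "| constraint violation =", round(float(v_best), 4))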
Spatio-Temporal Process Variability in Watershed Scale Wetland Restoration Planning
NASA Astrophysics Data System (ADS)
Evenson, G. R.
2012-12-01
Watershed scale restoration decision making processes are increasingly informed by quantitative methodologies providing site-specific restoration recommendations - sometimes referred to as "systematic planning." The more advanced of these methodologies are characterized by a coupling of search algorithms and ecological models to discover restoration plans that optimize environmental outcomes. Yet while these methods have exhibited clear utility as decision support toolsets, they may be critiqued for flawed evaluations of spatio-temporally variable processes fundamental to watershed scale restoration. Hydrologic and non-hydrologic mediated process connectivity along with post-restoration habitat dynamics, for example, are commonly ignored yet known to appreciably affect restoration outcomes. This talk will present a methodology to evaluate such spatio-temporally complex processes in the production of watershed scale wetland restoration plans. Using the Tuscarawas Watershed in Eastern Ohio as a case study, a genetic algorithm will be coupled with the Soil and Water Assessment Tool (SWAT) to reveal optimal wetland restoration plans as measured by their capacity to maximize nutrient reductions. Then, a so-called "graphical" representation of the optimization problem will be implemented in-parallel to promote hydrologic and non-hydrologic mediated connectivity amongst existing wetlands and sites selected for restoration. Further, various search algorithm mechanisms will be discussed as a means of accounting for temporal complexities such as post-restoration habitat dynamics. Finally, generalized patterns of restoration plan optimality will be discussed as an alternative and possibly superior decision support toolset given the complexity and stochastic nature of spatio-temporal process variability.
Controls of multi-modal wave conditions in a complex coastal setting
Hegermiller, Christie; Rueda, Ana C.; Erikson, Li H.; Barnard, Patrick L.; Antolinez, J.A.A.; Mendez, Fernando J.
2017-01-01
Coastal hazards emerge from the combined effect of wave conditions and sea level anomalies associated with storms or low-frequency atmosphere-ocean oscillations. Rigorous characterization of wave climate is limited by the availability of spectral wave observations, the computational cost of dynamical simulations, and the ability to link wave-generating atmospheric patterns with coastal conditions. We present a hybrid statistical-dynamical approach to simulating nearshore wave climate in complex coastal settings, demonstrated in the Southern California Bight, where waves arriving from distant, disparate locations are refracted over complex bathymetry and shadowed by offshore islands. Contributions of wave families and large-scale atmospheric drivers to nearshore wave energy flux are analyzed. Results highlight the variability of influences controlling wave conditions along neighboring coastlines. The universal method demonstrated here can be applied to complex coastal settings worldwide, facilitating analysis of the effects of climate change on nearshore wave climate.
Controls of Multimodal Wave Conditions in a Complex Coastal Setting
NASA Astrophysics Data System (ADS)
Hegermiller, C. A.; Rueda, A.; Erikson, L. H.; Barnard, P. L.; Antolinez, J. A. A.; Mendez, F. J.
2017-12-01
Coastal hazards emerge from the combined effect of wave conditions and sea level anomalies associated with storms or low-frequency atmosphere-ocean oscillations. Rigorous characterization of wave climate is limited by the availability of spectral wave observations, the computational cost of dynamical simulations, and the ability to link wave-generating atmospheric patterns with coastal conditions. We present a hybrid statistical-dynamical approach to simulating nearshore wave climate in complex coastal settings, demonstrated in the Southern California Bight, where waves arriving from distant, disparate locations are refracted over complex bathymetry and shadowed by offshore islands. Contributions of wave families and large-scale atmospheric drivers to nearshore wave energy flux are analyzed. Results highlight the variability of influences controlling wave conditions along neighboring coastlines. The universal method demonstrated here can be applied to complex coastal settings worldwide, facilitating analysis of the effects of climate change on nearshore wave climate.
Weyhermüller, Thomas; Wagner, Rita; Khanra, Sumit; Chaudhuri, Phalguni
2005-08-07
Three trinuclear complexes, NiIIMnIIINiII, NiIICrIIINiII and NiII3, based on (pyridine-2-aldoximato)nickel(II) units are described. Two of them, the heterotrinuclear NiMnNi and NiCrNi complexes, contain metal centers in a linear arrangement, as revealed by X-ray diffraction, whereas the homonuclear complex has its three nickel(II) centers disposed in a triangular fashion. The compounds were characterized by various physical methods including cyclic voltammetric and variable-temperature (2-290 K) susceptibility measurements. Two of the complexes display antiferromagnetic exchange coupling between neighbouring metal centers, while a weak ferromagnetic spin exchange between the adjacent NiII and CrIII ions is observed in the NiCrNi complex. The experimental magnetic data were simulated by using appropriate models.
Field oriented control of induction motors
NASA Technical Reports Server (NTRS)
Burrows, Linda M.; Zinger, Don S.; Roth, Mary Ellen
1990-01-01
Induction motors have always been known for their simple, rugged construction, but until recently were not suitable for variable speed or servo drives due to the inherent complexity of the controls. With the advent of field oriented control (FOC), however, the induction motor has become an attractive option for these types of drive systems. An FOC system which utilizes the pulse population modulation method to synthesize the motor drive frequencies is examined. This system allows for a variable voltage to frequency ratio and enables the user to have independent control of both the speed and torque of an induction motor. A second generation of the control boards was developed and tested, with the next point of focus being the minimization of the size and complexity of these controls. Many options were considered, with the best approach being the use of a digital signal processor (DSP) due to its inherent ability to quickly evaluate control algorithms. The present test results of the system and the status of the optimization process using a DSP are discussed.
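Field orientation rests on transforming the three stator phase currents into a rotating d-q frame in which the flux-producing (d) and torque-producing (q) components can be regulated independently. The sketch below shows the amplitude-invariant Clarke and Park transforms that perform this change of frame; it is a generic illustration of the principle, not the article's pulse population modulation controller.

import numpy as np

def clarke(ia, ib, ic):
    """Amplitude-invariant Clarke transform: three phase currents -> alpha/beta."""
    i_alpha = (2.0 / 3.0) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: rotate alpha/beta into the flux-oriented d-q frame."""
    i_d = np.cos(theta) * i_alpha + np.sin(theta) * i_beta
    i_q = -np.sin(theta) * i_alpha + np.cos(theta) * i_beta
    return i_d, i_q

# Balanced sinusoidal phase currents aligned with the rotor-flux angle theta
# map to constant i_d (flux-producing) and i_q (torque-producing) components.
theta = np.linspace(0.0, 4.0 * np.pi, 1000)
ia = np.cos(theta)
ib = np.cos(theta - 2.0 * np.pi / 3.0)
ic = np.cos(theta + 2.0 * np.pi / 3.0)
i_d, i_q = park(*clarke(ia, ib, ic), theta)
print("i_d ~", i_d.mean().round(3), " i_q ~", i_q.mean().round(3))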
2014-01-01
Background To improve quality of care and patient outcomes, health system decision-makers need to identify and implement effective interventions. An increasing number of systematic reviews document the effects of quality improvement programs to assist decision-makers in developing new initiatives. However, limitations in the reporting of primary studies and current meta-analysis methods (including approaches for exploring heterogeneity) reduce the utility of existing syntheses for health system decision-makers. This study will explore the role of innovative meta-analysis approaches and the added value of enriched and updated data for increasing the utility of systematic reviews of complex interventions. Methods/Design We will use the dataset from our recent systematic review of 142 randomized trials of diabetes quality improvement programs to evaluate novel approaches for exploring heterogeneity. These will include exploratory methods, such as multivariate meta-regression analyses and all-subsets combinatorial meta-analysis. We will then update our systematic review to include new trials and enrich the dataset by surveying authors of all included trials. In doing so, we will explore the impact of variables not reported in previous publications, such as details of study context, on the effectiveness of the intervention. We will use innovative analytical methods on the enriched and updated dataset to identify key success factors in the implementation of quality improvement interventions for diabetes. Decision-makers will be involved throughout to help identify and prioritize variables to be explored and to aid in the interpretation and dissemination of results. Discussion This study will inform future systematic reviews of complex interventions and describe the value of enriching and updating data for exploring heterogeneity in meta-analysis. It will also result in an updated comprehensive systematic review of diabetes quality improvement interventions that will be useful to health system decision-makers in developing interventions to improve outcomes for people with diabetes. Systematic review registration PROSPERO registration no. CRD42013005165 PMID:25115289
NASA Astrophysics Data System (ADS)
Ma, Junhai; Li, Ting; Ren, Wenbo
2017-06-01
This paper examines the optimal decisions of a dual-channel game model considering the inputs of retailing service. We analyze how the adjustment speed of service inputs affects system complexity and market performance, and explore the stability of the equilibrium points by parameter basin diagrams. Chaos control is realized by the variable feedback method. The numerical simulation shows that complex behavior, such as period-doubling bifurcation and chaos, would trigger the system to become unstable. We measure the performance of the model in different periods by analyzing the variation of the average profit index. The theoretical results show that the percentage share of demand and the cross-service coefficients have an important influence on the stability of the system and its feasible basin of attraction.
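An illustration of feedback-type chaos control on a one-dimensional map. The logistic map stands in for the paper's dual-channel duopoly map, and the control law used here, a convex feedback of the current state into the map (x_{n+1} = (1-k) f(x_n) + k x_n), is a standard textbook scheme rather than necessarily the exact variable feedback law of the paper; parameters are illustrative.

import numpy as np

r = 3.9                               # chaotic regime of the logistic map
f = lambda x: r * x * (1.0 - x)

def iterate(k, x0=0.3, n=600):
    """Iterate the controlled map x_{n+1} = (1-k) f(x_n) + k x_n."""
    x, traj = x0, []
    for _ in range(n):
        x = (1.0 - k) * f(x) + k * x
        traj.append(x)
    return np.array(traj)

for k in (0.0, 0.7):                  # k=0: uncontrolled; k=0.7: stabilized
    tail = iterate(k)[-100:]
    print(f"gain k={k}: spread of late iterates = {tail.max() - tail.min():.5f}")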
Zhang, Yajun; Chai, Tianyou; Wang, Hong; Wang, Dianhui; Chen, Xinkai
2018-06-01
Complex industrial processes are multivariable and generally exhibit strong coupling among their control loops with heavy nonlinear nature. These make it very difficult to obtain an accurate model. As a result, the conventional and data-driven control methods are difficult to apply. Using a twin-tank level control system as an example, a novel multivariable decoupling control algorithm with adaptive neural-fuzzy inference system (ANFIS)-based unmodeled dynamics (UD) compensation is proposed in this paper for a class of complex industrial processes. First, a nonlinear multivariable decoupling controller with UD compensation is introduced. Different from the existing methods, a decomposition estimation algorithm using ANFIS is employed to estimate the UD, and the desired estimating and decoupling control effects are achieved. Second, the proposed method does not require the complicated switching mechanism that has been commonly used in the literature. This significantly simplifies the obtained decoupling algorithm and its realization. Third, based on some new lemmas and theorems, the conditions on the stability and convergence of the closed-loop system are analyzed to show the uniform boundedness of all the variables. This is then followed by a summary of experimental tests on a heavily coupled nonlinear twin-tank system that demonstrates the effectiveness and the practicability of the proposed method.
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Summary Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMARTs) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTR from SMARTs data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
The comparison of rapid bioassays for the assessment of urban groundwater quality.
Dewhurst, R E; Wheeler, J R; Chummun, K S; Mather, J D; Callaghan, A; Crane, M
2002-05-01
Groundwater is a complex mixture of chemicals that is naturally variable. Current legislation in the UK requires that groundwater quality and the degree of contamination are assessed using chemical methods. Such methods do not consider the synergistic or antagonistic interactions that may affect the bioavailability and toxicity of pollutants in the environment. Bioassays are a method for assessing the toxic impact of whole groundwater samples on the environment. Three rapid bioassays, Eclox, Microtox and ToxAlert, and a Daphnia magna 48-h immobilisation test were used to assess groundwater quality from sites with a wide range of historical uses. Eclox responses indicated that the test was very sensitive to changes in groundwater chemistry; 77% of the results had a percentage inhibition greater than 90%. ToxAlert, although suitable for monitoring changes in water quality under laboratory conditions, produced highly variable results due to fluctuations in temperature and the chemical composition of the samples. Microtox produced replicable results that correlated with those from D. magna tests.
Wavelet neural networks: a practical guide.
Alexandridis, Antonios K; Zapranis, Achilleas D
2013-06-01
Wavelet networks (WNs) are a new class of networks which have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework in order to apply WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods and, finally, methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey-Glass equation and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York and breast cancer classification. Our results have shown that the proposed algorithms produce stable and robust results, indicating that our proposed framework can be applied in various applications. Copyright © 2013 Elsevier Ltd. All rights reserved.
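For readers unfamiliar with the model class, a single-hidden-layer wavelet network computes y(x) = w0 + sum_j w_j * psi((x - t_j)/lambda_j), where psi is a mother wavelet and t_j, lambda_j are translations and dilations. The minimal forward pass below uses the Mexican-hat wavelet and random placeholder parameters; in the framework above, those parameters would come from an initialization algorithm and subsequent training.

import numpy as np

def mexican_hat(z):
    """Mexican-hat (second derivative of Gaussian) mother wavelet."""
    return (1.0 - z ** 2) * np.exp(-0.5 * z ** 2)

def wavelet_network(x, w0, weights, translations, dilations):
    """Forward pass of a single-input wavelet network with one hidden layer."""
    z = (x[:, None] - translations[None, :]) / dilations[None, :]
    return w0 + mexican_hat(z) @ weights

rng = np.random.default_rng(0)
n_wavelons = 8
translations = np.linspace(-3.0, 3.0, n_wavelons)   # placeholder hidden-unit centers
dilations = np.full(n_wavelons, 0.8)                # placeholder scales
weights = rng.normal(0.0, 1.0, n_wavelons)          # placeholder output weights

x = np.linspace(-3.0, 3.0, 200)
y = wavelet_network(x, 0.1, weights, translations, dilations)
print("output range:", y.min().round(3), "to", y.max().round(3))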
Using a composite grid approach in a complex coastal domain to estimate estuarine residence time
Warner, John C.; Geyer, W. Rockwell; Arango, Herman G.
2010-01-01
We investigate the processes that influence residence time in a partially mixed estuary using a three-dimensional circulation model. The complex geometry of the study region is not optimal for a structured grid model and so we developed a new method of grid connectivity. This involves a novel approach that allows an unlimited number of individual grids to be combined in an efficient manner to produce a composite grid. We then implemented this new method into the numerical Regional Ocean Modeling System (ROMS) and developed a composite grid of the Hudson River estuary region to investigate the residence time of a passive tracer. Results show that the residence time is a strong function of the time of release (spring vs. neap tide), the along-channel location, and the initial vertical placement. During neap tides there is a maximum in residence time near the bottom of the estuary at the mid-salt intrusion length. During spring tides the residence time is primarily a function of along-channel location and does not exhibit a strong vertical variability. This model study of residence time illustrates the utility of the grid connectivity method for circulation and dispersion studies in regions of complex geometry.
Global high-frequency source imaging accounting for complexity in Green's functions
NASA Astrophysics Data System (ADS)
Lambert, V.; Zhan, Z.
2017-12-01
The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
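A simplified two-level sketch in the spirit of hierarchical kriging, using scikit-learn Gaussian processes: a GP fitted to cheap low-fidelity data provides the trend for the high-fidelity model through a scaling factor plus a GP on the discrepancy. The Forrester-type test functions, the constant scaling factor (in place of the article's polynomial response surface), and the absence of adaptive sampling are all simplifying assumptions; this is not the ASM-IHK algorithm itself.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def f_high(x):  return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)   # expensive model
def f_low(x):   return 0.5 * f_high(x) + 10.0 * (x - 0.5) - 5.0        # cheap, biased model

x_lo = np.linspace(0.0, 1.0, 11)[:, None]          # many cheap samples
x_hi = np.array([0.0, 0.4, 0.6, 1.0])[:, None]     # few expensive samples
kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)

gp_lo = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_lo, f_low(x_lo).ravel())

# Scaling factor rho by least squares, then a GP on the residual discrepancy.
y_lo_at_hi = gp_lo.predict(x_hi)
y_hi = f_high(x_hi).ravel()
rho = float(np.dot(y_lo_at_hi, y_hi) / np.dot(y_lo_at_hi, y_lo_at_hi))
gp_d = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x_hi, y_hi - rho * y_lo_at_hi)

def predict_vf(x):
    """Variable-fidelity prediction: scaled low-fidelity trend plus discrepancy GP."""
    return rho * gp_lo.predict(x) + gp_d.predict(x)

x_test = np.linspace(0.0, 1.0, 200)[:, None]
rmse = np.sqrt(np.mean((predict_vf(x_test) - f_high(x_test).ravel()) ** 2))
print(f"VF surrogate RMSE on the test function: {rmse:.3f}")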
Miyabara, Renata; Berg, Karsten; Kraemer, Jan F; Baltatu, Ovidiu C; Wessel, Niels; Campos, Luciana A
2017-01-01
Objective: The aim of this study was to identify the most sensitive heart rate and blood pressure variability (HRV and BPV) parameters from a given set of well-known methods for the quantification of cardiovascular autonomic function after several autonomic blockades. Methods: Cardiovascular sympathetic and parasympathetic functions were studied in freely moving rats following peripheral muscarinic (methylatropine), β1-adrenergic (metoprolol), muscarinic + β1-adrenergic, α1-adrenergic (prazosin), and ganglionic (hexamethonium) blockades. Time domain, frequency domain and symbolic dynamics measures for each of HRV and BPV were classified through the paired Wilcoxon test for all autonomic drugs separately. In order to select those variables that have a high relevance to, and stable influence on, our target measurements (HRV, BPV), we used Fisher's Method to combine the p-values of multiple tests. Results: This analysis led to the following best set of cardiovascular variability parameters: the mean normal beat-to-beat interval/value (HRV/BPV: meanNN), the coefficient of variation (cvNN = standard deviation over meanNN) and the root mean square of successive differences (RMSSD) from the time domain analysis. In frequency domain analysis the very-low-frequency (VLF) component was selected. From symbolic dynamics, the Shannon entropy of the word distribution (FWSHANNON) as well as POLVAR3, the non-linear parameter to detect intermittently decreased variability, showed the best ability to discriminate between the different autonomic blockades. Conclusion: Through a complex comparative analysis of HRV and BPV measures altered by a set of autonomic drugs, we identified the most sensitive set of informative cardiovascular variability indexes able to pick up the modifications imposed by the autonomic challenges. These indexes may help to increase our understanding of cardiovascular sympathetic and parasympathetic functions in translational studies of experimental diseases.
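The three selected time-domain indices are straightforward to compute from a series of normal beat-to-beat (NN) intervals. The sketch below uses a synthetic, rat-like interval series in milliseconds; defining cvNN as the sample standard deviation divided by meanNN is an assumption about the exact normalization used.

import numpy as np

def time_domain_hrv(nn_ms):
    """Return meanNN, cvNN = SDNN/meanNN, and RMSSD from NN intervals in ms."""
    nn = np.asarray(nn_ms, dtype=float)
    mean_nn = nn.mean()
    cv_nn = nn.std(ddof=1) / mean_nn
    rmssd = np.sqrt(np.mean(np.diff(nn) ** 2))
    return mean_nn, cv_nn, rmssd

rng = np.random.default_rng(0)
nn = 180.0 + 8.0 * rng.standard_normal(300)   # synthetic rat-like RR intervals, ms
m, cv, r = time_domain_hrv(nn)
print(f"meanNN = {m:.1f} ms, cvNN = {cv:.3f}, RMSSD = {r:.1f} ms")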
Mental workload measurement for emergency operating procedures in digital nuclear power plants.
Gao, Qin; Wang, Yang; Song, Fei; Li, Zhizhong; Dong, Xiaolu
2013-01-01
Mental workload is a major consideration for the design of emergency operating procedures (EOPs) in nuclear power plants. Continuous and objective measures are desired. This paper compares seven mental workload measurement methods (pupil size, blink rate, blink duration, heart rate variability, parasympathetic/sympathetic ratio, total power and a GOMS-KLM-based (Goals, Operators, Methods, and Selection rules; Keystroke-Level Model) workload index) with regard to sensitivity, validity and intrusiveness. Eighteen participants performed two computerised EOPs of different complexity levels, and mental workload measures were collected during the experiment. The results show that the blink rate is sensitive to both the difference in the overall task complexity and changes in peak complexity within EOPs, that the error rate is sensitive to the level of arousal and correlates with the step error rate, and that blink duration increases over the task period in both low and high complexity EOPs. Cardiac measures were able to distinguish tasks with different overall complexity. The intrusiveness of the physiological instruments is acceptable. Finally, the six physiological measures were integrated using the group method of data handling to predict perceived overall mental workload. The study compared seven measures for evaluating mental workload with emergency operating procedures in nuclear power plants. An experiment with simulated procedures was carried out, and the results show that eye response measures are useful for assessing temporal changes of workload whereas cardiac measures are useful for evaluating the overall workload.
Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.
Chung, Yi-Shih
2013-12-01
Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, namely boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. The Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare their estimation results and external validities. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity provides improved, although diminishing, gains in classification performance, indicating a limited factor complexity of single-vehicle motorcycle crashes. The effects of crucial variables, including geographical, temporal, and sociodemographic factors, explain some fatal crashes. Relatively unique fatal crashes are better approximated by interaction terms, especially combinations of behavioral factors. BRT models generally provide better transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.
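A minimal BRT-style sketch using scikit-learn's gradient boosting as a stand-in, assuming a binary fatal/non-fatal outcome and placeholder predictors; it only illustrates how classification performance changes with tree (interaction) depth, not the paper's model specification.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Illustrative crash data: X holds geographical, time, sociodemographic and
# behavioural factors; y is 1 for a fatal crash, 0 otherwise (placeholders).
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(size=2000) > 1).astype(int)

# Increasing tree complexity (interaction depth) usually improves fit with
# diminishing returns, mirroring the "limited factor complexity" finding.
for depth in (1, 2, 3, 5):
    brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                     max_depth=depth, subsample=0.8)
    auc = cross_val_score(brt, X, y, cv=5, scoring="roc_auc").mean()
    print(f"interaction depth {depth}: CV AUC = {auc:.3f}")
```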
Aliakbaryhosseinabadi, Susan; Kostic, Vladimir; Pavlovic, Aleksandra; Radovanovic, Sasa; Nlandu Kamavuako, Ernest; Jiang, Ning; Petrini, Laura; Dremstrup, Kim; Farina, Dario; Mrachacz-Kersting, Natalie
2017-01-01
In this study, we analyzed the influence of artificially imposed attention variations using the auditory oddball paradigm on the cortical activity associated with motor preparation/execution. EEG signals from Cz and its surrounding channels were recorded during three sets of ankle dorsiflexion movements. Each set was interspersed with either a complex or a simple auditory oddball task for healthy participants and a complex auditory oddball task for stroke patients. The amplitude of the movement-related cortical potentials (MRCPs) decreased with the complex oddball paradigm, while MRCP variability increased. Both oddball paradigms increased the detection latency significantly (p<0.05) and the complex paradigm decreased the true positive rate (TPR) (p=0.04). In patients, the negativity of the MRCP decreased while pre-phase variability increased, and the detection latency and accuracy deteriorated with attention diversion. Attention diversion has a significant influence on MRCP features and detection parameters, although these changes were counteracted by the application of the Laplacian method. Brain-computer interfaces for neuromodulation that use the MRCP as the control signal are robust to changes in attention. However, attention must be monitored since it plays a key role in plasticity induction. Here we demonstrate that this can be achieved using the single channel Cz. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
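A minimal sketch of a small surface Laplacian around Cz, the kind of spatial filter referred to above; the neighbouring channel names, sampling rate and data layout are assumptions for illustration only.

```python
import numpy as np

def small_laplacian(eeg, center="Cz", neighbors=("FCz", "CPz", "C1", "C2")):
    """Small surface Laplacian: centre channel minus the mean of its neighbours.

    `eeg` is assumed to be a dict mapping channel name -> 1-D signal array.
    """
    neigh = np.mean([eeg[ch] for ch in neighbors], axis=0)
    return eeg[center] - neigh

# Usage with placeholder data; four illustrative neighbours are used here.
rng = np.random.default_rng(2)
t = np.arange(0, 4.0, 1.0 / 256.0)                      # 4 s at an assumed 256 Hz
eeg = {ch: rng.normal(size=t.size) for ch in ("Cz", "FCz", "CPz", "C1", "C2")}
mrcp_proxy = small_laplacian(eeg)
print(mrcp_proxy.shape)
```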
Complexity and time asymmetry of heart rate variability are altered in acute mental stress.
Visnovcova, Z; Mestanik, M; Javorka, M; Mokra, D; Gala, M; Jurko, A; Calkovska, A; Tonhajzerova, I
2014-07-01
We aimed to study the complexity and time asymmetry of short-term heart rate variability (HRV) as an index of complex neurocardiac control in response to stress using symbolic dynamics and time irreversibility methods. ECG was recorded at rest and during and after two stressors (Stroop, arithmetic test) in 70 healthy students. Symbolic dynamics parameters (NUPI, NCI, 0V%, 1V%, 2LV%, 2UV%) and time irreversibility indices (P%, G%, E) were evaluated. Additionally, HRV magnitude was quantified by linear parameters: spectral powers in the low (LF) and high frequency (HF) bands. Our results showed a reduction of HRV complexity in stress (lower NUPI with both stressors, lower NCI with Stroop). Pattern classification analysis revealed significantly higher 0V% and lower 2LV% with both stressors, indicating a shift in sympathovagal balance, and significantly higher 1V% and lower 2UV% with Stroop. An unexpected result was found in time irreversibility: significantly lower G% and E with both stressors, while the P% index declined significantly only with the arithmetic test. Linear HRV analysis confirmed vagal withdrawal (lower HF) with both stressors; LF significantly increased with the Stroop test and decreased with the arithmetic test. Correlation analysis revealed no significant associations between symbolic dynamics and time irreversibility. In conclusion, symbolic dynamics and time irreversibility could provide independent information related to alterations of neurocardiac control integrity in stress-related disease.
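The pattern-family percentages (0V%, 1V%, 2LV%, 2UV%) come from a symbolic analysis of the RR series. Below is a minimal sketch of that classification, assuming Porta-style uniform quantisation into six levels and words of three symbols; the parameter choices and input series are illustrative, not the study's recordings.

```python
import numpy as np

def symbolic_dynamics(rr, n_levels=6):
    """Classify 3-symbol words of an RR series into 0V/1V/2LV/2UV families."""
    rr = np.asarray(rr, dtype=float)
    # Quantize the series into n_levels uniform bins over its full range.
    edges = np.linspace(rr.min(), rr.max(), n_levels + 1)
    symbols = np.digitize(rr, edges[1:-1])
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2UV": 0}
    for i in range(len(symbols) - 2):
        a, b, c = symbols[i:i + 3]
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1            # no variation
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1            # one variation
        elif d1 * d2 > 0:
            counts["2LV"] += 1           # two like variations
        else:
            counts["2UV"] += 1           # two unlike variations
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

rng = np.random.default_rng(3)
rr = 800 + np.cumsum(rng.normal(0, 10, 300))   # placeholder RR series (ms)
print(symbolic_dynamics(rr))
```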
NASA Astrophysics Data System (ADS)
Antsiferov, SV; Sammal, AS; Deev, PV
2018-03-01
To determine the stress-strain state of the multilayer support of vertical shafts, including cross-sectional deformation of the tubing rings relative to the design shape, the authors propose an analytical method based on the provisions of the mechanics of underground structures, in which the support and the surrounding rock mass are treated as elements of an integrated deformable system. The method involves a rigorous solution of the corresponding elasticity problem, obtained using the mathematical apparatus of the theory of analytic functions of a complex variable. The design method is implemented as a software program allowing multivariate applied computations. Examples of the calculations are given.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
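momi computes the expected joint SFS analytically under a Moran model; the sketch below only illustrates what the joint SFS object is, by tabulating it naively from per-site derived-allele counts in two populations. Sample sizes and counts are placeholders.

```python
import numpy as np

def joint_sfs(counts_pop1, counts_pop2, n1, n2):
    """Naively tabulate a joint SFS for two populations.

    counts_pop1/2: derived-allele count observed at each site in samples of
    n1 and n2 chromosomes. Entry [i, j] of the result is the number of sites
    with i derived copies in population 1 and j in population 2.
    """
    sfs = np.zeros((n1 + 1, n2 + 1), dtype=int)
    np.add.at(sfs, (counts_pop1, counts_pop2), 1)
    return sfs

# Placeholder data: 5000 polymorphic sites, samples of 10 and 8 chromosomes.
rng = np.random.default_rng(4)
c1 = rng.integers(0, 11, size=5000)
c2 = rng.integers(0, 9, size=5000)
print(joint_sfs(c1, c2, 10, 8).shape)   # (11, 9)
```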
Chen, Jinsong; Zhang, Dake; Choi, Jaehwa
2015-12-01
It is common to encounter latent variables with ordinal data in social or behavioral research. Although a mediated effect of latent variables (latent mediated effect, or LME) with ordinal data may appear to be a straightforward combination of LME with continuous data and latent variables with ordinal data, the methodological challenges of combining the two are not trivial. This research covers model structures as complex as LME and formulates both point and interval estimates of LME for ordinal data using the Bayesian full-information approach. We also combine weighted least squares (WLS) estimation with the bias-corrected bootstrapping method (BCB; Efron, Journal of the American Statistical Association, 82, 171-185, 1987) or the traditional delta method as the limited-information approach. We evaluated the viability of these different approaches across various conditions through simulation studies, and provide an empirical example to illustrate the approaches. We found that the Bayesian approach with reasonably informative priors is preferred when both point and interval estimates are of interest and the sample size is 200 or above.
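As a rough illustration of interval estimation for a mediated (indirect) effect, the sketch below bootstraps the product a×b with continuous stand-in variables and a percentile interval; it is not the paper's Bayesian full-information or WLS/bias-corrected-bootstrap machinery for ordinal indicators.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)               # mediator
y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # outcome

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                 # slope of M on X
    # Slope of Y on M controlling for X (two-predictor least squares).
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                # resample cases with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```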
A Computing Method for Sound Propagation Through a Nonuniform Jet Stream
NASA Technical Reports Server (NTRS)
Padula, S. L.; Liu, C. H.
1974-01-01
Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High speed computer methods offer a unique potential for dealing with complex real life physical systems, whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables, requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with special emphasis on its suitability for numerical solution on fast computers. The technique is very attractive because the resulting equations are linear in the new variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.
NASA Astrophysics Data System (ADS)
Eghtesad, Adnan; Knezevic, Marko
2018-07-01
A corrective smooth particle method (CSPM) within smooth particle hydrodynamics (SPH) is used to study the deformation of an aircraft structure under high-velocity water-ditching impact load. The CSPM-SPH method features a new approach for the prediction of two-way fluid-structure interaction coupling. Results indicate that the implementation is well suited for modeling the deformation of structures under high-velocity impact into water as evident from the predicted stress and strain localizations in the aircraft structure as well as the integrity of the impacted interfaces, which show no artificial particle penetrations. To reduce the simulation time, a heterogeneous particle size distribution over a complex three-dimensional geometry is used. The variable particle size is achieved from a finite element mesh with variable element size and, as a result, variable nodal (i.e., SPH particle) spacing. To further accelerate the simulations, the SPH code is ported to a graphics processing unit using the OpenACC standard. The implementation and simulation results are described and discussed in this paper.
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and to quantify saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture structural variability in vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric modelling techniques (3-D surface reconstruction and rasterised volume) and a point cloud elevation histogram modelling technique to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
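A hedged sketch of the elevation-histogram idea: summarise each plot's TLS point cloud as the fraction of returns per height bin and regress harvested biomass on those fractions. Bin count, heights, and the biomass values are placeholders, not the study's data or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def elevation_histogram(points_z, n_bins=20, z_max=3.0):
    """Fraction of TLS returns per elevation bin for one plot (sketch)."""
    hist, _ = np.histogram(points_z, bins=n_bins, range=(0.0, z_max))
    return hist / max(hist.sum(), 1)

# Placeholder training data: one point cloud per plot plus harvested biomass.
rng = np.random.default_rng(6)
plots_z = [rng.gamma(shape=2.0, scale=0.4, size=rng.integers(5000, 20000))
           for _ in range(30)]
biomass = np.array([z.mean() * 5.0 + rng.normal(0, 0.2) for z in plots_z])  # kg/m^2

X = np.vstack([elevation_histogram(z) for z in plots_z])
model = LinearRegression().fit(X, biomass)
print(f"R^2 on training plots: {model.score(X, biomass):.2f}")
```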
ADAM: analysis of discrete models of biological systems using computer algebra.
Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard
2011-07-20
Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web-based tool for several different input formats, and it makes analysis of complex models accessible to a larger community, as it is platform independent as a web-service and does not require understanding of the underlying mathematics.
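ADAM itself converts discrete models into polynomial dynamical systems over finite fields and finds attractors by solving polynomial equations. As a hedged illustration of what is being computed, the sketch below finds the attractors of a toy three-node Boolean network by brute-force synchronous simulation; the update rules are invented for the example.

```python
from itertools import product

# Toy 3-node Boolean network: each rule maps the current state to a node's
# next value under synchronous update. The rules are illustrative only.
rules = [
    lambda s: s[1] and not s[2],   # x0' = x1 AND NOT x2
    lambda s: s[0],                # x1' = x0
    lambda s: s[0] or s[1],        # x2' = x0 OR x1
]

def step(state):
    return tuple(int(rule(state)) for rule in rules)

def attractors(n):
    found = set()
    for start in product((0, 1), repeat=n):
        seen, state = {}, start
        while state not in seen:          # follow the trajectory until a repeat
            seen[state] = len(seen)
            state = step(state)
        cycle_start = seen[state]         # states at or after this index form the cycle
        cycle = tuple(sorted(s for s, i in seen.items() if i >= cycle_start))
        found.add(cycle)
    return found

for att in attractors(3):
    print("attractor:", att)
```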
Romi, Wahengbam; Keisam, Santosh; Ahmed, Giasuddin; Jeyaram, Kumaraswamy
2014-02-28
Meyerozyma guilliermondii (anamorph Candida guilliermondii) and Meyerozyma caribbica (anamorph Candida fermentati) are closely related species of the genetically heterogenous M. guilliermondii complex. Conventional phenotypic methods frequently misidentify the species within this complex and also with other species of the Saccharomycotina CTG clade. Even the long-established sequencing of large subunit (LSU) rRNA gene remains ambiguous. We also faced similar problem during identification of yeast isolates of M. guilliermondii complex from indigenous bamboo shoot fermentation in North East India. There is a need for development of reliable and accurate identification methods for these closely related species because of their increasing importance as emerging infectious yeasts and associated biotechnological attributes. We targeted the highly variable internal transcribed spacer (ITS) region (ITS1-5.8S-ITS2) and identified seven restriction enzymes through in silico analysis for differentiating M. guilliermondii from M. caribbica. Fifty five isolates of M. guilliermondii complex which could not be delineated into species-specific taxonomic ranks by API 20 C AUX and LSU rRNA gene D1/D2 sequencing were subjected to ITS-restriction fragment length polymorphism (ITS-RFLP) analysis. TaqI ITS-RFLP distinctly differentiated the isolates into M. guilliermondii (47 isolates) and M. caribbica (08 isolates) with reproducible species-specific patterns similar to the in silico prediction. The reliability of this method was validated by ITS1-5.8S-ITS2 sequencing, mitochondrial DNA RFLP and electrophoretic karyotyping. We herein described a reliable ITS-RFLP method for distinct differentiation of frequently misidentified M. guilliermondii from M. caribbica. Even though in silico analysis differentiated other closely related species of M. guilliermondii complex from the above two species, it is yet to be confirmed by in vitro analysis using reference strains. This method can be used as a reliable tool for rapid and accurate identification of closely related species of M. guilliermondii complex and for differentiating emerging infectious yeasts of the Saccharomycotina CTG clade.
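As a rough illustration of the in silico digestion step, the sketch below locates TaqI recognition sites (T^CGA) in a placeholder ITS sequence and reports the resulting fragment sizes; the sequence is invented, and a real analysis would digest the amplified ITS1-5.8S-ITS2 region of each isolate.

```python
def digest(seq, site="TCGA", cut_offset=1):
    """Return fragment lengths from an in silico restriction digest.

    TaqI recognises TCGA and cuts after the first base (T^CGA);
    `cut_offset` is the cut position within the recognition site.
    """
    seq = seq.upper()
    cuts = []
    pos = seq.find(site)
    while pos != -1:
        cuts.append(pos + cut_offset)
        pos = seq.find(site, pos + 1)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# Placeholder ITS sequence; fragment sizes correspond to the bands on a gel.
its = "ATGCTCGATTTCGAAGGCTAGCTCGACCGTA"
print(digest(its))
```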
2D problems of surface growth theory with applications to additive manufacturing
NASA Astrophysics Data System (ADS)
Manzhirov, A. V.; Mikhin, M. N.
2018-04-01
We study 2D problems of surface growth theory of deformable solids and their applications to the analysis of the stress-strain state of AM fabricated products and structures. Statements of the problems are given, and a solution method based on the approaches of the theory of functions of a complex variable is suggested. Computations are carried out for model problems. Qualitative and quantitative results are discussed.
An implementation of the distributed programming structural synthesis system (PROSSS)
NASA Technical Reports Server (NTRS)
Rogers, J. L., Jr.
1981-01-01
A method is described for implementing a flexible software system that combines large, complex programs with small, user-supplied, problem-dependent programs and that distributes their execution between a mainframe and a minicomputer. The Programming Structural Synthesis System (PROSSS) was the specific software system considered. The results of such distributed implementation are flexibility of the optimization procedure organization and versatility of the formulation of constraints and design variables.
Cytomegalovirus shapes long-term immune reconstitution after allogeneic stem cell transplantation
Itzykson, Raphael; Robin, Marie; Moins-Teisserenc, Helene; Delord, Marc; Busson, Marc; Xhaard, Aliénor; de Fontebrune, Flore Sicre; de Latour, Régis Peffault; Toubert, Antoine; Socié, Gérard
2015-01-01
Immune reconstitution after allogeneic stem cell transplantation is a dynamic and complex process depending on the recipient and donor characteristics, on the modalities of transplantation, and on the occurrence of graft-versus-host disease. Multivariate methods widely used for gene expression profiling can simultaneously analyze the patterns of a great number of biological variables on a heterogeneous set of patients. Here we use these methods on flow cytometry assessment of up to 25 lymphocyte populations to analyze the global pattern of long-term immune reconstitution after transplantation. Immune patterns were most distinct from healthy controls at six months, and had not yet fully recovered as long as two years after transplant. The two principal determinants of variability were linked to the balance of B and CD8+ T cells and of natural killer and B cells, respectively. Recipient’s cytomegalovirus serostatus, cytomegalovirus replication, and chronic graft-versus-host disease were the main factors shaping the immune pattern one year after transplant. We identified a complex signature of under- and over-representation of immune populations dictated by recipient’s cytomegalovirus seropositivity. Finally, we identified dimensions of variance in immune patterns as significant predictors of long-term non-relapse mortality, independently of chronic graft-versus-host disease. PMID:25261095
Mrabet, Yassine; Semmar, Nabil
2010-05-01
The complexity of metabolic systems can be studied at different scales (metabolites, metabolic pathways, metabolic network map, biological population) and under different aspects (structural, functional, evolutionary). To analyse such complexity, metabolic systems need to be decomposed into different components according to different concepts. Four concepts are presented here, which consist in considering metabolic systems as sets of metabolites, chemical reactions, metabolic pathways or successive processes. From a metabolomic dataset, such decompositions are performed using different mathematical methods including correlation, stoichiometric, ordination, classification, combinatorial and kinetic analyses. Correlation analysis detects and quantifies affinities/oppositions between metabolites. Stoichiometric analysis aims, on the one hand, to identify the organisation of a metabolic network into different metabolic pathways and, on the other, to quantify/optimize the metabolic flux distribution through the different chemical reactions of the system. Ordination and classification analyses help to identify different metabolic trends and their associated metabolites in order to highlight chemical polymorphism representing different variability poles of the metabolic system. Then, metabolic processes/correlations responsible for such polymorphism can be extracted in silico by combining metabolic profiles representative of different metabolic trends according to a weighting bootstrap approach. Finally, the evolution of metabolic processes in time can be analysed by different kinetic/dynamic modelling approaches.
From themes to hypotheses: following up with quantitative methods.
Morgan, David L
2015-06-01
One important category of mixed-methods research designs consists of quantitative studies that follow up on qualitative research. In this case, the themes that serve as the results from the qualitative methods generate hypotheses for testing through the quantitative methods. That process requires operationalization to translate the concepts from the qualitative themes into quantitative variables. This article illustrates these procedures with examples that range from simple operationalization to the evaluation of complex models. It concludes with an argument for not only following up qualitative work with quantitative studies but also the reverse, and doing so by going beyond integrating methods within single projects to include broader mutual attention from qualitative and quantitative researchers who work in the same field. © The Author(s) 2015.
1986-08-01
[Fragment of the source report's list of tables and table captions: tables covering mean square errors for selected variables, the variable range and mean value for MCC and non-MCC cases, and the alpha levels at which the two mean values for each variable (e.g. vorticity advection, 700 mb vertical velocity forecast) are determined to be significantly different; these alpha levels express the probability of erroneously concluding that the means differ.]
Soneson, Charlotte; Lilljebjörn, Henrik; Fioretos, Thoas; Fontes, Magnus
2010-04-15
With the rapid development of new genetic measurement methods, several types of genetic alterations can be quantified in a high-throughput manner. While the initial focus has been on investigating each data set separately, there is an increasing interest in studying the correlation structure between two or more data sets. Multivariate methods based on Canonical Correlation Analysis (CCA) have been proposed for integrating paired genetic data sets. The high dimensionality of microarray data imposes computational difficulties, which have been addressed for instance by studying the covariance structure of the data, or by reducing the number of variables prior to applying the CCA. In this work, we propose a new method for analyzing high-dimensional paired genetic data sets, which mainly emphasizes the correlation structure and still permits efficient application to very large data sets. The method is implemented by translating a regularized CCA to its dual form, where the computational complexity depends mainly on the number of samples instead of the number of variables. The optimal regularization parameters are chosen by cross-validation. We apply the regularized dual CCA, as well as a classical CCA preceded by a dimension-reducing Principal Components Analysis (PCA), to a paired data set of gene expression changes and copy number alterations in leukemia. Using the correlation-maximizing methods, regularized dual CCA and PCA+CCA, we show that without pre-selection of known disease-relevant genes, and without using information about clinical class membership, an exploratory analysis singles out two patient groups, corresponding to well-known leukemia subtypes. Furthermore, the variables showing the highest relevance to the extracted features agree with previous biological knowledge concerning copy number alterations and gene expression changes in these subtypes. Finally, the correlation-maximizing methods are shown to yield results which are more biologically interpretable than those resulting from a covariance-maximizing method, and provide different insight compared to when each variable set is studied separately using PCA. We conclude that regularized dual CCA as well as PCA+CCA are useful methods for exploratory analysis of paired genetic data sets, and can be efficiently implemented also when the number of variables is very large.
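The sketch below is a primal regularized CCA on toy paired data, assuming ridge-type regularization of both covariance matrices; the paper's contribution is the dual formulation (whose cost scales with the number of samples rather than variables), which is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def regularized_cca(X, Y, reg=1.0, n_comp=2):
    """Primal regularized CCA (sketch): ridge penalty on both covariances."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Solve Sxy Syy^-1 Syx w = rho^2 Sxx w as a generalized eigenproblem.
    M = Sxy @ np.linalg.solve(Syy, Sxy.T)
    vals, vecs = eigh(M, Sxx)
    order = np.argsort(vals)[::-1][:n_comp]
    corrs = np.sqrt(np.clip(vals[order], 0.0, 1.0))
    return corrs, vecs[:, order]

# Placeholder paired data: expression changes vs copy number alterations.
rng = np.random.default_rng(7)
Z = rng.normal(size=(60, 3))                                   # shared latent structure
X = Z @ rng.normal(size=(3, 100)) + 0.5 * rng.normal(size=(60, 100))
Y = Z @ rng.normal(size=(3, 80)) + 0.5 * rng.normal(size=(60, 80))
corrs, _ = regularized_cca(X, Y)
print("canonical correlations:", np.round(corrs, 2))
```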
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th order, state variable model of the F100 engine and to a 43d order, transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
Gene variants associated with antisocial behaviour: A latent variable approach
Bentley, Mary Jane; Lin, Haiqun; Fernandez, Thomas V.; Lee, Maria; Yrigollen, Carolyn M.; Pakstis, Andrew J.; Katsovich, Liliya; Olds, David L.; Grigorenko, Elena L.; Leckman, James F.
2013-01-01
Objective The aim of this study was to determine if a latent variable approach might be useful in identifying shared variance across genetic risk alleles that is associated with antisocial behaviour at age 15 years. Methods Using a conventional latent variable approach, we derived an antisocial phenotype in 328 adolescents utilizing data from a 15-year follow-up of a randomized trial of a prenatal and infancy nurse-home visitation program in Elmira, New York. We then investigated, via a novel latent variable approach, 450 informative genetic polymorphisms in 71 genes previously associated with antisocial behaviour, drug use, affiliative behaviours, and stress response in 241 consenting individuals for whom DNA was available. Haplotype and Pathway analyses were also performed. Results Eight single-nucleotide polymorphisms (SNPs) from 8 genes contributed to the latent genetic variable that in turn accounted for 16.0% of the variance within the latent antisocial phenotype. The number of risk alleles was linearly related to the latent antisocial variable scores. Haplotypes that included the putative risk alleles for all 8 genes were also associated with higher latent antisocial variable scores. In addition, 33 SNPs from 63 of the remaining genes were also significant when added to the final model. Many of these genes interact on a molecular level, forming molecular networks. The results support a role for genes related to dopamine, norepinephrine, serotonin, glutamate, opioid, and cholinergic signaling as well as stress response pathways in mediating susceptibility to antisocial behaviour. Conclusions This preliminary study supports use of relevant behavioural indicators and latent variable approaches to study the potential “co-action” of gene variants associated with antisocial behaviour. It also underscores the cumulative relevance of common genetic variants for understanding the etiology of complex behaviour. If replicated in future studies, this approach may allow the identification of a ‘shared’ variance across genetic risk alleles associated with complex neuropsychiatric dimensional phenotypes using relatively small numbers of well-characterized research participants. PMID:23822756
Kwon, Hae-Yeon; Ahn, So-Yoon
2016-10-01
[Purpose] This study investigates how a task-oriented training and high-variability practice program can affect gross motor performance and activities of daily living for children with spastic diplegia, and provides an effective and reliable clinical database for future improvement of motor performance skills. [Subjects and Methods] This study randomly assigned seven children with spastic diplegia to each of three intervention groups: a control group, a task-oriented training group, and a high-variability practice group. The control group only received neurodevelopmental treatment for 40 minutes, while the other two intervention groups additionally implemented a task-oriented training and high-variability practice program for 8 weeks (twice a week, 60 min per session). To compare intra- and inter-group relationships among the three intervention groups, this study measured the gross motor performance measure (GMPM) and the functional independence measure for children (WeeFIM) before and after 8 weeks of training. [Results] There were statistically significant differences in the amount of change before and after the training among the three intervention groups for the gross motor performance measure and the functional independence measure. [Conclusion] Applying high-variability practice in a task-oriented training course may be considered an efficient intervention method to improve motor performance skills that can be tuned to the movements needed for daily living through motor experience and the learning of new skills, as well as through changes of tasks learned in a complex environment or in situations similar to high-variability practice.
Simulating variable source problems via post processing of individual particle tallies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.
2000-10-20
Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source.
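A minimal sketch of the post-processing idea, assuming each recorded history stores the sampled source energy and its tally contribution: a new source spectrum is evaluated by re-weighting histories with the ratio of the new to the original source probability density, with no additional transport. The record layout, spectra and numbers are placeholders.

```python
import numpy as np

# Each recorded history: the sampled source energy (MeV) and its tally
# contribution (e.g. dose at a detector). Placeholder values only.
rng = np.random.default_rng(8)
n_hist = 100_000
source_energy = rng.uniform(0.1, 5.0, n_hist)          # sampled uniformly
tally = np.exp(-0.3 * source_energy) * rng.exponential(1.0, n_hist)

def reweighted_mean(tally, energy, new_pdf, old_pdf):
    """Estimate the per-source-particle tally for a different source spectrum
    without re-running transport: weight each history by new_pdf(E)/old_pdf(E)."""
    w = new_pdf(energy) / old_pdf(energy)
    return np.mean(w * tally)

old_pdf = lambda e: np.full_like(e, 1.0 / 4.9)                  # uniform on [0.1, 5] MeV
new_pdf = lambda e: np.exp(-e) / (np.exp(-0.1) - np.exp(-5.0))  # softer spectrum
print(f"original source tally : {tally.mean():.4f}")
print(f"re-weighted estimate  : {reweighted_mean(tally, source_energy, new_pdf, old_pdf):.4f}")
```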
NASA Astrophysics Data System (ADS)
Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz
2012-12-01
Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on a generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. The values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10-3, 1.11×10-7, and 5.50×10-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
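For orientation, the sketch below implements standard SampEn, the quantity that qSampEn generalizes through the Tsallis q-formalism; the O(N^2) implementation, parameter choices (m = 2, r = 0.2·SD) and test series are illustrative only.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Standard sample entropy SampEn(m, r) of a 1-D series (O(N^2) sketch)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        # All template vectors of length `dim`, compared pairwise with a
        # Chebyshev (max-abs) distance tolerance of r.
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(9)
rr_noise = rng.normal(800, 50, 500)                 # uncorrelated "RR" series
rr_regular = 800 + 50 * np.sin(np.arange(500) / 5)  # highly regular series
print(f"SampEn noise   : {sample_entropy(rr_noise):.2f}")
print(f"SampEn regular : {sample_entropy(rr_regular):.2f}")
```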
NASA Astrophysics Data System (ADS)
Linker, Thomas M.; Lee, Glenn S.; Beekman, Matt
2018-06-01
The semi-analytical methods of thermoelectric energy conversion efficiency calculation based on the cumulative properties approach and the reduced variables approach are compared for 21 high performance thermoelectric materials. Both approaches account for the temperature dependence of the material properties as well as the Thomson effect; thus, the predicted conversion efficiencies are generally lower than those based on the conventional thermoelectric figure of merit ZT for nearly all of the materials evaluated. The two methods also predict material energy conversion efficiencies that are in very good agreement with each other, even for large temperature differences (average percent difference of 4% with a maximum observed deviation of 11%). The tradeoff between obtaining a reliable assessment of a material's potential for thermoelectric applications and the complexity of implementation of the three models, and the advantages of using more accurate modeling approaches in evaluating new thermoelectric materials, are highlighted.
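For comparison, the conventional efficiency estimate that both semi-analytical methods improve upon follows from the figure of merit ZT alone. A minimal sketch, assuming constant properties with ZT evaluated at the mean temperature:

```python
import numpy as np

def efficiency_from_zt(zt, t_hot, t_cold):
    """Maximum conversion efficiency for a constant figure of merit ZT,
    with ZT taken at the mean temperature of the leg."""
    carnot = (t_hot - t_cold) / t_hot
    s = np.sqrt(1.0 + zt)
    return carnot * (s - 1.0) / (s + t_cold / t_hot)

for zt in (0.5, 1.0, 1.5, 2.0):
    print(f"ZT = {zt:.1f}: eta = {efficiency_from_zt(zt, 800.0, 300.0):.1%}")
```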
A global × global test for testing associations between two large sets of variables.
Chaturvedi, Nimisha; de Menezes, Renée X; Goeman, Jelle J
2017-01-01
In high-dimensional omics studies where multiple molecular profiles are obtained for each set of patients, there is often interest in identifying complex multivariate associations, for example, copy-number-regulated expression levels in a certain pathway or in a genomic region. To detect such associations, we present a novel approach to test for association between two sets of variables. Our approach generalizes the global test, which tests for association between a group of covariates and a single univariate response, to allow a high-dimensional multivariate response. We apply the method to several simulated datasets as well as two publicly available datasets, where we compare the performance of the multivariate global test (G2) with the univariate global test. The method is implemented in R and will be available as part of the globaltest package. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ryder, Alan G
2002-03-01
Eighty-five solid samples consisting of illegal narcotics diluted with several different materials were analyzed by near-infrared (785 nm excitation) Raman spectroscopy. Principal Component Analysis (PCA) was employed to classify the samples according to narcotic type. The best sample discrimination was obtained by using the first derivative of the Raman spectra. Furthermore, restricting the spectral variables for PCA to 2 or 3% of the original spectral data according to the most intense peaks in the Raman spectrum of the pure narcotic resulted in a rapid discrimination method for classifying samples according to narcotic type. This method allows for the easy discrimination between cocaine, heroin, and MDMA mixtures even when the Raman spectra are complex or very similar. This approach of restricting the spectral variables also decreases the computational time by a factor of 30 (compared to the complete spectrum), making the methodology attractive for rapid automatic classification and identification of suspect materials.
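A hedged sketch of the preprocessing and variable-restriction idea: take the first derivative of each spectrum, keep only the channels at the strongest peaks of the pure-narcotic reference, and run PCA on the restricted matrix. The data, the 3% threshold and the array shapes are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(10)
n_samples, n_channels = 85, 1500
spectra = rng.normal(size=(n_samples, n_channels)).cumsum(axis=1)  # placeholder Raman spectra
pure_reference = np.abs(rng.normal(size=n_channels))               # pure-narcotic spectrum

# First derivative along the wavenumber axis suppresses the broad background.
d_spectra = np.gradient(spectra, axis=1)

# Keep only ~3% of channels, chosen at the most intense reference peaks.
keep = np.argsort(pure_reference)[-int(0.03 * n_channels):]
scores = PCA(n_components=3).fit_transform(d_spectra[:, keep])
print(scores.shape)   # (85, 3): PC1 vs PC2 scatter is used to separate narcotic classes
```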
A Curved, Elastostatic Boundary Element for Plane Anisotropic Structures
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S.; Klang, Eric C.
2001-01-01
The plane-stress equations of linear elasticity are used in conjunction with those of the boundary element method to develop a novel curved, quadratic boundary element applicable to structures composed of anisotropic materials in a state of plane stress or plane strain. The curved boundary element is developed to solve two-dimensional, elastostatic problems of arbitrary shape, connectivity, and material type. As a result of the anisotropy, complex variables are employed in the fundamental solution derivations for a concentrated unit-magnitude force in an infinite elastic anisotropic medium. Once known, the fundamental solutions are evaluated numerically by using the known displacement and traction boundary values in an integral formulation with Gaussian quadrature. All the integral equations of the boundary element method are evaluated using one of two methods: either regular Gaussian quadrature or a combination of regular and logarithmic Gaussian quadrature. The regular Gaussian quadrature is used to evaluate most of the integrals along the boundary, and the combined scheme is employed for integrals that are singular. Individual element contributions are assembled into the global matrices of the standard boundary element method, manipulated to form a system of linear equations, and the resulting system is solved. The interior displacements and stresses are found through a separate set of auxiliary equations that are derived using an Airy-type stress function in terms of complex variables. The capabilities and accuracy of this method are demonstrated for a laminated-composite plate with a central, elliptical cutout that is subjected to uniform tension along one of the straight edges of the plate. Comparison of the boundary element results for this problem with corresponding results from an analytical model shows a difference of less than 1%.
Reduced modeling of signal transduction – a modular approach
Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter
2007-01-01
Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively and a lot of progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows building modularized and highly reduced models. Compared to existing approaches further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows to dissect the model into smaller modules that are called layers and can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer based reduced modeling method allows building modularized and strongly reduced models of signal transduction networks. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good approximations especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494
Mohamed, Marwa E; Frag, Eman Y Z; Hathoot, Abla A; Shalaby, Essam A
2018-01-15
A simple, accurate and robust spectrophotometric method was developed for the determination of fenoprofen calcium (FPC). The proposed method was based on the charge transfer (CT) reaction of the FPC drug (as n-electron donor) with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), 2,4,6-trinitrophenol (picric acid, PA) or 1,2,5,8-tetrahydroxyanthraquinone (Quinalizarin, QZ) (as π-acceptors) to give highly colored charge transfer complexes. Different variables affecting the reaction, such as reagent concentration, temperature and time, were carefully optimized to achieve the highest sensitivity. Beer's law was obeyed over the concentration ranges of 2-60, 0.6-90 and 4-30 μg mL-1 using the DDQ, PA and QZ CT reagents, respectively, with correlation coefficients of 0.9986, 0.9989 and 0.997 and detection limits of 1.78, 0.48 and 2.6 μg mL-1 for the CT reagents in the same order. Elucidation of the chemical structure of the solid CT complexes formed via the reaction between the drug under study and the π-acceptors was done using elemental and thermal analyses, IR, 1H NMR and mass spectrometry. X-ray diffraction was used to estimate the crystallinity of the CT complexes. Their biological activities were screened against different bacterial and fungal organisms. The method was applied successfully with satisfactory results for the determination of the FPC drug in fenoprofen capsules. The method was validated with respect to linearity, limits of detection and quantification, inter- and intra-day precision and accuracy. The proposed method gave results comparable with the official method. Copyright © 2017 Elsevier B.V. All rights reserved.
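A generic Beer's-law calibration sketch (not the paper's data): fit absorbance against concentration, report the correlation coefficient, and estimate LOD and LOQ as 3.3σ/slope and 10σ/slope from the residual standard deviation.

```python
import numpy as np

# Placeholder calibration data for one charge-transfer complex.
conc = np.array([2, 5, 10, 20, 40, 60], dtype=float)        # ug/mL
absorbance = 0.012 * conc + 0.01 + np.random.default_rng(11).normal(0, 0.004, conc.size)

slope, intercept = np.polyfit(conc, absorbance, 1)           # linear Beer's-law fit
pred = slope * conc + intercept
r = np.corrcoef(conc, absorbance)[0, 1]                      # correlation coefficient
sigma = np.std(absorbance - pred, ddof=2)                    # residual standard deviation

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"slope={slope:.4f}, r={r:.4f}, LOD={lod:.2f} ug/mL, LOQ={loq:.2f} ug/mL")
```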
An introduction to tree-structured modeling with application to quality of life data.
Su, Xiaogang; Azuero, Andres; Cho, June; Kvale, Elizabeth; Meneses, Karen M; McNees, M Patrick
2011-01-01
Investigators addressing nursing research are faced increasingly with the need to analyze data that involve variables of mixed types and are characterized by complex nonlinearity and interactions. Tree-based methods, also called recursive partitioning, are gaining popularity in various fields. In addition to efficiency and flexibility in handling multifaceted data, tree-based methods offer ease of interpretation. The aims of this study were to introduce tree-based methods, discuss their advantages and pitfalls in application, and describe their potential use in nursing research. In this article, (a) an introduction to tree-structured methods is presented, (b) the technique is illustrated via quality of life (QOL) data collected in the Breast Cancer Education Intervention study, and (c) implications for their potential use in nursing research are discussed. As illustrated by the QOL analysis example, tree methods generate interesting and easily understood findings that cannot be uncovered via traditional linear regression analysis. The expanding breadth and complexity of nursing research may entail the use of new tools to improve efficiency and gain new insights. In certain situations, tree-based methods offer an attractive approach that help address such needs.
NASA Astrophysics Data System (ADS)
Arain, Salma Aslam; Kazi, Tasneem G.; Afridi, Hassan Imran; Abbasi, Abdul Rasool; Panhwar, Abdul Haleem; Naeemullah; Shanker, Bhawani; Arain, Mohammad Balal
2014-12-01
An efficient, innovative preconcentration method, dual-cloud point extraction (d-CPE), has been developed for the extraction and preconcentration of copper (Cu2+) in serum samples of different viral hepatitis patients prior to coupling with flame atomic absorption spectrometry (FAAS). The d-CPE procedure was based on forming complexes of the elemental ions with the complexing reagent 1-(2-pyridylazo)-2-naphthol (PAN) and subsequently entrapping the complexes in a nonionic surfactant (Triton X-114). The surfactant-rich phase containing the metal complexes was then treated with aqueous nitric acid solution, and the metal ions were back-extracted into the aqueous phase as a second cloud point extraction stage and finally determined by flame atomic absorption spectrometry using conventional nebulization. A multivariate strategy was applied to estimate the optimum values of the experimental variables for the recovery of Cu2+ using d-CPE. Under optimum experimental conditions, the limit of detection and the enrichment factor were 0.046 μg L-1 and 78, respectively. The validity and accuracy of the proposed method were checked by analysis of Cu2+ in a certified reference serum sample (CRM) by both the d-CPE and the conventional CPE procedure on the same CRM. The proposed method was successfully applied to the determination of Cu2+ in serum samples of different viral hepatitis patients and healthy controls.
Decision paths in complex tasks
NASA Technical Reports Server (NTRS)
Galanter, Eugene
1991-01-01
Complex real-world action and its prediction and control have escaped analysis by the classical methods of psychological research. The reason is that psychologists have no procedures to parse complex tasks into their constituents. Where such a division can be made, based say on expert judgment, there is no natural scale to measure the positive or negative values of the components. Even if we could assign numbers to task parts, we lack rules, i.e., a theory, to combine them into a total task representation. We compare here two plausible theories for the amalgamation of the value of task components. Both of these theories require a numerical representation of motivation, for motivation is the primary variable that guides choice and action in well-learned tasks. We address this problem of motivational quantification and performance prediction by developing psychophysical scales of the desirability or aversiveness of task components based on utility scaling methods (Galanter 1990). We modify methods used originally to scale sensory magnitudes (Stevens and Galanter 1957), and that have been applied recently to the measure of task 'workload' by Gopher and Braune (1984). Our modification uses utility comparison scaling techniques which avoid the unnecessary assumptions made by Gopher and Braune. Formulas for the utility of complex tasks based on the theoretical models are used to predict decision and choice of alternate paths to the same goal.
Residual interference and wind tunnel wall adaption
NASA Technical Reports Server (NTRS)
Mokry, Miroslav
1989-01-01
Measured flow variables near the test section boundaries, used to guide adjustments of the walls in adaptive wind tunnels, can also be used to quantify the residual interference. Because of a finite number of wall control devices (jacks, plenum compartments), the finite test section length, and the approximate character of adaptation algorithms, the unconfined flow conditions are not expected to be precisely attained even in the fully adapted stage. The procedures for the evaluation of residual wall interference are essentially the same as those used for assessing the correction in conventional, non-adaptive wind tunnels. Depending upon the number of flow variables utilized, one can speak of one- or two-variable methods; in two dimensions also of Schwarz- or Cauchy-type methods. The one-variable methods use the measured static pressure and normal velocity at the test section boundary, but do not require any model representation. This is clearly an advantage for adaptive wall test sections, which are often relatively small with respect to the test model, and for the variety of complex flows commonly encountered in wind tunnel testing. For test sections with flexible walls the normal component of velocity is given by the shape of the wall, adjusted for the displacement effect of its boundary layer. For ventilated test section walls it has to be measured by the Calspan pipes, laser Doppler velocimetry, or other appropriate techniques. The interface discontinuity method, also described, is a genuine residual interference assessment technique. It is specific to adaptive wall wind tunnels, where the computation results for the fictitious flow in the exterior of the test section are provided.
Survey of large protein complexes D. vulgaris reveals great structural diversity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, B.-G.; Dong, M.; Liu, H.
2009-08-15
An unbiased survey has been made of the stable, most abundant multi-protein complexes in Desulfovibrio vulgaris Hildenborough (DvH) that are larger than Mr ≈ 400 k. The quaternary structures for 8 of the 16 complexes purified during this work were determined by single-particle reconstruction of negatively stained specimens, a success rate ≈10 times greater than that of previous 'proteomic' screens. In addition, the subunit compositions and stoichiometries of the remaining complexes were determined by biochemical methods. Our data show that the structures of only two of these large complexes, out of the 13 in this set that have recognizable functions, can be modeled with confidence based on the structures of known homologs. These results indicate that there is significantly greater variability in the way that homologous prokaryotic macromolecular complexes are assembled than has generally been appreciated. As a consequence, we suggest that relying solely on previously determined quaternary structures for homologous proteins may not be sufficient to properly understand their role in another cell of interest.
McNab, Duncan; Bowie, Paul; Morrison, Jill; Ross, Alastair
2016-11-01
Participation in projects to improve patient safety is a key component of general practice (GP) specialty training, appraisal and revalidation. Patient safety training priorities for GPs at all career stages are described in the Royal College of General Practitioners' curriculum. Current methods that are taught and employed to improve safety often use a 'find-and-fix' approach to identify components of a system (including humans) where performance could be improved. However, the complex interactions and inter-dependence between components in healthcare systems mean that cause and effect are not always linked in a predictable manner. The Safety-II approach has been proposed as a new way to understand how safety is achieved in complex systems that may improve quality and safety initiatives and enhance GP and trainee curriculum coverage. Safety-II aims to maximise the number of events with a successful outcome by exploring everyday work. Work-as-done often differs from work-as-imagined in protocols and guidelines and various ways to achieve success, dependent on work conditions, may be possible. Traditional approaches to improve the quality and safety of care often aim to constrain variability but understanding and managing variability may be a more beneficial approach. The application of a Safety-II approach to incident investigation, quality improvement projects, prospective analysis of risk in systems and performance indicators may offer improved insight into system performance leading to more effective change. The way forward may be to combine the Safety-II approach with 'traditional' methods to enhance patient safety training, outcomes and curriculum coverage.
de Beer, Jessica L.; Kremer, Kristin; Ködmön, Csaba; Supply, Philip
2012-01-01
Although variable-number tandem-repeat (VNTR) typing has gained recognition as the new standard for the DNA fingerprinting of Mycobacterium tuberculosis complex (MTBC) isolates, external quality control programs have not yet been developed. Therefore, we organized the first multicenter proficiency study on 24-locus VNTR typing. Sets of 30 DNAs of MTBC strains, including 10 duplicate DNA samples, were distributed among 37 participating laboratories in 30 different countries worldwide. Twenty-four laboratories used an in-house-adapted method with fragment sizing by gel electrophoresis or an automated DNA analyzer, nine laboratories used a commercially available kit, and four laboratories used other methods. The intra- and interlaboratory reproducibilities of VNTR typing varied from 0% to 100%, with averages of 72% and 60%, respectively. Twenty of the 37 laboratories failed to amplify particular VNTR loci; if these missing results were ignored, the number of laboratories with 100% interlaboratory reproducibility increased from 1 to 5. The average interlaboratory reproducibility of VNTR typing using a commercial kit was better (88%) than that of in-house-adapted methods using a DNA analyzer (70%) or gel electrophoresis (50%). Eleven laboratories using in-house-adapted manual typing or automated typing scored inter- and intralaboratory reproducibilities of 80% or higher, which suggests that these approaches can be used in a reliable way. In conclusion, this first multicenter study has documented the worldwide quality of VNTR typing of MTBC strains and highlights the importance of international quality control to improve genotyping in the future. PMID:22170917
Dexosomes as a therapeutic cancer vaccine: from bench to bedside.
Le Pecq, Jean-Bernard
2005-01-01
Exosomes released from dendritic cells, now referred to as dexosomes, have recently been extensively characterized. Preclinical studies in mice have shown that, when properly loaded with tumor antigens, dexosomes can elicit a strong antitumor activity. Before dexosomes could be used in humans as a therapeutic vaccine, extensive development work had to be performed to meet the present regulatory requirements. First, a manufacturing process amenable to cGMP for isolating and purifying dexosomes was established. Methods for loading the major histocompatibility complex (MHC) class I and class II molecules in a quantitative and reproducible way were developed. The most challenging task was the establishment of a quality control method for assessing the biological activity of individual lots. Such a method must remain relatively simple and reflect the mechanism of action of dexosomes. This was accomplished by measuring the transfer of an MHC class II-superantigen complex to an antigen-presenting cell that was MHC class II negative. More than 100 separate dexosome lots were prepared from blood cells of healthy volunteers to evaluate the variability of the manufacturing process. The analysis of the data showed that the main source of variability was related to the heterogeneity of the human population and not to the manufacturing process. These studies enabled two phase I clinical trials to be performed. A total of 24 cancer patients received Dex therapy. Dexosome production from cells of cancer patients was found to be equivalent to that from normal volunteers. No adverse events related to this therapy were reported. Evidence of dexosome bioactivity was observed.
Variable-Complexity Multidisciplinary Optimization on Parallel Computers
NASA Technical Reports Server (NTRS)
Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.
1998-01-01
This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant were: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT; (2) use of parallel multipoint approximation methods for structural optimization of the HSCT; and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.
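The central idea of the abstract, a response surface that lets a few expensive analyses be leveraged by many cheap ones, can be illustrated with a small variable-fidelity sketch. The functions and the quadratic correction below are assumptions for demonstration only, not the grant's HSCT aerodynamic or structural models.

```python
# Illustrative sketch (not the grant's HSCT code): a variable-fidelity
# response surface that corrects a cheap model with a quadratic surface
# fitted to a handful of expensive analyses.
import numpy as np

def expensive(x):            # stand-in for an Euler / refined finite-element analysis
    return np.sin(3 * x) + 0.5 * x**2

def cheap(x):                # stand-in for linear theory / a coarse model
    return 0.4 * x**2

# Sample a few expensive points and fit a quadratic to the discrepancy.
x_train = np.linspace(-1.0, 1.0, 7)
delta = expensive(x_train) - cheap(x_train)
coeffs = np.polyfit(x_train, delta, deg=2)   # quadratic response surface

def surrogate(x):
    """Cheap model plus fitted quadratic correction."""
    return cheap(x) + np.polyval(coeffs, x)

x_test = np.linspace(-1.0, 1.0, 5)
print("max surrogate error:", np.max(np.abs(surrogate(x_test) - expensive(x_test))))
```

An optimizer can then query the surrogate thousands of times at negligible cost, reserving the expensive analyses for building or updating the response surface.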
NASA Astrophysics Data System (ADS)
Herkül, Kristjan; Peterson, Anneliis; Paekivi, Sander
2017-06-01
Both basic science and marine spatial planning are in need of high resolution spatially continuous data on seabed habitats and biota. As conventional point-wise sampling is unable to cover large spatial extents in high detail, it must be supplemented with remote sensing and modeling in order to fulfill the scientific and management needs. The combined use of in situ sampling, sonar scanning, and mathematical modeling is becoming the main method for mapping both abiotic and biotic seabed features. Further development and testing of the methods in varying locations and environmental settings is essential for moving towards unified and generally accepted methodology. To fill the relevant research gap in the Baltic Sea, we used multibeam sonar and mathematical modeling methods - generalized additive models (GAM) and random forest (RF) - together with underwater video to map seabed substrate and epibenthos of offshore shallows. In addition to testing the general applicability of the proposed complex of techniques, the predictive power of different sonar-based variables and modeling algorithms was tested. Mean depth, followed by mean backscatter, were the most influential variables in most of the models. Generally, mean values of sonar-based variables had higher predictive power than their standard deviations. The predictive accuracy of RF was higher than that of GAM. To conclude, we found the method to be feasible and with predictive accuracy similar to previous studies of sonar-based mapping.
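A minimal sketch of the random-forest half of such a workflow, using synthetic stand-ins for the sonar-derived predictors named in the abstract (means and standard deviations of depth and backscatter). The response variable, data values and scikit-learn settings are assumptions, not the study's.

```python
# Sketch only: a random-forest model of epibenthos cover from sonar-derived
# predictors, with synthetic data mimicking the variables named in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
depth_mean = rng.uniform(2, 20, n)            # m
backscatter_mean = rng.uniform(-40, -10, n)   # dB
depth_sd = rng.uniform(0, 2, n)
backscatter_sd = rng.uniform(0, 5, n)

# Hypothetical response: cover (%) driven mostly by the two mean predictors.
cover = 80 - 3 * depth_mean + 0.8 * backscatter_mean + rng.normal(0, 5, n)

X = np.column_stack([depth_mean, backscatter_mean, depth_sd, backscatter_sd])
X_tr, X_te, y_tr, y_te = train_test_split(X, cover, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test R^2:", rf.score(X_te, y_te))
print("importances:", dict(zip(
    ["depth_mean", "backscatter_mean", "depth_sd", "backscatter_sd"],
    rf.feature_importances_.round(2))))
```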
Enabling Advanced Wind-Tunnel Research Methods Using the NASA Langley 12-Foot Low Speed Tunnel
NASA Technical Reports Server (NTRS)
Busan, Ronald C.; Rothhaar, Paul M.; Croom, Mark A.; Murphy, Patrick C.; Grafton, Sue B.; O-Neal, Anthony W.
2014-01-01
Design of Experiment (DOE) testing methods were used to gather wind tunnel data characterizing the aerodynamic and propulsion forces and moments acting on a complex vehicle configuration with 10 motor-driven propellers, 9 control surfaces, a tilt wing, and a tilt tail. This paper describes the potential benefits and practical implications of using DOE methods for wind tunnel testing - with an emphasis on describing how it can affect model hardware, facility hardware, and software for control and data acquisition. With up to 23 independent variables (19 model and 2 tunnel) for some vehicle configurations, this recent test also provides an excellent example of using DOE methods to assess critical coupling effects in a reasonable timeframe for complex vehicle configurations. Results for an exploratory test using conventional angle of attack sweeps to assess aerodynamic hysteresis are summarized, and DOE results are presented for an exploratory test used to set the data sampling time for the overall test. DOE results are also shown for one production test characterizing normal force in the Cruise mode for the vehicle.
Automatically Detect and Track Multiple Fish Swimming in Shallow Water with Frequent Occlusion
Qian, Zhi-Ming; Cheng, Xi En; Chen, Yan Qiu
2014-01-01
Due to its universality, swarm behavior in nature attracts much attention from scientists in many fields. Fish schools are examples of biological communities that demonstrate swarm behavior. The detection and tracking of fish in a school are of great significance for quantitative research on swarm behavior. However, unlike other biological communities, fish schools pose three problems for detection and tracking: variable appearances, complex motion and frequent occlusion. To solve these problems, we propose an effective method of fish detection and tracking. In this method, first, the fish head region is positioned through extremum detection and ellipse fitting; second, Kalman filtering and feature matching are used to track the target in complex motion; finally, according to the feature information obtained by the detection and tracking, the tracking problems caused by frequent occlusion are resolved through trajectory linking. We apply this method to track swimming fish schools of different densities. The experimental results show that the proposed method is both accurate and reliable. PMID:25207811
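The tracking step rests on a standard constant-velocity Kalman filter; the sketch below shows that filter for a single detected head position. The noise covariances and measurements are assumed values, and the paper's feature matching and trajectory linking are not reproduced.

```python
# Minimal sketch (not the authors' pipeline): a constant-velocity Kalman
# filter for one detected fish-head position per frame.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],    # only the position is observed
              [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)           # process noise (assumed)
R = 1.0 * np.eye(2)            # measurement noise (assumed)

x = np.zeros(4)                # initial state
P = np.eye(4)

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with detection z = [u, v] from extremum detection / ellipse fitting
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([1.0, 2.0]), np.array([2.1, 2.9]), np.array([3.2, 4.1])]:
    x, P = kalman_step(x, P, z)
    print("estimated position:", x[:2])
```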
Optimisation by hierarchical search
NASA Astrophysics Data System (ADS)
Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias
2015-03-01
Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
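The core idea, optimising small groups of variables while the rest are frozen and sweeping over the groups, can be sketched on a toy Ising-like cost function. This block-wise local search is only an illustration of group-wise optimisation; the authors' hierarchical scheme is more elaborate.

```python
# Toy sketch of group-wise optimisation on an Ising-like cost: each group of
# three spins is optimised exactly while the others are held fixed, and the
# sweep over groups is repeated.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 12
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def cost(s):
    return -0.5 * s @ J @ s

s = rng.choice([-1, 1], size=n).astype(float)
groups = [list(range(i, i + 3)) for i in range(0, n, 3)]  # groups of 3 spins

for sweep in range(10):
    for g in groups:
        best_cfg, best_c = None, np.inf
        for cfg in itertools.product([-1, 1], repeat=len(g)):
            s[g] = cfg
            c = cost(s)
            if c < best_c:
                best_c, best_cfg = c, cfg
        s[g] = best_cfg          # keep the best configuration for this group

print("final cost:", cost(s))
```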
NASA Astrophysics Data System (ADS)
Britt, S.; Tsynkov, S.; Turkel, E.
2018-02-01
We solve the wave equation with variable wave speed on nonconforming domains with fourth order accuracy in both space and time. This is accomplished using an implicit finite difference (FD) scheme for the wave equation and solving an elliptic (modified Helmholtz) equation at each time step with fourth order spatial accuracy by the method of difference potentials (MDP). High-order MDP utilizes compact FD schemes on regular structured grids to efficiently solve problems on nonconforming domains while maintaining the design convergence rate of the underlying FD scheme. Asymptotically, the computational complexity of high-order MDP scales the same as that for FD.
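For orientation only, the sketch below solves the underlying problem, the wave equation with a variable wave speed, using a plain second-order explicit finite-difference scheme on a simple 1D interval. The paper's method (fourth-order, implicit, with the method of difference potentials on nonconforming domains) is not reproduced here.

```python
# Simple illustration of the governing problem only: second-order explicit
# finite differences for u_tt = c(x)^2 u_xx on [0, 1] with u = 0 at both ends.
import numpy as np

nx, nt = 201, 400
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
c = 1.0 + 0.5 * np.sin(2 * np.pi * x)        # variable wave speed (assumed)
dt = 0.5 * dx / c.max()                       # CFL-limited time step

u_prev = np.exp(-200 * (x - 0.5) ** 2)        # initial pulse
u = u_prev.copy()                             # zero initial velocity
lam2 = (c * dt / dx) ** 2

for _ in range(nt):
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2[1:-1] * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next                     # Dirichlet ends stay zero

print("max |u| after", nt, "steps:", np.abs(u).max())
```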
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
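The variance-based machinery that the hierarchical method builds on can be sketched with a standard Monte Carlo (pick-freeze) estimate of first-order Sobol indices for a toy model. The grouped, multilayer indices introduced in the paper are not shown, and the model below is an assumption.

```python
# Sketch of the underlying variance-based idea: Monte Carlo (pick-freeze)
# estimates of first-order Sobol indices for a toy three-input model.
import numpy as np

rng = np.random.default_rng(3)

def model(X):
    # Toy stand-in for the flow/transport model: y = x0 + 2*x1^2 + 0.1*x2
    return X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

n, d = 20000, 3
A = rng.uniform(-1, 1, (n, d))
B = rng.uniform(-1, 1, (n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # "freeze" input i from the second sample
    yABi = model(ABi)
    Si = np.mean(yB * (yABi - yA)) / var_y     # Saltelli-style estimator
    print(f"first-order index S_{i}: {Si:.2f}")
```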
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
Preventing Data Ambiguity in Infectious Diseases with Four-Dimensional and Personalized Evaluations
Iandiorio, Michelle J.; Fair, Jeanne M.; Chatzipanagiotou, Stylianos; Ioannidis, Anastasios; Trikka-Graphakos, Eleftheria; Charalampaki, Nikoletta; Sereti, Christina; Tegos, George P.; Hoogesteijn, Almira L.; Rivas, Ariel L.
2016-01-01
Background Diagnostic errors can occur, in infectious diseases, when anti-microbial immune responses involve several temporal scales. When responses span from nanosecond to week and larger temporal scales, any pre-selected temporal scale is likely to miss some (faster or slower) responses. Hoping to prevent diagnostic errors, a pilot study was conducted to evaluate a four-dimensional (4D) method that captures the complexity and dynamics of infectious diseases. Methods Leukocyte-microbial-temporal data were explored in canine and human (bacterial and/or viral) infections, with: (i) a non-structured approach, which measures leukocytes or microbes in isolation; and (ii) a structured method that assesses numerous combinations of interacting variables. Four alternatives of the structured method were tested: (i) a noise-reduction oriented version, which generates a single (one data point-wide) line of observations; (ii) a version that measures complex, three-dimensional (3D) data interactions; (iii) a non-numerical version that displays temporal data directionality (arrows that connect pairs of consecutive observations); and (iv) a full 4D (single line-, complexity-, directionality-based) version. Results In all studies, the non-structured approach revealed non-interpretable (ambiguous) data: observations numerically similar expressed different biological conditions, such as recovery and lack of recovery from infections. Ambiguity was also found when the data were structured as single lines. In contrast, two or more data subsets were distinguished and ambiguity was avoided when the data were structured as complex, 3D, single lines and, in addition, temporal data directionality was determined. The 4D method detected, even within one day, changes in immune profiles that occurred after antibiotics were prescribed. Conclusions Infectious disease data may be ambiguous. Four-dimensional methods may prevent ambiguity, providing earlier, in vivo, dynamic, complex, and personalized information that facilitates both diagnostics and selection or evaluation of anti-microbial therapies. PMID:27411058
A modular approach to large-scale design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Hwang, John T.
Gradient-based optimization and the adjoint method form a synergistic combination that enables the efficient solution of large-scale optimization problems. Though the gradient-based approach struggles with non-smooth or multi-modal problems, the capability to efficiently optimize up to tens of thousands of design variables provides a valuable design tool for exploring complex tradeoffs and finding unintuitive designs. However, the widespread adoption of gradient-based optimization is limited by the implementation challenges for computing derivatives efficiently and accurately, particularly in multidisciplinary and shape design problems. This thesis addresses these difficulties in two ways. First, to deal with the heterogeneity and integration challenges of multidisciplinary problems, this thesis presents a computational modeling framework that solves multidisciplinary systems and computes their derivatives in a semi-automated fashion. This framework is built upon a new mathematical formulation developed in this thesis that expresses any computational model as a system of algebraic equations and unifies all methods for computing derivatives using a single equation. The framework is applied to two engineering problems: the optimization of a nanosatellite with 7 disciplines and over 25,000 design variables; and simultaneous allocation and mission optimization for commercial aircraft involving 330 design variables, 12 of which are integer variables handled using the branch-and-bound method. In both cases, the framework makes large-scale optimization possible by reducing the implementation effort and code complexity. The second half of this thesis presents a differentiable parametrization of aircraft geometries and structures for high-fidelity shape optimization. Existing geometry parametrizations are not differentiable, or they are limited in the types of shape changes they allow. This is addressed by a novel parametrization that smoothly interpolates aircraft components, providing differentiability. An unstructured quadrilateral mesh generation algorithm is also developed to automate the creation of detailed meshes for aircraft structures, and a mesh convergence study is performed to verify that the quality of the mesh is maintained as it is refined. As a demonstration, high-fidelity aerostructural analysis is performed for two unconventional configurations with detailed structures included, and aerodynamic shape optimization is applied to the truss-braced wing, which finds and eliminates a shock in the region bounded by the struts and the wing.
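The efficiency claim for tens of thousands of design variables rests on the adjoint method. In generic notation (not the thesis's unified derivatives equation), for a model with residuals R(x, u) = 0 and objective f(x, u) the total derivative follows from one adjoint solve:

```latex
% Generic adjoint relations; the notation here is assumed, not taken from the thesis.
\frac{\mathrm{d}f}{\mathrm{d}x}
  = \frac{\partial f}{\partial x} - \psi^{T}\frac{\partial R}{\partial x},
\qquad
\left(\frac{\partial R}{\partial u}\right)^{\!T}\psi
  = \left(\frac{\partial f}{\partial u}\right)^{\!T}.
```

Because the adjoint vector ψ comes from a single linear solve whose size is set by the state u, the cost of the gradient does not grow with the number of design variables x, which is what makes problems with over 25,000 design variables tractable.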
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
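The surrogate-plus-LHS alternative described above can be sketched in a few lines: draw a Latin hypercube sample of a few inputs, fit a linear ("LIN") response surface to a toy stand-in for the front-velocity model, and use the surrogate to estimate the output mean and standard deviation. The toy model and input ranges are assumptions; the 25-parameter foam model is not reproduced.

```python
# Sketch of the surrogate-plus-LHS idea only, on a toy "front velocity" model.
import numpy as np

rng = np.random.default_rng(4)

def latin_hypercube(n, d):
    # One stratified sample per row, independently permuted per column.
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def front_velocity(p):       # toy stand-in for the finite-element foam model
    return 1.0 + 0.5 * p[:, 0] - 0.3 * p[:, 1] + 0.2 * p[:, 1] * p[:, 2]

n, d = 200, 3
X = latin_hypercube(n, d)
y = front_velocity(X)

# Linear surrogate (the "LIN" response surface) fitted by least squares.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

X_big = latin_hypercube(20000, d)
y_surr = np.column_stack([np.ones(len(X_big)), X_big]) @ beta
print("surrogate mean:", y_surr.mean(), "std:", y_surr.std())
```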
Houston, Lauren; Probst, Yasmine; Martin, Allison
2018-05-18
Data audits within clinical settings are extensively used as a major strategy to identify errors, monitor study operations and ensure high-quality data. However, clinical trial guidelines are non-specific with regard to the recommended frequency, timing and nature of data audits. The absence of a well-defined data quality definition and method to measure error undermines the reliability of data quality assessment. This review aimed to assess the variability of source data verification (SDV) auditing methods used to monitor data quality in a clinical research setting. The scientific databases MEDLINE, Scopus and Science Direct were searched for English language publications, with no date limits applied. Studies were considered if they included data from a clinical trial or clinical research setting and measured and/or reported data quality using a SDV auditing method. In total, 15 publications were included. The nature and extent of SDV audit methods in the articles varied widely, depending upon the complexity of the source document, type of study, variables measured (primary or secondary), data audit proportion (3-100%) and collection frequency (6-24 months). Methods for coding, classifying and calculating error were also inconsistent. Transcription errors and inexperienced personnel were the main sources of reported error. Repeated SDV audits using the same dataset demonstrated ∼40% improvement in data accuracy and completeness over time. No description was given of what determines poor data quality in clinical trials. A wide range of SDV auditing methods are reported in the published literature, though no uniform SDV auditing method could be determined as "best practice" for clinical trials. Published audit methodology articles are warranted for the development of a standardised SDV auditing method to monitor data quality in clinical research settings.
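Because the review found no standard way to code or calculate error, any concrete metric is necessarily an assumption. The sketch below shows one simple convention, errors found per field verified, reported per variable and overall, with invented field names and counts.

```python
# Illustrative sketch only: expressing SDV audit results as error rates.
# The variable names and counts are hypothetical.
audited = [
    {"variable": "weight_kg",   "fields_verified": 250, "errors": 7},
    {"variable": "visit_date",  "fields_verified": 250, "errors": 2},
    {"variable": "lab_glucose", "fields_verified": 180, "errors": 9},
]

total_fields = sum(r["fields_verified"] for r in audited)
total_errors = sum(r["errors"] for r in audited)

for r in audited:
    rate = 100.0 * r["errors"] / r["fields_verified"]
    print(f'{r["variable"]:<12} error rate: {rate:.1f}%')

print(f"overall error rate: {100.0 * total_errors / total_fields:.1f}%")
```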
A Geometric View of Complex Trigonometric Functions
ERIC Educational Resources Information Center
Hammack, Richard
2007-01-01
Given that the sine and cosine functions of a real variable can be interpreted as the coordinates of points on the unit circle, the author of this article asks whether there is something similar for complex variables, and shows that indeed there is.
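The geometric picture rests on the standard decomposition of the complex sine and cosine into real and imaginary parts; for z = x + iy these are (stated here for reference, not as the article's specific construction):

```latex
\sin z = \sin x \cosh y + i\,\cos x \sinh y,
\qquad
\cos z = \cos x \cosh y - i\,\sin x \sinh y .
```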