Sample records for complex scaling method

  1. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) are commonly used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after univariate testing. Model-based methods, in turn, are impractical for large-scale datasets, and genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of disease. The more recent random forest approach provides a robust way of screening SNPs at the scale of thousands. For still larger data sets, such as Affymetrix Human Mapping 100K GeneChip data, a faster method is required to screen SNPs in whole-genome association analysis in the presence of genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits with genetic heterogeneity. It provides a relatively fast and reasonably effective tool for screening and limiting the candidate SNPs passed on to more complex computational modeling tasks.

  2. Synchronization in node of complex networks consist of complex chaotic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks that consist of complex chaotic systems. When the networks achieve synchronization, different components of the complex state variable synchronize up to different complex scaling functions through a designed complex feedback controller. The paper thus extends the synchronization scaling function from the real field to the complex field for synchronization at the nodes of complex networks with complex chaotic dynamics. Synchronization is investigated for networks with constant coupling delay and with time-varying coupling delay, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.

  3. Scale Development and Initial Tests of the Multidimensional Complex Adaptive Leadership Scale for School Principals: An Exploratory Mixed Method Study

    ERIC Educational Resources Information Center

    Özen, Hamit; Turan, Selahattin

    2017-01-01

    This study was designed to develop the scale of the Complex Adaptive Leadership for School Principals (CAL-SP) and examine its psychometric properties. This was an exploratory mixed method research design (ES-MMD). Both qualitative and quantitative methods were used to develop and assess psychometric properties of the questionnaire. This study…

  4. Continuum Level Density in Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Myo, T.; Katō, K.

    2005-11-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique.

  5. The complex-scaled multiconfigurational spin-tensor electron propagator method for low-lying shape resonances in Be-, Mg- and Ca-

    NASA Astrophysics Data System (ADS)

    Tsogbayar, Tsednee; Yeager, Danny L.

    2017-01-01

    We further apply the complex-scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) to the theoretical determination of resonance parameters for electron-atom systems, including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP), developed and implemented by Yeager and coworkers for real (unscaled) coordinates, gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex-scaled multiconfigurational self-consistent field (CMCSCF) state as the initial state along with a dilated Hamiltonian in which all of the electronic coordinates are scaled by a complex factor, and it is designed for determining resonances. We apply CMCSTEP to obtain the lowest 2P (Be-, Mg-) and 2D (Mg-, Ca-) shape resonances using several different basis sets, each with several complete active spaces. Many of the basis sets we employ have been used by others with different methods, so results obtained with different methods can be compared directly in the same basis sets.

  6. Calculation of Expectation Values of Operators in the Complex Scaling Method

    DOE PAGES

    Papadimitriou, G.

    2016-06-14

    The complex scaling method (CSM) provides a way to obtain resonance parameters of particle-unstable states by rotating the coordinates and momenta of the original Hamiltonian. It is convenient to use an L^2 integrable basis to resolve the complex-rotated, or complex-scaled, Hamiltonian H_θ, with θ being the angle of rotation in the complex energy plane. Within the CSM, resonance and scattering solutions have fall-off asymptotics. One consequence is that expectation values of operators in a resonance or scattering complex-scaled solution are calculated by complex rotating the operators. In this work we explore applications of the CSM to the calculation of expectation values of quantum mechanical operators, using the regularized back-rotation technique so that the expectation value is evaluated with the unrotated operator. The test cases involve a schematic two-body Gaussian model as well as applications using realistic interactions.

  7. A study of complex scaling transformation using the Wigner representation of wavefunctions.

    PubMed

    Kaprálová-Ždánská, Petra Ruth

    2011-05-28

    The complex scaling operator exp(-θx̂p̂/ℏ), being a foundation of the complex scaling method for resonances, is studied in the Wigner phase-space representation. It is shown that the complex scaling operator behaves similarly to the squeezing operator, rotating and amplifying Wigner quasi-probability distributions of the respective wavefunctions. It is disclosed that the distorting effect of the complex scaling transformation is correlated with increased numerical errors of computed resonance energies and widths. The behavior of the numerical error is demonstrated for a computation of CO(2+) vibronic resonances.

  8. Hybrid method (JM-ECS) combining the J-matrix and exterior complex scaling methods for scattering calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanroose, W.; Broeckhove, J.; Arickx, F.

    The paper proposes a hybrid method for calculating scattering processes. It combines the J-matrix method with exterior complex scaling and an absorbing boundary condition. The wave function is represented as a finite sum of oscillator eigenstates in the inner region, and it is discretized on a grid in the outer region. The method is validated for a one- and a two-dimensional model with partial wave equations and a calculation of p-shell nuclear scattering with semirealistic interactions.

  9. Level Density in the Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Myo, T.; Katō, K.

    2005-06-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L^2 basis functions. In this method, the extended completeness relation is applied to the calculation of the Green's functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We also discuss how, conversely, the scattering phase shifts can be calculated from the discretized CLD using a basis function technique in the CSM.
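
    The relations underlying this approach can be written compactly. Below is a standard form consistent with the abstract; the symbols and normalizations are assumptions for illustration, not quotations from the paper: the level density follows from the trace of the resolvent, the CLD is the difference between the full and asymptotic densities evaluated with the discretized complex-scaled eigenvalues, and the phase shift is recovered by integrating the CLD.

```latex
% Level density from the resolvent (H_theta: complex-scaled Hamiltonian;
% E_i^theta and eps_j^theta: discretized eigenvalues of the full and
% asymptotic complex-scaled Hamiltonians in an L^2 basis):
\rho(E) = -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}\,\frac{1}{E - H_\theta}

% Continuum level density as a difference of full and asymptotic densities:
\Delta(E) \;\approx\; -\frac{1}{\pi}\,\mathrm{Im}\!\left[
   \sum_i \frac{1}{E - E_i^{\theta}} \;-\; \sum_j \frac{1}{E - \varepsilon_j^{\theta}}
\right]

% Single-channel phase shift recovered from the discretized CLD:
\delta(E) \;=\; \pi \int_{-\infty}^{E} \Delta(E')\,\mathrm{d}E'
```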

  10. Dynamical complexity changes during two forms of meditation

    NASA Astrophysics Data System (ADS)

    Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng

    2011-06-01

    Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes in heart rate variability (HRV) series recorded during two specific traditional forms of meditation, Chinese Chi and Kundalini Yoga, in healthy young adults. The results show that dynamical complexity decreases in the meditative state for both forms of meditation. We also detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of a sine function. The base-scale entropy method may be used on a wider range of physiologic signals.

  11. Accurate complex scaling of three dimensional numerical potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan

    2013-05-28

    The complex scaling method, which consists in continuing the spatial coordinates into the complex plane, is a well-established approach for computing resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on discrete numerical grids, the most direct approach relies on applying a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can thus be performed efficiently and accurately. By carrying out an illustrative resonant-state computation for a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
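
    To make the similarity-transformation idea concrete, here is a minimal one-dimensional sketch of the complex scaling method on a uniform finite-difference grid. This is not the wavelet-based implementation described in the record; the model potential, grid parameters and use of NumPy are illustrative assumptions. Rotating x -> x·e^{iθ} turns the kinetic term into e^{-2iθ}T and evaluates V at complex arguments; resonances show up as complex eigenvalues that stay (nearly) stationary as θ is varied.

```python
# Minimal 1-D complex-scaling sketch on a finite-difference grid (illustrative
# model potential and parameters; not the wavelet-based scheme from the record).
import numpy as np

def complex_scaled_eigs(theta, n=600, xmax=12.0):
    x = np.linspace(-xmax, xmax, n)
    dx = x[1] - x[0]

    # Illustrative barrier potential that supports a shape-type resonance.
    def V(z):
        return (0.5 * z**2 - 0.8) * np.exp(-0.1 * z**2)

    # Central finite-difference Laplacian.
    lap = (np.diag(-2.0 * np.ones(n)) +
           np.diag(np.ones(n - 1), 1) +
           np.diag(np.ones(n - 1), -1)) / dx**2

    # Under x -> x*exp(i*theta): T -> exp(-2i*theta) T and V(x) -> V(x*exp(i*theta)).
    H = -0.5 * np.exp(-2j * theta) * lap + np.diag(V(x * np.exp(1j * theta)))
    return np.linalg.eigvals(H)

for theta in (0.2, 0.3, 0.4):
    eigs = complex_scaled_eigs(theta)
    # The rotated continuum swings down with theta; a resonance eigenvalue is the
    # one that barely moves as theta changes (inspect the printed values).
    low = eigs[(eigs.real > 0.0) & (eigs.real < 3.0) & (eigs.imag > -1.0)]
    print(theta, np.round(np.sort_complex(low)[:5], 4))
```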

  12. Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding

    PubMed Central

    Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro

    2015-01-01

    Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045

  13. Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks

    NASA Astrophysics Data System (ADS)

    Eom, Young-Ho; Jo, Hang-Hyun

    2015-05-01

    Many complex networks in natural and social phenomena are characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using such data, it is becoming more difficult to analyze complete data sets. It is therefore crucial to devise effective and efficient methods for estimating the heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating the heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
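
    A hedged sketch of the friendship-paradox bias that tail-scope exploits follows. This is not the authors' estimator; the Barabási-Albert test graph, the use of networkx, and the simple inverse-degree reweighting are assumptions made for illustration. A random neighbor of a random node is reached with probability proportional to its degree, so high-degree (tail) nodes are over-sampled, and dividing each observation's weight by its degree undoes that bias when estimating the tail.

```python
# Friendship-paradox sampling vs. uniform node sampling on a synthetic
# heavy-tailed graph (illustrative setup; not the paper's tail-scope estimator).
import random
import networkx as nx

random.seed(1)
G = nx.barabasi_albert_graph(n=50_000, m=3, seed=1)
nodes = list(G.nodes)
sample_size = 2000

# Uniform node sampling: degrees of randomly chosen nodes.
uniform = [G.degree(v) for v in random.sample(nodes, sample_size)]

# Neighbor sampling: a random neighbor of a random node (degree-biased draw).
neighbor = []
for _ in range(sample_size):
    v = random.choice(nodes)
    neighbor.append(G.degree(random.choice(list(G.neighbors(v)))))

def tail_prob(samples, k, weights=None):
    """Weighted empirical estimate of P(degree >= k)."""
    if weights is None:
        weights = [1.0] * len(samples)
    total = sum(weights)
    return sum(w for s, w in zip(samples, weights) if s >= k) / total

for k in (10, 30, 100):
    print(k,
          round(tail_prob(uniform, k), 4),
          # Inverse-degree weights undo the proportional-to-degree sampling bias.
          round(tail_prob(neighbor, k, weights=[1.0 / d for d in neighbor]), 4))
```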

  14. Scale-dependent intrinsic entropies of complex time series.

    PubMed

    Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E

    2016-04-13

    Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease.
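
    For context, here is a minimal sketch of the two ingredients of the standard MSE procedure that this intrinsic-entropy approach builds on: coarse-graining at scale τ and sample entropy of each coarse-grained series. It is a plain-Python illustration with assumed parameters (m = 2, r = 0.15·SD of the original series); it is not the authors' EMD-based algorithm.

```python
# Standard multi-scale entropy ingredients: coarse-graining plus sample entropy
# (illustrative parameters; not the EMD-based variant described in the record).
import math
import random

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau."""
    return [sum(x[i * tau:(i + 1) * tau]) / tau for i in range(len(x) // tau)]

def sample_entropy(x, m, r):
    """SampEn(m, r): -log of the ratio of (m+1)- to m-length template matches."""
    def matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")

random.seed(0)
signal = [random.gauss(0.0, 1.0) for _ in range(1000)]      # white-noise example
sd = (sum(v * v for v in signal) / len(signal)) ** 0.5
for tau in (1, 2, 5, 10):
    print(tau, round(sample_entropy(coarse_grain(signal, tau), m=2, r=0.15 * sd), 3))
```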

  15. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  16. LETTER TO THE EDITOR: Iteratively-coupled propagating exterior complex scaling method for electron hydrogen collisions

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip L.; Stelbovics, Andris T.; Bray, Igor

    2004-02-01

    A newly-derived iterative coupling procedure for the propagating exterior complex scaling (PECS) method is used to efficiently calculate the electron-impact wavefunctions for atomic hydrogen. An overview of this method is given along with methods for extracting scattering cross sections. Differential scattering cross sections at 30 eV are presented for the electron-impact excitation to the n = 1, 2, 3 and 4 final states, for both PECS and convergent close coupling (CCC), which are in excellent agreement with each other and with experiment. PECS results are presented at 27.2 eV and 30 eV for symmetric and asymmetric energy-sharing triple differential cross sections, which are in excellent agreement with CCC and exterior complex scaling calculations, and with experimental data. At these intermediate energies, the efficiency of the PECS method with iterative coupling has allowed highly accurate partial-wave solutions of the full Schrödinger equation, for L ≤ 50 and a large number of coupled angular momentum states, to be obtained with minimal computing resources.

  17. Decision paths in complex tasks

    NASA Technical Reports Server (NTRS)

    Galanter, Eugene

    1991-01-01

    Complex real-world action and its prediction and control have escaped analysis by the classical methods of psychological research. The reason is that psychologists have no procedures for parsing complex tasks into their constituents. Where such a division can be made, based say on expert judgment, there is no natural scale to measure the positive or negative values of the components. Even if we could assign numbers to task parts, we lack rules, i.e., a theory, to combine them into a total task representation. We compare here two plausible theories for the amalgamation of the value of task components. Both of these theories require a numerical representation of motivation, for motivation is the primary variable that guides choice and action in well-learned tasks. We address this problem of motivational quantification and performance prediction by developing psychophysical scales of the desirability or aversiveness of task components based on utility scaling methods (Galanter 1990). We modify methods used originally to scale sensory magnitudes (Stevens and Galanter 1957) that have more recently been applied to the measurement of task 'workload' by Gopher and Braune (1984). Our modification uses utility comparison scaling techniques which avoid the unnecessary assumptions made by Gopher and Braune. Formulas for the utility of complex tasks, based on the theoretical models, are used to predict decision and choice of alternate paths to the same goal.

  18. A Spectral Method for Spatial Downscaling

    PubMed Central

    Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.

    2014-01-01

    Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037

  19. Multiscale entropy-based methods for heart rate variability complexity analysis

    NASA Astrophysics Data System (ADS)

    Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio

    2015-03-01

    Physiologic complexity is an important concept for characterizing time series from biological systems, and, combined with multiscale analysis, it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series from normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate for lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential for assessing complex physiological time series and deserves further investigation in a wider context.

  20. Determination of the equilibrium constant of C60 fullerene binding with drug molecules.

    PubMed

    Mosunov, Andrei A; Pashkova, Irina S; Sidorova, Maria; Pronozin, Artem; Lantushenko, Anastasia O; Prylutskyy, Yuriy I; Parkinson, John A; Evstigneev, Maxim P

    2017-03-01

    We report a new analytical method that allows the determination of the magnitude of the equilibrium constant of complexation, K_h, of small molecules to C60 fullerene in aqueous solution. The developed method is based on the up-scaled model of C60 fullerene-ligand complexation and contains the full set of equations needed to fit titration datasets arising from different experimental methods (UV-Vis spectroscopy, 1H NMR spectroscopy, diffusion ordered NMR spectroscopy, DLS). The up-scaled model takes into consideration the specificity of C60 fullerene aggregation in aqueous solution and allows the highly dispersed nature of the C60 fullerene cluster distribution to be accounted for. It also takes into consideration the complexity of the fullerene-ligand dynamic equilibrium in solution, formed by various types of self- and hetero-complexes. These features make the suggested method superior to standard Langmuir-type analysis, the approach used to date for obtaining quantitative information on ligand binding with different nanoparticles.

  1. Metastable Autoionizing States of Molecules and Radicals in Highly Energetic Environment

    DTIC Science & Technology

    2016-03-22

    ... electronic states. The specific aims are to develop and calibrate complex-scaled equation-of-motion coupled cluster (cs-EOM-CC) and CAP (complex absorbing potential) augmented EOM-CC methods. We have implemented and benchmarked cs-EOM-CCSD and CAP-augmented EOM-CCSD methods for excitation energies ...

  2. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.

  3. Rahman Prize Lecture: Lattice Boltzmann simulation of complex states of flowing matter

    NASA Astrophysics Data System (ADS)

    Succi, Sauro

    Over the last three decades, the Lattice Boltzmann (LB) method has gained a prominent role in the numerical simulation of complex flows across an impressively broad range of scales, from fully-developed turbulence in real-life geometries, to multiphase flows in micro-fluidic devices, all the way down to biopolymer translocation in nanopores and lately, even quark-gluon plasmas. After a brief introduction to the main ideas behind the LB method and its historical developments, we shall present a few selected applications to complex flow problems at various scales of motion. Finally, we shall discuss prospects for extreme-scale LB simulations of outstanding problems in the physics of fluids and its interfaces with material sciences and biology, such as the modelling of fluid turbulence, the optimal design of nanoporous gold catalysts and protein folding/aggregation in crowded environments.

  4. Using nocturnal cold air drainage flow to monitor ecosystem processes in complex terrain

    Treesearch

    Thomas G. Pypker; Michael H. Unsworth; Alan C. Mix; William Rugh; Troy Ocheltree; Karrin Alstad; Barbara J. Bond

    2007-01-01

    This paper presents initial investigations of a new approach to monitor ecosystem processes in complex terrain on large scales. Metabolic processes in mountainous ecosystems are poorly represented in current ecosystem monitoring campaigns because the methods used for monitoring metabolism at the ecosystem scale (e.g., eddy covariance) require flat study sites. Our goal...

  5. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of the individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which supports concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing the integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klaiman, Shachar; Gilary, Ido; Moiseyev, Nimrod

    Analytical expressions for the resonances of the long-range potential (LRP) V(r) = a/r - b/r^2, as a function of the Hamiltonian parameters, were derived by Doolen a long time ago [Int. J. Quant. Chem. 14, 523 (1979)]. Here we show that converged numerical results are obtained by applying the shifted complex scaling and the smooth-exterior scaling (SES) methods rather than the usual complex coordinate method (i.e., complex scaling). The narrow and broad shape-type resonances are shown to be localized inside or over the potential barrier and not inside the potential well. Therefore, the resonances of the Doolen LRPs are not associated with tunneling through the potential barrier, as one might expect. The fact that the SES provides a universal reflection-free absorbing potential is particularly important in view of future applications. In particular, it is most convenient to calculate molecular autoionizing resonances by adding one-electron complex absorbing potentials to the codes of the available quantum molecular electronic packages.

  7. Detecting vortices in superconductors: Extracting one-dimensional topological singularities from a discretized complex scalar field

    DOE PAGES

    Phillips, Carolyn L.; Peterka, Tom; Karpeyev, Dmitry; ...

    2015-02-20

    In type II superconductors, the dynamics of superconducting vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter. Extracting their precise positions and motion from discretized numerical simulation data is an important, but challenging, task. In the past, vortices have mostly been detected by analyzing the magnitude of the complex scalar field representing the order parameter and visualized by corresponding contour plots and isosurfaces. However, these methods, primarily used for small-scale simulations, blur the fine details of the vortices, scale poorly to large-scale simulations, and do not easily enable isolating and tracking individual vortices. In this paper, we present a method for exactly finding the vortex core lines from a complex order parameter field. With this method, vortices can be easily described at a resolution even finer than the mesh itself. The precise determination of the vortex cores allows the interplay of the vortices inside a model superconductor to be visualized at higher resolution than has previously been possible. Finally, by representing the field as the set of vortices, this method also massively reduces the data footprint of the simulations and provides the data structures for further analysis and feature tracking.

  8. Comparison of MODIS and SWAT evapotranspiration over a complex terrain at different spatial scales

    NASA Astrophysics Data System (ADS)

    Abiodun, Olanrewaju O.; Guan, Huade; Post, Vincent E. A.; Batelaan, Okke

    2018-05-01

    In most hydrological systems, evapotranspiration (ET) and precipitation are the largest components of the water balance, which are difficult to estimate, particularly over complex terrain. In recent decades, the advent of remotely sensed data-based ET algorithms and distributed hydrological models has provided improved spatially upscaled ET estimates. However, information on the performance of these methods at various spatial scales is limited. This study compares the ET from the MODIS remotely sensed ET dataset (MOD16) with the ET estimates from a SWAT hydrological model on graduated spatial scales for the complex terrain of the Sixth Creek Catchment of the Western Mount Lofty Ranges, South Australia. ET from both models was further compared with the coarser-resolution AWRA-L model at catchment scale. The SWAT model analyses are performed on daily timescales with a 6-year calibration period (2000-2005) and 7-year validation period (2007-2013). Differences in ET estimation between the SWAT and MOD16 methods of up to 31, 19, 15, 11 and 9 % were observed at spatial resolutions of 1, 4, 9, 16 and 25 km2, respectively. Based on the results of the study, a spatial scale of confidence of 4 km2 for catchment-scale evapotranspiration is suggested in complex terrain. Land cover differences, HRU parameterisation in AWRA-L and catchment-scale averaging of input climate data in the SWAT semi-distributed model were identified as the principal sources of weaker correlations at higher spatial resolution.

  9. Grid Convergence of High Order Methods for Multiscale Complex Unsteady Viscous Compressible Flows

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    Grid convergence of several high order methods for the computation of rapidly developing complex unsteady viscous compressible flows with a wide range of physical scales is studied. The recently developed adaptive numerical dissipation control high order methods referred to as the ACM and wavelet filter schemes are compared with a fifth-order weighted ENO (WENO) scheme. The two 2-D compressible full Navier-Stokes models considered do not possess known analytical and experimental data. Fine grid solutions from a standard second-order TVD scheme and a MUSCL scheme with limiters are used as reference solutions. The first model is a 2-D viscous analogue of a shock tube problem which involves complex shock/shear/boundary-layer interactions. The second model is a supersonic reactive flow concerning fuel breakup. The fuel mixing involves circular hydrogen bubbles in air interacting with a planar moving shock wave. Both models contain fine scale structures and are stiff in the sense that even though the unsteadiness of the flows are rapidly developing, extreme grid refinement and time step restrictions are needed to resolve all the flow scales as well as the chemical reaction scales.

  10. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model Uncertainty Quantification remains one of the central challenges of effective Data Assimilation (DA) in complex partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned, conditioned on the previous model state, during the minimization process, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  11. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are treated as particles with mass, connected to their neighbor nodes by virtual springs. The virtual springs force the particles, and correspondingly the node positions, to move from randomly set initial positions toward their true positions. A blind node's position can therefore be determined with the LASM algorithm by calculating the forces exerted by its neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with network size. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite increases in network size. The time consumption has also been shown to remain almost constant, since the number of calculation steps is almost unrelated to network size.
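
    A hedged sketch of the spring-relaxation idea described above follows. The 2-D plane, noiseless ranges to in-range neighbors, a handful of anchor nodes with known coordinates, and the step size are illustrative assumptions, not the published algorithm: each blind node is pulled by virtual springs whose rest lengths are the measured neighbor distances, and the per-iteration work for a node depends only on its neighbor count.

```python
# Spring-relaxation localization sketch (illustrative setup; not the LASM paper's code).
import math
import random

random.seed(2)
n, comm_range, anchors = 60, 0.35, 8
truth = [(random.random(), random.random()) for _ in range(n)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Neighbor lists and "measured" ranges (here: true distances within radio range).
links = {i: [] for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        d = dist(truth[i], truth[j])
        if d <= comm_range:
            links[i].append((j, d))
            links[j].append((i, d))

# Anchors keep their true positions; blind nodes start at random guesses.
pos = [truth[i] if i < anchors else (random.random(), random.random()) for i in range(n)]

step = 0.2
for _ in range(500):
    for i in range(anchors, n):
        fx = fy = 0.0
        for j, d_meas in links[i]:
            d_cur = dist(pos[i], pos[j]) or 1e-9
            # Spring force: positive (pulls i toward j) when the edge is stretched.
            k = (d_cur - d_meas) / d_cur
            fx += k * (pos[j][0] - pos[i][0])
            fy += k * (pos[j][1] - pos[i][1])
        pos[i] = (pos[i][0] + step * fx, pos[i][1] + step * fy)

err = sum(dist(pos[i], truth[i]) for i in range(anchors, n)) / (n - anchors)
print("mean localization error:", round(err, 3))
```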

  12. Leveraging Mechanism Simplicity and Strategic Averaging to Identify Signals from Highly Heterogeneous Spatial and Temporal Ozone Data

    NASA Astrophysics Data System (ADS)

    Brown-Steiner, B.; Selin, N. E.; Prinn, R. G.; Monier, E.; Garcia-Menendez, F.; Tilmes, S.; Emmons, L. K.; Lamarque, J. F.; Cameron-Smith, P. J.

    2017-12-01

    We summarize two methods to aid in the identification of ozone signals from underlying spatially and temporally heterogeneous data in order to help research communities avoid the sometimes burdensome computational costs of high-resolution high-complexity models. The first method utilizes simplified chemical mechanisms (a Reduced Hydrocarbon Mechanism and a Superfast Mechanism) alongside a more complex mechanism (MOZART-4) within CESM CAM-Chem to extend the number of simulated meteorological years (or add additional members to an ensemble) for a given modeling problem. The Reduced Hydrocarbon mechanism is twice as fast, and the Superfast mechanism is three times faster than the MOZART-4 mechanism. We show that simplified chemical mechanisms are largely capable of simulating surface ozone across the globe as well as the more complex chemical mechanisms, and where they are not capable, a simple standardized anomaly emulation approach can correct for their inadequacies. The second method uses strategic averaging over both temporal and spatial scales to filter out the highly heterogeneous noise that underlies ozone observations and simulations. This method allows for a selection of temporal and spatial averaging scales that match a particular signal strength (between 0.5 and 5 ppbv), and enables the identification of regions where an ozone signal can rise above the ozone noise over a given region and a given period of time. In conjunction, these two methods can be used to "scale down" chemical mechanism complexity and quantitatively determine spatial and temporal scales that could enable research communities to utilize simplified representations of atmospheric chemistry and thereby maximize their productivity and efficiency given computational constraints. While this framework is here applied to ozone data, it could also be applied to a broad range of geospatial data sets (observed or modeled) that have spatial and temporal coverage.

  13. Are Patient-Administered Attention Deficit Hyperactivity Disorder Scales Suitable for Adults?

    ERIC Educational Resources Information Center

    Rogers, Edwin S.; Spalding, Steven L.; Eckard, Alexis A.; Wallace, Lorraine S.

    2009-01-01

    Objective: The primary purpose of this study was to examine the cognitive complexity and readability of patient-administered ADHD scales. The secondary purpose was to estimate variation in the readability of individual ADHD scale items. Method: Using comprehensive search strategies, we identified eight English-language ADHD scales for inclusion in our…

  14. Scaled MP3 non-covalent interaction energies agree closely with accurate CCSD(T) benchmark data.

    PubMed

    Pitonák, Michal; Neogrády, Pavel; Cerný, Jirí; Grimme, Stefan; Hobza, Pavel

    2009-01-12

    Scaled MP3 interaction energies calculated as a sum of MP2/CBS (complete basis set limit) interaction energies and scaled third-order energy contributions obtained in small or medium size basis sets agree very closely with the estimated CCSD(T)/CBS interaction energies for the 22 H-bonded, dispersion-controlled and mixed non-covalent complexes from the S22 data set. Performance of this so-called MP2.5 (third-order scaling factor of 0.5) method has also been tested for 33 nucleic acid base pairs and two stacked conformers of porphine dimer. In all the test cases, performance of the MP2.5 method was shown to be superior to the scaled spin-component MP2 based methods, e.g. SCS-MP2, SCSN-MP2 and SCS(MI)-MP2. In particular, a very balanced treatment of hydrogen-bonded compared to stacked complexes is achieved with MP2.5. The main advantage of the approach is that it employs only a single empirical parameter and is thus biased by two rigorously defined, asymptotically correct ab-initio methods, MP2 and MP3. The method is proposed as an accurate but computationally feasible alternative to CCSD(T) for the computation of the properties of various kinds of non-covalently bound systems.
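
    The composite scheme described in this abstract can be written in one line. In the notation below (symbols assumed for illustration, not quoted from the paper), ΔE^(3) is the third-order correction obtained in the small or medium basis and c3 is the single empirical scaling factor:

```latex
% MP2.5 interaction energy: CBS-limit MP2 plus half of the third-order correction
% evaluated in a small or medium basis (c3 = 0.5 is the single empirical parameter).
\Delta E_{\mathrm{int}}^{\mathrm{MP2.5}}
  = \Delta E_{\mathrm{int}}^{\mathrm{MP2/CBS}}
  + c_3\,\Delta E_{\mathrm{int}}^{(3)},
\qquad
\Delta E_{\mathrm{int}}^{(3)} = \Delta E_{\mathrm{int}}^{\mathrm{MP3}} - \Delta E_{\mathrm{int}}^{\mathrm{MP2}},
\qquad c_3 = 0.5 .
```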

  15. Detrended Partial-Cross-Correlation Analysis: A New Method for Analyzing Correlations in Complex System

    PubMed Central

    Yuan, Naiming; Fu, Zuntao; Zhang, Huan; Piao, Lin; Xoplaki, Elena; Luterbacher, Juerg

    2015-01-01

    In this paper, a new method, detrended partial-cross-correlation analysis (DPCCA), is proposed. Based on detrended cross-correlation analysis (DCCA), the method incorporates the partial-correlation technique and can be applied to quantify the relations between two non-stationary signals (with the influences of other signals removed) on different time scales. We illustrate the advantages of the method with two numerical tests. Test I shows the advantages of DPCCA in handling non-stationary signals, while Test II reveals the “intrinsic” relations between two considered time series with the potential influences of other, unconsidered signals removed. To further show the utility of DPCCA in natural complex systems, we provide new evidence on the winter-time Pacific Decadal Oscillation (PDO) and the winter-time Nino3 Sea Surface Temperature Anomaly (Nino3-SSTA) affecting the Summer Rainfall over the middle-lower reaches of the Yangtze River (SRYR). By applying DPCCA, clearer significant correlations between SRYR and Nino3-SSTA on time scales of 6-8 years are found over the period 1951-2012, while significant correlations between SRYR and PDO arise on time scales of 35 years. With these physically explainable results, we are confident that DPCCA is a useful method for addressing complex systems. PMID:25634341

  16. Extracting Communities from Complex Networks by the k-Dense Method

    NASA Astrophysics Data System (ADS)

    Saito, Kazumi; Yamada, Takeshi; Kazama, Kazuhiro

    To understand the structural and functional properties of large-scale complex networks, it is crucial to efficiently extract a set of cohesive subnetworks as communities. Several such community extraction methods have been proposed in the literature, including the classical k-core decomposition method and, more recently, the k-clique based community extraction method. The k-core method, although computationally efficient, is often not powerful enough to uncover a detailed community structure and produces only coarse-grained, loosely connected communities. The k-clique method, on the other hand, can extract fine-grained and tightly connected communities but requires a substantial computational load for large-scale complex networks. In this paper, we present a new notion of a subnetwork called k-dense, and propose an efficient algorithm for extracting k-dense communities. We applied our method to three different types of networks assembled from real data, namely blog trackbacks, word associations and Wikipedia references, and demonstrated that the k-dense method could extract communities almost as efficiently as the k-core method, while the qualities of the extracted communities are comparable to those obtained by the k-clique method.
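
    For reference, here is a self-contained sketch of the classical k-core baseline mentioned above (the pruning loop and the toy edge list are illustrative; the paper's k-dense criterion is a tighter condition sitting between k-core and k-clique and is not reproduced here): nodes of degree below k are deleted repeatedly until every remaining node has at least k neighbors inside the surviving subgraph.

```python
# Classical k-core extraction by iterative pruning (toy example; the k-dense
# notion from the record imposes a stricter, clique-like condition on top of this).
from collections import defaultdict

def k_core(edges, k):
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if node in adj and len(adj[node]) < k:
                for nbr in adj.pop(node):
                    if nbr in adj:            # avoid re-creating pruned nodes
                        adj[nbr].discard(node)
                changed = True
    return {node: sorted(nbrs) for node, nbrs in adj.items()}

# Two triangles joined by a bridge, plus a pendant node 6.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5), (4, 6)]
print(k_core(edges, k=2))   # node 6 is pruned; nodes 0-5 form the 2-core
```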

  17. Continuum Level Density of a Coupled-Channel System in the Complex Scaling Method

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Kruppa, A. T.; Giraud, B. G.; Katō, K.

    2008-06-01

    We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ^{4}He = [^{3}H + p] + [^{3}He + n] coupled-channel cluster model, where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L^{2} basis functions are consistent with the exact result calculated from the S-matrix by solving the coupled-channel equations. We also study the channel densities. In this framework, the extended completeness relation (ECR) plays an important role.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacques Hugo

    Traditional engineering methods do not make provision for the integration of human considerations, while traditional human factors methods do not scale well to the complexity of large-scale nuclear power plant projects. Although the need for up-to-date human factors engineering processes and tools is recognised widely in industry, so far no formal guidance has been developed. This article proposes such a framework.

  19. Combined self-learning based single-image super-resolution and dual-tree complex wavelet transform denoising for medical images

    NASA Astrophysics Data System (ADS)

    Yang, Guang; Ye, Xujiong; Slabaugh, Greg; Keegan, Jennifer; Mohiaddin, Raad; Firmin, David

    2016-03-01

    In this paper, we propose a novel self-learning based single-image super-resolution (SR) method, which is coupled with dual-tree complex wavelet transform (DTCWT) based denoising to better recover high-resolution (HR) medical images. Unlike previous methods, this self-learning based SR approach enables us to reconstruct HR medical images from a single low-resolution (LR) image without extra training on HR image datasets in advance. The relationships between the given image and its scaled down versions are modeled using support vector regression with sparse coding and dictionary learning, without explicitly assuming reoccurrence or self-similarity across image scales. In addition, we perform DTCWT based denoising to initialize the HR images at each scale instead of simple bicubic interpolation. We evaluate our method on a variety of medical images. Both quantitative and qualitative results show that the proposed approach outperforms bicubic interpolation and state-of-the-art single-image SR methods while effectively removing noise.

  20. Scaling Linguistic Characterization of Precipitation Variability

    NASA Astrophysics Data System (ADS)

    Primo, C.; Gutierrez, J. M.

    2003-04-01

    Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters. However different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic precipitation sequence of observed precipitation is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed in that site during a period representative of the climatology. Then, the more variable the observed weekly precipitation sequences, the more complex the obtained language. To characterize these languages, we first applied the Zipf's method obtaining scaling histograms of rank ordered frequencies. However, to obtain significant exponents, the scaling must be maintained some orders of magnitude, requiring long sequences of daily precipitation which are not available at particular stations. Thus this analysis is not suitable for applications involving particular stations (such as regionalization). Then, we introduce an alternative fractal method applicable to data from local stations. The so-called Chaos-Game method uses Iterated Function Systems (IFS) for graphically representing rainfall languages, in a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian peninsula, when compared with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs for a clustering regionalization method, comparing the resulting clusters.
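
    A minimal sketch of the "linguistic" encoding step described above (synthetic daily rain/no-rain series, one 7-day binary word per week, and a rank-frequency count in the spirit of the Zipf analysis; the chaos-game/IFS plotting and box-counting stage is not shown, and all parameters are illustrative):

```python
# Weekly binary "words" from a daily rain/no-rain series, with a Zipf-style
# rank-frequency listing (illustrative synthetic record, not station data).
from collections import Counter
import random

random.seed(4)
daily = [1 if random.random() < 0.3 else 0 for _ in range(365 * 30)]  # 30-year toy record

words = Counter()
for start in range(0, len(daily) - 6, 7):
    word = "".join(str(d) for d in daily[start:start + 7])   # e.g. "0110000"
    words[word] += 1

ranked = words.most_common()
print("distinct weekly patterns:", len(ranked), "of", 2 ** 7, "possible")
for rank, (word, freq) in enumerate(ranked[:5], 1):
    print(rank, word, freq)
```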

  1. A new multi-scale method to reveal hierarchical modular structures in biological networks.

    PubMed

    Jiao, Qing-Ju; Huang, Yan; Shen, Hong-Bin

    2016-11-15

    Biological networks are effective tools for studying molecular interactions. Modular structure, in which genes or proteins may tend to be associated with functional modules or protein complexes, is a remarkable feature of biological networks. Mining modular structure from biological networks enables us to focus on a set of potentially important nodes, which provides a reliable guide to future biological experiments. The first fundamental challenge in mining modular structure from biological networks is that the quality of the observed network data is usually low owing to noise and incompleteness in the obtained networks. The second problem that poses a challenge to existing approaches to the mining of modular structure is that the organization of both functional modules and protein complexes in networks is far more complicated than was ever thought. For instance, the sizes of different modules vary considerably from each other and they often form multi-scale hierarchical structures. To solve these problems, we propose a new multi-scale protocol for mining modular structure (named ISIMB) driven by a node similarity metric, which works in an iteratively converged space to reduce the effects of the low data quality of the observed network data. The multi-scale node similarity metric couples both the local and the global topology of the network with a resolution regulator. By varying this resolution regulator to give different weightings to the local and global terms in the metric, the ISIMB method is able to fit the shape of modules and to detect them on different scales. Experiments on protein-protein interaction and genetic interaction networks show that our method can not only mine functional modules and protein complexes successfully, but can also predict functional modules from specific to general and reveal the hierarchical organization of protein complexes.

  2. Electron- and positron-impact atomic scattering calculations using propagating exterior complex scaling

    NASA Astrophysics Data System (ADS)

    Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.

    2007-11-01

    Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = -[8 + (r1 - r2)^2]^(-1/2). This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.

  3. A Principled Approach to the Specification of System Architectures for Space Missions

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient to cope with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering model based design environment.

  4. Autoscoring Essays Based on Complex Networks

    ERIC Educational Resources Information Center

    Ke, Xiaohua; Zeng, Yongqiang; Luo, Haijiao

    2016-01-01

    This article presents a novel method, the Complex Dynamics Essay Scorer (CDES), for automated essay scoring using complex network features. Texts produced by college students in China were represented as scale-free networks (e.g., a word adjacency model) from which typical network features, such as the in-/out-degrees, clustering coefficient (CC),…

  5. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
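    As a hedged illustration of one of the correlation functions listed above, the following Python sketch computes a two-point probability function S2(r) for the pore phase of a binary image along one axis; the simulated annealing reconstruction and the parallel interchange scheme themselves are not reproduced here.

```python
# Sketch: two-point probability function S2(r) for the pore phase of a 2-D binary image.
import numpy as np

def two_point_probability(image, max_r):
    """S2(r): probability that two points separated by r (along x) are both pore (value 1)."""
    img = np.asarray(image, dtype=bool)
    s2 = []
    for r in range(max_r + 1):
        a = img[:, : img.shape[1] - r] if r else img
        b = img[:, r:]
        s2.append(np.mean(a & b))
    return np.array(s2)          # s2[0] equals the porosity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    porous = rng.random((128, 128)) < 0.3   # synthetic medium with ~30% porosity
    print(two_point_probability(porous, 5))
```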

  6. A spectral method for spatial downscaling | Science Inventory ...

    EPA Pesticide Factsheets

    Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing ch

  7. End of the chain? Rugosity and fine-scale bathymetry from existing underwater digital imagery using structure-from-motion (SfM) technology

    USGS Publications Warehouse

    Storlazzi, Curt; Dartnell, Peter; Hatcher, Gerry; Gibbs, Ann E.

    2016-01-01

    The rugosity or complexity of the seafloor has been shown to be an important ecological parameter for fish, algae, and corals. Historically, rugosity has been measured either using simple and subjective manual methods such as ‘chain-and-tape’ or complicated and expensive geophysical methods. Here, we demonstrate the application of structure-from-motion (SfM) photogrammetry to generate high-resolution, three-dimensional bathymetric models of a fringing reef from existing underwater video collected to characterize the seafloor. SfM techniques are capable of achieving spatial resolution that can be orders of magnitude greater than large-scale lidar and sonar mapping of coral reef ecosystems. The resulting data provide finer-scale measurements of bathymetry and rugosity that are more applicable to ecological studies of coral reefs than provided by the more expensive and time-consuming geophysical methods. Utilizing SfM techniques for characterizing the benthic habitat proved to be more effective and quantitatively powerful than conventional methods and thus might portend the end of the ‘chain-and-tape’ method for measuring benthic complexity.

  8. Scale separation for multi-scale modeling of free-surface and two-phase flows with the conservative sharp interface method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, L.H., E-mail: Luhui.Han@tum.de; Hu, X.Y., E-mail: Xiangyu.Hu@tum.de; Adams, N.A., E-mail: Nikolaus.Adams@tum.de

    In this paper we present a scale separation approach for multi-scale modeling of free-surface and two-phase flows with complex interface evolution. By performing a stimulus-response operation on the level-set function representing the interface, separation of resolvable and non-resolvable interface scales is achieved efficiently. Uniform positive and negative shifts of the level-set function are used to determine non-resolvable interface structures. Non-resolved interface structures are separated from the resolved ones and can be treated by a mixing model or a Lagrangian-particle model in order to preserve mass. Resolved interface structures are treated by the conservative sharp-interface model. Since the proposed scale separation approach does not rely on topological information, unlike in previous work, it can be implemented in a straightforward fashion into a given level set based interface model. A number of two- and three-dimensional numerical tests demonstrate that the proposed method is able to cope with complex interface variations accurately and significantly increases robustness against underresolved interface structures.

  9. ComplexContact: a web server for inter-protein contact prediction using deep learning.

    PubMed

    Zeng, Hong; Wang, Sheng; Zhou, Tianming; Zhao, Feifeng; Li, Xiufeng; Wu, Qing; Xu, Jinbo

    2018-05-22

    ComplexContact (http://raptorx2.uchicago.edu/ComplexContact/) is a web server for sequence-based interfacial residue-residue contact prediction of a putative protein complex. Interfacial residue-residue contacts are critical for understanding how proteins form complexes and interact at the residue level. When receiving a pair of protein sequences, ComplexContact first searches for their sequence homologs and builds two paired multiple sequence alignments (MSAs), then it applies co-evolution analysis and a CASP-winning deep learning (DL) method to predict interfacial contacts from paired MSAs and visualizes the prediction as an image. The DL method was originally developed for intra-protein contact prediction and performed the best in CASP12. Our large-scale experimental test further shows that ComplexContact greatly outperforms pure co-evolution methods for inter-protein contact prediction, regardless of the species.

  10. Complexity analysis of brain activity in attention-deficit/hyperactivity disorder: A multiscale entropy analysis.

    PubMed

    Chenxi, Li; Chen, Yanni; Li, Youjun; Wang, Jue; Liu, Tian

    2016-06-01

    The multiscale entropy (MSE) is a novel method for quantifying the intrinsic dynamical complexity of physiological systems over several scales. To evaluate this method as a promising way to explore the neural mechanisms in ADHD, we calculated the MSE of EEG activity during a designed task. EEG data were collected from 13 outpatient boys with a confirmed diagnosis of ADHD and 13 age- and gender-matched normal control children while they performed the multi-source interference task (MSIT). We estimated the MSE by calculating the sample entropy values of the delta, theta, alpha and beta frequency bands over twenty time scales using a coarse-graining procedure. The results showed increased complexity of EEG data in the delta and theta frequency bands and decreased complexity in the alpha frequency band in ADHD children. The findings of this study reveal aberrant neural connectivity in children with ADHD during an interference task. The results suggest that the MSE method may provide a new index to identify and understand the neural mechanisms of ADHD. Copyright © 2016 Elsevier Inc. All rights reserved.
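    A minimal Python sketch of the coarse-graining plus sample-entropy recipe the abstract describes; the parameters (m = 2, tolerance r = 0.15 times the standard deviation of the original series) are common defaults, not values taken from the study.

```python
# Hedged sketch of multiscale entropy (MSE): coarse-grain, then sample entropy per scale.
import numpy as np

def coarse_grain(x, scale):
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    x = np.asarray(x, float)
    def count(mm):
        templ = np.array([x[i : i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (dist <= r).sum() - len(templ)      # exclude self-matches
    b, a = count(m), count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=20, m=2):
    x = np.asarray(x, float)
    r = 0.15 * x.std()                             # tolerance fixed from the original series
    return [sample_entropy(coarse_grain(x, s), m=m, r=r) for s in range(1, max_scale + 1)]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    print(multiscale_entropy(rng.standard_normal(1000), max_scale=5))
```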

  11. A meta-analysis of crop pest and natural enemy response to landscape complexity.

    PubMed

    Chaplin-Kramer, Rebecca; O'Rourke, Megan E; Blitzer, Eleanor J; Kremen, Claire

    2011-09-01

    Many studies in recent years have investigated the relationship between landscape complexity and pests, natural enemies and/or pest control. However, no quantitative synthesis of this literature beyond simple vote-count methods yet exists. We conducted a meta-analysis of 46 landscape-level studies, and found that natural enemies have a strong positive response to landscape complexity. Generalist enemies show consistent positive responses to landscape complexity across all scales measured, while specialist enemies respond more strongly to landscape complexity at smaller scales. Generalist enemy response to natural habitat also tends to occur at larger spatial scales than for specialist enemies, suggesting that land management strategies to enhance natural pest control should differ depending on whether the dominant enemies are generalists or specialists. The positive response of natural enemies does not necessarily translate into pest control, since pest abundances show no significant response to landscape complexity. Very few landscape-scale studies have estimated enemy impact on pest populations, however, limiting our understanding of the effects of landscape on pest control. We suggest focusing future research efforts on measuring population dynamics rather than static counts to better characterise the relationship between landscape complexity and pest control services from natural enemies. © 2011 Blackwell Publishing Ltd/CNRS.

  12. Preventing Data Ambiguity in Infectious Diseases with Four-Dimensional and Personalized Evaluations

    PubMed Central

    Iandiorio, Michelle J.; Fair, Jeanne M.; Chatzipanagiotou, Stylianos; Ioannidis, Anastasios; Trikka-Graphakos, Eleftheria; Charalampaki, Nikoletta; Sereti, Christina; Tegos, George P.; Hoogesteijn, Almira L.; Rivas, Ariel L.

    2016-01-01

    Background: Diagnostic errors can occur, in infectious diseases, when anti-microbial immune responses involve several temporal scales. When responses span from nanosecond to week and larger temporal scales, any pre-selected temporal scale is likely to miss some (faster or slower) responses. Hoping to prevent diagnostic errors, a pilot study was conducted to evaluate a four-dimensional (4D) method that captures the complexity and dynamics of infectious diseases. Methods: Leukocyte-microbial-temporal data were explored in canine and human (bacterial and/or viral) infections, with: (i) a non-structured approach, which measures leukocytes or microbes in isolation; and (ii) a structured method that assesses numerous combinations of interacting variables. Four alternatives of the structured method were tested: (i) a noise-reduction oriented version, which generates a single (one data point-wide) line of observations; (ii) a version that measures complex, three-dimensional (3D) data interactions; (iii) a non-numerical version that displays temporal data directionality (arrows that connect pairs of consecutive observations); and (iv) a full 4D (single line-, complexity-, directionality-based) version. Results: In all studies, the non-structured approach revealed non-interpretable (ambiguous) data: observations numerically similar expressed different biological conditions, such as recovery and lack of recovery from infections. Ambiguity was also found when the data were structured as single lines. In contrast, two or more data subsets were distinguished and ambiguity was avoided when the data were structured as complex, 3D, single lines and, in addition, temporal data directionality was determined. The 4D method detected, even within one day, changes in immune profiles that occurred after antibiotics were prescribed. Conclusions: Infectious disease data may be ambiguous. Four-dimensional methods may prevent ambiguity, providing earlier, in vivo, dynamic, complex, and personalized information that facilitates both diagnostics and selection or evaluation of anti-microbial therapies. PMID:27411058

  13. Platinum clusters with precise numbers of atoms for preparative-scale catalysis.

    PubMed

    Imaoka, Takane; Akanuma, Yuki; Haruta, Naoki; Tsuchiya, Shogo; Ishihara, Kentaro; Okayasu, Takeshi; Chun, Wang-Jae; Takahashi, Masaki; Yamamoto, Kimihisa

    2017-09-25

    Subnanometer noble metal clusters have enormous potential, mainly for catalytic applications. Because a difference of only one atom may cause significant changes in their reactivity, a preparation method with atomic-level precision is essential. Although such a precision with enough scalability has been achieved by gas-phase synthesis, large-scale preparation is still at the frontier, hampering practical applications. We now show the atom-precise and fully scalable synthesis of platinum clusters on a milligram scale from tiara-like platinum complexes with various ring numbers (n = 5-13). Low-temperature calcination of the complexes on a carbon support under hydrogen stream affords monodispersed platinum clusters, whose atomicity is equivalent to that of the precursor complex. One of the clusters (Pt10) exhibits high catalytic activity in the hydrogenation of styrene compared to that of the other clusters. This method opens an avenue for the application of these clusters to preparative-scale catalysis. The catalytic activity of a noble metal nanocluster is tied to its atomicity. Here, the authors report an atom-precise, fully scalable synthesis of platinum clusters from molecular ring precursors, and show that a variation of only one atom can dramatically change a cluster's reactivity.

  14. Development of multiscale complexity and multifractality of fetal heart rate variability.

    PubMed

    Gierałtowski, Jan; Hoyer, Dirk; Tetschke, Florian; Nowack, Samuel; Schneider, Uwe; Zebrowski, Jan

    2013-11-01

    During fetal development a complex system grows and coordination over multiple time scales is formed towards an integrated behavior of the organism. Since essential cardiovascular and associated coordination is mediated by the autonomic nervous system (ANS) and the ANS activity is reflected in recordable heart rate patterns, multiscale heart rate analysis is a tool predestined for the diagnosis of prenatal maturation. Analyses over multiple time scales require sufficiently long data sets, while recordings of fetal heart rate, as well as the behavioral states studied, are themselves short. Care must be taken that the analysis methods used are appropriate for short data lengths. We investigated multiscale entropy and multifractal scaling exponents from 30 minute recordings of 27 normal fetuses, aged between 23 and 38 weeks of gestational age (WGA), during the quiet state. In multiscale entropy, we found complexity lower than that of non-correlated white noise over all 20 coarse-graining time scales investigated. A significant maturation-related complexity increase was expressed most strongly at scale 2, using both sample entropy and generalized mutual information as complexity estimates. Multiscale multifractal analysis (MMA), in which the Hurst surface h(q,s) is calculated, where q is the multifractal parameter and s is the scale, was applied to the fetal heart rate data. MMA is a method derived from detrended fluctuation analysis (DFA). We modified the base algorithm of MMA to make it applicable to short time series by using overlapping data windows and a reduction of the scale range. We looked for the q and s for which the Hurst exponent h(q,s) is most correlated with gestational age, and used this value of the Hurst exponent to predict the gestational age based only on fetal heart rate variability properties. Comparison with the true age of the fetus gave satisfying results (error 2.17±3.29 weeks; p<0.001; R² = 0.52). In addition, we found that the normally used DFA scale range is non-optimal for fetal age evaluation. We conclude that 30 min recordings are appropriate and sufficient for assessing fetal age by multiscale entropy and multiscale multifractal analysis. The predominant prognostic role of scale 2 heart beats for MSE and scale 39 heart beats (at q=-0.7) for MMA can be explored neither by single-scale complexity measures nor by standard detrended fluctuation analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
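    For orientation, the following is a compact multifractal DFA sketch in Python (order-1 detrending) that estimates a global Hurst exponent h(q) from the fluctuation function F_q(s); the paper's MMA method instead derives a full Hurst surface h(q,s) from local slopes with overlapping windows, which is not reproduced here, and the scales and test series below are illustrative only.

```python
# Hedged multifractal DFA sketch: fluctuation function F_q(s) and a global h(q) fit.
import numpy as np

def mfdfa_fluctuations(x, scales, q):
    profile = np.cumsum(x - np.mean(x))
    fq = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s : (i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        rms = np.array(rms)
        if q == 0:
            fq.append(np.exp(0.5 * np.mean(np.log(rms ** 2))))
        else:
            fq.append(np.mean(rms ** q) ** (1.0 / q))
    return np.array(fq)

def hurst_exponent(x, scales, q):
    fq = mfdfa_fluctuations(x, scales, q)
    slope, _ = np.polyfit(np.log(scales), np.log(fq), 1)
    return slope   # h(q)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.standard_normal(2000)
    print(hurst_exponent(x, scales=[16, 32, 64, 128, 256], q=2))  # ~0.5 for white noise
```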

  15. Electron-Atom Ionization Calculations using Propagating Exterior Complex Scaling

    NASA Astrophysics Data System (ADS)

    Bartlett, Philip

    2007-10-01

    The exterior complex scaling method (Science 286 (1999) 2474), pioneered by Rescigno, McCurdy and coworkers, provided highly accurate ab initio solutions for electron-hydrogen collisions by directly solving the time-independent Schrödinger equation in coordinate space. An extension of this method, propagating exterior complex scaling (PECS), was developed by Bartlett and Stelbovics (J. Phys. B 37 (2004) L69, J. Phys. B 39 (2006) R379) and has been demonstrated to provide computationally efficient and accurate calculations of ionization and scattering cross sections over a large range of energies below, above and near the ionization threshold. An overview of the PECS method for three-body collisions and the computational advantages of its propagation and iterative coupling techniques will be presented along with results of: (1) near-threshold ionization of electron-hydrogen collisions and the Wannier threshold laws, (2) scattering cross section resonances below the ionization threshold, and (3) total and differential cross sections for electron collisions with excited targets and hydrogenic ions from low through to high energies. Recently, the PECS method has been extended to solve four-body collisions using time-independent methods in coordinate space and has initially been applied to the s-wave model for electron-helium collisions. A description of the extensions made to the PECS method to facilitate these significantly more computationally demanding calculations will be given, and results will be presented for elastic, single-excitation, double-excitation, single-ionization and double-ionization collisions.

  16. Advances in modelling of biomimetic fluid flow at different scales

    PubMed Central

    2011-01-01

    Biomimetic flow at different scales is discussed at length. The need to examine biological surfaces and morphologies, and both their geometrical and physical similarities, in order to imitate them in technological products and processes is emphasized. Complex fluid flow and heat transfer problems, fluid-interface behavior, and the physics involved at multiple scales (macro, meso, micro and nano) are discussed. Flow and heat transfer are simulated with various CFD solvers, including Navier-Stokes and energy equation solvers, the lattice Boltzmann method and the molecular dynamics method. A combined continuum-molecular dynamics method is also reviewed. PMID:21711847

  17. Multiscale entropy analysis of heart rate variability in heart failure, hypertensive, and sinoaortic-denervated rats: classical and refined approaches.

    PubMed

    Silva, Luiz Eduardo Virgilio; Lataro, Renata Maria; Castania, Jaci Airton; da Silva, Carlos Alberto Aguiar; Valencia, Jose Fernando; Murta, Luiz Otavio; Salgado, Helio Cesar; Fazan, Rubens; Porta, Alberto

    2016-07-01

    The analysis of heart rate variability (HRV) by nonlinear methods has been gaining increasing interest due to their ability to quantify the complexity of cardiovascular regulation. In this study, multiscale entropy (MSE) and refined MSE (RMSE) were applied to track the complexity of HRV as a function of time scale in three pathological conscious animal models: rats with heart failure (HF), spontaneously hypertensive rats (SHR), and rats with sinoaortic denervation (SAD). Results showed that HF did not change HRV complexity, although there was a tendency toward decreased entropy in HF animals. On the other hand, the SHR group was characterized by reduced complexity at long time scales, whereas SAD animals exhibited reduced short- and long-term irregularity. We propose that short time scales (1 to 4), accounting for fast oscillations, are more related to vagal and respiratory control, whereas long time scales (5 to 20), accounting for slow oscillations, are more related to sympathetic control. The increased sympathetic modulation is probably the main reason for the lower entropy observed at high scales for both SHR and SAD groups, acting as a negative factor for the cardiovascular complexity. This study highlights the contribution of the multiscale complexity analysis of HRV for understanding the physiological mechanisms involved in cardiovascular regulation. Copyright © 2016 the American Physiological Society.

  18. Multiscale multifractal DCCA and complexity behaviors of return intervals for Potts price model

    NASA Astrophysics Data System (ADS)

    Wang, Jie; Wang, Jun; Stanley, H. Eugene

    2018-02-01

    To investigate the characteristics of extreme events in financial markets and the corresponding return intervals among these events, we use a Potts dynamic system to construct a random financial time series model of the attitudes of market traders. We use multiscale multifractal detrended cross-correlation analysis (MM-DCCA) and Lempel-Ziv complexity (LZC) to numerically study the return intervals for two major Chinese stock market indices and for the proposed model. The new MM-DCCA method is based on the Hurst surface and provides more interpretable cross-correlations of the dynamic mechanism between different return interval series. We scale the LZC method with different exponents to illustrate the complexity of return intervals at different scales. Empirical studies indicate that the proposed return intervals from the Potts system and the real stock market indices hold similar statistical properties.
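    A hedged Python sketch of Lempel-Ziv complexity (LZ76 phrase counting) applied to a binarised interval series; the MM-DCCA analysis and the exponent scaling of LZC used in the paper are not reproduced, and the synthetic intervals below are illustrative only.

```python
# Sketch: Lempel-Ziv (LZ76) phrase count of a 0/1-symbolised return-interval series.
import numpy as np

def lempel_ziv_complexity(sequence):
    """Number of distinct phrases in the LZ76 parsing of a symbol sequence."""
    s = "".join(map(str, sequence))
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the current phrase while it still appears earlier in the string
        while i + l <= n and s[i : i + l] in s[: i + l - 1]:
            l += 1
        c += 1
        i += l
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    intervals = rng.exponential(size=500)
    symbols = (intervals > np.median(intervals)).astype(int)   # simple 0/1 symbolisation
    print(lempel_ziv_complexity(symbols))
```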

  19. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data.

    PubMed

    Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon

    2015-01-01

    Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these findings have substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a major hurdle in building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
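    The kind of comparison described can be sketched with off-the-shelf penalized logistic regression; the synthetic genotype data, penalty strength and cross-validation settings below are illustrative assumptions, not those of the study.

```python
# Hedged sketch: ridge (L2) vs LASSO (L1) logistic regression on "large P, small N" SNP-like data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 200, 2000                                       # far more SNPs than subjects
X = rng.integers(0, 3, size=(n, p)).astype(float)      # genotypes coded 0/1/2
beta = np.zeros(p)
beta[:10] = 0.5                                        # only 10 causal SNPs, each with a small effect
logit = X @ beta
logit -= logit.mean()
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

for penalty, solver in [("l2", "lbfgs"), ("l1", "liblinear")]:
    model = LogisticRegression(penalty=penalty, C=0.1, solver=solver, max_iter=5000)
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(penalty, round(auc, 3))
```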

  20. Detrended fluctuation analysis based on higher-order moments of financial time series

    NASA Astrophysics Data System (ADS)

    Teng, Yue; Shang, Pengjian

    2018-01-01

    In this paper, a generalized method of detrended fluctuation analysis (DFA) is proposed as a new measure to assess the complexity of a complex dynamical system such as a stock market. We extend DFA and local scaling DFA (LSDFA) to higher moments such as skewness and kurtosis (labeled SMDFA and KMDFA) so as to investigate the volatility scaling property of financial time series. Simulations are conducted over synthetic and financial data to provide a comparative study. We further report the volatility behaviors of three American, three Chinese and three European stock markets using the DFA and LSDFA methods based on higher moments. They demonstrate the dynamic behaviors of the time series in different aspects, quantify the changes in complexity of stock market data, and provide more meaningful information than a single exponent. The results also reveal higher-moment volatility and higher-moment multiscale volatility details that cannot be obtained using the traditional DFA method.

  1. AMPLIFIED FRAGMENT LENGTH POLYMORPHISM ANALYSIS OF MYCOBACTERIUM AVIUM COMPLEX ISOLATES RECOVERED FROM SOUTHERN CALIFORNIA

    EPA Science Inventory

    Fine-scale genotyping methods are necessary in order to identify possible sources of human exposure to opportunistic pathogens belonging to the Mycobacterium avium complex (MAC). In this study, amplified fragment length polymorphism (AFLP) analysis was evaluated for fingerprintin...

  2. Quantification for complex assessment: uncertainty estimation in final year project thesis assessment

    NASA Astrophysics Data System (ADS)

    Kim, Ho Sung

    2013-12-01

    A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final year project thesis assessment including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's expertise achieved. To identify the relevant expertise areas that depend on the complexity in assessment format, a graphical continuum model was developed. The continuum model consists of assessment task, assessment standards and criterion for the transition towards the complex assessment owing to the relativity between implicitness and explicitness and is capable of identifying areas of expertise required for scale development.

  3. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
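    As background for the integration schemes compared above, here is a minimal Newmark-beta (average acceleration) step for a linear single-degree-of-freedom oscillator in Python; the hybrid-simulation coupling, the fixed-iteration implicit variant and the operator-splitting scheme are not reproduced, and the system parameters are illustrative.

```python
# Hedged sketch: incremental Newmark-beta integration of a linear SDOF oscillator.
import numpy as np

def newmark_sdof(m, c, k, force, dt, beta=0.25, gamma=0.5):
    n = len(force)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (force[0] - c * v[0] - k * u[0]) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)   # effective stiffness
    for i in range(n - 1):
        dp = (force[i + 1] - force[i]
              + m * (v[i] / (beta * dt) + a[i] / (2 * beta))
              + c * (gamma * v[i] / beta + dt * a[i] * (gamma / (2 * beta) - 1)))
        du = dp / keff
        dv = gamma * du / (beta * dt) - gamma * v[i] / beta + dt * a[i] * (1 - gamma / (2 * beta))
        da = du / (beta * dt ** 2) - v[i] / (beta * dt) - a[i] / (2 * beta)
        u[i + 1], v[i + 1], a[i + 1] = u[i] + du, v[i] + dv, a[i] + da
    return u, v, a

if __name__ == "__main__":
    dt = 0.01
    t = np.arange(0, 10, dt)
    u, _, _ = newmark_sdof(m=1.0, c=0.1, k=40.0, force=np.sin(2 * np.pi * t), dt=dt)
    print(u[:5])
```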

  4. Solving the three-body Coulomb breakup problem using exterior complex scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCurdy, C.W.; Baertschy, M.; Rescigno, T.N.

    2004-05-17

    Electron-impact ionization of the hydrogen atom is the prototypical three-body Coulomb breakup problem in quantum mechanics. The combination of subtle correlation effects and the difficult boundary conditions required to describe two electrons in the continuum have made this one of the outstanding challenges of atomic physics. A complete solution of this problem in the form of a "reduction to computation" of all aspects of the physics is given by the application of exterior complex scaling, a modern variant of the mathematical tool of analytic continuation of the electronic coordinates into the complex plane that was used historically to establish the formal analytic properties of the scattering matrix. This review first discusses the essential difficulties of the three-body Coulomb breakup problem in quantum mechanics. It then describes the formal basis of exterior complex scaling of electronic coordinates as well as the details of its numerical implementation using a variety of methods including finite difference, finite elements, discrete variable representations, and B-splines. Given these numerical implementations of exterior complex scaling, the scattering wave function can be generated with arbitrary accuracy on any finite volume in the space of electronic coordinates, but there remains the fundamental problem of extracting the breakup amplitudes from it. Methods are described for evaluating these amplitudes. The question of the volume-dependent overall phase that appears in the formal theory of ionization is resolved. A summary is presented of accurate results that have been obtained for the case of electron-impact ionization of hydrogen as well as a discussion of applications to the double photoionization of helium.

  5. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE PAGES

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; ...

    2016-02-18

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  6. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    USGS Publications Warehouse

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-01-01

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  7. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi³ (167 km³) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  8. Complexity perplexity: a systematic review to describe the measurement of medication regimen complexity.

    PubMed

    Paquin, Allison M; Zimmerman, Kristin M; Kostas, Tia R; Pelletier, Lindsey; Hwang, Angela; Simone, Mark; Skarf, Lara M; Rudolph, James L

    2013-11-01

    Complex medication regimens are error prone and challenging for patients, which may impact medication adherence and safety. No universal method to assess the complexity of medication regimens (CMRx) exists. The authors aim to review literature for CMRx measurements to establish consistencies and, secondarily, describe CMRx impact on healthcare outcomes. A search of EMBASE and PubMed for studies analyzing at least two medications and complexity components, among those self-managing medications, was conducted. Out of 1204 abstracts, 38 studies were included in the final sample. The majority (74%) of studies used one of five validated CMRx scales; their components and scoring were compared. Universal CMRx assessment is needed to identify and reduce complex regimens, and, thus, improve safety. The authors highlight commonalities among five scales to help build consensus. Common components (i.e., regimen factors) included dosing frequency, units per dose, and non-oral routes. Elements (e.g., twice daily) of these components (e.g., dosing frequency) and scoring varied. Patient-specific factors (e.g., dexterity, cognition) were not addressed, which is a shortcoming of current scales and a challenge for future scales. As CMRx has important outcomes, notably adherence and healthcare utilization, a standardized tool has potential for far-reaching clinical, research, and patient-safety impact.

  9. Optimal Output of Distributed Generation Based On Complex Power Increment

    NASA Astrophysics Data System (ADS)

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, such as wind and photovoltaic power generation, has been widely adopted. This new generation capacity is connected to the distribution network as distributed generation and is consumed by local loads. However, as the scale of distributed generation connected to the network increases, optimizing its power output becomes increasingly important and requires further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have drawbacks such as slow computation and uncertain outcomes. This article proposes a method called complex power increment, whose essence is the analysis of the power grid under steady-state power flow. From this analysis we obtain the complex scaling function equation relating the power supplies; the coefficients of the equation are based on the impedance parameters of the network, so the relation between variables and coefficients is described more precisely. Thus, the method can accurately describe the power increment relationship and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method or heuristic methods.

  10. Fractal mechanisms in the electrophysiology of the heart

    NASA Technical Reports Server (NTRS)

    Goldberger, A. L.

    1992-01-01

    The mathematical concept of fractals provides insights into complex anatomic branching structures that lack a characteristic (single) length scale, and certain complex physiologic processes, such as heart rate regulation, that lack a single time scale. Heart rate control is perturbed by alterations in neuro-autonomic function in a number of important clinical syndromes, including sudden cardiac death, congestive failure, cocaine intoxication, fetal distress, space sickness and physiologic aging. These conditions are associated with a loss of the normal fractal complexity of interbeat interval dynamics. Such changes, which may not be detectable using conventional statistics, can be quantified using new methods derived from "chaos theory."

  11. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.

  12. Epidemic dynamics and endemic states in complex networks

    NASA Astrophysics Data System (ADS)

    Pastor-Satorras, Romualdo; Vespignani, Alessandro

    2001-06-01

    We study, by analytical methods and large-scale simulations, a dynamical model for the spreading of epidemics in complex networks. In networks with exponentially bounded connectivity we recover the usual epidemic behavior, with a threshold defining a critical point below which the infection prevalence is null. On the contrary, on a wide range of scale-free networks we observe the absence of an epidemic threshold and its associated critical behavior. This implies that scale-free networks are prone to the spreading and persistence of infections whatever spreading rate the epidemic agents might possess. These results can help in understanding computer virus epidemics and other spreading phenomena on communication and social networks.
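    A small susceptible-infected-susceptible (SIS) simulation on a Barabási-Albert scale-free network sketches the kind of dynamics studied in the abstract; the spreading and recovery rates and the network size are illustrative only, and the analytical threshold results are not derived here.

```python
# Hedged sketch: synchronous SIS dynamics on a scale-free (Barabási-Albert) network.
import random
import networkx as nx

def sis_step(g, infected, beta, mu):
    new_infected = set(infected)
    for node in infected:
        for nb in g.neighbors(node):
            if nb not in infected and random.random() < beta:   # infection along an edge
                new_infected.add(nb)
        if random.random() < mu:                                # recovery back to susceptible
            new_infected.discard(node)
    return new_infected

if __name__ == "__main__":
    random.seed(0)
    g = nx.barabasi_albert_graph(5000, m=3, seed=0)
    infected = set(random.sample(list(g.nodes()), 50))
    beta, mu = 0.05, 0.2
    for _ in range(200):
        infected = sis_step(g, infected, beta, mu)
    print("steady-state prevalence:", len(infected) / g.number_of_nodes())
```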

  13. Computational vibrational study on coordinated nicotinamide

    NASA Astrophysics Data System (ADS)

    Bolukbasi, Olcay; Akyuz, Sevim

    2005-06-01

    The molecular structure and vibrational spectra of zinc (II) halide complexes of nicotinamide (ZnX 2(NIA) 2; X=Cl or Br; NIA=Nicotinamide) were investigated by computational vibrational study and scaled quantum mechanical (SQM) analysis. The geometry optimisation and vibrational wavenumber calculations of zinc halide complexes of nicotinamide were carried out by using the DFT/RB3LYP level of theory with 6-31G(d,p) basis set. The calculated wavenumbers were scaled by using scaled quantum mechanical (SQM) force field method. The fundamental vibrational modes were characterised by their total energy distribution. The coordination effects on nicotinamide through the ring nitrogen were discussed.

  14. Identifying influential nodes in complex networks: A node information dimension approach

    NASA Astrophysics Data System (ADS)

    Bian, Tian; Deng, Yong

    2018-04-01

    In the field of complex networks, how to identify influential nodes is a significant issue in analyzing the structure of a network. In the existing method proposed to identify influential nodes based on the local dimension, the global structure information in complex networks is not taken into consideration. In this paper, a node information dimension is proposed by synthesizing the local dimensions at different topological distance scales. A case study of the Netscience network is used to illustrate the efficiency and practicability of the proposed method.
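    A hedged sketch of a node-level "local dimension": count how many nodes lie within topological distance r of a node and fit a power law in r. The paper's information dimension synthesises such local quantities across scales with a specific formula that is not reproduced here; the network and radius range below are illustrative.

```python
# Sketch: local dimension of a node from the growth of its r-neighborhood.
import numpy as np
import networkx as nx

def local_dimension(g, node, max_r=None):
    lengths = nx.single_source_shortest_path_length(g, node)
    rmax = max_r or max(lengths.values())
    radii = np.arange(1, rmax + 1)
    counts = np.array([sum(1 for d in lengths.values() if 0 < d <= r) for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)   # power-law exponent
    return slope

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(2000, 3, seed=1)
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:3]
    for node, deg in hubs:
        print(node, deg, round(local_dimension(g, node, max_r=4), 2))
```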

  15. Modeling Complex Phenomena Using Multiscale Time Sequences

    DTIC Science & Technology

    2009-08-24

    Complex phenomena are modeled by characterizing how they behave at different scales and how these scales relate to each other. This can be done by combining a set of statistical fractal measures based on Hurst and Holder exponents, auto-regressive methods, and Fourier and wavelet decomposition methods.

  16. A comparative study of turbulence models for overset grids

    NASA Technical Reports Server (NTRS)

    Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.

    1992-01-01

    The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.

  17. Filtering Gene Ontology semantic similarity for identifying protein complexes in large protein interaction networks.

    PubMed

    Wang, Jian; Xie, Dong; Lin, Hongfei; Yang, Zhihao; Zhang, Yijia

    2012-06-21

    Protein complexes are particularly important in many biological processes, and various computational approaches have been developed to identify complexes from protein-protein interaction (PPI) networks. However, the high false-positive rate of PPIs makes identification challenging. A protein semantic similarity measure is proposed in this study, based on the ontology structure of Gene Ontology (GO) terms and GO annotations, to estimate the reliability of interactions in PPI networks. Interaction pairs with low GO semantic similarity are removed from the network as unreliable interactions. Then, a cluster-expanding algorithm is used to detect complexes with a core-attachment structure on the filtered network. Our method is applied to three different yeast PPI networks. The effectiveness of our method is examined on two benchmark complex datasets. Experimental results show that our method performed better than other state-of-the-art approaches on most evaluation metrics. The method detects protein complexes from large-scale PPI networks by filtering on GO semantic similarity. Removing interactions with low GO similarity significantly improves the performance of complex identification. The expanding strategy is also effective for identifying the attachment proteins of complexes.
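    The filtering step can be sketched generically: drop PPI edges whose GO semantic similarity falls below a threshold before running complex detection. The similarity function and toy scores below are hypothetical stand-ins, not the measure defined in the paper, and the core-attachment expansion step is omitted.

```python
# Sketch: remove low-similarity PPI edges before complex detection.
import networkx as nx

def filter_ppi_network(g, similarity, threshold=0.4):
    """Return a copy of g keeping only edges with similarity(u, v) >= threshold."""
    filtered = nx.Graph()
    filtered.add_nodes_from(g.nodes())
    for u, v in g.edges():
        if similarity(u, v) >= threshold:
            filtered.add_edge(u, v)
    return filtered

if __name__ == "__main__":
    ppi = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
    toy_scores = {frozenset(e): s for e, s in
                  [(("A", "B"), 0.9), (("B", "C"), 0.2), (("A", "C"), 0.7), (("C", "D"), 0.5)]}
    sim = lambda u, v: toy_scores[frozenset((u, v))]        # hypothetical GO similarity lookup
    print(list(filter_ppi_network(ppi, sim).edges()))
```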

  18. Double symbolic joint entropy in nonlinear dynamic complexity analysis

    NASA Astrophysics Data System (ADS)

    Yao, Wenpo; Wang, Jun

    2017-07-01

    Symbolization, the basis of symbolic dynamic analysis, can be classified into global static and local dynamic approaches, which we combine by joint entropy for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N. symbolic entropy and base-scale entropy, and two local ones, namely the symbolizations of permutation and differential entropy, constitute four double symbolic joint entropies that accurately detect complexity in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of healthy young subjects have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by the double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results prove that double symbolic joint entropy is feasible in nonlinear dynamic complexity analysis.
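    A hedged sketch of the joint-entropy idea: symbolise a series with one global (quantile-based) rule and one local (difference-sign) rule, then compute the Shannon entropy of the paired symbols. The specific symbolisations used in the paper (Wessel, base-scale, permutation, differential) are not reproduced exactly.

```python
# Sketch: joint entropy of a global and a local symbolisation of the same series.
import numpy as np
from collections import Counter

def global_symbols(x, n_levels=4):
    """Quantile-based static symbolisation into n_levels symbols."""
    edges = np.quantile(x, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(x, edges)

def local_symbols(x):
    """Dynamic symbolisation from the sign of successive differences."""
    return (np.diff(x, prepend=x[0]) >= 0).astype(int)

def joint_entropy(a, b):
    counts = Counter(zip(a.tolist(), b.tolist()))
    p = np.array(list(counts.values()), float) / len(a)
    return -np.sum(p * np.log2(p))

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    x = rng.standard_normal(5000)
    print(joint_entropy(global_symbols(x), local_symbols(x)))
```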

  19. Exploring stability of entropy analysis for signal with different trends

    NASA Astrophysics Data System (ADS)

    Zhang, Yin; Li, Jin; Wang, Jun

    2017-03-01

    Owing to the effects of environmental disturbances and instrument systems, measured signals often carry different trends, which make it difficult to accurately capture signal complexity. Choosing stable and effective analysis methods is therefore very important. In this paper, we apply two entropy measures, base-scale entropy and approximate entropy, to analyze signal complexity, and we study the effect of trends (linear, periodic, and power-law, as are likely to occur in actual signals) on an ideal signal and on heart rate variability (HRV) signals. The results show that approximate entropy is unstable when different trends are embedded into the signals, so it is not suitable for analyzing signals with trends. However, base-scale entropy shows better stability and accuracy for signals with different trends, making it an effective method for analyzing actual signals.

  20. Scale-free crystallization of two-dimensional complex plasmas: Domain analysis using Minkowski tensors

    NASA Astrophysics Data System (ADS)

    Böbel, A.; Knapek, C. A.; Räth, C.

    2018-05-01

    Experiments of the recrystallization processes in two-dimensional complex plasmas are analyzed to rigorously test a recently developed scale-free phase transition theory. The "fractal-domain-structure" (FDS) theory is based on the kinetic theory of Frenkel. It assumes the formation of homogeneous domains, separated by defect lines, during crystallization and a fractal relationship between domain area and boundary length. For the defect number fraction and system energy a scale-free power-law relation is predicted. The long-range scaling behavior of the bond-order correlation function shows clearly that the complex plasma phase transitions are not of the Kosterlitz, Thouless, Halperin, Nelson, and Young type. Previous preliminary results obtained by counting the number of dislocations and applying a bond-order metric for structural analysis are reproduced. These findings are supplemented by extending the use of the bond-order metric to measure the defect number fraction and furthermore applying state-of-the-art analysis methods, allowing a systematic testing of the FDS theory with unprecedented scrutiny: A morphological analysis of lattice structure is performed via Minkowski tensor methods. Minkowski tensors form a complete family of additive, motion covariant and continuous morphological measures that are sensitive to nonlinear properties. The FDS theory is rigorously confirmed and predictions of the theory are reproduced extremely well. The predicted scale-free power-law relation between defect fraction number and system energy is verified for one more order of magnitude at high energies compared to the inherently discontinuous bond-order metric. It is found that the fractal relation between crystalline domain area and circumference is independent of the experiment, the particular Minkowski tensor method, and the particular choice of parameters. Thus, the fractal relationship seems to be inherent to two-dimensional phase transitions in complex plasmas. Minkowski tensor analysis turns out to be a powerful tool for investigations of crystallization processes. It is capable of revealing nonlinear local topological properties, however, still provides easily interpretable results founded on a solid mathematical framework.

  1. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.

  2. Three Dimensional Energetics of Left Ventricle Flows Using Time-Resolved DPIV

    NASA Astrophysics Data System (ADS)

    Pierrakos, Olga; Vlachos, Pavlos

    2006-11-01

    Left ventricular (LV) flows in the human heart are very complex, and in the presence of unhealthy or prosthetic heart valves (HV) the complexity of the flow is further increased. Yet to date, no study has documented the complex 3D hemodynamic characteristics and energetics of LV flows. We present high sampling frequency Time Resolved DPIV results obtained in a flexible, transparent LV documenting the evolution of eddies and turbulence. The purpose is to characterize the energetics of the LV flow field in the presence of four orientations of the most commonly implanted mechanical bileaflet HV and a porcine valve. By decomposing the energy scales of the flow field, the ultimate goal is to quantify the total energy losses associated with vortex ring formation and turbulence dissipation. The energies associated with vortex ring formation give a measure of the energy trapped within the structure, while estimates of the turbulence dissipation rate (TDR) give a measure of the energy dissipated at the smaller scales. For the first time in cardiovascular applications, an LES-based PIV method, which overcomes the limitations of conventional TDR estimation methods that assume homogeneous isotropic turbulence, was employed. We observed that energy lost at the larger scales (vortex ring) is much higher than the energy lost at the smaller scales due to turbulence dissipation.

  3. Quantification of scaling exponents and dynamical complexity of microwave refractivity in a tropical climate

    NASA Astrophysics Data System (ADS)

    Fuwape, Ibiyinka A.; Ogunjo, Samuel T.

    2016-12-01

    The radio refractivity index is used to quantify the effect of atmospheric parameters on communication systems. The scaling and dynamical complexities of radio refractivity across different climatic zones of Nigeria have been studied. The scaling property of radio refractivity across Nigeria was estimated from the Hurst exponent obtained using two different scaling methods, namely the rescaled range (R/S) and detrended fluctuation analysis (DFA). The delay vector variance (DVV), largest Lyapunov exponent (λ1) and correlation dimension (D2) methods were used to investigate nonlinearity, and the results confirm the presence of a deterministic nonlinear profile in the radio refractivity time series. Recurrence quantification analysis (RQA) was used to quantify the degree of chaoticity in the radio refractivity across the different climatic zones. RQA was found to be a good measure for identifying unique fingerprints and signatures of chaotic time series data. Microwave radio refractivity was found to be persistent and chaotic in all the study locations. The dynamics of radio refractivity increase in complexity and chaoticity from the coastal region towards the Sahelian climate. The design, development and deployment of robust and reliable microwave communication links in the region will be greatly affected by the chaotic nature of radio refractivity.
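    One of the two scaling estimators named above, the rescaled range (R/S) method, can be sketched in a few lines; the window sizes and the synthetic input below are illustrative, not the processing chain used in the study.

```python
# Hedged sketch: Hurst exponent from a simple rescaled-range (R/S) analysis.
import numpy as np

def hurst_rs(x, windows=(16, 32, 64, 128, 256)):
    x = np.asarray(x, float)
    rs = []
    for w in windows:
        vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start : start + w]
            dev = np.cumsum(seg - seg.mean())      # cumulative deviation from the window mean
            r = dev.max() - dev.min()              # range
            s = seg.std()                          # standard deviation
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope                                   # Hurst exponent estimate

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    print(hurst_rs(rng.standard_normal(4096)))     # ~0.5 for uncorrelated noise
```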

  4. Innovative Field Methods for Characterizing the Hydraulic Properties of a Complex Fractured Rock Aquifer (Ploemeur, Brittany)

    NASA Astrophysics Data System (ADS)

    Bour, O.; Le Borgne, T.; Longuevergne, L.; Lavenant, N.; Jimenez-Martinez, J.; De Dreuzy, J. R.; Schuite, J.; Boudin, F.; Labasque, T.; Aquilina, L.

    2014-12-01

    Characterizing the hydraulic properties of heterogeneous and complex aquifers often requires field scale investigations at multiple space and time scales to better constrain hydraulic property estimates. Here, we present and discuss results from the site of Ploemeur (Brittany, France) where complementary hydrological and geophysical approaches have been combined to characterize the hydrogeological functioning of this highly fractured crystalline rock aquifer. In particular, we show how cross-borehole flowmeter tests, pumping tests and frequency domain analysis of groundwater levels allow quantifying the hydraulic properties of the aquifer at different scales. In complement, we used groundwater temperature as an excellent tracer for characterizing groundwater flow. At the site scale, measurements of ground surface deformation through long-base tiltmeters provide robust estimates of aquifer storage and allow identifying the active structures where groundwater pressure changes occur, including those acting during recharge process. Finally, a numerical model of the site that combines hydraulic data and groundwater ages confirms the geometry of this complex aquifer and the consistency of the different datasets. The Ploemeur site, which has been used for water supply at a rate of about 10^6 m^3 per year since 1991, belongs to the French network of hydrogeological sites H+ and is currently used for monitoring groundwater changes and testing innovative field methods.

  5. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method assessing irregularity in assigned frequency bands and being appropriate for analyzing the short time series. It is grounded on the identification of the coefficients of an autoregressive model, on the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and on the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to the short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of the cardiac autonomic control, namely in low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands to the complexity of the cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded on information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in LF band via a possible intervention of sympathetic control and the decrement of HP variability complexity observed during slow breathing is the result of the regularization of the HP variations in both LF and HF bands, thus implying the action of physiological mechanisms working at time scales even different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over a short time series MSC allows a more insightful association between cardiac control complexity and physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
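    The following sketch illustrates the pole-based idea in simplified form: fit an autoregressive model, keep the poles whose frequencies fall in an assigned band, and report their mean distance from the unit circle as a band-specific irregularity index. The least-squares AR fit, the model order, the band edges, and the synthetic heart-period series are assumptions for illustration; the published MSC method differs in its identification details and validation.

```python
import numpy as np

def ar_poles(x, order=10):
    """Fit an AR(order) model by least squares and return the poles of its transfer function."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Design matrix of lagged samples: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Poles are the roots of z^p - a1 z^(p-1) - ... - ap
    return np.roots(np.concatenate(([1.0], -a)))

def band_complexity(x, fs, band, order=10):
    """Mean distance from the unit circle of the poles whose frequencies lie in `band` (Hz).

    Poles further inside the unit circle are read as more irregular oscillations in
    that band; this is a simplified index inspired by the pole-based MSC idea."""
    poles = ar_poles(x, order)
    freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    if not np.any(in_band):
        return np.nan
    return float(np.mean(1.0 - np.abs(poles[in_band])))

# Synthetic beat-to-beat heart-period-like series, nominally sampled at 1 Hz
rng = np.random.default_rng(1)
t = np.arange(300)
hp = 0.05 * np.sin(2 * np.pi * 0.1 * t) + 0.02 * rng.normal(size=t.size)
print(band_complexity(hp, fs=1.0, band=(0.04, 0.15)))   # LF band
print(band_complexity(hp, fs=1.0, band=(0.15, 0.5)))    # HF band
```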

  6. Detection of crossover time scales in multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Ge, Erjia; Leung, Yee

    2013-04-01

    Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
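    The paper's contribution is a rigorous scaling-identification regression model with statistical inference; the sketch below only illustrates the underlying objects in a crude way, computing an order-1 DFA fluctuation function and locating a single crossover as the breakpoint of a two-segment log-log fit that minimises the residual sum of squares. Scale choices and the test signal are illustrative assumptions.

```python
import numpy as np

def dfa_fluctuation(x, scales):
    """Order-1 detrended fluctuation function F(s) of a 1-D series."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))   # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrending
            rms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

def find_crossover(scales, F):
    """Fit a two-segment (piecewise linear) regression in log-log space and return
    the breakpoint scale that minimises the residual sum of squares."""
    ls, lf = np.log(scales), np.log(F)
    best = None
    for k in range(2, len(ls) - 2):            # candidate breakpoints
        sse = 0.0
        for seg in (slice(None, k + 1), slice(k, None)):
            coef = np.polyfit(ls[seg], lf[seg], 1)
            sse += np.sum((lf[seg] - np.polyval(coef, ls[seg])) ** 2)
        if best is None or sse < best[1]:
            best = (k, sse)
    return scales[best[0]]

rng = np.random.default_rng(0)
x = rng.normal(size=2 ** 14)
scales = np.unique(np.logspace(2, 3.5, 20).astype(int))
F = dfa_fluctuation(x, scales)
print("estimated crossover scale:", find_crossover(scales, F))
```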

  7. Toward a methodical framework for comprehensively assessing forest multifunctionality.

    PubMed

    Trogisch, Stefan; Schuldt, Andreas; Bauhus, Jürgen; Blum, Juliet A; Both, Sabine; Buscot, François; Castro-Izaguirre, Nadia; Chesters, Douglas; Durka, Walter; Eichenberg, David; Erfmeier, Alexandra; Fischer, Markus; Geißler, Christian; Germany, Markus S; Goebes, Philipp; Gutknecht, Jessica; Hahn, Christoph Zacharias; Haider, Sylvia; Härdtle, Werner; He, Jin-Sheng; Hector, Andy; Hönig, Lydia; Huang, Yuanyuan; Klein, Alexandra-Maria; Kühn, Peter; Kunz, Matthias; Leppert, Katrin N; Li, Ying; Liu, Xiaojuan; Niklaus, Pascal A; Pei, Zhiqin; Pietsch, Katherina A; Prinz, Ricarda; Proß, Tobias; Scherer-Lorenzen, Michael; Schmidt, Karsten; Scholten, Thomas; Seitz, Steffen; Song, Zhengshan; Staab, Michael; von Oheimb, Goddert; Weißbecker, Christina; Welk, Erik; Wirth, Christian; Wubet, Tesfaye; Yang, Bo; Yang, Xuefei; Zhu, Chao-Dong; Schmid, Bernhard; Ma, Keping; Bruelheide, Helge

    2017-12-01

    Biodiversity-ecosystem functioning (BEF) research has extended its scope from communities that are short-lived or reshape their structure annually to structurally complex forest ecosystems. The establishment of tree diversity experiments poses specific methodological challenges for assessing the multiple functions provided by forest ecosystems. In particular, methodological inconsistencies and nonstandardized protocols impede the analysis of multifunctionality within, and comparability across the increasing number of tree diversity experiments. By providing an overview on key methods currently applied in one of the largest forest biodiversity experiments, we show how methods differing in scale and simplicity can be combined to retrieve consistent data allowing novel insights into forest ecosystem functioning. Furthermore, we discuss and develop recommendations for the integration and transferability of diverse methodical approaches to present and future forest biodiversity experiments. We identified four principles that should guide basic decisions concerning method selection for tree diversity experiments and forest BEF research: (1) method selection should be directed toward maximizing data density to increase the number of measured variables in each plot. (2) Methods should cover all relevant scales of the experiment to consider scale dependencies of biodiversity effects. (3) The same variable should be evaluated with the same method across space and time for adequate larger-scale and longer-time data analysis and to reduce errors due to changing measurement protocols. (4) Standardized, practical and rapid methods for assessing biodiversity and ecosystem functions should be promoted to increase comparability among forest BEF experiments. We demonstrate that currently available methods provide us with a sophisticated toolbox to improve a synergistic understanding of forest multifunctionality. However, these methods require further adjustment to the specific requirements of structurally complex and long-lived forest ecosystems. By applying methods connecting relevant scales, trophic levels, and above- and belowground ecosystem compartments, knowledge gain from large tree diversity experiments can be optimized.

  8. Assessing Understanding of Complex Causal Networks Using an Interactive Game

    ERIC Educational Resources Information Center

    Ross, Joel

    2013-01-01

    Assessing people's understanding of the causal relationships found in large-scale complex systems may be necessary for addressing many critical social concerns, such as environmental sustainability. Existing methods for assessing systems thinking and causal understanding frequently use the technique of cognitive causal mapping. However, the…

  9. Multiscale skeletal representation of images via Voronoi diagrams

    NASA Astrophysics Data System (ADS)

    Marston, R. E.; Shih, Jian C.

    1995-08-01

    Polygonal approximations to skeletal or stroke-based representations of 2D objects may consume less storage and be sufficient to describe their shape for many applications. Multi-scale descriptions of object outlines are well established, but corresponding methods for skeletal descriptions have been slower to develop. In this paper we offer a method of generating a scale-based skeletal representation via the Voronoi diagram. The method has the advantages of lower time complexity, a closer relationship between the skeletons at each scale, and better control over simplification of the skeleton at lower scales. This is because the algorithm starts by generating the skeleton at the coarsest scale first, then produces each finer scale, in an iterative manner, directly from the level below. The skeletal approximations produced by the algorithm also benefit from a strong relationship with the object outline, due to the structure of the Voronoi diagram.
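    A single-scale sketch of the Voronoi-based idea is given below: sample the object outline, build the Voronoi diagram of the samples, and keep the Voronoi vertices lying inside the outline as skeleton candidates. The boundary resampling, the rectangular test shape, and the absence of the coarse-to-fine iteration described in the paper are all simplifications assumed here.

```python
import numpy as np
from scipy.spatial import Voronoi
from matplotlib.path import Path

def voronoi_skeleton(boundary, n_samples=400):
    """Approximate skeleton of a simple polygon: internal Voronoi vertices of points
    sampled on the boundary lie close to the medial axis.
    `boundary` is an (m, 2) array of polygon vertices (closed implicitly)."""
    closed = np.vstack([boundary, boundary[:1]])
    seg = np.diff(closed, axis=0)
    seg_len = np.hypot(seg[:, 0], seg[:, 1])
    cum = np.concatenate([[0.0], np.cumsum(seg_len)])
    # Resample the outline uniformly by arc length
    s = np.linspace(0.0, cum[-1], n_samples, endpoint=False)
    pts = np.empty((n_samples, 2))
    for k, sk in enumerate(s):
        i = np.searchsorted(cum, sk, side="right") - 1
        t = (sk - cum[i]) / seg_len[i]
        pts[k] = closed[i] + t * seg[i]
    vor = Voronoi(pts)
    inside = Path(closed).contains_points(vor.vertices)
    return vor.vertices[inside]       # skeleton vertex candidates

# Example: skeleton candidates of a rectangle
rect = np.array([[0, 0], [10, 0], [10, 4], [0, 4]], dtype=float)
print(voronoi_skeleton(rect)[:5])
```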

  10. A Bayesian method for assessing multiscale species-habitat relationships

    USGS Publications Warehouse

    Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.

    2017-01-01

    Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information-theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces the computation time of studies. We present a simulation study to validate our method and apply our method to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
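    BLISS itself embeds latent scale indicators in a reversible-jump MCMC sampler within a hierarchical abundance model; the sketch below only illustrates the core notion of a posterior over candidate spatial scales, using a Zellner g-prior linear model in which the scale indicator's posterior can be computed in closed form. The covariates, prior choice, and toy data are assumptions for illustration.

```python
import numpy as np

def scale_indicator_posterior(y, X_scales, g=None):
    """Posterior probability of each candidate spatial scale under a Zellner
    g-prior linear model with a uniform prior over scales.

    y        : (n,) response (e.g. log abundance at survey sites)
    X_scales : list of (n, p) design matrices, one per candidate scale
    """
    y = np.asarray(y, float)
    n = len(y)
    yc = y - y.mean()
    if g is None:
        g = float(n)                       # unit-information prior
    logml = []
    for X in X_scales:
        Xc = X - X.mean(axis=0)
        p = Xc.shape[1]
        beta_hat, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
        fit = yc @ Xc @ beta_hat           # y'X(X'X)^{-1}X'y via the LS fit
        ssy = yc @ yc
        # Log marginal likelihood up to a constant common to all scales
        logml.append(-0.5 * p * np.log1p(g)
                     - 0.5 * (n - 1) * np.log(ssy - g / (1 + g) * fit))
    logml = np.array(logml)
    w = np.exp(logml - logml.max())
    return w / w.sum()

# Toy example: one covariate "measured" at three candidate scales, the middle one true
rng = np.random.default_rng(2)
n = 200
X_scales = [rng.normal(size=(n, 1)) for _ in range(3)]
y = 1.0 + 2.0 * X_scales[1][:, 0] + rng.normal(scale=0.5, size=n)
print(scale_indicator_posterior(y, X_scales))
```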

  11. Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity

    PubMed Central

    Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.

    2013-01-01

    Introduction: Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods: Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi's estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results: Power-spectrum, Higuchi's fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions: Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
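    As one concrete example of the complexity measures listed above, here is a minimal implementation of Higuchi's fractal dimension for a one-dimensional series; the choice of kmax and the test signals are illustrative assumptions, and no fMRI-specific preprocessing is included.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi's estimate of the fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Normalised curve length of the subseries starting at m with step k
            L = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k)
            lengths.append(L / k)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    # Fractal dimension is the negative slope of log L(k) versus log k
    slope, _ = np.polyfit(np.log(k_vals), np.log(lk), 1)
    return -slope

# White noise should give a dimension close to 2, a densely sampled sine close to 1
rng = np.random.default_rng(0)
print(higuchi_fd(rng.normal(size=2000)))
print(higuchi_fd(np.sin(np.linspace(0, 8 * np.pi, 2000))))
```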

  12. Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.

    PubMed

    Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen

    2016-05-01

    Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across many disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson, and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and explained mathematically by different approaches. In the simulations, we display the modeling process and the degree distribution of empirical data obtained by statistical methods, and we assess the reliability of the proposed networks; the results show that our models follow the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
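    A minimal sketch in the spirit of the models described above is given below: vertices arrive according to a homogeneous Poisson process and attach preferentially by degree. The seed graph, arrival rate, and number of attachment edges are assumptions for illustration; the paper's three models (homogeneous Poisson, nonhomogeneous Poisson, and birth-death process) are analyzed in far more detail than this toy growth loop.

```python
import numpy as np
import networkx as nx

def poisson_preferential_graph(T=2000, lam=1.0, m=2, seed=0):
    """Grow a network where new vertices arrive as a homogeneous Poisson process
    (rate `lam` per unit time) and attach to `m` existing vertices chosen with
    probability proportional to their degree (preferential attachment)."""
    rng = np.random.default_rng(seed)
    G = nx.complete_graph(m + 1)          # small seed network
    t = 0.0
    while t < T:
        t += rng.exponential(1.0 / lam)   # waiting time to the next vertex arrival
        new = G.number_of_nodes()
        degrees = np.array([d for _, d in G.degree()], dtype=float)
        probs = degrees / degrees.sum()
        targets = rng.choice(G.number_of_nodes(), size=m, replace=False, p=probs)
        G.add_node(new)
        G.add_edges_from((new, int(v)) for v in targets)
    return G

G = poisson_preferential_graph()
deg = np.array([d for _, d in G.degree()])
print("nodes:", G.number_of_nodes(), "mean degree:", deg.mean())
```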

  13. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    PubMed

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experimental results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method, which is limited to a scale factor of 2, GLM-SI shows superior performance with an average PSNR gain of 0.79 dB, and can be used for scale factors of 3 or higher.

  14. Scaling of counter-current imbibition recovery curves using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    The scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension have an effect on the imbibition process. Studies of imbibition-curve scaling under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to have an effect on the imbibition process and were considered as the inputs for training the network. Using the 'Bayesian regularization' training algorithm, the network was trained and tested. The training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.
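    The sketch below illustrates the general workflow with standard Python tooling under stated assumptions: a small multilayer perceptron is trained to map six matrix/fluid parameters plus a time value to a recovery factor. The synthetic training data stand in for the simulator-generated curves used in the paper, and L2 regularisation is used here as a stand-in for the 'Bayesian regularization' training algorithm of the original MATLAB workflow.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical columns: porosity, log10 permeability, oil viscosity, water viscosity,
# characteristic matrix length, interfacial tension, and imbibition time.
rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(low=[0.05, -15.0, 1.0, 0.5, 0.05, 10.0, 0.0],
                high=[0.35, -12.0, 50.0, 1.5, 1.00, 50.0, 5.0],
                size=(n, 7))
# Placeholder recovery model standing in for simulator-generated training curves
tau = 0.2 + 2.0 * X[:, 0] * X[:, 2] * X[:, 4] ** 2 / X[:, 5]
y = 0.6 * (1.0 - np.exp(-X[:, 6] / tau))

model = make_pipeline(
    StandardScaler(),
    # L2 regularisation (alpha) stands in for Bayesian-regularisation training
    MLPRegressor(hidden_layer_sizes=(20, 20), alpha=1e-3,
                 max_iter=5000, random_state=0),
)
model.fit(X[:1500], y[:1500])
print("test R^2:", model.score(X[1500:], y[1500:]))
```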

  15. Development of the Next Generation of Biogeochemistry Simulations Using EMSL's NWChem Molecular Modeling Software

    NASA Astrophysics Data System (ADS)

    Bylaska, E. J.; Kowalski, K.; Apra, E.; Govind, N.; Valiev, M.

    2017-12-01

    Methods of directly simulating the behavior of complex strongly interacting atomic systems (molecular dynamics, Monte Carlo) have provided important insight into the behavior of nanoparticles, biogeochemical systems, mineral/fluid systems, actinide systems and geofluids. The limitation of these methods to even wider applications is the difficulty of developing accurate potential interactions in these systems at the molecular level that capture their complex chemistry. The well-developed tools of quantum chemistry and physics have been shown to approach the accuracy required. However, despite the continuous effort being put into improving their accuracy and efficiency, these tools will be of little value to condensed matter problems without continued improvements in techniques to traverse and sample the high-dimensional phase space needed to span the ~10^12 time-scale difference between molecular simulation and chemical events. In recent years, we have made considerable progress in developing electronic structure and AIMD methods tailored to treat biochemical and geochemical problems, including very efficient implementations of many-body methods, fast exact exchange methods, electron-transfer methods, excited state methods, QM/MM, and new parallel algorithms that scale to more than 100,000 cores. The poster will focus on the fundamentals of these methods and the realities in terms of system size, computational requirements and simulation times that are required for their application to complex biogeochemical systems.

  16. Resonances for Symmetric Two-Barrier Potentials

    ERIC Educational Resources Information Center

    Fernandez, Francisco M.

    2011-01-01

    We describe a method for the accurate calculation of bound-state and resonance energies for one-dimensional potentials. We calculate the shape resonances for symmetric two-barrier potentials and compare them with those coming from the Siegert approximation, the complex scaling method and the box-stabilization method. A comparison of the…
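    Since this collection concerns the complex scaling method, a minimal illustration of that technique is sketched below for a symmetric two-barrier (double Gaussian) potential: the coordinate is rotated into the complex plane, the scaled Hamiltonian is diagonalised on a grid, and resonances appear as complex eigenvalues with negative imaginary parts that are approximately stationary as the rotation angle is varied. The potential parameters, grid, and energy window are illustrative assumptions, not those of the paper.

```python
import numpy as np

def complex_scaled_spectrum(theta=0.4, A=2.0, a=2.0, w=0.5, L=12.0, n=600):
    """Eigenvalues of the complex-scaled Hamiltonian
        H(theta) = -0.5 * exp(-2i*theta) d^2/dx^2 + V(x * exp(i*theta))
    for a symmetric two-barrier (double Gaussian) potential, with hbar = m = 1."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    xs = x * np.exp(1j * theta)                      # complex-scaled coordinate
    V = A * (np.exp(-(xs - a) ** 2 / w ** 2) + np.exp(-(xs + a) ** 2 / w ** 2))
    # Second-derivative matrix (central differences)
    D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
          + np.diag(np.full(n - 1, 1.0), 1)) / dx ** 2
    H = -0.5 * np.exp(-2j * theta) * D2 + np.diag(V)
    return np.linalg.eigvals(H)

ev = complex_scaled_spectrum()
# Resonance candidates: eigenvalues below the real axis in the barrier energy window
candidates = ev[(ev.real > 0) & (ev.real < 2.5) & (ev.imag < 0) & (ev.imag > -0.5)]
print(np.sort_complex(candidates)[:10])
```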

  17. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  18. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  19. Enhanced conformational sampling using replica exchange with concurrent solute scaling and hamiltonian biasing realized in one dimension.

    PubMed

    Yang, Mingjun; Huang, Jing; MacKerell, Alexander D

    2015-06-09

    Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost due to the multidimensionality of exchanges becomes challenging for use on complex systems under explicit solvent conditions. In this study, we develop a one-dimensional (1D) REX algorithm to concurrently combine the advantages of overall enhanced sampling from Hamiltonian solute scaling and the specific enhancement of collective variables using Hamiltonian biasing potentials. In the present Hamiltonian replica exchange method, termed HREST-BP, Hamiltonian solute scaling is applied to the solute subsystem, and its interactions with the environment to enhance overall conformational transitions and biasing potentials are added along selected collective variables associated with specific conformational transitions, thereby balancing the sampling of different hierarchical degrees of freedom. The two enhanced sampling approaches are implemented concurrently allowing for the use of a small number of replicas (e.g., 6 to 8) in 1D, thus greatly reducing the computational cost in complex system simulations. The present method is applied to conformational sampling of two nitrogen-linked glycans (N-glycans) found on the HIV gp120 envelope protein. Considering the general importance of the conformational sampling problem, HREST-BP represents an efficient procedure for the study of complex saccharides, and, more generally, the method is anticipated to be of general utility for the conformational sampling in a wide range of macromolecular systems.
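    A minimal sketch of the exchange step used by Hamiltonian replica exchange schemes of this kind is given below: two replicas at the same temperature but with different (e.g. solute-scaled or biased) Hamiltonians swap configurations with the standard Metropolis probability. The replica count, energies, and single-sweep loop are illustrative assumptions, not the HREST-BP implementation.

```python
import numpy as np

def hrex_swap_accept(beta, u_i_xi, u_i_xj, u_j_xj, u_j_xi, rng):
    """Metropolis acceptance for exchanging configurations between two replicas that
    share the same temperature (1/beta) but have different Hamiltonians.

    u_a_xb is the potential energy of configuration x_b evaluated with Hamiltonian a.
    The swap is accepted with probability
        min(1, exp(-beta * [U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j)]))."""
    delta = beta * (u_i_xj + u_j_xi - u_i_xi - u_j_xj)
    return delta <= 0 or rng.random() < np.exp(-delta)

# Neighbour-exchange sweep over a small ladder of 6 replicas (toy energies)
rng = np.random.default_rng(0)
beta = 1.0 / (0.0083145 * 300.0)        # 1/(kB*T) in (kJ/mol)^-1 at 300 K
n_rep = 6
energies = rng.normal(loc=-1000.0, scale=5.0, size=(n_rep, n_rep))  # U_i(x_j), toy values
state = list(range(n_rep))              # which configuration each replica holds
for i in range(n_rep - 1):
    xi, xj = state[i], state[i + 1]
    if hrex_swap_accept(beta, energies[i, xi], energies[i, xj],
                        energies[i + 1, xj], energies[i + 1, xi], rng):
        state[i], state[i + 1] = state[i + 1], state[i]
print("configuration held by each replica after one sweep:", state)
```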

  20. Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.

    PubMed

    Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping

    2017-10-06

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.

  1. Finite Element Multi-scale Modeling of Chemical Segregation in Steel Solidification Taking into Account the Transport of Equiaxed Grains

    NASA Astrophysics Data System (ADS)

    Nguyen, Thi-Thuy-My; Gandin, Charles-André; Combeau, Hervé; Založnik, Miha; Bellet, Michel

    2018-02-01

    The transport of solid crystals in the liquid pool during solidification of large ingots is known to have a significant effect on their final grain structure and macrosegregation. Numerical modeling of the associated physics is challenging since complex and strong interactions between heat and mass transfer at the microscopic and macroscopic scales must be taken into account. The paper presents a finite element multi-scale solidification model coupling nucleation, growth, and solute diffusion at the microscopic scale, represented by a single unique grain, while also including transport of the liquid and solid phases at the macroscopic scale of the ingots. The numerical resolution is based on a splitting method that sequentially treats the evolution and interaction of the quantities in a transport stage and a growth stage. This splitting method reduces the non-linear complexity of the set of equations and is, for the first time, implemented using the finite element method. This is possible due to the introduction of an artificial diffusion in all conservation equations solved by the finite element method. Simulations with and without grain transport are compared to demonstrate the impact of solid phase transport on the solidification process as well as the formation of macrosegregation in a binary alloy (Sn-5 wt pct Pb). The model is also applied to the solidification of the binary alloy Fe-0.36 wt pct C in a domain representative of a 3.3-ton steel ingot.

  2. Exploring a multi-scale method for molecular simulation in continuum solvent model: Explicit simulation of continuum solvent as an incompressible fluid.

    PubMed

    Xiao, Li; Luo, Ray

    2017-12-07

    We explored a multi-scale algorithm for the Poisson-Boltzmann continuum solvent model for more robust simulations of biomolecules. In this method, the continuum solvent/solute interface is explicitly simulated with a numerical fluid dynamics procedure, which is tightly coupled to the solute molecular dynamics simulation. There are multiple benefits to adopting such a strategy, as presented below. At this stage of the development, only nonelectrostatic interactions, i.e., van der Waals and hydrophobic interactions, are included in the algorithm to assess the quality of the solvent-solute interface generated by the new method. Nevertheless, numerical challenges exist in accurately interpolating the highly nonlinear van der Waals term when solving the finite-difference fluid dynamics equations. We were able to bypass the challenge rigorously by merging the van der Waals potential and pressure together when solving the fluid dynamics equations and by considering its contribution in the free-boundary condition analytically. The multi-scale simulation method was first validated by reproducing the solute-solvent interface of a single atom with an analytical solution. Next, we performed the relaxation simulation of a restrained symmetrical monomer and observed a symmetrical solvent interface at equilibrium with detailed surface features resembling those found on the solvent excluded surface. Four typical small molecular complexes were then tested, with both volume and force balancing analyses showing that these simple complexes can reach equilibrium within the simulation time window. Finally, we studied the quality of the multi-scale solute-solvent interfaces for the four tested dimer complexes and found that they agree well with the boundaries as sampled in the explicit water simulations.

  3. Rolling bearing fault detection and diagnosis based on composite multiscale fuzzy entropy and ensemble support vector machines

    NASA Astrophysics Data System (ADS)

    Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng

    2017-02-01

    To detect the incipient failure of rolling bearings in a timely manner and find the accurate fault location, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), as an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values will be affected by the data length, especially when the data are not long enough. By combining information from multiple coarse-grained time series at the same scale, the CMFE algorithm is proposed in this paper to enhance MFE, as well as FuzzyEn. Compared with MFE, with increasing scale factor, CMFE obtains much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and is applied to extract the nonlinear features hidden in the vibration signals. The physical meaning of CMFE and the reasons it is suitable for rolling bearing fault diagnosis are also explored. On this basis, to achieve automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method for rolling bearings is applied to experimental data analysis and the results indicate that the proposed method can effectively distinguish different fault categories and severities of rolling bearings.
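    A compact sketch of the entropy side of the method is given below: fuzzy entropy with an exponential membership function, and a composite multiscale variant that averages the FuzzyEn of all coarse-grained series obtained from different starting points at a given scale. Parameter choices and some details (e.g. the tolerance normalisation) are simplifications assumed here and may differ from the published CMFE algorithm.

```python
import numpy as np

def fuzzy_entropy(x, m=2, r=0.15, p=2):
    """Fuzzy entropy (FuzzyEn) of a 1-D series with an exponential membership function."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)
    def phi(dim):
        n_vec = len(x) - dim
        # Template vectors with their own mean removed
        vecs = np.array([x[i:i + dim] - np.mean(x[i:i + dim]) for i in range(n_vec)])
        # Chebyshev distances between all pairs of template vectors
        d = np.max(np.abs(vecs[:, None, :] - vecs[None, :, :]), axis=2)
        sim = np.exp(-(d ** p) / tol ** p)
        np.fill_diagonal(sim, 0.0)
        return sim.sum() / (n_vec * (n_vec - 1))
    return np.log(phi(m)) - np.log(phi(m + 1))

def composite_mfe(x, scale, m=2, r=0.15):
    """Composite multiscale fuzzy entropy at one scale factor: average FuzzyEn over
    all `scale` coarse-grained series obtained from different starting points."""
    x = np.asarray(x, dtype=float)
    vals = []
    for start in range(scale):
        n_pts = (len(x) - start) // scale
        cg = x[start:start + n_pts * scale].reshape(n_pts, scale).mean(axis=1)
        vals.append(fuzzy_entropy(cg, m=m, r=r))
    return float(np.mean(vals))

rng = np.random.default_rng(0)
signal = rng.normal(size=1024)      # placeholder for a bearing vibration signal
print([round(composite_mfe(signal, s), 3) for s in (1, 2, 4, 8)])
```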

  4. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  5. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    PubMed

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

    For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development and how to develop future large-scale data sets that are both representative and able to answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  6. Statistical Analysis of Big Data on Pharmacogenomics

    PubMed Central

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905

  7. Rapid multi-modality preregistration based on SIFT descriptor.

    PubMed

    Chen, Jian; Tian, Jie

    2006-01-01

    This paper describes the scale invariant feature transform (SIFT) method for rapid preregistration of medical images. The technique originates from Lowe's method, in which preregistration is achieved by matching corresponding keypoints between two images. Applying the SIFT preregistration step before the refined registration reduces the overall computational complexity of the registration. The features of SIFT are highly distinctive and invariant to image scaling and rotation, and partially invariant to changes in illumination and contrast, making the approach robust and repeatable for coarse matching of two images. We also altered the descriptor so that the method can handle multi-modality preregistration.
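    A hedged sketch of a SIFT-based coarse pre-alignment with OpenCV is shown below (SIFT is available in recent OpenCV releases as cv2.SIFT_create). The ratio-test threshold, the similarity-transform model, and the file names are assumptions for illustration; the paper additionally modifies the descriptor for multi-modality data, which is not reproduced here.

```python
import cv2
import numpy as np

def sift_prealign(fixed_path, moving_path, ratio=0.75):
    """Coarse pre-registration of two images by matching SIFT keypoints and
    estimating a robust similarity transform (scale + rotation + translation)."""
    img1 = cv2.imread(fixed_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(moving_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    # Lowe's ratio test on 2-nearest-neighbour matches (moving -> fixed)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des2, des1, k=2)
            if m.distance < ratio * n.distance]
    src = np.float32([kp2[m.queryIdx].pt for m in good])
    dst = np.float32([kp1[m.trainIdx].pt for m in good])
    # RANSAC-fitted similarity transform used as the coarse pre-alignment
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M

# M = sift_prealign("fixed_slice.png", "moving_slice.png")   # hypothetical file names
```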

  8. Relationship between femtosecond-picosecond dynamics to enzyme catalyzed H-transfer

    PubMed Central

    Cheatum, Christopher M.; Kohen, Amnon

    2015-01-01

    At physiological temperatures, enzymes exhibit a broad spectrum of conformations, which interchange via thermally activated dynamics. These conformations are sampled differently in different complexes of the protein and its ligands, and the dynamics of exchange between these conformers depends on the mass of the group that is moving and the length scale of the motion, as well as restrictions imposed by the globular fold of the enzymatic complex. Many of these motions have been examined and their role in the enzyme function illuminated, yet most experimental tools applied so far have identified dynamics at time scales of seconds to nanoseconds, which are much slower than the time scale for H-transfer between two heavy atoms. This chemical conversion and other processes involving cleavage of covalent bonds occur on picosecond to femtosecond time scales, where slower processes mask both the kinetics and dynamics. Here we present a combination of kinetic and spectroscopic methods that may enable closer examination of the relationship between enzymatic C-H→C transfer and the dynamics of the active site environment at the chemically relevant time scale. These methods include kinetic isotope effects and their temperature dependence, which are used to study the kinetic nature of the H-transfer, and 2D IR spectroscopy, which is used to study the dynamics of transition-state- and ground-state-analog complexes. The combination of these tools is likely to provide a new approach to examine the protein dynamics that directly influence the chemical conversion catalyzed by enzymes. PMID:23539379

  9. Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility

    ERIC Educational Resources Information Center

    Kharabe, Amol T.

    2012-01-01

    Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…

  10. Large-Eddy Simulations of Atmospheric Flows Over Complex Terrain Using the Immersed-Boundary Method in the Weather Research and Forecasting Model

    NASA Astrophysics Data System (ADS)

    Ma, Yulong; Liu, Heping

    2017-12-01

    Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill indicate improved mean absolute speed-up errors with respect to previous studies, as well an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.

  11. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to the point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  12. Multi-Dimensional Scaling based grouping of known complexes and intelligent protein complex detection.

    PubMed

    Rehman, Zia Ur; Idris, Adnan; Khan, Asifullah

    2018-06-01

    Protein-protein interactions (PPI) play a vital role in cellular processes, and PPI networks are formed by thousands of interactions among proteins. Advances in proteomics technologies have resulted in huge PPI datasets that need to be systematically analyzed. Protein complexes are the locally dense regions in PPI networks, which play an important role in metabolic pathways and gene regulation. In this work, a novel two-phase protein complex detection and grouping mechanism is proposed. In the first phase, topological and biological features are extracted for each complex, and prediction performance is investigated using a bagging-based ensemble classifier (PCD-BEns). Performance evaluation through cross validation shows improvement in comparison to the CDIP, MCode, CFinder and PLSMC methods. The second phase employs Multi-Dimensional Scaling (MDS) for the grouping of known complexes by exploring inter-complex relations. It is experimentally observed that the combination of topological and biological features in the proposed approach greatly enhances prediction performance for protein complex detection, which may help to understand various biological processes, whereas the application of MDS-based exploration may assist in grouping potentially similar complexes. Copyright © 2018 Elsevier Ltd. All rights reserved.
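    The grouping phase can be illustrated with off-the-shelf tooling as in the sketch below: embed a feature vector per known complex with metric multidimensional scaling and then group the embedded points. The random feature matrix, the 2-D embedding, and the k-means grouping are assumptions for illustration; the paper's feature set and grouping procedure are richer than this.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Hypothetical feature matrix for known protein complexes: each row is one complex,
# columns are topological/biological features (e.g. density, clustering coefficient,
# size, annotation similarity). Values here are random placeholders.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 8))

# Embed the complexes in 2-D with metric multidimensional scaling, then group them.
embedding = MDS(n_components=2, random_state=0).fit_transform(features)
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(embedding)
print(embedding[:3])
print("group sizes:", np.bincount(groups))
```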

  13. Using d15N of Chironomidae to help assess lake condition and possible stressors in EPA's National Lakes Assessment.

    EPA Science Inventory

    Background/Questions/Methods: As interest in continental-scale ecology increases to address large-scale ecological problems, ecologists need indicators of complex processes that can be collected quickly at many sites across large areas. We are exploring the utility of stable isot...

  14. Transition Manifolds of Complex Metastable Systems: Theory and Data-Driven Computation of Effective Dynamics.

    PubMed

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-01-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  15. Two-harmonic complex spectral-domain optical coherence tomography using achromatic sinusoidal phase modulation

    NASA Astrophysics Data System (ADS)

    Lu, Sheng-Hua; Huang, Siang-Ru; Chou, Che-Chung

    2018-03-01

    We resolve the complex conjugate ambiguity in spectral-domain optical coherence tomography (SD-OCT) by using achromatic two-harmonic method. Unlike previous researches, the optical phase of the fiber interferometer is modulated by an achromatic phase shifter based on an optical delay line. The achromatic phase modulation leads to a wavelength-independent scaling coefficient for the two harmonics. Dividing the mean absolute value of the first harmonic by that of the second harmonic in a B-scan interferogram directly gives the scaling coefficient. It greatly simplifies the determination of the magnitude ratio between the two harmonics without the need of third harmonic and cumbersome iterative calculations. The inverse fast Fourier transform of the complex-valued interferogram constructed with the scaling coefficient, first and second harmonics yields a full-range OCT image. Experimental results confirm the effectiveness of the proposed achromatic two-harmonic technique for suppressing the mirror artifacts in SD-OCT images.

  16. Transition Manifolds of Complex Metastable Systems

    NASA Astrophysics Data System (ADS)

    Bittracher, Andreas; Koltai, Péter; Klus, Stefan; Banisch, Ralf; Dellnitz, Michael; Schütte, Christof

    2018-04-01

    We consider complex dynamical systems showing metastable behavior, but no local separation of fast and slow time scales. The article raises the question of whether such systems exhibit a low-dimensional manifold supporting its effective dynamics. For answering this question, we aim at finding nonlinear coordinates, called reaction coordinates, such that the projection of the dynamics onto these coordinates preserves the dominant time scales of the dynamics. We show that, based on a specific reducibility property, the existence of good low-dimensional reaction coordinates preserving the dominant time scales is guaranteed. Based on this theoretical framework, we develop and test a novel numerical approach for computing good reaction coordinates. The proposed algorithmic approach is fully local and thus not prone to the curse of dimension with respect to the state space of the dynamics. Hence, it is a promising method for data-based model reduction of complex dynamical systems such as molecular dynamics.

  17. Application of hierarchical clustering method to classify of space-time rainfall patterns

    NASA Astrophysics Data System (ADS)

    Yu, Hwa-Lung; Chang, Tu-Je

    2010-05-01

    Understanding local precipitation patterns is essential to water resources management and flood mitigation. Precipitation patterns can vary in space and time depending upon factors from different spatial scales, such as local topographic changes and macroscopic atmospheric circulation. The spatiotemporal variation of precipitation in Taiwan is significant due to its complex terrain and its location in the subtropical western Pacific, at the boundary between the Pacific Ocean and the Asian continent, where climatic processes interact in complex ways. This study characterizes local-scale precipitation patterns by classifying historical space-time precipitation records. We applied the hierarchical ascending clustering method to analyze the precipitation records from 1960 to 2008 at six rainfall stations located in the Lan-yang catchment in the northeast of the island. Our results identify four primary space-time precipitation types, which may result from distinct driving forces associated with changes of atmospheric variables and topography at different space-time scales. This study also presents an important application of statistical downscaling, combining large-scale upper-air circulation with local space-time precipitation patterns.
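    A minimal sketch of the clustering step is shown below, using agglomerative (hierarchical ascending) clustering with Ward linkage on event vectors; the synthetic "station by hour" rainfall events and the choice of four clusters are illustrative assumptions standing in for the 1960-2008 station records.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical space-time rainfall events: each row is one event described by the
# rainfall recorded at 6 stations over 24 hours, flattened to a single vector.
rng = np.random.default_rng(0)
n_events, n_stations, n_hours = 120, 6, 24
events = rng.gamma(shape=2.0, scale=3.0, size=(n_events, n_stations * n_hours))

# Agglomerative (hierarchical ascending) clustering with Ward linkage
Z = linkage(events, method="ward")
labels = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into 4 pattern types
print("events per precipitation type:", np.bincount(labels)[1:])
```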

  18. Pathological mechanisms underlying single large‐scale mitochondrial DNA deletions

    PubMed Central

    Rocha, Mariana C.; Rosa, Hannah S.; Grady, John P.; Blakely, Emma L.; He, Langping; Romain, Nadine; Haller, Ronald G.; Newman, Jane; McFarland, Robert; Ng, Yi Shiau; Gorman, Grainne S.; Schaefer, Andrew M.; Tuppen, Helen A.; Taylor, Robert W.

    2018-01-01

    Objective: Single, large-scale deletions in mitochondrial DNA (mtDNA) are a common cause of mitochondrial disease. This study aimed to investigate the relationship between the genetic defect and molecular phenotype to improve understanding of pathogenic mechanisms associated with single, large-scale mtDNA deletions in skeletal muscle. Methods: We investigated 23 muscle biopsies taken from adult patients (6 males/17 females with a mean age of 43 years) with characterized single, large-scale mtDNA deletions. Mitochondrial respiratory chain deficiency in skeletal muscle biopsies was quantified by immunoreactivity levels for complex I and complex IV proteins. Single muscle fibers with varying degrees of deficiency were selected from 6 patient biopsies for determination of mtDNA deletion level and copy number by quantitative polymerase chain reaction. Results: We have defined 3 "classes" of single, large-scale deletion with distinct patterns of mitochondrial deficiency, determined by the size and location of the deletion. Single fiber analyses showed that fibers with greater respiratory chain deficiency harbored higher levels of mtDNA deletion with an increase in total mtDNA copy number. For the first time, we have demonstrated that threshold levels for complex I and complex IV deficiency differ based on deletion class. Interpretation: Combining genetic and immunofluorescent assays, we conclude that thresholds for complex I and complex IV deficiency are modulated by the deletion of complex-specific protein-encoding genes. Furthermore, removal of mt-tRNA genes impacts specific complexes only at high deletion levels, when complex-specific protein-encoding genes remain. These novel findings provide valuable insight into the pathogenic mechanisms associated with these mutations. Ann Neurol 2018;83:115-130 PMID:29283441

  19. Wavefield complexity and stealth structures: Resolution constraints by wave physics

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Leng, K.

    2017-12-01

    Imaging the Earth's interior relies on understanding how waveforms encode information from heterogeneous multi-scale structure. This relation is given by elastodynamics, but forward modeling in the context of tomography primarily serves to deliver synthetic waveforms and gradients for the inversion procedure. While this is entirely appropriate, it depreciates a wealth of complementary inference that can be obtained from the complexity of the wavefield. Here, we are concerned with the imprint of realistic multi-scale Earth structure on the wavefield, and the question on the inherent physical resolution limit of structures encoded in seismograms. We identify parameter and scattering regimes where structures remain invisible as a function of seismic wavelength, structural multi-scale geometry, scattering strength, and propagation path. Ultimately, this will aid in interpreting tomographic images by acknowledging the scope of "forgotten" structures, and shall offer guidance for optimising the selection of seismic data for tomography. To do so, we use our novel 3D modeling method AxiSEM3D which tackles global wave propagation in visco-elastic, anisotropic 3D structures with undulating boundaries at unprecedented resolution and efficiency by exploiting the inherent azimuthal smoothness of wavefields via a coupled Fourier expansion-spectral-element approach. The method links computational cost to wavefield complexity and thereby lends itself well to exploring the relation between waveforms and structures. We will show various examples of multi-scale heterogeneities which appear or disappear in the waveform, and argue that the nature of the structural power spectrum plays a central role in this. We introduce the concept of wavefield learning to examine the true wavefield complexity for a complexity-dependent modeling framework and discriminate which scattering structures can be retrieved by surface measurements. This leads to the question of physical invisibility and the tomographic resolution limit, and offers insight as to why tomographic images still show stark differences for smaller-scale heterogeneities despite progress in modeling and data resolution. Finally, we give an outlook on how we expand this modeling framework towards an inversion procedure guided by wavefield complexity.

  20. A Mathematical Model of the Color Preference Scale Construction in Quality Management at the Machine-Building Enterprise

    NASA Astrophysics Data System (ADS)

    Averchenkov, V. I.; Kondratenko, S. V.; Potapov, L. A.; Spasennikov, V. V.

    2017-01-01

    In this article, the authors consider the basic features of color preferences. Work by well-known researchers confirms that such preferences are consistent and independent of subjective factors. The article examines a method of constructing an individual color preference scale for a respondent on the basis of L. Thurstone's paired comparison method. A practical example of applying this technique to construct a respondent's individual color preference scale is given. The result of applying this method is an individual color preference scale with a weight value for each color. The authors also developed and present an algorithm for applying this method within a program complex to determine respondents' attitudes to the issues under investigation based on their color preferences. The article also considers the possibility of using the software at industrial enterprises to improve the quality of consumer products.

  1. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)

    2002-01-01

    Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.
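
    The core of the SVD reduction described above is a one-time factorization of the HRTF matrix followed by truncation to the dominant singular components, so that the real-time stage mixes only a handful of fixed filters instead of one filter per structural source. A minimal sketch of that idea follows; the matrix sizes, the random placeholder data, and the 99% energy criterion are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HRTF matrix: rows = frequency bins, columns = source positions
# on the vibrating structure (values here are random placeholders).
n_freqs, n_sources = 256, 400
H = rng.standard_normal((n_freqs, n_sources))

# Decompose once and keep only the dominant singular values/vectors.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1  # 99% of energy (assumption)
H_reduced = U[:, :k] * s[:k]          # fixed filters, computed once offline
weights = Vt[:k, :]                   # per-source mixing weights

# Real-time stage: combine source spectra through k filters instead of n_sources filters.
source_spectra = rng.standard_normal(n_sources)
ear_signal = H_reduced @ (weights @ source_spectra)
print(f"kept {k} of {min(n_freqs, n_sources)} singular components")
```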

  2. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive in their occurrence, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and the node dynamics/interaction complexity increase. Therefore, it is a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. The stabilisability of each node dynamics is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent to solve the LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with respect to the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
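
    The kind of LMI feasibility problem referred to above ("easily solved with MATLAB toolboxes") can be illustrated for a single stabilisable node. The sketch below uses Python with cvxpy instead of MATLAB and solves the standard state-feedback stabilisation LMI; the node matrices are placeholders and the formulation is generic, not the specific distributed conditions derived in the paper.

```python
import cvxpy as cp
import numpy as np

# Hypothetical node dynamics x' = A x + B u (placeholder values).
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])

# Standard stabilisability LMI in the variables Q = P^{-1} > 0 and Y = K Q:
#   A Q + Q A^T + B Y + Y^T B^T < 0
Q = cp.Variable((2, 2), symmetric=True)
Y = cp.Variable((1, 2))
eps = 1e-6
constraints = [Q >> eps * np.eye(2),
               A @ Q + Q @ A.T + B @ Y + Y.T @ B.T << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

K = Y.value @ np.linalg.inv(Q.value)   # state-feedback gain u = K x
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```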

  3. Outlier-resilient complexity analysis of heartbeat dynamics

    NASA Astrophysics Data System (ADS)

    Lo, Men-Tzung; Chang, Yi-Chung; Lin, Chen; Young, Hsu-Wen Vincent; Lin, Yen-Hung; Ho, Yi-Lwun; Peng, Chung-Kang; Hu, Kun

    2015-03-01

    Complexity in physiological outputs is believed to be a hallmark of healthy physiological control. How to accurately quantify the degree of complexity in physiological signals containing outliers remains a major barrier for translating this concept from nonlinear dynamics theory to clinical practice. Here we propose a new approach to estimate the complexity in a signal by analyzing the irregularity of the sign time series of its coarse-grained time series at different time scales. Using surrogate data, we show that the method can reliably assess the complexity in noisy data while being highly resilient to outliers. We further apply this method to the analysis of human heartbeat recordings. Without removing any outliers due to ectopic beats, the method is able to detect a degradation of cardiac control in patients with congestive heart failure and an even greater degradation in critically ill patients whose survival relies on an extracorporeal membrane oxygenator (ECMO). Moreover, the derived complexity measures can predict the mortality of ECMO patients. These results indicate that the proposed method may serve as a promising tool for monitoring the cardiac function of patients in clinical settings.
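
    A minimal sketch of the idea described above (coarse-grain the series at each scale, take the signs of the increments, and measure the irregularity of the resulting sign sequence) is given below. The Shannon entropy of short sign words is used here as a simple stand-in for the irregularity measure, and the toy R-R series is synthetic; neither is necessarily the authors' exact estimator.

```python
import numpy as np
from collections import Counter

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sign_series_entropy(x, scale, word_len=3):
    """Irregularity of the sign of increments of the coarse-grained series.

    Shannon entropy of `word_len`-symbol sign words is used as a simple
    stand-in for the irregularity measure; isolated outliers only flip a
    few signs, so the estimate is largely insensitive to them.
    """
    y = coarse_grain(np.asarray(x, float), scale)
    signs = np.sign(np.diff(y)) >= 0                     # binary sign sequence
    words = [tuple(signs[i:i + word_len]) for i in range(len(signs) - word_len + 1)]
    counts = np.array(list(Counter(words).values()), float)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(1)
rr = np.cumsum(rng.standard_normal(5000)) * 0.01 + 0.8   # toy R-R interval series (s)
rr[::200] += 0.5                                          # artificial "ectopic" outliers
print([round(sign_series_entropy(rr, s), 3) for s in (1, 2, 4, 8)])
```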

  4. An unsupervised method for quantifying the behavior of paired animals

    NASA Astrophysics Data System (ADS)

    Klibaite, Ugne; Berman, Gordon J.; Cande, Jessica; Stern, David L.; Shaevitz, Joshua W.

    2017-02-01

    Behaviors involving the interaction of multiple individuals are complex and frequently crucial for an animal’s survival. These interactions, ranging across sensory modalities, length scales, and time scales, are often subtle and difficult to characterize. Contextual effects on the frequency of behaviors become even more difficult to quantify when physical interaction between animals interferes with conventional data analysis, e.g. due to visual occlusion. We introduce a method for quantifying behavior in fruit fly interaction that combines high-throughput video acquisition and tracking of individuals with recent unsupervised methods for capturing an animal’s entire behavioral repertoire. We find behavioral differences between solitary flies and those paired with an individual of the opposite sex, identifying specific behaviors that are affected by social and spatial context. Our pipeline allows for a comprehensive description of the interaction between two individuals using unsupervised machine learning methods, and will be used to answer questions about the depth of complexity and variance in fruit fly courtship.

  5. Combinatorial depletion analysis to assemble the network architecture of the SAGA and ADA chromatin remodeling complexes.

    PubMed

    Lee, Kenneth K; Sardiu, Mihaela E; Swanson, Selene K; Gilmore, Joshua M; Torok, Michael; Grant, Patrick A; Florens, Laurence; Workman, Jerry L; Washburn, Michael P

    2011-07-05

    Despite the availability of several large-scale proteomics studies aiming to identify protein interactions on a global scale, little is known about how proteins interact and are organized within macromolecular complexes. Here, we describe a technique that consists of a combination of biochemistry approaches, quantitative proteomics and computational methods using wild-type and deletion strains to investigate the organization of proteins within macromolecular protein complexes. We applied this technique to determine the organization of two well-studied complexes, Spt-Ada-Gcn5 histone acetyltransferase (SAGA) and ADA, for which no comprehensive high-resolution structures exist. This approach revealed that SAGA/ADA is composed of five distinct functional modules, which can persist separately. Furthermore, we identified a novel subunit of the ADA complex, termed Ahc2, and characterized Sgf29 as an ADA family protein present in all Gcn5 histone acetyltransferase complexes. Finally, we propose a model for the architecture of the SAGA and ADA complexes, which predicts novel functional associations within the SAGA complex and provides mechanistic insights into phenotypical observations in SAGA mutants.

  6. Combinatorial depletion analysis to assemble the network architecture of the SAGA and ADA chromatin remodeling complexes

    PubMed Central

    Lee, Kenneth K; Sardiu, Mihaela E; Swanson, Selene K; Gilmore, Joshua M; Torok, Michael; Grant, Patrick A; Florens, Laurence; Workman, Jerry L; Washburn, Michael P

    2011-01-01

    Despite the availability of several large-scale proteomics studies aiming to identify protein interactions on a global scale, little is known about how proteins interact and are organized within macromolecular complexes. Here, we describe a technique that consists of a combination of biochemistry approaches, quantitative proteomics and computational methods using wild-type and deletion strains to investigate the organization of proteins within macromolecular protein complexes. We applied this technique to determine the organization of two well-studied complexes, Spt–Ada–Gcn5 histone acetyltransferase (SAGA) and ADA, for which no comprehensive high-resolution structures exist. This approach revealed that SAGA/ADA is composed of five distinct functional modules, which can persist separately. Furthermore, we identified a novel subunit of the ADA complex, termed Ahc2, and characterized Sgf29 as an ADA family protein present in all Gcn5 histone acetyltransferase complexes. Finally, we propose a model for the architecture of the SAGA and ADA complexes, which predicts novel functional associations within the SAGA complex and provides mechanistic insights into phenotypical observations in SAGA mutants. PMID:21734642

  7. Detection of Protein Complexes Based on Penalized Matrix Decomposition in a Sparse Protein–Protein Interaction Network.

    PubMed

    Cao, Buwen; Deng, Shuguang; Qin, Hua; Ding, Pingjian; Chen, Shaopeng; Li, Guanghui

    2018-06-15

    High-throughput technology has generated large-scale protein interaction data, which is crucial in our understanding of biological organisms. Many complex identification algorithms have been developed to determine protein complexes. However, these methods are only suitable for dense protein interaction networks, because their capabilities decrease rapidly when applied to sparse protein–protein interaction (PPI) networks. In this study, based on penalized matrix decomposition (PMD), a novel method of penalized matrix decomposition for the identification of protein complexes (i.e., PMDpc) was developed to detect protein complexes in the human protein interaction network. This method mainly consists of three steps. First, the adjacency matrix of the protein interaction network is normalized. Second, the normalized matrix is decomposed into three factor matrices. The PMDpc method can detect protein complexes in sparse PPI networks by imposing appropriate constraints on the factor matrices. Finally, the results of our method are compared with those of other methods in the human PPI network. Experimental results show that our method can not only outperform classical algorithms, such as CFinder, ClusterONE, RRW, HC-PIN, and PCE-FR, but can also achieve an ideal overall performance in terms of a composite score consisting of F-measure, accuracy (ACC), and the maximum matching ratio (MMR).
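
    The essential step of a penalized matrix decomposition is a power iteration in which the factor vectors are soft-thresholded to enforce sparsity. The sketch below illustrates that generic step on a toy adjacency matrix; it is not the PMDpc implementation of the paper, and the penalty value and the random network are placeholders.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pmd_rank1(A, penalty=0.1, n_iter=100):
    """One sparse factor pair (u, d, v) of A via penalized power iteration.

    A generic penalized matrix decomposition step, shown only to illustrate
    the idea of imposing sparsity constraints on factor matrices; it is not
    the PMDpc code of the paper.
    """
    v = np.ones(A.shape[1]) / np.sqrt(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(A @ v, penalty)
        u /= np.linalg.norm(u) + 1e-12
        v = soft_threshold(A.T @ u, penalty)
        v /= np.linalg.norm(v) + 1e-12
    d = u @ A @ v
    return u, d, v

# Toy normalized adjacency matrix of a small PPI-like network (placeholder values).
rng = np.random.default_rng(2)
adj = (rng.random((30, 30)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
u, d, v = pmd_rank1(adj, penalty=0.2)
print("members of the first putative complex:", np.flatnonzero(np.abs(u) > 1e-6))
```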

  8. Absorbing boundaries in numerical solutions of the time-dependent Schroedinger equation on a grid using exterior complex scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, F.; Ruiz, C.; Becker, A.

    We study the suppression of reflections in the numerical simulation of the time-dependent Schroedinger equation for strong-field problems on a grid using exterior complex scaling (ECS) as an absorbing boundary condition. It is shown that the ECS method can be applied in both the length and the velocity gauge as long as appropriate approximations are applied in the ECS transformation of the electron-field coupling. It is found that the ECS method improves the suppression of reflection as compared to the conventional masking function technique in typical simulations of atoms exposed to an intense laser pulse. Finally, we demonstrate the advantage of the ECS technique to avoid unphysical artifacts in the evaluation of high harmonic spectra.

  9. Characterising Dynamic Instability in High Water-Cut Oil-Water Flows Using High-Resolution Microwave Sensor Signals

    NASA Astrophysics Data System (ADS)

    Liu, Weixin; Jin, Ningde; Han, Yunfeng; Ma, Jing

    2018-06-01

    In the present study, a multi-scale entropy algorithm was used to characterise the complex flow phenomena of turbulent droplets in high water-cut oil-water two-phase flow. First, we compared multi-scale weighted permutation entropy (MWPE), multi-scale approximate entropy (MAE), multi-scale sample entropy (MSE) and multi-scale complexity measure (MCM) for typical nonlinear systems. The results show that MWPE exhibits satisfactory variability with scale and good noise resilience. Accordingly, we conducted an experiment of vertical upward oil-water two-phase flow with high water-cut and collected the signals of a high-resolution microwave resonant sensor, from which two indices, the entropy rate and the mean value of MWPE, were extracted. In addition, the effects of total flow rate and water-cut on these two indices were analysed. Our results show that MWPE is an effective method to uncover the dynamic instability of oil-water two-phase flow with high water-cut.
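
    A rough sketch of how a multi-scale weighted permutation entropy curve can be computed is given below. The variance-based weighting and the coarse-graining follow common definitions in the literature and are assumptions here, not the authors' exact implementation; the signal is a synthetic placeholder for the microwave sensor data.

```python
import numpy as np
from itertools import permutations
from math import factorial

def weighted_permutation_entropy(x, order=3, delay=1):
    """Weighted permutation entropy with local-variance weights (a common
    definition, used here as a sketch rather than the authors' code)."""
    x = np.asarray(x, float)
    n = len(x) - (order - 1) * delay
    patterns = {p: i for i, p in enumerate(permutations(range(order)))}
    probs = np.zeros(factorial(order))
    for i in range(n):
        window = x[i:i + order * delay:delay]
        probs[patterns[tuple(np.argsort(window))]] += window.var()  # weight by variance
    probs = probs[probs > 0] / probs.sum()
    return -(probs * np.log(probs)).sum() / np.log(factorial(order))

def multiscale(x, measure, scales=range(1, 11)):
    """Apply `measure` to coarse-grained versions of x at each scale."""
    cg = lambda x, s: x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
    return [measure(cg(np.asarray(x, float), s)) for s in scales]

rng = np.random.default_rng(3)
signal = np.cumsum(rng.standard_normal(20000))    # placeholder for the sensor signal
mwpe = multiscale(signal, weighted_permutation_entropy)
print("MWPE at scales 1-10:", [round(v, 3) for v in mwpe])
print("mean MWPE:", round(float(np.mean(mwpe)), 3))
```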

  10. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.

  11. Nanoindentation methods for wood-adhesive bond lines

    Treesearch

    Joseph E. Jakes; Donald S. Stone; Charles R. Frihart

    2008-01-01

    As an adherend, wood is structurally, chemically, and mechanically more complex than metals or plastics, and the largest source of this complexity is wood’s chemical and mechanical inhomogeneities. Understanding and predicting the performance of adhesively bonded wood requires knowledge of the interactions occurring at length scales ranging from the macro down to the...

  12. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  13. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939

  14. Time-averaged aerodynamic loads on the vane sets of the 40- by 80-foot and 80- by 120-foot wind tunnel complex

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Olson, Lawrence E.; Peterson, Randall L.; Yamauchi, Gloria K.; Ross, James C.; Norman, Thomas R.

    1987-01-01

    Time-averaged aerodynamic loads are estimated for each of the vane sets in the National Full-Scale Aerodynamic Complex (NFAC). The methods used to compute global and local loads are presented. Experimental inputs used to calculate these loads are based primarily on data obtained from tests conducted in the NFAC 1/10-Scale Vane-Set Test Facility and from tests conducted in the NFAC 1/50-Scale Facility. For those vane sets located directly downstream of either the 40- by 80-ft test section or the 80- by 120-ft test section, aerodynamic loads caused by the impingement of model-generated wake vortices and model-generated jet and propeller wakes are also estimated.

  15. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which unlike RANS simulations exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, "non-intrusive" LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  16. The Forest Canopy as a Temporally and Spatially Dynamic Ecosystem: Preliminary Results of Biomass Scaling and Habitat Use from a Case Study in Large Eastern White Pines (Pinus Strobus)

    NASA Astrophysics Data System (ADS)

    Martin, J.; Laughlin, M. M.; Olson, E.

    2017-12-01

    Canopy processes can be viewed at many scales and through many lenses. Fundamentally, we may wish to start by treating each canopy as a unique surface, an ecosystem unto itself. By doing so, we can make some important observations that greatly influence our ability to scale canopies to landscape, regional and global scales. This work summarizes an ongoing endeavor to quantify various canopy level processes on individual old and large Eastern white pine trees (Pinus strobus). Our work shows that these canopies contain complex structures that vary with height and as the tree ages. This phenomenon complicates the allometric scaling of these large trees using standard methods, but detailed measurements from within the canopy provided a method to constrain scaling equations. We also quantified how these canopies change and respond to canopy disturbance, and documented disproportionate variation of growth compared to the lower stem as the trees develop. Additionally, the complex shape and surface area allow these canopies to act like ecosystems themselves, despite being relatively young and more commonplace when compared to the more notable canopies of the tropics and the Pacific Northwestern US. The white pines of these relatively simple, near-boreal forests appear to house various species including many lichens. The lichen species can cover significant portions of the canopy surface area (which may be only 25 to 50 years old) and are a sizable source of potential nitrogen additions to the soils below, as well as a modulator of hydrologic cycles by holding significant amounts of precipitation. Lastly, the combined complex surface area and focused verticality offer important habitat to numerous animal species, some of which are quite surprising.

  17. Kernel methods for large-scale genomic data analysis

    PubMed Central

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. These methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, and help reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743
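
    As a small, hedged illustration of the kind of kernel method discussed in this review, the sketch below fits a cross-validated kernel ridge regression to synthetic SNP-like data with a non-additive effect; the data, kernel choice, and hyperparameter grid are assumptions for demonstration only.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)

# Hypothetical genotype matrix: 200 individuals x 1000 SNPs coded 0/1/2,
# with a phenotype driven by a few interacting variants plus noise.
X = rng.integers(0, 3, size=(200, 1000)).astype(float)
y = X[:, 0] * X[:, 1] - X[:, 2] + rng.standard_normal(200)

# An RBF kernel lets the model pick up non-additive (epistatic-like) effects
# without enumerating interactions explicitly; hyperparameters are chosen by
# cross-validation rather than fixed a priori.
model = GridSearchCV(
    KernelRidge(kernel="rbf"),
    param_grid={"alpha": [0.1, 1.0, 10.0], "gamma": [1e-4, 1e-3, 1e-2]},
    cv=5,
)
model.fit(X, y)
print("best params:", model.best_params_, "CV R^2:", round(model.best_score_, 3))
```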

  18. Advances in Multi-Sensor Scanning and Visualization of Complex Plants: the Utmost Case of a Reactor Building

    NASA Astrophysics Data System (ADS)

    Hullo, J.-F.; Thibault, G.; Boucheny, C.

    2015-02-01

    In a context of increased maintenance operations and workers generational renewal, a nuclear owner and operator like Electricité de France (EDF) is interested in the scaling up of tools and methods of "as-built virtual reality" for larger buildings and wider audiences. However, acquisition and sharing of as-built data on a large scale (large and complex multi-floored buildings) challenge current scientific and technical capacities. In this paper, we first present a state of the art of scanning tools and methods for industrial plants with very complex architecture. Then, we introduce the inner characteristics of the multi-sensor scanning and visualization of the interior of the most complex building of a power plant: a nuclear reactor building. We introduce several developments that made possible a first complete survey of such a large building, from acquisition, processing and fusion of multiple data sources (3D laser scans, total-station survey, RGB panoramic, 2D floor plans, 3D CAD as-built models). In addition, we present the concepts of a smart application developed for the painless exploration of the whole dataset. The goal of this application is to help professionals, unfamiliar with the manipulation of such datasets, to take into account spatial constraints induced by the building complexity while preparing maintenance operations. Finally, we discuss the main feedbacks of this large experiment, the remaining issues for the generalization of such large scale surveys and the future technical and scientific challenges in the field of industrial "virtual reality".

  19. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-Optic Gyroscope (FOG) scale factor nonlinear error will result in errors in a Strapdown Inertial Navigation System (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method is proposed in this paper based on piecewise curve fitting of the FOG output. Firstly, the reasons which can result in FOG scale factor error are introduced and a definition of the degree of nonlinearity is provided. Then we introduce a method to divide the output range of the FOG into several small pieces, and curve fitting is performed in each piece to obtain the scale factor parameters. Different scale factor parameters are used in different pieces to improve the FOG output precision. These parameters are identified by using a three-axis turntable, and the nonlinear error of the FOG scale factor can thereby be reduced. Finally, a three-axis swing experiment of the SINS verifies that the proposed method can reduce attitude output errors of the SINS by compensating the nonlinear error of the FOG scale factor and improve the precision of navigation. The results of the experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in inertial technology based on FOG to improve precision.
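
    The compensation scheme can be illustrated with a short sketch: divide the FOG output range into pieces, fit a scale-factor polynomial to calibration data within each piece, and invert a raw output through the fit of the piece it falls in. The breakpoints, polynomial degree, and calibration data below are placeholders, not the values identified on the three-axis turntable.

```python
import numpy as np

# Placeholder calibration data: turntable rate (deg/s) vs. FOG raw output,
# with a mildly nonlinear scale factor.
rate_true = np.linspace(-200, 200, 81)
fog_out = 1000 * rate_true * (1 + 5e-4 * np.tanh(rate_true / 80))

# Divide the output range into pieces and fit a polynomial per piece (assumed 4 pieces).
edges = np.linspace(fog_out.min(), fog_out.max(), 5)
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (fog_out >= lo) & (fog_out <= hi)
    coeffs.append(np.polyfit(fog_out[m], rate_true[m], deg=2))

def compensate(out):
    """Map raw FOG output back to angular rate using the piecewise fit."""
    i = int(np.clip(np.searchsorted(edges, out) - 1, 0, len(coeffs) - 1))
    return np.polyval(coeffs[i], out)

test = 1000 * 123.4 * (1 + 5e-4 * np.tanh(123.4 / 80))
print("recovered rate:", round(float(compensate(test)), 3), "deg/s (true 123.4)")
```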

  20. Digital Reef Rugosity Estimates Coral Reef Habitat Complexity

    PubMed Central

    Dustan, Phillip; Doherty, Orla; Pardede, Shinta

    2013-01-01

    Ecological habitats with greater structural complexity contain more species due to increased niche diversity. This is especially apparent on coral reefs where individual coral colonies aggregate to give a reef its morphology, species zonation, and three dimensionality. Structural complexity is classically measured with a reef rugosity index, which is the ratio of a straight line transect to the distance a flexible chain of equal length travels when draped over the reef substrate; yet, other techniques from visual categories to remote sensing have been used to characterize structural complexity at scales from microhabitats to reefscapes. Reef-scale methods either lack quantitative precision or are too time consuming to be routinely practical, while remotely sensed indices are mismatched to the finer scale morphology of coral colonies and reef habitats. In this communication a new digital technique, Digital Reef Rugosity (DRR) is described which utilizes a self-contained water level gauge enabling a diver to quickly and accurately characterize rugosity with non-invasive millimeter scale measurements of coral reef surface height at decimeter intervals along meter scale transects. The precise measurements require very little post-processing and are easily imported into a spreadsheet for statistical analyses and modeling. To assess its applicability we investigated the relationship between DRR and fish community structure at four coral reef sites on Menjangan Island off the northwest corner of Bali, Indonesia and one on mainland Bali to the west of Menjangan Island; our findings show a positive relationship between DRR and fish diversity. Since structural complexity drives key ecological processes on coral reefs, we consider that DRR may become a useful quantitative community-level descriptor to characterize reef complexity. PMID:23437380

  1. Digital reef rugosity estimates coral reef habitat complexity.

    PubMed

    Dustan, Phillip; Doherty, Orla; Pardede, Shinta

    2013-01-01

    Ecological habitats with greater structural complexity contain more species due to increased niche diversity. This is especially apparent on coral reefs where individual coral colonies aggregate to give a reef its morphology, species zonation, and three dimensionality. Structural complexity is classically measured with a reef rugosity index, which is the ratio of a straight line transect to the distance a flexible chain of equal length travels when draped over the reef substrate; yet, other techniques from visual categories to remote sensing have been used to characterize structural complexity at scales from microhabitats to reefscapes. Reef-scale methods either lack quantitative precision or are too time consuming to be routinely practical, while remotely sensed indices are mismatched to the finer scale morphology of coral colonies and reef habitats. In this communication a new digital technique, Digital Reef Rugosity (DRR) is described which utilizes a self-contained water level gauge enabling a diver to quickly and accurately characterize rugosity with non-invasive millimeter scale measurements of coral reef surface height at decimeter intervals along meter scale transects. The precise measurements require very little post-processing and are easily imported into a spreadsheet for statistical analyses and modeling. To assess its applicability we investigated the relationship between DRR and fish community structure at four coral reef sites on Menjangan Island off the northwest corner of Bali, Indonesia and one on mainland Bali to the west of Menjangan Island; our findings show a positive relationship between DRR and fish diversity. Since structural complexity drives key ecological processes on coral reefs, we consider that DRR may become a useful quantitative community-level descriptor to characterize reef complexity.

  2. A fast button surface defects detection method based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran

    2018-01-01

    Considering the complexity of the button surface texture and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). A CNN can learn the essential features through training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures and defects. Firstly, we obtain the normalized button region and then use an HOG-SVM method to identify the front and back side of the button. Finally, a convolutional neural network is developed to recognize the defects. To detect subtle defects, we propose a network structure with multiple feature-channel inputs. To deal with defects of different scales, we adopt a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all kinds of defects that occurred, including dents, cracks, stains, holes, wrong paint and unevenness. The detection rate exceeds 96%, which is much better than traditional methods based on SVM and methods based on template matching. Our method can reach a speed of 5 fps on a DSP-based smart camera with a 600 MHz clock frequency.

  3. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the size of calculation cells should be small and the calculation should be transient with small time step size. For full scale systems, these requirements lead to very large meshes and very long calculation times, so that the simulation in practice is difficult. This study investigates the requirements of cell size and the time step size for accurate simulations, and the filtering effects caused by coarser mesh and longer time step. A modeling study of a full scale CFB furnace is presented and the model results are compared with experimental data.

  4. Tracking vortices in superconductors: Extracting singularities from a discretized complex scalar field evolving in time

    DOE PAGES

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom; ...

    2016-02-19

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. In addition, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of superconducting materials can be understood in greater detail than previously possible.
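
    For a single time step in two dimensions, the defect-extraction idea reduces to checking the phase winding of the complex order parameter around each grid plaquette. The sketch below illustrates that simplified 2D case on a synthetic field containing one vortex; it is not the authors' space-time graph construction.

```python
import numpy as np

def find_vortices(psi):
    """Return (i, j) indices of grid plaquettes whose phase winding is about ±2π.

    `psi` is a 2D complex array (one time step of the order parameter); this is
    a simplified single-time-step 2D illustration, not the full space-time method.
    """
    phase = np.angle(psi)

    def d(a, b):
        # Phase difference wrapped into (-π, π].
        return np.angle(np.exp(1j * (b - a)))

    # Accumulate wrapped phase differences counter-clockwise around each plaquette.
    w = (d(phase[:-1, :-1], phase[:-1, 1:]) +
         d(phase[:-1, 1:],  phase[1:, 1:]) +
         d(phase[1:, 1:],   phase[1:, :-1]) +
         d(phase[1:, :-1],  phase[:-1, :-1]))
    return np.argwhere(np.abs(w) > np.pi)       # winding near ±2π marks a defect

# Toy field with a single vortex near the centre of a 64x64 grid.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64), indexing="ij")
psi = (x + 1j * y) / np.sqrt(x**2 + y**2 + 1e-9)
print("defect plaquettes:", find_vortices(psi))
```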

  5. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include: creation of analysis models for the Aerodynamic discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  6. Reproducing the scaling laws for Slow and Fast ruptures

    NASA Astrophysics Data System (ADS)

    Romanet, Pierre; Bhat, Harsha; Madariaga, Raúl

    2017-04-01

    Modelling the long-term behaviour of large, natural fault systems that are geometrically complex is a challenging problem. This is why most research so far has concentrated on modelling the long-term response of a single planar fault system. To overcome this limitation, we appeal to a novel algorithm called the Fast Multipole Method, which was developed in the context of modelling gravitational N-body problems. This method allows us to decrease the computational complexity of the calculation from O(N²) to O(N log N), N being the number of discretised elements on the fault. We then adapted this method to model the long-term quasi-dynamic response of two faults, with a step-over-like geometry, that are governed by rate-and-state friction laws. We assume the faults have spatially uniform rate-weakening friction. The results show that when stress interaction between faults is accounted for, a complex spectrum of slip (including slow-slip events, dynamic ruptures and partial ruptures) emerges naturally. The simulated slow-slip and dynamic events follow the scaling laws inferred by Ide et al. (2007), i.e. M ∝ T for slow-slip events and M ∝ T² (in 2D) for dynamic events.
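
    The reported scaling laws can be checked on an event catalogue by fitting the moment-duration exponent on a log-log plot, as in the sketch below; the catalogues here are synthetic placeholders with M ∝ T and M ∝ T² behaviour built in, not output of the simulations.

```python
import numpy as np

def moment_duration_exponent(moments, durations):
    """Least-squares slope of log M versus log T (i.e., the exponent in M ∝ T^slope)."""
    return np.polyfit(np.log(durations), np.log(moments), 1)[0]

rng = np.random.default_rng(5)

# Placeholder catalogues: slow events with M ∝ T, fast (2D) events with M ∝ T^2,
# each with lognormal scatter standing in for simulation variability.
T_slow = np.logspace(2, 6, 50)
M_slow = 1e10 * T_slow * rng.lognormal(0.0, 0.3, 50)
T_fast = np.logspace(-1, 2, 50)
M_fast = 1e14 * T_fast**2 * rng.lognormal(0.0, 0.3, 50)

print("slow-slip exponent ~", round(moment_duration_exponent(M_slow, T_slow), 2))
print("dynamic-rupture exponent ~", round(moment_duration_exponent(M_fast, T_fast), 2))
```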

  7. Participatory approaches to understanding practices of flood management across borders

    NASA Astrophysics Data System (ADS)

    Bracken, L. J.; Forrester, J.; Oughton, E. A.; Cinderby, S.; Donaldson, A.; Anness, L.; Passmore, D.

    2012-04-01

    The aim of this paper is to outline and present initial results from a study designed to identify principles of and practices for adaptive co-management strategies for resilience to flooding in borderlands using participatory methods. Borderlands are the complex and sometimes undefined spaces existing at the interface of different territories; the concept draws attention towards messy connections and disconnections (Strathern 2004; Sassen 2006). For this project the borderlands concerned are those between professional and lay knowledge, between responsible agencies, and between one nation and another. Research was focused on the River Tweed catchment, located on the Scottish-English border. This catchment is subject to complex environmental designations and rural development regimes that make integrated management of the whole catchment difficult. A multi-method approach was developed using semi-structured interviews, Q methodology and participatory GIS in order to capture wide-ranging practices for managing flooding, the judgements behind these practices and to 'scale up' participation in the study. Professionals and local experts were involved in the research. The methodology generated a useful set of options for flood management, with research outputs easily understood by key management organisations and the wider public alike. There was a wide endorsement of alternative flood management solutions from both managers and local experts. The role of location was particularly important for ensuring communication and data sharing between flood managers from different organisations and wider-ranging stakeholders. There were complex issues around scale: both the mismatch between communities and the evidence of flooding, and the mismatch between governance and the scale of intervention for natural flood management. The multi-method approach was essential in capturing practice and the complexities around the governance of flooding. The involvement of key flood management organisations was integral to making the research of relevance to professionals.

  8. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
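
    As a hedged illustration of the regularization approach described in the article, the sketch below selects items for a criterion-keyed scale with an L1-penalized logistic regression whose penalty is chosen by cross-validation, so that model complexity is tuned against estimated expected prediction error. The item pool, outcome, and settings are synthetic assumptions, not the cohort data analyzed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(6)

# Hypothetical item pool: 500 respondents x 300 Likert-type items, with a binary
# outcome (standing in for a mortality indicator) driven by a small item subset.
items = rng.integers(1, 6, size=(500, 300)).astype(float)
logit = 0.8 * items[:, 0] - 0.6 * items[:, 1] + 0.5 * items[:, 2] - 1.5
outcome = rng.random(500) < 1 / (1 + np.exp(-logit))

# L1-regularized logistic regression with the penalty strength chosen by 5-fold
# cross-validation: items with zero coefficients are dropped from the scale.
model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
model.fit(items, outcome)
kept = np.flatnonzero(model.coef_[0])
print("items retained for the criterion-keyed scale:", kept)
```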

  9. Directed formation of micro- and nanoscale patterns of functional light-harvesting LH2 complexes.

    PubMed

    Reynolds, Nicholas P; Janusz, Stefan; Escalante-Marun, Maryana; Timney, John; Ducker, Robert E; Olsen, John D; Otto, Cees; Subramaniam, Vinod; Leggett, Graham J; Hunter, C Neil

    2007-11-28

    The precision placement of the desired protein components on a suitable substrate is an essential prelude to any hybrid "biochip" device, but a second and equally important condition must also be met: the retention of full biological activity. Here we demonstrate the selective binding of an optically active membrane protein, the light-harvesting LH2 complex from Rhodobacter sphaeroides, to patterned self-assembled monolayers at the micron scale and the fabrication of nanometer-scale patterns of these molecules using near-field photolithographic methods. In contrast to plasma proteins, which are reversibly adsorbed on many surfaces, the LH2 complex is readily patterned simply by spatial control of surface polarity. Near-field photolithography has yielded rows of light-harvesting complexes only 98 nm wide. Retention of the native optical properties of patterned LH2 molecules was demonstrated using in situ fluorescence emission spectroscopy.

  10. Time-Series Analysis of Embodied Interaction: Movement Variability and Complexity Matching As Dyadic Properties

    PubMed Central

    Zapata-Fonseca, Leonardo; Dotov, Dobromir; Fossion, Ruben; Froese, Tom

    2016-01-01

    There is a growing consensus that a fuller understanding of social cognition depends on more systematic studies of real-time social interaction. Such studies require methods that can deal with the complex dynamics taking place at multiple interdependent temporal and spatial scales, spanning sub-personal, personal, and dyadic levels of analysis. We demonstrate the value of adopting an extended multi-scale approach by re-analyzing movement time-series generated in a study of embodied dyadic interaction in a minimal virtual reality environment (a perceptual crossing experiment). Reduced movement variability revealed an interdependence between social awareness and social coordination that cannot be accounted for by either subjective or objective factors alone: it picks out interactions in which subjective and objective conditions are convergent (i.e., elevated coordination is perceived as clearly social, and impaired coordination is perceived as socially ambiguous). This finding is consistent with the claim that interpersonal interaction can be partially constitutive of direct social perception. Clustering statistics (Allan Factor) of salient events revealed fractal scaling. Complexity matching defined as the similarity between these scaling laws was significantly more pronounced in pairs of participants as compared to surrogate dyads. This further highlights the multi-scale and distributed character of social interaction and extends previous complexity matching results from dyadic conversation to non-verbal social interaction dynamics. Trials with successful joint interaction were also associated with an increase in local coordination. Consequently, a local coordination pattern emerges on the background of complex dyadic interactions in the PCE task and makes joint successful performance possible. PMID:28018274

  11. User group attitudes toward forest management treatments on the Shawnee National Forest: application of a photo-evaluation technique

    Treesearch

    Jonathan M. Cohen; Jean C. Mangun; Mae A. Davenport; Andrew D. Carver

    2008-01-01

    Diverse public opinions, competing management goals, and polarized interest groups combine with problems of scale to create a complex management arena for managers in the Central Hardwood Forest region. A mixed-methods approach that incorporated quantitative analysis of data from a photo evaluation-attitude scale survey instrument was used to assess attitudes toward...

  12. Structural and spectroscopic investigation of the N-methylformamide-water (NMF···3H2O) complex

    NASA Astrophysics Data System (ADS)

    Hammami, F.; Ghalla, H.; Chebaane, A.; Nasr, S.

    2015-01-01

    In this work, theoretical studies on the structure, molecular properties, hydrogen bonding, and vibrational spectra of the N-methylformamide-water (NMF···3H2O) complex are presented. The molecular geometry was optimised by using Hartree-Fock (HF), second-order Møller-Plesset (MP2), and density functional theory methods with different basis sets. The harmonic vibrational frequencies were computed by using the B3LYP method with the 6-311++G(d,p) basis set and then scaled with a suitable scale factor to yield good agreement with the observed values. The temperature dependence of various thermodynamic functions (heat capacity, entropy, and enthalpy changes) was also studied. A detailed analysis of the nature of the hydrogen bonding, using natural bond orbital (NBO) analysis and the topological atoms-in-molecules theory, is also reported.

  13. An evaluation of methods for scaling aircraft noise perception

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1971-01-01

    One hundred and twenty recorded sounds, including jets, turboprops, piston engined aircraft and helicopters were rated by a panel of subjects in a paired comparison test. The results were analyzed to evaluate a number of noise rating procedures in terms of their ability to accurately estimate both relative and absolute perceived noise levels. It was found that the complex procedures developed by Stevens, Zwicker and Kryter are superior to other scales. The main advantage of these methods over the more convenient weighted sound pressure level scales lies in their ability to cope with signals over a wide range of bandwidth. However, Stevens' loudness level scale and the perceived noise level scale both overestimate the growth of perceived level with intensity because of an apparent deficiency in the band level summation rule. A simple correction is proposed which will enable these scales to properly account for the experimental observations.

  14. Processor farming in two-level analysis of historical bridge

    NASA Astrophysics Data System (ADS)

    Krejčí, T.; Kruis, J.; Koudelka, T.; Šejnoha, M.

    2017-11-01

    This contribution presents a processor farming method in connection with a multi-scale analysis. In this method, each macroscopic integration point or each finite element is connected with a certain mesoscopic problem represented by an appropriate representative volume element (RVE). The solution of a meso-scale problem then provides the effective parameters needed on the macro-scale. Such an analysis is suitable for parallel computing because the meso-scale problems can be distributed among many processors. The application of the processor farming method to a real-world masonry structure is illustrated by an analysis of Charles Bridge in Prague. The three-dimensional numerical model simulates the coupled heat and moisture transfer of one half of arch No. 3, and it is part of a complex hygro-thermo-mechanical analysis which has been developed to determine the influence of climatic loading on the current state of the bridge.
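
    The processor-farming pattern itself is simple: the macro-scale integration points are independent, so their RVE problems can be farmed out to workers and the effective parameters gathered back. The sketch below shows that pattern with Python's multiprocessing; the RVE "solver" is a made-up closure standing in for the actual finite-element meso-scale solve.

```python
import numpy as np
from multiprocessing import Pool

def solve_rve(macro_state):
    """Placeholder meso-scale (RVE) solve for one macro-scale integration point.

    Returns effective conductivity and capacity; in a real analysis this would
    be a full finite-element solve on the representative volume element.
    """
    temperature, moisture = macro_state
    k_eff = 1.5 + 0.01 * temperature - 0.3 * moisture     # made-up closure
    c_eff = 2.0e6 * (1.0 + 0.1 * moisture)
    return k_eff, c_eff

if __name__ == "__main__":
    # Macro-scale states (temperature, moisture) at each integration point.
    rng = np.random.default_rng(7)
    macro_states = list(zip(rng.uniform(5, 25, 1000), rng.uniform(0.1, 0.9, 1000)))

    # Farm the independent RVE problems out to worker processes.
    with Pool(processes=4) as pool:
        effective = pool.map(solve_rve, macro_states)

    k_eff, c_eff = np.array(effective).T
    print("mean effective conductivity:", round(float(k_eff.mean()), 3))
```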

  15. Molecular Precision at Micrometer Length Scales: Hierarchical Assembly of DNA-Protein Nanostructures.

    PubMed

    Schiffels, Daniel; Szalai, Veronika A; Liddle, J Alexander

    2017-07-25

    Robust self-assembly across length scales is a ubiquitous feature of biological systems but remains challenging for synthetic structures. Taking a cue from biology, where disparate molecules work together to produce large, functional assemblies, we demonstrate how to engineer microscale structures with nanoscale features: Our self-assembly approach begins by using DNA polymerase to controllably create double-stranded DNA (dsDNA) sections on a single-stranded template. The single-stranded DNA (ssDNA) sections are then folded into a mechanically flexible skeleton by the origami method. This process simultaneously shapes the structure at the nanoscale and directs the large-scale geometry. The DNA skeleton guides the assembly of RecA protein filaments, which provides rigidity at the micrometer scale. We use our modular design strategy to assemble tetrahedral, rectangular, and linear shapes of defined dimensions. This method enables the robust construction of complex assemblies, greatly extending the range of DNA-based self-assembly methods.

  16. Coarse-grained molecular dynamics simulations for giant protein-DNA complexes

    NASA Astrophysics Data System (ADS)

    Takada, Shoji

    Biomolecules are highly hierarchic and intrinsically flexible. Thus, computational modeling calls for multi-scale methodologies. We have been developing a coarse-grained biomolecular model where on-average 10-20 atoms are grouped into one coarse-grained (CG) particle. Interactions among CG particles are tuned based on atomistic interactions and the fluctuation matching algorithm. CG molecular dynamics methods enable us to simulate much longer time scale motions of much larger molecular systems than fully atomistic models. After broad sampling of structures with CG models, we can easily reconstruct atomistic models, from which one can continue conventional molecular dynamics simulations if desired. Here, we describe our CG modeling methodology for protein-DNA complexes, together with various biological applications, such as the DNA duplication initiation complex, model chromatins, and transcription factor dynamics on chromatin-like environment.

  17. A Separable Insertion Method to Calculate Atomic and Molecular Resonances on a FE-DVR Grid using Exterior Complex Scaling

    NASA Astrophysics Data System (ADS)

    Abeln, Brant Anthony

    The study of metastable electronic resonances, anion or neutral states of finite lifetime, in molecules is an important area of research where currently no theoretical technique is generally applicable. The role of theory is to calculate both the position and the width, which is proportional to the inverse of the lifetime, of these resonances and how they vary with respect to nuclear geometry in order to generate potential energy surfaces. These surfaces are the basis of time-dependent models of the molecular dynamics where the system moves towards vibrational excitation or fragmentation. Three fundamental electronic processes that can be modeled this way are dissociative electron attachment, vibrational excitation through electron impact, and autoionization. Currently, experimental investigation into these processes is being performed on polyatomic molecules while theoreticians continue their fifty-year-old search for robust methods to calculate them. The separable insertion method, investigated in this thesis, seeks to tackle the problem of calculating metastable resonances by using existing quantum chemistry tools along with a grid-based method employing exterior complex scaling (ECS). Modern quantum chemistry methods are extremely efficient at calculating ground and (bound) excited electronic states of atoms and molecules by utilizing Gaussian basis functions. These functions provide both a numerically fast and analytic solution to the necessary two-electron, six-dimensional integrals required in structure calculations. However, these computer programs, based on analytic Gaussian basis sets, cannot construct solutions that are not square-integrable, such as resonance wavefunctions. ECS, on the other hand, can formally calculate resonance solutions by rotating the asymptotic electronic coordinates into the complex plane. The complex Siegert energies for resonances, E_res = E_R − iΓ/2, where E_R is the real-valued position of the resonance and Γ is the width of the resonance, can be found directly as isolated poles in the complex energy plane. Unlike straight complex scaling, ECS on the electronic coordinates overcomes the non-analytic behavior of the nuclear attraction potential as a function of complex [special characters omitted], where the sum is over each nucleus in the molecular system. Discouragingly, the Gaussian basis functions, which are computationally well-suited for bound electronic structure, fail at forming an effective basis set for ECS due to the derivative discontinuity generated by the complex coordinate rotation and the piecewise-defined contour. This thesis explores methods for implementing ECS indirectly without losing the numerical simplicity and power of Gaussian basis sets. The separable insertion method takes advantage of existing software by constructing an N²-term separable potential of the target system using Gaussian functions, which is then inserted into a finite-element discrete variable representation (FE-DVR) grid that implements ECS. This work reports an exhaustive investigation into this approach for calculating resonances. The thesis shows that this technique is successful at describing an anion shape resonance of a closed-shell atom or molecule in the static-exchange approximation. The method is applied to the 2P Be-, 2Πg N2- and 2Πu CO2- shape resonances to calculate their complex Siegert energies. Additionally, many details on the exact construction of the separable potential and of the expansion basis are explored. Future work considers methods for faster convergence of the resonance energy, moving beyond the static-exchange approximation, and applying this technique to polyatomic systems of interest.

  18. Complex dewetting scenarios of ultrathin silicon films for large-scale nanoarchitectures

    PubMed Central

    Naffouti, Meher; Backofen, Rainer; Salvalaglio, Marco; Bottein, Thomas; Lodari, Mario; Voigt, Axel; David, Thomas; Benkouider, Abdelmalek; Fraj, Ibtissem; Favre, Luc; Ronda, Antoine; Berbezier, Isabelle; Grosso, David; Abbarchi, Marco; Bollani, Monica

    2017-01-01

    Dewetting is a ubiquitous phenomenon in nature; many different thin films of organic and inorganic substances (such as liquids, polymers, metals, and semiconductors) share this shape instability driven by surface tension and mass transport. Via templated solid-state dewetting, we frame complex nanoarchitectures of monocrystalline silicon on insulator with unprecedented precision and reproducibility over large scales. Phase-field simulations reveal the dominant role of surface diffusion as a driving force for dewetting and provide a predictive tool to further engineer this hybrid top-down/bottom-up self-assembly method. Our results demonstrate that patches of thin monocrystalline films of metals and semiconductors share the same dewetting dynamics. We also prove the potential of our method by fabricating nanotransfer molding of metal oxide xerogels on silicon and glass substrates. This method allows the novel possibility of transferring these Si-based patterns on different materials, which do not usually undergo dewetting, offering great potential also for microfluidic or sensing applications. PMID:29296680

  19. Complex dewetting scenarios of ultrathin silicon films for large-scale nanoarchitectures.

    PubMed

    Naffouti, Meher; Backofen, Rainer; Salvalaglio, Marco; Bottein, Thomas; Lodari, Mario; Voigt, Axel; David, Thomas; Benkouider, Abdelmalek; Fraj, Ibtissem; Favre, Luc; Ronda, Antoine; Berbezier, Isabelle; Grosso, David; Abbarchi, Marco; Bollani, Monica

    2017-11-01

    Dewetting is a ubiquitous phenomenon in nature; many different thin films of organic and inorganic substances (such as liquids, polymers, metals, and semiconductors) share this shape instability driven by surface tension and mass transport. Via templated solid-state dewetting, we frame complex nanoarchitectures of monocrystalline silicon on insulator with unprecedented precision and reproducibility over large scales. Phase-field simulations reveal the dominant role of surface diffusion as a driving force for dewetting and provide a predictive tool to further engineer this hybrid top-down/bottom-up self-assembly method. Our results demonstrate that patches of thin monocrystalline films of metals and semiconductors share the same dewetting dynamics. We also prove the potential of our method by fabricating nanotransfer molding of metal oxide xerogels on silicon and glass substrates. This method allows the novel possibility of transferring these Si-based patterns on different materials, which do not usually undergo dewetting, offering great potential also for microfluidic or sensing applications.

  20. What qualitative research can contribute to a randomized controlled trial of a complex community intervention.

    PubMed

    Nelson, Geoffrey; Macnaughton, Eric; Goering, Paula

    2015-11-01

    Using the case of a large-scale, multi-site Canadian Housing First research demonstration project for homeless people with mental illness, At Home/Chez Soi, we illustrate the value of qualitative methods in a randomized controlled trial (RCT) of a complex community intervention. We argue that quantitative RCT research can neither capture the complexity nor tell the full story of a complex community intervention. We conceptualize complex community interventions as having multiple phases and dimensions that require both RCT and qualitative research components. Rather than assume that qualitative research and RCTs are incommensurate, a more pragmatic mixed methods approach was used, which included using both qualitative and quantitative methods to understand program implementation and outcomes. At the same time, qualitative research was used to examine aspects of the intervention that could not be understood through the RCT, such as its conception, planning, sustainability, and policy impacts. Through this example, we show how qualitative research can tell a more complete story about complex community interventions. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Cardiac interbeat interval dynamics from childhood to senescence : comparison of conventional and new measures based on fractals and chaos theory

    NASA Technical Reports Server (NTRS)

    Pikkujamsa, S. M.; Makikallio, T. H.; Sourander, L. B.; Raiha, I. J.; Puukka, P.; Skytta, J.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.

    1999-01-01

    BACKGROUND: New methods of R-R interval variability based on fractal scaling and nonlinear dynamics ("chaos theory") may give new insights into heart rate dynamics. The aims of this study were to (1) systematically characterize and quantify the effects of aging from early childhood to advanced age on 24-hour heart rate dynamics in healthy subjects; (2) compare age-related changes in conventional time- and frequency-domain measures with changes in newly derived measures based on fractal scaling and complexity (chaos) theory; and (3) further test the hypothesis that there is loss of complexity and altered fractal scaling of heart rate dynamics with advanced age. METHODS AND RESULTS: The relationship between age and cardiac interbeat (R-R) interval dynamics from childhood to senescence was studied in 114 healthy subjects (age range, 1 to 82 years) by measurement of the slope, beta, of the power-law regression line (log power-log frequency) of R-R interval variability (10(-4) to 10(-2) Hz), approximate entropy (ApEn), short-term (alpha(1)) and intermediate-term (alpha(2)) fractal scaling exponents obtained by detrended fluctuation analysis, and traditional time- and frequency-domain measures from 24-hour ECG recordings. Compared with young adults (<40 years old, n=29), children (<15 years old, n=27) showed similar complexity (ApEn) and fractal correlation properties (alpha(1), alpha(2), beta) of R-R interval dynamics despite lower spectral and time-domain measures. Progressive loss of complexity (decreased ApEn, r=-0.69, P<0.001) and alterations of long-term fractal-like heart rate behavior (increased alpha(2), r=0.63, decreased beta, r=-0.60, P<0.001 for both) were observed thereafter from middle age (40 to 60 years, n=29) to old age (>60 years, n=29). CONCLUSIONS: Cardiac interbeat interval dynamics change markedly from childhood to old age in healthy subjects. Children show complexity and fractal correlation properties of R-R interval time series comparable to those of young adults, despite lower overall heart rate variability. Healthy aging is associated with R-R interval dynamics showing higher regularity and altered fractal scaling consistent with a loss of complex variability.
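
    As a minimal illustration of one of the measures named above, the sketch below estimates a short-term detrended fluctuation analysis (DFA) exponent from a synthetic R-R interval series. The window sizes, the synthetic data, and plain linear detrending are assumptions for illustration; the study analyzed 24-hour recordings and used several additional measures (ApEn, the power-law slope beta).

```python
import numpy as np

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis: returns the scaling exponent alpha
    from a log-log fit of the fluctuation F(n) versus window size n."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        F.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha

rng = np.random.default_rng(0)
rr = 0.8 + 0.05 * rng.standard_normal(5000)   # synthetic R-R intervals (s)
scales = np.arange(4, 17)                     # short-term window sizes (beats)
print("alpha_1 ~", round(dfa_alpha(rr, scales), 2))   # ~0.5 for white noise
```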

  2. Sampling from complex networks using distributed learning automata

    NASA Astrophysics Data System (ADS)

    Rezvanian, Alireza; Rahmati, Mohammad; Meybodi, Mohammad Reza

    2014-02-01

    A complex network provides a framework for modeling many real-world phenomena in the form of a network. In general, a complex network is treated as a graph representation of real-world systems such as biological networks, ecological networks, technological networks, information networks and particularly social networks. Recently, many studies have addressed the characterization of social networks, reflecting a growing trend toward analyzing online social networks as dynamic, large-scale complex graphs. Because real networks are large and access to them is limited, a network model is typically characterized from an appropriate part of the network obtained by sampling. In this paper, a new sampling algorithm based on distributed learning automata is proposed for sampling from complex networks. In the proposed algorithm, a set of distributed learning automata cooperate with each other in order to take appropriate samples from the given network. To investigate the performance of the proposed algorithm, several simulation experiments are conducted on well-known complex networks. Experimental results are compared with those of several sampling methods in terms of different measures and demonstrate the superiority of the proposed algorithm over the others.
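
    The abstract compares the proposed learning-automata sampler against standard sampling baselines. The sketch below implements one such common baseline, simple random-walk subgraph sampling with networkx, on a synthetic scale-free graph; it is not the distributed learning automata algorithm itself, and the graph, sample size, and compared property are illustrative assumptions.

```python
import networkx as nx
import random

def random_walk_sample(G, n_nodes, seed=0):
    """Simple random-walk sampling baseline: walk the graph until n_nodes
    distinct nodes have been visited, then return the induced subgraph."""
    rng = random.Random(seed)
    current = rng.choice(list(G.nodes))
    visited = {current}
    while len(visited) < n_nodes:
        nbrs = list(G.neighbors(current))
        if not nbrs:                       # restart if stuck on an isolate
            current = rng.choice(list(G.nodes))
            continue
        current = rng.choice(nbrs)
        visited.add(current)
    return G.subgraph(visited).copy()

G = nx.barabasi_albert_graph(10_000, 3, seed=1)   # synthetic scale-free network
S = random_walk_sample(G, 500)
print(nx.number_of_nodes(S), nx.number_of_edges(S))
# Compare a simple property of the sample vs. the full graph: average clustering
print(round(nx.average_clustering(G), 3), round(nx.average_clustering(S), 3))
```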

  3. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
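
    A minimal sketch of the adjoint-sensitivity idea evaluated above, reduced to a one-parameter toy ODE with a terminal-cost objective: the gradient of the cost with respect to the parameter is obtained from one forward solve and one backward (adjoint) solve, and is checked against a finite difference. The model, cost, and tolerances are assumptions for illustration and are unrelated to the ErbB signaling model.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

# Toy model dx/dt = -p*x with terminal cost J = 0.5*(x(T) - y_obs)^2.
p, x0, T, y_obs = 0.7, 2.0, 3.0, 0.3

def forward(p):
    return solve_ivp(lambda t, x: -p * x, (0.0, T), [x0], dense_output=True,
                     rtol=1e-10, atol=1e-12)

def adjoint_gradient(p):
    fwd = forward(p)
    lamT = fwd.y[0, -1] - y_obs            # lambda(T) = dJ/dx(T)
    # Adjoint ODE integrated backward in time: dlam/dt = -(df/dx)*lam = p*lam
    bwd = solve_ivp(lambda t, lam: p * lam, (T, 0.0), [lamT],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    # dJ/dp = integral_0^T lambda(t) * df/dp dt, with df/dp = -x(t)
    ts = np.linspace(0.0, T, 2001)
    integrand = bwd.sol(ts)[0] * (-fwd.sol(ts)[0])
    return trapezoid(integrand, ts)

def fd_gradient(p, eps=1e-6):
    J = lambda q: 0.5 * (forward(q).y[0, -1] - y_obs) ** 2
    return (J(p + eps) - J(p - eps)) / (2 * eps)

print("adjoint gradient:   ", adjoint_gradient(p))
print("finite difference:  ", fd_gradient(p))
```

    The key scaling property is visible even in this toy case: one backward solve gives the full gradient regardless of how many parameters the model has, whereas finite differences need one extra forward solve per parameter.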

  4. A new method to real-normalize measured complex modes

    NASA Technical Reports Server (NTRS)

    Wei, Max L.; Allemang, Randall J.; Zhang, Qiang; Brown, David L.

    1987-01-01

    A time domain subspace iteration technique is presented to compute a set of normal modes from the measured complex modes. By using the proposed method, a large number of physical coordinates are reduced to a smaller number of modal or principal coordinates. Subspace free-decay time responses are computed using properly scaled complex modal vectors. The companion matrix for the general case of nonproportional damping is then derived in the selected vector subspace. Subspace normal modes are obtained through eigenvalue solution of the (M_N)^-1 (K_N) matrix and transformed back to the physical coordinates to yield a set of normal modes. A numerical example is presented to demonstrate the outlined theory.

  5. A simple method for estimating the size of nuclei on fractal surfaces

    NASA Astrophysics Data System (ADS)

    Zeng, Qiang

    2017-10-01

    Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. Three different regimes govern the equations for estimating the nucleation site density. Nuclei that are large enough eliminate the effect of the fractal structure, while nuclei that are small enough make the nucleation site density independent of the fractal parameters. Only when nuclei match the fractal scales is the nucleation site density coupled to both the fractal parameters and the size of the nuclei. The method was validated against experimental data reported in the literature and may provide an effective way to estimate the size of nuclei on fractal surfaces, for which a number of promising applications in related fields can be envisioned.

  6. CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.

    PubMed

    Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H

    2016-11-14

    The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need of complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences and confirm the functionality of CORALINA generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large scale genomic and epigenomic screens.

  7. Exploring metabolic pathways in genome-scale networks via generating flux modes.

    PubMed

    Rezola, A; de Figueiredo, L F; Brock, M; Pey, J; Podhorski, A; Wittmann, C; Schuster, S; Bockmayr, A; Planes, F J

    2011-02-15

    The reconstruction of metabolic networks at the genome scale has allowed the analysis of metabolic pathways at an unprecedented level of complexity. Elementary flux modes (EFMs) are an appropriate concept for such analysis. However, their number grows in a combinatorial fashion as the size of the metabolic network increases, which renders the application of the EFM approach to large metabolic networks difficult. Novel methods are needed to deal with such complexity. In this article, we present a novel optimization-based method for determining a minimal generating set of EFMs, i.e. a convex basis. We show that a subset of elements of this convex basis can be effectively computed even in large metabolic networks. Our method was applied to examine the structure of pathways producing lysine in Escherichia coli. We obtained a more varied and informative set of pathways in comparison with existing methods. In addition, an alternative pathway to produce lysine was identified using a detour via propionyl-CoA, which shows the predictive power of our novel approach. The source code in C++ is available upon request.

  8. Telescoping Mechanics: A New Paradigm for Composite Behavior Simulation

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Murthy, P. L. N.; Gotsis, P. K.; Mital, S. K.

    2004-01-01

    This report reviews the application of telescoping mechanics to composites using recursive laminate theory. The elemental scale is the fiber-matrix slice, whose behavior propagates up to the laminate scale. Results from applications to typical, hybrid, and smart composites and to composite-enhanced reinforced concrete structures illustrate the versatility and generality of telescoping scale mechanics. Comparisons with approximate, single-cell, and two- and three-dimensional finite-element methods demonstrate the accuracy and computational effectiveness of telescoping scale mechanics for predicting complex composite behavior.

  9. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum.

    PubMed

    Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph

    2012-06-22

    Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In practice, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but also unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding.

  10. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum

    PubMed Central

    2012-01-01

    Background Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. In practice, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands exhaustive inspection and quality control of incoming components, but also unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. Results This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space-time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. Conclusions The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics, and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both method throughput and the generation of scientific process understanding. PMID:22727013

  11. Optimization of a method for preparing solid complexes of essential clove oil with β-cyclodextrins.

    PubMed

    Hernández-Sánchez, Pilar; López-Miranda, Santiago; Guardiola, Lucía; Serrano-Martínez, Ana; Gabaldón, José Antonio; Nuñez-Delicado, Estrella

    2017-01-01

    Clove oil (CO) is an aromatic oily liquid used in the food, cosmetics and pharmaceutical industries for its functional properties. However, its disadvantages of pungent taste, volatility, light sensitivity and poor water solubility can be overcome by applying microencapsulation or complexation techniques. Essential CO was successfully solubilized in aqueous solution by forming inclusion complexes with β-cyclodextrins (β-CDs). Moreover, phase solubility studies demonstrated that essential CO also forms insoluble complexes with β-CDs. Based on these results, essential CO-β-CD solid complexes were prepared by the novel approach of microwave irradiation (MWI), followed by three different drying methods: vacuum oven drying (VO), freeze-drying (FD) or spray-drying (SD). FD was the best option for drying the CO-β-CD solid complexes, followed by VO and SD. MWI can be used efficiently to prepare essential CO-β-CD complexes with good yield on an industrial scale. © 2016 Society of Chemical Industry.

  12. A Generalized Hybrid Multiscale Modeling Approach for Flow and Reactive Transport in Porous Media

    NASA Astrophysics Data System (ADS)

    Yang, X.; Meng, X.; Tang, Y. H.; Guo, Z.; Karniadakis, G. E.

    2017-12-01

    Using emerging understanding of biological and environmental processes at fundamental scales to advance predictions of the larger system behavior requires the development of multiscale approaches, and there is strong interest in coupling models at different scales together in a hybrid multiscale simulation framework. A limited number of hybrid multiscale simulation methods have been developed for subsurface applications, mostly using application-specific approaches for model coupling. The proposed generalized hybrid multiscale approach is designed with minimal intrusiveness to the pre-selected at-scale simulators and provides a set of lightweight C++ scripts to manage a complex multiscale workflow utilizing a concurrent coupling approach. The workflow includes at-scale simulators (using the lattice-Boltzmann method, LBM, at the pore and Darcy scales, respectively), scripts for boundary treatment (coupling and kriging), and a multiscale universal interface (MUI) for data exchange. The current study aims to apply the generalized hybrid multiscale modeling approach to couple pore- and Darcy-scale models for flow and mixing-controlled reaction with precipitation/dissolution in heterogeneous porous media. The model domain is heterogeneously packed, so that the mixing-front geometry is more complex and not known a priori. To address those challenges, the generalized hybrid multiscale modeling approach is further developed to 1) adaptively define the locations of pore-scale subdomains, 2) provide a suite of physical boundary coupling schemes and 3) consider the dynamic change of the pore structures due to mineral precipitation/dissolution. The results are validated and evaluated by comparing with single-scale simulations in terms of velocities, reactive concentrations and computing cost.

  13. Next Generation Analytic Tools for Large Scale Genetic Epidemiology Studies of Complex Diseases

    PubMed Central

    Mechanic, Leah E.; Chen, Huann-Sheng; Amos, Christopher I.; Chatterjee, Nilanjan; Cox, Nancy J.; Divi, Rao L.; Fan, Ruzong; Harris, Emily L.; Jacobs, Kevin; Kraft, Peter; Leal, Suzanne M.; McAllister, Kimberly; Moore, Jason H.; Paltoo, Dina N.; Province, Michael A.; Ramos, Erin M.; Ritchie, Marylyn D.; Roeder, Kathryn; Schaid, Daniel J.; Stephens, Matthew; Thomas, Duncan C.; Weinberg, Clarice R.; Witte, John S.; Zhang, Shunpu; Zöllner, Sebastian; Feuer, Eric J.; Gillanders, Elizabeth M.

    2012-01-01

    Over the past several years, genome-wide association studies (GWAS) have succeeded in identifying hundreds of genetic markers associated with common diseases. However, most of these markers confer relatively small increments of risk and explain only a small proportion of familial clustering. To identify obstacles to future progress in genetic epidemiology research and provide recommendations to NIH for overcoming these barriers, the National Cancer Institute sponsored a workshop entitled “Next Generation Analytic Tools for Large-Scale Genetic Epidemiology Studies of Complex Diseases” on September 15–16, 2010. The goal of the workshop was to facilitate discussions on (1) statistical strategies and methods to efficiently identify genetic and environmental factors contributing to the risk of complex disease; and (2) how to develop, apply, and evaluate these strategies for the design, analysis, and interpretation of large-scale complex disease association studies in order to guide NIH in setting the future agenda in this area of research. The workshop was organized as a series of short presentations covering scientific (gene-gene and gene-environment interaction, complex phenotypes, and rare variants and next generation sequencing) and methodological (simulation modeling and computational resources and data management) topic areas. Specific needs to advance the field were identified during each session and are summarized. PMID:22147673

  14. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228

  15. A polynomial primal-dual Dikin-type algorithm for linear programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, B.; Roos, R.; Terlaky, T.

    1994-12-31

    We present a new primal-dual affine scaling method for linear programming. The search direction is obtained by using Dikin's original idea: minimize the objective function (which is the duality gap in a primal-dual algorithm) over a suitable ellipsoid. The search direction has no obvious relationship with the directions proposed in the literature so far. It guarantees a significant decrease in the duality gap in each iteration, and at the same time drives the iterates to the central path. The method admits a polynomial complexity bound that is better than the one for Monteiro et al.'s original primal-dual affine scaling method.
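
    For orientation, the sketch below implements Dikin's original primal affine-scaling step (not the authors' primal-dual variant): at each strictly feasible iterate the objective is minimized over an ellipsoid scaled by the current point, which yields the direction -X^2 s with s the reduced costs. The small standard-form LP, the starting point, and the step-length choice are illustrative assumptions.

```python
import numpy as np

def dikin_affine_scaling(A, b, c, x0, alpha=2.0 / 3.0, iters=100):
    """Primal affine-scaling sketch (Dikin): at each iterate x > 0, minimize
    c^T d over the Dikin ellipsoid ||X^{-1} d|| <= 1 with A d = 0, X = diag(x)."""
    x = x0.astype(float)
    for _ in range(iters):
        X = np.diag(x)
        # Dual estimate y from the normal equations (A X^2 A^T) y = A X^2 c
        y = np.linalg.solve(A @ X @ X @ A.T, A @ X @ X @ c)
        s = c - A.T @ y                    # reduced costs
        scaled = X @ s
        if np.linalg.norm(scaled) < 1e-10: # X s ~ 0 indicates (near-)optimality
            break
        d = -X @ X @ s                     # steepest descent in the scaled space
        x = x + alpha * d / np.linalg.norm(scaled)
    return x

# Tiny LP: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 4,  x2 + x4 = 3,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
x0 = np.array([1.0, 1.0, 2.0, 2.0])       # strictly positive interior point
x = dikin_affine_scaling(A, b, c, x0)
print(np.round(x, 4), "objective:", round(c @ x, 4))   # expect x close to (1, 3, 0, 0)
```

    Step fractions up to about 2/3 of the way to the ellipsoid boundary are the classical safe choice for primal affine scaling; the primal-dual variant described in the abstract additionally maintains a dual iterate and comes with a polynomial complexity bound.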

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Attota, Ravikiran, E-mail: Ravikiran.attota@nist.gov; Dixson, Ronald G.

    We experimentally demonstrate that the three-dimensional (3-D) shape variations of nanometer-scale objects can be resolved and measured with sub-nanometer scale sensitivity using conventional optical microscopes by analyzing 4-D optical data using the through-focus scanning optical microscopy (TSOM) method. These initial results show that TSOM-determined cross-sectional (3-D) shape differences of 30 nm–40 nm wide lines agree well with critical-dimension atomic force microscope measurements. The TSOM method showed a linewidth uncertainty of 1.22 nm (k = 2). Complex optical simulations are not needed for analysis using the TSOM method, making the process simple, economical, fast, and ideally suited for high volume nanomanufacturing process monitoring.

  17. Simulations of Sea Level Rise Effects on Complex Coastal Systems

    NASA Astrophysics Data System (ADS)

    Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Reed, C. W.

    2009-12-01

    It is now established that complex coastal systems with elements such as beaches, inlets, bays, and rivers adjust their morphologies according to time-varying balances between the processes that control the exchange of sediment. Accelerated sea level rise introduces a major perturbation into these sediment-sharing systems. A modeling framework based on the new SL-PR model, an advanced version of the aggregate-scale CST model, combined with the event-scale CMS-2D and CMS-Wave models, has been used to simulate the recent evolution of a portion of the Florida panhandle coast. This combination of models provides a method to evaluate coefficients in the aggregate-scale model that were previously treated as fitted parameters. That is, by carrying out simulations of a complex coastal system with runs of the event-scale model representing more than a year, it is now possible to directly relate the coefficients in the large-scale SL-PR model to measurable physical parameters in the current and wave fields. This cross-scale modeling procedure has been used to simulate the shoreline evolution of Santa Rosa Island, a long barrier island on the northern Gulf Coast that houses significant military infrastructure. The model has been used to reproduce 137 years of measured shoreline change and to extend these results to predictions of future rates of shoreline migration.

  18. Digital geomorphological landslide hazard mapping of the Alpago area, Italy

    NASA Astrophysics Data System (ADS)

    van Westen, Cees J.; Soeters, Rob; Sijmons, Koert

    Large-scale geomorphological maps of mountainous areas are traditionally made using complex symbol-based legends. They can serve as excellent "geomorphological databases", from which an experienced geomorphologist can extract a large amount of information for hazard mapping. However, these maps are not designed to be used in combination with a GIS, due to their complex cartographic structure. In this paper, two methods are presented for digital geomorphological mapping at large scales using GIS and digital cartographic software. The methods are applied to an area with a complex geomorphological setting in the Borsoia catchment, located in the Alpago region, near Belluno in the Italian Alps. The GIS database set-up is presented with an overview of the data layers that have been generated and how they are interrelated. The GIS database was also converted into a paper map, using a digital cartographic package. The resulting large-scale geomorphological hazard map is attached. The resulting GIS database and cartographic product can be used to analyse the hazard type and hazard degree for each polygon, and to find the reasons for the hazard classification.

  19. Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks

    NASA Astrophysics Data System (ADS)

    Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L.; Carr, Lincoln D.

    2017-12-01

    We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation locate the critical point for both the quantum Ising and Bose-Hubbard models to a high degree of accuracy under finite-size scaling, for three classes of quantum phase transitions: Z2, mean-field superfluid to Mott insulator, and a Berezinskii-Kosterlitz-Thouless crossover.
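
    A rough sketch of the post-processing step described above: given a symmetric mutual-information matrix (here random, standing in for one computed from matrix product states), build a weighted network and evaluate density, clustering, disparity, and a Pearson degree correlation with networkx. The threshold, the random matrix, and the specific choice of degree assortativity as the "Pearson correlation" measure are assumptions for illustration.

```python
import numpy as np
import networkx as nx

# A synthetic symmetric "mutual information" matrix stands in for the one that
# would come from a quantum many-body calculation (e.g. matrix product states).
rng = np.random.default_rng(2)
L = 32
M = rng.random((L, L))
M = np.triu(M, 1)
M = M + M.T                               # symmetric, zero diagonal

def network_measures(M, threshold=0.5):
    W = np.where(M >= threshold, M, 0.0)  # keep only strong MI links
    G = nx.from_numpy_array(W)
    density = nx.density(G)
    clustering = nx.average_clustering(G, weight="weight")
    # Disparity Y_i = sum_j (w_ij / s_i)^2, averaged over nodes with strength > 0
    s = W.sum(axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        Y = np.nansum((W / s[:, None]) ** 2, axis=1)
    disparity = float(np.mean(Y[s > 0]))
    assortativity = nx.degree_pearson_correlation_coefficient(G)
    return density, clustering, disparity, assortativity

print(network_measures(M))
```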

  20. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    PubMed

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  1. Evaluating sensitivity of complex electrical methods for monitoring CO2 intrusion into a shallow groundwater system and associated geochemical transformations

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Wu, Y.; Hubbard, S. S.; Birkholzer, J. T.; Daley, T. M.; Pugh, J. D.; Peterson, J.; Trautz, R. C.

    2011-12-01

    A risk factor of CO2 storage in deep geological formations includes its potential to leak into shallow formations and impact groundwater geochemistry and quality. In particular, CO2 decreases groundwater pH, which can potentially mobilize naturally occurring trace metals and ions commonly absorbed to or contained in sediments. Here, geophysical studies (primarily complex electrical method) are being carried out at both laboratory and field scales to evaluate the sensitivity of geophysical methods for monitoring dissolved CO2 distribution and geochemical transformations that may impact water quality. Our research is performed in association with a field test that is exploring the effects of dissolved CO2 intrusion on groundwater geochemistry. Laboratory experiments using site sediments (silica sand and some fraction of clay minerals) and groundwater were initially conducted under field relevant CO2 partial pressures (pCO2). A significant pH drop was observed with inline sensors with concurrent changes in fluid conductivity caused by CO2 dissolution. Electrical resistivity and electrical phase responses correlated well with the CO2 dissolution process at various pCO2. Specifically, resistivity decreased initially at low pCO2 condition resulting from CO2 dissolution followed by a slight rebound because of the transition of bicarbonate into non-dissociated carbonic acid at lower pH slightly reducing the total concentration of dissociated species. Continuous electrical phase decreases were also observed, which are interpreted to be driven by the decrease of surface charge density (due to the decrease of pH, which approaches the PZC of the sediments). In general, laboratory experiments revealed the sensitivity of electrical signals to CO2 intrusion into groundwater formations and can be used to guide field data interpretation. Cross well complex electrical data are currently being collected periodically throughout a field experiment involving the controlled release of dissolved CO2 into groundwater. The objective of the geophysical cross well monitoring effort is to evaluate the sensitivity of complex electrical methods to dissolved CO2 at the field scale. Here, we report on the ability to translate laboratory-based petrophysical information from lab to field scales, and on the potential of field complex electrical methods for remotely monitoring CO2-induced geochemical transformations.

  2. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it groups the components at the same level into a module and then designs common interfaces for all of the components in that module; third, replications are made for critical agents and organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, this scheme balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, control system designs for a simplified ship chilled water system and for a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.

  3. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  4. Analysis and elimination of a bias in targeted molecular dynamics simulations of conformational transitions: application to calmodulin.

    PubMed

    Ovchinnikov, Victor; Karplus, Martin

    2012-07-26

    The popular targeted molecular dynamics (TMD) method for generating transition paths in complex biomolecular systems is revisited. In a typical TMD transition path, the large-scale changes occur early and the small-scale changes tend to occur later. As a result, the order of events in the computed paths depends on the direction in which the simulations are performed. To identify the origin of this bias, and to propose a method in which the bias is absent, variants of TMD in the restraint formulation are introduced and applied to the complex open ↔ closed transition in the protein calmodulin. Due to the global best-fit rotation that is typically part of the TMD method, the simulated system is guided implicitly along the lowest-frequency normal modes, until the large spatial scales associated with these modes are near the target conformation. The remaining portion of the transition is described progressively by higher-frequency modes, which correspond to smaller-scale rearrangements. A straightforward modification of TMD that avoids the global best-fit rotation is the locally restrained TMD (LRTMD) method, in which the biasing potential is constructed from a number of TMD potentials, each acting on a small connected portion of the protein sequence. With a uniform distribution of these elements, transition paths that lack the length-scale bias are obtained. Trajectories generated by steered MD in dihedral angle space (DSMD), a method that avoids best-fit rotations altogether, also lack the length-scale bias. To examine the importance of the paths generated by TMD, LRTMD, and DSMD in the actual transition, we use the finite-temperature string method to compute the free energy profile associated with a transition tube around a path generated by each algorithm. The free energy barriers associated with the paths are comparable, suggesting that transitions can occur along each route with similar probabilities. This result indicates that a broad ensemble of paths needs to be calculated to obtain a full description of conformational changes in biomolecules. The breadth of the contributing ensemble suggests that energetic barriers for conformational transitions in proteins are offset by entropic contributions that arise from a large number of possible paths.
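
    The common ingredient of the TMD variants compared above is a restraint on the best-fit RMSD to a target structure. The sketch below shows that ingredient only: a Kabsch superposition, the resulting RMSD, and a harmonic restraint energy 0.5*k*(RMSD - rho)^2 toward a prescribed target RMSD rho. The coordinates, force constant, and rho are random stand-ins; this is not the simulation-package implementation used in the study.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """Best-fit RMSD between coordinate sets P and Q (N x 3) after removing
    translation and applying the optimal (Kabsch) rotation of P onto Q."""
    Pc = P - P.mean(axis=0)
    Qc = Q - Q.mean(axis=0)
    H = Pc.T @ Qc
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])             # guard against improper rotations
    R = Vt.T @ D @ U.T
    diff = Pc @ R.T - Qc
    return np.sqrt((diff ** 2).sum() / len(P))

def tmd_restraint_energy(x, target, rho, k=100.0):
    """Harmonic TMD-style restraint 0.5*k*(RMSD(x, target) - rho)^2, where rho
    is the target RMSD prescribed for the current step of the transition."""
    return 0.5 * k * (kabsch_rmsd(x, target) - rho) ** 2

rng = np.random.default_rng(4)
target = rng.standard_normal((50, 3))               # stand-in target structure
current = target + 0.5 * rng.standard_normal((50, 3))
print("RMSD:", round(kabsch_rmsd(current, target), 3))
print("restraint energy at rho = 0.2:", round(tmd_restraint_energy(current, target, 0.2), 2))
```

    The length-scale bias discussed in the abstract enters precisely through the global best-fit rotation in this restraint; the LRTMD and DSMD variants change or remove that superposition step.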

  5. Analysis of protein-protein docking decoys using interaction fingerprints: application to the reconstruction of CaM-ligand complexes.

    PubMed

    Uchikoga, Nobuyuki; Hirokawa, Takatsugu

    2010-05-11

    Protein-protein docking for proteins with large conformational changes was analyzed by using interaction fingerprints, a measure of similarity among complex structures used especially for identifying near-native protein-ligand or protein-protein complexes. Here, we have proposed a combined method for analyzing protein-protein docking by taking large conformational changes into consideration. This combined method consists of ensemble soft docking with multiple protein structures, refinement of complexes, and cluster analysis using interaction fingerprints and energy profiles. To test the applicability of this combined method, various CaM-ligand complexes were reconstructed from the NMR structures of unbound CaM. For the reconstruction, we used three known CaM ligands, namely the CaM-binding peptides of the cyclic nucleotide-gated channel (CNG), CaM kinase kinase (CaMKK) and the plasma membrane Ca2+ ATPase pump (PMCA), and thirty-one structurally diverse CaM conformations. For each ligand, 62000 CaM-ligand complexes were generated in the docking step, and the relationship between their energy profiles and structural similarities to the native complex was analyzed using interaction fingerprints and RMSD. Near-native clusters were obtained in the cases of CNG and CaMKK. The interaction fingerprint method discriminated near-native structures better than the RMSD method in cluster analysis. We showed that a combined method that includes the interaction fingerprint is very useful for protein-protein docking analysis of certain cases.
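
    As an illustration of the fingerprint idea, the sketch below builds a binary residue-contact interaction fingerprint for two docking poses and scores their similarity with the Tanimoto coefficient. The coordinates are random stand-ins, and the one-point-per-residue contact definition and distance cutoff are simplifying assumptions, not the authors' exact fingerprint.

```python
import numpy as np

def contact_fingerprint(rec_coords, lig_coords, cutoff=4.5):
    """Binary interaction fingerprint: one bit per (receptor residue, ligand
    residue) pair, set to 1 if their distance is below the cutoff.  Each
    'residue' is represented by a single coordinate for simplicity."""
    d = np.linalg.norm(rec_coords[:, None, :] - lig_coords[None, :, :], axis=-1)
    return (d < cutoff).astype(int).ravel()

def tanimoto(fp1, fp2):
    """Tanimoto similarity between two binary fingerprints."""
    both = np.sum((fp1 == 1) & (fp2 == 1))
    either = np.sum((fp1 == 1) | (fp2 == 1))
    return both / either if either else 1.0

rng = np.random.default_rng(7)
receptor = rng.uniform(0, 30, size=(120, 3))      # stand-in receptor residues
pose_a = rng.uniform(0, 30, size=(20, 3))         # two docking poses of a ligand
pose_b = pose_a + rng.normal(0, 1.0, size=pose_a.shape)
fp_a = contact_fingerprint(receptor, pose_a)
fp_b = contact_fingerprint(receptor, pose_b)
print("Tanimoto similarity of the two poses:", round(tanimoto(fp_a, fp_b), 3))
```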

  6. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    PubMed

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms existing methods, even for arbitrary locations.
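
    The core distributional assumption above can be illustrated directly: the sketch below fits a normal distribution left-censored at zero to synthetic daily precipitation by maximum likelihood, so that exact zeros contribute P(latent <= 0) and positive values contribute the normal density. The full model additionally includes spatio-temporal additive terms, which this sketch omits; the synthetic data and optimizer settings are assumptions.

```python
import numpy as np
from scipy import stats, optimize

# Synthetic "daily precipitation": a latent normal, censored at zero
rng = np.random.default_rng(11)
latent = rng.normal(loc=1.0, scale=3.0, size=5000)
obs = np.maximum(latent, 0.0)            # large fraction of exact zeros

def negloglik(params, y):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    zero = y <= 0.0
    # Censored contribution: log P(latent <= 0); uncensored: normal log-density
    ll = np.sum(stats.norm.logcdf(0.0, loc=mu, scale=sigma) * zero)
    ll += np.sum(stats.norm.logpdf(y[~zero], loc=mu, scale=sigma))
    return -ll

res = optimize.minimize(negloglik, x0=np.array([0.0, 0.0]), args=(obs,),
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}  (true: 1.0, 3.0)")
```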

  7. Automatic Brain Portion Segmentation From Magnetic Resonance Images of Head Scans Using Gray Scale Transformation and Morphological Operations.

    PubMed

    Somasundaram, Karuppanagounder; Ezhilarasan, Kamalanathan

    2015-01-01

    The aim of this work is to develop an automatic skull stripping method for magnetic resonance imaging (MRI) of human head scans. The proposed method is based on gray scale transformation and morphological operations. It has been tested with 20 volumes of normal T1-weighted images taken from the Internet Brain Segmentation Repository. Experimental results show that the proposed method gives better results than the popular skull stripping methods Brain Extraction Tool and Brain Surface Extractor, with average Jaccard and Dice coefficients of 0.93 and 0.962, respectively. In this article, we have proposed a novel skull stripping method using intensity transformation and morphological operations; it has low computational complexity yet gives results that are competitive with or better than those of Brain Surface Extractor and Brain Extraction Tool.
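
    A generic sketch of the threshold-plus-morphology pattern the abstract describes, applied to a synthetic 2-D slice with scikit-image and SciPy: global thresholding, erosion to detach the skull, retention of the largest connected component, dilation and hole filling. The specific gray-scale transformation of the paper is not reproduced, and the synthetic image, structuring-element sizes, and 2-D (rather than volumetric) processing are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage import filters, morphology

def brain_mask(slice2d):
    """Rough brain-extraction sketch for one 2-D slice: global threshold,
    erosion to break skull-brain bridges, keep the largest component, dilate
    back and fill holes."""
    fg = slice2d > filters.threshold_otsu(slice2d)
    core = morphology.binary_erosion(fg, morphology.disk(3))
    labels, n = ndimage.label(core)
    if n == 0:
        return np.zeros_like(fg, dtype=bool)
    sizes = ndimage.sum(core, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    mask = morphology.binary_dilation(largest, morphology.disk(3))
    return ndimage.binary_fill_holes(mask)

# Synthetic "head slice": a bright disc (brain) inside a thin bright ring (skull)
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
slice2d = 0.1 * np.random.default_rng(3).random((128, 128))
slice2d += np.where(r < 40, 0.8, 0.0) + np.where((r > 48) & (r < 54), 0.9, 0.0)
mask = brain_mask(slice2d)
print("brain fraction of slice:", round(mask.mean(), 3))
```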

  8. Macroscopic modeling and simulations of supercoiled DNA with bound proteins

    NASA Astrophysics Data System (ADS)

    Huang, Jing; Schlick, Tamar

    2002-11-01

    General methods are presented for modeling and simulating DNA molecules with bound proteins on the macromolecular level. These new approaches are motivated by the need for accurate and affordable methods to simulate slow processes (on the millisecond time scale) in DNA/protein systems, such as the large-scale motions involved in the Hin-mediated inversion process. Our approaches, based on the wormlike chain model of long DNA molecules, introduce inhomogeneous potentials for DNA/protein complexes based on available atomic-level structures. Electrostatically, we treat these DNA/protein complexes as sets of effective charges, optimized by our discrete surface charge optimization package, in which the charges are distributed on an excluded-volume surface that represents the macromolecular complex. We also introduce directional bending potentials as well as a non-identical-bead hydrodynamics algorithm to further mimic the inhomogeneous effects caused by protein binding. These models thus account for basic elements of protein binding effects on DNA local structure but remain computationally tractable. To validate these models and methods, we reproduce various properties measured by both Monte Carlo methods and experiments. We then apply the developed models to study the Hin-mediated inversion system in long DNA. By simulating supercoiled, circular DNA with or without bound proteins, we observe significant effects of protein binding on global conformations and long-time dynamics of the DNA on the kilobase-pair length scale.

  9. Methods of testing parameterizations: Vertical ocean mixing

    NASA Technical Reports Server (NTRS)

    Tziperman, Eli

    1992-01-01

    The ocean's velocity field is characterized by an exceptional variety of scales. While the small-scale oceanic turbulence responsible for the vertical mixing in the ocean is of scales a few centimeters and smaller, the oceanic general circulation is characterized by horizontal scales of thousands of kilometers. In oceanic general circulation models that are typically run today, the vertical structure of the ocean is represented by a few tens of discrete grid points. Such models cannot explicitly model the small-scale mixing processes, and must, therefore, find ways to parameterize them in terms of the larger-scale fields. Finding a parameterization that is both reliable and plausible to use in ocean models is not a simple task. Vertical mixing in the ocean is the combined result of many complex processes, and, in fact, mixing is one of the less known and less understood aspects of the oceanic circulation. In present models of the oceanic circulation, the many complex processes responsible for vertical mixing are often parameterized in an oversimplified manner. Yet, finding an adequate parameterization of vertical ocean mixing is crucial to the successful application of ocean models to climate studies. The results of general circulation models for quantities that are of particular interest to climate studies, such as the meridional heat flux carried by the ocean, are quite sensitive to the strength of the vertical mixing. We try to examine the difficulties in choosing an appropriate vertical mixing parameterization, and the methods that are available for validating different parameterizations by comparing model results to oceanographic data. First, some of the physical processes responsible for vertically mixing the ocean are briefly mentioned, and some possible approaches to the parameterization of these processes in oceanographic general circulation models are described in the following section. We then discuss the role of the vertical mixing in the physics of the large-scale ocean circulation, and examine methods of validating mixing parameterizations using large-scale ocean models.

  10. Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes

    NASA Astrophysics Data System (ADS)

    Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth

    2014-12-01

    There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.

  11. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
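
    One of the three algorithms discussed, regularization with cross-validation, can be sketched directly with scikit-learn: an L1-penalized logistic regression chooses its penalty by cross-validation (minimizing expected prediction error on held-out folds), and the items with non-zero coefficients form the criterion-keyed scale. The synthetic item pool and outcome below are assumptions; the study used a real personality item pool and all-cause mortality.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic item pool: 500 respondents x 200 items, only 10 items truly predictive
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 200))
beta = np.zeros(200)
beta[:10] = 0.8
prob = 1 / (1 + np.exp(-(X @ beta - 1.0)))
y = rng.binomial(1, prob)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# L1 penalty + cross-validation: pick the penalty minimizing expected prediction
# error, then keep the items with non-zero weights as the "scale".
model = LogisticRegressionCV(Cs=10, cv=5, penalty="l1", solver="saga",
                             max_iter=5000, scoring="roc_auc").fit(X_tr, y_tr)
selected = np.flatnonzero(model.coef_[0])
print("items retained in the scale:", len(selected))
print("held-out AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```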

  12. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .

  13. 7th Annual Systems Biology Symposium: Systems Biology and Engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galitski, Timothy P.

    2008-04-01

    Systems biology recognizes the complex multi-scale organization of biological systems, from molecules to ecosystems. The International Symposium on Systems Biology has been hosted by the Institute for Systems Biology in Seattle, Washington, since 2002. The annual two-day event gathers the most influential researchers transforming biology into an integrative discipline investigating complex systems. Engineering and application of new technology is a central element of systems biology. Genome-scale, or very small-scale, biological questions drive the engineering of new technologies, which enable new modes of experimentation and computational analysis, leading to new biological insights and questions. Concepts and analytical methods in engineering are now finding direct applications in biology. Therefore, the 2008 Symposium, funded in partnership with the Department of Energy, featured global leaders in "Systems Biology and Engineering."

  14. Experimental Methods for Protein Interaction Identification and Characterization

    NASA Astrophysics Data System (ADS)

    Uetz, Peter; Titz, Björn; Cagney, Gerard

    There are dozens of methods for the detection of protein-protein interactions, but they fall into a few broad categories. Fragment complementation assays, such as the yeast two-hybrid (Y2H) system, are based on split proteins that are functionally reconstituted by fusions of interacting proteins. Biophysical methods include structure determination and mass spectrometric (MS) identification of proteins in complexes. Biochemical methods include far western blotting and peptide arrays. Only the Y2H system and protein complex purification combined with MS have been used on a larger scale. Due to the lack of data, it is still difficult to compare these methods with respect to their efficiency and error rates. Current data do not favor any particular method, and thus multiple experimental approaches are necessary to maximally cover the interactome of any target cell or organism.

  15. High-order Discontinuous Element-based Schemes for the Inviscid Shallow Water Equations: Spectral Multidomain Penalty and Discontinuous Galerkin Methods

    DTIC Science & Technology

    2011-07-19

    Keywords: multidomain methods, discontinuous Galerkin methods, interfacial treatment. (Jorge A. Escobar-Vargas, School of Civil and Environmental Engineering, Cornell.) Geophysical flows exhibit a complex structure and dynamics over a broad range of scales; for hyperbolic problems, the interfacial patching was implemented with an upwind scheme based on a modified method of characteristics.

  16. Simulation Methods for Optics and Electromagnetics in Complex Geometries and Extreme Nonlinear Regimes with Disparate Scales

    DTIC Science & Technology

    2014-09-30

    Software developed with this project's support. S1 Cork School 2013: I. UPPEcore Simulator design and usage, simulation examples; II. Nonlinear pulse...pulse propagation, 08/28/13 - 08/02/13, University College Cork, Ireland. S2 ACMS MURI School 2012: Computational Methods for Nonlinear PDEs describing

  17. Automated object-based classification of topography from SRTM data

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens

    2012-01-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean value of elevation and the standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionality. PMID:22485060
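
    As an illustration of the scale-detection idea described above, the sketch below estimates the mean local variance of an elevation layer over increasing window sizes; the random array and window sizes are hypothetical, and this is not the authors' eCognition workflow.

```python
# Illustrative sketch: mean local variance of an elevation layer as a function
# of window size, the quantity used here to pick segmentation scales.
import numpy as np
from scipy.ndimage import uniform_filter

def mean_local_variance(dem, window):
    """Mean of the per-pixel variance computed in a square moving window."""
    mean = uniform_filter(dem, size=window)
    mean_sq = uniform_filter(dem * dem, size=window)
    return float(np.mean(mean_sq - mean * mean))

# Hypothetical elevation array standing in for an SRTM tile.
dem = np.random.default_rng(1).normal(loc=500.0, scale=50.0, size=(256, 256))

# Local variance typically rises with window size and flattens near the
# dominant object scale; the flattening point suggests a segmentation scale.
for w in (3, 5, 9, 17, 33):
    print(w, round(mean_local_variance(dem, w), 2))
```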

  18. Multi-Scale Compositionality: Identifying the Compositional Structures of Social Dynamics Using Deep Learning

    PubMed Central

    Peng, Huan-Kai; Marculescu, Radu

    2015-01-01

    Objective: Social media exhibit rich yet distinct temporal dynamics which cover a wide range of different scales. In order to study these complex dynamics, two fundamental questions revolve around (1) the signatures of social dynamics at different time scales, and (2) the way in which these signatures interact and form higher-level meanings. Method: In this paper, we propose the Recursive Convolutional Bayesian Model (RCBM) to address both of these fundamental questions. The key idea behind our approach consists of constructing a deep-learning framework using specialized convolution operators that are designed to exploit the inherent heterogeneity of social dynamics. RCBM's runtime and convergence properties are guaranteed by formal analyses. Results: Experimental results show that the proposed method outperforms the state-of-the-art approaches both in terms of solution quality and computational efficiency. Indeed, by applying the proposed method to two social network datasets, Twitter and Yelp, we are able to identify the compositional structures that can accurately characterize the complex social dynamics from these two social media. We further show that identifying these patterns can enable new applications such as anomaly detection and improved social dynamics forecasting. Finally, our analysis offers new insights on understanding and engineering social media dynamics, with direct applications to opinion spreading and online content promotion. PMID:25830775

  19. Automated object-based classification of topography from SRTM data

    NASA Astrophysics Data System (ADS)

    Drăguţ, Lucian; Eisank, Clemens

    2012-03-01

    We introduce an object-based method to automatically classify topography from SRTM data. The new method relies on the concept of decomposing land-surface complexity into more homogeneous domains. An elevation layer is automatically segmented and classified at three scale levels that represent domains of complexity, using self-adaptive, data-driven techniques. For each domain, scales in the data are detected with the help of local variance, and segmentation is performed at these appropriate scales. Objects resulting from segmentation are partitioned into sub-domains based on thresholds given by the mean value of elevation and the standard deviation of elevation, respectively. The results reasonably resemble patterns of existing global and regional classifications, displaying a level of detail close to manually drawn maps. Statistical evaluation indicates that most of the classes satisfy the regionalization requirements of maximizing internal homogeneity while minimizing external homogeneity. Most objects have boundaries matching natural discontinuities at the regional level. The method is simple and fully automated. The input data consist of only one layer, which does not need any pre-processing. Both segmentation and classification rely on only two parameters: elevation and the standard deviation of elevation. The methodology is implemented as a customized process for the eCognition® software, available as an online download. The results are embedded in a web application with visualization and download functionality.

  20. On Statistics of Bi-Orthogonal Eigenvectors in Real and Complex Ginibre Ensembles: Combining Partial Schur Decomposition with Supersymmetry

    NASA Astrophysics Data System (ADS)

    Fyodorov, Yan V.

    2018-06-01

    We suggest a method of studying the joint probability density (JPD) of an eigenvalue and the associated 'non-orthogonality overlap factor' (also known as the 'eigenvalue condition number') of the left and right eigenvectors for non-selfadjoint Gaussian random matrices of size N × N. First we derive the general finite-N expression for the JPD of a real eigenvalue λ and the associated non-orthogonality factor in the real Ginibre ensemble, and then analyze its 'bulk' and 'edge' scaling limits. The ensuing distribution is maximally heavy-tailed, so that all integer moments beyond normalization are divergent. A similar calculation for a complex eigenvalue z and the associated non-orthogonality factor in the complex Ginibre ensemble is presented as well and yields a distribution with a finite first moment. Its 'bulk' scaling limit yields a distribution whose first moment reproduces the well-known result of Chalker and Mehlig (Phys Rev Lett 81(16):3367-3370, 1998), and we provide the 'edge' scaling distribution for this case as well. Our method involves evaluating the ensemble average of products and ratios of integer and half-integer powers of characteristic polynomials for Ginibre matrices, which we perform in the framework of a supersymmetry approach. Our paper complements recent studies by Bourgade and Dubach (The distribution of overlaps between eigenvectors of Ginibre matrices, 2018. arXiv:1801.01219).

  1. Tree growth and climate in the Pacific Northwest, North America: a broad-scale analysis of changing growth environments

    Treesearch

    Whitney L. Albright; David L. Peterson

    2013-01-01

    Climate change in the 21st century will affect tree growth in the Pacific Northwest region of North America, although complex climate–growth relationships make it difficult to identify how radial growth will respond across different species distributions. We used a novel method to examine potential growth responses to climate change at a broad geographical scale with a...

  2. Evaluation of naranjo adverse drug reactions probability scale in causality assessment of drug-induced liver injury.

    PubMed

    García-Cortés, M; Lucena, M I; Pachkoria, K; Borraz, Y; Hidalgo, R; Andrade, R J

    2008-05-01

    Causality assessment in hepatotoxicity is challenging. The current standard liver-specific Council for International Organizations of Medical Sciences/Roussel Uclaf Causality Assessment Method scale is complex and difficult to implement in daily practice. The Naranjo Adverse Drug Reactions Probability Scale is a simple and widely used nonspecific scale, which has not been specifically evaluated in drug-induced liver injury. Our aim was to compare the Naranjo method with the standard liver-specific Council for International Organizations of Medical Sciences/Roussel Uclaf Causality Assessment Method scale, evaluating the accuracy and reproducibility of the Naranjo Adverse Drug Reactions Probability Scale in the diagnosis of hepatotoxicity. Two hundred and twenty-five cases of suspected hepatotoxicity submitted to a national registry were evaluated by two independent observers and assessed for between-observer and between-scale differences using percentages of agreement and the weighted kappa (kappa(w)) test. A total of 249 ratings were generated. Between-observer agreement was 45% with a kappa(w) value of 0.17 for the Naranjo Adverse Drug Reactions Probability Scale, while there was higher agreement when using the Council for International Organizations of Medical Sciences/Roussel Uclaf Causality Assessment Method scale (72%, kappa(w): 0.71). Concordance between the two scales was 24% (kappa(w): 0.15). The Naranjo Adverse Drug Reactions Probability Scale had low sensitivity (54%) and poor negative predictive value (29%) and showed a limited capability to distinguish between adjacent categories of probability. The Naranjo scale lacks validity and reproducibility in the attribution of causality in hepatotoxicity.

  3. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

    Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using an understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method, and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All of the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications or benchmarking among the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and intercompared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error, and computing cost. The intercomparison provides support for confidence in a variety of hybrid multiscale methods and motivates their further development and application.

  4. Estimation of Global Network Statistics from Incomplete Data

    PubMed Central

    Bliss, Catherine A.; Danforth, Christopher M.; Dodds, Peter Sheridan

    2014-01-01

    Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We generate scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week. PMID:25338183

  5. Quantifying complexity of financial short-term time series by composite multiscale entropy measure

    NASA Astrophysics Data System (ADS)

    Niu, Hongli; Wang, Jun

    2015-05-01

    It is significant to study the complexity of financial time series since the financial market is a complex, evolving dynamic system. Multiscale entropy is a prevailing method used to quantify the complexity of a time series. Because its entropy estimates are less reliable for short time series at large time scales, a modified method, the composite multiscale entropy (CMSE), is applied to the financial market. To verify its effectiveness, its applications to synthetic white noise and 1/f noise with different data lengths are first reproduced in the present paper. The method is then introduced, for the first time, in a reliability test with two Chinese stock indices. When applied to short-term return series, the CMSE method shows advantages in reducing deviations of entropy estimation and demonstrates more stable and reliable results compared with the conventional MSE algorithm. Finally, the composite multiscale entropy of six important stock indices from the world financial markets is investigated, and some useful and interesting empirical results are obtained.
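
    A hedged sketch of the composite multiscale entropy idea follows: coarse-grain the series at each scale from every possible offset, compute sample entropy for each coarse-grained series, and average. The stand-in return series and parameter values (m = 2, r = 0.15) are illustrative assumptions, not the paper's data or code.

```python
# Simplified composite multiscale entropy (CMSE) sketch with NumPy.
import numpy as np

def sample_entropy(x, m, tol):
    """SampEn: -ln of the conditional probability that sequences matching for
    m points (within tolerance tol, Chebyshev distance) also match for m+1."""
    x = np.asarray(x, dtype=float)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        return np.sum(d <= tol) - len(t)          # exclude self-matches
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def cmse(x, scale, m=2, r=0.15):
    """Composite MSE: average SampEn over the `scale` coarse-graining offsets."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                              # tolerance fixed from the original series
    vals = []
    for k in range(scale):
        n = (len(x) - k) // scale
        coarse = x[k:k + n * scale].reshape(n, scale).mean(axis=1)
        vals.append(sample_entropy(coarse, m, tol))
    return float(np.mean(vals))

returns = np.random.default_rng(2).normal(size=600)   # stand-in return series
print([round(cmse(returns, s), 3) for s in (1, 2, 3, 4, 5)])
```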

  6. Multicriteria decision analysis: Overview and implications for environmental decision making

    USGS Publications Warehouse

    Hermans, Caroline M.; Erickson, Jon D.; Erickson, Jon D.; Messner, Frank; Ring, Irene

    2007-01-01

    Environmental decision making involving multiple stakeholders can benefit from the use of a formal process to structure stakeholder interactions, leading to more successful outcomes than traditional discursive decision processes. There are many tools available to handle complex decision making. Here we illustrate the use of a multicriteria decision analysis (MCDA) outranking tool (PROMETHEE) to facilitate decision making at the watershed scale, involving multiple stakeholders, multiple criteria, and multiple objectives. We compare various MCDA methods and their theoretical underpinnings, examining methods that most realistically model complex decision problems in ways that are understandable and transparent to stakeholders.

  7. Connotations of pixel-based scale effect in remote sensing and the modified fractal-based analysis method

    NASA Astrophysics Data System (ADS)

    Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu

    2017-06-01

    Scale problems are a major source of concern in the field of remote sensing. Because remote sensing is a complex technological system, the connotations of scale and scale effect in remote sensing are not yet fully understood. This paper therefore first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measure for analyzing pixel-based scale. In traditional fractal dimension calculation, however, the impact of spatial resolution is not considered, so the change of scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To verify the results of the two modified methods, MFBM (Modified Windowed Fractal Brownian Motion Based on the Surface Area) and MDBM (Modified Windowed Double Blanket Method), the existing scale effect analysis method (the information entropy method) is used for evaluation. Six sub-regions of building areas and farmland areas were cut out from QuickBird images and used as the experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. The experimental results therefore indicate that the modified fractal methods are effective in reflecting the pixel-based scale effect in remote sensing data and are helpful for analyzing the observation scale from different aspects. This research will ultimately benefit remote sensing data selection and application.
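
    The evaluation idea, tracking an information measure as spatial resolution coarsens, can be sketched as follows; the random image, block-averaging scheme, and bin count are assumptions for illustration and do not reproduce the paper's modified fractal methods.

```python
# Illustrative sketch: Shannon entropy of an image as resolution is coarsened
# by block averaging, the kind of curve used to locate feature scales.
import numpy as np

def shannon_entropy(img, bins=256):
    hist, _ = np.histogram(img, bins=bins, range=(img.min(), img.max()))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def coarsen(img, factor):
    """Reduce spatial resolution by averaging factor-by-factor pixel blocks."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

image = np.random.default_rng(3).integers(0, 256, size=(512, 512)).astype(float)
for f in (1, 2, 4, 8, 16):       # resolution becomes coarser as f grows
    print(f, round(shannon_entropy(coarsen(image, f)), 3))
```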

  8. Efficient methods and readily customizable libraries for managing complexity of large networks.

    PubMed

    Dogrusoz, Ugur; Karacelik, Alper; Safarli, Ilkin; Balci, Hasan; Dervishi, Leonard; Siper, Metin Can

    2018-01-01

    One common problem in visualizing real-life networks, including biological pathways, is the large size of these networks. Oftentimes, users find themselves facing slow, non-scaling operations due to network size, if not a "hairball" network, hindering effective analysis. One extremely useful method for reducing the complexity of large networks is the use of hierarchical clustering and nesting, with expand-collapse operations applied on demand during analysis. Another such method is hiding currently unnecessary details, to be gradually revealed later on demand. Major challenges when applying complexity reduction operations to large networks include efficiency and maintaining the user's mental map of the drawing. We developed specialized incremental layout methods for preserving a user's mental map while managing the complexity of large networks through expand-collapse and hide-show operations. We also developed open-source JavaScript libraries as plug-ins to the web-based graph visualization library Cytoscape.js to implement these methods as complexity management operations. Through the efficient specialized algorithms provided by these extensions, one can collapse or hide desired parts of a network, yielding potentially much smaller networks and making them more suitable for interactive visual analysis. This work fills an important gap by making efficient implementations of some already known complexity management techniques freely available to tool developers through a couple of open-source, customizable software libraries, and by introducing some heuristics which can be applied with such complexity management techniques to ensure the preservation of users' mental maps.

  9. The fast multipole method and point dipole moment polarizable force fields.

    PubMed

    Coles, Jonathan P; Masella, Michel

    2015-01-14

    We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of this approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamical systems.

  10. Complex dynamics of our economic life on different scales: insights from search engine query data.

    PubMed

    Preis, Tobias; Reith, Daniel; Stanley, H Eugene

    2010-12-28

    Search engine query data deliver insight into the behaviour of individuals, who constitute the smallest possible scale of our economic life. Individuals submit several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life at an aggregated collective level. We ask whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both the collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as complex systems of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with the weekly search volume of the corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series, with which we find a clear tendency for search volume and transaction volume time series to show recurring patterns.

  11. Energy Spectral Behaviors of Communication Networks of Open-Source Communities

    PubMed Central

    Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun

    2015-01-01

    Large-scale online collaborative production activities in open-source communities must be accompanied by large-scale communication activities. The production activities of open-source communities, and especially their communication activities, have attracted growing attention. Taking the CodePlex C# community as an example, this paper constructs complex network models of 12 periods of the community's communication structure based on real data; it then discusses the basic concepts of quantum mapping of complex networks, pointing out that the purpose of the mapping is to study the structures of complex networks following the way quantum mechanics is used to study the structures of large molecules; finally, following this idea, it analyzes and compares the fractal features of the spectra under different quantum mappings of the networks, concluding that the communication structures of the community exhibit multiple self-similarity and criticality. In addition, this paper discusses the insights offered by different quantum mappings, and the conditions for their application, in revealing the characteristics of these structures. The proposed quantum mapping method can also be applied to structural studies of other large-scale organizations. PMID:26047331

  12. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture information at different scales. Moreover, each node at a lower layer also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.

  13. A review of numerical techniques approaching microstructures of crystalline rocks

    NASA Astrophysics Data System (ADS)

    Zhang, Yahui; Wong, Louis Ngai Yuen

    2018-06-01

    The macro-mechanical behavior of crystalline rocks, including strength, deformability, and failure pattern, is dominantly influenced by their grain-scale structures. Numerical techniques are commonly used to help understand the complicated mechanisms from a microscopic perspective. Each numerical method has its respective strengths and limitations. This review paper elucidates how numerical techniques take geometrical aspects of the grain into consideration. Four categories of numerical methods are examined: particle-based methods, block-based methods, grain-based methods, and node-based methods. Focusing on grain-scale characteristics, specific relevant issues, including the increasing complexity of micro-structure, the deformation and breakage of model elements, and the fracturing and fragmentation process, are described in more detail. The intrinsic capabilities and limitations of different numerical approaches in accounting for the micro-mechanics of crystalline rocks and their phenomenological mechanical behavior are thereby explicitly presented.

  14. Path changing methods applied to the 4-D guidance of STOL aircraft.

    DOT National Transportation Integrated Search

    1971-11-01

    Prior to the advent of large-scale commercial STOL service, some challenging navigation and guidance problems must be solved. Proposed terminal area operations may require that these aircraft be capable of accurately flying complex flight paths, and ...

  15. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; He, Ya-Ling; Kang, Qinjun

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection-diffusion equation. The CFVLBM and the RO are validated on several typical physicochemical problems and then applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. Highlights: • A coupled simulation strategy for simulating multi-scale phenomena is developed. • The finite volume method and lattice Boltzmann method are coupled. • A reconstruction operator is derived to transfer information at the sub-domain interface. • Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. • Techniques to save computational resources and improve efficiency are discussed.

  16. On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo

    NASA Astrophysics Data System (ADS)

    Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl

    2016-09-01

    A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors, including sample heterogeneity, computational and imaging limitations, model inadequacy, and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity, and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). This stochastic nature can arise from multi-scale heterogeneity, the computational and experimental limitations on considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques. The proposed method can be used efficiently in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, and robust estimation of the Representative Elementary Volume size for arbitrary physics.
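
    A generic sketch of the multilevel Monte Carlo estimator referred to above is given below, using a toy one-dimensional integrand rather than a pore-scale solver; the level definition and sample counts are illustrative assumptions.

```python
# Multilevel Monte Carlo sketch: E[P_L] ~= E[P_0] + sum_l E[P_l - P_{l-1}],
# with each correction term estimated by Monte Carlo on coupled samples.
import numpy as np

rng = np.random.default_rng(4)

def P(level, sample):
    """Level-l approximation of a toy quantity of interest for one random input:
    midpoint quadrature of exp(a*x) on a grid with 2**level cells."""
    n = 2 ** level
    x = (np.arange(n) + 0.5) / n
    return np.mean(np.exp(sample * x))

def mlmc(levels, n_samples):
    est = 0.0
    for l, n in zip(levels, n_samples):
        a = rng.normal(size=n)                 # same random inputs on both levels
        fine = np.array([P(l, s) for s in a])
        coarse = np.array([P(l - 1, s) for s in a]) if l > 0 else 0.0
        est += np.mean(fine - coarse)
    return est

# Many cheap samples on the coarse level, few on the expensive fine levels.
print(mlmc(levels=[0, 1, 2, 3], n_samples=[4000, 1000, 250, 60]))
```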

  17. Ecologically Enhancing Coastal Infrastructure

    NASA Astrophysics Data System (ADS)

    Mac Arthur, Mairi; Naylor, Larissa; Hansom, Jim; Burrows, Mike; Boyd, Ian

    2017-04-01

    Hard engineering structures continue to proliferate in the coastal zone globally in response to increasing pressures associated with rising sea levels, coastal flooding and erosion. These structures are typically plain-cast by design and function as poor ecological surrogates for natural rocky shores, which are highly topographically complex and host a range of available microhabitats for intertidal species. Ecological enhancement mitigates some of these negative impacts by integrating components of nature into the construction and design of these structures to improve their sustainability, resilience and multifunctionality. In the largest UK ecological enhancement trial to date, 184 tiles (15x15cm) of up to nine potential designs were deployed on vertical concrete coastal infrastructure in 2016 at three sites across the UK (Saltcoats, Blackness and Isle of Wight). The surface texture and complexity of the tiles were varied to test the effect of settlement surface texture at the mm-cm scale of enhancement on the success of colonisation and biodiversity in the mid-upper intertidal zone, in order to answer the following experimental hypotheses: (1) tiles with mm-scale geomorphic complexity will have greater barnacle abundances; (2) tiles with cm-scale geomorphic complexity will have greater species richness than mm-scale tiles. A range of methods were used in creating the tile designs, including terrestrial laser scanning of creviced rock surfaces to mimic natural rocky shore complexity as well as artificially generated complexity using computer software. The designs replicated the topographic features of high ecological importance found on natural rocky shores and promoted species recruitment and community composition on artificial surfaces, thus enabling us to evaluate biological responses to geomorphic complexity in a controlled field trial. At two of the sites, the roughest tile designs (cm scale) did not have the highest levels of barnacle recruits, which were instead counted on tiles of intermediate roughness such as the grooved concrete, with 257 recruits on average (n=8) at four months post-installation (Saltcoats) and 1291 recruits at two months post-installation (Isle of Wight). This indicates that a higher level of complexity does not always reflect the most appropriate roughness scale for some colonisers. On average, tiles with mm-scale texture were more successful in terms of barnacle colonisation compared to plain-cast control tiles (n=8 per site). The poor performance of the control tiles (9 recruits, Saltcoats; 147 recruits, Isle of Wight, after 4 and 2 months, respectively) further highlights that artificial, hard substrates are poor ecological surrogates for natural rocky shores. One of the sites, Blackness, was an observed outlier to the general trend of colonisation, likely due to its estuarine location. This factor may contribute to why every design, including the control tile, had high abundances of barnacles. Artificially designed tiles with cm-scale complexity had higher levels of species richness, with periwinkles and topshells frequently observed to utilise the tile microhabitats in greater numbers than found on other tile designs. These results show that the scale of geomorphic complexity influences early-stage colonisation. Data analysis is ongoing, and these more advanced analyses will be presented.

  18. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. As buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging and require optimized codes able to run efficiently on high-performance computing infrastructure while simultaneously handling complex geometries. Physically, simulated ruptures hosted by rough faults appear much closer in complexity to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol solves the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g., onset of supershear transition, rupture front coherence, propagation of self-healing pulses) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, relative to the rupture's inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip weakening on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  19. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise, information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, and (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  20. Model Updating of Complex Structures Using the Combination of Component Mode Synthesis and Kriging Predictor

    PubMed Central

    Li, Yan; Wang, Dejun; Zhang, Shaoyi

    2014-01-01

    Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods in these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to enable FEM updating of large-scale structures. In the first level, CMS was applied to build a reasonable condensed FEM of the complex structure. In the second level, the Kriging predictor, regarded as a surrogate FEM in structural dynamics, was generated based on the condensed FEM. Some key issues in the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with measured modal parameters. PMID:24634612
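
    The surrogate idea can be sketched as follows: a Gaussian-process (Kriging) model is trained on a handful of runs of a cheap stand-in for the condensed FEM and then queried in its place during updating; the function, training points, and kernel below are hypothetical, not the bridge model.

```python
# Minimal Kriging-surrogate sketch with scikit-learn's Gaussian process.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def condensed_fem_frequency(stiffness_scale):
    """Stand-in for a condensed-FEM modal solve: first frequency vs. a stiffness factor."""
    return 2.1 * np.sqrt(stiffness_scale)

X_train = np.linspace(0.5, 1.5, 8).reshape(-1, 1)      # a few expensive runs
y_train = condensed_fem_frequency(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
surrogate.fit(X_train, y_train)

# The surrogate can then be queried many times during model updating.
query = np.array([[0.87], [1.12]])
pred, std = surrogate.predict(query, return_std=True)
print(pred.round(3), std.round(4))
```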

  1. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
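
    As a reminder of how the resampling estimators mentioned here work, below is a minimal delete-one jackknife standard error for a sample mean; the data are simulated, and the sketch ignores the complex-survey weighting that the report addresses.

```python
# Delete-one jackknife standard error of a statistic (here, the mean).
import numpy as np

def jackknife_se(x, stat=np.mean):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # statistic recomputed with each observation left out in turn
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

sample = np.random.default_rng(5).normal(loc=10.0, scale=2.0, size=40)
# For the mean, the jackknife SE should agree closely with the textbook s/sqrt(n).
print(jackknife_se(sample), sample.std(ddof=1) / np.sqrt(len(sample)))
```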

  2. Systematic methods for defining coarse-grained maps in large biomolecules.

    PubMed

    Zhang, Zhiyong

    2015-01-01

    Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.

  3. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated at different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models, as well as from parent models to child models, in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that, while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. The method allows multiple groundwater flow and transport processes to be modelled using separate groundwater models built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  4. Quantum watermarking scheme through Arnold scrambling and LSB steganography

    NASA Astrophysics Data System (ADS)

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping

    2017-09-01

    Based on the NEQR of quantum images, a new quantum gray-scale image watermarking scheme is proposed through Arnold scrambling and least significant bit (LSB) steganography. The sizes of the carrier image and the watermark image are assumed to be 2n × 2n and n × n, respectively. Firstly, a classical n × n watermark image with 8-bit gray scale is expanded to a 2n × 2n image with 2-bit gray scale. Secondly, through the module of PA-MOD N, the expanded watermark image is scrambled into a meaningless image by the Arnold transform. Then, the expanded scrambled image is embedded into the carrier image by the LSB steganography method. Finally, a time complexity analysis is given. The simulation experiment results show that our quantum circuit has lower time complexity and that the proposed watermarking scheme is superior to others.
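
    A classical analogue of the two main ingredients, Arnold scrambling followed by LSB embedding, is sketched below; the image sizes, map form, and iteration count are illustrative assumptions, and the quantum (NEQR) circuit itself is not reproduced.

```python
# Classical sketch: Arnold cat-map scrambling of a 1-bit watermark plane,
# then embedding it in the least significant bit of a carrier image.
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map (x, y) -> ((x + y) mod n, (x + 2y) mod n) on a square image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

rng = np.random.default_rng(6)
carrier = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
watermark_bits = rng.integers(0, 2, size=(8, 8), dtype=np.uint8)   # 1-bit plane

scrambled = arnold(watermark_bits, iterations=3)
stego = (carrier & 0xFE) | scrambled      # overwrite the least significant bit
recovered = stego & 0x01                  # extraction is just reading the LSB
assert np.array_equal(recovered, scrambled)
```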

  5. Implicit and explicit subgrid-scale modeling in discontinuous Galerkin methods for large-eddy simulation

    NASA Astrophysics Data System (ADS)

    Fernandez, Pablo; Nguyen, Ngoc-Cuong; Peraire, Jaime

    2017-11-01

    Over the past few years, high-order discontinuous Galerkin (DG) methods for Large-Eddy Simulation (LES) have emerged as a promising approach to solve complex turbulent flows. Despite the significant research investment, the relation between the discretization scheme, the Riemann flux, the subgrid-scale (SGS) model and the accuracy of the resulting LES solver remains unclear. In this talk, we investigate the role of the Riemann solver and the SGS model in the ability to predict a variety of flow regimes, including transition to turbulence, wall-free turbulence, wall-bounded turbulence, and turbulence decay. The Taylor-Green vortex problem and the turbulent channel flow at various Reynolds numbers are considered. Numerical results show that DG methods implicitly introduce numerical dissipation in under-resolved turbulence simulations and, even in the high Reynolds number limit, this implicit dissipation provides a more accurate representation of the actual subgrid-scale dissipation than that by explicit models.

  6. A class of hybrid finite element methods for electromagnetics: A review

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Chatterjee, A.; Gong, J.

    1993-01-01

    Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.

  7. Significant Scales in Community Structure

    NASA Astrophysics Data System (ADS)

    Traag, V. A.; Krings, G.; van Dooren, P.

    2013-10-01

    Many complex networks show signs of modular structure, uncovered by community detection. Although many methods succeed in revealing various partitions, it remains difficult to detect at what scale a partition is significant. This problem is most evident in multi-resolution methods. We here introduce an efficient method for scanning resolutions in one such method. Additionally, we introduce the notion of "significance" of a partition, based on subgraph probabilities. Significance is independent of the exact method used, so it could also be applied in other methods, and it can be interpreted as the gain in encoding a graph by making use of a partition. Using significance, we can determine "good" resolution parameters, which we demonstrate on benchmark networks. Moreover, optimizing significance itself also shows excellent performance. We demonstrate our method on voting data from the European Parliament. Our analysis suggests the European Parliament has become increasingly ideologically divided and that nationality plays no role.
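
    For context, the resolution-dependent modularity that multi-resolution methods scan over can be written and evaluated directly; the small two-clique example below is illustrative, and the paper's significance measure itself is not implemented here.

```python
# Resolution-dependent (Reichardt-Bornholdt) modularity of a given partition.
import numpy as np

def modularity(A, labels, gamma=1.0):
    """Q(gamma) = (1/2m) * sum_ij [A_ij - gamma*k_i*k_j/(2m)] * delta(c_i, c_j)."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)
    two_m = A.sum()
    same = np.equal.outer(labels, labels)
    return float(np.sum((A - gamma * np.outer(k, k) / two_m) * same) / two_m)

# Two triangles joined by a single edge, split into the obvious two communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
labels = np.array([0, 0, 0, 1, 1, 1])

for gamma in (0.5, 1.0, 2.0):     # larger gamma favours smaller communities
    print(gamma, round(modularity(A, labels, gamma), 3))
```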

  8. Design and Performance of Insect-Scale Flapping-Wing Vehicles

    NASA Astrophysics Data System (ADS)

    Whitney, John Peter

    Micro-air vehicles (MAVs), small versions of full-scale aircraft, are the product of a continued path of miniaturization which extends across many fields of engineering. Increasingly, MAVs approach the scale of small birds, and most recently, their sizes have dipped into the realm of hummingbirds and flying insects. However, these non-traditional, biologically inspired designs lack well-established design methods, and manufacturing complex devices at these tiny scales is not feasible using conventional manufacturing methods. This thesis presents a comprehensive investigation of new MAV design and manufacturing methods, as applicable to insect-scale hovering flight. The new design methods combine an energy-based accounting of propulsion and aerodynamics with a one-degree-of-freedom dynamic flapping model. Important results include analytical expressions for maximum flight endurance and range, and predictions for maximum feasible wing size and body mass. To meet manufacturing constraints, the use of passive wing dynamics to simplify vehicle design and control was investigated; supporting tests included the first synchronized measurements of real-time forces and three-dimensional kinematics generated by insect-scale flapping wings. These experimental methods were then expanded to study optimal wing shapes and high-efficiency flapping kinematics. To support the development of high-fidelity test devices and fully functional flight hardware, a new class of manufacturing methods was developed, combining elements of rigid-flex printed circuit board fabrication with "pop-up book" folding mechanisms. In addition to their current and future support of insect-scale MAV development, these new manufacturing techniques are likely to prove an essential element of future advances in micro-optomechanics, micro-surgery, and many other fields.

  9. Extending Quantum Chemistry of Bound States to Electronic Resonances

    NASA Astrophysics Data System (ADS)

    Jagau, Thomas-C.; Bravaya, Ksenia B.; Krylov, Anna I.

    2017-05-01

    Electronic resonances are metastable states with finite lifetime embedded in the ionization or detachment continuum. They are ubiquitous in chemistry, physics, and biology. Resonances play a central role in processes as diverse as DNA radiolysis, plasmonic catalysis, and attosecond spectroscopy. This review describes novel equation-of-motion coupled-cluster (EOM-CC) methods designed to treat resonances and bound states on an equal footing. Built on complex-variable techniques such as complex scaling and complex absorbing potentials that allow resonances to be associated with a single eigenstate of the molecular Hamiltonian rather than several continuum eigenstates, these methods extend electronic-structure tools developed for bound states to electronic resonances. Selected examples emphasize the formal advantages as well as the numerical accuracy of EOM-CC in the treatment of electronic resonances. Connections to experimental observables such as spectra and cross sections, as well as practical aspects of implementing complex-valued approaches, are also discussed.
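
    For orientation, the standard complex-scaling and complex-absorbing-potential relations that such methods build on can be stated compactly; the equations below are textbook forms quoted for context, not results from this review.

```latex
% Textbook complex-scaling relations (for orientation only).
% Scaling r -> r e^{i\theta} leaves bound-state energies real, rotates the
% continuum by -2\theta, and exposes a resonance as an isolated complex
% eigenvalue with position E_r and width \Gamma:
\begin{align}
  \hat{H}(\theta) &= e^{-2i\theta}\,\hat{T} + \hat{V}\!\left(r\,e^{i\theta}\right), \\
  E_{\mathrm{res}} &= E_r - \tfrac{i}{2}\,\Gamma .
\end{align}
% With a complex absorbing potential one instead diagonalizes
% \hat{H}(\eta) = \hat{H} - i\eta\,\hat{W} and follows the eigenvalue's
% \eta-trajectory to identify the resonance.
```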

  10. Isoelectric focusing of small non-covalent metal species from plants.

    PubMed

    Köster, Jessica; Hayen, Heiko; von Wirén, Nicolaus; Weber, Günther

    2011-03-01

    IEF is known as a powerful electrophoretic separation technique for amphoteric molecules, in particular for proteins. The objective of the present work is to prove the suitability of IEF also for the separation of small, non-covalent metal species. Investigations are performed with copper-glutathione complexes, with the synthetic ligand ethylenediamine-N,N'-bis(o-hydroxyphenyl)acetic acid (EDDHA) and its metal complexes (Fe, Ga, Al, Ni, Zn), and with the phytosiderophore 2'-deoxymugineic acid (DMA) and its ferric complex. It is shown that EDDHA and DMA species are stable during preparative-scale IEF, whereas copper-glutathione dissociates considerably. It is also shown that preparative-scale IEF can be applied successfully to isolate ferric DMA from real plant samples, and that multidimensional separations are possible by combining preparative-scale IEF with subsequent HPLC-MS analysis. Focusing of free ligands and the respective metal complexes with di- and trivalent metals results in different pIs, but CIEF is usually needed for a reliable estimation of pI values. Limitations of the proposed methods (preparative IEF and CIEF) and the consequences of the results with respect to metal speciation in plants are discussed.

  11. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
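
    The response-surface idea for parameter screening can be sketched as follows (a simplified, assumed reading of the procedure, not the authors' exact implementation): fit a first-order regression of a model output against coded parameter levels from a two-level factorial design and rank parameters by the magnitude of their regression coefficients.

        import itertools
        import numpy as np

        def screen_parameters(model, n_params):
            """model: callable mapping a vector of coded levels (-1/+1) to a scalar output."""
            levels = np.array(list(itertools.product([-1.0, 1.0], repeat=n_params)))
            y = np.array([model(x) for x in levels])
            X = np.hstack([np.ones((len(levels), 1)), levels])  # intercept + main effects
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            effects = np.abs(coef[1:])
            return np.argsort(effects)[::-1], effects           # most influential first

        # Hypothetical 3-parameter "plant model" used only for illustration.
        toy = lambda x: 2.0 * x[0] + 0.1 * x[1] - 0.5 * x[2]
        order, effects = screen_parameters(toy, 3)
        print(order, effects)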

  12. A new fabrication technique for complex refractive micro-optical systems

    NASA Astrophysics Data System (ADS)

    Tormen, Massimo; Carpentiero, Alessandro; Ferrari, Enrico; Cabrini, Stefano; Cojoc, Dan; Di Fabrizio, Enzo

    2006-01-01

    We present a new method that allows us to fabricate structures with tightly controlled three-dimensional profiles in the 10 nm to 100 μm scale range. This consists of a sequence of lithographic steps such as Electron Beam (EB) or Focused Ion Beam (FIB) lithography, alternated with isotropic wet etching processes performed on a quartz substrate. Morphological characterization by SEM and AFM shows that 3D structures with very accurate shape control and nanometer scale surface roughness can be realized. Quartz templates have been employed as complex systems of micromirrors after metal coating of the patterned surface or used as stamps in nanoimprint, hot embossing or casting processes to shape complex plastic elements. Compared to other 3D micro and nanostructuring methods, in which a hard material is directly "sculptured" by energetic beams, our technique requires much less intensive use of expensive lithographic equipment, for comparable volumes of structured material, resulting in a dramatic increase in throughput. Refractive micro-optical elements have been fabricated and characterized in transmission and reflection modes with white and monochromatic light. The elements produce a distribution of sharp focal spots and lines in three-dimensional space, opening the route to applications in image reconstruction based on refractive optics.

  13. Analysis of the structure of complex networks at different resolution levels

    NASA Astrophysics Data System (ADS)

    Arenas, A.; Fernández, A.; Gómez, S.

    2008-05-01

    Modular structure is ubiquitous in real-world complex networks, and its detection is important because it gives insights into the structure-functionality relationship. The standard approach is based on the optimization of a quality function, modularity, which is a relative quality measure for the partition of a network into modules. Recently, some authors (Fortunato and Barthélemy 2007 Proc. Natl Acad. Sci. USA 104 36 and Kumpula et al 2007 Eur. Phys. J. B 56 41) have pointed out that the optimization of modularity has a fundamental drawback: the existence of a resolution limit beyond which no modular structure can be detected even though these modules might have their own entity. The reason is that several topological descriptions of the network coexist at different scales, which is, in general, a fingerprint of complex systems. Here, we propose a method that allows for multiple resolution screening of the modular structure. The method has been validated using synthetic networks, discovering the predefined structures at all scales. Its application to two real social networks allows us to find the exact splits reported in the literature, as well as the substructure beyond the actual split.
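
    The multi-resolution screening idea can be illustrated by sweeping a resolution parameter in a modularity-based community detection routine. The sketch below uses networkx's Louvain implementation (networkx >= 2.8) rather than the authors' own formulation, so it is only an approximation of the screening procedure described above.

        import networkx as nx
        from networkx.algorithms.community import louvain_communities

        G = nx.karate_club_graph()
        # Sweep the resolution parameter and record how many modules survive at each scale.
        for r in [0.2, 0.5, 1.0, 2.0, 4.0]:
            parts = louvain_communities(G, resolution=r, seed=1)
            print(f"resolution={r:4.1f}  modules={len(parts)}  sizes={sorted(map(len, parts))}")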

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, in Phillips et al. [Phys. Rev. E 91, 023311 (2015)], we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. Additionally, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of superconducting materials can be understood in greater detail than previously possible.

  15. Cracks dynamics under tensional stress - a DEM approach

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech; Klejment, Piotr; Kosmala, Alicja; Foltyn, Natalia; Szpindler, Maciej

    2017-04-01

    Breaking and fragmentation of solid materials is an extremely complex process involving scales ranging from the atomic scale (breaking inter-atomic bonds) up to thousands of kilometers in the case of catastrophic earthquakes (in energy it ranges from single eV up to 10^24 J). Such a large span of scales in breaking processes opens many questions, for example the scaling of breaking processes, the existence of factors controlling the final size of the broken area, the existence of precursors, and the dynamics of fragmentation, to name a few. The classical approach to studying breaking processes at seismological scales, i.e., physical processes in earthquake foci, is essentially based on two factors: seismic data (mostly) and continuum mechanics (including linear fracture mechanics). This approach has been remarkably successful in developing kinematic (first) and dynamic (more recently) models of seismic rupture and explaining many earthquake features observed around the globe. However, it will sooner or later face a limitation due to the limited information content of seismic data and the inherent limitations of fracture-mechanics principles. A way of avoiding this expected limitation is to turn towards computational simulation, a well-established and powerful branch of contemporary physics. In this presentation we discuss preliminary results of an analysis of fracturing dynamics under external tensional forces using the Discrete Element Method approach. We demonstrate that even under very simplified tensional conditions, the fragmentation dynamics is a very complex process, including multi-fracturing, spontaneous fracture generation and healing, etc. We also emphasize the role of material heterogeneity in the fragmentation process.

  16. Scale-Dependent Rates of Uranyl Surface Complexation Reaction in Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Chongxuan; Shang, Jianying; Kerisit, Sebastien N.

    Scale-dependency of uranyl [U(VI)] surface complexation rates was investigated in stirred flow-cell and column systems using a U(VI)-contaminated sediment from the US Department of Energy, Hanford site, WA. The experimental results were used to estimate the apparent rate of U(VI) surface complexation at the grain-scale and in porous media. Numerical simulations using molecular, pore-scale, and continuum models were performed to provide insights into and to estimate the rate constants of U(VI) surface complexation at the different scales. The results showed that the grain-scale rate constant of U(VI) surface complexation was over 3 to 10 orders of magnitude smaller, dependent on the temporal scale, than the rate constant calculated using the molecular simulations. The grain-scale rate was faster initially and slower with time, showing the temporal scale-dependency. The largest rate constant at the grain-scale decreased by an additional 2 orders of magnitude when the rate was scaled to the porous media in the column. The scaling effect from the grain-scale to the porous media became less important for the slower sorption sites. Pore-scale simulations revealed the importance of coupled mass transport and reactions in both intragranular and inter-granular domains, which caused both spatial and temporal dependence of U(VI) surface complexation rates in the sediment. Pore-scale simulations also revealed a new rate-limiting mechanism in the intragranular porous domains: the rate of the coupled diffusion and surface complexation reaction was slower than either process alone. The results have important implications for developing models to scale geochemical/biogeochemical reactions.

  17. Cost and time-effective method for multi-scale measures of rugosity, fractal dimension, and vector dispersion from coral reef 3D models

    PubMed Central

    Dey, S.

    2017-01-01

    We present a method to construct and analyse 3D models of underwater scenes using a single cost-effective camera on a standard laptop with (a) free or low-cost software, (b) no computer programming ability, and (c) minimal man hours for both filming and analysis. This study focuses on four key structural complexity metrics: point-to-point distances, linear rugosity (R), fractal dimension (D), and vector dispersion (1/k). We present the first assessment of accuracy and precision of structure-from-motion (SfM) 3D models from an uncalibrated GoPro™ camera at a small scale (4 m²) and show that they can provide meaningful, ecologically relevant results. Models had root mean square errors of 1.48 cm in X-Y and 1.35 cm in Z, and accuracies of 86.8% (R), 99.6% (D at scales 30–60 cm), 93.6% (D at scales 1–5 cm), and 86.9% (1/k). Values of R were compared to in-situ chain-and-tape measurements, while values of D and 1/k were compared with ground truths from 3D printed objects modelled underwater. All metrics varied less than 3% between independently rendered models. We thereby improve and rigorously validate a tool for ecologists to non-invasively quantify coral reef structural complexity with a variety of multi-scale metrics. PMID:28406937
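
    Of the metrics above, linear rugosity is the simplest to illustrate: it is the along-surface contour length of a transect divided by its straight-line (chain-and-tape) length. The sketch below computes R for a hypothetical transect profile; it is illustrative only and not the authors' analysis pipeline.

        import numpy as np

        def linear_rugosity(x, z):
            dx, dz = np.diff(x), np.diff(z)
            contour = np.sum(np.sqrt(dx**2 + dz**2))  # along-surface length
            straight = abs(x[-1] - x[0])              # planar end-to-end distance
            return contour / straight

        x = np.linspace(0.0, 4.0, 400)                # 4 m transect, hypothetical profile
        z = 0.15 * np.sin(8 * np.pi * x)
        print(linear_rugosity(x, z))                  # R >= 1; a flat surface gives exactly 1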

  18. Vertical Scale Height of the Topside Ionosphere Around the Korean Peninsula: Estimates from Ionosondes and the Swarm Constellation

    NASA Astrophysics Data System (ADS)

    Park, Jaeheung; Kwak, Young-Sil; Mun, Jun-Chul; Min, Kyoung-Wook

    2015-12-01

    In this study, we estimated the topside scale height of plasma density (Hm) using the Swarm constellation and ionosondes in Korea. The Hm above the Korean Peninsula is generally around 50 km. Statistical distributions of the topside scale height exhibited a complex dependence upon local time and season. The results were in general agreement with those of Tulasi Ram et al. (2009), who used the same method to calculate the topside scale height in a mid-latitude region. In contrast, our results did not fully coincide with those obtained by Liu et al. (2007), who used electron density profiles from Arecibo Incoherent Scatter Radar (ISR) between 1966 and 2002. The disagreement may result from the limitations of our approximation method and of the data coverage used for the estimates, as well as the inherent dependence of Hm on Geographic LONgitude (GLON).
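
    A minimal sketch of one way such an estimate can be formed, assuming an exponential topside profile Ne(h) = NmF2 * exp(-(h - hmF2)/Hm): combining an ionosonde F2-peak measurement with an in-situ satellite density yields one scale-height value per conjunction. The profile assumption and the numbers below are illustrative, not the paper's exact approximation.

        import math

        def topside_scale_height(NmF2, hmF2_km, Ne_sat, h_sat_km):
            """Scale height (km) under an assumed exponential topside profile."""
            return (h_sat_km - hmF2_km) / math.log(NmF2 / Ne_sat)

        # e.g. peak density 1.0e12 m^-3 at 300 km, satellite density 1.0e11 m^-3 at 460 km
        print(topside_scale_height(1.0e12, 300.0, 1.0e11, 460.0))  # ~69 km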

  19. Sea-land segmentation for infrared remote sensing images based on superpixels and multi-scale features

    NASA Astrophysics Data System (ADS)

    Lei, Sen; Zou, Zhengxia; Liu, Dunge; Xia, Zhenghuan; Shi, Zhenwei

    2018-06-01

    Sea-land segmentation is a key step for the information processing of ocean remote sensing images. Traditional sea-land segmentation algorithms ignore the local similarity prior of sea and land, and thus fail in complex scenarios. In this paper, we propose a new sea-land segmentation method for infrared remote sensing images based on superpixels and multi-scale features to tackle this problem. Considering the connectivity and local similarity of sea or land, we interpret the sea-land segmentation task in terms of superpixels rather than pixels, where similar pixels are clustered and the local similarity is explored. Moreover, the multi-scale features are elaborately designed, comprising a gray histogram and multi-scale total variation. Experimental results on infrared bands of Landsat-8 satellite images demonstrate that the proposed method can obtain more accurate and more robust sea-land segmentation results than the traditional algorithms.
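
    A hedged sketch of the superpixel-plus-features idea (not the paper's exact feature set): cluster a single-band image into SLIC superpixels and build a per-superpixel gray histogram that a downstream sea/land classifier could consume. Assumes scikit-image >= 0.19 for the channel_axis argument; the input image here is a random stand-in.

        import numpy as np
        from skimage.segmentation import slic

        def superpixel_histograms(image, n_segments=500, bins=16):
            labels = slic(image, n_segments=n_segments, channel_axis=None, start_label=0)
            feats = np.zeros((labels.max() + 1, bins))
            for lab in range(labels.max() + 1):
                hist, _ = np.histogram(image[labels == lab], bins=bins, range=(0.0, 1.0))
                feats[lab] = hist / max(hist.sum(), 1)  # normalized gray histogram per superpixel
            return labels, feats

        img = np.random.default_rng(0).random((256, 256))  # stand-in for an infrared band
        labels, feats = superpixel_histograms(img)
        print(labels.shape, feats.shape)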

  20. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    NASA Astrophysics Data System (ADS)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed by combining local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and co-evolution of demand nodes and the network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. By contrast, the improvement of efficiency by engineering optimization is limited and relatively insignificant. The redundancy and robustness, on the other hand, can be significantly improved through engineering methods.

  1. Atomic switch networks—nanoarchitectonic design of a complex system for natural computing

    NASA Astrophysics Data System (ADS)

    Demis, E. C.; Aguilera, R.; Sillin, H. O.; Scharnhorst, K.; Sandouk, E. J.; Aono, M.; Stieg, A. Z.; Gimzewski, J. K.

    2015-05-01

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  2. Atomic switch networks-nanoarchitectonic design of a complex system for natural computing.

    PubMed

    Demis, E C; Aguilera, R; Sillin, H O; Scharnhorst, K; Sandouk, E J; Aono, M; Stieg, A Z; Gimzewski, J K

    2015-05-22

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing-a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  3. Foot-ankle complex injury risk curves using calcaneus bone mineral density data.

    PubMed

    Yoganandan, Narayan; Chirvi, Sajal; Voo, Liming; DeVogel, Nicholas; Pintar, Frank A; Banerjee, Anjishnu

    2017-08-01

    Biomechanical data from post mortem human subject (PMHS) experiments are used to derive human injury probability curves and develop injury criteria. This process has been used in previous and current automotive crashworthiness studies, Federal safety standards, and dummy design and development. Human bone strength decreases as individuals reach old age. Injury risk curves using the primary predictor variable (e.g., force) should therefore account for such strength reduction when the test data are collected from PMHS specimens of different ages (age at the time of death). This demographic variable is meant to be a surrogate for fracture, often representing bone strength as other parameters have not been routinely gathered in previous experiments. However, bone mineral densities (BMD) can be gathered from tested specimens (presented in this manuscript). The objective of this study is to investigate different approaches to accounting for BMD in the development of human injury risk curves. Using simulated underbody blast (UBB) loading experiments conducted with the PMHS lower leg-foot-ankle complexes, a comparison is made between the two methods: treating BMD as a covariate and pre-scaling test data based on BMD. Twelve PMHS lower leg-foot-ankle specimens were subjected to UBB loads. Calcaneus BMD was obtained from quantitative computed tomography (QCT) images. Fracture forces were recorded using a load cell. They were treated as uncensored data in the survival analysis model which used the Weibull distribution in both methods. The width of the normalized confidence interval (NCIS) was obtained using the mean and ± 95% confidence limit curves. The mean peak forces of 3.9 kN and 8.6 kN were associated with the 5% and 50% probability of injury for the covariate method of deriving the risk curve for the reference age of 45 years. The mean forces of 5.4 kN and 9.2 kN were associated with the 5% and 50% probability of injury for the pre-scaled method. The NCIS magnitudes were greater in the covariate-based risk curves (0.52-1.00) than in the risk curves based on the pre-scaled method (0.24-0.66). The pre-scaling method resulted in a generally greater injury force and a tighter injury risk curve confidence interval. Although not directly applicable to the foot-ankle fractures, when compared with the use of spine BMD from QCT scans to pre-scale the force, the calcaneus BMD scaled data produced greater force at the same risk level in general. Pre-scaling the force data using BMD is an alternative, and likely more accurate, method than using a covariate to account for the age-related bone strength change in deriving risk curves from biomechanical experiments using PMHS. Because of the proximity of the calcaneus bone to the impacting load, it is suggested that the BMD of the foot-ankle bones be determined and used in future UBB and other loading conditions to derive human injury probability curves for the foot-ankle complex. Copyright © 2017. Published by Elsevier Ltd.
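
    A hedged sketch of the pre-scaling approach described above: normalize each specimen's fracture force by its calcaneus BMD relative to a reference BMD, fit a two-parameter Weibull distribution to the scaled (uncensored) forces, and read off the forces at 5% and 50% injury probability. All numbers below are hypothetical, not the study's data.

        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(0)
        force_kN = rng.normal(9.0, 2.0, 12)        # hypothetical peak fracture forces
        bmd = rng.normal(150.0, 20.0, 12)          # hypothetical calcaneus BMD values
        scaled = force_kN * (bmd.mean() / bmd)     # pre-scale each force to the sample-mean BMD

        shape, loc, scale = weibull_min.fit(scaled, floc=0.0)  # two-parameter Weibull fit
        for p in (0.05, 0.50):
            print(f"{int(p * 100)}% injury risk at ~{weibull_min.ppf(p, shape, loc, scale):.1f} kN")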

  4. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

    Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that are frequently described by long memory or a power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. For quantifying scale invariant parameters of physiological signals several methods have been proposed. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and recently in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride-interval data sets from human gait measurement time series (Physio-Bank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed and scale-free trends for limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying noise content. The possibility that the method falsely detects long-range dependence in artificially generated short-range dependent series was also investigated. (c) 2009 Elsevier B.V. All rights reserved.
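
    A minimal sketch of the embedding step referred to above, i.e., mapping a scalar series into delay vectors in a pseudo-phase space (Takens-style delay embedding); the dimension and lag chosen here are illustrative, not the paper's settings.

        import numpy as np

        def delay_embed(x, dim=5, lag=2):
            """Return an (N - (dim - 1) * lag, dim) matrix of delay vectors."""
            n = len(x) - (dim - 1) * lag
            return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

        # Toy long-memory-like series (an integrated noise process) used only for illustration.
        x = np.cumsum(np.random.default_rng(0).standard_normal(2000))
        emb = delay_embed(x, dim=5, lag=2)
        print(emb.shape)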

  5. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685

  6. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
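
    The sub-linear nearest-neighbor idea can be sketched with a k-d tree over feature descriptors. This is only an illustration of the scaling argument, not the paper's indexing scheme: the descriptor set is synthetic, and in high dimensions a purpose-built approximate index would normally be preferred over a plain k-d tree.

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(0)
        descriptors = rng.standard_normal((50_000, 64))  # hypothetical 64-D feature descriptors
        tree = cKDTree(descriptors)                      # build the index once

        query = rng.standard_normal((5, 64))
        # eps > 0 allows approximate matches, trading a little accuracy for query speed.
        dist, idx = tree.query(query, k=3, eps=1.0)
        print(idx)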

  7. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    PubMed

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method to relieve researchers from deciphering complex gene-reaction associations by adding pseudo gene controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system named gModel. We showed that gModel allows two seldom reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel could be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.

  8. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems, thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.

  9. Coupling lattice Boltzmann and continuum equations for flow and reactive transport in porous media.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coon, Ethan; Porter, Mark L.; Kang, Qinjun

    2012-06-18

    In spatially and temporally localized instances, capturing sub-reservoir scale information is necessary. Capturing sub-reservoir scale information everywhere is neither necessary nor computationally possible. The lattice Boltzmann method (LBM) is used for solving pore-scale systems. At the pore-scale, LBM provides an extremely scalable, efficient way of solving Navier-Stokes equations on complex geometries. Pore-scale and continuum-scale systems are coupled via domain decomposition. By leveraging the interpolations implied by pore-scale and continuum-scale discretizations, overlapping Schwarz domain decomposition is used to ensure continuity of pressure and flux. This approach is demonstrated on a fractured medium, in which Navier-Stokes equations are solved within the fracture while Darcy's equation is solved away from the fracture. Coupling reactive transport to pore-scale flow simulators allows hybrid approaches to be extended to solve multi-scale reactive transport.

  10. Efficient electronic structure theory via hierarchical scale-adaptive coupled-cluster formalism: I. Theory and computational complexity analysis

    NASA Astrophysics Data System (ADS)

    Lyakh, Dmitry I.

    2018-03-01

    A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H2-matrix algebra and fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(NlogN) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher order coupled-cluster equations, i.e. equations involving higher than double excitations, which otherwise would introduce a large prefactor into formal O(NlogN) scaling.

  11. Scaling and characterisation of a 2-DoF velocity amplified electromagnetic vibration energy harvester

    NASA Astrophysics Data System (ADS)

    O’Donoghue, D.; Frizzell, R.; Punch, J.

    2018-07-01

    Vibration energy harvesters (VEHs) offer an alternative to batteries for the autonomous operation of low-power electronics. Understanding the influence of scaling on VEHs is of great importance in the design of reduced scale harvesters. The nonlinear harvesters investigated here employ velocity amplification, a technique used to increase velocity through impacts, to improve the power output of multiple-degree-of-freedom VEHs, compared to linear resonators. Such harvesters, employing electromagnetic induction, are referred to as velocity amplified electromagnetic generators (VAEGs), with gains in power achieved by increasing the relative velocity between the magnet and coil in the transducer. The influence of scaling on a nonlinear 2-DoF VAEG is presented. Due to the increased complexity of VAEGs, compared to linear systems, linear scaling theory cannot be directly applied to VAEGs. Therefore, a detailed nonlinear scaling method is utilised. Experimental and numerical methods are employed. This nonlinear scaling method can be used for analysing the scaling behaviour of all nonlinear electromagnetic VEHs. It is demonstrated that the electromagnetic coupling coefficient degrades more rapidly with scale for systems with larger displacement amplitudes, meaning that systems operating at low frequencies will scale poorly compared to those operating at higher frequencies. The load power of the 2-DoF VAEG is predicted to scale as P_L ∝ s^5.51 (where s = volume^(1/3)), suggesting that achieving high power densities in a VAEG with low device volume is extremely challenging.

  12. Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.

    1989-01-01

    The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  13. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
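
    A hedged sketch of this kind of solver comparison on a small test system, assuming the pyamg package is available: build a classical (Ruge-Stuben) AMG hierarchy for a Poisson-like sparse matrix and compare its solution against plain conjugate gradients. This is not the MODFLOW/PCG2 setup described above, only an illustration of the AMG-versus-Krylov comparison.

        import numpy as np
        import pyamg
        from scipy.sparse.linalg import cg

        A = pyamg.gallery.poisson((200, 200), format='csr')  # 40,000-unknown test matrix
        b = np.ones(A.shape[0])

        ml = pyamg.ruge_stuben_solver(A)                     # build the AMG hierarchy
        x_amg = ml.solve(b, tol=1e-8)

        x_cg, info = cg(A, b)                                # unpreconditioned CG baseline
        print(np.linalg.norm(A @ x_amg - b), np.linalg.norm(A @ x_cg - b), info)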

  14. A decision tree algorithm for investigation of model biases related to dynamical cores and physical parameterizations: CESM/CAM EVALUATION BY DECISION TREES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soner Yorgun, M.; Rood, Richard B.

    An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores with varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of the CAM Eulerian spectral dynamical core is prominent, and is an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, both in horizontal and vertical dimensions, have a significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study about the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly mean) analysis over local scales.

  15. A decision tree algorithm for investigation of model biases related to dynamical cores and physical parameterizations: CESM/CAM EVALUATION BY DECISION TREES

    DOE PAGES

    Soner Yorgun, M.; Rood, Richard B.

    2016-11-11

    An object-based evaluation method using a pattern recognition algorithm (i.e., classification trees) is applied to the simulated orographic precipitation for idealized experimental setups using the National Center for Atmospheric Research (NCAR) Community Atmosphere Model (CAM) with the finite volume (FV) and the Eulerian spectral transform dynamical cores with varying resolutions. Daily simulations were analyzed and three different types of precipitation features were identified by the classification tree algorithm. The statistical characteristics of these features (i.e., maximum value, mean value, and variance) were calculated to quantify the difference between the dynamical cores and changing resolutions. Even with the simple and smooth topography in the idealized setups, complexity in the precipitation fields simulated by the models develops quickly. The classification tree algorithm using objective thresholding successfully detected different types of precipitation features even as the complexity of the precipitation field increased. The results show that the complexity and the bias introduced in small-scale phenomena due to the spectral transform method of the CAM Eulerian spectral dynamical core is prominent, and is an important reason for its dissimilarity from the FV dynamical core. The resolvable scales, both in horizontal and vertical dimensions, have a significant effect on the simulation of precipitation. The results of this study also suggest that an efficient and informative study about the biases produced by GCMs should involve daily (or even hourly) output (rather than monthly mean) analysis over local scales.
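
    For illustration only, the classification-tree step can be sketched as follows: a tree trained on simple per-feature statistics (maximum, mean, variance) of detected precipitation objects. The three class names and the synthetic data are hypothetical and do not reproduce the study's feature types or thresholds.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        # Columns: [max precip, mean precip, variance]; rows: detected precipitation objects.
        X = np.vstack([rng.normal([20.0, 5.0, 4.0], 1.0, (50, 3)),   # hypothetical feature type A
                       rng.normal([8.0, 3.0, 1.0], 1.0, (50, 3)),    # hypothetical feature type B
                       rng.normal([3.0, 1.0, 0.5], 0.5, (50, 3))])   # hypothetical feature type C
        y = np.repeat([0, 1, 2], 50)

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(clf.score(X, y))  # training accuracy of the toy tree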

  16. Laser jetting of femto-liter metal droplets for high resolution 3D printed structures

    NASA Astrophysics Data System (ADS)

    Zenou, M.; Sa'Ar, A.; Kotler, Z.

    2015-11-01

    Laser induced forward transfer (LIFT) is employed in a special, high accuracy jetting regime, by adequately matching the sub-nanosecond pulse duration to the metal donor layer thickness. Under such conditions, an effective solid nozzle is formed, providing stability and directionality to the femto-liter droplets which are printed from a large gap in excess of 400 μm. We illustrate the wide applicability of this method by printing several 3D metal objects. First, very high aspect ratio (A/R > 20), micron-scale copper pillars in various configurations, upright and arbitrarily bent; then a micron-scale 3D object composed of gold and copper. Such a digital printing method could serve to generate complex, multi-material, micron-scale, 3D materials and novel structures.

  17. Multiscale structure of time series revealed by the monotony spectrum.

    PubMed

    Vamoş, Călin

    2017-03-01

    Observation of complex systems produces time series with specific dynamics at different time scales. The majority of the existing numerical methods for multiscale analysis first decompose the time series into several simpler components and the multiscale structure is given by the properties of their components. We present a numerical method which describes the multiscale structure of arbitrary time series without decomposing them. It is based on the monotony spectrum defined as the variation of the mean amplitude of the monotonic segments with respect to the mean local time scale during successive averagings of the time series, the local time scales being the durations of the monotonic segments. The maxima of the monotony spectrum indicate the time scales which dominate the variations of the time series. We show that the monotony spectrum can correctly analyze a diversity of artificial time series and can discriminate the existence of deterministic variations at large time scales from the random fluctuations. As an application we analyze the multifractal structure of some hydrological time series.
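
    A rough sketch of the idea described above (not the author's exact algorithm): split a series into monotonic segments, record the mean segment amplitude and mean segment duration, then repeat after successive moving-average smoothings, so that the dominant time scales emerge as the smoothing window grows.

        import numpy as np

        def monotonic_stats(x):
            d = np.diff(x)
            # Segment boundaries where the sign of the increment changes (local extrema).
            breaks = np.where(np.sign(d[1:]) != np.sign(d[:-1]))[0] + 1
            edges = np.concatenate(([0], breaks, [len(x) - 1]))
            amps = [abs(x[b] - x[a]) for a, b in zip(edges[:-1], edges[1:])]
            durs = [b - a for a, b in zip(edges[:-1], edges[1:])]
            return np.mean(durs), np.mean(amps)

        x = np.cumsum(np.random.default_rng(0).standard_normal(5000))  # toy series
        for _ in range(6):                                             # successive averagings
            scale, amp = monotonic_stats(x)
            print(f"mean local time scale ~{scale:5.1f}  mean amplitude ~{amp:6.2f}")
            x = np.convolve(x, np.ones(3) / 3, mode='valid')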

  18. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin's own 10 petaflops Stampede system, ANL's Mira system, and ORNL's Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.

  19. Hybrid finite-volume/transported PDF method for the simulation of turbulent reactive flows

    NASA Astrophysics Data System (ADS)

    Raman, Venkatramanan

    A novel computational scheme is formulated for simulating turbulent reactive flows in complex geometries with detailed chemical kinetics. A Probability Density Function (PDF) based method that handles the scalar transport equation is coupled with an existing Finite Volume (FV) Reynolds-Averaged Navier-Stokes (RANS) flow solver. The PDF formulation leads to closed chemical source terms and facilitates the use of detailed chemical mechanisms without approximations. The particle-based PDF scheme is modified to handle complex geometries and grid structures. Grid-independent particle evolution schemes that scale linearly with the problem size are implemented in the Monte-Carlo PDF solver. A novel algorithm, in situ adaptive tabulation (ISAT) is employed to ensure tractability of complex chemistry involving a multitude of species. Several non-reacting test cases are performed to ascertain the efficiency and accuracy of the method. Simulation results from a turbulent jet-diffusion flame case are compared against experimental data. The effect of micromixing model, turbulence model and reaction scheme on flame predictions are discussed extensively. Finally, the method is used to analyze the Dow Chlorination Reactor. Detailed kinetics involving 37 species and 158 reactions as well as a reduced form with 16 species and 21 reactions are used. The effect of inlet configuration on reactor behavior and product distribution is analyzed. Plant-scale reactors exhibit quenching phenomena that cannot be reproduced by conventional simulation methods. The FV-PDF method predicts quenching accurately and provides insight into the dynamics of the reactor near extinction. The accuracy of the fractional time-stepping technique in discussed in the context of apparent multiple-steady states observed in a non-premixed feed configuration of the chlorination reactor.

  20. Synaptic dynamics contribute to long-term single neuron response fluctuations.

    PubMed

    Reinartz, Sebastian; Biro, Istvan; Gal, Asaf; Giugliano, Michele; Marom, Shimon

    2014-01-01

    Firing rate variability at the single neuron level is characterized by long-memory processes and complex statistics over a wide range of time scales (from milliseconds up to several hours). Here, we focus on the contribution of the non-stationary efficacy of the ensemble of synapses activated in response to a given stimulus to single-neuron response variability. We present and validate a method tailored for controlled and specific long-term activation of a single cortical neuron in vitro via synaptic or antidromic stimulation, enabling a clear separation between two determinants of neuronal response variability: membrane excitability dynamics vs. synaptic dynamics. Applying this method we show that, within the range of physiological activation frequencies, the synaptic ensemble of a given neuron is a key contributor to the neuronal response variability, long-memory processes and complex statistics observed over extended time scales. Synaptic transmission dynamics affect response variability at stimulation rates substantially lower than those that drive excitability resources to fluctuate. Implications for network-embedded neurons are discussed.

  1. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total computational cost increases severely with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that of the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  2. Multiplexed genome engineering and genotyping methods: applications for synthetic biology and metabolic engineering.

    PubMed

    Wang, Harris H; Church, George M

    2011-01-01

    Engineering at the scale of whole genomes requires fundamentally new molecular biology tools. Recent advances in recombineering using synthetic oligonucleotides enable the rapid generation of mutants at high efficiency and specificity and can be implemented at the genome scale. With these techniques, libraries of mutants can be generated, from which individuals with functionally useful phenotypes can be isolated. Furthermore, populations of cells can be evolved in situ by directed evolution using complex pools of oligonucleotides. Here, we discuss ways to utilize these multiplexed genome engineering methods, with special emphasis on experimental design and implementation. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. In-chip direct laser writing of a centimeter-scale acoustic micromixer

    NASA Astrophysics Data System (ADS)

    van't Oever, Jorick; Spannenburg, Niels; Offerhaus, Herman; van den Ende, Dirk; Herek, Jennifer; Mugele, Frieder

    2015-04-01

    A centimeter-scale micromixer was fabricated by two-photon polymerization inside a closed microchannel using direct laser writing. The structure consists of a repeating pattern of 20 μm×20 μm×155 μm acrylate pillars and extends over 1.2 cm. Using external ultrasonic actuation, the micropillars locally induce streaming with flow speeds of 30 μm s-1. The fabrication method allows for large flexibility and more complex designs.

  4. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  5. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  6. Optimally Robust Redundancy Relations for Failure Detection in Uncertain Systems,

    DTIC Science & Technology

    1983-04-01

    particular applications. While the general methods provide the basis for what in principle should be a widely applicable failure detection methodology...modifications to this result which overcome them at no fundamental increase in complexity. 4.1 Scaling: A critical problem with the criteria of the preceding...criterion which takes scaling into account (45). As in (38), we can multiply the C. by positive scalars to take into account unequal weightings on

  7. Multifractals embedded in short time series: An unbiased estimation of probability moment

    NASA Astrophysics Data System (ADS)

    Qiu, Lu; Yang, Tianguang; Yin, Yanhua; Gu, Changgui; Yang, Huijie

    2016-12-01

    An exact estimation of probability moments is the basis for several essential concepts, such as the multifractals, the Tsallis entropy, and the transfer entropy. By means of approximation theory we propose a new method called factorial-moment-based estimation of probability moments. Theoretical prediction and computational results show that it can provide an unbiased estimation of the probability moments of continuous order. Calculations on a probability redistribution model verify that it can exactly extract multifractal behaviors from recordings of only several hundred points. Its power in monitoring the evolution of scaling behaviors is exemplified by two empirical cases, i.e., the gait time series for fast, normal, and slow trials of a healthy volunteer, and the closing price series for the Shanghai stock market. Using short time series of several hundred points, a comparison with well-established tools shows that its performance has significant advantages over the other methods. The factorial-moment-based estimation can correctly evaluate scaling behaviors in a scale range about three generations wider than the multifractal detrended fluctuation analysis and the basic estimation. The estimation of the partition function given by the wavelet transform modulus maxima method exhibits unacceptable fluctuations. Besides the scaling invariance considered in the present paper, the proposed factorial moment of continuous order has various other uses, such as finding nonextensive behaviors of a complex system and reconstructing the causality relationship network between elements of a complex system.

  8. Rapid and specific purification of Argonaute-small RNA complexes from crude cell lysates

    PubMed Central

    Flores-Jasso, C. Fabián; Salomon, William E.; Zamore, Phillip D.

    2013-01-01

    Small interfering RNAs (siRNAs) direct Argonaute proteins, the core components of the RNA-induced silencing complex (RISC), to cleave complementary target RNAs. Here, we describe a method to purify active RISC containing a single, unique small RNA guide sequence. We begin by capturing RISC using a complementary 2′-O-methyl oligonucleotide tethered to beads. Unlike other methods that capture RISC but do not allow its recovery, our strategy purifies active, soluble RISC in good yield. The method takes advantage of the finding that RISC partially paired to a target through its siRNA guide dissociates more than 300 times faster than a fully paired siRNA in RISC. We use this strategy to purify fly Ago1- and Ago2-RISC, as well as mouse AGO2-RISC. The method can discriminate among RISCs programmed with different guide strands, making it possible to deplete and recover specific RISC populations. Endogenous microRNA:Argonaute complexes can also be purified from cell lysates. Our method scales readily and takes less than a day to complete. PMID:23249751

  9. Rapid and specific purification of Argonaute-small RNA complexes from crude cell lysates.

    PubMed

    Flores-Jasso, C Fabián; Salomon, William E; Zamore, Phillip D

    2013-02-01

    Small interfering RNAs (siRNAs) direct Argonaute proteins, the core components of the RNA-induced silencing complex (RISC), to cleave complementary target RNAs. Here, we describe a method to purify active RISC containing a single, unique small RNA guide sequence. We begin by capturing RISC using a complementary 2'-O-methyl oligonucleotide tethered to beads. Unlike other methods that capture RISC but do not allow its recovery, our strategy purifies active, soluble RISC in good yield. The method takes advantage of the finding that RISC partially paired to a target through its siRNA guide dissociates more than 300 times faster than a fully paired siRNA in RISC. We use this strategy to purify fly Ago1- and Ago2-RISC, as well as mouse AGO2-RISC. The method can discriminate among RISCs programmed with different guide strands, making it possible to deplete and recover specific RISC populations. Endogenous microRNA:Argonaute complexes can also be purified from cell lysates. Our method scales readily and takes less than a day to complete.

  10. Scale relativity: from quantum mechanics to chaotic dynamics.

    NASA Astrophysics Data System (ADS)

    Nottale, L.

    Scale relativity is a new approach to the problem of the origin of fundamental scales and of scaling laws in physics, which consists in generalizing Einstein's principle of relativity to the case of scale transformations of resolutions. We recall here how it leads to the concept of fractal space-time and to the introduction of a new complex time-derivative operator, which allows one to recover the Schrödinger equation and then to generalize it. In high-energy quantum physics, it leads to the introduction of a Lorentzian renormalization group, in which the Planck length is reinterpreted as a lowest, unpassable scale, invariant under dilatations. These methods are successively applied to two problems: in quantum mechanics, that of the mass spectrum of elementary particles; in chaotic dynamics, that of the distribution of planets in the Solar System.

  11. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  12. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium's fine-scale heterogeneity and the source's frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  13. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    NASA Astrophysics Data System (ADS)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
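
    As a rough sketch of the graph step described above, the code below builds a weighted graph over images and extracts a maximum-relatedness spanning tree with Kruskal's algorithm. The relatedness values themselves would come from the k-d forest/pHash comparison and are assumed here as a precomputed dictionary; all names and numbers are illustrative only, and the tree-thinning step is omitted.

        def spanning_tree(n_images, relatedness):
            """Kruskal's algorithm on pairwise relatedness scores.

            relatedness: dict mapping (i, j) image-index pairs to a similarity score.
            Returns the edges of a maximum-relatedness spanning tree, which can serve
            as a planned sequence for adding images to an incremental SFM pipeline.
            """
            parent = list(range(n_images))

            def find(x):                       # union-find with path compression
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            tree = []
            # high relatedness first, so the kept edges form a maximum spanning tree
            for (i, j), score in sorted(relatedness.items(), key=lambda kv: -kv[1]):
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
                    tree.append((i, j, score))
                if len(tree) == n_images - 1:
                    break
            return tree

        # toy example with made-up relatedness values
        edges = {(0, 1): 0.9, (1, 2): 0.4, (0, 2): 0.7, (2, 3): 0.8, (1, 3): 0.2}
        print(spanning_tree(4, edges))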

  14. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Coles, T.; Spantini, A.

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure—e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties—e.g., whether a link is present in a network—and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How to construct dynamical models on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank-reduction in stochastic eigensystems—transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.

  15. From path models to commands during additive printing of large-scale architectural designs

    NASA Astrophysics Data System (ADS)

    Chepchurov, M. S.; Zhukov, E. M.; Yakovlev, E. A.; Matveykin, V. G.

    2018-05-01

    The article considers the problem of automating the formation of large, complex parts, products, and structures, especially for unique or small-batch objects produced by additive manufacturing [1]. Research into the optimal design of a robotic complex, its modes of operation, and the structure of its control system informed the technical requirements for the manufacturing process and the installation design of the robotic complex. Studies on virtual models of the robotic complex defined the main directions for design improvement and the main goal of testing the manufactured prototype: checking the positioning accuracy of the working part.

  16. Network complexity as a measure of information processing across resting-state networks: evidence from the Human Connectome Project

    PubMed Central

    McDonough, Ian M.; Nashiro, Kaoru

    2014-01-01

    An emerging field of research focused on fluctuations in brain signals has provided evidence that the complexity of those signals, as measured by entropy, conveys important information about network dynamics (e.g., local and distributed processing). While much research has focused on how neural complexity differs in populations with different age groups or clinical disorders, substantially less research has focused on the basic understanding of neural complexity in populations with young and healthy brain states. The present study used resting-state fMRI data from the Human Connectome Project (Van Essen et al., 2013) to test the extent that neural complexity in the BOLD signal, as measured by multiscale entropy, (1) would differ from random noise, (2) would differ between four major resting-state networks previously associated with higher-order cognition, and (3) would be associated with the strength and extent of functional connectivity—a complementary method of estimating information processing. We found that complexity in the BOLD signal exhibited patterns of complexity different from white, pink, and red noise and that neural complexity was differentially expressed between resting-state networks, including the default mode, cingulo-opercular, and left and right frontoparietal networks. Lastly, neural complexity across all networks was negatively associated with functional connectivity at fine scales, but was positively associated with functional connectivity at coarse scales. The present study is the first to characterize neural complexity in BOLD signals at a high temporal resolution and across different networks and might help clarify the inconsistencies between neural complexity and functional connectivity, thus informing the mechanisms underlying neural complexity. PMID:24959130
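
    Multiscale entropy, as used in the record above, coarse-grains a signal at successive scales and computes sample entropy at each scale. The sketch below is a generic implementation under common parameter choices (m = 2, r = 0.15 x SD), not the exact pipeline used in the study.

        import numpy as np

        def sample_entropy(x, m=2, r=0.15):
            """Sample entropy of a 1-D signal with tolerance r (in units of its SD)."""
            x = np.asarray(x, dtype=float)
            tol = r * x.std()
            n_templates = len(x) - m        # same number of templates for m and m + 1

            def matches(mm):
                emb = np.array([x[i:i + mm] for i in range(n_templates)])
                d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
                return (np.sum(d <= tol) - n_templates) / 2.0   # exclude self-matches

            b, a = matches(m), matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else float("inf")

        def multiscale_entropy(x, scales):
            """Coarse-grain x by non-overlapping averaging, then take sample entropy."""
            x = np.asarray(x, dtype=float)
            out = []
            for s in scales:
                n = len(x) // s
                coarse = x[:n * s].reshape(n, s).mean(axis=1)
                out.append(sample_entropy(coarse))
            return out

        print(multiscale_entropy(np.random.default_rng(1).standard_normal(1000), [1, 2, 4, 8]))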

  17. Generalizing the Network Scale-Up Method: A New Estimator for the Size of Hidden Populations*

    PubMed Central

    Feehan, Dennis M.; Salganik, Matthew J.

    2018-01-01

    The network scale-up method enables researchers to estimate the size of hidden populations, such as drug injectors and sex workers, using sampled social network data. The basic scale-up estimator offers advantages over other size estimation techniques, but it depends on problematic modeling assumptions. We propose a new generalized scale-up estimator that can be used in settings with non-random social mixing and imperfect awareness about membership in the hidden population. Further, the new estimator can be used when data are collected via complex sample designs and from incomplete sampling frames. However, the generalized scale-up estimator also requires data from two samples: one from the frame population and one from the hidden population. In some situations these data from the hidden population can be collected by adding a small number of questions to already planned studies. For other situations, we develop interpretable adjustment factors that can be applied to the basic scale-up estimator. We conclude with practical recommendations for the design and analysis of future studies. PMID:29375167
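
    The basic scale-up estimator referred to above has a simple closed form: the hidden-population size is estimated as N_H = N_F * (sum_i y_i) / (sum_i d_i), where y_i is the number of hidden-population members respondent i reports knowing, d_i is that respondent's personal network size, and N_F is the frame population size. A minimal sketch follows; the generalized estimator and its adjustment factors need additional data from the hidden population and are not reproduced here.

        def basic_scale_up(y, d, frame_population_size):
            """Basic network scale-up estimate of hidden-population size.

            y: reported numbers of hidden-population members known by each respondent
            d: estimated personal network sizes (degrees) of the same respondents
            """
            if len(y) != len(d):
                raise ValueError("y and d must come from the same respondents")
            return frame_population_size * sum(y) / sum(d)

        # toy numbers, for illustration only
        print(basic_scale_up(y=[2, 0, 1, 3], d=[300, 250, 400, 350],
                             frame_population_size=1_000_000))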

  18. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimuli from its environment. CAS modeling is an effective method of managing complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to integration of these data.

  19. A linear framework for time-scale separation in nonlinear biochemical systems.

    PubMed

    Gunawardena, Jeremy

    2012-01-01

    Cellular physiology is implemented by formidably complex biochemical systems with highly nonlinear dynamics, presenting a challenge for both experiment and theory. Time-scale separation has been one of the few theoretical methods for distilling general principles from such complexity. It has provided essential insights in areas such as enzyme kinetics, allosteric enzymes, G-protein coupled receptors, ion channels, gene regulation and post-translational modification. In each case, internal molecular complexity has been eliminated, leading to rational algebraic expressions among the remaining components. This has yielded familiar formulas such as those of Michaelis-Menten in enzyme kinetics, Monod-Wyman-Changeux in allostery and Ackers-Johnson-Shea in gene regulation. Here we show that these calculations are all instances of a single graph-theoretic framework. Despite the biochemical nonlinearity to which it is applied, this framework is entirely linear, yet requires no approximation. We show that elimination of internal complexity is feasible when the relevant graph is strongly connected. The framework provides a new methodology with the potential to subdue combinatorial explosion at the molecular level.
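
    In the linear framework referred to above, the biochemical network is a labelled, directed graph whose dynamics are governed by a graph Laplacian; when the graph is strongly connected, the steady state spans the one-dimensional kernel of that Laplacian (equivalently, it is given by spanning-tree weights via the Matrix-Tree theorem). The sketch below computes that kernel numerically for a toy graph with made-up rate labels; it illustrates the underlying linear algebra rather than the symbolic elimination procedure of the framework.

        import numpy as np

        def laplacian_steady_state(n, edges):
            """Steady state of dx/dt = L x for a labelled directed graph.

            edges: dict mapping (i, j) to the rate label on edge i -> j.
            Returns the normalised kernel vector of the column-based Laplacian, which
            is one-dimensional when the graph is strongly connected.
            """
            L = np.zeros((n, n))
            for (i, j), k in edges.items():
                L[j, i] += k          # inflow to j from i
                L[i, i] -= k          # outflow from i
            # kernel vector = right singular vector of the (near-)zero singular value
            _, _, vt = np.linalg.svd(L)
            x = np.abs(vt[-1])
            return x / x.sum()

        # 3-state cycle with made-up rates, e.g. a toy post-translational modification scheme
        rates = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 3.0, (1, 0): 0.5}
        print(laplacian_steady_state(3, rates))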

  20. Ionization dynamics of the water trimer: A direct ab initio MD study

    NASA Astrophysics Data System (ADS)

    Tachikawa, Hiroto; Takada, Tomoya

    2013-03-01

    Ionization dynamics of the cyclic water trimer (H2O)3 have been investigated by means of the direct ab initio molecular dynamics (AIMD) method. Two reaction channels, complex formation and OH dissociation, were found following the ionization of (H2O)3. In both channels, a proton is first rapidly transferred from H2O+ to H2O (on a time scale of ~15 fs after ionization). In the complex-formation channel, an ion-radical contact pair (H3O+-OH) solvated by the third water molecule is formed as a long-lived H3O+(OH)H2O complex. In the OH dissociation channel, a second proton transfer takes place from H3O+(OH) to H2O (time scale 50-100 fs) and the OH radical separates from the H3O+ when the excess energy is efficiently transferred into the kinetic energy of the OH radical. The OH dissociation channel is a minor pathway; almost all trajectories lead to complex formation. The reaction mechanism is discussed on the basis of the theoretical results.

  1. Validation of the DIFFAL, HPAC and HotSpot Dispersion Models Using the Full-Scale Radiological Dispersal Device (FSRDD) Field Trials Witness Plate Deposition Dataset.

    PubMed

    Purves, Murray; Parkes, David

    2016-05-01

    Three atmospheric dispersion models--DIFFAL, HPAC, and HotSpot--of differing complexities have been validated against the witness plate deposition dataset taken during the Full-Scale Radiological Dispersal Device (FSRDD) Field Trials. The small-scale nature of these trials in comparison to many other historical radiological dispersion trials provides a unique opportunity to evaluate the near-field performance of the models considered. This paper performs validation of these models using two graphical methods of comparison: deposition contour plots and hotline profile graphs. All of the models tested are assessed to perform well, especially considering that previous model developments and validations have been focused on larger-scale scenarios. Of the models, HPAC generally produced the most accurate results, especially at locations within ∼100 m of GZ. Features present within the observed data, such as hot spots, were not well modeled by any of the codes considered. Additionally, it was found that an increase in the complexity of the meteorological data input to the models did not necessarily lead to an improvement in model accuracy; this is potentially due to the small-scale nature of the trials.

  2. Analysis of genetic diversity using SNP markers in oat

    USDA-ARS?s Scientific Manuscript database

    A large-scale single nucleotide polymorphism (SNP) discovery was carried out in cultivated oat using Roche 454 sequencing methods. DNA sequences were generated from cDNAs originating from a panel of 20 diverse oat cultivars, and from Diversity Array Technology (DArT) genomic complexity reductions fr...

  3. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with on the order of 10¹⁰ parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  4. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGES

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by the underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  5. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    The fracture behavior of brittle materials is strongly influenced by the underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  6. Multiscale approach to pest insect monitoring: Random walks, pattern formation, synchronization, and networks

    NASA Astrophysics Data System (ADS)

    Petrovskii, Sergei; Petrovskaya, Natalia; Bearup, Daniel

    2014-09-01

    Pest insects pose a significant threat to food production worldwide resulting in annual losses worth hundreds of billions of dollars. Pest control attempts to prevent pest outbreaks that could otherwise destroy a sward. It is good practice in integrated pest management to recommend control actions (usually pesticides application) only when the pest density exceeds a certain threshold. Accurate estimation of pest population density in ecosystems, especially in agro-ecosystems, is therefore very important, and this is the overall goal of the pest insect monitoring. However, this is a complex and challenging task; providing accurate information about pest abundance is hardly possible without taking into account the complexity of ecosystems' dynamics, in particular, the existence of multiple scales. In the case of pest insects, monitoring has three different spatial scales, each of them having their own scale-specific goal and their own approaches to data collection and interpretation. In this paper, we review recent progress in mathematical models and methods applied at each of these scales and show how it helps to improve the accuracy and robustness of pest population density estimation.

  7. Self-potential and Complex Conductivity Monitoring of In Situ Hydrocarbon Remediation in Microbial Fuel Cell

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Revil, A.; Ren, Z.; Karaoulis, M.; Mendonca, C. A.

    2013-12-01

    Petroleum hydrocarbon contamination of soil and groundwater, in both non-aqueous phase liquid and dissolved forms, generated from spills and leaks is a widespread environmental issue. Traditional cleanup of hydrocarbon contamination in soils and groundwater using physical, chemical, and biological remedial techniques is often expensive and ineffective. Recent studies show that the microbial fuel cell (MFC) can simultaneously enhance biodegradation of hydrocarbons in soil and groundwater and yield electricity. Non-invasive geophysical techniques such as self-potential (SP) and complex conductivity (induced polarization) have shown the potential to detect and characterize the nature of electron transport mechanisms during in situ bioremediation of organic contamination plumes. In this study, we deployed both SP and complex conductivity in lab-scale MFCs to monitor the time-lapse geophysical response of hydrocarbon degradation by the MFC. Two different sizes of MFC reactors were used (a 15 cm diameter cylindrical reactor and a 94.5 cm x 43.5 cm rectangular reactor), and the initial hydrocarbon concentration was 15 g diesel/kg soil. SP and complex conductivity were measured using non-polarizing Ag/AgCl electrodes. A sensitivity study was also performed using COMSOL Multiphysics to test different electrode configurations. The SP measurements showed stronger anomalies adjacent to the MFC than at locations farther away, and both the real and imaginary parts of the complex conductivity were greater in areas close to the MFC than in areas farther away and in control samples without an MFC. The joint use of SP and complex conductivity can evaluate, in situ, the dynamic changes of electrochemical parameters during this bioremediation process at spatiotemporal scales unachievable with traditional sampling methods. The joint inversion of the two methods is used to evaluate the efficiency of MFC-enhanced hydrocarbon remediation in the subsurface.

  8. Complex, multi-scale small intestinal topography replicated in cellular growth substrates fabricated via chemical vapor deposition of Parylene C.

    PubMed

    Koppes, Abigail N; Kamath, Megha; Pfluger, Courtney A; Burkey, Daniel D; Dokmeci, Mehmet; Wang, Lin; Carrier, Rebecca L

    2016-08-22

    Native small intestine possesses distinct multi-scale structures (e.g., crypts, villi) not included in traditional 2D intestinal culture models for drug delivery and regenerative medicine. The known impact of structure on cell function motivates exploration of the influence of intestinal topography on the phenotype of cultured epithelial cells, but the irregular, macro- to submicron-scale features of native intestine are challenging to precisely replicate in cellular growth substrates. Herein, we utilized chemical vapor deposition of Parylene C on decellularized porcine small intestine to create polymeric intestinal replicas containing biomimetic irregular, multi-scale structures. These replicas were used as molds for polydimethylsiloxane (PDMS) growth substrates with macro to submicron intestinal topographical features. Resultant PDMS replicas exhibit multiscale resolution including macro- to micro-scale folds, crypt and villus structures, and submicron-scale features of the underlying basement membrane. After 10 d of human epithelial colorectal cell culture on PDMS substrates, the inclusion of biomimetic topographical features enhanced alkaline phosphatase expression 2.3-fold compared to flat controls, suggesting biomimetic topography is important in induced epithelial differentiation. This work presents a facile, inexpensive method for precisely replicating complex hierarchal features of native tissue, towards a new model for regenerative medicine and drug delivery for intestinal disorders and diseases.

  9. Structure and information in spatial segregation

    PubMed Central

    2017-01-01

    Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. PMID:29078323

  10. Structure and information in spatial segregation.

    PubMed

    Chodrow, Philip S

    2017-10-31

    Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. Published under the PNAS license.

  11. Acetylcholine molecular arrays enable quantum information processing

    NASA Astrophysics Data System (ADS)

    Tamulis, Arvydas; Majauskaite, Kristina; Talaikis, Martynas; Zborowski, Krzysztof; Kairys, Visvaldas

    2017-09-01

    We have found self-assembly of four neurotransmitter acetylcholine (ACh) molecular complexes in a water environment by using geometry optimization with the DFT B97d method. These complexes organize into regular arrays of ACh molecules possessing electronic spins, i.e., quantum information bits. These spin arrays could potentially be controlled by the application of a non-uniform external magnetic field. A proper sequence of resonant electromagnetic pulses would then drive all the spin groups into the 3-spin entangled state, yielding large-scale quantum information bits.

  12. A review of advantages of high-efficiency X-ray spectrum imaging for analysis of nanostructured ferritic alloys

    DOE PAGES

    Parish, Chad M.; Miller, Michael K.

    2014-12-09

    Nanostructured ferritic alloys (NFAs) exhibit complex microstructures consisting of 100-500 nm ferrite grains, grain boundary solute enrichment, and multiple populations of precipitates and nanoclusters (NCs). Understanding these materials' excellent creep and radiation-tolerance properties requires a combination of multiple atomic-scale experimental techniques. Recent advances in scanning transmission electron microscopy (STEM) hardware and data analysis methods have the potential to revolutionize nanometer- to micrometer-scale materials analysis. These methods are applied to NFAs as a test case and compared to conventional STEM methods as well as complementary methods such as scanning electron microscopy and atom probe tomography. In this paper, we review past results and present new results illustrating the effectiveness of latest-generation STEM instrumentation and data analysis.

  13. [Application of nootropic agents in complex treatment of patients with concussion of the brain].

    PubMed

    Tkachev, A V

    2007-01-01

    Sixty-five patients with mild craniocerebral trauma (concussion) were observed. In addition to general clinical methods, the examination included CT (MRI) of the brain and an ophthalmological examination with inspection of the ocular fundus. To objectify patients' complaints, the authors used the Galveston Orientation and Amnesia Test, a well-being scale (psychological test), and a table for assessing memory level. Testing was carried out on the first, tenth, and thirtieth days of treatment. Patients in the first group received pramistar as part of the complex treatment; patients in the second group received piracetam. Patients in both groups noted considerable improvement during the complex treatment (disappearance of headache, dizziness, and nausea), while patients receiving pramistar showed better recovery of orientation and well-being. Pramistar was also more effective in patients with amnesia.

  14. Spin Glass Patch Planting

    NASA Technical Reports Server (NTRS)

    Wang, Wenlong; Mandra, Salvatore; Katzgraber, Helmut G.

    2016-01-01

    In this paper, we propose a patch planting method for creating arbitrarily large spin glass instances with known ground states. The scaling of the computational complexity of these instances with various block numbers and sizes is investigated and compared with random instances using population annealing Monte Carlo and the quantum annealing DW2X machine. The method can be useful for benchmarking future-generation quantum annealing machines, as well as classical and quantum optimization algorithms.
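
    A minimal sketch of the planting idea for an Ising chain: split the spins into small blocks, draw random couplings inside each block, brute-force each block's ground state, and choose every inter-block coupling so that it is satisfied by the concatenated configuration. Intra-block energies are then individually minimal and all inter-block bonds are satisfied, so the planted configuration is a global ground state. The 1-D geometry, block sizes, and couplings below are illustrative simplifications of the construction described in the record.

        import itertools, random

        def plant_block(n_spins):
            """Random +-1 couplings on a small open chain plus its exact ground state."""
            J = [random.choice([-1.0, 1.0]) for _ in range(n_spins - 1)]
            best_s, best_e = None, float("inf")
            for s in itertools.product([-1, 1], repeat=n_spins):
                e = -sum(J[i] * s[i] * s[i + 1] for i in range(n_spins - 1))
                if e < best_e:
                    best_s, best_e = list(s), e
            return J, best_s

        def patch_plant(n_blocks, block_size, j_link=1.0):
            """Chain of blocks whose concatenated block ground states form a planted
            global ground state: every inter-block coupling is chosen to be satisfied."""
            couplings, ground_state = [], []
            for b in range(n_blocks):
                J, s = plant_block(block_size)
                if ground_state:                   # coupling between neighbouring blocks
                    couplings.append(j_link * ground_state[-1] * s[0])
                couplings.extend(J)
                ground_state.extend(s)
            return couplings, ground_state

        J, s0 = patch_plant(n_blocks=4, block_size=6)
        energy = -sum(J[i] * s0[i] * s0[i + 1] for i in range(len(J)))
        print("planted ground-state energy:", energy)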

  15. Comparing two remote video survey methods for spatial predictions of the distribution and environmental niche suitability of demersal fishes.

    PubMed

    Galaiduk, Ronen; Radford, Ben T; Wilson, Shaun K; Harvey, Euan S

    2017-12-15

    Information on habitat associations from survey data, combined with spatial modelling, allow the development of more refined species distribution modelling which may identify areas of high conservation/fisheries value and consequentially improve conservation efforts. Generalised additive models were used to model the probability of occurrence of six focal species after surveys that utilised two remote underwater video sampling methods (i.e. baited and towed video). Models developed for the towed video method had consistently better predictive performance for all but one study species although only three models had a good to fair fit, and the rest were poor fits, highlighting the challenges associated with modelling habitat associations of marine species in highly homogenous, low relief environments. Models based on baited video dataset regularly included large-scale measures of structural complexity, suggesting fish attraction to a single focus point by bait. Conversely, models based on the towed video data often incorporated small-scale measures of habitat complexity and were more likely to reflect true species-habitat relationships. The cost associated with use of the towed video systems for surveying low-relief seascapes was also relatively low providing additional support for considering this method for marine spatial ecological modelling.

  16. Detection of Cardiac Abnormalities from Multilead ECG using Multiscale Phase Alternation Features.

    PubMed

    Tripathy, R K; Dandapat, S

    2016-06-01

    The cardiac activities such as the depolarization and the relaxation of atria and ventricles are observed in the electrocardiogram (ECG). Changes in the morphological features of the ECG are symptoms of particular heart pathologies. It is a cumbersome task for medical experts to visually identify any subtle changes in the morphological features over 24 hours of ECG recording. Therefore, automated analysis of the ECG signal is needed for accurate detection of cardiac abnormalities. In this paper, a novel method for automated detection of cardiac abnormalities from multilead ECG is proposed. The method uses multiscale phase alternation (PA) features of multilead ECG and two classifiers, k-nearest neighbor (KNN) and fuzzy KNN, for classification of bundle branch block (BBB), myocardial infarction (MI), heart muscle defect (HMD) and healthy control (HC). The dual tree complex wavelet transform (DTCWT) is used to decompose the ECG signal of each lead into complex wavelet coefficients at different scales. The phase of the complex wavelet coefficients is computed and the PA values at each wavelet scale are used as features for detection and classification of cardiac abnormalities. A publicly available multilead ECG database (PTB database) is used for testing of the proposed method. The experimental results show that the proposed multiscale PA features and the fuzzy KNN classifier give better performance for detection of cardiac abnormalities, with sensitivity values of 78.12%, 80.90% and 94.31% for the BBB, HMD and MI classes. The sensitivity value of the proposed method for the MI class is compared with that of state-of-the-art techniques for multilead ECG.

  17. Optimised padlock probe ligation and microarray detection of multiple (non-authorised) GMOs in a single reaction

    PubMed Central

    Prins, Theo W; van Dijk, Jeroen P; Beenen, Henriek G; Van Hoef, AM Angeline; Voorhuijzen, Marleen M; Schoen, Cor D; Aarts, Henk JM; Kok, Esther J

    2008-01-01

    Background: To maintain EU GMO regulations, producers of new GM crop varieties need to supply an event-specific method for the new variety. As a result, methods are nowadays available for EU-authorised genetically modified organisms (GMOs), but only to a limited extent for EU-non-authorised GMOs (NAGs). In the last decade the diversity of genetically modified (GM) ingredients in food and feed has increased significantly. As a result of this increase, GMO laboratories currently need to apply many different methods to establish the potential presence of NAGs in raw materials and complex derived products. Results: In this paper we present an innovative method for detecting (approved) GMOs as well as the potential presence of NAGs in complex DNA samples containing different crop species. An optimised protocol has been developed for padlock probe ligation in combination with microarray detection (PPLMD) that can easily be scaled up. Linear padlock probes targeted against GMO-events, -elements and -species have been developed that can hybridise to their genomic target DNA and are visualised using microarray hybridisation. In a tenplex PPLMD experiment, different genomic targets in Roundup-Ready soya, MON1445 cotton and Bt176 maize were detected down to at least 1%. In single experiments, the targets were detected down to 0.1%, i.e. comparable to standard qPCR. Conclusion: Compared to currently available methods this is a significant step forward towards multiplex detection in complex raw materials and derived products. It is shown that the PPLMD approach is suitable for large-scale detection of GMOs in real-life samples and provides the possibility to detect and/or identify NAGs that would otherwise remain undetected. PMID:19055784

  18. Optimised padlock probe ligation and microarray detection of multiple (non-authorised) GMOs in a single reaction.

    PubMed

    Prins, Theo W; van Dijk, Jeroen P; Beenen, Henriek G; Van Hoef, Am Angeline; Voorhuijzen, Marleen M; Schoen, Cor D; Aarts, Henk J M; Kok, Esther J

    2008-12-04

    To maintain EU GMO regulations, producers of new GM crop varieties need to supply an event-specific method for the new variety. As a result, methods are nowadays available for EU-authorised genetically modified organisms (GMOs), but only to a limited extent for EU-non-authorised GMOs (NAGs). In the last decade the diversity of genetically modified (GM) ingredients in food and feed has increased significantly. As a result of this increase, GMO laboratories currently need to apply many different methods to establish the potential presence of NAGs in raw materials and complex derived products. In this paper we present an innovative method for detecting (approved) GMOs as well as the potential presence of NAGs in complex DNA samples containing different crop species. An optimised protocol has been developed for padlock probe ligation in combination with microarray detection (PPLMD) that can easily be scaled up. Linear padlock probes targeted against GMO-events, -elements and -species have been developed that can hybridise to their genomic target DNA and are visualised using microarray hybridisation. In a tenplex PPLMD experiment, different genomic targets in Roundup-Ready soya, MON1445 cotton and Bt176 maize were detected down to at least 1%. In single experiments, the targets were detected down to 0.1%, i.e. comparable to standard qPCR. Compared to currently available methods this is a significant step forward towards multiplex detection in complex raw materials and derived products. It is shown that the PPLMD approach is suitable for large-scale detection of GMOs in real-life samples and provides the possibility to detect and/or identify NAGs that would otherwise remain undetected.

  19. Evaluation of a Specialized Yoga Program for Persons Admitted to a Complex Continuing Care Hospital: A Pilot Study

    PubMed Central

    Kuluski, Kerry; Bechsgaard, Gitte; Ridgway, Jennifer; Katz, Joel

    2016-01-01

    Introduction. The purpose of this study was to evaluate a specialized yoga intervention for inpatients in a rehabilitation and complex continuing care hospital. Design. Single-cohort repeated measures design. Methods. Participants (N = 10) admitted to a rehabilitation and complex continuing care hospital were recruited to participate in a 50–60 min Hatha Yoga class (modified for wheelchair users/seated position) once a week for eight weeks, with assigned homework practice. Questionnaires on pain (pain, pain interference, and pain catastrophizing), psychological variables (depression, anxiety, and experiences with injustice), mindfulness, self-compassion, and spiritual well-being were collected at three intervals: pre-, mid-, and post-intervention. Results. Repeated measures ANOVAs revealed a significant main effect of time indicating improvements over the course of the yoga program on the (1) anxiety subscale of the Hospital Anxiety and Depression Scale, F(2,18) = 4.74, p < .05, ηp² = .35, (2) Self-Compassion Scale-Short Form, F(2,18) = 3.71, p < .05, ηp² = .29, and (3) Magnification subscale of the Pain Catastrophizing Scale, F(2,18) = 3.66, p < .05, ηp² = .29. Discussion. The results suggest that an 8-week Hatha Yoga program improves pain-related factors and psychological experiences in individuals admitted to a rehabilitation and complex continuing care hospital. PMID:28115969

  20. Network science of biological systems at different scales: A review

    NASA Astrophysics Data System (ADS)

    Gosak, Marko; Markovič, Rene; Dolenšek, Jurij; Slak Rupnik, Marjan; Marhl, Marko; Stožer, Andraž; Perc, Matjaž

    2018-03-01

    Network science is today established as a backbone for description of structure and function of various physical, chemical, biological, technological, and social systems. Here we review recent advances in the study of complex biological systems that were inspired and enabled by methods of network science. First, we present

  1. Validating the Psychological Climate Scale in Voluntary Child Welfare

    ERIC Educational Resources Information Center

    Zeitlin, Wendy; Claiborne, Nancy; Lawrence, Catherine K.; Auerbach, Charles

    2016-01-01

    Objective: Organizational climate has emerged as an important factor in understanding and addressing the complexities of providing services in child welfare. This research examines the psychometric properties of each of the dimensions of Parker and colleagues' Psychological Climate Survey in a sample of voluntary child welfare workers. Methods:…

  2. Communication Network Analysis Methods.

    ERIC Educational Resources Information Center

    Farace, Richard V.; Mabee, Timothy

    This paper reviews a variety of analytic procedures that can be applied to network data, discussing the assumptions and usefulness of each procedure when applied to the complexity of human communication. Special attention is paid to the network properties measured or implied by each procedure. Factor analysis and multidimensional scaling are among…

  3. Direct reading of electrocardiograms and respiration rates

    NASA Technical Reports Server (NTRS)

    Wise, J. P.

    1969-01-01

    A technique for reading heart and respiration rates is more accurate and direct than the previous method. The index of a plastic calibrated card is aligned with a point on the electrocardiogram. Complexes are counted as indicated on the card, and the heart or respiration rate is read directly from the appropriate scale.

  4. Rich-Cores in Networks

    PubMed Central

    Ma, Athen; Mondragón, Raúl J.

    2015-01-01

    A core comprises a group of central and densely connected nodes which governs the overall behaviour of a network. It is recognised as one of the key meso-scale structures in complex networks. Profiling this meso-scale structure currently relies on a limited number of methods which are often complex and parameter dependent or require a null model. As a result, scalability issues are likely to arise when dealing with very large networks, together with the need for subjective adjustment of parameters. The notion of a rich-club describes nodes which are essentially the hub of a network, as they play a dominating role in structural and functional properties. The definition of a rich-club naturally emphasises high degree nodes and divides a network into two subgroups. Here, we develop a method to characterise a rich-core in networks by theoretically coupling the underlying principle of a rich-club with the escape time of a random walker. The method is fast, scalable to large networks and completely parameter free. In particular, we show that the evolution of the core in World Trade and C. elegans networks corresponds to responses to historical events and key stages in their physical development, respectively. PMID:25799585
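
    The rich-core profiling described above admits a compact recipe: rank nodes by degree, count for each node its links to higher-ranked nodes, and place the core boundary where that count peaks. The sketch below follows that recipe on a plain adjacency list; it omits the random-walker escape-time analysis used in the paper to motivate the definition, and the toy graph is made up.

        def rich_core(adjacency):
            """Return the rich-core node set of an undirected graph.

            adjacency: dict mapping node -> set of neighbouring nodes.
            Nodes are ranked by degree; for each node we count links to higher-ranked
            nodes (k_plus) and cut the ranking where k_plus is largest.
            """
            ranking = sorted(adjacency, key=lambda n: len(adjacency[n]), reverse=True)
            rank_of = {n: i for i, n in enumerate(ranking)}
            k_plus = [sum(1 for nb in adjacency[n] if rank_of[nb] < rank_of[n])
                      for n in ranking]
            boundary = max(range(len(ranking)), key=lambda i: k_plus[i])
            return set(ranking[:boundary + 1])

        # toy graph: a dense clique {a, b, c, d} with two peripheral nodes attached
        graph = {
            "a": {"b", "c", "d", "e"}, "b": {"a", "c", "d"}, "c": {"a", "b", "d", "f"},
            "d": {"a", "b", "c"}, "e": {"a"}, "f": {"c"},
        }
        print(rich_core(graph))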

  5. Rich-cores in networks.

    PubMed

    Ma, Athen; Mondragón, Raúl J

    2015-01-01

    A core comprises a group of central and densely connected nodes which governs the overall behaviour of a network. It is recognised as one of the key meso-scale structures in complex networks. Profiling this meso-scale structure currently relies on a limited number of methods which are often complex and parameter dependent or require a null model. As a result, scalability issues are likely to arise when dealing with very large networks, together with the need for subjective adjustment of parameters. The notion of a rich-club describes nodes which are essentially the hub of a network, as they play a dominating role in structural and functional properties. The definition of a rich-club naturally emphasises high degree nodes and divides a network into two subgroups. Here, we develop a method to characterise a rich-core in networks by theoretically coupling the underlying principle of a rich-club with the escape time of a random walker. The method is fast, scalable to large networks and completely parameter free. In particular, we show that the evolution of the core in World Trade and C. elegans networks corresponds to responses to historical events and key stages in their physical development, respectively.

  6. Self-folding with shape memory composites at the millimeter scale

    NASA Astrophysics Data System (ADS)

    Felton, S. M.; Becker, K. P.; Aukes, D. M.; Wood, R. J.

    2015-08-01

    Self-folding is an effective method for creating 3D shapes from flat sheets. In particular, shape memory composites—laminates containing shape memory polymers—have been used to self-fold complex structures and machines. To date, however, these composites have been limited to feature sizes larger than one centimeter. We present a new shape memory composite capable of folding millimeter-scale features. This technique can be activated by a global heat source for simultaneous folding, or by resistive heaters for sequential folding. It is capable of feature sizes ranging from 0.5 to 40 mm, and is compatible with multiple laminate compositions. We demonstrate the ability to produce complex structures and mechanisms by building two self-folding pieces: a model ship and a model bumblebee.

  7. The Material Point Method and Simulation of Wave Propagation in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Bardenhagen, S. G.; Greening, D. R.; Roessig, K. M.

    2004-07-01

    The mechanical response of polycrystalline materials, particularly under shock loading, is of significant interest in a variety of munitions and industrial applications. Homogeneous continuum models have been developed to describe material response, including Equation of State, strength, and reactive burn models. These models provide good estimates of bulk material response. However, there is little connection to underlying physics and, consequently, they cannot be applied far from their calibrated regime with confidence. Both explosives and metals have important structure at the (energetic or single crystal) grain scale. The anisotropic properties of the individual grains and the presence of interfaces result in the localization of energy during deformation. In explosives, energy localization can lead to initiation under weak shock loading, and in metals, to material ejecta under strong shock loading. To develop accurate, quantitative and predictive models it is imperative to develop a sound physical understanding of the grain-scale material response. Numerical simulations are performed to gain insight into grain-scale material response. The Generalized Interpolation Material Point Method family of numerical algorithms, selected for its robust treatment of large deformation problems and convenient framework for implementing material interface models, is reviewed. A three-dimensional simulation of wave propagation through a granular material indicates the scale and complexity of a representative grain-scale computation. Verification and validation calculations on model bimaterial systems indicate the minimum numerical algorithm complexity required for accurate simulation of wave propagation across material interfaces and demonstrate the importance of interfacial decohesion. Preliminary results are presented which predict energy localization at the grain boundary in a metallic bicrystal.

  8. Enhancing multiple-point geostatistical modeling: 1. Graph theory and pattern adjustment

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman; Sahimi, Muhammad

    2016-03-01

    In recent years, higher-order geostatistical methods have been used for modeling of a wide variety of large-scale porous media, such as groundwater aquifers and oil reservoirs. Their popularity stems from their ability to account for qualitative data and the great flexibility that they offer for conditioning the models to hard (quantitative) data, which endow them with the capability for generating realistic realizations of porous formations with very complex channels, as well as features that are mainly a barrier to fluid flow. One group of such models consists of pattern-based methods that use a set of data points for generating stochastic realizations by which the large-scale structure and highly-connected features are reproduced accurately. The cross correlation-based simulation (CCSIM) algorithm, proposed previously by the authors, is a member of this group that has been shown to be capable of simulating multimillion cell models in a matter of a few CPU seconds. The method is, however, sensitive to pattern's specifications, such as boundaries and the number of replicates. In this paper the original CCSIM algorithm is reconsidered and two significant improvements are proposed for accurately reproducing large-scale patterns of heterogeneities in porous media. First, an effective boundary-correction method based on the graph theory is presented by which one identifies the optimal cutting path/surface for removing the patchiness and discontinuities in the realization of a porous medium. Next, a new pattern adjustment method is proposed that automatically transfers the features in a pattern to one that seamlessly matches the surrounding patterns. The original CCSIM algorithm is then combined with the two methods and is tested using various complex two- and three-dimensional examples. It should, however, be emphasized that the methods that we propose in this paper are applicable to other pattern-based geostatistical simulation methods.
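
    The boundary-correction step above searches for an optimal cutting path between a newly placed pattern and the existing realization. A standard way to realise such a cut in 2-D, shown here purely as an illustration rather than as the authors' exact graph formulation, is a dynamic-programming minimum-error seam through the squared-difference surface of the overlap region:

        import numpy as np

        def minimum_error_seam(error):
            """Column index of a top-to-bottom minimum-cost path through an error map.

            error: 2-D array of mismatch costs over the overlap region between the new
            pattern and the existing realization (e.g. squared differences). Returns one
            column per row; stitching along this seam removes visible discontinuities
            at pattern boundaries.
            """
            rows, cols = error.shape
            cost = error.astype(float).copy()
            for i in range(1, rows):
                for j in range(cols):
                    lo, hi = max(j - 1, 0), min(j + 2, cols)
                    cost[i, j] += cost[i - 1, lo:hi].min()
            seam = np.empty(rows, dtype=int)
            seam[-1] = int(np.argmin(cost[-1]))
            for i in range(rows - 2, -1, -1):
                lo, hi = max(seam[i + 1] - 1, 0), min(seam[i + 1] + 2, cols)
                seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
            return seam

        overlap_a = np.random.default_rng(2).random((20, 8))   # existing realization strip
        overlap_b = np.random.default_rng(3).random((20, 8))   # candidate pattern strip
        print(minimum_error_seam((overlap_a - overlap_b) ** 2))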

  9. TopoSCALE v.1.0: downscaling gridded climate data in complex terrain

    NASA Astrophysics Data System (ADS)

    Fiddes, J.; Gruber, S.

    2014-02-01

    Simulation of land surface processes is problematic in heterogeneous terrain due to the high resolution required of model grids to capture strong lateral variability caused by, for example, topography, and the lack of accurate meteorological forcing data at the site or scale at which they are required. Gridded data products produced by atmospheric models can fill this gap; however, they often do not have an appropriate spatial resolution to drive land-surface simulations. In this study we describe a method that uses the well-resolved description of the atmospheric column provided by climate models, together with high-resolution digital elevation models (DEMs), to downscale coarse-grid climate variables to a fine-scale subgrid. The main aim of this approach is to provide high-resolution driving data for a land-surface model (LSM). The method makes use of an interpolation of pressure-level data according to topographic height of the subgrid. An elevation and topography correction is used to downscale short-wave radiation. Long-wave radiation is downscaled by deriving a cloud component of all-sky emissivity at grid level and using downscaled temperature and relative humidity fields to describe variability with elevation. Precipitation is downscaled with a simple non-linear lapse and optionally disaggregated using a climatology approach. We test the method in comparison with unscaled grid-level data and a set of reference methods, against a large evaluation dataset (up to 210 stations per variable) in the Swiss Alps. We demonstrate that the method can be used to derive meteorological inputs in complex terrain, with the most significant improvements (with respect to reference methods) seen in variables derived from pressure levels: air temperature, relative humidity, wind speed and incoming long-wave radiation. This method may be of use in improving inputs to numerical simulations in heterogeneous and/or remote terrain, especially when statistical methods are not possible due to lack of observations (i.e. remote areas or future periods).
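
    The core of the temperature downscaling described above is an interpolation of pressure-level fields to the elevation of each fine-grid cell. The sketch below is heavily simplified; the actual method works with geopotential heights, several variables, and separate radiation and precipitation treatments, and the arrays here are placeholders.

        import numpy as np

        def downscale_temperature(level_heights, level_temps, dem_elevations):
            """Interpolate coarse-grid pressure-level temperatures to subgrid elevations.

            level_heights: heights (m) of the pressure levels for one coarse cell,
                           ordered from low to high.
            level_temps:   temperatures (K) on those levels.
            dem_elevations: elevations (m) of the fine-scale DEM cells inside the
                            coarse cell.
            """
            level_heights = np.asarray(level_heights, dtype=float)
            level_temps = np.asarray(level_temps, dtype=float)
            # np.interp requires increasing x; extrapolation is clamped to end values
            return np.interp(np.asarray(dem_elevations, dtype=float),
                             level_heights, level_temps)

        # toy column: roughly a 6.5 K/km lapse rate between made-up levels
        heights = [500.0, 1500.0, 3000.0, 5500.0]
        temps = [288.0, 281.5, 271.8, 255.5]
        print(downscale_temperature(heights, temps, dem_elevations=[800.0, 2200.0, 3400.0]))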

  10. Approximating the Basset force by optimizing the method of van Hinsberg et al.

    NASA Astrophysics Data System (ADS)

    Casas, G.; Ferrer, A.; Oñate, E.

    2018-01-01

    In this work we put the method proposed by van Hinsberg et al. [29] to the test, highlighting its accuracy and efficiency in a sequence of benchmarks of increasing complexity. Furthermore, we explore the possibility of systematizing the way in which the method's free parameters are determined by generalizing the optimization problem that was considered originally. Finally, we provide a list of worked-out values, ready for implementation in large-scale particle-laden flow simulations.
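    The idea behind this family of methods can be illustrated numerically: the Basset history kernel decays like t^(-1/2), and replacing its tail by a small sum of exponentials allows the history integral to be updated recursively. The sketch below fits such a sum by plain least squares on assumed decay times; the optimized parameter sets worked out in the paper would replace this naive fit.

```python
import numpy as np

m = 4                                         # number of exponentials
t = np.logspace(-1, 2, 400)                   # times, in units of the window length
decay_times = np.logspace(-0.5, 1.5, m)       # assumed values, not the optimized ones
basis = np.exp(-t[:, None] / decay_times[None, :])
kernel = t ** -0.5                            # Basset-type kernel tail

# naive linear least-squares fit of the kernel by the exponential basis
coeffs, *_ = np.linalg.lstsq(basis, kernel, rcond=None)
approx = basis @ coeffs
rel_err = np.abs(approx - kernel) / kernel

print("fitted coefficients:", coeffs)
print("max relative error of the naive fit: %.3f" % rel_err.max())
```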

  11. COMPUTATIONAL METHODOLOGIES for REAL-SPACE STRUCTURAL REFINEMENT of LARGE MACROMOLECULAR COMPLEXES

    PubMed Central

    Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus

    2017-01-01

    The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875

  12. Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.

    1998-01-01

    The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.

  13. Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale

    NASA Technical Reports Server (NTRS)

    Quattrochi, D. A.

    1998-01-01

    The term "scale", both in space and time, is central to remote sensing and Geographic Information Systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.

  14. Finding equilibrium in the spatiotemporal chaos of the complex Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Ballard, Christopher C.; Esty, C. Clark; Egolf, David A.

    2016-11-01

    Equilibrium statistical mechanics allows the prediction of collective behaviors of large numbers of interacting objects from just a few system-wide properties; however, a similar theory does not exist for far-from-equilibrium systems exhibiting complex spatial and temporal behavior. We propose a method for predicting behaviors in a broad class of such systems and apply these ideas to an archetypal example, the spatiotemporal chaotic 1D complex Ginzburg-Landau equation in the defect chaos regime. Building on the ideas of Ruelle and of Cross and Hohenberg that a spatiotemporal chaotic system can be considered a collection of weakly interacting dynamical units of a characteristic size, the chaotic length scale, we identify underlying, mesoscale, chaotic units and effective interaction potentials between them. We find that the resulting equilibrium Takahashi model accurately predicts distributions of particle numbers. These results suggest the intriguing possibility that a class of far-from-equilibrium systems may be well described at coarse-grained scales by the well-established theory of equilibrium statistical mechanics.

  15. Finding equilibrium in the spatiotemporal chaos of the complex Ginzburg-Landau equation.

    PubMed

    Ballard, Christopher C; Esty, C Clark; Egolf, David A

    2016-11-01

    Equilibrium statistical mechanics allows the prediction of collective behaviors of large numbers of interacting objects from just a few system-wide properties; however, a similar theory does not exist for far-from-equilibrium systems exhibiting complex spatial and temporal behavior. We propose a method for predicting behaviors in a broad class of such systems and apply these ideas to an archetypal example, the spatiotemporal chaotic 1D complex Ginzburg-Landau equation in the defect chaos regime. Building on the ideas of Ruelle and of Cross and Hohenberg that a spatiotemporal chaotic system can be considered a collection of weakly interacting dynamical units of a characteristic size, the chaotic length scale, we identify underlying, mesoscale, chaotic units and effective interaction potentials between them. We find that the resulting equilibrium Takahashi model accurately predicts distributions of particle numbers. These results suggest the intriguing possibility that a class of far-from-equilibrium systems may be well described at coarse-grained scales by the well-established theory of equilibrium statistical mechanics.

  16. Experimental phase synchronization detection in non-phase coherent chaotic systems by using the discrete complex wavelet approach

    NASA Astrophysics Data System (ADS)

    Ferreira, Maria Teodora; Follmann, Rosangela; Domingues, Margarete O.; Macau, Elbert E. N.; Kiss, István Z.

    2017-08-01

    Phase synchronization may emerge from mutually interacting non-linear oscillators, even under weak coupling, when phase differences are bounded, while amplitudes remain uncorrelated. However, the detection of this phenomenon can be a challenging problem to tackle. In this work, we apply the Discrete Complex Wavelet Approach (DCWA) for phase assignment, considering signals from coupled chaotic systems and experimental data. The DCWA is based on the Dual-Tree Complex Wavelet Transform (DT-CWT), which is a discrete transformation. Due to its multi-scale properties in the context of phase characterization, it is possible to obtain very good results from scalar time series, even with non-phase-coherent chaotic systems without state space reconstruction or pre-processing. The method correctly predicts the phase synchronization for a chemical experiment with three locally coupled, non-phase-coherent chaotic processes. The impact of different time-scales is demonstrated on the synchronization process that outlines the advantages of DCWA for analysis of experimental data.

  17. Applied mathematical problems in modern electromagnetics

    NASA Astrophysics Data System (ADS)

    Kriegsman, Gregory

    1994-05-01

    We have primarily investigated two classes of electromagnetic problems. The first contains the quantitative description of microwave heating of dispersive and conductive materials. Such problems arise, for example, when biological tissue is exposed, accidentally or purposefully, to microwave radiation. Other instances occur in ceramic processing, such as sintering and microwave assisted chemical vapor infiltration and other industrial drying processes, such as the curing of paints and concrete. The second class characterizes the scattering of microwaves by complex targets which possess two or more disparate length and/or time scales. Spatially complex scatterers arise in a variety of applications, such as large gratings and slowly changing guiding structures. The former are useful in developing microstrip energy couplers while the latter can be used to model anatomical subsystems (e.g., the open guiding structure composed of two legs and the adjoining lower torso). Temporally complex targets occur in applications involving dispersive media whose relaxation times differ by orders of magnitude from thermal and/or electromagnetic time scales. For both cases the mathematical description of the problems gives rise to complicated ill-conditioned boundary value problems, whose accurate solutions require a blend of both asymptotic techniques, such as multiscale methods and matched asymptotic expansions, and numerical methods incorporating radiation boundary conditions, such as finite differences and finite elements.

  18. Coupled-rearrangement-channels calculation of the three-body system under the absorbing boundary condition

    NASA Astrophysics Data System (ADS)

    Iwasaki, M.; Otani, R.; Ito, M.; Kamimura, M.

    2016-05-01

    We formulate the method of the absorbing boundary condition (ABC) in the coupled-rearrangement-channels variational method (CRCVM) for the three-body problem. In the present study, we handle the simple three-boson system, and the absorbing potential is introduced in the Jacobi coordinate in the individual rearrangement channels. The resonance parameters and the strength of the monopole breakup are compared with the complex scaling method (CSM). We have found that the CRCVM + ABC method nicely works in the three-body problem with the rearrangement channels.

  19. Applications of the Lattice Boltzmann Method to Complex and Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Luo, Li-Shi; Qi, Dewei; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We briefly review the method of the lattice Boltzmann equation (LBE). We show the three-dimensional LBE simulation results for a non-spherical particle in Couette flow and 16 particles in sedimentation in fluid. We compare the LBE simulation of the three-dimensional homogeneous isotropic turbulence flow in a periodic cubic box of size 128³ with the pseudo-spectral simulation, and find that the two results agree well with each other but the LBE method is more dissipative than the pseudo-spectral method at small scales, as expected.
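    For orientation, the sketch below implements one collide-and-stream step of the simplest single-relaxation-time (BGK) lattice Boltzmann model on a periodic D2Q9 lattice. It is a two-dimensional toy illustration of the method being reviewed, not the three-dimensional turbulence code used in the paper; the grid size, relaxation time and initial perturbation are arbitrary choices.

```python
import numpy as np

w = np.array([4/9] + [1/9]*4 + [1/36]*4)                  # D2Q9 lattice weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])               # lattice velocities
tau = 0.6                                                  # relaxation time

def equilibrium(rho, u):
    cu = np.einsum('qd,dxy->qxy', c, u)
    usq = np.einsum('dxy,dxy->xy', u, u)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f):
    rho = f.sum(axis=0)                                    # macroscopic density
    u = np.einsum('qd,qxy->dxy', c, f) / rho               # macroscopic velocity
    f = f + (equilibrium(rho, u) - f) / tau                # collide (BGK relaxation)
    for q in range(9):                                     # stream on a periodic grid
        f[q] = np.roll(f[q], shift=c[q], axis=(0, 1))
    return f

# uniform fluid with a small sinusoidal shear perturbation
nx = ny = 64
rho0 = np.ones((nx, ny))
u0 = np.zeros((2, nx, ny))
u0[0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)[None, :]
f = equilibrium(rho0, u0)
for _ in range(100):
    f = lbm_step(f)
print("mass is conserved:", np.isclose(f.sum(), rho0.sum()))
```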

  20. CSM Testbed Development and Large-Scale Structural Applications

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Gillian, R. E.; Mccleary, Susan L.; Lotts, C. G.; Poole, E. L.; Overman, A. L.; Macy, S. C.

    1989-01-01

    A research activity called Computational Structural Mechanics (CSM) conducted at the NASA Langley Research Center is described. This activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM Testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM Testbed methods development environment is presented and some new numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  1. A new and fast method for preparing high quality lambda DNA suitable for sequencing.

    PubMed Central

    Manfioletti, G; Schneider, C

    1988-01-01

    A method is described for the rapid purification of high quality lambda DNA. The method can be used from either liquid or plate lysates and on a small scale or a large scale. It relies on the preadsorption of all polyanions present in the lysate to an "insoluble" anion-exchange matrix (DEAE or TEAE). Phage particles are then disrupted by combined treatment with EDTA/proteinase K and the resulting DNA is precipitated by the addition of the cationic detergent cetyl (or hexadecyl)-trimethyl ammonium bromide-CTAB ("soluble" anion-exchange matrix). The precipitated CTAB-DNA complex is then exchanged to Na-DNA and ethanol precipitated. The resultant purified DNA is suitable for enzymatic reactions and provides a high quality template for dideoxy-sequence analysis. PMID:2966928

  2. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  3. Applying Parallel Adaptive Methods with GeoFEST/PYRAMID to Simulate Earth Surface Crustal Dynamics

    NASA Technical Reports Server (NTRS)

    Norton, Charles D.; Lyzenga, Greg; Parker, Jay; Glasscoe, Margaret; Donnellan, Andrea; Li, Peggy

    2006-01-01

    This viewgraph presentation reviews the use of Adaptive Mesh Refinement (AMR) in simulating the Crustal Dynamics of Earth's Surface. AMR simultaneously improves solution quality, time to solution, and computer memory requirements when compared to generating/running on a globally fine mesh. The use of AMR in simulating the dynamics of the Earth's Surface is spurred by future proposed NASA missions, such as InSAR for Earth surface deformation and other measurements. These missions will require support for large-scale adaptive numerical methods using AMR to model observations. AMR was chosen because it has been successful in computational fluid dynamics for predictive simulation of complex flows around complex structures.

  4. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, which is centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep-convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions, and providing accurate hydrometeorological variables to higher resolution geomorphological models. Better representation of deep-convection from intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making, regarding natural and social disasters, and reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40- and 25-km of the regional climate model.

  5. Action detection by double hierarchical multi-structure space-time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complex information in videos and low detection efficiency, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model from both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application into multi-view. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show the promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  6. Parallel Adaptive High-Order CFD Simulations Characterizing Cavity Acoustics for the Complete SOFIA Aircraft

    NASA Technical Reports Server (NTRS)

    Barad, Michael F.; Brehm, Christoph; Kiris, Cetin C.; Biswas, Rupak

    2014-01-01

    This paper presents one-of-a-kind MPI-parallel computational fluid dynamics simulations for the Stratospheric Observatory for Infrared Astronomy (SOFIA). SOFIA is an airborne, 2.5-meter infrared telescope mounted in an open cavity in the aft of a Boeing 747SP. These simulations focus on how the unsteady flow field inside and over the cavity interferes with the optical path and mounting of the telescope. A temporally fourth-order Runge-Kutta and spatially fifth-order WENO-5Z scheme was used to perform implicit large eddy simulations. An immersed boundary method provides automated gridding for complex geometries and natural coupling to a block-structured Cartesian adaptive mesh refinement framework. Strong scaling studies using NASA's Pleiades supercomputer with up to 32,000 cores and 4 billion cells show excellent scaling. Dynamic load balancing based on execution time on individual AMR blocks addresses irregularities caused by the highly complex geometry. Limits to scaling beyond 32K cores are identified, and targeted code optimizations are discussed.

  7. Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis

    NASA Astrophysics Data System (ADS)

    Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg

    2017-03-01

    We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N⁴), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.

  8. Polymer physics of chromosome large-scale 3D organisation

    NASA Astrophysics Data System (ADS)

    Chiariello, Andrea M.; Annunziatella, Carlo; Bianco, Simona; Esposito, Andrea; Nicodemi, Mario

    2016-07-01

    Chromosomes have a complex architecture in the cell nucleus, which serves vital functional purposes, yet its structure and folding mechanisms remain incompletely understood. Here we show that genome-wide chromatin architecture data, as mapped by Hi-C methods across mammalian cell types and chromosomes, are well described by classical scaling concepts of polymer physics, from the sub-Mb to chromosomal scales. Chromatin is a complex mixture of different regions, folded in the conformational classes predicted by polymer thermodynamics. The contact matrix of the Sox9 locus, a region linked to severe human congenital diseases, is derived with high accuracy in mESCs and its molecular determinants identified by the theory; Sox9 self-assembles hierarchically in higher-order domains, involving abundant many-body contacts. Our approach is also applied to the Bmp7 locus. Finally, the model predictions on the effects of mutations on folding are tested against available data on a deletion in the Xist locus. Our results can help the development of new diagnostic tools for diseases linked to chromatin misfolding.

  9. Action detection by double hierarchical multi-structure space–time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-06-01

    To address the complex information in videos and low detection efficiency, an action detection model based on a neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model from both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application into multi-view. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show the promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  10. A Large-Scale Multi-Hop Localization Algorithm Based on Regularized Extreme Learning for Wireless Networks.

    PubMed

    Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan

    2017-12-20

    A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms, which are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
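    A rough sketch of the modeling stage, assuming a regularized extreme learning machine in its usual form (a fixed random hidden layer plus a ridge-regularized linear readout) that maps hop-count vectors to physical distances. The toy data, network size and regularization strength are hypothetical, not the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=50, reg=1e-2):
    """Random hidden layer + ridge-regularized linear readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))            # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                                  # hidden-layer activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy data: hop counts roughly proportional to node-anchor distance, plus noise
n_anchors, n_nodes = 8, 200
dist = rng.uniform(10, 100, size=(n_nodes, n_anchors))      # true distances [m]
hops = np.ceil(dist / 20 + rng.normal(0, 0.3, dist.shape))  # observed hop counts

model = elm_train(hops, dist)
pred = elm_predict(model, hops)
print("mean absolute error [m]:", np.abs(pred - dist).mean())
```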

  11. Cascade phenomenon against subsequent failures in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhong-Yuan; Liu, Zhi-Quan; He, Xuan; Ma, Jian-Feng

    2018-06-01

    Cascade phenomena may lead to catastrophic disasters which severely imperil network safety or security in various complex systems such as communication networks, power grids, social networks and so on. In some flow-based networks, the load of failed nodes can be redistributed locally to their neighboring nodes to avoid, as far as possible, traffic oscillations or large-scale cascading failures. However, in such a local flow redistribution model, a small set of key nodes attacked subsequently can result in network collapse. It is therefore a critical problem to effectively find the set of key nodes in the network. To the best of our knowledge, this work is the first to study this problem comprehensively. We first introduce an extra capacity for every node to put up with flow fluctuations from neighbors, and two extra capacity distributions, a degree-based distribution and an average distribution, are employed. Four heuristic key-node discovering methods including High-Degree-First (HDF), Low-Degree-First (LDF), Random and Greedy Algorithms (GA) are presented. Extensive simulations are carried out on both scale-free networks and random networks. The results show that the greedy algorithm can efficiently find the set of key nodes in both scale-free and random networks. Our work studies network robustness against cascading failures from a very novel perspective, and the methods and results are very useful for network robustness evaluation and protection.
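    The greedy search can be illustrated with a toy version of the local redistribution model: fail a candidate node, push its load onto surviving neighbours, count how much of the network collapses, and keep the node that causes the largest cascade. The loads, capacities and tolerance parameter alpha below are illustrative choices, not the exact model of the paper.

```python
import networkx as nx

def cascade_size(G, seeds, alpha=0.2, extra=1.0):
    """Number of failed nodes after seeding failures and redistributing load locally."""
    load = {v: float(G.degree(v)) for v in G}              # initial load = degree
    cap = {v: (1 + alpha) * load[v] + extra for v in G}    # capacity with extra headroom
    failed = set(seeds)
    frontier = list(seeds)
    while frontier:
        new_frontier = []
        for v in frontier:
            alive = [u for u in G.neighbors(v) if u not in failed]
            if not alive:
                continue
            share = load[v] / len(alive)                    # local redistribution
            for u in alive:
                load[u] += share
                if load[u] > cap[u]:
                    failed.add(u)
                    new_frontier.append(u)
        frontier = new_frontier
    return len(failed)

def greedy_key_nodes(G, k=3):
    """Greedily pick the k nodes whose subsequent failure maximizes the cascade."""
    chosen = []
    for _ in range(k):
        best = max((v for v in G if v not in chosen),
                   key=lambda v: cascade_size(G, chosen + [v]))
        chosen.append(best)
    return chosen

G = nx.barabasi_albert_graph(300, 2, seed=1)
print("greedy key nodes:", greedy_key_nodes(G))
```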

  12. Multi-scale compositionality: identifying the compositional structures of social dynamics using deep learning.

    PubMed

    Peng, Huan-Kai; Marculescu, Radu

    2015-01-01

    Social media exhibit rich yet distinct temporal dynamics which cover a wide range of different scales. In order to study this complex dynamics, two fundamental questions revolve around (1) the signatures of social dynamics at different time scales, and (2) the way in which these signatures interact and form higher-level meanings. In this paper, we propose the Recursive Convolutional Bayesian Model (RCBM) to address both of these fundamental questions. The key idea behind our approach consists of constructing a deep-learning framework using specialized convolution operators that are designed to exploit the inherent heterogeneity of social dynamics. RCBM's runtime and convergence properties are guaranteed by formal analyses. Experimental results show that the proposed method outperforms the state-of-the-art approaches both in terms of solution quality and computational efficiency. Indeed, by applying the proposed method on two social network datasets, Twitter and Yelp, we are able to identify the compositional structures that can accurately characterize the complex social dynamics from these two social media. We further show that identifying these patterns can enable new applications such as anomaly detection and improved social dynamics forecasting. Finally, our analysis offers new insights on understanding and engineering social media dynamics, with direct applications to opinion spreading and online content promotion.

  13. Argument Complexity: Teaching Undergraduates to Make Better Arguments

    ERIC Educational Resources Information Center

    Kelly, Matthew A.; West, Robert L.

    2017-01-01

    The task of turning undergrads into academics requires teaching them to reason about the world in a more complex way. We present the Argument Complexity Scale, a tool for analysing the complexity of argumentation, based on the Integrative Complexity and Conceptual Complexity Scales from, respectively, political psychology and personality theory.…

  14. Traits Without Borders: Integrating Functional Diversity Across Scales.

    PubMed

    Carmona, Carlos P; de Bello, Francesco; Mason, Norman W H; Lepš, Jan

    2016-05-01

    Owing to the conceptual complexity of functional diversity (FD), a multitude of different methods are available for measuring it, with most being operational at only a small range of spatial scales. This causes uncertainty in ecological interpretations and limits the potential to generalize findings across studies or compare patterns across scales. We solve this problem by providing a unified framework expanding on and integrating existing approaches. The framework, based on trait probability density (TPD), is the first to fully implement the Hutchinsonian concept of the niche as a probabilistic hypervolume in estimating FD. This novel approach could revolutionize FD-based research by allowing quantification of the various FD components from organismal to macroecological scales, and allowing seamless transitions between scales. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Tailoring the Variational Implicit Solvent Method for New Challenges: Biomolecular Recognition and Assembly

    PubMed Central

    Ricci, Clarisse Gravina; Li, Bo; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2018-01-01

    Predicting solvation free energies and describing the complex water behavior that plays an important role in essentially all biological processes is a major challenge from the computational standpoint. While an atomistic, explicit description of the solvent can turn out to be too expensive in large biomolecular systems, most implicit solvent methods fail to capture “dewetting” effects and heterogeneous hydration by relying on a pre-established (i.e., guessed) solvation interface. Here we focus on the Variational Implicit Solvent Method, an implicit solvent method that adds water “plasticity” back to the picture by formulating the solvation free energy as a functional of all possible solvation interfaces. We survey VISM's applications to the problem of molecular recognition and report some of the most recent efforts to tailor VISM for more challenging scenarios, with the ultimate goal of including thermal fluctuations into the framework. The advances reported herein pave the way to make VISM a uniquely successful approach to characterize complex solvation properties in the recognition and binding of large-scale biomolecular complexes. PMID:29484300

  16. Rapid identifying high-influence nodes in complex networks

    NASA Astrophysics Data System (ADS)

    Song, Bo; Jiang, Guo-Ping; Song, Yu-Rong; Xia, Ling-Ling

    2015-10-01

    A tiny fraction of influential individuals play a critical role in the dynamics on complex systems. Identifying the influential nodes in complex networks has theoretical and practical significance. Considering the uncertainties of network scale and topology, and the timeliness of dynamic behaviors in real networks, we propose a rapid identifying method (RIM) to find the fraction of high-influence nodes. Instead of ranking all nodes, our method aims at ranking only a small number of nodes in the network. We set the high-influence nodes as initial spreaders, and evaluate the performance of RIM with the susceptible-infected-recovered (SIR) model. The simulations show that, in different networks, RIM performs well at rapidly identifying high-influence nodes, which is verified by typical ranking methods, such as degree, closeness, betweenness, and eigenvector centrality methods. Project supported by the National Natural Science Foundation of China (Grant Nos. 61374180 and 61373136), the Ministry of Education Research in the Humanities and Social Sciences Planning Fund Project, China (Grant No. 12YJAZH120), and the Six Projects Sponsoring Talent Summits of Jiangsu Province, China (Grant No. RLD201212).
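    The SIR-based evaluation step can be sketched as follows: seed an epidemic at the candidate high-influence nodes and measure the average final outbreak size. The network, spreading rate and the simple top-degree seed set below are illustrative stand-ins, not the RIM ranking itself.

```python
import random
import networkx as nx

def sir_outbreak_size(G, seeds, beta=0.3, runs=200):
    """Average final outbreak size of a discrete-time SIR process started at
    'seeds' (infected nodes recover after exactly one time step)."""
    total = 0
    for _ in range(runs):
        infected, recovered = set(seeds), set()
        while infected:
            new_infected = set()
            for v in infected:
                for u in G.neighbors(v):
                    if u not in infected and u not in recovered and random.random() < beta:
                        new_infected.add(u)
            recovered |= infected
            infected = new_infected - recovered
        total += len(recovered)
    return total / runs

G = nx.watts_strogatz_graph(1000, 6, 0.1, seed=2)
top_degree = sorted(G, key=G.degree, reverse=True)[:10]     # a simple candidate seed set
print("average outbreak size from top-degree seeds:",
      sir_outbreak_size(G, top_degree))
```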

  17. Permutation entropy analysis of heart rate variability for the assessment of cardiovascular autonomic neuropathy in type 1 diabetes mellitus.

    PubMed

    Carricarte Naranjo, Claudia; Sanchez-Rodriguez, Lazaro M; Brown Martínez, Marta; Estévez Báez, Mario; Machado García, Andrés

    2017-07-01

    Heart rate variability (HRV) analysis is a relevant tool for the diagnosis of cardiovascular autonomic neuropathy (CAN). To our knowledge, no previous investigation on CAN has assessed the complexity of HRV from an ordinal perspective. Therefore, the aim of this work is to explore the potential of permutation entropy (PE) analysis of HRV complexity for the assessment of CAN. For this purpose, we performed a short-term PE analysis of HRV in healthy subjects and type 1 diabetes mellitus patients, including patients with CAN. Standard HRV indicators were also calculated in the control group. A discriminant analysis was used to select the combination of variables with the best discriminative power between the control and CAN patient groups, as well as for classifying cases. We found that for some specific temporal scales, PE indicators were significantly lower in CAN patients than those calculated for controls. In such cases, there were ordinal patterns with high probabilities of occurrence, while others were hardly found. We posit this behavior occurs due to a decrease of HRV complexity in the diseased system. Discriminant functions based on PE measures or probabilities of occurrence of ordinal patterns provided an average of 75% and 96% classification accuracy. Correlations of PE and HRV measures were shown to depend only on temporal scale, regardless of pattern length. PE analysis at some specific temporal scales seems to provide additional information to that obtained with traditional HRV methods. We concluded that PE analysis of HRV is a promising method for the assessment of CAN. Copyright © 2017 Elsevier Ltd. All rights reserved.
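    For reference, permutation entropy itself is straightforward to compute: each length-m window (taken at a given temporal scale, i.e. delay) is mapped to its ordinal pattern, and the Shannon entropy of the pattern distribution is normalized by its maximum. The parameters and test signals below are illustrative, not those of the clinical study.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, delay=1, normalize=True):
    """Normalized permutation entropy of a 1D time series."""
    x = np.asarray(x)
    n_windows = len(x) - (m - 1) * delay
    counts = {}
    for i in range(n_windows):
        window = x[i:i + m * delay:delay]
        pattern = tuple(np.argsort(window))       # ordinal pattern of the window
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n_windows
    pe = -(p * np.log2(p)).sum()
    return pe / np.log2(factorial(m)) if normalize else pe

# a regular signal has low PE; white noise approaches the maximum of 1
t = np.arange(2000)
print("sine :", permutation_entropy(np.sin(0.05 * t)))
print("noise:", permutation_entropy(np.random.default_rng(0).normal(size=2000)))
```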

  18. Computation of wind tunnel wall effects for complex models using a low-order panel method

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.; Harris, Scott H.

    1994-01-01

    A technique for determining wind tunnel wall effects for complex models using the low-order, three dimensional panel method PMARC (Panel Method Ames Research Center) has been developed. Initial validation of the technique was performed using lift-coefficient data in the linear lift range from tests of a large-scale STOVL fighter model in the National Full-Scale Aerodynamics Complex (NFAC) facility. The data from these tests served as an ideal database for validating the technique because the same model was tested in two wind tunnel test sections with widely different dimensions. The lift-coefficient data obtained for the same model configuration in the two test sections were different, indicating a significant influence of the presence of the tunnel walls and mounting hardware on the lift coefficient in at least one of the two test sections. The wind tunnel wall effects were computed using PMARC and then subtracted from the measured data to yield corrected lift-coefficient versus angle-of-attack curves. The corrected lift-coefficient curves from the two wind tunnel test sections matched very well. Detailed pressure distributions computed by PMARC on the wing lower surface helped identify the source of large strut interference effects in one of the wind tunnel test sections. Extension of the technique to analysis of wind tunnel wall effects on the lift coefficient in the nonlinear lift range and on drag coefficient will require the addition of boundary-layer and separated-flow models to PMARC.

  19. A study on axial and torsional resonant mode matching for a mechanical system with complex nonlinear geometries

    NASA Astrophysics Data System (ADS)

    Watson, Brett; Yeo, Leslie; Friend, James

    2010-06-01

    Making use of mechanical resonance has many benefits for the design of microscale devices. A key to successfully incorporating this phenomenon in the design of a device is to understand how the resonant frequencies of interest are affected by changes to the geometric parameters of the design. For simple geometric shapes, this is quite easy, but for complex nonlinear designs, it becomes significantly more complex. In this paper, two novel modeling techniques are demonstrated to extract the axial and torsional resonant frequencies of a complex nonlinear geometry. The first decomposes the complex geometry into easy to model components, while the second uses scaling techniques combined with the finite element method. Both models overcome problems associated with using current analytical methods as design tools, and enable a full investigation of how changes in the geometric parameters affect the resonant frequencies of interest. The benefit of such models is then demonstrated through their use in the design of a prototype piezoelectric ultrasonic resonant micromotor which has improved performance characteristics over previous prototypes.

  20. Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, Gregory Francis; Zhang, Jinghe

    2014-06-10

    Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today’s tools for real-time complex system operations are mostly based on steady state models, unable to capture the dynamic nature and too slow to prevent system failures. We developed advanced Kalman filtering techniques and the formulation of dynamic state estimation using Kalman filtering techniques to capture complex system dynamics in aiding real-time operations and control. In this work, we looked at complex system issues including severe nonlinearity of system equations, discontinuities caused by system controls and network switches, sparse measurements in space and time, and real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between Computer Science and Power Systems Engineering, by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.
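    As a baseline for the dynamic state estimation discussed above, the sketch below shows the textbook linear Kalman filter predict/update step on a toy constant-velocity state; the project's actual filters extend this to severe nonlinearities and discontinuities. All matrices and noise levels are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

dt = 1.0
F = np.array([[1, dt], [0, 1]])          # constant-velocity dynamics
H = np.array([[1, 0]])                   # only position is measured
Q = 1e-4 * np.eye(2)                     # process noise
R = np.array([[0.25]])                   # measurement noise
x, P = np.zeros(2), np.eye(2)

rng = np.random.default_rng(3)
truth = 0.5 * np.arange(50)              # true position moving at 0.5 per step
for z in truth + rng.normal(0, 0.5, 50):
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("estimated velocity:", round(float(x[1]), 2))
```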

  1. Spectroscopic and DFT studies of flurbiprofen as dimer and its Cu(II) and Hg(II) complexes

    NASA Astrophysics Data System (ADS)

    Sagdinc, Seda; Pir, Hacer

    2009-07-01

    The vibrational study in the solid state of flurbiprofen and its Cu(II) and Hg(II) complexes was performed by IR and Raman spectroscopy. The changes observed between the IR and Raman spectra of the ligand and of the complexes allowed us to establish the coordination mode of the metal in both complexes. The comparative vibrational analysis of the free ligand and its complexes gave evidence that flurbiprofen binds metal (II) through the carboxylate oxygen. The fully optimized equilibrium structure of flurbiprofen and its metal complexes was obtained by the density functional B3LYP method using the LanL2DZ and 6-31G(d,p) basis sets. The harmonic vibrational frequencies, infrared intensities and Raman scattering activities of flurbiprofen were calculated by the density functional B3LYP method using the 6-31G(d,p) basis set. The scaled theoretical wavenumbers showed very good agreement with the experimental values. The electronic properties of the free molecule and its complexes were also computed at the B3LYP/6-31G(d,p) level of theory. Detailed interpretations of the infrared and Raman spectra of flurbiprofen are reported. The UV-vis spectra of flurbiprofen and its metal complexes were also investigated in organic solvents.

  2. Age-related variation in EEG complexity to photic stimulation: A multiscale entropy analysis

    PubMed Central

    Takahashi, Tetsuya; Cho, Raymond Y.; Murata, Tetsuhito; Mizuno, Tomoyuki; Kikuchi, Mitsuru; Mizukami, Kimiko; Kosaka, Hirotaka; Takahashi, Koichi; Wada, Yuji

    2010-01-01

    Objective This study was intended to examine variations in electroencephalographic (EEG) complexity in response to photic stimulation (PS) during aging to test the hypothesis that the aging process reduces physiologic complexity and functional responsiveness. Methods Multiscale entropy (MSE), an estimate of time-series signal complexity associated with long-range temporal correlation, is used as a recently proposed method for quantifying EEG complexity with multiple coarse-grained sequences. We recorded EEG in 13 healthy elderly subjects and 12 healthy young subjects during pre-PS and post-PS conditions and estimated their respective MSE values. Results For the pre-PS condition, no significant complexity difference was found between the groups. However, a significant MSE change (complexity increase) was found post-PS only in young subjects, thereby revealing a power-law scaling property, which means long-range temporal correlation. Conclusions Enhancement of long-range temporal correlation in young subjects after PS might reflect a cortical response to stimuli, which was absent in elderly subjects. These results are consistent with the general “loss of complexity/diminished functional response to stimuli” theory of aging. Significance Our findings demonstrate that application of MSE analysis to EEG is a powerful approach for studying age-related changes in brain function. PMID:19231279
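    A compact sketch of the MSE computation used here: coarse-grain the series at each temporal scale and take the sample entropy of the coarse-grained sequence. The m and r defaults are common choices, and synthetic noise stands in for the EEG.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.15):
    """Sample entropy of a 1D series with tolerance r = r_factor * std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)                 # Chebyshev-distance matches
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, max_scale=5):
    """Sample entropy of coarse-grained versions of x at scales 1..max_scale."""
    out = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = np.asarray(x[:n * tau]).reshape(n, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(1)
white = rng.normal(size=3000)                       # stand-in for an EEG epoch
print("MSE of white noise:", np.round(multiscale_entropy(white), 2))
```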

  3. Rupture Complexities of Fluid Induced Microseismic Events at the Basel EGS Project

    NASA Astrophysics Data System (ADS)

    Folesky, Jonas; Kummerow, Jörn; Shapiro, Serge A.; Häring, Markus; Asanuma, Hiroshi

    2016-04-01

    Microseismic data sets of excellent quality, such as the seismicity recorded in the Basel-1 enhanced geothermal system, Switzerland, in 2006-2007, provide the opportunity to analyse induced seismic events in great detail. It is important to understand to what extent seismological insights on e.g. source and rupture processes are scale dependent and how they can be transferred to fluid induced micro-seismicity. We applied the empirical Green's function (EGF) method in order to reconstruct the relative source time functions of 195 suitable microseismic events from the Basel-1 reservoir. We found 93 solutions with a clear and consistent directivity pattern. The remaining events display either no measurable directivity, are unfavourably oriented or exhibit non-consistent or complex relative source time functions. In this work we focus on selected events of M ˜ 1 which show possible rupture complexities. It is demonstrated that the EGF method allows to resolve complex rupture behaviour even if it is not directly identifiable in the seismograms. We find clear evidence of rupture directivity and multi-phase rupturing in the analysed relative source time functions. The time delays between consecutive subevents lie on the order of 10 ms. Amplitudes of the relative source time functions of the subevents do not always show the same azimuthal dependence, indicating dissimilarity in the rupture directivity of the subevents. Our observations support the assumption that heterogeneity on fault surfaces persists down to small scales (a few tens of meters).

  4. Approaches to advance scientific understanding of macrosystems ecology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin

    Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.

  5. Fault severity assessment for rolling element bearings using the Lempel-Ziv complexity and continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Hong, Hoonbin; Liang, Ming

    2009-02-01

    This paper proposes a new version of the Lempel-Ziv complexity as a bearing fault (single point) severity measure based on the continuous wavelet transform (CWT) results, and attempts to address the issues present in the current version of the Lempel-Ziv complexity measure. To establish the relationship between the Lempel-Ziv complexity and bearing fault severity, an analytical model for a single-point defective bearing is adopted and the factors contributing to the complexity value are explained. To avoid the ambiguity between fault and noise, the Lempel-Ziv complexity is jointly applied with the CWT. The CWT is used to identify the best scale where the fault resides and eliminate the interferences of noise and irrelevant signal components as much as possible. Then, the Lempel-Ziv complexity values are calculated for both the envelope and high-frequency carrier signal obtained from wavelet coefficients at the best scale level. As the noise and other un-related signal components have been largely removed, the Lempel-Ziv complexity value will be mostly contributed by the bearing system and hence can be reliably used as a bearing fault measure. The applications to the bearing inner- and outer-race fault signals have demonstrated that the revised Lempel-Ziv complexity can effectively measure the severity of both inner- and outer-race faults. Since the complexity values are not dependent on the magnitude of the measured signal, the proposed method is less sensitive to the data sets measured under different data acquisition conditions. In addition, as the normalized complexity values are bounded between zero and one, it is convenient to observe the fault growing trend by examining the Lempel-Ziv complexity.
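    The complexity measure itself is easy to reproduce in a simplified form: binarize the (envelope) signal and count the number of distinct phrases in a Lempel-Ziv parsing, normalized by the maximum expected for a random sequence. This LZ78-style variant is only an illustration, not the revised measure or the CWT-based scale selection of the paper.

```python
import numpy as np

def lempel_ziv_complexity(sequence):
    """Normalized count of distinct phrases in an LZ78-style parsing of a 0/1 sequence.
    Random sequences give values near 1; regular sequences give much smaller values."""
    s = ''.join(str(int(b)) for b in sequence)
    phrases, phrase = set(), ''
    for ch in s:
        phrase += ch
        if phrase not in phrases:        # a new phrase ends here
            phrases.add(phrase)
            phrase = ''
    c = len(phrases) + (1 if phrase else 0)
    n = len(s)
    return c * np.log2(n) / n            # normalize by the ~n/log2(n) maximum

rng = np.random.default_rng(0)
t = np.arange(4096)
regular = (np.sin(0.2 * t) > 0).astype(int)                       # healthy-like envelope
noisy = ((np.sin(0.2 * t) + rng.normal(0, 1, t.size)) > 0).astype(int)
print("regular signal:", round(lempel_ziv_complexity(regular), 3))
print("noisy signal  :", round(lempel_ziv_complexity(noisy), 3))
```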

  6. Dynamics of Marine Microbial Metabolism and Physiology at Station ALOHA

    NASA Astrophysics Data System (ADS)

    Casey, John R.

    Marine microbial communities influence global biogeochemical cycles by coupling the transduction of free energy to the transformation of Earth's essential bio-elements: H, C, N, O, P, and S. The web of interactions between these processes is extraordinarily complex, though fundamental physical and thermodynamic principles should describe its dynamics. In this collection of 5 studies, aspects of the complexity of marine microbial metabolism and physiology were investigated as they interact with biogeochemical cycles and direct the flow of energy within the Station ALOHA surface layer microbial community. In Chapter 1, and at the broadest level of complexity discussed, a method to relate cell size to metabolic activity was developed to evaluate allometric power laws at fine scales within picoplankton populations. Although size was predictive of metabolic rates, within-population power laws deviated from the broader size spectrum, suggesting metabolic diversity as a key determinant of microbial activity. In Chapter 2, a set of guidelines was proposed by which organic substrates are selected and utilized by the heterotrophic community based on their nitrogen content, carbon content, and energy content. A hierarchical experimental design suggested that the heterotrophic microbial community prefers high nitrogen content but low energy density substrates, while carbon content was not important. In Chapter 3, a closer look at the light-dependent dynamics of growth on a single organic substrate, glycolate, suggested that growth yields were improved by photoheterotrophy. The remaining chapters were based on the development of a genome-scale metabolic network reconstruction of the cyanobacterium Prochlorococcus to probe its metabolic capabilities and quantify metabolic fluxes. Findings described in Chapter 4 pointed to evolution of the Prochlorococcus metabolic network to optimize growth at low phosphate concentrations. Finally, in Chapter 5 and at the finest scale of complexity, a method was developed to predict hourly changes in both physiology and metabolic fluxes in Prochlorococcus by incorporating gene expression time-series data within the metabolic network model. Growth rates predicted by this method more closely matched experimental data, and diel changes in elemental composition and the energy content of biomass were predicted. Collectively, these studies identify and quantify the potential impact of variations in metabolic and physiological traits on the melee of microbial community interactions.

  7. Atomistic to continuum modeling of solidification microstructures

    DOE PAGES

    Karma, Alain; Tourret, Damien

    2015-09-26

    We summarize recent advances in modeling of solidification microstructures using computational methods that bridge atomistic to continuum scales. We first discuss progress in atomistic modeling of equilibrium and non-equilibrium solid–liquid interface properties influencing microstructure formation, as well as interface coalescence phenomena influencing the late stages of solidification. The latter is relevant in the context of hot tearing reviewed in the article by M. Rappaz in this issue. We then discuss progress to model microstructures on a continuum scale using phase-field methods. We focus on selected examples in which modeling of 3D cellular and dendritic microstructures has been directly linked to experimental observations. Finally, we discuss a recently introduced coarse-grained dendritic needle network approach to simulate the formation of well-developed dendritic microstructures. The approach reliably bridges the well-separated scales traditionally simulated by phase-field and grain structure models, hence opening new avenues for quantitative modeling of complex intra- and inter-grain dynamical interactions on a grain scale.

  8. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  9. Lattice Boltzmann modeling of transport phenomena in fuel cells and flow batteries

    NASA Astrophysics Data System (ADS)

    Xu, Ao; Shyy, Wei; Zhao, Tianshou

    2017-06-01

    Fuel cells and flow batteries are promising technologies to address climate change and air pollution problems. An understanding of the complex multiscale and multiphysics transport phenomena occurring in these electrochemical systems requires powerful numerical tools. Over the past decades, the lattice Boltzmann (LB) method has attracted broad interest in the computational fluid dynamics and the numerical heat transfer communities, primarily due to its kinetic nature making it appropriate for modeling complex multiphase transport phenomena. More importantly, the LB method fits well with parallel computing due to its locality feature, which is required for large-scale engineering applications. In this article, we review the LB method for gas-liquid two-phase flows, coupled fluid flow and mass transport in porous media, and particulate flows. Examples of applications are provided in fuel cells and flow batteries. Further developments of the LB method are also outlined.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Weizhou, E-mail: wzw@lynu.edu.cn, E-mail: ybw@gzu.edu.cn; Zhang, Yu; Sun, Tao

    High-level coupled cluster singles, doubles, and perturbative triples [CCSD(T)] computations with up to the aug-cc-pVQZ basis set (1924 basis functions) and various extrapolations toward the complete basis set (CBS) limit are presented for the sandwich, T-shaped, and parallel-displaced benzene⋯naphthalene complex. Using the CCSD(T)/CBS interaction energies as a benchmark, the performance of some newly developed wave function and density functional theory methods has been evaluated. The best performing methods were found to be the dispersion-corrected PBE0 functional (PBE0-D3) and spin-component scaled zeroth-order symmetry-adapted perturbation theory (SCS-SAPT0). The success of SCS-SAPT0 is very encouraging because it provides one method for energy component analysis of π-stacked complexes with 200 atoms or more. Most newly developed methods do, however, overestimate the interaction energies. The results of energy component analysis show that interaction energies are overestimated mainly due to the overestimation of dispersion energy.

  11. Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trebotich, D

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  12. Modeling complex biological flows in multi-scale systems using the APDEC framework

    NASA Astrophysics Data System (ADS)

    Trebotich, David

    2006-09-01

    We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.

  13. Understanding the Complexity of Temperature Dynamics in Xinjiang, China, from Multitemporal Scale and Spatial Perspectives

    PubMed Central

    Chen, Yaning; Li, Weihong; Liu, Zuhan; Wei, Chunmeng; Tang, Jie

    2013-01-01

    Based on the observed data from 51 meteorological stations during the period from 1958 to 2012 in Xinjiang, China, we investigated the complexity of temperature dynamics from the temporal and spatial perspectives by using a comprehensive approach including the correlation dimension (CD), classical statistics, and geostatistics. The main conclusions are as follows: (1) The integer CD values indicate that the temperature dynamics are a complex and chaotic system, which is sensitive to the initial conditions. (2) The complexity of temperature dynamics decreases as the temporal scale increases. To describe the temperature dynamics, at least 3 independent variables are needed at the daily scale, whereas at least 2 independent variables are needed at the monthly, seasonal, and annual scales. (3) The spatial patterns of CD values at different temporal scales indicate that the complex temperature dynamics are derived from the complex landform. PMID:23843732
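
    For readers unfamiliar with the correlation dimension (CD) used above, the Grassberger-Procaccia estimate fits the slope of the correlation integral against radius on log-log axes after delay embedding of the series. The sketch below is a generic illustration with assumed embedding parameters and a synthetic series; it is not the authors' code.

```python
import numpy as np

def correlation_dimension(series, m=3, tau=1, radii=None):
    """Grassberger-Procaccia estimate of the correlation dimension of a scalar
    series after delay embedding in m dimensions with delay tau (illustrative)."""
    n = len(series) - (m - 1) * tau
    emb = np.column_stack([series[i*tau : i*tau + n] for i in range(m)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]                 # unique pairwise distances
    if radii is None:
        radii = np.logspace(np.log10(dists[dists > 0].min()),
                            np.log10(dists.max()), 20)
    corr = np.array([(dists < r).mean() for r in radii])   # correlation integral C(r)
    # slope of log C(r) vs log r over the scaling range approximates the dimension
    slope, _ = np.polyfit(np.log(radii), np.log(corr + 1e-12), 1)
    return slope

# Usage with a synthetic temperature-like series (annual cycle plus noise)
t = np.arange(0, 2000)
series = np.sin(2 * np.pi * t / 365.0) + 0.3 * np.random.randn(t.size)
print(correlation_dimension(series))
```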

  14. Remote sensing of freeze-thaw transitions in Arctic soils using the complex resistivity method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Yuxin; Hubbard, Susan S; Ulrich, Craig

    2013-01-01

    Our ability to monitor freeze-thaw transitions is critical to developing a predictive understanding of biogeochemical transitions and carbon dynamics in high latitude environments. In this study, we conducted laboratory column experiments to explore the potential of the complex resistivity method for monitoring the freeze-thaw transitions of the arctic permafrost soils. Samples for the experiment were collected from the upper active layer of Gelisol soils at the Barrow Environmental Observatory, Barrow, Alaska. Freeze-thaw transitions were induced through exposing the soil column to controlled temperature environments at 4 °C and -20 °C. Complex resistivity and temperature measurements were collected regularly during the freeze-thaw transitions using electrodes and temperature sensors installed along the column. During the experiments, over two orders of magnitude of resistivity variations were observed when the temperature was increased or decreased between -20 °C and 0 °C. Smaller resistivity variations were also observed during the isothermal thawing or freezing processes that occurred near 0 °C. Single frequency electrical phase response and imaginary conductivity at 1 Hz were found to be exclusively related to the unfrozen water in the soil matrix, suggesting that these geophysical attributes can be used as a proxy for the monitoring of the onset and progression of the freeze-thaw transitions. Spectral electrical responses and fitted Cole-Cole parameters contained additional information about the freeze-thaw transition affected by the soil grain size distribution. Specifically, a shift of the observed spectral response to lower frequency was observed during the isothermal thawing process, which we interpret to be due to sequential thawing, first from fine then to coarse particles within the soil matrix. Our study demonstrates the potential of the complex resistivity method for remote monitoring of freeze-thaw transitions in arctic soils. Although conducted at the laboratory scale, this study provides the foundation for exploring the potential of the complex resistivity signals for monitoring spatiotemporal variations of freeze-thaw transitions over field-relevant scales.
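
    The Cole-Cole parameters referred to above come from fitting a standard relaxation model to the spectral complex resistivity data. A minimal sketch of that model and of the single-frequency attributes (phase, imaginary conductivity) is given below; all parameter values are illustrative assumptions, not fitted values from the study.

```python
import numpy as np

def cole_cole(freq, rho0, m, tau, c):
    """Cole-Cole complex resistivity model:
    rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))."""
    w = 2 * np.pi * freq
    return rho0 * (1 - m * (1 - 1.0 / (1 + (1j * w * tau) ** c)))

freqs = np.logspace(-2, 3, 50)                                # 0.01 Hz to 1 kHz sweep
rho = cole_cole(freqs, rho0=100.0, m=0.1, tau=0.01, c=0.5)    # illustrative parameters
phase_mrad = 1e3 * np.angle(rho)                              # phase response in milliradians
imag_cond = np.imag(1.0 / rho)                                # imaginary conductivity spectrum
```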

  15. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    PubMed Central

    Shang, Yizi; Shang, Ling; Gao, Chuanchang; Lu, Guiming; Ye, Yuntao; Jia, Dongdong

    2014-01-01

    This paper presents further research on facilitating large-scale scientific computing on the grid and the desktop grid platform. The related issues include the programming method, the overhead of the high-level program interface based middleware, and data anticipation migration. The block-based Gauss-Jordan algorithm, as a real example of large-scale scientific computing, is used to evaluate the issues presented above. The results show that the high-level based program interface makes complex scientific applications on a large-scale scientific platform easier, though a little overhead is unavoidable. Also, the data anticipation migration mechanism can improve the efficiency of the platform when processing big-data based scientific applications. PMID:24574931
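
    As context for the benchmark mentioned above, the kernel of the block-based Gauss-Jordan algorithm is ordinary Gauss-Jordan elimination; the middleware distributes it over blocks on the grid. The serial sketch below shows only the elimination kernel, not the grid or desktop-grid machinery, with an illustrative test matrix.

```python
import numpy as np

def gauss_jordan_inverse(a):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting
    (a plain serial sketch of the kernel that the paper distributes in blocks)."""
    a = a.astype(float)
    n = a.shape[0]
    aug = np.hstack([a, np.eye(n)])                       # augmented matrix [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))   # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                         # normalize pivot row
        for row in range(n):                              # eliminate the column elsewhere
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                                     # right half now holds A^{-1}

# Usage: verify against numpy on a random well-conditioned matrix
a = np.random.rand(5, 5) + 5 * np.eye(5)
assert np.allclose(gauss_jordan_inverse(a), np.linalg.inv(a))
```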

  16. Non-stationary dynamics in the bouncing ball: A wavelet perspective

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behera, Abhinna K., E-mail: abhinna@iiserkol.ac.in; Panigrahi, Prasanta K., E-mail: pprasanta@iiserkol.ac.in; Sekar Iyengar, A. N., E-mail: ansekar.iyengar@saha.ac.in

    2014-12-01

    The non-stationary dynamics of a bouncing ball, comprising both periodic as well as chaotic behavior, is studied through wavelet transform. The multi-scale characterization of the time series displays clear signatures of self-similarity, complex scaling behavior, and periodicity. Self-similar behavior is quantified by the generalized Hurst exponent, obtained through both wavelet based multi-fractal detrended fluctuation analysis and Fourier methods. The scale dependent variable window size of the wavelets aptly captures both the transients and non-stationary periodic behavior, including the phase synchronization of different modes. The optimal time-frequency localization of the continuous Morlet wavelet is found to delineate the scales corresponding to neutral turbulence, viscous dissipation regions, and different time varying periodic modulations.
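
    As a simple illustration of the Hurst-exponent estimation mentioned above, the sketch below uses plain (monofractal) detrended fluctuation analysis rather than the wavelet-based multifractal DFA of the paper; the scale range and window handling are illustrative assumptions.

```python
import numpy as np

def dfa_hurst(x, scales=None):
    """Estimate the Hurst exponent by detrended fluctuation analysis (DFA);
    a simpler monofractal stand-in for the wavelet-based MFDFA used in the paper."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())                      # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 15).astype(int))
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        segs = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # detrend each window with a linear fit and collect the RMS fluctuation
        coeffs = np.polyfit(t, segs.T, 1)
        trend = np.outer(coeffs[0], t) + coeffs[1][:, None]
        flucts.append(np.sqrt(np.mean((segs - trend) ** 2)))
    h, _ = np.polyfit(np.log(scales), np.log(flucts), 1)   # slope = Hurst exponent
    return h

# Usage: white noise should give H close to 0.5
print(dfa_hurst(np.random.randn(10000)))
```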

  17. Definition of (so MIScalled) ''Complexity'' as UTTER-SIMPLICITY!!! Versus Deviations From it as Complicatedness-Measure

    NASA Astrophysics Data System (ADS)

    Young, F.; Siegel, Edward Carl-Ludwig

    2011-03-01

    (so MIScalled) "complexity" with INHERENT BOTH SCALE-Invariance Symmetry-RESTORING, AND 1 / w (1.000..) "pink" Zipf-law Archimedes-HYPERBOLICITY INEVITABILITY power-spectrum power-law decay algebraicity. Their CONNECTION is via simple-calculus SCALE-Invariance Symmetry-RESTORING logarithm-function derivative: (d/ d ω) ln(ω) = 1 / ω , i.e. (d/ d ω) [SCALE-Invariance Symmetry-RESTORING](ω) = 1/ ω . Via Noether-theorem continuous-symmetries relation to conservation-laws: (d/ d ω) [inter-scale 4-current 4-div-ergence} = 0](ω) = 1 / ω . Hence (so MIScalled) "complexity" is information inter-scale conservation, in agreement with Anderson-Mandell [Fractals of Brain/Mind, G. Stamov ed.(1994)] experimental-psychology!!!], i.e. (so MIScalled) "complexity" is UTTER-SIMPLICITY!!! Versus COMPLICATEDNESS either PLUS (Additive) VS. TIMES (Multiplicative) COMPLICATIONS of various system-specifics. COMPLICATEDNESS-MEASURE DEVIATIONS FROM complexity's UTTER-SIMPLICITY!!!: EITHER [SCALE-Invariance Symmetry-BREAKING] MINUS [SCALE-Invariance Symmetry-RESTORING] via power-spectrum power-law algebraicity decays DIFFERENCES: ["red"-Pareto] MINUS ["pink"-Zipf Archimedes-HYPERBOLICITY INEVITABILITY]!!!

  18. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  19. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    Methods for development of logic design together with algorithms for failure testing, a method for design of logic for ultra-large-scale integration, extension of quantum calculus to describe the functional behavior of a mechanism component-by-component and to computer tests for failures in the mechanism using the diagnosis algorithm, and the development of an algorithm for the multi-output 2-level minimization problem are discussed.

  20. Live 129I-129Xe dating

    NASA Technical Reports Server (NTRS)

    Marti, K.

    1986-01-01

    A technique of cosmic ray exposure age dating using cosmic ray produced I-129 and Xe-129 components is discussed. The live I-129 - Xe-129 method provides an ideal monitor for cosmic ray flux variations on the 10^7 y - 10^8 y time-scale. It is based on low-energy neutron reactions on Te, and these data, when coupled to those from other methods, may facilitate the detection of complex exposure histories.

  1. Anchor-Free Localization Method for Mobile Targets in Coal Mine Wireless Sensor Networks

    PubMed Central

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequence. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is imported to estimate the reference nodes’ locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines. PMID:22574048

  2. Anchor-free localization method for mobile targets in coal mine wireless sensor networks.

    PubMed

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (MDS) and rank sequence. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is imported to estimate the reference nodes' locations. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and experiments in a mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines.
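
    As background for the MDS step described in the two records above, the sketch below recovers relative node coordinates from a noisy pairwise distance matrix using classical (metric) MDS. The paper uses a non-metric variant based on rank information, so this is only an illustrative stand-in, with assumed node counts and noise levels.

```python
import numpy as np

def classical_mds(dist, dim=2):
    """Recover relative node coordinates from a pairwise distance matrix by
    classical MDS (the paper uses a non-metric variant; this is a simpler sketch)."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j                 # double-centered squared distances
    eigval, eigvec = np.linalg.eigh(b)
    idx = np.argsort(eigval)[::-1][:dim]           # keep the largest eigenvalues
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0))

# Usage: noisy ranging between hypothetical anchor-free sensor nodes
true_xy = np.random.rand(12, 2) * 100              # 12 nodes in a 100 m square
dist = np.linalg.norm(true_xy[:, None] - true_xy[None, :], axis=-1)
dist += np.random.randn(*dist.shape) * 0.5         # measurement noise
est_xy = classical_mds((dist + dist.T) / 2)        # coordinates up to rotation/reflection
```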

  3. Accessible methods for the dynamic time-scale decomposition of biochemical systems.

    PubMed

    Surovtsova, Irina; Simus, Natalia; Lorenz, Thomas; König, Artjom; Sahle, Sven; Kummer, Ursula

    2009-11-01

    The growing complexity of biochemical models calls for means to rationally dissect the networks into meaningful and rather independent subnetworks. Such a decomposition should ensure an understanding of the system without any heuristics being employed. Important for the success of such an approach are its accessibility and the clarity of the presentation of the results. In order to achieve this goal, we developed a method which is a modification of the classical approach of time-scale separation. This modified method, as well as the more classical approach, has been implemented for time-dependent application within the widely used software COPASI. The implementation includes different possibilities for the representation of the results, including 3D visualization. The methods are included in COPASI, which is free for academic use and available at www.copasi.org. Contact: irina.surovtsova@bioquant.uni-heidelberg.de. Supplementary data are available at Bioinformatics online.

  4. A Normalization-Free and Nonparametric Method Sharpens Large-Scale Transcriptome Analysis and Reveals Common Gene Alteration Patterns in Cancers.

    PubMed

    Li, Qi-Gang; He, Yong-Han; Wu, Huan; Yang, Cui-Ping; Pu, Shao-Yan; Fan, Song-Qing; Jiang, Li-Ping; Shen, Qiu-Shuo; Wang, Xiao-Xiong; Chen, Xiao-Qiong; Yu, Qin; Li, Ying; Sun, Chang; Wang, Xiangting; Zhou, Jumin; Li, Hai-Peng; Chen, Yong-Bin; Kong, Qing-Peng

    2017-01-01

    Heterogeneity in transcriptional data hampers the identification of differentially expressed genes (DEGs) and understanding of cancer, essentially because current methods rely on cross-sample normalization and/or distribution assumptions, both of which are sensitive to heterogeneous values. Here, we developed a new method, Cross-Value Association Analysis (CVAA), which overcomes these limitations and is more robust to heterogeneous data than the other methods. Applying CVAA to a more complex pan-cancer dataset containing 5,540 transcriptomes discovered numerous new DEGs and many previously rarely explored pathways/processes; some of them were validated, both in vitro and in vivo, to be crucial in tumorigenesis, e.g., alcohol metabolism (ADH1B), chromosome remodeling (NCAPH) and the complement system (Adipsin). Together, we present a sharper tool to navigate large-scale expression data and gain new mechanistic insights into tumorigenesis.

  5. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models, however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a workflow of fault modeling, which can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale or approximately parallel with the sections, we propose the fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, using the fault cutting algorithm can supplement the available fault points on the location where faults cut each other. Increasing fault points in poor sample areas can not only efficiently construct fault models, but also reduce manual intervention. By using a fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures no matter whether the available geological data are sufficient or not. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  6. Comparison of methods for extracting annual cycle with changing amplitude in climate science

    NASA Astrophysics Data System (ADS)

    Deng, Q.; Fu, Z.

    2017-12-01

    Changes in the annual cycle have gained growing attention recently. The basic hypothesis regards the annual cycle as constant, and the climatological mean within a time period is usually used to depict it. Obviously, this hypothesis contradicts the fact that the annual cycle changes every year. For lack of a unified definition of the annual cycle, the approaches adopted to extract it vary and may lead to different results, so the precision and validity of these methods need to be examined. In this work, numerical experiments with a known monofrequent annual cycle are set up to evaluate five popular extraction methods: fitting sinusoids, complex demodulation, Ensemble Empirical Mode Decomposition (EEMD), Nonlinear Mode Decomposition (NMD) and Seasonal trend decomposition procedure based on loess (STL). Three different types of changing amplitude are generated: steady, linearly increasing and nonlinearly varying. Comparing the annual cycle extracted by these methods with the generated annual cycle, we find that (1) NMD performs best in depicting the annual cycle itself and its amplitude change; (2) the fitting sinusoids, complex demodulation and EEMD methods are more sensitive to long-term memory (LTM) of the generated time series and thus overfit the annual cycle and produce overly noisy amplitudes, whereas STL underestimates the amplitude variation; and (3) all of them can capture the amplitude trend correctly on long time scales, but errors due to noise and LTM are common in some methods over short time scales.

  7. Visual analytics in medical education: impacting analytical reasoning and decision making for quality improvement.

    PubMed

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2015-01-01

    The medical curriculum is the main tool representing the entire undergraduate medical education. Due to its complexity and multilayered structure, it is of limited use to teachers in medical education for quality improvement purposes. In this study we evaluated three visualizations of curriculum data from a pilot course, using teachers from an undergraduate medical program and applying visual analytics methods. We found that visual analytics can positively impact analytical reasoning and decision making in medical education by realizing variables capable of enhancing human perception and cognition of complex curriculum data. The positive results derived from our small-scale evaluation of a medical curriculum signify the need to expand this method to an entire medical curriculum. As our approach sustains low levels of complexity, it opens a new promising direction in medical education informatics research.

  8. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids by Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.

  9. Efficient Construction of Discrete Adjoint Operators on Unstructured Grids Using Complex Variables

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Kleb, William L.

    2005-01-01

    A methodology is developed and implemented to mitigate the lengthy software development cycle typically associated with constructing a discrete adjoint solver for aerodynamic simulations. The approach is based on a complex-variable formulation that enables straightforward differentiation of complicated real-valued functions. An automated scripting process is used to create the complex-variable form of the set of discrete equations. An efficient method for assembling the residual and cost function linearizations is developed. The accuracy of the implementation is verified through comparisons with a discrete direct method as well as a previously developed handcoded discrete adjoint approach. Comparisons are also shown for a large-scale configuration to establish the computational efficiency of the present scheme. To ultimately demonstrate the power of the approach, the implementation is extended to high temperature gas flows in chemical nonequilibrium. Finally, several fruitful research and development avenues enabled by the current work are suggested.
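
    The complex-variable formulation described in the two records above is commonly realized as the complex-step derivative: perturb an input along the imaginary axis and read the derivative from the imaginary part of the output, with no subtractive cancellation. The sketch below illustrates the trick on a toy function; it is only the underlying differentiation idea, not the authors' adjoint solver.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative estimate: f'(x) ~ Im(f(x + i*h)) / h.
    Unlike finite differences there is no subtractive cancellation, so h can be tiny."""
    return np.imag(f(x + 1j * h)) / h

# Usage on a complicated real-valued function (assumed analytic in its inputs)
def residual(x):
    return np.exp(x) * np.sin(3 * x) / (1 + x ** 2)

x0 = 0.7
exact = (np.exp(x0) * (np.sin(3*x0) + 3*np.cos(3*x0)) / (1 + x0**2)
         - np.exp(x0) * np.sin(3*x0) * 2*x0 / (1 + x0**2)**2)
print(complex_step_derivative(residual, x0), exact)   # agree to machine precision
```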

  10. Screening of Small Molecule Interactor Library by Using In-Cell NMR Spectroscopy (SMILI-NMR)

    PubMed Central

    Xie, Jingjing; Thapa, Rajiv; Reverdatto, Sergey; Burz, David S.; Shekhtman, Alexander

    2011-01-01

    We developed an in-cell NMR assay for screening small molecule interactor libraries (SMILI-NMR) for compounds capable of disrupting or enhancing specific interactions between two or more components of a biomolecular complex. The method relies on the formation of a well-defined biocomplex and utilizes in-cell NMR spectroscopy to identify the molecular surfaces involved in the interaction at atomic-scale resolution. Changes in the interaction surface caused by a small molecule interfering with complex formation are used as the read-out of the assay. The in-cell nature of the experimental protocol ensures that the small molecule is capable of penetrating the cell membrane and specifically engaging the target molecule(s). The utility of the method was demonstrated by screening a small dipeptide library against the FKBP–FRB protein complex involved in cell cycle arrest. The dipeptide identified by SMILI-NMR showed biological activity in a functional assay in yeast. PMID:19422228

  11. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, to facilitate concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method, as well as the associated approaches, is illustrated through application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified. The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.

  12. Drawing the PDB: Protein-Ligand Complexes in Two Dimensions.

    PubMed

    Stierand, Katrin; Rarey, Matthias

    2010-12-09

    The two-dimensional representation of molecules is a popular communication medium in chemistry and the associated scientific fields. Computational methods for drawing small molecules with and without manual investigation are well-established and widely spread in terms of numerous software tools. Concerning the planar depiction of molecular complexes, there is considerably less choice. We developed the software PoseView, which automatically generates two-dimensional diagrams of macromolecular complexes, showing the ligand, the interactions, and the interacting residues. All depicted molecules are drawn on an atomic level as structure diagrams; thus, the output plots are clearly structured and easily readable for the scientist. We tested the performance of PoseView in a large-scale application on nearly all druglike complexes of the PDB (approximately 200000 complexes); for more than 92% of the complexes considered for drawing, a layout could be computed. In the following, we will present the results of this application study.

  13. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE PAGES

    Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...

    2015-09-28

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.

  14. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This study provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.

  15. Economic development and wage inequality: A complex system analysis

    PubMed Central

    Pugliese, Emanuele; Pietronero, Luciano

    2017-01-01

    Adapting methods from complex system analysis, this paper analyzes the features of the complex relationship between wage inequality and the development and industrialization of a country. Development is understood as a combination of a monetary index, GDP per capita, and a recently introduced measure of a country’s economic complexity: Fitness. Initially the paper looks at wage inequality on a global scale, over the time period 1990–2008. Our empirical results show that globally the movement of wage inequality along with the ongoing industrialization of countries has followed a longitudinally persistent pattern comparable to the one theorized by Kuznets in the fifties: countries with an average level of development suffer the highest levels of wage inequality. Next, the study narrows its focus on wage inequality within the United States. By using data on wages and employment in the approximately 3100 US counties over the time interval 1990–2014, it generalizes the Fitness-Complexity metric for geographic units and industrial sectors, and then investigates wage inequality between NAICS industries. The empirical time and scale dependencies are consistent with a relation between wage inequality and development driven by institutional factors comparing countries, and by change in the structural compositions of sectors in a homogeneous institutional environment, such as the counties of the United States. PMID:28926577

  16. Economic development and wage inequality: A complex system analysis.

    PubMed

    Sbardella, Angelica; Pugliese, Emanuele; Pietronero, Luciano

    2017-01-01

    Adapting methods from complex system analysis, this paper analyzes the features of the complex relationship between wage inequality and the development and industrialization of a country. Development is understood as a combination of a monetary index, GDP per capita, and a recently introduced measure of a country's economic complexity: Fitness. Initially the paper looks at wage inequality on a global scale, over the time period 1990-2008. Our empirical results show that globally the movement of wage inequality along with the ongoing industrialization of countries has followed a longitudinally persistent pattern comparable to the one theorized by Kuznets in the fifties: countries with an average level of development suffer the highest levels of wage inequality. Next, the study narrows its focus on wage inequality within the United States. By using data on wages and employment in the approximately 3100 US counties over the time interval 1990-2014, it generalizes the Fitness-Complexity metric for geographic units and industrial sectors, and then investigates wage inequality between NAICS industries. The empirical time and scale dependencies are consistent with a relation between wage inequality and development driven by institutional factors comparing countries, and by change in the structural compositions of sectors in a homogeneous institutional environment, such as the counties of the United States.
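
    The Fitness measure referred to in the two records above is usually computed with the iterative Fitness-Complexity map on a binary region-by-activity matrix. The sketch below follows the standard country-product formulation, which the paper generalizes to US counties and NAICS sectors; the matrix, iteration count and normalization are illustrative assumptions.

```python
import numpy as np

def fitness_complexity(m, n_iter=100):
    """Iterate the Fitness-Complexity map on a binary region-by-sector matrix m
    (rows: regions, columns: sectors), following the standard formulation."""
    fitness = np.ones(m.shape[0])
    complexity = np.ones(m.shape[1])
    for _ in range(n_iter):
        f_new = m @ complexity                      # regions gain from complex sectors
        q_new = 1.0 / (m.T @ (1.0 / fitness))       # sectors penalized by weak regions
        fitness = f_new / f_new.mean()              # normalize at every step
        complexity = q_new / q_new.mean()
    return fitness, complexity

# Usage on a small random presence/absence matrix (illustrative only)
m = (np.random.rand(30, 20) > 0.6).astype(float)
f, q = fitness_complexity(m)
```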

  17. Closing the water balance with cosmic-ray soil moisture measurements and assessing their relation to evapotranspiration in two semiarid watersheds

    USDA-ARS?s Scientific Manuscript database

    Soil moisture dynamics reflect the complex interactions of meteorological conditions with soil, vegetation and terrain properties. In this study, intermediate-scale soil moisture estimates from the cosmic-ray neutron sensing (CRNS) method are evaluated for two semiarid ecosystems in the southwestern...

  18. Watershed councils: it takes a community to restore a watershed

    Treesearch

    Marie Oliver; Rebecca Flitcroft

    2011-01-01

    Regulation alone cannot solve complex ecological problems on private lands that are managed for diverse uses. Executing coordinated restoration projects at the watershed scale is only possible with the cooperation and commitment of all stakeholders. Locally organized, nonregulatory watershed councils have proven to be a powerful method of engaging citizens from all...

  19. Nevada Photo-Based Inventory Pilot (NPIP) photo sampling procedures

    Treesearch

    Tracey S. Frescino; Gretchen G. Moisen; Kevin A. Megown; Val J. Nelson; Elizabeth A. Freeman; Paul L. Patterson; Mark Finco; James Menlove

    2009-01-01

    The Forest Inventory and Analysis program (FIA) of the U.S. Forest Service monitors status and trends in forested ecoregions nationwide. The complex nature of this broad-scale, strategic-level inventory demands constant evolution and evaluation of methods to get the best information possible while continuously increasing efficiency. In 2004, the "Nevada Photo-...

  20. Association genetics in Pinus taeda L. I. wood property traits

    Treesearch

    Santiago C. Gonzalez-Martinez; Nicholas C. Wheeler; Elhan Ersoz; C. Dana Nelson; David B. Neale

    2007-01-01

    Genetic association is a powerful method for dissecting complex adaptive traits due to (i) fine-scale mapping resulting from historical recombination, (ii) wide coverage of phenotypic and genotypic variation within a single experiment, and (iii) the simultaneous discovery of loci and alleles. In this article, genetic association among single nucleotide polymorphisms (...

  1. Bio-inspired Fabrication of Complex Hierarchical Structure in Silicon.

    PubMed

    Gao, Yang; Peng, Zhengchun; Shi, Tielin; Tan, Xianhua; Zhang, Deqin; Huang, Qiang; Zou, Chuanping; Liao, Guanglan

    2015-08-01

    In this paper, we developed a top-down method to fabricate complex three-dimensional silicon structures, inspired by the hierarchical micro/nanostructure of the Morpho butterfly scales. The fabrication procedure includes photolithography, metal masking, and both dry and wet etching techniques. First, a microscale photoresist grating pattern was formed on the silicon (111) wafer. Trenches with controllable rippled structures on the sidewalls were etched by the inductively coupled plasma reactive ion etching Bosch process. Then, a Cr film was angle-deposited on the bottom of the ripples by electron beam evaporation, followed by anisotropic wet etching of the silicon. This simple fabrication method results in a large-scale hierarchical structure on a silicon wafer. The fabricated Si structure has multiple layers with a uniform thickness of hundreds of nanometers. We conducted both light reflection and heat transfer experiments on this structure. It exhibited excellent antireflection performance for polarized ultraviolet, visible and near infrared wavelengths, and the heat flux of the structure was significantly enhanced. As such, we believe that these bio-inspired hierarchical silicon structures will have promising applications in photovoltaics, sensor technology and photonic crystal devices.

  2. Clustering biomolecular complexes by residue contacts similarity.

    PubMed

    Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2012-07-01

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
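
    To make the contact-based similarity concrete, the sketch below builds residue contact sets from coordinates and scores two models of the same complex by their fraction of common contacts. The distance cutoff and the normalization by the smaller contact set are reasonable illustrative choices; the paper defines its own conventions.

```python
import numpy as np

def residue_contacts(coords_a, coords_b, cutoff=5.0):
    """Set of inter-chain residue-residue contacts, defined here as pairs of
    residues whose representative atoms lie within a distance cutoff (Angstrom)."""
    d = np.linalg.norm(coords_a[:, None] - coords_b[None, :], axis=-1)
    return {(i, j) for i, j in zip(*np.where(d < cutoff))}

def fraction_common_contacts(contacts_x, contacts_y):
    """Fraction of common contacts between two models of the same complex;
    1.0 means identical interfaces, 0.0 means no shared contacts."""
    if not contacts_x or not contacts_y:
        return 0.0
    return len(contacts_x & contacts_y) / min(len(contacts_x), len(contacts_y))

# Usage: compare two docking poses given per-residue coordinates (illustrative arrays)
pose1_a, pose1_b = np.random.rand(50, 3) * 30, np.random.rand(40, 3) * 30
pose2_a = pose1_a + np.random.randn(*pose1_a.shape) * 0.3   # slightly perturbed pose
pose2_b = pose1_b + np.random.randn(*pose1_b.shape) * 0.3
fcc = fraction_common_contacts(residue_contacts(pose1_a, pose1_b),
                               residue_contacts(pose2_a, pose2_b))
```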

  3. Implementation of the infinite-range exterior complex scaling to the time-dependent complete-active-space self-consistent-field method

    NASA Astrophysics Data System (ADS)

    Orimo, Yuki; Sato, Takeshi; Scrinzi, Armin; Ishikawa, Kenichi L.

    2018-02-01

    We present a numerical implementation of the infinite-range exterior complex scaling [Scrinzi, Phys. Rev. A 81, 053845 (2010), 10.1103/PhysRevA.81.053845] as an efficient absorbing boundary to the time-dependent complete-active-space self-consistent field method [Sato, Ishikawa, Březinová, Lackner, Nagele, and Burgdörfer, Phys. Rev. A 94, 023405 (2016), 10.1103/PhysRevA.94.023405] for multielectron atoms subject to an intense laser pulse. We introduce Gauss-Laguerre-Radau quadrature points to construct discrete variable representation basis functions in the last radial finite element extending to infinity. This implementation is applied to strong-field ionization and high-harmonic generation in He, Be, and Ne atoms. It efficiently prevents unphysical reflection of photoelectron wave packets at the simulation boundary, enabling accurate simulations with substantially reduced computational cost, even under significant (≈50 % ) double ionization. For the case of a simulation of high-harmonic generation from Ne, for example, 80% cost reduction is achieved, compared to a mask-function absorption boundary.

  4. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for the large size HR satellite image registration, which is based on coarse-to-fine strategy and geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method scale restrict (SR) SIFT is implemented at low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method can overcome the memory problem. In geometric SIFT, with area constraints, it is beneficial for validating the candidate matches and decreasing searching complexity. To further improve the matching efficiency, the proposed matching method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate of reference image via Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method can decrease the matching time and increase the number of matching points while maintaining high registration accuracy. PMID:29702589
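
    The coarse-to-fine idea above can be sketched with off-the-shelf OpenCV SIFT: match heavily downsampled images first to get a rough offset, then match at full resolution (where the paper would additionally apply block division and geometric constraints). The helper below is a generic skeleton under those assumptions, not the SR-SIFT or geometric-SIFT algorithms themselves, and it assumes both images yield enough keypoints for the ratio test.

```python
import cv2
import numpy as np

def coarse_to_fine_matches(ref_img, sens_img, levels=3, ratio=0.75):
    """Generic coarse-to-fine SIFT matching skeleton (illustrative only)."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher()

    def match(a, b):
        ka, da = sift.detectAndCompute(a, None)
        kb, db = sift.detectAndCompute(b, None)
        good = [m for m, n in matcher.knnMatch(da, db, k=2)
                if m.distance < ratio * n.distance]        # Lowe's ratio test
        return ka, kb, good

    # Coarse stage on downsampled copies provides a rough translation estimate
    small_ref, small_sens = ref_img, sens_img
    for _ in range(levels):
        small_ref, small_sens = cv2.pyrDown(small_ref), cv2.pyrDown(small_sens)
    ka, kb, coarse = match(small_ref, small_sens)
    shifts = [np.subtract(kb[m.trainIdx].pt, ka[m.queryIdx].pt) for m in coarse]
    offset = (2 ** levels) * np.median(shifts, axis=0)     # scale back to full resolution

    # Fine stage: full-resolution matching (block division would go here for huge images)
    return offset, match(ref_img, sens_img)[2]
```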

  5. Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems

    NASA Astrophysics Data System (ADS)

    Razzak, M. A.; Alam, M. Z.; Sharif, M. N.

    2018-03-01

    In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution procedure are easy and straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly nonlinear forced vibration systems with strong damping effects; the main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and also improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error (in the first-order approximate external frequency) in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is surprisingly 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.

  6. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach

    PubMed Central

    2016-01-01

    Background: Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose: It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method: We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing the carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results: The conducted experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be an effective approach to modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multi-agent system approach. PMID:26812235

  7. A novel way to detect correlations on multi-time scales, with temporal evolution and for multi-variables

    NASA Astrophysics Data System (ADS)

    Yuan, Naiming; Xoplaki, Elena; Zhu, Congwen; Luterbacher, Juerg

    2016-06-01

    In this paper, two new methods, Temporal evolution of Detrended Cross-Correlation Analysis (TDCCA) and Temporal evolution of Detrended Partial-Cross-Correlation Analysis (TDPCCA), are proposed by generalizing DCCA and DPCCA. Applying TDCCA/TDPCCA, it is possible to study correlations on multiple time scales and over different periods. To illustrate their properties, we used two climatological examples: i) Global Sea Level (GSL) versus North Atlantic Oscillation (NAO); and ii) Summer Rainfall over the Yangtze River (SRYR) versus the previous winter Pacific Decadal Oscillation (PDO). We find significant correlations between GSL and NAO on time scales of 60 to 140 years, but the correlations are non-significant between 1865-1875. As for SRYR and PDO, significant correlations are found on time scales of 30 to 35 years, but the correlations are more pronounced during the recent 30 years. By combining TDCCA/TDPCCA and DCCA/DPCCA, we propose a new correlation-detection system which, compared to traditional methods, can objectively show how two time series are related (on which time scale, and during which time period). This is important not only for the diagnosis of complex systems, but also for better design of prediction models. Therefore, the new methods offer new opportunities for applications in natural sciences, such as ecology, economy, sociology and other research fields.
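
    For orientation, the quantity whose temporal evolution TDCCA tracks is the detrended cross-correlation coefficient at a given time scale. The sketch below computes that coefficient for a single scale, using linear detrending in non-overlapping windows; it is a plain illustration, not the authors' TDCCA/TDPCCA implementation.

```python
import numpy as np

def dcca_coefficient(x, y, scale):
    """Detrended cross-correlation coefficient rho_DCCA at one time scale."""
    px = np.cumsum(x - np.mean(x))                     # integrated profiles
    py = np.cumsum(y - np.mean(y))
    n_seg = len(px) // scale
    t = np.arange(scale)
    f_xx = f_yy = f_xy = 0.0
    for i in range(n_seg):
        sx = px[i * scale:(i + 1) * scale]
        sy = py[i * scale:(i + 1) * scale]
        rx = sx - np.polyval(np.polyfit(t, sx, 1), t)  # detrended residuals
        ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
        f_xx += np.mean(rx * rx)
        f_yy += np.mean(ry * ry)
        f_xy += np.mean(rx * ry)
    return f_xy / np.sqrt(f_xx * f_yy)                 # lies in [-1, 1]

# Usage: two series sharing a slow common component
n = 5000
common = np.cumsum(np.random.randn(n)) * 0.05
x = common + np.random.randn(n)
y = common + np.random.randn(n)
print(dcca_coefficient(x, y, scale=200))
```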

  8. Fully coupled approach to modeling shallow water flow, sediment transport, and bed evolution in rivers

    NASA Astrophysics Data System (ADS)

    Li, Shuangcai; Duffy, Christopher J.

    2011-03-01

    Our ability to predict complex environmental fluid flow and transport hinges on accurate and efficient simulations of multiple physical phenomena operating simultaneously over a wide range of spatial and temporal scales, including overbank floods, coastal storm surge events, drying and wetting bed conditions, and simultaneous bed form evolution. This research implements a fully coupled strategy for solving shallow water hydrodynamics, sediment transport, and morphological bed evolution in rivers and floodplains (PIHM_Hydro) and applies the model to field and laboratory experiments that cover a wide range of spatial and temporal scales. The model uses a standard upwind finite volume method and Roe's approximate Riemann solver for unstructured grids. A multidimensional linear reconstruction and slope limiter are implemented, achieving second-order spatial accuracy. Model efficiency and stability are treated using an explicit-implicit method for temporal discretization with operator splitting. Laboratory- and field-scale experiments were compiled where coupled processes across a range of scales were observed and where higher-order spatial and temporal accuracy might be needed for accurate and efficient solutions. These experiments demonstrate the ability of the fully coupled strategy to capture the dynamics of field-scale flood waves and small-scale drying-wetting processes.
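
    As a minimal illustration of the finite volume machinery described above, the sketch below advances the 1D shallow water equations with a Rusanov (local Lax-Friedrichs) flux, a simpler stand-in for the Roe solver and second-order reconstruction used in PIHM_Hydro; the grid, time step and dam-break initial condition are illustrative assumptions.

```python
import numpy as np

g = 9.81

def rusanov_flux(ul, ur):
    """Rusanov numerical flux for the 1D shallow water equations, state u = (h, hu)."""
    def physical_flux(u):
        h, hu = u
        return np.array([hu, hu ** 2 / h + 0.5 * g * h ** 2])
    def max_speed(u):
        h, hu = u
        return abs(hu / h) + np.sqrt(g * h)
    smax = max(max_speed(ul), max_speed(ur))
    return 0.5 * (physical_flux(ul) + physical_flux(ur)) - 0.5 * smax * (ur - ul)

def step(u, dx, dt):
    """One first-order finite volume update of the cell averages u[:, i] = (h, hu)."""
    flux = np.array([rusanov_flux(u[:, i], u[:, i + 1])
                     for i in range(u.shape[1] - 1)]).T
    unew = u.copy()
    unew[:, 1:-1] -= dt / dx * (flux[:, 1:] - flux[:, :-1])
    return unew

# Usage: a small dam-break on a 1D channel (depth jump at the midpoint)
nx, dx, dt = 200, 1.0, 0.05
u = np.zeros((2, nx))
u[0] = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
for _ in range(100):
    u = step(u, dx, dt)
```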

  9. Resistance to Internal Damage and Scaling of Concrete Air Entrained By Microspheres

    NASA Astrophysics Data System (ADS)

    Molendowska, Agnieszka; Wawrzenczyk, Jerzy

    2017-10-01

    This paper reports the test results of high strength concrete produced with slag cement and air entrained with polymer microspheres of three diameters. The study focused on determining the effects of the microsphere size and quantity on the air void structure and the resistance to internal cracking and scaling of the concrete. The resistance to internal cracking was determined in compliance with the requirements of the modified ASTM C666 A method on beam specimens. The scaling resistance in a 3% NaCl solution was determined using the slab test in accordance with PKN-CEN/TS 12390-9:2007. The air void structure parameters were determined to PN-EN 480-11:1998. The study results indicate that the use of microspheres is an effective air entrainment method providing very good air void structure parameters. The results show high freeze-thaw durability of polymer microsphere-based concrete in exposure class XF3. The scaling resistance test confirms that it is substantially more difficult to protect concrete against scaling in the presence of the 3% NaCl solution (exposure class XF4). Concrete scaling is a complex phenomenon controlled by a number of independent factors.

  10. An immersed boundary method for direct and large eddy simulation of stratified flows in complex geometry

    NASA Astrophysics Data System (ADS)

    Rapaka, Narsimha R.; Sarkar, Sutanu

    2016-10-01

    A sharp-interface Immersed Boundary Method (IBM) is developed to simulate density-stratified turbulent flows in complex geometry using a Cartesian grid. The basic numerical scheme corresponds to a central second-order finite difference method, third-order Runge-Kutta integration in time for the advective terms and an alternating direction implicit (ADI) scheme for the viscous and diffusive terms. The solver developed here allows for both direct numerical simulation (DNS) and large eddy simulation (LES) approaches. Methods to enhance the mass conservation and numerical stability of the solver to simulate high Reynolds number flows are discussed. Convergence with second-order accuracy is demonstrated in flow past a cylinder. The solver is validated against past laboratory and numerical results in flow past a sphere, and in channel flow with and without stratification. Since topographically generated internal waves are believed to result in a substantial fraction of turbulent mixing in the ocean, we are motivated to examine oscillating tidal flow over a triangular obstacle to assess the ability of this computational model to represent nonlinear internal waves and turbulence. Results in laboratory-scale (order of few meters) simulations show that the wave energy flux, mean flow properties and turbulent kinetic energy agree well with our previous results obtained using a body-fitted grid (BFG). The deviation of IBM results from BFG results is found to increase with increasing nonlinearity in the wave field that is associated with either increasing steepness of the topography relative to the internal wave propagation angle or with the amplitude of the oscillatory forcing. LES is performed on a large scale ridge, of the order of few kilometers in length, that has the same geometrical shape and same non-dimensional values for the governing flow and environmental parameters as the laboratory-scale topography, but significantly larger Reynolds number. A non-linear drag law is utilized in the large-scale application to parameterize turbulent losses due to bottom friction at high Reynolds number. The large scale problem exhibits qualitatively similar behavior to the laboratory scale problem with some differences: slightly larger intensification of the boundary flow and somewhat higher non-dimensional values for the energy fluxed away by the internal wave field. The phasing of wave breaking and turbulence exhibits little difference between small-scale and large-scale obstacles as long as the important non-dimensional parameters are kept the same. We conclude that IBM is a viable approach to the simulation of internal waves and turbulence in high Reynolds number stratified flows over topography.

  11. Numerical Modeling of Propellant Boil-Off in a Cryogenic Storage Tank

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Steadman, T. E.; Maroney, J. L.; Sass, J. P.; Fesmire, J. E.

    2007-01-01

    A numerical model to predict boil-off of stored propellant in large spherical cryogenic tanks has been developed. Accurate prediction of tank boil-off rates for different thermal insulation systems was the goal of this collaboration effort. The Generalized Fluid System Simulation Program, integrating flow analysis and conjugate heat transfer for solving complex fluid system problems, was used to create the model. Calculation of tank boil-off rate requires simultaneous simulation of heat transfer processes among liquid propellant, vapor ullage space, and tank structure. The reference tank for the boil-off model was the 850,000 gallon liquid hydrogen tank at Launch Complex 39B (LC-39B) at Kennedy Space Center, which is under study for future infrastructure improvements to support the Constellation program. The methodology employed in the numerical model was validated using a sub-scale model and tank. Experimental test data from a 1/15th scale version of the LC-39B tank using both liquid hydrogen and liquid nitrogen were used to anchor the analytical predictions of the sub-scale model. Favorable correlations between sub-scale model and experimental test data have provided confidence in full-scale tank boil-off predictions. These methods are now being used in the preliminary design for other cases, including future launch vehicles.
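
    The quantity being predicted ultimately comes down to an energy balance between the heat leaking into the tank and the latent heat of the stored cryogen. The back-of-envelope sketch below is only a first-order estimate under assumed property values and an assumed heat leak; it is not the GFSSP model, which additionally resolves the ullage, liquid, and tank-structure heat transfer.

    ```python
    # Rough boil-off estimate from an assumed steady heat leak (illustrative numbers only).
    heat_leak_W = 75e3          # assumed total heat leak into the tank [W]
    h_fg_J_per_kg = 446e3       # latent heat of vaporization of liquid hydrogen [J/kg], approx.
    rho_lh2_kg_per_m3 = 70.8    # liquid hydrogen density [kg/m^3], approx.

    boiloff_kg_per_day = heat_leak_W / h_fg_J_per_kg * 86400.0
    boiloff_m3_per_day = boiloff_kg_per_day / rho_lh2_kg_per_m3
    gallons_per_day = boiloff_m3_per_day * 264.172

    print(f"boil-off: {boiloff_kg_per_day:.0f} kg/day (~{gallons_per_day:.0f} gal/day of liquid)")
    ```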

  12. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
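
    In its simplest form, the rigidity density used for the filtration is a sum of decaying kernel functions centered on the atoms, with the kernel width setting the resolution. The sketch below evaluates such a density with a Gaussian kernel on a synthetic point set; the kernel choice, widths, and point set are assumptions for illustration, not the authors' exact construction.

    ```python
    import numpy as np

    def rigidity_density(atoms, grid, eta=1.0):
        """Sum of Gaussian kernels centered on atom positions (one flavor of a
        flexibility-rigidity-index density); eta controls the resolution."""
        diff = grid[:, None, :] - atoms[None, :, :]        # (n_grid, n_atoms, 3)
        d2 = np.sum(diff ** 2, axis=-1)
        return np.exp(-d2 / (2.0 * eta ** 2)).sum(axis=1)  # (n_grid,)

    rng = np.random.default_rng(0)
    atoms = rng.uniform(0, 10, size=(50, 3))               # synthetic "atoms"
    grid = rng.uniform(0, 10, size=(1000, 3))              # sample points of the density

    coarse = rigidity_density(atoms, grid, eta=3.0)        # large eta -> coarse scale
    fine = rigidity_density(atoms, grid, eta=0.5)          # small eta -> fine scale
    print(coarse.mean(), fine.mean())
    ```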

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei, E-mail: wei@math.msu.edu

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.

  14. Micro-CT Pore Scale Study Of Flow In Porous Media: Effect Of Voxel Resolution

    NASA Astrophysics Data System (ADS)

    Shah, S.; Gray, F.; Crawshaw, J.; Boek, E.

    2014-12-01

    In the last few years, pore scale studies have become the key to understanding the complex fluid flow processes in the fields of groundwater remediation, hydrocarbon recovery and environmental issues related to carbon storage and capture. A pore scale study is often comprised of two key procedures: 3D pore scale imaging and numerical modelling techniques. The essence of a pore scale study is to test the physics implemented in a model of complicated fluid flow processes at one scale (microscopic) and then apply the model to solve the problems associated with water resources and oil recovery at other scales (macroscopic and field). However, the process of up-scaling from the pore scale to the macroscopic scale has encountered many challenges due to both pore scale imaging and modelling techniques. Due to technical limitations in the imaging method, there is always a compromise between the spatial (voxel) resolution and the physical volume of the sample (field of view, FOV) to be scanned by the imaging methods, specifically X-ray micro-CT (XMT) in our case. In this study, a careful analysis was done to understand the effect of voxel size, using XMT to image the 3D pore space of a variety of porous media from sandstones to carbonates scanned at different voxel resolutions (4.5 μm, 6.2 μm, 8.3 μm and 10.2 μm) while keeping the scanned FOV constant for all the samples. We systematically segment the micro-CT images into three phases, the macro-pore phase, an intermediate phase (unresolved micro-pores + grains) and the grain phase, and then study the effect of voxel size on the structure of the macro-pore and the intermediate phases and on the fluid flow properties using lattice-Boltzmann (LB) and pore network (PN) modelling methods. We have also applied a numerical coarsening algorithm (up-scaling method) to reduce the computational power and time required to accurately predict the flow properties using the LB and PN methods.
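
    The three-phase segmentation step described above can be sketched as a simple two-threshold labeling of the grayscale volume. The snippet below uses a synthetic volume and assumed threshold values; in practice the thresholds would be chosen from the image histogram or by a dedicated segmentation algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    volume = rng.normal(0.5, 0.2, size=(64, 64, 64)).clip(0, 1)  # synthetic grayscale volume

    t_pore, t_grain = 0.35, 0.65      # assumed thresholds; normally picked from the histogram
    labels = np.full(volume.shape, 1, dtype=np.uint8)   # 1 = intermediate phase
    labels[volume < t_pore] = 0                          # 0 = macro-pore phase
    labels[volume >= t_grain] = 2                        # 2 = grain phase

    macro_porosity = (labels == 0).mean()
    intermediate_fraction = (labels == 1).mean()
    print(f"macro-porosity {macro_porosity:.3f}, intermediate fraction {intermediate_fraction:.3f}")
    ```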

  15. Factorized Runge-Kutta-Chebyshev Methods

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Stephen

    2017-05-01

    The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures. Preserving 7 digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
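
    The implementation idea stated in the abstract is that an explicit stability polynomial R(z) with R(0) = 1 factorizes as a product of forward Euler factors, one per root of R, each with a complex stepsize. The sketch below demonstrates this on the scalar test equation y' = λy using a generic cubic polynomial (the degree-3 Taylor polynomial of exp) as the target; the actual FRKC schemes use shifted Chebyshev-type polynomials, so the coefficients here are only an illustration of the factorization mechanism.

    ```python
    import numpy as np

    # Target stability polynomial R(z): here the degree-3 Taylor polynomial of exp(z).
    # FRKC schemes use Chebyshev-type polynomials instead; this only illustrates the idea.
    coeffs = [1/6, 1/2, 1, 1]          # R(z) = z^3/6 + z^2/2 + z + 1 (numpy ordering)
    roots = np.roots(coeffs)           # complex roots z_k of R

    lam, h = -2.0 + 1.0j, 0.1          # test equation y' = lam*y, macro time step h
    y = 1.0 + 0.0j
    for r in roots:                    # ordered sequence of forward Euler sub-steps
        tau = -h / r                   # complex stepsize: (1 + tau*lam) = (1 - h*lam/r)
        y = y + tau * lam * y

    # The product of the Euler factors reproduces R(h*lam) exactly.
    print("factored Euler:", y)
    print("R(h*lam)      :", np.polyval(coeffs, h * lam))
    ```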

  16. RRW: repeated random walks on genome-scale protein networks for local cluster discovery

    PubMed Central

    Macropol, Kathy; Can, Tolga; Singh, Ambuj K

    2009-01-01

    Background: We propose an efficient and biologically sensitive algorithm based on repeated random walks (RRW) for discovering functional modules, e.g., complexes and pathways, within large-scale protein networks. Compared to existing cluster identification techniques, RRW implicitly makes use of network topology, edge weights, and long range interactions between proteins. Results: We apply the proposed technique on a functional network of yeast genes and accurately identify statistically significant clusters of proteins. We validate the biological significance of the results using known complexes in the MIPS complex catalogue database and well-characterized biological processes. We find that 90% of the created clusters have the majority of their catalogued proteins belonging to the same MIPS complex, and about 80% have the majority of their proteins involved in the same biological process. We compare our method to various other clustering techniques, such as the Markov Clustering Algorithm (MCL), and find a significant improvement in the RRW clusters' precision and accuracy values. Conclusion: RRW, which is a technique that exploits the topology of the network, is more precise and robust in finding local clusters. In addition, it has the added flexibility of being able to find multi-functional proteins by allowing overlapping clusters. PMID:19740439
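
    The core operation behind such cluster expansion is a random walk with restart from a seed node, whose stationary visiting probabilities rank the other nodes by affinity to the seed. The sketch below shows that iteration on a toy weighted graph; the restart probability and the toy network are assumptions, and the full RRW pipeline adds repeated walks, cluster scoring, and significance tests.

    ```python
    import numpy as np

    def random_walk_with_restart(W, seed, restart=0.3, tol=1e-10):
        """Stationary visiting probabilities of a walk that restarts at `seed`.
        W is a symmetric weighted adjacency matrix."""
        P = W / W.sum(axis=0, keepdims=True)      # column-stochastic transition matrix
        e = np.zeros(W.shape[0]); e[seed] = 1.0
        p = e.copy()
        while True:
            p_next = (1 - restart) * P @ p + restart * e
            if np.abs(p_next - p).sum() < tol:
                return p_next
            p = p_next

    # Toy network: two triangles (nodes 0-2 and 3-5) joined by a single edge.
    W = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        W[i, j] = W[j, i] = 1.0

    p = random_walk_with_restart(W, seed=0)
    print("affinity to seed 0:", np.round(p, 3))   # nodes 0-2 score higher than 3-5
    ```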

  17. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  18. Multiscale approach to pest insect monitoring: random walks, pattern formation, synchronization, and networks.

    PubMed

    Petrovskii, Sergei; Petrovskaya, Natalia; Bearup, Daniel

    2014-09-01

    Pest insects pose a significant threat to food production worldwide resulting in annual losses worth hundreds of billions of dollars. Pest control attempts to prevent pest outbreaks that could otherwise destroy a sward. It is good practice in integrated pest management to recommend control actions (usually pesticides application) only when the pest density exceeds a certain threshold. Accurate estimation of pest population density in ecosystems, especially in agro-ecosystems, is therefore very important, and this is the overall goal of the pest insect monitoring. However, this is a complex and challenging task; providing accurate information about pest abundance is hardly possible without taking into account the complexity of ecosystems' dynamics, in particular, the existence of multiple scales. In the case of pest insects, monitoring has three different spatial scales, each of them having their own scale-specific goal and their own approaches to data collection and interpretation. In this paper, we review recent progress in mathematical models and methods applied at each of these scales and show how it helps to improve the accuracy and robustness of pest population density estimation. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. The Multi-Scale Network Landscape of Collaboration.

    PubMed

    Bae, Arram; Park, Doheum; Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena--which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network using the network of western classical musicians constructed from a large-scale, comprehensive Compact Disc recordings dataset. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales ranging from the macroscopic to the mesoscopic and microscopic, which represent the diversity of cultural styles and the individuality of the artists.

  20. Development of the Communication Complexity Scale

    PubMed Central

    Brady, Nancy C.; Fleming, Kandace; Thiemann-Bourque, Kathy; Olswang, Lesley; Dowden, Patricia; Saunders, Muriel D.

    2011-01-01

    Accurate description of an individual's communication status is critical in both research and practice. Describing the communication status of individuals with severe intellectual and developmental disabilities is difficult because these individuals often communicate with presymbolic means that may not be readily recognized. Our goal was to design a communication scale and summary score for interpretation that could be applied across populations of children and adults with limited (often presymbolic) communication forms. Methods: The Communication Complexity Scale (CCS) was developed by a team of researchers and tested with 178 participants with varying levels of presymbolic and early symbolic communication skills. Correlations between standardized and informant measures were completed, and expert opinions were obtained regarding the CCS. Results: CCS scores were within expected ranges for the populations studied and inter-rater reliability was high. Comparison across other measures indicated significant correlations with standardized tests of language. Scores on informant report measures tended to place children at higher levels of communication. Expert opinions generally favored the development of the CCS. Clinical implications: The scale appears to be useful for describing a given individual's level of presymbolic or early symbolic communication. Further research is needed to determine if it is sensitive to developmental growth in communication. PMID:22049404

  1. The Multi-Scale Network Landscape of Collaboration

    PubMed Central

    Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale high-quality data, advanced data modeling and analysis techniques are enabling novel and significant scientific understanding of a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena—which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network using the network of western classical musicians constructed from a large-scale, comprehensive Compact Disc recordings dataset. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales ranging from the macroscopic to the mesoscopic and microscopic, which represent the diversity of cultural styles and the individuality of the artists. PMID:26990088

  2. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  3. A simple multi-scale Gaussian smoothing-based strategy for automatic chromatographic peak extraction.

    PubMed

    Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng

    2016-06-24

    Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consisted of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving window strategy. The new peak detection method is a variant of the system used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maximum values under various smoothing window scales. Therefore, peaks can be detected through the ridge lines of maximum values under these window scales, and signals that are monotonously increased/decreased around the peak position could be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets. These datasets include essential oil samples for quality control obtained from gas chromatography and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. Results confirmed the reasonability of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
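
    The workflow described above (smooth at several window scales, keep local maxima that persist across scales, estimate the instrumental noise, and discard peaks with signal-to-noise ratio below 3) can be sketched as follows. The scales, the persistence rule, and the noise estimate are simplified assumptions, not the authors' exact implementation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.signal import argrelmax

    rng = np.random.default_rng(0)
    t = np.arange(2000)
    signal = (800 * np.exp(-0.5 * ((t - 600) / 8) ** 2)       # two chromatographic-like peaks
              + 400 * np.exp(-0.5 * ((t - 1300) / 15) ** 2)
              + rng.normal(0, 20, t.size))

    scales = [2, 4, 8, 16]                                     # smoothing window scales (assumed)
    maxima_per_scale = [set(argrelmax(gaussian_filter1d(signal, s))[0]) for s in scales]

    # Keep a candidate if a maximum appears within +/-3 points at every scale (a simple "ridge").
    candidates = []
    for i in maxima_per_scale[0]:
        if all(any(abs(i - j) <= 3 for j in m) for m in maxima_per_scale[1:]):
            candidates.append(i)

    noise = np.std(signal - gaussian_filter1d(signal, 4))      # crude instrumental-noise estimate
    peaks = [i for i in candidates if signal[i] / noise >= 3]  # S/N filtration
    print("detected peak positions:", sorted(peaks))
    ```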

  4. Agent-Based Modeling in Molecular Systems Biology.

    PubMed

    Soheilypour, Mohammad; Mofrad, Mohammad R K

    2018-07-01

    Molecular systems orchestrating the biology of the cell typically involve a complex web of interactions among various components and span a vast range of spatial and temporal scales. Computational methods have advanced our understanding of the behavior of molecular systems by enabling us to test assumptions and hypotheses, explore the effect of different parameters on the outcome, and eventually guide experiments. While several different mathematical and computational methods are developed to study molecular systems at different spatiotemporal scales, there is still a need for methods that bridge the gap between spatially-detailed and computationally-efficient approaches. In this review, we summarize the capabilities of agent-based modeling (ABM) as an emerging molecular systems biology technique that provides researchers with a new tool in exploring the dynamics of molecular systems/pathways in health and disease. © 2018 WILEY Periodicals, Inc.

  5. Disentangling Random Motion and Flow in a Complex Medium

    PubMed Central

    Koslover, Elena F.; Chan, Caleb K.; Theriot, Julie A.

    2016-01-01

    We describe a technique for deconvolving the stochastic motion of particles from large-scale fluid flow in a dynamic environment such as that found in living cells. The method leverages the separation of timescales to subtract out the persistent component of motion from single-particle trajectories. The mean-squared displacement of the resulting trajectories is rescaled so as to enable robust extraction of the diffusion coefficient and subdiffusive scaling exponent of the stochastic motion. We demonstrate the applicability of the method for characterizing both diffusive and fractional Brownian motion overlaid by flow and analytically calculate the accuracy of the method in different parameter regimes. This technique is employed to analyze the motion of lysosomes in motile neutrophil-like cells, showing that the cytoplasm of these cells behaves as a viscous fluid at the timescales examined. PMID:26840734
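
    The core of the technique is to subtract the slowly varying (flow) component from each trajectory and then fit the mean-squared displacement of the residual to a power law in lag time. A minimal sketch on a synthetic 2D trajectory combining uniform drift and Brownian motion is shown below; the moving-average window used to estimate the persistent component and the fit range are assumptions, not the authors' exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, dt, D, v = 2000, 0.05, 0.1, 0.5
    steps = rng.normal(0, np.sqrt(2 * D * dt), size=(n, 2))           # Brownian steps (2D)
    traj = np.cumsum(steps, axis=0) + v * dt * np.arange(n)[:, None]  # add uniform flow

    # Estimate the persistent (flow) component with a long moving average and subtract it.
    win = 200                                                  # assumed smoothing window
    kernel = np.ones(win) / win
    flow = np.column_stack([np.convolve(traj[:, k], kernel, mode="same") for k in range(2)])
    residual = traj - flow

    def msd(x, max_lag=200):
        lags = np.arange(1, max_lag)
        return lags, np.array([np.mean(np.sum((x[l:] - x[:-l]) ** 2, axis=1)) for l in lags])

    lags, m = msd(residual)
    alpha, logk = np.polyfit(np.log(lags[:50] * dt), np.log(m[:50]), 1)
    # For 2D diffusion MSD = 4*D*t, so D is roughly exp(logk)/4 when alpha is near 1.
    print(f"scaling exponent alpha ~ {alpha:.2f}, D ~ {np.exp(logk) / 4:.3f}")
    ```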

  6. Multifractal analysis of mobile social networks

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhang, Zifeng; Deng, Yufan

    2017-09-01

    As Wireless Fidelity (Wi-Fi)-enabled handheld devices have become widely used, mobile social networks (MSNs) have been attracting extensive attention. Fractal approaches have also been widely applied to characterize natural networks, serving as useful tools to depict their spatial distribution and scaling properties. Moreover, when the complexity of the spatial distribution of MSNs cannot be properly characterized by a single fractal dimension, multifractal analysis is required. For further research, we introduce a multifractal analysis method based on the box-covering algorithm to describe the structure of MSNs. Using this method, we find that the networks are multifractal at different time intervals. The simulation results demonstrate that the proposed method is efficient for analyzing the multifractal characteristics of MSNs, providing a distribution of singularities that adequately describes both the heterogeneity of fractal patterns and the statistics of measurements across spatial scales in MSNs.
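
    Box-covering multifractal analysis estimates the generalized dimensions D_q from how the box occupation probabilities p_i scale with box size, via the partition function Z_q(ε) = Σ p_i^q ~ ε^τ(q) and D_q = τ(q)/(q−1) for q ≠ 1. The sketch below applies this to a synthetic 2D point set; the box sizes, q values, and point set are assumptions, not the MSN data or the authors' exact box-covering scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # Synthetic "node positions": a dense cluster plus a sparse uniform background.
    pts = np.vstack([rng.normal(0.3, 0.05, size=(2000, 2)), rng.uniform(0, 1, size=(500, 2))])

    def generalized_dimensions(points, qs, box_sizes):
        dims = []
        for q in qs:
            log_eps, log_z = [], []
            for eps in box_sizes:
                idx = np.floor(points / eps).astype(int)
                _, counts = np.unique(idx, axis=0, return_counts=True)
                p = counts / counts.sum()
                log_eps.append(np.log(eps))
                log_z.append(np.log(np.sum(p ** q)))       # partition function Z_q(eps)
            tau = np.polyfit(log_eps, log_z, 1)[0]          # tau(q): slope of log Z_q vs log eps
            dims.append(tau / (q - 1.0))                    # D_q = tau(q) / (q - 1), q != 1
        return dims

    qs = [0.5, 2.0, 3.0]
    print(dict(zip(qs, np.round(generalized_dimensions(pts, qs, [0.02, 0.05, 0.1, 0.2]), 2))))
    ```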

  7. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  8. Influence of muscle-tendon complex geometrical parameters on modeling passive stretch behavior with the Discrete Element Method.

    PubMed

    Roux, A; Laporte, S; Lecompte, J; Gras, L-L; Iordanoff, I

    2016-01-25

    The muscle-tendon complex (MTC) is a multi-scale, anisotropic, non-homogeneous structure. It is composed of fascicles, gathered together in a conjunctive aponeurosis. Fibers are oriented into the MTC with a pennation angle. Many MTC models use the Finite Element Method (FEM) to simulate the behavior of the MTC as a hyper-viscoelastic material. The Discrete Element Method (DEM) could be adapted to model fibrous materials, such as the MTC. DEM could capture the complex behavior of a material with a simple discretization scheme and help in understanding the influence of the orientation of fibers on the MTC's behavior. The aims of this study were to model the MTC in DEM at the macroscopic scale and to obtain the force/displacement curve during a non-destructive passive tensile test. Another aim was to highlight the influence of the geometrical parameters of the MTC on the global mechanical behavior. A geometrical construction of the MTC was done using discrete elements linked by springs. Young's modulus values of the MTC's components were retrieved from the literature to model the microscopic stiffness of each spring. Alignment and re-orientation of all of the muscle's fibers with the tensile axis were observed numerically. The hyper-elastic behavior of the MTC was pointed out. The structure's effects, added to the geometrical parameters, highlight the MTC's mechanical behavior. It is also highlighted by the heterogeneity of the strain of the MTC's components. DEM seems to be a promising method to model the hyper-elastic macroscopic behavior of the MTC with simple elastic microscopic elements. Copyright © 2015 Elsevier Ltd. All rights reserved.
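
    The basic ingredient, discrete elements linked by springs with stiffnesses derived from component moduli, can be sketched with a quasi-static 1D chain under an imposed end displacement. This is not the authors' 3D pennated model; the stiffness values, chain layout, and loading are placeholders chosen only to illustrate the assembly and solution of a spring network.

    ```python
    import numpy as np

    # Chain of discrete elements linked by linear springs: soft "muscle-like" segments in
    # series with stiff "tendon-like" segments. Stiffness values are placeholders.
    k = np.array([5.0, 5.0, 5.0, 50.0, 50.0])      # spring stiffnesses [N/mm]
    n_nodes = k.size + 1
    stretch = 2.0                                   # imposed end displacement [mm]

    # Assemble the global stiffness matrix; node 0 is fixed, the last node is displaced.
    K = np.zeros((n_nodes, n_nodes))
    for i, ki in enumerate(k):
        K[i:i + 2, i:i + 2] += ki * np.array([[1, -1], [-1, 1]])

    free = np.arange(1, n_nodes - 1)
    u = np.zeros(n_nodes)
    u[-1] = stretch
    rhs = -K[np.ix_(free, [0, n_nodes - 1])] @ u[[0, n_nodes - 1]]
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)

    force = k[-1] * (u[-1] - u[-2])                 # reaction force at the pulled end
    k_series = 1.0 / np.sum(1.0 / k)                # analytic series stiffness for comparison
    print(f"force {force:.3f} N vs series-stiffness prediction {k_series * stretch:.3f} N")
    ```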

  9. Method for Hot Real-Time Analysis of Pyrolysis Vapors at Pilot Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pomeroy, Marc D

    Pyrolysis oils contain more than 400 compounds, up to 60% of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated as additional condensation reactions occur during quenching and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors at the pilot scale, many challenges must be overcome.

  10. Identification of Phosphorylated Proteins on a Global Scale.

    PubMed

    Iliuk, Anton

    2018-05-31

    Liquid chromatography (LC) coupled with tandem mass spectrometry (MS/MS) has enabled researchers to analyze complex biological samples with unprecedented depth. It facilitates the identification and quantification of modifications within thousands of proteins in a single large-scale proteomic experiment. Analysis of phosphorylation, one of the most common and important post-translational modifications, has particularly benefited from such progress in the field. Here, detailed protocols are provided for a few well-regarded, common sample preparation methods for an effective phosphoproteomic experiment. © 2018 by John Wiley & Sons, Inc. Copyright © 2018 John Wiley & Sons, Inc.

  11. Printing of metallic 3D micro-objects by laser induced forward transfer.

    PubMed

    Zenou, Michael; Kotler, Zvi

    2016-01-25

    Digital printing of 3D metal micro-structures by laser induced forward transfer under ambient conditions is reviewed. Recent progress has allowed drop on demand transfer of molten, femto-liter, metal droplets with a high jetting directionality. Such small volume droplets solidify instantly, on a nanosecond time scale, as they touch the substrate. This fast solidification limits their lateral spreading and allows the fabrication of high aspect ratio and complex 3D metal structures. Several examples of micron-scale resolution metal objects printed using this method are presented and discussed.

  12. What Is a Complex Innovation System?

    PubMed Central

    Katz, J. Sylvan

    2016-01-01

    Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have properties of a complex adaptive system, in particular scale-invariant emergent properties indicative of their complex nature that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of sizes and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time and between field impact and sizes at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex with scale-invariant properties too. PMID:27258040
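
    For heavy-tailed data such as citation counts, the exponent α of f(x) = kx^α is commonly estimated with the continuous maximum-likelihood estimator of Clauset, Shalizi and Newman rather than by fitting a line to a log-log histogram. The sketch below demonstrates that estimator on synthetic power-law samples with a known, fixed x_min; real analyses would also estimate x_min and test goodness of fit.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    alpha_true, x_min, n = 2.7, 1.0, 50_000
    # Sample from a continuous power law p(x) ~ x^(-alpha) for x >= x_min (inverse CDF).
    x = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

    # Maximum-likelihood exponent estimate (continuous case, x_min assumed known here).
    alpha_hat = 1.0 + n / np.sum(np.log(x / x_min))
    sigma = (alpha_hat - 1.0) / np.sqrt(n)          # standard error of the estimate
    print(f"alpha_hat = {alpha_hat:.3f} +/- {sigma:.3f} (true {alpha_true})")
    ```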

  13. Wave Propagation in Non-Stationary Statistical Mantle Models at the Global Scale

    NASA Astrophysics Data System (ADS)

    Meschede, M.; Romanowicz, B. A.

    2014-12-01

    We study the effect of statistically distributed heterogeneities that are smaller than the resolution of current tomographic models on seismic waves that propagate through the Earth's mantle at teleseismic distances. Current global tomographic models are missing small-scale structure as evidenced by the failure of even accurate numerical synthetics to explain enhanced coda in observed body and surface waveforms. One way to characterize small scale heterogeneity is to construct random models and confront observed coda waveforms with predictions from these models. Statistical studies of the coda typically rely on models with simplified isotropic and stationary correlation functions in Cartesian geometries. We show how to construct more complex random models for the mantle that can account for arbitrary non-stationary and anisotropic correlation functions as well as for complex geometries. Although this method is computationally heavy, model characteristics such as translational, cylindrical or spherical symmetries can be used to greatly reduce the complexity such that this method becomes practical. With this approach, we can create 3D models of the full spherical Earth that can be radially anisotropic, i.e. with different horizontal and radial correlation functions, and radially non-stationary, i.e. with radially varying model power and correlation functions. Both of these features are crucial for a statistical description of the mantle in which structure depends to first order on the spherical geometry of the Earth. We combine different random model realizations of S velocity with current global tomographic models that are robust at long wavelengths (e.g. Meschede and Romanowicz, 2014, GJI submitted), and compute the effects of these hybrid models on the wavefield with a spectral element code (SPECFEM3D_GLOBE). We finally analyze the resulting coda waves for our model selection and compare our computations with observations. Based on these observations, we make predictions about the strength of unresolved small-scale structure and extrinsic attenuation.

  14. Sleep spindle and K-complex detection using tunable Q-factor wavelet transform and morphological component analysis

    PubMed Central

    Lajnef, Tarek; Chaibi, Sahbi; Eichenlaub, Jean-Baptiste; Ruby, Perrine M.; Aguera, Pierre-Emmanuel; Samet, Mounir; Kachouri, Abdennaceur; Jerbi, Karim

    2015-01-01

    A novel framework for joint detection of sleep spindles and K-complex events, two hallmarks of sleep stage S2, is proposed. Sleep electroencephalography (EEG) signals are split into oscillatory (spindles) and transient (K-complex) components. This decomposition is conveniently achieved by applying morphological component analysis (MCA) to a sparse representation of EEG segments obtained by the recently introduced discrete tunable Q-factor wavelet transform (TQWT). Tuning the Q-factor provides a convenient and elegant tool to naturally decompose the signal into an oscillatory and a transient component. The actual detection step relies on thresholding (i) the transient component to reveal K-complexes and (ii) the time-frequency representation of the oscillatory component to identify sleep spindles. Optimal thresholds are derived from ROC-like curves (sensitivity vs. FDR) on training sets and the performance of the method is assessed on test data sets. We assessed the performance of our method using full-night sleep EEG data we collected from 14 participants. In comparison to visual scoring (Expert 1), the proposed method detected spindles with a sensitivity of 83.18% and false discovery rate (FDR) of 39%, while K-complexes were detected with a sensitivity of 81.57% and an FDR of 29.54%. Similar performances were obtained when using a second expert as benchmark. In addition, when the TQWT and MCA steps were excluded from the pipeline the detection sensitivities dropped down to 70% for spindles and to 76.97% for K-complexes, while the FDR rose up to 43.62 and 49.09%, respectively. Finally, we also evaluated the performance of the proposed method on a set of publicly available sleep EEG recordings. Overall, the results we obtained suggest that the TQWT-MCA method may be a valuable alternative to existing spindle and K-complex detection methods. Paths for improvements and further validations with large-scale standard open-access benchmarking data sets are discussed. PMID:26283943

  15. Drug- and Herb-Induced Liver Injury in Clinical and Translational Hepatology: Causality Assessment Methods, Quo Vadis?

    PubMed Central

    Eickhoff, Axel; Schulze, Johannes

    2013-01-01

    Drug-induced liver injury (DILI) and herb-induced liver injury (HILI) are typical diseases of clinical and translational hepatology. Their diagnosis is complex and requires an experienced clinician to translate basic science into clinical judgment and identify a valid causality algorithm. To prospectively assess causality starting on the day DILI or HILI is suspected, the best approach for physicians is to use the Council for International Organizations of Medical Sciences (CIOMS) scale in its original or preferably its updated version. The CIOMS scale is validated, liver-specific, structured, and quantitative, providing final causality grades based on scores of specific items for individual patients. These items include latency period, decline in liver values after treatment cessation, risk factors, co-medication, alternative diagnoses, hepatotoxicity track record of the suspected product, and unintentional re-exposure. Provided causality is established as probable or highly probable, data of the CIOMS scale with all individual items, a short clinical report, and complete raw data should be transmitted to the regulatory agencies, manufacturers, expert panels, and possibly to the scientific community for further refinement of the causality evaluation in a setting of retrospective expert opinion. Good-quality case data combined with thorough CIOMS-based assessment as a standardized approach should avert subsequent necessity for other complex causality assessment methods that may have inter-rater problems because of poor-quality data. In the future, the CIOMS scale will continue to be the preferred tool to assess causality of DILI and HILI cases and should be used consistently, both prospectively by physicians, and retrospectively for subsequent expert opinion if needed. For comparability and international harmonization, all parties assessing causality in DILI and HILI cases should attempt this standardized approach using the updated CIOMS scale. PMID:26357608

  16. Comparative effectiveness of a complex Ayurvedic treatment and conventional standard care in osteoarthritis of the knee--study protocol for a randomized controlled trial.

    PubMed

    Witt, Claudia M; Michalsen, Andreas; Roll, Stephanie; Morandi, Antonio; Gupta, Shivnarain; Rosenberg, Mark; Kronpass, Ludwig; Stapelfeldt, Elmar; Hissar, Syed; Müller, Matthias; Kessler, Christian

    2013-05-23

    Traditional Indian Ayurvedic medicine uses complex treatment approaches, including manual therapies, lifestyle and nutritional advice, dietary supplements, medication, yoga, and purification techniques. Ayurvedic strategies are often used to treat osteoarthritis (OA) of the knee; however, no systematic data are available on their effectiveness in comparison with standard care. The aim of this study is to evaluate the effectiveness of complex Ayurvedic treatment in comparison with conventional methods of treating OA symptoms in patients with knee osteoarthritis. In a prospective, multicenter, randomized controlled trial, 150 patients between 40 and 70 years, diagnosed with osteoarthritis of the knee, following American College of Rheumatology criteria and an average pain intensity of ≥40 mm on a 100 mm visual analog scale in the affected knee at baseline will be randomized into two groups. In the Ayurveda group, treatment will include tailored combinations of manual treatments, massages, dietary and lifestyle advice, consideration of selected foods, nutritional supplements, yoga posture advice, and knee massage. Patients in the conventional group will receive self-care advice, pain medication, weight-loss advice (if overweight), and physiotherapy following current international guidelines. Both groups will receive 15 treatment sessions over 12 weeks. Outcomes will be evaluated after 6 and 12 weeks and 6 and 12 months. The primary endpoint is a change in the score on the Western Ontario and McMaster University Osteoarthritis Index (WOMAC) after 12 weeks. Secondary outcome measurements will use WOMAC subscales, a pain disability index, a visual analog scale for pain and sleep quality, a pain experience scale, a quality-of-life index, a profile of mood states, and Likert scales for patient satisfaction, patient diaries, and safety. Using an adapted PRECIS scale, the trial was identified as lying mainly in the middle of the efficacy-effectiveness continuum. This trial is the first to compare the effectiveness of a complex Ayurvedic intervention with a complex conventional intervention in a Western medical setting in patients with knee osteoarthritis. During the trial design, aspects of efficacy and effectiveness were discussed. The resulting design is a compromise between rigor and pragmatism. NCT01225133.

  17. Accurate detection of hierarchical communities in complex networks based on nonlinear dynamical evolution

    NASA Astrophysics Data System (ADS)

    Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng

    2018-04-01

    One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection in complex networks.
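
    The principle of reading communities off synchronization clusters can be sketched with Kuramoto oscillators on a toy two-community graph. In this sketch the natural frequencies are deliberately offset between the two groups so that the clusters are easy to see at intermediate coupling; the toy network, coupling strength, and frequency assignment are all assumptions and do not reproduce the authors' general construction, which works for broad classes of nodal dynamics and coupling schemes.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy network: two dense 10-node communities joined by a few cross links (assumed).
    n = 20
    A = np.zeros((n, n))
    for block in (range(0, 10), range(10, 20)):
        for i in block:
            for j in block:
                if i < j and rng.random() < 0.8:
                    A[i, j] = A[j, i] = 1.0
    for i, j in [(2, 12), (5, 17), (8, 11)]:
        A[i, j] = A[j, i] = 1.0
    deg = A.sum(axis=1)

    # Kuramoto oscillators; frequencies offset between the groups to make clusters obvious.
    omega = np.concatenate([rng.normal(0.0, 0.1, 10), rng.normal(1.0, 0.1, 10)])
    K, dt, steps = 2.0, 0.01, 20000
    theta = rng.uniform(0, 2 * np.pi, n)

    r = lambda idx: np.abs(np.exp(1j * theta[idx]).mean())   # phase-coherence order parameter
    acc, count = np.zeros(3), 0
    for step in range(steps):                                # forward Euler integration
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / deg
        theta = theta + dt * (omega + K * coupling)
        if step >= steps - 5000:                             # time-average after the transient
            acc += [r(slice(0, 10)), r(slice(10, 20)), r(slice(0, n))]
            count += 1
    within1, within2, global_r = acc / count
    print(f"coherence within community 1: {within1:.2f}, community 2: {within2:.2f}, "
          f"global: {global_r:.2f}")   # communities lock internally; global coherence lags
    ```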

  18. Scale effects on information theory-based measures applied to streamflow patterns in two rural watersheds

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Pachepsky, Yakov A.; Guber, Andrey K.; McPherson, Brian J.; Hill, Robert L.

    2012-01-01

    Understanding streamflow patterns in space and time is important for improving flood and drought forecasting, water resources management, and predictions of ecological changes. Objectives of this work include (a) to characterize the spatial and temporal patterns of streamflow using information theory-based measures at two thoroughly-monitored agricultural watersheds located in different hydroclimatic zones with similar land use, and (b) to elucidate and quantify temporal and spatial scale effects on those measures. We selected two USDA experimental watersheds to serve as case study examples, including the Little River experimental watershed (LREW) in Tifton, Georgia and the Sleepers River experimental watershed (SREW) in North Danville, Vermont. Both watersheds possess several nested sub-watersheds and more than 30 years of continuous data records of precipitation and streamflow. Information content measures (metric entropy and mean information gain) and complexity measures (effective measure complexity and fluctuation complexity) were computed based on the binary encoding of 5-year streamflow and precipitation time series data. We quantified patterns of streamflow using probabilities of joint or sequential appearances of the binary symbol sequences. Results of our analysis illustrate that information content measures of streamflow time series are much smaller than those for precipitation data, and the streamflow data also exhibit higher complexity, suggesting that the watersheds effectively act as filters of the precipitation information, which leads to the observed additional complexity in streamflow measures. Correlation coefficients between the information-theory-based measures and time intervals are close to 0.9, demonstrating the significance of temporal scale effects on streamflow patterns. Moderate spatial scale effects on streamflow patterns are observed, with absolute values of correlation coefficients between the measures and sub-watershed area varying from 0.2 to 0.6 in the two watersheds. We conclude that temporal effects must be evaluated and accounted for when information theory-based methods are used for performance evaluation and comparison of hydrological models.
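
    The information content measures mentioned above are computed from a binary encoding of the time series by tallying the statistics of short symbol words. The sketch below computes a metric entropy (word entropy per symbol) and a mean information gain (the entropy added by observing one more symbol) for a synthetic daily flow series binarized about its median; the word length, encoding rule, and synthetic data are assumptions, not the study's exact definitions or records.

    ```python
    import numpy as np
    from collections import Counter

    def word_entropy(symbols, L):
        """Shannon entropy (bits) of overlapping words of length L in a symbol sequence."""
        words = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
        counts = np.array(list(Counter(words).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(6)
    # Synthetic daily "streamflow": seasonal cycle plus noise, binarized about its median.
    t = np.arange(5 * 365)
    flow = 10 + 5 * np.sin(2 * np.pi * t / 365) + rng.gamma(2.0, 1.0, t.size)
    symbols = (flow > np.median(flow)).astype(int)

    L = 5                                   # word length (assumed)
    H_L, H_Lm1 = word_entropy(symbols, L), word_entropy(symbols, L - 1)
    metric_entropy = H_L / L                # information content per symbol
    mean_information_gain = H_L - H_Lm1     # average new information from one more symbol
    print(f"metric entropy {metric_entropy:.3f} bits/symbol, "
          f"mean information gain {mean_information_gain:.3f} bits")
    ```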

  19. Few-Body Techniques Using Coordinate Space for Bound and Continuum States

    NASA Astrophysics Data System (ADS)

    Garrido, E.

    2018-05-01

    These notes are a short summary of a set of lectures given within the frame of the "Critical Stability of Quantum Few-Body Systems" International School held in the Max Planck Institute for the Physics of Complex Systems (Dresden). The main goal of the lectures has been to provide the basic ingredients for the description of few-body systems in coordinate space. The hyperspherical harmonic and the adiabatic expansion methods are introduced in detail, and subsequently used to describe bound and continuum states. The expressions for the cross sections and reaction rates for three-body processes are derived. The case of resonant scattering and the complex scaling method as a tool to obtain the resonance energy and width is also introduced.
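
    As a concrete illustration of the complex scaling method mentioned at the end of the record, the sketch below diagonalizes the complex-scaled radial Hamiltonian H(θ) = -½ e^(-2iθ) d²/dr² + V(r e^(iθ)) on a finite-difference grid (atomic units, s-wave) for the model potential V(r) = 7.5 r² e^(-r), which is often used as a benchmark for shape resonances. A resonance shows up as a complex eigenvalue E_r - iΓ/2 that is nearly stationary as θ varies, while the rotated continuum eigenvalues swing with 2θ. The grid parameters, θ values, and the stability heuristic below are assumptions; convergence should be checked in any real calculation.

    ```python
    import numpy as np

    def complex_scaled_eigs(theta, V, R=40.0, N=800):
        """Eigenvalues of H(theta) = -0.5 e^{-2i theta} d2/dr2 + V(r e^{i theta})
        on (0, R) with Dirichlet ends, second-order finite differences."""
        dr = R / (N + 1)
        r = dr * np.arange(1, N + 1)
        lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
               + np.diag(np.ones(N - 1), -1)) / dr ** 2
        H = -0.5 * np.exp(-2j * theta) * lap + np.diag(V(r * np.exp(1j * theta)))
        return np.linalg.eigvals(H)

    V = lambda z: 7.5 * z ** 2 * np.exp(-z)        # benchmark potential with a shape resonance

    # The resonance is the eigenvalue in the lower half-plane that barely moves with theta.
    e1 = complex_scaled_eigs(0.25, V)
    e2 = complex_scaled_eigs(0.35, V)
    candidates = e1[(e1.real > 1) & (e1.real < 8) & (e1.imag < 0) & (e1.imag > -1)]
    stable = min(candidates, key=lambda e: np.min(np.abs(e2 - e)))
    print(f"resonance estimate: E = {stable.real:.4f} - {abs(stable.imag):.4f}i  (E_r - i*Gamma/2)")
    ```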

  20. Spatio-Temporal Process Variability in Watershed Scale Wetland Restoration Planning

    NASA Astrophysics Data System (ADS)

    Evenson, G. R.

    2012-12-01

    Watershed scale restoration decision making processes are increasingly informed by quantitative methodologies providing site-specific restoration recommendations - sometimes referred to as "systematic planning." The more advanced of these methodologies are characterized by a coupling of search algorithms and ecological models to discover restoration plans that optimize environmental outcomes. Yet while these methods have exhibited clear utility as decision support toolsets, they may be critiqued for flawed evaluations of spatio-temporally variable processes fundamental to watershed scale restoration. Hydrologic and non-hydrologic mediated process connectivity along with post-restoration habitat dynamics, for example, are commonly ignored yet known to appreciably affect restoration outcomes. This talk will present a methodology to evaluate such spatio-temporally complex processes in the production of watershed scale wetland restoration plans. Using the Tuscarawas Watershed in Eastern Ohio as a case study, a genetic algorithm will be coupled with the Soil and Water Assessment Tool (SWAT) to reveal optimal wetland restoration plans as measured by their capacity to maximize nutrient reductions. Then, a so-called "graphical" representation of the optimization problem will be implemented in-parallel to promote hydrologic and non-hydrologic mediated connectivity amongst existing wetlands and sites selected for restoration. Further, various search algorithm mechanisms will be discussed as a means of accounting for temporal complexities such as post-restoration habitat dynamics. Finally, generalized patterns of restoration plan optimality will be discussed as an alternative and possibly superior decision support toolset given the complexity and stochastic nature of spatio-temporal process variability.
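
    The coupling of a search algorithm with a watershed model can be sketched with a simple genetic algorithm in which each candidate plan is a 0/1 vector of restorable sites and the fitness is the nutrient reduction achieved within an area budget. Here a synthetic per-site reduction table stands in for SWAT, and all site data and GA settings are assumptions; a real application would evaluate each plan with the watershed model and could add connectivity terms to the objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_sites = 40
    reduction = rng.uniform(1, 10, n_sites)        # synthetic nutrient reduction per site [kg/yr]
    area = rng.uniform(1, 5, n_sites)              # wetland area required per site [ha]
    budget = 40.0                                  # total restorable area [ha]

    def fitness(plan):
        """Nutrient reduction of a plan; infeasible plans are penalized to zero."""
        return reduction @ plan if area @ plan <= budget else 0.0

    pop = (rng.random((60, n_sites)) < 0.2).astype(int)          # initial population of plans
    for gen in range(200):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[-30:]]                  # truncation selection
        children = []
        for _ in range(len(pop)):
            a, b = parents[rng.integers(30)], parents[rng.integers(30)]
            cut = rng.integers(1, n_sites)                       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_sites) < 0.02                    # mutation
            children.append(np.where(flip, 1 - child, child))
        pop = np.array(children)

    best = max(pop, key=fitness)
    print(f"best plan reduces {fitness(best):.1f} kg/yr using {area @ best:.1f} of {budget} ha")
    ```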

  1. Continuous Production of Discrete Plasmid DNA-Polycation Nanoparticles Using Flash Nanocomplexation.

    PubMed

    Santos, Jose Luis; Ren, Yong; Vandermark, John; Archang, Maani M; Williford, John-Michael; Liu, Heng-Wen; Lee, Jason; Wang, Tza-Huei; Mao, Hai-Quan

    2016-12-01

    Despite successful demonstration of linear polyethyleneimine (lPEI) as an effective carrier for a wide range of gene medicine, including DNA plasmids, small interfering RNAs, mRNAs, etc., and continuous improvement of the physical properties and biological performance of the polyelectrolyte complex nanoparticles prepared from lPEI and nucleic acids, there still exist major challenges to produce these nanocomplexes in a scalable manner, particularly for lPEI/DNA nanoparticles. This has significantly hindered the progress toward clinical translation of these nanoparticle-based gene medicines. Here the authors report a flash nanocomplexation (FNC) method that achieves continuous production of lPEI/plasmid DNA nanoparticles with narrow size distribution using a confined impinging jet device. The method involves the complex coacervation of negatively charged DNA plasmid and positively charged lPEI under rapid, highly dynamic, and homogeneous mixing conditions, producing polyelectrolyte complex nanoparticles with a narrow distribution of particle size and shape. The average number of plasmid DNA copies packaged per nanoparticle and its distribution are similar between the FNC method and the small-scale batch mixing method. In addition, the nanoparticles prepared by these two methods exhibit similar cell transfection efficiency. These results confirm that FNC is an effective and scalable method that can produce well-controlled lPEI/plasmid DNA nanoparticles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Continuous Production of Discrete Plasmid DNA-Polycation Nanoparticles Using Flash Nanocomplexation

    PubMed Central

    Santos, Jose Luis; Ren, Yong; Vandermark, John; Archang, Maani M.; Williford, John-Michael; Liu, Heng-wen; Lee, Jason; Wang, Tza-Huei; Mao, Hai-Quan

    2016-01-01

    Despite successful demonstration of linear polyethyleneimine (lPEI) as an effective carrier for a wide range of gene medicine, including DNA plasmids, small interfering RNAs, mRNAs, etc., and continuous improvement of the physical properties and biological performance of the polyelectrolyte complex nanoparticles prepared from lPEI and nucleic acids, there still exist major challenges to produce these nanocomplexes in a scalable manner, particularly for lPEI/DNA nanoparticles. This has significantly hindered the progress towards clinical translation of these nanoparticle-based gene medicines. Here we report a flash nanocomplexation (FNC) method that achieves continuous production of lPEI/plasmid DNA nanoparticles with narrow size distribution using a confined impinging jet device. The method involves the complex coacervation of negatively charged DNA plasmid and positively charged lPEI under rapid, highly dynamic, and homogeneous mixing conditions, producing polyelectrolyte complex nanoparticles with a narrow distribution of particle size and shape. The average number of plasmid DNA copies packaged per nanoparticle and its distribution are similar between the FNC method and the small-scale batch mixing method. In addition, the nanoparticles prepared by these two methods exhibit similar cell transfection efficiency. These results confirm that FNC is an effective and scalable method that can produce well-controlled lPEI/plasmid DNA nanoparticles. PMID:27717227

  3. Linguistic complex networks as a young field of quantitative linguistics. Comment on "Approaching human language with complex networks" by J. Cong and H. Liu

    NASA Astrophysics Data System (ADS)

    Köhler, Reinhard

    2014-12-01

    We have long been used to the domination of qualitative methods in modern linguistics. Indeed, qualitative methods have advantages such as ease of use and wide applicability to many types of linguistic phenomena. However, this shall not overshadow the fact that a great part of human language is amenable to quantification. Moreover, qualitative methods may lead to over-simplification by employing the rigid yes/no scale. When variability and vagueness of human language must be taken into account, qualitative methods will prove inadequate and give way to quantitative methods [1, p. 11]. In addition to such advantages as exactness and precision, quantitative concepts and methods make it possible to find laws of human language which are just like those in natural sciences. These laws are fundamental elements of linguistic theories in the spirit of the philosophy of science [2,3]. Theorization effort of this type is what quantitative linguistics [1,4,5] is devoted to. The review of Cong and Liu [6] has provided an informative and insightful survey of linguistic complex networks as a young field of quantitative linguistics, including the basic concepts and measures, the major lines of research with linguistic motivation, and suggestions for future research.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagaoka, Masataka; Core Research for Evolutional Science and Technology; ESICB, Kyoto University, Kyodai Katsura, Nishikyo-ku, Kyoto 615-8520

    A new efficient hybrid Monte Carlo (MC)/molecular dynamics (MD) reaction method with a rare event-driving mechanism is introduced as a practical 'atomistic' molecular simulation of large-scale chemically reactive systems. Starting with its demonstrative application to the racemization reaction of (R)-2-chlorobutane in N,N-dimethylformamide solution, several other applications are shown from the practical viewpoint of molecular control of complex chemical reactions, stereochemistry and aggregate structures. Finally, I mention future applications of the hybrid MC/MD reaction method.

  5. Mathematical programming for the efficient allocation of health care resources.

    PubMed

    Stinnett, A A; Paltiel, A D

    1996-10-01

    Previous discussions of methods for the efficient allocation of health care resources subject to a budget constraint have relied on unnecessarily restrictive assumptions. This paper makes use of established optimization techniques to demonstrate that a general mathematical programming framework can accommodate much more complex information regarding returns to scale, partial and complete indivisibility and program interdependence. Methods are also presented for incorporating ethical constraints into the resource allocation process, including explicit identification of the cost of equity.
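
    A minimal illustration of budget-constrained allocation with complete indivisibility is the 0/1 knapsack structure, solvable exactly by dynamic programming. The program costs and benefits below are made-up numbers for illustration; the paper's general mathematical programming framework additionally handles returns to scale, partial divisibility, program interdependence, and equity constraints.

    ```python
    # Exact 0/1 allocation of a fixed budget across indivisible programs (knapsack DP).
    # Costs and health benefits (e.g., QALYs gained) are illustrative assumptions.
    costs = [4, 3, 5, 2, 6]          # program costs in $M
    benefits = [7.0, 4.5, 8.0, 3.0, 9.5]
    budget = 10

    n = len(costs)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                      # skip program i-1
            if costs[i - 1] <= b:                            # or fund it if affordable
                best[i][b] = max(best[i][b],
                                 best[i - 1][b - costs[i - 1]] + benefits[i - 1])

    # Trace back the funded set of programs.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    print(f"max benefit {best[n][budget]} from programs {sorted(chosen)}")
    ```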

  6. Time to "go large" on biofilm research: advantages of an omics approach.

    PubMed

    Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J

    2009-04-01

    In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter-plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate if the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms--"biofomics". This omics approach would bring much needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay between themselves in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, and before embarking on such an enterprise, the selection of a versatile, robust, high-throughput biofilm growing device and of appropriate methods for biofilm analysis will have to be performed. Whether such device and analytical methods are already available, particularly for complex heterotrophic biofilms is, however, very debatable.

  7. The Ghost in the Machine: Fracking in the Earth's Complex Brittle Crust

    NASA Astrophysics Data System (ADS)

    Malin, P. E.

    2015-12-01

    This paper discusses the impact of complex rock properties on practical applications like fracking and its associated seismic emissions. A variety of borehole measurements show that the complex physical properties of the upper crust cannot be characterized by averages on any scale. Instead they appear to follow three empirical rules: a power law distribution in physical scales, a lognormal distribution in populations, and a direct relation between changes in porosity and log(permeability). These rules can be directly related to the presence of fluid-rich and seismically active fractures - from mineral grains to fault segments. (These are the "ghosts" referred to in the title.) In other physical systems, such behaviors arise on the boundaries of phase changes, and are studied as "critical state physics". In analogy to the 4 phases of water, crustal rocks progress upward from an un-fractured, ductile lower crust to nearly cohesionless surface alluvium. The crust in between is in an unstable transition. It is in this layer that methods such as hydrofracking operate - be they in oil and gas, geothermal, or mining. As a result, nothing is predictable in these systems. Crustal models have conventionally been constructed assuming that in situ permeability and related properties are normally distributed. This approach is consistent with the use of short scale-length cores and logs to estimate properties. However, reservoir-scale flow data show that they are better fit by lognormal distributions. Such "long tail" distributions are observed for well productivity, ore vein grades, and induced seismic signals. Outcrop and well-log data show that many rock properties also show a power-law-type variation in scale lengths. In terms of Fourier power spectra, if peaks per km is k, then their power is proportional to 1/k. The source of this variation is related to pore-space connectivity, beginning with grain fractures. We then show that a passive seismic method, Tomographic Fracture Imaging™ (TFI), can observe the distribution of this connectivity. Combined with TFI data, our fracture-connectivity model reveals the most significant crustal features and accounts for their range of passive and stimulated behaviors.

  8. Complexity of heart rate fluctuations in near-term sheep and human fetuses during sleep.

    PubMed

    Frank, Birgit; Frasch, Martin G; Schneider, Uwe; Roedel, Marcus; Schwab, Matthias; Hoyer, Dirk

    2006-10-01

    We investigated how the complexity of fetal heart rate fluctuations (fHRF) is related to the sleep states in sheep and human fetuses. The complexity as a function of time scale for fetal heart rate data for 7 sheep and 27 human fetuses was estimated in rapid eye movement (REM) and non-REM sleep by means of permutation entropy and the associated Kullback-Leibler entropy. We found that in humans, fHRF complexity is higher in non-REM than REM sleep, whereas in sheep this relationship is reversed. To show this relation, the choice of the appropriate time scale is crucial. In sheep fetuses, we found differences in the complexity of fHRF between REM and non-REM sleep only for larger time scales (above 2.5 s), whereas in human fetuses the complexity was clearly different between REM and non-REM sleep over the whole range of time scales. This may be due to inherent time scales of complexity, which reflect species-specific functions of the autonomic nervous system. Such differences have to be considered when animal data are translated to the human situation.
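
    As a rough illustration of the analysis described above, the sketch below computes a normalized permutation entropy on a coarse-grained series so that the measure can be evaluated as a function of time scale. The function names, the order/delay parameters, the coarse-graining scheme and the synthetic beat-to-beat series are illustrative assumptions, not the exact settings of the study.

```python
import math
import numpy as np
from itertools import permutations

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (one point per time scale)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy (0 = perfectly regular, 1 = fully random)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(n):
        pattern = tuple(int(v) for v in np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] += 1
    p = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(order)))

# complexity as a function of time scale for a (synthetic) beat-to-beat series
rr = np.random.default_rng(0).normal(440.0, 20.0, 5000)   # placeholder R-R intervals, ms
for scale in (1, 2, 4, 8):
    print(scale, permutation_entropy(coarse_grain(rr, scale)))
```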

  9. Community structure from spectral properties in complex networks

    NASA Astrophysics Data System (ADS)

    Servedio, V. D. P.; Colaiori, F.; Capocci, A.; Caldarelli, G.

    2005-06-01

    We analyze the spectral properties of complex networks focusing on their relation to the community structure, and develop an algorithm based on correlations among components of different eigenvectors. The algorithm applies to general weighted networks and, in a suitably modified version, to the case of directed networks. Our method allows us to correctly detect communities in sharply partitioned graphs, and it is also useful for the analysis of more complex networks without a well-defined cluster structure, such as social and information networks. As an example, we test the algorithm on a large scale data-set from a psychological experiment of free word association, where it proves to be successful both in clustering words and in uncovering mental association patterns.
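
    A minimal sketch of the general idea follows: group nodes by the correlation of their components across several leading eigenvectors of the normalized adjacency matrix. The eigenvector count, the hierarchical-clustering step and the example graph are assumptions of this sketch, not the authors' exact algorithm.

```python
import numpy as np
import networkx as nx
from scipy.linalg import eigh
from scipy.cluster.hierarchy import linkage, fcluster

def eigenvector_correlation_communities(G, n_vec=3, n_comm=2):
    """Cluster nodes by correlating their components across leading eigenvectors."""
    A = nx.to_numpy_array(G)
    d = A.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    w, v = eigh(D @ A @ D)                         # eigenvalues in ascending order
    X = v[:, -(n_vec + 1):-1]                      # leading nontrivial eigenvectors
    dist = np.clip(1.0 - np.corrcoef(X), 0.0, None)  # node-node "distance" from correlations
    condensed = dist[np.triu_indices_from(dist, k=1)]
    Z = linkage(condensed, method='average')
    return fcluster(Z, t=n_comm, criterion='maxclust')

labels = eigenvector_correlation_communities(nx.karate_club_graph())
print(labels)
```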

  10. Complex Chemical Reaction Networks from Heuristics-Aided Quantum Chemistry.

    PubMed

    Rappoport, Dmitrij; Galvin, Cooper J; Zubarev, Dmitry Yu; Aspuru-Guzik, Alán

    2014-03-11

    While structures and reactivities of many small molecules can be computed efficiently and accurately using quantum chemical methods, heuristic approaches remain essential for modeling complex structures and large-scale chemical systems. Here, we present a heuristics-aided quantum chemical methodology applicable to complex chemical reaction networks such as those arising in cell metabolism and prebiotic chemistry. Chemical heuristics offer an expedient way of traversing high-dimensional reactive potential energy surfaces and are combined here with quantum chemical structure optimizations, which yield the structures and energies of the reaction intermediates and products. Application of heuristics-aided quantum chemical methodology to the formose reaction reproduces the experimentally observed reaction products, major reaction pathways, and autocatalytic cycles.

  11. Diffraction scattering computed tomography: a window into the structures of complex nanomaterials

    PubMed Central

    Birkbak, M. E.; Leemreize, H.; Frølich, S.; Stock, S. R.

    2015-01-01

    Modern functional nanomaterials and devices are increasingly composed of multiple phases arranged in three dimensions over several length scales. Therefore there is a pressing demand for improved methods for structural characterization of such complex materials. An excellent emerging technique that addresses this problem is diffraction/scattering computed tomography (DSCT). DSCT combines the merits of diffraction and/or small angle scattering with computed tomography to allow imaging the interior of materials based on the diffraction or small angle scattering signals. This allows, e.g., one to distinguish the distributions of polymorphs in complex mixtures. Here we review this technique and give examples of how it can shed light on modern nanoscale materials. PMID:26505175

  12. Controlling sign problems in spin models using tensor renormalization

    NASA Astrophysics Data System (ADS)

    Denbleyker, Alan; Liu, Yuzhi; Meurice, Y.; Qin, M. P.; Xiang, T.; Xie, Z. Y.; Yu, J. F.; Zou, Haiyuan

    2014-01-01

    We consider the sign problem for classical spin models at complex β = 1/g₀² on L × L lattices. We show that the tensor renormalization group method allows reliable calculations for larger Im β than the reweighting Monte Carlo method. For the Ising model with complex β we compare our results with the exact Onsager-Kaufman solution at finite volume. The Fisher zeros can be determined precisely with the tensor renormalization group method. We check the convergence of the tensor renormalization group method for the O(2) model on L × L lattices when the number of states Ds increases. We show that the finite size scaling of the calculated Fisher zeros agrees very well with the Kosterlitz-Thouless transition assumption and predict the locations for larger volume. The locations of these zeros agree with Monte Carlo reweighting calculations for small volume. The application of the method to the O(2) model with a chemical potential is briefly discussed.

  13. Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence

    NASA Astrophysics Data System (ADS)

    van der Straeten, Erik; Beck, Christian

    2009-09-01

    We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
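
    The extraction procedure outlined above can be sketched roughly as follows: find the window length whose local kurtosis averages to the Gaussian value of 3 (the superstatistical time scale), then read off β(t) as the inverse of the local variance. The function names and the kurtosis criterion are generic assumptions; the paper's full validity checks are omitted.

```python
import numpy as np

def mean_local_kurtosis(u, T):
    """Average kurtosis over non-overlapping windows of length T."""
    n = len(u) // T
    w = np.asarray(u[:n * T], dtype=float).reshape(n, T)
    c = w - w.mean(axis=1, keepdims=True)
    return float(np.mean((c ** 4).mean(axis=1) / np.maximum((c ** 2).mean(axis=1) ** 2, 1e-30)))

def superstatistical_time_scale(u, candidates):
    """Window length whose average local kurtosis is closest to the Gaussian value 3."""
    return min(candidates, key=lambda T: abs(mean_local_kurtosis(u, T) - 3.0))

def local_beta(u, T):
    """Slowly fluctuating parameter beta(t) = 1 / (local variance) on the time scale T."""
    n = len(u) // T
    w = np.asarray(u[:n * T], dtype=float).reshape(n, T)
    return 1.0 / np.maximum(w.var(axis=1), 1e-30)
```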

  14. Target matching based on multi-view tracking

    NASA Astrophysics Data System (ADS)

    Liu, Yahui; Zhou, Changsheng

    2011-01-01

    A feature matching method based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) is proposed to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box regarded as the target region is calculated. Extremal regions are obtained with MSER; after being fitted to ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are obtained by requiring the ratio of the closest to the second-closest descriptor distance to be below a threshold, and outliers are eliminated with RANSAC. Experimental results indicate that the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale and illumination changes.
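
    The matching pipeline lends itself to a compact OpenCV sketch: SIFT descriptors, a nearest/second-nearest ratio test, and RANSAC outlier rejection via a homography. The MSER ellipse-normalization step of the paper is omitted here, and the threshold values are illustrative assumptions.

```python
import cv2
import numpy as np

def match_same_target(img1, img2, ratio=0.75, ransac_thresh=5.0):
    """Return SIFT matches between two views that survive the ratio test and RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in knn if m.distance < ratio * n.distance]   # ratio test
    if len(good) < 4:
        return []
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if inlier_mask is None:
        return good                                                 # degenerate geometry: keep ratio-test matches
    return [m for m, keep in zip(good, inlier_mask.ravel()) if keep]
```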

  15. Measurement of the speed of sound by observation of the Mach cones in a complex plasma under microgravity conditions

    NASA Astrophysics Data System (ADS)

    Zhukhovitskii, D. I.; Fortov, V. E.; Molotkov, V. I.; Lipaev, A. M.; Naumkin, V. N.; Thomas, H. M.; Ivlev, A. V.; Schwabe, M.; Morfill, G. E.

    2015-02-01

    We report the first observation of the Mach cones excited by a larger microparticle (projectile) moving through a cloud of smaller microparticles (dust) in a complex plasma with neon as a buffer gas under microgravity conditions. A collective motion of the dust particles occurs as propagation of the contact discontinuity. The corresponding speed of sound was measured by a special method of Mach cone visualization. The measurement results are incompatible with the theory of ion acoustic waves. An estimate of the pressure in a strongly coupled Coulomb system and a scaling law for the complex plasma make it possible to derive an estimate of the speed of sound, which is in reasonable agreement with the experiments in complex plasmas.

  16. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW share many common traits. Until now, hundreds of models have been proposed to characterize these traits and to understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation. PMID:25160506

  17. A mesoscopic bridging scale method for fluids and coupling dissipative particle dynamics with continuum finite element method

    PubMed Central

    Kojic, Milos; Filipovic, Nenad; Tsuda, Akira

    2012-01-01

    A multiscale procedure to couple a mesoscale discrete particle model and a macroscale continuum model of incompressible fluid flow is proposed in this study. We call this procedure the mesoscopic bridging scale (MBS) method since it is developed on the basis of the bridging scale method for coupling molecular dynamics and finite element models [G.J. Wagner, W.K. Liu, Coupling of atomistic and continuum simulations using a bridging scale decomposition, J. Comput. Phys. 190 (2003) 249–274]. We derive the governing equations of the MBS method and show that the differential equations of motion of the mesoscale discrete particle model and finite element (FE) model are only coupled through the force terms. Based on this coupling, we express the finite element equations which rely on the Navier–Stokes and continuity equations, in a way that the internal nodal FE forces are evaluated using viscous stresses from the mesoscale model. The dissipative particle dynamics (DPD) method for the discrete particle mesoscale model is employed. The entire fluid domain is divided into a local domain and a global domain. Fluid flow in the local domain is modeled with both DPD and FE method, while fluid flow in the global domain is modeled by the FE method only. The MBS method is suitable for modeling complex (colloidal) fluid flows, where continuum methods are sufficiently accurate only in the large fluid domain, while small, local regions of particular interest require detailed modeling by mesoscopic discrete particles. Solved examples – simple Poiseuille and driven cavity flows illustrate the applicability of the proposed MBS method. PMID:23814322

  18. Effects of Spatial Scale on Cognitive Play in Preschool Children.

    ERIC Educational Resources Information Center

    Delong, Alton J.; And Others

    1994-01-01

    Examined effects of a reduced-scale play environment on the temporal aspects of complex play behavior. Children playing with playdough in a 7 x 5 x 5-foot structure began complex play more quickly, played in longer segments, and spent slightly more time in complex play than when in full-size conditions, suggesting that scale-reduced environments…

  19. Coastline complexity: A parameter for functional classification of coastal environments

    USGS Publications Warehouse

    Bartley, J.D.; Buddemeier, R.W.; Bennett, D.A.

    2001-01-01

    To understand the role of the world's coastal zone (CZ) in global biogeochemical fluxes (particularly those of carbon, nitrogen, phosphorus, and sediments) we must generalise from a limited number of observations associated with a few well-studied coastal systems to the global scale. Global generalisation must be based on globally available data and on robust techniques for classification and upscaling. These requirements impose severe constraints on the set of variables that can be used to extract information about local CZ functions such as advective and metabolic fluxes, and differences resulting from changes in biotic communities. Coastal complexity (plan-view tortuosity of the coastline) is a potentially useful parameter, since it interacts strongly with both marine and terrestrial forcing functions to determine coastal energy regimes and water residence times, and since 'open' vs. 'sheltered' categories are important components of most coastal habitat classification schemes. This study employs the World Vector Shoreline (WVS) dataset, originally developed at a scale of 1:250 000. Coastline complexity measures are generated using a modification of the Angle Measurement Technique (AMT), in which the basic measurement is the angle between two lines of specified length drawn from a selected point to the closest points of intersection with the coastline. Repetition of these measurements for different lengths at the same point yields a distribution of angles descriptive of the extent and scale of complexity in the vicinity of that point; repetition of the process at different points on the coast provides a basis for comparing both the extent and the characteristic scale of coastline variation along different reaches of the coast. The coast of northwestern Mexico (Baja California and the Gulf of California) was used as a case study for initial development and testing of the method. The characteristic angle distribution plots generated by the AMT analysis were clustered using LOICZVIEW, a high dimensionality clustering routine developed for large-scale coastal classification studies. The results show distinctive differences in coastal environments that have the potential for interpretation in terms of both biotic and hydrogeochemical environments, and that can be related to the resolution limits and uncertainties of the shoreline data used. These objective, quantitative measures of coastal complexity as a function of scale can be further developed and combined with other data sets to provide a key component of functional classification of coastal environments. © 2001 Elsevier Science B.V. All rights reserved.
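
    A simplified sketch of the AMT measurement on a digitized coastline (a polyline of x, y vertices): for a selected vertex and a given measurement length, find the nearest coastline points on either side at least that far away and return the angle between the two lines; repeating over several lengths gives the angle distribution described above. This is an illustrative reading of the technique, not the LOICZVIEW implementation.

```python
import numpy as np

def amt_angle(coast, i, length):
    """Angle (degrees) at vertex i between lines to the first coastline points, on each
    side, whose straight-line distance from vertex i reaches `length`."""
    p = coast[i]

    def first_at_distance(indices):
        for j in indices:
            if np.linalg.norm(coast[j] - p) >= length:
                return coast[j]
        return None

    a = first_at_distance(range(i - 1, -1, -1))        # search backwards along the coast
    b = first_at_distance(range(i + 1, len(coast)))    # search forwards along the coast
    if a is None or b is None:
        return np.nan
    u, v = a - p, b - p
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def amt_distribution(coast, i, lengths):
    """Angle as a function of measurement scale at one coastline point."""
    return {L: amt_angle(coast, i, L) for L in lengths}
```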

  20. Measurement of electroosmotic and electrophoretic velocities using pulsed and sinusoidal electric fields

    PubMed Central

    Sadek, Samir H.; Pimenta, Francisco; Pinho, Fernando T.

    2017-01-01

    In this work, we explore two methods to simultaneously measure the electroosmotic mobility in microchannels and the electrophoretic mobility of micron‐sized tracer particles. The first method is based on imposing a pulsed electric field, which allows to isolate electrophoresis and electroosmosis at the startup and shutdown of the pulse, respectively. In the second method, a sinusoidal electric field is generated and the mobilities are found by minimizing the difference between the measured velocity of tracer particles and the velocity computed from an analytical expression. Both methods produced consistent results using polydimethylsiloxane microchannels and polystyrene micro‐particles, provided that the temporal resolution of the particle tracking velocimetry technique used to compute the velocity of the tracer particles is fast enough to resolve the diffusion time‐scale based on the characteristic channel length scale. Additionally, we present results with the pulse method for viscoelastic fluids, which show a more complex transient response with significant velocity overshoots and undershoots after the start and the end of the applied electric pulse, respectively. PMID:27990654

  1. A Review of Computational Methods in Materials Science: Examples from Shock-Wave and Polymer Physics

    PubMed Central

    Steinhauser, Martin O.; Hiermaier, Stefan

    2009-01-01

    This review discusses several computational methods used on different length and time scales for the simulation of material behavior. First, the importance of physical modeling and its relation to computer simulation on multiscales is discussed. Then, computational methods used on different scales are shortly reviewed, before we focus on the molecular dynamics (MD) method. Here we survey in a tutorial-like fashion some key issues including several MD optimization techniques. Thereafter, computational examples for the capabilities of numerical simulations in materials research are discussed. We focus on recent results of shock wave simulations of a solid which are based on two different modeling approaches and we discuss their respective assets and drawbacks with a view to their application on multiscales. Then, the prospects of computer simulations on the molecular length scale using coarse-grained MD methods are covered by means of examples pertaining to complex topological polymer structures including star-polymers, biomacromolecules such as polyelectrolytes and polymers with intrinsic stiffness. This review ends by highlighting new emerging interdisciplinary applications of computational methods in the field of medical engineering where the application of concepts of polymer physics and of shock waves to biological systems holds a lot of promise for improving medical applications such as extracorporeal shock wave lithotripsy or tumor treatment. PMID:20054467

  2. COBRApy: COnstraints-Based Reconstruction and Analysis for Python.

    PubMed

    Ebrahim, Ali; Lerman, Joshua A; Palsson, Bernhard O; Hyduke, Daniel R

    2013-08-08

    COnstraint-Based Reconstruction and Analysis (COBRA) methods are widely used for genome-scale modeling of metabolic networks in both prokaryotes and eukaryotes. Due to the successes with metabolism, there is an increasing effort to apply COBRA methods to reconstruct and analyze integrated models of cellular processes. The COBRA Toolbox for MATLAB is a leading software package for genome-scale analysis of metabolism; however, it was not designed to elegantly capture the complexity inherent in integrated biological networks and lacks an integration framework for the multiomics data used in systems biology. The openCOBRA Project is a community effort to promote constraints-based research through the distribution of freely available software. Here, we describe COBRA for Python (COBRApy), a Python package that provides support for basic COBRA methods. COBRApy is designed in an object-oriented fashion that facilitates the representation of the complex biological processes of metabolism and gene expression. COBRApy does not require MATLAB to function; however, it includes an interface to the COBRA Toolbox for MATLAB to facilitate use of legacy codes. For improved performance, COBRApy includes parallel processing support for computationally intensive processes. COBRApy is an object-oriented framework designed to meet the computational challenges associated with the next generation of stoichiometric constraint-based models and high-density omics data sets. http://opencobra.sourceforge.net/
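
    A minimal usage sketch of the package, building a three-reaction toy model and solving it with flux balance analysis; the toy model is invented for illustration, and a genome-scale reconstruction would instead be loaded from SBML or JSON (a linear-programming solver such as GLPK must be installed).

```python
from cobra import Model, Metabolite, Reaction

model = Model('toy')
a = Metabolite('A_c', compartment='c')
b = Metabolite('B_c', compartment='c')

ex_a = Reaction('EX_A')                       # exchange: negative flux = uptake of A
ex_a.add_metabolites({a: -1.0})
ex_a.bounds = (-10.0, 0.0)

a_to_b = Reaction('A_to_B')                   # internal conversion A -> B
a_to_b.add_metabolites({a: -1.0, b: 1.0})
a_to_b.bounds = (0.0, 1000.0)

ex_b = Reaction('EX_B')                       # secretion of B
ex_b.add_metabolites({b: -1.0})
ex_b.bounds = (0.0, 1000.0)

model.add_reactions([ex_a, a_to_b, ex_b])
model.objective = 'EX_B'                      # maximize secretion of B

solution = model.optimize()
print(solution.objective_value, dict(solution.fluxes))
```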

  3. An Immersed Boundary - Adaptive Mesh Refinement solver (IB-AMR) for high fidelity fully resolved wind turbine simulations

    NASA Astrophysics Data System (ADS)

    Angelidis, Dionysios; Sotiropoulos, Fotis

    2015-11-01

    The geometrical details of wind turbines determine the structure of the turbulence in the near and far wake and should be taken into account when performing high fidelity calculations. Multi-resolution simulations coupled with an immersed boundary method constitute a powerful framework for high-fidelity calculations past wind farms located over complex terrains. We develop a 3D Immersed-Boundary Adaptive Mesh Refinement flow solver (IB-AMR) which enables turbine-resolving LES of wind turbines. The idea of using a hybrid staggered/non-staggered grid layout adopted in the Curvilinear Immersed Boundary Method (CURVIB) has been successfully incorporated on unstructured meshes and the fractional step method has been employed. The overall performance and robustness of the second order accurate, parallel, unstructured solver is evaluated by comparing the numerical simulations against conforming grid calculations and experimental measurements of laminar and turbulent flows over complex geometries. We also present turbine-resolving multi-scale LES considering all the details affecting the induced flow field, including the geometry of the tower, the nacelle and especially the rotor blades of a wind tunnel scale turbine. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the Sandia National Laboratories.

  4. A design tool for direct and non-stochastic calculations of near-field radiative transfer in complex structures: The NF-RT-FDTD algorithm

    NASA Astrophysics Data System (ADS)

    Didari, Azadeh; Pinar Mengüç, M.

    2017-08-01

    Advances in nanotechnology and nanophotonics are inextricably linked with the need for reliable computational algorithms to be adapted as design tools for the development of new concepts in energy harvesting, radiative cooling, nanolithography and nano-scale manufacturing, among others. In this paper, we provide an outline for such a computational tool, named NF-RT-FDTD, to determine the near-field radiative transfer between structured surfaces using Finite Difference Time Domain method. NF-RT-FDTD is a direct and non-stochastic algorithm, which accounts for the statistical nature of the thermal radiation and is easily applicable to any arbitrary geometry at thermal equilibrium. We present a review of the fundamental relations for far- and near-field radiative transfer between different geometries with nano-scale surface and volumetric features and gaps, and then we discuss the details of the NF-RT-FDTD formulation, its application to sample geometries and outline its future expansion to more complex geometries. In addition, we briefly discuss some of the recent numerical works for direct and indirect calculations of near-field thermal radiation transfer, including Scattering Matrix method, Finite Difference Time Domain method (FDTD), Wiener Chaos Expansion, Fluctuating Surface Current (FSC), Fluctuating Volume Current (FVC) and Thermal Discrete Dipole Approximations (TDDA).

  5. Mapping of the extinction in giant molecular clouds using optical star counts

    NASA Astrophysics Data System (ADS)

    Cambrésy, L.

    1999-05-01

    This paper presents large scale extinction maps of most nearby Giant Molecular Clouds of the Galaxy (Lupus, rho Ophiuchus, Scorpius, Coalsack, Taurus, Chamaeleon, Musca, Corona Australis, Serpens, IC 5146, Vela, Orion, Monoceros R1 and R2, Rosette, Carina) derived from a star count method using an adaptive grid and a wavelet decomposition applied to the optical data provided by the USNO-Precision Measuring Machine. The distribution of the extinction in the clouds leads to estimates of their total individual masses M and their maxima of extinction. I show that the relation between the mass contained within an iso-extinction contour and the extinction is similar from cloud to cloud and allows the extrapolation of the maximum extinction in the range 5.7 to 25.5 magnitudes. I found that about half of the mass is contained in regions where the visual extinction is smaller than 1 magnitude. The star count method used on a large scale ( ~ 250 square degrees) is a powerful and relatively straightforward way to estimate the mass of molecular complexes. A systematic study of the whole sky would lead to the discovery of new clouds, as I did in the Lupus complex, for which I found a sixth cloud of about 10^4 M_⊙.
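
    The core of a star count extinction estimate can be written in a few lines: compare the stellar surface density in a map cell with that of an unobscured reference field, using the slope of the cumulative magnitude distribution to convert the density drop into magnitudes of extinction. The slope value and the uniform-grid simplification below are assumptions; the paper itself uses an adaptive grid and a wavelet decomposition.

```python
import numpy as np

def extinction_from_counts(n_cell, area_cell, n_ref, area_ref, slope=0.34):
    """Classical Wolf/Bok star-count extinction:
    A = (1/slope) * log10(D_ref / D), with D a stellar surface density (stars per deg^2)
    and `slope` the assumed slope of the cumulative magnitude distribution (mag^-1)."""
    d = n_cell / area_cell
    d_ref = n_ref / area_ref
    return np.log10(d_ref / np.maximum(d, 1e-12)) / slope

# e.g. 12 stars in a 0.01 deg^2 cell vs. a reference density of 4000 stars per deg^2
print(extinction_from_counts(12, 0.01, 4000, 1.0))
```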

  6. A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes

    With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model's horizontal resolution is refined, the maximum resolved terrain slope will increase. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model's terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multi-scale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.

  7. Combining states without scale hierarchies with ordered parton showers

    DOE PAGES

    Fischer, Nadine; Prestel, Stefan

    2017-09-12

    Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.

  8. Time-oriented hierarchical method for computation of principal components using subspace learning algorithm.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2004-10-01

    Principal Component Analysis (PCA) and Principal Subspace Analysis (PSA) are classic techniques in statistical data analysis, feature extraction and data compression. Given a set of multivariate measurements, PCA and PSA provide a smaller set of "basis vectors" with less redundancy, and a subspace spanned by them, respectively. Artificial neurons and neural networks have been shown to perform PSA and PCA when gradient ascent (descent) learning rules are used, which is related to the constrained maximization (minimization) of statistical objective functions. Due to their low complexity, such algorithms and their implementation in neural networks are potentially useful in cases of tracking slow changes of correlations in the input data or in updating eigenvectors with new samples. In this paper we propose a PCA learning algorithm that is fully homogeneous with respect to neurons. The algorithm is obtained by modification of one of the most famous PSA learning algorithms--the Subspace Learning Algorithm (SLA). The modification of the algorithm is based on the Time-Oriented Hierarchical Method (TOHM). The method uses two distinct time scales. On a faster time scale, the PSA algorithm is responsible for the "behavior" of all output neurons. On a slower scale, output neurons compete for the fulfillment of their "own interests". On this scale, basis vectors in the principal subspace are rotated toward the principal eigenvectors. At the end of the paper it is briefly analyzed how (or why) the time-oriented hierarchical method can be used to transform any of the existing neural network PSA methods into a PCA method.
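
    For orientation, the sketch below implements the classic Subspace Learning Algorithm (Oja's subspace rule) that the paper modifies; it converges to an orthonormal basis of the principal subspace but, unlike the proposed TOHM variant, does not rotate the basis toward the individual eigenvectors. The learning rate, epoch count and synthetic data are illustrative assumptions.

```python
import numpy as np

def subspace_learning(X, n_components=2, lr=0.005, epochs=50, seed=0):
    """Oja's Subspace Learning Algorithm: W <- W + lr * (x y^T - W y y^T), with y = W^T x."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((X.shape[1], n_components))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x
            W += lr * (np.outer(x, y) - W @ np.outer(y, y))
    return W

# synthetic data with two dominant directions
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 5)) * np.array([3.0, 2.0, 0.5, 0.5, 0.5])
W = subspace_learning(X)
print(W.T @ W)          # approximately the identity: an orthonormal basis of the principal subspace
```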

  9. Investigate the complex process in particle-fluid based surface generation technology using reactive molecular dynamics method

    NASA Astrophysics Data System (ADS)

    Han, Xuesong; Li, Haiyan; Zhao, Fu

    2017-07-01

    Particle-fluid based surface generation has become one of the most important materials processing technologies for many advanced materials such as optical crystals and ceramics. Most particle-fluid based surface generation technologies involve two key processes: a chemical reaction responsible for surface softening, and physical behavior responsible for material removal/deformation. At present, researchers cannot give a satisfactory explanation of the complex processes in particle-fluid based surface generation because of the small temporal-spatial scales and the concurrent influence of physical and chemical processes. The molecular dynamics (MD) method has been shown to be a promising approach for constructing effective models of atomic-scale phenomena and can serve as a predictive simulation tool for analyzing the complex surface generation mechanism; it is employed in this work to study the essence of surface generation. Deformation and piling of water molecules is induced by the feeding of the abrasive particle, which confirms the property change of water at the nanometer scale. There is little silica aggregation or material removal because the water layer greatly reduces the strength of the mechanical interaction between the particle and the material surface and minimizes stress concentration. Furthermore, a chemical effect is also observed at the interface: stable chemical bonds form between water and silica, leading to the formation of silanol, and the reaction rate changes with the number of water molecules in the local environment. A novel ring structure is observed on the silica surface and is shown to favor chemical reaction with water molecules. The siloxane bond formation process quickly strengthens across the interface as the abrasive particle is fed in, because of the compressive stress resulting from the impact.

  10. See-Through Imaging of Laser-Scanned 3d Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.

  11. Determination of six sulfonamide antibiotics, two metabolites and trimethoprim in wastewater by isotope dilution liquid chromatography/tandem mass spectrometry.

    PubMed

    Le-Minh, Nhat; Stuetz, Richard M; Khan, Stuart J

    2012-01-30

    A highly sensitive method for the analysis of six sulfonamide antibiotics (sulfadiazine, sulfathiazole, sulfapyridine, sulfamerazine, sulfamethazine and sulfamethoxazole), two sulfonamide metabolites (N(4)-acetyl sulfamethazine and N(4)-acetyl sulfamethoxazole) and the commonly co-applied antibiotic trimethoprim was developed for the analysis of complex wastewater samples. The method involves solid phase extraction of filtered wastewater samples followed by liquid chromatography-tandem mass spectral detection. Method detection limits were shown to be matrix-dependent but ranged between 0.2 and 0.4 ng/mL for ultrapure water, 0.4 and 0.7 ng/mL for tap water, 1.4 and 5.9 ng/mL for a laboratory-scale membrane bioreactor (MBR) mixed liquor, 0.7 and 1.7 ng/mL for biologically treated effluent and 0.5 and 1.5 ng/g dry weight for MBR activated sludge. An investigation of analytical matrix effects was undertaken, demonstrating the significant and largely unpredictable nature of signal suppression observed for variably complex matrices compared to an ultrapure water matrix. The results demonstrate the importance of accounting for such matrix effects for accurate quantitation, as done in the presented method by isotope dilution. Comprehensive validation of calibration linearity, reproducibility, extraction recovery, limits of detection and quantification are also presented. Finally, wastewater samples from a variety of treatment stages in a full-scale wastewater treatment plant were analysed to illustrate the effectiveness of the method. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Scaling down the size and increasing the throughput of glycosyltransferase assays: activity changes on stem cell differentiation.

    PubMed

    Patil, Shilpa A; Chandrasekaran, E V; Matta, Khushi L; Parikh, Abhirath; Tzanakakis, Emmanuel S; Neelamegham, Sriram

    2012-06-15

    Glycosyltransferases (glycoTs) catalyze the transfer of monosaccharides from nucleotide-sugars to carbohydrate-, lipid-, and protein-based acceptors. We examined strategies to scale down and increase the throughput of glycoT enzymatic assays because traditional methods require large reaction volumes and complex chromatography. Approaches tested used (i) microarray pin printing, an appropriate method when glycoT activity was high; (ii) microwells and microcentrifuge tubes, a suitable method for studies with cell lysates when enzyme activity was moderate; and (iii) C(18) pipette tips and solvent extraction, a method that enriched reaction product when the extent of reaction was low. In all cases, reverse-phase thin layer chromatography (RP-TLC) coupled with phosphorimaging quantified the reaction rate. Studies with mouse embryonic stem cells (mESCs) demonstrated an increase in overall β(1,3)galactosyltransferase and α(2,3)sialyltransferase activity and a decrease in α(1,3)fucosyltransferases when these cells differentiate toward cardiomyocytes. Enzymatic and lectin binding data suggest a transition from Lewis(x)-type structures in mESCs to sialylated Galβ1,3GalNAc-type glycans on differentiation, with more prominent changes in enzyme activity occurring at later stages when embryoid bodies differentiated toward cardiomyocytes. Overall, simple, rapid, quantitative, and scalable glycoT activity analysis methods are presented. These use a range of natural and synthetic acceptors for the analysis of complex biological specimens that have limited availability. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Multi scales based sparse matrix spectral clustering image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method and extract image features on different scales, and then use the feature information to construct a sparse similarity matrix, which improves the operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
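
    The combination of a sparse similarity matrix with spectral clustering can be sketched with standard tools: a k-nearest-neighbour graph with Gaussian weights, the normalized graph Laplacian, its smallest eigenvectors, and k-means on the embedded rows. The multi-scale feature extraction of the paper is replaced here by a generic feature array, and the parameter values are assumptions.

```python
import numpy as np
from scipy.sparse import csgraph
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans

def sparse_spectral_clustering(features, n_clusters=4, n_neighbors=10, sigma=1.0):
    """Spectral clustering on a sparse k-NN similarity matrix (features: n_samples x n_features)."""
    W = kneighbors_graph(features, n_neighbors, mode='distance', include_self=False)
    W.data = np.exp(-W.data ** 2 / (2.0 * sigma ** 2))   # Gaussian similarity on k-NN edges only
    W = 0.5 * (W + W.T)                                   # symmetrize the sparse graph
    L = csgraph.laplacian(W, normed=True)
    _, vecs = eigsh(L, k=n_clusters, which='SM')          # smallest Laplacian eigenvectors
    rows = vecs / np.maximum(np.linalg.norm(vecs, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(rows)
```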

  14. Improved visibility graph fractality with application for the diagnosis of Autism Spectrum Disorder

    NASA Astrophysics Data System (ADS)

    Ahmadlou, Mehran; Adeli, Hojjat; Adeli, Amir

    2012-10-01

    Recently, the visibility graph (VG) algorithm was proposed for mapping a time series to a graph to study the complexity and fractality of the time series through investigation of the complexity of its graph. The visibility graph algorithm converts a fractal time series to a scale-free graph. VG has been used for the investigation of fractality in the dynamic behavior of both artificial and natural complex systems. However, the robustness and performance of the power of scale-freeness of VG (PSVG) as an effective method for measuring fractality have not been investigated. Since noise is unavoidable in real-life time series, the robustness of a fractality measure is of paramount importance. To improve the accuracy and robustness to noise of PSVG for measuring the fractality of biological time series, an improved PSVG is presented in this paper. The proposed method is evaluated using two examples: a synthetic benchmark time series and a complicated real-life electroencephalogram (EEG)-based diagnostic problem, that is, distinguishing autistic children from non-autistic children. It is shown that the proposed improved PSVG is less sensitive to noise and therefore more robust than PSVG. Further, it is shown that using the improved PSVG in the wavelet-chaos neural network model of Adeli and co-workers, in place of the Katz fractality dimension, results in a more accurate diagnosis of autism, a complicated neurological and psychiatric disorder.
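
    The natural visibility graph and the power of scale-freeness can be sketched directly from their definitions: connect two samples when the straight line between them stays above every intermediate sample, then fit the slope of the log-log degree distribution. This naive O(n^2) version is only the baseline PSVG and ignores the noise-robustness improvement that is the paper's contribution.

```python
import numpy as np

def visibility_degrees(y):
    """Node degrees of the natural visibility graph of a time series (naive O(n^2) check)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            k = np.arange(i + 1, j)
            line = y[j] + (y[i] - y[j]) * (j - k) / (j - i)   # visibility line between i and j
            if k.size == 0 or np.all(y[k] < line):
                deg[i] += 1
                deg[j] += 1
    return deg

def power_of_scale_freeness(y):
    """Slope magnitude of the log-log degree distribution (least-squares fit)."""
    deg = visibility_degrees(y)
    k, counts = np.unique(deg[deg > 0], return_counts=True)
    p = counts / counts.sum()
    slope, _ = np.polyfit(np.log10(k.astype(float)), np.log10(p), 1)
    return -slope

print(power_of_scale_freeness(np.cumsum(np.random.default_rng(0).standard_normal(500))))
```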

  15. Local and landscape associations between wintering dabbling ducks and wetland complexes in Mississippi

    USGS Publications Warehouse

    Pearse, Aaron T.; Kaminski, Richard M.; Reinecke, Kenneth J.; Dinsmore, Stephen J.

    2012-01-01

    Landscape features influence distribution of waterbirds throughout their annual cycle. A conceptual model, the wetland habitat complex, may be useful in conservation of wetland habitats for dabbling ducks (Anatini). The foundation of this conceptual model is that ducks seek complexes of wetlands containing diverse resources to meet dynamic physiological needs. We included flooded croplands, wetlands and ponds, public-land waterfowl sanctuary, and diversity of habitats as key components of wetland habitat complexes and compared their relative influence at two spatial scales (i.e., local, 0.25-km radius; landscape, 4-km) on dabbling ducks wintering in western Mississippi, USA during winters 2002–2004. Distribution of mallard (Anas platyrhynchos) groups was positively associated with flooded cropland at local and landscape scales. Models representing flooded croplands at the landscape scale best explained occurrence of other dabbling ducks. Habitat complexity measured at both scales best explained group size of other dabbling ducks. Flooded croplands likely provided food that had decreased in availability due to conversion of wetlands to agriculture. Wetland complexes at landscape scales were more attractive to wintering ducks than single or structurally simple wetlands. Conservation of wetland complexes at large spatial scales (≥5,000 ha) on public and private lands will require coordination among multiple stakeholders.

  16. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brislawn, Christopher M.

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could be used for more generic smooth interpolation.
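
    A minimal sketch of the idea under stated assumptions: iterate a forward 2-D wavelet transform, damp the detail coefficients, invert, and re-impose the known samples so that only the masked 'holes' are smoothed in. The wavelet, damping factor and iteration count are illustrative, and this is not the report's exact minimal-realization filter bank.

```python
import numpy as np
import pywt

def wavelet_smooth_fill(data, valid_mask, wavelet='db2', level=3, damp=0.9, n_iter=50):
    """Fill masked samples of a 2-D field by iterative wavelet smoothing.
    `valid_mask` is True where `data` holds meaningful samples."""
    filled = np.where(valid_mask, data, data[valid_mask].mean())   # start from the mean of valid data
    for _ in range(n_iter):
        coeffs = pywt.wavedec2(filled, wavelet, level=level)
        coeffs = [coeffs[0]] + [tuple(damp * d for d in detail) for detail in coeffs[1:]]
        smooth = pywt.waverec2(coeffs, wavelet)[:data.shape[0], :data.shape[1]]
        filled = np.where(valid_mask, data, smooth)                # keep valid samples fixed
    return filled
```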

  17. A Nonlinear Inversion Approach to Map the Magnetic Basement: A Case Study from Central India Using Aeromagnetic Data

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Bansal, A. R.; Anand, S. P.; Rao, V. K.; Singh, U. K.

    2016-12-01

    The central India region has complex geology covering various geological units, e.g., the Precambrian Bastar Craton (including the Proterozoic Chhattisgarh Basin, granitic intrusions, etc.), the Eastern Ghat Mobile Belt, the Gondwana Godavari and Mahanadi Grabens, and the Late Cretaceous Deccan Traps. Central India is well covered by reconnaissance-scale aeromagnetic data. We analyzed these data to map the basement by dividing the region into 143 overlapping blocks of 100 × 100 km and applying a least-squares nonlinear inversion method for a fractal distribution of sources. The scaling exponents and depth values are optimized using a grid search method. We interpret the estimated depths of anomalous sources as the magnetic basement and shallow anomalous magnetic sources. The shallow magnetic anomalies are found to vary from 1 to 3 km, whereas magnetic basement depths vary from 2 km to 7 km. The shallowest basement depth of 2 km corresponds to the Kanker granites, a part of the Bastar Craton, whereas the deepest basement depth of 7 km is associated with the Godavari Graben and the southeastern part of the Eastern Ghat Mobile Belt near the Parvatipuram Bobbili fault. The variation of magnetic basement depth, shallow depths and scaling exponent in the region indicates complex tectonics, heterogeneity and intrusive bodies at different depths, resulting from different tectonic processes in the region. The detailed basement depth of central India is presented in this study.
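
    In spirit, the inversion fits the radially averaged power spectrum of a gridded magnetic block to a fractal-source model, ln P(k) = c - β ln k - 2kz, and searches a (depth, scaling exponent) grid for the smallest misfit. The sketch below follows that reading; the binning, the model form and the search ranges are assumptions, not the authors' exact formulation.

```python
import numpy as np

def radial_power_spectrum(grid, dx, n_bins=40):
    """Radially averaged power spectrum of a 2-D anomaly grid (k in radians per length unit)."""
    ny, nx = grid.shape
    P = np.abs(np.fft.fftshift(np.fft.fft2(grid - grid.mean()))) ** 2
    ky, kx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny, dx)),
                         np.fft.fftshift(np.fft.fftfreq(nx, dx)), indexing='ij')
    k = 2.0 * np.pi * np.hypot(kx, ky)
    edges = np.linspace(k[k > 0].min(), k.max(), n_bins + 1)
    idx = np.digitize(k.ravel(), edges)
    ks, ps = [], []
    for i in range(1, n_bins + 1):
        sel = idx == i
        if sel.any():
            ks.append(k.ravel()[sel].mean())
            ps.append(P.ravel()[sel].mean())
    return np.array(ks), np.array(ps)

def grid_search_depth(k, P, depths, exponents):
    """Least-squares grid search of ln P(k) = c - beta*ln(k) - 2*k*z over (z, beta)."""
    lnP = np.log(P)
    best = (np.nan, np.nan, np.inf)
    for z in depths:
        for beta in exponents:
            model = -beta * np.log(k) - 2.0 * k * z
            c = np.mean(lnP - model)                 # optimal constant for this (z, beta)
            misfit = np.sum((lnP - model - c) ** 2)
            if misfit < best[2]:
                best = (z, beta, misfit)
    return best[:2]                                  # (depth, scaling exponent)
```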

  18. Should Child Protection Services Respond Differently to Maltreatment, Risk of Maltreatment, and Risk of Harm?

    ERIC Educational Resources Information Center

    Fallon, Barbara; Trocme, Nico; MacLaurin, Bruce

    2011-01-01

    Objective: To examine evidence available in large-scale North American datasets on child abuse and neglect that can assist in understanding the complexities of child protection case classifications. Methods: A review of child abuse and neglect data from large North American epidemiological studies including the Canadian Incidence Study of Reported…

  19. Nonparametric Bayesian Multiple Imputation for Incomplete Categorical Variables in Large-Scale Assessment Surveys

    ERIC Educational Resources Information Center

    Si, Yajuan; Reiter, Jerome P.

    2013-01-01

    In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…

  20. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.

  1. Managing Rater Effects through the Use of FACETS Analysis: The Case of a University Placement Test

    ERIC Educational Resources Information Center

    Wu, Siew Mei; Tan, Susan

    2016-01-01

    Rating essays is a complex task where students' grades could be adversely affected by test-irrelevant factors such as rater characteristics and rating scales. Understanding these factors and controlling their effects are crucial for test validity. Rater behaviour has been extensively studied through qualitative methods such as questionnaires and…

  2. Next-generation simulation and optimization platform for forest management and analysis

    Treesearch

    Antti Makinen; Jouni Kalliovirta; Jussi Rasinmaki

    2009-01-01

    Late developments in the objectives and the data collection methods of forestry create new challenges and possibilities in forest management planning. Tools in forest management and forest planning systems must be able to make good use of novel data sources, use new models, and solve complex forest planning tasks at different scales. The SIMulation and Optimization (...

  3. Nevada Photo-Based Inventory Pilot (NPIP) resource estimates (2004-2005)

    Treesearch

    Tracey S. Frescino; Gretchen G. Moisen; Paul L. Patterson; Elizabeth A. Freeman; James Menlove

    2016-01-01

    The complex nature of broad-scale, strategic-level inventories, such as the Forest Inventory and Analysis program (FIA) of the USDA Forest Service, demands constant evolution and evaluation of methods to get the best information possible while continuously increasing efficiency. The State of Nevada is predominantly comprised of nonforested Federal lands with a small...

  4. Complex Network Simulation of Forest Network Spatial Pattern in Pearl River Delta

    NASA Astrophysics Data System (ADS)

    Zeng, Y.

    2017-09-01

    Forest network construction uses methods and models with the scale-free features of complex network theory, which is based on random graph theory and on dynamic network nodes that exhibit a power-law degree distribution. The model is suited to the consistent recovery of the Pearl River Delta, a large ecological landscape subject to ecological disturbance. The latest forest patches are available from remote sensing and GIS spatial data. A standard scale-free node-distribution model is used to estimate the power-law parameter of the forest patch area distribution, and the existing forest polygons, defined as network nodes, are used to compute the decay exponent of the network's degree distribution. The forest network parameters are then transferred to real-world GIS models, and connections between nearby nodes are generated automatically as least-cost ecological corridors. Based on the scale-free node-distribution requirements, a small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this approach, forest ecological projects avoid the fragmented and scattered patterns of the past, and the planting costs required by previously regular forest networks can be reduced. For the ecological restoration of tropical and subtropical areas in south China, the method provides guidance and a demonstration for forest-into-city projects, and a standard baseline for networking with other ecological networks (water, climate networks, etc.).

  5. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
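
    Of the neighbouring interpolators compared above, inverse distance weighting is the simplest to state; a compact sketch (without the regression co-predictors or the step-wise GWR coupling) is given below, with the power parameter as an assumption.

```python
import numpy as np

def idw(stations_xy, values, grid_xy, power=2.0, eps=1e-10):
    """Inverse distance weighted interpolation of station values onto grid points.
    stations_xy: (n_stations, 2), values: (n_stations,), grid_xy: (n_points, 2)."""
    d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power            # collocated points receive a very large weight
    return (w @ values) / w.sum(axis=1)
```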

  6. Detecting transitions in protein dynamics using a recurrence quantification analysis based bootstrap method.

    PubMed

    Karain, Wael I

    2017-11-28

    Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques such as principal component analysis are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that has the ability to detect transitions in dynamics based on the recurrences in the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns long molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis based bootstrap technique is able to detect transitions between different dynamics states for a protein over different time scales. It is not limited to linear dynamics regimes, and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming for this method is the need to have large enough time windows to insure good statistical quality for the recurrence complexity measures needed to detect the transitions.
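
    The recurrence quantification part of the procedure can be sketched as follows: embed the series with time delays, threshold the pairwise distances into a recurrence matrix, and summarize it with complexity measures such as recurrence rate and determinism. The embedding parameters and threshold are assumptions, and the bootstrap test over sliding windows is omitted.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, radius=0.1):
    """Recurrence matrix of a time-delay-embedded scalar series with a fixed threshold."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return (d <= radius).astype(int)

def rqa_measures(R, lmin=2):
    """Recurrence rate and determinism (share of recurrent points on diagonal lines >= lmin)."""
    n = R.shape[0]
    recurrence_rate = R.sum() / float(n * n)
    lengths = {}
    for offset in range(1, n):                        # upper off-diagonals
        run = 0
        for v in np.append(np.diagonal(R, offset=offset), 0):
            if v:
                run += 1
            elif run:
                lengths[run] = lengths.get(run, 0) + 1
                run = 0
    total = sum(l * c for l, c in lengths.items())
    det = sum(l * c for l, c in lengths.items() if l >= lmin) / max(total, 1)
    return recurrence_rate, det
```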

  7. The Robin Hood method - A novel numerical method for electrostatic problems based on a non-local charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje

    2006-03-20

    We introduce a novel numerical method, named the Robin Hood method, of solving electrostatic problems. The approach of the method is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each of the conducting surfaces, non-local charge transfers are performed between surface elements, which differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For the problem with N surface elements, the computational complexity of the method essentially scales with N^α, where α < 2, the required computer memory scales with N, while the error of the potential decreases exponentially with the number of iterations for many orders of magnitude of the error, without the presence of the Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some future development ideas for possible applications outside electrostatics are addressed.
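
    A toy version of the iteration for a single conductor carrying a fixed total charge is sketched below: compute element potentials from the pairwise Coulomb kernel, then transfer charge between the elements whose potentials differ the most until the surface is (numerically) equipotential. The self-influence estimate and the unit convention are assumptions of this sketch, not the published formulation.

```python
import numpy as np

def robin_hood(centers, areas, total_charge=1.0, n_iter=20000, tol=1e-9):
    """Iterative non-local charge transfer toward an equipotential surface."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    np.fill_diagonal(d, 0.282 * np.sqrt(areas))       # crude effective self-distance (assumption)
    A = 1.0 / d                                       # Coulomb kernel in units with 4*pi*eps0 = 1
    q = np.full(n, total_charge / n)
    for _ in range(n_iter):
        phi = A @ q
        i, j = int(np.argmax(phi)), int(np.argmin(phi))
        if phi[i] - phi[j] < tol:
            break
        dq = (phi[i] - phi[j]) / (A[i, i] - A[i, j] - A[j, i] + A[j, j])
        q[i] -= dq                                    # move charge from the "rich" element...
        q[j] += dq                                    # ...to the "poor" one; total charge is conserved
    return q

# 10 x 10 elements of a unit square plate
xs = (np.arange(10) + 0.5) / 10.0
centers = np.array([[x, y, 0.0] for x in xs for y in xs])
q = robin_hood(centers, np.full(100, 0.01))
print(q.reshape(10, 10).round(4))                     # charge accumulates at edges and corners
```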

  8. Approximate l-fold cross-validation with Least Squares SVM and Kernel Ridge Regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, Richard E; Zhang, Hao; Parker, Lynne Edwards

    2013-01-01

    Kernel methods have difficulties scaling to large modern data sets. The scalability issues are based on computational and memory requirements for working with a large matrix. These requirements have been addressed over the years by using low-rank kernel approximations or by improving the solvers' scalability. However, Least Squares Support Vector Machines (LS-SVM), a popular SVM variant, and Kernel Ridge Regression still have several scalability issues. In particular, the O(n^3) computational complexity for solving a single model, and the overall computational complexity associated with tuning hyperparameters, are still major problems. We address these problems by introducing an O(n log n) approximate l-fold cross-validation method that uses a multi-level circulant matrix to approximate the kernel. In addition, we prove our algorithm's computational complexity and present empirical runtimes on data sets with approximately 1 million data points. We also validate our approximate method's effectiveness at selecting hyperparameters on real world and standard benchmark data sets. Lastly, we provide experimental results on using a multi-level circulant kernel approximation to solve LS-SVM problems with hyperparameters selected using our method.
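
    For contrast with the approximation, exact l-fold cross-validation for RBF kernel ridge regression takes only a few lines with standard tools; this baseline has the cubic per-fold cost discussed above, and the hyperparameter grid is an illustrative assumption.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import KFold

def cv_error(X, y, alpha, gamma, l=10):
    """Mean squared error of exact l-fold cross-validation for RBF kernel ridge regression."""
    errs = []
    for train, test in KFold(n_splits=l, shuffle=True, random_state=0).split(X):
        model = KernelRidge(alpha=alpha, kernel='rbf', gamma=gamma).fit(X[train], y[train])
        errs.append(np.mean((model.predict(X[test]) - y[test]) ** 2))
    return float(np.mean(errs))

def select_hyperparameters(X, y, alphas, gammas, l=10):
    """Pick (alpha, gamma) minimizing the l-fold cross-validation error over a grid."""
    return min(((a, g) for a in alphas for g in gammas),
               key=lambda ag: cv_error(X, y, ag[0], ag[1], l=l))
```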

  9. A method of extracting ontology module using concept relations for sharing knowledge in mobile cloud computing environment.

    PubMed

    Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won

    2014-01-01

    In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all members employed in the cooperation group need to share knowledge for mutual understanding. Even though ontology can be the right tool for this goal, there are several issues in building a suitable ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts the ontology module that is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge.

  10. Deciphering the complex: methodological overview of statistical models to derive OMICS-based biomarkers.

    PubMed

    Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H

    2013-08-01

    Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
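
    As one concrete instance of the first class of approaches mentioned above (univariate tests with multiple testing correction), the sketch below implements Benjamini-Hochberg false discovery rate adjustment of per-feature p-values; it is given as an assumed representative example, not a method singled out by the review.

      import numpy as np

      def benjamini_hochberg(pvals):
          """Return BH-adjusted p-values (q-values) in the original order."""
          p = np.asarray(pvals, dtype=float)
          m = p.size
          order = np.argsort(p)
          scaled = p[order] * m / np.arange(1, m + 1)
          # enforce monotonicity, working back from the largest p-value
          scaled = np.minimum.accumulate(scaled[::-1])[::-1]
          q = np.empty_like(p)
          q[order] = np.clip(scaled, 0.0, 1.0)
          return q

      pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.500, 0.900]
      print(benjamini_hochberg(pvals))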

  11. Intercomparison of 3D pore-scale flow and solute transport simulation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.

    2016-09-01

    Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that 1) explicitly model the three-dimensional geometry of pore spaces and 2) those that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries on solute transport in the manner of PNMs has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).

  12. Reconstructing Information in Large-Scale Structure via Logarithmic Mapping

    NASA Astrophysics Data System (ADS)

    Szapudi, Istvan

    We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking, the inverse error bar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear, or non-linear, deterministic, or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including DETF Figure of merit when applicable) the efficiency of our estimators, comparing with the conventional method that uses the un-transformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.
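
    A toy numerical illustration of the central non-linear transformation, under synthetic assumptions (a lognormal stand-in for the evolved density field; no survey geometry, discreteness, bias, or redshift distortions): the field is mapped through log(1 + delta) before its power spectrum is measured.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 256
      g = rng.standard_normal((n, n))

      # crude large-scale correlations: smooth the Gaussian field in Fourier space
      kx = np.fft.fftfreq(n)[:, None]
      ky = np.fft.fftfreq(n)[None, :]
      window = np.exp(-(kx**2 + ky**2) / (2 * 0.02**2))
      g = np.fft.ifft2(np.fft.fft2(g) * window).real
      g = (g - g.mean()) / g.std()

      delta = np.expm1(g - 0.5)          # lognormal-like overdensity, always > -1
      logfield = np.log1p(delta)         # the Gaussianizing log transform

      def radial_power(field):
          pk2d = np.abs(np.fft.fft2(field)) ** 2
          k = np.sqrt(kx**2 + ky**2)
          bins = np.linspace(0.0, 0.5, 20)
          idx = np.digitize(k.ravel(), bins)
          return np.array([pk2d.ravel()[idx == i].mean() for i in range(1, len(bins))])

      print(radial_power(delta)[:5])     # power spectrum of the raw field
      print(radial_power(logfield)[:5])  # power spectrum after the log mapping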

  13. Development and validation of panoptic Meso scale discovery assay to quantify total systemic interleukin-6

    PubMed Central

    Chaturvedi, Shalini; Siegel, Derick; Wagner, Carrie L; Park, Jaehong; van de Velde, Helgi; Vermeulen, Jessica; Fung, Man-Cheong; Reddy, Manjula; Hall, Brett; Sasser, Kate

    2015-01-01

    Aim Interleukin-6 (IL-6), a multifunctional cytokine, exists in several forms ranging from a low molecular weight (MW 20–30 kDa) non-complexed form to high MW (200–450 kDa) complexes. Accurate baseline IL-6 assessment is pivotal to understand clinical responses to IL-6-targeted treatments. Existing assays measure only the low MW, non-complexed IL-6 form. The present work aimed to develop a validated assay to measure accurately total IL-6 (complexed and non-complexed) in serum or plasma as matrix in a high throughput and easily standardized format for clinical testing. Methods Commercial capture and detection antibodies were screened against humanized IL-6 and evaluated in an enzyme-linked immunosorbent assay format. The best antibody combinations were screened to identify an antibody pair that gave minimum background and maximum recovery of IL-6 in the presence of 100% serum matrix. A plate-based total IL-6 assay was developed and transferred to the Meso Scale Discovery (MSD) platform for large scale clinical testing. Results The top-performing antibody pair from 36 capture and four detection candidates was validated on the MSD platform. The lower limit of quantification in human serum samples (n = 6) was 9.77 pg l–1; recovery ranged from 93.13% to 113.27%; the overall pooled coefficients of variation were 20.12% (inter-assay) and 8.67% (intra-assay). High MW forms of IL-6, in size fractionated serum samples from myelodysplastic syndrome and rheumatoid arthritis patients, were detected by the assay but not by a commercial kit. Conclusion This novel panoptic (sees all forms) IL-6 MSD assay that measures both high and low MW forms may have clinical utility. PMID:25847183

  14. Kinetic pathway of 40S ribosomal subunit recruitment to hepatitis C virus internal ribosome entry site.

    PubMed

    Fuchs, Gabriele; Petrov, Alexey N; Marceau, Caleb D; Popov, Lauren M; Chen, Jin; O'Leary, Seán E; Wang, Richard; Carette, Jan E; Sarnow, Peter; Puglisi, Joseph D

    2015-01-13

    Translation initiation can occur by multiple pathways. To delineate these pathways by single-molecule methods, fluorescently labeled ribosomal subunits are required. Here, we labeled human 40S ribosomal subunits with a fluorescent SNAP-tag at ribosomal protein eS25 (RPS25). The resulting ribosomal subunits could be specifically labeled in living cells and in vitro. Using single-molecule Förster resonance energy transfer (FRET) between RPS25 and domain II of the hepatitis C virus (HCV) internal ribosome entry site (IRES), we measured the rates of 40S subunit arrival to the HCV IRES. Our data support a single-step model of HCV IRES recruitment to 40S subunits, irreversible on the initiation time scale. We furthermore demonstrated that after binding, the 40S:HCV IRES complex is conformationally dynamic, undergoing slow large-scale rearrangements. Addition of translation extracts suppresses these fluctuations, funneling the complex into a single conformation on the 80S assembly pathway. These findings show that 40S:HCV IRES complex formation is accompanied by dynamic conformational rearrangements that may be modulated by initiation factors.

  15. Large area sub-micron chemical imaging of magnesium in sea urchin teeth.

    PubMed

    Masic, Admir; Weaver, James C

    2015-03-01

    The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Sequencing the Black Aspergilli species complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuo, Alan; Salamov, Asaf; Zhou, Kemin

    2011-03-11

    The ~15 members of the Aspergillus section Nigri species complex (the "Black Aspergilli") are significant as platforms for bioenergy and bioindustrial technology, as members of soil microbial communities and players in the global carbon cycle, and as food processing and spoilage agents and agricultural toxigens. Despite their utility and ubiquity, the morphological and metabolic distinctiveness of the complex's members, and thus their taxonomy, is poorly defined. We are using short read pyrosequencing technology (Roche/454 and Illumina/Solexa) to rapidly scale up genomic and transcriptomic analysis of this species complex. To date we predict 11197 genes in Aspergillus niger, 11624 genes in A. carbonarius, and 10845 genes in A. aculeatus. A. aculeatus is our most recent genome, and was assembled primarily from 454-sequenced reads and annotated with the aid of >2 million 454 ESTs and >300 million Solexa ESTs. To most effectively deploy these very large numbers of ESTs we developed 2 novel methods for clustering the ESTs into assemblies. We have also developed a pipeline to propose orthologies and paralogies among genes in the species complex. In the near future we will apply these methods to additional species of Black Aspergilli that are currently in our sequencing pipeline.

  17. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second one improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (a scale-free network, a random network, and a small-world network) and on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
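
    The sketch below shows generic degree-weighted node sampling on a synthetic scale-free network; it conveys the shared idea of favouring high-degree nodes but is not the paper's exact modified SRS or SBS procedure, and all parameter values are illustrative.

      import numpy as np
      import networkx as nx

      def degree_weighted_sample(G, rate, seed=0):
          """Sample a fraction `rate` of nodes with probability proportional to degree."""
          rng = np.random.default_rng(seed)
          nodes = np.array(G.nodes())
          deg = np.array([G.degree(v) for v in nodes], dtype=float)
          probs = deg / deg.sum()                      # favour high-degree nodes
          k = max(1, int(rate * len(nodes)))
          return rng.choice(nodes, size=k, replace=False, p=probs)

      G = nx.barabasi_albert_graph(2000, 3, seed=1)    # scale-free test network
      sample = degree_weighted_sample(G, rate=0.05)
      sub = G.subgraph(sample)
      # compare a structural characteristic of the full network and the sample
      print(len(sub), nx.average_clustering(G), nx.average_clustering(sub))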

  18. Complexity of Continuous Glucose Monitoring Data in Critically Ill Patients: Continuous Glucose Monitoring Devices, Sensor Locations, and Detrended Fluctuation Analysis Methods

    PubMed Central

    Signal, Matthew; Thomas, Felicity; Shaw, Geoffrey M.; Chase, J. Geoffrey

    2013-01-01

    Background Critically ill patients often experience high levels of insulin resistance and stress-induced hyperglycemia, which may negatively impact outcomes. However, evidence surrounding the causes of negative outcomes remains inconclusive. Continuous glucose monitoring (CGM) devices allow researchers to investigate glucose complexity, using detrended fluctuation analysis (DFA), to determine whether it is associated with negative outcomes. The aim of this study was to investigate the effects of CGM device type/calibration and CGM sensor location on results from DFA. Methods This study uses CGM data from critically ill patients who were each monitored concurrently using Medtronic iPro2s on the thigh and abdomen and a Medtronic Guardian REAL-Time on the abdomen. This allowed interdevice/calibration type and intersensor site variation to be assessed. Detrended fluctuation analysis is a technique that has previously been used to determine the complexity of CGM data in critically ill patients. Two variants of DFA, monofractal and multifractal, were used to assess the complexity of sensor glucose data as well as the precalibration raw sensor current. Monofractal DFA produces a scaling exponent (H), where H is inversely related to complexity. The results of multifractal DFA are presented graphically by the multifractal spectrum. Results From the 10 patients recruited, 26 CGM devices produced data suitable for analysis. The values of H from abdominal iPro2 data were 0.10 (0.03–0.20) higher than those from Guardian REAL-Time data, indicating consistently lower complexities in iPro2 data. However, repeating the analysis on the raw sensor current showed little or no difference in complexity. Sensor site had little effect on the scaling exponents in this data set. Finally, multifractal DFA revealed no significant associations between the multifractal spectrums and CGM device type/calibration or sensor location. Conclusions Monofractal DFA results are dependent on the device/calibration used to obtain CGM data, but sensor location has little impact. Future studies of glucose complexity should consider the findings presented here when designing their investigations. PMID:24351175
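
    A minimal monofractal DFA sketch (first-order detrending; a synthetic white-noise signal stands in for CGM data and the window sizes are illustrative): the scaling exponent is the slope of log F(n) against log n, where F(n) is the RMS fluctuation of the integrated, window-wise detrended signal.

      import numpy as np

      def dfa_exponent(x, window_sizes):
          y = np.cumsum(x - np.mean(x))                    # integrated profile
          F = []
          for n in window_sizes:
              m = len(y) // n
              segs = y[:m * n].reshape(m, n)
              t = np.arange(n)
              sq = []
              for seg in segs:
                  coef = np.polyfit(t, seg, 1)             # linear local trend
                  sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(sq)))               # RMS fluctuation at scale n
          slope, _ = np.polyfit(np.log(window_sizes), np.log(F), 1)
          return slope

      rng = np.random.default_rng(0)
      white = rng.standard_normal(5000)                    # exponent ~0.5 expected
      print(dfa_exponent(white, [8, 16, 32, 64, 128, 256]))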

  19. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    DOE PAGES

    Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; ...

    2016-04-20

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.

  20. Fabrication of the replica templated from butterfly wing scales with complex light trapping structures

    NASA Astrophysics Data System (ADS)

    Han, Zhiwu; Li, Bo; Mu, Zhengzhi; Yang, Meng; Niu, Shichao; Zhang, Junqiu; Ren, Luquan

    2015-11-01

    The polydimethylsiloxane (PDMS) positive replica, templated twice from the excellent light trapping surface of butterfly Trogonoptera brookiana wing scales, was fabricated by a simple and promising route. The exact SiO2 negative replica was fabricated by using a synthesis method combining a sol-gel process and subsequent selective etching. Afterwards, a vacuum-aided process was introduced to make PDMS gel fill into the SiO2 negative replica, and the PDMS gel was solidified in an oven. Then, the SiO2 negative replica was used as a secondary template and the structures on its surface were transcribed onto the surface of the PDMS. At last, the PDMS positive replica was obtained. After comparing the PDMS positive replica and the original bio-template in terms of morphology, dimensions and reflectance spectra, it is evident that the excellent light trapping structures of butterfly wing scales were inherited faithfully by the PDMS positive replica. This bio-inspired route could facilitate the preparation of complex light trapping nanostructure surfaces without any assistance from other power-wasting and expensive nanofabrication technologies.

  1. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009 ) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.

  2. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  3. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    NASA Astrophysics Data System (ADS)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

    Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality due likely to multi-scale aquifer heterogeneity and controlling factors, whose statistics requires efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuation at three wells located in the lower Mississippi valley, after removing the seasonal cycle in the temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying the fractal-scaling behavior changing with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrology dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can well depict the transient dynamics (i.e., fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification to understand further the nature of complex dynamics in hydrology.

  4. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    PubMed

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places greater demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral count, protein sequence length, shared peptides, and ion intensity. It adopts spectral counts for quantitative analysis and builds a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification with better dynamic range.
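
    As a hedged illustration of spectral-count-based quantification, the sketch below computes a length-normalised spectral abundance factor (NSAF-style); this is a common textbook scheme, not freeQuant's own shared-peptide or ion-intensity handling, and the counts are invented.

      import numpy as np

      def nsaf(spectral_counts, lengths):
          """Length-normalised spectral abundance factors (relative abundances summing to 1)."""
          saf = np.asarray(spectral_counts, dtype=float) / np.asarray(lengths, dtype=float)
          return saf / saf.sum()

      counts = [120, 45, 3]            # MS/MS spectra matched to each protein (toy values)
      lengths = [450, 220, 90]         # protein sequence lengths in residues (toy values)
      print(nsaf(counts, lengths))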

  5. Hybrid 3D-2D printing methods for bone scaffolds fabrication.

    PubMed

    Prinz, V Ya; Seleznev, Vladimir

    2016-12-13

    It is a well-known fact that bone scaffold topography on micro- and nanometer scale influences the cellular behavior. Nano-scale surface modification of scaffolds allows the modulation of biological activity for enhanced cell differentiation. To date, there has been only a limited success in printing scaffolds with micro- and nano-scale features exposed on the surface. To improve on the currently available imperfect technologies, in our paper we introduce new hybrid technologies based on a combination of 2D (nano imprint) and 3D printing methods. The first method is based on using light projection 3D printing and simultaneous 2D nanostructuring of each of the layers during the formation of the 3D structure. The second method is based on the sequential integration of preliminarily created 2D nanostructured films into a 3D printed structure. The capabilities of the developed hybrid technologies are demonstrated with the example of forming 3D bone scaffolds. The proposed technologies can be used to fabricate complex 3D micro- and nanostructured products for various fields. Copyright 2016 IOP Publishing Ltd.

  6. Lumen-based detection of prostate cancer via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.

    2017-03-01

    We present a deep learning approach for detecting prostate cancers. The approach consists of two steps. In the first step, we perform tissue segmentation that identifies lumens within digitized prostate tissue specimen images. Intensity- and texture-based image features are computed at five different scales, and a multiview boosting method is adopted to cooperatively combine the image features from differing scales and to identify lumens. In the second step, we utilize convolutional neural networks (CNN) to automatically extract high-level image features of lumens and to predict cancers. The segmented lumens are rescaled to reduce computational complexity and data augmentation by scaling, rotating, and flipping the rescaled image is applied to avoid overfitting. We evaluate the proposed method using two tissue microarrays (TMA) - TMA1 includes 162 tissue specimens (73 Benign and 89 Cancer) and TMA2 comprises 185 tissue specimens (70 Benign and 115 Cancer). In cross-validation on TMA1, the proposed method achieved an AUC of 0.95 (CI: 0.93-0.98). Trained on TMA1 and tested on TMA2, CNN obtained an AUC of 0.95 (CI: 0.92-0.98). This demonstrates that the proposed method can potentially improve prostate cancer pathology.
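
    A brief sketch of the scale/rotate/flip augmentation step described above; the parameter ranges are illustrative assumptions, since the paper does not state the exact values used.

      import numpy as np
      from scipy import ndimage

      def augment(patch, rng):
          out = ndimage.zoom(patch, rng.uniform(0.9, 1.1))                       # random rescale
          out = ndimage.rotate(out, rng.uniform(0.0, 360.0), reshape=False,
                               mode="nearest")                                   # random rotation
          if rng.random() < 0.5:
              out = np.fliplr(out)                                               # horizontal flip
          if rng.random() < 0.5:
              out = np.flipud(out)                                               # vertical flip
          return out

      rng = np.random.default_rng(0)
      patch = rng.random((64, 64))                 # stand-in for a rescaled lumen image
      augmented = [augment(patch, rng) for _ in range(8)]
      print([a.shape for a in augmented])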

  7. Applications of species accumulation curves in large-scale biological data analysis.

    PubMed

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2015-09-01

    The species accumulation curve, or collector's curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45-63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible.
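
    For orientation, the sketch below evaluates the classical Good-Toulmin alternating series on a synthetic sample; the paper's contribution is a rational-function approximation that stabilises this series for large extrapolations, which is not reproduced here, and all data are invented.

      import numpy as np
      from collections import Counter

      def good_toulmin(counts, t):
          """Expected number of NEW species if sampling effort grows by a factor (1 + t).

          counts: observed count for each distinct species; reliable roughly for t <= 1,
          the series diverges for larger t (the problem the paper addresses).
          """
          freq = Counter(counts)                  # n_j = number of species seen exactly j times
          return sum(((-1) ** (j + 1)) * (t ** j) * nj for j, nj in freq.items())

      rng = np.random.default_rng(0)
      population = rng.zipf(1.7, size=200000) % 5000     # skewed toy population of labels
      sample = rng.choice(population, size=5000)
      counts = list(Counter(sample).values())
      print("distinct observed:", len(counts))
      print("expected new species at t = 1:", good_toulmin(counts, 1.0))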

  8. Applications of species accumulation curves in large-scale biological data analysis

    PubMed Central

    Deng, Chao; Daley, Timothy; Smith, Andrew D

    2016-01-01

    The species accumulation curve, or collector’s curve, of a population gives the expected number of observed species or distinct classes as a function of sampling effort. Species accumulation curves allow researchers to assess and compare diversity across populations or to evaluate the benefits of additional sampling. Traditional applications have focused on ecological populations but emerging large-scale applications, for example in DNA sequencing, are orders of magnitude larger and present new challenges. We developed a method to estimate accumulation curves for predicting the complexity of DNA sequencing libraries. This method uses rational function approximations to a classical non-parametric empirical Bayes estimator due to Good and Toulmin [Biometrika, 1956, 43, 45–63]. Here we demonstrate how the same approach can be highly effective in other large-scale applications involving biological data sets. These include estimating microbial species richness, immune repertoire size, and k-mer diversity for genome assembly applications. We show how the method can be modified to address populations containing an effectively infinite number of species where saturation cannot practically be attained. We also introduce a flexible suite of tools implemented as an R package that make these methods broadly accessible. PMID:27252899

  9. A Fictitious Domain Method for Resolving the Interaction of Blood Flow with Clot Growth

    NASA Astrophysics Data System (ADS)

    Mukherjee, Debanjan; Shadden, Shawn

    2016-11-01

    Thrombosis and thrombo-embolism cause a range of diseases including heart attack and stroke. Closer understanding of clot and blood flow mechanics provides valuable insights on the etiology, diagnosis, and treatment of thrombotic diseases. Such mechanics are complicated, however, by the discrete and multi-scale phenomena underlying thrombosis, and the complex interactions of unsteady, pulsatile hemodynamics with a clot of arbitrary shape and microstructure. We have developed a computational technique, based on a fictitious domain based finite element method, to study these interactions. The method can resolve arbitrary clot geometries, and dynamically couple fluid flow with static or growing clot boundaries. Macroscopic thrombus-hemodynamics interactions were investigated within idealized vessel geometries representative of the common carotid artery, with realistic unsteady flow profiles as inputs. The method was also employed successfully to resolve micro-scale interactions using a model driven by in-vivo morphology data. The results provide insights into the flow structures and hemodynamic loading around an arbitrarily grown clot at arterial length-scales, as well as flow and transport within the interstices of platelet aggregates composing the clot. The work was supported by AHA Award No: 16POST27500023.

  10. Hybrid 3D-2D printing for bone scaffolds fabrication

    NASA Astrophysics Data System (ADS)

    Seleznev, V. A.; Prinz, V. Ya

    2017-02-01

    It is a well-known fact that bone scaffold topography on micro- and nanometer scale influences the cellular behavior. Nano-scale surface modification of scaffolds allows the modulation of biological activity for enhanced cell differentiation. To date, there has been only a limited success in printing scaffolds with micro- and nano-scale features exposed on the surface. To improve on the currently available imperfect technologies, in our paper we introduce new hybrid technologies based on a combination of 2D (nano imprint) and 3D printing methods. The first method is based on using light projection 3D printing and simultaneous 2D nanostructuring of each of the layers during the formation of the 3D structure. The second method is based on the sequential integration of preliminarily created 2D nanostructured films into a 3D printed structure. The capabilities of the developed hybrid technologies are demonstrated with the example of forming 3D bone scaffolds. The proposed technologies can be used to fabricate complex 3D micro- and nanostructured products for various fields.

  11. Voltage collapse in complex power grids

    PubMed Central

    Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco

    2016-01-01

    A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are dominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284

  12. Complex Ion Dynamics in Carbonate Lithium-Ion Battery Electrolytes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ong, Mitchell T.; Bhatia, Harsh; Gyulassy, Attila G.

    Li-ion battery performance is strongly influenced by ionic conductivity, which depends on the mobility of the Li ions in solution, and is related to their solvation structure. In this work, we have performed first-principles molecular dynamics (FPMD) simulations of a LiPF6 salt solvated in different Li-ion battery organic electrolytes. We employ an analytical method using relative angles from successive time intervals to characterize complex ionic motion in multiple dimensions from our FPMD simulations. We find different characteristics of ionic motion on different time scales. We find that the Li ion exhibits a strong caging effect due to its strong solvation structure, while the counterion, PF6–, undergoes more Brownian-like motion. Lastly, our results show that ionic motion can be far from purely diffusive and provide a quantitative characterization of the microscopic motion of ions over different time scales.
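
    A generic sketch of the relative-angle idea, using a synthetic random walk in place of FPMD trajectories (lag values and names are illustrative): for a chosen time interval, the angle between successive displacement vectors separates caged, back-and-forth motion (angles clustered near 180 degrees) from Brownian-like motion (a flat angle distribution).

      import numpy as np

      def relative_angles(traj, lag):
          """Angles (degrees) between successive displacement vectors taken over `lag` steps."""
          disp = traj[lag:] - traj[:-lag]
          v1, v2 = disp[:-lag], disp[lag:]
          cosang = np.sum(v1 * v2, axis=1) / (np.linalg.norm(v1, axis=1)
                                              * np.linalg.norm(v2, axis=1))
          return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

      rng = np.random.default_rng(0)
      traj = np.cumsum(rng.standard_normal((10000, 3)), axis=0)   # 3D random walk
      for lag in (1, 10, 100):
          print(lag, relative_angles(traj, lag).mean())           # ~90 degrees for pure diffusion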

  13. Complex Ion Dynamics in Carbonate Lithium-Ion Battery Electrolytes

    DOE PAGES

    Ong, Mitchell T.; Bhatia, Harsh; Gyulassy, Attila G.; ...

    2017-03-06

    Li-ion battery performance is strongly influenced by ionic conductivity, which depends on the mobility of the Li ions in solution, and is related to their solvation structure. In this work, we have performed first-principles molecular dynamics (FPMD) simulations of a LiPF6 salt solvated in different Li-ion battery organic electrolytes. We employ an analytical method using relative angles from successive time intervals to characterize complex ionic motion in multiple dimensions from our FPMD simulations. We find different characteristics of ionic motion on different time scales. We find that the Li ion exhibits a strong caging effect due to its strong solvation structure, while the counterion, PF6–, undergoes more Brownian-like motion. Lastly, our results show that ionic motion can be far from purely diffusive and provide a quantitative characterization of the microscopic motion of ions over different time scales.

  14. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  15. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  16. Mapping wildland fuels for fire management across multiple scales: integrating remote sensing, GIS, and biophysical modeling

    USGS Publications Warehouse

    Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.

    2001-01-01

    Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.

  17. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  18. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  19. Wetland mapping from digitized aerial photography. [Sheboygen Marsh, Sheboygen County, Wisconsin]

    NASA Technical Reports Server (NTRS)

    Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.

    1981-01-01

    Computer assisted interpretation of small scale aerial imagery was found to be a cost effective and accurate method of mapping complex vegetation patterns if high resolution information is desired. This type of technique is suited for problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of being in a computer compatible form which can be transformed into any georeference system of interest.

  20. Observer-based monitoring of heat exchangers.

    PubMed

    Astorga-Zaragoza, Carlos-Manuel; Alvarado-Martínez, Víctor-Manuel; Zavala-Río, Arturo; Méndez-Ocaña, Rafael-Maxim; Guerrero-Ramírez, Gerardo-Vicente

    2008-01-01

    The goal of this work is to provide a method for monitoring performance degradation in counter-flow double-pipe heat exchangers. The overall heat transfer coefficient is estimated by an adaptive observer and monitored in order to infer when the heat exchanger needs preventive or corrective maintenance. A simplified mathematical model is used to synthesize the adaptive observer and a more complex model is used for simulation. The reliability of the proposed method was demonstrated via numerical simulations and laboratory experiments with a bench-scale pilot plant.
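
    A minimal sketch of one way such an adaptive observer can be set up, for a lumped single-state heat-exchanger model; the model structure, gains and numerical values are illustrative assumptions, not the observer or plant of the paper. The unknown overall heat-transfer coefficient enters the heat-exchange term, and a gradient law updates its estimate from the output error.

      import numpy as np

      dt, a, b = 0.01, 0.5, 0.02
      U_true = 800.0                          # "plant" overall heat-transfer coefficient
      Tc_in, Th = 20.0, 90.0                  # cold-side inlet and hot-side temperatures

      Tc = 20.0                               # plant cold-side outlet temperature (measured)
      Tc_hat, U_hat = 20.0, 300.0             # observer state and parameter estimate
      L, gamma = 2.0, 50.0                    # observer and adaptation gains

      for _ in range(200000):
          # plant model acting as the measurement source
          Tc += dt * (a * (Tc_in - Tc) + U_true * b * (Th - Tc))
          # adaptive observer driven by the measured outlet temperature
          err = Tc - Tc_hat
          Tc_hat += dt * (a * (Tc_in - Tc_hat) + U_hat * b * (Th - Tc_hat) + L * err)
          U_hat += dt * gamma * b * (Th - Tc_hat) * err   # gradient adaptation law

      print(U_true, round(U_hat, 1))          # the estimate approaches the plant value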
