Sample records for complex parameter space

  1. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations

    PubMed Central

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

    In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we develop a unified framework for designing an adaptive complex scalar controller that ensures this type of CVCS is asymptotically stable, and for selecting complex update laws to estimate the unknown complex parameters. In particular, by combining Lyapunov functions dependent on complex-valued vectors with the back-stepping technique, sufficient criteria for the stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, a numerical simulation is presented to validate our theoretical results. PMID:28467431

  3. Measurement of complex terahertz dielectric properties of polymers using an improved free-space technique

    NASA Astrophysics Data System (ADS)

    Chang, Tianying; Zhang, Xiansheng; Yang, Chuanfa; Sun, Zhonglin; Cui, Hong-Liang

    2017-04-01

    The complex dielectric properties of non-polar solid polymer materials were measured in the terahertz (THz) band by a free-space technique employing a frequency-extended vector network analyzer (VNA), and by THz time-domain spectroscopy (TDS). Mindful of the unique characteristics of THz waves, the free-space method for measuring material dielectric properties in the microwave band was expanded and improved for application in the THz frequency region. To ascertain the soundness and utility of the proposed method, measurements of the complex dielectric properties of a variety of polymers were carried out, including polytetrafluoroethylene (PTFE, known also by the brand name Teflon), polypropylene (PP), polyethylene (PE), and glass fiber resin (Composite Stone). The free-space method relies on the determination of electromagnetic scattering parameters (S-parameters) of the sample, with the gated-reflect-line (GRL) calibration technique commonly employed using a VNA. Subsequently, based on the S-parameters, the dielectric constant and loss characteristic of the sample were calculated using a Newtonian iterative algorithm. To verify the calculated results, the THz TDS technique, which produced Fresnel parameters such as reflection and transmission coefficients, was also used to independently determine the dielectric properties of these polymer samples, with results satisfactorily corroborating those obtained by the free-space extended microwave technique.

  4. Julia Sets in Parameter Spaces

    NASA Astrophysics Data System (ADS)

    Buff, X.; Henriksen, C.

    Given a complex number λ of modulus 1, we show that the bifurcation locus of the one-parameter family {f_b(z) = λz + bz² + z³}, b ∈ ℂ, contains quasi-conformal copies of the quadratic Julia set J(λz + z²). As a corollary, we show that when the Julia set J(λz + z²) is not locally connected (for example, when z ↦ λz + z² has a Cremer point at 0), the bifurcation locus is not locally connected. To our knowledge, this is the first example of a complex analytic parameter space of dimension 1 with connected but non-locally connected bifurcation locus. We also show that the set of complex numbers λ of modulus 1, for which at least one of the parameter rays has a non-trivial accumulation set, contains a dense G_δ subset of S¹.
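    As an illustrative aside (not part of the record above): membership in a filled Julia set such as that of λz + z² is typically probed numerically with an escape-time test. A minimal sketch, with an arbitrary choice of λ on the unit circle:

```python
import cmath

def stays_bounded(z, lam, max_iter=200, bound=4.0):
    """Escape-time test: iterate f(z) = lam*z + z**2 and report whether
    the orbit remains within `bound` for max_iter steps (a standard
    numerical proxy for membership in the filled Julia set)."""
    for _ in range(max_iter):
        z = lam * z + z * z
        if abs(z) > bound:
            return False
    return True

# lam of modulus 1, as in the family studied above (theta = 1/4 is arbitrary)
lam = cmath.exp(2j * cmath.pi * 0.25)
print(stays_bounded(0.0, lam))   # True: 0 is a fixed point of the map
print(stays_bounded(10.0, lam))  # False: distant points escape quickly
```

    Scanning such a test over a grid of starting points (or, for the bifurcation locus, over the parameter b) is how pictures of these sets are usually produced.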

  5. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme that adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784
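    An illustrative aside on the "non-intrusive" idea: the model is evaluated only at a few chosen points in parameter space, and a cheap polynomial surrogate stands in for it everywhere else. The one-dimensional sketch below (toy model and node count are our own; the paper's adaptive Smolyak scheme is the high-dimensional, adaptive generalization) uses Chebyshev interpolation:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Toy "model": a kinetic-style decay observed at t = 1 as a function of
# one rate parameter k in [0, 2]; x(1) = exp(-k).
def model(k):
    return np.exp(-k)

# Non-intrusive step: evaluate the model only at 9 Chebyshev nodes mapped to [0, 2]
nodes = C.chebpts1(9) + 1.0
coeffs = C.chebfit(nodes - 1.0, model(nodes), 8)

# The cheap surrogate now stands in for the model anywhere in parameter space
k_query = 0.7
approx = C.chebval(k_query - 1.0, coeffs)
print(abs(approx - model(k_query)))  # interpolation error is tiny
```

    For a smooth parameter dependence, such polynomial surrogates converge far faster than the 1/√N rate of Monte-Carlo sampling, which is the advantage the record alludes to.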

  7. Parrondo's games based on complex networks and the paradoxical effect.

    PubMed

    Ye, Ye; Wang, Lu; Xie, Nenggang

    2013-01-01

    Parrondo's games were first constructed using a simple tossing scenario, which demonstrates the following paradoxical situation: in sequences of games, a winning expectation may be obtained by playing the games in a random order, although each game (game A or game B) in the sequence may result in losing when played individually. Existing Parrondo's games based on the spatial niche (the neighboring environment) are applied to regular networks. The neighbors of each node are the same in regular graphs, whereas they differ in complex networks. Here, a Parrondo's model based on complex networks is proposed, and a structure of game B applicable to arbitrary topologies is constructed. The results confirm that Parrondo's paradox occurs. Moreover, the size of the region of the parameter space that elicits Parrondo's paradox depends on the heterogeneity of the degree distributions of the networks: higher heterogeneity yields a larger region of the parameter space where the strong paradox occurs. In addition, we use scale-free networks to show that the network size has no significant influence on the region of the parameter space where the strong or weak Parrondo's paradox occurs. The region of the parameter space where the strong Parrondo's paradox occurs shrinks slightly as the average degree of the network increases.
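    As an aside, the paradox in its original coin-tossing form is easy to reproduce by simulation. The sketch below uses the textbook capital-dependent game B (modulo-3 rule with a small bias ε; these are the classic parameter values, not those of the networked model in the record):

```python
import random

EPS = 0.005  # small bias; each game loses on its own

def play(game, capital, rng):
    if game == 'A':
        p = 0.5 - EPS                      # slightly losing coin
    elif capital % 3 == 0:
        p = 0.1 - EPS                      # game B, "bad" branch
    else:
        p = 0.75 - EPS                     # game B, "good" branch
    return capital + (1 if rng.random() < p else -1)

def mean_final(strategy, trials=1000, steps=500, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        cap = 0
        for _ in range(steps):
            game = strategy if strategy in ('A', 'B') else rng.choice('AB')
            cap = play(game, cap, rng)
        total += cap
    return total / trials

# Each game alone drifts down; randomly mixing them drifts up.
print(mean_final('A'), mean_final('B'), mean_final('mix'))
```

    The networked version in the record replaces the capital-modulo rule with a condition on a node's neighborhood, which is what makes the paradox region depend on degree heterogeneity.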

  8. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
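    For illustration, genetic-algorithm input selection can be sketched with binary chromosomes marking which inputs are used, and a fitness that rewards fit quality while penalizing large input sets. The toy data, penalty weight, and GA settings below are our own assumptions, not details from the record:

```python
import random
import numpy as np

rng = np.random.default_rng(0)
random.seed(0)

# Toy data: the output depends only on inputs 0 and 1; inputs 2-7 are noise.
X = rng.normal(size=(200, 8))
y = X[:, 0] + 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

def fitness(mask):
    idx = [i for i, bit in enumerate(mask) if bit]
    if not idx:
        return -1.0
    A = X[:, idx]
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    r2 = 1.0 - resid.var() / y.var()       # quality of a linear fit
    return r2 - 0.02 * len(idx)            # penalize large input sets

def evolve(pop_size=30, gens=40, n_inputs=8):
    pop = [[random.randint(0, 1) for _ in range(n_inputs)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]       # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_inputs)
            child = a[:cut] + b[cut:]       # one-point crossover
            if random.random() < 0.2:       # bit-flip mutation
                j = random.randrange(n_inputs)
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # inputs 0 and 1 should be selected
```

    In the SSME application the fitness would instead measure a trained approximator's prediction error, but the chromosome encoding and search loop are the same idea.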

  9. Modal vector estimation for closely spaced frequency modes

    NASA Technical Reports Server (NTRS)

    Craig, R. R., Jr.; Chung, Y. T.; Blair, M.

    1982-01-01

    Techniques for obtaining improved modal vector estimates for systems with closely spaced frequency modes are discussed. In describing the dynamical behavior of a complex structure, the following modal parameters are often analyzed: undamped natural frequency, mode shape, modal mass, modal stiffness and modal damping. From both analytical and experimental standpoints, identification of modal parameters is more difficult if the system has repeated, or even closely spaced, frequencies. The more complex the structure, the more likely it is to have closely spaced frequencies, which makes it difficult to determine valid mode shapes using single-shaker test methods. By employing band-selectable analysis (zoom) techniques together with Kennedy-Pancu circle fitting or some multiple-degree-of-freedom (MDOF) curve-fit procedure, the usefulness of the single-shaker approach can be extended.

  10. Quantifying networks complexity from information geometry viewpoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Felice, Domenico, E-mail: domenico.felice@unicam.it; Mancini, Stefano; INFN-Sezione di Perugia, Via A. Pascoli, I-06123 Perugia

    We consider a Gaussian statistical model whose parameter space is given by the variances of random variables. Underlying this model we identify networks by interpreting random variables as sitting on vertices and their correlations as weighted edges among vertices. We then associate to the parameter space a statistical manifold endowed with a Riemannian metric structure (that of Fisher-Rao). Then, in analogy with the microcanonical definition of entropy in statistical mechanics, we introduce an entropic measure of network complexity. We prove that it is invariant under network isomorphism. Above all, considering networks as simplicial complexes, we evaluate this entropy on simplexes and find that it monotonically increases with their dimension.

  11. Phase transitions in tumor growth: V. What can be expected from cancer glycolytic oscillations?

    NASA Astrophysics Data System (ADS)

    Martin, R. R.; Montero, S.; Silva, E.; Bizzarri, M.; Cocho, G.; Mansilla, R.; Nieto-Villar, J. M.

    2017-11-01

    Experimental evidence confirms the existence of glycolytic oscillations in cancer, which allow it to self-organize in time and space far from thermodynamic equilibrium and provide it with high robustness, complexity and adaptability. A kinetic model is proposed for HeLa tumor cells grown in hypoxia conditions; it shows oscillations over a wide range of parameters. Two control parameters (glucose and inorganic phosphate concentration) were varied to explore the phase space, revealing the presence of limit cycles and bifurcations. The complexity of the system was evaluated by focusing on stationary-state stability and Lempel-Ziv complexity. Moreover, the calculated entropy production rate was shown to behave as a Lyapunov function.
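    As an aside, the Lempel-Ziv complexity mentioned in the record is simple to compute; a minimal sketch of the LZ76 phrase-counting parse (our own illustration, not the paper's code) is:

```python
def lz76(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of s:
    each phrase is the shortest prefix of the remainder that has not
    yet appeared as a substring of the sequence read so far."""
    n, i, phrases = len(s), 0, 0
    while i < n:
        k = 1
        # grow the current phrase while it already occurs in prior history
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        phrases += 1
        i += k
    return phrases

print(lz76("0" * 40))   # constant sequence: minimal complexity
print(lz76("01" * 20))  # periodic sequence: still low
```

    Applied to a symbolized time series (e.g., oscillation maxima thresholded to 0/1), a low count indicates regular dynamics and a high count indicates irregular ones.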

  12. Exploration of DGVM Parameter Solution Space Using Simulated Annealing: Implications for Forecast Uncertainties

    NASA Astrophysics Data System (ADS)

    Wells, J. R.; Kim, J. B.

    2011-12-01

    Parameters in dynamic global vegetation models (DGVMs) are thought to be weakly constrained and can be a significant source of errors and uncertainties. DGVMs use between 5 and 26 plant functional types (PFTs) to represent the average plant life form in each simulated plot, and each PFT typically has a dozen or more parameters that define the way it uses resources and responds to the simulated growing environment. Sensitivity analysis explores how varying parameters affects the output, but does not fully explore the parameter solution space. The solution space for DGVM parameter values is thought to be complex and non-linear, and multiple sets of acceptable parameters may exist. In published studies, PFT parameters are estimated from published literature, and often a parameter value is estimated from a single published value. Further, the parameters are "tuned" using somewhat arbitrary, trial-and-error methods. BIOMAP is a new DGVM created by fusing the MAPSS biogeography model with Biome-BGC. It represents the vegetation of North America using 26 PFTs. We are using simulated annealing, a global search method, to systematically and objectively explore the solution space for the BIOMAP PFTs and the system parameters important for plant water use. We defined the boundaries of the solution space by obtaining maximum and minimum values from the published literature and, where those were not available, using +/-20% of current values. We used stratified random sampling to select a set of grid cells representing the vegetation of the conterminous USA. The simulated annealing algorithm is applied to the parameters for spin-up and a transient run during the historical period 1961-1990. A set of parameter values is considered acceptable if the associated simulation run produces a modern potential vegetation distribution map that is as accurate as one produced by trial-and-error calibration. We expect to confirm that the solution space is non-linear and complex, and that multiple acceptable parameter sets exist. Further, we expect to demonstrate that the multiple parameter sets produce significantly divergent future forecasts in NEP, C storage, ET, and runoff, thereby identifying a highly important source of DGVM uncertainty.
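    For illustration, simulated annealing over a multimodal solution surface can be sketched in a few lines. The toy objective (a 1-D Rastrigin-type function), cooling schedule, and step size below are our own assumptions, not BIOMAP details:

```python
import math
import random

def rastrigin(x):
    # 1-D multimodal test function; global minimum 0 at x = 0,
    # surrounded by many local minima near the other integers
    return x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0

def anneal(f, x0, t0=5.0, t_min=1e-3, cooling=0.999, step=0.5, seed=42):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        # always accept downhill moves; accept uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

x, v = anneal(rastrigin, x0=4.0)
print(x, v)
```

    The uphill acceptance at high "temperature" is what lets the search escape local minima, which a trial-and-error or purely greedy calibration cannot do systematically.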

  13. Development and evaluation of a predictive algorithm for telerobotic task complexity

    NASA Technical Reports Server (NTRS)

    Gernhardt, M. L.; Hunter, R. C.; Hedgecock, J. C.; Stephenson, A. G.

    1993-01-01

    There is a wide range of complexity in the various telerobotic servicing tasks performed in subsea, space, and hazardous material handling environments. Experience with telerobotic servicing has evolved into a knowledge base used to design tasks to be 'telerobot friendly.' This knowledge base generally resides in a small group of people. Written documentation and requirements are limited in conveying this knowledge base to serviceable equipment designers and are subject to misinterpretation. A mathematical model of task complexity based on measurable task parameters and telerobot performance characteristics would be a valuable tool to designers and operational planners. Oceaneering Space Systems and TRW have performed an independent research and development project to develop such a tool for telerobotic orbital replacement unit (ORU) exchange. This algorithm was developed to predict an ORU exchange degree of difficulty rating (based on the Cooper-Harper rating used to assess piloted operations). It is based on measurable parameters of the ORU, attachment receptacle and quantifiable telerobotic performance characteristics (e.g., link length, joint ranges, positional accuracy, tool lengths, number of cameras, and locations). The resulting algorithm can be used to predict task complexity as the ORU parameters, receptacle parameters, and telerobotic characteristics are varied.

  14. Various complexity measures in confined hydrogen atom

    NASA Astrophysics Data System (ADS)

    Majumdar, Sangita; Mukherjee, Neetik; Roy, Amlan K.

    2017-11-01

    Several well-known statistical measures similar to the LMC and Fisher-Shannon complexities have been computed for the confined hydrogen atom in both position (r) and momentum (p) spaces. Further, a more generalized form of these quantities based on the Rényi entropy (R) is explored here. The role of the scaling parameter in the exponential part is also pursued. R is evaluated with the orders of the entropic moments (α, β) taken as (2/3, 3) in r and p spaces. Detailed systematic results of these measures with respect to variation of the confinement radius rc are presented for low-lying states such as 1s-3d, 4f and 5g. For nodal states such as 2s, 3s and 3p, as rc increases there appears a maximum followed by a minimum in r space for certain values of the scaling parameter. However, the corresponding p-space results lack such distinct patterns. This study reveals many other interesting features.
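    As an aside, the Rényi entropy of a discrete distribution, which reduces to the Shannon entropy as the order α → 1, can be sketched directly (the distributions below are arbitrary illustrations, not hydrogenic densities):

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy R_alpha of a discrete distribution p;
    the alpha -> 1 limit recovers the Shannon entropy."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

uniform = [0.25] * 4                 # maximally spread: R = log 4 for every alpha
peaked = [0.97, 0.01, 0.01, 0.01]    # localized: much lower entropy

for a in (0.5, 2.0 / 3.0, 3.0):
    print(a, renyi_entropy(uniform, a), renyi_entropy(peaked, a))
```

    The orders 2/3 and 3 mirror the entropic-moment orders used in the record; in the paper the sums become integrals over the continuous r- and p-space densities.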

  15. Parametric Analysis of a Hover Test Vehicle using Advanced Test Generation and Data Analysis

    NASA Technical Reports Server (NTRS)

    Gundy-Burlet, Karen; Schumann, Johann; Menzies, Tim; Barrett, Tony

    2009-01-01

    Large complex aerospace systems are generally validated in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. This is due to the large parameter space, and complex, highly coupled nonlinear nature of the different systems that contribute to the performance of the aerospace system. We have addressed the factors deterring such an analysis by applying a combination of technologies to the area of flight envelope assessment. We utilize n-factor (2,3) combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. The data generated are automatically analyzed through a combination of unsupervised learning using a Bayesian multivariate clustering technique (AutoBayes) and supervised learning of critical parameter ranges using the machine-learning tool TAR3, a treatment learner. Covariance analysis with scatter plots and likelihood contours is used to visualize correlations between simulation parameters and simulation results, a task that requires tool support, especially for large and complex models. We present results of simulation experiments for a cold-gas-powered hover test vehicle.
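    For illustration, 2-factor (pairwise) combinatorial test generation can be sketched with a greedy covering heuristic. The parameter names and levels below are toy assumptions, and the candidate scan is exhaustive, so this sketch is only practical for small spaces:

```python
from itertools import combinations, product

def pairwise_suite(levels):
    """Greedy 2-factor covering set: every value pair of every two
    parameters appears in at least one generated test case."""
    n = len(levels)
    uncovered = set()
    for i, j in combinations(range(n), 2):
        for vi, vj in product(levels[i], levels[j]):
            uncovered.add((i, vi, j, vj))
    suite = []
    while uncovered:
        # pick the full-factorial candidate covering the most uncovered pairs
        best, best_gain = None, -1
        for case in product(*levels):
            gain = sum(
                (i, case[i], j, case[j]) in uncovered
                for i, j in combinations(range(n), 2)
            )
            if gain > best_gain:
                best, best_gain = case, gain
        suite.append(best)
        for i, j in combinations(range(n), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return suite

# three toy simulation parameters with 2, 3, and 2 levels
levels = [["lo", "hi"], [0, 1, 2], ["on", "off"]]
tests = pairwise_suite(levels)
print(len(tests), "cases instead of the full factorial of", 2 * 3 * 2)
```

    Pairwise coverage shrinks the case count dramatically as the number of parameters grows, which is why n-factor variation makes envelope exploration tractable.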

  16. Bottom-up modeling approach for the quantitative estimation of parameters in pathogen-host interactions

    PubMed Central

    Lehnert, Teresa; Timme, Sandra; Pollmächer, Johannes; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo

    2015-01-01

    Opportunistic fungal pathogens can cause bloodstream infection and severe sepsis upon entering the blood stream of the host. The early immune response in human blood comprises the elimination of pathogens by antimicrobial peptides and innate immune cells, such as neutrophils or monocytes. Mathematical modeling is a predictive method to examine these complex processes and to quantify the dynamics of pathogen-host interactions. Since model parameters are often not directly accessible by experiment, they must be estimated by calibrating model predictions against experimental data. Depending on the complexity of the mathematical model, parameter estimation can be associated with excessively high computational costs in terms of run time and memory. We apply a strategy for reliable parameter estimation where different modeling approaches with increasing complexity are used that build on one another. This bottom-up modeling approach is applied to an experimental human whole-blood infection assay for Candida albicans. Aiming for the quantification of the relative impact of different routes of the immune response against this human-pathogenic fungus, we start from a non-spatial state-based model (SBM), because this level of model complexity allows estimating a priori unknown transition rates between various system states by the global optimization method simulated annealing. Building on the non-spatial SBM, an agent-based model (ABM) is implemented that incorporates the migration of interacting cells in three-dimensional space. The ABM takes advantage of estimated parameters from the non-spatial SBM, leading to a decreased dimensionality of the parameter space. This space can be scanned using a local optimization approach, i.e., least-squares error estimation based on an adaptive regular grid search, to predict cell migration parameters that are not accessible in experiment. 
In the future, spatio-temporal simulations of whole-blood samples may enable timely stratification of sepsis patients by distinguishing hyper-inflammatory from paralytic phases in immune dysregulation. PMID:26150807

  18. Space-weather Parameters for 1,000 Active Regions Observed by SDO/HMI

    NASA Astrophysics Data System (ADS)

    Bobra, M.; Liu, Y.; Hoeksema, J. T.; Sun, X.

    2013-12-01

    We present statistical studies of several space-weather parameters, derived from observations of the photospheric vector magnetic field by the Helioseismic and Magnetic Imager (HMI) aboard the Solar Dynamics Observatory, for a thousand active regions. Each active region has been observed every twelve minutes during the entirety of its disk passage. Some of these parameters, such as energy density and shear angle, indicate the deviation of the photospheric magnetic field from that of a potential field. Other parameters include flux, helicity, field gradients, polarity inversion line properties, and measures of complexity. We show that some of these parameters are useful for event prediction.

  19. Crystallization and crystal manipulation of a steric chaperone in complex with its lipase substrate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pauwels, Kris, E-mail: krpauwel@vub.ac.be; Loris, Remy; Vandenbussche, Guy

    2005-08-01

    Crystals of the lipase of B. glumae in complex with its specific foldase were obtained in two forms. Crystallization, crystal manipulation and preliminary X-ray diffraction analysis are described. Bacterial lipases that are secreted via the type II secretion pathway require a lipase-specific foldase in order to obtain their native and biologically active conformation in the periplasmic space. The lipase–foldase complex from Burkholderia glumae (319 and 333 residues, respectively) was crystallized in two crystal forms. One crystal form belongs to space group P3₁21 (P3₂21), with unit-cell parameters a = b = 122.3, c = 98.2 Å. A procedure is presented which improved the diffraction of these crystals from ∼5 to 2.95 Å. For the second crystal form, which belonged to space group C2 with unit-cell parameters a = 183.0, b = 75.7, c = 116.6 Å, X-ray data were collected to 1.85 Å.

  20. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone: the design parameter space is larger, the optimization may combine continuous and combinatoric variables, and the combined objective function may be nonconvex. Future extensions to large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Areas requiring more efficient tools than currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for the structure and control parameters of both the model space and the design space.

  1. Tethered Satellites as Enabling Platforms for an Operational Space Weather Monitoring System

    NASA Technical Reports Server (NTRS)

    Krause, L. Habash; Gilchrist, B. E.; Bilen, S.; Owens, J.; Voronka, N.; Furhop, K.

    2013-01-01

    Space weather nowcasting and forecasting models require assimilation of near-real-time (NRT) space environment data to improve the precision and accuracy of operational products. Typically, these models begin with a climatological model that provides "most probable distributions" of environmental parameters as a function of time and space. The process of NRT data assimilation gently pulls the climate model toward the observed state (e.g., via Kalman smoothing) for nowcasting, and forecasting is achieved through a set of iterative physics-based forward-prediction calculations. The question of which space weather observatories are required to meet the spatial and temporal requirements of these models is a complex one, and we do not address it with this poster. Instead, we present some examples of how tethered satellites can be used to address the shortfalls in our ability to measure critical environmental parameters necessary to drive these space weather models. Examples include very-long-baseline electric field measurements, magnetized ionospheric conductivity measurements, and the ability to separate temporal from spatial irregularities in environmental parameters. Tethered satellite functional requirements will be presented for each space weather parameter considered in this study.

  2. A Framework to Determine New System Requirements Under Design Parameter and Demand Uncertainties

    DTIC Science & Technology

    2015-04-30

    relegates quantitative complexities of decision-making to the method and designates trade-space exploration to the practitioner. We demonstrate the approach...play a critical role in determining new system requirements. Scope and Method of Approach: The early stages of the design process have substantial

  3. Improving the Fitness of High-Dimensional Biomechanical Models via Data-Driven Stochastic Exploration

    PubMed Central

    Bustamante, Carlos D.; Valero-Cuevas, Francisco J.

    2010-01-01

    The field of complex biomechanical modeling has begun to rely on Monte Carlo techniques to investigate the effects of parameter variability and measurement uncertainty on model outputs, search for optimal parameter combinations, and define model limitations. However, advanced stochastic methods to perform data-driven explorations, such as Markov chain Monte Carlo (MCMC), become necessary as the number of model parameters increases. Here, we demonstrate the feasibility and, to our knowledge, the first use of an MCMC approach to improve the fitness of realistically large biomechanical models. We used a Metropolis–Hastings algorithm to search increasingly complex parameter landscapes (3, 8, 24, and 36 dimensions) to uncover underlying distributions of anatomical parameters of a "truth model" of the human thumb on the basis of simulated kinematic data (thumbnail location, orientation, and linear and angular velocities) polluted by zero-mean, uncorrelated multivariate Gaussian "measurement noise." Driven by these data, ten Markov chains searched each model parameter space for the subspace that best fit the data (posterior distribution). As expected, the convergence time increased, more local minima were found, and marginal distributions broadened as the parameter space complexity increased. In the 36-D scenario, some chains found local minima but the majority of chains converged to the true posterior distribution (confirmed using a cross-validation dataset), thus demonstrating the feasibility and utility of these methods for realistically large biomechanical problems. PMID:19272906
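    As an aside, the Metropolis-Hastings random-walk sampler at the core of such studies fits in a few lines. The sketch below recovers the posterior of a single Gaussian mean from simulated noisy data (a toy stand-in for the paper's 36-D kinematic problem; data size and proposal scale are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a "truth model": Gaussian with unknown mean mu
true_mu = 2.0
data = rng.normal(true_mu, 1.0, size=200)

def log_posterior(mu):
    # flat prior; Gaussian likelihood with known unit variance
    return -0.5 * np.sum((data - mu) ** 2)

# Metropolis-Hastings: random-walk proposals, accepted with the posterior ratio
chain = []
mu = 0.0
lp = log_posterior(mu)
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.2)
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    chain.append(mu)

posterior = np.array(chain[1000:])   # discard burn-in
print(posterior.mean(), posterior.std())
```

    Running several independent chains from different starting points, as the record describes, is the standard check that the sampler has found the true posterior rather than a local minimum.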

  4. Robust root clustering for linear uncertain systems using generalized Lyapunov theory

    NASA Technical Reports Server (NTRS)

    Yedavalli, R. K.

    1993-01-01

    Consideration is given to the problem of matrix root clustering in subregions of a complex plane for linear state space models with real parameter uncertainty. The nominal matrix root clustering theory of Gutman & Jury (1981) using the generalized Lyapunov equation is extended to the perturbed matrix case, and bounds are derived on the perturbation to maintain root clustering inside a given region. The theory makes it possible to obtain an explicit relationship between the parameters of the root clustering region and the uncertainty range of the parameter space.
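The Lyapunov machinery underlying this root-clustering theory can be illustrated for the simplest clustering region, the open left half-plane. A sketch of the nominal (unperturbed) test only, assuming SciPy's continuous Lyapunov solver; the paper's perturbation bounds are not reproduced:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def is_stable_lyapunov(A, Q=None):
    """Stability test via the generalized Lyapunov equation: A is Hurwitz
    (all eigenvalues in the open left half-plane) iff the solution P of
    A.T P + P A = -Q, with Q positive definite, is positive definite."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    P = solve_continuous_lyapunov(A.T, -Q)
    return bool(np.all(np.linalg.eigvalsh((P + P.T) / 2.0) > 0.0))
```

Other clustering regions (disks, sectors) lead to analogous generalized Lyapunov equations with transformed coefficient matrices.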

  5. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, is accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method is efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
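The core GLUE step, retaining "behavioural" parameter sets whose likelihood exceeds a threshold, can be sketched as below. The one-parameter toy model, the Nash-Sutcliffe-style likelihood, and all names are illustrative assumptions; the study's heuristic-optimization samplers are replaced here by plain prior sampling:

```python
import numpy as np

def glue_sample(simulate, observed, prior_sampler, n=5000, threshold=0.9, seed=0):
    """GLUE core: sample parameter sets from the prior, score each with a
    Nash-Sutcliffe-style likelihood, and keep the 'behavioural' sets whose
    likelihood exceeds the threshold."""
    rng = np.random.default_rng(seed)
    params = prior_sampler(rng, n)
    var_obs = np.var(observed)
    kept, weights = [], []
    for p in params:
        sim = simulate(p)
        nse = 1.0 - np.mean((sim - observed) ** 2) / var_obs
        if nse >= threshold:
            kept.append(p)
            weights.append(nse)
    return np.array(kept), np.array(weights)
```

The retained sets and their likelihood weights then feed the usual GLUE uncertainty bounds.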

  6. An Advanced User Interface Approach for Complex Parameter Study Process Specification in the Information Power Grid

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob; Yan, Jerry C. (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have now become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are now seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers great resource opportunity but at the expense of great difficulty of use. We present an approach to this problem which stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.

  7. Crystallization and preliminary crystallographic analysis of calcium-binding protein-2 from Entamoeba histolytica and its complexes with strontium and the IQ1 motif of myosin V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gourinath, S., E-mail: sgourinath@mail.jnu.ac.in; Padhan, Narendra; Alam, Neelima

    2005-04-01

    Calcium plays a pivotal role in the pathogenesis of amoebiasis, a major disease caused by Entamoeba histolytica. Two domains with four canonical EF-hand-containing calcium-binding proteins (CaBPs) have been identified from E. histolytica. Even though they have very high sequence similarity, these bind to different target proteins in a Ca²⁺-dependent manner, leading to different functional pathways. Calcium-binding protein-2 (EhCaBP2) crystals were grown using MPD as a precipitant. The crystals belong to space group P2₁, with unit-cell parameters a = 111.74, b = 68.83, c = 113.25 Å, β = 116.7°. EhCaBP2 also crystallized in complex with strontium (replacing calcium) under similar conditions. The crystals belong to space group P2₁, with unit-cell parameters a = 69.18, b = 112.03, c = 93.42 Å, β = 92.8°. Preliminary data for EhCaBP2 crystals in complex with an IQ motif are also reported. This complex was crystallized with MPD and ethanol as precipitating agents. These crystals belong to space group P2₁, with unit-cell parameters a = 60.5, b = 69.86, c = 86.5 Å, β = 97.9°.
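For monoclinic cells such as those reported here (space group P2₁), the unit-cell volume follows directly from the cell parameters as V = abc·sin β. A small check using the first crystal form's parameters:

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume of a monoclinic lattice: V = a * b * c * sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))
```

For the first EhCaBP2 form (a = 111.74, b = 68.83, c = 113.25 Å, β = 116.7°) this gives roughly 7.8 × 10⁵ ų.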

  8. Forecasts of non-Gaussian parameter spaces using Box-Cox transformations

    NASA Astrophysics Data System (ADS)

    Joachimi, B.; Taylor, A. N.

    2011-09-01

    Forecasts of statistical constraints on model parameters using the Fisher matrix abound in many fields of astrophysics. The Fisher matrix formalism involves the assumption of Gaussianity in parameter space and hence fails to predict complex features of posterior probability distributions. Combining the standard Fisher matrix with Box-Cox transformations, we propose a novel method that accurately predicts arbitrary posterior shapes. The Box-Cox transformations are applied to parameter space to render it approximately multivariate Gaussian, and the Fisher matrix calculation is performed on the transformed parameters. We demonstrate that, after the Box-Cox parameters have been determined from an initial likelihood evaluation, the method correctly predicts changes in the posterior when varying various parameters of the experimental setup and the data analysis, with marginally higher computational cost than a standard Fisher matrix calculation. We apply the Box-Cox-Fisher formalism to forecast cosmological parameter constraints by future weak gravitational lensing surveys. The characteristic non-linear degeneracy between matter density parameter and normalization of matter density fluctuations is reproduced for several cases, and the capability of breaking this degeneracy with weak-lensing three-point statistics is investigated. Possible applications of Box-Cox transformations of posterior distributions are discussed, including the prospects for performing statistical data analysis steps in the transformed Gaussianized parameter space.
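The Gaussianization step can be illustrated with SciPy's one-dimensional Box-Cox transform: skewed samples (standing in for one non-Gaussian posterior direction) become nearly symmetric after the transform, at which point a Gaussian, Fisher-matrix description is appropriate. The log-normal toy distribution is an assumption for illustration only:

```python
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(0)
# Skewed samples standing in for a non-Gaussian posterior direction.
samples = rng.lognormal(mean=0.0, sigma=0.7, size=5000)

# Maximum-likelihood Box-Cox parameter; the transformed samples are far
# closer to Gaussian, so a Fisher-matrix description becomes accurate.
transformed, lam = boxcox(samples)
```

In the paper's multivariate setting, one such transformation parameter is fitted per dimension before the Fisher matrix is evaluated in the transformed space.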

  9. How Complex, Probable, and Predictable is Genetically Driven Red Queen Chaos?

    PubMed

    Duarte, Jorge; Rodrigues, Carla; Januário, Cristina; Martins, Nuno; Sardanyés, Josep

    2015-12-01

    Coevolution between two antagonistic species has been widely studied theoretically for both ecologically- and genetically-driven Red Queen dynamics. A typical outcome of these systems is an oscillatory behavior causing an endless series of one species' adaptations and the other's counter-adaptations. More recently, a mathematical model combining a three-species food chain system with an adaptive dynamics approach revealed genetically driven chaotic Red Queen coevolution. In the present article, we analyze this mathematical model, focusing mainly on the impact of species' rates of evolution (mutation rates) on the dynamics. First, we analytically prove the boundedness of the trajectories of the chaotic attractor. The complexity of the coupling between the dynamical variables is quantified using observability indices. By using symbolic dynamics theory, we quantify the complexity of genetically driven Red Queen chaos, computing the topological entropy of existing one-dimensional iterated maps using Markov partitions. Codimension-two bifurcation diagrams are also built from the period ordering of the orbits of the maps. Then, we study the predictability of the Red Queen chaos, found in narrow regions of mutation rates. To extend the previous analyses, we also computed the likelihood of finding chaos in a given region of the parameter space while varying other model parameters simultaneously. Such analyses allowed us to compute a mean predictability measure for the system in the explored region of the parameter space. We found that genetically driven Red Queen chaos, although being restricted to small regions of the analyzed parameter space, might be highly unpredictable.
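As a simpler stand-in for the topological-entropy computation used in the paper, a largest-Lyapunov-exponent estimate for a one-dimensional iterated map shows how chaotic and periodic parameter regions can be distinguished numerically. The logistic map and the parameter values are illustrative, not the Red Queen model:

```python
import numpy as np

def lyapunov_logistic(r, n_iter=20000, n_transient=1000, x0=0.4):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x);
    positive values indicate chaotic dynamics, negative values periodicity."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_iter):
        total += np.log(abs(r * (1.0 - 2.0 * x)))   # log of local stretching
        x = r * x * (1.0 - x)
    return total / n_iter
```

Scanning this quantity over a parameter grid yields chaos maps analogous to the paper's mutation-rate scans.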

  10. Purification, crystallization and preliminary X-ray diffraction studies of UDP-N-acetylglucosamine pyrophosphorylase from Candida albicans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maruyama, Daisuke; Nishitani, Yuichi; Nonaka, Tsuyoshi

    2006-12-01

    UDP-N-acetylglucosamine pyrophosphorylase (UAP) is an essential enzyme in the synthesis of UDP-N-acetylglucosamine. UAP from Candida albicans was purified and crystallized by the sitting-drop vapour-diffusion method, and X-ray diffraction data were collected to 2.3 Å resolution. The crystals of the substrate and product complexes both diffract X-rays to beyond 2.3 Å resolution using synchrotron radiation. The crystals of the substrate complex belong to the triclinic space group P1, with unit-cell parameters a = 47.77, b = 62.89, c = 90.60 Å, α = 90.01, β = 97.72, γ = 92.88°, whereas those of the product complex belong to the orthorhombic space group P2₁2₁2₁, with unit-cell parameters a = 61.95, b = 90.87, c = 94.88 Å.

  11. Wave Geometry: a Plurality of Singularities

    NASA Astrophysics Data System (ADS)

    Berry, M. V.

    Five interconnected wave singularities are discussed: phase monopoles, at eigenvalue degeneracies in parameter space, where the 2-form generating the geometric phase is singular; phase dislocations, at zeros of complex wavefunctions in position space, where different wavefronts (surfaces of constant phase) meet; caustics, that is, envelopes (foci) of families of classical paths or geometrical rays, where real rays are born violently and which are complementary to dislocations; Stokes sets, at which a complex ray is born gently where it is maximally dominated by another ray; and complex degeneracies, which are the sources of adiabatic quantum transitions in analytic Hamiltonians.

  12. A joint-space numerical model of metabolic energy expenditure for human multibody dynamic system.

    PubMed

    Kim, Joo H; Roberts, Dustyn

    2015-09-01

    Metabolic energy expenditure (MEE) is a critical performance measure of human motion. In this study, a general joint-space numerical model of MEE is derived by integrating the laws of thermodynamics and principles of multibody system dynamics, which can evaluate MEE without the limitations inherent in experimental measurements (phase delays, steady state and task restrictions, and limited range of motion) or muscle-space models (complexities and indeterminacies from excessive DOFs, contacts and wrapping interactions, and reliance on in vitro parameters). Muscle energetic components are mapped to the joint space, in which the MEE model is formulated. A constrained multi-objective optimization algorithm is established to estimate the model parameters from experimental walking data also used for initial validation. The joint-space parameters estimated directly from active subjects provide reliable MEE estimates with a mean absolute error of 3.6 ± 3.6% relative to validation values, which can be used to evaluate MEE for complex non-periodic tasks that may not be experimentally verifiable. This model also enables real-time calculations of instantaneous MEE rate as a function of time for transient evaluations. Although experimental measurements may not be completely replaced by model evaluations, predicted quantities can be used as strong complements to increase reliability of the results and yield unique insights for various applications. Copyright © 2015 John Wiley & Sons, Ltd.
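A joint-space energy model of this general shape might be sketched as below. The specific rate equation, the basal term, and the positive/negative-power weighting coefficients are hypothetical placeholders introduced for illustration, not the authors' model or parameter estimates:

```python
import numpy as np

def mee_rate(torques, joint_velocities, basal=1.2, eta_pos=4.0, eta_neg=0.8):
    """Hypothetical instantaneous metabolic rate in joint space: a basal term
    plus positive (concentric) and negative (eccentric) mechanical joint power
    weighted by assumed efficiency coefficients."""
    power = np.asarray(torques, dtype=float) * np.asarray(joint_velocities, dtype=float)
    positive = np.clip(power, 0.0, None).sum()
    negative = np.clip(power, None, 0.0).sum()
    return basal + eta_pos * positive - eta_neg * negative

def total_mee(torque_traj, velocity_traj, dt):
    """Trapezoidal time integration of the instantaneous rate over a trajectory."""
    rates = [mee_rate(t, v) for t, v in zip(torque_traj, velocity_traj)]
    return dt * (sum(rates) - 0.5 * (rates[0] + rates[-1]))
```

The instantaneous rate supports the kind of real-time, transient evaluation described in the abstract, while the integral gives a per-task energy cost.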

  13. Expert-guided optimization for 3D printing of soft and liquid materials.

    PubMed

    Abdollahi, Sara; Davis, Alexander; Miller, John H; Feinberg, Adam W

    2018-01-01

    Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained.
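The second EGO step, greedy search over a discrete factor/level grid, can be sketched as follows. The scoring function and the levels in the usage below are stand-ins for the paper's multi-factor print score:

```python
def hill_climb(score, levels, start):
    """Greedy hill climbing over a discrete factor/level grid: repeatedly move
    to any single-factor change that improves the score, until none does."""
    current = tuple(start)
    best = score(current)
    improved = True
    while improved:
        improved = False
        for i, values in enumerate(levels):
            for v in values:
                candidate = current[:i] + (v,) + current[i + 1:]
                s = score(candidate)
                if s > best:
                    current, best, improved = candidate, s, True
    return current, best
```

When this search stalls, the third EGO step has the expert introduce new factors or a new parameter space and the climb restarts from the best current solution.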

  14. Expert-guided optimization for 3D printing of soft and liquid materials

    PubMed Central

    Abdollahi, Sara; Davis, Alexander; Miller, John H.

    2018-01-01

    Additive manufacturing (AM) has rapidly emerged as a disruptive technology to build mechanical parts, enabling increased design complexity, low-cost customization and an ever-increasing range of materials. Yet these capabilities have also created an immense challenge in optimizing the large number of process parameters in order to achieve a high-performance part. This is especially true for AM of soft, deformable materials and for liquid-like resins that require experimental printing methods. Here, we developed an expert-guided optimization (EGO) strategy to provide structure in exploring and improving the 3D printing of liquid polydimethylsiloxane (PDMS) elastomer resin. EGO uses three steps, starting first with expert screening to select the parameter space, factors, and factor levels. Second is a hill-climbing algorithm to search the parameter space defined by the expert for the best set of parameters. Third is expert decision making to try new factors or a new parameter space to improve on the best current solution. We applied the algorithm to two calibration objects, a hollow cylinder and a five-sided hollow cube that were evaluated based on a multi-factor scoring system. The optimum print settings were then used to print complex PDMS and epoxy 3D objects, including a twisted vase, water drop, toe, and ear, at a level of detail and fidelity previously not obtained. PMID:29621286

  15. Universal dynamical properties preclude standard clustering in a large class of biochemical data.

    PubMed

    Gomez, Florian; Stoop, Ralph L; Stoop, Ruedi

    2014-09-01

    Clustering of chemical and biochemical data based on observed features is a central cognitive step in the analysis of chemical substances, in particular in combinatorial chemistry, or of complex biochemical reaction networks. Often, for reasons unknown to the researcher, this step produces disappointing results. Once the sources of the problem are known, improved clustering methods might revitalize the statistical approach of compound and reaction search and analysis. Here, we present a generic mechanism that may be at the origin of many clustering difficulties. The variety of dynamical behaviors that complex biochemical reactions can exhibit as system parameters vary is a fundamental system fingerprint. In parameter space, shrimp-like or swallow-tail structures separate parameter sets that lead to stable periodic dynamical behavior from those leading to irregular behavior. We work out the genericity of this phenomenon and demonstrate novel examples of its occurrence in realistic models of biophysics. Although we elucidate the phenomenon by considering the emergence of periodicity in dependence on system parameters in a low-dimensional parameter space, the conclusions from our simple setting are shown to continue to be valid for features in a higher-dimensional feature space, as long as the feature-generating mechanism is not too extreme and the dimension of this space is not too high compared with the amount of available data. For online versions of super-paramagnetic clustering see http://stoop.ini.uzh.ch/research/clustering. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  17. Optimizing for Large Planar Fractures in Multistage Horizontal Wells in Enhanced Geothermal Systems Using a Coupled Fluid and Geomechanics Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Xiexiaomen; Tutuncu, Azra; Eustes, Alfred

    Enhanced Geothermal Systems (EGS) could potentially use technological advancements in coupled implementation of horizontal drilling and multistage hydraulic fracturing techniques in tight oil and shale gas reservoirs, along with improvements in reservoir simulation techniques, to design and create EGS reservoirs. In this study, a commercial hydraulic fracture simulation package, Mangrove by Schlumberger, was used in an EGS model with largely distributed pre-existing natural fractures to model fracture propagation during the creation of a complex fracture network. The main goal of this study is to investigate optimum treatment parameters in creating multiple large, planar fractures to hydraulically connect a horizontal injection well and a horizontal production well that are 10,000 ft. deep and spaced 500 ft. apart from each other. A matrix of simulations for this study was carried out to determine the influence of reservoir and treatment parameters on preventing (or aiding) the creation of large planar fractures. The reservoir parameters investigated during the matrix simulations include the in-situ stress state and properties of the natural fracture set such as the primary and secondary fracture orientation, average fracture length, and average fracture spacing. The treatment parameters investigated during the simulations were fluid viscosity, proppant concentration, pump rate, and pump volume. A final simulation with optimized design parameters was performed. The optimized design simulation indicated that high fluid viscosity, high proppant concentration, large pump volume and high pump rate tend to minimize the complexity of the created fracture network. Additionally, a reservoir with 'friendly' formation characteristics such as large stress anisotropy, natural fracture sets parallel to the maximum horizontal principal stress (SHmax), and large natural fracture spacing also promotes the creation of large planar fractures while minimizing fracture complexity.

  18. Virtual Construction of Space Habitats: Connecting Building Information Models (BIM) and SysML

    NASA Technical Reports Server (NTRS)

    Polit-Casillas, Raul; Howe, A. Scott

    2013-01-01

    Current trends in the design, construction and management of complex projects make use of Building Information Models (BIM), connecting different types of data to geometrical models. This information model allows types of analysis beyond pure graphical representation. Space habitats, regardless of their size, are also complex systems that require the synchronization of many types of information and disciplines beyond mass, volume, power or other basic volumetric parameters. For this, state-of-the-art model-based systems engineering languages and processes - for instance SysML - represent a solid way to tackle this problem from a programmatic point of view. Nevertheless, integrating this with a powerful geometrical architectural design tool with BIM capabilities could represent a change in the workflow and paradigm of space habitat design, applicable to other complex aerospace systems. This paper presents general findings and overall conclusions from ongoing research to create a design protocol and method that practically connects a systems engineering approach with BIM-based architectural and engineering design as a complete Model Based Engineering approach. One hypothetical example is created and followed through the design process. To make this possible, the research also tackles the application of IFC categories and parameters in the aerospace field, starting with space habitat design, as a way to understand the information flow between disciplines and tools. By building virtual space habitats we can potentially improve the way more complex designs are developed, from concept to manufacturing, starting from very little detail.

  19. Drawing dynamical and parameters planes of iterative families and methods.

    PubMed

    Chicharro, Francisco I; Cordero, Alicia; Torregrosa, Juan R

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is performed on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter will provide excellent schemes (or dreadful ones).

  20. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
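A minimal bootstrap SISR filter for a scalar population model might look like this. The growth model, noise levels, and plain multinomial resampling are illustrative assumptions; the study's kernel-smoothing refinement and joint parameter estimation are omitted:

```python
import numpy as np

def sisr_filter(counts, n_particles=2000, growth=1.05, proc_sd=5.0, obs_sd=10.0, seed=0):
    """Bootstrap sequential importance sampling/resampling (SISR): propagate
    particles through the population model, weight by the observation
    likelihood, and resample to estimate the latent population size."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(0.5, 1.5, n_particles) * counts[0]
    estimates = []
    for y in counts:
        particles = growth * particles + rng.normal(0.0, proc_sd, n_particles)
        weights = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)
```

Repeated resampling is what causes the particle depletion mentioned in the abstract; kernel smoothing over particle values is one way to counter it.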

  1. Reasoning from non-stationarity

    NASA Astrophysics Data System (ADS)

    Struzik, Zbigniew R.; van Wijngaarden, Willem J.; Castelo, Robert

    2002-11-01

    Complex real-world (biological) systems often exhibit intrinsically non-stationary behaviour of their temporal characteristics. We discuss local measures of scaling which can capture and reveal changes in a system's behaviour. Such measures offer increased insight into a system's behaviour and are superior to global, spectral characteristics like the multifractal spectrum. They are, however, often inadequate for fully understanding and modelling the phenomenon. We illustrate an attempt to capture complex model characteristics by analysing (multiple order) correlations in a high dimensional space of parameters of the (biological) system being studied. Both temporal information, among others local scaling information, and external descriptors/parameters, possibly influencing the system's state, are used to span the search space investigated for the presence of a (sub-)optimal model. As an example, we use fetal heartbeat monitored during labour.

  2. Automatic high-throughput screening of colloidal crystals using machine learning

    NASA Astrophysics Data System (ADS)

    Spellings, Matthew; Glotzer, Sharon C.

    Recent improvements in hardware and software have united to pose an interesting problem for computational scientists studying self-assembly of particles into crystal structures: while studies covering large swathes of parameter space can be dispatched at once using modern supercomputers and parallel architectures, identifying the different regions of a phase diagram is often a serial task completed by hand. While analytic methods exist to distinguish some simple structures, they can be difficult to apply, and automatic identification of more complex structures is still lacking. In this talk we describe one method to create numerical "fingerprints" of local order and use them to analyze a study of complex ordered structures. We can use these methods as first steps toward automatic exploration of parameter space and, more broadly, the strategic design of new materials.
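One classic numerical "fingerprint" of local order in particle configurations is the hexatic bond-order parameter ψ₆. A 2D sketch of the idea, not necessarily the descriptor used in this work:

```python
import numpy as np

def psi6(points, k=6):
    """Per-particle hexatic bond-order parameter |psi_6|: 1 for a perfect
    hexagonal neighbourhood, smaller for disordered environments."""
    pts = np.asarray(points, dtype=float)
    values = []
    for p in pts:
        dist = np.linalg.norm(pts - p, axis=1)
        neighbours = np.argsort(dist)[1:k + 1]        # k nearest neighbours
        dx, dy = (pts[neighbours] - p).T
        angles = np.arctan2(dy, dx)
        values.append(abs(np.mean(np.exp(6j * angles))))
    return np.array(values)
```

Feeding such per-particle fingerprints to a classifier is one route to the automatic phase-diagram labeling described in the talk.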

  3. Plasma Parameters From Reentry Signal Attenuation

    DOE PAGES

    Statom, T. K.

    2018-02-27

    This study presents the application of a theoretically developed method that provides plasma parameter solution space information from measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers around the attenuation and the complex index of refraction. The methodology uses an imaginary index of refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. Then a filter is used to reject nonphysical solutions. Signal attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thickness for a specific signal attenuation can have different plasma properties.
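The forward relation at the center of this method, signal attenuation from the complex index of refraction of a collisional plasma, can be computed directly from the standard Drude-type dispersion relation. The parameter values in the test are illustrative, not reentry data:

```python
import numpy as np

C = 2.998e8      # speed of light, m/s
QE = 1.602e-19   # electron charge, C
ME = 9.109e-31   # electron mass, kg
EPS0 = 8.854e-12 # vacuum permittivity, F/m

def attenuation_db_per_m(freq_hz, electron_density, collision_freq):
    """Attenuation of an RF signal in a collisional plasma, from the complex
    index of refraction n^2 = 1 - wp^2 / (w * (w + i*nu)) (Drude form,
    exp(-i*w*t) convention)."""
    w = 2.0 * np.pi * freq_hz
    wp2 = electron_density * QE**2 / (ME * EPS0)     # plasma frequency squared
    n = np.sqrt(1.0 - wp2 / (w * (w + 1j * collision_freq)))
    alpha = w * n.imag / C                           # field attenuation, Np/m
    return 20.0 * alpha / np.log(10.0)               # dB per metre
```

Below the plasma frequency the wave is strongly attenuated; well above it the plasma is nearly transparent, which is the behavior the inverse matching algorithm exploits.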

  4. Plasma Parameters From Reentry Signal Attenuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Statom, T. K.

    This study presents the application of a theoretically developed method that provides plasma parameter solution space information from measured RF attenuation that occurs during reentry. The purpose is to provide reentry plasma parameter information from the communication signal attenuation. The theoretical development centers around the attenuation and the complex index of refraction. The methodology uses an imaginary index of refraction matching algorithm with a tolerance to find suitable solutions that satisfy the theory. The imaginary matching terms are then used to determine the real index of refraction, resulting in the complex index of refraction. Then a filter is used to reject nonphysical solutions. Signal attenuation-based plasma parameter properties investigated include the complex index of refraction, plasma frequency, electron density, collision frequency, propagation constant, attenuation constant, phase constant, complex plasma conductivity, and electron mobility. RF plasma thickness attenuation is investigated and compared to the literature. Finally, similar plasma thickness for a specific signal attenuation can have different plasma properties.

  5. Tracking vortices in superconductors: Extracting singularities from a discretized complex scalar field evolving in time

    DOE PAGES

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom; ...

    2016-02-19

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space-time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. In addition, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of a superconducting material can be understood in greater detail than previously possible.
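At a fixed time step, extracting vortices from a discretized complex field reduces to measuring the phase winding around each grid plaquette. A 2D sketch of that core step; the paper's 3D and space-time graph constructions are not reproduced:

```python
import numpy as np

def plaquette_windings(psi):
    """Winding number of the phase of a discretized complex field around each
    grid plaquette; a winding of +1 or -1 marks a vortex core."""
    theta = np.angle(psi)

    def dphase(a, b):
        # phase difference b - a wrapped into (-pi, pi]
        return np.angle(np.exp(1j * (b - a)))

    # Sum wrapped phase differences counterclockwise around every plaquette.
    w = (dphase(theta[:-1, :-1], theta[1:, :-1]) +
         dphase(theta[1:, :-1], theta[1:, 1:]) +
         dphase(theta[1:, 1:], theta[:-1, 1:]) +
         dphase(theta[:-1, 1:], theta[:-1, :-1]))
    return np.rint(w / (2.0 * np.pi)).astype(int)
```

Linking nonzero-winding plaquettes across neighbouring slices (and time steps) yields the connected graphs used for tracking in the paper.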

  6. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. 
It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
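    The emulator-plus-history-matching loop can be caricatured in one dimension: stand in for the slow simulator with a fast mean/variance emulator, compute the implausibility I(x), and discard inputs with I(x) > 3. The toy model, emulator variance, and grid below are all illustrative assumptions, not the Arabidopsis model:

```python
import numpy as np

# toy "slow" model and an idealized fast emulator standing in for it
f = lambda x: np.sin(3 * x) + 0.5 * x           # pretend this is expensive
z = f(0.7) + 0.02                               # observation made near x = 0.7
obs_var = 0.01 ** 2                             # observation-error variance
em_mean = f                                     # emulator mean (idealized)
em_var = lambda x: 0.05 ** 2 + 0 * x            # emulator (code) uncertainty

def implausibility(x):
    """History-matching implausibility: standardized distance between the
    observation and the emulator's prediction at input x."""
    return np.abs(z - em_mean(x)) / np.sqrt(em_var(x) + obs_var)

# wave 1: rule out inputs with I(x) > 3 (the usual three-sigma cutoff)
xs = np.linspace(-2, 2, 2001)
non_implausible = xs[implausibility(xs) < 3]
```

    Later waves refit the emulator only on the surviving region, shrinking the non-implausible set further at each step.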

  7. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
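    The complexity-transferability tradeoff described above can be reproduced with synthetic data: fit models of increasing polynomial complexity and validate them on a non-random (extrapolating) split. Everything here is a toy stand-in for the SNOTEL regressions:

```python
import numpy as np

rng = np.random.default_rng(5)

# synthetic space-for-time data: April 1 SWE vs mean winter temperature
t = np.sort(rng.uniform(-10, 5, 300))                         # deg C
swe = np.clip(-40 * t + 100, 0, None) + rng.normal(0, 30, 300)

# non-random validation split: hold out the coldest sites entirely,
# so validation demands extrapolation (transfer to unsampled conditions)
fit_mask, val_mask = t > -7, t <= -7

def transfer_rmse(degree):
    """RMSE on the held-out cold sites for a polynomial of given degree."""
    coef = np.polyfit(t[fit_mask], swe[fit_mask], degree)
    resid = np.polyval(coef, t[val_mask]) - swe[val_mask]
    return float(np.sqrt(np.mean(resid ** 2)))

errs = {d: transfer_rmse(d) for d in (1, 3, 7)}
```

    The high-degree model fits the training sites better but extrapolates far worse, the signature of complexity hurting transferability.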

  8. Investigating the complexity of precipitation sets within California via the fractal-multifractal method

    NASA Astrophysics Data System (ADS)

    Puente, Carlos E.; Maskey, Mahesh L.; Sivakumar, Bellie

    2017-04-01

    A deterministic geometric approach, the fractal-multifractal (FM) method, is adapted in order to encode highly intermittent daily rainfall records observed over a year. Using such a notion, this research investigates the complexity of rainfall in various stations within the State of California. Specifically, records gathered at (from South to North) Cherry Valley, Merced, Sacramento and Shasta Dam, containing 59, 116, 115 and 72 years, all ending at water year 2015, were encoded and analyzed in detail. The analysis reveals that: (a) the FM approach yields faithful encodings of all records, by years, with mean square and maximum errors in accumulated rain that are less than a mere 2% and 10%, respectively; (b) the evolution of the corresponding "best" FM parameters, allowing visualization of the inter-annual rainfall dynamics from a reduced vantage point, exhibits implicit variability that precludes discriminating between sites and extrapolating to the future; (c) the evolution of the FM parameters, restricted to specific regions within space, allows finding sensible future simulations; and (d) the rain signals at all sites may be termed "equally complex," as usage of k-means clustering and conventional phase space analysis of FM parameters yields comparable results for all sites.

  9. Astrobiological complexity with probabilistic cellular automata.

    PubMed

    Vukotić, Branislav; Ćirković, Milan M

    2012-08-01

    The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, but has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories with a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for continuation of practical SETI searches.
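    A hedged sketch of the basic machinery: a two-state probabilistic cellular automaton whose transition probabilities are looked up from an input probability matrix indexed by site state and neighbour count (the matrix values are arbitrary placeholders, not calibrated astrobiological parameters):

```python
import numpy as np

rng = np.random.default_rng(11)

def step(grid, p_matrix):
    """One synchronous update of a 2-state probabilistic cellular
    automaton: each site's probability of being 'inhabited' next step is
    read from p_matrix[current state, number of inhabited neighbours]."""
    # von Neumann neighbour counts with periodic boundaries
    nb = sum(np.roll(grid, s, axis=a) for a in (0, 1) for s in (1, -1))
    p = p_matrix[grid, nb]
    return (rng.random(grid.shape) < p).astype(int)

# input probability matrix indexed by [state, neighbours 0..4]
p_matrix = np.array([[0.01, 0.10, 0.20, 0.30, 0.40],   # colonization
                     [0.80, 0.90, 0.95, 0.97, 0.99]])  # persistence
g = np.zeros((32, 32), int)
g[16, 16] = 1                      # single seeded site
for _ in range(50):
    g = step(g, p_matrix)
```

    Sweeping the entries of `p_matrix` and clustering the resulting histories is the kind of parameter-space exploration the abstract has in mind.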

  10. Drawing Dynamical and Parameters Planes of Iterative Families and Methods

    PubMed Central

    Chicharro, Francisco I.

    2013-01-01

    The complex dynamical analysis of the parametric fourth-order Kim's iterative family is made on quadratic polynomials, showing the MATLAB codes generated to draw the fractal images necessary to complete the study. The parameter spaces associated with the free critical points have been analyzed, showing the stable (and unstable) regions where the selection of the parameter provides excellent (or dreadful) schemes. PMID:24376386
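    The recipe behind such parameter planes is generic: for each parameter value on a complex grid, iterate the family's map from a free critical point and record whether the orbit converges to a root. A sketch using a damped Newton map on z² − 1 as a stand-in for Kim's family (seed, window, and iteration budget are arbitrary):

```python
import numpy as np

def parameter_plane(n=41, iters=60, tol=1e-8):
    """Boolean parameter plane: True where iterating the damped Newton map
    z -> z - lam*(z^2 - 1)/(2z) from a fixed seed converges to a root of
    z^2 - 1, for each complex damping parameter lam on a grid."""
    re, im = np.meshgrid(np.linspace(0, 2, n), np.linspace(-1, 1, n))
    lam = re + 1j * im
    z = np.full_like(lam, 0.5 + 0.5j)          # common seed for every lam
    with np.errstate(all='ignore'):            # diverging orbits -> inf/nan
        for _ in range(iters):
            z = z - lam * (z * z - 1) / (2 * z)
        return (np.abs(z - 1) < tol) | (np.abs(z + 1) < tol)

plane = parameter_plane()
```

    Colouring pixels by which root attracts the orbit (and how fast) instead of a plain True/False yields the familiar fractal images.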

  11. Crystallization of the C-terminal domain of the addiction antidote CcdA in complex with its toxin CcdB

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buts, Lieven; De Jonge, Natalie; Loris, Remy, E-mail: reloris@vub.ac.be

    2005-10-01

    The CcdA C-terminal domain was crystallized in complex with CcdB in two crystal forms that diffract to beyond 2.0 Å resolution. CcdA and CcdB are the antidote and toxin of the ccd addiction module of Escherichia coli plasmid F. The CcdA C-terminal domain (CcdA{sub C36}; 36 amino acids) was crystallized in complex with CcdB (dimer of 2 × 101 amino acids) in three different crystal forms, two of which diffract to high resolution. Form II belongs to space group P2{sub 1}2{sub 1}2{sub 1}, with unit-cell parameters a = 37.6, b = 60.5, c = 83.8 Å and diffracts to 1.8 Å resolution. Form III belongs to space group P2{sub 1}, with unit-cell parameters a = 41.0, b = 37.9, c = 69.6 Å, β = 96.9°, and diffracts to 1.9 Å resolution.

  12. Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.

    PubMed

    Huynh, Linh; Tagkopoulos, Ilias

    2015-08-21

    In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
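    The two-step idea, prune the design space with a cheap model and then search only the surviving region with the expensive model, can be sketched in one dimension (the objective functions and keep-fraction are illustrative, not the paper's circuit models or branch-and-bound bookkeeping):

```python
import numpy as np

def two_step_optimize(f_cheap, f_exact, lo, hi,
                      n_coarse=50, n_fine=200, keep=0.2):
    """Hierarchical model switching: score the whole design space with a
    cheap low-fidelity model, keep the best-scoring fraction, then
    re-score only that reduced region with the expensive model."""
    xs = np.linspace(lo, hi, n_coarse)
    cheap = f_cheap(xs)
    cut = np.quantile(cheap, keep)             # keep the lowest 20%
    survivors = xs[cheap <= cut]
    x_fine = np.linspace(survivors.min(), survivors.max(), n_fine)
    return x_fine[np.argmin(f_exact(x_fine))]

# cheap surrogate = smoothed version of the exact (rippled) objective
f_exact = lambda x: (x - 1.3) ** 2 + 0.05 * np.sin(20 * x)
f_cheap = lambda x: (x - 1.3) ** 2
best = two_step_optimize(f_cheap, f_exact, -3, 3)
```

    The speed-up comes from calling the expensive model only on the small region the cheap model could not rule out.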

  13. Detection of image structures using the Fisher information and the Rao metric.

    PubMed

    Maybank, Stephen J

    2004-12-01

    In many detection problems, the structures to be detected are parameterized by the points of a parameter space. If the conditional probability density function for the measurements is known, then detection can be achieved by sampling the parameter space at a finite number of points and checking each point to see if the corresponding structure is supported by the data. The number of samples and the distances between neighboring samples are calculated using the Rao metric on the parameter space. The Rao metric is obtained from the Fisher information which is, in turn, obtained from the conditional probability density function. An upper bound is obtained for the probability of a false detection. The calculations are simplified in the low noise case by making an asymptotic approximation to the Fisher information. An application to line detection is described. Expressions are obtained for the asymptotic approximation to the Fisher information, the volume of the parameter space, and the number of samples. The time complexity for line detection is estimated. An experimental comparison is made with a Hough transform-based method for detecting lines.
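    For a location parameter with Gaussian noise the Rao metric is flat, so the sample count reduces to the parameter range times the square root of the Fisher information, divided by the desired Rao spacing. A minimal sketch under that simplifying assumption (function names are illustrative):

```python
import math

def fisher_info_gaussian_mean(n_meas, sigma):
    """Fisher information for the mean of n_meas Gaussian measurements
    with noise standard deviation sigma: I = n / sigma^2."""
    return n_meas / sigma ** 2

def rao_sample_count(theta_lo, theta_hi, n_meas, sigma, rao_step=1.0):
    """Number of parameter samples so that neighbours sit rao_step apart
    in the Rao metric; for a location parameter the length element is
    sqrt(I) * d(theta), so the total Rao length is (range) * sqrt(I)."""
    I = fisher_info_gaussian_mean(n_meas, sigma)
    return math.ceil((theta_hi - theta_lo) * math.sqrt(I) / rao_step)
```

    Noisier measurements shrink the Fisher information, so fewer samples cover the parameter space at fixed Rao spacing, matching the intuition that coarse data cannot distinguish finely spaced structures.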

  14. Combining states without scale hierarchies with ordered parton showers

    DOE PAGES

    Fischer, Nadine; Prestel, Stefan

    2017-09-12

    Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.

  15. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.

  16. Application and optimization of input parameter spaces in mass flow modelling: a case study with r.randomwalk and r.ranger

    NASA Astrophysics Data System (ADS)

    Krenn, Julia; Zangerl, Christian; Mergili, Martin

    2017-04-01

    r.randomwalk is a GIS-based, multi-functional, conceptual open source model application for forward and backward analyses of the propagation of mass flows. It relies on a set of empirically derived, uncertain input parameters. In contrast to many other tools, r.randomwalk accepts input parameter ranges (or, in the case of two or more parameters, spaces) in order to directly account for these uncertainties. Parameter spaces offer a way to move beyond single discrete input values, which in most cases are likely to be off target. r.randomwalk automatically performs multiple calculations with various parameter combinations in a given parameter space, resulting in the impact indicator index (III), which denotes the fraction of parameter value combinations predicting an impact on a given pixel. Still, there is a need to constrain the parameter space used for a certain process type or magnitude prior to performing forward calculations. This can be done by optimizing the parameter space in terms of bringing the model results in line with well-documented past events. As most existing parameter optimization algorithms are designed for discrete values rather than for ranges or spaces, the necessity for a new and innovative technique arises. The present study aims to develop such a technique and to apply it to derive guiding parameter spaces for the forward calculation of rock avalanches through back-calculation of multiple events. In order to automate the workflow we have designed r.ranger, an optimization and sensitivity analysis tool for parameter spaces which can be directly coupled to r.randomwalk. With r.ranger we apply a nested approach where the total value range of each parameter is divided into various levels of subranges. All possible combinations of subranges of all parameters are tested for the performance of the associated pattern of III. Performance indicators are the area under the ROC curve (AUROC) and the factor of conservativeness (FoC). 
This strategy is best demonstrated for two input parameters, but can be extended arbitrarily. We use a set of small rock avalanches from western Austria, and some larger ones from Canada and New Zealand, to optimize the basal friction coefficient and the mass-to-drag ratio of the two-parameter friction model implemented with r.randomwalk. We repeat the optimization procedure with conservative and non-conservative assumptions for a set of complementary parameters and with different raster cell sizes. Our preliminary results indicate that the model performance in terms of AUROC achieved with broad parameter spaces is hardly surpassed by the performance achieved with narrow parameter spaces. However, broad spaces may result in very conservative or very non-conservative predictions. Therefore, guiding parameter spaces have to be (i) broad enough to avoid the risk of being off target; and (ii) narrow enough to ensure a reasonable level of conservativeness of the results. The next steps will consist in (i) extending the study to other types of mass flow processes in order to support forward calculations using r.randomwalk; and (ii) applying the same strategy to the more complex, dynamic model r.avaflow.
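    The two ingredients r.ranger scores, the impact indicator index (III) and the AUROC against a documented event, can be sketched with a toy runout model in which a parameter pair (basal friction mu, mass-to-drag ratio m2d) impacts every pixel within a travel distance m2d/mu. All numbers below are placeholders, not calibrated values:

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def impact_indicator_index(d_pixels, mu_range, m2d_range, n=20):
    """III per pixel: fraction of (mu, m2d) combinations in the parameter
    space whose modelled runout m2d/mu reaches that pixel."""
    mus = np.linspace(*mu_range, n)
    m2ds = np.linspace(*m2d_range, n)
    runout = (m2ds[:, None] / mus[None, :]).ravel()        # all combos
    return (runout[:, None] >= d_pixels[None, :]).mean(axis=0)

d = np.linspace(100.0, 3000.0, 50)     # pixel distances down-slope (m)
observed = d <= 1200.0                 # documented impact area
iii = impact_indicator_index(d, (0.1, 0.3), (200.0, 400.0))
score = auroc(iii, observed)
```

    r.ranger's nested search repeats this scoring for every combination of parameter subranges and keeps those with high AUROC and acceptable conservativeness.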

  17. Crystallization and preliminary X-ray crystallographic analysis of the heterodimeric crotoxin complex and the isolated subunits crotapotin and phospholipase A{sub 2}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, K. F.; Murakami, M. T.; Cintra, A. C. O.

    2007-04-01

    Crotoxin, a potent neurotoxin from the venom of the South American rattlesnake Crotalus durissus terrificus, exists as a heterodimer formed between a phospholipase A{sub 2} and a catalytically inactive acidic phospholipase A{sub 2} analogue (crotapotin). Large single crystals of the crotoxin complex and of the isolated subunits have been obtained. The crotoxin complex crystal belongs to the orthorhombic space group P2{sub 1}2{sub 1}2, with unit-cell parameters a = 38.2, b = 68.7, c = 84.2 Å, and diffracted to 1.75 Å resolution. The crystal of the phospholipase A{sub 2} domain belongs to the hexagonal space group P6{sub 1}22 (or its enantiomorph P6{sub 5}22), with unit-cell parameters a = b = 38.7, c = 286.7 Å, and diffracted to 2.6 Å resolution. The crotapotin crystal diffracted to 2.3 Å resolution; however, the highly diffuse diffraction pattern did not permit unambiguous assignment of the unit-cell parameters.

  18. Effects of behavioral patterns and network topology structures on Parrondo’s paradox

    PubMed Central

    Ye, Ye; Cheong, Kang Hao; Cen, Yu-wan; Xie, Neng-gang

    2016-01-01

    A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network was analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed. PMID:27845430
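    For orientation, the single-player games underlying all such models already exhibit the paradox: two individually losing games combine into a winning one. A sketch with the classic capital-dependent game B (a toy stand-in for the paper's multi-agent, networked version):

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_gain(choose_game, rounds=400000, eps=0.005):
    """Average capital gain per round for the classic single-player
    Parrondo games. Game A: win prob 1/2 - eps. Game B: win prob
    1/10 - eps when capital % 3 == 0, else 3/4 - eps."""
    capital = 0
    for _ in range(rounds):
        if choose_game() == 'A':
            p = 0.5 - eps
        elif capital % 3 == 0:
            p = 0.1 - eps
        else:
            p = 0.75 - eps
        capital += 1 if rng.random() < p else -1
    return capital / rounds

only_a = mean_gain(lambda: 'A')
only_b = mean_gain(lambda: 'B')
mixed = mean_gain(lambda: 'A' if rng.random() < 0.5 else 'B')
```

    The paper's parameter space is, in essence, the region of (eps, game-B probabilities) values for which this sign reversal survives on a given network and behavioral pattern.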

  19. Effects of behavioral patterns and network topology structures on Parrondo’s paradox

    NASA Astrophysics Data System (ADS)

    Ye, Ye; Cheong, Kang Hao; Cen, Yu-Wan; Xie, Neng-Gang

    2016-11-01

    A multi-agent Parrondo’s model based on complex networks is used in the current study. For Parrondo’s game A, the individual interaction can be categorized into five types of behavioral patterns: the Matthew effect, harmony, cooperation, poor-competition-rich-cooperation and a random mode. The parameter space of Parrondo’s paradox pertaining to each behavioral pattern, and the gradual change of the parameter space from a two-dimensional lattice to a random network and from a random network to a scale-free network was analyzed. The simulation results suggest that the size of the region of the parameter space that elicits Parrondo’s paradox is positively correlated with the heterogeneity of the degree distribution of the network. For two distinct sets of probability parameters, the microcosmic reasons underlying the occurrence of the paradox under the scale-free network are elaborated. Common interaction mechanisms of the asymmetric structure of game B, behavioral patterns and network topology are also revealed.

  20. Modeling individual effects in the Cormack-Jolly-Seber Model: A state-space formulation

    USGS Publications Warehouse

    Royle, J. Andrew

    2008-01-01

    In population and evolutionary biology, there exists considerable interest in individual heterogeneity in parameters of demographic models for open populations. However, flexible and practical solutions to the development of such models have proven to be elusive. In this article, I provide a state-space formulation of open population capture-recapture models with individual effects. The state-space formulation provides a generic and flexible framework for modeling and inference in models with individual effects, and it yields a practical means of estimation in these complex problems via contemporary methods of Markov chain Monte Carlo. A straightforward implementation can be achieved in the software package WinBUGS. I provide an analysis of a simple model with constant detection and survival probability parameters. A second example is based on data from a 7-year study of European dippers, in which a model with year and individual effects is fitted.

  1. Insights on correlation dimension from dynamics mapping of three experimental nonlinear laser systems.

    PubMed

    McMahon, Christopher J; Toomey, Joshua P; Kane, Deb M

    2017-01-01

    We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space-external-cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity including systematically varying complexity in some regions. In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Proccacia algorithm. The new CD procedure is called the 'minimum gradient detection algorithm'. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Applying the new 'minimum gradient detection algorithm' CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. 
These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, as well as conventional physical measurements.
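    The Grassberger-Procaccia core of the procedure fits a slope to the correlation integral; the paper's contribution is the automated 'minimum gradient' search for a trustworthy scaling region on top of it. A minimal sketch on a 1-D point set (no delay embedding; the radii are assumed to sit inside a scaling region):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_dimension(x, radii):
    """Grassberger-Procaccia estimate: C(r) is the fraction of point
    pairs closer than r; the CD is the slope of log C(r) vs log r."""
    dist = np.abs(x[:, None] - x[None, :])[np.triu_indices(len(x), k=1)]
    C = np.array([(dist < r).mean() for r in radii])
    return np.polyfit(np.log(radii), np.log(C), 1)[0]

# points filling a line uniformly should give a CD close to 1
cd = correlation_dimension(rng.random(2000), np.logspace(-2, -1, 5))
```

    For experimental laser time series, the same slope fit is applied to delay-embedded data, and the 'minimum gradient detection algorithm' decides whether any radius range yields a slope stable enough to report as a CD.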

  2. Insights on correlation dimension from dynamics mapping of three experimental nonlinear laser systems

    PubMed Central

    McMahon, Christopher J.; Toomey, Joshua P.

    2017-01-01

    Background We have analysed large data sets consisting of tens of thousands of time series from three Type B laser systems: a semiconductor laser in a photonic integrated chip, a semiconductor laser subject to optical feedback from a long free-space-external-cavity, and a solid-state laser subject to optical injection from a master laser. The lasers can deliver either constant, periodic, pulsed, or chaotic outputs when parameters such as the injection current and the level of external perturbation are varied. The systems represent examples of experimental nonlinear systems more generally and cover a broad range of complexity including systematically varying complexity in some regions. Methods In this work we have introduced a new procedure for semi-automatically interrogating experimental laser system output power time series to calculate the correlation dimension (CD) using the commonly adopted Grassberger-Proccacia algorithm. The new CD procedure is called the ‘minimum gradient detection algorithm’. A value of minimum gradient is returned for all time series in a data set. In some cases this can be identified as a CD, with uncertainty. Findings Applying the new ‘minimum gradient detection algorithm’ CD procedure, we obtained robust measurements of the correlation dimension for many of the time series measured from each laser system. By mapping the results across an extended parameter space for operation of each laser system, we were able to confidently identify regions of low CD (CD < 3) and assign these robust values for the correlation dimension. However, in all three laser systems, we were not able to measure the correlation dimension at all parts of the parameter space. Nevertheless, by mapping the staged progress of the algorithm, we were able to broadly classify the dynamical output of the lasers at all parts of their respective parameter spaces. For two of the laser systems this included displaying regions of high-complexity chaos and dynamic noise. 
These high-complexity regions are differentiated from regions where the time series are dominated by technical noise. This is the first time such differentiation has been achieved using a CD analysis approach. Conclusions More can be known of the CD for a system when it is interrogated in a mapping context than from calculations using isolated time series. This has been shown for three laser systems and the approach is expected to be useful in other areas of nonlinear science where large data sets are available and need to be semi-automatically analysed to provide real dimensional information about the complex dynamics. The CD/minimum gradient algorithm measure provides additional information that complements other measures of complexity and relative complexity, such as the permutation entropy, as well as conventional physical measurements. PMID:28837602

  3. Flight control application of new stability robustness bounds for linear uncertain systems

    NASA Technical Reports Server (NTRS)

    Yedavalli, Rama K.

    1993-01-01

    This paper addresses the issue of obtaining bounds on the real parameter perturbations of a linear state-space model for robust stability. Based on Kronecker algebra, new, easily computable sufficient bounds are derived that are much less conservative than the existing bounds, since the technique is intended for real parameter perturbations only (in contrast to specializing the complex-variation case to the real-parameter case). The proposed theory is illustrated with application to several flight control examples.

  4. Ab Initio Crystal Field for Lanthanides.

    PubMed

    Ungur, Liviu; Chibotaru, Liviu F

    2017-03-13

    An ab initio methodology for the first-principles derivation of crystal-field (CF) parameters for lanthanides is described. The methodology is applied to the analysis of CF parameters in [Tb(Pc)2]- (Pc = phthalocyanine) and Dy4K2 ([Dy4K2O(OtBu)12]) complexes, and compared with often-used approximate and model descriptions. It is found that the application of geometry symmetrization, and the use of electrostatic point-charge and phenomenological CF models, leads to unacceptably large deviations from predictions based on ab initio calculations for the experimental geometry. It is shown how the predictions of standard CASSCF (Complete Active Space Self-Consistent Field) calculations (with 4f orbitals in the active space) can be systematically improved by including effects of dynamical electronic correlation (CASPT2 step) and by admixing electronic configurations of the 5d shell. This is exemplified for the well-studied Er-trensal complex (H3trensal = 2,2',2"-tris(salicylideneimido)trimethylamine). The electrostatic contributions to CF parameters in this complex, calculated with true charge distributions in the ligands, yield less than half of the total CF splitting, thus pointing to the dominant role of covalent effects. This analysis allows the conclusion that the ab initio crystal field is an essential tool for a reliable description of lanthanides. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Crystal structures of complexes of the cis-syn-cis isomer of dicyclohexano-18-crown-6 with oxonium hexafluorotantalate and oxonium hexafluoroniobate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonari, M. S.; Alekseeva, O. A.; Furmanova, N. G.

    2007-03-15

    The crystal structures of [(cis-syn-cis-dicyclohexano-18-crown-6 . H{sub 3}O)][TaF{sub 6}] and [(cis-syn-cis-dicyclohexano-18-crown-6 . H{sub 3}O)][NbF{sub 6}] complex compounds are determined using X-ray diffraction analysis. The tantalum complex has two polymorphic modifications, namely, the monoclinic (I) and triclinic (II) modifications. The unit-cell parameters of these compounds are as follows: a = 8.507(4) Å, b = 11.947(5) Å, c = 27.392(12) Å, β = 93.11(1)°, Z = 4, and space group P2{sub 1}/n for modification I; and a = 10.828(1) Å, b = 11.204(1) Å, c = 12.378(1) Å, α = 72.12(1)°, β = 79.40(1)°, γ = 73.70(1)°, Z = 2, and space group P-1 for modification II. The triclinic niobium complex [(cis-syn-cis-dicyclohexano-18-crown-6 . H{sub 3}O)][NbF{sub 6}] (III) with the unit-cell parameters a = 10.796(3) Å, b = 11.183(3) Å, c = 12.352(3) Å, α = 72.364(5)°, β = 79.577(5)°, γ = 73.773(4)°, Z = 2, and space group P-1 is isostructural with tantalum complex II. The structures of all three complexes are ionic in character. The oxonium cation in complexes I-III is encapsulated by the crown ether and thus forms one ordinary and two bifurcated hydrogen bonds with the oxygen atoms of the crown ether. This macrocyclic cation is bound to the anions through C-H...F contacts (H...F, 2.48-2.58 Å). The conformation of the macrocycle in complex I differs substantially from that in complex II (III).

  6. Neutrino oscillation parameter sampling with MonteCUBES

    NASA Astrophysics Data System (ADS)

    Blennow, Mattias; Fernandez-Martinez, Enrique

    2010-01-01

    We present MonteCUBES ("Monte Carlo Utility Based Experiment Simulator"), a software package designed to sample the neutrino oscillation parameter space through Markov Chain Monte Carlo algorithms. MonteCUBES makes use of the GLoBES software, so that the existing experiment definitions for GLoBES, describing long-baseline and reactor experiments, can be used with MonteCUBES. MonteCUBES consists of two main parts: the first is a C library, written as a plug-in for GLoBES, implementing the Markov Chain Monte Carlo algorithm to sample the parameter space. The second part is a user-friendly graphical Matlab interface to easily read, analyze, plot and export the results of the parameter space sampling.
    Program summary
    Program title: MonteCUBES (Monte Carlo Utility Based Experiment Simulator)
    Catalogue identifier: AEFJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 69 634
    No. of bytes in distributed program, including test data, etc.: 3 980 776
    Distribution format: tar.gz
    Programming language: C
    Computer: MonteCUBES builds and installs on 32 bit and 64 bit Linux systems where GLoBES is installed
    Operating system: 32 bit and 64 bit Linux
    RAM: Typically a few MBs
    Classification: 11.1
    External routines: GLoBES [1,2] and routines/libraries used by GLoBES
    Subprograms used: Cat Id ADZI_v1_0, Title GLoBES, Reference CPC 177 (2007) 439
    Nature of problem: Since neutrino masses do not appear in the standard model of particle physics, many models of neutrino masses also induce other types of new physics, which could affect the outcome of neutrino oscillation experiments. In general, such new physics implies high-dimensional parameter spaces that are difficult to explore using classical methods, such as the multi-dimensional projections and minimizations used in GLoBES [1,2].
    Solution method: MonteCUBES is written as a plug-in to the GLoBES software [1,2] and provides the necessary methods to perform Markov Chain Monte Carlo sampling of the parameter space. This allows an efficient sampling of the parameter space, with a complexity that does not grow exponentially with the parameter-space dimension. The integration of the MonteCUBES package with the GLoBES software ensures that the experiment definitions already in use by the community can also be used with MonteCUBES, while also lowering the learning threshold for users who already know GLoBES.
    Additional comments: A Matlab GUI for interpretation of results is included in the distribution.
    Running time: The typical running time varies depending on the dimensionality of the parameter space, the complexity of the experiment, and how well the parameter space should be sampled. The running time for our simulations [3] with 15 free parameters at a Neutrino Factory with O(10) samples varied from a few hours to tens of hours.
    References:
    [1] P. Huber, M. Lindner, W. Winter, Comput. Phys. Comm. 167 (2005) 195, hep-ph/0407333.
    [2] P. Huber, J. Kopp, M. Lindner, M. Rolinec, W. Winter, Comput. Phys. Comm. 177 (2007) 432, hep-ph/0701187.
    [3] S. Antusch, M. Blennow, E. Fernandez-Martinez, J. Lopez-Pavon, arXiv:0903.3986 [hep-ph].
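    As a rough illustration of the sampling strategy MonteCUBES automates, here is a minimal Metropolis Markov Chain Monte Carlo sketch over a hypothetical two-parameter log-likelihood surface. The target function, starting point and step size are illustrative assumptions, not the MonteCUBES/GLoBES API:

```python
import math
import random

def log_likelihood(theta):
    """Toy chi^2-like surface standing in for an oscillation-experiment fit
    (hypothetical target; MonteCUBES itself evaluates GLoBES chi^2)."""
    x, y = theta
    return -0.5 * ((x - 0.5) ** 2 / 0.01 + (y - 2.5) ** 2 / 0.04)

def metropolis(logl, start, step, n_samples, seed=0):
    """Minimal Metropolis sampler: propose a Gaussian step, accept with
    probability min(1, L_new / L_old)."""
    rng = random.Random(seed)
    chain = [list(start)]
    current_ll = logl(start)
    for _ in range(n_samples - 1):
        proposal = [c + rng.gauss(0.0, step) for c in chain[-1]]
        ll = logl(proposal)
        if math.log(rng.random()) < ll - current_ll:
            chain.append(proposal)
            current_ll = ll
        else:
            chain.append(list(chain[-1]))
    return chain

chain = metropolis(log_likelihood, start=[0.0, 2.0], step=0.1, n_samples=5000)
burned = chain[1000:]                      # discard burn-in
mean_x = sum(s[0] for s in burned) / len(burned)
mean_y = sum(s[1] for s in burned) / len(burned)
```

The chain density approximates the posterior, so marginal means and credible regions fall out of simple averages over the chain; the cost does not grow exponentially with dimension, which is the point made in the Solution method above.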

  7. Parameter Optimization for Turbulent Reacting Flows Using Adjoints

    NASA Astrophysics Data System (ADS)

    Lapointe, Caelan; Hamlington, Peter E.

    2017-11-01

    The formulation of a new adjoint solver for topology optimization of turbulent reacting flows is presented. This solver provides novel configurations (e.g., geometries and operating conditions) based on desired system outcomes (i.e., objective functions) for complex reacting flow problems of practical interest. For many such problems, it would be desirable to know optimal values of design parameters (e.g., physical dimensions, fuel-oxidizer ratios, and inflow-outflow conditions) prior to real-world manufacture and testing, which can be expensive, time-consuming, and dangerous. However, computational optimization of these problems is made difficult by the complexity of most reacting flows, necessitating the use of gradient-based optimization techniques in order to explore a wide design space at manageable computational cost. The adjoint method is an attractive way to obtain the required gradients, because the cost of the method is determined by the dimension of the objective function rather than the size of the design space. Here, the formulation of a novel solver is outlined that enables gradient-based parameter optimization of turbulent reacting flows using the discrete adjoint method. Initial results and an outlook for future research directions are provided.
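    The adjoint property the abstract relies on, that the gradient cost scales with the number of objectives rather than the number of design parameters, can be sketched on a toy linear state equation. The diagonal system and objective below are illustrative assumptions, not the authors' solver:

```python
def solve_diag(A_diag, rhs):
    """Solve A u = rhs for a diagonal matrix A (keeps the sketch dependency-free)."""
    return [r / a for a, r in zip(A_diag, rhs)]

A_diag = [2.0, 4.0]   # state equation A u = b(p); A diagonal in this toy
c = [1.0, 1.0]        # objective J(p) = c . u(p)

def objective(p):
    # toy design-to-source map: b(p) = p
    u = solve_diag(A_diag, p)
    return sum(ci * ui for ci, ui in zip(c, u))

# One adjoint solve A^T lam = c yields the FULL gradient, independent of the
# number of design parameters: here dJ/dp_i = lam_i because db/dp = identity.
lam = solve_diag(A_diag, c)
adjoint_grad = lam

# Finite-difference check: costs one extra solve per parameter instead.
eps = 1e-6
base = [0.3, 0.7]
fd_grad = []
for i in range(len(base)):
    shifted = list(base)
    shifted[i] += eps
    fd_grad.append((objective(shifted) - objective(base)) / eps)
```

With thousands of design parameters the finite-difference loop would need thousands of forward solves, while the adjoint route still needs only one extra solve per objective.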

  8. Using Space Syntax to Assess Safety in Public Areas - Case Study of Tarbiat Pedestrian Area, Tabriz-Iran

    NASA Astrophysics Data System (ADS)

    Cihangir Çamur, Kübra; Roshani, Mehdi; Pirouzi, Sania

    2017-10-01

    In studying complex urban issues, simulation and modelling of public space use considerably help in determining and measuring factors such as urban safety. In this study, Depthmap software was used to determine the parameters of the spatial layout technique, and the Statistical Package for the Social Sciences (SPSS) was used to analyse and evaluate the views of pedestrians on public safety. Connectivity, integration, and depth of the area in the Tarbiat city blocks were measured using the Space Syntax method, and these parameters are presented as graphical and mathematical data. Combining the results obtained from the questionnaire and statistical analysis with the results of the spatial arrangement technique reveals the appropriate and inappropriate spaces for pedestrians. This method provides a useful and effective instrument for decision makers, planners, urban designers and programmers to evaluate public spaces in the city. Prior to physical modification of urban public spaces, space syntax simulates pedestrian safety, to be used as an analytical tool by city management. Finally, regarding the modelled parameters and the identification of different characteristics of the case, this study presents strategies and policies to increase the safety of pedestrians in Tarbiat, Tabriz.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, Carolyn L.; Guo, Hanqi; Peterka, Tom

    In type-II superconductors, the dynamics of magnetic flux vortices determine their transport properties. In the Ginzburg-Landau theory, vortices correspond to topological defects in the complex order parameter field. Earlier, in Phillips et al. [Phys. Rev. E 91, 023311 (2015)], we introduced a method for extracting vortices from the discretized complex order parameter field generated by a large-scale simulation of vortex matter. With this method, at a fixed time step, each vortex [simplistically, a one-dimensional (1D) curve in 3D space] can be represented as a connected graph extracted from the discretized field. Here we extend this method as a function of time as well. A vortex now corresponds to a 2D space-time sheet embedded in 4D space time that can be represented as a connected graph extracted from the discretized field over both space and time. Vortices that interact by merging or splitting correspond to disappearance and appearance of holes in the connected graph in the time direction. This method of tracking vortices, which makes no assumptions about the scale or behavior of the vortices, can track the vortices with a resolution as good as the discretization of the temporally evolving complex scalar field. Additionally, even details of the trajectory between time steps can be reconstructed from the connected graph. With this form of vortex tracking, the details of vortex dynamics in a model of a superconducting material can be understood in greater detail than previously possible.

  10. Information geometric methods for complexity

    NASA Astrophysics Data System (ADS)

    Felice, Domenico; Cafaro, Carlo; Mancini, Stefano

    2018-03-01

    Research on the use of information geometry (IG) in modern physics has witnessed significant advances recently. In this review article, we report on the utilization of IG methods to define measures of complexity in both classical and, whenever available, quantum physical settings. A paradigmatic example of a dramatic change in complexity is given by phase transitions (PTs). Hence, we review both global and local aspects of PTs described in terms of the scalar curvature of the parameter manifold and the components of the metric tensor, respectively. We also report on the behavior of geodesic paths on the parameter manifold used to gain insight into the dynamics of PTs. Going further, we survey measures of complexity arising in the geometric framework. In particular, we quantify complexity of networks in terms of the Riemannian volume of the parameter space of a statistical manifold associated with a given network. We are also concerned with complexity measures that account for the interactions of a given number of parts of a system that cannot be described in terms of a smaller number of parts of the system. Finally, we investigate complexity measures of entropic motion on curved statistical manifolds that arise from a probabilistic description of physical systems in the presence of limited information. The Kullback-Leibler divergence, the distance to an exponential family and volumes of curved parameter manifolds, are examples of essential IG notions exploited in our discussion of complexity. We conclude by discussing strengths, limits, and possible future applications of IG methods to the physics of complexity.
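    Of the IG notions listed, the Kullback-Leibler divergence is the easiest to make concrete. A minimal sketch using the standard closed form for univariate Gaussians (illustrative only; the review works with general statistical manifolds):

```python
import math

def kl_gaussian(mu_p, sig_p, mu_q, sig_q):
    """Closed-form Kullback-Leibler divergence KL(p || q) between two
    univariate normal distributions (standard textbook formula)."""
    return (math.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sig_q ** 2)
            - 0.5)

# KL vanishes for identical distributions and is asymmetric otherwise,
# which is why it is a divergence rather than a metric distance.
same = kl_gaussian(0.0, 1.0, 0.0, 1.0)
forward = kl_gaussian(0.0, 1.0, 1.0, 2.0)
reverse = kl_gaussian(1.0, 2.0, 0.0, 1.0)
```

The asymmetry (`forward != reverse`) is exactly the feature that distinguishes divergences from the Riemannian distances induced by the Fisher metric discussed in the review.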

  11. A Tool for Parameter-space Explorations

    NASA Astrophysics Data System (ADS)

    Murase, Yohsuke; Uchitane, Takeshi; Ito, Nobuyasu

    Software for managing simulation jobs and results, named "OACIS", is presented. It controls a large number of simulation jobs executed on various remote servers, keeps the results in an organized way, and manages the analyses on these results. The software has a web-browser front end, and users can easily submit various jobs to appropriate remote hosts from a web browser. After these jobs are finished, all the result files are automatically downloaded from the computational hosts and stored in a traceable way, together with logs of the date, host, and elapsed time of the jobs. Some visualization functions are also provided so that users can easily grasp an overview of the results distributed in a high-dimensional parameter space. OACIS is thus especially beneficial for complex simulation models having many parameters, for which a lot of parameter searches are required. By using the OACIS API, it is easy to write code that automates parameter selection depending on previous simulation results. A few examples of automated parameter selection are also demonstrated.
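    A hypothetical sketch of the kind of result-driven parameter selection such an API enables: each round submits the neighbors of the current best point and keeps the best result. The `run_simulation` stand-in and the greedy hill-climbing rule are assumptions for illustration, not the actual OACIS API:

```python
def run_simulation(params):
    """Stand-in for a job submitted to a remote host (hypothetical model;
    a real setup would call the job-management API instead)."""
    x, y = params
    return -((x - 3) ** 2 + (y - 7) ** 2)   # score to maximize

def neighbors(params, grid):
    x, y = params
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]

# Greedy hill-climbing over a 10x10 integer grid: submit jobs for the
# unexplored neighbors of the current best point, collect results, repeat.
grid = {(i, j) for i in range(10) for j in range(10)}
results = {}
current = (0, 0)
results[current] = run_simulation(current)
while True:
    for p in neighbors(current, grid):
        if p not in results:
            results[p] = run_simulation(p)
    best = max(results, key=results.get)
    if best == current:
        break
    current = best
```

The search reaches the optimum after evaluating only a fraction of the grid, which is the benefit of driving parameter selection from previous results instead of sweeping the whole space.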

  12. Parameter Estimation for Geoscience Applications Using a Measure-Theoretic Approach

    NASA Astrophysics Data System (ADS)

    Dawson, C.; Butler, T.; Mattis, S. A.; Graham, L.; Westerink, J. J.; Vesselinov, V. V.; Estep, D.

    2016-12-01

    Effective modeling of complex physical systems arising in the geosciences depends on knowing parameters which are often difficult or impossible to measure in situ. In this talk we focus on two such problems: estimating parameters for groundwater flow and contaminant transport, and estimating parameters within a coastal ocean model. The approach we will describe, proposed by collaborators D. Estep, T. Butler and others, is a novel stochastic inversion technique based on measure theory. In this approach, given a probability space on certain observable quantities of interest, one searches for the sets of highest probability in parameter space which give rise to these observables. When viewed as mappings between sets, the stochastic inversion problem is well-posed in certain settings, but there are computational challenges related to the set construction. We will focus the talk on estimating scalar parameters and fields in a contaminant transport setting, and on estimating bottom friction in a complicated near-shore coastal application.
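    A minimal sketch of the set-valued inversion idea, assuming a toy one-parameter map: sample the parameter space, push samples through the model, and keep the inverse image of a high-probability observable set. The map and interval below are illustrative, far simpler than the groundwater and coastal models in the talk:

```python
import random

def model(k):
    """Toy parameter-to-observable map Q(k) (hypothetical stand-in for a
    groundwater or coastal forward model)."""
    return k * k

# Suppose the observable is known to lie in [4, 9] with high probability.
# Sample the parameter space uniformly and keep the inverse image of that
# set: the kept samples approximate Q^{-1}([4, 9]) in parameter space.
rng = random.Random(1)
samples = [rng.uniform(-4.0, 4.0) for _ in range(20000)]
kept = [k for k in samples if 4.0 <= model(k) <= 9.0]

# Here the inverse image is two disjoint intervals, [-3, -2] and [2, 3]:
# the answer is a SET of parameters, not a single best-fit value.
frac = len(kept) / len(samples)
```

The disconnected inverse image is the feature that makes the problem set-valued: a point estimator would have to pick one branch and silently discard the other.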

  13. Front and pulse solutions for the complex Ginzburg-Landau equation with higher-order terms.

    PubMed

    Tian, Huiping; Li, Zhonghao; Tian, Jinping; Zhou, Guosheng

    2002-12-01

    We investigate the one-dimensional complex Ginzburg-Landau equation with higher-order terms and discuss their influence on the multiplicity of solutions. An exact analytic front solution is presented. By stability analysis for the original partial differential equation, we derive its necessary stability condition for amplitude perturbations. This condition, together with the exact front solution, determines the region of parameter space where the uniformly translating front solution can exist. In addition, stable pulses, chaotic pulses, and attenuation pulses generally appear if the parameters are outside this range. Finally, applying these analyses numerically to the optical transmission system, we find that stable transmission of optical pulses can be achieved if the parameters are appropriately chosen.

  14. Results of the Irkutsk Incoherent Scattering Radar for space debris studies in 2013

    NASA Astrophysics Data System (ADS)

    Lebedev, Valentin; Kushnarev, Dmitriy; Nevidimov, Nikolay

    We present results of space object (SO) registration obtained with the Irkutsk Incoherent Scattering Radar (IISR) in June 2013 during regular ionospheric measurements. After the technological modernization carried out, the radar diagnostics for determining SO characteristics (range, beam velocity, azimuth angle, elevation, and signal amplitude) were improved, and simultaneous measurement of ionospheric and SO parameters is now possible. The new IISR hardware-software complex can operate in ionospheric-measurement mode on up to 1000 SO passes per day and can register objects 10 cm in size at ranges of 800-900 km.

  15. Parameter estimation uncertainty: Comparing apples and apples?

    NASA Astrophysics Data System (ADS)

    Hart, D.; Yoon, H.; McKenna, S. A.

    2012-12-01

    Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
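    The null-space idea above can be sketched on a toy underdetermined calibration (one datum, two parameters). The linear model is an illustrative assumption, far simpler than the Culebra dolomite case:

```python
import random

def observation(p):
    """Toy forward model with one datum and two parameters, so the
    calibration is underdetermined (hypothetical stand-in)."""
    return p[0] + p[1]

calibrated = [2.0, 3.0]           # a single calibrated field: fits obs = 5
target = observation(calibrated)

# For this linear model the Jacobian is [1, 1]; its null space is spanned
# by (1, -1). NSMC-style: hold the solution-space component fixed and
# Monte Carlo sample the null-space component.
rng = random.Random(7)
ensemble = []
for _ in range(200):
    t = rng.gauss(0.0, 1.0)
    ensemble.append([calibrated[0] + t, calibrated[1] - t])

# Every ensemble member still reproduces the datum...
misfits = [abs(observation(p) - target) for p in ensemble]
# ...but a prediction that weights the parameters differently still varies.
predictions = [p[0] - p[1] for p in ensemble]
spread = max(predictions) - min(predictions)
```

The zero misfits with nonzero prediction spread illustrate why NSMC is cheap (no re-calibration) and also why the text finds the resulting predictive distribution anchored to the single calibrated field.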

  16. How to couple identical ring oscillators to get quasiperiodicity, extended chaos, multistability, and the loss of symmetry

    NASA Astrophysics Data System (ADS)

    Hellen, Edward H.; Volkov, Evgeny

    2018-09-01

    We study the dynamical regimes demonstrated by a pair of identical 3-element ring oscillators (a reduced version of the synthetic 3-gene genetic repressilator) coupled using the design of the 'quorum sensing' (QS) process natural for interbacterial communication. In this work, QS is implemented as an additional network incorporating elements of the ring as both the source and the activation target of the fast-diffusion QS signal. This version of indirect nonlinear coupling, in cooperation with a reasonable extension of the parameters which control the properties of the isolated oscillators, exhibits the formation of a very rich array of attractors. Using a parameter space defined by the individual oscillator amplitude and the coupling strength, we found an extended area of parameter space where the identical oscillators demonstrate quasiperiodicity, which evolves to chaos via the period doubling of either resonant limit cycles or complex antiphase symmetric limit cycles with five winding numbers. The symmetric chaos extends over large parameter areas up to its loss of stability, followed by a system transition to an unexpected mode: an asymmetric limit cycle with a winding number of 1:2. In turn, after long evolution across the parameter space, this cycle demonstrates a period-doubling cascade which restores the symmetry of the dynamics by formation of symmetric chaos, which nevertheless preserves the memory of the asymmetric limit cycles in the form of stochastically alternating "polarization" of the time series. All stable attractors coexist with some others, forming remarkable and complex multistability, including the coexistence of torus and limit cycles, chaos and regular attractors, and symmetric and asymmetric regimes. We traced the paths and bifurcations leading to all areas of chaos and present a detailed map of all transformations of the dynamics.

  17. Control of Space-Based Electron Beam Free Form Fabrication

    NASA Technical Reports Server (NTRS)

    Seufzer, W. J.; Taminger, K. M.

    2007-01-01

    Engineering a closed-loop control system for an electron beam welder for space-based additive manufacturing is challenging. For Earth- and space-based applications, components must work in a vacuum, and optical components become occluded by metal vapor deposition. For extraterrestrial applications, added components increase launch weight, complexity, and space-flight certification effort. Here we present a software tool that closely couples path planning and E-beam parameter controls into the build process to increase flexibility. In an environment where data collection hinders real-time control, another approach is considered that will still yield a high-quality build.

  18. Ionospheric effects during severe space weather events seen in ionospheric service data products

    NASA Astrophysics Data System (ADS)

    Jakowski, Norbert; Danielides, Michael; Mayer, Christoph; Borries, Claudia

    Space weather effects are closely related to complex perturbation processes in the magnetosphere-ionosphere-thermosphere system, initiated by enhanced solar energy input. To understand and model complex space weather processes, different views on the same subject are helpful. One of the key ionospheric parameters is the Total Electron Content (TEC), which provides a first-order approximation of the ionospheric range error in Global Navigation Satellite System (GNSS) applications. Additionally, horizontal gradients and the time rate of change of TEC are important for estimating the perturbation degree of the ionosphere. TEC maps can effectively be generated using ground-based GNSS measurements from global receiver networks. Whereas ground-based GNSS measurements provide good horizontal resolution, space-based radio occultation measurements can complete the view by providing information on the vertical plasma density distribution. The combination of ground-based TEC and vertical sounding measurements provides essential information on the shape of the vertical electron density profile by computing the equivalent slab thickness at the ionosonde station site. Since radio beacon measurements at 150/400 MHz are well suited to trace the horizontal structure of Travelling Ionospheric Disturbances (TIDs), these data products essentially complete GNSS-based TEC mapping results. Radio scintillation data products, characterising small-scale irregularities in the ionosphere, are useful for estimating the continuity and availability of transionospheric radio signals. The different data products are addressed while discussing severe space weather events in the ionosphere, e.g. the events in October/November 2003. The complementary view of different near-real-time service data products is helpful to better understand the complex dynamics of ionospheric perturbation processes and to forecast the development of parameters customers are interested in.

  19. New method for rekindling the nonlinear solitary waves in Maxwellian complex space plasma

    NASA Astrophysics Data System (ADS)

    Das, G. C.; Sarma, Ridip

    2018-04-01

    Our interest is to study nonlinear wave phenomena in complex plasma constituents with Maxwellian electrons and ions. The main reason for this consideration is to exhibit the effects of dust charge fluctuations on acoustic modes, evaluated by the use of a new method. A special (G'/G) method has been developed to yield the coherent features of nonlinear waves through the derivation of a Korteweg-de Vries equation, and it successfully reveals the different natures of the solitons recognized in space plasmas. Evolutions are shown with the input of appropriate typical plasma parameters to support our theoretical observations. All conclusions are in good accordance with actual occurrences and could be of interest for further investigations in experiments and satellite observations in space. In this paper, we present not only a model that exhibits nonlinear solitary wave propagation but also a new mathematical method for its execution.
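    The abstract reports a derived Korteweg-de Vries (KdV) equation; for orientation, the canonical KdV form and its one-soliton solution are shown below in standard normalization (the paper's dust-plasma derivation will carry different, physics-dependent coefficients):

```latex
\partial_t u + 6\,u\,\partial_x u + \partial_x^{3} u = 0,
\qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^{2}\!\left(\frac{\sqrt{c}}{2}\,(x - c\,t)\right),
\quad c > 0 .
```

The amplitude scales with the speed c and the width with 1/sqrt(c), the hallmark scaling used to identify KdV solitons in plasma data.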

  20. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  1. Nested Sampling for Bayesian Model Comparison in the Context of Salmonella Disease Dynamics

    PubMed Central

    Dybowski, Richard; McKinley, Trevelyan J.; Mastroeni, Pietro; Restif, Olivier

    2013-01-01

    Understanding the mechanisms underlying the observed dynamics of complex biological systems requires the statistical assessment and comparison of multiple alternative models. Although this has traditionally been done using maximum likelihood-based methods such as Akaike's Information Criterion (AIC), Bayesian methods have gained in popularity because they provide more informative output in the form of posterior probability distributions. However, comparison between multiple models in a Bayesian framework is made difficult by the computational cost of numerical integration over large parameter spaces. A new, efficient method for the computation of posterior probabilities has recently been proposed and applied to complex problems from the physical sciences. Here we demonstrate how nested sampling can be used for inference and model comparison in biological sciences. We present a reanalysis of data from experimental infection of mice with Salmonella enterica showing the distribution of bacteria in liver cells. In addition to confirming the main finding of the original analysis, which relied on AIC, our approach provides: (a) integration across the parameter space, (b) estimation of the posterior parameter distributions (with visualisations of parameter correlations), and (c) estimation of the posterior predictive distributions for goodness-of-fit assessments of the models. The goodness-of-fit results suggest that alternative mechanistic models and a relaxation of the quasi-stationary assumption should be considered. PMID:24376528
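    A minimal nested-sampling loop, assuming a uniform prior on [0, 1] and a toy Gaussian likelihood (illustrative only; production samplers replace the brute-force rejection step with constrained random walks):

```python
import math
import random

def loglike(x):
    """Toy Gaussian log-likelihood (mean 0.5, sigma 0.1) over a uniform prior."""
    return -0.5 * ((x - 0.5) / 0.1) ** 2

def logaddexp(a, b):
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

rng = random.Random(3)
n_live = 100
live = [rng.random() for _ in range(n_live)]

log_z = -1e300                                   # evidence accumulator (log space)
log_width = math.log(1.0 - math.exp(-1.0 / n_live))  # first prior-volume shell

for _ in range(800):
    worst = min(live, key=loglike)
    threshold = loglike(worst)
    # contribution of the discarded point: shell width times its likelihood
    log_z = logaddexp(log_z, log_width + threshold)
    log_width -= 1.0 / n_live                    # prior volume shrinks by e^(-1/N)
    # replace the worst point by a fresh prior draw above the threshold
    while True:
        x = rng.random()
        if loglike(x) > threshold:
            live.append(x)
            live.remove(worst)
            break

# Remaining live-point mass is ~e^(-8) here, negligible for this sketch.
evidence = math.exp(log_z)   # analytic value: 0.1 * sqrt(2*pi) ~ 0.2507
```

The by-product the abstract emphasizes comes for free: the sequence of discarded points, weighted by their shell widths, is a posterior sample usable for parameter estimation and goodness-of-fit checks.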

  2. Tool Support for Parametric Analysis of Large Software Simulation Systems

    NASA Technical Reports Server (NTRS)

    Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony

    2008-01-01

    The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
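    The n-factor combinatorial idea can be sketched for n = 2 (pairwise coverage): a small fixed set of cases can still exercise every pair of parameter values. Parameter names and levels below are hypothetical, and the 9-row design is a standard orthogonal array, not the tool's own generator:

```python
import itertools

# Three parameters, three settings each: the full factorial space has
# 3**3 = 27 cases, but 9 cases suffice to cover every PAIR of values.
levels = {"gain": [0.1, 0.5, 0.9],
          "mode": ["A", "B", "C"],
          "rate": [10, 20, 30]}   # hypothetical parameter names

def pairs_covered(cases):
    """Set of (position, value, position, value) pairs hit by the cases."""
    covered = set()
    for case in cases:
        for (i, a), (j, b) in itertools.combinations(enumerate(case), 2):
            covered.add((i, a, j, b))
    return covered

names = list(levels)
full = list(itertools.product(*levels.values()))
all_pairs = pairs_covered(full)

# A known pairwise-covering design for three 3-level parameters: rows of
# the orthogonal array OA(9, 3^3, 2), written as level indices (c = a+b mod 3).
oa = [(0, 0, 0), (0, 1, 1), (0, 2, 2),
      (1, 0, 1), (1, 1, 2), (1, 2, 0),
      (2, 0, 2), (2, 1, 0), (2, 2, 1)]
subset = [tuple(levels[n][k] for n, k in zip(names, row)) for row in oa]
subset_pairs = pairs_covered(subset)
```

With more parameters the gap widens rapidly: pairwise designs grow roughly logarithmically with the number of parameters, while the full factorial grows exponentially, which is why n-factor variation keeps the case count manageable.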

  3. Crystallization and X-ray diffraction analysis of a putative bacterial class I labdane-related diterpene synthase.

    PubMed

    Serrano-Posada, Hugo; Centeno-Leija, Sara; Rojas-Trejo, Sonia; Stojanoff, Vivian; Rodríguez-Sanoja, Romina; Rudiño-Piñera, Enrique; Sánchez, Sergio

    2015-09-01

    Labdane-related diterpenoids are natural products with potential pharmaceutical applications that are rarely found in bacteria. Here, a putative class I labdane-related diterpene synthase (LrdC) identified by genome mining in a streptomycete was successfully crystallized using the microbatch method. Crystals of the LrdC enzyme were obtained in a holo form with its natural cofactor Mg(2+) (LrdC-Mg(2+)) and in complex with inorganic pyrophosphate (PPi) (LrdC-Mg(2+)-PPi). Crystals of native LrdC-Mg(2+) diffracted to 2.50 Å resolution and belonged to the trigonal space group P3221, with unit-cell parameters a = b = 107.1, c = 89.2 Å. Crystals of the LrdC-Mg(2+)-PPi complex grown under the same conditions as the native enzyme with PEG 8000 diffracted to 2.36 Å resolution and also belonged to the trigonal space group P3221. Crystals of the LrdC-Mg(2+)-PPi complex grown in a second crystallization condition with PEG 3350 diffracted to 2.57 Å resolution and belonged to the monoclinic space group P21, with unit-cell parameters a = 49.9, b = 104.1, c = 66.5 Å, β = 111.4°. The structure was determined by the single-wavelength anomalous dispersion (SAD) technique using the osmium signal from a potassium hexachloroosmate(IV) derivative.

  4. Packing of Russian doll clusters to form a nanometer-scale CsCl-type compound in a Cr–Zn–Sn complex metallic alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiwei; Cava, Robert J.; Miller, Gordon J.

    A new cubic complex metallic alloy phase, Cr 22Zn 72Sn 24, with a lattice parameter near 2.5 nm was discovered in crystals grown using a Zn/Sn flux. The structure consists of Russian doll clusters or a 3-d network of Cr-centered icosahedra (shown) with bcc-metal fragments in void spaces.

  5. Packing of Russian doll clusters to form a nanometer-scale CsCl-type compound in a Cr–Zn–Sn complex metallic alloy

    DOE PAGES

    Xie, Weiwei; Cava, Robert J.; Miller, Gordon J.

    2017-07-03

    A new cubic complex metallic alloy phase, Cr 22Zn 72Sn 24, with a lattice parameter near 2.5 nm was discovered in crystals grown using a Zn/Sn flux. The structure consists of Russian doll clusters or a 3-d network of Cr-centered icosahedra (shown) with bcc-metal fragments in void spaces.

  6. Crystallization of a 2:2 complex of granulocyte-colony stimulating factor (GCSF) with the ligand-binding region of the GCSF receptor

    PubMed Central

    Honjo, Eijiro; Tamada, Taro; Maeda, Yoshitake; Koshiba, Takumi; Matsukura, Yasuko; Okamoto, Tomoyuki; Ishibashi, Matsujiro; Tokunaga, Masao; Kuroki, Ryota

    2005-01-01

    The granulocyte-colony stimulating factor (GCSF) receptor receives signals for regulating the maturation, proliferation and differentiation of the precursor cells of neutrophilic granulocytes. The signalling complex composed of two GCSFs (GCSF, 19 kDa) and two GCSF receptors (GCSFR, 34 kDa) consisting of an Ig-like domain and a cytokine-receptor homologous (CRH) domain was crystallized. A crystal of the complex was grown in 1.0 M sodium formate and 0.1 M sodium acetate pH 4.6 and belongs to space group P41212 (or its enantiomorph P43212), with unit-cell parameters a = b = 110.1, c = 331.8 Å. Unfortunately, this crystal form did not diffract beyond 5 Å resolution. Since the heterogeneity of GCSF receptor appeared to prevent the growth of good-quality crystals, the GCSF receptor was fractionated by anion-exchange chromatography. Crystals of the GCSF–fractionated GCSF receptor complex were grown as a new crystal form in 0.2 M ammonium phosphate. This new crystal form diffracted to beyond 3.0 Å resolution and belonged to space group P3121 (or its enantiomorph P3221), with unit-cell parameters a = b = 134.8, c = 105.7 Å. PMID:16511159

  7. Equivariant Verlinde Formula from Fivebranes and Vortices

    NASA Astrophysics Data System (ADS)

    Gukov, Sergei; Pei, Du

    2017-10-01

    We study complex Chern-Simons theory on a Seifert manifold M3 by embedding it into string theory. We show that complex Chern-Simons theory on M3 is equivalent to a topologically twisted supersymmetric theory, and its partition function can be naturally regularized by turning on a mass parameter. We find that the dimensional reduction of this theory to 2d gives the low-energy dynamics of vortices in four-dimensional gauge theory, a fact apparently overlooked in the vortex literature. We also generalize the relations between (1) the Verlinde algebra, (2) quantum cohomology of the Grassmannian, (3) Chern-Simons theory on Σ × S^1 and (4) the index of a spin-c Dirac operator on the moduli space of flat connections to a new set of relations between (1) the "equivariant Verlinde algebra" for a complex group, (2) the equivariant quantum K-theory of the vortex moduli space, (3) complex Chern-Simons theory on Σ × S^1 and (4) the equivariant index of a spin-c Dirac operator on the moduli space of Higgs bundles.

  8. Internal thermotopography and shifts in general thermal balance in man under special heat transfer conditions

    NASA Technical Reports Server (NTRS)

    Gorodinskiy, S. M.; Gramenitskiy, P. M.; Kuznets, Y. I.; Ozerov, O. Y.; Yakovleva, E. V.; Groza, P.; Kozlovskiy, S.; Naremski, Y.

    1974-01-01

    Thermal regulation for astronauts working in pressure suits in open space provides for protection by a system of artificial heat removal and compensation to counteract possible changes in the heat regulating function of the human body that occur under the complex effects of space flight conditions. Most important of these factors are prolonged weightlessness, prolonged limitation of motor activity, and possible deviations of microclimatic environmental parameters.

  9. Computational exploration of neuron and neural network models in neurobiology.

    PubMed

    Prinz, Astrid A

    2007-01-01

    The electrical activity of individual neurons and neuronal networks is shaped by the complex interplay of a large number of non-linear processes, including the voltage-dependent gating of ion channels and the activation of synaptic receptors. These complex dynamics make it difficult to understand how individual neuron or network parameters-such as the number of ion channels of a given type in a neuron's membrane or the strength of a particular synapse-influence neural system function. Systematic exploration of cellular or network model parameter spaces by computational brute force can overcome this difficulty and generate comprehensive data sets that contain information about neuron or network behavior for many different combinations of parameters. Searching such data sets for parameter combinations that produce functional neuron or network output provides insights into how narrowly different neural system parameters have to be tuned to produce a desired behavior. This chapter describes the construction and analysis of databases of neuron or neuronal network models and describes some of the advantages and downsides of such exploration methods.
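
As a minimal sketch of the brute-force exploration described above, the script below scans a two-parameter grid of a toy leaky integrate-and-fire neuron and records which combinations produce spiking. The model, parameter values, and "functional" criterion are illustrative assumptions, not taken from the chapter.

```python
import itertools

def lif_spike_count(i_ext, g_leak, t_max=200.0, dt=0.1):
    """Count spikes of a toy leaky integrate-and-fire neuron."""
    v, v_rest, v_thresh, v_reset = -65.0, -65.0, -50.0, -65.0
    spikes, t = 0, 0.0
    while t < t_max:
        v += dt * (-g_leak * (v - v_rest) + i_ext)  # Euler step
        if v >= v_thresh:
            spikes += 1
            v = v_reset
        t += dt
    return spikes

# Brute-force scan: one database row per parameter combination.
database = []
for i_ext, g_leak in itertools.product([0.5, 1.0, 2.0, 4.0], [0.1, 0.2, 0.4]):
    n = lif_spike_count(i_ext, g_leak)
    database.append({"i_ext": i_ext, "g_leak": g_leak,
                     "spikes": n, "functional": n > 0})

# The "functional" subset shows which parameter combinations spike at all.
tuned = [row for row in database if row["functional"]]
```

Searching `tuned` for the combinations that spike shows directly how narrowly the two parameters must be co-tuned: only cells with a high drive-to-leak ratio fire.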

  10. Colorimetric detection of hydrogen peroxide by dioxido-vanadium(V) complex containing hydrazone ligand: synthesis and crystal structure

    NASA Astrophysics Data System (ADS)

    Kurbah, Sunshine D.; Syiemlieh, Ibanphylla; Lal, Ram A.

    2018-03-01

A dioxido-vanadium(V) complex was synthesized in good yield and characterized by IR, UV-visible and 1H NMR spectroscopy. Single-crystal X-ray crystallography was used to determine the structure of the complex. The complex crystallized in the monoclinic space group P21/c with cell parameters a (Å) = 39.516(5), b (Å) = 6.2571(11), c (Å) = 17.424(2), α (°) = 90, β (°) = 102.668(12) and γ (°) = 90. The hydrazone ligand coordinates to the metal ion in a tridentate fashion through its -ONO- donor atoms, forming a distorted square-pyramidal geometry around the metal ion.
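
For reference, the reported cell parameters fix the unit-cell volume; for a monoclinic cell this is V = a·b·c·sin β. A small sketch (the function name is ours):

```python
import math

def monoclinic_volume(a, b, c, beta_deg):
    """Unit-cell volume of a monoclinic lattice: V = a*b*c*sin(beta)."""
    return a * b * c * math.sin(math.radians(beta_deg))

# Cell parameters reported for the dioxido-vanadium(V) complex above,
# giving the volume in cubic angstroms (roughly 4.2e3 A^3).
v = monoclinic_volume(39.516, 6.2571, 17.424, 102.668)
```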

  11. Berry phases for Landau Hamiltonians on deformed tori

    NASA Astrophysics Data System (ADS)

    Lévay, Péter

    1995-06-01

Parametrized families of Landau Hamiltonians are introduced, where the parameter space is the Teichmüller space (topologically the complex upper half plane) corresponding to deformations of tori. The underlying SO(2,1) symmetry of the families enables an explicit calculation of the Berry phases picked up by the eigenstates when the torus is slowly deformed. It is also shown that apart from these phases that are local in origin, there are global non-Abelian ones too, related to the hidden discrete symmetry group Γϑ (the theta group, which is a subgroup of the modular group) of the families. The induced Riemannian structure on the parameter space is the usual Poincaré metric on the upper half plane of constant negative curvature. Due to the discrete symmetry Γϑ the geodesic motion restricted to the fundamental domain of this group is chaotic.
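
The Poincaré metric mentioned above has a closed-form geodesic distance, d(z1, z2) = arccosh(1 + |z1 - z2|^2 / (2·Im z1·Im z2)); a quick illustration:

```python
import math

def poincare_distance(z1, z2):
    """Geodesic distance in the Poincare upper half-plane model."""
    return math.acosh(1.0 + abs(z1 - z2) ** 2 / (2.0 * z1.imag * z2.imag))

# Points on a vertical geodesic: the distance reduces to log(y2 / y1).
d = poincare_distance(1j, 2j)
```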

  12. Parameter Validation for Evaluation of Spaceflight Hardware Reusability

    NASA Technical Reports Server (NTRS)

    Childress-Thompson, Rhonda; Dale, Thomas L.; Farrington, Phillip

    2017-01-01

Within recent years, there has been an influx of companies around the world pursuing reusable systems for space flight. Much like NASA, many of these new entrants are learning that reusable systems are complex and difficult to achieve. For instance, in its first attempts to retrieve spaceflight hardware for future reuse, SpaceX tried to land on a barge at sea and crash-landed. As this new generation of launch developers continues to develop concepts for reusable systems, a systematic approach for determining the most effective systems for reuse is paramount. Three factors that influence the effective implementation of reusability are cost, operability and reliability; a method that integrates these factors into the decision-making process must therefore be used to determine whether hardware used in space flight should be reused or discarded. Previous research has identified seven features that contribute to the successful implementation of reusability for space flight applications, defined reusability for space flight applications, highlighted the importance of reusability, and presented areas that hinder its successful implementation. The next step is to ensure that the list of reusability parameters previously identified is comprehensive and that any duplication is removed or consolidated. The characteristics that qualify the seven features as good indicators of successful reuse are identified and then assessed using multiattribute decision making. Next, discriminators in the form of metrics or descriptors are assigned to each parameter. This paper explains the approach used to evaluate these parameters, define the Measures of Effectiveness (MOEs) for reusability, and quantify these parameters. Using the MOEs, each parameter is assessed for its contribution to the reusability of the hardware. Potential data sources needed to validate the approach are also identified.
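
One common multiattribute scheme consistent with the kind of analysis described above is a weighted sum of normalized scores. The sketch below is purely illustrative; the weights, candidate options and score values are invented, not taken from the paper.

```python
# Hypothetical attribute weights for the three reusability factors.
weights = {"cost": 0.4, "operability": 0.3, "reliability": 0.3}

# Hypothetical candidates with scores normalized to [0, 1].
candidates = {
    "reuse engine": {"cost": 0.8, "operability": 0.6, "reliability": 0.7},
    "discard engine": {"cost": 0.3, "operability": 0.9, "reliability": 0.9},
}

def moe_score(scores, weights):
    """Weighted-sum Measure of Effectiveness over normalized attribute scores."""
    return sum(weights[k] * scores[k] for k in weights)

# Rank candidates by their aggregate MOE, best first.
ranked = sorted(candidates, key=lambda c: moe_score(candidates[c], weights),
                reverse=True)
```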

  13. Using SpaceClaimTD Direct for Modeling Components with Complex Geometries for the Thermal Desktop-Based Advanced Stirling Radioisotope Generator Model

    NASA Technical Reports Server (NTRS)

    Fabanich, William A., Jr.

    2014-01-01

    SpaceClaim/TD Direct has been used extensively in the development of the Advanced Stirling Radioisotope Generator (ASRG) thermal model. This paper outlines the workflow for that aspect of the task and includes proposed best practices and lessons learned. The ASRG thermal model was developed to predict component temperatures and power output and to provide insight into the prime contractor's thermal modeling efforts. The insulation blocks, heat collectors, and cold side adapter flanges (CSAFs) were modeled with this approach. The model was constructed using mostly TD finite difference (FD) surfaces/solids. However, some complex geometry could not be reproduced with TD primitives while maintaining the desired degree of geometric fidelity. Using SpaceClaim permitted the import of original CAD files and enabled the defeaturing/repair of those geometries. TD Direct (a SpaceClaim add-on from CRTech) adds features that allowed the "mark-up" of that geometry. These so-called "mark-ups" control how finite element (FE) meshes are to be generated through the "tagging" of features (e.g. edges, solids, surfaces). These tags represent parameters that include: submodels, material properties, material orienters, optical properties, and radiation analysis groups. TD aliases were used for most tags to allow analysis to be performed with a variety of parameter values. "Domain-tags" were also attached to individual and groups of surfaces and solids to allow them to be used later within TD to populate objects like, for example, heaters and contactors. These tools allow the user to make changes to the geometry in SpaceClaim and then easily synchronize the mesh in TD without having to redefine the objects each time as one would if using TDMesher. The use of SpaceClaim/TD Direct helps simplify the process for importing existing geometries and in the creation of high fidelity FE meshes to represent complex parts. It also saves time and effort in the subsequent analysis.

  14. Using SpaceClaim/TD Direct for Modeling Components with Complex Geometries for the Thermal Desktop-Based Advanced Stirling Radioisotope Generator Model

    NASA Technical Reports Server (NTRS)

    Fabanich, William

    2014-01-01

SpaceClaim/TD Direct has been used extensively in the development of the Advanced Stirling Radioisotope Generator (ASRG) thermal model. This paper outlines the workflow for that aspect of the task and includes proposed best practices and lessons learned. The ASRG thermal model was developed to predict component temperatures and power output and to provide insight into the prime contractor's thermal modeling efforts. The insulation blocks, heat collectors, and cold side adapter flanges (CSAFs) were modeled with this approach. The model was constructed using mostly TD finite difference (FD) surfaces/solids. However, some complex geometry could not be reproduced with TD primitives while maintaining the desired degree of geometric fidelity. Using SpaceClaim permitted the import of original CAD files and enabled the defeaturing/repair of those geometries. TD Direct (a SpaceClaim add-on from CRTech) adds features that allowed the "mark-up" of that geometry. These so-called "mark-ups" control how finite element (FE) meshes were generated and allowed the "tagging" of features (e.g. edges, solids, surfaces). These tags represent parameters that include: submodels, material properties, material orienters, optical properties, and radiation analysis groups. TD aliases were used for most tags to allow analysis to be performed with a variety of parameter values. "Domain-tags" were also attached to individual and groups of surfaces and solids to allow them to be used later within TD to populate objects like, for example, heaters and contactors. These tools allow the user to make changes to the geometry in SpaceClaim and then easily synchronize the mesh in TD without having to redefine these objects each time as one would if using TD Mesher. The use of SpaceClaim/TD Direct has helped simplify the process for importing existing geometries and the creation of high fidelity FE meshes to represent complex parts. It has also saved time and effort in the subsequent analysis.

  15. Digit replacement: A generic map for nonlinear dynamical systems.

    PubMed

    García-Morales, Vladimir

    2016-09-01

A simple discontinuous map is proposed as a generic model for nonlinear dynamical systems. The orbit of the map admits exact solutions for wide regions in parameter space, and the method employed (digit manipulation) allows the mathematical design of useful signals, such as regular or aperiodic oscillations with specific waveforms, the construction of complex attractors with nontrivial properties, as well as the coexistence of different basins of attraction in phase space with different qualitative properties. A detailed analysis of the dynamical behavior of the map suggests how the latter can be used in the modeling of complex nonlinear dynamics including, e.g., aperiodic nonchaotic attractors and the hierarchical deposition of grains of different sizes on a surface.
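
The paper's specific map is not reproduced here, but the flavor of "digit manipulation" can be illustrated with the simplest digit-shift map, x → 10x mod 1, which deletes the leading decimal digit; exact rational arithmetic makes a designed period-6 orbit explicit:

```python
from fractions import Fraction

def decimal_shift(x):
    """The digit-shift map x -> 10x mod 1: deletes the leading decimal digit."""
    y = 10 * x
    return y - int(y)

# Exact rational arithmetic exposes the orbit designed by the digit string:
# 1/7 = 0.142857 142857 ... has a period-6 repeating decimal expansion.
x = Fraction(1, 7)
orbit = [x]
for _ in range(6):
    orbit.append(decimal_shift(orbit[-1]))
```

The orbit cycles through 1/7, 3/7, 2/7, 6/7, 4/7, 5/7 and returns to 1/7, exactly as the digit string predicts.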

  16. Imitate or innovate: Competition of strategy updating attitudes in spatial social dilemma games

    NASA Astrophysics Data System (ADS)

    Danku, Zsuzsa; Wang, Zhen; Szolnoki, Attila

    2018-01-01

Evolution is based on the assumption that competing players update their strategies to increase their individual payoffs. However, while the updating method applied can differ, most previous works proposed uniform models in which all players revise their strategies in the same way. In this work we explore how an imitation-based (learning) attitude and an innovation-based (myopic best-response) attitude compete for space in a complex model where both attitudes are available. In the absence of an additional cost, the best-response trait practically dominates the whole snow-drift game parameter space, in agreement with the average payoff difference of the basic models. When an additional cost is involved, the imitation attitude can gradually invade the whole parameter space, but this transition happens in a highly nontrivial way. The roles of the competing attitudes are reversed in the stag-hunt parameter space, where imitation is more successful in general. Interestingly, a four-state solution can be observed for the latter game, a consequence of an emerging cyclic dominance between the possible states. These phenomena can be understood by analyzing the microscopic invasion processes, which reveals the unequal propagation velocities of strategies and attitudes.
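
The two update attitudes can be sketched for a toy snow-drift game as follows; the payoff values and the single-site update rules are illustrative assumptions, not the authors' full spatial model:

```python
# Toy snow-drift payoffs (benefit b = 1, cost c = 0.6); values illustrative.
# Strategies: 1 = cooperate, 0 = defect.
b, c = 1.0, 0.6
payoff = {(1, 1): b - c / 2, (1, 0): b - c, (0, 1): b, (0, 0): 0.0}

def best_response(neighbors):
    """Myopic best response: strategy maximizing total payoff vs. neighbors."""
    return max((0, 1), key=lambda s: sum(payoff[(s, n)] for n in neighbors))

def imitate(my_strategy, my_payoff, neighbor_strategy, neighbor_payoff):
    """Imitation: copy the neighbor's strategy if the neighbor earned more."""
    return neighbor_strategy if neighbor_payoff > my_payoff else my_strategy

# Snow-drift anti-coordination: cooperate against defectors (b - c > 0)...
s_vs_defectors = best_response([0, 0, 0, 0])
# ...but defect against cooperators (b > b - c/2).
s_vs_cooperators = best_response([1, 1, 1, 1])
```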

  17. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
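
A minimal illustration of a centered likelihood-ratio (score) estimator, here for a static exponential model rather than the stochastic dynamics treated in the paper: the sensitivity d/dθ E[f(X)] is estimated as the sample covariance of f(X) with the score ∂θ log f_θ(X), which for Exp(θ) is 1/θ - x.

```python
import random

def centered_lr_sensitivity(f, theta, n=200_000, seed=1):
    """Estimate d/dtheta E[f(X)], X ~ Exp(theta), via the centered
    likelihood-ratio estimator: the sample covariance of f(X) and the score."""
    rng = random.Random(seed)
    xs = [rng.expovariate(theta) for _ in range(n)]
    scores = [1.0 / theta - x for x in xs]        # d/dtheta log density
    fbar = sum(f(x) for x in xs) / n
    return sum((f(x) - fbar) * s for x, s in zip(xs, scores)) / n

# Check against the exact value: E[X] = 1/theta, so d/dtheta E[X] = -1/theta^2.
est = centered_lr_sensitivity(lambda x: x, theta=2.0)
```

Centering subtracts the sample mean of f before multiplying by the score, which is what keeps the estimator's variance low.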

  18. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  19. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-14

We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  20. Synchronization and chaotic dynamics of coupled mechanical metronomes

    NASA Astrophysics Data System (ADS)

    Ulrichs, Henning; Mann, Andreas; Parlitz, Ulrich

    2009-12-01

    Synchronization scenarios of coupled mechanical metronomes are studied by means of numerical simulations showing the onset of synchronization for two, three, and 100 globally coupled metronomes in terms of Arnol'd tongues in parameter space and a Kuramoto transition as a function of coupling strength. Furthermore, we study the dynamics of metronomes where overturning is possible. In this case hyperchaotic dynamics associated with some diffusion process in configuration space is observed, indicating the potential complexity of metronome dynamics.
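
The Kuramoto transition mentioned above can be reproduced with a minimal mean-field simulation; this generic Kuramoto sketch (with Gaussian natural frequencies, so the critical coupling is K_c = 2·sqrt(2/π) ≈ 1.6) is ours, not the metronome model of the paper.

```python
import cmath, math, random

def kuramoto_order(K, n=200, dt=0.05, steps=2000, seed=0):
    """Order parameter r of n globally coupled Kuramoto oscillators."""
    rng = random.Random(seed)
    omega = [rng.gauss(0.0, 1.0) for _ in range(n)]      # natural frequencies
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / n    # mean field r*e^{i psi}
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

# Below the transition r stays small; well above it the population locks.
r_low, r_high = kuramoto_order(0.5), kuramoto_order(4.0)
```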

  1. Space weather modeling using artificial neural network. (Slovak Title: Modelovanie kozmického počasia umelou neurónovou sietou)

    NASA Astrophysics Data System (ADS)

    Valach, F.; Revallo, M.; Hejda, P.; Bochníček, J.

    2010-12-01

Our modern society, with its advanced technology, is becoming increasingly vulnerable to disturbances of the Earth system that originate in explosive processes on the Sun. Coronal mass ejections (CMEs), blasted into interplanetary space as gigantic clouds of ionized gas, can hit the Earth within a few hours or days and cause, among other effects, geomagnetic storms - perhaps the best known manifestation of the solar wind's interaction with the Earth's magnetosphere. Solar energetic particles (SEPs), accelerated to near-relativistic energies during large solar storms, reach the Earth's orbit within tens of minutes and pose a serious risk to astronauts traveling through interplanetary space. These and many other threats are the reason why experts pay increasing attention to space weather and its predictability. Research on space weather typically requires examining a large number of parameters that are interrelated in a complex, non-linear way. One way to cope with such a task is to model space weather with an artificial neural network, a tool originally developed for artificial intelligence. In our contribution, we focus on practical aspects of applying neural networks to modeling and forecasting selected space weather parameters.
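
As a hedged, minimal stand-in for such a network, the sketch below trains a single logistic "neuron" by gradient descent on invented data; the inputs and labels are synthetic illustrations, not real space weather measurements.

```python
import math

def train_perceptron(data, lr=0.1, epochs=500):
    """Logistic-regression 'neuron' trained by gradient descent; a minimal
    stand-in for the neural networks used in space-weather forecasting."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y                       # gradient of the log-loss
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Synthetic data: (CME speed in 1000 km/s, southward |Bz| in 10 nT)
# -> geomagnetic storm (1) or quiet (0).  Invented for illustration.
data = [((0.3, 0.1), 0), ((0.4, 0.2), 0), ((0.5, 0.3), 0),
        ((1.5, 1.2), 1), ((1.8, 1.5), 1), ((2.0, 1.0), 1)]
w, b = train_perceptron(data)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
```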

  2. Code IN Exhibits - Supercomputing 2000

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; McCann, Karen M.; Biswas, Rupak; VanderWijngaart, Rob F.; Kwak, Dochan (Technical Monitor)

    2000-01-01

    The creation of parameter study suites has recently become a more challenging problem as the parameter studies have become multi-tiered and the computational environment has become a supercomputer grid. The parameter spaces are vast, the individual problem sizes are getting larger, and researchers are seeking to combine several successive stages of parameterization and computation. Simultaneously, grid-based computing offers immense resource opportunities but at the expense of great difficulty of use. We present ILab, an advanced graphical user interface approach to this problem. Our novel strategy stresses intuitive visual design tools for parameter study creation and complex process specification, and also offers programming-free access to grid-based supercomputer resources and process automation.
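
A parameter study suite of the kind described above can be sketched as a cross product of per-parameter value lists, each combination becoming one job specification; the parameter names below are hypothetical, not ILab's actual schema.

```python
import itertools

# Hypothetical parameter space: each key maps to its list of values.
space = {
    "mach": [0.3, 0.6, 0.9],
    "grid": ["coarse", "fine"],
    "solver": ["explicit", "implicit"],
}

# One job dict per point in the cross product of all value lists.
jobs = [dict(zip(space, combo)) for combo in itertools.product(*space.values())]
```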

  3. A new space-time information expression and analysis approach based on 3S technology: a case study of China's coastland

    NASA Astrophysics Data System (ADS)

    Cao, Bao; Luo, Hong; Gao, Zhenji

    2009-10-01

Space-time Information Expression and Analysis (SIEA) uses graphic images to organize information units according to serial distribution rules in a variety of arrangements and, combined with the powerful data-processing capabilities of information technology, to analyze and integrate those units. In this paper, a new SIEA approach is proposed and its model is constructed, and the basic units, methodology and steps of SIEA are discussed. Taking China's coastland as an example, the new approach was applied to air humidity, rainfall and surface temperature for the years 1981 to 2000. The case study shows that these parameters vary strongly from month to month but change little from year to year, and that their spatial distributions differ significantly between the north and the south of China's coastland. The proposed approach is intuitive and image-based, and it addresses the difficulty of expressing the space-time distribution of biophysical parameters with traditional charts and tables. It can reveal the complexity and natural laws behind the observed phenomena and analyze them quantitatively, while retaining the graphical advantages of traditional ways of thinking. SIEA thus provides a new space-time expression and analysis approach, based on integrated 3S technologies, for research in Earth system science.

  4. A Graphical Approach to the Standard Principal-Agent Model.

    ERIC Educational Resources Information Center

    Zhou, Xianming

    2002-01-01

States that principal-agent theory is difficult to teach because of its technical complexity and intractability. Indicates that the equilibrium in the contract space is defined by the incentive parameter and the insurance component of pay under a linear contract. Describes a graphical approach that students with basic knowledge of algebra and…

  5. On the theory of multi-pulse vibro-impact mechanisms

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Metrikin, V. S.; Nikiforova, I. V.; Ipatov, A. A.

    2017-11-01

This paper presents a mathematical model of a new multi-striker eccentric shock-vibration mechanism with a crank-sliding-bar vibration exciter and an arbitrary number of pistons. Analytical solutions for the parameters of the model are obtained to determine the regions of existence of stable periodic motions. Under the assumption of an absolutely inelastic collision of the piston, we derive equations that single out a bifurcational unattainable boundary in the parameter space, which has a countable number of arbitrarily complex stable periodic motions in its neighbourhood. We present results of numerical simulations, which illustrate the existence of periodic and stochastic motions. The methods proposed in this paper for investigating the dynamical characteristics of the new crank-type conrod mechanisms allow practitioners to identify regions in the parameter space in which these mechanisms can be tuned into their most efficient periodic mode of operation, and to effectively analyze the main changes in their operational regimes when the system parameters are changed.

  6. A phase transition in energy-filtered RNA secondary structures.

    PubMed

    Han, Hillary S W; Reidys, Christian M

    2012-10-01

    In this article we study the effect of energy parameters on minimum free energy (mfe) RNA secondary structures. Employing a simplified combinatorial energy model that is only dependent on the diagram representation and is not sequence-specific, we prove the following dichotomy result. Mfe structures derived via the Turner energy parameters contain only finitely many complex irreducible substructures, and just minor parameter changes produce a class of mfe structures that contain a large number of small irreducibles. We localize the exact point at which the distribution of irreducibles experiences this phase transition from a discrete limit to a central limit distribution and, subsequently, put our result into the context of quantifying the effect of sparsification of the folding of these respective mfe structures. We show that the sparsification of realistic mfe structures leads to a constant time and space reduction, and that the sparsification of the folding of structures with modified parameters leads to a linear time and space reduction. We, furthermore, identify the limit distribution at the phase transition as a Rayleigh distribution.
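
The article's combinatorial energy model is not reproduced here, but folding under a simplified, sequence-independent objective can be illustrated with the classic Nussinov base-pair-maximization dynamic program (a standard textbook algorithm, not the authors' model):

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs (Nussinov DP) -- a toy stand-in
    for mfe folding under a simplified, sequence-independent energy model."""
    pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
            ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):       # shorter spans cannot pair
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # j unpaired
            for k in range(i, j - min_loop):  # j paired with k
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

m = nussinov_pairs("GGGAAAUCCC")
```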

  7. Crystallization of the rice immune receptor RGA5A_S with the rice blast fungus effector AVR1-CO39 prepared via mixture and tandem strategies.

    PubMed

    Guo, Liwei; Zhang, Yikun; Ma, Mengqi; Liu, Qiang; Zhang, Yanan; Peng, Youliang; Liu, Junfeng

    2018-04-01

RGA5 is a component of the Pia resistance-protein pair (RGA4/RGA5) from Oryza sativa L. japonica. It acts as an immune receptor that directly recognizes the effector AVR1-CO39 from Magnaporthe oryzae via a C-terminal non-LRR domain (RGA5A_S). The interaction between RGA5A_S and AVR1-CO39 relieves the repression of RGA4, leading to effector-independent cell death. To determine the structure of the complex of RGA5A_S and AVR1-CO39 and to understand the details of this interaction, the complex was prepared by fusing the proteins together, by mixing them in vitro or by co-expressing them in one host cell. Samples purified via the first two strategies were crystallized under two different conditions. A mixture of AVR1-CO39 and RGA5A_S (complex I) crystallized in 1.1 M ammonium tartrate dibasic, 0.1 M sodium acetate-HCl pH 4.6, while crystals of the fusion complex RGA5A_S-TEV-AVR1-CO39 (complex II) were grown in 2 M NaCl. The crystal of complex I belonged to space group P3121, with unit-cell parameters a = b = 66.2, c = 108.8 Å, α = β = 90°, γ = 120°. The crystals diffracted to a Bragg spacing of 2.4 Å, and one molecule each of RGA5A_S and AVR1-CO39 is present in the asymmetric unit of the initial model. The crystal of complex II belonged to space group I4, with unit-cell parameters a = b = 137.4, c = 66.2 Å, α = β = γ = 90°. The crystals diffracted to a Bragg spacing of 2.72 Å, and there were two molecules of RGA5A_S and two molecules of AVR1-CO39 in the asymmetric unit of the initial model. Further structural characterization of the interaction between RGA5A_S and AVR1-CO39 will lead to a better understanding of the mechanism underlying effector recognition by R proteins.

  8. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
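
Real-time least-squares identification of an unknown parameter can be sketched with scalar recursive least squares on the toy relation F = m·a; the noise level and simulated data are invented, and this is not the paper's full multi-parameter estimator.

```python
import random

def rls_identify(samples, lam=1.0):
    """Scalar recursive least squares: estimate m in y = m * u from a
    stream of (u, y) measurements, updating the estimate in real time."""
    m_hat, p = 0.0, 1000.0                   # initial estimate and covariance
    for u, y in samples:
        k = p * u / (lam + u * p * u)        # gain
        m_hat += k * (y - m_hat * u)         # innovation update
        p = (p - k * u * p) / lam            # covariance update
    return m_hat

# Simulated stream: true mass 7.5 kg, noisy force measurements (invented).
rng = random.Random(3)
samples = [(a, 7.5 * a + rng.gauss(0.0, 0.05))
           for a in (rng.uniform(0.5, 2.0) for _ in range(300))]
m_est = rls_identify(samples)
```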

  9. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  10. Acoustic interference and recognition space within a complex assemblage of dendrobatid frogs

    PubMed Central

    Amézquita, Adolfo; Flechas, Sandra Victoria; Lima, Albertina Pimentel; Gasser, Herbert; Hödl, Walter

    2011-01-01

    In species-rich assemblages of acoustically communicating animals, heterospecific sounds may constrain not only the evolution of signal traits but also the much less-studied signal-processing mechanisms that define the recognition space of a signal. To test the hypothesis that the recognition space is optimally designed, i.e., that it is narrower toward the species that represent the higher potential for acoustic interference, we studied an acoustic assemblage of 10 diurnally active frog species. We characterized their calls, estimated pairwise correlations in calling activity, and, to model the recognition spaces of five species, conducted playback experiments with 577 synthetic signals on 531 males. Acoustic co-occurrence was not related to multivariate distance in call parameters, suggesting a minor role for spectral or temporal segregation among species uttering similar calls. In most cases, the recognition space overlapped but was greater than the signal space, indicating that signal-processing traits do not act as strictly matched filters against sounds other than homospecific calls. Indeed, the range of the recognition space was strongly predicted by the acoustic distance to neighboring species in the signal space. Thus, our data provide compelling evidence of a role of heterospecific calls in evolutionarily shaping the frogs' recognition space within a complex acoustic assemblage without obvious concomitant effects on the signal. PMID:21969562

11. Uniaxial strain control of spin-polarization in multicomponent nematic order of BaFe2As2

    DOE PAGES

    Kissikov, T.; Sarkar, R.; Lawson, M.; ...

    2018-03-13

The iron-based high temperature superconductors exhibit a rich phase diagram reflecting a complex interplay between spin, lattice, and orbital degrees of freedom. The nematic state observed in these compounds epitomizes this complexity, by entangling a real-space anisotropy in the spin fluctuation spectrum with ferro-orbital order and an orthorhombic lattice distortion. A subtle and less-explored facet of the interplay between these degrees of freedom arises from the sizable spin-orbit coupling present in these systems, which translates anisotropies in real space into anisotropies in spin space. We present nuclear magnetic resonance studies, which reveal that the magnetic fluctuation spectrum in the paramagnetic phase of BaFe2As2 acquires an anisotropic response in spin-space upon application of a tetragonal symmetry-breaking strain field. Lastly, our results unveil an internal spin structure of the nematic order parameter, indicating that electronic nematic materials may offer a route to magneto-mechanical control.

  12. State of the Data Union, 1992

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This is the first report on the State of the Data Union (SDU) for the NASA Office of Space Science and Applications (OSSA). OSSA responsibilities include the collection, analysis, and permanent archival of data critical to space science research. The nature of how this is done by OSSA is evolving to keep pace with changes in space research. Current and planned missions have evolved to be more complex and multidisciplinary, and are generating much more data and lasting longer than earlier missions. New technologies enable global access to data, transfer of huge volumes of data, and increasingly complex analysis. The SDU provides a snapshot of this dynamic environment, identifying trends in capabilities and requirements. The current space science data environment is described and parameters which capture the pulse of key functions within that environment are presented. Continuous efforts of OSSA to improve the availability and quality of data provided to the scientific community are reported, highlighting efforts such as the Data Management Initiative.

  13. Shape parameters explain data from spatial transformations: comment on Pearce et al. (2004) and Tommasi & Polli (2004).

    PubMed

    Cheng, Ken; Gallistel, C R

    2005-04-01

    In 2 recent studies on rats (J. M. Pearce, M. A. Good, P. M. Jones, & A. McGregor, see record 2004-12429-006) and chicks (L. Tommasi & C. Polli, see record 2004-15642-007), the animals were trained to search in 1 corner of a rectilinear space. When tested in transformed spaces of different shapes, the animals still showed systematic choices. Both articles rejected the global matching of shape in favor of local matching processes. The present authors show that although matching by shape congruence is unlikely, matching by the shape parameter of the 1st principal axis can explain all the data. Other shape parameters, such as symmetry axes, may do even better. Animals are likely to use some global matching to constrain and guide the use of local cues; such use keeps local matching processes from exploding in complexity.

  14. Simplifying the complexity of a coupled carbon turnover and pesticide degradation model

    NASA Astrophysics Data System (ADS)

    Marschmann, Gianna; Erhardt, André H.; Pagel, Holger; Kügler, Philipp; Streck, Thilo

    2016-04-01

    The mechanistic one-dimensional model PECCAD (PEsticide degradation Coupled to CArbon turnover in the Detritusphere; Pagel et al. 2014, Biogeochemistry 117, 185-204) has been developed as a tool to elucidate regulation mechanisms of pesticide degradation in soil. A feature of this model is that it integrates functional traits of microorganisms, identifiable by molecular tools, and physicochemical processes such as transport and sorption that control substrate availability. Predicting the behavior of microbially active interfaces demands a fundamental understanding of factors controlling their dynamics. Concepts from dynamical systems theory allow us to study general properties of the model such as its qualitative behavior, intrinsic timescales and dynamic stability. Using a Latin hypercube method, we sampled the parameter space for physically realistic steady states of the PECCAD ODE system and set up a numerical continuation and bifurcation problem with the open-source toolbox MatCont in order to obtain a complete classification of the dynamical system's behavior. Bifurcation analysis reveals an equilibrium state of the system entirely controlled by fungal kinetic parameters. The equilibrium is generally unstable in response to small perturbations except for a small band in parameter space where the pesticide pool is stable. Time scale separation is a phenomenon that occurs in almost every complex open physical system. Motivated by the notion of "initial-stage" and "late-stage" decomposers and the concept of r-, K- or L-selected microbial life strategies, we test the applicability of geometric singular perturbation theory to identify fast and slow time scales of PECCAD. Revealing a generic fast-slow structure would greatly simplify the analysis of complex models of organic matter turnover by reducing the number of unknowns and parameters and providing a systematic mathematical framework for studying their properties.
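    The Latin hypercube sampling step described in this record can be illustrated with a minimal sketch (the parameter bounds and sample count below are hypothetical; the actual PECCAD parameter ranges are not given here). Each of the N equal-probability strata in every dimension is sampled exactly once:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=None):
    """Latin hypercube sample: every one of the n_samples equal-width
    strata in each dimension is hit exactly once."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)          # shape (dim, 2)
    dim = bounds.shape[0]
    # one uniform point inside each stratum [k/n, (k+1)/n)
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):                              # decouple the dimensions
        rng.shuffle(u[:, j])
    lo, hi = bounds[:, 0], bounds[:, 1]
    return lo + u * (hi - lo)                         # scale to the parameter box

# hypothetical 3-parameter box, for illustration only
samples = latin_hypercube(100, [(0.0, 1.0), (10.0, 20.0), (1e-3, 1e-1)], seed=1)
print(samples.shape)  # (100, 3)
```

    Each sample would then be used as an initial guess or parameter set when scanning for steady states of the ODE system.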

  15. Complex free-energy landscapes in biaxial nematic liquid crystals and the role of repulsive interactions: A Wang-Landau study

    NASA Astrophysics Data System (ADS)

    Kamala Latha, B.; Murthy, K. P. N.; Sastry, V. S. S.

    2017-09-01

    General quadratic Hamiltonian models, describing the interaction between liquid-crystal molecules (typically with D₂h symmetry), take into account couplings between their uniaxial and biaxial tensors. While the attractive contributions arising from interactions between similar tensors of the participating molecules provide for eventual condensation of the respective orders at suitably low temperatures, the role of cross coupling between unlike tensors is not fully appreciated. Our recent study with an advanced Monte Carlo technique (entropic sampling) showed clearly the increasing relevance of this cross term in determining the phase diagram, contravening in some regions of model parameter space the predictions of mean-field theory and of standard Monte Carlo simulations. In this context, we investigated the phase diagrams and the nature of the phases therein on two trajectories in the parameter space: one is a line in the interior region of biaxial stability believed to be representative of real systems, and the second is the extensively investigated parabolic path resulting from the London dispersion approximation. In both cases, we find the destabilizing effect of increased cross-coupling interactions, which invariably result in the formation of inhomogeneously distributed local biaxial organizations. This manifests as a small, but unmistakable, contribution of biaxial order in the uniaxial phase. The free-energy profiles computed in the present study as a function of the two dominant order parameters indicate complex landscapes. On the one hand, these profiles account for the unusual thermal behavior of the biaxial order parameter under significant destabilizing influence from the cross terms. On the other, they also allude to the possibility that in real systems these complexities might indeed be inhibiting the formation of a low-temperature biaxial order itself, perhaps reflecting the difficulties in its ready realization in the laboratory.

  16. Charged boson stars and black holes with nonminimal coupling to gravity

    NASA Astrophysics Data System (ADS)

    Verbin, Y.; Brihaye, Y.

    2018-02-01

    We find new spherically symmetric charged boson star solutions of a complex scalar field coupled nonminimally to gravity by a "John-type" term of Horndeski theory, that is, a coupling between the kinetic scalar term and the Einstein tensor. We study the parameter space of the solutions and find two distinct families according to their position in parameter space. More widespread is the family of solutions (which we call branch 1) existing for a finite interval of the central value of the scalar field, starting from zero and ending at some finite maximal value. This branch contains as a special case the charged boson stars of the minimally coupled theory. In some regions of parameter space we find a new second branch ("branch 2") of solutions which are more massive and more stable than those of branch 1. This second branch also exists in a finite interval of the central value of the scalar field, but its end points (either both or in some cases only one) are extremal Reissner-Nordström black hole solutions.

  17. Design Space Toolbox V2: Automated Software Enabling a Novel Phenotype-Centric Modeling Strategy for Natural and Synthetic Biological Systems

    PubMed Central

    Lomnitz, Jason G.; Savageau, Michael A.

    2016-01-01

    Mathematical models of biochemical systems provide a means to elucidate the link between the genotype, environment, and phenotype. A subclass of mathematical models, known as mechanistic models, quantitatively describe the complex non-linear mechanisms that capture the intricate interactions between biochemical components. However, the study of mechanistic models is challenging because most are analytically intractable and involve large numbers of system parameters. Conventional methods to analyze them rely on local analyses about a nominal parameter set and they do not reveal the vast majority of potential phenotypes possible for a given system design. We have recently developed a new modeling approach that does not require estimated values for the parameters initially and inverts the typical steps of the conventional modeling strategy. Instead, this approach relies on architectural features of the model to identify the phenotypic repertoire and then predict values for the parameters that yield specific instances of the system that realize desired phenotypic characteristics. Here, we present a collection of software tools, the Design Space Toolbox V2 based on the System Design Space method, that automates (1) enumeration of the repertoire of model phenotypes, (2) prediction of values for the parameters for any model phenotype, and (3) analysis of model phenotypes through analytical and numerical methods. The result is an enabling technology that facilitates this radically new, phenotype-centric, modeling approach. We illustrate the power of these new tools by applying them to a synthetic gene circuit that can exhibit multi-stability. We then predict values for the system parameters such that the design exhibits 2, 3, and 4 stable steady states. 
In one example, inspection of the basins of attraction reveals that the circuit can count among three stable states by transient stimulation through one of two input channels: a positive channel that increases the count, and a negative channel that decreases the count. This example shows the power of these new automated methods to rapidly identify behaviors of interest and efficiently predict parameter values for their realization. These tools may be applied to understand complex natural circuitry and to aid in the rational design of synthetic circuits. PMID:27462346

  18. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. 
We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
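    The truncated polynomial spaces mentioned above, Total Degree and Hyperbolic Cross, are defined by simple constraints on the polynomial multi-indices. A small sketch of the two index sets (one common definition, not taken from RAVEN itself):

```python
from itertools import product
from math import prod

def total_degree(dim, level):
    """Multi-indices a with |a|_1 <= level (isotropic total-degree space)."""
    return [a for a in product(range(level + 1), repeat=dim) if sum(a) <= level]

def hyperbolic_cross(dim, level):
    """Multi-indices a with prod(a_i + 1) <= level + 1: high univariate orders
    are kept while mixed (interaction) terms are pruned aggressively."""
    return [a for a in product(range(level + 1), repeat=dim)
            if prod(x + 1 for x in a) <= level + 1]

print(len(total_degree(2, 3)), len(hyperbolic_cross(2, 3)))  # 10 8
```

    The hyperbolic-cross set is a subset of the total-degree set at the same level, which is why it scales better with dimension; anisotropy would weight the constraints per dimension.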

  19. On the Complexity of Item Response Theory Models.

    PubMed

    Bonifay, Wes; Cai, Li

    2017-01-01

    Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models-exploratory factor analytic, bifactor, DINA, and DINO-with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had 1 more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form-and not goodness-of-fit alone-when selecting an IRT model.

  20. Fault Analysis of Space Station DC Power Systems-Using Neural Network Adaptive Wavelets to Detect Faults

    NASA Technical Reports Server (NTRS)

    Momoh, James A.; Wang, Yanchun; Dolce, James L.

    1997-01-01

    This paper describes the application of neural network adaptive wavelets to fault diagnosis of the space station power system. The method combines the wavelet transform with a neural network by incorporating daughter wavelets into the weights, so that the wavelet transform and the neural network training procedure become a single stage; this avoids the complex computation of wavelet parameters and makes the procedure more straightforward. The simulation results show that the proposed method is very efficient for the identification of fault locations.

  1. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore to simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models by constraining spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state space, and to efficiently search a large, complex parameter space for behavioural parameter sets whose predictions fall within this region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The search is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied first to find, and then to populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
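    The interval/inequality test described in this record amounts to a limits-of-acceptability check on each signature. A minimal sketch with hypothetical signature names and intervals (not the authors' actual metrics):

```python
def is_behavioural(signatures, limits):
    """Accept a model run only if every simulated signature lies inside its
    acceptability interval (intervals widened to reflect data uncertainty)."""
    return all(lo <= signatures[name] <= hi for name, (lo, hi) in limits.items())

# hypothetical signatures and acceptability intervals, for illustration only
limits = {"runoff_ratio": (0.30, 0.50), "baseflow_index": (0.40, 0.70)}
print(is_behavioural({"runoff_ratio": 0.42, "baseflow_index": 0.55}, limits))  # True
```

    Combining all such intervals defines the behavioural hyper-volume that the multi-objective search then tries to reach and fill.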

  2. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The next two steps are the most novel. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique.
The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.

  3. The complex-scaled multiconfigurational spin-tensor electron propagator method for low-lying shape resonances in Be-, Mg- and Ca-

    NASA Astrophysics Data System (ADS)

    Tsogbayar, Tsednee; Yeager, Danny L.

    2017-01-01

    We further apply the complex scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) for the theoretical determination of resonance parameters with electron-atom systems including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP) developed and implemented by Yeager and his coworkers for real space gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian where all of the electronic coordinates are scaled by a complex factor. CMCSTEP is designed for determining resonances. We apply CMCSTEP to get the lowest 2P (Be-, Mg-) and 2D (Mg-, Ca-) shape resonances using several different basis sets each with several complete active spaces. Many of these basis sets we employ have been used by others with different methods. Hence, we can directly compare results with different methods but using the same basis sets.

  4. Geomagnetic effects caused by rocket exhaust jets

    NASA Astrophysics Data System (ADS)

    Lipko, Yuriy; Pashinin, Aleksandr; Khakhinov, Vitaliy; Rahmatulin, Ravil

    2016-09-01

    In the space experiment Radar-Progress, we have made 33 series of measurements of geomagnetic variations during ignitions of engines of Progress cargo spacecraft in low Earth orbit. We used magneto-measuring complexes, installed at observatories of the Institute of Solar-Terrestrial Physics of Siberian Branch of the Russian Academy of Sciences, and magnetotelluric equipment of a mobile complex. We assumed that engine running can cause geomagnetic disturbances in flux tubes crossed by the spacecraft. When analyzing experimental data, we took into account space weather factors: solar wind parameters, total daily mid-latitude geomagnetic activity index Kp, geomagnetic auroral electrojet index AE, global geomagnetic activity. The empirical data we obtained indicate that 18 of the 33 series showed geomagnetic variations in various time ranges.

  5. Non-Relativistic Twistor Theory and Newton-Cartan Geometry

    NASA Astrophysics Data System (ADS)

    Dunajski, Maciej; Gundry, James

    2016-03-01

    We develop a non-relativistic twistor theory, in which Newton-Cartan structures of Newtonian gravity correspond to complex three-manifolds with a four-parameter family of rational curves with normal bundle O ⊕ O(2). We show that the Newton-Cartan space-times are unstable under the general Kodaira deformation of the twistor complex structure. The Newton-Cartan connections can nevertheless be reconstructed from Merkulov's generalisation of the Kodaira map augmented by a choice of a holomorphic line bundle over the twistor space trivial on twistor lines. The Coriolis force may be incorporated by holomorphic vector bundles, which in general are non-trivial on twistor lines. The resulting geometries agree with non-relativistic limits of anti-self-dual gravitational instantons.

  6. Brightness analysis of an electron beam with a complex profile

    NASA Astrophysics Data System (ADS)

    Maesaka, Hirokazu; Hara, Toru; Togawa, Kazuaki; Inagaki, Takahiro; Tanaka, Hitoshi

    2018-05-01

    We propose a novel analysis method to obtain the core bright part of an electron beam with a complex phase-space profile. This method is useful for evaluating simulation data of a linear accelerator (linac), such as an x-ray free electron laser (XFEL) machine, since the phase-space distribution of a linac electron beam is not simple compared with the Gaussian beam in a synchrotron. In this analysis, the brightness of undulator radiation is calculated and the core of the electron beam is determined by maximizing that brightness. We successfully extracted the core electrons from a complex beam profile of XFEL simulation data that could not be expressed by a set of slice parameters. FEL simulations showed that the FEL intensity was well preserved even after extracting the core part. Consequently, the FEL performance can be estimated by this analysis without time-consuming FEL simulations.

  7. An approach to determination of shunt circuits parameters for damping vibrations

    NASA Astrophysics Data System (ADS)

    Matveenko; Iurlova; Oshmarin; Sevodina; Iurlov

    2018-04-01

    This paper considers the problem of natural vibrations of a deformable structure containing elements made of piezomaterials. The piezoelectric elements are connected through electrodes to an external electric circuit, which consists of resistive, inductive and capacitive elements. Based on the solution of this problem, the parameters of external electric circuits are searched for to allow optimal passive control of the structural vibrations. The solution of this problem yields complex natural vibration frequencies, whose real part corresponds to the circular eigenfrequency of vibrations and whose imaginary part corresponds to its damping rate (damping ratio). The analysis of the behaviour of the imaginary parts of the complex eigenfrequencies in the space of external circuit parameters allows one to damp given modes of structure vibrations. The effectiveness of the proposed approach is demonstrated using a cantilever-clamped plate and a shell structure in the form of a semi-cylinder connected to series resonant circuits.

  8. $n$-Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional ([Formula: see text]) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the [Formula: see text] Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and lower algorithmic complexity, generates [Formula: see text] Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of [Formula: see text] Cat maps.
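    The paper's Laplace-expansion construction for n-dimensional Cat matrices is not reproduced here, but the underlying idea can be illustrated with the classical 2D case. A sketch with assumed integer control parameters p and q:

```python
def cat_map_2d(x, y, n, p=1, q=1):
    """One iteration of the generalized 2D Arnold cat map on an n x n lattice:
    (x, y) -> (x + p*y, q*x + (p*q + 1)*y) mod n.
    The matrix [[1, p], [q, p*q + 1]] has determinant 1, so the map is a
    bijection of the lattice and every orbit is periodic."""
    return (x + p * y) % n, (q * x + (p * q + 1) * y) % n

# the image of the full 5 x 5 lattice is again the full lattice
image = {cat_map_2d(x, y, 5) for x in range(5) for y in range(5)}
print(len(image))  # 25
```

    Higher-dimensional generalizations, such as the Laplace-expansion method of the paper, aim to build unimodular integer matrices of arbitrary dimension with controllable structure.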

  9. Crystallization and preliminary X-ray crystallographic analysis of Escherichia coli glutaredoxin 2 in complex with glutathione and of a cysteine-less variant without glutathione

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Ju; Ye, Jun; Rosen, Barry P., E-mail: brosen@med.wayne.edu

    2007-04-01

    Glutaredoxin 2 from E. coli was cocrystallized with glutathione and data were collected to 1.60 Å. A mutant with the active-site residues Cys9 and Cys12 changed to serine was crystallized in the absence of glutathione and data were collected to 2.4 Å. Glutaredoxin 2 (Grx2) from Escherichia coli is larger in size than classical glutaredoxins. It is extremely efficient in the catalysis of reduced glutathione-dependent disulfide reduction. A complex of Grx2 with reduced glutathione (GSH) has been crystallized. Data were collected to 1.60 Å. The crystals belong to space group P3{sub 2}21, with one Grx2–GSH complex in the asymmetric unit.more » The unit-cell parameters are a = b = 50.10, c = 152.47 Å. A Grx2 mutant, C9S/C12S, which cannot form a disulfide bond with GSH was also crystallized. The crystals diffracted to 2.40 Å and belong to space group P2{sub 1}2{sub 1}2{sub 1}, with one molecule in the asymmetric unit. The unit-cell parameters are a = 28.16, b = 78.65, c = 89.16 Å.« less

  10. Dynamical complexity detection in geomagnetic activity indices using wavelet transforms and Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Balasis, G.; Daglis, I. A.; Papadimitriou, C.; Kalimeri, M.; Anastasiadis, A.; Eftaxias, K.

    2008-12-01

    Dynamical complexity detection for output time series of complex systems is one of the foremost problems in physics, biology, engineering, and economic sciences. Especially in magnetospheric physics, accurate detection of the dissimilarity between normal and abnormal states (e.g. pre-storm activity and magnetic storms) can vastly improve space weather diagnosis and, consequently, the mitigation of space weather hazards. Herein, we examine the fractal spectral properties of the Dst data using a wavelet analysis technique. We show that distinct changes in associated scaling parameters occur (i.e., transition from anti-persistent to persistent behavior) as an intense magnetic storm approaches. We then analyze Dst time series by introducing the non-extensive Tsallis entropy, Sq, as an appropriate complexity measure. The Tsallis entropy sensitively shows the complexity dissimilarity among different "physiological" (normal) and "pathological" states (intense magnetic storms). The Tsallis entropy implies the emergence of two distinct patterns: (i) a pattern associated with the intense magnetic storms, which is characterized by a higher degree of organization, and (ii) a pattern associated with normal periods, which is characterized by a lower degree of organization.
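    The Tsallis entropy used as the complexity measure here has the standard form S_q = (1 - sum_i p_i^q) / (q - 1), which recovers the Shannon entropy as q -> 1. A minimal sketch (estimating the probabilities p_i from Dst time-series windows is not shown):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Non-extensive Tsallis entropy S_q = (1 - sum_i p_i**q) / (q - 1);
    recovers the Shannon entropy in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                               # zero-probability terms drop out
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))   # Shannon limit
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

print(tsallis_entropy([0.25] * 4, 2.0))  # 0.75
```

    A sharply peaked distribution (higher organization, as during an intense storm) gives a lower S_q than a broad one, which is the contrast the record exploits.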

  11. Fire behavior of halogen-free flame retardant electrical cables with the cone calorimeter.

    PubMed

    Meinier, Romain; Sonnier, Rodolphe; Zavaleta, Pascal; Suard, Sylvain; Ferry, Laurent

    2018-01-15

    Fires involving electrical cables are one of the main hazards in Nuclear Power Plants (NPPs). Cables are complex assemblies including several polymeric parts (insulation, bedding, sheath) constituting fuel sources. This study provides an in-depth characterization, using the cone calorimeter, of the fire behavior of two halogen-free flame retardant cables used in NPPs. The influence of two key parameters, namely the external heat flux and the spacing between cables, on the cable fire characteristics is especially investigated. The prominent role of the outer sheath material on the ignition and the burning at early times was highlighted. A parameter of utmost importance, called the transition heat flux, was identified; it depends on the composition and the structure of the cable. Below this heat flux, the decomposition is limited and concerns only the sheath. Above it, the fire hazard is greatly enhanced because the insulation, which is most often not flame-retarded, contributes to the heat release. The influence of spacing appears complex and depends on the fire property considered. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.

    PubMed

    Will, Sebastian; Jabbari, Hosna

    2016-01-01

    RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible as a trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80% space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even more strongly than "standard" MFE folding.
SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.

  13. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
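
    The hierarchical idea above can be sketched in a few lines. The normal population model, its parameter values, and the conjugate update below are illustrative assumptions for a toy problem, not the paper's detector or galaxy model:

```python
import numpy as np

# Toy hierarchical model (illustrative only, not the paper's detector model):
# population:  m_i ~ N(mu_pop, tau^2)   -- mu_pop is the population parameter
# measurement: d_i ~ N(m_i, sigma^2)    -- per-source noisy "detections"
rng = np.random.default_rng(0)
mu_true, tau, sigma, n_src = 0.6, 0.1, 0.05, 5000
masses = rng.normal(mu_true, tau, n_src)
data = rng.normal(masses, sigma)

# Marginalizing the individual masses gives d_i ~ N(mu_pop, tau^2 + sigma^2),
# so with a flat prior the posterior for mu_pop is Gaussian:
var_marg = tau**2 + sigma**2
mu_post = data.mean()
sd_post = np.sqrt(var_marg / n_src)
print(f"mu_pop = {mu_post:.4f} +/- {sd_post:.4f}")
```

    With roughly 5000 resolved sources, even this crude conjugate update pins the population-level mean to well under a percent, which is the qualitative point of the abstract's few-percent constraint.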

  14. An evaluation of behavior inferences from Bayesian state-space models: A case study with the Pacific walrus

    USGS Publications Warehouse

    Beatty, William; Jay, Chadwick V.; Fischbach, Anthony S.

    2016-01-01

    State-space models offer researchers an objective approach to modeling complex animal location data sets, and state-space model behavior classifications are often assumed to have a link to animal behavior. In this study, we evaluated the behavioral classification accuracy of a Bayesian state-space model in Pacific walruses using Argos satellite tags with sensors to detect animal behavior in real time. We fit a two-state discrete-time continuous-space Bayesian state-space model to data from 306 Pacific walruses tagged in the Chukchi Sea. We matched predicted locations and behaviors from the state-space model (resident, transient behavior) to true animal behavior (foraging, swimming, hauled out) and evaluated classification accuracy with kappa statistics (κ) and root mean square error (RMSE). In addition, we compared biased random bridge utilization distributions generated with resident behavior locations to true foraging behavior locations to evaluate differences in space use patterns. Results indicated that agreement between the two-state model and true animal behavior was only slight to fair (0.06 ≤ κ ≤ 0.26, 0.49 ≤ RMSE ≤ 0.59). Kernel overlap metrics indicated utilization distributions generated with resident behavior locations were generally smaller than utilization distributions generated with true foraging behavior locations. Consequently, we encourage researchers to carefully examine parameters and priors associated with behaviors in state-space models, and reconcile these parameters with the study species and its expected behaviors.
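
    The two accuracy metrics reported above are standard; a minimal sketch of Cohen's kappa and RMSE on made-up behavior labels (not the walrus data):

```python
import numpy as np

# Hypothetical predicted vs. true binary behavior labels (1 = resident/
# foraging, 0 = transient/other); values are made up for illustration.
true_beh = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
pred_beh = np.array([1, 0, 0, 1, 1, 0, 1, 0, 0, 1])

def cohens_kappa(a, b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    p_o = np.mean(a == b)
    # expected chance agreement from the marginal label frequencies
    labels = np.union1d(a, b)
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in labels)
    return (p_o - p_e) / (1 - p_e)

kappa = cohens_kappa(true_beh, pred_beh)          # 0.4 for these labels
rmse = np.sqrt(np.mean((pred_beh - true_beh) ** 2))
```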

  15. Planetary and Space Simulation Facilities PSI at DLR for Astrobiology

    NASA Astrophysics Data System (ADS)

    Rabbow, E.; Rettberg, P.; Panitz, C.; Reitz, G.

    2008-09-01

    Ground based experiments, conducted in the controlled planetary and space environment simulation facilities PSI at DLR, are used to investigate astrobiological questions and to complement the corresponding experiments in LEO, for example on free flying satellites or on space exposure platforms on the ISS. In-orbit exposure facilities can only accommodate a limited number of experiments for exposure to space parameters like high vacuum, intense radiation of galactic and solar origin and microgravity, sometimes also technically adapted to simulate extraterrestrial planetary conditions like those on Mars. Ground based experiments in carefully equipped and monitored simulation facilities allow the investigation of the effects of simulated single environmental parameters and selected combinations on a much wider variety of samples. In PSI at DLR, international science consortia performed astrobiological investigations and space experiment preparations, exposing organic compounds and a wide range of microorganisms, reaching from bacterial spores to complex microbial communities, lichens and even animals like tardigrades, to simulated planetary or space environment parameters in pursuit of exobiological questions on the resistance to extreme environments and the origin and distribution of life. The Planetary and Space Simulation Facilities PSI of the Institute of Aerospace Medicine at DLR in Köln, Germany, provide high vacuum of controlled residual composition, ionizing radiation from an X-ray tube, polychromatic UV radiation in the range of 170-400 nm, VIS and IR or individual monochromatic UV wavelengths, and temperature regulation from -20°C to +80°C at the sample site, individually or in selected combinations, in 9 modular facilities of varying sizes; these facilities are presented here together with selected experiments performed in them.

  16. Crystallization and x-ray diffraction analysis of a putative bacterial class I labdane-related diterpene synthase

    DOE PAGES

    Serrano-Posada, Hugo; Centeno-Leija, Sara; Rojas-Trejo, Sonia; ...

    2015-08-25

    Here, labdane-related diterpenoids are natural products with potential pharmaceutical applications that are rarely found in bacteria. A putative class I labdane-related diterpene synthase (LrdC) identified by genome mining in a streptomycete was successfully crystallized using the microbatch method. Crystals of the LrdC enzyme were obtained in a holo form with its natural cofactor Mg2+ (LrdC-Mg2+) and in complex with inorganic pyrophosphate (PPi) (LrdC-Mg2+-PPi). Crystals of native LrdC-Mg2+ diffracted to 2.50 Å resolution and belonged to the trigonal space group P3221, with unit-cell parameters a = b = 107.1, c = 89.2 Å. Crystals of the LrdC-Mg2+-PPi complex grown under the same conditions as the native enzyme with PEG 8000 diffracted to 2.36 Å resolution and also belonged to the trigonal space group P3221. Crystals of the LrdC-Mg2+-PPi complex grown in a second crystallization condition with PEG 3350 diffracted to 2.57 Å resolution and belonged to the monoclinic space group P21, with unit-cell parameters a = 49.9, b = 104.1, c = 66.5 Å, β = 111.4°. The structure was determined by the single-wavelength anomalous dispersion (SAD) technique using the osmium signal from a potassium hexachloroosmate(IV) derivative.

  17. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly in emergent clinical situations (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide development of CTP imaging technology for better quantification accuracy and lower radiation dose.

  18. On parametric Gevrey asymptotics for some nonlinear initial value Cauchy problems

    NASA Astrophysics Data System (ADS)

    Lastra, A.; Malek, S.

    2015-11-01

    We study a nonlinear initial value Cauchy problem depending upon a complex perturbation parameter ɛ with vanishing initial data at complex time t = 0 and whose coefficients depend analytically on (ɛ, t) near the origin in C2 and are bounded holomorphic on some horizontal strip in C w.r.t. the space variable. This problem is assumed to be non-Kowalevskian in time t, therefore analytic solutions at t = 0 cannot be expected in general. Nevertheless, we are able to construct a family of actual holomorphic solutions defined on a common bounded open sector with vertex at 0 in time and on the given strip above in space, when the complex parameter ɛ belongs to a suitably chosen set of open bounded sectors whose union forms a covering of some neighborhood Ω of 0 in C*. These solutions are achieved by means of Laplace and Fourier inverse transforms of some common ɛ-depending function on C × R, analytic near the origin and with exponential growth on some unbounded sectors with appropriate bisecting directions in the first variable and exponential decay in the second, when the perturbation parameter belongs to Ω. Moreover, these solutions satisfy the remarkable property that the difference between any two of them is exponentially flat for some integer order w.r.t. ɛ. With the help of the classical Ramis-Sibuya theorem, we obtain the existence of a formal series (generally divergent) in ɛ which is the common Gevrey asymptotic expansion of the actual solutions constructed above.

  19. Accurate Nanoscale Crystallography in Real-Space Using Scanning Transmission Electron Microscopy.

    PubMed

    Dycus, J Houston; Harris, Joshua S; Sang, Xiahan; Fancher, Chris M; Findlay, Scott D; Oni, Adedapo A; Chan, Tsung-Ta E; Koch, Carl C; Jones, Jacob L; Allen, Leslie J; Irving, Douglas L; LeBeau, James M

    2015-08-01

    Here, we report reproducible and accurate measurement of crystallographic parameters using scanning transmission electron microscopy. This is made possible by removing drift and residual scan distortion. We demonstrate real-space lattice parameter measurements with <0.1% error for complex-layered chalcogenides Bi2Te3, Bi2Se3, and a Bi2Te2.7Se0.3 nanostructured alloy. Pairing the technique with atomic resolution spectroscopy, we connect local structure with chemistry and bonding. Combining these results with density functional theory, we show that the incorporation of Se into Bi2Te3 causes charge redistribution that anomalously increases the van der Waals gap between building blocks of the layered structure. The results show that atomic resolution imaging with electrons can accurately and robustly quantify crystallography at the nanoscale.

  20. Averaging of random walks and shift-invariant measures on a Hilbert space

    NASA Astrophysics Data System (ADS)

    Sakbaev, V. Zh.

    2017-06-01

    We study random walks in a Hilbert space H and their use in representing solutions of the Cauchy problem for differential equations whose initial conditions are numerical functions on H. We construct a finitely additive analogue of the Lebesgue measure: a nonnegative finitely additive measure λ that is defined on a minimal subset ring of an infinite-dimensional Hilbert space H containing all infinite-dimensional rectangles with absolutely converging products of the side lengths and is invariant under shifts and rotations in H. We define the Hilbert space H of equivalence classes of complex-valued functions on H that are square integrable with respect to a shift-invariant measure λ. Using averaging of the shift operator in H over random vectors in H with a distribution given by a one-parameter semigroup (with respect to convolution) of Gaussian measures on H, we define a one-parameter semigroup of contracting self-adjoint transformations on H, whose generator is called the diffusion operator. We obtain a representation of solutions of the Cauchy problem for the Schrödinger equation whose Hamiltonian is the diffusion operator.

  1. Numerical analysis of seismic events distributions on the planetary scale and celestial bodies astrometrical parameters

    NASA Astrophysics Data System (ADS)

    Bulatova, Dr.

    2012-04-01

    Modern research in the Earth sciences is developing from descriptions of individual natural phenomena toward systematic, complex research in interdisciplinary areas. For studies of this kind, in the form of numerical analysis of three-dimensional (3D) systems, the author proposes a space-time technology (STT) based on a Ptolemaic geocentric system, consisting of two modules, each with its own coordinate system: (1) a 3D model of the Earth, whose coordinates are provided by databases of the Earth's events (here seismic), and (2) a compact model of the relative motion of celestial bodies in space-time as seen from Earth, known as the "Method of a moving source" (MDS), which was developed for 3D space (Bulatova, 1998-2000). Module (2) was developed as a continuation of the geocentric Ptolemaic system of the world, built on the astronomical parameters of heavenly bodies. Based on the aggregation of data from the space and Earth sciences, their systematization, and cooperative analysis, this is an attempt to establish a cause-effect relationship between the positions of celestial bodies (Moon, Sun) and Earth's seismic events.

  2. Quantitative calculations of fluorescence polarization and absorption anisotropy kinetics of double- and triple-chromophore complexes with energy transfer.

    PubMed Central

    Demidov, A A

    1994-01-01

    A new method is presented for calculation of the fluorescence depolarization and kinetics of absorption anisotropy for molecular complexes with a limited number of chromophores. The method considers absorption and emission of light by both chromophores, and also energy transfer between them, with regard to their mutual orientations. The chromophores in each individual complex are rigidly positioned. The complexes are randomly distributed and oriented in space, and there is no energy transfer between them. The new "practical" formula for absorption anisotropy and fluorescence depolarization kinetics, P(t) = [3B(t) - 1 + 2A(t)]/[3 + B(t) + 4A(t)], is derived both for double- and triple-chromophore complexes with delta-pulse excitation. The parameter B(t) is given by (a) B(t) = cos^2(θ) for double-chromophore complexes, and (b) B(t) = q_12(t)cos^2(θ_12) + q_13(t)cos^2(θ_13) + q_23(t)cos^2(θ_23) for triple-chromophore complexes, where q_12(t) + q_13(t) + q_23(t) = 1. Here the θ_ij are the angles between the chromophore transition dipole moments in the individual molecular complex. The parameters q_ij(t) and A(t) are dependent on chromophore spectroscopic features and on the rates of energy transfer. PMID:7696461
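
    The "practical" formula is easy to check numerically in its classical limits; the sketch below assumes the double-chromophore case with the energy-transfer term A(t) set to zero:

```python
import numpy as np

def polarization(B, A):
    """The 'practical' formula: P = (3B - 1 + 2A) / (3 + B + 4A)."""
    return (3 * B - 1 + 2 * A) / (3 + B + 4 * A)

# Double-chromophore case: B = cos^2(theta), here with A = 0.
# Parallel dipoles (theta = 0) give the classical maximum P = 1/2;
# perpendicular dipoles (theta = 90 deg) give the classical minimum P = -1/3.
p_parallel = polarization(np.cos(0.0) ** 2, 0.0)
p_perp = polarization(np.cos(np.pi / 2) ** 2, 0.0)
```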

  3. Automated Design Space Exploration with Aspen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spafford, Kyle L.; Vetter, Jeffrey S.

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.

  4. Automated Design Space Exploration with Aspen

    DOE PAGES

    Spafford, Kyle L.; Vetter, Jeffrey S.

    2015-01-01

    Architects and applications scientists often use performance models to explore a multidimensional design space of architectural characteristics, algorithm designs, and application parameters. With traditional performance modeling tools, these explorations forced users to first develop a performance model and then repeatedly evaluate and analyze the model manually. These manual investigations proved laborious and error prone. More importantly, the complexity of this traditional process often forced users to simplify their investigations. To address this challenge of design space exploration, we extend our Aspen (Abstract Scalable Performance Engineering Notation) language with three new language constructs: user-defined resources, parameter ranges, and a collection of costs in the abstract machine model. Then, we use these constructs to enable automated design space exploration via a nonlinear optimization solver. We show how four interesting classes of design space exploration scenarios can be derived from Aspen models and formulated as pure nonlinear programs. The analysis tools are demonstrated using examples based on Aspen models for a three-dimensional Fast Fourier Transform, the CoMD molecular dynamics proxy application, and the DARPA Streaming Sensor Challenge Problem. Our results show that this approach can compose and solve arbitrary performance modeling questions quickly and rigorously when compared to the traditional manual approach.

  5. Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.

    PubMed

    Wang, Jun; Deng, Zhaohong; Luo, Xiaoqing; Jiang, Yizhang; Wang, Shitong

    2016-06-01

    Training feedforward neural networks (FNNs) is one of the most critical issues in FNN studies. However, most FNN training methods cannot be directly applied to very large datasets because they have high computational and space complexity. In order to tackle this problem, the CCMEB (center-constrained minimum enclosing ball) problem in the hidden feature space of an FNN is discussed and a novel learning algorithm called HFSR-GCVM (hidden-feature-space regression using generalized core vector machine) is developed accordingly. In HFSR-GCVM, a novel learning criterion using an L2-norm penalty-based ε-insensitive function is formulated, and the parameters in the hidden nodes are generated randomly, independently of the training sets. Moreover, the learning of the parameters in its output layer is proved equivalent to a special CCMEB problem in the FNN hidden feature space. Like most CCMEB-approximation-based machine learning algorithms, the proposed HFSR-GCVM training algorithm has the following merits: the maximal training time is linear in the size of the training dataset, and the maximal space consumption is independent of that size. Experiments on regression tasks confirm these conclusions.
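
    A flavor of minimum-enclosing-ball approximation can be given with the classic Badoiu-Clarkson core-set iteration, a simpler relative of the CCMEB machinery used by HFSR-GCVM (the points and iteration count below are illustrative, not the paper's algorithm):

```python
import numpy as np

def approx_meb(points, n_iter=200):
    """Badoiu-Clarkson iteration: repeatedly move the center a shrinking
    step toward the farthest point; after k steps the enclosing radius
    is within roughly a (1 + 1/sqrt(k)) factor of optimal."""
    c = points[0].astype(float).copy()
    for k in range(1, n_iter + 1):
        far = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        c += (far - c) / (k + 1)
    r = np.linalg.norm(points - c, axis=1).max()
    return c, r

# For these three points the exact MEB has center (1, 0) and radius 1.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 1.0]])
center, radius = approx_meb(pts)
```

    The appeal for large datasets is that each iteration touches the data only once to find the farthest point, and the number of iterations depends on the desired accuracy rather than on the dataset size.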

  6. Task planning and control synthesis for robotic manipulation in space applications

    NASA Technical Reports Server (NTRS)

    Sanderson, A. C.; Peshkin, M. A.; Homem-De-mello, L. S.

    1987-01-01

    Space-based robotic systems for diagnosis, repair and assembly of systems will require new techniques of planning and manipulation to accomplish these complex tasks. Results of work in assembly task representation, discrete task planning, and control synthesis which provide a design environment for flexible assembly systems in manufacturing applications, and which extend to planning of manipulation operations in unstructured environments, are summarized. Assembly planning is carried out using the AND/OR graph representation which encompasses all possible partial orders of operations and may be used to plan assembly sequences. Discrete task planning uses the configuration map which facilitates search over a space of discrete operations parameters in sequential operations in order to achieve required goals in the space of bounded configuration sets.

  7. Use of an informed search space maximizes confidence of site-specific assignment of glycoprotein glycosylation.

    PubMed

    Khatri, Kshitij; Klein, Joshua A; Zaia, Joseph

    2017-01-01

    In order to interpret glycopeptide tandem mass spectra, it is necessary to estimate the theoretical glycan compositions and peptide sequences, known as the search space. The simplest way to do this is to build a naïve search space from sets of glycan compositions from public databases and to assume that the target glycoprotein is pure. Often, however, purified glycoproteins contain co-purified glycoprotein contaminants that have the potential to confound assignment of tandem mass spectra based on naïve assumptions. In addition, there is increasing need to characterize glycopeptides from complex biological mixtures. Fortunately, liquid chromatography-mass spectrometry (LC-MS) methods for glycomics and proteomics are now mature and accessible. We demonstrate the value of using an informed search space built from measured glycomes and proteomes to define the search space for interpretation of glycoproteomics data. We show this using α-1-acid glycoprotein (AGP) mixed into a set of increasingly complex matrices. As the mixture complexity increases, the naïve search space balloons and the ability to assign glycopeptides with acceptable confidence diminishes. In addition, it is not possible to identify glycopeptides not foreseen as part of the naïve search space. A search space built from released glycan glycomics and proteomics data is smaller than its naïve counterpart while including the full range of proteins detected in the mixture. This maximizes the ability to assign glycopeptide tandem mass spectra with confidence. As the mixture complexity increases, the number of tandem mass spectra per glycopeptide precursor ion decreases, resulting in lower overall scores and reduced depth of coverage for the target glycoprotein. We suggest use of α-1-acid glycoprotein as a standard to gauge effectiveness of analytical methods and bioinformatics search parameters for glycoproteomics studies. Graphical abstract: assignment of site-specific glycosylation from LC-tandem MS data.

  8. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
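
    Monte Carlo Filtering itself is straightforward to sketch; the toy model, failure threshold, and KS-style comparison below are illustrative assumptions, not the flight-system analysis or the proposed PCA-based extension:

```python
import numpy as np

# Minimal Monte Carlo Filtering sketch: sample inputs, flag runs whose
# output violates a requirement, then compare the flagged vs. unflagged
# input distributions to see which input drives the failures.
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 1, 2000)
x2 = rng.uniform(0, 1, 2000)
y = x1**2 + 0.05 * x2          # toy output: dominated by x1
bad = y > 0.5                  # "failure" subset of the output space

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (max empirical CDF gap)."""
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.max(np.abs(cdf(a) - cdf(b)))

d1 = ks_stat(x1[bad], x1[~bad])   # large: x1 drives failures
d2 = ks_stat(x2[bad], x2[~bad])   # small: x2 barely matters
```

    A large gap between the "failure" and "success" distributions of an input marks it as influential, which is the per-dimension view that the proposed combination of original and principal-component directions generalizes to correlated inputs.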

  9. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W(sub N) spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
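
    The offline/online split can be sketched for an affinely parametrized linear system; the toy matrices and sample parameter points below stand in for a PDE discretization and are not the paper's reduced-basis machinery or its error bounds:

```python
import numpy as np

# Toy affinely parametrized system A(mu) x = f with A(mu) = A0 + mu*A1.
rng = np.random.default_rng(2)
n = 50
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)         # SPD "stiffness" part
A1 = np.diag(np.linspace(1.0, 2.0, n))
f = rng.standard_normal(n)

solve = lambda mu: np.linalg.solve(A0 + mu * A1, f)

# Offline: snapshot solutions at selected parameter points span W_N.
snapshots = np.column_stack([solve(mu) for mu in (0.0, 0.5, 1.0)])
W, _ = np.linalg.qr(snapshots)       # orthonormal reduced basis

# Online: for a new mu, assemble and solve only an N x N (here 3 x 3)
# Galerkin-projected system, independent of the full dimension n.
mu_new = 0.25
A_r = W.T @ (A0 + mu_new * A1) @ W
x_r = W @ np.linalg.solve(A_r, W.T @ f)
err = np.linalg.norm(x_r - solve(mu_new)) / np.linalg.norm(solve(mu_new))
```

    The online cost depends only on the reduced dimension N, which is what makes the approach attractive for repeated evaluations in parameter estimation, design, and real-time control.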

  10. Object oriented fault diagnosis system for space shuttle main engine redlines

    NASA Technical Reports Server (NTRS)

    Rogers, John S.; Mohapatra, Saroj Kumar

    1990-01-01

    A great deal of attention has recently been given to Artificial Intelligence research in the area of computer-aided diagnostics. Due to the dynamic and complex nature of space shuttle red-line parameters, a research effort is under way to develop a real time diagnostic tool that will employ historical and engineering rulebases as well as sensor validity checking. The capability of AI software development tools (KEE and G2) will be explored by applying object oriented programming techniques in accomplishing the diagnostic evaluation.

  11. State-space models’ dirty little secrets: even simple linear Gaussian models can have estimation problems

    NASA Astrophysics Data System (ADS)

    Auger-Méthé, Marie; Field, Chris; Albertsen, Christoffer M.; Derocher, Andrew E.; Lewis, Mark A.; Jonsen, Ian D.; Mills Flemming, Joanna

    2016-05-01

    State-space models (SSMs) are increasingly used in ecology to model time series such as animal movement paths and population dynamics. This type of hierarchical model is often structured to account for two levels of variability: biological stochasticity and measurement error. SSMs are flexible. They can model linear and nonlinear processes using a variety of statistical distributions. Recent ecological SSMs are often complex, with a large number of parameters to estimate. Through a simulation study, we show that even simple linear Gaussian SSMs can suffer from parameter- and state-estimation problems. We demonstrate that these problems occur primarily when measurement error is larger than biological stochasticity, the condition that often drives ecologists to use SSMs. Using an animal movement example, we show how these estimation problems can affect ecological inference. Biased parameter estimates of an SSM describing the movement of polar bears (Ursus maritimus) result in overestimating their energy expenditure. We suggest potential solutions, but show that it often remains difficult to estimate parameters. While SSMs are powerful tools, they can give misleading results and we urge ecologists to assess whether the parameters can be estimated accurately before drawing ecological conclusions from their results.
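
    The two variance components and their estimation can be made concrete with a local-level SSM and a Kalman filter; the model and numbers below are illustrative, not the polar bear movement model:

```python
import numpy as np

# Toy linear Gaussian SSM:
#   state: z_t = z_{t-1} + eta_t,  eta ~ N(0, q)   (biological stochasticity)
#   obs:   y_t = z_t + eps_t,      eps ~ N(0, r)   (measurement error)
rng = np.random.default_rng(3)
T_len, q, r = 500, 0.1, 1.0        # r >> q: the regime the paper warns about
z = np.cumsum(rng.normal(0.0, np.sqrt(q), T_len))   # latent path
y = z + rng.normal(0.0, np.sqrt(r), T_len)          # noisy observations

def kalman_filter(y, q, r):
    """Kalman filter for the local-level model; returns filtered means
    and the log-likelihood that parameter estimation would maximize."""
    m, p, ll, means = 0.0, 10.0, 0.0, []
    for obs in y:
        p = p + q                          # predict
        s = p + r                          # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (obs - m) ** 2 / s)
        k = p / s                          # Kalman gain
        m = m + k * (obs - m)              # update
        p = (1.0 - k) * p
        means.append(m)
    return np.array(means), ll

filt, ll = kalman_filter(y, q, r)
rmse_filt = np.sqrt(np.mean((filt - z) ** 2))   # filtering helps...
rmse_raw = np.sqrt(np.mean((y - z) ** 2))
```

    Filtering recovers the latent path better than the raw observations, yet when r dominates q the likelihood surface over (q, r) is nearly flat along a ridge, which is the identifiability problem the simulation study documents.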

  12. Observability of ionospheric space-time structure with ISR: A simulation study

    NASA Astrophysics Data System (ADS)

    Swoboda, John; Semeter, Joshua; Zettergren, Matthew; Erickson, Philip J.

    2017-02-01

    The sources of error from electronically steerable array (ESA) incoherent scatter radar (ISR) systems are investigated both theoretically and with use of an open-source ISR simulator, developed by the authors, called Simulator for ISR (SimISR). The main sources of error incorporated in the simulator include statistical uncertainty, which arises due to the nature of the measurement mechanism and the inherent space-time ambiguity from the sensor. SimISR can take a field of plasma parameters, parameterized by time and space, and create simulated ISR data at the scattered electric field (i.e., complex receiver voltage) level, subsequently processing these data to show possible reconstructions of the original parameter field. To demonstrate general utility, we show a number of simulation examples, with two cases using data from a self-consistent multifluid transport model. Results highlight the significant influence of the forward model of the ISR process and the resulting statistical uncertainty on plasma parameter measurements and the core experiment design trade-offs that must be made when planning observations. These conclusions further underscore the utility of this class of measurement simulator as a design tool for more optimal experiment design efforts using flexible ESA class ISR systems.

  13. Effective degrees of freedom: a flawed metaphor

    PubMed Central

    Janson, Lucas; Fithian, William; Hastie, Trevor J.

    2015-01-01

    Summary To most applied statisticians, a fitting procedure’s degrees of freedom is synonymous with its model complexity, or its capacity for overfitting to data. In particular, it is often used to parameterize the bias-variance tradeoff in model selection. We argue that, on the contrary, model complexity and degrees of freedom may correspond very poorly. We exhibit and theoretically explore various fitting procedures for which degrees of freedom is not monotonic in the model complexity parameter, and can exceed the total dimension of the ambient space even in very simple settings. We show that the degrees of freedom for any non-convex projection method can be unbounded. PMID:26977114

  14. Time Scale for Adiabaticity Breakdown in Driven Many-Body Systems and Orthogonality Catastrophe

    NASA Astrophysics Data System (ADS)

    Lychkovskiy, Oleg; Gamayun, Oleksandr; Cheianov, Vadim

    2017-11-01

    The adiabatic theorem is a fundamental result in quantum mechanics, which states that a system can be kept arbitrarily close to the instantaneous ground state of its Hamiltonian if the latter varies in time slowly enough. The theorem has an impressive record of applications ranging from foundations of quantum field theory to computational molecular dynamics. In light of this success it is remarkable that a practicable quantitative understanding of what "slowly enough" means is limited to a modest set of systems mostly having a small Hilbert space. Here we show how this gap can be bridged for a broad natural class of physical systems, namely, many-body systems where a small move in the parameter space induces an orthogonality catastrophe. In this class, the conditions for adiabaticity are derived from the scaling properties of the parameter-dependent ground state without a reference to the excitation spectrum. This finding constitutes a major simplification of a complex problem, which otherwise requires solving nonautonomous time evolution in a large Hilbert space.

  15. Intelligent Space Tube Optimization for speeding ground water remedial design.

    PubMed

    Kalwij, Ineke M; Peralta, Richard C

    2008-01-01

    An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.

  16. A fractional factorial probabilistic collocation method for uncertainty propagation of hydrologic model parameters in a reduced dimensional space

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Huang, W.; Fan, Y. R.; Li, Z.

    2015-10-01

    In this study, a fractional factorial probabilistic collocation method is proposed to reveal statistical significance of hydrologic model parameters and their multi-level interactions affecting model outputs, facilitating uncertainty propagation in a reduced dimensional space. The proposed methodology is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability, as well as its capability of revealing complex and dynamic parameter interactions. A set of reduced polynomial chaos expansions (PCEs) only with statistically significant terms can be obtained based on the results of factorial analysis of variance (ANOVA), achieving a reduction of uncertainty in hydrologic predictions. The predictive performance of reduced PCEs is verified by comparing against standard PCEs and the Monte Carlo with Latin hypercube sampling (MC-LHS) method in terms of reliability, sharpness, and Nash-Sutcliffe efficiency (NSE). Results reveal that the reduced PCEs are able to capture hydrologic behaviors of the Xiangxi River watershed, and they are efficient functional representations for propagating uncertainties in hydrologic predictions.
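    The reduced-expansion idea can be illustrated in a one-parameter toy setting. The sketch below (assumed model, with a simple coefficient threshold standing in for the paper's factorial ANOVA screening) fits a Hermite polynomial chaos expansion by least-squares collocation and drops insignificant terms:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(xi):
    # toy "hydrologic model" response to a standardized random parameter xi
    return 1.0 + 0.5 * xi + 0.3 * (xi**2 - 1.0)   # exactly He0, He1, He2 terms

# collocation: sample the input, evaluate the model, regress on Hermite polynomials
xi = rng.standard_normal(2000)
V = np.polynomial.hermite_e.hermevander(xi, 5)    # He_0..He_5 design matrix
coef, *_ = np.linalg.lstsq(V, model(xi), rcond=None)

# 'reduced' PCE: keep only significant terms (simple threshold here)
reduced = np.where(np.abs(coef) > 0.05, coef, 0.0)
print(np.round(reduced, 3))   # ≈ [1.0, 0.5, 0.3, 0, 0, 0]
```

    The probabilists' Hermite polynomials are the natural basis for Gaussian inputs; the reduced vector retains only the three terms actually present in the model.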

  17. Anomalous solute transport in saturated porous media: Relating transport model parameters to electrical and nuclear magnetic resonance properties

    USGS Publications Warehouse

    Swanson, Ryan D; Binley, Andrew; Keating, Kristina; France, Samantha; Osterman, Gordon; Day-Lewis, Frederick D.; Singha, Kamini

    2015-01-01

    The advection-dispersion equation (ADE) fails to describe commonly observed non-Fickian solute transport in saturated porous media, necessitating the use of other models such as the dual-domain mass-transfer (DDMT) model. DDMT model parameters are commonly calibrated via curve fitting, providing little insight into the relation between effective parameters and physical properties of the medium. There is a clear need for material characterization techniques that can provide insight into the geometry and connectedness of pore spaces related to transport model parameters. Here, we consider proton nuclear magnetic resonance (NMR), direct-current (DC) resistivity, and complex conductivity (CC) measurements for this purpose, and assess these methods using glass beads as a control and two different samples of the zeolite clinoptilolite, a material that demonstrates non-Fickian transport due to intragranular porosity. We estimate DDMT parameters via calibration of a transport model to column-scale solute tracer tests, and compare NMR, DC resistivity, and CC results, which reveal that grain size alone does not control transport properties and measured geophysical parameters; rather, the volume and arrangement of the pore space play important roles. NMR cannot provide estimates of more-mobile and less-mobile pore volumes in the absence of tracer tests because these estimates depend critically on the selection of a material-dependent and flow-dependent cutoff time. Increased electrical connectedness from DC resistivity measurements is associated with greater mobile pore space determined from transport model calibration. CC was hypothesized to be related to length scales of mass transfer, but the CC response is unrelated to DDMT.

  18. Crystallization and preliminary X-ray diffraction analysis of the peripheral light-harvesting complex LH2 from Marichromatium purpuratum.

    PubMed

    Cranston, Laura J; Roszak, Aleksander W; Cogdell, Richard J

    2014-06-01

    LH2 from the purple photosynthetic bacterium Marichromatium (formerly known as Chromatium) purpuratum is an integral membrane pigment-protein complex that is involved in harvesting light energy and transferring it to the LH1-RC `core' complex. The purified LH2 complex was crystallized using the sitting-drop vapour-diffusion method at 294 K. The crystals diffracted to a resolution of 6 Å using synchrotron radiation and belonged to the tetragonal space group I4, with unit-cell parameters a=b=109.36, c=80.45 Å. The data appeared to be twinned, producing apparent diffraction symmetry I422. The tetragonal symmetry of the unit cell and diffraction for the crystals of the LH2 complex from this species reveal that this complex is an octamer.

  19. Crystallization and preliminary X-ray diffraction analysis of the peripheral light-harvesting complex LH2 from Marichromatium purpuratum

    PubMed Central

    Cranston, Laura J.; Roszak, Aleksander W.; Cogdell, Richard J.

    2014-01-01

    LH2 from the purple photosynthetic bacterium Marichromatium (formerly known as Chromatium) purpuratum is an integral membrane pigment–protein complex that is involved in harvesting light energy and transferring it to the LH1–RC ‘core’ complex. The purified LH2 complex was crystallized using the sitting-drop vapour-diffusion method at 294 K. The crystals diffracted to a resolution of 6 Å using synchrotron radiation and belonged to the tetragonal space group I4, with unit-cell parameters a = b = 109.36, c = 80.45 Å. The data appeared to be twinned, producing apparent diffraction symmetry I422. The tetragonal symmetry of the unit cell and diffraction for the crystals of the LH2 complex from this species reveal that this complex is an octamer. PMID:24915099

  20. Integrated Logistics Support Analysis of the International Space Station Alpha: An Overview of the Maintenance Time Dependent Parameter Prediction Methods Enhancement

    NASA Technical Reports Server (NTRS)

    Sepehry-Fard, F.; Coulthard, Maurice H.

    1995-01-01

    The objective of this publication is to introduce the enhancement methods for the overall reliability and maintainability methods of assessment on the International Space Station. It is essential that the process used to predict the values of maintenance time dependent variable parameters, such as mean time between failure (MTBF) over time, does not in itself generate uncontrolled deviation in the results of the ILS analysis, such as life cycle costs, spares calculation, etc. Furthermore, the very acute problems of micrometeorites, cosmic rays, flares, atomic oxygen, ionization effects, orbital plumes and all the other factors that differentiate maintainable space operations from non-maintainable space operations and/or ground operations must be accounted for. Therefore, these parameters need to be subjected to a special and complex process. Since reliability and maintainability strongly depend on the operating conditions that are encountered during the entire life of the International Space Station, it is important that such conditions are accurately identified at the beginning of the logistics support requirements process. Environmental conditions which exert a strong influence on the International Space Station will be discussed in this report. Concurrent (combined) space environments may be more detrimental to the reliability and maintainability of the International Space Station than the effects of a single environment. In characterizing the logistics support requirements process, the developed design/test criteria must consider both single and combined environments in anticipation of providing hardware capability to withstand the hazards of the International Space Station profile. The effects of typical combined environments on the International Space Station will be shown in a matrix relationship.
Combinations of environments whose total effect is more damaging than the cumulative effects of the environments acting singly may include, for example, temperature, humidity, altitude, shock, and vibration while an item is being transported. The item must be examined for these effects from acceptance through its end-of-life sequence.

  1. Embodied Space: a Sensorial Approach to Spatial Experience

    NASA Astrophysics Data System (ADS)

    Durão, Maria João

    2009-03-01

    A reflection is presented on the significance of the role of the body in the interpretation and future creation of spatial living structures. The paper draws on the body as cartography of sensorial meaning that includes vision, touch, smell, hearing, orientation and movement to discuss possible relationships with psychological and sociological parameters of 'sensorial space'. The complex dynamics of body-space is further explored from the standpoint of perceptual variables such as color, light, materialities, texture and their connections with design, technology, culture and symbology. Finally, the paper discusses the integration of knowledge and experimentation in the design of future habitats where body-sensitive frameworks encompass flexibility, communication, interaction and cognitive-driven solutions.

  2. Handy elementary algebraic properties of the geometry of entanglement

    NASA Astrophysics Data System (ADS)

    Blair, Howard A.; Alsing, Paul M.

    2013-05-01

    The space of separable states of a quantum system is a hyperbolic surface in a high dimensional linear space, which we call the separation surface, within the exponentially high dimensional linear space containing the quantum states of an n component multipartite quantum system. A vector in the linear space is representable as an n-dimensional hypermatrix with respect to bases of the component linear spaces. A vector will be on the separation surface iff every determinant of every 2-dimensional, 2-by-2 submatrix of the hypermatrix vanishes. This highly rigid constraint can be tested merely in time asymptotically proportional to d, where d is the dimension of the state space of the system due to the extreme interdependence of the 2-by-2 submatrices. The constraint on 2-by-2 determinants entails an elementary closed-form formula for a parametric characterization of the entire separation surface with d-1 parameters in the characterization. The state of a factor of a partially separable state can be calculated in time asymptotically proportional to the dimension of the state space of the component. If all components of the system have approximately the same dimension, the time complexity of calculating a component state as a function of the parameters is asymptotically proportional to the time required to sort the basis. Metric-based entanglement measures of pure states are characterized in terms of the separation hypersurface.
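    For the bipartite (n = 2) case the constraint says simply that the coefficient matrix has rank one, i.e. that all of its 2-by-2 minors vanish. A small numerical sketch (hypothetical `is_separable` helper, not from the paper):

```python
import numpy as np

def is_separable(psi, d1, d2, tol=1e-10):
    """A bipartite pure state is separable iff every 2x2 minor of its
    coefficient matrix vanishes (i.e. the matrix has rank one)."""
    M = psi.reshape(d1, d2)
    for i in range(d1):
        for k in range(i + 1, d1):
            for j in range(d2):
                for l in range(j + 1, d2):
                    if abs(M[i, j] * M[k, l] - M[i, l] * M[k, j]) > tol:
                        return False
    return True

product = np.kron([1, 1j], [1, 0]) / np.sqrt(2)    # |+i> ⊗ |0>, separable
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)          # Bell state, entangled
print(is_separable(product, 2, 2), is_separable(bell, 2, 2))
```

    The quadruple loop is for transparency only; the paper's point is that the interdependence of the minors lets the test run in time proportional to d rather than to the number of minors.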

  3. Structure parameters in rotating Couette-Poiseuille channel flow

    NASA Technical Reports Server (NTRS)

    Knightly, George H.; Sather, D.

    1986-01-01

    It is well-known that a number of steady state problems in fluid mechanics involving systems of nonlinear partial differential equations can be reduced to the problem of solving a single operator equation of the form v + lambda Av + lambda B(v) = 0, with v ∈ H and lambda ∈ R, where H is an appropriate (real or complex) Hilbert space. Here lambda is a typical load parameter, e.g., the Reynolds number, A is a linear operator, and B is a quadratic operator generated by a bilinear form. In this setting many bifurcation and stability results for such problems were obtained. A rotating Couette-Poiseuille channel flow was studied, and it was shown that, in general, the superposition of a Poiseuille flow on a rotating Couette channel flow is destabilizing.

  4. Purification, crystallization and preliminary X-ray analysis of the BseCI DNA methyltransferase from Bacillus stearothermophilus in complex with its cognate DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapetaniou, Evangelia G.; Kotsifaki, Dina; Providaki, Mary

    2007-01-01

    The DNA methyltransferase M.BseCI from B. stearothermophilus was crystallized as a complex with its cognate DNA. Crystals belong to space group P6 and diffract to 2.5 Å resolution at a synchrotron source. The DNA methyltransferase M.BseCI from Bacillus stearothermophilus (EC 2.1.1.72), a 579-amino-acid enzyme, methylates the N6 atom of the 3′ adenine in the sequence 5′-ATCGAT-3′. M.BseCI was crystallized in complex with its cognate DNA. The crystals were found to belong to the hexagonal space group P6, with unit-cell parameters a = b = 87.0, c = 156.1 Å, γ = 120.0° and one molecule in the asymmetric unit. Two complete data sets were collected at wavelengths of 1.1 and 2.0 Å to 2.5 and 2.8 Å resolution, respectively, using synchrotron radiation at 100 K.

  5. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The concept of CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the square of the singular values, solved from the normal matrix formed from the FRF matrix, [ H( jω)] H[ H( jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of the complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple reference data is applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, uneven frequency spacing data such as data from spatial sine testing can be used. A second-stage procedure for accurate damped natural frequency and damping estimation as well as mode shape scaling is also discussed in this paper.
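    The construction described above, singular values of the FRF matrix computed at each spectral line, can be sketched on synthetic data (a toy 2-DOF system, assumed here; not the authors' test case):

```python
import numpy as np

# Synthetic FRF matrix H(jw) for a 2-DOF system with modes at 10 and 25 rad/s.
# CMIF = squared singular values of H(jw) at each spectral line.
omegas = np.linspace(1, 40, 400)
modes = [(10.0, 0.02, np.array([1.0, 0.8])),    # (wn, zeta, mode shape)
         (25.0, 0.03, np.array([1.0, -0.6]))]

cmif = np.empty((len(omegas), 2))
for idx, w in enumerate(omegas):
    H = np.zeros((2, 2), dtype=complex)
    for wn, z, phi in modes:
        H += np.outer(phi, phi) / (wn**2 - w**2 + 2j * z * wn * w)
    s = np.linalg.svd(H, compute_uv=False)      # singular values at this line
    cmif[idx] = s**2                            # the CMIF curves

# peaks of the first CMIF curve indicate the damped natural frequencies
peak = omegas[np.argmax(cmif[:, 0])]
print(f"dominant CMIF peak near {peak:.1f} rad/s")
```

    The left singular vectors at a peak approximate the corresponding mode shape, and tracking the second singular value is what allows repeated roots to be detected.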

  6. Experimental Study of Sound Waves in Sandy Sediment

    DTIC Science & Technology

    2003-05-01

    ... parameter model) and measurements (using a reflection ratio technique) includes derivations and measurements of acoustic impedances, effective densities ... a model used to find the acoustic impedance of a Biot medium ... [38] derived the complex reflection coefficient of plane acoustic waves from a poro-elastic sediment half-space. The boundary condition model is ...

  7. Expression, crystallization and preliminary crystallographic analysis of RNA-binding protein Hfq (YmaH) from Bacillus subtilis in complex with an RNA aptamer.

    PubMed

    Baba, Seiki; Someya, Tatsuhiko; Kawai, Gota; Nakamura, Kouji; Kumasaka, Takashi

    2010-05-01

    The Hfq protein is a hexameric RNA-binding protein which regulates gene expression by binding to RNA under the influence of diverse environmental stresses. Its ring structure binds various types of RNA, including mRNA and sRNA. RNA-bound structures of Hfq from Escherichia coli and Staphylococcus aureus have been revealed to have poly(A) RNA at the distal site and U-rich RNA at the proximal site, respectively. Here, crystals of a complex of the Bacillus subtilis Hfq protein with an A/G-repeat 7-mer RNA (Hfq-RNA) that were prepared using the hanging-drop vapour-diffusion technique are reported. The type 1 Hfq-RNA crystals belonged to space group I422, with unit-cell parameters a = b = 123.70, c = 119.13 Å, while the type 2 Hfq-RNA crystals belonged to space group F222, with unit-cell parameters a = 91.92, b = 92.50, c = 114.92 Å. Diffraction data were collected to a resolution of 2.20 Å from both crystal forms. The hexameric structure of the Hfq protein was clearly shown by self-rotation analysis.

  8. Mechanically tunable actin networks using programmable DNA based cross-linkers

    NASA Astrophysics Data System (ADS)

    Schnauss, Joerg; Lorenz, Jessica; Schuldt, Carsten; Kaes, Josef; Smith, David

    Cells employ multiple cross-linkers with very different properties. Studies of the entire phase space, however, have been infeasible because they were restricted to naturally occurring cross-linkers, which cannot be controllably varied and differ in many parameters. We resolve this limitation by forming artificial actin cross-linkers, which can be controllably varied. The basic building block is DNA, enabling a well-defined length variation. DNA can be attached to actin-binding peptides with known binding affinities. We used bulk rheology to investigate the mechanical properties of these networks. We were able to reproduce mechanical features of actin networks cross-linked by fascin by using a short version of our artificial complex with a high binding affinity. Additionally, we were able to reproduce findings for the cross-linker alpha-actinin by employing a long cross-linker with a low binding affinity. Between these natural limits we investigated three different cross-linker lengths, each with two different binding affinities. With these controlled variations we are able to precisely screen the phase space of cross-linked actin networks by changing only one specific parameter and not the entire set of properties, as in the case of naturally occurring cross-linking complexes.

  9. Models and observations of Arctic melt ponds

    NASA Astrophysics Data System (ADS)

    Golden, K. M.

    2016-12-01

    During the Arctic melt season, the sea ice surface undergoes a striking transformation from vast expanses of snow covered ice to complex mosaics of ice and melt ponds. Sea ice albedo, a key parameter in climate modeling, is largely determined by the complex evolution of melt pond configurations. In fact, ice-albedo feedback has played a significant role in the recent declines of the summer Arctic sea ice pack. However, understanding melt pond evolution remains a challenge to improving climate projections. It has been found that as the ponds grow and coalesce, the fractal dimension of their boundaries undergoes a transition from 1 to about 2, around a critical length scale of 100 square meters in area. As the ponds evolve they take complex, self-similar shapes with boundaries resembling space-filling curves. I will outline how mathematical models of composite materials and statistical physics, such as percolation and Ising models, are being used to describe this evolution and predict key geometrical parameters that agree very closely with observations.
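    The fractal dimension referred to above comes from the perimeter-area scaling P ~ A^(D/2). A minimal sketch (idealized square "ponds", assumed data) recovers D = 1 for simple shapes; coalesced ponds with space-filling boundaries trend toward D ≈ 2:

```python
import numpy as np

# Perimeter-area scaling: P ~ A^(D/2), so D = 2 * slope of log P vs log A.
sides = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
A = sides**2          # areas of idealized square ponds
P = 4 * sides         # their perimeters
D = 2 * np.polyfit(np.log(A), np.log(P), 1)[0]
print(f"fractal dimension of square 'ponds': D = {D:.2f}")
```

    Applied to observed pond boundaries binned by area, the same regression exhibits the reported transition from D ≈ 1 to D ≈ 2 around the critical scale.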

  10. Fundamental properties of resonances

    PubMed Central

    Ceci, S.; Hadžimehmedović, M.; Osmanović, H.; Percan, A.; Zauner, B.

    2017-01-01

    All resonances, from hydrogen nuclei excited by the high-energy gamma rays in deep space to newly discovered particles produced in Large Hadron Collider, should be described by the same fundamental physical quantities. However, two distinct sets of properties are used to describe resonances: the pole parameters (complex pole position and residue) and the Breit-Wigner parameters (mass, width, and branching fractions). There is an ongoing decades-old debate on which one of them should be abandoned. In this study of nucleon resonances appearing in the elastic pion-nucleon scattering we discover an intricate interplay of the parameters from both sets, and realize that neither set is completely independent or fundamental on its own. PMID:28345595
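    The distinction between the two parameter sets can already be seen in the idealized, energy-independent Breit-Wigner amplitude, whose pole sits exactly at E = M − iΓ/2; for realistic, energy-dependent amplitudes the pole position and the Breit-Wigner parameters differ. A quick numerical sketch (toy values, loosely reminiscent of the Δ(1232); not the authors' analysis):

```python
# Idealized Breit-Wigner amplitude T(E) = (Γ/2) / (M - E - iΓ/2):
# the complex pole position E = M - iΓ/2 encodes both mass and width.
M, Gamma = 1.232, 0.118    # toy values in GeV

def T(E):
    return (Gamma / 2) / (M - E - 1j * Gamma / 2)

E_pole = M - 1j * Gamma / 2
print(abs(T(M)))                 # on resonance, |T| = 1 (unitarity limit)
print(abs(T(E_pole + 1e-6)))     # amplitude diverges approaching the pole
```

    For this idealized form the pole parameters and Breit-Wigner parameters coincide; the interplay the paper studies arises precisely when they do not.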

  11. Autonomous perception and decision making in cyber-physical systems

    NASA Astrophysics Data System (ADS)

    Sarkar, Soumik

    2011-07-01

    The cyber-physical system (CPS) is a relatively new interdisciplinary technology area that includes the general class of embedded and hybrid systems. CPSs require integration of computation and physical processes that involves the aspects of physical quantities such as time, energy and space during information processing and control. The physical space is the source of information and the cyber space makes use of the generated information to make decisions. This dissertation proposes an overall architecture of autonomous perception-based decision & control of complex cyber-physical systems. Perception involves the recently developed framework of Symbolic Dynamic Filtering for abstraction of the physical world in the cyber space. For example, under this framework, sensor observations from a physical entity are discretized temporally and spatially to generate blocks of symbols, also called words, that form a language. A grammar of a language is the set of rules that determine the relationships among words to build sentences. Subsequently, a physical system is conjectured to be a linguistic source that is capable of generating a specific language. The proposed technology is validated on various (experimental and simulated) case studies that include health monitoring of aircraft gas turbine engines, detection and estimation of fatigue damage in polycrystalline alloys, and parameter identification. Control of complex cyber-physical systems involves distributed sensing, computation, control as well as complexity analysis. A novel statistical mechanics-inspired complexity analysis approach is proposed in this dissertation. In such a scenario of networked physical systems, the distribution of physical entities determines the underlying network topology and the interaction among the entities forms the abstract cyber space. 
It is envisioned that the general contributions, made in this dissertation, will be useful for potential application areas such as smart power grids and buildings, distributed energy systems, advanced health care procedures and future ground and air transportation systems.
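    The symbolization step of Symbolic Dynamic Filtering can be sketched as follows (a deliberately simplified version, assumed for illustration: amplitude-quantile partitioning and length-2 word statistics on a toy signal; the dissertation's framework is considerably richer):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)

# Discretize a time series into symbols via amplitude partitioning, then
# summarize the resulting "language" by word (symbol-pair) frequencies.
signal = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

edges = np.quantile(signal, [0.25, 0.5, 0.75])    # 4-symbol alphabet
symbols = np.digitize(signal, edges)               # symbol sequence, values 0..3

words = Counter(zip(symbols[:-1], symbols[1:]))    # length-2 words
total = sum(words.values())
probs = {w: c / total for w, c in words.items()}
print(sorted(probs.items())[:4])
```

    Changes in the word-probability vector between a nominal and a degraded condition then serve as the anomaly measure for tasks like engine health monitoring.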

  12. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
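    The adjoint-based sensitivity idea can be sketched on a scalar toy model (assumed for illustration; not the plague model): for dy/dt = −py with quantity of interest J = y(T), the adjoint λ satisfies λ' = pλ with λ(T) = 1, and dJ/dp = −∫₀ᵀ λy dt, which matches the exact derivative −T y₀ e^(−pT):

```python
import numpy as np

# Adjoint sensitivity for dy/dt = -p*y, J = y(T):
# adjoint lam' = p*lam, lam(T) = 1, and dJ/dp = -integral(lam * y) dt.
p, y0, T, n = 0.7, 2.0, 3.0, 200_001
t = np.linspace(0.0, T, n)
y = y0 * np.exp(-p * t)                    # forward solution
lam = np.exp(-p * (T - t))                 # adjoint solution, lam(T) = 1
f = lam * y
dJdp = -(f[:-1] + f[1:]).sum() * (t[1] - t[0]) / 2   # trapezoid rule
print(dJdp, -T * y0 * np.exp(-p * T))      # adjoint estimate vs exact derivative
```

    The appeal, as the abstract notes, is cost: one forward solve plus one adjoint solve yields the derivative of J with respect to every parameter at once.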

  13. Color image encryption based on color blend and chaos permutation in the reality-preserving multiple-parameter fractional Fourier transform domain

    NASA Astrophysics Data System (ADS)

    Lang, Jun

    2015-03-01

    In this paper, we propose a novel color image encryption method by using Color Blend (CB) and Chaos Permutation (CP) operations in the reality-preserving multiple-parameter fractional Fourier transform (RPMPFRFT) domain. The original color image is first exchanged and mixed randomly from the standard red-green-blue (RGB) color space to R‧G‧B‧ color space by rotating the color cube with a random angle matrix. Then RPMPFRFT is employed for changing the pixel values of color image, three components of the scrambled RGB color space are converted by RPMPFRFT with three different transform pairs, respectively. Compared with transforms whose output is complex, the RPMPFRFT ensures that the output is real, which saves image storage space and is convenient for transmission in practical applications. To further enhance the security of the encryption system, the output of the former steps is scrambled by juxtaposition of sections of the image in the reality-preserving multiple-parameter fractional Fourier domains and the alignment of sections is determined by two coupled chaotic logistic maps. The parameters in the Color Blend, Chaos Permutation and the RPMPFRFT transform are regarded as the key in the encryption algorithm. The proposed color image encryption can also be applied to encrypt three gray images by transforming the gray images into three RGB color components of a specially constructed color image. Numerical simulations are performed to demonstrate that the proposed algorithm is feasible, secure, sensitive to keys and robust to noise attack and data loss.
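    The Chaos Permutation step can be sketched as follows (a generic logistic-map construction, assumed for illustration; the paper couples two logistic maps and permutes image sections):

```python
import numpy as np

def chaos_permutation(n, x0=0.37, r=3.99):
    """Derive a permutation of range(n) from a logistic-map orbit;
    the initial condition x0 acts as part of the secret key."""
    x = np.empty(n)
    xi = x0
    for i in range(n):
        xi = r * xi * (1 - xi)    # logistic map iteration
        x[i] = xi
    return np.argsort(x)           # rank order of the chaotic orbit

data = np.arange(16)
perm = chaos_permutation(16)
scrambled = data[perm]
inverse = np.argsort(perm)         # a receiver with the same key inverts it
assert np.array_equal(scrambled[inverse], data)
print(scrambled)
```

    Because the logistic map is sensitive to initial conditions, a slightly wrong key value produces an entirely different permutation, which is the source of the reported key sensitivity.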

  14. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
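    In two dimensions, vortex cores in a complex order parameter field can be located by summing wrapped phase differences around each plaquette; a winding of ±1 marks a core. A minimal sketch (toy 2D field, assumed for illustration; not the authors' 3D graph-based method):

```python
import numpy as np

def winding(psi):
    """Phase winding number of each plaquette of a 2D complex field."""
    theta = np.angle(psi)
    def d(a, b):                   # phase difference wrapped to (-pi, pi]
        return np.angle(np.exp(1j * (b - a)))
    w = (d(theta[:-1, :-1], theta[:-1, 1:]) +
         d(theta[:-1, 1:],  theta[1:, 1:]) +
         d(theta[1:, 1:],   theta[1:, :-1]) +
         d(theta[1:, :-1],  theta[:-1, :-1]))
    return np.rint(w / (2 * np.pi)).astype(int)

# order parameter with a single vortex at the grid center
y, x = np.mgrid[-8:9, -8:9] + 0.5   # offset so no grid point sits on the core
psi = (x + 1j * y) / np.sqrt(x**2 + y**2)
w = winding(psi)
print("total winding:", w.sum())
```

    Extending this to 3D meshes, where the nonzero-winding plaquettes threaded by a vortex line are linked into a connected graph, is essentially the extraction step the paper describes.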

  15. Viterbi decoding for satellite and space communication.

    NASA Technical Reports Server (NTRS)

    Heller, J. A.; Jacobs, I. M.

    1971-01-01

    Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, are presented as an efficient system for reliable communication on power limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4 and 8 level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint length 7 Viterbi decoder is discussed. Finally a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
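    A hard-decision Viterbi decoder for the standard constraint-length-3, rate-1/2 code with generators (7, 5) in octal can be sketched in a few lines (a generic textbook construction, not the 2-Mbit/sec hardware design discussed):

```python
G = (0b111, 0b101)   # generators of the standard K=3, rate-1/2 code

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                 # newest bit in the high position
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                       # shift register advances
    return out

def viterbi(received):
    INF = float("inf")
    metric = [0.0, INF, INF, INF]              # start in the all-zero state
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):                   # hypothesize the next input bit
                reg = (b << 2) | s
                ns = reg >> 1
                expected = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(expected, r))
                if m < new_metric[ns]:         # keep only the survivor path
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(msg)
coded[3] ^= 1                                  # flip one channel bit
print(viterbi(coded) == msg)
```

    The survivor-path pruning is what keeps decoder complexity linear in message length but exponential in constraint length, the tradeoff the paper quantifies.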

  16. Dust acoustic cnoidal waves in a polytropic complex plasma

    NASA Astrophysics Data System (ADS)

    El-Labany, S. K.; El-Taibany, W. F.; Abdelghany, A. M.

    2018-01-01

    The nonlinear characteristics of dust acoustic (DA) waves in an unmagnetized collisionless complex plasma containing adiabatic electrons and ions and negatively charged dust grains (including the effects of modified polarization force) are investigated. Employing the reductive perturbation technique, a Korteweg-de Vries-Burgers (KdVB) equation is derived. The analytical solution for the KdVB equation is discussed. Also, the bifurcation and phase portrait analyses are presented to recognize different types of possible solutions. The dependence of the properties of nonlinear DA waves on the system parameters is investigated. It has been shown that an increase in the value of the modified polarization parameter leads to a fast decay and diminishes the oscillation amplitude of the DA damped cnoidal wave. The relevance of our findings and their possible applications to laboratory and space plasma situations is discussed.

  17. Sensorimotor restriction affects complex movement topography and reachable space in the rat motor cortex.

    PubMed

    Budri, Mirco; Lodi, Enrico; Franchi, Gianfranco

    2014-01-01

    Long-duration intracortical microstimulation (ICMS) studies with 500 ms of current pulses suggest that the forelimb area of the motor cortex is organized into several spatially distinct functional zones that organize movements into complex sequences. Here we studied how sensorimotor restriction modifies the extent of functional zones, complex movements, and reachable space representation in the rat forelimb M1. Sensorimotor restriction was achieved by means of whole-forelimb casting of 30 days duration. Long-duration ICMS was carried out 12 h and 14 days after cast removal. Evoked movements were measured using a high-resolution 3D optical system. Long-term cast caused: (i) a reduction in the number of sites where complex forelimb movement could be evoked; (ii) a shrinkage of functional zones but no change in their center of gravity; (iii) a reduction in movement with proximal/distal coactivation; (iv) a reduction in maximal velocity, trajectory and vector length of movement, but no changes in latency or duration; (v) a large restriction of reachable space. Fourteen days of forelimb freedom after casting caused: (i) a recovery of the number of sites where complex forelimb movement could be evoked; (ii) a recovery of functional zone extent and movement with proximal/distal coactivation; (iii) an increase in movement kinematics, but only partial restoration of control rat values; (iv) a slight increase in reachability parameters, but these remained far below baseline values. We pose the hypothesis that specific aspects of complex movement may be stored within parallel motor cortex re-entrant systems.

  18. Sensorimotor restriction affects complex movement topography and reachable space in the rat motor cortex

    PubMed Central

    Budri, Mirco; Lodi, Enrico; Franchi, Gianfranco

    2014-01-01

    Long-duration intracortical microstimulation (ICMS) studies with 500 ms of current pulses suggest that the forelimb area of the motor cortex is organized into several spatially distinct functional zones that organize movements into complex sequences. Here we studied how sensorimotor restriction modifies the extent of functional zones, complex movements, and reachable space representation in the rat forelimb M1. Sensorimotor restriction was achieved by means of whole-forelimb casting of 30 days duration. Long-duration ICMS was carried out 12 h and 14 days after cast removal. Evoked movements were measured using a high-resolution 3D optical system. Long-term cast caused: (i) a reduction in the number of sites where complex forelimb movement could be evoked; (ii) a shrinkage of functional zones but no change in their center of gravity; (iii) a reduction in movement with proximal/distal coactivation; (iv) a reduction in maximal velocity, trajectory and vector length of movement, but no changes in latency or duration; (v) a large restriction of reachable space. Fourteen days of forelimb freedom after casting caused: (i) a recovery of the number of sites where complex forelimb movement could be evoked; (ii) a recovery of functional zone extent and movement with proximal/distal coactivation; (iii) an increase in movement kinematics, but only partial restoration of control rat values; (iv) a slight increase in reachability parameters, but these remained far below baseline values. We pose the hypothesis that specific aspects of complex movement may be stored within parallel motor cortex re-entrant systems. PMID:25565987

  19. Inkjet-printed disposable metal complexing indicator-displacement assay for sulphide determination in water.

    PubMed

    Ariza-Avidad, M; Agudo-Acemel, M; Salinas-Castillo, A; Capitán-Vallvey, L F

    2015-05-04

    A sulphide-selective colorimetric metal complexing indicator-displacement assay has been developed using an immobilized copper(II) complex of the azo dye 1-(2-pyridylazo)-2-naphthol printed by inkjetting on a nylon support. The change in colour, measured from an image of the disposable membrane acquired by a digital camera and quantified by the H coordinate of the HSV colour space as the analytical parameter, makes it possible to sense sulphide in aqueous solution at pH 7.4 with a dynamic range up to 145 μM, a detection limit of 0.10 μM and a precision between 2 and 11%. Copyright © 2015 Elsevier B.V. All rights reserved.
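
    The H-coordinate readout described above can be sketched with Python's standard colorsys module standing in for the image-processing step; the pixel values are illustrative, not the paper's calibration:

```python
import colorsys

def hue_parameter(r, g, b):
    """H coordinate of the HSV colour space, in degrees, for an 8-bit RGB
    pixel -- the analytical parameter used in this kind of colorimetric assay."""
    h, _s, _v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return 360.0 * h

# Hypothetical membrane pixels: pure red reads 0 degrees, pure green 120.
print(round(hue_parameter(255, 0, 0), 3))   # 0.0
print(round(hue_parameter(0, 255, 0), 3))   # 120.0
```

    In practice the hue would be averaged over the membrane region of the image before being mapped to concentration through a calibration curve.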

  20. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition (FSDD) method for operational modal analysis (OMA) is presented in this paper, which is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectral density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
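
    The SVD step that separates signal space from noise space can be illustrated on a toy output-PSD matrix: at a resonance the PSD matrix is approximately rank one, so the first left singular vector estimates the mode shape. The mode shape and noise level below are invented for the sketch:

```python
import numpy as np

# At a resonance frequency w, G(w) ~ s * phi phi^H plus a weak noise-space
# contribution; the first singular vector recovers phi up to sign and scale.
rng = np.random.default_rng(0)
phi = np.array([1.0, 2.0, -1.0])           # assumed "true" mode shape
G = 10.0 * np.outer(phi, phi)              # dominant modal contribution
G += 0.01 * rng.standard_normal((3, 3))    # weak noise-space contribution

U, s, Vh = np.linalg.svd(G)
phi_est = U[:, 0]                          # signal space: first singular vector

# Compare up to sign and scale via the Modal Assurance Criterion (MAC).
mac = (phi @ phi_est) ** 2 / ((phi @ phi) * (phi_est @ phi_est))
print(round(mac, 3))   # 1.0
```

    The remaining singular values quantify the noise space; their separation from the first singular value is what makes the decomposition useful as a mode indicator.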

  1. Mutually unbiased bases and semi-definite programming

    NASA Astrophysics Data System (ADS)

    Brierley, Stephen; Weigert, Stefan

    2010-11-01

    A complex Hilbert space of dimension six supports at least three but not more than seven mutually unbiased bases. Two computer-aided analytical methods to tighten these bounds are reviewed, based on a discretization of parameter space and on Gröbner bases. A third algorithmic approach is presented: the non-existence of more than three mutually unbiased bases in composite dimensions can be decided by a global optimization method known as semidefinite programming. The method is used to confirm that the spectral matrix cannot be part of a complete set of seven mutually unbiased bases in dimension six.
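
    As a numerical companion to the abstract, the defining property of mutually unbiased bases, |⟨e_i|f_j⟩|² = 1/d, can be checked for the computational and Fourier bases in dimension six (these two are always mutually unbiased; the open problem concerns how many further bases can be added):

```python
import numpy as np

# Columns of I form the computational basis; columns of F form the Fourier
# basis. Mutual unbiasedness: every squared overlap equals 1/d.
d = 6
I = np.eye(d)
F = np.exp(2j * np.pi * np.outer(range(d), range(d)) / d) / np.sqrt(d)

overlaps = np.abs(I.conj().T @ F) ** 2
print(np.allclose(overlaps, 1.0 / d))   # True
```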

  2. Maintenance of the catalog of artificial objects in space.

    NASA Astrophysics Data System (ADS)

    Khutorovskij, Z. N.

    1994-01-01

    The catalog of artificial objects in space (AOS) is useful for estimating the safety of space flights, for constructing temporal and spatial models of the flux of AOS, for determining when and where dangerous AOS will break up, for tracking inoperative instruments and space stations, for eliminating false alarms triggered by observations of AOS in the Ballistic Missile Early Warning System and in the Anti-Missile system, etc. At present, the Space Surveillance System (located in the former USSR) automatically maintains a catalog consisting of more than 5000 AOS with dimensions of at least 10 cm. The orbital parameters are continuously updated from radar tracking data. The author describes the software used to process this information and presents some features of the system itself, including the number of objects in various stages of the tracking process, the orbital parameters of AOS which break up and how the fragments are detected, the accuracy of tracking and predicting the orbits of the AOS, and the accuracy of estimating when and where an AOS will break up. As an example, the author presents the results of determining the time when the orbiting complex Salyut-7 - Kosmos-1686 would break up, and where it would impact.

  3. Radiation investigations with Liulin-5 charged particle telescope on the International Space Station: review of results for years 2007-2015

    NASA Astrophysics Data System (ADS)

    Koleva, Rositza; Semkova, Jordanka; Krastev, Krasimir; Bankov, Nikolay; Malchev, Stefan; Benghin, Victor; Shurshakov, Vyacheslav

    2017-04-01

    The radiation field around the Earth is complex, composed of galactic cosmic rays, trapped particles of the Earth's radiation belts, solar energetic particles, albedo particles from the Earth's atmosphere and secondary radiation produced in the space vehicle shielding materials around the biological objects. Dose characteristics in the near-Earth and space radiation environment also depend on many other parameters such as the orbit parameters, solar cycle phase and current helio- and geophysical conditions. From June 2007 until 2015 the Liulin-5 charged particle telescope observed the radiation characteristics in two different modules of the International Space Station (ISS). In the period from 2007 to 2009, measurements were conducted in the spherical tissue-equivalent phantom of the MATROSHKA-R project located in the PIRS module of the ISS. In the period from 2012 to 2015, measurements were conducted in and outside the phantom located in the Small Research Module of the ISS. In this presentation, attention is drawn to the results obtained for the dose rates, particle fluxes and dose equivalent rates in and outside the phantom from the galactic cosmic rays, trapped protons and solar energetic particle events which occurred in that period.

  4. Vortex conception of rotor and mutual effect of screw/propellers

    NASA Technical Reports Server (NTRS)

    Lepilkin, A. M.

    1986-01-01

    A vortex theory of screw/propellers with circulation varying along the blade and with its azimuth is proposed; the problem is formulated and the circulation is expanded in a Fourier series. Equations are given for the inductive velocities in space for screws, including those with an infinitely large number of blades, and for the expansion of the inductive velocity by blade azimuth of a second screw. Multiparameter improper integrals are expressed as combinations of elliptical integrals and elementary functions, and it is shown how to reduce elliptical integrals of the third kind with a complex parameter to integrals with a real parameter.

  5. Radiation dosimetry and biophysical models of space radiation effects

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Wu, Honglu; Shavers, Mark R.; George, Kerry

    2003-01-01

    Estimating the biological risks from space radiation remains a difficult problem because of the many radiation types including protons, heavy ions, and secondary neutrons, and the absence of epidemiology data for these radiation types. Developing useful biophysical parameters or models that relate energy deposition by space particles to the probabilities of biological outcomes is a complex problem. Physical measurements of space radiation include the absorbed dose, dose equivalent, and linear energy transfer (LET) spectra. In contrast to conventional dosimetric methods, models of radiation track structure provide descriptions of energy deposition events in biomolecules, cells, or tissues, which can be used to develop biophysical models of radiation risks. In this paper, we address the biophysical description of heavy particle tracks in the context of the interpretation of both space radiation dosimetry and radiobiology data, which may provide insights into new approaches to these problems.

  6. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.

  7. Charged fixed point in the Ginzburg-Landau superconductor and the role of the Ginzburg parameter κ

    NASA Astrophysics Data System (ADS)

    Kleinert, Hagen; Nogueira, Flavio S.

    2003-02-01

    We present a semi-perturbative approach which yields an infrared-stable fixed point in the Ginzburg-Landau theory for N=2, where N/2 is the number of complex components. The calculations are done in d=3 dimensions and below Tc, where the renormalization group functions can be expressed directly as functions of the Ginzburg parameter κ, which is the ratio between the two fundamental scales of the problem, the penetration depth λ and the correlation length ξ. We find a charged fixed point for κ > 1/√2, that is, in the type II regime, where Δκ ≡ κ − 1/√2 is shown to be a natural expansion parameter. This parameter controls a momentum-space instability in the two-point correlation function of the order field. This instability appears at a non-zero wave vector p0 whose magnitude scales like Δκ^β̄, with a critical exponent β̄ = 1/2 in the one-loop approximation, a behavior known from magnetic systems with a Lifshitz point in the phase diagram. This momentum-space instability is argued to be the origin of the negative η exponent of the order field.

  8. Langmuir probe measurements aboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Kirov, B.; Asenovski, S.; Bachvarov, D.; Boneva, A.; Grushin, V.; Georgieva, K.; Klimov, S. I.

    2016-12-01

    In the current work we describe the Langmuir Probe (LP) and its operation on board the International Space Station. This instrument is part of the scientific complex "Obstanovka". The main goal of the complex is to establish, on the one hand, how such a big body as the International Space Station affects the ambient plasma and, on the other, how Space Weather factors influence the Station. The LP was designed and developed at BAS-SRTI. With this instrument we measure the thermal plasma parameters (electron temperature Te, electron and ion concentrations Ne and Ni) and also the potential at the Station's surface. The instrument is positioned at around 1.5 meters from the surface of the Station, on the Russian module "Zvezda", located at the farthermost point of the Space Station with respect to the velocity vector. The Multi-Purpose Laboratory Module (MLM) provides additional shielding for our instrument from the oncoming plasma flow (with respect to the velocity vector). Measurements show that in this area the plasma concentration is two orders of magnitude lower in comparison with the unperturbed areas. The surface potential fluctuates between −3 and −25 volts with respect to the ambient plasma. Fast upsurges in the surface potential are detected when passing over the twilight zone and the Equatorial anomaly.
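
    The standard retardation-region analysis behind such electron-temperature measurements (a textbook sketch, not the flight software) fits the slope of ln(I) versus bias voltage; the temperature and sweep values below are synthetic:

```python
import numpy as np

# In the electron-retardation region the probe current follows
# I = I0 * exp((V - Vp) / Te[eV]), so Te is the inverse slope of ln(I) vs V.
Te_true = 0.2                        # eV (a few thousand kelvin, synthetic)
V = np.linspace(-1.0, -0.2, 50)      # bias sweep below plasma potential, volts
I = 1e-6 * np.exp(V / Te_true)       # ideal retardation-region current, amps

slope, _intercept = np.polyfit(V, np.log(I), 1)
Te_est = 1.0 / slope
print(round(Te_est, 3))              # 0.2
```

    Real sweeps require selecting the exponential region and subtracting the ion current before the fit; this sketch skips both steps.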

  9. Interdependent figure-of-merit software development

    NASA Technical Reports Server (NTRS)

    Ramohalli, K.; Kirsch, T.

    1989-01-01

    This program was undertaken in order to understand the complex nature of interdependent performance in space missions. As the first step in a planned sequence, a spreadsheet program was developed to evaluate different fuel/oxidizer combinations for a specific Martian mission. This program is to be linked with output obtained using sophisticated software produced by Gordon and McBride. The programming to date makes use of 11 independent parameters. Optimization is essential when faced with the enormous magnitude of the costs, risks, and benefits involved in space exploration. A system of weights needs to be devised on which to measure the options. The goal was to devise a Figure of Merit (FoM) by which different choices can be presented and made. The plan was to model typical missions to Mars, identify the parameters, and vary them until the best combination is found. Initially, most of the focus was placed on propellant selection.
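
    A weighted figure of merit of the kind described can be sketched in a few lines; every option name, criterion, weight, and score below is invented for illustration and is not from the original spreadsheet:

```python
# Hypothetical FoM: each propellant option is scored on normalized criteria
# and ranked by a weighted sum. Weights and scores are made-up examples.
weights = {"specific_impulse": 0.5, "storability": 0.3, "cost": 0.2}
options = {
    "LOX/LH2": {"specific_impulse": 1.0, "storability": 0.2, "cost": 0.4},
    "NTO/MMH": {"specific_impulse": 0.7, "storability": 0.9, "cost": 0.6},
}
fom = {name: sum(weights[k] * s[k] for k in weights) for name, s in options.items()}
best = max(fom, key=fom.get)
print(best, round(fom[best], 2))   # NTO/MMH 0.74
```

    The interesting design work lies in choosing the weights, which is exactly the interdependence problem the abstract describes.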

  10. The effect of structural design parameters on FPGA-based feed-forward space-time trellis coding-orthogonal frequency division multiplexing channel encoders

    NASA Astrophysics Data System (ADS)

    Passas, Georgios; Freear, Steven; Fawcett, Darren

    2010-08-01

    Orthogonal frequency division multiplexing (OFDM)-based feed-forward space-time trellis code (FFSTTC) encoders can be synthesised as very high speed integrated circuit hardware description language (VHDL) designs. Evaluation of their FPGA implementation can lead to conclusions that help a designer to decide the optimum implementation, given the encoder structural parameters. VLSI architectures based on 1-bit multipliers and look-up tables (LUTs) are compared in terms of FPGA slices and block RAMs (area), as well as in terms of minimum clock period (speed). Area and speed graphs versus encoder memory order are provided for quadrature phase shift keying (QPSK) and 8 phase shift keying (8-PSK) modulation and two transmit antennas, revealing best implementation under these conditions. The effect of number of modulation bits and transmit antennas on the encoder implementation complexity is also investigated.

  11. Illusion optics: Optically transforming the nature and the location of electromagnetic emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Jianjia; Tichit, Paul-Henri; Burokur, Shah Nawaz, E-mail: shah-nawaz.burokur@u-psud.fr

    Complex electromagnetic structures can be designed by using the powerful concept of transformation electromagnetics. In this study, we define a spatial coordinate transformation that shows the possibility of designing a device capable of producing an illusion on an antenna radiation pattern. Indeed, by compressing the space containing a radiating element, we show that it is able to change the radiation pattern and to make the radiation location appear outside the latter space. Both continuous and discretized models with calculated electromagnetic parameter values are presented. A reduction of the electromagnetic material parameters is also proposed for a possible physical fabrication of the device with achievable values of permittivity and permeability that can be obtained from existing well-known metamaterials. Following that, the design of the proposed antenna using a layered metamaterial is presented. Full wave numerical simulations using the Finite Element Method are performed to demonstrate the performance of such a device.

  12. Direct adaptive control of manipulators in Cartesian space

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    A new adaptive-control scheme for direct control of a manipulator end effector to achieve trajectory tracking in Cartesian space is developed in this article. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of adaptive feedforward control and the inclusion of the auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for on-line implementation with high sampling rates. The control scheme is applied to a two-link manipulator for illustration.
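
    The flavor of such a direct scheme can be sketched for a scalar plant; this is a generic model-reference adaptive law with illustrative gains, not the article's Cartesian-space controller:

```python
import numpy as np

# Plant xdot = a*x + u with 'a' unknown; reference model xmdot = -2*xm + 2*r;
# control u = th1*r + th2*x with gains adapted directly from the
# model-following error e = x - xm (no estimate of 'a' is ever formed).
dt, gamma, a = 0.001, 20.0, 1.0
x = xm = th1 = th2 = 0.0
err = []
for step in range(200000):                          # 200 s of simulated time
    r = 1.0 if (step * dt) % 2.0 < 1.0 else -1.0    # square-wave reference
    u = th1 * r + th2 * x                           # feedforward + feedback
    e = x - xm
    x += dt * (a * x + u)                           # unknown plant
    xm += dt * (-2.0 * xm + 2.0 * r)                # reference model
    th1 += dt * (-gamma * e * r)                    # direct adaptation laws
    th2 += dt * (-gamma * e * x)
    err.append(abs(e))

early, late = np.mean(err[:20000]), np.mean(err[-20000:])
print(late < early / 5)   # tracking improves as the gains adapt
```

    The same structure generalizes to the multivariable Cartesian-space case, with vector gains in place of th1 and th2.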

  13. Space technology test facilities at the NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Gross, Anthony R.; Rodrigues, Annette T.

    1990-01-01

    The major space research and technology test facilities at the NASA Ames Research Center are divided into five categories: General Purpose, Life Support, Computer-Based Simulation, High Energy, and Space Exploration Test Facilities. The paper discusses selected facilities within each of the five categories and some of the major programs in which these facilities have been involved. Special attention is given to the 20-G Man-Rated Centrifuge, the Human Research Facility, the Plant Crop Growth Facility, the Numerical Aerodynamic Simulation Facility, the Arc-Jet Complex and Hypersonic Test Facility, the Infrared Detector and Cryogenic Test Facility, and the Mars Wind Tunnel. Each facility is described along with its objectives, test parameter ranges, and major current programs and applications.

  14. Ion implantation effects in 'cosmic' dust grains

    NASA Technical Reports Server (NTRS)

    Bibring, J. P.; Langevin, Y.; Maurette, M.; Meunier, R.; Jouffrey, B.; Jouret, C.

    1974-01-01

    Cosmic dust grains, whatever their origin may be, have probably suffered a complex sequence of events including exposure to high doses of low-energy nuclear particles and cycles of turbulent motions. High-voltage electron microscope observations of micron-sized grains either naturally exposed to space environmental parameters on the lunar surface or artificially subjected to space simulated conditions strongly suggest that such events could drastically modify the mineralogical composition of the grains and considerably ease their aggregation during collisions at low speeds. Furthermore, combined mass spectrometer and ionic analyzer studies show that small carbon compounds can be both synthesized during the implantation of a mixture of low-energy D, C, N ions in various solids and released in space by ion sputtering.

  15. Synthesis and spectral characterization of mono- and binuclear copper(II) complexes derived from 2-benzoylpyridine-N4-methyl-3-thiosemicarbazone: Crystal structure of a novel sulfur bridged copper(II) box-dimer

    NASA Astrophysics Data System (ADS)

    Jayakumar, K.; Sithambaresan, M.; Aiswarya, N.; Kurup, M. R. Prathapachandra

    2015-03-01

    Mononuclear and binuclear copper(II) complexes of 2-benzoylpyridine-N4-methyl thiosemicarbazone (HL) were prepared and characterized by a variety of spectroscopic techniques. Structural evidence for the novel sulfur-bridged copper(II) iodo binuclear complex was obtained by single crystal X-ray diffraction analysis. The complex [Cu2L2I2], a non-centrosymmetric box dimer, crystallizes in the monoclinic C2/c space group and was found to have distorted square pyramidal geometry (Addison parameter, τ = 0.238), with the square basal plane occupied by the thiosemicarbazone moiety and an iodine atom, whereas the sulfur atom from the other coordinated thiosemicarbazone moiety occupies the apical position. This is the first crystallographically studied system having non-centrosymmetric entities bridged via thiolate S atoms with a Cu(II)–I bond. The tridentate thiosemicarbazone coordinates in its monodeprotonated thionic tautomeric form in all complexes except the sulfato complex, [Cu(HL)(SO4)]·H2O (1), where it binds to the metal centre in neutral form. The magnetic moment values and the EPR spectral studies reflect the binuclearity of some of the complexes. The spin Hamiltonian and bonding parameters are calculated based on EPR studies. In all the complexes g∥ > g⊥ > 2.0023, and the g values in frozen DMF are consistent with a d(x²−y²) ground state. The thermal stabilities of some of the complexes were also determined.
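
    The Addison parameter quoted for [Cu2L2I2] is computed from the two largest coordination angles of a five-coordinate centre: τ = (β − α)/60, with τ = 0 for ideal square pyramidal and τ = 1 for ideal trigonal bipyramidal geometry. The function and example angles below are a generic sketch, not the paper's data:

```python
def addison_tau(beta, alpha):
    """Addison tau for a five-coordinate centre; beta >= alpha are the two
    largest coordination angles, in degrees."""
    return (beta - alpha) / 60.0

print(addison_tau(180.0, 180.0))  # 0.0 -> ideal square pyramid
print(addison_tau(180.0, 120.0))  # 1.0 -> ideal trigonal bipyramid
```

    The reported τ = 0.238 thus corresponds to a geometry much closer to square pyramidal than to trigonal bipyramidal.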

  16. A delay differential model of ENSO variability: parametric instability and the distribution of extremes

    NASA Astrophysics Data System (ADS)

    Zaliapin, I.; Ghil, M.; Thompson, S.

    2007-12-01

    We consider a Delay Differential Equation (DDE) model for El Niño-Southern Oscillation (ENSO) variability. The model combines two key mechanisms that participate in ENSO dynamics: delayed negative feedback and seasonal forcing. Descriptive and metric stability analyses of the model are performed in the complete 3D space of its physically relevant parameters. The existence of two regimes, stable and unstable, is reported. The domains of the regimes are separated by a sharp neutral curve in the parameter space. The detailed structure of the neutral curve becomes very complicated (possibly fractal), and individual trajectories within the unstable region become highly complex (possibly chaotic), as the atmosphere-ocean coupling increases. In the unstable regime, spontaneous transitions in the mean "temperature" (i.e., thermocline depth), period, and extreme annual values occur for purely periodic, seasonal forcing. This indicates (via the continuous dependence theorem) the existence of numerous unstable solutions responsible for the complex dynamics of the system. In the stable regime, only periodic solutions are found. Our results illustrate the roles of the distinct parameters of ENSO variability, such as the strength of seasonal forcing vs. atmosphere-ocean coupling and the propagation period of oceanic waves across the Tropical Pacific. The model reproduces, among other phenomena, the Devil's bleachers (caused by period locking) documented in other ENSO models, such as nonlinear PDEs and GCMs, as well as in certain observations. We expect such behavior in much more detailed and realistic models, where it is harder to describe its causes as completely.
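
    A toy Euler integration shows the two ingredients, delayed negative feedback plus periodic forcing, in their simplest linear form; the coefficients are illustrative and chosen inside the stable regime (b·τ = 1 < π/2 for the unforced delay equation), so the response stays bounded:

```python
import math

# dh/dt = -b*h(t - tau) + a*cos(2*pi*t): delayed negative feedback plus
# seasonal-type forcing, integrated by forward Euler with a history buffer.
dt, tau, b, a = 0.01, 0.5, 2.0, 1.0
lag = int(tau / dt)
h = [0.0] * (lag + 1)                        # constant initial history
for step in range(20000):                    # 200 time units
    h_delayed = h[-lag - 1]                  # h(t - tau) from the buffer
    h.append(h[-1] + dt * (-b * h_delayed + a * math.cos(2 * math.pi * step * dt)))

tail = [abs(v) for v in h[-1000:]]
print(max(tail) < 10.0)                      # bounded response: stable regime
```

    Increasing b·τ past the neutral boundary of the unforced equation is the simplest analogue of crossing the neutral curve described in the abstract.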

  17. The Influence of Sound Cues on the Maintenance of Temporal Organization in the Sprague-Dawley Rat

    NASA Technical Reports Server (NTRS)

    Winget, C. M.; Moeller, K. A.; Holley, D. C.; Souza, Kenneth A. (Technical Monitor)

    1994-01-01

    Temporal organization is a fundamental property of living matter. From single cells to complex animals including man, most physiological systems undergo daily periodic changes in concert with environmental cues (e.g., light, temperature, etc.). It is known that pulsed environmental synchronizers, or zeitgebers (e.g. light), can modify rhythm parameters. Rhythm stability is a necessary requirement for most animal experiments. The extent to which sound can influence the circadian system of laboratory rats is poorly understood. This has implications for animal habitats in the novel environments of the Space Laboratory or Space Station. A series of three white noise (88 ± 0.82 dB) zeitgeber experiments were conducted (n = 6/experiment). The sound cue was introduced in the circadian free-running phase (DD-NQ) and, in one additional case, sound was added to the usual photoperiod (12L:12D) to determine masking effects. Circadian rhythm parameters of drinking frequency, feeding frequency, and gross locomotor activity were continuously monitored. Data analysis for these studies included macroscopic and microscopic methods. Raster plots, to visually detect entrainment versus free-running period, were plotted for each animal, for all three parameters, during all sound perturbations tested. These data were processed through a series of detrending (robust locally weighted regression) and complex demodulation analyses. In summary, these findings show that periodic "white" noise "influences" the rat's circadian system but does not "entrain" the feeding, drinking or locomotor activity rhythms.

  18. Sequential Injection Analysis for Optimization of Molecular Biology Reactions

    PubMed Central

    Allen, Peter B.; Ellington, Andrew D.

    2011-01-01

    In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059

  19. Trajectory-probed instability and statistics of desynchronization events in coupled chaotic systems

    NASA Astrophysics Data System (ADS)

    de Oliveira, Gilson F.; Chevrollier, Martine; Passerat de Silans, Thierry; Oriá, Marcos; de Souza Cavalcante, Hugo L. D.

    2015-11-01

    Complex systems, such as financial markets, earthquakes, and neurological networks, exhibit extreme events whose mechanisms of formation are still not completely understood. These mechanisms may be identified and better studied in simpler systems with dynamical features similar to the ones encountered in the complex system of interest. For instance, sudden and brief departures from the synchronized state observed in coupled chaotic systems were shown to display non-normal statistical distributions similar to events observed in the complex systems cited above. The currently accepted hypothesis is that these desynchronization events are influenced by the presence of unstable object(s) in the phase space of the system. Here, we present further evidence that the occurrence of large events is triggered by the visitation of the system's phase-space trajectory to the vicinity of these unstable objects. In the system studied here, this visitation is controlled by a single parameter, and we exploit this feature to observe the effect of the visitation rate on the overall instability of the synchronized state. We find that the probability of escapes from the synchronized state, and the size of those desynchronization events, are enhanced in attractors whose shapes permit the chaotic trajectories to approach the region of strong instability. This result shows that the occurrence of large events requires not only a large local instability to amplify noise, or to amplify the effect of parameter mismatch between the coupled subsystems, but also that the trajectories of the system wander close to this local instability.
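
    The synchronized state in question can be illustrated in the simplest possible setting, two symmetrically coupled chaotic logistic maps; parameters are illustrative, and the paper's system is different:

```python
# Strong diffusive coupling collapses the two chaotic trajectories onto the
# synchronization manifold x = y; with the coupling switched off they wander
# apart. Desynchronization statistics are studied around such states.
f = lambda x: 4.0 * x * (1.0 - x)          # chaotic logistic map

def max_sync_error(c, x, y, n=200, tail=50):
    """Largest |x - y| over the last `tail` of n coupled iterations."""
    errs = []
    for i in range(n):
        x, y = (1 - c) * f(x) + c * f(y), (1 - c) * f(y) + c * f(x)
        if i >= n - tail:
            errs.append(abs(x - y))
    return max(errs)

print(max_sync_error(0.5, 0.3, 0.6) == 0.0)   # True: c = 0.5 syncs in one step
print(max_sync_error(0.0, 0.3, 0.6) > 1e-3)   # True: uncoupled maps stay apart
```

    At intermediate coupling, near the stability boundary of the x = y manifold, the trajectory makes the brief excursions away from synchrony whose statistics the abstract discusses.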

  20. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  1. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help in identifying the most-influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.
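
    The Elementary Effect screening named in step i) can be sketched on a toy two-parameter model; the model and sampling here are invented and stand in for the paper's laser-drilling metamodel:

```python
import numpy as np

def model(x):
    """Toy surrogate: x0 is influential, x1 contributes weakly on [0, 1]."""
    return 3.0 * x[0] + 0.1 * x[1] ** 2

# Morris-style screening: average |elementary effect| over random base points.
rng = np.random.default_rng(2)
delta, n_traj = 0.1, 50
mu_star = np.zeros(2)
for _ in range(n_traj):
    x = rng.uniform(0.0, 1.0 - delta, size=2)
    for i in range(2):
        dx = np.zeros(2)
        dx[i] = delta                    # perturb one input at a time
        mu_star[i] += abs(model(x + dx) - model(x)) / delta
mu_star /= n_traj
print(np.round(mu_star, 2))   # x0 screened as influential (~3), x1 weak (~0.1)
```

    Parameters screened out here would then be fixed before the more expensive Sobol variance decomposition of step ii).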

  2. A Review of the Scattering-Parameter Extraction Method with Clarification of Ambiguity Issues in Relation to Metamaterial Homogenization

    NASA Astrophysics Data System (ADS)

    Arslanagic, S.; Hansen, T. V.; Mortensen, N. A.; Gregersen, A. H.; Sigmund, O.; Ziolkowski, R. W.; Breinbjerg, O.

    2013-04-01

    The scattering parameter extraction method of metamaterial homogenization is reviewed to show that the only ambiguity is the one related to the choice of the branch of the complex logarithmic function (or the complex inverse cosine function), whereas it has no ambiguity for the sign of the wave number and intrinsic impedance. While the method indeed yields two signs of the intrinsic impedance, and thus the wave number, the signs are dependent, and moreover, both sign combinations lead to the same permittivity and permeability, and are thus permissible. This observation is in distinct contrast to a number of statements in the literature where the correct sign of the intrinsic impedance and wave number, resulting from the scattering parameter method, is chosen by imposing additional physical requirements such as passivity. The scattering parameter method is reviewed through an investigation of a uniform plane wave normally incident on a planar slab in free-space, and the severity of the branch ambiguity is illustrated through simulations of a known metamaterial realization. Several approaches for proper branch selection are reviewed and their suitability to metamaterial samples is discussed.
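
    The sign-dependence claim can be checked numerically with the standard free-space slab retrieval formulas; the slab values below are made up and chosen so the principal branch of the complex logarithm is the correct one:

```python
import numpy as np

# Forward model for a slab in free space, then retrieval with both impedance
# signs: flipping the sign of z flips n with it, leaving eps and mu unchanged.
n0, z0 = 2.0 + 0.1j, 0.5 + 0.05j           # "true" index and impedance
k0, d = 2 * np.pi / 0.03, 0.002            # free-space wavenumber, thickness

R = (z0 - 1) / (z0 + 1)                    # half-space reflection coefficient
P = np.exp(1j * n0 * k0 * d)               # propagation factor through slab
S11 = R * (1 - P**2) / (1 - R**2 * P**2)
S21 = (1 - R**2) * P / (1 - R**2 * P**2)

results = []
for sign in (+1, -1):                      # both signs of the impedance
    z = sign * np.sqrt(((1 + S11)**2 - S21**2) / ((1 - S11)**2 - S21**2))
    P_est = S21 / (1 - S11 * (z - 1) / (z + 1))
    n = np.log(P_est) / (1j * k0 * d)      # principal branch of the log
    results.append((n / z, n * z))         # (permittivity, permeability)

(eps1, mu1), (eps2, mu2) = results
print(np.allclose([eps1, mu1], [eps2, mu2]))   # True: same material parameters
```

    For thicker slabs the branch of the logarithm must be tracked explicitly, which is exactly the remaining ambiguity the abstract identifies.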

  3. Methods for performing fast discrete curvelet transforms of data

    DOEpatents

    Candes, Emmanuel; Donoho, David; Demanet, Laurent

    2010-11-23

    Fast digital implementations of the second generation curvelet transform for use in data processing are disclosed. One such digital transformation is based on unequally-spaced fast Fourier transforms (USFFT) while another is based on the wrapping of specially selected Fourier samples. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. Both implementations are fast in the sense that they run in about O(n^2 log n) flops for n-by-n Cartesian arrays, or about O(N log N) flops for Cartesian arrays of size N = n^3; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity.

  4. Field-theoretic simulations of block copolymer nanocomposites in a constant interfacial tension ensemble.

    PubMed

    Koski, Jason P; Riggleman, Robert A

    2017-04-28

    Block copolymers, due to their ability to self-assemble into periodic structures with long range order, are appealing candidates to control the ordering of functionalized nanoparticles where it is well-accepted that the spatial distribution of nanoparticles in a polymer matrix dictates the resulting material properties. The large parameter space associated with block copolymer nanocomposites makes theory and simulation tools appealing to guide experiments and effectively isolate parameters of interest. We demonstrate a method for performing field-theoretic simulations in a constant volume-constant interfacial tension ensemble (nVγT) that enables the determination of the equilibrium properties of block copolymer nanocomposites, including when the composites are placed under tensile or compressive loads. Our approach is compatible with the complex Langevin simulation framework, which allows us to go beyond the mean-field approximation. We validate our approach by comparing our nVγT approach with free energy calculations to determine the ideal domain spacing and modulus of a symmetric block copolymer melt. We analyze the effect of numerical and thermodynamic parameters on the efficiency of the nVγT ensemble and subsequently use our method to investigate the ideal domain spacing, modulus, and nanoparticle distribution of a lamellar forming block copolymer nanocomposite. We find that the nanoparticle distribution is directly linked to the resultant domain spacing and is dependent on polymer chain density, nanoparticle size, and nanoparticle chemistry. Furthermore, placing the system under tension or compression can qualitatively alter the nanoparticle distribution within the block copolymer.

  5. Non-Cartesian MRI Reconstruction With Automatic Regularization Via Monte-Carlo SURE

    PubMed Central

    Weller, Daniel S.; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.

    2013-01-01

Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization, however, requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate (based on the principle of Stein’s unbiased risk estimate—SURE) of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the ℓ1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error (MSE) optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction. PMID:23591478
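The core Monte-Carlo trick — estimating the divergence of a black-box estimator from a single random probe — can be sketched in a much simpler setting than MRI reconstruction. The toy below (real-valued soft-threshold denoising of a sparse signal; all data and parameters are illustrative, not from the paper) sweeps a threshold, computes the Monte-Carlo SURE value for each, and compares against the true (oracle) squared error:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 20000, 1.0
x = np.zeros(n); x[:2000] = 5.0                   # sparse ground-truth signal
y = x + sigma * rng.standard_normal(n)            # noisy observation

def soft(y, lam):
    """Soft-threshold denoiser: a simple stand-in for a reconstruction algorithm."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def mc_sure(y, lam, sigma, rng, delta=1e-4):
    """SURE = ||f(y)-y||^2 + 2 sigma^2 div f(y) - n sigma^2, with the divergence
    estimated from one random probe b (treats the denoiser as a black box)."""
    b = rng.standard_normal(y.size)
    div = b @ (soft(y + delta * b, lam) - soft(y, lam)) / delta
    return np.sum((soft(y, lam) - y) ** 2) + 2 * sigma**2 * div - y.size * sigma**2

lams = np.linspace(0.5, 4.0, 15)
sure = np.array([mc_sure(y, l, sigma, rng) for l in lams])
mse  = np.array([np.sum((soft(y, l) - x) ** 2) for l in lams])   # oracle, for comparison
lam_best = lams[np.argmin(sure)]
```

The SURE curve tracks the oracle error without ever seeing the true signal, which is the property the paper exploits to tune regularization parameters; the paper's contribution extends this to complex-valued images and weighted k-space error measures.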

  6. Design of bearings for rotor systems based on stability

    NASA Technical Reports Server (NTRS)

    Dhar, D.; Barrett, L. E.; Knospe, C. R.

    1992-01-01

Design of rotor systems incorporating stable behavior is of great importance to manufacturers of high speed centrifugal machinery since destabilizing mechanisms (from bearings, seals, aerodynamic cross coupling, noncolocation effects from magnetic bearings, etc.) increase with machine efficiency and power density. A new method of designing bearing parameters (stiffness and damping coefficients or coefficients of the controller transfer function) is proposed, based on a numerical search in the parameter space. The feedback control law is based on a decentralized low order controller structure, and the various design requirements are specified as constraints in the specification and parameter spaces. An algorithm is proposed for solving the problem as a sequence of constrained 'minimax' problems, moving more and more eigenvalues into an acceptable region in the complex plane. The algorithm uses the method of feasible directions to solve the nonlinear constrained minimization problem at each stage. This methodology emphasizes the designer's interaction with the algorithm to generate acceptable designs by relaxing various constraints and changing initial guesses interactively. A design oriented user interface is proposed to facilitate the interaction.
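The idea of searching bearing-parameter space to push eigenvalues into an acceptable region can be sketched on a toy rotor (this model and all numbers are illustrative, not the paper's): a single mass with isotropic bearing stiffness k, damping c, and a destabilizing cross-coupled stiffness q. A one-dimensional grid search over c stands in for the paper's constrained minimax algorithm:

```python
import numpy as np

# Toy rotor model (illustrative): cross-coupled stiffness q destabilizes.
m, k, q = 1.0, 100.0, 20.0

def max_real_eig(c):
    """Largest real part of the closed-loop eigenvalues for bearing damping c."""
    K = np.array([[k, q], [-q, k]])           # stiffness with cross coupling
    C = c * np.eye(2)
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-K / m, -C / m]])          # first-order state matrix
    return np.linalg.eigvals(A).real.max()

# Numerical search in the (one-dimensional) bearing-parameter space:
grid = np.linspace(0.0, 10.0, 101)
margins = np.array([max_real_eig(c) for c in grid])
c_best = grid[np.argmin(margins)]             # damping with the best stability margin
```

With no damping the cross coupling drives an eigenvalue into the right half-plane; the search finds a damping value that moves every eigenvalue into the stable (left) half-plane, which is a scalar caricature of the eigenvalue-placement constraints described above.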

  7. The Gamma-Ray Burst ToolSHED is Open for Business

    NASA Astrophysics Data System (ADS)

    Giblin, Timothy W.; Hakkila, Jon; Haglin, David J.; Roiger, Richard J.

    2004-09-01

    The GRB ToolSHED, a Gamma-Ray Burst SHell for Expeditions in Data-Mining, is now online and available via a web browser to all in the scientific community. The ToolSHED is an online web utility that contains pre-processed burst attributes of the BATSE catalog and a suite of induction-based machine learning and statistical tools for classification and cluster analysis. Users create their own login account and study burst properties within user-defined multi-dimensional parameter spaces. Although new GRB attributes are periodically added to the database for user selection, the ToolSHED has a feature that allows users to upload their own burst attributes (e.g. spectral parameters, etc.) so that additional parameter spaces can be explored. A data visualization feature using GNUplot and web-based IDL has also been implemented to provide interactive plotting of user-selected session output. In an era in which GRB observations and attributes are becoming increasingly more complex, a utility such as the GRB ToolSHED may play an important role in deciphering GRB classes and understanding intrinsic burst properties.
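The kind of cluster analysis the ToolSHED offers over user-defined attribute spaces can be illustrated with a minimal two-means clustering of synthetic burst attributes (the data below are invented for illustration, not BATSE measurements; the classic short/hard versus long/soft duration-hardness plane is used as the example space):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic burst attributes: log10 duration (s) and spectral hardness ratio
# for two illustrative populations (short/hard and long/soft).
short_hard = np.column_stack([rng.normal(-0.5, 0.3, 200), rng.normal(5.0, 0.8, 200)])
long_soft  = np.column_stack([rng.normal( 1.5, 0.4, 300), rng.normal(2.0, 0.6, 300)])
X = np.vstack([short_hard, long_soft])

def kmeans2(X, iters=20):
    """Plain Lloyd's algorithm for two clusters, deterministic initialization."""
    centers = X[[np.argmin(X[:, 0]), np.argmax(X[:, 0])]].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return centers, labels

centers, labels = kmeans2(X)
```

With well-separated populations the recovered centers land near the generating means; real burst attribute spaces are higher dimensional and far less cleanly separated, which is why interactive exploration tools are useful.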

  8. Prospects for discovering the Higgs-like pseudo-Nambu-Goldstone boson of the classical scale symmetry

    NASA Astrophysics Data System (ADS)

    Farzinnia, Arsham

    2015-11-01

We examine the impact of the expected reach of the LHC and the XENON1T experiments on the parameter space of the minimal classically scale invariant extension of the standard model (SM), where all the mass scales are induced dynamically by means of the Coleman-Weinberg mechanism. In this framework, the SM content is enlarged by the addition of one complex gauge-singlet scalar with a scale invariant and CP-symmetric potential. The massive pseudoscalar component, protected by the CP symmetry, forms a viable dark matter candidate, and three flavors of the right-handed Majorana neutrinos are included to account for the nonzero masses of the SM neutrinos via the seesaw mechanism. The projected constraints on the parameter space arise by applying the ATLAS heavy Higgs discovery prospects, with an integrated luminosity of 300 and 3000 fb⁻¹ at √s = 14 TeV, to the pseudo-Nambu-Goldstone boson of the (approximate) scale symmetry, as well as by utilizing the expected reach of the XENON1T direct detection experiment for the discovery of the pseudoscalar dark matter candidate. A null-signal discovery by these future experiments implies that vast regions of the model's parameter space can be thoroughly explored; the combined projections are expected to confine the mixing between the SM and the singlet sector to very small values while probing the viability of the TeV scale pseudoscalar's thermal relic abundance as the dominant dark matter component in the Universe. Furthermore, the vacuum stability and triviality requirements of the framework up to the Planck scale are studied, and the viable region of the parameter space is identified. The results are summarized in extensive exclusion plots, incorporating additionally the prior theoretical and experimental bounds for comparison.

  9. Constructing a polynomial whose nodal set is the three-twist knot 52

    NASA Astrophysics Data System (ADS)

    Dennis, Mark R.; Bode, Benjamin

    2017-06-01

    We describe a procedure that creates an explicit complex-valued polynomial function of three-dimensional space, whose nodal lines are the three-twist knot 52. The construction generalizes a similar approach for lemniscate knots: a braid representation is engineered from finite Fourier series and then considered as the nodal set of a certain complex polynomial which depends on an additional parameter. For sufficiently small values of this parameter, the nodal lines form the three-twist knot. Further mathematical properties of this map are explored, including the relationship of the phase critical points with the Morse-Novikov number, which is nonzero as this knot is not fibred. We also find analogous functions for other simple knots and links. The particular function we find, and the general procedure, should be useful for designing knotted fields of particular knot types in various physical systems.

  10. Deep Learning Methods for Improved Decoding of Linear Codes

    NASA Astrophysics Data System (ADS)

    Nachmani, Eliya; Marciano, Elad; Lugosch, Loren; Gross, Warren J.; Burshtein, David; Be'ery, Yair

    2018-02-01

The problem of low complexity, close to optimal, channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoders across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close to optimal decoder of short BCH codes.
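The min-sum baseline that the paper improves on can be sketched compactly for a tiny code (a (7,4) Hamming code rather than the BCH codes studied; the received LLRs below are invented for illustration). The neural variants attach learnable weights to exactly these message-passing updates:

```python
import numpy as np

# (7,4) Hamming code parity-check matrix (columns are 1..7 in binary).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def min_sum_decode(H, llr, max_iters=10):
    """Plain (unweighted) min-sum decoding; llr > 0 favours bit value 0."""
    m, n = H.shape
    M = np.zeros((m, n))                                 # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iters):
        V = llr[None, :] + M.sum(axis=0)[None, :] - M    # variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = V[i, idx]
            sgn = np.where(v >= 0, 1.0, -1.0)
            absv = np.abs(v)
            order = np.argsort(absv)
            m1, m2 = absv[order[0]], absv[order[1]]
            for t, j in enumerate(idx):
                mn = m2 if t == order[0] else m1         # min over the other edges
                M[i, j] = np.prod(sgn) * sgn[t] * mn     # sign product excluding edge t
        posterior = llr + M.sum(axis=0)
        hard = (posterior < 0).astype(int)
        if not np.any(H @ hard % 2):                     # all parity checks satisfied
            break
    return hard

# All-zero codeword sent; bit 2 received in error but with low confidence.
llr = np.full(7, 4.0)
llr[2] = -2.0
decoded = min_sum_decode(H, llr)
```

Here the hard decision on the raw channel LLRs is wrong in one position, and the min-sum iterations correct it; the paper's learned per-edge weights reduce the performance gap between this decoder and maximum-likelihood decoding.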

  11. Synthesis, characterization, crystal structure, DNA/BSA binding ability and antibacterial activity of asymmetric europium complex based on 1,10- phenanthroline

    NASA Astrophysics Data System (ADS)

    Alfi, Nafiseh; Khorasani-Motlagh, Mozhgan; Rezvani, Ali Reza; Noroozifar, Meissam; Molčanov, Krešimir

    2017-06-01

A heteroleptic europium coordination compound formulated as [Eu(phen)2(OH2)2(Cl)2](Cl)(H2O) (phen = 1,10-phenanthroline) has been synthesized and characterized by elemental analysis, FT-IR spectroscopy, and single-crystal X-ray diffraction. Crystal structure analysis reveals that the complex crystallizes in the orthorhombic system with space group Pca2₁. Electronic absorption and various emission methods have been used to investigate the binding of the europium(III) complex to fish salmon deoxyribonucleic acid (FS-DNA) and bovine serum albumin (BSA). Furthermore, the binding constants, binding sites and the corresponding thermodynamic parameters of the interaction with FS-DNA and BSA were calculated on the basis of the van't Hoff equation. The thermodynamic parameters reflect the exothermic nature of the binding process (ΔH°<0 and ΔS°<0). The experimental results indicate that [Eu(phen)2(OH2)2(Cl)2](Cl)(H2O) binds to FS-DNA in a non-intercalative mode, with groove binding the preferred mode. The complex also exhibits marked in vitro antibacterial activity against standard bacterial strains.
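The van't Hoff analysis used to obtain ΔH° and ΔS° from temperature-dependent binding constants is a straight-line fit of ln K against 1/T. The sketch below uses invented values chosen only to match the qualitative result reported (ΔH° < 0 and ΔS° < 0), not the paper's measurements:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Illustrative thermodynamic values (not the paper's): exothermic binding.
dH_true, dS_true = -40e3, -100.0          # J/mol and J/(mol K)

# van't Hoff equation: ln K = -dH/(R T) + dS/R
T = np.array([293.0, 298.0, 303.0, 310.0])
lnK = -dH_true / (R * T) + dS_true / R

# Linear fit of ln K vs 1/T: slope = -dH/R, intercept = dS/R.
slope, intercept = np.polyfit(1.0 / T, lnK, 1)
dH_fit = -slope * R
dS_fit = intercept * R
```

With real data the fitted ΔH° and ΔS° then give ΔG° = ΔH° − TΔS° at each temperature, from which the spontaneity and driving forces of the binding are assessed.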

  12. Real-time control for manufacturing space shuttle main engines: Work in progress

    NASA Technical Reports Server (NTRS)

    Ruokangas, Corinne C.

    1988-01-01

    During the manufacture of space-based assemblies such as Space Shuttle Main Engines, flexibility is required due to the high-cost and low-volume nature of the end products. Various systems have been developed pursuing the goal of adaptive, flexible manufacturing for several space applications, including an Advanced Robotic Welding System for the manufacture of complex components of the Space Shuttle Main Engines. The Advanced Robotic Welding System (AROWS) is an on-going joint effort, funded by NASA, between NASA/Marshall Space Flight Center, and two divisions of Rockwell International: Rocketdyne and the Science Center. AROWS includes two levels of flexible control of both motion and process parameters: Off-line programming using both geometric and weld-process data bases, and real-time control incorporating multiple sensors during weld execution. Both control systems were implemented using conventional hardware and software architectures. The feasibility of enhancing the real-time control system using the problem-solving architecture of Schemer is investigated and described.

  13. Mesoscale behavior study of collector aggregations in a wet dust scrubber.

    PubMed

    Li, Xiaochuan; Wu, Xiang; Hu, Haibin; Jiang, Shuguang; Wei, Tao; Wang, Dongxue

    2018-01-01

In order to address the bottleneck problem of low fine-particle removal efficiency in self-excited dust scrubbers, this paper focuses on the influence of the intermittent gas-liquid two-phase flow on the mesoscale behavior of collector aggregations. The latter is investigated by applying high-speed dynamic imaging to a self-excited dust scrubber experimental setup. Real-time monitoring of the dust removal process is provided to clarify its operating mechanism at the mesoscale level. The results show that particulate capture in a self-excited dust scrubber is provided by liquid droplets, liquid films/curtains, bubbles, and their aggregations. Complex spatial and temporal structures are intrinsic to each kind of collector morphology, and these are considered the major factors controlling the dust removal mechanism of self-excited dust scrubbers. For the specific gas-liquid two-phase flow parameters under study, the evolution patterns of particular collectors reflect intrinsic, intermittent, and complex characteristics of the temporal structure; as the operating parameters change, the morphology and spatial distribution of the collectors change in diverse ways. The intermittent initiation of the collectors and the cyclic formation and collapse of air holes provide time and space for fine dust to escape being trapped by the collectors. These mesoscale experimental data provide more insight into the factors reducing the dust removal efficiency of self-excited dust scrubbers.

  14. Normal contour error measurement on-machine and compensation method for polishing complex surface by MRF

    NASA Astrophysics Data System (ADS)

    Chen, Hua; Chen, Jihong; Wang, Baorui; Zheng, Yongcheng

    2016-10-01

The magnetorheological finishing (MRF) process, based on the dwell time method with constant normal spacing for flexible polishing, introduces normal contour errors when fine-polishing complex surfaces such as aspheric surfaces. Normal contour errors change the ribbon's shape and the consistency of the removal characteristics in MRF. A novel method is presented for measuring normal contour errors on the machining track while polishing complex surfaces, based on continuously scanning the normal spacing between the workpiece and a laser range finder. Because the normal contour errors are measured dynamically, the workpiece's clamping precision, the multi-axis machining NC program and the dynamic performance of the MRF machine can be verified and security-checked for the MRF process. A unit for measuring the normal contour errors of complex surfaces on-machine was designed. Using the measurement unit's results as feedback to adjust the parameters of the feed-forward control and the multi-axis machining, an optimized servo control method is presented to compensate the normal contour errors. An experiment polishing a 180 mm × 180 mm aspherical fused-silica workpiece by MRF was set up to validate the method. The results show that the normal contour error was kept below 10 μm, and the PV value of the polished surface accuracy improved from 0.95λ to 0.09λ under the same process parameters. The technology has been applied since 2014 in the PKC600-Q1 MRF machine developed by the China Academy of Engineering Physics and is used in national large-scale optical engineering projects for processing ultra-precision optical parts.

  15. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
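The Laplacian-based stability analysis described here can be sketched directly: for a reaction-diffusion system on a network, the homogeneous steady state is unstable to mode α exactly when J − λ_α·D has an eigenvalue with positive real part, where λ_α are the graph Laplacian eigenvalues and D the diagonal diffusion matrix. The example below uses a regular ring lattice (one of the regular cases the paper treats analytically) with an illustrative activator-inhibitor Jacobian, not the paper's specific kinetics:

```python
import numpy as np

# Illustrative activator-inhibitor Jacobian at a stable homogeneous steady
# state: tr(J) < 0 and det(J) > 0, so the well-mixed system is stable.
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])
du, dv = 1.0, 30.0                       # inhibitor diffuses much faster
D = np.diag([du, dv])

# Graph Laplacian of a regular ring lattice: N vertices, each linked to its
# nearest neighbour on both sides.
N = 50
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
L = np.diag(A.sum(axis=1)) - A
lams = np.linalg.eigvalsh(L)             # Laplacian eigenvalues, in [0, 4]

def mode_growth(lam):
    """Largest growth rate of the mode with Laplacian eigenvalue lam."""
    return np.linalg.eigvals(J - lam * D).real.max()

growth = np.array([mode_growth(l) for l in lams])
```

The λ = 0 mode (the homogeneous state) decays, while a band of nonzero Laplacian eigenvalues grows: a Turing instability whose location in the two-diffusion-coefficient parameter space depends on which eigenvalues the network architecture supplies, which is the dependence the paper maps out.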

  16. Ground Data System Analysis Tools to Track Flight System State Parameters for the Mars Science Laboratory (MSL) and Beyond

    NASA Technical Reports Server (NTRS)

    Allard, Dan; Deforrest, Lloyd

    2014-01-01

Flight software parameters give space mission operators fine-tuned control over flight system configurations, enabling rapid and dynamic changes to ongoing science activities in a much more flexible manner than can be accomplished with (otherwise broadly used) configuration file based approaches. The Mars Science Laboratory (MSL) rover, Curiosity, makes extensive use of parameters to support complex daily activities via commanded changes to those parameters in memory. However, as the loss of Mars Global Surveyor (MGS) in 2006 demonstrated, flight system management by parameters brings with it risks, including the possibility of losing track of the flight system configuration and the threat of invalid command executions. To mitigate this risk, a growing number of missions, including MSL and the Soil Moisture Active Passive (SMAP) mission, have funded efforts to implement parameter state tracking software tools and services. This paper discusses the engineering challenges and resulting software architecture of MSL's onboard parameter state tracking software and the road forward to make parameter management tools suitable for use on multiple missions.

  17. Aspects of job scheduling

    NASA Technical Reports Server (NTRS)

    Phillips, K.

    1976-01-01

    A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
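One classic combinatorial algorithm germane to this kind of facility scheduling is greedy interval partitioning, which computes the minimum number of facilities (or machines) needed so that overlapping jobs never share one. This is offered as a generic sketch of the combinatorial side of the problem, not the paper's specific model:

```python
import heapq

def min_machines(jobs):
    """Greedy interval partitioning over (start, end) jobs.

    Sort by start time and keep a min-heap of the times at which each open
    machine becomes free; reuse the earliest-free machine when possible.
    The answer equals the maximum number of simultaneously active jobs.
    """
    ends = []                                  # min-heap of busy-until times
    for start, end in sorted(jobs):
        if ends and ends[0] <= start:          # a machine has become free
            heapq.heapreplace(ends, end)       # reuse it for this job
        else:
            heapq.heappush(ends, end)          # otherwise open a new machine
    return len(ends)

jobs = [(0, 3), (1, 4), (2, 5), (4, 7), (5, 8)]
k = min_machines(jobs)                         # three jobs overlap at t = 2.5
```

Linear programming then enters when jobs carry costs, resource consumption, or objectives beyond feasibility, which is the combination of methods the model above employs.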

  18. Nanofiber Nerve Guide for Peripheral Nerve Repair and Regeneration

    DTIC Science & Technology

    2016-04-01

    faster regeneration and functional recovery. Peripheral nerve injury is a common complication of complex tissue trauma and often results in significant...having poor regeneration overall, the areas of regenerating nerve tissue could often be found in sections of the nerve guide where luminal spaces of...conducted in this Aim also provided important insight into the NGC design parameters necessary to allow for maximum nerve tissue ingrowth and regeneration

  19. Crystallization and preliminary X-ray diffraction analyses of the redox-controlled complex of terminal oxygenase and ferredoxin components in the Rieske nonhaem iron oxygenase carbazole 1,9a-dioxygenase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuzawa, Jun; Aikawa, Hiroki; Umeda, Takashi

    2014-09-25

    A crystal was obtained of the complex between reduced terminal oxygenase and oxidized ferredoxin components of carbazole 1,9a-dioxygenase. The crystal belonged to space group P2{sub 1} and diffracted to 2.25 Å resolution. The initial reaction in bacterial carbazole degradation is catalyzed by carbazole 1,9a-dioxygenase, which consists of terminal oxygenase (Oxy), ferredoxin (Fd) and ferredoxin reductase components. The electron-transfer complex between reduced Oxy and oxidized Fd was crystallized at 293 K using the hanging-drop vapour-diffusion method with PEG 3350 as the precipitant under anaerobic conditions. The crystal diffracted to a maximum resolution of 2.25 Å and belonged to space group P2{submore » 1}, with unit-cell parameters a = 97.3, b = 81.6, c = 116.2 Å, α = γ = 90, β = 100.1°. The V{sub M} value is 2.85 Å{sup 3} Da{sup −1}, indicating a solvent content of 56.8%.« less

  20. Expression, crystallization and preliminary crystallographic analysis of RNA-binding protein Hfq (YmaH) from Bacillus subtilis in complex with an RNA aptamer

    PubMed Central

    Baba, Seiki; Someya, Tatsuhiko; Kawai, Gota; Nakamura, Kouji; Kumasaka, Takashi

    2010-01-01

    The Hfq protein is a hexameric RNA-binding protein which regulates gene expression by binding to RNA under the influence of diverse environmental stresses. Its ring structure binds various types of RNA, including mRNA and sRNA. RNA-bound structures of Hfq from Escherichia coli and Staphylococcus aureus have been revealed to have poly(A) RNA at the distal site and U-rich RNA at the proximal site, respectively. Here, crystals of a complex of the Bacillus subtilis Hfq protein with an A/G-repeat 7-mer RNA (Hfq–RNA) that were prepared using the hanging-drop vapour-diffusion technique are reported. The type 1 Hfq–RNA crystals belonged to space group I422, with unit-cell parameters a = b = 123.70, c = 119.13 Å, while the type 2 Hfq–RNA crystals belonged to space group F222, with unit-cell parameters a = 91.92, b = 92.50, c = 114.92 Å. Diffraction data were collected to a resolution of 2.20 Å from both crystal forms. The hexameric structure of the Hfq protein was clearly shown by self-rotation analysis. PMID:20445260

  1. Implications of tristability in pattern-forming ecosystems

    NASA Astrophysics Data System (ADS)

    Zelnik, Yuval R.; Gandhi, Punit; Knobloch, Edgar; Meron, Ehud

    2018-03-01

    Many ecosystems show both self-organized spatial patterns and multistability of possible states. The combination of these two phenomena in different forms has a significant impact on the behavior of ecosystems in changing environments. One notable case is connected to tristability of two distinct uniform states together with patterned states, which has recently been found in model studies of dryland ecosystems. Using a simple model, we determine the extent of tristability in parameter space, explore its effects on the system dynamics, and consider its implications for state transitions or regime shifts. We analyze the bifurcation structure of model solutions that describe uniform states, periodic patterns, and hybrid states between the former two. We map out the parameter space where these states exist, and note how the different states interact with each other. We further focus on two special implications with ecological significance, breakdown of the snaking range and complex fronts. We find that the organization of the hybrid states within a homoclinic snaking structure breaks down as it meets a Maxwell point where simple fronts are stationary. We also discover a new series of complex fronts between the uniform states, each with its own velocity. We conclude with a brief discussion of the significance of these findings for the dynamics of regime shifts and their potential control.

  2. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both centralized and decentralized cases. Our target is to maximize the throughput of CR network under transmission power constraint by taking spectrum sensing errors into account. In centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretic analysis.
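The CMDP-as-LP formulation can be sketched on a deliberately tiny single-channel access problem (all numbers and the model itself are illustrative, far smaller than the paper's multi-channel setting): the decision variables are the occupancy measures ρ(s, a), the objective is expected throughput, and the average power budget becomes a linear inequality:

```python
import numpy as np
from scipy.optimize import linprog

# Toy model: channel state s in {busy, idle} evolves as a Markov chain
# independent of the action a in {wait, transmit}; transmitting on an idle
# channel earns reward 1, and every transmission costs one unit of power.
P_idle = {"busy": 0.2, "idle": 0.8}        # P(next state = idle | current state)
p_max = 0.5                                # average power budget

# Occupancy-measure LP over rho(s, a), variable order:
# (busy, wait), (busy, tx), (idle, wait), (idle, tx)
c = [0, 0, 0, -1]                          # maximize throughput rho(idle, tx)
A_eq = [
    [1, 1, 1, 1],                          # occupancy measure sums to one
    # stationarity of the idle state: rho(idle,.) = sum_s rho(s,.) P(idle|s)
    [-P_idle["busy"], -P_idle["busy"], 1 - P_idle["idle"], 1 - P_idle["idle"]],
]
b_eq = [1, 0]
A_ub = [[0, 1, 0, 1]]                      # average power <= p_max
b_ub = [p_max]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
throughput = -res.fun
```

Here the channel is idle 75% of the time, so without the constraint the policy "transmit whenever idle" would achieve 0.75; the power cap of 0.5 forces the optimal randomized policy to transmit in only two-thirds of idle slots, and the LP optimum is exactly 0.5. The paper's action-set reduction and state aggregation attack the exponential growth of this LP in realistic multi-channel problems.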

  3. A system performance throughput model applicable to advanced manned telescience systems

    NASA Technical Reports Server (NTRS)

    Haines, Richard F.

    1990-01-01

As automated space systems become more complex, autonomous, and opaque to the flight crew, it becomes increasingly difficult to determine whether the total system is performing as it should. Some of the complex and interrelated human performance measurement issues related to total system validation are addressed. An evaluative throughput model is presented which can be used to generate a human operator-related benchmark or figure of merit for a given system which involves humans at the input and output ends as well as other automated intelligent agents. The concept of sustained and accurate command/control data information transfer is introduced. The first two input parameters of the model involve nominal and off-nominal predicted events; the first of these calls for a detailed task analysis, while the second is for a contingency event assessment. The last two required input parameters involve actual (measured) events, namely human performance and continuous semi-automated system performance. An expression combining these four parameters was found using digital simulations and identical, representative, random data to yield the smallest variance.

  4. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  5. Operations analysis (study 2.1): Shuttle upper stage software requirements

    NASA Technical Reports Server (NTRS)

    Wolfe, R. R.

    1974-01-01

An investigation of software costs related to space shuttle upper stage operations, with emphasis on the additional costs attributable to space servicing, was conducted. The questions and problem areas include the following: (1) the key parameters involved in software costs; (2) historical data for extrapolation of future costs; (3) elements of the basic software development effort that are applicable to servicing functions; (4) the effect of multiple servicing on the complexity of the operation; and (5) whether recurring software costs are significant. The results address these questions and provide a foundation for estimating software costs based on the costs of similar programs and a series of empirical factors.

  6. Bone tissue phantoms for optical flowmeters at large interoptode spacing generated by 3D-stereolithography

    PubMed Central

    Binzoni, Tiziano; Torricelli, Alessandro; Giust, Remo; Sanguinetti, Bruno; Bernhard, Paul; Spinelli, Lorenzo

    2014-01-01

A bone tissue phantom prototype that allows testing of optical flowmeters operating at large interoptode spacings, such as laser-Doppler flowmetry or diffuse correlation spectroscopy, has been developed by the 3D-stereolithography technique. It has been demonstrated that complex tissue vascular systems of any geometrical shape can be produced. The absorption coefficient, reduced scattering coefficient and refractive index of the optical phantom have been measured to ensure that the optical parameters reasonably reproduce real human bone tissue in vivo. An experimental demonstration of a possible use of the optical phantom, utilizing a laser-Doppler flowmeter, is also presented. PMID:25136496

  7. Broadband impedance-matched electromagnetic structured ferrite composite in the megahertz range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parke, L.; Hibbins, A. P.; Sambles, J. R.

    2014-06-02

    A high refractive-index structured ferrite composite is designed to experimentally demonstrate broadband impedance matching to free-space. It consists of an array of ferrite cubes that are anisotropically spaced, thereby allowing for independent control of the effective complex permeability and permittivity. Despite having a refractive index of 9.5, the array gives less than 1% reflection and over 90% transmission of normally incident radiation up to 70 MHz for one of the orthogonal linear polarisations lying in a symmetry plane of the array. This result presents a route to the design of MHz-frequency ferrite composites with bespoke electromagnetic parameters for antenna miniaturisation.

  8. The acoustic environment of a sonoluminescing bubble

    NASA Astrophysics Data System (ADS)

    Holzfuss, Joachim; Rüggeberg, Matthias; Holt, R. Glynn

    2000-07-01

A bubble is levitated in water in a cylindrical resonator which is driven by ultrasound. It has been shown that in a certain region of parameter space the bubble emits light pulses (sonoluminescence). One of the properties observed is an enormous spatial stability that leaves the bubble "pinned" in space, allowing it to emit light with picosecond timing accuracy. We argue that the observed stability is due to interactions of the bubble with the resonator. A shock wave emitted at collapse time, together with a self-generated complex sound field, which is experimentally mapped with high resolution, is responsible for the observed effects.
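
    As a rough order-of-magnitude companion to the collapse-time argument, the classical Rayleigh collapse time t_c ≈ 0.915 R₀ √(ρ/Δp) estimates how fast an empty cavity of radius R₀ collapses under ambient overpressure Δp. The numbers below are illustrative, not values from this experiment:

    ```python
    import math

    def rayleigh_collapse_time(R0, rho=998.0, dp=101325.0):
        """Classical Rayleigh collapse time of an empty spherical cavity:
        t_c = 0.915 * R0 * sqrt(rho / dp).
        R0 in m, rho in kg/m^3, dp in Pa; returns seconds."""
        return 0.915 * R0 * math.sqrt(rho / dp)

    # a ~5 micron bubble in water against atmospheric pressure (illustrative)
    t_c = rayleigh_collapse_time(5e-6)   # on the order of half a microsecond
    ```

    The sub-microsecond scale of t_c is what makes the picosecond-level timing stability reported for sonoluminescing bubbles so striking.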

  9. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

This paper describes analytical methods used in verification of large DC power systems with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been used extensively to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs that are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system, covering various operating scenarios and identifying the ones with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples applying DoE to analysis and verification of the ISS power system are provided.
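
    The DoE idea can be illustrated with the simplest design, a two-level full factorial: each system parameter is coded to ±1, the response is evaluated at every corner of the design, and main effects fall out as differences of averages. The toy "stability margin" response below is hypothetical, purely to show the mechanics (a real study would call the power-system simulation here):

    ```python
    from itertools import product

    def margin(x1, x2, x3):
        """Hypothetical stability-margin response of a converter network
        (a cheap stand-in for one full power-system simulation run)."""
        return 1.0 + 0.5 * x1 - 0.3 * x2 + 0.05 * x3 + 0.1 * x1 * x2

    # 2^3 full factorial over three coded parameters (e.g. filter C, load power, cable L)
    design = list(product((-1, +1), repeat=3))
    responses = [margin(*run) for run in design]

    def main_effect(i):
        """Mean response at the +1 level minus mean at the -1 level of factor i."""
        hi = [r for d, r in zip(design, responses) if d[i] == +1]
        lo = [r for d, r in zip(design, responses) if d[i] == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    effects = [main_effect(i) for i in range(3)]   # recovers [1.0, -0.6, 0.1]
    ```

    With hundreds of converters, fractional-factorial variants of the same idea keep the number of simulation runs tractable while still ranking which parameters drive stability.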

  10. A Renormalization-Group Interpretation of the Connection between Criticality and Multifractals

    NASA Astrophysics Data System (ADS)

    Chang, Tom

    2014-05-01

Turbulent fluctuations in space plasmas beget phenomena of dynamic complexity. It is known that dynamic renormalization group (DRG) may be employed to understand the concept of forced and/or self-organized criticality (FSOC), which seems to describe certain scaling features of space plasma turbulence. But it may be argued that dynamic complexity is not just a phenomenon of criticality. It is therefore of interest to inquire if DRG may be employed to study complexity phenomena that are distinctly more complicated than dynamic criticality. Power law scaling generally comes about when the DRG trajectory is attracted to the vicinity of a fixed point in the phase space of the relevant dynamic plasma parameters. What happens if the trajectory lies within a domain influenced by more than one fixed point, or more generally if the transformation underlying the DRG is fully nonlinear? The global invariants of the group under such situations (if they exist) are generally not power laws. Nevertheless, as we shall argue, it may still be possible to talk about local invariants that are power laws, with the nonlinearity of the transformation prescribing specific phenomena such as crossovers. It is with such a concept in mind that we may provide a connection between the properties of dynamic criticality and multifractals from the point of view of DRG (T. Chang, Chapter VII, "An Introduction to Space Plasma Complexity", Cambridge University Press, 2014). An example in terms of the concepts of finite-size scaling (FSS) and rank-ordered multifractal analysis (ROMA) of a toy model shall be provided. Research partially supported by the US National Science Foundation and the European Community's Seventh Framework Programme (FP7/ 2007-2013) under Grant agreement no. 313038/STORM.

  11. Parametrization study of the land multiparameter VTI elastic waveform inversion

    NASA Astrophysics Data System (ADS)

    He, W.; Plessix, R.-É.; Singh, S.

    2018-06-01

Multiparameter inversion of seismic data remains challenging due to the trade-off between the different elastic parameters and the non-uniqueness of the solution. The sensitivity of the seismic data to a given subsurface elastic parameter depends on the source and receiver ray/wave path orientations at the subsurface point. In a high-frequency approximation, this is commonly analysed through the study of the radiation patterns that indicate the sensitivity of each parameter versus the incoming (from the source) and outgoing (to the receiver) angles. In practice, this means that the inversion result becomes sensitive to the choice of parametrization, notably because the null-space of the inversion depends on this choice. We can use a least-overlapping parametrization that minimizes the overlaps between the radiation patterns, in which case each parameter is only sensitive in a restricted angle domain, or an overlapping parametrization that contains a parameter sensitive to all angles, in which case overlaps between the radiation patterns occur. Considering a multiparameter inversion in an elastic vertically transverse isotropic medium and a complex land geological setting, we show that the inversion with the least-overlapping parametrization gives less satisfactory results than with the overlapping parametrization. The difficulties come from the complex wave paths, which make it difficult to predict the areas of sensitivity of each parameter. This shows that the parametrization choice should not only be based on the radiation pattern analysis but also on the angular coverage at each subsurface point, which depends on the geology and the acquisition layout.

  12. Update on the NASA Glenn PSL Ice Crystal Cloud Characterization (2016)

    NASA Technical Reports Server (NTRS)

    Van Zante, J.; Bencic, T.; Ratvasky, Thomas P.; Struk, Peter M.

    2016-01-01

NASA Glenn's Propulsion Systems Laboratory (PSL) is an altitude engine research test facility capable of producing ice-crystal and supercooled liquid clouds. The cloud characterization parameter space is fairly large and complex, but the phase of the cloud seems primarily governed by wet bulb temperature. The presentation discusses some of the issues uncovered through the four cloud characterization efforts to date, as well as some of the instrumentation that has been used to characterize cloud parameters, including cloud uniformity, bulk total water content, median volumetric diameter and maximum diameter, percent freeze-out, and relative humidity.
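
    Since the cloud phase is said to be governed primarily by wet bulb temperature, one convenient way to map such a parameter space is an empirical wet-bulb approximation. The sketch below uses Stull's (2011) fit, valid roughly for relative humidity between 5% and 99% near standard pressure; using this particular formula is an assumption here, not the facility's stated method:

    ```python
    import math

    def wet_bulb_stull(T_c, RH_pct):
        """Stull (2011) empirical wet-bulb temperature (deg C) from
        dry-bulb temperature T_c (deg C) and relative humidity RH_pct (%)."""
        return (T_c * math.atan(0.151977 * math.sqrt(RH_pct + 8.313659))
                + math.atan(T_c + RH_pct)
                - math.atan(RH_pct - 1.676331)
                + 0.00391838 * RH_pct ** 1.5 * math.atan(0.023101 * RH_pct)
                - 4.686035)

    tw = wet_bulb_stull(20.0, 50.0)   # about 13.7 deg C
    ```

    Sweeping T and RH through such a formula gives a quick first-order map of where the wet-bulb temperature crosses 0 °C, the region where cloud phase transitions would be expected.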

  13. Optical bullets and "rockets" in nonlinear dissipative systems and their transformations and interactions.

    PubMed

    Soto-Crespo, J M; Grelu, Philippe; Akhmediev, Nail

    2006-05-01

We demonstrate the existence of stable optical light bullets in nonlinear dissipative media for both normal and anomalous chromatic dispersion. The prediction is based on direct numerical simulations of the (3+1)-dimensional complex cubic-quintic Ginzburg-Landau equation. We do not impose conditions of spherical or cylindrical symmetry. Regions of existence of stable bullets are determined in the parameter space. Beyond the domain of parameters where stable bullets are found, unstable bullets can be transformed into "rockets", i.e. bullets elongated in the temporal domain. A few examples of the interaction between two optical bullets are considered using spatial and temporal interaction planes.
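
    A (1+1)-dimensional reduction of the cubic-quintic complex Ginzburg-Landau equation can be integrated with a standard split-step Fourier scheme, separating the linear part (dispersion, spectral filtering, linear loss) from the nonlinear part (cubic/quintic terms, conservative and dissipative). The parameter values below are illustrative of the dissipative-soliton regime, not the specific sets used in the paper:

    ```python
    import numpy as np

    # i u_z + (D/2) u_tt + |u|^2 u + nu |u|^4 u
    #     = i (delta u + beta u_tt + eps |u|^2 u + mu |u|^4 u)
    D, delta, beta, eps, mu, nu = 1.0, -0.1, 0.08, 0.66, -0.1, -0.1

    n, L, dz = 256, 40.0, 0.01
    t = np.linspace(-L / 2, L / 2, n, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    # exact half-step of the linear operator in Fourier space
    half_lin = np.exp(0.5 * dz * (delta - (0.5j * D + beta) * k**2))

    def step(u):
        """One symmetric split-step: half linear, full nonlinear, half linear."""
        u = np.fft.ifft(half_lin * np.fft.fft(u))
        I = np.abs(u) ** 2
        u = u * np.exp(dz * (1j * (I + nu * I**2) + eps * I + mu * I**2))
        return np.fft.ifft(half_lin * np.fft.fft(u))

    u = 1.5 / np.cosh(t)              # sech-shaped initial pulse
    for _ in range(400):
        u = step(u)
    energy = float(np.sum(np.abs(u) ** 2) * (L / n))
    ```

    With cubic gain (eps > 0) saturated by quintic loss (mu < 0), the pulse energy settles to a finite value rather than decaying or blowing up, which is the hallmark of a dissipative soliton.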

  14. Using Simplistic Shape/Surface Models to Predict Brightness in Estimation Filters

    NASA Astrophysics Data System (ADS)

    Wetterer, C.; Sheppard, D.; Hunt, B.

The prerequisite for using brightness (radiometric flux intensity) measurements in an estimation filter is to have a measurement function that accurately predicts a space object's brightness for variations in the parameters of interest. These parameters include changes in attitude and articulations of particular components (e.g. solar panel east-west offsets to direct sun-tracking). Typically, shape models and bidirectional reflectance distribution functions are combined to provide this forward light curve modeling capability. To achieve precise orbit predictions with the inclusion of shape/surface dependent forces such as radiation pressure, relatively complex and sophisticated modeling is required. Unfortunately, increasing the complexity of the models makes it difficult to estimate all those parameters simultaneously because changes in light curve features can now be explained by variations in a number of different properties. The classic example of this is the connection between the albedo and the area of a surface. If, however, the desire is to extract information about a single and specific parameter or feature from the light curve, a simple shape/surface model could be used. This paper details an example of this where a complex model is used to create simulated light curves, and then a simple model is used in an estimation filter to extract a particular feature of interest. In order for this to be successful, however, the simple model must first be constructed using training data where the feature of interest is known or at least known to be constant.

  15. Continuous-time discrete-space models for animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.

    2015-01-01

The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework that maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
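
    The CTDS construction can be sketched as a log-linear model for the transition rate to each neighboring grid cell, λij = exp(β·xij): the total rate gives the expected residence time in the current cell, and the normalized rates give the embedded Markov chain. The coefficients and covariates below are hypothetical, purely for illustration:

    ```python
    import numpy as np

    beta = np.array([-0.5, 1.2])      # hypothetical coefficients: e.g. slope, forest cover
    # covariates for the 4 rook-neighbor cells of the current cell (hypothetical values)
    X = np.array([[0.3, 0.8],
                  [1.1, 0.1],
                  [-0.4, 0.5],
                  [0.0, 0.9]])

    rates = np.exp(X @ beta)          # CTDS transition rates to each neighbor
    total_rate = rates.sum()
    move_probs = rates / total_rate   # embedded-chain transition probabilities
    residence = 1.0 / total_rate      # expected holding time in the current cell
    ```

    Because the log-rate is linear in β, observed movement paths can be turned into a Poisson GLM likelihood, which is what makes the approach accessible with standard software.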

  16. ACES: Space shuttle flight software analysis expert system

    NASA Technical Reports Server (NTRS)

    Satterwhite, R. Scott

    1990-01-01

    The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.

  17. An online spatiotemporal prediction model for dengue fever epidemic in Kaohsiung (Taiwan).

    PubMed

    Yu, Hwa-Lung; Angulo, José M; Cheng, Ming-Hung; Wu, Jiaping; Christakos, George

    2014-05-01

The emergence and re-emergence of disease epidemics is a complex question that may be influenced by diverse factors, including the space-time dynamics of human populations, environmental conditions, and associated uncertainties. This study proposes a stochastic framework to integrate space-time dynamics in the form of a Susceptible-Infected-Recovered (SIR) model, together with uncertain disease observations, into a Bayesian maximum entropy (BME) framework. The resulting model (BME-SIR) can be used to predict space-time disease spread. Specifically, it was applied to obtain a space-time prediction of the dengue fever (DF) epidemic that took place in Kaohsiung City (Taiwan) during 2002. In implementing the model, the SIR parameters were continually updated and information on new cases of infection was incorporated. The results obtained show that the proposed model is robust to user-specified initial values of unknown model parameters, that is, transmission and recovery rates. In general, this model provides a good characterization of the spatial diffusion of the DF epidemic, especially in the city districts proximal to the location of the outbreak. Prediction performance may be affected by various factors, such as virus serotypes and human intervention, which can change the space-time dynamics of disease diffusion. The proposed BME-SIR disease prediction model can provide government agencies with a valuable reference for the timely identification, control, and prevention of DF spread in space and time. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
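
    The process model inside BME-SIR is the standard SIR dynamics. A minimal discrete-time update, shown here deterministically and without the Bayesian maximum entropy machinery or spatial coupling, looks like:

    ```python
    def sir_step(S, I, R, beta, gamma, dt=1.0):
        """One explicit-Euler step of the SIR model.
        beta: transmission rate, gamma: recovery rate (both per unit time)."""
        N = S + I + R
        new_inf = beta * S * I / N * dt
        new_rec = gamma * I * dt
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    # toy run with hypothetical rates (not the fitted Kaohsiung values)
    S, I, R = 99000.0, 1000.0, 0.0
    for _ in range(30):
        S, I, R = sir_step(S, I, R, beta=0.4, gamma=0.2)
    ```

    In the paper's framework these transition equations act as the prior process knowledge, while the BME step fuses them with uncertain case counts; here the update simply conserves the total population.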

  18. Design challenges for space bioreactors

    NASA Technical Reports Server (NTRS)

    Seshan, P. K.; Petersen, G. R.

    1989-01-01

    The design of bioreactors for operation under conditions of microgravity presents problems and challenges. Absence of a significant body force such as gravity can have profound consequences for interfacial phenomena. Marangoni convection can no longer be overlooked. Many speculations on the advantages and benefits of microgravity can be found in the literature. Initial bioreactor research considerations for space applications had little regard for the suitability of the designs for conditions of microgravity. Bioreactors can be classified in terms of their function and type of operation. The complex interaction of parameters leading to optimal design and operation of a bioreactor is illustrated by the JSC mammalian cell culture system. The design of a bioreactor is strongly dependent upon its intended use as a production unit for cell mass and/or biologicals or as a research reactor for the study of cell growth and function. Therefore a variety of bioreactor configurations are presented in rapid summary. Following this, a rationale is presented for not attempting to derive key design parameters such as the oxygen transfer coefficient from ground-based data. A set of themes/objectives for flight experiments to develop the expertise for design of space bioreactors is then proposed for discussion. These experiments, carried out systematically, will provide a database from which engineering tools for space bioreactor design will be derived.

  19. Ultra-high-field (9.4 T) MRI Analysis of Contrast Agent Transport Across the Blood-Perilymph Barrier and Intrastrial Fluid-Blood Barrier in the Mouse Inner Ear.

    PubMed

    Counter, S Allen; Nikkhou-Aski, Sahar; Damberg, Peter; Berglin, Cecilia Engmér; Laurell, Göran

    2017-08-01

An effective paramagnetic contrast agent for penetration of the perilymphatic spaces of the scala tympani, scala vestibuli, and scala media of the mouse inner ear can be determined using intravenous injection of various gadolinium (Gd) complexes and ultra-high-field magnetic resonance imaging (MRI) at 9.4 Tesla. A number of contrast agents have been explored in experimental high-field MRI to determine the most effective Gd complex for ideal signal-to-noise ratio and maximal visualization of the in vivo mammalian inner ear when analyzing the temporal and spatial parameters involved in drug penetration of the blood-perilymph barrier and the intrastrial fluid-blood barrier in the mouse model. Gadoteric acid (Dotarem), Gadobutrol (Gadovist), Gadodiamide (Omniscan), Gadopentetic acid (Magnevist), and Mangafodipir (Teslascan) were administered intravenously using the tail vein of 60 Balb/C mice. High-resolution T1 images of drug penetration were acquired with a horizontal 9.4 T Agilent magnet after intravenous injection. Signal intensity was used as a metric of the temporal and spatial parameters of drug delivery and penetration of the perilymphatic and endolymphatic spaces. An ANOVA of the area under the curve of intensity enhancement in perilymph revealed a significant difference (p < 0.05) in the scalae uptake using different contrast agents (F (3,25) = 3.54, p = 0.029). The Gadoteric acid complex Dotarem was found to be the most effective Gd compound in terms of rapid morphological enhancement for analysis of the temporal and spatial distribution in the perilymphatic space of the inner ear. Gadoteric acid (Dotarem) demonstrated efficacy as a contrast agent for enhanced visualization of the perilymphatic spaces of the inner ear labyrinth in the mouse, including the scala tympani and scala vestibuli of the cochlea, and the semicircular canals of the vestibular apparatus. These findings may inform the clinical application of Gd compounds in patients with inner ear fluid disorders and vertigo.

  20. Using state-space models to predict the abundance of juvenile and adult sea lice on Atlantic salmon.

    PubMed

    Elghafghuf, Adel; Vanderstichel, Raphael; St-Hilaire, Sophie; Stryhn, Henrik

    2018-04-11

Sea lice are marine parasites affecting salmon farms, and are considered one of the most costly pests of the salmon aquaculture industry. Infestations of sea lice on farms significantly increase opportunities for the parasite to spread in the surrounding ecosystem, making control of this pest a challenging issue for salmon producers. The complexity of controlling sea lice on salmon farms requires frequent monitoring of the abundance of different sea lice stages over time. Industry-based data sets of lice counts are amenable to multivariate time-series analyses. In this study, two sets of multivariate autoregressive state-space models were applied to Chilean sea lice data from six Atlantic salmon production cycles on five isolated farms (at least 20 km seaway distance away from other known active farms), to evaluate the utility of these models for predicting sea lice abundance over time on farms. The models were constructed with different parameter configurations, and the analysis demonstrated large heterogeneity between production cycles for the autoregressive parameter, the effects of chemotherapeutant bath treatments, and the process-error variance. A model allowing for different parameters across production cycles had the best fit and the smallest overall prediction errors. However, pooling information across cycles for the drift and observation error parameters did not substantially affect model performance, thus reducing the number of necessary parameters in the model. Bath treatments had strong but variable effects for reducing sea lice burdens, and these effects were stronger for adult lice than juvenile lice. Our multivariate state-space models were able to handle different sea lice stages and provide predictions for sea lice abundance with reasonable accuracy up to five weeks out. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
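
    At its core, one-step-ahead prediction in such a state-space model is a Kalman recursion over a latent (e.g. log-abundance) state. The univariate AR(1) sketch below, with hypothetical variances and without the treatment covariates, shows the predict/update cycle, including how a missing weekly count is simply skipped in the update:

    ```python
    def kalman_ar1(y, phi, q, r, drift=0.0, x0=0.0, p0=1.0):
        """One-step-ahead predictions for a univariate AR(1) state-space model:
        x_t = phi * x_{t-1} + drift + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,                    v_t ~ N(0, r)
        Entries of y equal to None are treated as missing observations."""
        x, P, preds = x0, p0, []
        for obs in y:
            x = phi * x + drift            # predict
            P = phi * phi * P + q
            preds.append(x)
            if obs is not None:            # update (skipped for missing weeks)
                K = P / (P + r)
                x = x + K * (obs - x)
                P = (1.0 - K) * P
        return preds

    # hypothetical weekly log-abundance observations, one week missing
    preds = kalman_ar1([0.2, 0.5, None, 1.1, 0.9], phi=0.9, q=0.05, r=0.1)
    ```

    The multivariate models in the study extend this recursion to several lice stages at once, with treatment effects entering as covariates on the state equation.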

  1. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. The USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses, in which the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space.
The three most important parameters were: mass density of live and dead roots found in the upper inch of soil (C factor), slope angle (L and S factors), and percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a provider of a vital ecosystem service: soil loss reduction. Concurrently, progress is already being made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models such as the RUSLE can help managers quantify the effect of specific land use changes. Moreover, effective land management of vegetation has other important benefits, such as bundled ecosystem services (e.g., pollination, habitat connectivity) and improvements in communities' livelihoods.
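
    A first-order, variance-based sensitivity index of the kind a GSA produces can be estimated by brute force: fix one factor on an outer sample, average the model over the other factors, and compare the variance of those conditional means to the total output variance. The sketch below uses the purely multiplicative RUSLE form (A = R·K·LS·C·P) with hypothetical, uncalibrated factor ranges, so the index values carry no physical meaning:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def soil_loss(R, K, LS, C, P):
        """Multiplicative toy stand-in for RUSLE: A = R * K * LS * C * P."""
        return R * K * LS * C * P

    # hypothetical uniform ranges for each factor (illustrative, not calibrated)
    ranges = {"R": (50.0, 500.0), "K": (0.1, 0.5), "LS": (0.5, 5.0),
              "C": (0.001, 0.5), "P": (0.5, 1.0)}
    names = list(ranges)

    def sample(name, size):
        lo, hi = ranges[name]
        return rng.uniform(lo, hi, size)

    n_outer, n_inner = 200, 200
    total_var = soil_loss(*(sample(nm, n_outer * n_inner) for nm in names)).var()

    Si = {}
    for i, nm in enumerate(names):
        cond_means = []
        for v in sample(nm, n_outer):            # fix this factor on an outer sample
            args = [np.full(n_inner, v) if j == i else sample(other, n_inner)
                    for j, other in enumerate(names)]
            cond_means.append(soil_loss(*args).mean())
        Si[nm] = np.var(cond_means) / total_var  # first-order sensitivity estimate
    ```

    With these ranges the cover-management factor C has the largest relative spread, so its index dominates, qualitatively mirroring the ranking reported above.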

  2. Insight into model mechanisms through automatic parameter fitting: a new methodological framework for model development

    PubMed Central

    2014-01-01

Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations, is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model, are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data.
Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522

  3. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output, and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum-likelihood or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low-order models, such as the Lorenz 40D system.
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs', which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
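
    The core of such an emulator is Gaussian-process regression: fit a covariance (here a squared-exponential kernel with hand-picked rather than inferred hyperparameters) to a few simulator runs, then interpolate the response with quantified uncertainty. A minimal numpy sketch with a cheap stand-in "simulator":

    ```python
    import numpy as np

    def rbf(x1, x2, ell=0.5, sigma=1.0):
        """Squared-exponential covariance between two 1-D input vectors."""
        d2 = (x1[:, None] - x2[None, :]) ** 2
        return sigma**2 * np.exp(-0.5 * d2 / ell**2)

    simulator = lambda x: np.sin(3.0 * x) + 0.5 * x   # cheap stand-in for an expensive model

    x_train = np.linspace(0.0, 2.0, 8)                # the "ensemble" of training runs
    y_train = simulator(x_train)
    K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter for conditioning
    alpha = np.linalg.solve(K, y_train)

    def emulate(x_star):
        """Posterior mean and variance of the GP emulator at new inputs."""
        k_star = rbf(x_star, x_train)
        mean = k_star @ alpha
        var = (rbf(x_star, x_star).diagonal()
               - np.einsum("ij,ji->i", k_star, np.linalg.solve(K, k_star.T)))
        return mean, var

    mean, var = emulate(np.array([0.7, 1.3]))
    ```

    The emulator reproduces the training runs exactly (up to jitter) and reports growing variance away from them, which is what permits cheap Monte Carlo sampling for the forecast distribution.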

  4. Diverse knowledges and competing interests: an essay on socio-technical problem-solving.

    PubMed

    di Norcia, Vincent

    2002-01-01

Solving complex socio-technical problems, this paper claims, involves diverse knowledges (cognitive diversity), competing interests (social diversity), and pragmatism. To explain this view, this paper first explores two different cases: Canadian pulp and paper mill pollution, and siting nuclear reactors in seismically sensitive areas of California. Solving such socio-technically complex problems involves cognitive diversity as well as social diversity and pragmatism. Cognitive diversity requires one not only to recognize relevant knowledges but also to assess their validity. Finally, it is suggested, integrating the resultant set of diverse relevant and valid knowledges determines the parameters of the solution space for the problem.

  5. Enhanced di-Higgs boson production in the complex Higgs singlet model

    DOE PAGES

    Dawson, S.; Sullivan, M.

    2018-01-31

Here, we consider the standard model (SM) extended by the addition of a complex scalar singlet, with no assumptions about additional symmetries of the potential. This model provides for resonant di-Higgs production of Higgs particles with different masses. We demonstrate that regions of parameter space allowed by precision electroweak measurements, experimental limits on single Higgs production, and perturbative unitarity allow for large di-Higgs production rates relative to the SM rates. In this scenario, the dominant production mechanism of the new scalar states is di-Higgs production. Results are presented for √s = 13, 27 and 100 TeV.

  6. Enhanced di-Higgs boson production in the complex Higgs singlet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.; Sullivan, M.

Here, we consider the standard model (SM) extended by the addition of a complex scalar singlet, with no assumptions about additional symmetries of the potential. This model provides for resonant di-Higgs production of Higgs particles with different masses. We demonstrate that regions of parameter space allowed by precision electroweak measurements, experimental limits on single Higgs production, and perturbative unitarity allow for large di-Higgs production rates relative to the SM rates. In this scenario, the dominant production mechanism of the new scalar states is di-Higgs production. Results are presented for $\sqrt{s}$ = 13, 27 and 100 TeV.

  7. Sampling ARG of multiple populations under complex configurations of subdivision and admixture.

    PubMed

    Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi

    2016-04-01

Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs and improves dramatically on the time and space requirements of the classical algorithm for single populations. Using the underlying random-graphs model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial-and-error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge, this is the first time closed-form expressions have been computed for the ARG properties. We show through simulations that the expected values closely match the empirical values. Finally, we demonstrate through extensive experiments that SimRA produces the ARG in compact form without compromising any accuracy. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at https://github.com/ComputationalGenomics/SimRA. Contact: parida@us.ibm.com. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Real topological entropy versus metric entropy for birational measure-preserving transformations

    NASA Astrophysics Data System (ADS)

    Abarenkova, N.; Anglès d'Auriac, J.-Ch.; Boukraa, S.; Maillard, J.-M.

    2000-10-01

We consider a family of birational measure-preserving transformations of two complex variables, depending on one parameter, for which simple rational expressions for the dynamical zeta function have been conjectured, together with an equality between the topological entropy and the logarithm of the Arnold complexity (divided by the number of iterations). Similar results have been obtained for the adaptation of these two concepts to dynamical systems of real variables, leading to the introduction of a “real topological entropy” and a “real Arnold complexity”. We compare here the Kolmogorov-Sinai metric entropy and this real Arnold complexity, or real topological entropy, on this particular example of a one-parameter-dependent birational transformation of two variables. More precisely, we analyze, using an infinite-precision calculation, the Lyapunov characteristic exponents for various values of the parameter of the birational transformation, in order to compare these results with the ones for the real Arnold complexity. We find a quite surprising result: for this very birational example and, in fact, for a large set of birational measure-preserving mappings generated by involutions, the Lyapunov characteristic exponents seem to be equal to zero, or at least extremely small, for all the orbits we have considered and for all values of the parameter. Birational measure-preserving transformations generated by involutions could thus allow a better understanding of the difference between the topological description and the probabilistic description of discrete dynamical systems. Many birational measure-preserving transformations generated by involutions seem to provide examples of discrete dynamical systems which can be topologically chaotic while being metrically almost quasi-periodic. Heuristically, this can be understood as a consequence of the fact that their orbits seem to form some kind of “transcendental foliation” of the two-dimensional space of variables.
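The tangent-map procedure behind such Lyapunov-exponent measurements can be illustrated on a standard two-variable map. The Chirikov standard map below is a stand-in assumption (the abstract does not give the birational family explicitly), but the renormalization loop is the generic technique:

```python
import numpy as np

# Chirikov standard map (area-preserving): p' = p + k sin(x), x' = x + p'.
def step(x, p, k):
    p = (p + k * np.sin(x)) % (2 * np.pi)
    x = (x + p) % (2 * np.pi)
    return x, p

# Largest Lyapunov exponent via tangent-vector renormalization.
def lyapunov(x, p, k, n=20000):
    v = np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n):
        # Jacobian d(x', p')/d(x, p) evaluated at the current point (det = 1).
        J = np.array([[1.0 + k * np.cos(x), 1.0],
                      [k * np.cos(x),       1.0]])
        v = J @ v
        norm = np.linalg.norm(v)
        total += np.log(norm)
        v /= norm
        x, p = step(x, p, k)
    return total / n

print(lyapunov(0.5, 0.3, 0.0))  # integrable case: exponent near zero
print(lyapunov(0.5, 0.3, 5.0))  # chaotic sea: clearly positive exponent
```

A near-zero exponent on every orbit, as the abstract reports for the birational family, is the signature of metric (probabilistic) regularity even when topological indicators suggest chaos.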

  9. Synthesis and spectral characterization of mono- and binuclear copper(II) complexes derived from 2-benzoylpyridine-N⁴-methyl-3-thiosemicarbazone: crystal structure of a novel sulfur bridged copper(II) box-dimer.

    PubMed

    Jayakumar, K; Sithambaresan, M; Aiswarya, N; Kurup, M R Prathapachandra

    2015-03-15

Mononuclear and binuclear copper(II) complexes of 2-benzoylpyridine-N(4)-methyl thiosemicarbazone (HL) were prepared and characterized by a variety of spectroscopic techniques. Structural evidence for the novel sulfur-bridged copper(II) iodo binuclear complex is obtained by single crystal X-ray diffraction analysis. The complex [Cu2L2I2], a non-centrosymmetric box dimer, crystallizes in the monoclinic C2/c space group and was found to have distorted square pyramidal geometry (Addison parameter, τ=0.238), with the square basal plane occupied by the thiosemicarbazone moiety and an iodine atom, whereas the sulfur atom from the other coordinated thiosemicarbazone moiety occupies the apical position. This is the first crystallographically studied system having non-centrosymmetric entities bridged via thiolate S atoms with a Cu(II)-I bond. The tridentate thiosemicarbazone coordinates in the monodeprotonated thionic tautomeric form in all complexes except the sulfato complex, [Cu(HL)(SO4)]·H2O (1), where it binds to the metal centre in neutral form. The magnetic moment values and the EPR spectral studies reflect the binuclearity of some of the complexes. The spin Hamiltonian and bonding parameters are calculated based on EPR studies. In all the complexes g||>g⊥>2.0023, and the g values in frozen DMF are consistent with the d(x2-y2) ground state. The thermal stabilities of some of the complexes were also determined. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

It is notably important to emphasize the role of uncertainty, particularly when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, generally, both approaches are able to provide similar predictive performance. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins), because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a combination of both methodologies which could be useful to achieve less biased hydrological parameter estimation: first, the predictive distribution is obtained through the Model Conditional Processor; second, this predictive distribution is used to derive the corresponding additive error model, which is employed for hydrological parameter estimation with the Bayesian Joint Inference methodology.

  11. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of the confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even for simple (non)linear systems. Our algorithm provides a tractable alternative. We finally apply our methods to a large-scale benchmark systems biology model of nuclear factor (NF)-κB signalling, uncovering unidentifiabilities.
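The local Fisher-information analysis that multiscale sloppiness generalizes can be demonstrated on a toy model. The model y(t) = a·b·t and the unit noise variance are illustrative assumptions: only the product a·b is identifiable, and the zero eigenvalue of the Fisher information matrix exposes both the unidentifiability and the functional relation defining it:

```python
import numpy as np

# Toy model y(t) = a * b * t: only the product a*b is identifiable.
t = np.linspace(0.1, 1.0, 10)
a, b = 2.0, 3.0

# Sensitivity (Jacobian) columns: dy/da = b*t, dy/db = a*t.
S = np.column_stack([b * t, a * t])
fim = S.T @ S                       # Fisher information matrix, unit noise variance

eigvals, eigvecs = np.linalg.eigh(fim)   # eigenvalues in ascending order
# A (near-)zero eigenvalue flags an unidentifiable direction; its eigenvector
# gives the local functional relation: here b*da + a*db = 0, i.e. a*b constant.
print(eigvals)
print(eigvecs[:, 0])
```

As the abstract notes, this local picture is only valid for infinitesimal measurement uncertainty; the paper's multiscale sloppiness extends the diagnosis to finite uncertainty.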

  12. Maximum profile likelihood estimation of differential equation parameters through model based smoothing state estimates.

    PubMed

    Campbell, D A; Chkrebtii, O

    2013-12-01

    Statistical inference for biochemical models often faces a variety of characteristic challenges. In this paper we examine state and parameter estimation for the JAK-STAT intracellular signalling mechanism, which exemplifies the implementation intricacies common in many biochemical inference problems. We introduce an extension to the Generalized Smoothing approach for estimating delay differential equation models, addressing selection of complexity parameters, choice of the basis system, and appropriate optimization strategies. Motivated by the JAK-STAT system, we further extend the generalized smoothing approach to consider a nonlinear observation process with additional unknown parameters, and highlight how the approach handles unobserved states and unevenly spaced observations. The methodology developed is generally applicable to problems of estimation for differential equation models with delays, unobserved states, nonlinear observation processes, and partially observed histories. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  13. Two Stage Repair of Composite Craniofacial Defects with Antibiotic Releasing Porous Poly(methyl methacrylate) Space Maintainers and Bone Regeneration

    NASA Astrophysics Data System (ADS)

    Spicer, Patrick

Craniofacial defects resulting from trauma and resection present many challenges to reconstruction due to the complex structure, combinations of tissues, and environment, with exposure to oral, skin and nasal mucosal pathogens. Tissue engineering seeks to regenerate the tissues lost in these defects; however, the composite nature of the defects and their proximity to colonizing bacteria remain difficult to overcome. Additionally, many tissue engineering approaches face further hurdles in the regulatory process on the way to clinical translation. As such, these studies investigated a two-stage strategy employing an antibiotic-releasing porous poly(methyl methacrylate) space maintainer fabricated with materials currently part of products approved or cleared by the United States Food and Drug Administration, expediting translation to the clinic. The porous space maintainer holds the bone defect open, allowing soft tissue to heal around the defect; the space maintainer can then be removed and bone regenerated in the defect. These studies investigated the individual components of this strategy. The porous space maintainer showed soft tissue healing and response similar to non-porous space maintainers in a rabbit composite tissue defect. The antibiotic-releasing space maintainers showed release of antibiotics for 1 to 5 weeks, which could be controlled by loading and fabrication parameters. In vivo, space maintainers releasing a high dose of antibiotics for an extended period of time improved soft tissue healing over burst-release space maintainers in an infected composite tissue defect model in the rabbit mandible. Finally, stabilization of bone defects and regeneration could be improved through scaffold structures and delivery of a bone-forming growth factor. These studies illustrate the promise of the two-stage strategy for repair of composite tissue defects of the craniofacial complex.

  14. Observations of Transient ISS Floating Potential Variations During High Voltage Solar Array Operations

    NASA Technical Reports Server (NTRS)

    Willis, Emily M.; Minow, Joseph I.; Parker, Linda N.; Pour, Maria Z. A.; Swenson, Charles; Nishikawa, Ken-ichi; Krause, Linda Habash

    2016-01-01

The International Space Station (ISS) continues to be a world-class space research laboratory after over 15 years of operations, and it has proven to be a fantastic resource for observing spacecraft floating potential variations related to high voltage solar array operations in Low Earth Orbit (LEO). Measurements of the ionospheric electron density and temperature along the ISS orbit and variations in the ISS floating potential are obtained from the Floating Potential Measurement Unit (FPMU). In particular, rapid variations in ISS floating potential during solar array operations on time scales of tens of milliseconds can be recorded due to the 128 Hz sample rate of the Floating Potential Probe (FPP), providing interesting insight into high voltage solar array interaction with the space plasma environment. Comparing the FPMU data with the ISS operations timeline and solar array data provides a means for correlating some of the more complex and interesting transient floating potential variations with mission operations. These complex variations are not reproduced by current models and require further study to understand the underlying physical processes. In this paper we present some of the floating potential transients observed over the past few years along with the relevant space environment parameters and solar array operations data.

  15. A General Framework for Thermodynamically Consistent Parameterization and Efficient Sampling of Enzymatic Reactions

    PubMed Central

    Saa, Pedro; Nielsen, Lars K.

    2015-01-01

Kinetic models provide the means to understand and predict the dynamic behaviour of enzymes upon different perturbations. Despite their obvious advantages, classical parameterizations require large amounts of data to fit their parameters. In particular, enzymes displaying complex reaction and regulatory (allosteric) mechanisms require a great number of parameters and are therefore often represented by approximate formulae, thereby facilitating the fitting but ignoring many real kinetic behaviours. Here, we show that full exploration of the plausible kinetic space for any enzyme can be achieved using sampling strategies, provided a thermodynamically feasible parameterization is used. To this end, we developed a General Reaction Assembly and Sampling Platform (GRASP) capable of consistently parameterizing and sampling accurate kinetic models using minimal reference data; it integrates the generalized MWC model and the elementary reaction formalism. By formulating the appropriate thermodynamic constraints, our framework enables parameterization of any oligomeric enzyme kinetics without sacrificing complexity or using simplifying assumptions. This thermodynamically safe parameterization relies on the definition of a reference state upon which feasible parameter sets can be efficiently sampled. Uniform sampling of the kinetic space enabled dissecting enzyme catalysis and revealing the impact of thermodynamics on reaction kinetics. Our analysis distinguished three reaction elasticity regions for common biochemical reactions: a steep linear region (0 > ΔGr > -2 kJ/mol), a transition region (-2 > ΔGr > -20 kJ/mol) and a constant elasticity region (ΔGr < -20 kJ/mol). We also applied this framework to model more complex kinetic behaviours, such as the monomeric cooperativity of the mammalian glucokinase and the ultrasensitive response of the phosphoenolpyruvate carboxylase of Escherichia coli. In both cases, our approach not only described the kinetic behaviour of these enzymes appropriately, but also provided insights into the particular features underpinning the observed kinetics. Overall, this framework will enable systematic parameterization and sampling of enzymatic reactions. PMID:25874556
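The three elasticity regions quoted above follow qualitatively from the thermodynamic factor common to reversible rate laws. The sketch below uses the generic form v ∝ 1 − exp(ΔGr/RT) as an assumption, not GRASP's actual formalism, to show how the sensitivity of the rate to the driving force is steep near equilibrium and vanishes far from it:

```python
import numpy as np

RT = 2.5  # kJ/mol at ~298 K

def thermo_factor(dG):
    """Thermodynamic part of a reversible rate law, v proportional to 1 - exp(dG/RT)."""
    return 1.0 - np.exp(dG / RT)

def sensitivity(dG):
    """Derivative of the factor w.r.t. the driving force dG."""
    return -np.exp(dG / RT) / RT

for dG in (-1.0, -10.0, -30.0):   # one value from each region quoted above
    print(dG, thermo_factor(dG), sensitivity(dG))
```

Near equilibrium (ΔGr just below 0) the factor changes steeply with ΔGr; beyond about -20 kJ/mol it saturates at 1 and the reaction behaves irreversibly, consistent with the constant elasticity region.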

  16. Water Quality Assessment in the Harbin Reach of the Songhuajiang River (China) Based on a Fuzzy Rough Set and an Attribute Recognition Theoretical Model

    PubMed Central

    An, Yan; Zou, Zhihong; Li, Ranran

    2014-01-01

A large number of parameters are acquired during practical water quality monitoring. If all of them are used in water quality assessment, the computational complexity increases considerably. In order to reduce the dimension of the input space, a fuzzy rough set was introduced to perform attribute reduction. An attribute recognition theoretical model and the entropy method were then combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October 2012. The fuzzy rough set reduced the ten parameters to four: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show good consistency with those of Reduct A, which means that DO is not always necessary for assessing water quality. The results with attribute reduction are not exactly the same as those without it, which can be attributed to the α value being decided by subjective experience. Assessment based on the fuzzy rough set clearly reduces computational complexity, and its results are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system. PMID:24675643
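The entropy method mentioned above assigns indicator weights from the information content of each column of the monitoring matrix; a minimal sketch with made-up data (not the Songhuajiang measurements):

```python
import numpy as np

# Rows = monitoring samples, columns = water-quality parameters.
# Values are illustrative only.
X = np.array([[2.1, 0.4, 0.10],
              [3.5, 0.8, 0.22],
              [2.8, 0.5, 0.15],
              [4.0, 1.1, 0.30]])

P = X / X.sum(axis=0)                         # normalize each indicator column
n = X.shape[0]
E = -(P * np.log(P)).sum(axis=0) / np.log(n)  # entropy of each indicator, in [0, 1]
w = (1 - E) / (1 - E).sum()                   # low entropy (more variation) -> higher weight

print(w, w.sum())
```

Indicators whose values vary strongly across samples carry more discriminating information and therefore receive larger weights in the assessment.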

  17. Visualization of International Solar-Terrestrial Physics Program (ISTP) data

    NASA Technical Reports Server (NTRS)

    Kessel, Ramona L.; Candey, Robert M.; Hsieh, Syau-Yun W.; Kayser, Susan

    1995-01-01

    The International Solar-Terrestrial Physics Program (ISTP) is a multispacecraft, multinational program whose objective is to promote further understanding of the Earth's complex plasma environment. Extensive data sharing and data analysis will be needed to ensure the success of the overall ISTP program. For this reason, there has been a special emphasis on data standards throughout ISTP. One of the key tools will be the common data format (CDF), developed, maintained, and evolved at the National Space Science Data Center (NSSDC), with the set of ISTP implementation guidelines specially designed for space physics data sets by the Space Physics Data Facility (associated with the NSSDC). The ISTP guidelines were developed to facilitate searching, plotting, merging, and subsetting of data sets. We focus here on the plotting application. A prototype software package was developed to plot key parameter (KP) data from the ISTP program at the Science Planning and Operations Facility (SPOF). The ISTP Key Parameter Visualization Tool is based on the Interactive Data Language (IDL) and is keyed to the ISTP guidelines, reading data stored in CDF. With the combination of CDF, the ISTP guidelines, and the visualization software, we can look forward to easier and more effective data sharing and use among ISTP scientists.

  18. A Space-Time Signal Decomposition Algorithm for Downlink MIMO DS-CDMA Receivers

    NASA Astrophysics Data System (ADS)

    Wang, Yung-Yi; Fang, Wen-Hsien; Chen, Jiunn-Tsair

We propose a dimension reduction algorithm for the downlink receiver of direct-sequence code-division multiple access (DS-CDMA) systems in which both the transmitters and the receivers employ antenna arrays with multiple elements. To estimate the high-order channel parameters, we develop a layered architecture using dimension-reduced parameter estimation algorithms to estimate the frequency-selective multipath channels. In the proposed architecture, to exploit the space-time geometric characteristics of multipath channels, spatial beamformers and constrained (or unconstrained) temporal filters are adopted for clustered-multipath grouping and path isolation. In conjunction with multiple access interference (MAI) suppression techniques, the proposed architecture jointly estimates the directions of arrival, propagation delays, and fading amplitudes of the downlink fading multipaths. With the outputs of the proposed architecture, the signals of interest can then be naturally detected by path-wise maximum ratio combining. Compared to traditional techniques, such as the Joint-Angle-and-Delay-Estimation (JADE) algorithm for DOA-delay joint estimation and the space-time minimum mean square error (ST-MMSE) algorithm for signal detection, computer simulations show that the proposed algorithm substantially mitigates the computational complexity at the expense of only slight performance degradation.

  19. PyDREAM: high-dimensional parameter inference for biological models in python.

    PubMed

    Shockley, Erin M; Vrugt, Jasper A; Lopez, Carlos F; Valencia, Alfonso

    2018-02-15

Biological models contain many parameters whose values are difficult to measure directly via experimentation and therefore require calibration against experimental data. Markov chain Monte Carlo (MCMC) methods are suitable for estimating multivariate posterior model parameter distributions, but these methods may exhibit slow or premature convergence in high-dimensional search spaces. Here, we present PyDREAM, a Python implementation of the (Multiple-Try) Differential Evolution Adaptive Metropolis [DREAM(ZS)] algorithm developed by Vrugt and ter Braak (2008) and Laloy and Vrugt (2012). PyDREAM achieves excellent performance for complex, parameter-rich models and takes full advantage of distributed computing resources, facilitating parameter inference and uncertainty estimation for CPU-intensive biological models. PyDREAM is freely available under the GNU GPLv3 license from the Lopez lab GitHub repository at http://github.com/LoLab-VU/PyDREAM. Contact: c.lopez@vanderbilt.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
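The differential-evolution proposal at the heart of DREAM-family samplers can be sketched with a minimal DE-MC loop: each chain proposes a jump along the difference of two other randomly chosen chains. This is a schematic of the underlying idea, not PyDREAM's actual API or implementation, and the 2D Gaussian target is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    return -0.5 * np.sum(x ** 2)   # standard 2D Gaussian target (illustrative)

d, n_chains, n_iter = 2, 10, 3000
gamma = 2.38 / np.sqrt(2 * d)      # classic DE-MC jump scaling
chains = rng.normal(size=(n_chains, d))
samples = []

for it in range(n_iter):
    for i in range(n_chains):
        # Difference of two other chains defines the proposal direction and scale.
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i], 2, replace=False)
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) + rng.normal(0, 1e-4, d)
        if np.log(rng.random()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop
    if it > n_iter // 2:           # discard the first half as burn-in
        samples.append(chains.copy())

samples = np.concatenate(samples)
print(samples.mean(axis=0), samples.std(axis=0))
```

Because the proposal scale adapts automatically to the posterior's shape through the inter-chain differences, DE-MC-style samplers cope far better with correlated, high-dimensional posteriors than a fixed-width random walk.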

  20. Estimation of channel parameters and background irradiance for free-space optical link.

    PubMed

    Khatoon, Afsana; Cowley, William G; Letzepis, Nick; Giggenbach, Dirk

    2013-05-10

Free-space optical communication can experience severe fading due to optical scintillation in long-range links. Channel estimation is also corrupted by background and electrical noise, and accurate estimation of channel parameters and the scintillation index (SI) depends on complete removal of the background irradiance. In this paper, we propose three different methods, the minimum-value (MV), mean-power (MP), and maximum-likelihood (ML) based methods, to remove the background irradiance from channel samples. The MV and MP methods do not require knowledge of the scintillation distribution; while the ML-based method assumes gamma-gamma scintillation, it can easily be modified to accommodate other distributions. Each estimator's performance is evaluated from low- to high-SI regimes using simulation data as well as experimental measurements. The MV and MP methods have much lower complexity than the ML-based method; however, the ML-based method shows better SI and background-irradiance estimation performance.
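The idea behind the MV method can be sketched as follows: in strong fading the received irradiance occasionally drops almost to the background level, so the sample minimum approximates the background. The log-normal fading model, background level, and the formula SI = var(I)/mean(I)² below are illustrative assumptions, not the paper's exact estimators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic strongly fading irradiance (log-normal) plus a constant background.
signal = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
background = 0.4
samples = signal + background

def si(I):
    """Scintillation index: normalized irradiance variance."""
    return I.var() / I.mean() ** 2

# MV method: the smallest received sample approximates the background level.
b_mv = samples.min()

# Without background removal the SI is biased low; after MV removal it is
# close to the SI of the fading signal alone.
print(si(signal), si(samples), si(samples - b_mv))
```

The same sketch also shows why imperfect removal matters: any residual background inflates the mean irradiance and pulls the estimated SI below its true value.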

  1. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  2. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  3. Water sprays in space retrieval operations. [for disabled spacecraft detumbling and despinning

    NASA Technical Reports Server (NTRS)

    Freesland, D. C.

    1978-01-01

The water spray technique (WST) for nullifying the angular momentum of a disabled spacecraft is examined. Such a despinning operation is necessary before a disabled spacecraft can be retrieved by the Space Shuttle. The WST, involving the use of liquid sprays, appears to be less complex and costly than other techniques proposed to despin a disabled vehicle. A series of experiments has been conducted to determine the physical properties of water sprays exhausting into a vacuum. A computer model is built which, together with the experimental results, yields satellite despin performance parameters. The selection and retrieval of an actual disabled spacecraft is considered to demonstrate an application of the WST.

  4. Q-space analysis of light scattering by ice crystals

    NASA Astrophysics Data System (ADS)

    Heinson, Yuli W.; Maughan, Justin B.; Ding, Jiachen; Chakrabarti, Amitabha; Yang, Ping; Sorensen, Christopher M.

    2016-12-01

Q-space analysis is applied to extensive simulations of the single-scattering properties of ice crystals with various habits/shapes over a range of sizes. The analysis uncovers features common to all the shapes: a forward scattering regime with intensity quantitatively related to the Rayleigh scattering by the particle and the internal coupling parameter, followed by a Guinier regime dependent upon the particle size, a complex power-law regime with incipient two-dimensional diffraction effects, and, in some cases, an enhanced backscattering regime. The effects of significant absorption on the scattering profile are also studied. The overall features found for the ice crystals are similar to features in scattering from same-sized spheres.

  5. A class of traveling wave solutions for space-time fractional biological population model in mathematical physics

    NASA Astrophysics Data System (ADS)

    Akram, Ghazala; Batool, Fiza

    2017-10-01

The (G'/G)-expansion method is utilized for a reliable treatment of the space-time fractional biological population model. The method has been applied in the sense of Jumarie's modified Riemann-Liouville derivative. Three classes of exact traveling wave solutions of the associated equation, hyperbolic, trigonometric and rational, are characterized with some free parameters. A generalized fractional complex transform is applied to convert the fractional equations to ordinary differential equations, which subsequently yield a number of exact solutions. It should be mentioned that the (G'/G)-expansion method is very effective and convenient for solving nonlinear partial differential equations of fractional order whose balancing number is a negative integer.

  6. Recursive Branching Simulated Annealing Algorithm

    NASA Technical Reports Server (NTRS)

    Bolcar, Matthew; Smith, J. Scott; Aronstein, David

    2012-01-01

    This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
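
    The conventional single-trajectory SA loop that RBSA parallelizes can be sketched in a few lines (a minimal illustration, not the NASA implementation; the quadratic objective, cooling schedule, and shrink rate are arbitrary choices):

```python
import math
import random

def simulated_annealing(objective, lo, hi, steps=20000, t0=1.0, seed=0):
    """Minimal SA sketch: random start, shrinking search region, cooling temperature."""
    rng = random.Random(seed)
    x = rng.uniform(lo, hi)            # random starting configuration
    fx = objective(x)
    radius = hi - lo                   # region from which new configurations are drawn
    for k in range(steps):
        t = t0 * (1.0 - k / steps)     # lower the annealing temperature
        cand = min(hi, max(lo, x + rng.uniform(-radius, radius)))
        fc = objective(cand)
        # Accept better moves always; worse moves with Boltzmann probability.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        radius *= 0.9995               # shrink the selectable region
    return x, fx

x_best, f_best = simulated_annealing(lambda x: (x - 2.0) ** 2, -10.0, 10.0)
```

RBSA would run many such trajectories from branched starting configurations instead of one.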

  7. Analysis of mesoscopic attenuation in gas-hydrate bearing sediments

    NASA Astrophysics Data System (ADS)

    Rubino, J. G.; Ravazzoli, C. L.; Santos, J. E.

    2007-05-01

    Several authors have shown that seismic wave attenuation combined with seismic velocities constitutes a useful geophysical tool to infer the presence and amounts of gas hydrates lying in the pore space of sediments. However, the loss mechanism associated with the presence of the hydrates is still not fully understood, and most of the works dealing with this problem focus on macroscopic fluid flow, friction between hydrates and the sediment matrix, and squirt flow. It is well known that an important cause of the attenuation levels observed in seismic data from some sedimentary regions is the mesoscopic loss mechanism, caused by heterogeneities in the rock and fluid properties larger than the pore size but much smaller than the wavelengths. In order to analyze this effect in heterogeneous gas-hydrate bearing sediments, we developed a finite-element procedure to obtain the effective complex modulus of a heterogeneous porous material containing gas hydrates in its pore space, using compressibility tests at different oscillatory frequencies in the seismic range. The complex moduli were obtained by solving Biot's equations of motion in the space-frequency domain with appropriate boundary conditions representing a gedanken laboratory experiment measuring the complex volume change of a representative sample of heterogeneous bulk material. These complex moduli in turn allowed us to obtain the corresponding effective phase velocity and quality factor for each frequency and spatial gas hydrate distribution. Physical parameters taken from the Mallik 5L-38 Gas Hydrate Research well (Mackenzie Delta, Canada) were used to analyze the mesoscopic effects in realistic hydrated sediments.
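
    The last step described above, going from an effective complex modulus to phase velocity and quality factor, uses standard viscoelastic relations: with complex velocity v_c = sqrt(M/rho), one takes v_phase = 1/Re(1/v_c) and Q = Re(v_c^2)/Im(v_c^2). A hedged sketch (the modulus value and density below are illustrative, not taken from the Mallik data):

```python
import numpy as np

# Illustrative effective complex plane-wave modulus and bulk density
rho = 2000.0                   # bulk density (kg/m^3), assumed value
M = (9.0 + 0.3j) * 1e9         # effective complex modulus (Pa), assumed value

v_c = np.sqrt(M / rho)                     # complex velocity
v_phase = 1.0 / np.real(1.0 / v_c)         # phase velocity (m/s)
Q = np.real(v_c ** 2) / np.imag(v_c ** 2)  # quality factor
```

Repeating this for each frequency and hydrate distribution yields the dispersion and attenuation curves discussed in the abstract.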

  8. Catchment Tomography - Joint Estimation of Surface Roughness and Hydraulic Conductivity with the EnKF

    NASA Astrophysics Data System (ADS)

    Baatz, D.; Kurtz, W.; Hendricks Franssen, H. J.; Vereecken, H.; Kollet, S. J.

    2017-12-01

    Parameter estimation for physically based, distributed hydrological models becomes increasingly challenging with increasing model complexity. The number of parameters is usually large and the number of observations relatively small, which results in large uncertainties. Catchment tomography offers a moving transmitter-receiver concept for estimating spatially distributed hydrological parameters. In this concept, precipitation, highly variable in time and space, serves as a moving transmitter. In response to precipitation, runoff and stream discharge are generated along different paths and time scales, depending on surface and subsurface flow properties. Stream water levels are thus an integrated signal of upstream parameters, measured by stream gauges which serve as the receivers. These stream water level observations are assimilated into a distributed hydrological model, which is forced with high-resolution, radar-based precipitation estimates. Applying a joint state-parameter update with the Ensemble Kalman Filter, the spatially distributed Manning's roughness coefficient and saturated hydraulic conductivity are estimated jointly. The sequential data assimilation continuously integrates new information into the parameter estimation problem, especially during precipitation events; every precipitation event constrains the possible parameter space. In this approach, forward simulations are performed with ParFlow, a variably saturated subsurface and overland flow model. ParFlow is coupled to the Parallel Data Assimilation Framework for the data assimilation and the joint state-parameter update. In synthetic, 3-dimensional experiments including surface and subsurface flow, hydraulic conductivity and the Manning's coefficient are efficiently estimated with the catchment tomography approach.
A joint update of the Manning's coefficient and hydraulic conductivity tends to improve the parameter estimation compared to a single-parameter update, especially in cases of biased initial parameter ensembles. The computational experiments additionally show up to which degree of spatial heterogeneity, and of uncertainty in the subsurface flow parameters, the Manning's coefficient and hydraulic conductivity can be estimated efficiently.
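
    The joint state-parameter update with the Ensemble Kalman Filter can be illustrated with a toy scalar problem: each ensemble member carries a parameter (here a log-conductivity) and a predicted observation (a water level), and assimilating the observation updates the parameter through the ensemble cross-covariance. The forward model, noise levels, and numbers below are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, obs_err = 200, 0.05
true_logk = 0.5

def forward(logk):
    """Hypothetical forward model mapping log-conductivity to stream water level."""
    return 1.0 - 0.4 * logk

# Prior parameter ensemble and corresponding predicted observations
logk = rng.normal(0.0, 0.5, n_ens)
h = forward(logk) + rng.normal(0.0, obs_err, n_ens)

# EnKF update: Kalman gain from ensemble statistics,
#   K = cov(parameter, predicted obs) / (var(predicted obs) + R)
y_obs = forward(true_logk)                     # "measured" water level
cov = np.cov(logk, h)
gain = cov[0, 1] / (cov[1, 1] + obs_err ** 2)
perturbed = y_obs + rng.normal(0.0, obs_err, n_ens)   # perturbed observations
logk_post = logk + gain * (perturbed - h)
```

Each assimilated precipitation event repeats this update, progressively constraining the parameter space.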

  9. Spontaneous PT-Symmetry Breaking for Systems of Noncommutative Euclidean Lie Algebraic Type

    NASA Astrophysics Data System (ADS)

    Dey, Sanjib; Fring, Andreas; Mathanaranjan, Thilagarajah

    2015-11-01

    We propose a noncommutative version of the Euclidean Lie algebra E2. Several types of non-Hermitian Hamiltonian systems expressed in terms of generic combinations of the generators of this algebra are investigated. Using the breakdown of the explicitly constructed Dyson maps as a criterion, we identify the domains in the parameter space in which the Hamiltonians have real energy spectra and determine the exceptional points signifying the crossover into the different types of spontaneously broken PT-symmetric regions with pairs of complex conjugate eigenvalues. We find exceptional points which remain invariant under the deformation as well as exceptional points becoming dependent on the deformation parameter of the algebra.

  10. Load Balancing in Multi Cloud Computing Environment with Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Vhansure, Fularani; Deshmukh, Apurva; Sumathy, S.

    2017-11-01

    Cloud is a pool of resources that is available on a pay-per-use model. It provides services to users, whose number is increasing rapidly. Load balancing is an issue because no single resource can handle so many requests at a time; the problem is also known to be NP-complete. In traditional systems, the objective functions consist of various parameter values to be maximised in order to achieve the best optimal individual solutions. One challenge is when there are many parameters of solutions in the system space. Another challenge is to optimize a function which is much more complex. In this paper, various techniques to handle load balancing virtually (VMs) as well as physically (nodes) using a genetic algorithm are discussed.
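
    A minimal genetic-algorithm sketch of the load-balancing idea (not the paper's algorithm; the request loads, encoding, and GA settings are invented): a chromosome maps each request to a VM, and fitness is the makespan (maximum per-VM load) to be minimized.

```python
import random

def makespan(assign, loads, n_vm):
    """Maximum total load on any VM under the given assignment."""
    per_vm = [0.0] * n_vm
    for req, vm in zip(loads, assign):
        per_vm[vm] += req
    return max(per_vm)

def ga_balance(loads, n_vm, pop=40, gens=200, seed=0):
    rng = random.Random(seed)
    n = len(loads)
    popn = [[rng.randrange(n_vm) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda a: makespan(a, loads, n_vm))
        elite = popn[: pop // 2]                  # selection: keep best half
        children = []
        while len(elite) + len(children) < pop:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child[rng.randrange(n)] = rng.randrange(n_vm)  # mutation
            children.append(child)
        popn = elite + children
    best = min(popn, key=lambda a: makespan(a, loads, n_vm))
    return best, makespan(best, loads, n_vm)

best, span = ga_balance([5, 3, 8, 2, 7, 4, 6, 1], n_vm=3)
```

For these loads the total is 36, so a perfectly balanced assignment over 3 VMs has makespan 12.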

  11. Theoretical study of production of unique glasses in space

    NASA Technical Reports Server (NTRS)

    Larsen, D. C.

    1974-01-01

    Analytical functional relationships describing homogeneous nucleation and crystallization in various supercooled liquids were developed. The time- and temperature-dependent relationships of nucleation and crystallization (intrinsic properties) are being used to relate glass forming tendency to extrinsic parameters such as cooling rate through computer simulation. Single oxide systems are being studied initially to aid in developing workable kinetic models and to indicate the primary materials parameters affecting glass formation. The theory and analytical expressions developed for simple systems are then extended to complex oxide systems. A thorough understanding of nucleation and crystallization kinetics of glass forming systems provides a priori knowledge of the ability of a given system to form a glass.

  12. Convergent Cross Mapping: Basic concept, influence of estimation parameters and practical application.

    PubMed

    Schiecke, Karin; Pester, Britta; Feucht, Martha; Leistritz, Lutz; Witte, Herbert

    2015-01-01

    In neuroscience, data are typically generated from neural network activity. Complex interactions between measured time series are involved, and nothing or only little is known about the underlying dynamic system. Convergent Cross Mapping (CCM) provides the possibility to investigate nonlinear causal interactions between time series by using nonlinear state space reconstruction. The aim of this study is to investigate the general applicability of CCM, and to show its potentials and limitations. The influence of estimation parameters is demonstrated by means of simulated data, whereas an interval-based application of CCM to real data is adapted for the investigation of interactions between heart rate and specific EEG components of children with temporal lobe epilepsy.
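
    The core cross-mapping step of CCM can be sketched compactly: reconstruct the shadow manifold of series X by time-delay embedding, predict Y from X's nearest neighbors, and take the correlation between predictions and truth as cross-map skill. This is a simplified illustration of the published algorithm; the coupled logistic maps and all parameter values below are invented for the demo:

```python
import numpy as np

def embed(x, E=3, tau=1):
    """Time-delay embedding of a 1-D series into E-dimensional state vectors."""
    n = len(x) - (E - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(E)])

def ccm_skill(x, y, E=3, tau=1):
    """Cross-map skill: how well X's shadow manifold predicts Y."""
    Mx = embed(x, E, tau)
    y = y[(E - 1) * tau :]
    preds = np.empty(len(Mx))
    for i, p in enumerate(Mx):
        d = np.linalg.norm(Mx - p, axis=1)
        d[i] = np.inf                        # exclude the point itself
        nn = np.argsort(d)[: E + 1]          # E+1 nearest neighbors (simplex)
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = np.sum(w * y[nn]) / w.sum()
    return np.corrcoef(preds, y)[0, 1]

# Coupled logistic maps: y drives x, so x's manifold should map y well.
n = 400
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    y[t + 1] = y[t] * (3.8 - 3.8 * y[t])
    x[t + 1] = x[t] * (3.7 - 3.7 * x[t] - 0.1 * y[t])
skill = ccm_skill(x, y)
```

In full CCM the convergence of this skill with increasing library length, not its raw value, is the causality indicator.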

  13. A new Bayesian Earthquake Analysis Tool (BEAT)

    NASA Astrophysics Data System (ADS)

    Vasyura-Bathke, Hannes; Dutta, Rishabh; Jónsson, Sigurjón; Mai, Martin

    2017-04-01

    Modern earthquake source estimation studies increasingly use non-linear optimization strategies to estimate kinematic rupture parameters, often considering geodetic and seismic data jointly. However, the optimization process is complex and consists of several steps that need to be followed in the earthquake parameter estimation procedure. These include pre-describing or modeling the fault geometry, calculating the Green's Functions (often assuming a layered elastic half-space), and estimating the distributed final slip and possibly other kinematic source parameters. Recently, Bayesian inference has become popular for estimating posterior distributions of earthquake source model parameters given measured/estimated/assumed data and model uncertainties. For instance, some research groups consider uncertainties of the layered medium and propagate these to the source parameter uncertainties. Other groups make use of informative priors to reduce the model parameter space. In addition, innovative sampling algorithms have been developed that efficiently explore the often high-dimensional parameter spaces. Compared to earlier studies, these improvements have resulted in overall more robust source model parameter estimates that include uncertainties. However, the computational demands of these methods are high and estimation codes are rarely distributed along with the published results. Even if codes are made available, it is often difficult to assemble them into a single optimization framework as they are typically coded in different programming languages. Therefore, further progress and future applications of these methods/codes are hampered, while reproducibility and validation of results have become essentially impossible.
In the spirit of providing open-access and modular codes to facilitate progress and reproducible research in earthquake source estimations, we undertook the effort of producing BEAT, a python package that comprises all the above-mentioned features in one single programming environment. The package is built on top of the pyrocko seismological toolbox (www.pyrocko.org) and makes use of the pymc3 module for Bayesian statistical model fitting. BEAT is an open-source package (https://github.com/hvasbath/beat) and we encourage and solicit contributions to the project. In this contribution, we present our strategy for developing BEAT, show application examples, and discuss future developments.

  14. Manufacture of multi-layer woven preforms

    NASA Technical Reports Server (NTRS)

    Mohamed, M. H.; Zhang, Z.; Dickinson, L.

    1988-01-01

    This paper reviews current three-dimensional weaving processes and discusses a process developed at the Mars Mission Research Center of North Carolina State University to weave three-dimensional multilayer fabrics. The fabrics may vary in size and complexity from simple panels to T-section or I-section beams to large stiffened panels. Parameters such as fiber orientation, volume fraction of the fiber required in each direction, yarn spacings or density, etc., which determine the physical properties of the composites are discussed.

  15. Friction Stir Welding at MSFC: Kinematics

    NASA Technical Reports Server (NTRS)

    Nunes, A. C., Jr.

    2001-01-01

    In 1991 The Welding Institute of the United Kingdom patented the Friction Stir Welding (FSW) process. In FSW a rotating pin-tool is inserted into a weld seam and literally stirs the faying surfaces together as it moves up the seam. By April 2000 the American Welding Society International Welding and Fabricating Exposition featured several exhibits of commercial FSW processes and the 81st Annual Convention devoted a technical session to the process. The FSW process is of interest to Marshall Space Flight Center (MSFC) as a means of avoiding hot-cracking problems presented by the 2195 aluminum-lithium alloy, which is the primary constituent of the Lightweight Space Shuttle External Tank. The process has been under development at MSFC for External Tank applications since the early 1990's. Early development of the FSW process proceeded by cut-and-try empirical methods. A substantial and complex body of data resulted. A theoretical model was needed to deal with the complexity and reduce the data to concepts serviceable for process diagnostics, optimization, parameter selection, etc. A first step in understanding the FSW process is to determine the kinematics, i.e., the flow field in the metal in the vicinity of the pin-tool. Given the kinematics, the dynamics, i.e., the forces, can be targeted. Given a completed model of the FSW process, attempts at rational design of tools and selection of process parameters can be made.

  16. PlasmaLab/Eco-Plasma - The future of complex plasma research in space

    NASA Astrophysics Data System (ADS)

    Knapek, Christina; Thomas, Hubertus; Huber, Peter; Mohr, Daniel; Hagl, Tanja; Konopka, Uwe; Lipaev, Andrey; Morfill, Gregor; Molotkov, Vladimir

    The next Russian-German cooperation for the investigation of complex plasmas under microgravity conditions on the International Space Station (ISS) is the PlasmaLab/Eco-Plasma project. Here, a new plasma chamber -- the ``Zyflex'' chamber -- is being developed. The chamber is a cylindrical plasma chamber with parallel electrodes and a flexible system geometry. It is designed to extend the accessible plasma parameter range, i.e. neutral gas pressure, plasma density and electron temperature, and also to allow an independent control of the plasma parameters, therefore increasing the experimental quality and expected knowledge gain significantly. With this system it will be possible to reach low neutral gas pressures (which means weak damping of the particle motion) and to generate large, homogeneous 3D particle systems for studies of fundamental phenomena such as phase transitions, dynamics of liquids or phase separation. The Zyflex chamber has already been operated in several parabolic flight campaigns with different configurations during the last years, yielding a promising outlook for its future development. Here, we will present the current status of the project, the technological advancements the Zyflex chamber will offer compared to its predecessors, and the latest scientific results from experiments on ground and in microgravity conditions during parabolic flights. This work and some of the authors are funded by DLR/BMWi (FKZ 50 WP 0700).

  17. Proximity Operations for Space Situational Awareness Spacecraft Rendezvous and Maneuvering using Numerical Simulations and Fuzzy Logic

    NASA Astrophysics Data System (ADS)

    Carrico, T.; Langster, T.; Carrico, J.; Alfano, S.; Loucks, M.; Vallado, D.

    The authors present several spacecraft rendezvous and close proximity maneuvering techniques modeled with a high-precision numerical integrator using full force models and closed loop control with a Fuzzy Logic intelligent controller to command the engines. The authors document and compare the maneuvers, fuel use, and other parameters. This paper presents an innovative application of an existing capability to design, simulate and analyze proximity maneuvers; already in use for operational satellites performing other maneuvers. The system has been extended to demonstrate the capability to develop closed loop control laws to maneuver spacecraft in close proximity to another, including stand-off, docking, lunar landing and other operations applicable to space situational awareness, space based surveillance, and operational satellite modeling. The fully integrated end-to-end trajectory ephemerides are available from the authors in electronic ASCII text by request. The benefits of this system include: a realistic physics-based simulation for the development and validation of control laws; a collaborative engineering environment for the design, development and tuning of spacecraft control law parameters, sizing of actuators (i.e., rocket engines), and sensor suite selection; an accurate simulation and visualization to communicate the complexity, criticality, and risk of spacecraft operations; a precise mathematical environment for research and development of future spacecraft maneuvering engineering tasks, operational planning and forensic analysis; and a closed-loop, knowledge-based control example for proximity operations. This proximity operations modeling and simulation environment will provide a valuable adjunct to programs in military space control, space situational awareness and civil space exploration engineering and decision making processes.

  18. High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Growth Fundamentals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winkler, Robert; Lewis, Brett B.; Fowlkes, Jason Davidson

    While 3D-printing is currently experiencing significant growth and having a significant impact on science and technology, the expansion into the nanoworld is still a highly challenging task. Among the increasing number of approaches, focused electron-beam-induced deposition (FEBID) was recently demonstrated to be a viable candidate toward a generic direct-write fabrication technology with spatial nanometer accuracy for complex shaped 3D-nanoarchitectures. In this comprehensive study, we explore the parameter space for 3D-FEBID and investigate the implications of individual and interdependent parameters on freestanding nanosegments, which act as a fundamental building block for complex 3D-structures. In particular, the study provides new basic insights such as precursor transport limitations and angle dependent growth rates, both essential for high-fidelity fabrication. In conclusion, complemented by practical aspects, we provide both basic insights in 3D-growth dynamics and technical guidance for specific process adaptation to enable predictable and reliable direct-write synthesis of freestanding 3D-nanoarchitectures.

  19. High-Fidelity 3D-Nanoprinting via Focused Electron Beams: Growth Fundamentals

    DOE PAGES

    Winkler, Robert; Lewis, Brett B.; Fowlkes, Jason Davidson; ...

    2018-02-14

    While 3D-printing is currently experiencing significant growth and having a significant impact on science and technology, the expansion into the nanoworld is still a highly challenging task. Among the increasing number of approaches, focused electron-beam-induced deposition (FEBID) was recently demonstrated to be a viable candidate toward a generic direct-write fabrication technology with spatial nanometer accuracy for complex shaped 3D-nanoarchitectures. In this comprehensive study, we explore the parameter space for 3D-FEBID and investigate the implications of individual and interdependent parameters on freestanding nanosegments, which act as a fundamental building block for complex 3D-structures. In particular, the study provides new basic insights such as precursor transport limitations and angle dependent growth rates, both essential for high-fidelity fabrication. In conclusion, complemented by practical aspects, we provide both basic insights in 3D-growth dynamics and technical guidance for specific process adaptation to enable predictable and reliable direct-write synthesis of freestanding 3D-nanoarchitectures.

  20. Tracking transcription factor mobility and interaction in Arabidopsis roots with fluorescence correlation spectroscopy

    PubMed Central

    Clark, Natalie M; Hinde, Elizabeth; Winter, Cara M; Fisher, Adam P; Crosti, Giuseppe; Blilou, Ikram; Gratton, Enrico; Benfey, Philip N; Sozzani, Rosangela

    2016-01-01

    To understand complex regulatory processes in multicellular organisms, it is critical to be able to quantitatively analyze protein movement and protein-protein interactions in time and space. During Arabidopsis development, the intercellular movement of SHORTROOT (SHR) and subsequent interaction with its downstream target SCARECROW (SCR) control root patterning and cell fate specification. However, quantitative information about the spatio-temporal dynamics of SHR movement and SHR-SCR interaction is currently unavailable. Here, we quantify parameters including SHR mobility, oligomeric state, and association with SCR using a combination of Fluorescence Correlation Spectroscopy (FCS) techniques. We then incorporate these parameters into a mathematical model of SHR and SCR, which shows that SHR reaches a steady state in minutes, while SCR and the SHR-SCR complex reach a steady state between 18 and 24 hr. Our model reveals the timing of SHR and SCR dynamics and allows us to understand how protein movement and protein-protein stoichiometry contribute to development. DOI: http://dx.doi.org/10.7554/eLife.14770.001 PMID:27288545

  1. Dynamical complexity in a mean-field model of human EEG

    NASA Astrophysics Data System (ADS)

    Frascoli, Federico; Dafilis, Mathew P.; van Veen, Lennaert; Bojak, Ingo; Liley, David T. J.

    2008-12-01

    A recently proposed mean-field theory of mammalian cortex rhythmogenesis describes the salient features of electrical activity in the cerebral macrocolumn, with the use of inhibitory and excitatory neuronal populations (Liley et al 2002). This model is capable of producing a range of important human EEG (electroencephalogram) features such as the alpha rhythm, the 40 Hz activity thought to be associated with conscious awareness (Bojak & Liley 2007) and the changes in EEG spectral power associated with general anesthetic effect (Bojak & Liley 2005). From the point of view of nonlinear dynamics, the model entails a vast parameter space within which multistability, pseudoperiodic regimes, various routes to chaos, fat fractals and rich bifurcation scenarios occur for physiologically relevant parameter values (van Veen & Liley 2006). The origin and the character of this complex behaviour, and its relevance for EEG activity will be illustrated. The existence of short-lived unstable brain states will also be discussed in terms of the available theoretical and experimental results. A perspective on future analysis will conclude the presentation.

  2. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using a block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
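
    For intuition about the decorrelating wavelet step discussed above, here is a one-level 2-D Haar transform sketch (the LWT itself uses a block-based schedule to bound memory and latency; this global version is only for illustration):

```python
import numpy as np

def haar2d_level(img):
    """One decomposition level of a 2-D Haar transform (average/difference pairs)."""
    a = img.astype(float)
    # Rows: low-pass (averages) on the left, high-pass (differences) on the right
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    row = np.hstack([lo, hi])
    # Columns: low-pass on top, high-pass on the bottom
    lo = (row[0::2, :] + row[1::2, :]) / 2.0
    hi = (row[0::2, :] - row[1::2, :]) / 2.0
    return np.vstack([lo, hi])

# An image whose rows are identical: all vertical detail coefficients vanish,
# illustrating how the transform concentrates (decorrelates) the energy.
img = np.tile(np.arange(8), (8, 1))
coeffs = haar2d_level(img)
```

Entropy coding then exploits the many near-zero detail coefficients that smooth images produce.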

  3. On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Gamal M.

    Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer sciences, for example, laser systems, control (or chaos suppression), secure communications, and information science. Basic dynamical properties, chaos (hyperchaos) synchronization, chaos control, and generating hyperchaotic behavior of these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned laser and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex, the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are stated as examples. Finally, there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems, which need further investigations. Some of these open problems are given.

  4. The magnetotelluric response over 2D media with resistivity frequency dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauriello, P.; Patella, D.; Siniscalchi, A.

    1996-09-01

    The authors investigate the magnetotelluric response of two-dimensional bodies, characterized by the presence of low-frequency dispersion phenomena of the electrical parameters. The Cole-Cole dispersion model is assumed to represent the frequency dependence of the impedivity complex function, defined as the inverse of Stoyer's admittivity complex parameter. To simulate real geological situations, they consider three structural models, representing a sedimentary basin, a geothermal system and a magma chamber, assumed to be partially or totally dispersive. From a detailed study of the frequency and space behaviors of the magnetotelluric parameters, taking known non-dispersive results as reference, they outline the main peculiarities of the local distortion effects, caused by the presence of dispersion in the target media. Finally, they discuss the interpretive errors which can be made by neglecting the dispersion phenomena. The apparent dispersion function, which was defined in a previous paper to describe similar effects in the one-dimensional case, is again used as a reliable indicator of location, shape and spatial extent of the dispersive bodies. The general result of this study is a marked improvement in the resolution power of the magnetotelluric method.
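
    The Cole-Cole dispersion model referenced above is commonly written in the Pelton form rho(omega) = rho0 * [1 - m * (1 - 1/(1 + (i*omega*tau)^c))], with chargeability m, relaxation time tau, and frequency exponent c. A hedged sketch (the form is the standard one; it is assumed here to apply to the paper's impedivity function, and the parameter values are illustrative):

```python
import numpy as np

def cole_cole(omega, rho0=100.0, m=0.3, tau=0.1, c=0.5):
    """Pelton-style Cole-Cole complex resistivity (illustrative parameters)."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

# Frequency sweep: the real part descends from rho0 at DC
# to rho0 * (1 - m) at high frequency.
omega = np.logspace(-4, 4, 9)
rho = cole_cole(omega)
```

The dispersive 2-D bodies in the study carry such a frequency-dependent resistivity instead of a real constant.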

  5. Interactive design optimization of magnetorheological-brake actuators using the Taguchi method

    NASA Astrophysics Data System (ADS)

    Erol, Ozan; Gurocak, Hakan

    2011-10-01

    This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied for re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.

  6. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by 'self-organized criticality,' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire 'gene pool' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called 'avalanches,' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
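
    A minimal sketch of tau-EO on a toy problem (a one-dimensional spin chain with random couplings); the problem, couplings, and parameter values are illustrative assumptions, not from the paper. A spin is selected by its fitness rank through a power law controlled by the single parameter tau, then flipped unconditionally, so there is no rejection step and no temperature schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
J = rng.choice([-1, 1], size=n - 1)   # random bond couplings on a 1-D chain
s = rng.choice([-1, 1], size=n)       # random initial spin configuration
tau = 1.4                             # the single adjustable parameter of tau-EO

def energy(s):
    return -np.sum(J * s[:-1] * s[1:])

def local_fitness(s):
    # Each spin's fitness: sum of J_i * s_i * s_(i+1) over the bonds it
    # touches; spins sitting on unsatisfied bonds get low fitness.
    bond = J * s[:-1] * s[1:]
    fit = np.zeros(n)
    fit[:-1] += bond
    fit[1:] += bond
    return fit

# Rank-based selection: the k-th worst spin is flipped with P(k) ~ k^(-tau).
p = np.arange(1, n + 1) ** -tau
p /= p.sum()

e0 = energy(s)
best_e = e0
for _ in range(2000):
    worst_first = np.argsort(local_fitness(s))
    s[worst_first[rng.choice(n, p=p)]] *= -1  # unconditional flip, no rejection
    best_e = min(best_e, energy(s))
```

    The unconditional flips produce the large 'avalanche' fluctuations of the abstract: the running configuration keeps moving, while the best configuration seen so far is recorded separately.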

  7. Quantitative phase and amplitude imaging using Differential-Interference Contrast (DIC) microscopy

    NASA Astrophysics Data System (ADS)

    Preza, Chrysanthe; O'Sullivan, Joseph A.

    2009-02-01

    We present an extension of the development of an alternating minimization (AM) method for the computation of a specimen's complex transmittance function (magnitude and phase) from DIC images. The ability to extract both quantitative phase and amplitude information from two rotationally-diverse DIC images (i.e., acquired by rotating the sample) extends previous efforts in computational DIC microscopy that have focused on quantitative phase imaging only. Simulation results show that the inverse problem at hand is sensitive to noise as well as to the choice of the AM algorithm parameters. The AM framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a principled manner. Towards this end, Green and De Pierro's "log-cosh" regularization penalty is applied to the magnitude of differences of neighboring values of the complex-valued function of the specimen during the AM iterations. The penalty is shown to be convex in the complex space. A procedure to approximate the penalty within the iterations is presented. In addition, a methodology to pre-compute AM parameters that are optimal with respect to the convergence rate of the AM algorithm is also presented. Both extensions of the AM method are investigated with simulations.
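
    The log-cosh penalty on magnitudes of differences of neighboring complex values can be sketched as follows; the scale parameter delta and the toy array are assumptions for illustration. Since log cosh(x) behaves like x^2/2 near zero and like |x| for large x, it is a smooth convex penalty, consistent with the convexity noted in the abstract.

```python
import numpy as np

def log_cosh_penalty(f, delta=1.0):
    # Penalty on magnitudes of differences of neighboring complex values;
    # delta is a hypothetical scale parameter, not taken from the paper.
    d = np.abs(np.diff(f))
    # log(cosh(x)) = logaddexp(x, -x) - log(2), numerically stable for large x.
    return float(np.sum(np.logaddexp(delta * d, -delta * d) - np.log(2.0)))

f = np.array([1 + 1j, 1 + 1j, 2 + 1j])  # neighboring differences: 0 and 1
penalty = log_cosh_penalty(f)
```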

  8. Toward Rigorous Parameterization of Underconstrained Neural Network Models Through Interactive Visualization and Steering of Connectivity Generation

    PubMed Central

    Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander

    2018-01-01

    Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces, foster a better understanding of neural network models, and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turnaround times for the assessment of these models, due to interactive visualization while the simulation is being computed. PMID:29937723

  9. Study of Material Densification of In718 in the Higher Throughput Parameter Regime

    NASA Technical Reports Server (NTRS)

    Cordner, Samuel

    2016-01-01

    Selective Laser Melting (SLM) is a powder bed fusion additive manufacturing process used increasingly in the aerospace industry to reduce the cost, weight, and fabrication time for complex propulsion components. Previous optimization studies for SLM using the Concept Laser M1 and M2 machines at NASA Marshall Space Flight Center have centered on machine default parameters. The objective of this project is to characterize how heat treatment affects density and porosity from a microscopic point of view. This is performed using higher-throughput parameters (a previously unexplored region of the manufacturing operating envelope for this application) to assess their effect on material consolidation. Density blocks were analyzed to explore the relationship between build parameters (laser power, scan speed, and hatch spacing) and material consolidation (assessed in terms of density and porosity). The study also considers the impact of post-processing, specifically hot isostatic pressing and heat treatment, as well as deposition pattern on material consolidation in the higher energy parameter regime. Metallurgical evaluation of specimens will also be presented. This work will contribute to creating a knowledge base (understanding material behavior in all ranges of the AM equipment operating envelope) that is critical to transitioning AM from the custom low rate production sphere it currently occupies to the world of mass high rate production, where parts are fabricated at a rapid rate with confidence that they will meet or exceed all stringent functional requirements for spaceflight hardware. These studies will also provide important data on the sensitivity of material consolidation to process parameters that will inform the design and development of future flight articles using SLM.

  10. Testing directed evolution strategies for space exploration: genetic modification of photosystem II to increase stress tolerance under space conditions

    NASA Astrophysics Data System (ADS)

    Bertalan, I.; Giardi, M. T.; Johanningmeier, U.

    Plants and many microorganisms are able to convert and store solar energy in chemical bonds by a process called photosynthesis. They remove CO2 from the atmosphere, fix it as carbohydrate, and simultaneously evolve oxygen. Oxygen evolution is of supreme relevance for all higher life forms and results from the splitting of water molecules. This process is catalyzed by the so-called photosystem II (PSII) complex and represents the very beginning of biomass production. PSII is also a central point of regulation, being responsive to various physical and physiological parameters. Complex space radiation damages PSII and reduces photosynthetic efficiency; thus, bioregenerative life-support systems are severely disturbed at this point. Genetic manipulation of photosynthesis checkpoints offers the possibility to adjust biomass and oxygen production to changing environmental conditions. As the photosynthetic apparatus has adapted to terrestrial and not to space conditions, we are trying to adapt a central and particularly stress-susceptible element of the photosynthesis apparatus, the D1 subunit of PSII, to space radiation by a strategy of directed evolution. The D1 subunit together with its sister subunit D2 forms the reaction centre of PSII. D1 presents a central weak point for radiation energy that hits the chloroplast. We have constructed a mutant of the green alga Chlamydomonas reinhardtii with a defective D1 protein. This mutant is easily transformable with D1-encoding PCR fragments without purification and cloning steps [1]. When

  11. Learning Parsimonious Classification Rules from Gene Expression Data Using Bayesian Networks with Local Structure.

    PubMed

    Lustgarten, Jonathan Lyle; Balasubramanian, Jeya Balaji; Visweswaran, Shyam; Gopalakrishnan, Vanathi

    2017-03-01

    The comprehensibility of good predictive models learned from high-dimensional gene expression data is attractive because it can lead to biomarker discovery. Several good classifiers provide comparable predictive performance but differ in their abilities to summarize the observed data. We extend a Bayesian Rule Learning (BRL-GSS) algorithm, previously shown to be a significantly better predictor than other classical approaches in this domain. It searches a space of Bayesian networks using a decision tree representation of its parameters with global constraints, and infers a set of IF-THEN rules. The number of parameters, and therefore the number of rules, grows combinatorially with the number of predictor variables in the model. We relax these global constraints to a more generalizable local structure (BRL-LSS). BRL-LSS entails a more parsimonious set of rules because it does not have to generate all combinatorial rules. The search space of local structures is much richer than the space of global structures. We design the BRL-LSS with the same worst-case time-complexity as BRL-GSS while exploring a richer and more complex model space. We measure predictive performance using Area Under the ROC curve (AUC) and Accuracy. We measure model parsimony performance by noting the average number of rules and variables needed to describe the observed data. We evaluate the predictive and parsimony performance of BRL-GSS, BRL-LSS and the state-of-the-art C4.5 decision tree algorithm, across 10-fold cross-validation using ten microarray gene-expression diagnostic datasets. In these experiments, we observe that BRL-LSS is similar to BRL-GSS in terms of predictive performance, while generating a much more parsimonious set of rules to explain the same observed data. BRL-LSS also needs fewer variables than C4.5 to explain the data with similar predictive performance. We also conduct a feasibility study to demonstrate the general applicability of our BRL methods on the newer RNA sequencing gene-expression data.

  12. Discovering Planetary Nebula Geometries: Explorations with a Hierarchy of Models

    NASA Technical Reports Server (NTRS)

    Huyser, Karen A.; Knuth, Kevin H.; Fischer, Bernd; Schumann, Johann; Granquist-Fraser, Domhnull; Hajian, Arsen R.

    2004-01-01

    Astronomical objects known as planetary nebulae (PNe) consist of a shell of gas expelled by an aging medium-sized star as it makes its transition from a red giant to a white dwarf. In many cases this gas shell can be approximately described as a prolate ellipsoid. Knowledge of the physics of ionization processes in this gaseous shell enables us to construct a model in three dimensions (3D) called the Ionization-Bounded Prolate Ellipsoidal Shell model (IBPES model). Using this model we can generate synthetic nebular images, which can be used in conjunction with Hubble Space Telescope (HST) images of actual PNe to perform Bayesian model estimation. Since the IBPES model is characterized by thirteen parameters, model estimation requires the search of a 13-dimensional parameter space. The 'curse of dimensionality,' compounded by a computationally intense forward problem, makes forward searches extremely time-consuming and frequently causes them to become trapped in local solutions. We find that both the speed and the quality of the search can be improved by judiciously reducing the dimensionality of the search space. Our basic approach employs a hierarchy of models of increasing complexity that converges to the IBPES model. Earlier studies establish that a hierarchical sequence converges more quickly, and to a better solution, than a search relying only on the most complex model. Here we report results for a hierarchy of five models. The first three models treat the nebula as a 2D image, while the last two models explore its characteristics as a 3D object and enable us to characterize the physics of the nebula. This five-model hierarchy is applied to HST images of ellipsoidal PNe to estimate their geometric properties and gas density profiles.

  13. Machine learning phases of matter

    NASA Astrophysics Data System (ADS)

    Carrasquilla, Juan; Melko, Roger G.

    2017-02-01

    Condensed-matter physics is the study of the collective behaviour of infinitely complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits. This complexity is reflected in the size of the state space, which grows exponentially with the number of particles, reminiscent of the 'curse of dimensionality' commonly encountered in machine learning. Despite this curse, the machine learning community has developed techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. Here, we show that modern machine learning architectures, such as fully connected and convolutional neural networks, can identify phases and phase transitions in a variety of condensed-matter Hamiltonians. Readily programmable through modern software libraries, neural networks can be trained to detect multiple types of order parameter, as well as highly non-trivial states with no conventional order, directly from raw state configurations sampled with Monte Carlo.
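
    A deliberately simplified sketch of the classification task: instead of a neural network on raw configurations, we threshold the conventional order parameter (the absolute magnetization) to separate toy ordered and disordered spin samples. All sampling details below are illustrative assumptions; the point of the paper is that networks can learn such discriminating features directly from the raw states.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_configs(n, p_align):
    # Toy Ising-like states: each of 100 spins agrees with a random global
    # orientation (+1 or -1) with probability p_align.
    m = rng.choice([-1, 1], size=(n, 1))
    agree = rng.random((n, 100)) < p_align
    return np.where(agree, m, -m)

ordered = sample_configs(500, 0.9)      # low-temperature-like samples
disordered = sample_configs(500, 0.5)   # high-temperature-like samples
X = np.vstack([ordered, disordered])
y = np.r_[np.ones(500), np.zeros(500)]

# A linear model on raw spins cannot separate the two +-m ordered branches,
# so classify on |magnetization|, the conventional order parameter.
feat = np.abs(X.mean(axis=1))
threshold = 0.5 * (feat[y == 1].mean() + feat[y == 0].mean())
accuracy = ((feat > threshold) == (y == 1)).mean()
```

    The need for the absolute value here illustrates why a network with a hidden layer, rather than a purely linear classifier on spins, is the natural tool for raw configurations.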

  14. Elisa technology consolidation study overview

    NASA Astrophysics Data System (ADS)

    Fitzsimons, E. D.; Brandt, N.; Johann, U.; Kemble, S.; Schulte, H.-R.; Weise, D.; Ziegler, T.

    2017-11-01

    The eLISA (evolved Laser Interferometer Space Antenna) mission is an ESA L3 concept mission intended to detect and characterise gravitational radiation emitted from astrophysical sources [1]. Current designs for eLISA [2] are based on the ESA study conducted in 2011 to reformulate the original ESA/NASA LISA concept [3] into an ESA-only L1 candidate named NGO (New Gravitational Observatory) [4]. During this brief reformulation period, a number of significant changes were made to the baseline LISA design in order to create a more cost-effective mission. Some of the key changes implemented during this reformulation were:
    • A reduction in the inter-satellite distance (the arm length) from 5 Gm to 1 Gm.
    • A reduction in the diameter of the telescope from 40 cm to 20 cm.
    • A reduction in the required laser power by approximately 40%.
    • Implementation of only 2 laser arms instead of 3.
    Many further simplifications were then enabled by these main design changes, including the elimination of payload items in the two spacecraft (S/C) with no laser-link between them (the daughter S/C), a reduction in the size and complexity of the optical bench, and the elimination of the Point Ahead Angle Mechanism (PAAM), which corrects for variations in the pointing direction to the far S/C caused by orbital dynamics [4] [5]. In the run-up to an L3 mission definition phase later in the decade, it is desirable to review these design choices and analyse the inter-dependencies and scaling between the key mission parameters, with the goal of better understanding the parameter space and ensuring that in the final selection of the eLISA mission parameters the optimal balance between cost, complexity and science return can be achieved.

  15. Identifying quantum phase transitions with adversarial neural networks

    NASA Astrophysics Data System (ADS)

    Huembeli, Patrick; Dauphin, Alexandre; Wittek, Peter

    2018-04-01

    The identification of phases of matter is a challenging task, especially in quantum mechanics, where the complexity of the ground state appears to grow exponentially with the size of the system. Traditionally, physicists have had to identify the relevant order parameters for the classification of the different phases. Here we follow a radically different approach: we address this problem with a state-of-the-art deep learning technique, adversarial domain adaptation. We derive the phase diagram of the whole parameter space starting from a fixed and known subspace using unsupervised learning. This method has the advantage that the input to the algorithm can be the ground state directly, without any ad hoc feature engineering. Furthermore, the dimension of the parameter space is unrestricted. More specifically, the input data set contains both labeled and unlabeled data instances. The first kind is a system that admits an accurate analytical or numerical solution, so that its phase diagram can be recovered. The second type is the physical system with an unknown phase diagram. Adversarial domain adaptation uses both types of data to create invariant feature-extracting layers in a deep learning architecture. Once these layers are trained, we can attach an unsupervised learner to the network to find phase transitions. We show the success of this technique by applying it to several paradigmatic models: the Ising model at different temperatures, the Bose-Hubbard model, and the Su-Schrieffer-Heeger model with disorder. The method finds unknown transitions successfully and predicts transition points in close agreement with standard methods. This study opens the door to the classification of physical systems where the phase boundaries are complex, such as the many-body localization problem or the Bose glass phase.

  16. A system level model for preliminary design of a space propulsion solid rocket motor

    NASA Astrophysics Data System (ADS)

    Schumacher, Daniel M.

    Preliminary design of space propulsion solid rocket motors entails a combination of components and subsystems. Expert design tools exist to find near-optimal performance of subsystems and components. Conversely, there is no system-level preliminary design process for space propulsion solid rocket motors that is capable of synthesizing customer requirements into a high-utility design for the customer. The preliminary design process for space propulsion solid rocket motors typically builds on existing designs and pursues a feasible rather than the most favorable design. Classical optimization is an extremely challenging method when dealing with the complex behavior of an integrated system. The complexity and combinations of system configurations make the number of design parameters that are traded off unreasonable when manual techniques are used. Existing multi-disciplinary optimization approaches generally address estimating ratios and correlations rather than utilizing mathematical models. The developed system-level model utilizes the Genetic Algorithm to perform the necessary population searches, efficiently replacing the human iterations required during a typical solid rocket motor preliminary design. This research augments, automates, and increases the fidelity of the existing preliminary design process for space propulsion solid rocket motors. The system-level aspect of this preliminary design process, and the ability to synthesize space propulsion solid rocket motor requirements into a near-optimal design, are achievable. The process of developing the motor performance estimate and the system-level model of a space propulsion solid rocket motor is described in detail. The results of this research indicate that the model is valid for use and able to manage a very large number of variable inputs and constraints in pursuit of the best possible design.
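
    A minimal genetic-algorithm sketch of the population search described above, applied to a hypothetical smooth utility function standing in for a motor performance estimate; the population size, operators, and rates are illustrative choices, not those of the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def utility(x):
    # Hypothetical stand-in for a motor performance estimate; the best
    # design (utility 1.0) sits at normalized parameters (0.5, 0.5, 0.5).
    return 1.0 - np.sum((x - 0.5) ** 2, axis=-1)

pop = rng.random((40, 3))   # 40 candidate designs, 3 normalized parameters
for gen in range(60):
    fit = utility(pop)
    # Tournament selection: keep the better design of random pairs.
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
    # Uniform crossover between consecutive parents.
    mask = rng.random((40, 3)) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    # Gaussian mutation, clipped to the valid normalized design range [0, 1].
    children += rng.normal(0.0, 0.05, (40, 3))
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax(utility(pop))]
```

    In the actual design problem the utility call would be a motor performance model and the vector would hold many more variables, but the selection/crossover/mutation loop replacing manual design iterations is the same.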

  17. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    PubMed

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to limited signal-to-noise ratio (SNR) of PET measurements and high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least square (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimations.

  18. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms (the overlapping method and the probability-based method) for design space calculation were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the simulation number, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without coding programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is more complex in calculation, but can provide the reliability to ensure that the process indexes reach the standard within the acceptable probability threshold. In addition, there is no probability mutation at the edge of the design space with the probability-based method. Therefore, the probability-based method is recommended for design space calculation. Copyright © by the Chinese Pharmaceutical Association.
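
    A sketch of the probability-based method under stated assumptions: a hypothetical yield model, an assumed experimental error, and a coarser grid than the paper's 0.02 step (to keep the example fast). Each grid point belongs to the design space if the simulated probability of reaching the standard meets the threshold.

```python
import numpy as np

rng = np.random.default_rng(42)

def extraction_yield(time_h, solvent_ratio):
    # Hypothetical process model; all numbers here are illustrative,
    # not taken from the Codonopsis Radix data.
    return 60.0 + 15.0 * np.tanh(time_h - 1.0) + 10.0 * np.tanh(solvent_ratio - 8.0)

n_sim = 10_000    # simulated experiments per grid point, as in the abstract
sigma = 3.0       # assumed standard deviation of the experimental error
spec = 70.0       # acceptance criterion on the yield
threshold = 0.90  # acceptable probability of reaching the standard

times = np.arange(0.5, 3.01, 0.1)   # coarser than the paper's 0.02 step
ratios = np.arange(6.0, 12.01, 0.1)

det = extraction_yield(times[:, None], ratios[None, :])
noise = rng.normal(0.0, sigma, n_sim)
prob = np.empty(det.shape)
for a in range(det.shape[0]):
    # Monte Carlo estimate of the probability of reaching the standard.
    prob[a] = np.mean(det[a][:, None] + noise >= spec, axis=-1)
design_space = prob >= threshold    # boolean map of acceptable operating points
```

    Because acceptance is based on a probability rather than on overlapping deterministic regions, the resulting boundary carries the reliability statement the abstract highlights.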

  19. Characterizing the Trade Space Between Capability and Complexity in Next Generation Cloud and Precipitation Observing Systems Using Markov Chain Monte Carlo Techniques

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Mace, G. G.; Posselt, D. J.

    2017-12-01

    As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to that observing system. MCMC algorithms produce a more complete posterior solution space, and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), the observing systems considered in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures, as well as visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
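
    The retrieval machinery can be sketched with a random-walk Metropolis sampler for a single geophysical parameter; the forward model, prior range, and noise level below are illustrative assumptions, not the RADEX/IPHEX configuration. Tightening or loosening sigma is the knob that mimics trading off instrument capability.

```python
import numpy as np

rng = np.random.default_rng(7)

def forward(w):
    # Hypothetical forward model: reflectivity (dB) vs. water path w.
    return 10.0 * np.log10(1.0 + 5.0 * w)

w_true = 2.0
sigma = 0.5                           # assumed measurement uncertainty (dB)
y_obs = forward(w_true) + rng.normal(0.0, sigma)

def log_post(w):
    if not 0.0 < w < 10.0:            # flat prior on (0, 10)
        return -np.inf
    return -0.5 * ((y_obs - forward(w)) / sigma) ** 2

# Random-walk Metropolis: propose, accept with prob min(1, exp(dlogp)).
samples, w = [], 1.0
logp = log_post(w)
for _ in range(20_000):
    w_prop = w + rng.normal(0.0, 0.3)
    logp_prop = log_post(w_prop)
    if np.log(rng.random()) < logp_prop - logp:
        w, logp = w_prop, logp_prop
    samples.append(w)
posterior = np.array(samples[5_000:])  # discard burn-in
```

    The posterior spread, rather than a single best-fit value, is what lets the trade study quantify how retrieval uncertainty responds to the assumed observing system.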

  20. Spacecube: A Family of Reconfigurable Hybrid On-Board Science Data Processors

    NASA Technical Reports Server (NTRS)

    Flatley, Thomas P.

    2015-01-01

    SpaceCube is a family of Field Programmable Gate Array (FPGA) based on-board science data processing systems developed at the NASA Goddard Space Flight Center (GSFC). The goal of the SpaceCube program is to provide 10x to 100x improvements in on-board computing power while lowering relative power consumption and cost. SpaceCube is based on the Xilinx Virtex family of FPGAs, which include processor, FPGA logic and digital signal processing (DSP) resources. These processing elements are leveraged to produce a hybrid science data processing platform that accelerates the execution of algorithms by distributing computational functions to the most suitable elements. This approach enables the implementation of complex on-board functions that were previously limited to ground based systems, such as on-board product generation, data reduction, calibration, classification, event/feature detection, data mining and real-time autonomous operations. The system is fully reconfigurable in flight, including data parameters, software and FPGA logic, through either ground commanding or autonomously in response to detected events/features in the instrument data stream.

  1. Statistical physics of the symmetric group.

    PubMed

    Williams, Mobolaji

    2017-04-01

    Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.
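
    For small n the construction can be made concrete by exact enumeration over the symmetric group; the specific energy function below (the number of elements out of place) is one simple choice consistent with "how permutations deviate from some chosen correct ordering", not necessarily the paper's mean-field model.

```python
import itertools
import math

n = 6  # S_6 has 720 permutations, small enough to enumerate exactly

def energy(perm):
    # Energy grows with deviation from the chosen "correct" ordering:
    # here, the number of elements out of place (one simple choice).
    return sum(p != i for i, p in enumerate(perm))

def partition_function(beta):
    return sum(math.exp(-beta * energy(p))
               for p in itertools.permutations(range(n)))

def mean_energy(beta):
    Z = partition_function(beta)
    return sum(energy(p) * math.exp(-beta * energy(p))
               for p in itertools.permutations(range(n))) / Z

# At beta = 0 every permutation is equally likely (Z = n!); the mean energy
# is n - 1 because a random permutation has exactly one fixed point on
# average. At large beta the ensemble freezes onto the correct ordering.
```

    The nonfactorizable nature of the state space shows up immediately: Z is a sum over n! permutations rather than a product of single-site terms.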

  2. Statistical physics of the symmetric group

    NASA Astrophysics Data System (ADS)

    Williams, Mobolaji

    2017-04-01

    Ordered chains (such as chains of amino acids) are ubiquitous in biological cells, and these chains perform specific functions contingent on the sequence of their components. Using the existence and general properties of such sequences as a theoretical motivation, we study the statistical physics of systems whose state space is defined by the possible permutations of an ordered list, i.e., the symmetric group, and whose energy is a function of how certain permutations deviate from some chosen correct ordering. Such a nonfactorizable state space is quite different from the state spaces typically considered in statistical physics systems and consequently has novel behavior in systems with interacting and even noninteracting Hamiltonians. Various parameter choices of a mean-field model reveal the system to contain five different physical regimes defined by two transition temperatures, a triple point, and a quadruple point. Finally, we conclude by discussing how the general analysis can be extended to state spaces with more complex combinatorial properties and to other standard questions of statistical mechanics models.

  3. Primary and secondary electrical space power based on advanced PEM systems

    NASA Technical Reports Server (NTRS)

    Vanderborgh, N. E.; Hedstrom, J. C.; Stroh, K. R.; Huff, J. R.

    1993-01-01

    For new space ventures, power continues to be a pacing function for mission planning and experiment endurance. Although electrochemical power is a well-demonstrated space power technology, current hardware limitations impact future mission viability. In order to document and augment electrochemical technology, a series of experiments for the National Aeronautics and Space Administration Lewis Research Center (NASA LeRC) is underway at the Los Alamos National Laboratory to define operational parameters of contemporary proton exchange membrane (PEM) hardware operating with hydrogen and oxygen reactants. Because of the high efficiency possible for water electrolysis, this hardware is also considered part of a secondary battery design built around stored reactants, the so-called regenerative fuel cell. An overview of stack testing at Los Alamos and of analyses related to regenerative fuel cell systems is provided in this paper. Finally, this paper describes work looking at innovative concepts that remove complexity from stack hardware with the specific intent of higher system reliability. This new concept offers the potential for unprecedented electrochemical power system energy densities.

  4. Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.

    PubMed

    Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter

    2017-09-01

    An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
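
    A sketch of the k-space subspace completion idea on synthetic low-rank data: the temporal subspace is estimated from a few fully sampled k-space locations, and missing samples are recovered by alternating a subspace projection (a matrix multiplication, not singular-value thresholding) with a data-consistency step. Sizes, rank, and sampling fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

n_k, n_t, r = 200, 50, 4   # k-space locations, time points, subspace rank
# Synthetic data: each k-space location's signal evolution lies in an
# r-dimensional temporal subspace (the low-rank model behind MRF).
X = rng.standard_normal((n_k, r)) @ rng.standard_normal((r, n_t))

# A few k-space locations are fully sampled along time; their SVD yields
# the temporal basis V (n_t x r).
_, _, Vh = np.linalg.svd(X[:20], full_matrices=False)
V = Vh[:r].T

# The remaining data are undersampled: keep 40% of the temporal samples.
mask = rng.random((n_k, n_t)) < 0.4
Y = np.where(mask, X, 0.0)
err0 = np.linalg.norm(Y - X) / np.linalg.norm(X)

# Iterative completion: project every row onto the temporal subspace
# (a matrix multiplication), then re-impose the measured samples.
Z = Y.copy()
for _ in range(300):
    Z = Z @ V @ V.T              # projection onto the temporal subspace
    Z = np.where(mask, Y, Z)     # data consistency
err = np.linalg.norm(Z - X) / np.linalg.norm(X)
```

    Because the subspace is fixed by the calibration data, each iteration costs only two small matrix products per pass, which is the computational saving over SVD-thresholding low-rank completion that the abstract points out.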

  5. Space Shuttle Crawler Transporter Truck Shoe Qualification Tests and Analyses for Return-to-Flight

    NASA Technical Reports Server (NTRS)

    Margasahayam, Ravi N.; Meyer, Karl A.; Burton, Roy C.; Gosselin, Armand M.

    2005-01-01

    A vital element of Launch Complex 39 (LC39) and NASA's Kennedy Space Center (KSC) mobile launch transfer operation is a 3-million-kilogram behemoth known as the Crawler Transporter (CT). Built in the 1960s, the two CTs have each accumulated more than 1,700 miles in service of the Apollo and Space Shuttle programs. Recent observation of fatigue cracks on the CT shoes led to a comprehensive engineering, structural and metallurgical evaluation to assess the root cause, and necessitated procurement of over 1,000 new shoes. This paper documents the completed dynamic tests on the old shoes and compression tests on the new shoes, performed to certify them for the Space Shuttle's return-to-flight (RTF). Measured strain data from the rollout tests were used to develop stress/loading spectra and a static equivalent load for qualification testing of the new shoes. Additionally, finite element analysis (FEA) was used to conduct sensitivity analyses of various contact parameters and structural characteristics for acceptance of the new shoes.

  6. Evaluation of the novel algorithm of flexible ligand docking with moveable target-protein atoms.

    PubMed

    Sulimov, Alexey V; Zheltkov, Dmitry A; Oferkin, Igor V; Kutov, Danil C; Katkova, Ekaterina V; Tyrtyshnikov, Eugene E; Sulimov, Vladimir B

    2017-01-01

    We present a novel docking algorithm based on the Tensor Train decomposition and TT-Cross global optimization. The algorithm is applied to the docking problem with a flexible ligand and moveable protein atoms. The energy of the protein-ligand complex is calculated within the MMFF94 force field in vacuum. No grid of precalculated energy potentials of probe ligand atoms in the field of the target protein atoms is used: the energy of the protein-ligand complex for any given configuration is computed directly with the MMFF94 force field, without any fitting parameters. The conformation space of the system is formed by translations and rotations of the ligand as a whole, by the ligand torsions, and by Cartesian coordinates of selected target protein atoms. Mobility of protein and ligand atoms is taken into account in the docking process simultaneously and on an equal footing. The algorithm is implemented in the new parallel docking program SOL-P, and results of its performance for a set of 30 protein-ligand complexes are presented. The docking positioning accuracy is investigated as a function of the algorithm parameters and the number of moveable protein atoms, and it is shown that protein-atom mobility improves positioning accuracy. The SOL-P program is able to dock a flexible ligand into the active site of a target protein with several dozen moveable protein atoms: the native crystallized ligand pose is correctly found as the global energy minimum in a search space with 157 dimensions using 4700 CPU*h on the Lomonosov supercomputer.

  7. Homoclinic Bifurcation in an SIQR Model for Childhood Diseases

    NASA Astrophysics Data System (ADS)

    Wu, Lih-Ing; Feng, Zhilan

    2000-11-01

    We consider a system of ODEs which describes the transmission dynamics of childhood diseases. A center manifold reduction at a bifurcation point has the normal form x′ = y, y′ = axy + bx²y + O(4), indicating a bifurcation of codimension greater than two. A three-parameter unfolding of the normal form is studied to capture the possible complex dynamics of the original system, which is subject to certain constraints on the state space due to biological considerations. It is shown that the perturbed system produces a homoclinic bifurcation.

  8. Cloning, expression, and crystallization of Cpn60 proteins from Thermococcus litoralis.

    PubMed

    Osipiuk, J; Sriram, M; Mai, X; Adams, M W; Joachimiak, A

    2000-01-01

    Two genes of the extremely thermophilic archaeon Thermococcus litoralis homologous to those that code for Cpn60 chaperonins were cloned and expressed in Escherichia coli. Each of the Cpn60 subunits, as well as the entire Cpn60 complex, crystallizes in a variety of morphological forms. The best crystals diffract to 3.6 Å resolution at room temperature and belong to space group I422 with unit-cell parameters a = b = 193.5 Å, c = 204.2 Å.

  9. An efficient approach to the travelling salesman problem using self-organizing maps.

    PubMed

    Vieira, Frederico Carvalho; Dória Neto, Adrião Duarte; Costa, José Alfredo Ferreira

    2003-04-01

    This paper presents an approach to the well-known Travelling Salesman Problem (TSP) using Self-Organizing Maps (SOM). The SOM algorithm carries useful topological information about the configuration of its neurons in Cartesian space, which can be used to solve optimization problems. Aspects of initialization, parameter adaptation, and complexity analysis of the proposed SOM-based algorithm are discussed. The results show an average deviation of 3.7% from the optimal tour length for a set of 12 TSP instances.
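    A minimal sketch of the SOM-for-TSP idea: an elastic ring of neurons is pulled toward the cities, and the tour is read off the final ring order. The neuron count, learning-rate and radius decay schedules below are illustrative choices, not the parameters of the paper.

```python
import numpy as np

def som_tsp(cities, neuron_factor=8, n_iter=5000, seed=0):
    """Approximate a TSP tour with a ring-topology self-organizing map."""
    rng = np.random.default_rng(seed)
    n = len(cities)
    m = neuron_factor * n                 # neurons on the elastic ring
    net = rng.random((m, 2))
    lr, radius = 0.8, m / 2
    for _ in range(n_iter):
        city = cities[rng.integers(n)]
        winner = np.argmin(np.linalg.norm(net - city, axis=1))
        d = np.abs(np.arange(m) - winner)
        d = np.minimum(d, m - d)          # circular distance along the ring
        h = np.exp(-d ** 2 / (2 * max(radius, 1.0) ** 2))
        net += lr * h[:, None] * (city - net)
        lr *= 0.9995                      # decay schedules (assumed values)
        radius *= 0.999
    # tour = cities ordered by the ring position of their nearest neuron
    return np.argsort([np.argmin(np.linalg.norm(net - c, axis=1))
                       for c in cities])

rng = np.random.default_rng(1)
cities = rng.random((30, 2))
tour = som_tsp(cities)                    # a permutation of the 30 cities
```

The shrinking neighborhood is what transfers the ring's one-dimensional topology onto the city layout, turning a clustering update into a tour construction.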

  10. Synthesis, characterisation and catalytic activity of 4, 5-imidazoledicarboxylate ligated Co(II) and Cd(II) metal-organic coordination complexes

    NASA Astrophysics Data System (ADS)

    Gangu, Kranthi Kumar; Maddila, Suresh; Mukkamala, Saratchandra Babu; Jonnalagadda, Sreekantha B.

    2017-09-01

    Two mononuclear coordination complexes, namely [Co(4,5-Imdc)2(H2O)2] (1) and [Cd(4,5-Imdc)2(H2O)3]·H2O (2), were constructed from Co(II) and Cd(II) metal salts with 4,5-imidazoledicarboxylic acid (4,5-Imdc) as the organic ligand. Both 1 and 2 were structurally characterized by single-crystal XRD; the results reveal that 1 belongs to the P21/n space group with unit cell parameters [a = 5.0514(3) Å, b = 22.5786(9) Å, c = 6.5377(3) Å, β = 111.5°], whereas 2 belongs to the P21/c space group with unit cell parameters [a = 6.9116(1) Å, b = 17.4579(2) Å, c = 13.8941(2) Å, β = 97.7°]. While Co(II) in 1 exhibits a six-coordinate geometry with 4,5-Imdc and water molecules, the Cd(II) ion in 2 shows seven-coordination with the same ligand and solvent. In both 1 and 2, hydrogen-bond interactions between the mononuclear units generate 3D supramolecular structures. Both complexes exhibit solid-state fluorescent emission at room temperature. The efficacy of both complexes as heterogeneous catalysts was examined in the green synthesis of six pyrano[2,3-c]pyrazole derivatives with ethanol as solvent, via a one-pot reaction between four components: aromatic aldehyde, malononitrile, hydrazine hydrate and dimethyl acetylenedicarboxylate. Both 1 and 2 produced pyrano[2,3-c]pyrazoles in impressive yields (92-98%) at room temperature in short reaction times (<20 min), with no need for chromatographic separations. With good stability, ease of preparation and recovery, plus reusability for up to six cycles, both 1 and 2 prove to be excellent environmentally friendly catalysts for value-added organic transformations following green principles.

  11. Multi-objective optimisation and decision-making of space station logistics strategies

    NASA Astrophysics Data System (ADS)

    Zhu, Yue-he; Luo, Ya-zhong

    2016-10-01

    Space station logistics strategy optimisation is a complex engineering problem with multiple objectives. Finding a compromise solution preferred by the decision-maker becomes especially significant when solving such a problem, yet the preferred solution is not easy to determine with traditional methods. Thus, a hybrid approach combining a multi-objective evolutionary algorithm, physical programming, and the differential evolution (DE) algorithm is proposed to handle both the optimisation and the decision-making of space station logistics strategies. The multi-objective evolutionary algorithm is used to acquire a Pareto frontier and to help determine the range parameters of the physical programming. Physical programming is employed to convert the four-objective problem into a single-objective problem, and a DE algorithm is applied to solve the resulting physical-programming-based optimisation problem. Five kinds of objective preference are simulated and compared. The simulation results indicate that the proposed approach can produce good compromise solutions corresponding to different decision-makers' preferences.
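    The scalarize-then-optimize step can be illustrated with a minimal DE/rand/1/bin minimizer applied to a weighted aggregation of two toy objectives. The objectives and weights are invented stand-ins: the paper aggregates four logistics objectives via physical programming rather than a weighted sum.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.7, CR=0.9,
                           n_gen=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(n_gen):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # mutation
            cross = rng.random(dim) < CR                    # binomial crossover
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                                # greedy selection
                pop[i], fit[i] = trial, ft
    best = np.argmin(fit)
    return pop[best], fit[best]

# Two toy "logistics" objectives scalarized with preference weights
# (a stand-in for the paper's physical-programming aggregation).
def cost(x):  return (x[0] - 1) ** 2 + (x[1] + 2) ** 2
def risk(x):  return (x[0] + 1) ** 2 + (x[1] - 1) ** 2
weights = (0.7, 0.3)
agg = lambda x: weights[0] * cost(x) + weights[1] * risk(x)

x_best, f_best = differential_evolution(agg, [(-5, 5), (-5, 5)])
```

Changing the weights moves the single-objective optimum along the Pareto frontier, which is exactly the lever a preference model such as physical programming manipulates in a more principled way.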

  12. Online Distributed Learning Over Networks in RKH Spaces Using Random Fourier Features

    NASA Astrophysics Data System (ADS)

    Bouboulis, Pantelis; Chouvardas, Symeon; Theodoridis, Sergios

    2018-04-01

    We present a novel diffusion scheme for online kernel-based learning over networks. So far, a major drawback of any online learning algorithm operating in a reproducing kernel Hilbert space (RKHS) has been the need to update a growing number of parameters as time iterations evolve. Besides complexity, in a distributed setting this leads to an increased need for communication resources. In contrast, the proposed method approximates the solution as a fixed-size vector (of larger dimension than the input space) using Random Fourier Features. This paves the way for standard linear combine-then-adapt techniques. To the best of our knowledge, this is the first time that a complete protocol for distributed online learning in RKHS is presented. Conditions for asymptotic convergence and boundedness of the network-wise regret are also provided. Simulated tests illustrate the performance of the proposed scheme.
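    The fixed-size approximation rests on the standard random Fourier feature map for shift-invariant kernels: draw random frequencies from the kernel's spectral density so that inner products of the finite features approximate the kernel. A minimal sketch for the RBF kernel (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_map(X, n_features, gamma, rng):
    """Random Fourier features z(x) with z(x)@z(y) ~ exp(-gamma*||x-y||^2)."""
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.standard_normal((50, 3))
Z = rff_map(X, n_features=2000, gamma=0.5, rng=rng)

# compare the fixed-size feature approximation against the exact RBF kernel
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
max_err = np.abs(K_exact - Z @ Z.T).max()
```

Once every node works with the fixed-length features Z rather than a growing kernel expansion, ordinary linear diffusion (combine-then-adapt) updates apply directly, which is the enabling observation of the paper.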

  13. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
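    The two clustering parameters described (a local density and a distance to the nearest higher-density point, in the spirit of density-peak clustering) can be sketched on toy 2-D data standing in for Stokes-space points; the cutoff radius and cluster geometry are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated point clouds standing in for Stokes-space constellation
# clusters of a polarization-multiplexed signal.
pts = np.vstack([rng.normal([0, 0], 0.3, (100, 2)),
                 rng.normal([4, 4], 0.3, (100, 2))])

D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
d_c = 0.5                                   # density cutoff radius (assumed)
rho = (D < d_c).sum(axis=1) - 1             # parameter 1: local density

# parameter 2: distance to the nearest point of higher density
idx = np.arange(len(pts))
delta = np.empty(len(pts))
for i in range(len(pts)):
    higher = (rho > rho[i]) | ((rho == rho[i]) & (idx < i))  # tie-break
    delta[i] = D[i, higher].min() if higher.any() else D[i].max()

# density peaks (cluster centres) stand out with large rho AND large delta;
# no iteration is needed to find them
peaks = np.argsort(rho * delta)[-2:]
```

Because both parameters are computed in one pass over the pairwise distances, the peak count (and hence the modulation format, via the number of Stokes-space clusters) is obtained without iterative reclustering.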

  14. Noncommutative products of Euclidean spaces

    NASA Astrophysics Data System (ADS)

    Dubois-Violette, Michel; Landi, Giovanni

    2018-05-01

    We present natural families of coordinate algebras on noncommutative products of Euclidean spaces R^{N_1} ×_R R^{N_2}. These coordinate algebras are quadratic ones associated with an R-matrix which is involutive and satisfies the Yang-Baxter equations. As a consequence, they enjoy a list of nice properties, being regular of finite global dimension. Notably, we have eight-dimensional noncommutative Euclidean spaces R^4 ×_R R^4. Among these, particularly well-behaved ones have deformation parameter u ∈ S^2. Quotients include seven-spheres S^7_u as well as noncommutative quaternionic tori T^H_u = S^3 ×_u S^3. There is invariance for an action of SU(2) × SU(2) on the torus T^H_u, in parallel with the action of U(1) × U(1) on a `complex' noncommutative torus T^2_θ, which allows one to construct quaternionic toric noncommutative manifolds. Additional classes of solutions are disjoint from the classical case.

  15. Dynamics of a neuron model in different two-dimensional parameter-spaces

    NASA Astrophysics Data System (ADS)

    Rech, Paulo C.

    2011-03-01

    We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to these chaotic regions, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.
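    For reference, a minimal integration of the Hindmarsh-Rose model at a single point of the parameter space (standard textbook parameter values, not necessarily one of the planes scanned in the paper), showing the spiking-bursting dynamics whose organization the diagrams map:

```python
import numpy as np

# Hindmarsh-Rose neuron with standard textbook parameter values (the paper
# scans pairs of such parameters; this is one point of the parameter space).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_r, I = 0.006, 4.0, -1.6, 3.25

def f(v):
    x, y, z = v
    return np.array([y - a * x**3 + b * x**2 - z + I,
                     c - d * x**2 - y,
                     r * (s * (x - x_r) - z)])

def rk4_step(v, dt):
    """One classical Runge-Kutta step."""
    k1 = f(v); k2 = f(v + dt / 2 * k1)
    k3 = f(v + dt / 2 * k2); k4 = f(v + dt * k3)
    return v + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.05, 40000
v = np.array([-1.0, 0.0, 2.0])
xs = np.empty(n_steps)
for i in range(n_steps):
    v = rk4_step(v, dt)
    xs[i] = v[0]

# bursting produces many spikes: count upward crossings of x = 1
spikes = int(np.sum((xs[:-1] < 1.0) & (xs[1:] >= 1.0)))
```

A parameter-space diagram of the kind reported is built by repeating such an integration over a grid of two parameter values and classifying each trajectory (e.g. by counting distinct spike periods).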

  16. Transformation to equivalent dimensions—a new methodology to study earthquake clustering

    NASA Astrophysics Data System (ADS)

    Lasocki, Stanislaw

    2014-05-01

    A seismic event is represented by a point in a parameter space, quantified by the vector of parameter values. Studies of earthquake clustering involve considering distances between such points in multidimensional spaces. However, the metrics of earthquake parameters are different, hence a metric in a multidimensional parameter space cannot be readily defined. The present paper proposes a solution to this metric problem based on a concept of probabilistic equivalence of earthquake parameters. Under this concept, the lengths of parameter intervals are equivalent if the probability for earthquakes to take values from either interval is the same. Earthquake clustering is then studied in a space of equivalent rather than original dimensions, where the equivalent dimension (ED) of a parameter is its cumulative distribution function. All transformed parameters are of linear scale in the [0, 1] interval, and the distance between earthquakes represented by vectors in any ED space is Euclidean. The generally unknown cumulative distributions of earthquake parameters are estimated from earthquake catalogues by means of the model-free non-parametric kernel estimation method. The potential of the transformation to EDs is illustrated by two examples of use: to find hierarchically closest neighbours in time-space, and to assess temporal variations of earthquake clustering in a specific 4-D phase space.
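    The transformation itself is simple to sketch: each parameter is replaced by its (estimated) cumulative distribution value, so all coordinates live on a common linear [0, 1] scale and Euclidean distance applies. The toy catalogue below is invented, and a plain empirical CDF stands in for the paper's kernel estimator:

```python
import numpy as np

def to_equivalent_dimension(values):
    """Map parameter values to [0, 1] through their empirical CDF (the
    'equivalent dimension'); the paper uses a kernel-smoothed CDF estimate,
    for which a plain ECDF is the simplest stand-in."""
    ranks = np.argsort(np.argsort(values))
    return (ranks + 1) / (len(values) + 1)

rng = np.random.default_rng(0)
# toy catalogue: 500 events with magnitude and occurrence time (invented)
mags = 2.0 + rng.exponential(0.8, 500)
times = rng.uniform(0, 1000, 500)

ed = np.column_stack([to_equivalent_dimension(mags),
                      to_equivalent_dimension(times)])
# in ED coordinates both parameters share a linear [0, 1] scale, so the
# distance between two events is simply Euclidean
d01 = np.linalg.norm(ed[0] - ed[1])
```

Because the CDF is monotone, the transform preserves the ordering of each parameter while equalizing its probability scale, which is what makes magnitudes and occurrence times commensurable in one distance.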

  17. Exchange Coupling Interactions from the Density Matrix Renormalization Group and N-Electron Valence Perturbation Theory: Application to a Biomimetic Mixed-Valence Manganese Complex.

    PubMed

    Roemelt, Michael; Krewald, Vera; Pantazis, Dimitrios A

    2018-01-09

    The accurate description of magnetic level energetics in oligonuclear exchange-coupled transition-metal complexes remains a formidable challenge for quantum chemistry. The density matrix renormalization group (DMRG) brings such systems for the first time easily within reach of multireference wave function methods by enabling the use of unprecedentedly large active spaces. But does this guarantee systematic improvement in predictive ability and, if so, under which conditions? We identify operational parameters in the use of DMRG using as a test system an experimentally characterized mixed-valence bis-μ-oxo/μ-acetato Mn(III,IV) dimer, a model for the oxygen-evolving complex of photosystem II. A complete active space of all metal 3d and bridge 2p orbitals proved to be the smallest meaningful starting point; this is readily accessible with DMRG and greatly improves on the unrealistic metal-only configuration interaction or complete active space self-consistent field (CASSCF) values. Orbital optimization is critical for stabilizing the antiferromagnetic state, while a state-averaged approach over all spin states involved is required to avoid artificial deviations from isotropic behavior that are associated with state-specific calculations. Selective inclusion of localized orbital subspaces enables probing the relative contributions of different ligands and distinct superexchange pathways. Overall, however, full-valence DMRG-CASSCF calculations fall short of providing a quantitative description of the exchange coupling owing to insufficient recovery of dynamic correlation. Quantitatively accurate results can be achieved through a DMRG implementation of second order N-electron valence perturbation theory (NEVPT2) in conjunction with a full-valence metal and ligand active space. Perspectives for future applications of DMRG-CASSCF/NEVPT2 to exchange coupling in oligonuclear clusters are discussed.

  18. Self-organization of pulsing and bursting in a CO₂ laser with opto-electronic feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freire, Joana G. (Instituto de Altos Estudos da Paraíba, João Pessoa; CELC, Departamento de Matemática, Universidade de Lisboa)

    We report a detailed investigation of the stability of a CO₂ laser with feedback as described by a six-dimensional rate-equations model which provides satisfactory agreement between numerical and experimental results. We focus on experimentally accessible parameters, like bias voltage, feedback gain, and the bandwidth of the feedback loop. The impact of decay rates and of parameters controlling cavity losses is also investigated, as are control planes which imply changes of the laser physical medium. For several parameter combinations, we report stability diagrams detailing how laser spiking and bursting are organized over extended intervals. Laser pulsations are shown to emerge organized in several hitherto unseen regular and irregular phases and to exhibit a much richer and more complex range of behaviors than described thus far. A significant observation is that qualitatively similar organization of laser spiking and bursting can be obtained by tuning rather distinct control parameters, suggesting the existence of unexpected symmetries in the laser control space.

  19. Approaches in highly parameterized inversion: bgaPEST, a Bayesian geostatistical approach implementation with PEST: documentation and instructions

    USGS Publications Warehouse

    Fienen, Michael N.; D'Oria, Marco; Doherty, John E.; Hunt, Randall J.

    2013-01-01

    The application bgaPEST is a highly parameterized inversion software package implementing the Bayesian Geostatistical Approach in a framework compatible with the parameter estimation suite PEST. Highly parameterized inversion refers to cases in which parameters are distributed in space or time and are correlated with one another. The Bayesian aspect of bgaPEST is related to Bayesian probability theory, in which prior information about parameters is formally revised on the basis of the calibration dataset used for the inversion. Conceptually, this approach formalizes the conditionality of estimated parameters on the specific data and model available. The geostatistical component of the method refers to the way in which prior information about the parameters is used: a geostatistical autocorrelation function enforces structure on the parameters to avoid overfitting and unrealistic results. The Bayesian Geostatistical Approach is designed to provide the smoothest solution that is consistent with the data. Optionally, users can specify a level of fit, or estimate a balance between fit and model complexity informed by the data. Groundwater and surface-water applications are used as examples in this text, but the possible uses of bgaPEST extend to any distributed-parameter application.

  20. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
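    The backbone of the method, a one-dimensional Wang-Landau random walk that accumulates ln g over a single macroscopic variable, can be sketched on a toy system with a known answer: N independent spins with order parameter k = number of up spins, for which g(k) is the binomial coefficient C(N, k). The system, flatness criterion and refinement schedule are illustrative, not the spin-crossover model of the paper.

```python
import numpy as np
from math import comb, log

def wang_landau_spins(N=16, flatness=0.8, ln_f_final=1e-4, seed=0):
    """Wang-Landau estimate of ln g(k) for N independent spins, where the
    macroscopic variable k is the number of up spins."""
    rng = np.random.default_rng(seed)
    spins = rng.integers(0, 2, N)
    k = int(spins.sum())
    ln_g = np.zeros(N + 1)
    hist = np.zeros(N + 1)
    ln_f = 1.0
    while ln_f > ln_f_final:
        for _ in range(10000):
            i = rng.integers(N)
            k_new = k + (1 - 2 * spins[i])          # flip moves k by +/-1
            # accept with min(1, g(k)/g(k_new)) using the running estimate
            if rng.random() < np.exp(ln_g[k] - ln_g[k_new]):
                spins[i] ^= 1
                k = k_new
            ln_g[k] += ln_f                          # modification factor
            hist[k] += 1
        if hist.min() > flatness * hist.mean():      # flat histogram reached
            hist[:] = 0
            ln_f /= 2.0                              # refine and continue
    return ln_g

ln_g = wang_landau_spins()
ln_g -= ln_g.mean()                # ln g is only defined up to a constant
exact = np.array([log(comb(16, k)) for k in range(17)])
exact -= exact.mean()
max_dev = np.abs(ln_g - exact).max()
```

The macroscopically constrained scheme of the paper runs many such one-dimensional walks, each restricted to a fixed set of order-parameter values, and stitches the resulting densities of states together.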

  1. Application of "FLUOR-P" device for analysis of the space flight effects on the intracellular level.

    NASA Astrophysics Data System (ADS)

    Grigorieva, Olga; Rudimov, Evgeny; Buravkova, Ludmila; Galchuk, Sergey

    The mechanisms of cellular gravisensitivity remain unclear despite intensive research into the effects of hypogravity on cellular function. Most cell culture experiments on the unmanned vehicles "Bion" and "Photon", as well as on the ISS, allow only post-flight analysis of biological material, including fixed cells; dynamic evaluation of cellular parameters over a prolonged period of time is not possible. A promising direction is therefore the development of equipment for onboard autonomous experiments. For this purpose, the SSC RF IBMP RAS has developed the "FLUOR-P" device for measuring and recording the dynamic differential fluorescence signal from nano- and microsized objects of organic and inorganic nature (suspensions of human and animal cells, unicellular algae, bacteria, and cellular organelles) in hermetically sealed cuvettes. In addition, the device records the main physical factors affecting the analyzed object (temperature and gravity loads: position in space, vector accelerations, shock) in sync with the main measurements. The device is designed to perform long-term programmable autonomous experiments in space flight on biological satellites. Its software allows complex cell-based experiments to be carried out, and permanent registration of data on built-in flash memory makes it possible to analyze the dynamics of the estimated parameters. FLUOR-P is designed as a monobloc (5.5 kg weight), with 8 functional blocks located in the inner space of the device. Each registration unit of the FLUOR-P has two fluorescence-intensity channels and an excitation light source covering the wavelength range from 300 nm to 700 nm. During the biosatellite "Photon" flight, a full analysis of the dynamics of the most important intracellular parameters (mitochondrial activity and intracellular pH) under space flight factors is planned, together with an assessment of the possible contribution of temperature to the effects of microgravity. Work is supported by Roskosmos and the Russian Academy of Sciences.

  2. Pentacoordinate and Hexacoordinate Mn(III) Complexes of Tetradentate Schiff-Base Ligands Containing Tetracyanidoplatinate(II) Bridges and Revealing Uniaxial Magnetic Anisotropy.

    PubMed

    Nemec, Ivan; Herchel, Radovan; Trávníček, Zdeněk

    2016-12-08

    Crystal structures and magnetic properties of polymeric and trinuclear heterobimetallic Mn(III)···Pt(II)···Mn(III) coordination compounds, prepared from the Ba[Pt(CN)₄] and [Mn(L4A/B)(Cl)] (1a/b) precursor complexes, are reported. The polymeric complex [{Mn(L4A)}₂{μ₄-Pt(CN)₄}]ₙ (2a), where H₂L4A = N,N'-ethylene-bis(salicylideneiminate), comprises {Mn(L4A)} moieties covalently connected through [Pt(CN)₄]²⁻ bridges, thus forming a square-grid polymeric structure with hexacoordinate Mn(III) atoms. The trinuclear complex [{Mn(L4B)}₂{μ-Pt(CN)₄}] (2b), where H₂L4B = N,N'-benzene-bis(4-aminodiethylene-salicylideneiminate), consists of two {Mn(L4B)} moieties, involving pentacoordinate Mn(III) atoms, bridged in a trans fashion through the tetracyanidoplatinate(II) unit. Both complexes possess a uniaxial type of magnetic anisotropy, with D (the axial zero-field-splitting parameter) = -3.7(1) cm⁻¹ in 2a and -2.2(1) cm⁻¹ in 2b. Furthermore, the magnetic anisotropy parameters of 2a and 2b were thoroughly studied by theoretical complete active space self-consistent field (CASSCF) methods, which revealed that the former is much more sensitive to the ligand-field strength of the axial ligands.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altenfeld, Anika; Wohlgemuth, Sabine; Wehenkel, Annemarie

    The 800 kDa complex of the human Rod, Zwilch and ZW10 proteins (the RZZ complex) was reconstituted in insect cells, purified, crystallized and subjected to preliminary X-ray diffraction analysis. The spindle-assembly checkpoint (SAC) monitors kinetochore-microtubule attachment during mitosis. In metazoans, the three-subunit Rod-Zwilch-ZW10 (RZZ) complex is a crucial SAC component that interacts with additional SAC-activating and SAC-silencing components, including the Mad1-Mad2 complex and cytoplasmic dynein. The RZZ complex contains two copies of each subunit and has a predicted molecular mass of ∼800 kDa. Given the low abundance of the RZZ complex in natural sources, its recombinant reconstitution was attempted by co-expression of its subunits in insect cells. The RZZ complex was purified to homogeneity and subjected to systematic crystallization attempts. Initial crystals containing the entire RZZ complex were obtained using the sitting-drop method and were subjected to optimization to improve the diffraction resolution limit. The crystals belonged to space group P3₁ (No. 144) or P3₂ (No. 145), with unit-cell parameters a = b = 215.45, c = 458.7 Å, α = β = 90.0, γ = 120.0°.

  4. Machine learning action parameters in lattice quantum chromodynamics

    NASA Astrophysics Data System (ADS)

    Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William

    2018-05-01

    Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  5. Two algorithms for neural-network design and training with application to channel equalization.

    PubMed

    Sweatman, C Z; Mulgrew, B; Gibson, G J

    1998-01-01

    We describe two algorithms for designing and training neural-network classifiers. The first, the linear programming slab algorithm (LPSA), is motivated by the problem of reconstructing digital signals corrupted by passage through a dispersive channel and by additive noise. It constructs a multilayer perceptron (MLP) to separate two disjoint sets by using linear programming methods to identify network parameters. The second, the perceptron learning slab algorithm (PLSA), avoids the computational costs of linear programming by using an error-correction approach to identify parameters. Both algorithms operate in highly constrained parameter spaces and are able to exploit symmetry in the classification problem. Using these algorithms, we develop a number of procedures for the adaptive equalization of a complex linear 4-quadrature amplitude modulation (QAM) channel, and compare their performance in a simulation study. Results are given for both stationary and time-varying channels, the latter based on the COST 207 GSM propagation model.
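    The error-correction idea behind PLSA can be illustrated with the classical perceptron rule on separable 2-D data. The data and geometry here are invented, and the paper's algorithm builds slabs within a multilayer perceptron rather than training a single unit:

```python
import numpy as np

def perceptron_train(X, y, n_epochs=100):
    """Error-correction training of a single perceptron, y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb bias into weights
    w = np.zeros(Xb.shape[1])
    for _ in range(n_epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (w @ xi) <= 0:              # misclassified: correct w
                w += yi * xi
                errors += 1
        if errors == 0:                         # converged on separable data
            break
    return w

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.5, (40, 2)),
               rng.normal([-2, -2], 0.5, (40, 2))])
y = np.r_[np.ones(40), -np.ones(40)]
w = perceptron_train(X, y)
```

Each correction moves the decision boundary only in response to an error, which is what lets slab-style constructions avoid the cost of solving a linear program at every step.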

  6. Prosthetic avian vocal organ controlled by a freely behaving bird based on a low dimensional model of the biomechanical periphery.

    PubMed

    Arneodo, Ezequiel M; Perl, Yonatan Sanz; Goller, Franz; Mindlin, Gabriel B

    2012-01-01

    Because of the parallels found with human language production and acquisition, birdsong is an ideal animal model to study general mechanisms underlying complex, learned motor behavior. The rich and diverse vocalizations of songbirds emerge as a result of the interaction between a pattern generator in the brain and a highly nontrivial nonlinear periphery. Much of the complexity of this vocal behavior has been understood by studying the physics of the avian vocal organ, particularly the syrinx. A mathematical model describing the complex periphery as a nonlinear dynamical system leads to the conclusion that nontrivial behavior emerges even when the organ is commanded by simple motor instructions: smooth paths in a low dimensional parameter space. An analysis of the model provides insight into which parameters are responsible for generating a rich variety of diverse vocalizations, and what the physiological meaning of these parameters is. By recording the physiological motor instructions elicited by a spontaneously singing muted bird and computing the model on a Digital Signal Processor in real-time, we produce realistic synthetic vocalizations that replace the bird's own auditory feedback. In this way, we build a bio-prosthetic avian vocal organ driven by a freely behaving bird via its physiologically coded motor commands. Since it is based on a low-dimensional nonlinear mathematical model of the peripheral effector, the emulation of the motor behavior requires light computation, in such a way that our bio-prosthetic device can be implemented on a portable platform.

  7. Sensitivity of finite helical axis parameters to temporally varying realistic motion utilizing an idealized knee model.

    PubMed

    Johnson, T S; Andriacchi, T P; Erdman, A G

    2004-01-01

    Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and on the rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, using two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid-body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and insensitive to the time step increment. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. The orientation of the axis in space is accurate, with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a point projected onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time step increments during periods of little or no rotation.

  8. Modeling the Hot Tensile Flow Behaviors of Ultra-High-Strength Steel and Construction of Three-Dimensional Continuous Interaction Space for Forming Parameters

    NASA Astrophysics Data System (ADS)

    Quan, Guo-zheng; Zhan, Zong-yang; Wang, Tong; Xia, Yu-feng

    2017-01-01

    The response of true stress to strain rate, temperature and strain is a complex three-dimensional (3D) issue, and an accurate description of such constitutive relationships contributes significantly to optimum process design. To obtain true stress-strain data for the ultra-high-strength steel BR1500HS, a series of isothermal hot tensile tests was conducted over a wide temperature range of 973-1123 K and a strain rate range of 0.01-10 s^-1 on a Gleeble 3800 testing machine. The constitutive relationships were then modeled by an optimally constructed and well-trained back-propagation artificial neural network (BP-ANN). Evaluation of the BP-ANN model revealed admirable performance in characterizing and predicting the flow behaviors of BR1500HS. A comparison of the improved Arrhenius-type constitutive equation and the BP-ANN model shows that the latter has higher accuracy. Consequently, the developed BP-ANN model was used to predict abundant stress-strain data beyond the limited experimental conditions, and a 3D continuous interaction space for temperature, strain rate, strain and stress was constructed from these predicted data. The resulting 3D continuous interaction space for the hot working parameters helps to fully reveal the intrinsic relationships of BR1500HS steel.

  9. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

    The convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as realistic grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  10. Gas Chromatography in Solar System Exploration: Decoding Complex Chromatograms Recovered from Space Missions

    NASA Astrophysics Data System (ADS)

    Pietrogrande, M. C.; Tellini, I.; Dondi, F.; Felinger, A.; Sternberg, R.; Szopa, C.; Vidal-Madjar, C.

    GC plays a predominant role in solar system exploration and has been applied to space research related to exobiology: e.g., the Cassini-Huygens mission devoted to characterizing the chemical composition of Titan's atmosphere [2], and the Rosetta mission to investigate the nucleus of comet P/Wirtanen (COSAC experiments) [1]. GC analysis of planetary atmospheres is a difficult analytical task because of the unknown and low levels of analytes present in the sample, the high degree of automation required, and the strong constraints imposed by the flight (short analysis time, low power consumption, high accuracy and reliability under extreme space conditions). In these circumstances the use of a signal processing procedure is practically mandatory to efficiently extract useful information from the raw chromatogram, i.e., to decode the complex chromatogram in order to determine the number of components, the separation efficiency and the retention pattern. In this work a chemometric approach based on Fourier analysis is applied to complex chromatograms related to space research: from the autocovariance function (ACVF) computed on the digitized chromatogram, the chromatographic parameters (number of components, peak shape parameters, retention pattern) can be estimated [3-7]. The procedure, originally developed for constant peak width [3], was extended to variable peak width [4] in order to describe chromatograms obtained in isothermal conditions, i.e., analysis conditions compatible with space flight constraints. The chemometric procedure was applied to chromatograms of standard mixtures representative of planetary atmospheres (hydrocarbons and oxygenated compounds with carbon atom numbers ranging from 2 to 8) obtained under flight-simulating conditions (isothermal or pseudo-isothermal). Both the simplified graphic procedure, based on the assumption of constant peak width [3], and the complete approach developed for variable peak width [4], were applied and the results compared. An independent procedure was also used to estimate peak width, in order to validate the obtained results. The number of components present in the mixture and the peak width (related to separation efficiency) can be accurately estimated for the experimental chromatograms. Such information is useful for interpreting data recovered from space missions and for selecting the optimal analysis conditions compatible with flight constraints. 1. C. Szopa et al., J. Chromatogr. A 2000, 904, 73. 2. M. C. Pietrogrande et al., J. Chromatogr. A, in press. 3. A. Felinger et al., Anal. Chem., 1990, 62, 1854. 4. A. Felinger et al., Anal. Chem., 1991, 63, 2627. 5. M. C. Pietrogrande et al., J. High Resol. Chromatogr. 1996, 19, 327. 6. F. Dondi et al., Chromatographia, 1997, 45, 435. 7. A. Felinger, M.C. Pietrogrande, Anal. Chem., 2001, 73, 618A.

  11. SU-E-J-261: Statistical Analysis and Chaotic Dynamics of Respiratory Signal of Patients in BodyFix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalski, D; Huq, M; Bednarz, G

    Purpose: To quantify the respiratory signal of patients in BodyFix undergoing 4DCT scans with and without the immobilization cover. Methods: 20 pairs of respiratory tracks recorded with the RPM system during 4DCT scans were analyzed. Descriptive statistics were applied to selected parameters of the exhale-inhale decomposition. Standardized signals were used with the delay method to build orbits in embedded space. Nonlinear behavior was tested with surrogate data. Sample entropy (SE), Lempel-Ziv complexity (LZC) and the largest Lyapunov exponent (LLE) were compared. Results: Statistical tests show a difference between scans for inspiration time and its variability, which is larger for scans without the cover. The same holds for the variability of the end of exhalation and inhalation. Other parameters fail to show a difference. For both scans the respiratory signals show determinism and nonlinear stationarity. Statistical tests on surrogate data reveal their nonlinearity. The LLEs show the signals' chaotic nature and its correlation with the breathing period and its embedding delay time. SE, LZC and LLE measure respiratory signal complexity. Nonlinear characteristics do not differ between scans. Conclusion: Contrary to expectation, the cover applied to patients in BodyFix appears to have limited effect on the signal parameters. Analysis based on trajectories of delay vectors shows the respiratory system's nonlinear character and its sensitive dependence on initial conditions. Reproducibility of the respiratory signal can be evaluated with measures of signal complexity and its predictability window. A longer respiratory period is conducive to signal reproducibility, as shown by these gauges. Statistical independence of the exhale and inhale times is also supported by the magnitude of the LLE. The nonlinear parameters seem more appropriate for gauging respiratory signal complexity, given its deterministic chaotic nature. This contrasts with measures based on harmonic analysis, which are blind to nonlinear features. The dynamics of breathing, so crucial for 4D-based clinical technologies, can be better controlled if a nonlinear-based methodology, which reflects respiration characteristics, is applied. Funding provided by Varian Medical Systems via an Investigator Initiated Research Project.
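As an illustration of the complexity measures named above, here is a minimal numpy sketch of sample entropy; the parameter choices (m = 2, tolerance r = 0.2 times the signal's standard deviation) and the naive O(n^2) pairwise search are illustrative assumptions, not the RPM-processing code used in the study.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """Naive O(n^2) sample entropy SampEn(m, r) of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def matching_pairs(mm):
        # All delay vectors of length mm, compared in Chebyshev distance.
        emb = np.array([x[i:i + mm] for i in range(len(x) - m)])
        d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
        return ((d <= r).sum() - len(emb)) / 2.0  # pairs within r, no self-matches

    B = matching_pairs(m)      # template matches at length m
    A = matching_pairs(m + 1)  # matches that survive at length m + 1
    return -np.log(A / B)
```

A regular signal such as a sampled sine yields a value near zero, while white noise yields a value around 2 for these parameters, which is the sense in which SampEn gauges signal complexity and predictability.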

  12. The Planetary and Space Simulation Facilities at DLR Cologne

    NASA Astrophysics Data System (ADS)

    Rabbow, Elke; Parpart, André; Reitz, Günther

    2016-06-01

    Astrobiology strives to increase our knowledge of the origin, evolution and distribution of life, on Earth and beyond. In past centuries, life has been found on Earth in environments with such extreme conditions that they were expected to be uninhabitable. Scientific investigations of the underlying metabolic mechanisms and strategies that lead to the high adaptability of these extremophile organisms increase our understanding of the evolution and distribution of life on Earth. Life as we know it depends on the availability of liquid water. Exposure of organisms to defined and complex extreme environmental conditions, in particular those that limit water availability, allows investigation of their survival mechanisms as well as estimation of the possibility of their distribution to, and survivability on, other celestial bodies. Space missions in low Earth orbit (LEO) give experiments access to complex environmental conditions not available on Earth, but studies on the molecular and cellular mechanisms of adaptation to these hostile conditions and on the limits of life cannot be performed exclusively in space experiments. Experimental space is limited and allows the investigation of only selected endpoints. An additional intensive ground-based program is required, with easy-to-access facilities capable of simulating space and planetary environments, in particular with a focus on temperature, pressure, atmospheric composition and short-wavelength solar ultraviolet radiation (UV). DLR Cologne operates a number of Planetary and Space Simulation facilities (PSI) where microorganisms from extreme terrestrial environments, or known for their high adaptability, are exposed for mechanistic studies. Space or planetary parameters are simulated individually or in combination in temperature-controlled vacuum facilities equipped with a variety of defined and calibrated irradiation sources.
    The PSI facilities support basic research and have been used recurrently for pre-flight test programs for several astrobiological space missions. Parallel experiments on the ground provided essential complementary data supporting the scientific interpretation of the data received from the space missions.

  13. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it treats the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may be challenging because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method applies sensitivity analysis at the process level, and aims to determine how sensitive the model output is to variations in the processes.
    Once the processes that exert the major influence on the output are identified, the causes of its variability can be found. Among the advantages of this approach: it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
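The process-level, one-at-a-time idea can be sketched as follows; the two lumped "processes" and the toy model form are purely hypothetical stand-ins (not BIOME-BGC), used only to show how scaling each process and normalizing the output change ranks process influence.

```python
import numpy as np

def process_sensitivity(model, base, delta=0.05):
    """Central-difference, process-level sensitivity: scale each process
    multiplier by (1 +/- delta) and return the normalized output change."""
    y0 = model(base)
    sens = {}
    for name in base:
        up, dn = dict(base), dict(base)
        up[name] = base[name] * (1 + delta)
        dn[name] = base[name] * (1 - delta)
        sens[name] = (model(up) - model(dn)) / (2 * delta * y0)
    return sens

# Hypothetical two-process "ecosystem" output (illustrative only):
def toy_model(p):
    return p["photosynthesis"] ** 1.5 * np.exp(-0.1 * p["respiration"])

sens = process_sensitivity(toy_model,
                           {"photosynthesis": 2.0, "respiration": 1.0})
```

Here the normalized sensitivities come out near 1.5 for photosynthesis and near -0.1 for respiration, identifying photosynthesis as the process whose variation dominates the output.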

  14. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    NASA Astrophysics Data System (ADS)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when an engineering analysis requires many executions of high-fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis, used to determine important core attribute variations due to input parameter variations, and uncertainty quantification, employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, state variable, or output response spaces by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct a ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm that renders the reduction with errors bounded by a user-defined tolerance, which addresses the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state of the art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g.
the input parameter space, the state space, and the response space, this dissertation offers a set of novel hybrid ROM algorithms which can be readily integrated into existing methods and offer higher computational efficiency and defensible accuracy of the reduced models. For example, the snapshots ROM algorithm is hybridized with the range finding algorithm to render reduction in the state space, e.g. the flux in reactor calculations. In another implementation, the perturbation theory used to calculate first-order derivatives of responses with respect to parameters is hybridized with a forward sensitivity analysis approach to render reduction in the parameter space. Reduction in the state and parameter spaces can be combined to render further reduction at the interface between different physics codes in a multi-physics model, with the accuracy quantified in a manner similar to the single-physics case. Although the proposed algorithms are generic in nature, we focus here on radiation transport models used in support of the design and analysis of nuclear reactor cores. In particular, we focus on replacing the traditional assembly calculations by ROM models to facilitate the generation of homogenized cross-sections for downstream core calculations. The implication is that assembly calculations could be done instantaneously, thereby precluding the need for the expensive evaluation of the few-group cross-sections for all possible core conditions. Given the generic nature of the algorithms, we make an effort to introduce the material in a general form so that non-nuclear engineers can benefit from this work.
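The range-finding ingredient mentioned above can be sketched with a standard adaptive randomized range finder; the matrix, probe-block size and tolerance below are illustrative assumptions, not the dissertation's reactor-physics implementation, but the error-controlled reduction it performs is the same in spirit.

```python
import numpy as np

def randomized_range_finder(A, tol=1e-8, p=10, seed=0):
    """Grow an orthonormal basis Q until ||A - Q Q^T A|| <= tol * ||A||,
    i.e. a reduction whose error is bounded by a user-defined tolerance."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q = np.empty((m, 0))
    while True:
        Y = A @ rng.standard_normal((n, p))    # random probes of the range
        Y -= Q @ (Q.T @ Y)                     # remove directions already captured
        Q = np.linalg.qr(np.hstack([Q, np.linalg.qr(Y)[0]]))[0]
        if np.linalg.norm(A - Q @ (Q.T @ A)) <= tol * np.linalg.norm(A):
            return Q
```

For an effectively low-rank snapshot matrix, the loop terminates with far fewer basis vectors than columns, which is exactly the kind of dimensionality reduction ROM exploits.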

  15. Order parameter aided efficient phase space exploration under extreme conditions

    NASA Astrophysics Data System (ADS)

    Samanta, Amit

    Physical processes in nature exhibit disparate time scales; for example, the time scales associated with processes like phase transitions, the various manifestations of creep, and the sintering of particles are often much longer than the time the system spends in metastable states. The transition times associated with such events are also orders of magnitude longer than the time scales associated with the vibration of atoms. Thus, an atomistic simulation of such transition events is a challenging task, and efficient exploration of configuration space and identification of metastable structures in condensed-phase systems is consequently challenging. In this talk I will illustrate how we can define a set of coarse-grained variables, or order parameters, and use these to systematically and efficiently steer a system containing thousands or millions of atoms over different parts of the configuration space. This order-parameter-aided sampling can be used to identify metastable states and transition pathways and to understand the mechanistic details of complex transition processes. I will illustrate how this sampling scheme can be used to study phase transition pathways and phase boundaries in prototypical materials, such as SiO2 and Cu, under high-pressure conditions. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  16. Demixing-stimulated lane formation in binary complex plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Du, C.-R.; Jiang, K.; Suetterlin, K. R.

    2011-11-29

    Recently, lane formation and phase separation have been reported in experiments with binary complex plasmas in the PK3-Plus laboratory onboard the International Space Station (ISS). Positive non-additivity of particle interactions is known to stimulate phase separation (demixing), but its effect on lane formation is unknown. In this work, we used Langevin dynamics (LD) simulations to probe the role of non-additive interactions in lane formation. The competition between laning and demixing leads to thicker lanes. Analysis based on anisotropic scaling indices reveals a crossover from a normal laning mode to a demixing-stimulated laning mode. Extensive numerical simulations enabled us to identify a critical value of the non-additivity parameter Δ for the crossover.

  17. Improved FFT-based numerical inversion of Laplace transforms via fast Hartley transform algorithm

    NASA Technical Reports Server (NTRS)

    Hwang, Chyi; Lu, Ming-Jeng; Shieh, Leang S.

    1991-01-01

    The disadvantages of numerical inversion of the Laplace transform via the conventional fast Fourier transform (FFT) are identified, and an improved method is presented to remedy them. The improved method is based on introducing a new integration step length Δω = π/(mT) for the trapezoidal-rule approximation of the Bromwich integral, in which a new parameter, m, is introduced to control the accuracy of the numerical integration. Naturally, this method leads to multiple sets of complex FFT computations. A new inversion formula is derived such that N equally spaced samples of the inverse Laplace transform function can be obtained by (m/2) + 1 sets of N-point complex FFT computations or by m sets of real fast Hartley transform (FHT) computations.
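A minimal numpy sketch of the basic variant (m = 1, i.e. step length Δω = π/T) may help fix ideas; it reproduces neither the paper's m-fold refinement nor the Hartley-transform reformulation, and the contour abscissa σ and sample counts are illustrative choices.

```python
import numpy as np

def ilt_fft(F, t_max, N=4096, sigma=1.0):
    """Trapezoidal-rule Bromwich inversion with step dw = pi/T via one
    N-point complex FFT; accurate on (0, t_max) for suitable sigma and N."""
    T = t_max                       # half-period of the trigonometric series
    n = np.arange(N)
    c = F(sigma + 1j * n * np.pi / T)
    c[0] *= 0.5                     # trapezoidal end-point weight
    t = n * (2.0 * T / N)           # so exp(1j*n*pi*t_k/T) = exp(2j*pi*n*k/N)
    f = np.exp(sigma * t) / T * np.real(np.fft.ifft(c) * N)
    return t, f
```

For example, inverting F(s) = 1/(s+1)^2 recovers f(t) = t e^{-t} to within about 1% over the sampled window; the paper's parameter m refines the step to π/(mT) to push this accuracy further at the cost of m sets of transforms.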

  18. Three-dimensional self-adaptive grid method for complex flows

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Deiwert, George S.

    1988-01-01

    A self-adaptive grid procedure for efficient computation of three-dimensional complex flow fields is described. The method is based on variational principles to minimize the energy of a spring system analogy which redistributes the grid points. Grid control parameters are determined by specifying maximum and minimum grid spacing. Multidirectional adaptation is achieved by splitting the procedure into a sequence of successive applications of a unidirectional adaptation. One-sided, two-directional constraints for orthogonality and smoothness are used to enhance the efficiency of the method. Feasibility of the scheme is demonstrated by application to a multinozzle, afterbody, plume flow field. Application of the algorithm for initial grid generation is illustrated by constructing a three-dimensional grid about a bump-like geometry.
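In one dimension the spring-system analogy reduces to a short relaxation loop; the weight function below is a hypothetical stand-in for a solution-gradient sensor (the paper's scheme is three-dimensional and multidirectional), but it shows how stiffness-weighted springs redistribute grid points.

```python
import numpy as np

def adapt_grid_1d(x, weight, n_iter=500):
    """Spring-analogy grid adaptation: each interior node settles where the
    two adjacent spring forces k_i * dx_i balance, so the local spacing ends
    up inversely proportional to the (solution-based) weight."""
    x = x.copy()
    for _ in range(n_iter):
        xm = 0.5 * (x[:-1] + x[1:])            # interval midpoints
        k = weight(xm)                          # spring stiffness per interval
        x[1:-1] = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
    return x

# Hypothetical "feature" near x = 0.5 that should attract grid points:
feature = lambda x: 1.0 + 10.0 * np.exp(-50.0 * (x - 0.5) ** 2)
x_adapted = adapt_grid_1d(np.linspace(0.0, 1.0, 21), feature)
```

Points cluster around x = 0.5 where the weight is large; the full method additionally enforces maximum and minimum spacing and sweeps the coordinate directions in sequence.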

  19. Forecasts and Warnings of Extreme Solar Storms at the Sun

    NASA Astrophysics Data System (ADS)

    Lundstedt, H.

    2015-12-01

    The most pressing space weather forecasts and warnings are those of the most intense solar flares and coronal mass ejections. However, in trying to develop these forecasts and warnings, we are confronted with many fundamental questions, among them: How can an observable measure of an extreme solar storm be defined? How extreme can a solar storm become, and how long is the build-up time? How should forecasts and warnings be made? Many have contributed to clarifying these general questions. In this presentation we will describe our latest results on the topological complexity of magnetic fields and the use of SDO SHARP parameters. The complexity concept will then be used to discuss the second question. Finally, we will describe probability estimates of extreme solar storms.

  20. In silico search for functionally similar proteins involved in meiosis and recombination in evolutionarily distant organisms.

    PubMed

    Bogdanov, Yuri F; Dadashev, Sergei Y; Grishaeva, Tatiana M

    2003-01-01

    Evolutionarily distant organisms have not only orthologs, but also nonhomologous proteins that build functionally similar subcellular structures. For instance, this is true of the protein components of the synaptonemal complex (SC), a universal ultrastructure that ensures the successful pairing and recombination of homologous chromosomes during meiosis. We aimed to develop a method for searching databases for genes that code for such nonhomologous but functionally analogous proteins. Advantage was taken of the ultrastructural parameters of the SC and the conformation of the SC proteins responsible for them. Proteins involved in the SC central space are known to be similar in secondary structure. Using published data, we found a highly significant correlation between the width of the SC central space and the length of the rod-shaped central domain of the mammalian and yeast intermediate proteins forming transversal filaments in the SC central space. Based on this, we suggested a method for searching the genome databases of distant organisms for genes whose virtual proteins meet the above correlation requirement. Our recent finding of the Drosophila melanogaster CG17604 gene, coding for a synaptonemal complex transversal filament protein, received experimental support from another lab. With the same strategy, we showed that the Arabidopsis thaliana and Caenorhabditis elegans genomes contain unique genes coding for such proteins.

  1. Functional seismic evaluation of hospitals

    NASA Astrophysics Data System (ADS)

    Guevara, L. T.

    2003-04-01

    Functional collapse of hospitals (FCH) occurs when a medical complex, or part of it, although with neither structural nor nonstructural damage, is unable to provide the services required for immediate attention to earthquake victims and for the recovery of the affected community. As is known, FCH during and after an earthquake is produced not only by damage to nonstructural components, but also by an inappropriate or deficient distribution of essential and supporting medical spaces. This paper presents some conclusions from an analysis of the traditional architectural schemes for the design and construction of hospitals in the 20th century, and some recommendations for establishing evaluation parameters for the remodeling and seismic upgrade of existing hospitals in seismic zones, based on the new concepts of: a) the relative location of each essential service (ES) within the medical complex; b) the capacity of each of these spaces to house the temporary activities required for the attention of a massive emergency (ME); c) the relationship between the ES and the supporting services (SS); d) the flexibility of transforming nonessential services into complementary spaces for the attention of an extraordinary number of victims; e) the dimensions and appropriateness of evacuation routes; and f) the appropriate supply and maintenance of water, electricity and vital-gas emergency installations.

  2. Extensional channel flow revisited: a dynamical systems perspective

    PubMed Central

    Meseguer, Alvaro; Mellibovsky, Fernando; Weidman, Patrick D.

    2017-01-01

    Extensional self-similar flows in a channel are explored numerically for arbitrary stretching–shrinking rates of the confining parallel walls. The present analysis embraces time integrations, and continuations of steady and periodic solutions unfolded in the parameter space. Previous studies focused on the analysis of branches of steady solutions for particular stretching–shrinking rates, although recent studies focused also on the dynamical aspects of the problems. We have adopted a dynamical systems perspective, analysing the instabilities and bifurcations the base state undergoes when increasing the Reynolds number. It has been found that the base state becomes unstable for small Reynolds numbers, and a transitional region including complex dynamics takes place at intermediate Reynolds numbers, depending on the wall acceleration values. The base flow instabilities are constitutive parts of different codimension-two bifurcations that control the dynamics in parameter space. For large Reynolds numbers, the restriction to self-similarity results in simple flows with no realistic behaviour, but the flows obtained in the transition region can be a valuable tool for the understanding of the dynamics of realistic Navier–Stokes solutions. PMID:28690413

  3. Applying the transfer matrix method to the estimation of the modal characteristics of the NASA Mini-Mast Truss

    NASA Technical Reports Server (NTRS)

    Shen, Ji-Yao; Taylor, Lawrence W., Jr.

    1994-01-01

    It is beneficial to use a distributed parameter model for large space structures because the approach minimizes the number of model parameters. Holzer's transfer matrix method provides a useful means to simplify and standardize the procedure for solving the system of partial differential equations. Any large space structure can be broken down into sub-structures with simple elastic and dynamical properties. For each single element, such as a beam, tether, or rigid body, we can derive the corresponding transfer matrix. Combining these element matrices enables the solution of the global system equations. The characteristic equation can then be formed by satisfying the appropriate boundary conditions, and the natural frequencies and mode shapes can be determined by searching for the roots of the characteristic equation at frequencies within the range of interest. This paper applies this methodology, together with the maximum likelihood estimation method, to refine the modal characteristics of the NASA Mini-Mast Truss by successively matching the theoretical response to the test data of the truss. The method is being applied to more complex configurations.
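The procedure can be illustrated on the simplest element, axial vibration of a uniform clamped-free rod with unit properties (a sketch, not the Mini-Mast model): each segment contributes a 2x2 transfer matrix acting on the state (displacement, axial force), the product of the matrices plus the boundary conditions yields the characteristic function, and a root search returns the natural frequencies.

```python
import numpy as np

def segment_T(omega, ell, EA=1.0, rhoA=1.0):
    """Field transfer matrix of a uniform rod segment for axial vibration,
    state vector (u, N) = (displacement, axial force)."""
    beta = omega * np.sqrt(rhoA / EA)
    return np.array([[np.cos(beta * ell), np.sin(beta * ell) / (EA * beta)],
                     [-EA * beta * np.sin(beta * ell), np.cos(beta * ell)]])

def char_eq(omega, n_seg=8, L=1.0):
    """Chain the segment matrices; clamped-free BCs (u(0) = 0, N(L) = 0)
    make the characteristic function the force-to-force entry T[1, 1]."""
    T = np.eye(2)
    for _ in range(n_seg):
        T = segment_T(omega, L / n_seg) @ T
    return T[1, 1]

def first_frequency(lo=0.1, hi=3.0, tol=1e-10):
    """Bisection on the characteristic equation within the band of interest."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if char_eq(lo) * char_eq(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For this rod the product of the segment matrices collapses analytically to the full-rod matrix, so the search recovers the exact first frequency ω₁ = π/2 (in units where the wave speed and length are 1); higher modes follow by scanning further frequency bands, exactly as described above.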

  4. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  5. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters; consequently, assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.

  6. Characterization of the NTPR and BD1 interacting domains of the human PICH-BEND3 complex.

    PubMed

    Pitchai, Ganesha P; Hickson, Ian D; Streicher, Werner; Montoya, Guillermo; Mesa, Pablo

    2016-08-01

    Chromosome integrity depends on DNA structure-specific processing complexes that resolve DNA entanglement between sister chromatids. If left unresolved, these entanglements can generate either chromatin bridging or ultrafine DNA bridging in the anaphase of mitosis. These bridge structures are defined by the presence of the PICH protein, which interacts with the BEND3 protein in mitosis. To obtain structural insights into PICH-BEND3 complex formation at the atomic level, their respective NTPR and BD1 domains were cloned, overexpressed and crystallized using 1.56 M ammonium sulfate as a precipitant at pH 7.0. The protein complex readily formed large hexagonal crystals belonging to space group P6122, with unit-cell parameters a = b = 47.28, c = 431.58 Å and with one heterodimer in the asymmetric unit. A complete multiwavelength anomalous dispersion (MAD) data set extending to 2.2 Å resolution was collected from a selenomethionine-labelled crystal at the Swiss Light Source.

  7. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes these testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
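
    Although the flight controllers themselves are not specified in the abstract, the flavor of a "very simple" adaptive design with a Lyapunov-based parameter update law can be sketched for a scalar plant. Everything below (plant, gains, update law) is a textbook illustration, not NASA's design:

```python
# Scalar model-reference adaptive control (MRAC) sketch.
#   Plant:           x'  = a*x + u        (a unknown; a = 2 is open-loop unstable)
#   Reference model: xm' = -xm + r        (desired closed-loop behaviour)
#   Control law:     u   = -k*x + r
#   Update law:      k'  = gamma * e * x, with tracking error e = x - xm
# A Lyapunov argument (V = e^2/2 + (k - a - 1)^2 / (2*gamma)) gives V' = -e^2.

def simulate(a=2.0, gamma=2.0, r=1.0, dt=1e-3, steps=20000):
    x = xm = k = 0.0
    for _ in range(steps):
        e = x - xm
        u = -k * x + r
        x += dt * (a * x + u)        # forward-Euler integration of the plant,
        xm += dt * (-xm + r)         # the reference model,
        k += dt * gamma * e * x      # and the adaptive gain
    return x, xm, k

x, xm, k = simulate()   # x tracks xm, and k approaches the ideal gain a + 1
```

    With a constant step reference, the tracking error decays to zero and the adaptive gain settles at the value an exact model-matching controller would need, without ever knowing `a`.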

  8. Systematic theoretical investigation of the zero-field splitting in Gd(III) complexes: Wave function and density functional approaches

    NASA Astrophysics Data System (ADS)

    Khan, Shehryar; Kubica-Misztal, Aleksandra; Kruk, Danuta; Kowalewski, Jozef; Odelius, Michael

    2015-01-01

    The zero-field splitting (ZFS) of the electronic ground state in paramagnetic ions is a sensitive probe of the variations in the electronic and molecular structure with an impact on fields ranging from fundamental physical chemistry to medical applications. A detailed analysis of the ZFS in a series of symmetric Gd(III) complexes is presented in order to establish the applicability and accuracy of computational methods using multiconfigurational complete-active-space self-consistent field wave functions and of density functional theory calculations. The various computational schemes are then applied to larger complexes Gd(III)DOTA(H2O)-, Gd(III)DTPA(H2O)2-, and Gd(III)(H2O)83+ in order to analyze how the theoretical results compare to experimentally derived parameters. In contrast to approximations based on density functional theory, the multiconfigurational methods produce results for the ZFS of Gd(III) complexes on the correct order of magnitude.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saijo, Shinya; Sato, Takao; Kumasaka, Takashi

    The reaction center–light-harvesting 1 core complex from R. viridis was crystallized and X-ray diffraction data were collected to 8.0 Å resolution. The reaction center–light-harvesting 1 (RC–LH1) core complex is the photosynthetic apparatus in the membrane of the purple photosynthetic bacterium Rhodopseudomonas viridis. The RC is surrounded by an LH1 complex that is constituted of oligomers of three types of apoproteins (α, β and γ chains) with associated bacteriochlorophyll b and carotenoid. It has been crystallized by the sitting-drop vapour-diffusion method. A promising crystal diffracted to beyond 8.0 Å resolution. It belonged to space group P1, with unit-cell parameters a = 141.4, b = 136.9, c = 185.3 Å, α = 104.6, β = 94.0, γ = 110.7°. A Patterson function calculated using data between 15.0 and 8.0 Å resolution suggested that the LH1 complex is distributed with quasi-16-fold rotational symmetry around the RC.

  10. Death and revival of chaos.

    PubMed

    Kaszás, Bálint; Feudel, Ulrike; Tél, Tamás

    2016-12-01

    We investigate the death and revival of chaos under the impact of a monotonic time-dependent forcing that changes its strength at a non-negligible rate. Starting from a chaotic attractor, it is found that the complexity of the dynamics remains very pronounced even when the driving amplitude has decayed to rather small values. When, after the death of chaos, the strength of the forcing is increased again at the same rate of change, chaos is found to revive but with a different history. This leads to the appearance of a hysteresis in the complexity of the dynamics. To characterize these dynamics, the concept of snapshot attractors is used, and the corresponding ensemble approach proves to be superior to a single-trajectory description, which turns out to be nonrepresentative. The death (revival) of chaos is manifested in a drop (jump) of the standard deviation of one of the phase-space coordinates of the ensemble; the details of this chaos-nonchaos transition depend on the ratio of the characteristic times of the amplitude change and of the internal dynamics. It is demonstrated that chaos cannot die out as long as underlying transient chaos is present in the parameter space. As a condition for a "quasistatically slow" switch-off, we derive an inequality which cannot be fulfilled in practice over extended parameter ranges where transient chaos is present. These observations need to be taken into account when discussing the implications of "climate change scenarios" in any nonlinear dynamical system.
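
    The snapshot-attractor diagnostic described above, an ensemble standard deviation that drops when chaos dies, can be reproduced in miniature with a logistic map whose parameter is ramped at a fixed rate. This toy system is an assumption of the sketch, not the system studied in the paper:

```python
import random
import statistics

def ensemble_spread(r_start, r_end, steps, n=500, seed=1):
    """Evolve an ensemble of logistic maps x -> r*x*(1-x) while r drifts
    at a fixed rate; return the ensemble std early on and at the end."""
    rng = random.Random(seed)
    xs = [rng.uniform(0.2, 0.8) for _ in range(n)]
    early = None
    for step in range(steps):
        r = r_start + (r_end - r_start) * step / (steps - 1)
        xs = [r * x * (1.0 - x) for x in xs]
        if step == 50:          # snapshot taken while still fully chaotic
            early = statistics.pstdev(xs)
    return early, statistics.pstdev(xs)

# Ramp from fully developed chaos (r = 4.0) into the stable fixed-point
# regime (r = 2.8): the snapshot spread collapses as chaos dies.
before, after = ensemble_spread(4.0, 2.8, 2000)
```

    Ramping `r` back up with the same rate and recording the spread along the way would exhibit the hysteresis the paper describes; here only the "death" leg is shown.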

  11. Fabrication and application of heterogeneous printed mouse phantoms for whole animal optical imaging

    PubMed Central

    Bentz, Brian Z.; Chavan, Anmol V.; Lin, Dergan; Tsai, Esther H. R.; Webb, Kevin J.

    2017-01-01

    This work demonstrates the usefulness of 3D printing for optical imaging applications. Progress in developing optical imaging for biomedical applications requires customizable and often complex objects for testing and evaluation. There is therefore high demand for what have become known as tissue-simulating “phantoms.” We present a new optical phantom fabricated using inexpensive 3D printing methods with multiple materials, allowing for the placement of complex inhomogeneities in complex or anatomically realistic geometries, as opposed to previous phantoms, which were limited to simple shapes formed by molds or machining. We use diffuse optical imaging to reconstruct optical parameters in 3D space within a printed mouse to show the applicability of the phantoms for developing whole animal optical imaging methods. This phantom fabrication approach is versatile, can be applied to optical imaging methods besides diffusive imaging, and can be used in the calibration of live animal imaging data. PMID:26835763

  12. Difficult Decisions Made Easier

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goals of this research were manifold: performance and risk analyses of complex, multiphase missions, such as the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.

  13. Impact of inelastic processes on the chaotic dynamics of a Bose-Einstein condensate trapped into a moving optical lattice

    NASA Astrophysics Data System (ADS)

    Tchatchueng, Sylvin; Siewe Siewe, Martin; Marie Moukam Kakmeni, François; Tchawoua, Clément

    2017-03-01

    We investigate the dynamics of a Bose-Einstein condensate with attractive two-body and repulsive three-body interactions between atoms, trapped in a moving optical lattice and subjected to some inelastic processes (a linear atomic feeding and two dissipative terms related to dipolar relaxation and three-body recombination). We are interested in finding out how the nonconservative terms mentioned above affect the dynamical behaviour of the condensate, and how they can be used to control possible chaotic dynamics. Seeking the wave function of the condensate in the form of Bloch waves, we find that the real amplitude of the condensate is governed by an integro-differential equation. As a theoretical tool for predicting homoclinic and heteroclinic chaos, we use the Melnikov method, which provides two Melnikov functions related to homoclinic and heteroclinic bifurcations. Applying the Melnikov criterion, some regions of instability are plotted in the parameter space and reveal complex dynamics (solitonic stable solutions, weak and strong instabilities leading to collapse, growth-collapse cycles and finally chaotic oscillations). The parameter-space analysis shows that coupling the optical intensity with the parameters related to atomic feeding and atomic losses (dissipation) as control parameters can help to reduce or annihilate the chaotic behaviour of the condensate. Moreover, the theoretical study reveals that a certain ratio between the atomic feeding parameter and the parameters related to dissipation is required for chaotic oscillations to occur in the dynamics of the condensate. The theoretical predictions are verified by numerical simulations (Poincaré sections), which confirm the reliability of our analytical treatment.

  14. The Behavioral Space of Zebrafish Locomotion and Its Neural Network Analog.

    PubMed

    Girdhar, Kiran; Gruebele, Martin; Chemla, Yann R

    2015-01-01

    How simple is the underlying control mechanism for the complex locomotion of vertebrates? We explore this question for the swimming behavior of zebrafish larvae. A parameter-independent method, similar to that used in studies of worms and flies, is applied to analyze swimming movies of fish. The motion itself yields a natural set of fish "eigenshapes" as coordinates, rather than the experimenter imposing a choice of coordinates. Three eigenshape coordinates are sufficient to construct a quantitative "postural space" that captures >96% of the observed zebrafish locomotion. Viewed in postural space, swim bouts are manifested as trajectories consisting of cycles of shapes repeated in succession. To classify behavioral patterns quantitatively and to understand behavioral variations among an ensemble of fish, we construct a "behavioral space" using multi-dimensional scaling (MDS). This method turns each cycle of a trajectory into a single point in behavioral space, and clusters points based on behavioral similarity. Clustering analysis reveals three known behavioral patterns-scoots, turns, rests-but shows that these do not represent discrete states, but rather extremes of a continuum. The behavioral space not only classifies fish by their behavior but also distinguishes fish by age. With the insight into fish behavior from postural space and behavioral space, we construct a two-channel neural network model for fish locomotion, which produces strikingly similar postural space and behavioral space dynamics compared to real zebrafish.
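
    The "eigenshape" construction is, at heart, principal component analysis of posture vectors. As a stand-in for the paper's pipeline (which is not given in the abstract), the following sketch extracts the first principal axis of synthetic 3-D "posture" samples by power iteration; all data and names are hypothetical:

```python
import random

def top_eigvec(mat, iters=200):
    """Leading eigenvector of a small symmetric matrix by power iteration."""
    n = len(mat)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(mat[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

def first_principal_axis(samples):
    """Mean-centre the samples and return the leading covariance eigenvector."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    centred = [[s[j] - mean[j] for j in range(d)] for s in samples]
    cov = [[sum(c[i] * c[j] for c in centred) / n for j in range(d)]
           for i in range(d)]
    return top_eigvec(cov)

# Synthetic 3-D "postures" that truly vary along a single direction (1, 0.5, 0):
rng = random.Random(0)
data = [[t, 0.5 * t + rng.gauss(0, 0.01), rng.gauss(0, 0.01)]
        for t in (rng.uniform(-1, 1) for _ in range(300))]
axis = first_principal_axis(data)   # recovers ~(0.894, 0.447, 0) up to sign
```

    In the paper, three such axes capture >96% of the posture variance; each swim-bout cycle traced in those coordinates then becomes one point of the MDS "behavioral space".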

  15. The Behavioral Space of Zebrafish Locomotion and Its Neural Network Analog

    PubMed Central

    Girdhar, Kiran; Gruebele, Martin; Chemla, Yann R.

    2015-01-01

    How simple is the underlying control mechanism for the complex locomotion of vertebrates? We explore this question for the swimming behavior of zebrafish larvae. A parameter-independent method, similar to that used in studies of worms and flies, is applied to analyze swimming movies of fish. The motion itself yields a natural set of fish "eigenshapes" as coordinates, rather than the experimenter imposing a choice of coordinates. Three eigenshape coordinates are sufficient to construct a quantitative "postural space" that captures >96% of the observed zebrafish locomotion. Viewed in postural space, swim bouts are manifested as trajectories consisting of cycles of shapes repeated in succession. To classify behavioral patterns quantitatively and to understand behavioral variations among an ensemble of fish, we construct a "behavioral space" using multi-dimensional scaling (MDS). This method turns each cycle of a trajectory into a single point in behavioral space, and clusters points based on behavioral similarity. Clustering analysis reveals three known behavioral patterns—scoots, turns, rests—but shows that these do not represent discrete states, but rather extremes of a continuum. The behavioral space not only classifies fish by their behavior but also distinguishes fish by age. With the insight into fish behavior from postural space and behavioral space, we construct a two-channel neural network model for fish locomotion, which produces strikingly similar postural space and behavioral space dynamics compared to real zebrafish. PMID:26132396

  16. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    PubMed

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex.
We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes, to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
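
    The separable "where"/"what" structure described above can be sketched directly: one shared Gaussian pooling field over space, and one scalar weight per feature map. The toy maps and parameter values below are invented for illustration and are not the authors' implementation:

```python
import math

def gaussian_pool(fmap, x0, y0, sigma):
    """Pool one feature map under a normalized Gaussian 'where' field."""
    h, w = len(fmap), len(fmap[0])
    g = [[math.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
          for x in range(w)] for y in range(h)]
    z = sum(sum(row) for row in g)
    return sum(g[y][x] * fmap[y][x] for y in range(h) for x in range(w)) / z

def fwrf_predict(feature_maps, where, what):
    """Voxel prediction: one shared 'where' field, one 'what' weight per map."""
    x0, y0, sigma = where
    return sum(wk * gaussian_pool(f, x0, y0, sigma)
               for wk, f in zip(what, feature_maps))

# Two toy 4x4 feature maps; the pooling field sits over the bright corner of f0.
f0 = [[1.0 if (y < 2 and x < 2) else 0.0 for x in range(4)] for y in range(4)]
f1 = [[0.5] * 4 for _ in range(4)]
pred = fwrf_predict([f0, f1], where=(0.5, 0.5, 1.0), what=[2.0, 1.0])
```

    Because the pooling field is normalized, a constant feature map contributes exactly its constant value times its "what" weight regardless of the "where" parameters, which is why the two parameter sets can be estimated separably.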

  17. Trapping-charging ability and electrical properties study of amorphous insulator by dielectric spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mekni, Omar, E-mail: omarmekni-lmop@yahoo.fr; Arifa, Hakim; Askri, Besma

    2014-09-14

    Usually, the trapping phenomenon in insulating materials is studied by injecting charges using a Scanning Electron Microscope. In this work, we use the dielectric spectroscopy technique for showing a correlation between the dielectric properties and the trapping-charging ability of insulating materials. The evolution of the complex permittivity (real and imaginary parts) as a function of frequency and temperature reveals different types of relaxation according to the trapping ability of the material. We found that the space charge relaxation at low frequencies affects the real part of the complex permittivity ε′ and the dissipation factor tan(δ). We prove that the evolution of the imaginary part of the complex permittivity against temperature, ε″ = f(T), reflects the phenomenon of charge trapping and detrapping as well as the evolution of the trapped charge Qp(T). We also use the electric modulus formalism to better identify the space charge relaxation. The investigation of the trapping or conductive nature of insulating materials was mainly made by studying the activation energy and conductivity. The conduction and trapping parameters are determined using the Correlated Barrier Hopping (CBH) model in order to confirm the relation between electrical properties and charge trapping ability.

  18. Physical, Spatial, and Molecular Aspects of Extracellular Matrix of In Vivo Niches and Artificial Scaffolds Relevant to Stem Cells Research

    PubMed Central

    Akhmanova, Maria; Osidak, Egor; Domogatsky, Sergey; Rodin, Sergey; Domogatskaya, Anna

    2015-01-01

    Extracellular matrix can influence stem cell choices, such as self-renewal, quiescence, migration, proliferation, phenotype maintenance, differentiation, or apoptosis. Three aspects of extracellular matrix were extensively studied during the last decade: physical properties, spatial presentation of adhesive epitopes, and molecular complexity. Over 15 different parameters have been shown to influence stem cell choices. Physical aspects include stiffness (or elasticity), viscoelasticity, pore size, porosity, and the amplitude and frequency of static and dynamic deformations applied to the matrix. Spatial aspects include scaffold dimensionality (2D or 3D) and thickness; cell polarity; area, shape, and microscale topography of the cell adhesion surface; epitope concentration, epitope clustering characteristics (number of epitopes per cluster, spacing between epitopes within a cluster, spacing between separate clusters, cluster patterns, and level of disorder in epitope arrangement), and nanotopography. Biochemical characteristics of natural extracellular matrix molecules concern the diversity and structural complexity of matrix molecules, the affinity and specificity of epitope interaction with cell receptors, the role of non-affinity domains, the complexity of supramolecular organization, and co-signaling by growth factors or matrix epitopes. Synergy between several matrix aspects enables stem cells to retain their function in vivo and may be a key to the generation of long-term, robust, and effective in vitro stem cell culture systems. PMID:26351461

  19. Towards General Evaluation of Intelligent Systems: Lessons Learned from Reproducing AIQ Test Results

    NASA Astrophysics Data System (ADS)

    Vadinský, Ondřej

    2018-03-01

    This paper attempts to replicate the results of evaluating several artificial agents using the Algorithmic Intelligence Quotient test originally reported by Legg and Veness. Three experiments were conducted: one using default settings, one in which the action space was varied, and one in which the observation space was varied. While the performance of freq, Q0, Qλ, and HLQλ corresponded well with the original results, the resulting values differed when using MC-AIXI. Varying the observation space seems to have no qualitative impact on the results as reported, while (contrary to the original results) varying the action space seems to have some impact. An analysis of the impact of modifying the parameters of MC-AIXI on its performance in the default settings was carried out with the help of data-mining techniques used to identify highly performing configurations. Overall, the Algorithmic Intelligence Quotient test seems to be reliable; however, as a general artificial intelligence evaluation method it has several limits. The test is dependent on the chosen reference machine and is also sensitive to changes in its settings. It brings out some differences among agents; however, since the agents are limited in size, the test setting may not yet be sufficiently complex. A demanding parameter sweep is needed to thoroughly evaluate configurable agents, which, together with the test format, further highlights the computational requirements of an agent. These and other issues are discussed in the paper along with proposals suggesting how to alleviate them. An implementation of some of the proposals is also demonstrated.

  20. Models and applications for space weather forecasting and analysis at the Community Coordinated Modeling Center.

    NASA Astrophysics Data System (ADS)

    Kuznetsova, Maria

    The Community Coordinated Modeling Center (CCMC, http://ccmc.gsfc.nasa.gov) was established at the dawn of the new millennium as a long-term, flexible solution to the problem of transitioning progress in space environment modeling into operational space weather forecasting. CCMC hosts an expanding collection of state-of-the-art space weather models developed by the international space science community. Over the years the CCMC has acquired unique experience in preparing complex models and model chains for operational environments and in developing and maintaining custom displays and powerful web-based systems and tools ready to be used by researchers, space weather service providers and decision makers. In support of the space weather needs of NASA users, CCMC is developing highly tailored applications and services that target specific orbits or locations in space and is partnering with NASA mission specialists on linking CCMC space environment modeling with impacts on biological and technological systems in space. Confidence assessment of model predictions is an essential element of space environment modeling. CCMC facilitates interaction between model owners and users in defining physical parameters and metrics formats relevant to specific applications, and leads community efforts to quantify the ability of models to simulate and predict space environment events. Interactive on-line model validation systems developed at CCMC make validation a seamless part of the model development cycle. The talk will showcase innovative solutions for space weather research, validation, anomaly analysis and forecasting, and review ongoing community-wide model validation initiatives enabled by CCMC applications.

  1. Hyperspectral imaging simulation of object under sea-sky background

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring, search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating the spectral image of an object under a sea scene is proposed. By developing an imaging simulation model that accounts for the object, background, atmospheric conditions and sensor, it is possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in a complex sea scene. First, the sea scattering model is established based on the Phillips sea spectral model, rough-surface scattering theory and the volume scattering characteristics of water. Measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray-tracing method is used to calculate the composite scattering from the sea-surface object and the spectral image. Finally, the object spectrum is obtained by space transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental, object and sensor parameters, providing a tool for payload demonstration and algorithm development.

  2. Scientific and technical complex for modeling, researching and testing of rocket-space vehicles’ electric power installations

    NASA Astrophysics Data System (ADS)

    Bezruchko, Konstantin; Davidov, Albert

    2009-01-01

    This article describes the scientific and technical complex for modeling, researching and testing rocket-space vehicles' power installations that was created in the Power Source Laboratory of the National Aerospace University "KhAI". The complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time costs of modeling, researching and testing rocket-space vehicles' power installations. Using it, problems of designing and researching such power installations can be solved efficiently, and experimental studies of physical processes, as well as tests of the solar and chemical batteries of rocket-space complexes and space vehicles, can be carried out. The complex also supports accelerated testing, diagnostics, lifetime monitoring and restoration of chemical accumulators for the power supply systems of rocket-space vehicles.

  3. Space-based magnetometers

    NASA Astrophysics Data System (ADS)

    Acuña, Mario H.

    2002-11-01

    The general characteristics and system level concepts for space-based magnetometers are presented to illustrate the instruments, principles, and tools involved in making accurate magnetic field measurements in space. Special consideration is given to the most important practical problems that need to be solved to ensure the accuracy of the measurements and their overall impact on system design and mission costs. Several types of instruments used to measure magnetic fields aboard spacecraft and their capabilities and limitations are described according to whether they measure scalar or vector fields. The very large dynamic range associated with magnetic fields of natural origin generally dictates the use of optimized designs for each particular space mission although some wide-range, multimission magnetometers have been developed and used. Earth-field magnetic mapping missions are the most demanding in terms of absolute accuracy and resolution, approaching <1 part in 100 000 in magnitude and a few arcsec in direction. The difficulties of performing sensitive measurements aboard spacecraft, which may not be magnetically clean, represent a fundamental problem which must be addressed immediately at the planning stages of any space mission that includes these measurements. The use of long, deployable booms to separate the sensors from the sources of magnetic contamination, and their impact on system design are discussed. The dual magnetometer technique, which allows the separation of fields of external and spacecraft origin, represents an important space magnetometry tool which can result in significant savings in complex contemporary spacecraft built with minimum magnetic constraints. Techniques for in-flight estimation of magnetometer biases and sensor alignment are discussed briefly, and highlight some basic considerations within the scope and complexity of magnetic field data processing and reduction. 
The emerging field of space weather is also discussed, including the essential role that space-based magnetic field measurements play in this complex science, which is just in its infancy. Finally, some considerations for the future of space-based magnetometers are presented. Miniature, mass produced sensors based on magnetoresistance effects and micromachined structures have made significant advances in sensitivity but have yet to reach the performance level required for accurate space measurements. The miniaturization of spacecraft and instruments to reduce launch costs usually results in significantly increased magnetic contamination problems and degraded instrument performance parameters, a challenge that has yet to be solved satisfactorily for "world-class" science missions. The rapidly disappearing manufacturing capabilities for high-grade, low noise, soft magnetic materials of the Permalloy family is a cause of concern for the development of high performance fluxgate magnetometers for future space missions.

  4. Co-evolving prisoner's dilemma: Performance indicators and analytic approaches

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Choi, C. W.; Li, Y. S.; Xu, C.; Hui, P. M.

    2017-02-01

    Understanding the intrinsic relation between the dynamical processes in a co-evolving network and the necessary ingredients in formulating a reliable theory is an important question and a challenging task. Using two slightly different definitions of the performance indicator in the context of a co-evolving prisoner's dilemma game, it is shown that very different cooperative levels result and that theories of different complexity are required to understand the key features. When the payoff per opponent is used as the indicator (Case A), the non-cooperative strategy has an edge and dominates a large part of the parameter space formed by the cutting-and-rewiring probability and the strategy imitation probability. When the payoff from all opponents is used (Case B), the cooperative strategy has an edge and dominates the parameter space. Two distinct phases, one homogeneous and dynamical and another inhomogeneous and static, emerge, and the phase boundary in the parameter space is studied in detail. A simple theory assuming an average competing environment for cooperative agents and another for non-cooperative agents is shown to perform well in Case A. The same theory, however, fails badly for Case B. It is necessary to incorporate more spatial correlation into a theory for Case B. We show that the local configuration approximation, which takes into account the different competing environments for agents with different strategies and degrees, is needed to give reliable results for Case B. The results illustrate that formulating a proper theory requires both a conceptual understanding of the effects of the adaptive processes in the problem and a delicate balance between simplicity and accuracy.
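
    The two performance indicators are easy to state concretely: for an agent holding a list of per-opponent payoffs, Case A averages while Case B sums, so a high-degree cooperator can out-score a low-degree defector under B but not under A. The payoff values below are illustrative (reward R = 1.0, temptation T = 1.5), not the paper's exact parameters:

```python
def payoff_per_opponent(payoffs):      # Case A indicator
    return sum(payoffs) / len(payoffs)

def total_payoff(payoffs):             # Case B indicator
    return sum(payoffs)

# A cooperator with eight cooperating neighbours (reward R = 1.0 per game)
# versus a defector exploiting one cooperator (temptation T = 1.5):
coop_payoffs = [1.0] * 8
defector_payoffs = [1.5]

case_a = (payoff_per_opponent(coop_payoffs), payoff_per_opponent(defector_payoffs))
case_b = (total_payoff(coop_payoffs), total_payoff(defector_payoffs))
# Under Case A the defector scores higher; under Case B the cooperator does.
```

    This degree dependence is exactly why the simple mean-field theory suffices for Case A but the local configuration approximation, which tracks degree, is needed for Case B.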

  5. Fusion of AIRSAR and TM Data for Parameter Classification and Estimation in Dense and Hilly Forests

    NASA Technical Reports Server (NTRS)

    Moghaddam, Mahta; Dungan, J. L.; Coughlan, J. C.

    2000-01-01

    The expanded remotely sensed data space consisting of coincident radar backscatter and optical reflectance data provides for a more complete description of the Earth surface. This is especially useful where many parameters are needed to describe a certain scene, such as in the presence of dense and complex-structured vegetation or where there is considerable underlying topography. The goal of this paper is to use a combination of radar and optical data to develop a methodology for parameter classification for dense and hilly forests, and further, class-specific parameter estimation. The area to be used in this study is the H. J. Andrews Forest in Oregon, one of the Long-Term Ecological Research (LTER) sites in the US. This area consists of various dense old-growth conifer stands, and contains significant topographic relief. The Andrews forest has been the subject of many ecological studies over several decades, resulting in an abundance of ground measurements. Recently, biomass and leaf-area index (LAI) values for approximately 30 reference stands have also become available which span a large range of those parameters. The remote sensing data types to be used are the C-, L-, and P-band polarimetric radar data from the JPL airborne SAR (AIRSAR), the C-band single-polarization data from the JPL topographic SAR (TOPSAR), and the Thematic Mapper (TM) data from Landsat, all acquired in late April 1998. The total number of useful independent data channels from the AIRSAR is 15 (three frequencies, each with three unique polarizations and amplitude and phase of the like-polarized correlation), from the TOPSAR is 2 (amplitude and phase of the interferometric correlation), and from the TM is 6 (the thermal band is not used). The range pixel spacing of the AIRSAR is 3.3m for C- and L-bands and 6.6m for P-band. The TOPSAR pixel spacing is 10m, and the TM pixel size is 30m. 
To achieve parameter classification, first a number of parameters are defined which are of interest to ecologists for forest process modeling. These parameters include total biomass, leaf biomass, LAI, and tree height. The remote sensing data from radar and TM are used to formulate a multivariate analysis problem given the ground measurements of the parameters. Each class of each parameter is defined by a probability density function (pdf), the spread of which defines the range of that class. High classification accuracy results from situations in which little overlap occurs between pdfs. Classification results provide the basis for the future work of class-specific parameter estimation using radar and optical data. This work was performed in part by the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, and in part by the NASA Ames Research Center, Moffett Field, CA, both under contract from the National Aeronautics and Space Administration.
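
A minimal sketch of the class-definition step described in this record: each class is represented by a probability density function over a data channel, and a pixel is assigned to the class whose pdf is largest at the observed value, so classification accuracy degrades as the pdfs overlap. The class names, means, and spreads below are purely illustrative, not values from the study.

```python
import math

# Hypothetical 1-D Gaussian class-conditional densities for a single data
# channel (e.g., a backscatter value in dB); values are illustrative only.
classes = {
    "low_biomass":  (-12.0, 1.5),   # (mean, standard deviation)
    "high_biomass": (-7.0, 1.5),
}

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(x):
    # maximum-likelihood assignment: pick the class whose pdf is largest at x
    return max(classes, key=lambda c: gaussian_pdf(x, *classes[c]))

print(classify(-11.0))  # nearer the low-biomass mean
print(classify(-8.0))   # nearer the high-biomass mean
```

With equal spreads, the decision boundary sits midway between the class means; the 1.5 dB spread here is what would be estimated from the ground-measured reference stands in practice.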

  6. Insectivorous bats respond to vegetation complexity in urban green spaces.

    PubMed

    Suarez-Rubio, Marcela; Ille, Christina; Bruckner, Alexander

    2018-03-01

Structural complexity is known to determine habitat quality for insectivorous bats, but how bats respond to habitat complexity in highly modified areas such as urban green spaces has been little explored. Furthermore, it is uncertain whether a recently developed measure of structural complexity is as effective as field-based surveys when applied to urban environments. We assessed whether an image-derived measure of structural complexity, the mean information gain (MIG), was as effective as, or more effective than, field-based descriptors in this environment, and evaluated the response of insectivorous bats to structural complexity in urban green spaces. Bat activity and species richness were assessed with ultrasonic devices at 180 locations within green spaces in Vienna, Austria. Vegetation complexity was assessed using 17 field-based descriptors and by calculating MIG from digital images. Total bat activity and species richness decreased with increasing structural complexity of canopy cover, suggesting maneuverability and echolocation (sensorial) challenges for bat species using the canopy for flight and foraging. The negative response of functional groups to increased complexity was stronger for open-space foragers than for edge-space foragers. Nyctalus noctula, a species foraging in open space, showed a negative response to structural complexity, whereas Pipistrellus pygmaeus, an edge-space forager, was positively influenced by the number of trees. Our results show that MIG is a useful, time- and cost-effective tool for measuring habitat complexity that complements field-based descriptors. The response of insectivorous bats to structural complexity was group- and species-specific, which highlights the need for manifold management strategies (e.g., increasing or reinstating the extent of ground vegetation cover) to fulfill different species' requirements and to conserve insectivorous bats in urban green spaces.

  7. Investigation into the influence of laser energy input on selective laser melted thin-walled parts by response surface method

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, Jian; Pang, Zhicong; Wu, Weihui

    2018-04-01

Selective laser melting (SLM) provides a feasible way to manufacture complex thin-walled parts directly; however, the energy input during the SLM process, which derives from the laser power, scanning speed, layer thickness, scanning space, etc., has a great influence on the thin wall's quality. The aim of this work is to relate the thin wall's parameters (responses), namely track width, surface roughness and hardness, to the process parameters considered in this research (laser power, scanning speed and layer thickness) and to find the optimal manufacturing conditions. Design of experiments (DoE) was used, implementing a central composite design, to achieve better manufacturing quality. Mathematical models derived from the statistical analysis were used to establish the relationships between the process parameters and the responses. Also, the effects of the process parameters on each response were determined. Then, a numerical optimization was performed to find the optimal process settings at which the quality features are at their desired values. Based on this study, the relationship between the process parameters and the SLMed thin-walled structure was revealed, and thus the corresponding optimal process parameters can be used to manufacture thin-walled parts with high quality.
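
The response-surface workflow this record describes (fit a polynomial model to designed experiments, then locate its optimum numerically) can be sketched for a single coded factor. The power levels and track widths below are made-up numbers for illustration, not data from the paper.

```python
# Response-surface sketch: fit width = b0 + b1*x + b2*x**2 by ordinary least
# squares on coded factor levels (standard CCD practice), then take the
# stationary point of the fitted quadratic. All data values are hypothetical.
powers = [100, 125, 150, 175, 200]     # laser power, W (illustrative)
widths = [98, 112, 120, 121, 115]      # measured track width, um (illustrative)
coded = [(p - 150) / 25 for p in powers]   # coded levels -2..2

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[p] = m[p], m[c]
        for r in range(c + 1, 3):
            f = m[r][c] / m[c][c]
            m[r] = [mr - f * mc for mr, mc in zip(m[r], m[c])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

X = [[1.0, x, x * x] for x in coded]
# normal equations X^T X b = X^T y
XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
Xty = [sum(X[i][r] * widths[i] for i in range(len(X))) for r in range(3)]
b0, b1, b2 = solve3(XtX, Xty)
optimum_power = 150 + 25 * (-b1 / (2 * b2))   # stationary point, back in watts
print(round(optimum_power, 1))
```

In the paper's setting the same fit is done over three factors with interaction terms, and the "desired value" optimization runs over the full fitted surface rather than a single stationary point.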

  8. FitSKIRT: genetic algorithms to automatically fit dusty galaxies with a Monte Carlo radiative transfer code

    NASA Astrophysics Data System (ADS)

    De Geyter, G.; Baes, M.; Fritz, J.; Camps, P.

    2013-02-01

We present FitSKIRT, a method to efficiently fit radiative transfer models to UV/optical images of dusty galaxies. These images have the advantage that they have better spatial resolution compared to FIR/submm data. FitSKIRT uses the GAlib genetic algorithm library to optimize the output of the SKIRT Monte Carlo radiative transfer code. Genetic algorithms prove to be a valuable tool in handling the multi-dimensional search space as well as the noise induced by the random nature of the Monte Carlo radiative transfer code. FitSKIRT is tested on artificial images of a simulated edge-on spiral galaxy, where we gradually increase the number of fitted parameters. We find that we can recover all model parameters, even if all 11 model parameters are left unconstrained. Finally, we apply the FitSKIRT code to a V-band image of the edge-on spiral galaxy NGC 4013. This galaxy has been modeled previously by other authors using different combinations of radiative transfer codes and optimization methods. Given the different models and techniques and the complexity and degeneracies in the parameter space, we find reasonable agreement between the different models. We conclude that the FitSKIRT method allows comparison between different models and geometries in a quantitative manner and minimizes the need for human intervention and bias. The high level of automation makes it an ideal tool to use on larger sets of observed data.
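
A toy stand-in for the genetic-algorithm fitting loop this record describes: here the "model" is a two-parameter exponential profile rather than a radiative transfer simulation, and the GA (truncation selection, uniform crossover, Gaussian mutation) recovers the parameters from noisy synthetic data. Everything below is an illustrative sketch, not the GAlib/SKIRT implementation.

```python
import math, random

random.seed(1)

# Synthetic "observation": y = a*exp(-x/h) plus noise, with a=5, h=2.
true_a, true_h = 5.0, 2.0
xs = [0.2 * i for i in range(50)]
data = [true_a * math.exp(-x / true_h) + random.gauss(0, 0.05) for x in xs]

def fitness(ind):
    a, h = ind
    return -sum((a * math.exp(-x / h) - d) ** 2 for x, d in zip(xs, data))

# initial population drawn uniformly over a broad search box
pop = [(random.uniform(0.1, 10), random.uniform(0.1, 10)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                  # truncation selection keeps the elite
    pop = parents[:]
    while len(pop) < 40:
        pa, pb = random.sample(parents, 2)
        child = tuple(random.choice(g) + random.gauss(0, 0.1)  # crossover + mutation
                      for g in zip(pa, pb))
        pop.append((max(child[0], 0.01), max(child[1], 0.01)))  # keep params positive

best_a, best_h = max(pop, key=fitness)
print(round(best_a, 2), round(best_h, 2))
```

The fitness call is where FitSKIRT instead runs a full Monte Carlo radiative transfer simulation, which is also why GAs are attractive there: they tolerate the stochastic noise in each fitness evaluation.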

  9. Approximate Bayesian Computation in the estimation of the parameters of the Forbush decrease model

    NASA Astrophysics Data System (ADS)

    Wawrzynczak, A.; Kopka, P.

    2017-12-01

Realistic modeling of a complicated phenomenon such as the Forbush decrease of the galactic cosmic ray intensity is quite a challenging task. One aspect is the numerical solution of the Fokker-Planck equation in five-dimensional space (three spatial variables, time, and particle energy). The second difficulty arises from a lack of detailed knowledge about the spatial and time profiles of the parameters responsible for the creation of the Forbush decrease. Among these parameters, the diffusion coefficient plays the central role. Assessment of the correctness of the proposed model can be done only by comparison of the model output with the experimental observations of the galactic cosmic ray intensity. We apply the Approximate Bayesian Computation (ABC) methodology to match the Forbush decrease model to experimental data. The ABC method is increasingly exploited for dynamic complex problems in which the likelihood function is costly to compute. The main idea of all ABC methods is to accept a sample as an approximate posterior draw if its associated modeled data are close enough to the observed data. In this paper, we present an application of the Sequential Monte Carlo Approximate Bayesian Computation algorithm scanning the space of the diffusion coefficient parameters. The proposed algorithm is applied to create the model of the Forbush decrease observed by the neutron monitors at the Earth in March 2002. The model of the Forbush decrease is based on the stochastic approach to the solution of the Fokker-Planck equation.
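
The core ABC idea stated in this record (accept a parameter draw if its simulated data are close enough to the observation) can be shown with plain rejection ABC on a toy Gaussian-mean model; the Forbush-decrease application replaces the one-line simulator with a stochastic Fokker-Planck solver and wraps this loop in a sequential Monte Carlo scheme with shrinking tolerances.

```python
import random, statistics

random.seed(0)

# Rejection-ABC sketch: draw a parameter from the prior, simulate data, and
# accept the draw if a summary of the simulated data is close enough to the
# observed summary. Toy model and numbers, not the Fokker-Planck model.
observed = [random.gauss(3.0, 1.0) for _ in range(50)]   # "measurements"
obs_mean = statistics.fmean(observed)

def simulate(theta):
    # the "forward model": here just resampling data at parameter theta
    return statistics.fmean(random.gauss(theta, 1.0) for _ in range(50))

accepted = []
while len(accepted) < 200:
    theta = random.uniform(0.0, 6.0)              # flat prior on a plausible range
    if abs(simulate(theta) - obs_mean) < 0.2:     # tolerance on the summary statistic
        accepted.append(theta)

posterior_mean = statistics.fmean(accepted)
print(round(posterior_mean, 1))
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is exactly what makes the approach attractive when each model run is an expensive five-dimensional simulation.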

  10. Data mining and statistical inference in selective laser melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, Chandrika

Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer by layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM is accompanied by complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part makes experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  11. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer by layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM is accompanied by complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part makes experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
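
The surrogate-model step mentioned in this record can be sketched as follows: sample an "expensive" simulator over the process window, then answer later queries from a cheap data-driven approximation. The melt-depth formula, parameter ranges, and inverse-distance-weighted surrogate below are all illustrative assumptions, not the paper's models.

```python
import math, random

random.seed(2)

# Stand-in for an expensive process simulation: a made-up melt-depth law.
def expensive_sim(power, speed):
    return 0.002 * power / math.sqrt(speed)     # toy melt depth, mm

# Space-filling-ish design: random samples over the process window.
design = [(random.uniform(100, 400), random.uniform(0.5, 2.0)) for _ in range(200)]
runs = [(p, v, expensive_sim(p, v)) for p, v in design]

def surrogate(power, speed):
    # inverse-distance-weighted interpolation over the sampled runs;
    # speed is rescaled so both inputs contribute comparably to distance
    num = den = 0.0
    for p, v, d in runs:
        w = 1.0 / ((power - p) ** 2 + (100 * (speed - v)) ** 2 + 1e-9)
        num += w * d
        den += w
    return num / den

truth = expensive_sim(250, 1.0)
approx = surrogate(250, 1.0)
print(round(truth, 3), round(approx, 3))
```

Once fitted, the surrogate can be queried thousands of times for design-space exploration or uncertainty analysis at negligible cost, which is the role it plays in the paper's workflow.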

  12. Parameter redundancy in discrete state-space and integrated models.

    PubMed

    Cole, Diana J; McCrea, Rachel S

    2016-09-01

Discrete state-space models are used in ecology to describe the dynamics of wild animal populations, with parameters, such as the probability of survival, being of ecological interest. For a particular parametrization of a model it is not always clear which parameters can be estimated. This inability to estimate all parameters is known as parameter redundancy; equivalently, the model is described as nonidentifiable. In this paper we develop methods that can be used to detect parameter redundancy in discrete state-space models. An exhaustive summary is a combination of parameters that fully specifies a model. To use general methods for detecting parameter redundancy, a suitable exhaustive summary is required. This paper proposes two methods for the derivation of an exhaustive summary for discrete state-space models using discrete analogues of methods for continuous state-space models. We also demonstrate that combining multiple data sets, through the use of an integrated population model, may result in a model in which all parameters are estimable, even though models fitted to the separate data sets may be parameter redundant. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
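
A numerical stand-in for the exhaustive-summary check this record describes: a model is parameter redundant when the Jacobian of an exhaustive summary with respect to the parameters has rank below the number of parameters. The toy summary below depends only on the product of its two parameters (echoing the classic survival-probability/detection-probability confounding), so its Jacobian has rank 1; the published method does this symbolically rather than by finite differences.

```python
def summary(phi, p):
    # deliberately redundant exhaustive summary: both terms depend only on phi*p
    return [phi * p, (phi * p) ** 2]

def jacobian(f, theta, h=1e-6):
    # forward-difference Jacobian; row i is d(summary)/d(theta_i)
    base = f(*theta)
    rows = []
    for i in range(len(theta)):
        t = list(theta)
        t[i] += h
        rows.append([(fi - bi) / h for fi, bi in zip(f(*t), base)])
    return rows

def rank(M, tol=1e-4):
    # rank via Gauss-Jordan elimination with a tolerance for finite-difference noise
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > tol), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank(jacobian(summary, [0.8, 0.5])))   # 1 < 2 parameters => redundant
```

Combining data sets, as in the integrated models the paper discusses, amounts to stacking additional summary terms, which can raise the Jacobian back to full rank and make all parameters estimable.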

  13. Simulation of MEMS for the Next Generation Space Telescope

    NASA Technical Reports Server (NTRS)

    Mott, Brent; Kuhn, Jonathan; Broduer, Steve (Technical Monitor)

    2001-01-01

The NASA Goddard Space Flight Center (GSFC) is developing optical micro-electromechanical system (MEMS) components for potential application in Next Generation Space Telescope (NGST) science instruments. In this work, we present an overview of the electro-mechanical simulation of three MEMS components for NGST, which include a reflective micro-mirror array and transmissive microshutter array for aperture control for a near infrared (NIR) multi-object spectrometer and a large aperture MEMS Fabry-Perot tunable filter for a NIR wide field camera. In all cases the device must operate at cryogenic temperatures with low power consumption and low, complementary metal oxide semiconductor (CMOS) compatible, voltages. The goal of our simulation efforts is to adequately predict both the performance and the reliability of the devices during ground handling, launch, and operation to prevent failures late in the development process and during flight. This goal requires detailed modeling and validation of complex electro-thermal-mechanical interactions and very large non-linear deformations, often involving surface contact. Various parameters such as spatial dimensions and device response are often difficult to measure reliably at these small scales. In addition, these devices are fabricated from a wide variety of materials including surface micro-machined aluminum, reactive ion etched (RIE) silicon nitride, and deep reactive ion etched (DRIE) bulk single crystal silicon. The above broad set of conditions combines to pose a formidable challenge for space flight qualification analysis. These simulations represent NASA/GSFC's first attempts at implementing a comprehensive strategy to address complex MEMS structures.

  14. On the rogue waves propagation in non-Maxwellian complex space plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    El-Tantawy, S. A., E-mail: samireltantawy@yahoo.com; El-Awady, E. I., E-mail: eielawady@hotmail.com; Tribeche, M., E-mail: mouloudtribeche@yahoo.fr, E-mail: mtribeche@usthb.dz

    2015-11-15

The implications of non-Maxwellian electron distributions (nonthermal, suprathermal, or nonextensive distributions) are examined for the dust-ion acoustic (DIA) rogue/freak waves in a dusty warm plasma. Using a reductive perturbation technique, the basic set of fluid equations is reduced to a nonlinear Schrödinger equation. The latter is used to study the nonlinear evolution of modulationally unstable DIA wavepackets and to describe the propagation of rogue waves (RWs). Rogue waves are large-amplitude short-lived wave groups, routinely observed in space plasmas. The possible region for the rogue waves to exist is defined precisely for typical parameters of space plasmas. It is shown that the RWs strengthen for decreasing plasma nonthermality and increasing superthermality. For nonextensive electrons, the RW amplitude exhibits somewhat more complex behavior, depending on the entropic index q. Moreover, our numerical results reveal that the RWs exist for all values of the ion-to-electron temperature ratio σ for the nonthermal and superthermal distributions, so there is no limitation on freak wave propagation for these two distributions in the present plasma system. For the nonextensive electron distribution, however, bright- and dark-type waves can propagate, which means that there is a limitation on the existence of freak waves. Our systematic investigation should be useful in understanding the properties of DIA solitary waves that may occur in non-Maxwellian space plasmas.

  15. Bayesian parameter estimation for the Wnt pathway: an infinite mixture models approach.

    PubMed

    Koutroumpas, Konstantinos; Ballarini, Paolo; Votsi, Irene; Cournède, Paul-Henry

    2016-09-01

Likelihood-free methods, like Approximate Bayesian Computation (ABC), have been extensively used in model-based statistical inference with intractable likelihood functions. When combined with Sequential Monte Carlo (SMC) algorithms they constitute a powerful approach for parameter estimation and model selection of mathematical models of complex biological systems. A crucial step in the ABC-SMC algorithms, significantly affecting their performance, is the propagation of a set of parameter vectors through a sequence of intermediate distributions using Markov kernels. In this article, we employ Dirichlet process mixtures (DPMs) to design optimal transition kernels and we present an ABC-SMC algorithm with DPM kernels. We illustrate the use of the proposed methodology using real data for the canonical Wnt signaling pathway. A multi-compartment model of the pathway is developed and it is compared to an existing model. The results indicate that DPMs are more efficient in the exploration of the parameter space and can significantly improve ABC-SMC performance. In comparison to alternative sampling schemes that are commonly used, the proposed approach can bring potential benefits in the estimation of complex multimodal distributions. The method is used to estimate the parameters and the initial state of two models of the Wnt pathway, and it is shown that the multi-compartment model fits the experimental data better. Python scripts for the Dirichlet Process Gaussian Mixture model and the Gibbs sampler are available at https://sites.google.com/site/kkoutroumpas/software. Contact: konstantinos.koutroumpas@ecp.fr. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Robust hierarchical state-space models reveal diel variation in travel rates of migrating leatherback turtles.

    PubMed

    Jonsen, Ian D; Myers, Ransom A; James, Michael C

    2006-09-01

    1. Biological and statistical complexity are features common to most ecological data that hinder our ability to extract meaningful patterns using conventional tools. Recent work on implementing modern statistical methods for analysis of such ecological data has focused primarily on population dynamics but other types of data, such as animal movement pathways obtained from satellite telemetry, can also benefit from the application of modern statistical tools. 2. We develop a robust hierarchical state-space approach for analysis of multiple satellite telemetry pathways obtained via the Argos system. State-space models are time-series methods that allow unobserved states and biological parameters to be estimated from data observed with error. We show that the approach can reveal important patterns in complex, noisy data where conventional methods cannot. 3. Using the largest Atlantic satellite telemetry data set for critically endangered leatherback turtles, we show that the diel pattern in travel rates of these turtles changes over different phases of their migratory cycle. While foraging in northern waters the turtles show similar travel rates during day and night, but on their southward migration to tropical waters travel rates are markedly faster during the day. These patterns are generally consistent with diving data, and may be related to changes in foraging behaviour. Interestingly, individuals that migrate southward to breed generally show higher daytime travel rates than individuals that migrate southward in a non-breeding year. 4. Our approach is extremely flexible and can be applied to many ecological analyses that use complex, sequential data.
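
The state-space idea summarized in point 2 of this record can be sketched with the simplest possible example: a one-dimensional random-walk Kalman filter, which estimates unobserved true positions from observations corrupted by error. This is a minimal stand-in for the paper's hierarchical Bayesian model, which additionally handles irregular Argos sampling, fat-tailed errors, and multiple animals at once; all numbers below are illustrative.

```python
import random

random.seed(3)

q, r = 0.1, 4.0                      # process and observation noise variances
truth, obs = [0.0], []
for _ in range(200):
    truth.append(truth[-1] + random.gauss(0, q ** 0.5))   # unobserved state
    obs.append(truth[-1] + random.gauss(0, r ** 0.5))     # noisy telemetry fix

x, P, est = 0.0, 1.0, []
for z in obs:
    P += q                           # predict: state uncertainty grows
    K = P / (P + r)                  # Kalman gain
    x += K * (z - x)                 # update toward the observation
    P *= (1 - K)
    est.append(x)

mse_obs = sum((z - t) ** 2 for z, t in zip(obs, truth[1:])) / len(obs)
mse_est = sum((e - t) ** 2 for e, t in zip(est, truth[1:])) / len(est)
print(mse_est < mse_obs)             # filtered track beats the raw observations
```

The gain in accuracy over the raw fixes is what lets quantities such as travel rate be estimated from noisy pathways; the hierarchical version shares information across individual turtles as well.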

  17. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

The current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, a question that arises is: to what extent are these mathematical models valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  18. Hydrological model parameter dimensionality is a weak measure of prediction uncertainty

    NASA Astrophysics Data System (ADS)

    Pande, S.; Arkesteijn, L.; Savenije, H.; Bastidas, L. A.

    2015-04-01

This paper shows that the instability of a hydrological system representation in response to different pieces of information, and the associated prediction uncertainty, is a function of model complexity. After demonstrating the connection between unstable model representation and model complexity, complexity is analyzed in a step-by-step manner. This is done by measuring differences between simulations of a model under different realizations of input forcings. Algorithms are then suggested to estimate model complexity. Model complexities of two model structures, SAC-SMA (Sacramento Soil Moisture Accounting) and its simplified version SIXPAR (Six Parameter Model), are computed on resampled input data sets from basins that span the continental US. The model complexities for SIXPAR are estimated for various parameter ranges. It is shown that the complexity of SIXPAR increases with lower storage capacity and/or higher recession coefficients. Thus it is argued that a conceptually simple model structure, such as SIXPAR, can be more complex than an intuitively more complex model structure, such as SAC-SMA, for certain parameter ranges. We therefore contend that the magnitudes of feasible model parameters influence the complexity of the model selection problem just as parameter dimensionality (the number of parameters) does, and that parameter dimensionality alone is an incomplete indicator of the stability of hydrological model selection and prediction problems.
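
The complexity measure this record describes (spread of simulated outputs across resampled input forcings) can be sketched with a single linear reservoir standing in for SIXPAR. The claim that a higher recession coefficient yields higher complexity then shows up directly: discharge tracks the random forcing more closely, so realizations spread further apart. The model and numbers below are illustrative assumptions, not the paper's algorithms.

```python
import random, statistics

random.seed(4)

def simulate(k, rain):
    # single linear reservoir: discharge is a fraction k of storage each step
    s, out = 1.0, []
    for p in rain:
        s += p
        q = k * s
        s -= q
        out.append(q)
    return out

def output_spread(k, n_real=200, t=100):
    # complexity proxy: spread of final discharge across resampled forcings
    finals = []
    for _ in range(n_real):
        rain = [random.expovariate(1.0) for _ in range(t)]   # resampled forcing
        finals.append(simulate(k, rain)[-1])
    return statistics.pstdev(finals)

slow, fast = output_spread(0.1), output_spread(0.8)
print(slow < fast)   # higher recession coefficient -> larger spread
```

For this reservoir the steady-state discharge variance is k/(2-k) times the forcing variance, so the spread grows monotonically with k even though the parameter count never changes, which is the paper's central point about dimensionality being an incomplete complexity measure.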

  19. An Efficient Adaptive Angle-Doppler Compensation Approach for Non-Sidelooking Airborne Radar STAP

    PubMed Central

    Shen, Mingwei; Yu, Jia; Wu, Di; Zhu, Daiyin

    2015-01-01

In this study, the effects of non-sidelooking airborne radar clutter dispersion on space-time adaptive processing (STAP) are considered, and an efficient adaptive angle-Doppler compensation (EAADC) approach is proposed to improve the clutter suppression performance. In order to reduce the computational complexity, the reduced-dimension sparse reconstruction (RDSR) technique is introduced into the angle-Doppler spectrum estimation to extract the required parameters for compensating the clutter spectral center misalignment. Simulation results to demonstrate the effectiveness of the proposed algorithm are presented. PMID:26053755

  20. Bound states and interactions of vortex solitons in the discrete Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Mejía-Cortés, C.; Soto-Crespo, J. M.; Vicencio, Rodrigo A.; Molina, Mario I.

    2012-08-01

    By using different continuation methods, we unveil a wide region in the parameter space of the discrete cubic-quintic complex Ginzburg-Landau equation, where several families of stable vortex solitons coexist. All these stationary solutions have a symmetric amplitude profile and two different topological charges. We also observe the dynamical formation of a variety of “bound-state” solutions composed of two or more of these vortex solitons. All of these stable composite structures persist in the conservative cubic limit for high values of their power content.

  1. A Monte Carlo Technique Suitable for Obtaining Complex Space System Reliability Confidence Limits from Component Test Data with Three Unknown Parameters.

    DTIC Science & Technology

    1982-12-01

(Scanned DTIC record: the extracted text consists of garbled OCR fragments of tabulated parameter estimates and a Fortran listing from Appendix B; no readable abstract survives.)

  2. Quasi-stable injection channels in a wakefield accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiltshire-Turkay, Mara; Farmer, John P.; Pukhov, Alexander

    2016-05-15

    The influence of initial position on the acceleration of externally injected electrons in a plasma wakefield is investigated. Test-particle simulations show previously unobserved complex structure in the parameter space, with quasi-stable injection channels forming for particles injected in narrow regions away from the wake centre. Particles injected into these channels remain in the wake for a considerable time after dephasing and as a result achieve significantly higher energy than their neighbours. The result is relevant to both the planning and optimisation of experiments making use of external injection.

  3. A MS-lesion pattern discrimination plot based on geostatistics.

    PubMed

    Marschallinger, Robert; Schmidt, Paul; Hofmann, Peter; Zimmer, Claus; Atkinson, Peter M; Sellner, Johann; Trinka, Eugen; Mühlau, Mark

    2016-03-01

    A geostatistical approach to characterize MS-lesion patterns based on their geometrical properties is presented. A dataset of 259 binary MS-lesion masks in MNI space was subjected to directional variography. A model function was fit to express the observed spatial variability in x, y, z directions by the geostatistical parameters Range and Sill. Parameters Range and Sill correlate with MS-lesion pattern surface complexity and total lesion volume. A scatter plot of ln(Range) versus ln(Sill), classified by pattern anisotropy, enables a consistent and clearly arranged presentation of MS-lesion patterns based on geometry: the so-called MS-Lesion Pattern Discrimination Plot. The geostatistical approach and the graphical representation of results are considered efficient exploratory data analysis tools for cross-sectional, follow-up, and medication impact analysis.
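
The directional variography step in this record can be sketched in one dimension: the empirical semivariance gamma(h) of a spatially correlated binary pattern rises with lag h and plateaus near the data variance (the Sill), and the lag where it levels off corresponds to the Range. The toy pattern below is a two-state chain with occasional switches, not an MNI-space lesion mask.

```python
import random, statistics

random.seed(5)

# Toy 1-D binary "lesion" pattern with spatial correlation: the state flips
# only occasionally, so nearby samples tend to agree.
n = 2000
field, state = [], 0
for _ in range(n):
    if random.random() < 0.05:      # occasional switches -> correlation length ~ 10
        state = 1 - state
    field.append(state)

def semivariance(h):
    # empirical semivariance at lag h: half the mean squared difference
    return 0.5 * statistics.fmean((field[i] - field[i + h]) ** 2 for i in range(n - h))

sill_estimate = statistics.pvariance(field)
print(semivariance(1) < semivariance(10) < semivariance(100))
print(round(semivariance(100) / sill_estimate, 2))   # near 1 once the plateau is reached
```

Fitting a model function (spherical, exponential, ...) to such empirical semivariances in the x, y, and z directions is what yields the Range and Sill parameters plotted in the paper's discrimination plot.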

  4. Machine learning action parameters in lattice quantum chromodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanahan, Phiala; Trewartha, Daneil; Detmold, William

Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.

  5. Classification framework for partially observed dynamical systems

    NASA Astrophysics Data System (ADS)

    Shen, Yuan; Tino, Peter; Tsaneva-Atanasova, Krasimira

    2017-04-01

    We present a general framework for classifying partially observed dynamical systems based on the idea of learning in the model space. In contrast to the existing approaches using point estimates of model parameters to represent individual data items, we employ posterior distributions over model parameters, thus taking into account in a principled manner the uncertainty due to both the generative (observational and/or dynamic noise) and observation (sampling in time) processes. We evaluate the framework on two test beds: a biological pathway model and a stochastic double-well system. Crucially, we show that the classification performance is not impaired when the model structure used for inferring posterior distributions is much more simple than the observation-generating model structure, provided the reduced-complexity inferential model structure captures the essential characteristics needed for the given classification task.

  6. Machine learning action parameters in lattice quantum chromodynamics

    DOE PAGES

    Shanahan, Phiala; Trewartha, Daniel; Detmold, William

    2018-05-16

    Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
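    The kind of nonlinear parametric regression described can be illustrated with a generic one-hidden-layer network in plain NumPy, fit to a toy nonlinear parameter map y = x1·x2 (illustrative only; the paper's networks, lattice data, and custom symmetry-aware layers are far more involved than this sketch):

```python
import numpy as np

# A tiny MLP fit by gradient descent to a nonlinear target, the kind of map
# for which linear summaries (e.g. leading PCA components) carry little
# information about the regression target.

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(256, 2))
y = X[:, 0] * X[:, 1]                      # hypothetical "action parameter"

H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
mse0 = np.mean((pred0.ravel() - y) ** 2)

lr = 0.05
for _ in range(2000):
    h, pred = forward(X)
    err = (pred.ravel() - y)[:, None] / len(X)   # d(MSE)/d(pred), factor 2 in lr
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = err @ W2.T * (1 - h**2)                 # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
mse1 = np.mean((pred1.ravel() - y) ** 2)
print(f"MSE before {mse0:.4f} -> after {mse1:.4f}")
```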

  7. A sensitivity model for energy consumption in buildings. Part 1: Effect of exterior environment

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1981-01-01

    A simple analytical model is developed for the simulation of seasonal heating and cooling loads of any class of buildings to complement available computerized techniques which make hourly, daily, and monthly calculations. An expression for the annual energy utilization index, which is a common measure of rating buildings having the same functional utilization, is derived to include about 30 parameters for both building interior and exterior environments. The sensitivity of a general class building to either controlled or uncontrolled weather parameters is examined. A hypothetical office type building, located at the Goldstone Space Communication Complex, Goldstone, California, is selected as an example for the numerical sensitivity evaluations. Several expressions of variations in local outside air temperature, pressure, solar radiation, and wind velocity are presented.
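    The sensitivity idea can be sketched with a toy energy index and finite differences (the index formula and the parameter values below are invented for illustration and are not taken from the report):

```python
# Toy sensitivity analysis: an energy utilization index as a function of a few
# exterior-environment parameters, differentiated numerically.

def energy_index(t_out, solar, wind, t_set=21.0, ua=0.8, shgc=0.3, inf=0.05):
    heating = max(t_set - t_out, 0.0) * (ua + inf * wind)  # conduction + infiltration
    return heating - shgc * solar                           # solar gains offset load

def sensitivity(param, base, h=1e-6):
    args = dict(base)
    args[param] = base[param] + h
    return (energy_index(**args) - energy_index(**base)) / h

base = {"t_out": 10.0, "solar": 2.0, "wind": 4.0}
for p in base:
    print(p, round(sensitivity(p, base), 4))
```

With ~30 parameters, as in the paper, the same one-at-a-time loop ranks which exterior variables the index responds to most strongly.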

  8. An adaptive Cartesian control scheme for manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    An adaptive control scheme for direct control of manipulator end-effectors to achieve trajectory tracking in Cartesian space is developed. The control structure is obtained from linear multivariable theory and is composed of simple feedforward and feedback controllers and an auxiliary input. The direct adaptation laws are derived from model reference adaptive control theory and are not based on parameter estimation of the robot model. The utilization of feedforward control and the inclusion of auxiliary input are novel features of the present scheme and result in improved dynamic performance over existing adaptive control schemes. The adaptive controller does not require the complex mathematical model of the robot dynamics or any knowledge of the robot parameters or the payload, and is computationally fast for online implementation with high sampling rates.
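    A scalar sketch of direct model-reference adaptation without parameter estimation (this is a textbook MRAC toy, not Seraji's Cartesian controller; the plant, reference model, and gain values are assumptions):

```python
# Plant x' = a*x + b*u with a, b unknown to the controller (only sgn(b) = +1
# assumed). The control u = kx*x + kr*r adapts its gains directly from the
# tracking error e = x - xm -- no estimate of a or b is ever formed.

a, b = 1.0, 1.0          # true plant (unstable open loop)
am, bm = 2.0, 2.0        # stable reference model xm' = -am*xm + bm*r
gamma = 2.0              # adaptation gain
dt, steps = 1e-3, 20000

x, xm, kx, kr, r = 1.0, 0.0, 0.0, 0.0, 1.0
e0 = abs(x - xm)
for _ in range(steps):
    e = x - xm
    u = kx * x + kr * r
    # Lyapunov-based direct update laws
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    x += (a * x + b * u) * dt
    xm += (-am * xm + bm * r) * dt

print(abs(x - xm), e0)
```

The same structure generalizes to the multivariable Cartesian case, with the gains becoming matrices and the auxiliary input absorbing unmodeled terms.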

  9. KSC-2011-8226

    NASA Image and Video Library

    2011-12-11

    CAPE CANAVERAL, Fla. – Workers supervise the transporter carrying the high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida as it rolls onto NASA Causeway at the visitor complex on its way to NASA Kennedy Space Center's Launch Complex 39 turn basin. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis

  10. Orbits in elementary, power-law galaxy bars - 1. Occurrence and role of single loops

    NASA Astrophysics Data System (ADS)

    Struck, Curtis

    2018-05-01

    Orbits in galaxy bars are generally complex, but simple closed loop orbits play an important role in our conceptual understanding of bars. Such orbits are found in some well-studied potentials, provide a simple model of the bar in themselves, and may generate complex orbit families. The precessing, power ellipse (p-ellipse) orbit approximation provides accurate analytic orbit fits in symmetric galaxy potentials. It remains useful for finding and fitting simple loop orbits in the frame of a rotating bar with bar-like and symmetric power-law potentials. Second-order perturbation theory yields two or fewer simple loop solutions in these potentials. Numerical integrations in the parameter space neighbourhood of perturbation solutions reveal zero or one actual loops in a range of such potentials with rising rotation curves. These loops are embedded in a small parameter region of similar, but librating orbits, which have a subharmonic frequency superimposed on the basic loop. These loops and their librating companions support annular bars. Solid bars can be produced in more complex potentials, as shown by an example with power-law indices varying with radius. The power-law potentials can be viewed as the elementary constituents of more complex potentials. Numerical integrations also reveal interesting classes of orbits with multiple loops. In two-dimensional, self-gravitating bars, with power-law potentials, single-loop orbits are very rare. This result suggests that gas bars or oval distortions are unlikely to be long-lived, and that complex orbits or three-dimensional structure must support self-gravitating stellar bars.
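    The p-ellipse radius law and its azimuthal periodicity can be checked numerically (functional form after Struck's precessing power ellipse; the parameter values here are arbitrary illustrations):

```python
import math

# p-ellipse approximation: r(phi) = p * (1 + e*cos(m*phi))**-(0.5 + delta),
# a precessing closed curve whose radial period in azimuth is 2*pi/m.

def p_ellipse(phi, p=1.0, e=0.3, m=1.1, delta=0.15):
    return p * (1.0 + e * math.cos(m * phi)) ** -(0.5 + delta)

# The radial oscillation repeats after phi -> phi + 2*pi/m.
phi = 0.7
r1 = p_ellipse(phi)
r2 = p_ellipse(phi + 2 * math.pi / 1.1)
print(r1, r2)
```

In the rotating bar frame, loop orbits correspond to parameter choices for which this radial period resonates with the bar pattern.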

  11. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal for optimization commencement can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters during run-time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates.
The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC are applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT, and 31.9% better than AGA, within the same computation time (12.5 days). AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
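    The AGCT idea, a genetic algorithm whose reproduction is filtered by a tabu list over coarse solution-space cells, can be caricatured in one dimension as follows (a toy, not the authors' code; the fitness function, tabu list size, and coarseness values are invented):

```python
import random

random.seed(0)

def fitness(x):                          # toy objective to minimize
    return (x - 3.0) ** 2 + 2.0 * abs(x) ** 0.5

def region(x, coarseness=0.5):           # TS "search coarseness": cell index
    return round(x / coarseness)

pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
best0 = min(fitness(x) for x in pop)
tabu, tabu_size = [], 30

for _ in range(100):
    pop.sort(key=fitness)
    elite = pop[:5]                      # elitism: best strategies become parents
    children, attempts = [], 0
    while len(children) < 15 and attempts < 500:
        attempts += 1
        p1, p2 = random.sample(elite, 2)
        child = 0.5 * (p1 + p2) + random.gauss(0.0, 0.5)   # crossover + mutation
        cell = region(child)
        if cell in tabu:                 # avoid reproducing already-searched cells
            continue
        tabu = (tabu + [cell])[-tabu_size:]
        children.append(child)
    # refill with random immigrants if the tabu filter starved the generation
    children += [random.uniform(-10.0, 10.0) for _ in range(15 - len(children))]
    pop = elite + children

best = min(fitness(x) for x in pop)
print(best0, best)
```

A GC-style extension would monitor convergence feedback and shrink the coarseness and tabu list size as the search narrows.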

  12. 3-D model-based Bayesian classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soenneland, L.; Tenneboe, P.; Gehrmann, T.

    1994-12-31

    The challenging task of the interpreter is to integrate different pieces of information and combine them into an earth model. The sophistication level of this earth model might vary from the simplest geometrical description to the most complex set of reservoir parameters related to the geometrical description. Obviously the sophistication level also depends on the completeness of the available information. The authors describe the interpreter's task as a mapping between the observation space and the model space. The information available to the interpreter exists in observation space, and the task is to infer a model in model space. It is well known that this inversion problem is non-unique. Therefore any attempt to find a solution depends on constraints being added in some manner. The solution will obviously depend on which constraints are introduced, and it would be desirable to allow the interpreter to modify the constraints in a problem-dependent manner. They present a probabilistic framework that gives the interpreter the tools to integrate the different types of information and produce constrained solutions. The constraints can be adapted to the problem at hand.
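    The non-uniqueness point can be made concrete with a linear toy: an under-determined forward problem has infinitely many solutions, and a prior (constraint) term selects one of them, with different constraint strengths giving different models (a generic MAP/Tikhonov sketch, not the authors' framework):

```python
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(3, 6))              # 3 observations, 6 model parameters
d = G @ rng.normal(size=6)               # synthetic data: inversion is non-unique

def map_estimate(lam):
    # MAP under a zero-mean Gaussian prior: minimize ||G m - d||^2 + lam*||m||^2
    return np.linalg.solve(G.T @ G + lam * np.eye(6), G.T @ d)

m_weak, m_strong = map_estimate(0.1), map_estimate(10.0)
res_weak = np.linalg.norm(G @ m_weak - d)
res_strong = np.linalg.norm(G @ m_strong - d)
print(res_weak, res_strong)              # stronger constraint -> larger data misfit
```

Letting the interpreter adjust the prior (here just `lam`, more generally its full covariance) is exactly the "problem-dependent constraints" knob described in the abstract.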

  13. Reconstructing the hidden states in time course data of stochastic models.

    PubMed

    Zimmer, Christoph

    2015-11-01

    Parameter estimation is central for analyzing models in Systems Biology. The relevance of stochastic modeling in the field is increasing. Therefore, the need for tailored parameter estimation techniques is increasing as well. Challenges for parameter estimation are partial observability, measurement noise, and the computational complexity arising from the dimension of the parameter space. This article extends the multiple shooting for stochastic systems method, developed for inference in intrinsically stochastic systems. The treatment of extrinsic noise and the estimation of the unobserved states are improved by taking into account the correlation between unobserved and observed species. This article demonstrates the power of the method on different scenarios of a Lotka-Volterra model, including cases in which the prey population dies out or explodes, and a calcium oscillation system. Besides showing how the new extension improves the accuracy of the parameter estimates, this article analyzes the accuracy of the state estimates. In contrast to previous approaches, the new approach is well able to estimate states and parameters for all the scenarios. As it does not need stochastic simulations, it is of the same order of speed as conventional least squares parameter estimation methods with respect to computational time. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
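    A drastically simplified version of the estimation task looks like this: plain least squares on a parameter grid with an Euler simulator and noiseless data. (This is not the article's multiple-shooting method, which exists precisely because noisy, partially observed stochastic data break such naive single-shooting fits; parameter values are arbitrary.)

```python
# Recover one Lotka-Volterra parameter from a noiseless prey time course.

def simulate(alpha, beta=0.4, delta=0.1, gamma=0.2, x0=10.0, y0=5.0,
             dt=0.01, steps=1000):
    x, y, traj = x0, y0, []
    for _ in range(steps):
        x += dt * (alpha * x - beta * x * y)   # prey: growth minus predation
        y += dt * (delta * x * y - gamma * y)  # predator: feeding minus death
        traj.append(x)
    return traj

data = simulate(alpha=0.6)           # "observed" prey counts, true alpha = 0.6

def sse(alpha):
    sim = simulate(alpha)
    return sum((a - b) ** 2 for a, b in zip(sim, data))

grid = [i / 100 for i in range(30, 91)]      # candidate alpha in [0.30, 0.90]
alpha_hat = min(grid, key=sse)
print(alpha_hat)
```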

  14. On the Essence of Space

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2003-04-01

    A new theory of space is suggested. It represents the new point of view which has arisen from the critical analysis of the foundations of physics (in particular the theory of relativity and quantum mechanics), mathematics, cosmology and philosophy. The main idea following from the analysis is that the concept of movement represents a key to understanding of the essence of space. The starting-point of the theory is represented by the following philosophical (dialectical materialistic) principles. (a) The principle of the materiality (of the objective reality) of the Nature: the Nature (the Universe) is a system (a set) of material objects (particles, bodies, fields); each object has properties and features, and these properties and features are inseparable characteristics of the material object and belong only to the material object. (b) The principle of the existence of the material object: an object exists as the objective reality, and movement is a form of existence of the object. (c) The principle (definition) of movement of the object: the movement is change (i.e. transition of some states into others) in general; the movement determines a direction, and direction characterizes the movement. (d) The principle of the existence of time: the time exists as the parameter of the system of reference. These principles lead to the following statements expressing the essence of space. (1) There is no space in general; there exists space only as a form of existence of the properties and features of the object. It means that the space is a set of the measures of the object (the measure is the philosophical category meaning unity of the qualitative and quantitative determinacy of the object). In other words, the space of the object is a set of the states of the object. (2) The states of the object are manifested only in a system of reference.
    The main informational property of the unitary system "researched physical object + system of reference" is that the system of reference determines (measures, calculates) the parameters of the subsystem "researched physical object" (for example, the coordinates of the object M); the parameters characterize the system of reference (for example, the system of coordinates S). (3) Each parameter of the object is its measure. The total number of the mutually independent parameters of the object is called the dimension of the space of the object. (4) The set of numerical values (i.e. the range, the spectrum) of each parameter is the subspace of the object. (The coordinate space, the momentum space and the energy space are examples of the subspaces of the object.) (5) The set of the parameters of the object is divided into two non-intersecting (opposite) classes: the class of the internal parameters and the class of the non-internal (i.e. external) parameters. The class of the external parameters is divided into two non-intersecting (opposite) subclasses: the subclass of the absolute parameters (characterizing the form, the sizes of the object) and the subclass of the non-absolute (relative) parameters (characterizing the position, the coordinates of the object). (6) The set of the external parameters forms the external space of the object. It is called the geometrical space of the object. (7) Since a macroscopic object has three mutually independent sizes, the dimension of its external absolute space is equal to three. Consequently, the dimension of its external relative space is also equal to three. Thus, the total dimension of the external space of the macroscopic object is equal to six. (8) In the general case, the external absolute space (i.e. the form, the sizes) and the external relative space (i.e. the position, the coordinates) of any object are mutually dependent because of the influence of a medium. The geometrical space of such an object is called non-Euclidean space.
If the external absolute space and the external relative space of some object are mutually independent, then the external relative space of such an object is a homogeneous and isotropic geometrical space. It is called the Euclidean space of the object. Consequences: (i) the question of the true geometry of the Universe is incorrect; (ii) the theory of relativity has no physical meaning.

  15. A novel conformation of gel grown biologically active cadmium nicotinate

    NASA Astrophysics Data System (ADS)

    Nair, Lekshmi P.; Bijini, B. R.; Divya, R.; Nair, Prabitha B.; Eapen, S. M.; Dileep Kumar, B. S.; Nishanth Kumar, S.; Nair, C. M. K.; Deepa, M.; Rajendra Babu, K.

    2017-11-01

    The elimination of toxic heavy metals by the formation of stable co-ordination compounds with biologically active ligands is applicable in drug design. A new crystalline complex of cadmium with nicotinic acid is grown at ambient temperature using the single gel diffusion method, in which the crystal structure is different from those already reported. Single crystal X-ray diffraction reveals a crystal structure belonging to the monoclinic system, space group P2₁/c, with cell dimensions a = 17.220(2) Å, b = 10.2480(2) Å, c = 7.229(9) Å, β = 91.829(4)°. Powder X-ray diffraction analysis confirmed the crystallinity of the sample. The unidentate mode of co-ordination between the metal atom and the carboxylate group is supported by the Fourier transform infrared (FT-IR) spectral data. Thermal analysis ensures the thermal stability of the complex. Kinetic and thermodynamic parameters are also calculated. The stoichiometry of the complex is confirmed by the elemental analysis. The UV-visible spectral analysis shows the wide transparency window of the complex in the visible region. The band gap of the complex is found to be 3.92 eV. The complex shows excellent antibacterial and antifungal activity.

  16. N-((5-chloropyridin-2-yl)carbamothioyl)furan-2-carboxamide and its Co(II), Ni(II) and Cu(II) complexes: Synthesis, characterization, DFT computations, thermal decomposition, antioxidant and antitumor activity

    NASA Astrophysics Data System (ADS)

    Yeşilkaynak, Tuncay; Özpınar, Celal; Emen, Fatih Mehmet; Ateş, Burhan; Kaya, Kerem

    2017-02-01

    N-((5-chloropyridin-2-yl)carbamothioyl)furan-2-carboxamide (HL: C11H8ClN3O2S) and its Co(II), Ni(II) and Cu(II) complexes have been synthesized and characterized by elemental analysis, FT-IR, ¹H NMR and HR-MS methods. The HL was characterized by the single crystal X-ray diffraction technique. It crystallizes in the monoclinic system, space group P2₁/c, with Z = 4 and unit cell parameters a = 4.5437(5) Å, b = 22.4550(3) Å, c = 11.8947(14) Å. The ligand coordinates the metal ions in a bidentate fashion and thus essentially yields neutral complexes of the [ML2] type. ML2 complex structures were optimized at the B97D/TZVP level. Molecular orbitals of the HL ligand were calculated at the same level. Thermal decomposition of the complexes has been investigated by thermogravimetry. The complexes were screened for their anticancer and antioxidant activities. Antioxidant activity of the complexes was determined by using the DPPH and ABTS assays. The anticancer activity of the complexes was studied by using the MTT assay in MCF-7 breast cancer cells.

  17. Prosthetic Avian Vocal Organ Controlled by a Freely Behaving Bird Based on a Low Dimensional Model of the Biomechanical Periphery

    PubMed Central

    Arneodo, Ezequiel M.; Perl, Yonatan Sanz; Goller, Franz; Mindlin, Gabriel B.

    2012-01-01

    Because of the parallels found with human language production and acquisition, birdsong is an ideal animal model to study general mechanisms underlying complex, learned motor behavior. The rich and diverse vocalizations of songbirds emerge as a result of the interaction between a pattern generator in the brain and a highly nontrivial nonlinear periphery. Much of the complexity of this vocal behavior has been understood by studying the physics of the avian vocal organ, particularly the syrinx. A mathematical model describing the complex periphery as a nonlinear dynamical system leads to the conclusion that nontrivial behavior emerges even when the organ is commanded by simple motor instructions: smooth paths in a low dimensional parameter space. An analysis of the model provides insight into which parameters are responsible for generating a rich variety of diverse vocalizations, and what the physiological meaning of these parameters is. By recording the physiological motor instructions elicited by a spontaneously singing muted bird and computing the model on a Digital Signal Processor in real-time, we produce realistic synthetic vocalizations that replace the bird's own auditory feedback. In this way, we build a bio-prosthetic avian vocal organ driven by a freely behaving bird via its physiologically coded motor commands. Since it is based on a low-dimensional nonlinear mathematical model of the peripheral effector, the emulation of the motor behavior requires light computation, in such a way that our bio-prosthetic device can be implemented on a portable platform. PMID:22761555

  18. General molecular mechanics method for transition metal carboxylates and its application to the multiple coordination modes in mono- and dinuclear Mn(II) complexes.

    PubMed

    Deeth, Robert J

    2008-08-04

    A general molecular mechanics method is presented for modeling the symmetric bidentate, asymmetric bidentate, and bridging modes of metal-carboxylates with a single parameter set by using a double-minimum M-O-C angle-bending potential. The method is implemented within the Molecular Operating Environment (MOE) with parameters based on the Merck molecular force field although, with suitable modifications, other MM packages and force fields could easily be used. Parameters for high-spin d⁵ manganese(II) bound to carboxylate and water plus amine, pyridyl, imidazolyl, and pyrazolyl donors are developed based on 26 mononuclear and 29 dinuclear crystallographically characterized complexes. The average rmsd for Mn-L distances is 0.08 Å, which is comparable to the experimental uncertainty required to cover multiple binding modes, and the average rmsd in heavy atom positions is around 0.5 Å. In all cases, whatever binding mode is reported is also computed to be a stable local minimum. In addition, the structure-based parametrization implicitly captures the energetics and gives the same relative energies of symmetric and asymmetric coordination modes as density functional theory calculations in model and "real" complexes. Molecular dynamics simulations show that carboxylate rotation is favored over "flipping", while a stochastic search algorithm is described for randomly searching conformational space. The model reproduces Mn-Mn distances in dinuclear systems especially accurately, and this feature is employed to illustrate how MM calculations on models for the dimanganese active site of methionine aminopeptidase can help determine some of the details which may be missing from the experimental structure.
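    A double-minimum angle term of the kind described can be sketched as a quartic with wells at two binding-mode angles (the functional form, well angles, and force constant below are our illustrative assumptions, not the published parameter set):

```python
import math

# Illustrative double-minimum M-O-C angle-bending term: zero energy at the two
# binding-mode angles, a finite barrier between them, so both coordination
# modes are stable local minima of the same parameter set.

THETA1 = math.radians(95.0)    # angle near one binding mode (assumed)
THETA2 = math.radians(135.0)   # angle near the other binding mode (assumed)
K = 50.0                       # force constant, illustrative units

def angle_energy(theta):
    return K * (theta - THETA1) ** 2 * (theta - THETA2) ** 2

print(angle_energy(THETA1), angle_energy(THETA2),
      angle_energy(math.radians(115.0)))
```

The key design property is that a single term accommodates both the symmetric and asymmetric carboxylate geometries without switching parameter sets.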

  19. The geometric field (gravity) as an electro-chemical potential in a Ginzburg-Landau theory of superconductivity

    NASA Astrophysics Data System (ADS)

    Atanasov, Victor

    2017-07-01

    We extend the superconductor's free energy to include an interaction of the order parameter with the curvature of space-time. This interaction leads to a geometry-dependent coherence length and Ginzburg-Landau parameter, which suggests that the curvature of space-time can change the superconductor's type. The curvature of space-time doesn't affect the ideal diamagnetism of the superconductor but acts as a chemical potential. In a particular circumstance, the geometric field becomes order-parameter dependent; therefore the superconductor's order parameter dynamics affects the curvature of space-time, and electrical or internal quantum mechanical energy can be channelled into the curvature of space-time. Experimental consequences are discussed.
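    Schematically, the modification described amounts to adding a curvature coupling to the standard Ginzburg-Landau free energy (our reconstruction of the generic form; the paper's exact coupling, constant γ, and conventions may differ):

```latex
F[\psi, \mathbf{A}] = \int d^3x \,\sqrt{g}\,\Big[
    \alpha\,|\psi|^2 + \tfrac{\beta}{2}\,|\psi|^4
    + \tfrac{\hbar^2}{2m^*}\Big|\Big(\nabla - \tfrac{i e^*}{\hbar c}\mathbf{A}\Big)\psi\Big|^2
    + \gamma\, R\, |\psi|^2 + \tfrac{\mathbf{B}^2}{8\pi}\Big]
```

A term of the form γR|ψ|² shifts the effective quadratic coefficient α → α + γR, so the coherence length ξ ∝ |α + γR|^{-1/2}, and with it the Ginzburg-Landau parameter κ = λ/ξ, becomes geometry dependent, which is how curvature can move a material across the type-I/type-II boundary.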

  20. Thin Film Physical Sensor Instrumentation Research and Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Wrbanek, John D.; Fralick, Gustave C.

    2006-01-01

    A range of thin film sensor technology has been demonstrated, enabling measurement of multiple parameters either individually or in sensor arrays, including temperature, strain, heat flux, and flow. Multiple techniques exist for refractory thin film fabrication, fabrication and integration on complex surfaces, and multilayered thin film insulation. Leveraging expertise in thin films and high temperature materials, investigations into the applications of thin film ceramic sensors have begun. The current challenges of instrumentation technology are to further develop systems packaging and component testing of specialized sensors, further develop instrumentation techniques on complex surfaces, improve sensor durability, and to address needs for extreme temperature applications. The technology research and development ongoing at NASA Glenn for applications to future launch vehicles, space vehicles, and ground systems is outlined.

  1. Preliminary crystallographic analysis of mouse Elf3 C-terminal DNA-binding domain in complex with type II TGF-β receptor promoter DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarkar, Vinod B.; Babayeva, Nigar D.; Rizzino, Angie

    2010-10-08

    Ets proteins are transcription factors that activate or repress the expression of genes that are involved in various biological processes, including cellular proliferation, differentiation, development, transformation and apoptosis. Like other Ets-family members, Elf3 functions as a sequence-specific DNA-binding transcriptional factor. A mouse Elf3 C-terminal fragment (amino-acid residues 269-371) containing the DNA-binding domain has been crystallized in complex with mouse type II TGF-β receptor promoter (TR-II) DNA. The crystals belonged to space group P2₁2₁2₁, with unit-cell parameters a = 42.66, b = 52, c = 99.78 Å, and diffracted to a resolution of 2.2 Å.

  2. Purification, crystallization and preliminary X-ray analysis of uracil-DNA glycosylase from Sulfolobus tokodaii strain 7

    PubMed Central

    Kawai, Akito; Higuchi, Shigesada; Tsunoda, Masaru; Nakamura, Kazuo T.; Miyamoto, Shuichi

    2012-01-01

    Uracil-DNA glycosylase (UDG) specifically removes uracil from DNA by catalyzing hydrolysis of the N-glycosidic bond, thereby initiating the base-excision repair pathway. Although a number of UDG structures have been determined, the structure of archaeal UDG remains unknown. In this study, a deletion mutant of UDG isolated from Sulfolobus tokodaii strain 7 (stoUDGΔ) and stoUDGΔ complexed with uracil were crystallized and analyzed by X-ray crystallography. The crystals were found to belong to the orthorhombic space group P2₁2₁2₁, with unit-cell parameters a = 52.2, b = 52.3, c = 74.7 Å and a = 52.1, b = 52.2, c = 74.1 Å for apo stoUDGΔ and stoUDGΔ complexed with uracil, respectively. PMID:22949205

  3. Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model

    NASA Technical Reports Server (NTRS)

    Richardson, D. E.; Anderson, G. L.; Macon, D. J.

    2002-01-01

    An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounted for these effects simultaneously. This model was named the Multi- Axial, Temperature, and Time Dependent or MATT failure criterion. Due to the intricate nature of the failure criterion, some parameters were required to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion to allow for calculations of failure conditions without complex equations or numerical techniques.

  4. The NASA Advanced Space Power Systems Project

    NASA Technical Reports Server (NTRS)

    Mercer, Carolyn R.; Hoberecht, Mark A.; Bennett, William R.; Lvovich, Vadim F.; Bugga, Ratnakumar

    2015-01-01

    The goal of the NASA Advanced Space Power Systems Project is to develop advanced, game-changing technologies that will provide future NASA space exploration missions with safe, reliable, lightweight and compact power generation and energy storage systems. The development effort is focused on maturing the technologies from a technology readiness level of approximately 2-3 to approximately 5-6 as defined in NASA Procedural Requirement 7123.1B. Currently, the project is working on two critical technology areas: high specific energy batteries, and regenerative fuel cell systems with passive fluid management. Examples of target applications for these technologies are: extending the duration of extravehicular activities (EVA) with high specific energy and energy density batteries; providing reliable, long-life power for rovers with passive fuel cell and regenerative fuel cell systems that enable reduced system complexity. Recent results from the high energy battery and regenerative fuel cell technology development efforts will be presented. The technical approach, the key performance parameters and the technical results achieved to date in each of these new elements will be included. The Advanced Space Power Systems Project is part of the Game Changing Development Program under NASA's Space Technology Mission Directorate.

  5. Continuous state-space representation of a bucket-type rainfall-runoff model: a case study with the GR4 model using state-space GR4 (version 1.0)

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2018-04-01

    In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting the equations into terms that can be solved analytically with a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. By representing it in this way, the operator splitting, which makes the structural analysis of the model more complex, could be removed. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even if it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
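    The Nash-cascade building block and an implicit (unconditionally stable) time step can be sketched as follows (a generic sketch, not the state-space GR4 code; the reservoir count, time constant, and inflow series are arbitrary):

```python
# Nash cascade: n identical linear reservoirs dS_i/dt = q_{i-1} - S_i/k,
# advanced with an implicit Euler step, in the spirit of the robust
# integration used for the state-space model.

def nash_cascade(inflow, n=3, k=2.0, dt=1.0):
    S = [0.0] * n
    out = []
    for q_in in inflow:
        q = q_in
        for i in range(n):
            # implicit Euler: S_new = (S + dt*q) / (1 + dt/k)
            S[i] = (S[i] + dt * q) / (1.0 + dt / k)
            q = S[i] / k          # outflow of reservoir i feeds reservoir i+1
        out.append(q)
    return out, S

inflow = [5.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
outflow, storage = nash_cascade(inflow)

# Mass balance holds up to floating-point rounding for this discretization:
balance = sum(inflow) - sum(outflow) - sum(storage)
print(outflow, balance)
```

Because the update is implicit, the step size `dt` can exceed the reservoir time constant `k` without instability, which is what makes parameter values transferable across model time steps.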

  6. Modelling of subarachnoid space width changes in apnoea resulting as a function of blood flow parameters.

    PubMed

    Kalicka, Renata; Mazur, Kamila; Wolf, Jacek; Frydrychowski, Andrzej F; Narkiewicz, Krzysztof; Winklewski, Pawel J

    2017-09-01

    During apnoea, the pial artery is subjected to two opposite physiological processes: vasoconstriction due to elevated blood pressure and vasorelaxation driven by rising pH in the brain parenchyma. We hypothesized that the pial artery response to apnoea may vary, depending on which process dominates. Apnoea experiments were performed in a group of 19 healthy, non-smoking volunteers (9 men and 10 women). The following parameters were obtained for further analysis: blood pressure, the cardiac (from 0.5 to 5.0 Hz) and slow (<0.5 Hz) components of subarachnoid space width, heart rate, mean cerebral blood flow velocity in the internal carotid artery, pulsatility and resistivity index, internal carotid artery diameter, blood oxygen saturation and end-tidal carbon dioxide. The experiment consisted of three apnoeas, sequentially: 30 s, 60 s and maximal apnoea. The breath-holds were separated by a 5-minute rest. The control process is sophisticated, involving internal cross-couplings and cross-dependences. The aim of this work was to find a mathematical dependence among these data. Unexpectedly, the modelling revealed two different reactions to the same experimental procedure. As a consequence, there are two subsets of cardiac subarachnoid space width responses to breath-hold in humans. A positive cardiac subarachnoid space width change to apnoea depends on changes in heart rate and cerebral blood flow velocity. A negative cardiac subarachnoid space width change to apnoea is driven by heart rate, mean arterial pressure and pulsatility index changes. The two different reactions to experimental breath-hold described above provide new insights into our understanding of the complex mechanisms governing the adaptation to apnoea in humans. We propose a mathematical methodology that can be used in further clinical research. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Handling Qualities Evaluations of Low Complexity Model Reference Adaptive Controllers for Reduced Pitch and Roll Damping Scenarios

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Burken, John J.; Johnson, Marcus; Nguyen, Nhan

    2011-01-01

    National Aeronautics and Space Administration (NASA) researchers have conducted a series of flight experiments designed to study the effects of varying levels of adaptive controller complexity on the performance and handling qualities of an aircraft under various simulated failure or damage conditions. A baseline, nonlinear dynamic inversion controller was augmented with three variations of a model reference adaptive control design. The simplest design consisted of a single adaptive parameter in each of the pitch and roll axes computed using a basic gradient-based update law. A second design was built upon the first by increasing the complexity of the update law. The third and most complex design added an additional adaptive parameter to each axis. Flight tests were conducted using NASA's Full-scale Advanced Systems Testbed, a highly modified F-18 aircraft that contains a research flight control system capable of housing advanced flight controls experiments. Each controller was evaluated against a suite of simulated failures and damage ranging from destabilization of the pitch and roll axes to significant coupling between the axes. Two pilots evaluated the three adaptive controllers as well as the non-adaptive baseline controller in a variety of dynamic maneuvers and precision flying tasks designed to uncover potential deficiencies in the handling qualities of the aircraft, and adverse interactions between the pilot and the adaptive controllers. The work was completed as part of the Integrated Resilient Aircraft Control Project under NASA's Aviation Safety Program.

  8. Complex Dynamical Networks Constructed with Fully Controllable Nonlinear Nanomechanical Oscillators.

    PubMed

    Fon, Warren; Matheny, Matthew H; Li, Jarvis; Krayzman, Lev; Cross, Michael C; D'Souza, Raissa M; Crutchfield, James P; Roukes, Michael L

    2017-10-11

    Control of the global parameters of complex networks has been explored experimentally in a variety of contexts. Yet, the more difficult prospect of realizing arbitrary network architectures, especially analog physical networks that provide dynamical control of individual nodes and edges, has remained elusive. Given the vast hierarchy of time scales involved, it also proves challenging to measure a complex network's full internal dynamics. These span from the fastest nodal dynamics to very slow epochs over which emergent global phenomena, including network synchronization and the manifestation of exotic steady states, eventually emerge. Here, we demonstrate an experimental system that satisfies these requirements. It is based upon modular, fully controllable, nonlinear radio frequency nanomechanical oscillators, designed to form the nodes of complex dynamical networks with edges of arbitrary topology. The dynamics of these oscillators and their surrounding network are analog and continuous-valued and can be fully interrogated in real time. They comprise a piezoelectric nanomechanical membrane resonator, which serves as the frequency-determining element within an electrical feedback circuit. This embodiment permits network interconnections entirely within the electrical domain and provides unprecedented node and edge control over a vast region of parameter space. Continuous measurement of the instantaneous amplitudes and phases of every constituent oscillator node is enabled, yielding full and detailed network data without reliance upon statistical quantities. We demonstrate the operation of this platform through the real-time capture of the dynamics of a three-node ring network as it evolves from the uncoupled state to full synchronization.
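
    A loose, phase-only caricature of such a network (an assumption for illustration: Kuramoto-type sine coupling rather than the paper's amplitude-and-phase nanomechanical dynamics) already shows a three-node ring evolving from spread initial phases into synchronization:

    ```python
    import math

    def simulate_ring(omegas, K, theta0, dt=0.01, steps=20000):
        """Euler-integrate dθ_i/dt = ω_i + K[sin(θ_{i-1}-θ_i) + sin(θ_{i+1}-θ_i)]
        on a ring; a toy phase model, not the nanomechanical oscillator itself."""
        n = len(omegas)
        theta = list(theta0)
        for _ in range(steps):
            theta = [theta[i] + dt * (omegas[i]
                     + K * (math.sin(theta[(i - 1) % n] - theta[i])
                            + math.sin(theta[(i + 1) % n] - theta[i])))
                     for i in range(n)]
        return theta

    def order_parameter(theta):
        """Kuramoto order parameter |r|: 1 = phase-locked, ~0 = incoherent."""
        re = sum(math.cos(t) for t in theta) / len(theta)
        im = sum(math.sin(t) for t in theta) / len(theta)
        return math.hypot(re, im)

    omegas = [1.0, 1.05, 0.95]   # slightly detuned natural frequencies
    theta0 = [0.0, 1.0, 2.5]     # spread initial phases
    r_coupled = order_parameter(simulate_ring(omegas, K=1.0, theta0=theta0))
    ```

    With coupling strong relative to the detuning, the ring phase-locks and the order parameter ends near 1; setting K=0 instead leaves the phases drifting independently.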

  9. Aerial view of the Kennedy Space Center Visitor Center

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This Shuttle/Gantry mockup and Post Show Dome anchor the northeast corner of the Kennedy Space Center Visitor Complex. The Astronaut Memorial is located just above. Sprawling across 70 acres on Florida's Space Coast, the complex is located off State Road 405, NASA Parkway, six miles inside the Space Center entrance. The building at the upper left is the Theater Complex. Other exhibits and buildings on the site are the Center for Space Education, Cafeteria, Space Flight Exhibit Building, Souvenir Sales Building, Spaceport Central, Ticket Pavilion and Center for Space Education.

  10. On synchronisation of a class of complex chaotic systems with complex unknown parameters via integral sliding mode control

    NASA Astrophysics Data System (ADS)

    Tirandaz, Hamed; Karami-Mollaee, Ali

    2018-06-01

    Chaotic systems demonstrate complex behaviour in their state variables and their parameters, which generate some challenges and consequences. This paper presents a new synchronisation scheme based on integral sliding mode control (ISMC) method on a class of complex chaotic systems with complex unknown parameters. Synchronisation between corresponding states of a class of complex chaotic systems and also convergence of the errors of the system parameters to zero point are studied. The designed feedback control vector and complex unknown parameter vector are analytically achieved based on the Lyapunov stability theory. Moreover, the effectiveness of the proposed methodology is verified by synchronisation of the Chen complex system and the Lorenz complex systems as the leader and the follower chaotic systems, respectively. In conclusion, some numerical simulations related to the synchronisation methodology are given to illustrate the effectiveness of the theoretical discussions.

  11. KSC-98pc1059

    NASA Image and Video Library

    1998-08-06

    This Shuttle/Gantry mockup and Post Show Dome anchor the northeast corner of the Kennedy Space Center Visitor Complex. The Astronaut Memorial is located just above. Sprawling across 70 acres on Florida's Space Coast, the complex is located off State Road 405, NASA Parkway, six miles inside the Space Center entrance. The building at the upper left is the Theater Complex. Other exhibits and buildings on the site are the Center for Space Education, Cafeteria, Space Flight Exhibit Building, Souvenir Sales Building, Spaceport Central, Ticket Pavilion and Center for Space Education.

  12. Robust optimal design of diffusion-weighted magnetic resonance experiments for skin microcirculation

    NASA Astrophysics Data System (ADS)

    Choi, J.; Raguin, L. G.

    2010-10-01

    Skin microcirculation plays an important role in several diseases including chronic venous insufficiency and diabetes. Magnetic resonance (MR) has the potential to provide quantitative information and a better penetration depth compared with other non-invasive methods such as laser Doppler flowmetry or optical coherence tomography. The continuous progress in hardware resulting in higher sensitivity must be coupled with advances in data acquisition schemes. In this article, we first introduce a physical model for quantifying skin microcirculation using diffusion-weighted MR (DWMR) based on an effective dispersion model for skin leading to a q-space model of the DWMR complex signal, and then design the corresponding robust optimal experiments. The resulting robust optimal DWMR protocols improve the worst-case quality of parameter estimates using nonlinear least squares optimization by exploiting available a priori knowledge of model parameters. Hence, our approach optimizes the gradient strengths and directions used in DWMR experiments to robustly minimize the size of the parameter estimation error with respect to model parameter uncertainty. Numerical evaluations are presented to demonstrate the effectiveness of our approach as compared to conventional DWMR protocols.

  13. Odor Impression Prediction from Mass Spectra.

    PubMed

    Nozaki, Yuji; Nakamoto, Takamichi

    2016-01-01

    The sense of smell arises from the perception of odors from chemicals. However, the relationship between the impression of odor and the numerous physicochemical parameters has yet to be understood owing to its complexity. As such, there is no established general method for predicting the impression of odor of a chemical only from its physicochemical properties. In this study, we designed a novel predictive model based on an artificial neural network with a deep structure for predicting odor impression utilizing the mass spectra of chemicals, and we conducted a series of computational analyses to evaluate its performance. Feature vectors extracted from the original high-dimensional space using two autoencoders equipped with both input and output layers in the model are used to build a mapping function from the feature space of mass spectra to the feature space of sensory data. The results of predictions obtained by the proposed new method have notable accuracy (R≅0.76) in comparison with a conventional method (R≅0.61).

  14. Space shuttle/food system study. Volume 2, Appendix A: Active heating system-screening analysis. Appendix B: Reconstituted food heating techniques analysis

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Technical data are presented which were used to evaluate active heating methods to be incorporated into the space shuttle food system design, and also to evaluate the relative merits and penalties associated with various approaches to the heating of rehydrated food during space flight. Equipment heating candidates were subjected to a preliminary screening performed by a selection rationale process which considered the following parameters: (1) gravitational effect; (2) safety; (3) operability; (4) system compatibility; (5) serviceability; (6) crew acceptability; (7) crew time; (8) development risk; and (9) operating cost. A hot air oven, electrically heated food tray, and microwave oven were selected for further consideration and analysis. Passive, semi-active, and active food preparation approaches were also studied in an effort to determine the optimum method for heating rehydrated food. Potential complexity, cost, vehicle impact penalties, and palatability were considered in the analysis. A summary of the study results is provided along with cost estimates for each of the potential systems.

  15. Deductive Derivation and Turing-Computerization of Semiparametric Efficient Estimation

    PubMed Central

    Frangakis, Constantine E.; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-01-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF’s functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. PMID:26237182
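
    Since the EIF is, in principle, a derivative, the idea can be made concrete on the simplest possible functional, the mean, where the Gateaux derivative along the contamination path (1-ε)P + εδ_x recovers the textbook influence function x − θ. This toy numerical sketch is only that: as the abstract notes, naive numerical differentiation does not extend to general functionals, which is precisely the gap the authors' deductive procedure addresses.

    ```python
    def mean_functional(points, weights):
        """theta(P) = E_P[X] for a discrete distribution on `points`."""
        return sum(w * x for x, w in zip(points, weights))

    def numerical_eif(functional, points, weights, x_index, eps=1e-6):
        """Gateaux derivative of the functional along (1-eps)*P + eps*delta_x,
        approximated by a forward difference. For the mean functional this
        recovers the textbook influence function x - theta."""
        base = functional(points, weights)
        tilted = [(1 - eps) * w for w in weights]
        tilted[x_index] += eps  # put mass eps on the contamination point x
        return (functional(points, tilted) - base) / eps

    points = [1.0, 2.0, 4.0, 9.0]
    weights = [0.25] * 4
    theta = mean_functional(points, weights)                        # 4.0
    eif_at_9 = numerical_eif(mean_functional, points, weights, 3)   # ≈ 9 - 4
    ```

    Because the mean is linear in P, the forward difference is exact here; for nonlinear functionals the dependence of the EIF on the parameter is hidden, motivating the deductive construction.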

  16. Deductive derivation and turing-computerization of semiparametric efficient estimation.

    PubMed

    Frangakis, Constantine E; Qian, Tianchen; Wu, Zhenke; Diaz, Ivan

    2015-12-01

    Researchers often seek robust inference for a parameter through semiparametric estimation. Efficient semiparametric estimation currently requires theoretical derivation of the efficient influence function (EIF), which can be a challenging and time-consuming task. If this task can be computerized, it can save dramatic human effort, which can be transferred, for example, to the design of new studies. Although the EIF is, in principle, a derivative, simple numerical differentiation to calculate the EIF by a computer masks the EIF's functional dependence on the parameter of interest. For this reason, the standard approach to obtaining the EIF relies on the theoretical construction of the space of scores under all possible parametric submodels. This process currently depends on the correctness of conjectures about these spaces, and the correct verification of such conjectures. The correct guessing of such conjectures, though successful in some problems, is a nondeductive process, i.e., is not guaranteed to succeed (e.g., is not computerizable), and the verification of conjectures is generally susceptible to mistakes. We propose a method that can deductively produce semiparametric locally efficient estimators. The proposed method is computerizable, meaning that it does not need either conjecturing, or otherwise theoretically deriving the functional form of the EIF, and is guaranteed to produce the desired estimates even for complex parameters. The method is demonstrated through an example. © 2015, The International Biometric Society.

  17. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or may be infinite-dimensional as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
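
    The alpha-cut machinery can be sketched for a single triangular fuzzy parameter. Here a brute-force interval scan stands in for the HDMR surrogate (an assumption made for brevity), and the stiffness and mass values are invented for illustration.

    ```python
    import math

    def triangular_alpha_cut(a, b, c, alpha):
        """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c), 0 <= alpha <= 1."""
        return a + alpha * (b - a), c - alpha * (c - b)

    def propagate(f, tri, alpha, n_grid=201):
        """Image of an alpha-cut under f, found by scanning the interval on a grid.
        In the paper, an HDMR surrogate would replace f to cut the evaluation cost."""
        lo, hi = triangular_alpha_cut(*tri, alpha)
        values = [f(lo + (hi - lo) * i / (n_grid - 1)) for i in range(n_grid)]
        return min(values), max(values)

    # Fuzzy stiffness k ~ triangular(0.8, 1.0, 1.2); output: natural frequency sqrt(k/m).
    m = 2.0
    freq = lambda k: math.sqrt(k / m)
    cut_0 = propagate(freq, (0.8, 1.0, 1.2), alpha=0.0)  # widest output interval
    cut_1 = propagate(freq, (0.8, 1.0, 1.2), alpha=1.0)  # collapses to the crisp value
    ```

    Sweeping alpha from 0 to 1 reconstructs the membership function of the output, which is the fuzzy analogue of a confidence envelope.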

  18. On deformation of complex continuum immersed in a plane space

    NASA Astrophysics Data System (ADS)

    Kovalev, V. A.; Murashkin, E. V.; Radayev, Y. N.

    2018-05-01

    The present paper is devoted to mathematical modelling of complex continua deformations considered as immersed in an external plane space. The complex continuum is defined as a differential manifold supplied with metrics induced by the external space. A systematic derivation of strain tensors via the notion of isometric immersion of the complex continuum into a plane space of a higher dimension is proposed. The problem of establishing complete systems of irreducible objective strain and extra-strain tensors for a complex continuum immersed in an external plane space is resolved. The solution to the problem is obtained by methods of the field theory and the theory of rational algebraic invariants. Strain tensors of the complex continuum are derived as irreducible algebraic invariants of contravariant vectors of the external space emerging as functional arguments in the complex continuum action density. The present analysis is restricted to rational algebraic invariants. Completeness of the considered systems of rational algebraic invariants is established for micropolar elastic continua. Rational syzygies for non-quadratic invariants are discussed. Objective strain tensors (indifferent to frame rotations in the external plane space) for a micropolar continuum are alternatively obtained by properly combining multipliers of polar decompositions of deformation and extra-deformation gradients. The latter is realized only for continua immersed in a plane space of the equal mathematical dimension.

  19. A review of cooperative and uncooperative spacecraft pose determination techniques for close-proximity operations

    NASA Astrophysics Data System (ADS)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele

    2017-08-01

    The capability of an active spacecraft to accurately estimate its relative position and attitude (pose) with respect to an active/inactive, artificial/natural space object (target) orbiting in close-proximity is required to carry out various activities like formation flying, on-orbit servicing, active debris removal, and space exploration. According to the specific mission scenario, the pose determination task involves both theoretical and technological challenges related to the search for the most suitable algorithmic solution and sensor architecture, respectively. As regards the latter aspect, electro-optical sensors represent the best option as their use is compatible with mass and power limitation of micro and small satellites, and their measurements can be processed to estimate all the pose parameters. Overall, the degree of complexity of the challenges related to pose determination largely varies depending on the nature of the targets, which may be actively/passively cooperative, uncooperative but known, or uncooperative and unknown space objects. In this respect, while cooperative pose determination has been successfully demonstrated in orbit, the uncooperative case is still under study by universities, research centers, space agencies and private companies. However, in both the cases, the demand for space applications involving relative navigation maneuvers, also in close-proximity, for which pose determination capabilities are mandatory, is significantly increasing. In this framework, a review of state-of-the-art techniques and algorithms developed in the last decades for cooperative and uncooperative pose determination by processing data provided by electro-optical sensors is herein presented. Specifically, their main advantages and drawbacks in terms of achieved performance, computational complexity, and sensitivity to variability of pose and target geometry, are highlighted.

  20. Design of low surface roughness-low residual stress-high optoelectronic merit a-IZO thin films for flexible OLEDs

    DOE PAGES

    Kumar, Naveen; Wilkinson, Taylor M.; Packard, Corinne E.; ...

    2016-06-08

    The development of efficient and reliable large-area flexible optoelectronic devices demands low surface roughness-low residual stress-high optoelectronic merit transparent conducting oxide (TCO) thin films. Here, we correlate surface roughness-residual stress-optoelectronic properties of sputtered amorphous indium zinc oxide (a-IZO) thin films using a statistical design of experiment (DOE) approach and find a common growth space to achieve a smooth surface in a stress-free and high optoelectronic merit a-IZO thin film. The sputtering power, growth pressure, oxygen partial pressure, and RF/(RF+DC) are varied in a two-level system with a full factorial design, and results are used to deconvolve the complex growth space, identifying significant control growth parameters and their possible interactions. The surface roughness of a-IZO thin film varies over 0.19 nm to 3.97 nm, which is not in line with the general assumption of low surface roughness in a-IZO thin films. The initial regression model and analysis of variance reveal no single optimum growth sub-space to achieve low surface roughness (≤0.5 nm), low residual stress (−1 to 0 GPa), and industrially acceptable electrical conductivity (>1000 S/cm) for a-IZO thin films. The extrapolation of growth parameters in light of the current results and previous knowledge leads to a new sub-space, resulting in a low residual stress of −0.52 ± 0.04 GPa, a low surface roughness of 0.55 ± 0.03 nm, and moderate electrical conductivity of 1962 ± 3.84 S/cm in a-IZO thin films. Lastly, these results demonstrate the utility of the DOE approach to multi-parameter optimization, which provides an important tool for the development of flexible TCOs for the next-generation flexible organic light emitting diodes applications.
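
    The two-level full-factorial layout is easy to reproduce in miniature. The factor names below follow the abstract, but the response function is invented purely to show how main effects fall out of the design; it is not the paper's fitted model.

    ```python
    from itertools import product

    factors = ["power", "pressure", "o2_partial", "rf_fraction"]  # from the abstract
    # Full factorial: every combination of coded levels -1 and +1 -> 2^4 = 16 runs.
    design = [dict(zip(factors, levels))
              for levels in product([-1, +1], repeat=len(factors))]

    def toy_roughness(run):
        """Invented response surface for illustration only."""
        return (2.0 + 0.8 * run["o2_partial"] - 0.5 * run["power"]
                + 0.3 * run["power"] * run["o2_partial"])

    def main_effect(design, responses, factor):
        """Mean response at the +1 level minus mean response at the -1 level."""
        hi = [r for run, r in zip(design, responses) if run[factor] == +1]
        lo = [r for run, r in zip(design, responses) if run[factor] == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    responses = [toy_roughness(run) for run in design]
    effect_o2 = main_effect(design, responses, "o2_partial")      # 2 x 0.8 = 1.6
    effect_pressure = main_effect(design, responses, "pressure")  # inert factor: 0
    ```

    Because the design is orthogonal, the interaction term averages out of each main effect, and an inert factor's estimated effect is exactly zero; analysis of variance on such contrasts is what identifies the significant growth parameters.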

  1. Design of low surface roughness-low residual stress-high optoelectronic merit a-IZO thin films for flexible OLEDs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Naveen; Kumar, Mukesh, E-mail: mkumar@iitrpr.ac.in, E-mail: cpackard@mines.edu; Wilkinson, Taylor M.

    2016-06-14

    The development of efficient and reliable large-area flexible optoelectronic devices demands low surface roughness-low residual stress-high optoelectronic merit transparent conducting oxide (TCO) thin films. Here, we correlate surface roughness-residual stress-optoelectronic properties of sputtered amorphous indium zinc oxide (a-IZO) thin films using a statistical design of experiment (DOE) approach and find a common growth space to achieve a smooth surface in a stress-free and high optoelectronic merit a-IZO thin film. The sputtering power, growth pressure, oxygen partial pressure, and RF/(RF+DC) are varied in a two-level system with a full factorial design, and results are used to deconvolve the complex growth space, identifying significant control growth parameters and their possible interactions. The surface roughness of a-IZO thin film varies over 0.19 nm to 3.97 nm, which is not in line with the general assumption of low surface roughness in a-IZO thin films. The initial regression model and analysis of variance reveal no single optimum growth sub-space to achieve low surface roughness (≤0.5 nm), low residual stress (−1 to 0 GPa), and industrially acceptable electrical conductivity (>1000 S/cm) for a-IZO thin films. The extrapolation of growth parameters in light of the current results and previous knowledge leads to a new sub-space, resulting in a low residual stress of −0.52±0.04 GPa, a low surface roughness of 0.55±0.03 nm, and moderate electrical conductivity of 1962±3.84 S/cm in a-IZO thin films. These results demonstrate the utility of the DOE approach to multi-parameter optimization, which provides an important tool for the development of flexible TCOs for the next-generation flexible organic light emitting diodes applications.

  2. IDEAS: A multidisciplinary computer-aided conceptual design system for spacecraft

    NASA Technical Reports Server (NTRS)

    Ferebee, M. J., Jr.

    1984-01-01

    During the conceptual development of advanced aerospace vehicles, many compromises must be considered to balance economy and performance of the total system. Subsystem tradeoffs may need to be made in order to satisfy system-sensitive attributes. Due to the increasingly complex nature of aerospace systems, these trade studies have become more difficult and time-consuming to complete and involve interactions of ever-larger numbers of subsystems, components, and performance parameters. The current advances of computer-aided synthesis, modeling and analysis techniques have greatly helped in the evaluation of competing design concepts. Langley Research Center's Space Systems Division is currently engaged in trade studies for a variety of systems which include advanced ground-launched space transportation systems, space-based orbital transfer vehicles, large space antenna concepts and space stations. The need for engineering analysis tools to aid in the rapid synthesis and evaluation of spacecraft has led to the development of the Interactive Design and Evaluation of Advanced Spacecraft (IDEAS) computer-aided design system. The IDEAS system has been used to perform trade studies of competing technologies and requirements in order to pinpoint possible beneficial areas for research and development. IDEAS is presented as a multidisciplinary tool for the analysis of advanced space systems. Capabilities range from model generation and structural and thermal analysis to subsystem synthesis and performance analysis.

  3. Characterizing the Circumgalactic Medium of Nearby Galaxies with HST/COS and HST/STIS Absorption-line Spectroscopy. II. Methods and Models

    NASA Astrophysics Data System (ADS)

    Keeney, Brian A.; Stocke, John T.; Danforth, Charles W.; Shull, J. Michael; Pratt, Cameron T.; Froning, Cynthia S.; Green, James C.; Penton, Steven V.; Savage, Blair D.

    2017-05-01

    We present basic data and modeling for a survey of the cool, photoionized circumgalactic medium (CGM) of low-redshift galaxies using far-UV QSO absorption-line probes. This survey consists of “targeted” and “serendipitous” CGM subsamples, originally described in Stocke et al. (Paper I). The targeted subsample probes low-luminosity, late-type galaxies at z < 0.02 with small impact parameters (⟨ρ⟩ = 71 kpc), and the serendipitous subsample probes higher-luminosity galaxies at z ≲ 0.2 with larger impact parameters (⟨ρ⟩ = 222 kpc). Hubble Space Telescope and FUSE UV spectroscopy of the absorbers and basic data for the associated galaxies, derived from ground-based imaging and spectroscopy, are presented. We find broad agreement with the COS-Halos results, but our sample shows no evidence for changing ionization parameter or hydrogen density with distance from the CGM host galaxy, probably because the COS-Halos survey probes the CGM at smaller impact parameters. We find at least two passive galaxies with H I and metal-line absorption, confirming the intriguing COS-Halos result that galaxies sometimes have cool gas halos despite no on-going star formation. Using a new methodology for fitting H I absorption complexes, we confirm the CGM cool gas mass of Paper I, but this value is significantly smaller than that found by the COS-Halos survey. We trace much of this difference to the specific values of the low-z metagalactic ionization rate assumed. After accounting for this difference, a best value for the CGM cool gas mass is found by combining the results of both surveys to obtain log(M/M⊙) = 10.5 ± 0.3, or ~30% of the total baryon reservoir of an L ≥ L*, star-forming galaxy. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

  4. 14 CFR 1214.813 - Computation of sharing and pricing parameters.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Computation of sharing and pricing parameters. 1214.813 Section 1214.813 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION SPACE FLIGHT Reimbursement for Spacelab Services § 1214.813 Computation of sharing and pricing...

  5. Non-Gaussian effects, space-time decoupling, and mobility bifurcation in glassy hard-sphere fluids and suspensions.

    PubMed

    Saltzman, Erica J; Schweizer, Kenneth S

    2006-12-01

    Brownian trajectory simulation methods are employed to fully establish the non-Gaussian fluctuation effects predicted by our nonlinear Langevin equation theory of single particle activated dynamics in glassy hard-sphere fluids. The consequences of stochastic mobility fluctuations associated with the space-time complexities of the transient localization and barrier hopping processes have been determined. The incoherent dynamic structure factor was computed for a range of wave vectors and becomes of an increasingly non-Gaussian form for volume fractions beyond the (naive) ideal mode coupling theory (MCT) transition. The non-Gaussian parameter (NGP) amplitude increases markedly with volume fraction and is well described by a power law in the maximum restoring force of the nonequilibrium free energy profile. The time scale associated with the NGP peak becomes much smaller than the alpha relaxation time for systems characterized by significant entropic barriers. An alternate non-Gaussian parameter that probes the long time alpha relaxation process displays a different shape, peak intensity, and time scale of its maximum. However, a strong correspondence between the classic and alternate NGP amplitudes is predicted which suggests a deep connection between the early and final stages of cage escape. Strong space-time decoupling emerges at high volume fractions as indicated by a nondiffusive wave vector dependence of the relaxation time and growth of the translation-relaxation decoupling parameter. Displacement distributions exhibit non-Gaussian behavior at intermediate times, evolving into a strongly bimodal form with slow and fast subpopulations at high volume fractions. Qualitative and semiquantitative comparisons of the theoretical results with colloid experiments, ideal MCT, and multiple simulation studies are presented.
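
    The non-Gaussian parameter discussed above is, in three dimensions, α₂(t) = 3⟨r⁴⟩/(5⟨r²⟩²) − 1, which vanishes for Gaussian displacement statistics. The sketch below evaluates it on synthetic displacements (a seeded two-population toy, not the paper's Brownian trajectory data): a single Gaussian population gives α₂ ≈ 0, while a slow/fast bimodal mixture of the kind described gives a strongly positive value.

    ```python
    import random

    def alpha2(displacements):
        """Three-dimensional non-Gaussian parameter
        alpha_2 = 3<r^4> / (5 <r^2>^2) - 1, zero for Gaussian displacements."""
        r2 = [dx * dx + dy * dy + dz * dz for dx, dy, dz in displacements]
        m2 = sum(r2) / len(r2)
        m4 = sum(v * v for v in r2) / len(r2)
        return 3.0 * m4 / (5.0 * m2 * m2) - 1.0

    random.seed(1)

    def gauss3(sigma):
        """Isotropic 3D Gaussian displacement with standard deviation sigma per axis."""
        return (random.gauss(0, sigma), random.gauss(0, sigma), random.gauss(0, sigma))

    # Homogeneous (Gaussian) dynamics: alpha_2 should be close to zero.
    homogeneous = [gauss3(1.0) for _ in range(40000)]

    # Bimodal mobility: slow and fast subpopulations, as in the abstract.
    bimodal = [gauss3(0.3 if random.random() < 0.7 else 3.0) for _ in range(40000)]

    a2_homog = alpha2(homogeneous)   # near zero
    a2_bimodal = alpha2(bimodal)     # clearly positive
    ```

    The mixture weights and widths here are arbitrary; the point is only that a slow/fast split in mobility inflates ⟨r⁴⟩ relative to ⟨r²⟩² and so drives α₂ well above zero.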

  6. Plausible combinations: An improved method to evaluate the covariate structure of Cormack-Jolly-Seber mark-recapture models

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.

    2013-01-01

    Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
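
As a rough illustration of why such model sets grow quickly, a candidate set formed from all combinations of survival (phi) and recapture (p) sub-models can be enumerated as follows; the covariate names are hypothetical, not from the study:

```python
from itertools import combinations

# Hypothetical covariates for the survival (phi) and recapture (p) sub-models.
phi_covs = ["sex", "age", "year"]
p_covs = ["effort", "year"]

def submodels(covs):
    """All subsets of a covariate list, each subset being one sub-model."""
    return [set(c) for r in range(len(covs) + 1)
            for c in combinations(covs, r)]

# Every pairing of a phi sub-model with a p sub-model is one candidate model.
model_set = [(phi, p) for phi in submodels(phi_covs)
                      for p in submodels(p_covs)]
print(len(model_set))   # 8 phi sub-models x 4 p sub-models = 32
```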

  7. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
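
A toy Monte Carlo sketch of the kind of Bayes factor discussed here, using an encompassing-prior style ratio of posterior to prior mass satisfying the constraint; the prior, data values, and conjugate update below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# H1: theta > 0, tested against the unconstrained (encompassing) model.
prior_draws = rng.normal(0.0, 10.0, size=200_000)     # vague prior on theta

# Toy conjugate normal update: n observations, mean ybar, known sigma.
n, ybar, sigma, tau = 50, 0.5, 1.0, 10.0
s2 = 1.0 / (n / sigma**2 + 1.0 / tau**2)              # posterior variance
m = s2 * (n * ybar / sigma**2)                        # posterior mean
post_draws = rng.normal(m, np.sqrt(s2), size=200_000)

# Bayes factor of H1 vs. encompassing model: posterior mass / prior mass.
bf = np.mean(post_draws > 0) / np.mean(prior_draws > 0)
print(bf > 1.0)   # True: data with ybar > 0 favour the constraint
```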

  8. Influence of surfactant upon air entrainment hysteresis in curtain coating

    NASA Astrophysics Data System (ADS)

    Marston, J. O.; Hawkins, V.; Decent, S. P.; Simmons, M. J. H.

    2009-03-01

    The onset of air entrainment for curtain coating onto a pre-wetted substrate was studied experimentally in parameter regimes similar to commercial coating (Re = ρQ/μ = O(1), We = ρQu_c/σ = O(10), Ca = μU/σ = O(1)). Impingement speed and viscosity were previously shown to be critical parameters in correlating air entrainment data, with three qualitatively different regimes of hydrodynamic assist identified (Marston et al. in Exp Fluids 42(3):483-488, 2007a). The interaction of the impinging curtain with the pre-existing film also led to a significant hysteretic effect throughout the flow rate-substrate speed parameter space. For the first time, results considering the influence of surfactants are presented in an attempt to elucidate the relative importance of surface tension in this inertia-dominated system. The results show quantitative and qualitative differences from previous results, with much more complex hysteretic behaviour, which has previously been reported only for rough surfaces.
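
For reference, the quoted dimensionless groups can be computed directly; all numerical values below are invented for illustration, chosen only to land in the quoted orders of magnitude:

```python
# Illustrative evaluation of the dimensionless groups from the abstract.
rho = 1000.0    # liquid density, kg/m^3
mu = 0.1        # dynamic viscosity, Pa*s
sigma = 0.05    # surface tension, N/m
Q = 1.0e-4      # flow rate per unit width, m^2/s
u_c = 1.0       # curtain impingement speed, m/s
U = 0.5         # substrate speed, m/s

Re = rho * Q / mu           # Reynolds number, O(1)
We = rho * Q * u_c / sigma  # Weber number, O(10)
Ca = mu * U / sigma         # capillary number, O(1)
print(Re, We, Ca)           # 1.0 2.0 1.0
```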

  9. Progressive Learning of Topic Modeling Parameters: A Visual Analytics Framework.

    PubMed

    El-Assady, Mennatallah; Sevastjanova, Rita; Sperrle, Fabian; Keim, Daniel; Collins, Christopher

    2018-01-01

    Topic modeling algorithms are widely used to analyze the thematic composition of text corpora but remain difficult to interpret and adjust. Addressing these limitations, we present a modular visual analytics framework, tackling the understandability and adaptability of topic models through a user-driven reinforcement learning process which does not require a deep understanding of the underlying topic modeling algorithms. Given a document corpus, our approach initializes two algorithm configurations based on a parameter space analysis that enhances document separability. We abstract the model complexity in an interactive visual workspace for exploring the automatic matching results of two models, investigating topic summaries, analyzing parameter distributions, and reviewing documents. The main contribution of our work is an iterative decision-making technique in which users provide a document-based relevance feedback that allows the framework to converge to a user-endorsed topic distribution. We also report feedback from a two-stage study which shows that our technique results in topic model quality improvements on two independent measures.

  10. The Hyper-Envelope Modeling Interface (HEMI): A Novel Approach Illustrated Through Predicting Tamarisk (Tamarix spp.) Habitat in the Western USA

    USGS Publications Warehouse

    Graham, Jim; Young, Nick; Jarnevich, Catherine S.; Newman, Greg; Evangelista, Paul; Stohlgren, Thomas J.

    2013-01-01

    Habitat suitability maps are commonly created by modeling a species’ environmental niche from occurrences and environmental characteristics. Here, we introduce the hyper-envelope modeling interface (HEMI), providing a new method for creating habitat suitability models using Bezier surfaces to model a species' niche in environmental space. HEMI allows modeled surfaces to be visualized and edited in environmental space based on expert knowledge and does not require absence points for model development. The modeled surfaces require relatively few parameters compared to similar modeling approaches and may produce models that better match ecological niche theory. As a case study, we modeled the invasive species tamarisk (Tamarix spp.) in the western USA. We compare results from HEMI with those from existing similar modeling approaches (including BioClim, BioMapper, and Maxent). We used synthetic surfaces to create visualizations of the various models in environmental space and used a modified area under the curve (AUC) statistic and the Akaike information criterion (AIC) as measures of model performance. We show that HEMI produced slightly better AUC values than all approaches except Maxent, and better AIC values overall. HEMI created a model with only ten parameters, while Maxent produced a model with over 100 and BioClim used only eight. Additionally, HEMI allowed visualization and editing of the model in environmental space to develop alternative potential habitat scenarios. The use of Bezier surfaces can provide simple models that match our expectations of biological niche models and, at least in some cases, out-perform more complex approaches.

  11. An online spatio-temporal prediction model for dengue fever epidemic in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Cheng, Ming-Hung; Yu, Hwa-Lung; Angulo, Jose; Christakos, George

    2013-04-01

    Dengue Fever (DF) is one of the most serious vector-borne infectious diseases in tropical and subtropical areas. DF epidemics occur in Taiwan annually, especially during the summer and fall seasons. Kaohsiung city has been one of the major DF hotspots for decades. The emergence and re-emergence of the DF epidemic is complex and can be influenced by various factors, including the space-time dynamics of human and vector populations and virus serotypes, as well as the associated uncertainties. This study integrates a stochastic space-time "Susceptible-Infected-Recovered" model under a Bayesian maximum entropy framework (BME-SIR) to perform real-time prediction of disease diffusion across space-time. The proposed model is applied for spatiotemporal prediction of the DF epidemic in Kaohsiung city during 2002, when a historically high number of DF cases was recorded. In online prediction, the BME-SIR model updates the parameters of the SIR model and the infected cases across districts over time. Results show that the proposed model is robust to the initial guess of unknown model parameters, i.e. transmission and recovery rates, which can depend upon the virus serotypes and various human interventions. This study shows that spatial diffusion can be well characterized by the BME-SIR model, especially at the districts surrounding the disease outbreak locations. The prediction performance at DF hotspots, i.e. Cianjhen and Sanmin, can be degraded by the implementation of various disease control strategies during the epidemics. The proposed online disease prediction BME-SIR model can provide governmental agencies with a valuable reference to timely identify, control, and efficiently prevent DF spread across space-time.
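
A minimal deterministic SIR step, the epidemic core that the BME-SIR framework wraps with Bayesian space-time updating; the transmission and recovery rates below are illustrative, not those estimated for Kaohsiung:

```python
def sir_step(S, I, R, beta, gamma):
    """One discrete-time step of the classic SIR model: beta is the
    transmission rate, gamma the recovery rate."""
    N = S + I + R
    new_inf = beta * S * I / N   # new infections this step
    new_rec = gamma * I          # new recoveries this step
    return S - new_inf, I + new_inf - new_rec, R + new_rec

S, I, R = 999.0, 1.0, 0.0
for _ in range(60):              # 60 daily steps of a toy outbreak
    S, I, R = sir_step(S, I, R, beta=0.4, gamma=0.2)
print(round(S + I + R, 6))       # 1000.0 (population is conserved)
```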

  12. Postflight reconditioning for European Astronauts - A case report of recovery after six months in space.

    PubMed

    Petersen, Nora; Lambrecht, Gunda; Scott, Jonathan; Hirsch, Natalie; Stokes, Maria; Mester, Joachim

    2017-01-01

    Postflight reconditioning of astronauts is understudied. Despite a rigorous, daily inflight exercise countermeasures programme during six months in microgravity (μG) on-board the International Space Station (ISS), physiological impairments occur and postflight reconditioning is still required on return to Earth. Such postflight programmes are implemented by space agency reconditioning specialists. Case Description and Assessments: A 38-year-old male European Space Agency (ESA) crewmember's pre- and postflight (at six and 21 days after landing) physical performance from a six-month mission to the ISS is described. Assessments included muscle strength (squat and bench press 1 Repetition Maximum), power (vertical jump), core muscle endurance and hip flexibility (Sit and Reach, Thomas Test). In-flight, the astronaut undertook a rigorous daily (2-h) exercise programme. The 21-day postflight reconditioning exercise concept focused on motor control and functional training, and was delivered in close co-ordination by the ESA physiotherapist and exercise specialist to provide the crewmember with comprehensive reconditioning support. Despite an intensive inflight exercise programme for this highly motivated crewmember and excellent compliance with postflight reconditioning, postflight performance showed impairments at R+6 for most parameters, all of which recovered by R+21 except muscular power (jump tests). Complex powerful performance tasks took longer to return to preflight values. Research is needed to develop optimal inflight and postflight exercise programmes to overcome the negative effects of microgravity and return the astronaut to preflight status as rapidly as possible. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Experimental and theoretical studies of the products of reaction between Ln(hfa)3 and Cu(acac)2 (Ln = La, Y; acac = acetylacetonate, hfa = hexafluoroacetylacetonate)

    NASA Astrophysics Data System (ADS)

    Rogachev, Andrey Yu.; Mironov, Andrey V.; Nemukhin, Alexander V.

    2007-04-01

    The new unusual heterobimetallic complex [La(hfa)3Cu(acac)2(H2O)] (I) was obtained in the reaction of La(hfa)3·2H2O with Cu(acac)2 in CHCl3. This is the first example of this type of heterobimetallic complex based on the Cu(acac)2 species. According to the X-ray single-crystal analysis, complex I crystallizes in the monoclinic space group P21/c, with a = 12.516(3) Å, b = 17.757(4) Å, c = 17.446(4) Å, β = 93.90(3)° and Z = 4. The structure consists of isolated heterobinuclear molecules with the coordination number of La being 9. The molecules are further assembled into dimers via hydrogen bonds. The theoretical modeling of the structure and properties of the parent monometallic complexes Ln(hfa)3 (Ln = La, Y) and Cu(acac)2 is described. The comparative theoretical study of the lanthanide complexes relates the formation of a heterobimetallic complex to the Lewis acidity of the original monometallic complexes. In particular, the Lewis acidity and the charge of the central metal ion in Ln(hfa)3 are the key parameters accounting for the formation of [Ln(hfa)3Cu(acac)2].
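
As a quick consistency check on the reported monoclinic cell, the unit-cell volume follows from V = abc·sin(β):

```python
import math

# Monoclinic unit-cell volume from the cell parameters reported for
# complex I: V = a * b * c * sin(beta).
a, b, c = 12.516, 17.757, 17.446      # cell edges, angstroms
beta = math.radians(93.90)            # monoclinic angle
V = a * b * c * math.sin(beta)
print(round(V))                       # ~3868 cubic angstroms, Z = 4
```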

  14. Complex assembly, crystallization and preliminary X-ray crystallographic studies of rhesus macaque MHC Mamu-A*01 complexed with an immunodominant SIV-Gag nonapeptide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chu, Fuliang; Graduate School, Chinese Academy of Sciences, Beijing; Lou, Zhiyong

    2005-06-01

    Crystallization of the first rhesus macaque MHC class I complex. Simian immunodeficiency virus (SIV) infection in rhesus macaques has been used as the best model for the study of human immunodeficiency virus (HIV) infection in humans, especially for the cytotoxic T-lymphocyte (CTL) response. However, the structure of a rhesus macaque (or any other monkey model) major histocompatibility complex class I (MHC I) molecule presenting a specific peptide (the ligand for CTL) has not yet been elucidated. Here, the preparation by in vitro refolding of the complex of the rhesus macaque MHC I allele (Mamu-A*01) with human β2m and an immunodominant peptide, CTPYDINQM (Gag-CM9), derived from SIV Gag protein is reported. The complex (45 kDa) was crystallized; the crystal belongs to space group I422, with unit-cell parameters a = b = 183.8, c = 155.2 Å. The crystal contains two molecules in the asymmetric unit and diffracts X-rays to 2.8 Å resolution. The structure is being solved by molecular replacement; this is the first attempt to determine the crystal structure of a peptide-nonhuman primate MHC complex.

  15. Sign problem and Monte Carlo calculations beyond Lefschetz thimbles

    DOE PAGES

    Alexandru, Andrei; Basar, Gokce; Bedaque, Paulo F.; ...

    2016-05-10

    We point out that Monte Carlo simulations of theories with severe sign problems can be profitably performed over manifolds in complex space different from the one with fixed imaginary part of the action (“Lefschetz thimble”). We describe a family of such manifolds that interpolate between the tangent space at one critical point (where the sign problem is milder compared to the real plane but in some cases still severe) and the union of relevant thimbles (where the sign problem is mild but a multimodal distribution function complicates the Monte Carlo sampling). We exemplify this approach using a simple 0+1 dimensional fermion model previously used in sign problem studies and show that it can solve the model for some parameter values where a solution using Lefschetz thimbles was elusive.
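
Not from the paper, but a toy illustration of the premise: under naive reweighting, the average phase measured in the phase-quenched ensemble shrinks roughly as exp(-σ²/2) with the spread σ of the imaginary part of the action, which is what makes a severe sign problem intractable:

```python
import numpy as np

rng = np.random.default_rng(3)
avg_signs = {}
for scale in (0.3, 3.0):                 # mild vs. severe imaginary action
    im_S = rng.normal(0.0, scale, size=100_000)
    # Average phase <exp(i Im S)> in the phase-quenched ensemble.
    avg_signs[scale] = np.mean(np.exp(1j * im_S)).real
    print(scale, round(avg_signs[scale], 3))   # ~exp(-scale**2 / 2)
```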

  16. Novel Hybrid Adaptive Controller for Manipulation in Complex Perturbation Environments

    PubMed Central

    Smith, Alex M. C.; Yang, Chenguang; Ma, Hongbin; Culverhouse, Phil; Cangelosi, Angelo; Burdet, Etienne

    2015-01-01

    In this paper we present a hybrid control scheme, combining the advantages of task-space and joint-space control. The controller is based on a human-like adaptive design, which minimises both control effort and tracking error. Our novel hybrid adaptive controller has been tested in extensive simulations, in a scenario where a Baxter robot manipulator is affected by external disturbances in the form of interaction with the environment and tool-like end-effector perturbations. The results demonstrated improved performance in the hybrid controller over both of its component parts. In addition, we introduce a novel method for online adaptation of learning parameters, using the fuzzy control formalism to utilise expert knowledge from the experimenter. This mechanism of meta-learning induces further improvement in performance and avoids the need for tuning through trial testing. PMID:26029916

  17. Complexity in congestive heart failure: A time-frequency approach

    NASA Astrophysics Data System (ADS)

    Banerjee, Santo; Palit, Sanjay K.; Mukherjee, Sayan; Ariffin, MRK; Rondoni, Lamberto

    2016-03-01

    Reconstruction of phase space is an effective method to quantify the dynamics of a signal or a time series. Various phase space reconstruction techniques have been investigated. However, open issues remain concerning optimal reconstructions and the best possible choice of the reconstruction parameters. This research introduces the idea of gradient cross recurrence (GCR) and mean gradient cross recurrence density, which shows that reconstructions in the time-frequency domain preserve more information about the dynamics than the optimal reconstructions in the time domain. This analysis is further extended to ECG signals of normal and congestive heart failure patients. By using another newly introduced measure, the gradient cross recurrence period density entropy, the two classes of aforesaid ECG signals can be classified with a proper threshold. This analysis can be applied to quantify and distinguish biomedical and other nonlinear signals.
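
The standard Takens time-delay embedding underlying such phase-space reconstructions can be sketched as follows; the signal here is a toy sine wave standing in for an ECG trace, and the function name is illustrative:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay reconstruction of a scalar series x into dim-dimensional
    phase-space vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)                        # toy signal standing in for an ECG
emb = delay_embed(x, dim=3, tau=25)
print(emb.shape)                     # (1950, 3)
```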

  18. Efficacy analysis of LDPC coded APSK modulated differential space-time-frequency coded for wireless body area network using MB-pulsed OFDM UWB technology.

    PubMed

    Manimegalai, C T; Gauni, Sabitha; Kalimuthu, K

    2017-12-04

    Wireless body area network (WBAN) is a breakthrough technology in healthcare areas such as hospitals and telemedicine. The human body is a complex mixture of different tissues, and the propagation of electromagnetic signals is expected to differ in each of these tissues. This forms the basis of the WBAN, which is different from other environments. In this paper, knowledge of the Ultra Wide Band (UWB) channel is explored in the WBAN (IEEE 802.15.6) system. Measurements of parameters in the frequency range 3.1-10.6 GHz are taken. The proposed system transmits data at up to 480 Mbps by using LDPC coded APSK Modulated Differential Space-Time-Frequency Coded MB-OFDM to increase throughput and power efficiency.

  19. Calibration Laboratory Capabilities Listing as of April 2009

    NASA Technical Reports Server (NTRS)

    Kennedy, Gary W.

    2009-01-01

    This document reviews the Calibration Laboratory capabilities for various NASA centers (i.e., Glenn Research Center and Plum Brook Test Facility; Kennedy Space Center; Marshall Space Flight Center; Stennis Space Center; and White Sands Test Facility). Some of the parameters reported are: alternating current, direct current, dimensional, mass, force, torque, pressure and vacuum, safety, and thermodynamics parameters. Some centers reported other parameters.

  20. Dynamic positioning configuration and its first-order optimization

    NASA Astrophysics Data System (ADS)

    Xue, Shuqiang; Yang, Yuanxi; Dang, Yamin; Chen, Wu

    2014-02-01

    Traditional geodetic network optimization deals with static and discrete control points. The modern space geodetic network is, on the other hand, composed of moving control points in space (satellites) and on the Earth (ground stations). The network configuration composed of these facilities is essentially dynamic and continuous. Moreover, besides the position parameter which needs to be estimated, other geophysical information or signals can also be extracted from the continuous observations. The dynamic (continuous) configuration of the space network determines whether a particular frequency of signals can be identified by this system. In this paper, we employ the functional analysis and graph theory to study the dynamic configuration of space geodetic networks, and mainly focus on the optimal estimation of the position and clock-offset parameters. The principle of the D-optimization is introduced in the Hilbert space after the concept of the traditional discrete configuration is generalized from the finite space to the infinite space. It shows that the D-optimization developed in the discrete optimization is still valid in the dynamic configuration optimization, and this is attributed to the natural generalization of least squares from the Euclidean space to the Hilbert space. Then, we introduce the principle of D-optimality invariance under the combination operation and rotation operation, and propose some D-optimal simplex dynamic configurations: (1) (Semi) circular configuration in 2-dimensional space; (2) the D-optimal cone configuration and D-optimal helical configuration which is close to the GPS constellation in 3-dimensional space. The initial design of GPS constellation can be approximately treated as a combination of 24 D-optimal helixes by properly adjusting the ascending node of different satellites to realize a so-called Walker constellation. 
In the case of estimating the receiver clock-offset parameter, we show that the circular configuration, the symmetrical cone configuration and the helical curve configuration are still D-optimal. The given total observation time determines the optimal frequency (repeatability) of the moving known points and vice versa, and one way to improve the repeatability is to increase the rotational speed. Under Newton's laws of motion, the frequency of satellite motion determines the orbital altitude. Furthermore, we study three kinds of complex dynamic configurations: the first is the combination of D-optimal cone configurations and a so-called Walker constellation composed of D-optimal helical configurations, the second is the nested cone configuration composed of n cones, and the last is the nested helical configuration composed of n orbital planes. An effective way to achieve high coverage is to employ a configuration composed of a certain number of moving known points instead of the simplex configuration (such as the D-optimal helical configuration), and one can use the D-optimal simplex solutions or D-optimal complex configurations in any combination to achieve powerful configurations with flexible coverage and flexible repeatability. Additionally, how to optimally generate and assess discrete configurations sampled from the continuous one is discussed. The proposed configuration optimization framework takes into account the well-known regular polygons (such as the equilateral triangle and the square) in two-dimensional space and the regular polyhedra (regular tetrahedron, cube, regular octahedron, regular icosahedron, and regular dodecahedron). The conclusions made by the proposed technique are thus more general and no longer limited by different sampling schemes.
By the conditional equation of the D-optimal nested helical configuration, relevant issues of GNSS constellation optimization are solved, and some examples based on the GPS constellation are presented to verify the validity of the newly proposed optimization technique. The proposed technique is potentially helpful in the maintenance and quadratic optimization of a single GNSS whose orbital inclination and orbital altitude change under precession, as well as in optimally nesting GNSSs to achieve globally homogeneous coverage of the Earth.
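
A minimal numeric illustration of the D-criterion for a positioning design, assuming a simple design matrix whose rows are direction vectors to the known points; this is a finite sketch, not the paper's Hilbert-space formulation, and all geometry below is invented:

```python
import numpy as np

def d_criterion(directions):
    """log-determinant of the normal matrix A^T A, where the design
    matrix rows are direction vectors from the station to known points.
    Larger values mean a better-conditioned (more D-optimal) design."""
    A = np.asarray(directions)
    return np.linalg.slogdet(A.T @ A)[1]

# A symmetric cone of 8 directions at 45 deg elevation vs. a geometrically
# degenerate set of the same size squeezed toward the zenith:
az = np.linspace(0, 2 * np.pi, 8, endpoint=False)
cone = np.column_stack([np.cos(az), np.sin(az), np.ones(8)]) / np.sqrt(2)
clustered = cone * np.array([0.1, 0.1, 1.0])
print(d_criterion(cone) > d_criterion(clustered))   # True
```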

  1. Climate Modeling with a Million CPUs

    NASA Astrophysics Data System (ADS)

    Tobis, M.; Jackson, C. S.

    2010-12-01

    Meteorological, oceanographic, and climatological applications have been at the forefront of scientific computing since its inception. The trend toward ever larger and more capable computing installations is unabated. However, much of the increase in capacity is accompanied by an increase in parallelism and a concomitant increase in complexity. An increase of at least four additional orders of magnitude in the computational power of scientific platforms is anticipated. It is unclear how individual climate simulations can continue to make effective use of the largest platforms. Conversion of existing community codes to higher resolution, or to more complex phenomenology, or both, presents daunting design and validation challenges. Our alternative approach is to use the expected resources to run very large ensembles of simulations of modest size, rather than to await the emergence of very large simulations. We are already doing this in exploring the parameter space of existing models using the Multiple Very Fast Simulated Annealing algorithm, which was developed for seismic imaging. Our experiments have the dual intentions of tuning the model and identifying ranges of parameter uncertainty. Our approach is less strongly constrained by the dimensionality of the parameter space than are competing methods. Nevertheless, scaling up remains costly. Much could be achieved by increasing the dimensionality of the search and adding complexity to the search algorithms. Such ensemble approaches scale naturally to very large platforms. Extensions of the approach are anticipated. For example, structurally different models can be tuned to comparable effectiveness. This can provide an objective test for which there is no realistic precedent with smaller computations.
We find ourselves inventing new code to manage our ensembles. Component computations involve tens to hundreds of CPUs and tens to hundreds of hours. The results of these moderately large parallel jobs influence the scheduling of subsequent jobs, and complex algorithms may be easily contemplated for this. The operating system concept of a "thread" re-emerges at a very coarse level, where each thread manages atomic computations of thousands of CPU-hours. That is, rather than multiple threads operating on a processor, at this level, multiple processors operate within a single thread. In collaboration with the Texas Advanced Computing Center, we are developing a software library at the system level, which should facilitate the development of computations involving complex strategies which invoke large numbers of moderately large multi-processor jobs. While this may have applications in other sciences, our key intent is to better characterize the coupled behavior of a very large set of climate model configurations.
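
A much-simplified, single-chain stand-in for the simulated-annealing parameter search mentioned above; MVFSA itself uses multiple runs and a different cooling schedule, and everything here (cost function, schedule, step size) is illustrative:

```python
import math
import random

def anneal(cost, x0, steps=5000, T0=1.0):
    """Minimal simulated annealing over one parameter: propose Gaussian
    moves, accept downhill always and uphill with Boltzmann probability."""
    x, best = x0, x0
    for k in range(steps):
        T = T0 * 0.999 ** k                      # geometric cooling
        cand = x + random.gauss(0, 0.5)
        dE = cost(cand) - cost(x)
        if dE < 0 or random.random() < math.exp(-dE / T):
            x = cand
        if cost(x) < cost(best):                 # track best-so-far
            best = x
    return best

random.seed(1)
best = anneal(lambda p: (p - 1.7) ** 2, x0=8.0)  # toy misfit surface
print(abs(best - 1.7) < 0.5)                     # True
```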

  2. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    NASA Astrophysics Data System (ADS)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-07-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper, we use massive asymptotically optimal data compression to reduce the dimensionality of the data space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parametrized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate DELFI with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological data sets.
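
A toy sketch of the score-compression idea (one compressed summary per parameter, t = dmu^T C^-1 (d - mu0) / F), using an invented one-parameter linear model rather than the paper's cosmological setting:

```python
import numpy as np

# Toy linear score compression: model mu(theta) = theta * x with
# unit-variance Gaussian noise; compress n data points to one number.
rng = np.random.default_rng(2)
n = 100
x = np.linspace(0.0, 1.0, n)
dmu = x                              # d mu / d theta
Cinv = np.eye(n)                     # inverse noise covariance (toy)
theta_true = 2.0
d = theta_true * x + rng.normal(size=n)

F = dmu @ Cinv @ dmu                 # Fisher information (scalar here)
mu0 = np.zeros(n)                    # expansion point theta_0 = 0
t = (dmu @ Cinv @ (d - mu0)) / F     # compressed summary ~ theta estimate
print(round(float(t), 2))
```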

  3. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    Genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, whether unconstrained or constrained, uses an evolutionary process. A GA has several parameters, such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen so that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We also stress the need for such a preprocessor, both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature/character of the function/system, the search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes information that can be generated through any means, deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA without having/using the information/knowledge of the character of the system, we can consciously do a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
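
A tiny real-coded GA, to make the role of the tunable parameters concrete; the selection, crossover, and mutation choices here are illustrative, not the paper's:

```python
import random

def ga_minimize(f, bounds, pop_size=40, gens=60, cx=0.7, mut=0.1):
    """Minimal real-coded GA. The parameters pop_size, bounds, cx and mut
    are exactly the kind a preprocessor would choose per problem."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2 if random.random() < cx else a  # crossover
            if random.random() < mut:             # mutation
                child += random.gauss(0, 0.1 * (hi - lo))
            children.append(min(max(child, lo), hi))
        pop = parents + children                  # elitism: parents kept
    return min(pop, key=f)

random.seed(0)
best = ga_minimize(lambda x: (x - 3.0) ** 2, bounds=(-10, 10))
print(abs(best - 3.0) < 0.5)   # True: converges near the minimum
```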

  4. Dendritic Growth Morphologies in Al-Zn Alloys—Part II: Phase-Field Computations

    NASA Astrophysics Data System (ADS)

    Dantzig, J. A.; Di Napoli, Paolo; Friedli, J.; Rappaz, M.

    2013-12-01

    In Part I of this article, the role of the Zn content in the development of solidification microstructures in Al-Zn alloys was investigated experimentally using X-ray tomographic microscopy. The transition region between ⟨100⟩ dendrites found at low Zn content and ⟨110⟩ dendrites found at high Zn content was characterized by textured seaweed-type structures. This Dendrite Orientation Transition (DOT) was explained by the effect of the Zn content on the weak anisotropy of the solid-liquid interfacial energy of Al. In order to further support this interpretation and to elucidate the growth mechanisms of the complex structures that form in the DOT region, a detailed phase-field study exploring the anisotropy parameter space is presented in this paper. For equiaxed growth, our results essentially recapitulate those of Haxhimali et al.[1] in simulations for pure materials. We find distinct regions of the parameter space associated with ⟨100⟩ and ⟨110⟩ dendrites, separated by a region where hyperbranched dendrites are observed. In simulations of directional solidification, we find similar behavior at the extrema, but in this case, the anisotropy parameters corresponding to the hyperbranched region produce textured seaweeds. As noted in the experimental work reported in Part I, these structures are actually dendrites that prefer to grow misaligned with respect to the thermal gradient direction. We also show that in this region, the dendrites grow with a blunted tip that oscillates and splits, resulting in an oriented trunk that continuously emits side branches in other directions. We conclude by making a correlation between the alloy composition and surface energy anisotropy parameters.

  5. The swiss army knife of job submission tools: grid-control

    NASA Astrophysics Data System (ADS)

    Stober, F.; Fischer, M.; Schleper, P.; Stadie, H.; Garbers, C.; Lange, J.; Kovalchuk, N.

    2017-10-01

    grid-control is a lightweight and highly portable open source submission tool that supports all common workflows in high energy physics (HEP). It has been used by a sizeable number of HEP analyses to process tasks that sometimes consist of up to 100k jobs. grid-control is built around a powerful plugin and configuration system that allows users to easily specify all aspects of the desired workflow. Job submission to a wide range of local or remote batch systems or grid middleware is supported. Tasks can be conveniently specified through the parameter space that will be processed, which can consist of any number of variables and data sources with complex dependencies on each other. Dataset information is processed through a configurable pipeline of dataset filters, partition plugins and partition filters. The partition plugins can take the number of files, size of the work units, metadata or combinations thereof into account. All changes to the input datasets or variables are propagated through the processing pipeline and can transparently trigger adjustments to the parameter space and the job submission. While the core functionality is completely experiment independent, full integration with the CMS computing environment is provided by a small set of plugins.

  6. A Q-Band Free-Space Characterization of Carbon Nanotube Composites

    PubMed Central

    Hassan, Ahmed M.; Garboczi, Edward J.

    2016-01-01

    We present a free-space measurement technique for non-destructive non-contact electrical and dielectric characterization of nano-carbon composites in the Q-band frequency range of 30 GHz to 50 GHz. The experimental system and error correction model accurately reconstruct the conductivity of composite materials that are either thicker than the wave penetration depth, and therefore exhibit negligible microwave transmission (less than −40 dB), or thinner than the wave penetration depth and, therefore, exhibit significant microwave transmission. This error correction model implements a fixed wave propagation distance between antennas and corrects the complex scattering parameters of the specimen from two references, an air slab having geometrical propagation length equal to that of the specimen under test, and a metallic conductor, such as an aluminum plate. Experimental results were validated by reconstructing the relative dielectric permittivity of known dielectric materials and then used to determine the conductivity of nano-carbon composite laminates. This error correction model can simplify routine characterization of thin conducting laminates to just one measurement of scattering parameters, making the method attractive for research, development, and for quality control in the manufacturing environment. PMID:28057959

  7. Crystallization and preliminary X-ray diffraction analysis of hemextin A: a unique anticoagulant protein from Hemachatus haemachatus venom

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Yajnavalka; Kumar, Sundramurthy; Jobichen, Chacko

    2007-08-01

    Crystals of hemextin A, a three-finger toxin isolated and purified from African Ringhals cobra (H. haemachatus), are orthorhombic, space group P212121, with unit-cell parameters a = 49.27, b = 49.51, c = 57.87 Å, and diffract to 1.5 Å resolution. Hemextin A was isolated and purified from African Ringhals cobra (Hemachatus haemachatus). It is a three-finger toxin that specifically inhibits blood coagulation factor VIIa and clot formation and that also interacts with hemextin B to form a unique anticoagulant complex. Hemextin A was crystallized by the hanging-drop vapour-diffusion method by equilibration against 0.2 M ammonium acetate, 0.1 M sodium acetate trihydrate pH 4.6 and 30% PEG 4000 as the precipitating agent. The crystals belong to space group P212121, with unit-cell parameters a = 49.27, b = 49.51, c = 57.87 Å and two molecules in the asymmetric unit. They diffracted to 1.5 Å resolution at beamline X25 at BNL.

  8. Improving the realism of white matter numerical phantoms: a step towards a better understanding of the influence of structural disorders in diffusion MRI

    NASA Astrophysics Data System (ADS)

    Ginsburger, Kévin; Poupon, Fabrice; Beaujoin, Justine; Estournet, Delphine; Matuschke, Felix; Mangin, Jean-François; Axer, Markus; Poupon, Cyril

    2018-02-01

    White matter is composed of irregularly packed axons leading to a structural disorder in the extra-axonal space. Diffusion MRI experiments using oscillating gradient spin echo sequences have shown that the diffusivity transverse to axons in this extra-axonal space is dependent on the frequency of the employed sequence. In this study, we observe the same frequency-dependence using 3D simulations of the diffusion process in disordered media. We design a novel white matter numerical phantom generation algorithm which constructs biomimicking geometric configurations with few design parameters, and enables control of the level of disorder of the generated phantoms. The influence of various geometrical parameters present in white matter, such as global angular dispersion, tortuosity, presence of Ranvier nodes, and beading, on the extra-cellular perpendicular diffusivity frequency dependence was investigated by simulating the diffusion process in numerical phantoms of increasing complexity and fitting the resulting simulated diffusion MR signal attenuation with an adequate analytical model designed for trapezoidal OGSE sequences. This work suggests that angular dispersion and especially beading have non-negligible effects on this extracellular diffusion metric, which may be measured using standard OGSE DW-MRI clinical protocols.

  9. Thermal dark matter through the Dirac neutrino portal

    NASA Astrophysics Data System (ADS)

    Batell, Brian; Han, Tao; McKeen, David; Haghi, Barmak Shams Es

    2018-04-01

    We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.

  10. Excursion Processes Associated with Elliptic Combinatorics

    NASA Astrophysics Data System (ADS)

    Baba, Hiroya; Katori, Makoto

    2018-06-01

    Researching elliptic analogues for equalities and formulas is a new trend in enumerative combinatorics which has followed the previous trend of studying q-analogues. Recently Schlosser proposed a lattice path model in the square lattice with a family of totally elliptic weight-functions including several complex parameters and discussed an elliptic extension of the binomial theorem. In the present paper, we introduce a family of discrete-time excursion processes on Z starting from the origin and returning to the origin in a given time duration 2T associated with Schlosser's elliptic combinatorics. The processes are inhomogeneous both in space and time and hence expected to provide new models in non-equilibrium statistical mechanics. By numerical calculation we show that the maximum likelihood trajectories on the spatio-temporal plane of the elliptic excursion processes and of their reduced trigonometric versions are not straight lines in general but are nontrivially curved depending on parameters. We analyze asymptotic probability laws in the long-term limit T → ∞ for a simplified trigonometric version of the excursion process. Emergence of nontrivial curves of trajectories in a large scale of space and time from the elementary elliptic weight-functions exhibits a new aspect of elliptic combinatorics.

  11. Excursion Processes Associated with Elliptic Combinatorics

    NASA Astrophysics Data System (ADS)

    Baba, Hiroya; Katori, Makoto

    2018-04-01

    Researching elliptic analogues for equalities and formulas is a new trend in enumerative combinatorics which has followed the previous trend of studying q-analogues. Recently Schlosser proposed a lattice path model in the square lattice with a family of totally elliptic weight-functions including several complex parameters and discussed an elliptic extension of the binomial theorem. In the present paper, we introduce a family of discrete-time excursion processes on Z starting from the origin and returning to the origin in a given time duration 2T associated with Schlosser's elliptic combinatorics. The processes are inhomogeneous both in space and time and hence expected to provide new models in non-equilibrium statistical mechanics. By numerical calculation we show that the maximum likelihood trajectories on the spatio-temporal plane of the elliptic excursion processes and of their reduced trigonometric versions are not straight lines in general but are nontrivially curved depending on parameters. We analyze asymptotic probability laws in the long-term limit T → ∞ for a simplified trigonometric version of the excursion process. Emergence of nontrivial curves of trajectories in a large scale of space and time from the elementary elliptic weight-functions exhibits a new aspect of elliptic combinatorics.

  12. X-ray crystallographic characterization of rhesus macaque MHC Mamu-A*02 complexed with an immunodominant SIV-Gag nonapeptide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Youjun; Graduate School, Chinese Academy of Sciences, Beijing; Qi, Jianxun

    2006-01-01

    X-ray crystallographic characterization of rhesus macaque MHC Mamu-A*02 complexed with an immunodominant SIV-Gag nonapeptide. Simian immunodeficiency virus (SIV) in the rhesus macaque is regarded as a classic animal model, playing a crucial role in HIV vaccine strategies and therapeutics by characterizing various cytotoxic T-lymphocyte (CTL) responses in macaque monkeys. However, the availability of well documented structural reports focusing on rhesus macaque major histocompatibility complex class I (MHC I) molecules remains extremely limited. Here, a complex of the rhesus macaque MHC I molecule (Mamu-A*02) with human β2m and an immunodominant SIV-Gag nonapeptide, GESNLKSLY (GY9), has been crystallized. The crystal diffracts X-rays to 2.7 Å resolution and belongs to space group C2, with unit-cell parameters a = 124.11, b = 110.45, c = 100.06 Å, and contains two molecules in the asymmetric unit. The availability of the structure, which is being solved by molecular replacement, will provide new insights into rhesus macaque MHC I (Mamu-A*02) presenting pathogenic SIV peptides.

  13. Ultrasound assisted synthesis, characterization and electrochemical study of a tetradentate oxovanadium diazomethine complex

    NASA Astrophysics Data System (ADS)

    Merzougui, Moufida; Ouari, Kamel; Weiss, Jean

    2016-09-01

    The oxovanadium(IV) complex "VOL" of a tetradentate Schiff base ligand, derived from the condensation of diaminoethane and 2-hydroxy-1-naphthaldehyde, was efficiently prepared via ultrasound irradiation and the template effect of VO(acac)2. The resulting product was characterized by elemental analysis, infrared and electronic absorption spectroscopy, and molar conductance measurement. Single-crystal X-ray structure analysis showed that the complex is a monomeric, five-coordinate species with a distorted square-pyramidal geometry. It crystallizes in the monoclinic system, space group P21/c, with unit-cell parameters a = 8.3960 (5) Å, b = 12.5533 (8) Å, c = 18.7804 (11) Å, α = γ = 90° and β = 104.843 (2)°. Cyclic voltammetry of the complex, carried out on a glassy carbon (GC) electrode in DMF, showed a reversible voltammogram response in the potential range 0.15-0.60 V involving a single-electron V(V)/V(IV) redox wave; the diffusion coefficient was determined using a GC rotating disk electrode. The Levich plot, Ilim = f(ω^(1/2)), was used to calculate the diffusion-convection controlled currents.

  14. Validating an Air Traffic Management Concept of Operation Using Statistical Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning; Davies, Misty Dawn

    2013-01-01

    Validating a concept of operation for a complex, safety-critical system (like the National Airspace System) is challenging because of the high dimensionality of the controllable parameters and the infinite number of states of the system. In this paper, we use statistical modeling techniques to explore the behavior of a conflict detection and resolution algorithm designed for the terminal airspace. These techniques predict the robustness of the system simulation to both nominal and off-nominal behaviors within the overall airspace. They also can be used to evaluate the output of the simulation against recorded airspace data. Additionally, the techniques carry with them a mathematical value of the worth of each prediction - a statistical uncertainty for any robustness estimate. Uncertainty Quantification (UQ) is the process of quantitative characterization and ultimately reduction of uncertainties in complex systems. UQ is important for understanding the influence of uncertainties on the behavior of a system and therefore is valuable for design, analysis, and verification and validation. In this paper, we apply advanced statistical modeling methodologies and techniques to an advanced air traffic management system, namely the Terminal Tactical Separation Assured Flight Environment (T-TSAFE). We show initial results for a parameter analysis and safety boundary (envelope) detection in the high-dimensional parameter space. For our boundary analysis, we developed a new sequential approach based upon the design of computer experiments, allowing us to incorporate knowledge from domain experts into our modeling and to determine the most likely boundary shapes and their parameters. We carried out the analysis on system parameters and describe an initial approach that will allow us to include time-series inputs, such as the radar track data, into the analysis.

  15. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
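    A minimal sketch of the data-driven category: a cheap quadratic response surface is fitted by least squares to a few runs of an "expensive" model and then evaluated in its place. The model below is a stand-in function chosen for illustration, not a real groundwater code.

```python
# Data-driven surrogate sketch: fit a quadratic response surface by least
# squares to a handful of runs of an "expensive" model, then evaluate the
# cheap surrogate instead of the original.

def expensive_model(x):
    return (x - 1.0) ** 2 + 2.0      # placeholder for a long-running simulation

# Design of experiments: a few samples of the parameter space
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [expensive_model(x) for x in xs]

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ c0 + c1*x + c2*x^2 via the normal equations."""
    n = 3
    A = [[sum(x ** (j + k) for x in xs) for k in range(n)] for j in range(n)]
    b = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n)]
    # Solve A c = b by Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

c0, c1, c2 = fit_quadratic(xs, ys)

def surrogate(x):
    return c0 + c1 * x + c2 * x * x
```

    Because the stand-in model happens to be quadratic, the surrogate reproduces it essentially exactly; for a real groundwater model the emulation error would have to be checked on held-out runs.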

  16. Hands-on parameter search for neural simulations by a MIDI-controller.

    PubMed

    Eichner, Hubert; Borst, Alexander

    2011-01-01

    Computational neuroscientists frequently encounter the challenge of parameter fitting--exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems.
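    The MIDI I/O layer itself depends on hardware and a driver library and is omitted here; the sketch below shows only the knob-to-parameter mapping step. The value ranges, the log-scale option, and the example conductance parameter are illustrative assumptions, not the authors' implementation.

```python
import math

# Map a 7-bit MIDI Control Change value (0-127, per the MIDI 1.0 spec) onto a
# model parameter range, linearly or logarithmically. The simulation would call
# this on every incoming CC message and re-run the model with the new value.

def knob_to_param(cc_value, lo, hi, log_scale=False):
    """Map a MIDI controller value in 0..127 onto the interval [lo, hi]."""
    frac = max(0, min(127, cc_value)) / 127.0
    if log_scale:
        # useful for parameters spanning several orders of magnitude
        return lo * (hi / lo) ** frac
    return lo + frac * (hi - lo)

# e.g. a maximal conductance in a Hodgkin-Huxley-type model (hypothetical range)
g_na = knob_to_param(64, 1e-3, 1e2, log_scale=True)
```

    Mapping each knob through a function like this keeps the physical 0-127 controller range decoupled from the model's parameter units.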

  17. Hands-On Parameter Search for Neural Simulations by a MIDI-Controller

    PubMed Central

    Eichner, Hubert; Borst, Alexander

    2011-01-01

    Computational neuroscientists frequently encounter the challenge of parameter fitting – exploring a usually high-dimensional variable space to find a parameter set that reproduces an experimental data set. One common approach is using automated search algorithms such as gradient descent or genetic algorithms. However, these approaches suffer from several shortcomings related to their lack of understanding of the underlying question, such as defining a suitable error function or getting stuck in local minima. Another widespread approach is manual parameter fitting using a keyboard or a mouse, evaluating different parameter sets following the user's intuition. However, this process is often cumbersome and time-intensive. Here, we present a new method for manual parameter fitting. A MIDI controller provides input to the simulation software, where model parameters are then tuned according to the knob and slider positions on the device. The model is immediately updated on every parameter change, continuously plotting the latest results. Given reasonably short simulation times of less than one second, we find this method to be highly efficient in quickly determining good parameter sets. Our approach bears a close resemblance to tuning the sound of an analog synthesizer, giving the user a very good intuition of the problem at hand, such as immediate feedback on if and how results are affected by specific parameter changes. In addition to being used in research, our approach should be an ideal teaching tool, allowing students to interactively explore complex models such as Hodgkin-Huxley or dynamical systems. PMID:22066027

  18. Effects of two successive parity-invariant point interactions on one-dimensional quantum transmission: Resonance conditions for the parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konno, Kohkichi, E-mail: kohkichi@tomakomai-ct.ac.jp; Nagasawa, Tomoaki, E-mail: nagasawa@tomakomai-ct.ac.jp; Takahashi, Rohta, E-mail: takahashi@tomakomai-ct.ac.jp

    We consider the scattering of a quantum particle by two independent, successive parity-invariant point interactions in one dimension. The parameter space for the two point interactions is given by the direct product of two tori, which is described by four parameters. By investigating the effects of the two point interactions on the transmission probability of a plane wave, we obtain the conditions on the parameter space under which perfect resonant transmission occurs. The resonance conditions are found to be described by symmetric and anti-symmetric relations between the parameters.
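    The general parity-invariant point interaction lives on the torus parameter space described above; as a minimal sketch, the plane-wave transmission probability can be computed for the δ-function special case by multiplying 2×2 transfer matrices (units ħ = m = 1; all numerical values are illustrative, and this is not the paper's general parametrization).

```python
import cmath

# Transmission through two delta-function point interactions (a special case
# of the general parity-invariant family) via 2x2 transfer matrices.
# Units: hbar = m = 1, so the wavenumber is k = sqrt(2E).

def delta_matrix(gamma, a, k):
    """Transfer matrix of V(x) = gamma * delta(x - a) at wavenumber k."""
    g = gamma / (1j * k)
    return [[1 + g, g * cmath.exp(-2j * k * a)],
            [-g * cmath.exp(2j * k * a), 1 - g]]

def matmul(M, N):
    return [[M[0][0]*N[0][0] + M[0][1]*N[1][0], M[0][0]*N[0][1] + M[0][1]*N[1][1]],
            [M[1][0]*N[0][0] + M[1][1]*N[1][0], M[1][0]*N[0][1] + M[1][1]*N[1][1]]]

def transmission(gamma1, gamma2, separation, k):
    # Compose the two interactions; for incidence from the left with no wave
    # incoming from the right, T = 1/|M22|^2 (det M = 1 for these matrices).
    M = matmul(delta_matrix(gamma2, separation, k), delta_matrix(gamma1, 0.0, k))
    return 1.0 / abs(M[1][1]) ** 2

T_free = transmission(0.0, 0.0, 1.0, 2.0)   # zero strength: perfect transmission
T = transmission(1.5, 1.5, 1.0, 2.0)        # generic point: T < 1
```

    Scanning `transmission` over the strengths and separation is a direct numerical way to locate the perfect-resonance submanifolds discussed in the abstract.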

  19. Mapping an operator's perception of a parameter space

    NASA Technical Reports Server (NTRS)

    Pew, R. W.; Jagacinski, R. J.

    1972-01-01

    Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.

  20. Intermittent turbulence in the heliosheath and the magnetosheath plasmas based on Voyager and THEMIS data

    NASA Astrophysics Data System (ADS)

    Macek, Wiesław M.; Wawrzaszek, Anna; Kucharuk, Beata

    2018-01-01

    Turbulence is a complex behavior that is ubiquitous in space, including the environments of the heliosphere and the magnetosphere. Our studies of solar wind turbulence, including the heliosheath and even the heliospheric boundaries, also beyond the ecliptic plane, have shown that turbulence is intermittent in the entire heliosphere. As is known, turbulence in space plasmas often exhibits substantial deviations from normal Gaussian distributions. Therefore, we analyze the fluctuations of plasma and magnetic field parameters also in the magnetosheath behind the Earth's bow shock. Based on THEMIS observations, we have already suggested that turbulence behind the quasi-perpendicular shock is more intermittent, with larger kurtosis, than that behind the quasi-parallel shock. Following this study, we would like to present a detailed analysis of intermittent anisotropic turbulence in the magnetosheath depending on various characteristics of the plasma behind the bow shock and now also near the magnetopause. In particular, for very high Alfvénic Mach numbers and high plasma beta we have clear non-Gaussian statistics in the directions perpendicular to the magnetic field. On the other hand, for directions parallel to this field the kurtosis is small and the plasma is close to equilibrium. However, the level of intermittency for the outgoing fluctuations seems to be similar to that for the ingoing fluctuations, which is consistent with approximate equipartition of energy between the oppositely propagating Alfvén waves. We hope that the difference in characteristic behavior of these fluctuations in various regions of space plasmas can help to detect some complex structures in space missions in the near future.

  1. Role of Green Spaces in Favorable Microclimate Creating in Urban Environment (Exemplified by Italian Cities)

    NASA Astrophysics Data System (ADS)

    Finaeva, O.

    2017-11-01

    The article presents a brief analysis of factors that influence the development of an urban green space system: territorial and climatic conditions, cultural and historical background, as well as the modern strategy of historic city development. The introduction defines the concepts of urban greening, green spaces and green space distribution. The environmental parameters influenced by green spaces are determined. By the example of Italian cities, the principles of urban greening system development are considered: the historical aspects of the formation of the urban greening system in Italian cities are analyzed, the role of green spaces in the formation of the urban environment structure and the creation of a favorable microclimate is determined, and a set of measures aimed at its improvement is highlighted. The modern principles of urban greening system development and their characteristic features are considered. Special attention is paid to the interrelation of architectural and green structures in the formation of a favorable microclimate and psychological comfort in the urban environment; various methods of greening are considered by the example of existing architectural complexes, depending on the climate of the area and the landscape features. Examples of the choice of plants and the application of compositional techniques are given. The results present the basic principles of developing an urban green space system. The conclusion summarizes the techniques aimed at microclimate improvement in the urban environment.

  2. Balancing novelty with confined chemical space in modern drug discovery.

    PubMed

    Medina-Franco, José L; Martinez-Mayorga, Karina; Meurice, Nathalie

    2014-02-01

    The concept of chemical space has broad applications in drug discovery. In response to the needs of drug discovery campaigns, different approaches are followed to efficiently populate, mine and select relevant chemical spaces that overlap with biologically relevant chemical spaces. This paper reviews major trends in current drug discovery and their impact on the mining and population of chemical space. We also survey different approaches to developing screening libraries with confined chemical spaces balancing physicochemical properties. In this context, the confinement is guided by criteria that can be divided into two broad categories: i) library design focused on a relevant therapeutic target or disease and ii) library design focused on the chemistry or a desired molecular function. The design and development of chemical libraries should be associated with the specific purpose of the library and the project goals. The high complexity of drug discovery and the inherent imperfection of individual experimental and computational technologies prompt the integration of complementary library design and screening approaches to expedite the identification of new and better drugs. Library design approaches including diversity-oriented synthesis, biology-oriented synthesis or combinatorial library design, to name a few, and the design of focused libraries driven by target/disease, chemical structure or molecular function are more efficient if they are guided by multi-parameter optimization. In this context, consideration of pharmaceutically relevant properties is essential for balancing novelty with chemical space in drug discovery.

  3. Improved digital filters for evaluating Fourier and Hankel transform integrals

    USGS Publications Warehouse

    Anderson, Walter L.

    1975-01-01

    New algorithms are described for evaluating Fourier (cosine, sine) and Hankel (J0, J1) transform integrals by means of digital filters. The filters have been designed with extended lengths so that a variable convolution operation can be applied to a large class of integral transforms having the same system transfer function. A lagged-convolution method is also presented to significantly decrease the computation time when computing a series of like transforms over a parameter set spaced the same as the filters. Accuracy of the new filters is comparable to Gaussian integration, provided moderate parameter ranges and well-behaved kernel functions are used. A collection of Fortran IV subprograms is included for both real and complex functions for each filter type. The algorithms have been successfully used in geophysical applications containing a wide variety of integral transforms.
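    The tabulated filter weights themselves are given in the report and are not reproduced here; as a point of reference, the sketch below evaluates a Hankel J0 transform by brute-force trapezoidal quadrature and checks it against a known closed-form pair — the slow baseline that such digital filters are designed to replace.

```python
import math

# Evaluate f(r) = Int_0^inf K(lam) * J0(lam * r) dlam by direct quadrature,
# validated with the closed-form pair K(lam) = exp(-a*lam)
#   =>  f(r) = 1 / sqrt(a^2 + r^2).

def j0(x, n=400):
    # Bessel J0 from its integral representation (trapezoid rule on [0, pi])
    h = math.pi / n
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    for k in range(1, n):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

def hankel_j0(kernel, r, lam_max=30.0, n=1500):
    # Trapezoid rule; lam_max chosen so the exponential tail is negligible
    h = lam_max / n
    s = 0.5 * (kernel(0.0) * j0(0.0) + kernel(lam_max) * j0(lam_max * r))
    for k in range(1, n):
        lam = k * h
        s += kernel(lam) * j0(lam * r)
    return s * h

a, r = 1.0, 2.0
approx = hankel_j0(lambda lam: math.exp(-a * lam), r)
exact = 1.0 / math.sqrt(a * a + r * r)
```

    A digital filter replaces this inner loop with a short weighted sum over logarithmically spaced abscissae, which is where the large speedup over quadrature comes from.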

  4. On estimating the phase of periodic waveform in additive Gaussian noise, part 2

    NASA Astrophysics Data System (ADS)

    Rauch, L. L.

    1984-11-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.
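    For the case of known constant frequency and a flat prior on phase, the optimum estimator reduces to correlating the samples with quadrature references and taking the angle. The discrete-time sketch below uses illustrative signal values and record length, not parameters from the paper.

```python
import math

# ML/MAP phase estimate for s(t) = A*cos(omega*t + phi) in Gaussian noise with
# known frequency and a flat phase prior: correlate against quadrature
# references and take the angle (the classic I/Q estimator).

N = 1000
omega = 2.0 * math.pi * 37 / N        # an integer number of cycles per record
phi_true = 0.7

# Noise-free samples for clarity; add Gaussian noise to see the estimate degrade
y = [math.cos(omega * n + phi_true) for n in range(N)]

I = sum(yn * math.cos(omega * n) for n, yn in enumerate(y))   # in-phase
Q = sum(yn * math.sin(omega * n) for n, yn in enumerate(y))   # quadrature

phi_hat = math.atan2(-Q, I)
```

    Since I ≈ (N/2)cos(phi) and Q ≈ -(N/2)sin(phi), the atan2 recovers the phase exactly in the noiseless case; with additive Gaussian noise the estimate acquires the variance predicted by the signal-space analysis.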

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehgal, Ray M.; Maroudas, Dimitrios, E-mail: maroudas@ecs.umass.edu; Ford, David M., E-mail: ford@ecs.umass.edu

    We have developed a coarse-grained description of the phase behavior of the isolated 38-atom Lennard-Jones cluster (LJ38). The model captures both the solid-solid polymorphic transitions at low temperatures and the complex cluster breakup and melting transitions at higher temperatures. For this coarse model development, we employ the manifold learning technique of diffusion mapping. The outcome of the diffusion mapping analysis over a broad temperature range indicates that two order parameters are sufficient to describe the cluster's phase behavior; we have chosen two such appropriate order parameters that are metrics of condensation and overall crystallinity. In this well-justified coarse-variable space, we calculate the cluster's free energy landscape (FEL) as a function of temperature, employing Monte Carlo umbrella sampling. These FELs are used to quantify the phase behavior and onsets of phase transitions of the LJ38 cluster.
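
    The diffusion-mapping idea used above can be illustrated on a toy data set (not the LJ38 configurations): points along a circular arc have one intrinsic degree of freedom, and the leading nontrivial diffusion coordinate should recover it. The kernel bandwidth and the synthetic data are assumptions of this sketch.

```python
import numpy as np

# Toy diffusion map: the first nontrivial eigenvector of the normalized
# affinity matrix serves as a data-driven order parameter, analogous to the
# condensation/crystallinity coordinates extracted for the cluster.
n = 200
s = np.linspace(0.0, np.pi, n)                  # intrinsic arc parameter
pts = np.column_stack([np.cos(s), np.sin(s)])   # 2-D ambient coordinates

d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.05)                          # Gaussian affinity (bandwidth assumed)
deg = K.sum(axis=1)
A = K / np.sqrt(np.outer(deg, deg))             # symmetric normalization
w, V = np.linalg.eigh(A)                        # eigenvalues in ascending order
psi = V[:, -2] / np.sqrt(deg)                   # first nontrivial diffusion coordinate

corr = np.corrcoef(psi, s)[0, 1]
print(abs(corr))   # near 1: the coordinate orders points along the arc
```

    The top eigenvector of the normalized matrix is trivial (it encodes the stationary distribution); the second one varies monotonically along the arc, which is why its correlation with the true parameter is high.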

  6. On Estimating the Phase of Periodic Waveform in Additive Gaussian Noise, Part 2

    NASA Technical Reports Server (NTRS)

    Rauch, L. L.

    1984-01-01

    Motivated by advances in signal processing technology that support more complex algorithms, a new look is taken at the problem of estimating the phase and other parameters of a periodic waveform in additive Gaussian noise. The general problem was introduced and the maximum a posteriori probability criterion with signal space interpretation was used to obtain the structures of optimum and some suboptimum phase estimators for known constant frequency and unknown constant phase with an a priori distribution. Optimal algorithms are obtained for some cases where the frequency is a parameterized function of time with the unknown parameters and phase having a joint a priori distribution. In the last section, the intrinsic and extrinsic geometry of hypersurfaces is introduced to provide insight to the estimation problem for the small noise and large noise cases.

  7. Parallel stochastic simulation of macroscopic calcium currents.

    PubMed

    González-Vélez, Virginia; González-Vélez, Horacio

    2007-06-01

    This work introduces MACACO, a macroscopic calcium currents simulator. It provides a parameter-sweep framework which computes macroscopic Ca(2+) currents from the individual aggregation of unitary currents, using a stochastic model for L-type Ca(2+) channels. MACACO uses a simplified 3-state Markov model to simulate the response of each Ca(2+) channel to different voltage inputs to the cell. In order to provide an accurate systematic view for the stochastic nature of the calcium channels, MACACO is composed of an experiment generator, a central simulation engine and a post-processing script component. Due to the computational complexity of the problem and the dimensions of the parameter space, the MACACO simulation engine employs a grid-enabled task farm. Having been designed as a computational biology tool, MACACO heavily borrows from the way cell physiologists conduct and report their experimental work.
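
    A minimal sketch of the aggregation idea (not MACACO itself; the states, rate constants, unitary current, and time step below are all invented for illustration): each channel is advanced as an independent 3-state Markov chain, and the macroscopic current is the unitary current times the number of open channels.

```python
import numpy as np

# Three states: C (closed) <-> O (open) -> I (inactivated) -> C.
rng = np.random.default_rng(1)
C, O, I = 0, 1, 2
k_co, k_oc, k_oi, k_ic = 2.0, 1.0, 0.5, 0.2   # transitions per ms (assumed)
dt, n_steps, n_chan = 0.01, 5000, 1000         # 50 ms, 1000 channels
i_unit = -0.3                                  # unitary current in pA (assumed)

state = np.zeros(n_chan, dtype=int)            # all channels start closed
open_frac = np.empty(n_steps)
for step in range(n_steps):
    u = rng.random(n_chan)
    c, o, i = state == C, state == O, state == I   # masks from the same snapshot
    state = np.where(c & (u < k_co * dt), O, state)
    state = np.where(o & (u < k_oi * dt), I,
             np.where(o & (u < (k_oi + k_oc) * dt), C, state))
    state = np.where(i & (u < k_ic * dt), C, state)
    open_frac[step] = np.mean(state == O)

p_open = open_frac[n_steps // 2:].mean()       # steady-state open probability
print(p_open, p_open * n_chan * i_unit)        # open fraction and macroscopic current
```

    For these rates the stationary open probability is 4/17 (about 0.235), which the simulated time average approaches; sweeping the rates over a grid of voltages is what turns this into the parameter-sweep problem the task farm addresses.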

  8. Applying Numerical Relativity to Gravitational Wave Astronomy using LISA

    NASA Astrophysics Data System (ADS)

    McWilliams, Sean T.

    2007-12-01

    We present recently calculated waveforms from numerical relativity and their application to the search for and precision measurement of black hole binary coalescences using LISA. In particular, we focus on the advances made in moving beyond the equal mass, nonspinning case into other regions of parameter space, particularly the case of nonspinning holes with ever-increasing mass ratios as the state of the art has progressed. Also, we investigate the potential contribution from the merger portion of the waveform to measurement uncertainties of the binary's parameters. Until now, only the inspiral has been investigated due to the lack of availability of mergers and the increased complexity required in moving beyond the low frequency approximation of the interferometer, which is necessary when mergers are included. We discuss the subtleties of the problem, and present preliminary results.

  9. Noncommutative complex structures on quantum homogeneous spaces

    NASA Astrophysics Data System (ADS)

    Ó Buachalla, Réamonn

    2016-01-01

    A new framework for noncommutative complex geometry on quantum homogeneous spaces is introduced. The main ingredients used are covariant differential calculi and Takeuchi's categorical equivalence for quantum homogeneous spaces. A number of basic results are established, producing a simple set of necessary and sufficient conditions for noncommutative complex structures to exist. Throughout, the framework is applied to the quantum projective spaces endowed with the Heckenberger-Kolb calculus.

  10. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis.

    PubMed

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often become trapped in a local maximum.
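
    The local-maximum pitfall can be shown on a toy one-dimensional performance landscape (the two bumps and their heights are invented): plain hill climbing from a bad start settles on the inferior peak, while restarting from several points in parameter space recovers the global one.

```python
import numpy as np

# Toy segmentation-performance landscape with a local maximum near x=2
# (height 0.6) and the global maximum near x=7 (height 1.0).
def performance(x):
    return 0.6 * np.exp(-(x - 2.0) ** 2) + 1.0 * np.exp(-(x - 7.0) ** 2)

def hill_climb(x0, step=0.1, lo=0.0, hi=10.0):
    # Deterministic hill climbing: move to the better neighbor until stuck.
    x = x0
    while True:
        candidates = [max(lo, x - step), x, min(hi, x + step)]
        best = max(candidates, key=performance)
        if best == x:
            return x, performance(x)
        x = best

x_local, f_local = hill_climb(1.0)          # climbs into the inferior x=2 bump
restarts = [hill_climb(x0) for x0 in (0.0, 2.5, 5.0, 7.5, 10.0)]
x_best, f_best = max(restarts, key=lambda r: r[1])
print(f_local, f_best)                      # ~0.6 vs ~1.0
```

    Genetic algorithms and other population-based strategies mentioned above avoid the trap in the same spirit: by maintaining multiple candidate settings instead of a single greedy trajectory.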

  11. Approaches to automatic parameter fitting in a microscopy image segmentation pipeline: An exploratory parameter space analysis

    PubMed Central

    Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas

    2013-01-01

    Introduction: Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis remain open. Methods: In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. Results: This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. Conclusion: The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as the parameter spaces can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often become trapped in a local maximum. PMID:23766941

  12. Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.

    PubMed

    Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue

    2018-02-01

    Generating chaotic maps with user-specified dynamics is a challenging task. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing a similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with the desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method, and to verify the inherent relation between the LEs and the Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computational complexity, and their outputs demonstrate strong randomness and complex ergodicity.
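
    The LE/Cat-matrix relation the paper exploits is easy to check in the classic 2-D case (this is the textbook Arnold Cat map, not the paper's n-D construction): for a Cat map x_{k+1} = A x_k (mod 1) with an integer unimodular matrix A, the Lyapunov exponents are exactly the logarithms of the eigenvalue moduli of A.

```python
import numpy as np

# Arnold Cat matrix: integer entries, det = 1 (area-preserving torus map).
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
# LEs of the induced map are ln|eigenvalues of A|; for this A the
# eigenvalues are (3 +/- sqrt(5))/2, so one LE is positive (chaos).
les = np.sort(np.log(np.abs(np.linalg.eigvals(A))))[::-1]
print(les)   # approximately [0.9624, -0.9624]
```

    The exponents sum to zero because det A = 1; obtaining several positive LEs (hyperchaos) requires higher-dimensional Cat matrices with more than one eigenvalue outside the unit circle, which is what the paper's construction controls.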

  13. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data.

    PubMed

    Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon

    2015-01-01

    Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and these successful findings have substantially improved our understanding of complex diseases. In spite of these successes, however, most of the genetic effects for many complex diseases were found to be very small, which has been a major hurdle to building disease prediction models. Recently, many statistical methods based on penalized regression have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods for at least the diseases under consideration.
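
    The "large P, small N" mechanism can be seen directly with synthetic data (dimensions, effect sizes, and penalty values below are illustrative): with p = 100 predictors and n = 20 samples the normal matrix X'X is singular, so ordinary least squares has no unique solution, while the ridge penalty restores invertibility and shrinks the estimate. (LASSO behaves similarly but additionally zeroes coefficients.)

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 100
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0                        # only a few predictors carry real effects
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam):
    # Closed-form ridge solution: beta = (X'X + lam*I)^{-1} X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

print(np.linalg.matrix_rank(X.T @ X))      # 20 < 100: OLS normal equations singular
b_small, b_large = ridge(X, y, 0.1), ridge(X, y, 10.0)
print(np.linalg.norm(b_small), np.linalg.norm(b_large))  # heavier penalty, more shrinkage
```

    The norm of the ridge solution is non-increasing in the penalty, which is the "limiting the space of parameters" the abstract refers to.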

  14. Synthesis, characterization and biological assay of Salicylaldehyde Schiff base Cu(II) complexes and their precursors

    NASA Astrophysics Data System (ADS)

    Iftikhar, Bushra; Javed, Kanwal; Khan, Muhammad Saif Ullah; Akhter, Zareen; Mirza, Bushra; Mckee, Vickie

    2018-03-01

    Three new Schiff base ligands were synthesized by the reaction of Salicylaldehyde with semi-aromatic diamines, prepared by the reduction of corresponding dinitro-compounds, and were further used for the formation of complexes with the Cu(II) metal ion. The structural features of the synthesized compounds were confirmed by their physical properties and infrared, electronic and NMR spectroscopic techniques. The studies revealed that the synthesized Schiff bases existed as tetradentate ligands and bonded to the metal ion through the phenolic oxygen and azomethine nitrogen. One of the dinitro precursors was also analyzed by single-crystal X-ray crystallography, which showed that it crystallizes in the monoclinic system with space group P2/n. The thermal behavior of the Cu(II) complexes was determined by thermogravimetric analysis (TGA) and kinetic parameters were evaluated from the data. Schiff base ligands, their precursors and metal complexes were also screened for antibacterial, antifungal, antitumor, Brine shrimp lethality, DPPH free radical scavenging and DNA damage assays. The results of these analyses indicated the substantial potential of the synthesized Schiff bases, their precursors and Cu(II) complexes in the biological field as future drugs.

  15. Overview of Solar Radio Bursts and their Sources

    NASA Astrophysics Data System (ADS)

    Golla, Thejappa; MacDowall, Robert J.

    2018-06-01

    Properties of radio bursts emitted by the Sun at frequencies below tens of MHz are reviewed. In this frequency range, the most prominent radio emissions are those of solar type II, complex type III and solar type IV radio bursts, excited probably by the energetic electron populations accelerated in completely different environments: (1) type II bursts are due to non-relativistic electrons accelerated by the CME driven interplanetary shocks, (2) complex type III bursts are due to near-relativistic electrons accelerated either by the solar flare reconnection process or by the SEP shocks, and (3) type IV bursts are due to relativistic electrons, trapped in the post-eruption arcades behind CMEs; these relativistic electrons probably are accelerated by the continued reconnection processes occurring beneath the CME. These radio bursts, which can serve as the natural plasma probes traversing the heliosphere by providing information about various crucial space plasma parameters, are also an ideal instrument for investigating acceleration mechanisms responsible for the high energy particles. The rich collection of valuable high quality radio and high time resolution in situ wave data from the WAVES experiments of the STEREO A, STEREO B and WIND spacecraft has provided an unique opportunity to study these different radio phenomena and understand the complex physics behind their excitation. We have developed Monte Carlo simulation techniques to estimate the propagation effects on the observed characteristics of these low frequency radio bursts. We will present some of the new results and describe how one can use these radio burst observations for space weather studies. We will also describe some of the non-linear plasma processes detected in the source regions of both solar type III and type II radio bursts. The analysis and simulation techniques used in these studies will be of immense use for future space based radio observations.

  16. Local spatial frequency analysis for computer vision

    NASA Technical Reports Server (NTRS)

    Krumm, John; Shafer, Steven A.

    1990-01-01

    A sense of vision is a prerequisite for a robot to function in an unstructured environment. However, real-world scenes contain many interacting phenomena that lead to complex images which are difficult to interpret automatically. Typical computer vision research proceeds by analyzing various effects in isolation (e.g., shading, texture, stereo, defocus), usually on images devoid of realistic complicating factors. This leads to specialized algorithms which fail on real-world images. Part of this failure is due to the dichotomy of useful representations for these phenomena. Some effects are best described in the spatial domain, while others are more naturally expressed in frequency. In order to resolve this dichotomy, we present the combined space/frequency representation which, for each point in an image, shows the spatial frequencies at that point. Within this common representation, we develop a set of simple, natural theories describing phenomena such as texture, shape, aliasing and lens parameters. We show these theories lead to algorithms for shape from texture and for dealiasing image data. The space/frequency representation should be a key aid in untangling the complex interaction of phenomena in images, allowing automatic understanding of real-world scenes.
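
    The space/frequency idea can be sketched in one dimension (a 1-D chirp standing in for an image row; the window length and chirp rate are illustrative): a signal whose frequency rises with position looks uniform to a global Fourier transform, but a windowed local spectrum reveals the position-dependent frequency.

```python
import numpy as np

n = 1024
x = np.arange(n)
# Chirp: instantaneous frequency grows linearly with position x.
signal = np.cos(2 * np.pi * (0.01 + 0.00005 * x) * x)

def local_peak_freq(sig, center, width=128):
    # Peak frequency bin of the windowed spectrum around `center`.
    seg = sig[center - width // 2 : center + width // 2]
    seg = seg * np.hanning(width)            # taper to reduce spectral leakage
    spec = np.abs(np.fft.rfft(seg))
    return np.argmax(spec[1:]) + 1           # skip the DC bin

f_early = local_peak_freq(signal, 128)
f_late = local_peak_freq(signal, 896)
print(f_early, f_late)   # the late window peaks at a higher frequency bin
```

    In the texture and defocus theories described above, the same local spectrum is computed at every image point, and its variation across space carries the shape and lens information.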

  17. Phase Transition Behavior in a Neutral Evolution Model

    NASA Astrophysics Data System (ADS)

    King, Dawn; Scott, Adam; Maric, Nevena; Bahar, Sonya

    2014-03-01

    The complexity of interactions among individuals and between individuals and the environment makes agent-based modeling ideal for studying emergent speciation. This is a dynamically complex problem that can be characterized via the critical behavior of a continuous phase transition. Concomitant with the main tenets of natural selection, we allow organisms to reproduce, mutate, and die within a neutral phenotype space. Previous work has shown phase transition behavior in an assortative mating model with variable fitness landscapes as the maximum mutation size (μ) was varied (Dees and Bahar, 2010). Similarly, this behavior was recently presented in the work of Scott et al. (2013), even on a completely neutral landscape, for bacterial-like fission as well as for assortative mating. Here we present another neutral model to investigate the 'critical' phase transition behavior of three mating types - assortative, bacterial, and random - in a phenotype space as a function of the percentage of random death. Results show two types of phase transitions occurring for the parameters of the population size and the number of clusters (an analogue of species), indicating different evolutionary dynamics for system survival and clustering. This research was supported by funding from: University of Missouri Research Board and James S. McDonnell Foundation.

  18. Structure, vibrations and quantum chemical investigations of hydrogen bonded complex of bis(1-hydroxy-2-methylpropan-2-aminium)selenate

    NASA Astrophysics Data System (ADS)

    Thirunarayanan, S.; Arjunan, V.; Marchewka, M. K.; Mohan, S.

    2017-04-01

    The hydrogen bonded molecular complex bis(1-hydroxy-2-methylpropan-2-aminium)selenate (C8H24N2O6Se) has been prepared by the reaction of 2-amino-2-methylpropanol and selenic acid. The X-ray diffraction analysis revealed that the intermolecular proton transfer from selenic acid (SeO4H2) to 2-amino-2-methylpropanol results in the formation of bis(1-hydroxy-2-methylpropan-2-aminium)selenate (HMPAS) salt and the fragments are connected through H-bonding and ion pairing. The N-H⋯O and O-H⋯O interactions between 2-amino-2-methylpropanol and selenic acid determine the supramolecular arrangement in three-dimensional space. The salt crystallises in the space group P121/n1 of the monoclinic system. The complete vibrational assignments of HMPAS have been performed by FTIR and FT-Raman spectroscopy. The experimental data are correlated with the structural properties, namely the energy, thermodynamic parameters, atomic charges, hybridization concepts and vibrational frequencies, determined by quantum chemical studies performed with the B3LYP method using 6-311++G*, 6-31+G* and 6-31G** basis sets.

  19. Configuration and localization of the nipple-areola complex in men.

    PubMed

    Beer, G M; Budi, S; Seifert, B; Morgenthaler, W; Infanger, M; Meyer, V E

    2001-12-01

    The causes of bilateral absence of the nipple-areola complex in men are seldom congenital, but attributable rather to destruction as a result of trauma, or after mastectomy in female-to-male transsexuals and in male breast cancer, or after the correction of extreme bilateral gynecomastia. Such a bilateral loss becomes a major reconstructive challenge with respect to the configuration and localization of a new nipple-areola complex. Because there is very little information available in the literature, we carried out a cross-sectional study on the configuration and localization of the nipple-areola complex in men. A total of 100 healthy men aged 20 to 36 years were examined under standardized conditions. The first part of the study dealt with the configuration of the nipple-areola complex (dimensions, round or oval shape). The second part concentrated on the localization of the complex on the thoracic wall with respect to anatomic landmarks and in correlation to various parameters such as weight and height of the body, circumference of the thorax, length of sternum, and position in the intercostal space. Of the 100 subjects examined, 91 had an oval and seven had a round nipple-areola complex. An asymmetry between the right and the left side was found in two cases. The mean ratio of the horizontal/vertical diameter of an oval nipple-areola complex was 27:20 mm and the mean diameter for a round nipple-areola complex was 23 mm. The center of the nipple-areola complex was in the fourth intercostal space in 75 percent and in the fifth intercostal space in 23 percent of the subjects. To localize the nipple-areola complex on the thoracic wall de novo, at least two reproducible measurements proved to be necessary, composed of a horizontal line (distance from the midsternal line to the nipple = A) and a vertical line (distance from the sternal notch to the intersection with line A = B).
The closest correlation for the horizontal distance A was given by the circumference of the thorax: A = 2.4 cm + [0.09 x circumference of thorax (cm)], (r = 0.68). The best correlation to calculate the vertical distance B was found using the length of the sternum and the circumference of the thorax: B = 1.2 cm + [0.28 x length of sternum (cm)] + [0.1 x circumference of thorax (cm)], (R = 0.50). In cases of bilateral absence, we recommend creating an oval nipple-areola complex in men. The appropriate localization can be calculated by means of two simple equations derived from the circumference of the thorax and the length of the sternum.
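
    A worked example of the two regression equations above (the measurements are hypothetical, in cm):

```python
# Hypothetical patient measurements (cm).
thorax_circumference = 95.0
sternum_length = 20.0

# Horizontal distance from the midsternal line to the nipple:
A = 2.4 + 0.09 * thorax_circumference
# Vertical distance from the sternal notch:
B = 1.2 + 0.28 * sternum_length + 0.1 * thorax_circumference

print(A, B)   # approximately 10.95 cm and 16.3 cm
```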

  20. Dissipative quantum trajectories in complex space: Damped harmonic oscillator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw

    Dissipative quantum trajectories in complex space are investigated in the framework of the logarithmic nonlinear Schrödinger equation. The logarithmic nonlinear Schrödinger equation provides a phenomenological description for dissipative quantum systems. Substituting the wave function expressed in terms of the complex action into the complex-extended logarithmic nonlinear Schrödinger equation, we derive the complex quantum Hamilton–Jacobi equation including the dissipative potential. It is shown that dissipative quantum trajectories satisfy a quantum Newtonian equation of motion in complex space with a friction force. Exact dissipative complex quantum trajectories are analyzed for the wave and solitonlike solutions to the logarithmic nonlinear Schrödinger equation for the damped harmonic oscillator. These trajectories converge to the equilibrium position as time evolves. It is indicated that dissipative complex quantum trajectories for the wave and solitonlike solutions are identical to dissipative complex classical trajectories for the damped harmonic oscillator. This study develops a theoretical framework for dissipative quantum trajectories in complex space.

  1. 14 CFR 1214.117 - Launch and orbit parameters for a standard launch.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...

  2. 14 CFR 1214.117 - Launch and orbit parameters for a standard launch.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...

  3. 14 CFR 1214.117 - Launch and orbit parameters for a standard launch.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...) Launch from Kennedy Space Center (KSC) into the customer's choice of two standard mission orbits: 160 NM... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Launch and orbit parameters for a standard launch. 1214.117 Section 1214.117 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION...

  4. KSC-2012-2893

    NASA Image and Video Library

    2012-05-21

    CAPE CANAVERAL, Fla. – A barge arrives at NASA Kennedy Space Center’s Launch Complex 39 turn basin in Florida. The high-fidelity space shuttle model is being transported from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will be transported via barge to Texas. The model was built in Apopka, Fla., by Guard-Lee and installed at the Kennedy Space Center Visitor Complex in 1993. The model has been parked at the turn basin the past five months to allow the Kennedy Space Center Visitor Complex to begin building a new facility to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Frankie Martin

  5. KSC-2012-2892

    NASA Image and Video Library

    2012-05-21

    CAPE CANAVERAL, Fla. – A barge arrives at NASA Kennedy Space Center’s Launch Complex 39 turn basin in Florida. The high-fidelity space shuttle model is being transported from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will be transported via barge to Texas. The model was built in Apopka, Fla., by Guard-Lee and installed at the Kennedy Space Center Visitor Complex in 1993. The model has been parked at the turn basin the past five months to allow the Kennedy Space Center Visitor Complex to begin building a new facility to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Frankie Martin

  6. Experimental identification of a comb-shaped chaotic region in multiple parameter spaces simulated by the Hindmarsh-Rose neuron model

    NASA Astrophysics Data System (ADS)

    Jia, Bing

    2014-03-01

    A comb-shaped chaotic region has been simulated in multiple two-dimensional parameter spaces using the Hindmarsh-Rose (HR) neuron model in many recent studies, which can interpret almost all of the previously simulated bifurcation processes with chaos in neural firing patterns. In the present paper, a comb-shaped chaotic region in a two-dimensional parameter space was reproduced, which presented different processes of period-adding bifurcations with chaos when one parameter was changed while the other was fixed at different levels. In the biological experiments, different period-adding bifurcation scenarios with chaos induced by decreasing the extracellular calcium concentration were observed from some neural pacemakers at different levels of extracellular 4-aminopyridine concentration and from other pacemakers at different levels of extracellular caesium concentration. By using the nonlinear time series analysis method, the deterministic dynamics of the experimental chaotic firings were investigated. The period-adding bifurcations with chaos observed in the experiments resembled those simulated in the comb-shaped chaotic region using the HR model. The experimental results show that period-adding bifurcations with chaos are preserved in different two-dimensional parameter spaces, which provides evidence of the existence of the comb-shaped chaotic region and a demonstration of the simulation results in different two-dimensional parameter spaces of the HR neuron model. The results also reveal relationships between different firing patterns in two-dimensional parameter spaces.
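
    For reference, the HR model itself is three coupled ODEs; a minimal forward-Euler integration with textbook parameter values (the experiments above correspond to sweeping parameters such as I or r over a two-dimensional grid, which this sketch does not do) reproduces bounded bursting dynamics.

```python
import numpy as np

# Hindmarsh-Rose equations:
#   dx/dt = y - a*x^3 + b*x^2 - z + I
#   dy/dt = c - d*x^2 - y
#   dz/dt = r*(s*(x - x_rest) - z)
# Standard parameter values; dt chosen small enough for Euler stability.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I_ext = 0.006, 4.0, -1.6, 3.0
dt, n_steps = 0.005, 200_000

x, y, z = -1.0, 0.0, 3.0
xs = np.empty(n_steps)
for k in range(n_steps):
    dx = y - a * x**3 + b * x**2 - z + I_ext
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_rest) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs[k] = x

# The membrane variable stays bounded and keeps oscillating (bursting).
print(xs.min(), xs.max())
```

    The slow variable z (small r) is what switches the fast subsystem between spiking and quiescence; the period-adding structure discussed above emerges when such runs are repeated across a parameter grid.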

  7. KSC-2011-8236

    NASA Image and Video Library

    2011-12-11

    CAPE CANAVERAL, Fla. – The high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida nears the intersection of NASA Causeway and Kennedy Parkway. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The model is being moved from the visitor complex to NASA Kennedy Space Center's Launch Complex 39 turn basin. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis

  8. KSC-2011-8228

    NASA Image and Video Library

    2011-12-11

    CAPE CANAVERAL, Fla. – A transporter carrying the high-fidelity space shuttle model that was on display at the NASA Kennedy Space Center Visitor Complex in Florida rolls along the NASA Causeway as it leaves the visitor complex on its way to NASA Kennedy Space Center's Launch Complex 39 turn basin. It is standard procedure for large payloads and equipment to travel against the normal flow of traffic under the supervision of a move crew when being transported on or off center property. The shuttle was part of a display at the visitor complex that also included an external tank and two solid rocket boosters that were used to show visitors the size of actual space shuttle components. The full-scale shuttle model is being transferred from Kennedy to Space Center Houston, NASA Johnson Space Center's visitor center. The model will stay at the turn basin for a few months until it is ready to be transported to Texas via barge. The move also helps clear the way for the Kennedy Space Center Visitor Complex to begin construction of a new facility next year to display space shuttle Atlantis in 2013. For more information about Space Center Houston, visit http://www.spacecenter.org. Photo credit: NASA/Dimitri Gerondidakis

  9. Parameter-space metric of semicoherent searches for continuous gravitational waves

    NASA Astrophysics Data System (ADS)

    Pletsch, Holger J.

    2010-08-01

    Continuous gravitational-wave (CW) signals such as those emitted by spinning neutron stars are an important target class for current detectors. However, the enormous computational demand prohibits fully coherent broadband all-sky searches for previously unknown CW sources over wide ranges of parameter space and for yearlong observation times. More efficient hierarchical “semicoherent” search strategies divide the data into segments much shorter than one year, which are analyzed coherently; detection statistics from different segments are then combined incoherently. To perform the incoherent combination optimally, an understanding of the underlying parameter-space structure is required. This problem is addressed here by using new coordinates on the parameter space, which yield the first analytical parameter-space metric for the incoherent combination step. This semicoherent metric applies to broadband all-sky surveys (also embedding directed searches at fixed sky position) for isolated CW sources. Furthermore, the additional metric resolution attained through the combination of segments is studied. Of the search parameters (sky position, frequency, and frequency derivatives), only the metric resolution in the frequency derivatives is found to increase significantly with the number of segments.
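
    Why incoherent combination coarsens the parameter-space metric can be seen in a toy calculation (segment lengths, sample rate, and frequency offset below are illustrative, not LISA/LIGO search parameters): a template offset from the true frequency loses its detection statistic much faster in one long coherent integration than when short segments are analyzed separately and their powers summed.

```python
import numpy as np

fs, t_seg, n_seg = 100.0, 1.0, 10             # 10 segments of 1 s each
n = int(fs * t_seg)
t = np.arange(n_seg * n) / fs
df = 0.4                                       # template-signal frequency offset, Hz
residual = np.exp(2j * np.pi * df * t)         # demodulated signal with mismatch

coherent = np.abs(residual.sum()) ** 2
incoherent = sum(np.abs(residual[k * n:(k + 1) * n].sum()) ** 2
                 for k in range(n_seg))
perfect_seg = n ** 2                           # per-segment power at zero offset

print(coherent / (n_seg * n) ** 2)             # ~0: fully coherent search rejects df
print(incoherent / (n_seg * perfect_seg))      # ~0.57: semicoherent still accepts it
```

    The coherent statistic falls off on a frequency scale ~1/(total time), the incoherent one on ~1/(segment time); that ratio is exactly the extra template density (metric resolution) the coherent stage demands.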

  10. Crystallization and preliminary X-ray analysis of the atrial natriuretic peptide (ANP) receptor extracellular domain complex with ANP: use of ammonium sulfate as the cryosalt.

    PubMed

    Ogawa, Haruo; Zhang, Xiaolun; Qiu, Yue; Ogata, Craig M; Misono, Kunio S

    2003-10-01

    Atrial natriuretic peptide (ANP) plays a major role in blood pressure and volume regulation owing to its natriuretic and vasodilatory activities. The ANP receptor is a single-span transmembrane receptor coupled to its intrinsic guanylyl cyclase activity. The extracellular hormone-binding domain of the rat ANP receptor (ANPR) was overexpressed by permanent transfection in CHO cells and purified. ANPR complexed with ANP was crystallized at 301 K by the hanging-drop vapor-diffusion method. The crystals were frozen in 3.4 M ammonium sulfate used as a cryoprotectant. The crystals diffracted to 3.1 Å resolution using synchrotron radiation and belonged to the hexagonal space group P6(1), with unit-cell parameters a = b = 100.3, c = 258.6 Å.

  11. Crystallization of mitochondrial rhodoquinol-fumarate reductase from the parasitic nematode Ascaris suum with the specific inhibitor flutolanil

    PubMed Central

    Osanai, Arihiro; Harada, Shigeharu; Sakamoto, Kimitoshi; Shimizu, Hironari; Inaoka, Daniel Ken; Kita, Kiyoshi

    2009-01-01

    In adult Ascaris suum (roundworm), the mitochondrial membrane-bound complex II acts as a rhodoquinol-fumarate reductase, catalyzing the reverse of the reaction of mammalian complex II (succinate-ubiquinone reductase). The adult A. suum rhodoquinol-fumarate reductase was crystallized in the presence of octaethyleneglycol monododecyl ether and n-dodecyl-β-d-maltopyranoside in a 3:2 weight ratio. The crystals belonged to the orthorhombic space group P212121, with unit-cell parameters a = 123.75, b = 129.08, c = 221.12 Å, and diffracted to 2.8 Å resolution using synchrotron radiation. The presence of two molecules in the asymmetric unit (120 kDa × 2) gives a crystal volume per protein mass (V_M) of 3.6 Å³ Da⁻¹. PMID:19724139
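    As a quick consistency check, the reported Matthews coefficient can be recomputed from the numbers quoted in the record (taking Z = 4 asymmetric units, the general-position multiplicity of space group P212121):

```python
# Matthews coefficient from the reported cell (orthorhombic, so V = a*b*c).
a, b, c = 123.75, 129.08, 221.12        # unit-cell edges, Angstrom
cell_volume = a * b * c                 # Angstrom^3
z = 4                                   # asymmetric units in P212121
mass_per_asu = 2 * 120_000              # Da: two ~120 kDa molecules per ASU
v_m = cell_volume / (z * mass_per_asu)  # Angstrom^3 per Da
print(round(v_m, 2))                    # 3.68, consistent with the reported 3.6
```

The small difference from the quoted 3.6 Å³ Da⁻¹ is within the rounding of the 120 kDa molecular mass.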

  12. Crystallization and preliminary X-ray analysis of the Man(α1-2)Man-specific lectin from Bowringia mildbraedii in complex with its carbohydrate ligand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Pino, Abel; Loris, Remy; Wyns, Lode

    2005-10-01

    The lectin from the Nigerian legume B. mildbraedii was crystallized in complex with Man(α1-2)Man and data were collected to a resolution of 1.90 Å using synchrotron radiation. The lectin from Bowringia mildbraedii seeds crystallizes in the presence of the disaccharide Man(α1-2)Man. The best crystals grow at 293 K within four weeks after a pre-incubation at 277 K to induce nucleation. A complete data set was collected to a resolution of 1.90 Å using synchrotron radiation. The crystals belong to space group I222, with unit-cell parameters a = 66.06, b = 86.35, c = 91.76 Å, and contain one lectin monomer in the asymmetric unit.

  13. Standard model with a complex scalar singlet: Cosmological implications and theoretical considerations

    NASA Astrophysics Data System (ADS)

    Chiang, Cheng-Wei; Ramsey-Musolf, Michael J.; Senaha, Eibun

    2018-01-01

    We analyze the theoretical and phenomenological considerations for the electroweak phase transition and dark matter in an extension of the standard model with a complex scalar singlet (cxSM). In contrast with earlier studies, we use a renormalization group improved scalar potential and treat its thermal history in a gauge-invariant manner. We find that the parameter space consistent with a strong first-order electroweak phase transition (SFOEWPT) and present dark matter phenomenological constraints is significantly restricted compared to results of a conventional, gauge-noninvariant analysis. In the simplest variant of the cxSM, recent LUX data and a SFOEWPT require a dark matter mass close to half the mass of the standard model-like Higgs boson. We also comment on various caveats regarding the perturbative treatment of the phase transition dynamics.

  14. Spectral characteristics of tramadol in different solvents and β-cyclodextrin

    NASA Astrophysics Data System (ADS)

    Anton Smith, A.; Manavalan, R.; Kannan, K.; Rajendiran, N.

    2009-10-01

    The effect of solvents and β-cyclodextrin (β-CD) on the absorption and fluorescence spectra of the drug tramadol has been investigated and compared with anisole. The solid inclusion complex of tramadol with β-CD is investigated by FT-IR, ¹H NMR, scanning electron microscopy (SEM), DSC and semiempirical methods. The thermodynamic parameter (ΔG) of the inclusion process is determined. The solvent study shows that (i) the spectral behaviour of the tramadol and anisole molecules is similar and (ii) the cyclohexanol group in tramadol is not effectively conjugated with the anisole group. However, in β-CD, owing to the space restriction of the CD cavity, a weak interaction is present between the above groups in tramadol. The β-cyclodextrin studies show that tramadol forms a 1:2 inclusion complex with β-CD. A mechanism is proposed for the inclusion process.
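    The ΔG of inclusion mentioned above is conventionally obtained from a measured binding constant via ΔG = −RT ln K. The K value below is purely illustrative (the paper's actual constants are not given in this record); only the formula is standard.

```python
import math

# Free energy of inclusion from a binding constant: dG = -R*T*ln(K).
R = 8.314          # gas constant, J mol^-1 K^-1
T = 298.15         # temperature, K
K = 250.0          # illustrative overall binding constant (M^-2 for a 1:2 complex)

dG = -R * T * math.log(K)        # J/mol
print(round(dG / 1000, 1))       # ≈ -13.7 kJ/mol for this illustrative K
```

A negative ΔG indicates spontaneous inclusion; larger binding constants give more negative values.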

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemi, Merja, E-mail: merja.niemi@joensuu.fi; Jänis, Janne; Jylhä, Sirpa

    The high-resolution mass-spectrometric characterization, crystallization and X-ray diffraction studies of a recombinant IgE Fab fragment in complex with bovine β-lactoglobulin are reported. A D1 Fab fragment containing the allergen-binding variable domains of the IgE antibody was characterized by ESI FT-ICR mass spectrometry and crystallized with bovine β-lactoglobulin (BLG) using the hanging-drop vapour-diffusion method at 293 K. X-ray data suitable for structure determination were collected to 2.8 Å resolution using synchrotron radiation. The crystal belonged to the orthorhombic space group P2₁2₁2₁, with unit-cell parameters a = 67.0, b = 100.6, c = 168.1 Å. The three-dimensional structure of the D1 Fab fragment–BLG complex will provide the first insight into IgE antibody–allergen interactions at the molecular level.

  16. Design of simplified maximum-likelihood receivers for multiuser CPM systems.

    PubMed

    Bing, Li; Bai, Baoming

    2014-01-01

    A class of simplified maximum-likelihood receivers designed for continuous phase modulation based multiuser systems is proposed. The presented receiver is built upon a front end employing mismatched filters and a maximum-likelihood detector defined in a low-dimensional signal space. The performance of the proposed receivers is analyzed and compared to some existing receivers. Some schemes are designed to implement the proposed receivers and to reveal the roles of different system parameters. Analysis and numerical results show that the proposed receivers can approach the optimum multiuser receivers with significantly (even exponentially in some cases) reduced complexity and marginal performance degradation.
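    The core of any such receiver is a maximum-likelihood decision over a candidate set after the front end projects the received signal into a manageable space. The toy below illustrates only that decision rule with orthogonal complex exponentials standing in for CPM waveforms; the paper's mismatched-filter front end and the phase memory of real CPM are not reproduced, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64
t = np.arange(n) / n
# Four orthogonal complex exponentials (1-4 cycles per window) stand in
# for candidate waveforms; real CPM waveforms carry phase memory between
# symbols, which this sketch ignores.
cands = np.array([np.exp(2j * np.pi * k * t) for k in range(1, 5)])

tx = 2                                   # index of the transmitted candidate
noise = 0.2 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
r = cands[tx] + noise

# Equal-energy candidates: the ML decision maximizes the real part of the
# correlation between the received signal and each candidate.
metrics = np.real(cands.conj() @ r)
detected = int(np.argmax(metrics))
```

Complexity reduction in the paper comes from shrinking the dimension of the space in which these correlations are evaluated, not from changing the argmax rule itself.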

  17. Space Shuttle Plume and Plume Impingement Study

    NASA Technical Reports Server (NTRS)

    Tevepaugh, J. A.; Penny, M. M.

    1977-01-01

    The extent of the influence of the propulsion system exhaust plumes on vehicle performance and control characteristics is a complex function of vehicle geometry, propulsion system geometry, engine operating conditions and vehicle flight trajectory; these dependencies were investigated. Analytical support of the plume technology test program was directed at three problem areas: (1) definition of the full-scale exhaust plume characteristics; (2) application of appropriate similarity parameters; and (3) analysis of wind tunnel test data. Verification of the two-phase plume and plume impingement models was directed toward the definition of the full-scale exhaust plume characteristics and the separation motor impingement problem.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hattrick-Simpers, Jason R.; Gregoire, John M.; Kusne, A. Gilad

    With their ability to rapidly elucidate composition-structure-property relationships, high-throughput experimental studies have revolutionized how materials are discovered, optimized, and commercialized. It is now possible to synthesize and characterize high-throughput libraries that systematically address thousands of individual cuts of fabrication parameter space. An unresolved issue remains transforming structural characterization data into phase mappings. This difficulty is related to the complex information present in diffraction and spectroscopic data and its variation with composition and processing. Here, we review the field of automated phase diagram attribution and discuss the impact that emerging computational approaches will have on the generation of phase diagrams and beyond.
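    The attribution task can be caricatured in a few lines: decide which reference phase each library pattern most resembles. The synthetic Gaussian-peak "diffraction patterns" and nearest-reference rule below are illustrative only; production pipelines handle peak shifting, mixtures, and unknown phases with far richer models.

```python
import numpy as np

rng = np.random.default_rng(2)
q = np.linspace(1, 5, 200)   # scattering-vector axis (arbitrary units)

def pattern(peaks):
    # Gaussian peaks as a crude stand-in for a powder diffraction pattern.
    return sum(np.exp(-((q - p) ** 2) / 0.005) for p in peaks)

phase_a = pattern([1.8, 2.9, 4.1])
phase_b = pattern([2.2, 3.3, 4.6])

# Six library samples: three of phase A, three of phase B, plus noise.
samples = np.array([phase_a] * 3 + [phase_b] * 3)
samples = samples + 0.01 * rng.standard_normal((6, 200))

def cos_sim(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Attribute each sample to the reference phase it is most similar to.
labels = ['A' if cos_sim(s, phase_a) > cos_sim(s, phase_b) else 'B'
          for s in samples]
print(labels)   # ['A', 'A', 'A', 'B', 'B', 'B']
```

The hard cases the review addresses arise precisely where this kind of direct similarity breaks down, e.g. peak shifts with composition or multi-phase regions.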

  19. M-Split: A Graphical User Interface to Analyze Multilayered Anisotropy from Shear Wave Splitting

    NASA Astrophysics Data System (ADS)

    Abgarmi, Bizhan; Ozacar, A. Arda

    2017-04-01

    Shear wave splitting analyses are commonly used to infer deep anisotropic structure. For simple cases, the delay times and fast-axis orientations obtained from reliable results are averaged to define the anisotropy beneath recording seismic stations. However, in the presence of complex anisotropy, splitting parameters show systematic variations with back azimuth and cannot be represented by an average delay time and fast-axis orientation. Previous researchers have identified anisotropic complexities in different tectonic settings and applied various approaches to model them. Most commonly, such complexities are modeled using multiple anisotropic layers with a priori constraints from geologic data. In this study, a graphical user interface called M-Split is developed to easily process and model multilayered anisotropy, with capabilities to properly address the inherent non-uniqueness. The M-Split program runs user-defined grid searches through the model parameter space for two-layer anisotropy using the formulation of Silver and Savage (1994) and creates sensitivity contour plots to locate local maxima and analyze all possible models with parameter trade-offs. To minimize model ambiguity and identify the robust model parameters, various misfit-calculation procedures are also developed and embedded in M-Split; these can be used depending on the quality of the observations and their back-azimuthal coverage. Case studies were carried out to evaluate the reliability of the program on real, noisy data from stations of two different networks: the Kandilli Observatory and Earthquake Research Institute (KOERI) network, which includes long-running permanent stations, and a temporary deployment of seismic stations from the "Continental Dynamics-Central Anatolian Tectonics (CD-CAT)" project funded by NSF.
It is also worth noting that M-Split is designed as an open-source program that users can modify for additional capabilities or other applications.
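    The grid-search core of such a tool can be sketched as follows. This toy searches a single-layer (fast axis, delay time) grid against synthetic "observed" splitting parameters with a made-up misfit; M-Split itself evaluates two-layer apparent splitting via the Silver and Savage (1994) formulation, which is not reproduced here.

```python
import numpy as np

# Synthetic "observed" splitting parameters (invented for illustration).
obs_phi, obs_dt = 40.0, 1.2          # fast axis (deg), delay time (s)

phis = np.arange(-90, 90, 1.0)       # trial fast-axis orientations
dts = np.arange(0.0, 4.0, 0.05)      # trial delay times

best = None
for phi in phis:
    for dt in dts:
        # Wrap the angular difference to [-90, 90): the fast axis is
        # 180-degree periodic.
        dphi = (phi - obs_phi + 90) % 180 - 90
        # Toy normalized misfit; a real search compares predicted and
        # observed splitting over all back azimuths.
        misfit = (dphi / 90.0) ** 2 + ((dt - obs_dt) / 4.0) ** 2
        if best is None or misfit < best[0]:
            best = (misfit, phi, dt)

print(best[1], round(best[2], 2))    # 40.0 1.2
```

Contouring the stored misfit surface over (phi, dt) is what yields the sensitivity plots and exposes the parameter trade-offs the abstract describes.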

  20. Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters

    NASA Astrophysics Data System (ADS)

    Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.

    2004-12-01

    Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly.
The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
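    The progressive-sampling scheme in the first method can be sketched as: fit increasingly complex models on growing data chunks, track held-out error of the averaged ensemble, and stop once the improvement becomes negligible. The abstract's neural networks are replaced here by polynomial regressors, and the chunk sizes, degrees, and stopping tolerance are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stream: noisy samples of a smooth target function.
x = rng.uniform(-1, 1, 2000)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(x.size)
x_val, y_val = x[-500:], y[-500:]        # held-out validation chunk

models, errors = [], []
# Progressive sampling: larger chunk and higher model complexity each step.
for chunk, degree in [(100, 1), (300, 3), (900, 5), (1500, 7)]:
    models.append(np.polyfit(x[:chunk], y[:chunk], degree))
    # Ensemble prediction: average of all models fitted so far.
    pred = np.mean([np.polyval(c, x_val) for c in models], axis=0)
    errors.append(float(np.mean((pred - y_val) ** 2)))
    # Stop growing once the validation improvement becomes negligible.
    if len(errors) > 1 and errors[-2] - errors[-1] < 1e-4:
        break

print(len(models), round(errors[-1], 4))
```

Averaging rather than, say, stacking keeps the combination cheap enough to apply chunk-by-chunk on a data stream, which is the efficiency argument the abstract makes.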
