Adaptive Numerical Algorithms in Space Weather Modeling
NASA Technical Reports Server (NTRS)
Toth, Gabor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.
2010-01-01
Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh in both Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1-, 2-, or 3-dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems.
BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes. Depending on the application, we find that different time stepping methods are optimal. Several of the time integration schemes exploit the block-based granularity of the grid structure. The framework and the adaptive algorithms enable physics based space weather modeling and even forecasting.
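The block-based local time stepping described above can be illustrated with a minimal sketch (hypothetical names, not the BATS-R-US or BATL code): each finer block subcycles with a proportionally smaller explicit step so that all blocks meet at the same coarse time level.

```python
# Illustrative sketch of block-based local time stepping (subcycling):
# a block at refinement level L has cells 2**L times smaller, so its
# explicit CFL-limited step is 2**L times smaller, and it takes 2**L
# substeps per coarse step. Class and method names are invented.

class Block:
    def __init__(self, level):
        self.level = level      # AMR refinement level (0 = coarsest)
        self.time = 0.0         # local simulation time of this block

    def advance(self, dt_coarse):
        nsub = 2 ** self.level
        dt_local = dt_coarse / nsub
        for _ in range(nsub):   # subcycle up to the coarse time level
            self.time += dt_local

def step_grid(blocks, dt_coarse):
    # All blocks, regardless of level, end the coarse step synchronized.
    for b in blocks:
        b.advance(dt_coarse)

blocks = [Block(0), Block(1), Block(3)]
step_grid(blocks, dt_coarse=0.1)
times = [round(b.time, 12) for b in blocks]
```

The payoff is that fine blocks do not force the whole grid onto their small time step; only the refined regions pay the extra cost.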
Sauterey, Boris; Ward, Ben A.; Follows, Michael J.; Bowler, Chris; Claessen, David
2015-01-01
The functional and taxonomic biogeography of marine microbial systems reflects the current state of an evolving system. Current models of marine microbial systems and biogeochemical cycles do not reflect this fundamental organizing principle. Here, we investigate the evolutionary adaptive potential of marine microbial systems under environmental change and introduce explicit Darwinian adaptation into an ocean modelling framework, simulating evolving phytoplankton communities in space and time. To this end, we adopt tools from adaptive dynamics theory, evaluating the fitness of invading mutants over annual timescales, replacing the resident if a fitter mutant arises. Using the evolutionary framework, we examine how community assembly, specifically the emergence of phytoplankton cell size diversity, reflects the combined effects of bottom-up and top-down controls. When compared with a species-selection approach, based on the paradigm that “Everything is everywhere, but the environment selects”, we show that (i) the selected optimal trait values are similar; (ii) the patterns emerging from the adaptive model are more robust, but (iii) the two methods lead to different predictions in terms of emergent diversity. We demonstrate that explicitly evolutionary approaches to modelling marine microbial populations and functionality are feasible and practical in time-varying, space-resolving settings and provide a new tool for exploring evolutionary interactions on a range of timescales in the ocean. PMID:25852217
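The adaptive-dynamics loop described in this abstract, evaluating an invading mutant's fitness and replacing the resident if the mutant is fitter, can be sketched in a few lines. The quadratic fitness function and its optimum at trait = 2.0 are illustrative stand-ins, not the paper's phytoplankton model.

```python
# Minimal sketch of an adaptive-dynamics invasion loop: perturb the
# resident trait, evaluate the mutant's fitness, and let the fitter
# mutant replace the resident. The fitness landscape is a stand-in.
import random

def fitness(trait, env_optimum=2.0):
    # Illustrative peaked fitness landscape with optimum at env_optimum
    return -(trait - env_optimum) ** 2

def evolve(resident, generations=2000, mut_sd=0.05, seed=1):
    rng = random.Random(seed)
    for _ in range(generations):
        mutant = resident + rng.gauss(0.0, mut_sd)
        if fitness(mutant) > fitness(resident):
            resident = mutant          # fitter mutant invades and fixes
    return resident

final_trait = evolve(resident=0.0)
```

Under a fixed environment this trait-substitution walk climbs to the fitness optimum; in the paper's setting the optimum itself varies in space and time, so the resident tracks a moving target.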
F-8C adaptive control law refinement and software development
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.
1981-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters was designed. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm was implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer, surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software.
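The parallel-channel idea, avoiding an iterative maximum likelihood search by scoring a fixed bank of filters at preset points in parameter space and selecting the best, can be sketched with scalar models standing in for the Kalman filter channels. All numerical values below are illustrative.

```python
# Sketch of parallel-channel maximum likelihood parameter selection:
# each channel assumes a fixed parameter value, accumulates a
# log-likelihood score from its prediction errors, and the channel
# with the highest score is selected. Scalar predictors replace full
# Kalman filters here for brevity.
import random

def run_channels(us, ys, candidates, noise_var=0.01):
    loglik = {a: 0.0 for a in candidates}
    for u, y in zip(us, ys):
        for a in candidates:
            err = y - a * u                    # channel prediction error
            loglik[a] += -err * err / (2.0 * noise_var)
    return max(loglik, key=loglik.get)         # maximum-likelihood channel

rng = random.Random(0)
true_a = 1.5
us = [rng.uniform(-1.0, 1.0) for _ in range(200)]
ys = [true_a * u + rng.gauss(0.0, 0.1) for u in us]
best = run_channels(us, ys, candidates=[0.5, 1.0, 1.5, 2.0])
```

Because every channel runs in parallel over the same data, no iteration in parameter space is needed, which is what made the scheme attractive for real-time flight implementation.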
NASA Astrophysics Data System (ADS)
Clark, Martyn P.; Kavetski, Dmitri
2010-10-01
A major neglected weakness of many current hydrological models is the numerical method used to solve the governing model equations. This paper thoroughly evaluates several classes of time stepping schemes in terms of numerical reliability and computational efficiency in the context of conceptual hydrological modeling. Numerical experiments are carried out using 8 distinct time stepping algorithms and 6 different conceptual rainfall-runoff models, applied in a densely gauged experimental catchment, as well as in 12 basins with diverse physical and hydroclimatic characteristics. Results show that, over vast regions of the parameter space, the numerical errors of fixed-step explicit schemes commonly used in hydrology routinely dwarf the structural errors of the model conceptualization. This substantially degrades model predictions, but also, disturbingly, generates fortuitously adequate performance for parameter sets where numerical errors compensate for model structural errors. Simply running fixed-step explicit schemes with shorter time steps provides a poor balance between accuracy and efficiency: in some cases daily-step adaptive explicit schemes with moderate error tolerances achieved comparable or higher accuracy than 15 min fixed-step explicit approximations but were nearly 10 times more efficient. From the range of simple time stepping schemes investigated in this work, the fixed-step implicit Euler method and the adaptive explicit Heun method emerge as good practical choices for the majority of simulation scenarios. In combination with the companion paper, where impacts on model analysis, interpretation, and prediction are assessed, this two-part study vividly highlights the impact of numerical errors on critical performance aspects of conceptual hydrological models and provides practical guidelines for robust numerical implementation.
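The two schemes the study singles out as good practical choices can be sketched on a single linear-reservoir model dS/dt = P - k*S (a stand-in for the conceptual rainfall-runoff models tested): fixed-step implicit Euler, and an adaptive explicit Heun scheme that halves its step when an embedded error estimate exceeds the tolerance.

```python
# Sketch of the two recommended time stepping schemes on a linear
# reservoir dS/dt = P - k*S. Parameter values are illustrative.
import math

def f(S, P=2.0, k=0.5):
    return P - k * S

def implicit_euler(S, dt, P=2.0, k=0.5):
    # S_new = S + dt*(P - k*S_new); the linear model has a closed form
    return (S + dt * P) / (1.0 + dt * k)

def adaptive_heun(S, t_end, dt=1.0, tol=1e-4):
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        k1 = f(S)
        pred = S + dt * k1                      # explicit Euler predictor
        heun = S + 0.5 * dt * (k1 + f(pred))    # Heun (trapezoidal) corrector
        err = abs(heun - pred)                  # embedded error estimate
        if err > tol:
            dt *= 0.5                           # reject the step and retry
            continue
        S, t = heun, t + dt
        if err < tol / 4.0:
            dt *= 2.0                           # accept and grow the step
    return S

S_heun = adaptive_heun(S=1.0, t_end=10.0)       # adaptive explicit Heun
S_ie = 1.0
for _ in range(100):                            # fixed-step implicit Euler
    S_ie = implicit_euler(S_ie, dt=0.1)
exact = 4.0 - 3.0 * math.exp(-0.5 * 10.0)       # analytic solution at t = 10
```

The adaptive scheme spends small steps only where the dynamics demand them, which is the efficiency advantage the paper quantifies against short fixed-step explicit integration.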
Inverse dynamics of adaptive structures used as space cranes
NASA Technical Reports Server (NTRS)
Das, S. K.; Utku, S.; Wada, B. K.
1990-01-01
As a precursor to the real-time control of fast-moving adaptive structures used as space cranes, a formulation is given for the flexibility-induced motion relative to the nominal motion (i.e., the motion that assumes no flexibility) and for obtaining the open-loop time-varying driving forces. An algorithm is proposed for the computation of the relative motion and driving forces. The governing equations are given in matrix form with explicit functional dependencies. A simulator is developed to implement the algorithm on a digital computer. In the formulations, the distributed mass of the crane is lumped by two schemes, viz., 'trapezoidal' lumping and 'Simpson's rule' lumping. The effects of the mass lumping schemes are shown by simulator runs.
Mavritsaki, Eirini; Heinke, Dietmar; Humphreys, Glyn W; Deco, Gustavo
2006-01-01
In the real world, visual information is selected over time as well as space, when we prioritise new stimuli for attention. Watson and Humphreys [Watson, D., Humphreys, G.W., 1997. Visual marking: prioritizing selection for new objects by top-down attentional inhibition of old objects. Psychological Review 104, 90-122] presented evidence that new information in search tasks is prioritised by (amongst other processes) active ignoring of old items - a process they termed visual marking. In this paper we present, for the first time, an explicit computational model of visual marking using biologically plausible activation functions. The "spiking search over time and space" model (sSoTS) incorporates different synaptic components (NMDA, AMPA, GABA) and a frequency adaptation mechanism based on [Ca(2+)] sensitive K(+) current. This frequency adaptation current can act as a mechanism that suppresses the previously attended items. We show that, when coupled with a process of active inhibition applied to old items, frequency adaptation leads to old items being de-prioritised (and new items prioritised) across time in search. Furthermore, the time course of these processes mimics the time course of the preview effect in human search. The results indicate that the sSoTS model can provide a biologically plausible account of human search over time as well as space.
WAKES: Wavelet Adaptive Kinetic Evolution Solvers
NASA Astrophysics Data System (ADS)
Mardirian, Marine; Afeyan, Bedros; Larson, David
2016-10-01
We are developing a general capability to solve phase space evolution equations adaptively, mixing particle and continuum techniques. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation to occur with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a Grant from the AFOSR.
Manzano-Piedras, Esperanza; Marcer, Arnald; Alonso-Blanco, Carlos; Picó, F Xavier
2014-01-01
The role that different life-history traits may have in the process of adaptation caused by divergent selection can be assessed by using extensive collections of geographically-explicit populations. This is because adaptive phenotypic variation shifts gradually across space as a result of the geographic patterns of variation in environmental selective pressures. Hence, large-scale experiments are needed to identify relevant adaptive life-history traits as well as their relationships with putative selective agents. We conducted a field experiment with 279 geo-referenced accessions of the annual plant Arabidopsis thaliana collected across a native region of its distribution range, the Iberian Peninsula. We quantified variation in life-history traits throughout the entire life cycle. We built a geographic information system to generate an environmental data set encompassing climate, vegetation and soil data. We analysed the spatial autocorrelation patterns of environmental variables and life-history traits, as well as the relationship between environmental and phenotypic data. Almost all environmental variables were significantly spatially autocorrelated. By contrast, only two life-history traits, seed weight and flowering time, exhibited significant spatial autocorrelation. Flowering time, and to a lesser extent seed weight, were the life-history traits with the highest significant correlation coefficients with environmental factors, in particular with annual mean temperature. In general, individual fitness was higher for accessions with more vigorous seed germination, higher recruitment and later flowering times. Variation in flowering time mediated by temperature appears to be the main life-history trait by which A. thaliana adjusts its life history to the varying Iberian environmental conditions. The use of extensive geographically-explicit data sets obtained from field experiments represents a powerful approach to unravel adaptive patterns of variation.
In the context of current global warming, geographically-explicit approaches that evaluate the match between organisms and the environments where they live may contribute to better assessing and predicting its consequences.
NASA Astrophysics Data System (ADS)
Falugi, P.; Olaru, S.; Dumur, D.
2010-08-01
This article proposes an explicit robust predictive control solution based on linear matrix inequalities (LMIs). The considered predictive control strategy uses different local descriptions of the system dynamics and uncertainties and thus allows the handling of less conservative input constraints. The computed control law guarantees constraint satisfaction and asymptotic stability. The technique is effective for a class of nonlinear systems embedded into polytopic models. A detailed discussion of the procedures which adapt the partition of the state space is presented. For the practical implementation, concrete algorithms for constructing suitable (explicit) descriptions of the control law are described.
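Online evaluation of an explicit control law typically reduces to a lookup over a state-space partition: find the region containing the current state and apply that region's affine feedback. The one-dimensional partition and gains below are invented for illustration, not derived from the article's LMI synthesis.

```python
# Sketch of evaluating an explicit (piecewise affine) control law:
# the offline synthesis stores regions (intersections of halfplanes)
# with affine gains; the online controller is a region lookup.
# Regions and gains here model input saturation of u = -2x at |u| <= 1.

def in_region(x, halfplanes):
    # A region is an intersection of halfplanes a*x <= b (1-D state here).
    return all(a * x <= b for a, b in halfplanes)

regions = [
    {"H": [(1.0, 0.5), (-1.0, 0.5)], "K": -2.0, "g": 0.0},   # |x| <= 0.5
    {"H": [(-1.0, -0.5)],            "K": 0.0,  "g": -1.0},  # x >= 0.5
    {"H": [(1.0, -0.5)],             "K": 0.0,  "g": 1.0},   # x <= -0.5
]

def explicit_control(x):
    for r in regions:
        if in_region(x, r["H"]):
            return r["K"] * x + r["g"]     # affine law in this region
    raise ValueError("state outside the partitioned set")

u_inner = explicit_control(0.2)    # unsaturated region: u = -2 * 0.2
u_sat = explicit_control(0.9)      # saturated region: u clipped at -1
```

Refining the partition, as the article's adaptation procedures do, trades a larger lookup table for less conservative constraint handling.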
NASA Astrophysics Data System (ADS)
Taitano, W. T.; Chacón, L.; Simakov, A. N.
2018-07-01
We consider a 1D-2V Vlasov-Fokker-Planck multi-species ionic description coupled to fluid electrons. We address temporal stiffness with implicit time stepping, suitably preconditioned. To address temperature disparity in time and space, we extend the conservative adaptive velocity-space discretization scheme proposed in [Taitano et al., J. Comput. Phys., 318, 391-420, (2016)] to a spatially inhomogeneous system. In this approach, we normalize the velocity-space coordinate to a temporally and spatially varying local characteristic speed per species. We explicitly consider the resulting inertial terms in the Vlasov equation, and derive a discrete formulation that conserves mass, momentum, and energy up to a prescribed nonlinear tolerance upon convergence. Our conservation strategy employs nonlinear constraints to enforce these properties discretely for both the Vlasov operator and the Fokker-Planck collision operator. Numerical examples of varying degrees of complexity, including shock-wave propagation, demonstrate the favorable efficiency and accuracy properties of the scheme.
Flight data processing with the F-8 adaptive algorithm
NASA Technical Reports Server (NTRS)
Hartmann, G.; Stein, G.; Petersen, K.
1977-01-01
An explicit adaptive control algorithm based on maximum likelihood estimation of parameters has been designed for NASA's DFBW F-8 aircraft. To avoid iterative calculations, the algorithm uses parallel channels of Kalman filters operating at fixed locations in parameter space. This algorithm has been implemented in NASA/DFRC's Remotely Augmented Vehicle (RAV) facility. Real-time sensor outputs (rate gyro, accelerometer and surface position) are telemetered to a ground computer which sends new gain values to an on-board system. Ground test data and flight records were used to establish design values of noise statistics and to verify the ground-based adaptive software. The software and its performance evaluation based on flight data are described.
Techniques for grid manipulation and adaptation [computational fluid dynamics]
NASA Technical Reports Server (NTRS)
Choo, Yung K.; Eisemann, Peter R.; Lee, Ki D.
1992-01-01
Two approaches have been taken to provide systematic grid manipulation for improved grid quality. One is the control point form (CPF) of algebraic grid generation. It provides explicit control of the physical grid shape and grid spacing through the movement of the control points. It works well in the interactive computer graphics environment and hence can be a good candidate for integration with other emerging technologies. The other approach is grid adaptation using a numerical mapping between the physical space and a parametric space. Grid adaptation is achieved by modifying the mapping functions through the effects of grid control sources. The adaptation process can be repeated in a cyclic manner if satisfactory results are not achieved after a single application.
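The explicit, solve-free character of algebraic grid generation can be seen in a small transfinite-interpolation sketch: interior nodes follow directly from the boundary (control) curves, so moving a control point reshapes the grid immediately, with no equations to solve. The boundary curves below are illustrative, not the control point form itself.

```python
# Sketch of algebraic grid generation by bilinear transfinite
# interpolation: interior nodes are an explicit blend of the four
# boundary curves minus the doubly counted corners.

def transfinite(xi, eta, bottom, top, left, right):
    bx, by = bottom(xi); tx, ty = top(xi)
    lx, ly = left(eta);  rx, ry = right(eta)
    c00 = bottom(0.0); c10 = bottom(1.0)   # corners shared by two edges
    c01 = top(0.0);    c11 = top(1.0)
    x = ((1 - eta) * bx + eta * tx + (1 - xi) * lx + xi * rx
         - (1 - xi) * (1 - eta) * c00[0] - xi * (1 - eta) * c10[0]
         - (1 - xi) * eta * c01[0] - xi * eta * c11[0])
    y = ((1 - eta) * by + eta * ty + (1 - xi) * ly + xi * ry
         - (1 - xi) * (1 - eta) * c00[1] - xi * (1 - eta) * c10[1]
         - (1 - xi) * eta * c01[1] - xi * eta * c11[1])
    return x, y

# Illustrative domain: a unit-wide channel whose top edge sits at y = 2.
bottom = lambda s: (s, 0.0)
top    = lambda s: (s, 2.0)
left   = lambda s: (0.0, 2.0 * s)
right  = lambda s: (1.0, 2.0 * s)

center = transfinite(0.5, 0.5, bottom, top, left, right)
```

Because every node is an explicit function of the boundaries, the method responds instantly to interactive control-point movement, which is the property the abstract highlights for computer-graphics integration.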
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Angulo, Raul E.
2016-01-01
N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refineable `Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (I) explicit tracking of the fine-grained distribution function, (II) natural representation of caustics, (III) intrinsically smooth gravitational potential fields, thus (IV) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.
Kalske, Aino; Leimu, Roosa; Scheepens, J F; Mutikainen, Pia
2016-09-01
Local adaptation of interacting species to one another indicates geographically variable reciprocal selection. This process of adaptation is central in the organization and maintenance of genetic variation across populations. Given that the strength of selection and responses to it often vary in time and space, the strength of local adaptation should in theory vary between generations and among populations. However, such spatiotemporal variation has rarely been explicitly demonstrated in nature and local adaptation is commonly considered to be relatively static. We report persistent local adaptation of the short-lived herbivore Abrostola asclepiadis to its long-lived host plant Vincetoxicum hirundinaria over three successive generations in two studied populations and considerable temporal variation in local adaptation in six populations supporting the geographic mosaic theory. The observed variation in local adaptation among populations was best explained by geographic distance and population isolation, suggesting that gene flow reduces local adaptation. Changes in herbivore population size did not conclusively explain temporal variation in local adaptation. Our results also imply that short-term studies are likely to capture only a part of the existing variation in local adaptation. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms
NASA Astrophysics Data System (ADS)
Hasbestan, Jaber J.; Senocak, Inanc
2017-12-01
Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provides a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
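The two ingredients named at the end of the abstract can be sketched compactly: a Morton (Z-order) code interleaves the bits of the integer cell coordinates into a single key, and a hash table maps keys to cells, so no parent or child pointers are stored at all. A 2-D quadtree version is shown for brevity.

```python
# Pointerless tree sketch: Morton (Z-order) keys plus a hash table
# replace the eight pointers per node of an explicit octree.

def morton2d(x, y, bits=16):
    # Interleave bits: result bit 2i is x's bit i, bit 2i+1 is y's bit i.
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)
        code |= ((y >> i) & 1) << (2 * i + 1)
    return code

# One refinement level of a pointerless quadtree: hash occupied cells.
cells = {}
for (cx, cy) in [(0, 0), (1, 0), (0, 1), (1, 1), (5, 3)]:
    cells[morton2d(cx, cy)] = {"coords": (cx, cy)}

# Parent lookup needs no stored pointer: drop the lowest interleaved
# bit pair (2 bits in 2-D; 3 bits in 3-D for an octree).
child_key = morton2d(5, 3)
parent_key = child_key >> 2
```

Shifting the key right by one bit pair yields the parent's key at the next coarser level, and sibling keys are consecutive, which is also what makes Z-order traversal cache-friendly.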
Li, Zhijun; Su, Chun-Yi
2013-09-01
In this paper, adaptive neural network control is investigated for single-master-multiple-slaves teleoperation in consideration of time delays and input dead-zone uncertainties for multiple mobile manipulators carrying a common object in a cooperative manner. Firstly, concise dynamics of teleoperation systems consisting of a single master robot, multiple coordinated slave robots, and the object are developed in the task space. To handle asymmetric time-varying delays in communication channels and unknown asymmetric input dead zones, the nonlinear dynamics of the teleoperation system are transformed into two subsystems through feedback linearization: local master or slave dynamics including the unknown input dead zones and delayed dynamics for the purpose of synchronization. Then, a model reference neural network control strategy based on linear matrix inequalities (LMI) and adaptive techniques is proposed. The developed control approach ensures that the defined tracking errors converge to zero whereas the coordination internal force errors remain bounded and can be made arbitrarily small. Throughout this paper, stability analysis is performed via explicit Lyapunov techniques under specific LMI conditions. The proposed adaptive neural network control scheme is robust against motion disturbances, parametric uncertainties, time-varying delays, and input dead zones, which is validated by simulation studies.
NASA Astrophysics Data System (ADS)
D'Ambrosio, Raffaele; Moccaldi, Martina; Paternoster, Beatrice
2018-05-01
In this paper, an adapted numerical scheme for reaction-diffusion problems generating periodic wavefronts is introduced. Adapted numerical methods for such evolutionary problems are specially tuned to follow prescribed qualitative behaviors of the solutions, making the numerical scheme more accurate and efficient than traditional schemes already known in the literature. Adaptation through the so-called exponential fitting technique leads to methods whose coefficients depend on unknown parameters related to the dynamics, which are to be computed numerically. Here we propose a strategy for a cheap and accurate estimation of such parameters, which consists essentially of minimizing the leading term of the local truncation error, whose expression is provided in a rigorous accuracy analysis. In particular, the presented estimation technique has been applied to a numerical scheme based on combining an adapted finite difference discretization in space with an implicit-explicit time discretization. Numerical experiments confirming the effectiveness of the approach are also provided.
User-Centered Indexing for Adaptive Information Access
NASA Technical Reports Server (NTRS)
Chen, James R.; Mathe, Nathalie
1996-01-01
We are focusing on information access tasks characterized by a large volume of hypermedia-connected technical documents, a need for rapid and effective access to familiar information, and long-term interaction with evolving information. The problem for technical users is to build and maintain a personalized task-oriented model of the information to quickly access relevant information. We propose a solution which provides user-centered adaptive information retrieval and navigation. This solution supports users in customizing information access over time. It is complementary to information discovery methods which provide access to new information, since it lets users customize future access to previously found information. It relies on a technique, called Adaptive Relevance Network, which creates and maintains a complex indexing structure to represent a user's personal information access maps organized by concepts. This technique is integrated within the Adaptive HyperMan system, which helps NASA Space Shuttle flight controllers organize and access large amounts of information. It allows users to select and mark any part of a document as interesting, and to index that part with user-defined concepts. Users can then do subsequent retrieval of marked portions of documents. This functionality allows users to define and access personal collections of information, which are dynamically computed. The system also supports collaborative review by letting users share group access maps. The adaptive relevance network provides long-term adaptation based both on usage and on explicit user input. The indexing structure is dynamic and evolves over time. Learning and generalization support flexible retrieval of information under similar concepts. The network is geared towards more recent information access, and automatically manages its size in order to maintain rapid access when scaling up to a large hypermedia space. We present results of simulated learning experiments.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Aliyu, Aliyu Isa; Baleanu, Dumitru
2018-04-01
This paper studies the symmetry analysis, explicit solutions, convergence analysis, and conservation laws (Cls) for two different space-time fractional nonlinear evolution equations with the Riemann-Liouville (RL) derivative. The governing equations are reduced to nonlinear ordinary differential equations (ODEs) of fractional order using their Lie point symmetries. In the reduced equations, the derivative is in the Erdelyi-Kober (EK) sense, and the power series technique is applied to derive explicit solutions for the reduced fractional ODEs. The convergence of the obtained power series solutions is also presented. Moreover, the new conservation theorem and the generalization of the Noether operators are developed to construct the nonlocal Cls for the equations. Some interesting figures for the obtained explicit solutions are presented.
Motor learning and consolidation: the case of visuomotor rotation.
Krakauer, John W
2009-01-01
Adaptation to visuomotor rotation is a particular form of motor learning distinct from force-field adaptation, sequence learning, and skill learning. Nevertheless, study of adaptation to visuomotor rotation has yielded a number of findings and principles that are likely of general importance to procedural learning and memory. First, rotation learning is implicit and appears to proceed through reduction in a visual prediction error generated by a forward model; such implicit adaptation occurs even when it is in conflict with an explicit task goal. Second, rotation learning is subject to different forms of interference: retrograde, anterograde through aftereffects, and contextual blocking of retrieval. Third, opposite rotations can be recalled within a short time interval without interference if implicit contextual cues (effector change) rather than explicit cues (color change) are used. Fourth, rotation learning consolidates both over time and with increased initial training (saturation learning).
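The implicit, error-driven learning described in the first point is commonly summarized by a single-state learning rule, sketched here with illustrative (not fitted) parameters: each trial updates the internal estimate by a fraction B of the visual prediction error, with retention factor A, and removing the rotation then exposes the aftereffect.

```python
# Sketch of a single-state error-driven model of visuomotor rotation
# adaptation: x tracks the imposed rotation via the prediction error.
# A (retention) and B (learning rate) are illustrative values.

def simulate(rotation_deg, trials, A=0.99, B=0.2):
    x = 0.0
    history = []
    for _ in range(trials):
        error = rotation_deg - x       # visual prediction error
        x = A * x + B * error          # implicit update of the estimate
        history.append(x)
    return x, history

# Adaptation to a 30-degree rotation over 100 reaching trials...
adapted, _ = simulate(30.0, trials=100)
# ...then the rotation is switched off: reaches are biased the other
# way by roughly the adapted amount (the aftereffect).
aftereffect_error = 0.0 - adapted
```

With A slightly below 1, the model converges near (not exactly at) the imposed rotation, and the residual state is what produces anterograde interference through aftereffects.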
Adapting livestock management to spatio-temporal heterogeneity in semi-arid rangelands.
Jakoby, O; Quaas, M F; Baumgärtner, S; Frank, K
2015-10-01
Management strategies in rotational grazing systems differ in their level of complexity and adaptivity. Different components of such grazing strategies are expected to allow for adaptation to environmental heterogeneities in space and time. However, most models investigating general principles of rangeland management strategies neglect spatio-temporal system properties including seasonality and spatial heterogeneity of environmental variables. We developed an ecological-economic rangeland model that combines a spatially explicit farm structure with intra-annual time steps. This allows investigating different management components in rotational grazing systems (including stocking and rotation rules) and evaluating their effect on the ecological and economic states of semi-arid grazing systems. Our results show that adaptive stocking is less sensitive to overstocking compared to a constant stocking strategy. Furthermore, the rotation rule becomes important only at stocking numbers that maximize expected income. Altogether, the best of the tested strategies is adaptive stocking combined with a rotation that adapts to both spatial forage availability and seasonality. This management strategy maximises mean income and at the same time maintains the rangeland in a viable condition. However, we could also show that inappropriate adaptation that neglects seasonality even leads to deterioration. Rangelands characterised by higher inter-annual climate variability show a higher risk of income losses under a non-adaptive stocking rule, and non-adaptive rotation is least able to buffer increasing climate variability. Overall, all important system properties including seasonality and spatial heterogeneity of available resources need to be considered when designing an appropriate rangeland management system. Resulting adaptive rotational grazing strategies can be valuable for improving management and mitigating income risks. Copyright © 2015 Elsevier Ltd. All rights reserved.
Sector-Based Detection for Hands-Free Speech Enhancement in Cars
NASA Astrophysics Data System (ADS)
Lathoud, Guillaume; Bourgeois, Julien; Freudenberger, Jürgen
2006-12-01
Adaptation control of beamforming interference cancellation techniques is investigated for in-car speech acquisition. Two efficient adaptation control methods are proposed that avoid target cancellation. The "implicit" method varies the step size continuously, based on the filtered output signal. The "explicit" method decides in a binary manner whether to adapt or not, based on a novel estimate of target and interference energies. It estimates the average delay-sum power within a volume of space, for the same cost as the classical delay-sum. Experiments on real in-car data validate both methods, including a case with high-speed background road noise.
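The "explicit" binary adaptation control can be sketched with a gated NLMS canceller. This is a generic sketch, not the paper's algorithm: the sector-based delay-sum energy estimator is not reproduced, and the per-sample `interference_dominant` flag is assumed to be given by such an estimator.

```python
import numpy as np

def nlms_with_gate(x, d, interference_dominant, n_taps=8, mu=0.5):
    """Adaptive interference canceller (NLMS) with 'explicit'-style binary
    adaptation control: the filter adapts only on samples flagged as
    interference-dominated, to avoid cancelling the target speech.
    The flag is assumed given; the paper derives it from sector-based
    delay-sum power estimates, which are not reproduced here."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        xn = x[n - n_taps + 1:n + 1][::-1]       # current and past reference samples
        y[n] = d[n] - w @ xn                     # canceller output (residual)
        if interference_dominant[n]:             # safe to adapt: no target present
            w += mu * y[n] * xn / (xn @ xn + 1e-8)
    return y, w

# on a noise-only segment (flag always true) the filter learns the
# interference path; when target speech is flagged, adaptation freezes
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                    # reference (interference) signal
d = 0.5 * x                                      # interference at the primary channel
y, w = nlms_with_gate(x, d, np.ones(4000, dtype=bool))
```

Freezing the update (rather than merely shrinking the step size, as the "implicit" method does) is what makes the control binary.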
Time Synchronization and Distribution Mechanisms for Space Networks
NASA Technical Reports Server (NTRS)
Woo, Simon S.; Gao, Jay L.; Clare, Loren P.; Mills, David L.
2011-01-01
This work discusses the problems of synchronizing and distributing time information between spacecraft based on the Network Time Protocol (NTP), the standard time synchronization protocol widely used in terrestrial networks. The Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol was designed and developed for synchronizing spacecraft in proximity, i.e., less than 100,000 km apart. A particular application is synchronization between a Mars orbiter and rover; lunar scenarios as well as outer-planet deep-space mother-ship/probe missions also apply. The spacecraft with more accurate time information functions as the time server, and the other spacecraft functions as the time client. PITS can be integrated into the CCSDS Proximity-1 Space Link Protocol with minor modifications. In particular, PITS can take advantage of the timestamping strategy that the underlying link layer provides for accurate time offset calculation. The PITS algorithm achieves time synchronization with eight consecutive space network time packet exchanges between two spacecraft. PITS can detect and avoid possible errors from receiving duplicate and out-of-order packets by comparing them with the current state variables and timestamps. Further, PITS can detect error events and autonomously recover from unexpected events that may occur during the time synchronization and distribution process, achieving an additional level of protocol protection on top of CRC or error-correction codes. PITS is a lightweight and efficient protocol, eliminating the need for explicit frame sequence numbers and long buffer storage. The PITS protocol is capable of providing time synchronization and distribution services in a more general domain where multiple entities need to achieve time synchronization using a single point-to-point link.
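The timestamp-exchange principle underlying NTP (and hence PITS) can be shown with the classic four-timestamp offset and delay computation. This is the generic NTP on-wire formula, not the PITS packet format.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic NTP on-wire calculation from four timestamps:
    t1 = client transmit, t2 = server receive, t3 = server transmit,
    t4 = client receive (t1, t4 in the client's clock; t2, t3 in the
    server's clock). PITS applies the same timestamp-exchange principle
    over a Proximity-1 link; this is the generic NTP formula, not the
    PITS wire format."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip link delay
    return offset, delay
```

For example, a client running 5 s behind a server over a symmetric 2 s link gives `ntp_offset_delay(0.0, 7.0, 7.5, 4.5)` = `(5.0, 4.0)`: the offset estimate is exact whenever the uplink and downlink delays are equal.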
The time course of explicit and implicit categorization.
Smith, J David; Zakrzewski, Alexandria C; Herberger, Eric R; Boomer, Joseph; Roeder, Jessica L; Ashby, F Gregory; Church, Barbara A
2015-10-01
Contemporary theory in cognitive neuroscience distinguishes, among the processes and utilities that serve categorization, explicit and implicit systems of category learning that learn, respectively, category rules by active hypothesis testing or adaptive behaviors by association and reinforcement. Little is known about the time course of categorization within these systems. Accordingly, the present experiments contrasted tasks that fostered explicit categorization (because they had a one-dimensional, rule-based solution) or implicit categorization (because they had a two-dimensional, information-integration solution). In Experiment 1, participants learned categories under unspeeded or speeded conditions. In Experiment 2, they applied previously trained category knowledge under unspeeded or speeded conditions. Speeded conditions selectively impaired implicit category learning and implicit mature categorization. These results illuminate the processing dynamics of explicit/implicit categorization.
Daniel R. Williams
2018-01-01
Nature conservation constitutes an important realm of professional practice with strong connections to the discourses on nature and sustainability. In recent decades much of that discourse has taken an explicitly spatial turn, observable across numerous domains of ecological, social, and political thought (Williams et al., 2013; Wu, 2006). The aim of this chapter is to...
Awareness-based game-theoretic space resource management
NASA Astrophysics Data System (ADS)
Chen, Genshe; Chen, Huimin; Pham, Khanh; Blasch, Erik; Cruz, Jose B., Jr.
2009-05-01
Over recent decades, the space environment has become more complex, with a significant increase in space debris and a greater density of spacecraft, posing great difficulties to efficient and reliable space operations. In this paper we present a Hierarchical Sensor Management (HSM) method for space operations that (a) accommodates awareness modeling and updating and (b) supports collaborative search and tracking of space objects. The basic approach is as follows. First, partition the relevant region of interest into distinct cells and initialize the dynamics of each cell with awareness and object covariance according to prior information. Second, explicitly assign sensing resources to objects with user-specified requirements. Note that when an object responds intelligently to the sensing event, the sensor assigned to observe it may switch from time to time between a strong, active signal mode and a passive mode to maximize the total amount of information obtained over a multi-step time horizon while avoiding risk. Third, if all explicitly specified requirements are satisfied and sensing resources remain, we assign the additional resources to objects without explicitly specified requirements via an information-based approach. Finally, sensor scheduling is applied to each sensor-object or sensor-cell pair according to the object type. We demonstrate our method on a realistic space resource management scenario using NASA's General Mission Analysis Tool (GMAT) for space object search and track with multiple space-borne observers.
Behind the Mosaic: Insurgent Centers of Gravity and Counterinsurgency
2011-12-01
centers of gravity vary by time, space, and purpose. While Clausewitz's key statement on a center of gravity defines a single center of gravity, he ... explicitly or implicitly, that multiple centers of gravity can vary with time, space, and purpose. Shimon Naveh, retired Israeli Reserve Brigadier ... century military forces, which in turn expanded operations in time and space. The integration of operations distributed in time and space ...
Adaptive implicit-explicit and parallel element-by-element iteration schemes
NASA Technical Reports Server (NTRS)
Tezduyar, T. E.; Liou, J.; Nguyen, T.; Poole, S.
1989-01-01
Adaptive implicit-explicit (AIE) and grouped element-by-element (GEBE) iteration schemes are presented for the finite element solution of large-scale problems in computational mechanics and physics. The AIE approach is based on the dynamic arrangement of the elements into differently treated groups. The GEBE procedure, a way of rewriting the EBE formulation that makes its parallel processing potential and implementation clearer, is based on the static arrangement of the elements into groups with no inter-element coupling within each group. Various numerical tests demonstrate the savings in CPU time and memory.
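The AIE idea of dynamically sorting elements into implicit and explicit groups can be sketched as follows. The Courant-number criterion and all names here are illustrative assumptions, not the paper's exact grouping rule.

```python
import numpy as np

def partition_elements(dx, u, dt, cfl_limit=1.0):
    """Sketch of AIE-style dynamic grouping: elements whose local Courant
    number exceeds the explicit stability limit are treated implicitly,
    the rest explicitly. The Courant-number criterion and names are
    illustrative assumptions, not the paper's exact grouping rule."""
    courant = u * dt / dx                        # local Courant number per element
    explicit = np.where(courant <= cfl_limit)[0]
    implicit = np.where(courant > cfl_limit)[0]
    return explicit, implicit

# a locally refined element (small dx) forces implicit treatment there only
expl, impl = partition_elements(np.array([1.0, 0.1, 1.0]),
                                np.array([1.0, 1.0, 1.0]), dt=0.5)
```

Re-evaluating the partition every time step is what makes the arrangement dynamic: only the stiff elements pay the cost of an implicit solve.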
The exaptive excellence of spandrels as a term and prototype
Gould, Stephen Jay
1997-01-01
In 1979, Lewontin and I borrowed the architectural term “spandrel” (using the pendentives of San Marco in Venice as an example) to designate the class of forms and spaces that arise as necessary byproducts of another decision in design, and not as adaptations for direct utility in themselves. This proposal has generated a large literature featuring two critiques: (i) the terminological claim that the spandrels of San Marco are not true spandrels at all and (ii) the conceptual claim that they are adaptations and not byproducts. The features of the San Marco pendentives that we explicitly defined as spandrel-properties—their necessary number (four) and shape (roughly triangular)—are inevitable architectural byproducts, whatever the structural attributes of the pendentives themselves. The term spandrel may be extended from its particular architectural use for two-dimensional byproducts to the generality of “spaces left over,” a definition that properly includes the San Marco pendentives. Evolutionary biology needs such an explicit term for features arising as byproducts, rather than adaptations, whatever their subsequent exaptive utility. The concept of biological spandrels—including the examples here given of masculinized genitalia in female hyenas, exaptive use of an umbilicus as a brooding chamber by snails, the shoulder hump of the giant Irish deer, and several key features of human mentality—anchors the critique of overreliance upon adaptive scenarios in evolutionary explanation. Causes of historical origin must always be separated from current utilities; their conflation has seriously hampered the evolutionary analysis of form in the history of life. PMID:11038582
Slower speed and stronger coupling: adaptive mechanisms of chaos synchronization.
Wang, Xiao Fan
2002-06-01
We show that two initially weakly coupled chaotic systems can achieve synchronization by adaptively reducing their speed and/or enhancing the coupling strength. Explicit adaptive algorithms for speed reduction and coupling enhancement are provided. We apply these algorithms to the synchronization of two coupled Lorenz systems. It is found that after a long-time adaptive process, the two coupled chaotic systems can achieve synchronization with almost the minimum required coupling-speed ratio.
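The adaptive coupling-enhancement mechanism can be sketched for two coupled Lorenz systems. The error-driven gain update below is a common adaptive rule of this type; the paper's exact update law and parameters may differ, and forward Euler integration is used only for brevity.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def adaptive_sync(T=40.0, dt=0.001, gamma=0.1):
    """Two mutually coupled Lorenz systems with an adaptively growing
    coupling strength k: k increases at a rate proportional to the squared
    synchronization error, a common adaptive-gain rule (the paper's exact
    update law may differ). Forward Euler is used for brevity."""
    s1 = np.array([1.0, 1.0, 1.0])
    s2 = np.array([-5.0, 2.0, 10.0])
    k = 0.0
    for _ in range(int(T / dt)):
        e = s2 - s1
        s1 = s1 + dt * (lorenz(s1) + k * e)      # diffusive coupling pulls states together
        s2 = s2 + dt * (lorenz(s2) - k * e)
        k += dt * gamma * (e @ e)                # coupling enhancement driven by error
    return np.linalg.norm(s2 - s1), k

err, k_final = adaptive_sync()
```

The gain grows only while the systems disagree, so it settles near the minimum coupling needed for synchronization once the error collapses.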
Implicit LES using adaptive filtering
NASA Astrophysics Data System (ADS)
Sun, Guangrui; Domaradzki, Julian A.
2018-04-01
In implicit large eddy simulations (ILES) numerical dissipation prevents buildup of small scale energy in a manner similar to the explicit subgrid scale (SGS) models. If spectral methods are used the numerical dissipation is negligible but it can be introduced by applying a low-pass filter in the physical space, resulting in an effective ILES. In the present work we provide a comprehensive analysis of the numerical dissipation produced by different filtering operations in a turbulent channel flow simulated using a non-dissipative, pseudo-spectral Navier-Stokes solver. The amount of numerical dissipation imparted by filtering can be easily adjusted by changing how often a filter is applied. We show that when the additional numerical dissipation is close to the subgrid-scale (SGS) dissipation of an explicit LES the overall accuracy of ILES is also comparable, indicating that periodic filtering can replace explicit SGS models. A new method is proposed, which does not require any prior knowledge of a flow, to determine the filtering period adaptively. Once an optimal filtering period is found, the accuracy of ILES is significantly improved at low implementation complexity and computational cost. The method is general, performing well for different Reynolds numbers, grid resolutions, and filter shapes.
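The filtering operation itself is simple to illustrate. Below is a minimal sketch assuming a smooth exponential filter shape and an arbitrary cutoff; the paper's specific filter shapes and its adaptive choice of the filtering period are not reproduced.

```python
import numpy as np

def spectral_filter(u_hat, k, k_cut, order=2):
    """One filtering step of the kind used as numerical dissipation in ILES:
    a smooth low-pass filter applied in spectral space damps only the
    smallest resolved scales. The exponential shape and cutoff are
    illustrative; how often the filter is applied (the filtering period)
    sets the effective dissipation, and the paper chooses it adaptively."""
    G = np.exp(-(np.abs(k) / k_cut) ** (2 * order))  # smooth low-pass transfer function
    return u_hat * G

N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer wavenumbers
u = np.sin(x) + 0.5 * np.sin(20.0 * x)           # large-scale + small-scale content
u_hat = spectral_filter(np.fft.fft(u), k, k_cut=10.0)
u_filtered = np.real(np.fft.ifft(u_hat))         # small-scale component removed
```

Applying such a step every N time steps instead of every step dials the imparted dissipation down by roughly a factor of N, which is the knob the adaptive method tunes.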
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa; Cannizzaro, Frank; Melson, N. D.
1991-01-01
A general multiblock method for the solution of the three-dimensional, unsteady, compressible, thin-layer Navier-Stokes equations has been developed. The convective and pressure terms are spatially discretized using Roe's flux differencing technique while the viscous terms are centrally differenced. An explicit Runge-Kutta method is used to advance the solution in time. Local time stepping, adaptive implicit residual smoothing, and the Full Approximation Storage (FAS) multigrid scheme are added to the explicit time stepping scheme to accelerate convergence to steady state. Results for three-dimensional test cases are presented and discussed.
Taitano, William; Chacon, Luis; Simakov, Andrei Nikolaevich
2016-04-25
In this paper, we propose an adaptive velocity-space discretization scheme for the multi-species, multidimensional Rosenbluth-Fokker-Planck (RFP) equation, which is exactly mass-, momentum-, and energy-conserving. Unlike most earlier studies, our approach normalizes the velocity-space coordinate to the temporally varying local thermal velocity of each plasma species, v_th(t), and explicitly considers the resulting inertial terms in the Fokker-Planck equation. Our conservation strategy employs nonlinear constraints to enforce discretely the conservation properties of these inertial terms and the Fokker-Planck collision operator. To deal with situations of extreme thermal velocity disparities among different species, we employ an asymptotic v_th-ratio-based expansion of the Rosenbluth potentials that only requires the computation of several velocity-space integrals. Numerical examples demonstrate the favorable efficiency and accuracy properties of the scheme. Specifically, we show that the combined use of velocity-grid adaptivity and asymptotic expansions delivers many orders of magnitude in savings in mesh resolution requirements compared to a single, static uniform mesh.
Coelho, Daniel Boari; Teixeira, Luis Augusto
2017-08-01
Processing of predictive contextual cues of an impending perturbation is thought to induce adaptive postural responses. Cueing in previous research has been provided through repeated perturbations with a constant foreperiod. This experimental strategy confounds explicit predictive cueing with adaptation and with non-specific properties of temporal cueing. Two experiments were performed to assess those factors separately. To perturb upright balance, the base of support was suddenly displaced backwards by one of three amplitudes: 5, 10 and 15 cm. In Experiment 1, we tested the effect of cueing the amplitude of the impending postural perturbation by means of visual signals, and the effect of adaptation to repeated exposures, by comparing block versus random sequences of perturbations. In Experiment 2, we evaluated separately the effects of cueing the characteristics of an impending balance perturbation and cueing the timing of perturbation onset. Results from Experiment 1 showed that the block sequence of perturbations led to increased stability of automatic postural responses and to modulation of the magnitude and onset latency of muscular responses. Results from Experiment 2 showed that only the condition cueing the timing of platform translation onset led to increased balance stability and modulation of the onset latency of muscular responses. Conversely, cueing platform displacement amplitude failed to induce any effects on automatic postural responses in either experiment. Our findings support the interpretation of improved postural responses via optimized sensorimotor processes, while casting doubt on the notion that cognitive processing of explicit contextual cues announcing the magnitude of an impending perturbation can preset adaptive postural responses.
Adaptive methods for nonlinear structural dynamics and crashworthiness analysis
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1993-01-01
The objective is to describe three research thrusts in crashworthiness analysis: adaptivity; mixed time integration, or subcycling, in which different timesteps are used for different parts of the mesh in explicit methods; and methods for contact-impact which are highly vectorizable. The techniques are being developed to improve the accuracy of calculations, ease-of-use of crashworthiness programs, and the speed of calculations. The latter is still of importance because crashworthiness calculations are often made with models of 20,000 to 50,000 elements using explicit time integration and require on the order of 20 to 100 hours on current supercomputers. The methodologies are briefly reviewed and then some example calculations employing these methods are described. The methods are also of value to other nonlinear transient computations.
Experimental study of adaptive pointing and tracking for large flexible space structures
NASA Technical Reports Server (NTRS)
Boussalis, D.; Bayard, D. S.; Ih, C.; Wang, S. J.; Ahmed, A.
1991-01-01
This paper describes an experimental study of adaptive pointing and tracking control for flexible spacecraft conducted on a complex ground experiment facility. The algorithm used in this study is based on a multivariable direct model reference adaptive control law. Several experimental validation studies were performed earlier using this algorithm for vibration damping and robust regulation, with excellent results. The current work extends previous studies by addressing the pointing and tracking problem. As is consistent with an adaptive control framework, the plant is assumed to be poorly known to the extent that only system level knowledge of its dynamics is available. Explicit bounds on the steady-state pointing error are derived as functions of the adaptive controller design parameters. It is shown that good tracking performance can be achieved in an experimental setting by adjusting adaptive controller design weightings according to the guidelines indicated by the analytical expressions for the error.
NASA Astrophysics Data System (ADS)
Gotovac, Hrvoje; Srzic, Veljko
2014-05-01
Contaminant transport in natural aquifers is a complex, multiscale process that is frequently studied using different Eulerian, Lagrangian and hybrid numerical methods. Conservative solute transport is typically modeled using the advection-dispersion equation (ADE). Despite the large number of available numerical methods that have been developed to solve it, the accurate numerical solution of the ADE still presents formidable challenges. In particular, current numerical solutions of multidimensional advection-dominated transport in non-uniform velocity fields are affected by one or all of the following problems: numerical dispersion that introduces artificial mixing and dilution, grid orientation effects, unresolved spatial and temporal scales, and unphysical numerical oscillations (e.g., Herrera et al., 2009; Bosso et al., 2012). In this work we present the Eulerian-Lagrangian Adaptive Fup Collocation Method (ELAFCM), based on Fup basis functions and a collocation approach for spatial approximation, together with explicit stabilized Runge-Kutta-Chebyshev temporal integration (the public-domain routine SERK2), which is especially well suited for stiff parabolic problems. The spatial adaptive strategy is based on Fup basis functions, which are closely related to wavelets and splines: they are compactly supported, exactly reproduce algebraic polynomials, and enable multiresolution adaptive analysis (MRA). MRA is performed here via the Fup Collocation Transform (FCT), so that at each time step the concentration solution is decomposed using only a few significant Fup basis functions on an adaptive collocation grid with appropriate scales (frequencies) and locations, a desired level of accuracy, and a near-minimum computational cost. FCT adds more collocation points and higher resolution levels only in sensitive zones with sharp concentration gradients, fronts and/or narrow transition zones.
According to our recent results, there is no need to solve a large linear system on the adaptive grid, because each Fup coefficient is obtained from predefined formulas equating the Fup expansion around the corresponding collocation point with a particular collocation operator based on a few surrounding solution values. Furthermore, each Fup coefficient can be obtained independently, which is perfectly suited for parallel processing. The adaptive grid at each time step is obtained from the solution at the previous time step (or the initial conditions) and an advective Lagrangian step in the current time step, according to the velocity field and continuous streamlines. On the other hand, we use the explicit stabilized routine SERK2 for the dispersive Eulerian part of the solution in the current time step on the obtained spatial adaptive grid. The overall adaptive concept does not require solving large linear systems for the spatial and temporal approximation of conservative transport. This new Eulerian-Lagrangian collocation scheme also resolves all the aforementioned numerical problems, thanks to its adaptive nature and its ability to control numerical errors in space and time. The proposed method treats advection in a Lagrangian way, eliminating the problems of Eulerian methods, while the optimal collocation grid efficiently describes the solution and boundary conditions, eliminating the need for large numbers of particles and other problems of Lagrangian methods. Finally, numerical tests show that this approach yields not only an accurate velocity field but also conservative transport, even in highly heterogeneous porous media, resolving all spatial and temporal scales of the concentration field.
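The idea of placing collocation points only where the solution is sharp can be sketched with a simple refinement loop. This is a stand-in using a hat-function interpolation-error test, not the actual Fup basis or the FCT: the splitting criterion, tolerances, and names are all illustrative.

```python
import numpy as np

def adaptive_grid(f, a, b, tol=0.05, max_depth=12):
    """Gradient-driven adaptive refinement in the spirit of the Fup
    collocation transform: a cell is split whenever linear interpolation
    of its endpoint values predicts the interior poorly (a stand-in for
    'significant basis coefficients'). The real FCT uses Fup basis
    functions, not the hat-function test used here."""
    pts = [a, b]

    def refine(lo, hi, depth):
        if depth >= max_depth:
            return
        mid = 0.5 * (lo + hi)
        probes = (0.5 * (lo + mid), mid, 0.5 * (mid + hi))
        interp = lambda p: f(lo) + (f(hi) - f(lo)) * (p - lo) / (hi - lo)
        if max(abs(f(p) - interp(p)) for p in probes) > tol:
            pts.append(mid)                      # add a collocation point here
            refine(lo, mid, depth + 1)
            refine(mid, hi, depth + 1)

    refine(a, b, 0)
    return np.sort(np.array(pts))

# a sharp front at x = 0.5 attracts most of the collocation points
grid = adaptive_grid(lambda x: np.tanh(50.0 * (x - 0.5)), 0.0, 1.0)
```

Smooth regions terminate after one or two levels, so the point count grows with the complexity of the front, not with the domain size.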
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted
1990-01-01
Techniques are discussed for the implementation and improvement of vectorization and concurrency in nonlinear explicit structural finite element codes. In explicit integration methods, the computation of the element internal force vector consumes the bulk of the computer time. The program can be efficiently vectorized by subdividing the elements into blocks and executing all computations in vector mode. The structuring of elements into blocks also provides a convenient way to implement concurrency by creating tasks which can be assigned to available processors for evaluation. The techniques were implemented in a 3-D nonlinear program with one-point quadrature shell elements. Concurrency and vectorization were first implemented in a single time step version of the program. Techniques were developed to minimize processor idle time and to select the optimal vector length. A comparison of run times between the program executed in scalar, serial mode and the fully vectorized code executed concurrently using eight processors shows speed-ups of over 25. Conjugate gradient methods for solving nonlinear algebraic equations are also readily adapted to a parallel environment. A new technique for improving convergence properties of conjugate gradients in nonlinear problems is developed in conjunction with other techniques such as diagonal scaling. A significant reduction in the number of iterations required for convergence is shown for a statically loaded rigid bar suspended by three equally spaced springs.
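The block-wise organization of the internal-force loop can be sketched in a few lines. The per-element physics here is a trivial placeholder, and the function name and block size are invented; only the blocking pattern itself reflects the technique described above.

```python
import numpy as np

def internal_force_blocked(strains, stiffness, block_size=128):
    """Sketch of block-wise vectorized internal-force evaluation: elements
    are grouped into blocks and each block is processed with one vectorized
    operation (here a trivial linear-elastic force f = k * strain per
    element; real shell elements involve far more work per element).
    Blocks could equally be dispatched to different processors as tasks."""
    n = len(strains)
    force = np.empty(n)
    for start in range(0, n, block_size):        # one vectorized block at a time
        end = min(start + block_size, n)
        force[start:end] = stiffness[start:end] * strains[start:end]
    return force
```

Choosing the block size to match the machine's vector length, and handing different blocks to different processors, gives exactly the combined vectorization-plus-concurrency structure the text describes.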
Overcoming Challenges in Kinetic Modeling of Magnetized Plasmas and Vacuum Electronic Devices
NASA Astrophysics Data System (ADS)
Omelchenko, Yuri; Na, Dong-Yeop; Teixeira, Fernando
2017-10-01
We transform the state of the art of plasma modeling by taking advantage of novel computational techniques for fast and robust integration of multiscale hybrid (full particle ions, fluid electrons, no displacement current) and full-PIC models. These models are implemented in the 3D HYPERS and axisymmetric full-PIC CONPIC codes. HYPERS is a massively parallel, asynchronous code. The HYPERS solver does not step fields and particles synchronously in time but instead executes local variable updates (events) at their self-adaptive rates while preserving fundamental conservation laws. The charge-conserving CONPIC code has a matrix-free explicit finite-element (FE) solver based on a sparse-approximate-inverse (SPAI) algorithm. This explicit solver approximates the inverse FE system matrix ("mass" matrix) using successive sparsity pattern orders of the original matrix. It does not reduce the set of Maxwell's equations to a vector-wave (curl-curl) equation of second order but instead utilizes the standard coupled first-order Maxwell's system. We discuss the ability of our codes to accurately and efficiently account for multiscale physical phenomena in 3D magnetized space and laboratory plasmas and axisymmetric vacuum electronic devices.
Kitchen, James L.; Allaby, Robin G.
2013-01-01
Selection and adaptation of individuals to their underlying environments are highly dynamical processes, encompassing interactions between the individual and its seasonally changing environment, synergistic or antagonistic interactions between individuals and interactions amongst the regulatory genes within the individual. Plants are useful organisms to study within systems modeling because their sedentary nature simplifies interactions between individuals and the environment, and many important plant processes such as germination or flowering are dependent on annual cycles which can be disrupted by climate behavior. Sedentism makes plants relevant candidates for spatially explicit modeling that is tied in with dynamical environments. We propose that in order to fully understand the complexities behind plant adaptation, a system that couples aspects from systems biology with population and landscape genetics is required. A suitable system could be represented by spatially explicit individual-based models where the virtual individuals are located within time-variable heterogeneous environments and contain mutable regulatory gene networks. These networks could directly interact with the environment, and should provide a useful approach to studying plant adaptation. PMID:27137364
Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan
2014-03-01
An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, physical activity, and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that explicitly quantify the time-varying, patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the trivariate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate the time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions have a higher fit and significantly better temporal gain and J index, and thus are superior in the early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs, since the Kalman filter estimates the state variable of the glucose dynamics from the measured glucose time series. In addition, since the model parameters are estimated in real time, this model is also suitable for adaptive control.
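The core idea of tracking time-varying coefficients with a Kalman-type filter can be shown with a minimal sketch. The observation model, noise levels, and function name below are illustrative assumptions; with this linear observation the EKF reduces to a standard Kalman filter, whereas the paper's full state-space model is richer.

```python
import numpy as np

def track_coefficients(dg, insulin, meal, q=1e-4, r=0.1):
    """Minimal sketch of the article's idea: treat time-varying insulin and
    meal coefficients as random-walk states and estimate them from observed
    glucose changes dg[t] ~ a[t]*insulin[t] + b[t]*meal[t] + noise. With
    this linear observation the EKF reduces to a standard Kalman filter;
    the paper's full state-space model is richer. The model form and the
    noise levels q, r are illustrative assumptions."""
    theta = np.zeros(2)                  # [a, b] coefficient estimates
    P = np.eye(2)                        # estimate covariance
    Q = q * np.eye(2)                    # random-walk (process) noise
    history = []
    for t in range(len(dg)):
        P = P + Q                        # predict: coefficients drift slowly
        H = np.array([insulin[t], meal[t]])
        S = H @ P @ H + r                # innovation variance
        K = P @ H / S                    # Kalman gain
        theta = theta + K * (dg[t] - H @ theta)
        P = P - np.outer(K, H @ P)
        history.append(theta.copy())
    return np.array(history)
```

Because the filter runs one cheap update per sample, the coefficient estimates are available in real time, which is what makes such a model usable inside an adaptive controller.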
1994-02-01
An explicit numerical procedure based on Runge-Kutta time stepping for cell-centered, hexahedral finite volumes is outlined for the approximate numerical treatment. Topics covered include cell-centered finite-volume discretization in space, artificial dissipation, time integration, and convergence.
High-Order Space-Time Methods for Conservation Laws
NASA Technical Reports Server (NTRS)
Huynh, H. T.
2013-01-01
Current high-order methods such as discontinuous Galerkin and/or flux reconstruction can provide effective discretization for the spatial derivatives. Together with a time discretization, such methods result in either too small a time step size in the case of an explicit scheme or a very large system in the case of an implicit one. To tackle these problems, two new high-order space-time schemes for conservation laws are introduced: the first is explicit and the second, implicit. The explicit method here, also called the moment scheme, achieves a Courant-Friedrichs-Lewy (CFL) condition of 1 for the case of one spatial dimension regardless of the degree of the polynomial approximation. (For standard explicit methods, if the spatial approximation is of degree p, then the time step sizes are typically proportional to 1/p^2.) Fourier analyses for the one- and two-dimensional cases are carried out. The property of super accuracy (or super convergence) is discussed. The implicit method is a simplified but optimal version of the discontinuous Galerkin scheme applied to time. It reduces to a collocation implicit Runge-Kutta (RK) method for ordinary differential equations (ODE) called Radau IIA. The explicit and implicit schemes are closely related since they employ the same intermediate time levels, and the former can serve as a key building block in an iterative procedure for the latter. A limiting technique for the piecewise linear scheme is also discussed. The technique can suppress oscillations near a discontinuity while preserving accuracy near extrema. Preliminary numerical results are shown.
On dynamical systems approaches and methods in f(R) cosmology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se
We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis, which motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR^2, α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow-ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.
NASA Technical Reports Server (NTRS)
Liou, J.; Tezduyar, T. E.
1990-01-01
Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residual (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on incompressible flows, the approach presented here is applicable to a larger class of problems in computational mechanics.
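The element-by-element idea rests on never assembling the global matrix: iterative solvers such as GMRES only need matrix-vector products, which can be accumulated element by element. A minimal sketch of such a matrix-free product (illustrative, not the authors' implementation):

```python
def ebe_matvec(elements, x, n):
    """Element-by-element matrix-vector product: the global n x n matrix is
    never assembled; each element contributes its small dense block ke,
    indexed by its global degrees of freedom."""
    y = [0.0] * n
    for dofs, ke in elements:
        for a, i in enumerate(dofs):
            y[i] += sum(ke[a][b] * x[j] for b, j in enumerate(dofs))
    return y

# Two 1D linear "stiffness" elements on nodes (0,1) and (1,2); the implied
# global matrix is [[1,-1,0],[-1,2,-1],[0,-1,1]].
elements = [((0, 1), [[1.0, -1.0], [-1.0, 1.0]]),
            ((1, 2), [[1.0, -1.0], [-1.0, 1.0]])]
y = ebe_matvec(elements, [1.0, 2.0, 3.0], 3)
```

A Krylov solver fed this product in place of an assembled matrix reproduces the memory savings the abstract describes.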
Bosonization of fermions coupled to topologically massive gravity
NASA Astrophysics Data System (ADS)
Fradkin, Eduardo; Moreno, Enrique F.; Schaposnik, Fidel A.
2014-03-01
We establish a duality between massive fermions coupled to topologically massive gravity (TMG) in d=3 space-time dimensions and a purely gravity theory which also will turn out to be a TMG theory but with different parameters: the original graviton mass in the TMG theory coupled to fermions picks up a contribution from fermion bosonization. We obtain explicit bosonization rules for the fermionic currents and for the energy-momentum tensor showing that the identifications do not depend explicitly on the parameters of the theory. These results are the gravitational analog of the results for 2+1 Abelian and non-Abelian bosonization in flat space-time.
NASA Astrophysics Data System (ADS)
Wilde, M. V.; Sergeeva, N. V.
2018-05-01
An explicit asymptotic model extracting the contribution of a surface wave to the dynamic response of a viscoelastic half-space is derived. Rabotnov's fractional exponential integral operators are used to describe the material properties. The model is derived by extracting the principal part of the poles corresponding to the surface waves after applying the Laplace and Fourier transforms. The simplified equations for the originals are written using power series expansions. A Padé approximation is constructed to unite the short-time and long-time models. The form of this approximation allows us to formulate the explicit model using a fractional exponential Rabotnov integral operator with parameters depending on the properties of the surface wave. The applicability of the derived models is studied by comparison with the exact solutions of a model problem. It is revealed that the model based on the Padé approximation is highly effective for all the possible time domains.
Challenges of Representing Sub-Grid Physics in an Adaptive Mesh Refinement Atmospheric Model
NASA Astrophysics Data System (ADS)
O'Brien, T. A.; Johansen, H.; Johnson, J. N.; Rosa, D.; Benedict, J. J.; Keen, N. D.; Collins, W.; Goodfriend, E.
2015-12-01
Some of the greatest potential impacts from future climate change are tied to extreme atmospheric phenomena that are inherently multiscale, including tropical cyclones and atmospheric rivers. Extremes are challenging to simulate in conventional climate models due to existing models' coarse resolutions relative to the native length-scales of these phenomena. Studying the weather systems of interest requires an atmospheric model with sufficient local resolution, and sufficient performance for long-duration climate-change simulations. To this end, we have developed a new global climate code with adaptive spatial and temporal resolution. The dynamics are formulated using a block-structured conservative finite volume approach suitable for moist non-hydrostatic atmospheric dynamics. By using both space- and time-adaptive mesh refinement, the solver focuses computational resources only where greater accuracy is needed to resolve critical phenomena. We explore different methods for parameterizing sub-grid physics, such as microphysics, macrophysics, turbulence, and radiative transfer. In particular, we contrast the simplified physics representation of Reed and Jablonowski (2012) with the more complex physics representation used in the System for Atmospheric Modeling of Khairoutdinov and Randall (2003). We also explore the use of a novel macrophysics parameterization that is designed to be explicitly scale-aware.
CASTRO: A NEW COMPRESSIBLE ASTROPHYSICAL SOLVER. II. GRAY RADIATION HYDRODYNAMICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, W.; Almgren, A.; Bell, J.
We describe the development of a flux-limited gray radiation solver for the compressible astrophysics code CASTRO. CASTRO uses an Eulerian grid with block-structured adaptive mesh refinement based on a nested hierarchy of logically rectangular variable-sized grids with simultaneous refinement in both space and time. The gray radiation solver is based on a mixed-frame formulation of radiation hydrodynamics. In our approach, the system is split into two parts: one part that couples the radiation and fluid in a hyperbolic subsystem, and another, parabolic, part that evolves radiation diffusion and source-sink terms. The hyperbolic subsystem is solved explicitly with a high-order Godunov scheme, whereas the parabolic part is solved implicitly with a first-order backward Euler method.
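The explicit/implicit split described above can be illustrated on a 1D model problem: an explicit first-order upwind update for the hyperbolic (advective) part followed by a backward Euler solve for the parabolic (diffusive) part via the Thomas algorithm. This toy sketch assumes simple boundary treatments and is not CASTRO's actual scheme:

```python
def thomas(a, b, c, d):
    """Tridiagonal solve (Thomas algorithm); a, b, c are the sub-, main-,
    and super-diagonals, d the right-hand side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def split_step(u, v, D, dx, dt):
    """One IMEX step for u_t + v*u_x = D*u_xx: explicit upwind advection
    (v > 0, inflow value held at the left end), then implicit backward
    Euler diffusion with zero-gradient ends."""
    n = len(u)
    cfl = v * dt / dx
    adv = [u[0]] + [u[i] - cfl * (u[i] - u[i - 1]) for i in range(1, n)]
    r = D * dt / dx ** 2
    a = [0.0] + [-r] * (n - 1)
    b = [1 + r] + [1 + 2 * r] * (n - 2) + [1 + r]
    c = [-r] * (n - 1) + [0.0]
    return thomas(a, b, c, adv)
```

Because only the stiff diffusive part is implicit, the step remains stable for diffusion numbers far beyond the explicit limit while the advection keeps its simple explicit CFL restriction.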
ERIC Educational Resources Information Center
Blasco, Maribel
2015-01-01
The article proposes an approach, broadly inspired by culturally inclusive pedagogy, to facilitate international student academic adaptation based on rendering tacit aspects of local learning cultures explicit to international full degree students, rather than adapting them. Preliminary findings are presented from a focus group-based exploratory…
Strength of the singularities, equation of state and asymptotic expansion in Kaluza-Klein space time
NASA Astrophysics Data System (ADS)
Samanta, G. C.; Goel, Mayank; Myrzakulov, R.
2018-04-01
In this paper an explicit cosmological model which allows cosmological singularities is discussed in Kaluza-Klein space-time. The generalized power-law and asymptotic expansions of the barotropic fluid index ω and, equivalently, the deceleration parameter q, in terms of cosmic time t, are considered. Finally, the strength of the singularities found is discussed.
Adaptive management: Chapter 1
Allen, Craig R.; Garmestani, Ahjond S.
2015-01-01
Adaptive management is an approach to natural resource management that emphasizes learning through management where knowledge is incomplete, and when, despite inherent uncertainty, managers and policymakers must act. Unlike a traditional trial and error approach, adaptive management has explicit structure, including a careful elucidation of goals, identification of alternative management objectives and hypotheses of causation, and procedures for the collection of data followed by evaluation and reiteration. The process is iterative, and serves to reduce uncertainty, build knowledge and improve management over time in a goal-oriented and structured process.
Adaptive control in the presence of unmodeled dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rohrs, C. E.
1982-01-01
Stability and robustness properties of a wide class of adaptive control algorithms in the presence of unmodeled dynamics and output disturbances were investigated. The class of adaptive algorithms considered are those commonly referred to as model reference adaptive control algorithms, self-tuning controllers, and dead beat adaptive controllers, developed for both continuous-time systems and discrete-time systems. A unified analytical approach was developed to examine the class of existing adaptive algorithms. It was discovered that all existing algorithms contain an infinite gain operator in the dynamic system that defines command reference errors and parameter errors; it is argued that such an infinite gain operator appears to be generic to all adaptive algorithms, whether they exhibit explicit or implicit parameter identification. It is concluded that none of the adaptive algorithms considered can be used with confidence in a practical control system design, because instability will set in with a high probability.
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1984-01-01
A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state space realizations from the inexact, multivariate transfer functions which result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that when speed of adaptation and plant stability are not critical, the proposed schemes converge and enhance system performance.
A Declarative Design Approach to Modeling Traditional and Non-Traditional Space Systems
NASA Astrophysics Data System (ADS)
Hoag, Lucy M.
The space system design process is known to be laborious, complex, and computationally demanding. It is highly multi-disciplinary, involving several interdependent subsystems that must be both highly optimized and reliable due to the high cost of launch. Satellites must also be capable of operating in harsh and unpredictable environments, so integrating high-fidelity analysis is important. To address each of these concerns, a holistic design approach is necessary. However, while the sophistication of space systems has evolved significantly in the last 60 years, improvements in the design process have been comparatively stagnant. Space systems continue to be designed using a procedural, subsystem-by-subsystem approach. This method is inadequate since it generally requires extensive iteration and limited or heuristic-based search, which can be slow, labor-intensive, and inaccurate. The use of a declarative design approach can potentially address these inadequacies. In the declarative programming style, the focus of a problem is placed on what the objective is, and not necessarily how it should be achieved. In the context of design, this entails knowledge expressed as a declaration of statements that are true about the desired artifact instead of explicit instructions on how to implement it. A well-known technique is through constraint-based reasoning, where a design problem is represented as a network of rules and constraints that are reasoned across by a solver to dynamically discover the optimal candidate(s). This enables implicit instantiation of the tradespace and allows for automatic generation of all feasible design candidates. As such, this approach also appears to be well-suited to modeling adaptable space systems, which generally have large tradespaces and possess configurations that are not well-known a priori. This research applied a declarative design approach to holistic satellite design and to tradespace exploration for adaptable space systems. 
The approach was tested during the design of USC's Aeneas nanosatellite project, and a case study was performed to assess the advantages of the new approach over past procedural approaches. It was found that use of the declarative approach improved design accuracy through exhaustive tradespace search and provable optimality; decreased design time through improved model generation, faster run time, and reduction in time and number of iteration cycles; and enabled modular and extensible code. Observed weaknesses included non-intuitive model abstraction; increased debugging time; and difficulty of data extrapolation and analysis.
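The declarative style described in the thesis can be miniaturized: state what must be true of a design, then let a solver enumerate the feasible tradespace. The brute-force sketch below stands in for a real constraint solver, and the nanosatellite sizing rules and numbers are purely illustrative assumptions:

```python
from itertools import product

def solve_design(domains, constraints):
    """Declarative tradespace search: every candidate implied by the variable
    domains is generated implicitly, and only those satisfying all declared
    constraints survive (a brute-force stand-in for a constraint solver)."""
    names = list(domains)
    feasible = []
    for values in product(*(domains[n] for n in names)):
        candidate = dict(zip(names, values))
        if all(rule(candidate) for rule in constraints):
            feasible.append(candidate)
    return feasible

# Hypothetical nanosatellite sizing rules (illustrative names and numbers):
domains = {"panel_area": [1, 2, 3], "batteries": [1, 2]}
constraints = [
    lambda d: 10 * d["panel_area"] >= 15,                      # power meets demand
    lambda d: 2 * d["panel_area"] + 3 * d["batteries"] <= 9,   # mass budget
]
feasible = solve_design(domains, constraints)
```

The design knowledge lives entirely in the domain and constraint declarations; exhaustive search over them is what gives the provable-optimality property the case study reports.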
Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine
2018-01-01
Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting in faster reaction times to small/large numbers with the left/right hand respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals’ ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control respectively indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant and potentially interferes with parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations also in young adults. 
It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra-individual differences in number-space associations. PMID:29881363
A bulk viscosity approach for shock capturing on unstructured grids
NASA Astrophysics Data System (ADS)
Shoeybi, Mohammad; Larsson, Nils Johan; Ham, Frank; Moin, Parviz
2008-11-01
The bulk viscosity approach for shock capturing (Cook and Cabot, JCP, 2005) augments the bulk part of the viscous stress tensor. The intention is to capture shock waves without dissipating turbulent structures. The present work extends and modifies this method for unstructured grids. We propose a method that properly scales the bulk viscosity with the grid spacing normal to the shock for unstructured grids, for which the shock is not necessarily aligned with the grid. The magnitude of the strain rate tensor used in the original formulation is replaced with the dilatation, which appears to be more appropriate in the vortical turbulent flow regions (Mani et al., 2008). The original form of the model is found to have an impact on dilatational motions away from the shock wave, which is eliminated by a proposed localization of the bulk viscosity. Finally, to allow for grid adaptation around shock waves, an explicit/implicit time advancement scheme has been developed that adaptively identifies the stiff regions. The full method has been verified with several test cases, including 2D shock-vorticity entropy interaction, homogeneous isotropic turbulence, and turbulent flow over a cylinder.
Fast but fleeting: adaptive motor learning processes associated with aging and cognitive decline.
Trewartha, Kevin M; Garcia, Angeles; Wolpert, Daniel M; Flanagan, J Randall
2014-10-01
Motor learning has been shown to depend on multiple interacting learning processes. For example, learning to adapt when moving grasped objects with novel dynamics involves a fast process that adapts and decays quickly (and that has been linked to explicit memory) and a slower process that adapts and decays more gradually. Each process is characterized by a learning rate that controls how strongly motor memory is updated based on experienced errors, and a retention factor determining the movement-to-movement decay in motor memory. Here we examined whether the fast and slow motor learning processes involved in learning novel dynamics differ between younger and older adults. In addition, we investigated how age-related decline in explicit memory performance influences learning and retention parameters. Although the groups adapted equally well, they did so with markedly different underlying processes. Whereas the groups had similar fast processes, they had different slow processes. Specifically, the older adults exhibited decreased retention in their slow process compared with younger adults. Within the older group, who exhibited considerable variation in explicit memory performance, we found that poor explicit memory was associated with reduced retention in the fast process, as well as the slow process. These findings suggest that explicit memory resources are a determining factor in impairments in both the fast and slow processes for motor learning, but that aging effects on the slow process are independent of explicit memory declines. Copyright © 2014 the authors 0270-6474/14/3413411-11$15.00/0.
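The fast/slow description above follows the standard two-state model of motor adaptation; a minimal trial-by-trial simulation is sketched below. The retention factors (A) and learning rates (B) are illustrative values, not the ones fitted in this study:

```python
def simulate_two_state(n_trials, perturbation,
                       A_f=0.92, B_f=0.20,    # fast: low retention, high learning rate
                       A_s=0.996, B_s=0.02):  # slow: high retention, low learning rate
    """Two-state adaptation model: each state is scaled by a retention
    factor (A) and updated by an error-driven learning rate (B); the motor
    output on each trial is the sum of the two states."""
    x_f = x_s = 0.0
    outputs = []
    for _ in range(n_trials):
        error = perturbation - (x_f + x_s)
        x_f = A_f * x_f + B_f * error
        x_s = A_s * x_s + B_s * error
        outputs.append(x_f + x_s)
    return outputs
```

Reducing A_s mimics the decreased slow-process retention reported for the older group: the model still adapts, but holds less of what it learned from trial to trial.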
EXPONENTIAL TIME DIFFERENCING FOR HODGKIN–HUXLEY-LIKE ODES
Börgers, Christoph; Nectow, Alexander R.
2013-01-01
Several authors have proposed the use of exponential time differencing (ETD) for Hodgkin–Huxley-like partial and ordinary differential equations (PDEs and ODEs). For Hodgkin–Huxley-like PDEs, ETD is attractive because it can deal effectively with the stiffness issues that diffusion gives rise to. However, large neuronal networks are often simulated assuming “space-clamped” neurons, i.e., using the Hodgkin–Huxley ODEs, in which there are no diffusion terms. Our goal is to clarify whether ETD is a good idea even in that case. We present a numerical comparison of first- and second-order ETD with standard explicit time-stepping schemes (Euler’s method, the midpoint method, and the classical fourth-order Runge–Kutta method). We find that in the standard schemes, the stable computation of the very rapid rising phase of the action potential often forces time steps of a small fraction of a millisecond. This can result in an expensive calculation yielding greater overall accuracy than needed. Although it is tempting at first to try to address this issue with adaptive or fully implicit time-stepping, we argue that neither is effective here. The main advantage of ETD for Hodgkin–Huxley-like systems of ODEs is that it allows underresolution of the rising phase of the action potential without causing instability, using time steps on the order of one millisecond. When high quantitative accuracy is not necessary and perhaps, because of modeling inaccuracies, not even useful, ETD allows much faster simulations than standard explicit time-stepping schemes. The second-order ETD scheme is found to be substantially more accurate than the first-order one even for large values of Δt. PMID:24058276
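First-order ETD (exponential Euler) is simple to state: the linear part of y' = c·y + N(y) is integrated exactly, and the nonlinear part is held frozen over the step. The sketch below uses illustrative values, not the paper's benchmarks; it applies ETD1 to a gating-style equation, where the scheme is stable (here even exact) at a step size that makes forward Euler diverge:

```python
import math

def etd1_step(y, c, N_y, dt):
    """One step of first-order exponential time differencing for
    y' = c*y + N(y): the linear part is integrated exactly, so the scheme
    stays stable far beyond the explicit Euler limit."""
    e = math.exp(c * dt)
    return e * y + (e - 1.0) / c * N_y

# A Hodgkin-Huxley-like gating equation with frozen rates (illustrative):
# x' = alpha*(1 - x) - beta*x = c*x + alpha, with c = -(alpha + beta)
alpha, beta = 5.0, 2.0
c = -(alpha + beta)
dt = 0.5                      # well beyond the forward Euler limit 2/|c|
x = 0.0
for _ in range(10):
    x = etd1_step(x, c, alpha, dt)

# Forward Euler at the same step size diverges:
xe = 0.0
for _ in range(10):
    xe = xe + dt * (alpha + c * xe)
```

For a genuinely nonlinear N(y), ETD1 is first-order accurate rather than exact, but the stability behavior shown here is the property the abstract exploits to underresolve the action-potential upstroke safely.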
Large-deviation properties of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-10-01
We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
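The occupation-time functional studied here is easy to estimate by direct simulation of the dry-friction Langevin equation dv = -μ sign(v) dt + σ dW. The Euler-Maruyama sketch below is an illustrative numerical check, not the analytical backward Fokker-Planck machinery of the paper:

```python
import random

def dry_friction_occupation(mu=1.0, sigma=1.0, dt=1e-3, n_steps=100_000, seed=1):
    """Euler-Maruyama simulation of dv = -mu*sign(v) dt + sigma dW.
    Returns the fraction of time spent in v > 0, an occupation-time
    functional of the kind analysed in the paper."""
    rng = random.Random(seed)
    v, t_pos = 0.0, 0.0
    for _ in range(n_steps):
        drift = -mu * (1 if v > 0 else -1 if v < 0 else 0)
        v += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        if v > 0:
            t_pos += dt
    return t_pos / (n_steps * dt)
```

By symmetry the long-time occupation fraction concentrates near 1/2; histogramming this quantity over many seeds would reproduce the non-Gaussian large-deviation features the authors characterize analytically.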
NASA Technical Reports Server (NTRS)
Kim, Jonnathan H.
1995-01-01
Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).
Patané, Ivan; Farnè, Alessandro; Frassinetti, Francesca
2016-01-01
A large literature has documented interactions between space and time, suggesting that the two experiential domains may share a common format in a generalized magnitude system (ATOM theory). To further explore this hypothesis, here we measured the extent to which time and space are sensitive to the same sensorimotor plasticity processes, as induced by classical prismatic adaptation procedures (PA). We also examined whether spatial-attention shifts on time and space processing, produced through PA, extend to stimuli presented beyond the immediate near space. Results indicated that PA affected both temporal and spatial representations not only in the near space (i.e., the region within which the adaptation occurred), but also in the far space. In addition, both rightward and leftward PA directions caused opposite and symmetrical modulations on time processing, whereas only leftward PA biased space processing rightward. We discuss these findings within the ATOM framework and models that account for PA effects on space and time processing. We propose that the differential and asymmetrical effects following PA may suggest that temporal and spatial representations are not perfectly aligned.
State-space self-tuner for on-line adaptive control
NASA Technical Reports Server (NTRS)
Shieh, L. S.
1994-01-01
Dynamic systems, such as flight vehicles, satellites and space stations, operating in real environments constantly face parameter and/or structural variations owing to nonlinear behavior of actuators, failure of sensors, changes in operating conditions, disturbances acting on the system, etc. In the past three decades, adaptive control has been shown to be effective in dealing with dynamic systems in the presence of parameter uncertainties, structural perturbations, random disturbances and environmental variations. Among the existing adaptive control methodologies, the state-space self-tuning control methods, initially proposed by us, are shown to be effective in designing advanced adaptive controllers for multivariable systems. In our approach, we have embedded the standard Kalman state-estimation algorithm into an online parameter estimation algorithm. Thus, advanced state-feedback controllers can be easily established for digital adaptive control of continuous-time stochastic multivariable systems. A state-space self-tuner for a general multivariable stochastic system has been developed and successfully applied to the space station for on-line adaptive control. A technique for multistage design of an optimal momentum management controller for the space station has also been developed and reported. Moreover, we have successfully developed various digital redesign techniques which can convert a continuous-time controller to an equivalent digital controller. As a result, the expensive and unreliable continuous-time controller can be implemented using low-cost, high-performance microprocessors. Recently, we have developed a new hybrid state-space self-tuner using a new dual-rate sampling scheme for on-line adaptive control of continuous-time uncertain systems.
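The online parameter-estimation core of a self-tuning controller is typically a recursive least-squares (RLS) update. The sketch below is a generic textbook RLS with illustrative data, not the authors' Kalman-embedded algorithm:

```python
def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update for y = phi^T theta + noise,
    the parameter-estimation core of a self-tuning controller
    (lam < 1 would add exponential forgetting for time-varying plants)."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(phi[i] * Pphi[i] for i in range(n))
    K = [Pphi[i] / denom for i in range(n)]                 # gain vector
    err = y - sum(phi[i] * theta[i] for i in range(n))      # prediction error
    theta = [theta[i] + K[i] * err for i in range(n)]
    phiP = [sum(phi[i] * P[i][j] for i in range(n)) for j in range(n)]
    P = [[(P[i][j] - K[i] * phiP[j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Identify y = 2*u1 - 3*u2 from noiseless data (illustrative):
theta = [0.0, 0.0]
P = [[1e6, 0.0], [0.0, 1e6]]
for phi, y in [([1.0, 0.0], 2.0), ([0.0, 1.0], -3.0),
               ([1.0, 1.0], -1.0), ([2.0, -1.0], 7.0)]:
    theta, P = rls_update(theta, P, phi, y)
```

In a self-tuner, the controller gains would be recomputed from the current `theta` at each step, closing the adaptation loop.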
Leow, Li-Ann; Gunn, Reece; Marinovic, Welber; Carroll, Timothy J
2017-08-01
When sensory feedback is perturbed, accurate movement is restored by a combination of implicit processes and deliberate reaiming to strategically compensate for errors. Here, we directly compare two methods used previously to dissociate implicit from explicit learning on a trial-by-trial basis: (1) asking participants to report the direction that they aim their movements, and contrasting this with the directions of the target and the movement that they actually produce, and (2) manipulating movement preparation time. By instructing participants to reaim without a sensory perturbation, we show that reaiming is possible even with the shortest possible preparation times, particularly when targets are narrowly distributed. Nonetheless, reaiming is effortful and comes at the cost of increased variability, so we tested whether constraining preparation time is sufficient to suppress strategic reaiming during adaptation to a visuomotor rotation with a broad target distribution. The rate and extent of error reduction under preparation time constraints were similar to estimates of implicit learning obtained from self-report without time pressure, suggesting that participants chose not to apply a reaiming strategy to correct visual errors under time pressure. Surprisingly, participants who reported aiming directions showed less implicit learning according to an alternative measure, obtained during trials performed without visual feedback. This suggests that the process of reporting can affect the extent or persistence of implicit learning. The data extend existing evidence that restricting preparation time can suppress explicit reaiming and provide an estimate of implicit visuomotor rotation learning that does not require participants to report their aiming directions.
NEW & NOTEWORTHY During sensorimotor adaptation, implicit error-driven learning can be isolated from explicit strategy-driven reaiming by subtracting self-reported aiming directions from movement directions, or by restricting movement preparation time. Here, we compared the two methods. Restricting preparation times did not eliminate reaiming but was sufficient to suppress reaiming during adaptation with widely distributed targets. The self-report method produced a discrepancy in implicit learning estimated by subtracting aiming directions and implicit learning measured in no-feedback trials. Copyright © 2017 the American Physiological Society.
An Online Learning Space Facilitating Supervision Pedagogies in Science
ERIC Educational Resources Information Center
Picard, M. Y.; Wilkinson, K.; Wirthensohn, M.
2011-01-01
Quality research supervision leading to timely completion and student satisfaction involves explicit pedagogy and effective communication. This article describes the development within an action research cycle of an online learning space designed to achieve these goals. The research "spirals" involved interventions in the form of instructive…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guangye; Chacon, Luis; Knoll, Dana Alan
2015-07-31
A multi-rate PIC formulation was developed that employs large timesteps for slow field evolution, and small (adaptive) timesteps for particle orbit integrations. Implementation is based on a JFNK solver with nonlinear elimination and moment preconditioning. The approach is free of numerical instabilities (ω_pe Δt >> 1 and Δx >> λ_D), and requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant gains (vs. conventional explicit PIC) may be possible for large scale simulations. The paper is organized as follows: Vlasov-Maxwell particle-in-cell (PIC) methods for plasmas; explicit, semi-implicit, and implicit time integrations; implicit PIC formulation (Jacobian-Free Newton-Krylov (JFNK) with nonlinear elimination allows different treatments of disparate scales, and discrete conservation properties (energy, charge, canonical momentum, etc.)); some numerical examples; and a summary.
An explicit scheme for ohmic dissipation with smoothed particle magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Tsukamoto, Yusuke; Iwasaki, Kazunari; Inutsuka, Shu-ichiro
2013-09-01
In this paper, we present an explicit scheme for Ohmic dissipation with smoothed particle magnetohydrodynamics (SPMHD). We propose an SPH discretization of Ohmic dissipation and solve the Ohmic dissipation part of the induction equation with the super-time-stepping method (STS), which allows us to take a longer time step than the Courant-Friedrichs-Lewy stability condition permits. Our scheme is second-order accurate in space and first-order accurate in time. Our numerical experiments show that the optimal choice of the STS parameters for Ohmic dissipation in SPMHD is ν_sts ~ 0.01 and N_sts ~ 5.
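Super-time-stepping is easy to sketch for a scalar diffusion (Ohmic-like) equation: a cycle of N inner explicit substeps, built from Chebyshev roots, whose total length greatly exceeds N times the explicit stability limit. The 1D finite-difference toy below is not the authors' SPMHD code; it simply uses the reported optimum ν ~ 0.01, N ~ 5:

```python
import math

def sts_substeps(dt_expl, nu, N):
    """Super-time-stepping substeps (Alexiades et al. 1996): N explicit
    substeps whose sum exceeds N*dt_expl while the composite cycle
    remains stable."""
    return [dt_expl / ((nu - 1.0) * math.cos((2*j - 1) * math.pi / (2*N)) + 1.0 + nu)
            for j in range(1, N + 1)]

def diffuse_sts(u, eta, dx, n_super, nu=0.01, N=5):
    """Advance du/dt = eta * d2u/dx2 on a periodic grid with STS cycles."""
    n = len(u)
    dt_expl = 0.5 * dx * dx / eta          # explicit (forward Euler) limit
    for _ in range(n_super):
        for tau in sts_substeps(dt_expl, nu, N):
            lap = [u[(i - 1) % n] - 2*u[i] + u[(i + 1) % n] for i in range(n)]
            u = [u[i] + eta * tau / dx**2 * lap[i] for i in range(n)]
    return u
```

Individual substeps violate the explicit limit (some are many times dt_expl), but the product of their amplification factors stays below one for every grid mode, which is the trick that buys the speedup.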
NASA Astrophysics Data System (ADS)
Chandramouli, Rajarathnam; Li, Grace; Memon, Nasir D.
2002-04-01
Steganalysis techniques attempt to differentiate between stego-objects and cover-objects. In recent work we developed an explicit analytic upper bound for the steganographic capacity of LSB based steganographic techniques for a given false probability of detection. In this paper we look at adaptive steganographic techniques. Adaptive steganographic techniques take explicit steps to escape detection. We explore different techniques that can be used to adapt message embedding to the image content or to a known steganalysis technique. We investigate the advantages of adaptive steganography within an analytical framework. We also give experimental results with a state-of-the-art steganalysis technique demonstrating that adaptive embedding results in a significant number of bits embedded without detection.
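As a point of reference for the adaptive techniques discussed, the non-adaptive LSB baseline can be sketched in a few lines; an adaptive embedder would additionally select embedding positions based on image content or a known steganalysis statistic. Function names here are illustrative:

```python
def lsb_embed(cover, bits):
    """Embed a bit sequence into the least significant bits of cover
    samples. This is the non-adaptive baseline; an adaptive variant
    would first choose positions (e.g. high-variance regions) to
    escape detection."""
    stego = list(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def lsb_extract(stego, n_bits):
    """Recover the first n_bits embedded bits."""
    return [s & 1 for s in stego[:n_bits]]
```

Each sample changes by at most one quantization level, which is precisely the statistical footprint that LSB steganalysis detectors, and the capacity bound cited above, are built around.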
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior can often be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the inter-actuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and the actuator geometry. The method of least-squares fitting of discrete influence functions to the turbulent wavefront is compared to the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, fitting error constants are obtained that verify some earlier investigations.
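The exponential form of the fitting error discussed above, σ² = a_F (d/r₀)^(5/3), together with the Maréchal approximation for the Strehl ratio, can be sketched directly; the constant a_F = 0.28 below is a typical continuous-facesheet value and should be treated as an assumption:

```python
import math

def dm_fitting_error(d, r0, a_F=0.28):
    """DM fitting error in the common exponential form
    sigma^2 = a_F * (d/r0)**(5/3)  [rad^2],
    where d is the inter-actuator spacing, r0 the Fried parameter, and
    a_F a constant set by influence-function shape and actuator geometry
    (0.28 is a typical continuous-facesheet value; illustrative here)."""
    return a_F * (d / r0) ** (5.0 / 3.0)

def strehl_estimate(sigma2):
    """Extended Marechal approximation for the Strehl ratio."""
    return math.exp(-sigma2)
```

Scanning d over candidate actuator pitches and inverting for a target Strehl is exactly the a priori sizing use of this scaling law that the paper describes.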
NASA Astrophysics Data System (ADS)
Zhang, Ruili; Wang, Yulei; He, Yang; Xiao, Jianyuan; Liu, Jian; Qin, Hong; Tang, Yifa
2018-02-01
Relativistic dynamics of a charged particle in time-dependent electromagnetic fields has theoretical significance and a wide range of applications. The numerical simulation of relativistic dynamics is often multi-scale and requires accurate long-term numerical simulations. Therefore, explicit symplectic algorithms are much more preferable than non-symplectic methods and implicit symplectic algorithms. In this paper, we employ the proper time and express the Hamiltonian as the sum of exactly solvable terms and product-separable terms in space-time coordinates. We then give explicit symplectic algorithms based on generating functions of orders 2 and 3 for the relativistic dynamics of a charged particle. The methodology is not new; it has already been applied to the non-relativistic dynamics of charged particles, but the algorithm for relativistic dynamics has much significance in practical simulations, such as the secular simulation of runaway electrons in tokamaks.
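The "sum of exactly solvable terms" splitting idea can be illustrated in its simplest setting, a separable Hamiltonian H = p²/2 + V(q), where composing the exact kick and drift flows gives the second-order explicit symplectic Störmer-Verlet scheme. This is a generic sketch, not the paper's generating-function construction for relativistic dynamics:

```python
def leapfrog(q, p, force, dt, n_steps):
    """Second-order explicit symplectic integrator (Stormer-Verlet) for a
    separable Hamiltonian H = p^2/2 + V(q): each kick and drift substep is
    the exact flow of one exactly solvable piece of H."""
    p = p + 0.5 * dt * force(q)        # initial half kick
    for _ in range(n_steps - 1):
        q = q + dt * p                 # drift
        p = p + dt * force(q)          # full kick
    q = q + dt * p                     # final drift
    p = p + 0.5 * dt * force(q)        # closing half kick
    return q, p
```

The long-term payoff the abstract refers to shows up as a bounded (non-drifting) energy error over many oscillation periods, which non-symplectic explicit schemes do not provide.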
NASA Technical Reports Server (NTRS)
Verstraete, Michel M.
1987-01-01
Understanding the details of the interaction between the radiation field and plant structures is important not only climatically, because of the influence of vegetation on the surface water and energy balance, but also biologically, since solar radiation provides the energy necessary for photosynthesis. The problem is complex because of the extreme variety of vegetation forms in space and time, as well as within and across plant species. This one-dimensional vertical multilayer model describes the transfer of direct solar radiation through a leaf canopy, accounting explicitly for the vertical inhomogeneities of a plant stand and leaf orientation, as well as heliotropic plant behavior. The model reproduces observational results on homogeneous canopies, but it is also well adapted to describing vertically inhomogeneous canopies. Some of the implications of leaf orientation and plant structure for light collection are briefly reviewed.
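As a minimal illustration of multilayer attenuation of the direct beam, the sketch below applies Beer-Lambert extinction layer by layer; the extinction coefficient k is an assumed constant here, whereas the model described above additionally resolves leaf orientation and heliotropic behavior:

```python
import math

def direct_beam_profile(lai_layers, k=0.5):
    """Cumulative transmission of the direct solar beam through successive
    canopy layers via Beer-Lambert attenuation; lai_layers holds each
    layer's leaf area index, and k lumps leaf orientation and sun angle
    into a single assumed extinction coefficient."""
    trans = [1.0]                      # transmission above the canopy
    for lai in lai_layers:
        trans.append(trans[-1] * math.exp(-k * lai))
    return trans

# Four equal layers with total LAI = 2: transmission decays monotonically
# from 1 at the top to exp(-k * total_LAI) at the ground.
profile = direct_beam_profile([0.5, 0.5, 0.5, 0.5])
```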
Meyniel, Florent; Safra, Lou; Pessiglione, Mathias
2014-01-01
A pervasive cost-benefit problem is how to allocate effort over time, i.e., deciding when to work and when to rest. An economic decision perspective would suggest that duration of effort is determined beforehand, depending on expected costs and benefits. However, the literature on exercise performance emphasizes that decisions are made on the fly, depending on physiological variables. Here, we propose and validate a general model of effort allocation that integrates these two views. In this model, a single variable, termed cost evidence, accumulates during effort and dissipates during rest, triggering effort cessation and resumption when reaching bounds. We assumed that such a basic mechanism could explain implicit adaptation, whereas the latent parameters (slopes and bounds) could be amenable to explicit anticipation. A series of behavioral experiments manipulating effort duration and difficulty was conducted in a total of 121 healthy humans to dissociate implicit-reactive from explicit-predictive computations. Results show 1) that effort and rest durations are adapted on the fly to variations in cost-evidence level, 2) that the cost-evidence fluctuations driving the behavior do not match explicit ratings of exhaustion, and 3) that actual difficulty impacts effort duration whereas expected difficulty impacts rest duration. Taken together, our findings suggest that cost evidence is implicitly monitored online, with an accumulation rate proportional to actual task difficulty. In contrast, cost-evidence bounds and dissipation rate might be adjusted in anticipation, depending on explicit task difficulty. PMID:24743711
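A minimal sketch of the proposed mechanism, with illustrative parameter values: cost evidence rises at one slope during effort, falls at another during rest, and reaching the bounds triggers switching between the two modes:

```python
def simulate_effort_allocation(slope_up, slope_down, upper, lower,
                               t_max, dt=0.001):
    """Cost evidence accumulates during effort and dissipates during rest;
    hitting the upper bound triggers rest, hitting the lower bound triggers
    resumption of effort. All parameter names are illustrative."""
    x, working, t = lower, True, 0.0
    events = []                         # (time, 'stop' or 'resume')
    while t < t_max:
        x += (slope_up if working else -slope_down) * dt
        if working and x >= upper:
            working = False
            events.append((t, 'stop'))
        elif not working and x <= lower:
            working = True
            events.append((t, 'resume'))
        t += dt
    return events
```

With these dynamics, effort duration is (upper − lower)/slope_up and rest duration is (upper − lower)/slope_down, which is how an accumulation slope tied to actual difficulty and bounds/dissipation tied to expected difficulty can shape the two durations separately.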
Segmentation-based wavelet transform for still-image compression
NASA Astrophysics Data System (ADS)
Mozelle, Gerard; Seghier, Abdellatif; Preteux, Francoise J.
1996-10-01
In order to simultaneously address the content-based and scalability functionalities required by MPEG-4, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches for image compression. The associated methodology has two stages: 1) image segmentation into convex and polygonal regions; 2) 2D wavelet transform of the signal corresponding to each region. In this paper, we have mathematically studied a method for constructing a multiresolution analysis (V_j(Ω))_{j∈ℕ} adapted to a polygonal region, which provides adaptive region-based filtering. The explicit construction of scaling function, pre-wavelet and orthonormal wavelet bases defined on a polygon is carried out by using the theory of Toeplitz operators. The corresponding expression can be interpreted as a location property which allows defining interior and boundary scaling functions. Concerning orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector P_{(V_j(Ω))^⊥} from the space V_{j+1}(Ω) onto the space (V_j(Ω))^⊥. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.
Slave finite elements: The temporal element approach to nonlinear analysis
NASA Technical Reports Server (NTRS)
Gellin, S.
1984-01-01
A formulation method for finite elements in space and time incorporating nonlinear geometric and material behavior is presented. The method uses interpolation polynomials for approximating the behavior of various quantities over the element domain, and only explicit integration over space and time. While applications are general, the plate and shell elements that are currently being programmed are appropriate to model turbine blades, vanes, and combustor liners.
A space-time lower-upper symmetric Gauss-Seidel scheme for the time-spectral method
NASA Astrophysics Data System (ADS)
Zhan, Lei; Xiong, Juntao; Liu, Feng
2016-05-01
The time-spectral method (TSM) offers the advantage of increased order of accuracy compared to methods using finite differences in time for periodic unsteady flow problems. Explicit Runge-Kutta pseudo-time marching and implicit schemes have been developed to solve iteratively the space-time coupled nonlinear equations resulting from TSM. Convergence of the explicit schemes is slow because of the stringent time-step limit. Many implicit methods have been developed for TSM. Their computational efficiency is, however, still limited in practice because of delayed implicit temporal coupling, multiple iterative loops, costly matrix operations, or lack of strong diagonal dominance of the implicit operator matrix. To overcome these shortcomings, an efficient space-time lower-upper symmetric Gauss-Seidel (ST-LU-SGS) implicit scheme with multigrid acceleration is presented. In this scheme, the implicit temporal coupling term is split as one additional dimension of space in the LU-SGS sweeps. To improve numerical stability for periodic flows with high frequency, a modification to the ST-LU-SGS scheme is proposed. Numerical results show that fast convergence is achieved using large or even infinite Courant-Friedrichs-Lewy (CFL) numbers for unsteady flow problems with moderately high frequency and with the use of moderately high numbers of time intervals. The ST-LU-SGS implicit scheme is also found to work well in calculating periodic flow problems where the frequency is not known a priori and needs to be determined by using a combined Fourier analysis and gradient-based search algorithm.
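The core operation that gives TSM its increased temporal accuracy is spectral differentiation of a periodic signal from a small number of time samples. A self-contained sketch (direct DFT, odd number of samples) follows; production codes apply the equivalent dense time-derivative matrix that couples all time instances:

```python
import cmath, math

def spectral_time_derivative(u, T):
    """Differentiate a T-periodic signal given N (odd) equispaced samples
    over one period, via its discrete Fourier series. Exact for signals
    resolved by the N retained Fourier modes."""
    N = len(u)
    # forward DFT: Fourier coefficients of the sampled signal
    U = [sum(u[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
         for k in range(N)]
    dudt = []
    for n in range(N):
        s = 0j
        for k in range(N):
            kk = k if k <= N // 2 else k - N        # signed wavenumber
            s += 1j * (2 * math.pi * kk / T) * U[k] \
                 * cmath.exp(2j * math.pi * k * n / N)
        dudt.append(s.real)
    return dudt
```

Differentiating sin(t) on five samples over one period already reproduces cos(t) to machine precision, which is the sense in which the method's temporal accuracy is spectral rather than finite-difference.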
Visuomotor adaptation in head-mounted virtual reality versus conventional training
Anglin, J. M.; Sugiyama, T.; Liew, S.-L.
2017-01-01
Immersive, head-mounted virtual reality (HMD-VR) provides a unique opportunity to understand how changes in sensory environments affect motor learning. However, potential differences in mechanisms of motor learning and adaptation in HMD-VR versus a conventional training (CT) environment have not been extensively explored. Here, we investigated whether adaptation on a visuomotor rotation task in HMD-VR yields adaptation effects similar to those in CT and whether these effects are achieved through similar mechanisms. Specifically, recent work has shown that visuomotor adaptation may occur via both an implicit, error-based internal model and a more cognitive, explicit strategic component. We sought to measure both overall adaptation and the balance between implicit and explicit mechanisms in HMD-VR versus CT. Twenty-four healthy individuals were placed in either HMD-VR or CT and trained on an identical visuomotor adaptation task that measured both implicit and explicit components. Our results showed that the overall time course of adaptation was similar in both HMD-VR and CT. However, HMD-VR participants utilized a greater cognitive strategy than CT participants, while CT participants engaged in greater implicit learning. These results suggest that while both conditions produce similar results in overall adaptation, the mechanisms by which visuomotor adaptation occurs in HMD-VR appear to be more reliant on cognitive strategies. PMID:28374808
Context-Sensitive Adjustment of Cognitive Control in Dual-Task Performance
ERIC Educational Resources Information Center
Fischer, Rico; Gottschalk, Caroline; Dreisbach, Gesine
2014-01-01
Performing 2 highly similar tasks at the same time requires an adaptive regulation of cognitive control to shield prioritized primary task processing from between-task (cross-talk) interference caused by secondary task processing. In the present study, the authors investigated how implicitly and explicitly delivered information promotes the…
The "Negotiated Space" of University Researchers' Pursuit of a Research Agenda
ERIC Educational Resources Information Center
Luukkonen, Terttu; Thomas, Duncan A.
2016-01-01
The paper introduces a concept of a "negotiated space" to describe university researchers' attempts to balance pragmatically, continually and dynamically over time, their own agency and autonomy in the selection of research topics and pursuit of scientific research to filter out the explicit steering and tacit signals of external…
Interacting particle systems in time-dependent geometries
NASA Astrophysics Data System (ADS)
Ali, A.; Ball, R. C.; Grosskinsky, S.; Somfai, E.
2013-09-01
Many complex structures and stochastic patterns emerge from simple kinetic rules and local interactions, and are governed by scale invariance properties in combination with effects of the global geometry. We consider systems that can be described effectively by space-time trajectories of interacting particles, such as domain boundaries in two-dimensional growth or river networks. We study trajectories embedded in time-dependent geometries, and the main focus is on uniformly expanding or decreasing domains for which we obtain an exact mapping to simple fixed domain systems while preserving the local scale invariance properties. This approach was recently introduced in Ali et al (2013 Phys. Rev. E 87 020102(R)) and here we provide a detailed discussion on its applicability for self-affine Markovian models, and how it can be adapted to self-affine models with memory or explicit time dependence. The mapping corresponds to a nonlinear time transformation which converges to a finite value for a large class of trajectories, enabling an exact analysis of asymptotic properties in expanding domains. We further provide a detailed discussion of different particle interactions and generalized geometries. All our findings are based on exact computations and are illustrated numerically for various examples, including Lévy processes and fractional Brownian motion.
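For a concrete instance of the mapping, consider an exponentially expanding domain a(u) = exp(βu). Under diffusive rescaling the nonlinear time change is s(t) = ∫₀ᵗ a(u)⁻² du, which converges to the finite value 1/(2β) as t → ∞, so the entire expanding-domain history maps into a bounded interval of fixed-domain time. The closed form is sketched below; the exponent 2 assumes self-affine trajectories with Brownian (diffusive) scaling:

```python
import math

def transformed_time(beta, t):
    """Nonlinear time change s(t) = integral_0^t a(u)**(-2) du for an
    exponentially expanding domain a(u) = exp(beta*u). As t grows, s(t)
    saturates at the finite value 1/(2*beta), enabling exact analysis of
    asymptotic properties in the fixed-domain picture."""
    return (1.0 - math.exp(-2.0 * beta * t)) / (2.0 * beta)
```

The finite limit is the key point: asymptotic (t → ∞) questions in the expanding domain become finite-time questions for the mapped fixed-domain process.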
Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram
2010-01-01
MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794
Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.
2008-01-01
A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores temporal topological relationships together with the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparisons during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The results of temporal queries on individual feature history show the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.
Visco-elastic controlled-source full waveform inversion without surface waves
NASA Astrophysics Data System (ADS)
Paschke, Marco; Krause, Martin; Bleibinhaus, Florian
2016-04-01
We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image method to ensure a stress-free condition at the surface. The time-domain data are Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced memory requirement when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Fréchet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Fréchet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S), we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
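The on-the-fly Fourier transform described above can be sketched per grid point as a running sum: only one complex accumulator per chosen frequency is stored, instead of the full time history. The sketch below is a plain rectangular-rule DFT; the consistent spatiotemporal weighting applied in the actual code is omitted:

```python
import cmath, math

def on_the_fly_dft(time_series, dt, freqs):
    """Accumulate the Fourier transform of a time-domain signal at a few
    selected frequencies during the time loop. Memory cost is one complex
    number per frequency (per grid point), independent of the number of
    time steps."""
    acc = {f: 0j for f in freqs}
    for n, u in enumerate(time_series):
        for f in freqs:
            acc[f] += u * cmath.exp(-2j * math.pi * f * n * dt) * dt
    return acc
```

In a time-stepping forward solver, the inner accumulation would be executed once per time step as the wavefield is computed, so the frequency-domain wavefields needed for the kernels are available when the time loop ends.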
Mission Data System Java Edition Version 7
NASA Technical Reports Server (NTRS)
Reinholtz, William K.; Wagner, David A.
2013-01-01
The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.
Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.
Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal
2015-08-28
We report a new limitation on the ability of physical systems to perform computation-one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in lieu of any time limitations on the evolving system-such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.
NASA Technical Reports Server (NTRS)
Usab, William J., Jr.; Jiang, Yi-Tsann
1991-01-01
The objective of the present research is to develop a general solution adaptive scheme for the accurate prediction of inviscid quasi-three-dimensional flow in advanced compressor and turbine designs. The adaptive solution scheme combines an explicit finite-volume time-marching scheme for unstructured triangular meshes and an advancing front triangular mesh scheme with a remeshing procedure for adapting the mesh as the solution evolves. The unstructured flow solver has been tested on a series of two-dimensional airfoil configurations including a three-element analytic test case presented here. Mesh adapted quasi-three-dimensional Euler solutions are presented for three spanwise stations of the NASA rotor 67 transonic fan. Computed solutions are compared with available experimental data.
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Weighted least squares estimation procedures were interfaced with control logic based either upon optimal regulator theory or upon single-stage performance indices.
Relativistic bound states in three space-time dimensions in Minkowski space
NASA Astrophysics Data System (ADS)
Gutierrez, C.; Gigante, V.; Frederico, T.; Tomio, Lauro
2016-01-01
With the aim of deriving a workable framework for bound states in Minkowski space, we have investigated the Nakanishi perturbative integral representation of the Bethe-Salpeter (BS) amplitude in two space dimensions plus time (2+1). The homogeneous BS amplitude, projected onto the light-front plane, is used to derive an equation for the Nakanishi weight function. The formal development is illustrated in detail and applied to the bound system composed of two scalar particles interacting through the exchange of a massive scalar. The explicit forms of the integral equations are obtained in the ladder approximation.
An algebraic structure of discrete-time biaffine systems
NASA Technical Reports Server (NTRS)
Tarn, T.-J.; Nonoyama, S.
1979-01-01
New results on the realization of finite-dimensional, discrete-time, internally biaffine systems are presented in this paper. The external behavior of such systems is described by multiaffine functions and the state space is constructed via Nerode equivalence relations. We prove that the state space is an affine space. An algorithm which amounts to choosing a frame for the affine space is presented. Our algorithm reduces in the linear and bilinear case to a generalization of algorithms existing in the literature. Explicit existence criteria for span-canonical realizations as well as an affine isomorphism theorem are given.
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model; rather, it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) when mapping the monkey's neural states to robot actions, and needed to experience only a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
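A minimal actor-critic loop on a one-state, two-action toy task illustrates the decoding principle: the critic maintains a value baseline, and the actor's softmax preferences are nudged by the reward-prediction (TD) error, with no supervised training set. The task and parameters here are illustrative, not the paper's BMI setup:

```python
import math, random

def train_actor_critic(rewards, episodes=3000, alpha=0.05, beta=0.2, seed=1):
    """Incremental actor-critic on a one-state, two-action task driven only
    by a scalar reward signal: the critic tracks expected reward, and the
    actor's preferences follow the likelihood-ratio policy gradient scaled
    by the reward-prediction error."""
    rng = random.Random(seed)
    h = [0.0, 0.0]                       # actor action preferences
    v = 0.0                              # critic value estimate
    for _ in range(episodes):
        m = max(h)
        e = [math.exp(x - m) for x in h]
        p = [x / sum(e) for x in e]      # softmax policy
        a = 0 if rng.random() < p[0] else 1
        delta = rewards[a] - v           # reward-prediction (TD) error
        v += alpha * delta               # critic update
        for b in range(2):               # actor update, both preferences
            h[b] += beta * delta * ((1.0 if b == a else 0.0) - p[b])
    return h, v
```

Because the model parameters adapt on every trial from the feedback signal alone, the same loop keeps working if the reward contingencies change, which is the robustness property highlighted in the abstract.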
Sachetto Oliveira, Rafael; Martins Rocha, Bernardo; Burgarelli, Denise; Meira, Wagner; Constantinides, Christakis; Weber Dos Santos, Rodrigo
2018-02-01
The use of computer models as a tool for the study and understanding of the complex phenomena of cardiac electrophysiology has attained increased importance nowadays. At the same time, the increased complexity of the biophysical processes translates into complex computational and mathematical models. To speed up cardiac simulations and to allow more precise and realistic uses, 2 different techniques have been traditionally exploited: parallel computing and sophisticated numerical methods. In this work, we combine a modern parallel computing technique based on multicore and graphics processing units (GPUs) and a sophisticated numerical method based on a new space-time adaptive algorithm. We evaluate each technique alone and in different combinations: multicore and GPU; multicore, GPU, and space adaptivity; multicore, GPU, space adaptivity, and time adaptivity. All the techniques and combinations were evaluated under different scenarios: 3D simulations on slabs and 3D simulations on a ventricular mouse mesh, i.e., a complex geometry, under sinus-rhythm and arrhythmic conditions. Our results suggest that multicore and GPU accelerate the simulations by an approximate factor of 33×, whereas the speedups attained by the space-time adaptive algorithms were approximately 48×. Nevertheless, by combining all the techniques, we obtained speedups that ranged between 165× and 498×. The tested methods were able to reduce the execution time of a simulation by more than 498× for a complex cellular model in a slab geometry and by 165× in a realistic heart geometry simulating spiral waves. The proposed methods will allow faster and more realistic simulations in a feasible time with no significant loss of accuracy. Copyright © 2017 John Wiley & Sons, Ltd.
Financial incentives enhance adaptation to a sensorimotor transformation.
Gajda, Kathrin; Sülzenbrück, Sandra; Heuer, Herbert
2016-10-01
Adaptation to sensorimotor transformations has received much attention in recent years. However, the role of motivation and its relation to the implicit and explicit processes underlying adaptation has been neglected thus far. Here, we examine the influence of extrinsic motivation on adaptation to a visuomotor rotation by way of providing financial incentives for accurate movements. Participants in the experimental group "bonus" received a defined amount of money for high end-point accuracy in a visuomotor rotation task; participants in the control group "no bonus" did not receive a financial incentive. Results showed better overall adaptation to the visuomotor transformation in participants who were extrinsically motivated. However, there was no beneficial effect of financial incentives on the implicit component, as assessed by the after-effects, and on separately assessed explicit knowledge. These findings suggest that the positive influence of financial incentives on adaptation is due to a component which cannot be measured by after-effects or by our test of explicit knowledge. A likely candidate is model-free learning based on reward-prediction errors, which could be enhanced by the financial bonuses.
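The implicit, error-based component discussed above is commonly summarized by a single-rate state-space model, in which an internal estimate is driven by a fraction of each trial's error and decays slightly between trials; the sketch below uses illustrative retention and learning-rate values, not parameters fitted to this study:

```python
def adapt(rotation, trials, retention=0.98, learning_rate=0.2):
    """Single-rate state-space model of error-based adaptation: the
    internal estimate x is updated by a fraction (learning_rate) of each
    trial's error and retains a fraction (retention) between trials.
    The residual x after the perturbation is removed corresponds to the
    after-effect used to probe the implicit component."""
    x, errors = 0.0, []
    for _ in range(trials):
        e = rotation - x              # visual error on this trial
        errors.append(e)
        x = retention * x + learning_rate * e
    return x, errors
```

The model converges to the steady state x* = B·r / (1 − A + B) for retention A, learning rate B, and rotation r, so asymptotic adaptation is incomplete, and any extra end-point accuracy produced by incentives must come from a component outside this error-driven update.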
Increased gamma band power during movement planning coincides with motor memory retrieval.
Thürer, Benjamin; Stockinger, Christian; Focke, Anne; Putze, Felix; Schultz, Tanja; Stein, Thorsten
2016-01-15
The retrieval of motor memory requires previous memory encoding and subsequent consolidation of the specific motor memory. Previous work showed that motor memory seems to rely on different memory components (e.g., implicit, explicit). However, it is still unknown whether explicit components contribute to the retrieval of motor memories formed by dynamic adaptation tasks and which neural correlates are linked to memory retrieval. We investigated the lower and higher gamma bands of subjects' electroencephalography during encoding and retrieval of a dynamic adaptation task. A total of 24 subjects were randomly assigned to a treatment and a control group. Both groups adapted to a force field A on day 1 and were re-exposed to the same force field A on day 3 of the experiment. On day 2, the treatment group learned an interfering force field B, whereas the control group had a day of rest. Kinematic analyses showed that the control group improved their initial motor performance from day 1 to day 3 but the treatment group did not. This behavioral result coincided with increased higher gamma band power in the electrodes over prefrontal areas in the initial trials of day 3 for the control but not the treatment group. Intriguingly, this effect vanished with the subsequent re-adaptation on day 3. We suggest that improved re-test performance in a dynamic motor adaptation task is supported by explicit memory and that gamma bands in the electrodes over the prefrontal cortex are linked to these explicit components. Furthermore, we suggest that the contribution of explicit memory vanishes with subsequent re-adaptation as task automaticity increases. Copyright © 2015 Elsevier Inc. All rights reserved.
Realistic mass ratio magnetic reconnection simulations with the Multi Level Multi Domain method
NASA Astrophysics Data System (ADS)
Innocenti, Maria Elena; Beck, Arnaud; Lapenta, Giovanni; Markidis, Stefano
2014-05-01
Space physics simulations with the ambition of realistically representing both ion and electron dynamics have to be able to cope with the huge scale separation between the electron and ion parameters while respecting the stability constraints of the numerical method of choice. Explicit Particle In Cell (PIC) simulations with realistic mass ratio are limited in the size of the problems they can tackle by the restrictive stability constraints of the explicit method (Birdsall and Langdon, 2004). Many alternatives are available to reduce such computational costs. Reduced mass ratios can be used, with the caveats highlighted in Bret and Dieckmann (2010). Fully implicit (Chen et al., 2011a; Markidis and Lapenta, 2011) or semi implicit (Vu and Brackbill, 1992; Lapenta et al., 2006; Cohen et al., 1989) methods can bypass the strict stability constraints of explicit PIC codes. Adaptive Mesh Refinement (AMR) techniques (Vay et al., 2004; Fujimoto and Sydora, 2008) can be employed to change the simulation resolution locally. We focus here on the Multi Level Multi Domain (MLMD) method introduced in Innocenti et al. (2013) and Beck et al. (2013). The method combines the advantages of implicit algorithms and adaptivity. Two levels are fully simulated with fields and particles. The so-called "refined level" simulates a fraction of the "coarse level" with a resolution RF times finer than the coarse-level resolution, where RF is the Refinement Factor between the levels. This method is particularly suitable for magnetic reconnection simulations (Biskamp, 2005), where the characteristic Ion and Electron Diffusion Regions (IDR and EDR) develop at the ion and electron scales respectively (Daughton et al., 2006). In Innocenti et al. (2013) we showed that basic wave and instability processes are correctly reproduced by MLMD simulations. In Beck et al. (2013) we applied the technique to plasma expansion and magnetic reconnection problems. 
We showed that notable computational time savings can be achieved. More importantly, we were able to correctly reproduce EDR features, such as the inversion layer of the electric field observed in Chen et al. (2011b), with a MLMD simulation at a significantly lower cost. Here, we present recent results on EDR dynamics achieved with the MLMD method and a realistic mass ratio.
Stability and diversity in collective adaptation
NASA Astrophysics Data System (ADS)
Sato, Yuzuru; Akiyama, Eizo; Crutchfield, James P.
2005-10-01
We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several familiar, explicit game interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. The simplicity of the assumptions underlying the macroscopic equations suggests that these behaviors should be expected broadly in collective adaptation. We also analyze the adaptation dynamics from an information-theoretic viewpoint and discuss self-organization induced by the dynamics of uncertainty, giving a novel view of collective adaptation.
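The game-dynamics limit referred to above can be sketched with plain two-strategy replicator dynamics; the paper's macroscopic equations additionally include a memory-loss (randomization) term, omitted in this simplified illustration:

```python
def replicator(payoff, x0, steps=5000, dt=0.01):
    """Euler integration of two-strategy replicator dynamics,
    dx_i/dt = x_i * (f_i - fbar), where f_i is strategy i's payoff against
    the current population mix and fbar the population average: the kind of
    macroscopic game dynamics that emerges from self-interested
    agent-level adaptation."""
    x = list(x0)
    for _ in range(steps):
        f = [sum(payoff[i][j] * x[j] for j in range(2)) for i in range(2)]
        fbar = x[0] * f[0] + x[1] * f[1]
        x = [x[i] + dt * x[i] * (f[i] - fbar) for i in range(2)]
        s = sum(x)
        x = [v / s for v in x]      # renormalize to stay on the simplex
    return x

# Coordination game: both pure strategies are stable, and the population
# converges to whichever one starts above the mixed equilibrium.
final = replicator([[2.0, 0.0], [0.0, 1.0]], [0.6, 0.4])
```

Different payoff matrices already yield qualitatively different collective outcomes (fixation, coexistence, cycling), consistent with the diversity of behaviors described above.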
Nonlinear dynamic theory for photorefractive phase hologram formation
NASA Technical Reports Server (NTRS)
Kim, D. M.; Shah, R. R.; Rabson, T. A.; Tittle, F. K.
1976-01-01
A nonlinear dynamic theory is developed for the formation of photorefractive volume phase holograms. A feedback mechanism existing between the photogenerated field and free-electron density, treated explicitly, yields the growth and saturation of the space-charge field in a time scale characterized by the coupling strength between them. The expression for the field reduces in the short-time limit to previous theories and approaches in the long-time limit the internal or photovoltaic field. Additionally, the phase of the space charge field is shown to be time-dependent.
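The growth-and-saturation behavior described above can be illustrated with the simplest saturating relaxation model; this is only a caricature of the two limiting regimes, not the paper's nonlinear dynamic theory.

```python
import numpy as np

# Illustrative only: the qualitative growth and saturation of the space-charge
# field can be caricatured by the relaxation model dE/dt = (E_sat - E) / tau,
# which grows linearly at short times and saturates at long times.

tau, E_sat = 1.0, 1.0
t = np.linspace(0.0, 8.0, 801)
E = E_sat * (1.0 - np.exp(-t / tau))

# Short-time limit: E ~ E_sat * t / tau (ratio should be near 1 at small t).
short = E[1] / (E_sat * t[1] / tau)
print(short, E[-1])
```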
Seidl, Rupert; Lexer, Manfred J
2013-01-15
The unabated continuation of anthropogenic greenhouse gas emissions and the lack of an international consensus on a stringent climate change mitigation policy underscore the importance of adaptation for coping with the all but inevitable changes in the climate system. Adaptation measures in forestry have particularly long lead times. A timely implementation is thus crucial for reducing the considerable climate vulnerability of forest ecosystems. However, since future environmental conditions as well as future societal demands on forests are inherently uncertain, a core requirement for adaptation is robustness to a wide variety of possible futures. Here we explicitly address the roles of climatic and social uncertainty in forest management, and tackle the question of robustness of adaptation measures in the context of multi-objective sustainable forest management (SFM). We used the Austrian Federal Forests (AFF) as a case study, and employed a comprehensive vulnerability assessment framework based on ecosystem modeling, multi-criteria decision analysis, and practitioner participation. We explicitly considered climate uncertainty by means of three climate change scenarios, and accounted for uncertainty in future social demands by means of three societal preference scenarios regarding SFM indicators. We found that the effects of climatic and social uncertainty on the projected performance of management were in the same order of magnitude, underlining the notion that climate change adaptation requires an integrated social-ecological perspective. Furthermore, our analysis of adaptation measures revealed considerable trade-offs between reducing adverse impacts of climate change and facilitating adaptive capacity. 
This finding implies that prioritization between these two general aims of adaptation is necessary in management planning, which we suggest can draw on uncertainty analysis: Where the variation induced by social-ecological uncertainty renders measures aiming to reduce climate change impacts statistically insignificant (i.e., for approximately one third of the investigated management units of the AFF case study), fostering adaptive capacity is suggested as the preferred pathway for adaptation. We conclude that climate change adaptation needs to balance between anticipating expected future conditions and building the capacity to address unknowns and surprises. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Nguyen, Dang Van; Li, Jing-Rebecca; Grebenkov, Denis; Le Bihan, Denis
2014-04-01
The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite element method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch-Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge-Kutta-Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
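The coupling of a method-of-lines spatial discretization to adaptive explicit time stepping can be sketched generically as below. This uses Heun's method with step-doubling error control rather than the Runge-Kutta-Chebyshev scheme of the paper, and the grid, tolerance, and step-size caps are illustrative assumptions.

```python
import numpy as np

# Generic sketch: semi-discretized 1D diffusion advanced with an adaptive
# explicit second-order scheme (Heun + step doubling), not the actual RKC method.

def rhs(u, dx):
    """Discrete 1D Laplacian with homogeneous Dirichlet ends."""
    du = np.zeros_like(u)
    du[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return du

def heun(u, dt, dx):
    k1 = rhs(u, dx)
    k2 = rhs(u + dt * k1, dx)
    return u + 0.5 * dt * (k1 + k2)

n = 64
dx = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = np.sin(np.pi * x)
t, dt, tol = 0.0, 1e-5, 1e-8
while t < 0.1:
    big = heun(u, dt, dx)
    small = heun(heun(u, dt / 2, dx), dt / 2, dx)
    err = np.max(np.abs(big - small))
    if err < tol:                         # accept and grow the step
        u, t = small, t + dt
        dt = min(1.2 * dt, 0.4 * dx**2)   # cap near the explicit stability limit
    else:                                 # reject and shrink
        dt *= 0.5

# Exact solution of the continuous problem: u = exp(-pi^2 t) sin(pi x).
exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))
```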
NASA Technical Reports Server (NTRS)
Campbell, W.
1981-01-01
A theoretical evaluation of the stability of an explicit finite difference solution of the transient temperature field in a composite medium is presented. The grid points of the field are assumed uniformly spaced, and media interfaces are either vertical or horizontal and pass through grid points. In addition, perfect contact between different media (infinite interfacial conductance) is assumed. A finite difference form of the conduction equation is not valid at media interfaces; therefore, heat balance forms are derived. These equations were subjected to stability analysis, and a computer graphics code was developed that permitted determination of a maximum time step for a given grid spacing.
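For a homogeneous region of the grid, the classical explicit stability limit that such an analysis recovers is dt ≤ Δx²/(2α) in one dimension. A quick numerical check (illustrative, not the paper's graphics code):

```python
import numpy as np

# Illustrative check of the explicit stability limit for 1D conduction on a
# uniform grid: dt <= dx**2 / (2 * alpha). Values are arbitrary placeholders.

def march(alpha, dx, dt, steps=200, n=50):
    u = np.zeros(n)
    u[n // 2] = 1.0                      # initial heat spike
    r = alpha * dt / dx**2
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

alpha, dx = 1.0e-5, 1.0e-2
dt_max = dx**2 / (2 * alpha)             # maximum stable time step
stable = march(alpha, dx, 0.9 * dt_max)
unstable = march(alpha, dx, 1.1 * dt_max)
print(dt_max, np.max(np.abs(stable)), np.max(np.abs(unstable)))
```

Below the limit the solution stays bounded by the initial maximum; just above it, the highest spatial mode is amplified every step and the solution blows up.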
Theory of the evolutionary minority game
NASA Astrophysics Data System (ADS)
Lo, T. S.; Hui, P. M.; Johnson, N. F.
2000-09-01
We present a theory describing a recently introduced model of an evolving, adaptive system in which agents compete to be in the minority. The agents themselves are able to evolve their strategies over time in an attempt to improve their performance. The theory explicitly demonstrates the self-interaction, or market impact, that agents in such systems experience.
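A toy simulation illustrates the basic minority-game setup — agents rewarded for being on the minority side each round. The probabilistic strategy rule and parameters are illustrative assumptions, not the authors' model.

```python
import numpy as np

# Toy minority game (illustrative): N agents pick side 0 or 1; agents on the
# minority side gain a point. Each agent follows the previous winning choice
# with its own probability p, a simple stand-in for an evolving strategy.

rng = np.random.default_rng(0)
N = 101                               # odd, so a strict minority always exists
p = rng.random(N)                     # per-agent prob. of repeating last winner
score = np.zeros(N)
last_win = 0
for _ in range(2000):
    follow = rng.random(N) < p
    choice = np.where(follow, last_win, 1 - last_win)
    ones = choice.sum()
    minority = 1 if ones < N - ones else 0
    score += (choice == minority)
    last_win = minority
print(score.mean() / 2000)            # mean win fraction, < 0.5 by construction
```

Because fewer than half the agents can ever win a round, the population-averaged success rate is necessarily below one half — the defining frustration of the game.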
Studies of implicit and explicit solution techniques in transient thermal analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1982-01-01
Studies aimed at an increase in the efficiency of calculating transient temperature fields in complex aerospace vehicle structures are reported. The advantages and disadvantages of explicit and implicit algorithms are discussed and a promising set of implicit algorithms with variable time steps, known as GEARIB, is described. Test problems, used for evaluating and comparing various algorithms, are discussed and finite element models of the configurations are described. These problems include a coarse model of the Space Shuttle wing, an insulated frame test article, a metallic panel for a thermal protection system, and detailed models of sections of the Space Shuttle wing. Results generally indicate a preference for implicit over explicit algorithms for transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures). The effects on algorithm performance of different models of an insulated cylinder are demonstrated. The stiffness of the problem is highly sensitive to modeling details and careful modeling can reduce the stiffness of the equations to the extent that explicit methods may become the best choice. Preliminary applications of a mixed implicit-explicit algorithm and operator splitting techniques for speeding up the solution of the algebraic equations are also described.
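The preference for implicit methods on stiff equations can be seen in miniature on the scalar test problem du/dt = -ku. This sketch is illustrative and unrelated to the GEARIB algorithms themselves.

```python
# Illustration of stiffness: for du/dt = -k*u with large k, explicit Euler is
# stable only for dt < 2/k, while backward Euler is stable for any dt > 0.

k, u0, T = 1000.0, 1.0, 0.1
dt = 0.01                              # far above the explicit limit 2/k = 0.002

def explicit_euler(dt):
    u, t = u0, 0.0
    while t < T - 1e-12:
        u = u + dt * (-k * u)          # amplification factor (1 - k*dt) = -9
        t += dt
    return u

def implicit_euler(dt):
    u, t = u0, 0.0
    while t < T - 1e-12:
        u = u / (1.0 + k * dt)         # closed-form solve of the implicit update
        t += dt
    return u

print(abs(explicit_euler(dt)), abs(implicit_euler(dt)))
```

The explicit solution explodes while the implicit one decays toward zero, which is why stiff insulated-structure problems favor implicit integration at practical step sizes.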
Parallel processors and nonlinear structural dynamics algorithms and software
NASA Technical Reports Server (NTRS)
Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.
1989-01-01
The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the CONNECTION Machine is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the CONNECTION Machine are also summarized. Various alternate data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the CONNECTION Machine is capable of outperforming the CRAY XMP/14.
An aftereffect of adaptation to mean size
Corbett, Jennifer E.; Wurnitsch, Nicole; Schwartz, Alex; Whitney, David
2013-01-01
The visual system rapidly represents the mean size of sets of objects. Here, we investigated whether mean size is explicitly encoded by the visual system, along a single dimension like texture, numerosity, and other visual dimensions susceptible to adaptation. Observers adapted to two sets of dots with different mean sizes, presented simultaneously in opposite visual fields. After adaptation, two test patches replaced the adapting dot sets, and participants judged which test appeared to have the larger average dot diameter. They generally perceived the test that replaced the smaller mean size adapting set as being larger than the test that replaced the larger adapting set. This differential aftereffect held for single test dots (Experiment 2) and high-pass filtered displays (Experiment 3), and changed systematically as a function of the variance of the adapting dot sets (Experiment 4), providing additional support that mean size is an adaptable, and therefore explicitly encoded, dimension of visual scenes.
Parallelization of Unsteady Adaptive Mesh Refinement for Unstructured Navier-Stokes Solvers
NASA Technical Reports Server (NTRS)
Schwing, Alan M.; Nompelis, Ioannis; Candler, Graham V.
2014-01-01
This paper explores the implementation of the MPI parallelization in a Navier-Stokes solver using adaptive mesh refinement. Viscous and inviscid test problems are considered for the purpose of benchmarking, as are implicit and explicit time advancement methods. The main test problem for comparison includes effects from boundary layers and other viscous features and requires a large number of grid points for accurate computation. Experimental validation against double cone experiments in hypersonic flow is shown. The adaptive mesh refinement shows promise for a staple test problem in the hypersonic community. Extension to more advanced techniques for more complicated flows is described.
Achieving Optimal Quantum Acceleration of Frequency Estimation Using Adaptive Coherent Control.
Naghiloo, M; Jordan, A N; Murch, K W
2017-11-03
Precision measurements of frequency are critical to accurate time keeping and are fundamentally limited by quantum measurement uncertainties. While for time-independent quantum Hamiltonians the uncertainty of any parameter scales at best as 1/T, where T is the duration of the experiment, recent theoretical works have predicted that explicitly time-dependent Hamiltonians can yield a 1/T^{2} scaling of the uncertainty for an oscillation frequency. This quantum acceleration in precision requires coherent control, which is generally adaptive. We experimentally realize this quantum improvement in frequency sensitivity with superconducting circuits, using a single transmon qubit. With optimal control pulses, the theoretically ideal frequency precision scaling is reached for times shorter than the decoherence time. This result demonstrates a fundamental quantum advantage for frequency estimation.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1977-01-01
A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting, predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real time and preflight ground targeting and trajectory shaping for the NASA Space Shuttle Program.
NASA Astrophysics Data System (ADS)
Kang, S.; Muralikrishnan, S.; Bui-Thanh, T.
2017-12-01
We propose IMEX HDG-DG schemes for Euler systems on the cubed sphere. Of interest is subsonic flow, where the speed of the acoustic wave is faster than that of the nonlinear advection. In order to simulate these flows efficiently, we split the governing system into a stiff part describing the fast waves and a non-stiff part associated with nonlinear advection. The former is discretized implicitly with the HDG method, while an explicit Runge-Kutta DG discretization is employed for the latter. The proposed IMEX HDG-DG framework: 1) facilitates high-order solutions in both time and space; 2) avoids overly small time-step sizes; 3) requires only one linear system solve per time step; and 4) relative to DG, generates a smaller and sparser linear system while promoting further parallelism owing to the HDG discretization. Numerical results for various test cases demonstrate that our methods are comparable to explicit Runge-Kutta DG schemes in terms of accuracy, while allowing for much larger time-step sizes.
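The implicit/explicit splitting can be sketched in its simplest first-order form. The operators below are toy stand-ins (a fast linear wave plus a mild nonlinearity), not the Euler system or the HDG-DG discretization.

```python
import numpy as np

# Schematic first-order IMEX splitting for du/dt = L u + N(u): treat the stiff
# linear "fast wave" operator L implicitly and the non-stiff nonlinear term
# N explicitly. One linear solve (here a prefactored inverse) per time step.

def imex_euler(u, dt, L, N, steps):
    n = len(u)
    solve = np.linalg.inv(np.eye(n) - dt * L)    # factor once, reuse each step
    for _ in range(steps):
        u = solve @ (u + dt * N(u))              # (I - dt L) u_new = u + dt N(u)
    return u

# Toy system: a stiff oscillation (fast wave) plus a weak quadratic nonlinearity.
L = np.array([[0.0, 200.0], [-200.0, 0.0]])      # fast skew-symmetric wave operator
N = lambda u: -0.1 * u * np.abs(u)               # mild nonlinear stand-in
u = imex_euler(np.array([1.0, 0.0]), dt=0.01, L=L, N=N, steps=100)
print(np.linalg.norm(u))
```

At dt = 0.01 an explicit treatment of L (with |eigenvalues| = 200) would be unstable, while the IMEX update remains bounded; this is the mechanism that lets such schemes take much larger time steps than fully explicit ones.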
Between-Trial Forgetting Due to Interference and Time in Motor Adaptation.
Kim, Sungshin; Oh, Youngmin; Schweighofer, Nicolas
2015-01-01
Learning a motor task with temporally spaced presentations or with other tasks intermixed between presentations reduces performance during training, but can enhance retention post training. These two effects are known as the spacing and contextual interference effect, respectively. Here, we aimed at testing a unifying hypothesis of the spacing and contextual interference effects in visuomotor adaptation, according to which forgetting between trials due to either spaced presentations or interference by another task will promote between-trial forgetting, which will depress performance during acquisition, but will promote retention. We first performed an experiment with three visuomotor adaptation conditions: a short inter-trial-interval (ITI) condition (SHORT-ITI); a long ITI condition (LONG-ITI); and an alternating condition with two alternated opposite tasks (ALT), with the same single-task ITI as in LONG-ITI. In the SHORT-ITI condition, there was the fastest increase in performance during training and the largest immediate forgetting in the retention tests. In contrast, in the ALT condition, there was the slowest increase in performance during training and little immediate forgetting in the retention tests. Compared to these two conditions, in the LONG-ITI condition we found an intermediate increase in performance during training and intermediate immediate forgetting. To account for these results, we fitted to the data six possible adaptation models with one or two time scales, and with interference in the fast, or in the slow, or in both time scales. Model comparison confirmed that two time scales and some degree of interference in either time scale are needed to account for our experimental results. In summary, our results suggest that retention following adaptation is modulated by the degree of between-trial forgetting, which is due to time-based decay in single-task adaptation and interference in multiple-task adaptation.
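The one- and two-timescale candidate models mentioned above are typically state-space models with a fast and a slow component. A generic sketch with illustrative (not fitted) parameters:

```python
import numpy as np

# Sketch of the classic two-timescale state-space model of motor adaptation on
# which such candidate models build. Parameter values are illustrative only.

def adapt(trials, target=1.0, Af=0.6, Bf=0.2, As=0.995, Bs=0.02):
    xf = xs = 0.0
    out = []
    for _ in range(trials):
        x = xf + xs                  # net adaptation is the sum of both states
        e = target - x               # trial error drives learning
        xf = Af * xf + Bf * e        # fast state: learns and forgets quickly
        xs = As * xs + Bs * e        # slow state: learns slowly, retains well
        out.append((x, xf, xs))
    return np.array(out)

h = adapt(200)
print(h[-1, 0], h[-1, 2])            # total adaptation and slow-state level
```

After extended training most of the adaptation is carried by the slow state, which is what supports retention once the fast state has decayed.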
Attitudes about race predict individual differences in face adaptation aftereffects.
Elliott, Sarah L; Chu, Kelly; Coleman, Jill
2017-12-01
This study examined whether category boundaries between Black and White faces relate to individual attitudes about race. Fifty-seven (20 Black, 37 White) participants completed measures of explicit racism, implicit racism, collective self-esteem (CSE), and racial centrality. Category boundaries between Black and White faces were measured in three separate conditions: following adaptation to (1) a neutral gray background, a sequence of (2) Black or (3) White faces. Two additional conditions measured category boundaries for facial distortion to investigate whether attitudes relate to mechanisms of racial identity alone, or to more global mechanisms of face perception. Using a two-alternative forced-choice staircase procedure, participants indicated whether a test image appeared to be Black or White (or contracted or expanded). Following neutral adaptation, participants with higher CSE showed category boundaries shifted toward faces with a higher percentage of Black features. In addition, the strength of short-term sensitivity shifts following adaptation to Black and White faces was related to explicit and implicit attitudes about race. Sensitivity shifts were weaker when participants scored higher on explicit racism, but were stronger when participants scored higher on implicit but lower on explicit racism. The results of this study indicate that attitudes about race account for some individual differences in natural category boundaries between races as well as the strength of identity aftereffects following face adaptation.
The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.
2006-12-01
Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Second, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We use a fully implicit Crank-Nicolson time stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivative symbolically and then outputs code to calculate it.
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to be able to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves as well as a large number of discretization points are required to avoid non-physical contact and intersections between particles, leading to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
Event-by-Event Study of Space-Time Dynamics in Flux-Tube Fragmentation
Wong, Cheuk-Yin
2017-05-25
In the semi-classical description of the flux-tube fragmentation process for hadron production and hadronization in high-energy $e^+e^-$ annihilations and $pp$ collisions, the rapidity-space-time ordering and the local conservation laws of charge, flavor, and momentum provide a set of powerful tools that may allow the reconstruction of the space-time dynamics of quarks and mesons in exclusive measurements of produced hadrons, on an event-by-event basis. We propose procedures to reconstruct the space-time dynamics from event-by-event exclusive hadron data to exhibit explicitly the ordered chain of hadrons produced in flux-tube fragmentation. As a supplementary tool, we infer the average space-time coordinates of the $q$-$\bar q$ pair production vertices from the $\pi^-$ rapidity distribution data obtained by the NA61/SHINE Collaboration in $pp$ collisions at $\sqrt{s}$ = 6.3 to 17.3 GeV.
NASA Astrophysics Data System (ADS)
Liu, Jiangen; Zhang, Yufeng
2018-01-01
This paper gives an analytical study of the dynamic behavior of exact solutions of the nonlinear Korteweg-de Vries equation with space-time local fractional derivatives. By using the improved (G'/G)-expansion method, explicit traveling wave solutions, including periodic solutions, dark soliton solutions, soliton solutions and soliton-like solutions, are obtained for the first time. These solutions help to further understand the underlying physical phenomena. Meanwhile, some solutions are presented through 3D graphs.
Efficient adaptive pseudo-symplectic numerical integration techniques for Landau-Lifshitz dynamics
NASA Astrophysics Data System (ADS)
d'Aquino, M.; Capuano, F.; Coppola, G.; Serpico, C.; Mayergoyz, I. D.
2018-05-01
Numerical time integration schemes for Landau-Lifshitz magnetization dynamics are considered. Such dynamics preserves the magnetization amplitude and, in the absence of dissipation, also implies the conservation of the free energy. This property is generally lost when time discretization is performed for the numerical solution. In this work, explicit numerical schemes based on Runge-Kutta methods are introduced. The schemes are termed pseudo-symplectic in that they are accurate to order p, but preserve magnetization amplitude and free energy to order q > p. An effective strategy for adaptive time-stepping control is discussed for schemes of this class. Numerical tests against analytical solutions for the simulation of fast precessional dynamics are performed in order to point out the effectiveness of the proposed methods.
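The amplitude drift that such schemes are designed to control can be seen with a plain explicit RK4 integration of undamped precession dm/dt = -m x h, which conserves |m| exactly in the continuous problem but only to truncation-error order once discretized. The field, step size, and duration below are illustrative, and this is not the authors' pseudo-symplectic scheme.

```python
import numpy as np

# Illustrative only: explicit RK4 on undamped Landau-Lifshitz precession.
# The continuous dynamics preserves |m| exactly; the discrete scheme
# accumulates a small amplitude drift per step, which is exactly the kind of
# invariant error that pseudo-symplectic integrators are built to suppress.

def rk4_step(m, h, dt):
    f = lambda m: -np.cross(m, h)
    k1 = f(m)
    k2 = f(m + 0.5 * dt * k1)
    k3 = f(m + 0.5 * dt * k2)
    k4 = f(m + dt * k3)
    return m + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

h = np.array([0.0, 0.0, 1.0])          # constant effective field along z
m = np.array([1.0, 0.0, 0.0])          # magnetization precesses in the x-y plane
dt = 0.05
for _ in range(2000):
    m = rk4_step(m, h, dt)
amp_drift = abs(np.linalg.norm(m) - 1.0)
print(amp_drift)                       # small but nonzero amplitude error
```

The drift is tiny per step (high-order in dt) yet strictly nonzero and grows with the number of steps, motivating schemes that preserve the amplitude to higher order than their formal accuracy.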
Improving our legacy: Incorporation of adaptive management into state wildlife action plans
Fontaine, J.J.
2011-01-01
The loss of biodiversity is a mounting concern, but despite numerous attempts there are few large scale conservation efforts that have proven successful in reversing current declines. Given the challenge of biodiversity conservation, there is a need to develop strategic conservation plans that address species declines even with the inherent uncertainty in managing multiple species in complex environments. In 2002, the State Wildlife Grant program was initiated to fulfill this need, and while not explicitly outlined by Congress follows the fundamental premise of adaptive management, 'Learning by doing'. When action is necessary, but basic biological information and an understanding of appropriate management strategies are lacking, adaptive management enables managers to be proactive in spite of uncertainty. However, regardless of the strengths of adaptive management, the development of an effective adaptive management framework is challenging. In a review of 53 State Wildlife Action Plans, I found a keen awareness by planners that adaptive management was an effective method for addressing biodiversity conservation, but the development and incorporation of explicit adaptive management approaches within each plan remained elusive. Only approximately 25% of the plans included a framework for how adaptive management would be implemented at the project level within their state. There was, however, considerable support across plans for further development and implementation of adaptive management. By furthering the incorporation of adaptive management principles in conservation plans and explicitly outlining the decision making process, states will be poised to meet the pending challenges to biodiversity conservation.
Space-time models based on random fields with local interactions
NASA Astrophysics Data System (ADS)
Hristopulos, Dionissios T.; Tsantili, Ivi C.
2016-08-01
The analysis of space-time data from complex, real-life phenomena requires the use of flexible and physically motivated covariance functions. In most cases, it is not possible to explicitly solve the equations of motion for the fields or the respective covariance functions. In the statistical literature, covariance functions are often based on mathematical constructions. In this paper, we propose deriving space-time covariance functions by solving “effective equations of motion”, which can be used as statistical representations of systems with diffusive behavior. In particular, we propose to formulate space-time covariance functions based on an equilibrium effective Hamiltonian using the linear response theory. The effective space-time dynamics is then generated by a stochastic perturbation around the equilibrium point of the classical field Hamiltonian leading to an associated Langevin equation. We employ a Hamiltonian which extends the classical Gaussian field theory by including a curvature term and leads to a diffusive Langevin equation. Finally, we derive new forms of space-time covariance functions.
Estimating the number of people in crowded scenes
NASA Astrophysics Data System (ADS)
Kim, Minjin; Kim, Wonjun; Kim, Changick
2011-01-01
This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps as follows: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on multiple regression. In experimental results, the efficiency and robustness of our proposed method are demonstrated by using the PETS 2009 dataset.
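Step (3), mapping crowd-region features to a count via multiple regression, can be sketched with ordinary least squares. The features and data below are synthetic placeholders, not the PETS 2009 pipeline.

```python
import numpy as np

# Sketch of crowd-count estimation by multiple regression (illustrative):
# regress a person count on simple per-frame features. The feature choices
# (interest-point count, crowd-region area) and the data are hypothetical.

rng = np.random.default_rng(1)
n = 200
points = rng.uniform(50, 500, n)          # interest points per frame
area = rng.uniform(1000, 20000, n)        # crowd-region area in pixels
true_count = 0.04 * points + 0.002 * area + rng.normal(0, 1.0, n)

X = np.column_stack([points, area, np.ones(n)])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, true_count, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - true_count) ** 2))
print(coef, rmse)
```

With enough frames the fitted coefficients recover the underlying feature weights, and the residual error is set by the per-frame noise.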
NASA Technical Reports Server (NTRS)
Bosworth, John T.
2008-01-01
Adaptive flight control systems have the potential to be resilient to extreme changes in airplane behavior. Extreme changes could be a result of a system failure or of damage to the airplane. The goal for the adaptive system is to provide an increase in survivability in the event that these extreme changes occur. A direct adaptive neural-network-based flight control system was developed for the National Aeronautics and Space Administration NF-15B Intelligent Flight Control System airplane. The adaptive element was incorporated into a dynamic inversion controller with explicit reference model-following. As a test the system was subjected to an abrupt change in plant stability simulating a destabilizing failure. Flight evaluations were performed with and without neural network adaptation. The results of these flight tests are presented. Comparison with simulation predictions and analysis of the performance of the adaptation system are discussed. The performance of the adaptation system is assessed in terms of its ability to stabilize the vehicle and reestablish good onboard reference model-following. Flight evaluation with the simulated destabilizing failure and adaptation engaged showed improvement in the vehicle stability margins. The convergent properties of this initial system warrant additional improvement since continued maneuvering caused continued adaptation change. Compared to the non-adaptive system the adaptive system provided better closed-loop behavior with improved matching of the onboard reference model. A detailed discussion of the flight results is presented.
Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure
NASA Technical Reports Server (NTRS)
Clark, Gilbert; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David
2016-01-01
Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting expected application quality of service requirements for multiple simultaneous mission data flows with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, GEO, MEO, or even deep space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, causing them to act in concert to achieve the overall network goals. The system must be intelligent enough not only to function under nominal conditions, but also to adapt to unexpected situations, and to reorganize or adapt to perform roles not originally intended for the system or explicitly programmed. This paper describes an architecture enabling the development and deployment of cognitive networking capabilities into the envisioned future NASA space communications infrastructure. We begin by discussing the need for increased automation, including inter-system discovery and collaboration. This discussion frames the requirements for an architecture supporting cognitive networking for future missions and relays, including both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture, and results of the implementation and initial testing of a cognitive networking on-orbit application on the SCaN Testbed attached to the International Space Station.
Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure
NASA Technical Reports Server (NTRS)
Clark, Gilbert J., III; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David
2016-01-01
Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting expected application quality of service requirements for multiple simultaneous mission data flows with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, MEO, GEO, or even deep space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, causing them to act in concert to achieve the overall network goals. The system must be intelligent enough not only to function under nominal conditions, but also to adapt to unexpected situations, and to reorganize or adapt to perform roles not originally intended for the system or explicitly programmed. This paper describes architecture features of cognitive networking within the future NASA space communications infrastructure, and how it interacts with legacy systems and infrastructure in the interim. The paper begins by discussing the need for increased automation, including inter-system collaboration. This discussion motivates the features of an architecture including cognitive networking for future missions and relays, interoperating with both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture as a cognitive networking on-orbit application on the SCaN Testbed attached to the International Space Station.
NASA Astrophysics Data System (ADS)
Van Londersele, Arne; De Zutter, Daniël; Vande Ginste, Dries
2017-08-01
This work focuses on efficient full-wave solutions of multiscale electromagnetic problems in the time domain. Three local implicitization techniques are proposed and carefully analyzed in order to relax the traditional time step limit of the Finite-Difference Time-Domain (FDTD) method on a nonuniform, staggered, tensor-product grid: Newmark, Crank-Nicolson (CN) and Alternating-Direction-Implicit (ADI) implicitization. All of them are applied in preferable directions, as in Hybrid Implicit-Explicit (HIE) methods, so as to limit the rank of the sparse linear systems. Both exponential and linear stability are rigorously investigated for arbitrary grid spacings and arbitrary inhomogeneous, possibly lossy, isotropic media. Numerical examples confirm the conservation of energy inside a cavity for a million iterations if the time step is chosen below the proposed, relaxed limit. Apart from the theoretical contributions, new accomplishments such as the development of the leapfrog Alternating-Direction-Hybrid-Implicit-Explicit (ADHIE) FDTD method and a less stringent Courant-like time step limit for the conventional, fully explicit FDTD method on a nonuniform grid, have immediate practical applications.
Adaptive control of a Stewart platform-based manipulator
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami S.; Zhou, Zhen-Lei; Campbell, Charles E., Jr.
1993-01-01
A joint-space adaptive control scheme for controlling noncompliant motion of a Stewart platform-based manipulator (SPBM) was implemented in the Hardware Real-Time Emulator at Goddard Space Flight Center. The six-degree-of-freedom SPBM uses two platforms and six linear actuators driven by dc motors. The adaptive control scheme is based on proportional-derivative controllers whose gains are adjusted by an adaptation law based on model reference adaptive control and the Lyapunov direct method. It is concluded that the adaptive control scheme provides superior tracking capability compared to fixed-gain controllers.
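The adaptation law (model reference adaptive control with a Lyapunov-based gain update) can be illustrated on a first-order analogue of a single actuator channel; the plant, reference model, and gain values below are assumptions for illustration, not the SPBM's.

```python
import numpy as np

def mrac_first_order(a=1.0, b=2.0, am=4.0, bm=4.0, gamma=2.0,
                     dt=1e-3, T=20.0):
    """Lyapunov-based MRAC for a first-order plant xdot = a x + b u
    (a, b unknown to the controller), tracking the reference model
    xmdot = -am xm + bm r.  Gains adapt as theta_r' = -gamma e r,
    theta_x' = -gamma e x, with tracking error e = x - xm."""
    n = int(T / dt)
    x = xm = th_r = th_x = 0.0
    err = np.empty(n)
    for i in range(n):
        t = i * dt
        r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference
        u = th_r * r + th_x * x                 # adaptive control law
        e = x - xm
        x += dt * (a * x + b * u)               # plant (Euler step)
        xm += dt * (-am * xm + bm * r)          # reference model
        th_r += dt * (-gamma * e * r)           # Lyapunov adaptation law
        th_x += dt * (-gamma * e * x)
        err[i] = e
    return err, th_r, th_x

err, th_r, th_x = mrac_first_order()
```

As in the abstract, the fixed structure (here a simple feedback law) stays unchanged while the gains are driven by the model-following error; the error shrinks as the gains approach their matching values.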
Adaptability in linkage of soil carbon nutrient cycles - the SEAM model
NASA Astrophysics Data System (ADS)
Wutzler, Thomas; Zaehle, Sönke; Schrumpf, Marion; Ahrens, Bernhard; Reichstein, Markus
2017-04-01
In order to understand the coupling of carbon (C) and nitrogen (N) cycles, it is necessary to understand the C- and N-use efficiencies of microbial soil organic matter (SOM) decomposition. While important controls on those efficiencies by microbial community adaptations have been shown at the scale of a soil pore, an abstract, simplified representation of community adaptations is needed at the ecosystem scale. Therefore, we developed the soil enzyme allocation model (SEAM), which takes a holistic, partly optimality-based approach to describe C and N dynamics at the spatial scale of an ecosystem and time scales of years and longer. We explicitly modelled community adaptation strategies of resource allocation to extracellular enzymes and enzyme limitations on SOM decomposition. Using SEAM, we explored whether alternative strategy hypotheses can have strong effects on SOM and inorganic N cycling. Results from prototypical simulations and a calibration to observations of an intensive pasture site showed that the so-called revenue enzyme allocation strategy was most viable. This strategy accounts for microbial adaptations to both the stoichiometry and the amount of different SOM resources, and supported the largest microbial biomass under a wide range of conditions. Predictions of the SEAM model were qualitatively similar to models explicitly representing competing microbial groups. With adaptive enzyme allocation under conditions of a high C/N ratio of litter inputs, N formerly locked in slowly degrading SOM pools was made accessible, whereas with high N inputs, N was sequestered in SOM and protected from leaching. The finding that adaptation in enzyme allocation changes the C- and N-use efficiencies of SOM decomposition implies that concepts of C-nutrient cycle interactions should account for the effects of such adaptations. This can be done using a holistic optimality approach.
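The "revenue" strategy can be caricatured in a few lines: enzyme shares are allocated in proportion to the decomposition return per unit of enzyme on each SOM pool. This is a toy reduction of SEAM's optimality argument, with assumed pool sizes and decay coefficients, not the model's actual formulation.

```python
import numpy as np

def revenue_allocation(pools, k):
    """Allocate a unit enzyme budget across SOM pools in proportion to
    'revenue': the decomposition return per unit of enzyme on each pool
    (toy reduction of SEAM's revenue strategy; k are assumed pool-specific
    decay coefficients)."""
    revenue = k * pools              # return per unit enzyme on each pool
    return revenue / revenue.sum()   # enzyme shares, summing to one

# Two pools: a large residue pool and a small, equally degradable pool;
# the strategy shifts enzymes toward the pool offering the higher return.
shares = revenue_allocation(np.array([10.0, 1.0]), np.array([0.1, 0.1]))
```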
Evolutionary patterns and processes in the radiation of phyllostomid bats
2011-01-01
Background The phyllostomid bats present the most extensive ecological and phenotypic radiation known among mammal families. This group is an important model system for studies of cranial ecomorphology and functional optimisation because of the constraints imposed by the requirements of flight. A number of studies supporting phyllostomid adaptation have focused on qualitative descriptions or correlating functional variables and diet, but explicit tests of possible evolutionary mechanisms and scenarios for phenotypic diversification have not been performed. We used a combination of morphometric and comparative methods to test hypotheses regarding the evolutionary processes behind the diversification of phenotype (mandible shape and size) and diet during the phyllostomid radiation. Results The different phyllostomid lineages radiate in mandible shape space, with each feeding specialisation evolving towards different axes. Size and shape evolve quite independently, as the main directions of shape variation are associated with mandible elongation (nectarivores) or the relative size of tooth rows and mandibular processes (sanguivores and frugivores), which are not associated with size changes in the mandible. The early period of phyllostomid diversification is marked by a burst of shape, size, and diet disparity (before 20 Mya), larger than expected by neutral evolution models, settling later to a period of relative phenotypic and ecological stasis. The best fitting evolutionary model for both mandible shape and size divergence was an Ornstein-Uhlenbeck process with five adaptive peaks (insectivory, carnivory, sanguivory, nectarivory and frugivory). Conclusions The radiation of phyllostomid bats presented adaptive and non-adaptive components nested together through the time frame of the family's evolution. 
The first 10 My of the radiation were marked by strong phenotypic and ecological divergence among ancestors of modern lineages, whereas the remaining 20 My were marked by stasis around a number of probable adaptive peaks. A considerable amount of cladogenesis and speciation in this period is likely to be the result of non-adaptive allopatric divergence or adaptations to peaks within major dietary categories. PMID:21605452
Evolutionary patterns and processes in the radiation of phyllostomid bats.
Monteiro, Leandro R; Nogueira, Marcelo R
2011-05-23
The phyllostomid bats present the most extensive ecological and phenotypic radiation known among mammal families. This group is an important model system for studies of cranial ecomorphology and functional optimisation because of the constraints imposed by the requirements of flight. A number of studies supporting phyllostomid adaptation have focused on qualitative descriptions or correlating functional variables and diet, but explicit tests of possible evolutionary mechanisms and scenarios for phenotypic diversification have not been performed. We used a combination of morphometric and comparative methods to test hypotheses regarding the evolutionary processes behind the diversification of phenotype (mandible shape and size) and diet during the phyllostomid radiation. The different phyllostomid lineages radiate in mandible shape space, with each feeding specialisation evolving towards different axes. Size and shape evolve quite independently, as the main directions of shape variation are associated with mandible elongation (nectarivores) or the relative size of tooth rows and mandibular processes (sanguivores and frugivores), which are not associated with size changes in the mandible. The early period of phyllostomid diversification is marked by a burst of shape, size, and diet disparity (before 20 Mya), larger than expected by neutral evolution models, settling later to a period of relative phenotypic and ecological stasis. The best fitting evolutionary model for both mandible shape and size divergence was an Ornstein-Uhlenbeck process with five adaptive peaks (insectivory, carnivory, sanguivory, nectarivory and frugivory). The radiation of phyllostomid bats presented adaptive and non-adaptive components nested together through the time frame of the family's evolution. 
The first 10 My of the radiation were marked by strong phenotypic and ecological divergence among ancestors of modern lineages, whereas the remaining 20 My were marked by stasis around a number of probable adaptive peaks. A considerable amount of cladogenesis and speciation in this period is likely to be the result of non-adaptive allopatric divergence or adaptations to peaks within major dietary categories.
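The best-fitting model in both records, an Ornstein-Uhlenbeck process pulled toward adaptive peaks, can be sketched for a single peak; the peak position, pull strength, and noise level below are illustrative assumptions, not values fitted to the bat data.

```python
import numpy as np

def simulate_ou_trait(theta_opt, alpha, sigma, dt, n_steps, x0, rng):
    """Ornstein-Uhlenbeck trait evolution toward one adaptive peak:
    dX = alpha (theta_opt - X) dt + sigma dW.  Unlike Brownian motion,
    the trait settles into stasis around the optimum theta_opt."""
    x, xs = x0, np.empty(n_steps)
    for i in range(n_steps):
        x += alpha * (theta_opt - x) * dt + sigma * np.sqrt(dt) * rng.normal()
        xs[i] = x
    return xs

xs = simulate_ou_trait(2.0, 1.0, 0.2, 0.01, 20000, 0.0, np.random.default_rng(42))
```

A five-peak version simply assigns each lineage its own theta_opt (one per dietary regime), which is how multi-optimum OU models of the kind fitted here are usually set up.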
Orlando, Paul A; Gatenby, Robert A; Brown, Joel S
2013-01-01
We apply competition-colonization tradeoff models to tumor growth and invasion dynamics to explore the hypothesis that varying selection forces will result in predictable phenotypic differences in cells at the tumor invasive front compared to those in the core. Spatially, ecologically, and evolutionarily explicit partial differential equation models of tumor growth confirm that spatial invasion produces selection pressure for motile phenotypes. The effects of the invasive phenotype on normal adjacent tissue determine the patterns of growth and phenotype distribution. If tumor cells do not destroy their environment, colonizer and competitive phenotypes coexist, with the former localized at the invasion front and the latter in the tumor interior. If tumor cells do destroy their environment, then cell motility is strongly selected, resulting in accelerated invasion speed over time. Our results suggest that the widely observed genetic heterogeneity within cancers may not be the stochastic effect of random mutations. Rather, it may be the consequence of predictable variations in environmental selection forces and corresponding phenotypic adaptations.
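A minimal 1-D reaction-diffusion caricature of the competition-colonization tradeoff (not the authors' full PDE model; all rates are assumptions) reproduces the claimed pattern: the motile "colonizer" clone dominates the invasion front while both clones share the saturated core.

```python
import numpy as np

def lap(u):
    """1-D periodic Laplacian with dx = 1."""
    return np.roll(u, 1) + np.roll(u, -1) - 2 * u

# Colonizer clone u: high diffusion, lower growth rate.
# Competitive clone v: low diffusion, higher growth rate.
D1, r1 = 1.0, 0.5
D2, r2 = 0.05, 1.0
n, dt, steps = 201, 0.2, 300
u, v = np.zeros(n), np.zeros(n)
c = n // 2
u[c - 1:c + 2] = 0.5                 # both clones seeded at the center
v[c - 1:c + 2] = 0.5
for _ in range(steps):
    space = 1.0 - u - v              # shared carrying capacity
    u = u + dt * (D1 * lap(u) + r1 * u * space)
    v = v + dt * (D2 * lap(v) + r2 * v * space)
```

With these assumed rates, the Fisher-type front speed 2*sqrt(D*r) is about three times higher for the colonizer, so positions well ahead of the core are reached by u long before v.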
Helicopter time-domain electromagnetic numerical simulation based on Leapfrog ADI-FDTD
NASA Astrophysics Data System (ADS)
Guan, S.; Ji, Y.; Li, D.; Wu, Y.; Wang, A.
2017-12-01
We present a three-dimensional (3D) leapfrog Alternating Direction Implicit Finite-Difference Time-Domain (leapfrog ADI-FDTD) method for the simulation of helicopter time-domain electromagnetic (HTEM) detection. This method differs from both the traditional explicit FDTD and the ADI-FDTD. Compared with explicit FDTD, the leapfrog ADI-FDTD algorithm is no longer limited by the Courant-Friedrichs-Lewy (CFL) condition, so the time step can be longer. Compared with ADI-FDTD, we reduce the number of update equations from 12 to 6, which makes the leapfrog ADI-FDTD method easier to use in general simulations. First, we determine initial conditions, adopted from the existing method presented by Wang and Tripp (1993). Second, we discretize the Maxwell equations with new finite-difference equations following the leapfrog ADI-FDTD method; the purpose is to eliminate the sub-time step while retaining unconditional stability. Third, we add the convolutional perfectly matched layer (CPML) absorbing boundary condition to the leapfrog ADI-FDTD simulation and study the absorbing effect of different parameters; since different parameters affect the absorbing ability, we found suitable parameters after many numerical experiments. Fourth, we compare the response with a 1-D numerical result for a homogeneous half-space to verify the correctness of our algorithm. For a model containing 107*107*53 grid points with a conductivity of 0.05 S/m, the results show that leapfrog ADI-FDTD needs less simulation time and computer storage space than ADI-FDTD: computation time decreases nearly fourfold and memory occupation decreases by about 32.5%. Thus, this algorithm is more efficient than the conventional ADI-FDTD method for HTEM detection, and is more precise than explicit FDTD at late times.
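The time step advantage is easy to quantify: the conventional explicit FDTD step is bounded by the Courant-Friedrichs-Lewy limit, while an unconditionally stable scheme can take a larger step and thus fewer iterations over the same simulated interval. The grid spacing and the 4x implicit step below are assumptions chosen for illustration.

```python
import math

def fdtd_cfl_dt(dx, dy, dz, c=3.0e8):
    """Courant (CFL) time-step limit of the conventional explicit FDTD
    scheme on a uniform Cartesian grid (free-space wave speed c)."""
    return 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

dx = dy = dz = 10.0                        # assumed grid spacing in metres
dt_cfl = fdtd_cfl_dt(dx, dy, dz)
dt_implicit = 4.0 * dt_cfl                 # e.g. a 4x larger implicit step
steps_explicit = round(1e-6 / dt_cfl)      # steps to cover 1 microsecond
steps_implicit = round(1e-6 / dt_implicit)
```

A 4x step reduction in iteration count is consistent with the roughly fourfold runtime improvement the abstract reports, assuming the per-step cost stays comparable.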
NASA Astrophysics Data System (ADS)
Santos, Léonard; Thirel, Guillaume; Perrin, Charles
2018-04-01
In many conceptual rainfall-runoff models, the water balance differential equations are not explicitly formulated. These differential equations are solved sequentially by splitting them into terms that can be solved analytically, a technique called operator splitting. As a result, only the solutions of the split equations are used to present the different models. This article provides a methodology to make the governing water balance equations of a bucket-type rainfall-runoff model explicit and to solve them continuously. This is done by setting up a comprehensive state-space representation of the model. Representing the model in this way removes the operator splitting, which complicates the structural analysis of the model. In this state-space representation, the lag functions (unit hydrographs), which are frequent in rainfall-runoff models and make the resolution of the representation difficult, are first replaced by a so-called Nash cascade and then solved with a robust numerical integration technique. To illustrate this methodology, the GR4J model is taken as an example. The substitution of the unit hydrographs with a Nash cascade, even though it modifies the model behaviour when solved using operator splitting, does not modify it when the state-space representation is solved using an implicit integration technique. Indeed, the flow time series simulated by the new representation of the model are very similar to those simulated by the classic model. The use of a robust numerical technique that approximates a continuous-time model also improves the lag parameter consistency across time steps and provides a more time-consistent model with time-independent parameters.
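The Nash cascade that replaces the unit hydrograph is a chain of linear reservoirs, and solving its state-space form with an implicit integrator preserves mass exactly. The sketch below (assumed rate constant, time step, and pulse input; not the GR4J code) takes implicit-Euler steps of the cascade ODEs and checks the water balance.

```python
import numpy as np

def nash_cascade_step(S, u, k, dt):
    """One implicit-Euler step of a Nash cascade of linear reservoirs:
    dS1/dt = u - k S1,   dSi/dt = k S(i-1) - k Si.
    Returns the new storages and the cascade outflow k * S_n."""
    n = len(S)
    A = -k * np.eye(n) + k * np.eye(n, k=-1)     # cascade system matrix
    b = np.eye(n)[:, 0]                          # input enters reservoir 1
    S_new = np.linalg.solve(np.eye(n) - dt * A, S + dt * u * b)
    return S_new, k * S_new[-1]

k, dt, n = 0.3, 1.0, 3
S = np.zeros(n)
total_in = total_out = 0.0
for t in range(200):
    u = 1.0 if t < 10 else 0.0       # 10-unit rainfall pulse, then recession
    S, q = nash_cascade_step(S, u, k, dt)
    total_in += dt * u
    total_out += dt * q
```

Because the implicit step solves the linear system exactly, the discrete water balance (input = outflow + storage) closes to machine precision at every time step, one of the consistency properties the state-space reformulation is after.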
Brownian motion with adaptive drift for remaining useful life prediction: Revisited
NASA Astrophysics Data System (ADS)
Wang, Dong; Tsui, Kwok-Leung
2018-01-01
Linear Brownian motion with constant drift is widely used in remaining useful life predictions because its first hitting time follows the inverse Gaussian distribution. State-space modelling of linear Brownian motion was proposed to make the drift coefficient adaptive and to incorporate on-line measurements into the first hitting time distribution. Here, the drift coefficient followed a Gaussian distribution, and it was iteratively estimated using Kalman filtering whenever a new measurement became available. Then, to model nonlinear degradation, linear Brownian motion with adaptive drift was extended to nonlinear Brownian motion with adaptive drift. However, in previous studies, an underlying assumption in the state-space modelling was that, in the update phase of Kalman filtering, the predicted drift coefficient at the current time exactly equalled the posterior drift coefficient estimated at the previous time, which contradicts a predicted drift coefficient evolution driven by additive Gaussian process noise. In this paper, to relax this underlying assumption, a new state-space model is constructed. As a result, in the update phase of Kalman filtering, the predicted drift coefficient at the current time evolves from the posterior drift coefficient at the previous time. Moreover, the optimal Kalman filtering gain for iteratively estimating the posterior drift coefficient at any time is mathematically derived. A discussion that theoretically explains why the constructed state-space model can achieve high remaining useful life prediction accuracy is provided. Finally, the proposed state-space model and its associated Kalman filtering gain are applied to battery prognostics.
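In the linear case, the revisited model reduces to a scalar Kalman filter in which the drift state evolves as a random walk between measurements, so the predicted drift at the current time genuinely evolves from the previous posterior. The sketch below is a generic illustration with assumed noise variances, not the paper's derivation.

```python
import numpy as np

def drift_kalman(dx, dt, q, r, eta0=0.0, p0=1.0):
    """Scalar Kalman filter for the drift eta of Brownian motion with
    adaptive drift: state eta_k = eta_{k-1} + w (process noise var q),
    measured increment dx_k = eta_k * dt + v (measurement noise var r)."""
    eta, p = eta0, p0
    for z in dx:
        # predict: the drift evolves from the previous posterior
        p = p + q
        # update with the new increment z
        k_gain = p * dt / (dt**2 * p + r)
        eta = eta + k_gain * (z - eta * dt)
        p = (1.0 - k_gain * dt) * p
    return eta, p

rng = np.random.default_rng(1)
true_eta, dt, sig = 2.0, 0.1, 0.05
dx = true_eta * dt + sig * np.sqrt(dt) * rng.normal(size=500)
eta_hat, p_hat = drift_kalman(dx, dt, q=1e-6, r=sig**2 * dt)
```

The posterior drift then feeds the inverse Gaussian first-hitting-time distribution to produce the remaining-useful-life estimate.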
Coherent Multimodal Sensory Information Allows Switching between Gravitoinertial Contexts
Barbiero, Marie; Rousseau, Célia; Papaxanthis, Charalambos; White, Olivier
2017-01-01
Whether the central nervous system is capable of switching between contexts critically depends on experimental details. Motor control studies regularly adopt robotic devices to perturb the dynamics of a certain task. Other approaches investigate motor control by altering the gravitoinertial context itself, as in parabolic flights and human centrifuges. In contrast to conventional robotic experiments, where only the hand is perturbed, these gravitoinertial or immersive settings coherently plunge participants into new environments. However radically different they are, perfect adaptation of motor responses is commonly reported. In object manipulation tasks, this translates into a good matching of the grasping force, or grip force, to the destabilizing load force. One possible bias in these protocols is the predictability of the forthcoming dynamics. Here we test whether the successful switching and adaptation processes observed in immersive environments are a consequence of the fact that participants can predict the perturbation schedule. We used a short-arm human centrifuge to decouple the effects of space and time on the dynamics of an object manipulation task by adding an unnatural, explicitly position-dependent force. We created different dynamical contexts by asking 20 participants to move the object at three different paces. These contextual sessions were interleaved such that we could simulate concurrent learning. We assessed adaptation by measuring how grip force was adjusted to this unnatural load force. We found that the motor system can switch between new, unusual dynamical contexts, as reported by surprisingly well-adjusted grip forces, and that this capacity is not a mere consequence of the ability to predict the time course of the upcoming dynamics. We posit that a coherent flow of multimodal sensory information born in a homogeneous milieu allows switching between dynamical contexts. PMID:28553233
Higher order explicit symmetric integrators for inseparable forms of coordinates and momenta
NASA Astrophysics Data System (ADS)
Liu, Lei; Wu, Xin; Huang, Guoqing; Liu, Fuyao
2016-06-01
Pihajoki proposed the extended phase-space second-order explicit symmetric leapfrog methods for inseparable Hamiltonian systems. On the basis of this work, we survey a critical problem on how to mix the variables in the extended phase space. Numerical tests show that sequential permutations of coordinates and momenta can make the leapfrog-like methods yield the most accurate results and the optimal long-term stabilized error behaviour. We also present a novel method to construct many fourth-order extended phase-space explicit symmetric integration schemes. Each scheme represents the symmetric product of six standard second-order leapfrogs without any permutations. This construction consists of four segments: the permuted coordinates, the triple product of the standard second-order leapfrog without permutations, the permuted momenta, and the triple product of the standard second-order leapfrog without permutations. Similarly, extended phase-space sixth-, eighth- and other higher-order explicit symmetric algorithms are available. We used several inseparable Hamiltonian examples, such as the post-Newtonian approach of non-spinning compact binaries, to show that one of the proposed fourth-order methods is more efficient than the existing methods; examples include the fourth-order explicit symplectic integrators of Chin and the fourth-order explicit and implicit mixed symplectic integrators of Zhong et al. Given a moderate choice for the related mixing and projection maps, the extended phase-space explicit symplectic-like methods are well suited for various inseparable Hamiltonian problems. Examples of such problems include the algorithmic regularization of gravitational systems with velocity-dependent perturbations in the Solar system and post-Newtonian Hamiltonian formulations of spinning compact objects.
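Pihajoki's extended phase-space construction duplicates the variables into (q, p) and (Q, P) and splits the extended Hamiltonian H~ = H(q, P) + H(Q, p) into two exactly solvable halves. The sketch below implements one second-order leapfrog step of this kind (without the permutations or mixing maps discussed above) on the illustrative inseparable Hamiltonian H = (q² + 1)(p² + 1)/2, an assumed toy example chosen for brevity.

```python
def H(q, p):
    """Illustrative inseparable Hamiltonian H = (q^2 + 1)(p^2 + 1) / 2."""
    return 0.5 * (q**2 + 1.0) * (p**2 + 1.0)

def dHdq(q, p):
    return q * (p**2 + 1.0)

def dHdp(q, p):
    return p * (q**2 + 1.0)

def ext_leapfrog(q, p, Q, P, dt):
    """One extended phase-space leapfrog step for H~ = H(q, P) + H(Q, p).
    Each sub-Hamiltonian is exactly solvable because the variables it
    depends on stay frozen during its own flow."""
    # half step of H(q, P): q, P frozen; p and Q evolve
    p -= 0.5 * dt * dHdq(q, P)
    Q += 0.5 * dt * dHdp(q, P)
    # full step of H(Q, p): Q, p frozen; q and P evolve
    q += dt * dHdp(Q, p)
    P -= dt * dHdq(Q, p)
    # second half step of H(q, P)
    p -= 0.5 * dt * dHdq(q, P)
    Q += 0.5 * dt * dHdp(q, P)
    return q, p, Q, P

q = Q = 0.5
p = P = 0.0
E0 = H(q, P) + H(Q, p)          # extended energy, conserved up to O(dt^2)
for _ in range(1000):
    q, p, Q, P = ext_leapfrog(q, p, Q, P, 0.01)
```

The permutation and mixing maps studied in the abstract address exactly the residual drift between the two copies (q, p) and (Q, P) that this bare scheme leaves uncontrolled.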
Detection and Imaging of Moving Targets with LiMIT SAR Data
2017-03-03
include space-time adaptive processing (STAP) or displaced phase center antenna (DPCA) [4]–[7]. Page et al. combined constant acceleration target ... motion focusing with space-time adaptive processing (STAP), and included the refocusing parameters in the STAP steering vector. Due to inhomogeneous ... wavelength λ and slow time t, of a moving target after matched filter and passband equalization processing can be expressed as: P(t) = exp(−j(4π/λ) ||r⃗p ...
BOOK REVIEW: Advanced Mechanics and General Relativity Advanced Mechanics and General Relativity
NASA Astrophysics Data System (ADS)
Louko, Jorma
2011-04-01
Joel Franklin's textbook `Advanced Mechanics and General Relativity' comprises two partially overlapping, partially complementary introductory paths into general relativity at advanced undergraduate level. Path I starts with the Lagrangian and Hamiltonian formulations of Newtonian point particle motion, emphasising the action principle and the connection between symmetries and conservation laws. The concepts are then adapted to point particle motion in Minkowski space, introducing Lorentz transformations as symmetries of the action. There follows a focused development of tensor calculus, parallel transport and curvature, using examples from Newtonian mechanics and special relativity, culminating in the field equations of general relativity. The Schwarzschild solution is analysed, including a detailed discussion of the tidal forces on a radially infalling observer. Basics of gravitational radiation are examined, highlighting the similarities to and differences from electromagnetic radiation. The final topics in Path I are equatorial geodesics in Kerr and the motion of a relativistic string in Minkowski space. Path II starts by introducing scalar field theory on Minkowski space as a limit of point masses connected by springs, emphasising the action principle, conservation laws and the energy-momentum tensor. The action principle for electromagnetism is introduced, and the coupling of electromagnetism to a complex scalar field is developed in a detailed and pedagogical fashion. A free symmetric second-rank tensor field on Minkowski space is introduced, and the action principle of general relativity is recovered from coupling the second-rank tensor to its own energy-momentum tensor. Path II then merges with Path I and, supplemented with judicious early selections from Path I, can proceed to the Schwarzschild solution. The choice of material in each path is logical and focused. 
A notable example in Path I is that Lorentz transformations in Minkowski space are introduced efficiently and with a minimum of fuss, as symmetries of a geodesic action principle. Another example is a similarly efficient and hands-on introduction of Killing vectors. A consequence of this focus is that some perhaps traditional material is omitted. For example, Lorentz contraction appears briefly in the incompatibility discussion of special relativity and Newtonian gravity but is not introduced in a more systematic manner. The style is informal and very readable, with detailed explanations, frequent summaries of what has been achieved and pointers to what is about to follow. There are plenty of examples and some 150 well-chosen exercises, and the author's website hosts relevant Maple sample scripts for tensor manipulations and variational problems. The text conveys an enthusiasm for explaining the subject, frequently reminiscent of the Feynman lectures. The presentation emphasises explicit calculations and examples, largely avoiding technical definitions of abstract mathematical concepts. The author negotiates the challenge between readability and technical accuracy with admirable skill, striking a balance that will be much appreciated by the target audience. For example, the notion of spherical symmetry in curved spacetime is introduced informally as a generalisation of a spherically symmetric vector field in Minkowski space, and spherically symmetric vacuum and electrovacuum solutions are then carefully discussed so that a formal definition of spherical symmetry is not required. A rare instance that may border on oversimplification is the brief discussion of curvature scalars versus spacetime singularities. Towards the end of the book, the text mentions with increasing explicitness that inserting a gauge condition or an ansatz in an action before varying may not always give the correct equations of motion. 
It would be useful to be more explicit about this point already earlier in the book. In particular, the text refers to the reparametrisation-invariant square root action of a relativistic point particle as being `in proper time parametrisation', while the actual calculations of course impose the proper time condition only in the equation of motion after the action has been varied. Two presentational conventions surprised me. First, the speed of light is throughout kept explicitly as c: might advanced undergraduates appreciate being trusted with geometric units, reinstating c by dimensional analysis when desired? Second, in Minkowski space field theory, the overall coefficient in the action is chosen so that the time derivative term is negative, with the consequence that the Hamiltonian is negative (as explicitly noted in an exercise) and the definition of the energy-momentum tensor must include a minus sign to achieve the usual choice T00 > 0. This convention eliminates some minus signs in the computations with the spin two field: does this computational saving outweigh the adjustment awaiting those who continue with the topic at graduate level? Overall, Franklin's book is an excellent addition to the literature, and its readability and explicitness will be appreciated by the target audience. Should I be teaching an introductory undergraduate class in general relativity in the near future, I would seriously consider this book for the main class text.
Flexible explicit but rigid implicit learning in a visuomotor adaptation task
Bond, Krista M.
2015-01-01
There is mounting evidence for the idea that performance in a visuomotor rotation task can be supported by both implicit and explicit forms of learning. The implicit component of learning has been well characterized in previous experiments and is thought to arise from the adaptation of an internal model driven by sensorimotor prediction errors. However, the role of explicit learning is less clear, and previous investigations aimed at characterizing the explicit component have relied on indirect measures such as dual-task manipulations, posttests, and descriptive computational models. To address this problem, we developed a new method for directly assaying explicit learning by having participants verbally report their intended aiming direction on each trial. While our previous research employing this method has demonstrated the possibility of measuring explicit learning over the course of training, it was only tested over a limited scope of manipulations common to visuomotor rotation tasks. In the present study, we sought to better characterize explicit and implicit learning over a wider range of task conditions. We tested how explicit and implicit learning change as a function of the specific visual landmarks used to probe explicit learning, the number of training targets, and the size of the rotation. We found that explicit learning was remarkably flexible, responding appropriately to task demands. In contrast, implicit learning was strikingly rigid, with each task condition producing a similar degree of implicit learning. These results suggest that explicit learning is a fundamental component of motor learning and has been overlooked or conflated in previous visuomotor tasks. PMID:25855690
A DYNAMIC MODEL OF AN ESTUARINE INVASION BY A NON-NATIVE SEAGRASS
Mathematical and simulation models provide an excellent tool for examining and predicting biological invasions in time and space; however, traditional models do not incorporate dynamic rates of population growth, which limits their realism. We developed a spatially explicit simul...
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Karel, S.
1975-01-01
An algorithm for solving the nonlinear stationary Navier-Stokes problem is developed. Explicit error estimates are given. This mathematical technique is potentially adaptable to the separation problem.
Configuration of the thermal landscape determines thermoregulatory performance of ectotherms
Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.
2016-01-01
Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639
Some aspects of algorithm performance and modeling in transient analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1981-01-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of implicit algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems, such as insulated metal structures).
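The stiffness issue behind that preference can be sketched with a toy decay equation (an illustration of the general explicit-vs-implicit trade-off, not of the GEAR package itself; all parameter values are illustrative): for du/dt = -λu, forward Euler is stable only for Δt < 2/λ, while backward Euler is unconditionally stable.

```python
# Toy illustration (not the GEAR package): stiff decay du/dt = -lam*u, u(0)=1.
# Forward (explicit) Euler is stable only for dt < 2/lam; backward (implicit)
# Euler is unconditionally stable, mirroring the preference for implicit
# methods on stiff heat-transfer problems.

def forward_euler(lam, dt, steps, u0=1.0):
    u = u0
    for _ in range(steps):
        u = u + dt * (-lam * u)        # explicit update
    return u

def backward_euler(lam, dt, steps, u0=1.0):
    u = u0
    for _ in range(steps):
        u = u / (1.0 + lam * dt)       # implicit update, solved in closed form
    return u

lam, dt, steps = 1000.0, 0.01, 100      # dt is 5x the explicit limit 2/lam
explicit_u = forward_euler(lam, dt, steps)   # blows up
implicit_u = backward_euler(lam, dt, steps)  # decays, as the exact solution does
```

With these values the explicit amplification factor is 1 - λΔt = -9 per step, so the explicit solution grows without bound while the implicit one decays monotonically.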
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dang Van; NeuroSpin, Bat145, Point Courrier 156, CEA Saclay Center, 91191 Gif-sur-Yvette Cedex; Li, Jing-Rebecca, E-mail: jingrebecca.li@inria.fr
2014-04-15
The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch–Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite element method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch–Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge–Kutta–Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
NASA Astrophysics Data System (ADS)
de Almeida, Valmor F.
2017-07-01
A phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. This approach applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
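The event-driven idea can be sketched for a leaky integrate-and-fire neuron with a single exponential synaptic current: the membrane potential between spikes has a closed form, and the spike time is the first threshold crossing. The sketch below locates that crossing by bisection, a stand-in for the paper's polynomial root finding; all parameter values are illustrative.

```python
import math

# Toy event-driven spike-time computation for a leaky integrate-and-fire
# neuron driven by one exponential synaptic current (illustrative parameters;
# bisection stands in for the paper's polynomial root finding).
#
# dV/dt = -V/tau_m + I0 * exp(-t/tau_s),  V(0) = 0.
# Closed form: V(t) = (V0 - A) exp(-t/tau_m) + A exp(-t/tau_s),
# with A = I0 / (1/tau_m - 1/tau_s).

tau_m, tau_s, I0, theta = 20.0, 5.0, 2.0, 1.0
A = I0 / (1.0 / tau_m - 1.0 / tau_s)

def V(t, V0=0.0):
    """Membrane potential between spikes (explicit expression)."""
    return (V0 - A) * math.exp(-t / tau_m) + A * math.exp(-t / tau_s)

def spike_time(t_lo=0.0, t_hi=None, tol=1e-10):
    """First threshold crossing, located by bisection on [t_lo, t_hi]."""
    if t_hi is None:
        # the potential peaks where dV/dt = 0; the crossing lies before it
        t_hi = math.log(tau_m / tau_s) / (1.0 / tau_s - 1.0 / tau_m)
    assert V(t_lo) < theta < V(t_hi)          # crossing is bracketed
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if V(mid) < theta:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

t_spike = spike_time()
```

Because the state is advanced analytically from spike to spike, no accuracy is lost to time discretization between events.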
Sampling-free Bayesian inversion with adaptive hierarchical tensor representations
NASA Astrophysics Data System (ADS)
Eigel, Martin; Marschall, Manuel; Schneider, Reinhold
2018-03-01
A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the ‘curse of dimensionality’. Numerical experiments demonstrate the performance and confirm the theoretical results.
A Comparison of Three Programming Models for Adaptive Applications
NASA Technical Reports Server (NTRS)
Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak; Kwak, Dochan (Technical Monitor)
2000-01-01
We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the use of explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.
NASA Technical Reports Server (NTRS)
Griffin, Brian Joseph; Burken, John J.; Xargay, Enric
2010-01-01
This paper presents an L1 adaptive control augmentation system design for multi-input multi-output nonlinear systems in the presence of unmatched uncertainties which may exhibit significant cross-coupling effects. A piecewise continuous adaptive law that explicitly compensates for dynamic cross-coupling is adopted and extended for applicability to multi-input multi-output systems. In addition, explicit use of high-fidelity actuator models is added to the L1 architecture to reduce uncertainties in the system. The L1 multi-input multi-output adaptive control architecture is applied to the X-29 lateral/directional dynamics and results are evaluated against a similar single-input single-output design approach.
Space-time adaptive solution of inverse problems with the discrete adjoint method
NASA Astrophysics Data System (ADS)
Alexe, Mihai; Sandu, Adrian
2014-08-01
This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
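The consistency property that makes discrete adjoints attractive can be sketched on a scalar forward-Euler model, with a hand-derived reverse sweep standing in for an automatic differentiation tool: the adjoint gradient matches a finite-difference gradient of the discrete cost essentially to round-off.

```python
# Minimal discrete-adjoint sketch (hand-derived reverse sweep in place of AD):
# forward Euler for du/dt = -u^2, cost J = 0.5 * u_N^2, and the gradient
# dJ/du0 computed by the adjoint recursion, checked below against finite
# differences of the *discrete* model.

dt, nsteps = 0.01, 100

def forward(u0):
    """Forward Euler sweep; returns the whole trajectory u_0 .. u_N."""
    traj = [u0]
    for _ in range(nsteps):
        u = traj[-1]
        traj.append(u + dt * (-u * u))
    return traj

def cost(u0):
    return 0.5 * forward(u0)[-1] ** 2

def adjoint_gradient(u0):
    """Reverse sweep: lam_N = u_N, then lam_n = lam_{n+1} * d u_{n+1}/d u_n."""
    traj = forward(u0)
    lam = traj[-1]                       # dJ/du_N
    for u in reversed(traj[:-1]):
        lam *= 1.0 + dt * (-2.0 * u)     # Jacobian of one Euler step
    return lam

u0 = 1.0
g_adj = adjoint_gradient(u0)
eps = 1e-6
g_fd = (cost(u0 + eps) - cost(u0 - eps)) / (2.0 * eps)
```

The agreement holds because the adjoint differentiates the time-stepping scheme itself, which is exactly the "discrete adjoint" property the paper builds on for adaptive discretizations.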
NASA Astrophysics Data System (ADS)
Simoni, L.; Secchi, S.; Schrefler, B. A.
2008-12-01
This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy apt to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is the main focus here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station; these constitute the parameters allowing for the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, firstly to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method proves efficient and simple in adapting the time step for the solution of coupled field problems.
1981-02-01
converting Bu's to Bal's ... Case 2: P_a,cc(t) = 0.25 + 0.075t, P_a,A(t) = 1.0. In this case a space (and hence time) varying representation of the attrition rate ... Attrition rates can be made time- (space-) dependent. Note that the attrition law is assumed, for illustration only, to be in accordance with ... Systems Research Lab., Dept. of Industrial Eng., University of Michigan. Gaver, D.P. and Tonguc, K. (1979) "Modelling the influence of information on ...
Reducing uncertainty about objective functions in adaptive management
Williams, B.K.
2012-01-01
This paper extends the uncertainty framework of adaptive management to include uncertainty about the objectives to be used in guiding decisions. Adaptive decision making typically assumes explicit and agreed-upon objectives for management, but allows for uncertainty as to the structure of the decision process that generates change through time. Yet it is not unusual for there to be uncertainty (or disagreement) about objectives, with different stakeholders expressing different views not only about resource responses to management but also about the appropriate management objectives. In this paper I extend the treatment of uncertainty in adaptive management, and describe a stochastic structure for the joint occurrence of uncertainty about objectives as well as models, and show how adaptive decision making and the assessment of post-decision monitoring data can be used to reduce uncertainties of both kinds. Different degrees of association between model and objective uncertainty lead to different patterns of learning about objectives. © 2011.
Communication: Adaptive boundaries in multiscale simulations
NASA Astrophysics Data System (ADS)
Wagoner, Jason A.; Pande, Vijay S.
2018-04-01
Combined-resolution simulations are an effective way to study molecular properties across a range of length and time scales. These simulations can benefit from adaptive boundaries that allow the high-resolution region to adapt (change size and/or shape) as the simulation progresses. The number of degrees of freedom required to accurately represent even a simple molecular process can vary by several orders of magnitude throughout the course of a simulation, and adaptive boundaries react to these changes to include an appropriate but not excessive amount of detail. Here, we derive the Hamiltonian and distribution function for such a molecular simulation. We also design an algorithm that can efficiently sample the boundary as a new coordinate of the system. We apply this framework to a mixed explicit/continuum simulation of a peptide in solvent. We use this example to discuss the conditions necessary for a successful implementation of adaptive boundaries that is both efficient and accurate in reproducing molecular properties.
"Leadership" and the Social: Time, Space and the Epistemic
ERIC Educational Resources Information Center
Eacott, Scott
2013-01-01
Purpose: "Leadership" is arguably the central concept of interest in contemporary scholarship on educational administration. Within this scholarly discourse, there is an explicit assumption that leadership is a "real" phenomenon that is not only important, but also necessary for educational institutions. However, few scholars…
The Euler-Poisson-Darboux equation for relativists
NASA Astrophysics Data System (ADS)
Stewart, John M.
2009-09-01
The Euler-Poisson-Darboux (EPD) equation is the simplest linear hyperbolic equation in two independent variables whose coefficients exhibit singularities, and as such must be of interest as a paradigm to relativists. Sadly it receives scant treatment in the textbooks. The first half of this review is didactic in nature. It discusses in the simplest terms possible the nature of solutions of the EPD equation for the timelike and spacelike singularity cases. Also covered is the Riemann representation of solutions of the characteristic initial value problem, which is hard to find in the literature. The second half examines a few of the possible applications, ranging from explicit computation of the leading terms in the far-field backscatter from predominantly outgoing radiation in a Schwarzschild space-time, to computing explicitly the leading terms in the matter-induced singularities in plane symmetric space-times. There are of course many other applications and the aim of this article is to encourage relativists to investigate this underrated paradigm.
Wave energy transfer in elastic half-spaces with soft interlayers.
Glushkov, Evgeny; Glushkova, Natalia; Fomenko, Sergey
2015-04-01
The paper deals with guided waves generated by a surface load in a coated elastic half-space. The analysis is based on the explicit integral and asymptotic expressions derived in terms of Green's matrix and given loads for both laminate and functionally graded substrates. To perform the energy analysis, explicit expressions for the time-averaged amount of energy transferred in the time-harmonic wave field by every excited guided or body wave through horizontal planes and lateral cylindrical surfaces have also been derived. The study is focused on the peculiarities of wave energy transmission in substrates with soft interlayers that serve as internal channels for the excited guided waves. The notable features of the source energy partitioning in such media are the domination of a single emerging mode in each consecutive frequency subrange and the appearance of reverse energy fluxes at certain frequencies. These effects as well as modal and spatial distribution of the wave energy coming from the source into the substructure are numerically analyzed and discussed.
Explicit and implicit processes in behavioural adaptation to road width.
Lewis-Evans, Ben; Charlton, Samuel G
2006-05-01
The finding that drivers may react to safety interventions in a way that is contrary to what was intended is the phenomenon of behavioural adaptation. This phenomenon has been demonstrated across various safety interventions and has serious implications for road safety programs the world over. The present research used a driving simulator to assess behavioural adaptation in drivers' speed and lateral displacement in response to manipulations of road width. Of interest was whether behavioural adaptation would occur and whether we could determine whether it was the result of explicit, conscious decisions or implicit perceptual processes. The results supported an implicit, zero perceived risk model of behavioural adaptation with reduced speeds on a narrowed road accompanied by increased ratings of risk and a marked inability of the participants to identify that any change in road width had occurred.
Semi-implicit time integration of atmospheric flows with characteristic-based flux partitioning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Debojyoti; Constantinescu, Emil M.
2016-06-23
This paper presents a characteristic-based flux partitioning for the semi-implicit time integration of atmospheric flows. Nonhydrostatic models require the solution of the compressible Euler equations. The acoustic time scale is significantly faster than the advective scale, yet it is typically not relevant to atmospheric and weather phenomena. The acoustic and advective components of the hyperbolic flux are separated in the characteristic space. High-order, conservative additive Runge-Kutta methods are applied to the partitioned equations so that the acoustic component is integrated in time implicitly with an unconditionally stable method, while the advective component is integrated explicitly. The time step of the overall algorithm is thus determined by the advective scale. Benchmark flow problems are used to demonstrate the accuracy, stability, and convergence of the proposed algorithm. The computational cost of the partitioned semi-implicit approach is compared with that of explicit time integration.
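The partitioning idea, implicit on the fast scale and explicit on the slow one so that the time step is set by the slow scale alone, can be sketched with a first-order implicit-explicit (IMEX) split on a classic stiff test problem rather than the Euler equations; the problem and all parameters below are illustrative.

```python
import math

# IMEX (implicit-explicit) Euler sketch of term partitioning on the stiff
# Prothero-Robinson test problem (a stand-in for the acoustic/advective
# split; parameters illustrative):
#   du/dt = lam*(cos(t) - u) - sin(t),   exact solution u(t) = cos(t).
# The stiff relaxation term (fast, "acoustic-like") is treated implicitly,
# the slow forcing term explicitly; dt is set by the slow scale only.

lam = 1000.0                  # fast time scale 1/lam = 0.001
dt, T = 0.01, 1.0             # dt is 5x the explicit stability limit 2/lam
u, t = 1.0, 0.0               # u(0) = cos(0)

for _ in range(round(T / dt)):
    # implicit in the stiff term (evaluated at t+dt), explicit in the slow one:
    #   u_new = u + dt*lam*(cos(t+dt) - u_new) + dt*(-sin(t))
    u = (u - dt * math.sin(t) + dt * lam * math.cos(t + dt)) / (1.0 + dt * lam)
    t += dt

imex_error = abs(u - math.cos(T))
```

Despite the time step being an order of magnitude beyond the explicit stability limit of the stiff term, the scheme remains stable and tracks the slowly varying exact solution.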
Fully implicit Particle-in-cell algorithms for multiscale plasma simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chacon, Luis
The outline of the paper is as follows: Particle-in-cell (PIC) methods for fully ionized collisionless plasmas, explicit vs. implicit PIC, 1D ES implicit PIC (charge and energy conservation, moment-based acceleration), and generalization to multi-D EM PIC: the Vlasov-Darwin model (review and motivation for the Darwin model, conservation properties (energy, charge, and canonical momenta), and numerical benchmarks). The author demonstrates a fully implicit, fully nonlinear, multidimensional PIC formulation that features exact local charge conservation (via a novel particle mover strategy), exact global energy conservation (no particle self-heating or self-cooling), an adaptive particle orbit integrator to control errors in momentum conservation, and conservation of canonical momenta (EM-PIC only, reduced dimensionality). The approach is free of numerical instabilities even for ω_pe Δt >> 1 and Δx >> λ_D. It requires many fewer degrees of freedom (vs. explicit PIC) for comparable accuracy in challenging problems. Significant CPU gains (vs. explicit PIC) have been demonstrated. The method has much potential for efficiency gains vs. explicit PIC in long-time-scale applications. Moment-based acceleration is effective in minimizing the number of function evaluations N_FE, leading to an optimal algorithm.
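The "no self-heating or self-cooling" property of implicit integration can be sketched on a single harmonic oscillator (a caricature of a plasma oscillation, not a PIC code; step size illustrative): with a large time step, explicit Euler steadily gains energy, while the implicit midpoint rule conserves the energy to round-off.

```python
# Toy illustration of implicit energy conservation (harmonic oscillator as a
# caricature of a plasma oscillation; not a PIC code). Explicit Euler gains
# energy every step; the implicit midpoint rule is a rotation and conserves
# E = (x^2 + v^2)/2 exactly, even for a large time step.

dt, nsteps = 0.5, 1000

def energy(x, v):
    return 0.5 * (x * x + v * v)

# explicit Euler for x' = v, v' = -x: energy grows by (1 + dt^2) per step
x, v = 1.0, 0.0
for _ in range(nsteps):
    x, v = x + dt * v, v - dt * x
explicit_energy = energy(x, v)

# implicit midpoint, solved in closed form for this linear system:
#   x1 = x + (dt/2)*(v + v1),  v1 = v - (dt/2)*(x + x1)
a = dt / 2.0
x, v = 1.0, 0.0
for _ in range(nsteps):
    x, v = (((1 - a * a) * x + 2 * a * v) / (1 + a * a),
            ((1 - a * a) * v - 2 * a * x) / (1 + a * a))
implicit_energy = energy(x, v)
```

The closed-form midpoint update is a Cayley transform, i.e. an exact rotation, which is why its energy error stays at the round-off level for arbitrarily long runs.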
ERIC Educational Resources Information Center
Aghaie, Reza; Zhang, Lawrence Jun
2012-01-01
This study explored the impact of explicit teaching of reading strategies on English-as-a-foreign-language (EFL) students' reading performance in Iran. The study employed a questionnaire adapted from Chamot and O'Malley's (1994) cognitive and metacognitive strategies framework. To test the effects of explicit teaching of cognitive and…
NASA Astrophysics Data System (ADS)
Dasgupta, Bhaskar; Nakamura, Haruki; Higo, Junichi
2016-10-01
Virtual-system coupled adaptive umbrella sampling (VAUS) enhances sampling along a reaction coordinate by using a virtual degree of freedom. However, VAUS and regular adaptive umbrella sampling (AUS) methods are still computationally expensive. To decrease the computational burden further, improvements of VAUS for all-atom explicit solvent simulation are presented here. The improvements include probability distribution calculation by a Markov approximation; parameterization of biasing forces by iterative polynomial fitting; and force scaling. When applied to the study of Ala-pentapeptide dimerization in explicit solvent, these improvements showed an advantage over regular AUS. With the improved VAUS, larger biological systems become amenable to simulation.
NASA Astrophysics Data System (ADS)
Li, Can; Deng, Wei-Hua
2014-07-01
Following the fractional cable equation established in the letter [B.I. Henry, T.A.M. Langlands, and S.L. Wearne, Phys. Rev. Lett. 100 (2008) 128103], we present the time-space fractional cable equation, which describes the anomalous transport of electrodiffusion in nerve cells. The derivation is based on the generalized fractional Ohm's law; the temporal memory effects and spatial nonlocality are involved in the time-space fractional model. With the help of the integral transform method we derive the analytical solutions expressed by the Green's function; the corresponding fractional moments are calculated and their asymptotic behaviors discussed. In addition, the explicit solutions of the considered model with two different external current injections are also presented.
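The fractional operators involved can be checked numerically with a standard Grünwald-Letnikov approximation (a generic fractional-calculus check, not the paper's Green's-function derivation; order α, test function, and grid below are illustrative): for f(t) = t^2 with f(0) = 0, the fractional derivative of order α has the closed form 2 t^(2-α)/Γ(3-α).

```python
import math

# Grünwald-Letnikov approximation of a fractional derivative of order alpha
# (a generic check of fractional calculus, not the paper's analytical route).
# For f(t) = t^2 with f(0) = 0 the exact value is 2*t^(2-alpha)/Gamma(3-alpha).

def gl_fractional_derivative(f, t, alpha, n=1000):
    """D^alpha f at time t, using n Grünwald-Letnikov terms of step h = t/n."""
    h = t / n
    coeff, total = 1.0, f(t)              # g_0 = 1 multiplies f(t)
    for k in range(1, n + 1):
        coeff *= 1.0 - (alpha + 1.0) / k  # g_k = g_{k-1} * (1 - (alpha+1)/k)
        total += coeff * f(t - k * h)
    return total / h ** alpha

alpha, t = 0.5, 1.0
approx = gl_fractional_derivative(lambda s: s * s, t, alpha)
exact = 2.0 * t ** (2.0 - alpha) / math.gamma(3.0 - alpha)
```

The Grünwald-Letnikov sum is first-order accurate in the step h, so with n = 1000 terms the approximation lands within a fraction of a percent of the closed form.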
CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van der Holst, B.; Toth, G.; Sokolov, I. V.
We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each of which required significant advances in the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.
Periodic activations of behaviours and emotional adaptation in behaviour-based robotics
NASA Astrophysics Data System (ADS)
Burattini, Ernesto; Rossi, Silvia
2010-09-01
The possible modulatory influence of motivations and emotions is of great interest in designing robotic adaptive systems. In this paper, an attempt is made to connect the concept of periodic behaviour activations to emotional modulation, in order to link the variability of behaviours to the circumstances in which they are activated. The impact of emotions, described as timed controlled structures, on simple but conflicting reactive behaviours is studied. Through this approach it is shown that the introduction of such asynchronies into the robot control system may lead to an adaptation in the emergent behaviour without an explicit action selection mechanism. The emergent behaviours of a simple robot designed with both a parallel and a hierarchical architecture are evaluated and compared.
Driven Metadynamics: Reconstructing Equilibrium Free Energies from Driven Adaptive-Bias Simulations
2013-01-01
We present a novel free-energy calculation method that constructively integrates two distinct classes of nonequilibrium sampling techniques, namely, driven (e.g., steered molecular dynamics) and adaptive-bias (e.g., metadynamics) methods. By employing nonequilibrium work relations, we design a biasing protocol with an explicitly time- and history-dependent bias that uses on-the-fly work measurements to gradually flatten the free-energy surface. The asymptotic convergence of the method is discussed, and several relations are derived for free-energy reconstruction and error estimation. Isomerization reaction of an atomistic polyproline peptide model is used to numerically illustrate the superior efficiency and faster convergence of the method compared with its adaptive-bias and driven components in isolation. PMID:23795244
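The bias-flattening mechanism shared by adaptive-bias methods can be sketched with plain metadynamics on a one-dimensional double well (a minimal caricature, not the driven protocol of the paper; every parameter below is illustrative): Gaussian hills deposited along the trajectory fill the starting basin until the walker escapes over the barrier.

```python
import math
import random

# Minimal plain-metadynamics sketch on the double well V(x) = (x^2 - 1)^2
# (a caricature of adaptive-bias flattening, not the paper's driven variant).
# Overdamped Langevin dynamics; a Gaussian bias hill is dropped every
# `stride` steps at the current position, eventually pushing the walker
# over the barrier between the wells at x = -1 and x = +1.

random.seed(0)
dt, kT = 1e-3, 0.1
hill_w, hill_s, stride = 0.1, 0.3, 50   # hill height, width, deposition stride
centers = []                            # history-dependent bias

def force(x):
    f = -4.0 * x * (x * x - 1.0)        # -dV/dx
    for c in centers:                   # -d(bias)/dx, repulsive from each hill
        f += hill_w * (x - c) / hill_s**2 * math.exp(-(x - c) ** 2 / (2 * hill_s**2))
    return f

x, x_min, x_max = -1.0, -1.0, -1.0
for step in range(20000):
    x += force(x) * dt + math.sqrt(2.0 * kT * dt) * random.gauss(0.0, 1.0)
    if step % stride == 0:
        centers.append(x)               # deposit a hill at the current position
    x_min, x_max = min(x_min, x), max(x_max, x)
```

Without the bias, the barrier (height 1 at kT = 0.1) would make a crossing astronomically rare on this time scale; with the accumulated hills, the starting well fills and the walker visits both basins.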
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.
2010-10-01
Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
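The pathology of fixed-step explicit integration and the remedy of adaptive error control can be sketched on a nonlinear recession equation (a minimal caricature of a conceptual rainfall-runoff store, not the authors' model; parameters illustrative): dS/dt = -k S^2 has the exact solution S0/(1 + k S0 t).

```python
# Fixed-step vs adaptive-step explicit integration of a nonlinear store
# dS/dt = -k*S^2 (a caricature of a conceptual rainfall-runoff reservoir;
# exact solution S(t) = S0/(1 + k*S0*t)). The coarse fixed-step explicit
# Euler run diverges; step-doubling error control with Richardson
# extrapolation stays accurate.

k, S0, T = 1.0, 10.0, 5.0
exact = S0 / (1.0 + k * S0 * T)

def f(s):
    return -k * s * s

# fixed-step explicit Euler: dt far too large for the initial fast recession
s, dt = S0, 0.5
for _ in range(int(T / dt)):
    s += dt * f(s)
fixed_result = s            # diverges to huge negative values

# adaptive explicit Euler: compare one dt-step against two dt/2-steps,
# accept or reject on the difference, advance with the extrapolated value
def adaptive(tol=1e-6):
    t, s, dt = 0.0, S0, 0.5
    while t < T:
        dt = min(dt, T - t)
        y1 = s + dt * f(s)                     # one full step
        ym = s + 0.5 * dt * f(s)               # two half steps
        y2 = ym + 0.5 * dt * f(ym)
        err = abs(y2 - y1)                     # local error estimate
        if err <= tol:                         # accept the step
            t += dt
            s = 2.0 * y2 - y1                  # Richardson extrapolation
        # grow/shrink the step from the error estimate (with safety bounds)
        dt *= min(4.0, max(0.1, 0.9 * (tol / max(err, 1e-15)) ** 0.5))
    return s

adaptive_result = adaptive()
```

The adaptive run takes tiny steps during the fast initial recession and large steps later, which is exactly the trade-off the paper recommends over cheap fixed-step explicit schemes.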
An SDR-Based Real-Time Testbed for GNSS Adaptive Array Anti-Jamming Algorithms Accelerated by GPU
Xu, Hailong; Cui, Xiaowei; Lu, Mingquan
2016-01-01
Nowadays, software-defined radio (SDR) has become a common approach to evaluating new algorithms. However, in the field of Global Navigation Satellite System (GNSS) adaptive array anti-jamming, previous work has been limited by the high computational power demanded by adaptive algorithms, and often lacks flexibility and configurability. In this paper, the design and implementation of an SDR-based real-time testbed for GNSS adaptive array anti-jamming, accelerated by a Graphics Processing Unit (GPU), are documented. The testbed is a feature-rich and extensible platform with great flexibility and configurability, as well as high computational performance. Both Space-Time Adaptive Processing (STAP) and Space-Frequency Adaptive Processing (SFAP) are implemented with a wide range of parameters. Raw data from as many as eight antenna elements can be processed in real time in either an adaptive nulling or beamforming mode. To take full advantage of the parallelism provided by the GPU, a batched programming method is proposed. Tests and experiments are conducted to evaluate both the computational and anti-jamming performance. This platform can be used for research and prototyping, as well as a real product in certain applications. PMID:26978363
Application of Knowledge-Based Techniques to Tracking Function
2006-09-01
...“Knowledge-based applications to adaptive space-time processing. Volume I: Summary”, AFRL-SN-TR-2001-146 Vol. I (of Vol. VI), Final Technical Report, July...2001-146 Vol. IV (of Vol. VI), Final Technical Report, July 2001. [53] C. Morgan, L. Moyer, “Knowledge-based applications to adaptive space-time
Open Cascades as Simple Solutions to Providing Ultrasensitivity and Adaptation in Cellular Signaling
Srividhya, Jeyaraman; Li, Yongfeng; Pomerening, Joseph R.
2011-01-01
Cell signaling is achieved predominantly by reversible phosphorylation-dephosphorylation reaction cascades. Until now, circuits conferring adaptation have all required the presence of a cascade with some type of closed topology: a negative-feedback loop with a buffering node, or an incoherent feedforward loop with a proportioner node. In this paper, using Goldbeter-Koshland-type expressions, we propose a differential equation model to describe a generic, open signaling cascade that elicits an adaptation response. This is accomplished by coupling N phosphorylation-dephosphorylation cycles unidirectionally, without any explicit feedback loops. Using this model, we show that as the length of the cascade grows, the steady states of the downstream cycles reach a limiting value. In other words, our model indicates that there is a minimum number of cycles required to achieve a maximum in sensitivity and amplitude in the response of a signaling cascade. We also describe for the first time that the phenomenon of ultrasensitivity can be further subdivided into three sub-regimes, separated by sharp stimulus threshold values: OFF, OFF-ON-OFF, and ON. In the OFF-ON-OFF regime, an interesting property emerges. In the presence of a basal amount of activity, the temporal evolution of early cycles yields damped peak responses. On the other hand, the downstream cycles switch rapidly to a higher activity state for an extended period of time, prior to settling to an OFF state (OFF-ON-OFF). This response arises from the changing dynamics between a feedforward activation module and dephosphorylation reactions. In conclusion, our model gives the new perspective that open signaling cascades embedded in complex biochemical circuits may possess the ability to show a switch-like adaptation response, without the need for any explicit feedback circuitry. PMID:21566270
Synchronization of Clocks Through 12 km of Strongly Turbulent Air Over a City.
Sinclair, Laura C; Swann, William C; Bergeron, Hugo; Baumann, Esther; Cermak, Michael; Coddington, Ian; Deschênes, Jean-Daniel; Giorgetta, Fabrizio R; Juarez, Juan C; Khader, Isaac; Petrillo, Keith G; Souza, Katherine T; Dennis, Michael L; Newbury, Nathan R
2016-10-15
We demonstrate real-time, femtosecond-level clock synchronization across a low-lying, strongly turbulent, 12-km horizontal air path by optical two-way time transfer. For this long horizontal free-space path, the integrated turbulence extends well into the strong turbulence regime corresponding to multiple scattering with a Rytov variance up to 7 and with the number of signal interruptions exceeding 100 per second. Nevertheless, optical two-way time transfer is used to synchronize a remote clock to a master clock with femtosecond-level agreement and with a relative time deviation dropping as low as a few hundred attoseconds. Synchronization is shown for a remote clock based on either an optical or microwave oscillator and using either tip-tilt or adaptive-optics free-space optical terminals. The performance is unaltered from optical two-way time transfer in weak turbulence across short links. These results confirm that the two-way reciprocity of the free-space time-of-flight is maintained both under strong turbulence and with the use of adaptive optics. The demonstrated robustness of optical two-way time transfer against strong turbulence and its compatibility with adaptive optics is encouraging for future femtosecond clock synchronization over very long distance ground-to-air free-space paths.
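The core arithmetic of two-way time transfer is the cancellation of the time of flight in the half-difference of the two one-way pseudo-ranges, which holds exactly when the path is reciprocal (the property the experiment verifies under strong turbulence). A synthetic sketch, with an assumed 12-km flight time and clock offset:

```python
def two_way_offset(t_tx_a, t_rx_b, t_tx_b, t_rx_a):
    """Clock offset of site B relative to site A from one two-way
    exchange, assuming an identical time of flight in both directions."""
    # The half-difference of the two one-way pseudo-ranges cancels the
    # common time of flight, leaving only the clock offset.
    return ((t_rx_b - t_tx_a) - (t_rx_a - t_tx_b)) / 2.0

def one_way_delay(t_tx_a, t_rx_b, t_tx_b, t_rx_a):
    # The half-sum cancels the clock offset instead, giving the flight time.
    return ((t_rx_b - t_tx_a) + (t_rx_a - t_tx_b)) / 2.0

# Synthetic numbers: ~12 km of air (about 40 us of flight), 3.2 ns offset
tof, offset = 40.03e-6, 3.2e-9
t_tx_a = 0.0                       # A transmits (A's clock)
t_rx_b = t_tx_a + tof + offset     # B timestamps arrival on its offset clock
t_tx_b = 1.0e-3                    # B transmits (B's clock)
t_rx_a = (t_tx_b - offset) + tof   # A timestamps arrival on its own clock
est_offset = two_way_offset(t_tx_a, t_rx_b, t_tx_b, t_rx_a)
```

In the synchronization loop, the estimated offset steers the remote oscillator toward zero offset; any non-reciprocity of the path appears directly as an error in this estimate.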
Adaptive array antenna for satellite cellular and direct broadcast communications
NASA Technical Reports Server (NTRS)
Horton, Charles R.; Abend, Kenneth
1993-01-01
Adaptive phased-array antennas provide cost-effective implementation of large, lightweight apertures with high directivity and precise beam-shape control. Adaptive self-calibration allows for relaxation of all mechanical tolerances across the aperture and electrical component tolerances, providing high performance with a low-cost, lightweight array, even in the presence of large physical distortions. Beam shape is programmable and adaptable to changes in technical and operational requirements. Adaptive digital beam-forming eliminates uplink contention by allowing a single electronically steerable antenna to service a large number of receivers with beams that adaptively focus on one source while eliminating interference from others. A large, adaptively calibrated and fully programmable aperture can also provide precise beam-shape control for power-efficient direct broadcast from space. Advanced adaptive digital beamforming technologies are described for: (1) electronic compensation of aperture distortion, (2) multiple-receiver adaptive space-time processing, and (3) downlink beam-shape control. Cost considerations for space-based array applications are also discussed.
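The digital beam-forming idea, gain toward one source while nulling another, can be sketched for a narrowband uniform linear array. The element count, spacing, and angles below are arbitrary illustration values; real adaptive beamformers estimate the interference from data rather than from a known steering vector:

```python
import cmath, math

def steering(n, spacing_wl, theta_deg):
    """Steering vector of an n-element uniform linear array
    (element spacing in wavelengths, angle from broadside)."""
    s = math.sin(math.radians(theta_deg))
    return [cmath.exp(2j * math.pi * spacing_wl * k * s) for k in range(n)]

def null_steer_weights(a_des, a_int):
    """Project the desired steering vector orthogonal to the interferer's:
    the weights keep gain on the desired source while placing a spatial
    null on the interferer (a one-interferer special case)."""
    dot = sum(ai.conjugate() * ad for ai, ad in zip(a_int, a_des))
    norm = sum(abs(ai) ** 2 for ai in a_int)
    c = dot / norm
    return [ad - c * ai for ad, ai in zip(a_des, a_int)]

def response(w, a):
    # Magnitude of the beamformer output for a unit plane wave from a
    return abs(sum(wk.conjugate() * ak for wk, ak in zip(w, a)))

a_des = steering(8, 0.5, 10.0)   # desired source at +10 degrees
a_int = steering(8, 0.5, -35.0)  # interferer at -35 degrees
w = null_steer_weights(a_des, a_int)
```

By construction the interferer's response is exactly zero while nearly the full array gain (8 elements) is retained toward the desired source.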
On non-autonomous dynamical systems
NASA Astrophysics Data System (ADS)
Anzaldo-Meneses, A.
2015-04-01
In usual realistic classical dynamical systems, the Hamiltonian depends explicitly on time. In this work, a class of classical systems with time-dependent nonlinear Hamiltonians is analyzed. This class of problems allows one to find invariants by a family of Veronese maps. The motivation to develop this method results from the observation that the Poisson-Lie algebra of monomials in the coordinates and momenta is clearly defined in terms of its brackets and leads naturally to an infinite linear set of differential equations, under certain circumstances. To perform explicit analytic and numerical calculations, two examples are presented to estimate the trajectories, the first given by a nonlinear problem and the second by a quadratic Hamiltonian with three time-dependent parameters. In the nonlinear problem, the Veronese approach using jets is shown to be equivalent to a direct procedure using elliptic function identities, and linear invariants are constructed. For the second example, linear and quadratic invariants as well as stability conditions are given. Explicit solutions are also obtained for stepwise constant forces. For the quadratic Hamiltonian, an appropriate set of coordinates relates the geometric setting to that of the three-dimensional manifold of central conic sections. It is shown further that the quantum mechanical problem of scattering in a superlattice leads to mathematically equivalent equations for the wave function, if the classical time is replaced by the space coordinate along the superlattice. The mathematical method used to compute the trajectories for stepwise constant parameters can be applied to both problems. It is the standard method in quantum scattering calculations, as known for locally periodic systems including a space-dependent effective mass.
Algebra of implicitly defined constraints for gravity as the general form of embedding theory
NASA Astrophysics Data System (ADS)
Paston, S. A.; Semenova, E. N.; Franke, V. A.; Sheykin, A. A.
2017-01-01
We consider the embedding theory, the approach to gravity proposed by Regge and Teitelboim, in which 4D space-time is treated as a surface in high-dimensional flat ambient space. In its general form, which does not contain artificially imposed constraints, this theory can be viewed as an extension of GR. In the present paper we study the canonical description of the embedding theory in this general form. In this case, one of the natural constraints cannot be written explicitly, in contrast to the case where additional Einsteinian constraints are imposed. Nevertheless, it is possible to calculate all Poisson brackets with this constraint. We prove that the algebra of four emerging constraints is closed, i.e., all of them are first-class constraints. The explicit form of this algebra is also obtained.
USDA-ARS?s Scientific Manuscript database
Agroecosystem models and conservation planning tools require spatially and temporally explicit input data about agricultural management operations. The Land-use and Agricultural Management Practices web-Service (LAMPS) provides crop rotation and management information for user-specified areas within...
Synchronization for Optical PPM with Inter-Symbol Guard Times
NASA Astrophysics Data System (ADS)
Rogalin, R.; Srinivasan, M.
2017-05-01
Deep space optical communications promises orders of magnitude growth in communication capacity, supporting high data rate applications such as video streaming and high-bandwidth science instruments. Pulse position modulation is the modulation format of choice for deep space applications, and by inserting inter-symbol guard times between the symbols, the signal carries the timing information needed by the demodulator. Accurately extracting this timing information is crucial to demodulating and decoding this signal. In this article, we propose a number of timing and frequency estimation schemes for this modulation format, and in particular highlight a low complexity maximum likelihood timing estimator that significantly outperforms the prior art in this domain. This method does not require an explicit synchronization sequence, freeing up channel resources for data transmission.
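A brute-force stand-in for the timing recovery described above (not the article's maximum-likelihood estimator): because the guard slots carry no signal, the symbol offset can be estimated as the cyclic shift that minimizes the received energy falling into the guard slots. The slot counts, pulse amplitude, and background rate below are invented for illustration:

```python
import random

def estimate_symbol_offset(counts, m, guard):
    """Blind symbol-timing estimate for M-ary PPM with inter-symbol
    guard times: try every cyclic slot offset and keep the one that
    puts the least photon energy into the guard slots."""
    period = m + guard
    best_shift, best_energy = 0, float("inf")
    for shift in range(period):
        energy = sum(c for i, c in enumerate(counts)
                     if (i - shift) % period >= m)
        if energy < best_energy:
            best_shift, best_energy = shift, energy
    return best_shift

random.seed(1)
m, guard, true_shift = 16, 4, 7
n_slots = 400
# Sparse background counts plus one strong pulse per PPM symbol
counts = [1 if random.random() < 0.05 else 0 for _ in range(n_slots)]
k = 0
for start in range(true_shift, n_slots - m, m + guard):
    counts[start + (3 * k) % m] += 30     # pulse position within the symbol
    k += 1
est = estimate_symbol_offset(counts, m, guard)
```

This exhaustive search is far costlier than the low-complexity estimator the article proposes, but it makes the underlying principle concrete: the guard-time structure itself carries the timing information, so no explicit synchronization sequence is needed.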
Self-tuning control of attitude and momentum management for the Space Station
NASA Technical Reports Server (NTRS)
Shieh, L. S.; Sunkel, J. W.; Yuan, Z. Z.; Zhao, X. M.
1992-01-01
This paper presents a hybrid state-space self-tuning design methodology using dual-rate sampling for suboptimal digital adaptive control of attitude and momentum management for the Space Station. This new hybrid adaptive control scheme combines an on-line recursive estimation algorithm for indirectly identifying the parameters of a continuous-time system from the available fast-rate sampled data of the inputs and states and a controller synthesis algorithm for indirectly finding the slow-rate suboptimal digital controller from the designed optimal analog controller. The proposed method enables the development of digitally implementable control algorithms for the robust control of Space Station Freedom with unknown environmental disturbances and slowly time-varying dynamics.
High Performance Programming Using Explicit Shared Memory Model on Cray T3D1
NASA Technical Reports Server (NTRS)
Simon, Horst D.; Saini, Subhash; Grassi, Charles
1994-01-01
The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented and illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. A similar degradation is seen on the CM-5, where the performance of applications using the native message-passing library CMMD is also about 4 to 5 times lower than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, invalidating the data cache, aligning the data cache, etc.) are discussed. Comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM SP-1 is presented.
Gupta-Bleuler Quantization of the Maxwell Field in Globally Hyperbolic Space-Times
NASA Astrophysics Data System (ADS)
Finster, Felix; Strohmaier, Alexander
2015-08-01
We give a complete framework for the Gupta-Bleuler quantization of the free electromagnetic field on globally hyperbolic space-times. We describe one-particle structures that give rise to states satisfying the microlocal spectrum condition. The field algebras in the so-called Gupta-Bleuler representations satisfy the time-slice axiom, and the corresponding vacuum states satisfy the microlocal spectrum condition. We also give an explicit construction of ground states on ultrastatic space-times. Unlike previous constructions, our method does not require a spectral gap or the absence of zero modes. The only requirement, the absence of zero-resonance states, is shown to be stable under compact perturbations of topology and metric. Usual deformation arguments based on the time-slice axiom then lead to a construction of Gupta-Bleuler representations on a large class of globally hyperbolic space-times. As usual, the field algebra is represented on an indefinite inner product space, in which the physical states form a positive semi-definite subspace. Gauge transformations are incorporated in such a way that the field can be coupled perturbatively to a Dirac field. Our approach does not require any topological restrictions on the underlying space-time.
Wang, Yiwen; Wang, Fang; Xu, Kai; Zhang, Qiaosheng; Zhang, Shaomin; Zheng, Xiaoxiang
2015-05-01
Reinforcement learning (RL)-based brain machine interfaces (BMIs) enable the user to learn from the environment through interactions to complete the task without desired signals, which is promising for clinical applications. Previous studies exploited Q-learning techniques to discriminate neural states into simple directional actions, given the trial's initial timing. However, the movements in BMI applications can be quite complicated, and the action timing explicitly indicates the intention of when to move. The rich actions and the corresponding neural states form a large state-action space, imposing generalization difficulty on Q-learning. In this paper, we propose adopting attention-gated reinforcement learning (AGREL) as a new learning scheme for BMIs to adaptively decode high-dimensional neural activities into seven distinct movements (directional moves, holding and resting), owing to its efficient weight updating. We apply AGREL to neural data recorded from M1 of a monkey to directly predict a seven-action set in a time sequence to reconstruct the trajectory of a center-out task. Compared to Q-learning techniques, AGREL improves the target acquisition rate to 90.16% on average, with faster convergence and more stability in following neural activity over multiple days, indicating the potential to achieve better online decoding performance for more complicated BMI tasks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190
2015-03-15
We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
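The combination of a fully implicit scheme with adaptive time stepping can be caricatured on a scalar stiff ODE; this is only a stand-in for the paper's 3D CHC system and its parallel Newton-Krylov-Schwarz solver. Growing the step when Newton converges quickly is one common way to accelerate the progress to equilibrium:

```python
def implicit_euler_adaptive(f, dfdy, y0, t_end, dt0=1e-3, newton_tol=1e-10):
    """Fully implicit (backward Euler) integration with a simple adaptive
    time-stepping rule: enlarge the step when Newton converges fast,
    halve it when Newton stalls."""
    t, y, dt = 0.0, y0, dt0
    while t < t_end:
        dt = min(dt, t_end - t)
        y_new, iters, ok = y, 0, False
        for iters in range(1, 21):
            # Newton's method on the residual of y_new = y + dt * f(y_new)
            r = y_new - y - dt * f(y_new)
            j = 1.0 - dt * dfdy(y_new)
            y_new -= r / j
            if abs(r) < newton_tol:
                ok = True
                break
        if ok:
            t, y = t + dt, y_new
            if iters <= 5:
                dt *= 2.0      # fast convergence: accelerate to equilibrium
        else:
            dt *= 0.5          # stalled: retry with a smaller step
    return y

# Stiff relaxation toward equilibrium y = 1, stable at any step size
lam = 50.0
y_end = implicit_euler_adaptive(lambda y: -lam * (y - 1.0),
                                lambda y: -lam, 0.0, 5.0)
```

Because backward Euler is unconditionally stable here, the controller can keep doubling the step as the solution nears equilibrium, which is exactly the payoff of implicit adaptive stepping over a CFL-limited explicit scheme.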
NASA Astrophysics Data System (ADS)
D'Alessandro, Valerio; Binci, Lorenzo; Montelpare, Sergio; Ricci, Renato
2018-01-01
Open-source CFD codes provide suitable environments for implementing and testing low-dissipative algorithms typically used to simulate turbulence. In this research work we developed CFD solvers for incompressible flows based on high-order explicit and diagonally implicit Runge-Kutta (RK) schemes for time integration. In particular, an iterated PISO-like procedure based on Rhie-Chow correction was used to handle pressure-velocity coupling within each implicit RK stage. For the explicit approach, a projected scheme was used to avoid the "checker-board" effect. The above-mentioned approaches were also extended to flow problems involving heat transfer. It is worth noting that the numerical technology available in the OpenFOAM library was used for space discretization. In this work, we additionally explore the reliability and effectiveness of the proposed implementations by computing several unsteady flow benchmarks; we also show that the numerical diffusion due to the time integration approach is completely canceled using the solution techniques proposed here.
Explicit control of adaptive automation under different levels of environmental stress.
Sauer, Jürgen; Kao, Chung-Shan; Wastell, David; Nickel, Peter
2011-08-01
This article examines the effectiveness of three different forms of explicit control of adaptive automation under low- and high-stress conditions, operationalised by different levels of noise. In total, 60 participants were assigned to one of three types of automation design (free, prompted and forced choice). They were trained for 4 h on a highly automated simulation of a process control environment, called AutoCAMS. This was followed by a 4-h testing session under noise exposure and quiet conditions. Measures of performance, psychophysiology and subjective reactions were taken. The results showed that all three modes of explicit control of adaptive automation were able to attenuate the negative effects of noise. This was partly due to the fact that operators opted for higher levels of automation under noise. It also emerged that forced choice showed marginal advantages over the two other automation modes. Statement of Relevance: This work is relevant to the design of adaptive automation since it emphasises the need to consider the impact of work-related stressors during task completion. During the presence of stressors, different forms of operator support through automation may be required than under more favourable working conditions.
Adaptive form-finding method for form-fixed spatial network structures
NASA Astrophysics Data System (ADS)
Lan, Cheng; Tu, Xi; Xue, Junqing; Briseghella, Bruno; Zordan, Tobia
2018-02-01
An effective form-finding method for form-fixed spatial network structures is presented in this paper. The adaptive form-finding method is introduced through the example of designing an ellipsoidal network dome with bar-length variations kept as small as possible. A typical spherical geodesic network is selected as the initial state, with bar lengths restricted to a limited number of groups. Next, this network is transformed into the desired ellipsoidal shape by applying compressions on the bars according to the bar-length variations caused by the transformation. Afterwards, the dynamic relaxation method is employed to explicitly integrate the node positions under the residual forces. During the form-finding process, the boundary condition constraining nodes to the ellipsoid surface is innovatively treated as reactions along the normal direction of the surface at the node positions, which balance the components of the nodal forces in the reverse direction induced by the compressions on the bars. The node positions are also corrected according to the fixed-form condition in each explicit iteration step. The optimal solution is then selected from the time history of states by properly choosing convergence criteria, and the presented form-finding procedure is shown to be applicable to form-fixed problems.
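The dynamic relaxation step, explicitly integrating damped nodal motion under residual forces until they vanish, can be sketched in one transverse dimension for a single loaded cable with fixed ends. The stiffness, load, damping, and pretension values are illustrative assumptions, not the paper's dome model:

```python
def dynamic_relaxation(n=11, span=10.0, load=-1.0, stiffness=50.0,
                       damping=0.9, dt=0.01, tol=1e-5, max_steps=200000):
    """Damped explicit integration of nodal motion until the residual
    forces vanish: the core update of a dynamic-relaxation form-finding
    loop, reduced to vertical motion of a loaded cable with fixed ends."""
    xs = [span * i / (n - 1) for i in range(n)]
    ys = [0.0] * n
    vy = [0.0] * n
    rest = span / (n - 1) * 0.95          # slightly short bars -> pretension
    for _ in range(max_steps):
        forces = [0.0] * n
        for i in range(n - 1):            # vertical component of bar forces
            dx, dy = xs[i + 1] - xs[i], ys[i + 1] - ys[i]
            length = (dx * dx + dy * dy) ** 0.5
            fy = stiffness * (length - rest) * dy / length
            forces[i] += fy
            forces[i + 1] -= fy
        res = 0.0
        for i in range(1, n - 1):         # interior nodes move, ends fixed
            r = forces[i] + load
            res = max(res, abs(r))
            vy[i] = damping * vy[i] + dt * r
            ys[i] += dt * vy[i]
        if res < tol:                     # residual forces have vanished
            break
    return ys

shape = dynamic_relaxation()
```

At convergence the nodes trace the equilibrium sag of the loaded cable; in the paper's setting the same update runs in three dimensions with the additional surface-normal reaction and fixed-form correction applied at each step.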
Superresolution restoration of an image sequence: adaptive filtering approach.
Elad, M; Feuer, A
1999-01-01
This paper presents a new method based on adaptive filtering theory for superresolution restoration of continuous image sequences. The proposed methodology suggests least squares (LS) estimators that adapt in time, based on adaptive filters: least mean squares (LMS) or recursive least squares (RLS). The adaptation enables the treatment of linear space- and time-variant blurring and arbitrary motion, both assumed known. The proposed approach is shown to have relatively low computational requirements. Simulations demonstrating the superresolution restoration algorithms are presented.
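The adaptive-filter core the paper builds on can be sketched with a plain LMS update; here it identifies an unknown FIR blur from an input/output stream. The kernel, step size, and data are invented for illustration and stand in for the paper's image-domain estimator:

```python
import random

def lms_identify(x, d, n_taps, mu=0.05):
    """Least-mean-squares adaptation: at each sample, form the filter
    output, compute the error against the observation, and take a
    gradient step on the weights."""
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]    # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[k] - y                              # prediction error
        for i in range(n_taps):
            w[i] += mu * e * window[i]            # stochastic gradient step
    return w

random.seed(0)
true_h = [0.5, 0.3, 0.2]                          # unknown blur kernel
x = [random.gauss(0.0, 1.0) for _ in range(4000)]
d = [sum(true_h[j] * x[k - j] for j in range(3) if k - j >= 0)
     for k in range(len(x))]
w_hat = lms_identify(x, d, 3)
```

The same recursion, applied per pixel with known motion and blur operators, is what lets the restoration track space- and time-variant degradations at low cost.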
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
Sean Healey; Gretchen Moisen; Jeff Masek; Warren Cohen; Sam Goward; et al.
2007-01-01
The Forest Inventory and Analysis (FIA) program has partnered with researchers from the National Aeronautics and Space Administration, the University of Maryland, and other U.S. Department of Agriculture Forest Service units to identify disturbance patterns across the United States using FIA plot data and time series of Landsat satellite images. Spatially explicit...
Nuclear reactor descriptions for space power systems analysis
NASA Technical Reports Server (NTRS)
Mccauley, E. W.; Brown, N. J.
1972-01-01
For the small, high-performance reactors required for space electric applications, adequate neutronic analysis is of crucial importance, but in terms of computational time consumed, nuclear calculations probably yield the least amount of detail for mission analysis study. It has been found possible, after generating only a few designs of a reactor family in elaborate thermomechanical and nuclear detail, to use simple curve-fitting techniques to assure the desired neutronic performance while still performing the thermomechanical analysis in explicit detail. The resulting speed-up in computation time permits a broad, detailed examination of constraints by the mission analyst.
Visibility graphs and symbolic dynamics
NASA Astrophysics Data System (ADS)
Lacasa, Lucas; Just, Wolfram
2018-07-01
Visibility algorithms are a family of geometric and ordering criteria by which a real-valued time series of N data is mapped into a graph of N nodes. This graph has been shown to often inherit in its topology nontrivial properties of the series structure, and can thus be seen as a combinatorial representation of a dynamical system. Here we explore in some detail the relation between visibility graphs and symbolic dynamics. To do that, we consider the degree sequence of horizontal visibility graphs generated by the one-parameter logistic map, for a range of values of the parameter for which the map shows chaotic behaviour. Numerically, we observe that in the chaotic region the block entropies of these sequences systematically converge to the Lyapunov exponent of the time series. Hence, Pesin's identity suggests that these block entropies are converging to the Kolmogorov-Sinai entropy of the physical measure, which ultimately suggests that the algorithm is implicitly and adaptively constructing phase space partitions which might have the generating property. To give analytical insight, we explore the relation k(x), x ∈ [0, 1], that, for a given datum with value x, assigns in graph space a node with degree k. In the case of the out-degree sequence, this relation is indeed a piecewise constant function. By making use of explicit methods and tools from symbolic dynamics, we are able to show analytically that the algorithm indeed performs an effective partition of the phase space and that this partition is naturally expressed as a countable union of subintervals, where the endpoints of each subinterval are related to the fixed-point structure of the iterates of the map and the subinterval enumeration is associated with particular ordering structures that we call motifs.
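The horizontal visibility criterion can be sketched directly (brute force, for clarity rather than efficiency); the logistic-map orbit at r = 4 is one of the chaotic cases the paper considers:

```python
def horizontal_visibility_degrees(series):
    """Degree sequence of the horizontal visibility graph: two data
    points see each other iff every value strictly between them is
    lower than both of them."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            between = series[i + 1:j]
            if all(v < min(series[i], series[j]) for v in between):
                deg[i] += 1
                deg[j] += 1
    return deg

# Orbit of the logistic map in the fully chaotic regime (r = 4)
x, orbit = 0.4, []
for _ in range(200):
    orbit.append(x)
    x = 4.0 * x * (1.0 - x)
degrees = horizontal_visibility_degrees(orbit)
```

Note that consecutive points are always mutually visible (the "between" set is empty), so the graph contains the path through the series; the block entropies the paper studies are computed from exactly this kind of degree sequence.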
NASA Technical Reports Server (NTRS)
Melis, Matthew E.
2003-01-01
NASA Glenn Research Center's Structural Mechanics Branch has years of expertise in using explicit finite element methods to predict the outcome of ballistic impact events. Shuttle engineers from the NASA Marshall Space Flight Center and NASA Kennedy Space Center required assistance in assessing the structural loads that a newly proposed thrust vector control system for the space shuttle solid rocket booster (SRB) aft skirt would be expected to experience during recovery splashdown.
An interactive adaptive remeshing algorithm for the two-dimensional Euler equations
NASA Technical Reports Server (NTRS)
Slack, David C.; Walters, Robert W.; Lohner, R.
1990-01-01
An interactive adaptive remeshing algorithm utilizing a frontal grid generator and a variety of time integration schemes for the two-dimensional Euler equations on unstructured meshes is presented. Several device-dependent interactive graphics interfaces have been developed, along with a device-independent DI-3000 interface that can be employed on any computer with the supporting software, including the Cray-2 supercomputers Voyager and Navier. The available time integration methods include an explicit four-stage Runge-Kutta scheme and a fully implicit LU decomposition. A cell-centered finite volume upwind scheme utilizing Roe's approximate Riemann solver is developed. To obtain higher-order accurate results, a monotone linear reconstruction procedure proposed by Barth is utilized. Results for flow over a transonic circular arc and flow through a supersonic nozzle are examined.
Newmark local time stepping on high-performance computing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rietmann, Max (Institute of Geophysics, ETH Zurich), E-mail: max.rietmann@erdw.ethz.ch; Grote, Marcus, E-mail: marcus.grote@unibas.ch
In multi-scale complex media, finite element meshes often require areas of local refinement, creating small elements that can dramatically reduce the global time-step for wave-propagation problems due to the CFL condition. Local time stepping (LTS) algorithms allow an explicit time-stepping scheme to adapt the time-step to the element size, allowing near-optimal time-steps everywhere in the mesh. We develop an efficient multilevel LTS-Newmark scheme and implement it in a widely used continuous finite element seismic wave-propagation package. In particular, we extend the standard LTS formulation with adaptations to continuous finite element methods that can be implemented very efficiently with very strong element-size contrasts (more than 100x). Capable of running on large CPU and GPU clusters, we present both synthetic validation examples and large scale, realistic application examples to demonstrate the performance and applicability of the method and implementation on thousands of CPU cores and hundreds of GPUs.
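The core bookkeeping of multilevel local time stepping, assigning each element a power-of-two multiple of the base step permitted by its own CFL limit, can be sketched as follows. The function name and parameter values are illustrative assumptions, not the paper's API:

```python
import numpy as np

def lts_levels(h, c=1.0, cfl=0.5):
    """Assign each element a power-of-two local time-step level: the finest
    element fixes the base step dt_min, and an element of size h_e may take
    steps of dt_min * 2**level without violating its own CFL limit."""
    dt_elem = cfl * np.asarray(h, dtype=float) / c   # per-element CFL-stable step
    dt_min = dt_elem.min()
    levels = np.floor(np.log2(dt_elem / dt_min)).astype(int)
    return dt_min, levels

# A mesh with one element refined 100x: without LTS, every element would be
# forced to the smallest step; with LTS only the refined element takes it.
h = np.array([1.0, 1.0, 0.01, 1.0])
dt_min, levels = lts_levels(h)
```

Coarse elements at level 6 advance with steps 64 times larger than the refined element, which is where the LTS speedup comes from.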
The Family in Us: Family History, Family Identity and Self-Reproductive Adaptive Behavior.
Ferring, Dieter
2017-06-01
This contribution is an essay about the notion of family identity, understood as reflecting shared significant experiences within a family system that originate a set of signs used in social communication within and between families. Significant experiences are considered as experiences of events that have an immediate impact on the adaptation of the family in a given socio-ecological and cultural context at a given historical time. It is assumed that family history is stored in a shared "family memory" holding both implicit and explicit knowledge and exerting an influence on the behavior of each family member. This is described as a transgenerational family memory constituted of a system of meaningful signs. The crucial dimensions underlying the logic of this essay are the ideas of adaptation and of self-reproduction of systems.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Ming; Jiang, Jun; Hong, Ling; Tang, Dafeng
In this paper, a new method of Generalized Cell Mapping with Sampling-Adaptive Interpolation (GCMSAI) is presented in order to enhance the efficiency of computing the one-step probability transition matrix of the Generalized Cell Mapping (GCM) method. Integrations over one mapping step are replaced by sampling-adaptive interpolations of third order. An explicit formula for the interpolation error is derived, enabling a sampling-adaptive control that switches on integrations to preserve the accuracy of computations with GCMSAI. By applying the proposed method to a two-dimensional forced damped pendulum system, global bifurcations are investigated with observations of boundary metamorphoses, including full-to-partial and partial-to-partial, as well as the birth of a fully Wada boundary. Moreover, GCMSAI requires only one-thirtieth to one-fiftieth of the computational time of the previous GCM.
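The one-step transition matrix at the heart of generalized cell mapping can be sketched for a one-dimensional map. Plain point sampling is used here; the paper's contribution is precisely to replace these per-sample evaluations with error-controlled third-order interpolation, which this sketch does not attempt:

```python
import numpy as np

def gcm_transition_matrix(f, n_cells=50, samples=20):
    """One-step probability transition matrix of the generalized cell mapping
    (GCM) for a map f on [0, 1]: each cell is sampled at `samples` interior
    points and P[i, j] is the fraction of cell i's samples whose images land
    in cell j."""
    edges = np.linspace(0.0, 1.0, n_cells + 1)
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        xs = np.linspace(edges[i], edges[i + 1], samples + 2)[1:-1]
        images = np.clip(f(xs), 0.0, 1.0 - 1e-12)   # keep images inside [0, 1)
        js = (images * n_cells).astype(int)
        for j in js:
            P[i, j] += 1.0 / samples
    return P

# Transition matrix for the logistic map at r = 4 (an illustrative choice).
P = gcm_transition_matrix(lambda x: 4.0 * x * (1.0 - x))
```

Each row of P is a probability distribution over image cells; attractors and basins are then read off from the matrix's invariant structures.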
Adaptive management of forest ecosystems: did some rubber hit the road?
B.T. Bormann; R.W. Haynes; J.R. Martin
2007-01-01
Although many scientists recommend adaptive management for large forest tracts, there is little evidence that its use has been effective at this scale. One exception is the 10-million-hectare Northwest Forest Plan, which explicitly included adaptive management in its design. Evidence from 10 years of implementing the plan suggests that formalizing adaptive steps and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Almeida, Valmor F.
2017-04-19
In this work, a phase-space discontinuous Galerkin (PSDG) method is presented for the solution of stellar radiative transfer problems. It allows for greater adaptivity than competing methods without sacrificing generality. The method is extensively tested on a spherically symmetric, static, inverse-power-law scattering atmosphere. Results for different sizes of atmospheres and intensities of scattering agreed with asymptotic values. The exponentially decaying behavior of the radiative field in the diffusive-transparent transition region, and the forward peaking behavior at the surface of extended atmospheres were accurately captured. The integrodifferential equation of radiation transfer is solved iteratively by alternating between the radiative pressure equation and the original equation with the integral term treated as an energy density source term. In each iteration, the equations are solved via an explicit, flux-conserving, discontinuous Galerkin method. Finite elements are ordered in wave fronts perpendicular to the characteristic curves so that elemental linear algebraic systems are solved quickly by sweeping the phase space element by element. Two implementations of a diffusive boundary condition at the origin are demonstrated wherein the finite discontinuity in the radiation intensity is accurately captured by the proposed method. This allows for a consistent mechanism to preserve photon luminosity. The method was proved to be robust and fast, and a case is made for the adequacy of parallel processing. In addition to classical two-dimensional plots, results of normalized radiation intensity were mapped onto a log-polar surface exhibiting all distinguishing features of the problem studied.
Solving delay differential equations in S-ADAPT by method of steps.
Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech
2013-09-01
S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamics data. The S-ADAPT-generated solutions for DDE problems agreed with both the explicit solutions and the MATLAB-produced solutions to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
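The method of steps itself is simple enough to sketch on the scalar test equation y'(t) = -y(t - 1) with constant history: on each successive delay interval the delayed term is already known, so the DDE reduces to an ODE. Forward Euler on a fine grid is used here purely for illustration (S-ADAPT delegates the ODE part to LSODA):

```python
import numpy as np

def dde_method_of_steps(tau=1.0, t_end=2.0, dt=0.001):
    """Method of steps for y'(t) = -y(t - tau) with history y(t) = 1 for
    t <= 0: on [0, tau] the delayed term is the known history, on
    [tau, 2*tau] it is the segment just computed, and so on, so the DDE
    reduces interval by interval to an ODE."""
    m = int(round(tau / dt))               # grid points per delay interval
    n = int(round(t_end / dt))
    y = np.ones(m + n + 1)                 # y[i] ~ y((i - m)*dt); history prepended
    for i in range(n):
        y[m + i + 1] = y[m + i] - dt * y[i]    # delayed term y(t - tau) = y[i]
    return np.arange(-m, n + 1) * dt, y

t, y = dde_method_of_steps()
```

On [0, 1] the exact solution is y(t) = 1 - t, and on [1, 2] it is y(t) = t²/2 - 2t + 3/2, so y(1) = 0 and y(2) = -0.5, which the numerical solution reproduces up to the Euler error.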
The effect of orthology and coregulation on detecting regulatory motifs.
Storms, Valerie; Claeys, Marleen; Sanchez, Aminael; De Moor, Bart; Verstuyf, Annemieke; Marchal, Kathleen
2010-02-03
Computational de novo discovery of transcription factor binding sites is still a challenging problem. The growing number of sequenced genomes allows integrating orthology evidence with coregulation information when searching for motifs. Moreover, the more advanced motif detection algorithms explicitly model the phylogenetic relatedness between the orthologous input sequences and thus should be well adapted towards using orthologous information. In this study, we evaluated the conditions under which complementing coregulation with orthologous information improves motif detection for the class of probabilistic motif detection algorithms with an explicit evolutionary model. We designed datasets (real and synthetic) covering different degrees of coregulation and orthologous information to test how well Phylogibbs and Phylogenetic sampler, as representatives of the motif detection algorithms with an evolutionary model, performed as compared to MEME, a more classical motif detection algorithm that treats orthologs independently. Under certain conditions detecting motifs in the combined coregulation-orthology space is indeed more efficient than using each space separately, but this is not always the case. Moreover, the difference in success rate between the advanced algorithms and MEME is still marginal. The success rate of motif detection depends on the complex interplay between the added information and the specificities of the applied algorithms. Insights into this relation provide information useful to both developers and users. All benchmark datasets are available at http://homes.esat.kuleuven.be/~kmarchal/Supplementary_Storms_Valerie_PlosONE.
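The probabilistic machinery underlying such motif detectors can be illustrated with a toy position weight matrix (PWM) scan. This is a generic sketch of log-odds motif scoring, not the phylogenetic models of PhyloGibbs or MEME; the sites and query sequence are invented for illustration:

```python
import math

BASES = "ACGT"

def pwm_from_sites(sites, pseudo=1.0):
    """Log-odds position weight matrix built from aligned binding sites,
    with pseudocounts and a uniform 0.25 background."""
    width = len(sites[0])
    pwm = []
    for pos in range(width):
        counts = {b: pseudo for b in BASES}
        for site in sites:
            counts[site[pos]] += 1.0
        total = sum(counts.values())
        pwm.append({b: math.log((counts[b] / total) / 0.25) for b in BASES})
    return pwm

def best_hit(pwm, seq):
    """Highest-scoring window of seq under the PWM."""
    width = len(pwm)
    score, j = max((sum(pwm[i][seq[j + i]] for i in range(width)), j)
                   for j in range(len(seq) - width + 1))
    return seq[j:j + width], score

# Hypothetical aligned sites and query sequence, purely for illustration.
pwm = pwm_from_sites(["TATA", "TATA", "TACA"])
hit, score = best_hit(pwm, "GGCTATAGGC")
```

Algorithms with an explicit evolutionary model extend this scoring so that orthologous sequences contribute jointly, weighted by their phylogenetic relatedness, rather than being scanned independently.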
Graphite/epoxy composite adapters for the Space Shuttle/Centaur vehicle
NASA Technical Reports Server (NTRS)
Kasper, Harold J.; Ring, Darryl S.
1990-01-01
The decision to launch various NASA satellite and Air Force spacecraft from the Space Shuttle created the need for a high-energy upper stage capable of being deployed from the cargo bay. Two redesigned versions of the Centaur vehicle which employed a graphite/epoxy composite material for the forward and aft adapters were selected. Since this was the first time a graphite/epoxy material was used for Centaur major structural components, the development of the adapters was a major effort. An overview of the composite adapter designs, subcomponent design evaluation test results, and composite adapter test results from a full-scale vehicle structural test is presented.
Daidone, Isabella; Amadei, Andrea; Di Nola, Alfredo
2005-05-15
The folding of the amyloidogenic H1 peptide MKHMAGAAAAGAVV taken from the syrian hamster prion protein is explored in explicit aqueous solution at 300 K using long time scale all-atom molecular dynamics simulations for a total simulation time of 1.1 μs. The system, initially modeled as an alpha-helix, preferentially adopts a beta-hairpin structure, and several unfolding/refolding events are observed, yielding a very short average beta-hairpin folding time of approximately 200 ns. The long time scale accessed by our simulations and the reversibility of the folding allow us to properly explore the configurational space of the peptide in solution. The free energy profile, as a function of the principal components (essential eigenvectors) of motion describing the main conformational transitions, shows the characteristic features of a funneled landscape, with a downhill surface toward the beta-hairpin folded basin. However, the analysis of the peptide's thermodynamic stability reveals that the beta-hairpin in solution is rather unstable. These results are in good agreement with several experimental observations, according to which the isolated H1 peptide very rapidly adopts beta-sheet structure in water, leading to amyloid fibril precipitates [Nguyen et al., Biochemistry 1995;34:4186-4192; Inouye et al., J Struct Biol 1998;122:247-255]. Moreover, in this article we also characterize the diffusion behavior in conformational space, investigating its relations with folding/unfolding conditions. Copyright 2005 Wiley-Liss, Inc.
A Comparative Study of Acousto-Optic Time-Integrating Correlators for Adaptive Jamming Cancellation
1997-10-01
This final report presents a comparative study of the space-integrating and time-integrating configurations of an acousto-optic correlator...systematically evaluate all existing acousto-optic correlator architectures and to determine which would be most suitable for adaptive jamming
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method and of higher-order exponential operator splitting methods with variable time stepsizes is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations with an enhancement either in efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element methods.
Nevertheless, for smaller parameter values locally adaptive time discretisations make it possible to choose the time stepsizes sufficiently small that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
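A single step of the time-splitting Fourier approach referred to above can be sketched for the 1D Gross–Pitaevskii equation. This is a fixed-step Strang splitting on a uniform periodic grid without trap potential or adaptivity, so it illustrates only the splitting-plus-FFT building block, not the adaptive stepsize control studied in the paper:

```python
import numpy as np

def strang_step_gpe(psi, dx, dt, g=1.0):
    """One Strang time-splitting step for the 1D Gross-Pitaevskii equation
    i psi_t = -(1/2) psi_xx + g |psi|^2 psi (defocusing for g > 0):
    exact nonlinear half-steps in physical space wrap an exact kinetic step
    evaluated in Fourier space."""
    k = 2.0 * np.pi * np.fft.fftfreq(psi.size, d=dx)
    psi = psi * np.exp(-0.5j * dt * g * np.abs(psi) ** 2)             # half nonlinear step
    psi = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(psi))  # full kinetic step
    psi = psi * np.exp(-0.5j * dt * g * np.abs(psi) ** 2)             # half nonlinear step
    return psi

x = np.linspace(-10.0, 10.0, 256, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x ** 2).astype(complex)
norm0 = np.sum(np.abs(psi) ** 2) * dx     # L2 norm, conserved by each substep
for _ in range(100):
    psi = strang_step_gpe(psi, dx, dt=0.005)
```

Because each substep is a unitary phase multiplication (and the FFT is unitary up to roundoff), the L2 norm is conserved to machine precision, which is one of the structural properties that makes splitting schemes attractive here.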
Simulating spatial and temporal context of forest management using hypothetical landscapes
Eric J. Gustafson; Thomas R. Crow
1998-01-01
Spatially explicit models that combine remote sensing with geographic information systems (GIS) offer great promise to land managers because they consider the arrangement of landscape elements in time and space. Their visual and geographic nature facilitate the comparison of alternative landscape designs. Among various activities associated with forest management,...
Design for Verification: Using Design Patterns to Build Reliable Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Koga, Dennis (Technical Monitor)
2003-01-01
Components so far have been mainly used in commercial software development to reduce time to market. While some effort has been spent on formal aspects of components, most of this was done in the context of programming language or operating system framework integration. As a consequence, increased reliability of composed systems is mainly regarded as a side effect of a more rigid testing of pre-fabricated components. In contrast to this, Design for Verification (D4V) puts the focus on component specific property guarantees, which are used to design systems with high reliability requirements. D4V components are domain specific design pattern instances with well-defined property guarantees and usage rules, which are suitable for automatic verification. The guaranteed properties are explicitly used to select components according to key system requirements. The D4V hypothesis is that the same general architecture and design principles leading to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the limitations of conventional reliability assurance measures, such as too large a state space or too many execution paths.
Real-time prediction of respiratory motion based on a local dynamic model in an augmented space
NASA Astrophysics Data System (ADS)
Hong, S.-M.; Jung, B.-H.; Ruan, D.
2011-03-01
Motion-adaptive radiotherapy aims to deliver ablative radiation dose to the tumor target with minimal normal tissue exposure, by accounting for real-time target movement. In practice, prediction is usually necessary to compensate for system latency induced by measurement, communication and control. This work focuses on predicting respiratory motion, which is most dominant for thoracic and abdominal tumors. We develop and investigate the use of a local dynamic model in an augmented space, motivated by the observation that respiratory movement exhibits a locally circular pattern in a plane augmented with a delayed axis. By including the angular velocity as part of the system state, the proposed dynamic model effectively captures the natural evolution of respiratory motion. The first-order extended Kalman filter is used to propagate and update the state estimate. The target location is predicted by evaluating the local dynamic model equations at the required prediction length. This method is complementary to existing work in that (1) the local circular motion model characterizes 'turning', overcoming the limitation of linear motion models; (2) it uses a natural state representation including the local angular velocity and updates the state estimate systematically, offering explicit physical interpretations; (3) it relies on a parametric model and is much less data-satiate than the typical adaptive semiparametric or nonparametric method. We tested the performance of the proposed method with ten RPM traces, using the normalized root mean squared difference between the predicted value and the retrospective observation as the error metric. Its performance was compared with predictors based on the linear model, the interacting multiple linear models and the kernel density estimator for various combinations of prediction lengths and observation rates. 
The local dynamic model based approach provides the best performance for short to medium prediction lengths under relatively low observation rate. Sensitivity analysis indicates its robustness toward the choice of parameters. Its simplicity, robustness and low computation cost makes the proposed local dynamic model an attractive tool for real-time prediction with system latencies below 0.4 s.
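The state-augmentation idea, treating the angular velocity as part of the state and filtering it with an extended Kalman filter, can be sketched in simplified form. This is not the paper's delay-augmented formulation: here the filter observes the point [cos θ, sin θ] on the circle directly, and all noise parameters are illustrative assumptions:

```python
import numpy as np

def ekf_circular_tracker(meas, dt, q=1e-4, r=1e-2):
    """Extended Kalman filter for a locally circular motion model:
    state x = [theta, omega] with near-constant angular velocity, observed
    as the point [cos(theta), sin(theta)]. Returns the filtered states."""
    x = np.array([0.0, 0.0])                  # initial angle and angular velocity
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-turn-rate dynamics
    Q, R = q * np.eye(2), r * np.eye(2)
    history = []
    for z in meas:
        # Predict step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step with measurement z ~ [cos(theta), sin(theta)].
        h = np.array([np.cos(x[0]), np.sin(x[0])])
        H = np.array([[-np.sin(x[0]), 0.0], [np.cos(x[0]), 0.0]])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - h)
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    return np.array(history)

# Track a noiseless circular trajectory with unknown angular velocity 0.2.
dt, omega_true = 0.1, 0.2
t = np.arange(300) * dt
meas = np.stack([np.cos(omega_true * t), np.sin(omega_true * t)], axis=1)
states = ekf_circular_tracker(meas, dt)
```

Prediction ahead by a latency L then amounts to evaluating the model at the current estimate, i.e. advancing the angle by omega * L, which is the sense in which the filtered state yields the predictor.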
Bursting endemic bubbles in an adaptive network
NASA Astrophysics Data System (ADS)
Sherborne, N.; Blyuss, K. B.; Kiss, I. Z.
2018-04-01
The spread of an infectious disease is known to change people's behavior, which in turn affects the spread of disease. Adaptive network models that account for both epidemic and behavioral change have found oscillations, but in an extremely narrow region of the parameter space, which contrasts with intuition and available data. In this paper we propose a simple susceptible-infected-susceptible epidemic model on an adaptive network with time-delayed rewiring, and show that oscillatory solutions are now present in a wide region of the parameter space. Altering the transmission or rewiring rates reveals the presence of an endemic bubble—an enclosed region of the parameter space where oscillations are observed.
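The adaptive-network mechanism, susceptible nodes cutting links to infected neighbors and rewiring elsewhere, can be sketched in a minimal synchronous simulation. This omits the paper's time delay in rewiring and, for simplicity, rewires to a uniformly random node; all rates and sizes are illustrative:

```python
import random

def adaptive_sis_step(adj, infected, beta=0.06, gamma=0.05, w=0.02, rng=random):
    """One synchronous step of a susceptible-infected-susceptible epidemic
    on an adaptive network: each S-I link transmits with probability beta,
    infected nodes recover with probability gamma, and a susceptible node
    rewires an S-I link to a random node with probability w."""
    n = len(adj)
    new_infected = set(infected)
    for s in range(n):
        if s in infected:
            continue
        for nb in list(adj[s]):
            if nb not in infected:
                continue
            if rng.random() < beta:
                new_infected.add(s)
            elif rng.random() < w:
                # Rewire: drop the S-I edge and connect s to a random node.
                adj[s].discard(nb)
                adj[nb].discard(s)
                new = rng.randrange(n)
                if new != s:
                    adj[s].add(new)
                    adj[new].add(s)
    for i in infected:
        if rng.random() < gamma:
            new_infected.discard(i)
    return new_infected

# Small random graph with a seed infection; sizes are illustrative.
rng = random.Random(1)
n = 200
adj = [set() for _ in range(n)]
for _ in range(3 * n):
    a, b = rng.randrange(n), rng.randrange(n)
    if a != b:
        adj[a].add(b)
        adj[b].add(a)
infected = set(range(10))
for _ in range(50):
    infected = adaptive_sis_step(adj, infected, rng=rng)
```

Sweeping beta or w while recording the prevalence time series is the kind of experiment that reveals the oscillatory regimes and the endemic bubble described in the abstract.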
Space-time mesh adaptation for solute transport in randomly heterogeneous porous media.
Dell'Oca, Aronne; Porta, Giovanni Michele; Guadagnini, Alberto; Riva, Monica
2018-05-01
We assess the impact of an anisotropic space and time grid adaptation technique on our ability to solve numerically solute transport in heterogeneous porous media. Heterogeneity is characterized in terms of the spatial distribution of hydraulic conductivity, whose natural logarithm, Y, is treated as a second-order stationary random process. We consider nonreactive transport of dissolved chemicals to be governed by an Advection Dispersion Equation at the continuum scale. The flow field, which provides the advective component of transport, is obtained through the numerical solution of Darcy's law. A suitable recovery-based error estimator is analyzed to guide the adaptive discretization. We investigate two diverse strategies guiding the (space-time) anisotropic mesh adaptation. These are respectively grounded on the definition of the guiding error estimator through the spatial gradients of: (i) the concentration field only; (ii) both concentration and velocity components. We test the approach for two-dimensional computational scenarios with moderate and high levels of heterogeneity, the latter being expressed in terms of the variance of Y. As quantities of interest, we key our analysis towards the time evolution of section-averaged and point-wise solute breakthrough curves, second centered spatial moment of concentration, and scalar dissipation rate. As a reference against which we test our results, we consider corresponding solutions associated with uniform space-time grids whose level of refinement is established through a detailed convergence study. We find a satisfactory comparison between results for the adaptive methodologies and such reference solutions, our adaptive technique being associated with a markedly reduced computational cost. 
Comparison of the two adaptive strategies tested suggests that: (i) defining the error estimator relying solely on concentration fields yields some advantages in grasping the key features of solute transport taking place within low velocity regions, where diffusion-dispersion mechanisms are dominant; and (ii) embedding the velocity field in the error estimator guiding strategy yields an improved characterization of the forward fringe of solute fronts which propagate through high velocity regions. Copyright © 2017 Elsevier B.V. All rights reserved.
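The first adaptation strategy, an indicator driven by spatial gradients of the concentration field, can be sketched on a structured grid. A plain gradient magnitude stands in for the paper's recovery-based estimator, and the threshold fraction is an illustrative assumption:

```python
import numpy as np

def refinement_flags(c, frac=0.1):
    """Gradient-based adaptation indicator, a simplified stand-in for a
    recovery-based error estimator: cells whose concentration-gradient
    magnitude falls in the top `frac` fraction are flagged for refinement."""
    gy, gx = np.gradient(c)
    indicator = np.hypot(gx, gy)
    threshold = np.quantile(indicator, 1.0 - frac)
    return indicator >= threshold

# A sharp solute front at x = 0.5: only cells near the front should be flagged.
x = np.linspace(0.0, 1.0, 50)
X, Y = np.meshgrid(x, x)
c = np.tanh((X - 0.5) / 0.02)
flags = refinement_flags(c, frac=0.1)
```

The second strategy in the paper would additionally fold the velocity components into the indicator, sharpening refinement along the advancing fringe of the front.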
Guzik, Stephen M.; Gao, Xinfeng; Owen, Landon D.; ...
2015-12-20
We present a fourth-order accurate finite-volume method for solving time-dependent hyperbolic systems of conservation laws on mapped grids that are adaptively refined in space and time. Some novel considerations for formulating the semi-discrete system of equations in computational space are combined with detailed mechanisms for accommodating the adapting grids. Furthermore, these considerations ensure that conservation is maintained and that the divergence of a constant vector field is always zero (freestream-preservation property). The solution in time is advanced with a fourth-order Runge-Kutta method. A series of tests verifies that the expected accuracy is achieved in smooth flows, and the solution of a Mach reflection problem demonstrates the effectiveness of the algorithm in resolving strong discontinuities.
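The method-of-lines structure described above, a conservative finite-volume spatial discretization advanced with four-stage Runge-Kutta, can be sketched in 1D. First-order upwind fluxes on a uniform periodic grid stand in for the paper's fourth-order fluxes on mapped adaptive grids, but the discrete conservation property carries over:

```python
import numpy as np

def advection_rhs(u, dx, a=1.0):
    """Semi-discrete finite-volume right-hand side for u_t + a u_x = 0 with
    periodic boundaries and first-order upwind fluxes F_{i+1/2} = a*u_i,
    so du_i/dt = -(a*u_i - a*u_{i-1})/dx."""
    return -a * (u - np.roll(u, 1)) / dx

def rk4_step(u, dt, rhs):
    """Classical four-stage Runge-Kutta update for the semi-discrete system."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

n = 100
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)       # cell-averaged Gaussian pulse
mass0 = u.sum() * dx                      # total conserved quantity
for _ in range(200):                      # one full period at CFL = 0.5
    u = rk4_step(u, 0.5 * dx, lambda v: advection_rhs(v, dx))
```

Because the flux differences telescope around the periodic grid, the total mass is conserved to roundoff regardless of the time integrator, which is the discrete analogue of the conservation property the paper maintains across refinement levels.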
Stevens, Andreas; Schwarz, Jürgen; Schwarz, Benedikt; Ruf, Ilona; Kolter, Thomas; Czekalla, Joerg
2002-03-01
Novel and classic neuroleptics differ in their effects on limbic striatal/nucleus accumbens (NA) and prefrontal cortex (PFC) dopamine turnover, suggesting differential effects on implicit and explicit learning as well as on anhedonia. The present study investigates whether such differences can be demonstrated in a naturalistic sample of schizophrenic patients. Twenty-five inpatients diagnosed with DSM-IV schizophrenic psychosis and treated for at least 14 days with the novel neuroleptic olanzapine were compared with 25 schizophrenics taking classic neuroleptics and with 25 healthy controls, matched by age and education level. PFC/NA-dependent implicit learning was assessed by a serial reaction time task (SRTT) and compared with cerebellum-mediated classical eye-blink conditioning and explicit visuospatial memory. Anhedonia was measured with the Snaith-Hamilton Pleasure Scale (SHAPS). Implicit learning (SRTT) and psychomotor speed, but not explicit (visuospatial) learning, were superior in the olanzapine-treated group compared to the patients on classic neuroleptics. Compared to healthy controls, olanzapine-treated schizophrenics showed similar implicit learning but reduced explicit (visuospatial) memory performance. Acquisition of eyeblink conditioning did not differ between the three groups. There was no difference between the patient groups with regard to anhedonia and SANS scores. Olanzapine seems to interfere less with unattended learning and motor speed than classic neuroleptics. In daily life, this may translate into better adaptation to a rapidly changing environment. The effects seem specific, as no difference from classic neuroleptics was found in explicit learning and eyeblink conditioning.
Size and shape of Brain may be such as to take advantage of two Dimensions of Time
NASA Astrophysics Data System (ADS)
Kriske, Richard
2014-03-01
This author had previously Theorized that there are two non-commuting Dimensions of time. One is Clock Time and the other is Information Time (which we generally refer to as Information, like Spin Up or Spin Down). When time does not commute with another Dimension of Time, one takes the Clock Time at one point in space and the Information time is not known; that is different than if one takes the Information time at that point and the Clock time is not known--This is not explicitly about time but rather space. An example of this non-commutation is that if one knows the Spin at one point and the Time at one point of space then simultaneously, one knows the Spin at another point of Space and the Time there (It is the same time), it is a restatement of the EPR paradox. As a matter of fact two Dimensions of Time would prove the EPR paradox. It is obvious from that argument that if one needed to take advantage of Information, then a fairly large space needs to be used, a large amount of Energy needs to be Generated and a symmetry needs to be established in Space, like the lobes of a Brain, in order to detect the fact that the Tclock and Tinfo are not Commuting. This Non-Commuting deposits a large amount of Information simultaneously in that space, and synchronizes the time there.
NASA Astrophysics Data System (ADS)
Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye
2018-04-01
The aim of this paper is to present an explicit numerical algorithm based on an improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it gives explicit eigenvalues and eigenvectors, based on the time-space separation method and boundary-condition analysis. With the help of Fourier series and Galerkin truncation, we can obtain the finite-dimensional ordinary differential equations which facilitate the system analysis and controller design. The numerical solutions are demonstrated via two examples and compared with the finite element method. It is shown that the proposed method is effective.
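For a constant-coefficient 1-D problem with periodic boundaries, the time-space separation this abstract describes reduces to a decoupled scalar ODE per Fourier mode, each with an explicit eigenvalue. The sketch below (my own illustrative coefficients, not the paper's system or boundary analysis) shows the Galerkin-truncated equations and their closed-form solution:

```python
import numpy as np

# Unsteady diffusion-convection-reaction: u_t = D*u_xx - c*u_x - r*u,
# periodic on [0, 2*pi). Galerkin truncation onto Fourier modes exp(i*k*x)
# decouples the PDE into one scalar ODE per mode:
#     du_k/dt = (-D*k**2 - 1j*c*k - r) * u_k,
# so the eigenvalues of the truncated system are available explicitly.
D, c, r = 0.1, 1.0, 0.5          # illustrative coefficients (assumptions)
n = 64
x = 2 * np.pi * np.arange(n) / n
u0 = np.sin(x) + 0.3 * np.cos(3 * x)

k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
lam = -D * k**2 - 1j * c * k - r           # explicit eigenvalues
t = 2.0
u_hat = np.fft.fft(u0) * np.exp(lam * t)   # exact solution of each modal ODE
u = np.real(np.fft.ifft(u_hat))
# Each mode decays at rate D*k**2 + r while advecting, so the truncated
# system is trivially integrated once the eigenvalues are in hand.
```

This explicit modal form is what makes the finite-dimensional system convenient for analysis and controller design, as the abstract notes.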
Adaptive Automation Design and Implementation
2015-09-17
Study: Space Navigator. This section demonstrates the player modeling paradigm, focusing specifically on the response generation section of the player ...human-machine system, a real-time player modeling framework for imitating a specific person's task performance, and the Adaptive Automation System...
It Is Not What You Expect: Dissociating Conflict Adaptation from Expectancies in a Stroop Task
ERIC Educational Resources Information Center
Jimenez, Luis; Mendez, Amavia
2013-01-01
In conflict tasks, congruency effects are modulated by the sequence of preceding trials. This modulation effect has been interpreted as an influence of a proactive mechanism of adaptation to conflict (Botvinick, Nystrom, Fissell, Carter, & Cohen, 1999), but the possible contribution of explicit expectancies to this adaptation effect remains…
Not explicit but implicit memory is influenced by individual perception style.
Hine, Kyoko; Tsushima, Yoshiaki
2018-01-01
Not only explicit but also implicit memory has considerable influence on our daily life. However, it is still unclear whether explicit and implicit memories are sensitive to individual differences. Here, we investigated how individual perception style (global or local) correlates with implicit and explicit memory. As a result, we found that not explicit but implicit memory was affected by the perception style: local perception style people used implicit memory more than global perception style people. These results may help us develop effective applications adapted to individual perception style and understand clinical symptoms such as autism spectrum disorder. Furthermore, this finding might give us new insight into memory involving consciousness and unconsciousness, as well as the relationship between implicit/explicit memory and individual perception style. PMID:29370212
Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gavel, D
2002-10-08
A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few "high order" modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.
A sparse grid based method for generative dimensionality reduction of high-dimensional data
NASA Astrophysics Data System (ADS)
Bohn, Bastian; Garcke, Jochen; Griebel, Michael
2016-03-01
Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.
Rotational wind indicator enhances control of rotated displays
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Pavel, Misha
1991-01-01
Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do if flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no enhancement control condition. Moreover, it produces adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.
The Built Environment and Health: Introducing Individual Space-Time Behavior
Saarloos, Dick; Kim, Jae-Eun; Timmermans, Harry
2009-01-01
Many studies have examined the relationship between the built environment and health. Yet, the question of how and why the environment influences health behavior remains largely unexplored. As health promotion interventions work through the individuals in a targeted population, an explicit understanding of individual behavior is required to formulate and evaluate intervention strategies. Bringing in concepts from various fields, this paper proposes the use of an activity-based modeling approach for understanding and predicting, from the bottom up, how individuals interact with their environment and each other in space and time, and how their behaviors aggregate to population-level health outcomes. PMID:19578457
The Weyl-Lanczos equations and the Lanczos wave equation in four dimensions as systems in involution
NASA Astrophysics Data System (ADS)
Dolan, P.; Gerber, A.
2003-07-01
The Weyl-Lanczos equations in four dimensions form a system in involution. We compute its Cartan characters explicitly and use Janet-Riquier theory to confirm the results in the case of all space-times with a diagonal metric tensor and for the plane wave limit of space-times. We write the Lanczos wave equation as an exterior differential system and, with assistance from Janet-Riquier theory, we compute its Cartan characters and find that it forms a system in involution. We compare these Cartan characters with those of the Weyl-Lanczos equations. All results hold for the real analytic case.
Classical integrable defects as quasi Bäcklund transformations
NASA Astrophysics Data System (ADS)
Doikou, Anastasia
2016-10-01
We consider the algebraic setting of classical defects in discrete and continuous integrable theories. We derive the "equations of motion" on the defect point via the space-like and time-like description. We then exploit the structural similarity of these equations with the discrete and continuous Bäcklund transformations. Although these equations are similar, they are not exactly the same as the Bäcklund transformations. We also consider specific examples of integrable models to demonstrate our construction, i.e. the Toda chain and the sine-Gordon model. The equations of the time (space) evolution of the defect (discontinuity) degrees of freedom for these models are explicitly derived.
NASA Technical Reports Server (NTRS)
Tao, Gang; Joshi, Suresh M.
2008-01-01
In this paper, the problem of controlling systems with failures and faults is introduced, and an overview of recent work on direct adaptive control for compensation of uncertain actuator failures is presented. Actuator failures may be characterized by some unknown system inputs being stuck at some unknown (fixed or varying) values at unknown time instants, that cannot be influenced by the control signals. The key task of adaptive compensation is to design the control signals in such a manner that the remaining actuators can automatically and seamlessly take over for the failed ones, and achieve desired stability and asymptotic tracking. A certain degree of redundancy is necessary to accomplish failure compensation. The objective of adaptive control design is to effectively use the available actuation redundancy to handle failures without the knowledge of the failure patterns, parameters, and time of occurrence. This is a challenging problem because failures introduce large uncertainties in the dynamic structure of the system, in addition to parametric uncertainties and unknown disturbances. The paper addresses some theoretical issues in adaptive actuator failure compensation: actuator failure modeling, redundant actuation requirements, plant-model matching, error system dynamics, adaptation laws, and stability, tracking, and performance analysis. Adaptive control designs can be shown to effectively handle uncertain actuator failures without explicit failure detection. Some open technical challenges and research problems in this important research area are discussed.
Rapid adaptation to microgravity in mammalian macrophage cells.
Thiel, Cora S; de Zélicourt, Diane; Tauber, Svantje; Adrian, Astrid; Franz, Markus; Simmet, Dana M; Schoppmann, Kathrin; Hauschild, Swantje; Krammer, Sonja; Christen, Miriam; Bradacs, Gesine; Paulsen, Katrin; Wolf, Susanne A; Braun, Markus; Hatton, Jason; Kurtcuoglu, Vartan; Franke, Stefanie; Tanner, Samuel; Cristoforetti, Samantha; Sick, Beate; Hock, Bertold; Ullrich, Oliver
2017-02-27
Despite the observed severe effects of microgravity on mammalian cells, many astronauts have completed long term stays in space without suffering from severe health problems. This raises questions about the cellular capacity for adaptation to a new gravitational environment. The International Space Station (ISS) experiment TRIPLE LUX A, performed in the BIOLAB laboratory of the ISS COLUMBUS module, allowed for the first time the direct measurement of a cellular function in real time and on orbit. We measured the oxidative burst reaction in mammalian macrophages (NR8383 rat alveolar macrophages) exposed to a centrifuge regime of internal 0 g and 1 g controls and step-wise increase or decrease of the gravitational force in four independent experiments. Surprisingly, we found that these macrophages adapted to microgravity in an ultra-fast manner within seconds, after an immediate inhibitory effect on the oxidative burst reaction. For the first time, we provided direct evidence of cellular sensitivity to gravity, through real-time on orbit measurements and by using an experimental system, in which all factors except gravity were constant. The surprisingly ultra-fast adaptation to microgravity indicates that mammalian macrophages are equipped with a highly efficient adaptation potential to a low gravity environment. This opens new avenues for the exploration of adaptation of mammalian cells to gravitational changes.
Fault recovery for real-time, multi-tasking computer system
NASA Technical Reports Server (NTRS)
Hess, Richard (Inventor); Kelly, Gerald B. (Inventor); Rogers, Randy (Inventor); Stange, Kent A. (Inventor)
2011-01-01
System and methods for providing a recoverable real time multi-tasking computer system are disclosed. In one embodiment, a system comprises a real time computing environment, wherein the real time computing environment is adapted to execute one or more applications and wherein each application is time and space partitioned. The system further comprises a fault detection system adapted to detect one or more faults affecting the real time computing environment and a fault recovery system, wherein upon the detection of a fault the fault recovery system is adapted to restore a backup set of state variables.
The Noncommutative Doplicher-Fredenhagen-Roberts-Amorim Space
NASA Astrophysics Data System (ADS)
Abreu, Everton M. C.; Mendes, Albert C. R.; Oliveira, Wilson; Zangirolami, Adriano O.
2010-10-01
This work is an effort to compose a pedestrian review of the recently elaborated Doplicher, Fredenhagen, Roberts and Amorim (DFRA) noncommutative (NC) space, which is a minimal extension of the DFR space. In this DFRA space, the object of noncommutativity (θμν) is a variable of the NC system and has a canonical conjugate momentum. Namely, for instance, in NC quantum mechanics we will show that θij (i,j=1,2,3) is an operator in Hilbert space and we will explore the consequences of this so-called "operationalization". The DFRA formalism is constructed in an extended space-time with independent degrees of freedom associated with the object of noncommutativity θμν. We will study the symmetry properties of an extended x+θ space-time, given by the group P', which has the Poincaré group P as a subgroup. The Noether formalism adapted to such extended x+θ (D=4+6) space-time is depicted. A consistent algebra involving the enlarged set of canonical operators is described, which permits one to construct theories that are dynamically invariant under the action of the rotation group. In this framework it is also possible to give dynamics to the NC operator sector, resulting in new features. A consistent classical mechanics formulation is analyzed in such a way that, under quantization, it furnishes a NC quantum theory with interesting results. The Dirac formalism for constrained Hamiltonian systems is considered and the object of noncommutativity θij plays a fundamental role as an independent quantity. Next, we explain the dynamical spacetime symmetries in NC relativistic theories by using the DFRA algebra. We also explain the generalized Dirac equation issue: the fermionic field depends not only on the ordinary coordinates but on θμν as well. The dynamical symmetry content of such fermionic theory is discussed, and we show that its action is invariant under P'. In the last part of this work we analyze the complex scalar fields using this new framework. 
As said above, in a first quantized formalism, θμν and its canonical momentum πμν are seen as operators living in some Hilbert space. In a second quantized formalism perspective, we show an explicit form for the extended Poincaré generators and the same algebra is generated via generalized Heisenberg relations. We also consider a source term and construct the general solution for the complex scalar fields using the Green function technique.
NASA Astrophysics Data System (ADS)
Lemarié, F.; Debreu, L.
2016-02-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time-step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e. most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except in just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while being robust to changes in Courant number in terms of accuracy. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e. mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical Courant-Friedrichs-Lewy (CFL) restriction, while avoiding numerical inaccuracies associated with standard implicit advection schemes (i.e. large sensitivity of the solution on Courant number, large phase delay, and possibly excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers while having a very reasonable computational cost. To our knowledge no unconditionally stable scheme with such high order accuracy in time and space has been presented so far in the literature. 
Furthermore, we show how those schemes can be made monotonic without compromising their stability properties.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarkimaki, Konsta; Hirvijoki, E.; Terava, J.
2017-10-12
Here, we report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell–Jüttner statistics. The implementation is based on the Beliaev–Budker collision integral which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space.
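The efficiency gain from adaptive time stepping can be illustrated with a generic step-doubling error controller applied to the deterministic slowing-down (drag) part of a toy collision operator. This is a hedged sketch under my own assumptions, not the Beliaev-Budker implementation described in the abstract:

```python
import math

def drag(v, nu=1.0):
    """Deterministic slowing-down part of a toy collision operator: dv/dt = -nu*v."""
    return -nu * v

def adaptive_euler(v, t_end, dt=0.5, tol=1e-6):
    """Step-doubling error control: compare one full Euler step against two
    half steps; reject and shrink dt when the estimated local error exceeds
    tol, grow dt when the error is comfortably small."""
    t = 0.0
    while t < t_end:
        dt = min(dt, t_end - t)
        full = v + dt * drag(v)
        half = v + 0.5 * dt * drag(v)
        two_half = half + 0.5 * dt * drag(half)
        err = abs(two_half - full)
        if err > tol:
            dt *= 0.5                 # reject: step too aggressive
            continue
        v, t = two_half, t + dt       # accept the more accurate value
        if err < 0.1 * tol:
            dt *= 2.0                 # error far below tolerance: grow step
    return v

v_end = adaptive_euler(1.0, t_end=2.0)
# With nu = 1 the exact solution is exp(-2); the integrator lands near it
# while choosing its own step sizes instead of a fixed worst-case dt.
```

In a real collision module the same controller would wrap the stochastic kicks as well, with the step-size criterion tied to the local collision frequency.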
ERIC Educational Resources Information Center
Sasanguie, Delphine; Gobel, Silke M.; Moll, Kristina; Smets, Karolien; Reynvoet, Bert
2013-01-01
In this study, the performance of typically developing 6- to 8-year-old children on an approximate number discrimination task, a symbolic comparison task, and a symbolic and nonsymbolic number line estimation task was examined. For the first time, children's performances on these basic cognitive number processing tasks were explicitly contrasted…
Real-time optical laboratory solution of parabolic differential equations
NASA Technical Reports Server (NTRS)
Casasent, David; Jackson, James
1988-01-01
An optical laboratory matrix-vector processor is used to solve parabolic differential equations (the transient diffusion equation with two space variables and time) by an explicit algorithm. This includes optical matrix-vector nonbase-2 encoded laboratory data, the combination of nonbase-2 and frequency-multiplexed data on such processors, a high-accuracy optical laboratory solution of a partial differential equation, new data partitioning techniques, and a discussion of a multiprocessor optical matrix-vector architecture.
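The explicit algorithm referenced here is, structurally, the classic forward-time centered-space (FTCS) update for the transient diffusion equation: each time step is a sparse matrix-vector product, which is exactly the operation an optical matrix-vector processor accelerates. A hedged numerical sketch of the underlying scheme (not the optical encoding itself):

```python
import numpy as np

# Transient diffusion u_t = D*(u_xx + u_yy) on the unit square,
# u = 0 on the boundary, explicit (FTCS) time stepping.
D, n = 1.0, 32
h = 1.0 / (n + 1)
dt = 0.2 * h * h / D          # explicit stability requires dt <= h^2 / (4*D)

x = np.linspace(h, 1 - h, n)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # fundamental mode
peak0 = u.max()

for _ in range(200):
    up = np.pad(u, 1)          # embed the Dirichlet zero boundary
    lap = (up[2:, 1:-1] + up[:-2, 1:-1] + up[1:-1, 2:]
           + up[1:-1, :-2] - 4.0 * u) / (h * h)   # 5-point Laplacian
    u = u + dt * D * lap       # one explicit step = one matrix-vector product
# The fundamental mode decays at roughly exp(-2*pi^2*D*t), so after
# 200 steps the peak has shrunk to about half its initial value.
```

Each iteration of the loop is the matrix-vector product the abstract's processor would carry out optically; the partitioning techniques it mentions split this product across hardware.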
3D glasma initial state for relativistic heavy ion collisions
Schenke, Björn; Schlichting, Sören
2016-10-13
We extend the impact-parameter-dependent Glasma model to three dimensions using explicit small-x evolution of the two incoming nuclear gluon distributions. We compute rapidity distributions of produced gluons and the early-time energy momentum tensor as a function of space-time rapidity and transverse coordinates. Finally, we study rapidity correlations and fluctuations of the initial geometry and multiplicity distributions and make comparisons to existing models for the three-dimensional initial state.
Explicit formulation of second and third order optical nonlinearity in the FDTD framework
NASA Astrophysics Data System (ADS)
Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas
2018-01-01
The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires solving iteratively an implicit form of Maxwell's equations over the entire numerical space and at each time step. Reaching numerical convergence demands significant computational resources and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is also provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.
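The Yee update that such an explicit approach builds on is itself fully local and explicit; the paper's contribution is to make the polarization update equally explicit. The minimal 1-D vacuum sketch below marks, as a comment only, where a nonlinear Lorentz polarization step would enter (the actual update equations are the paper's, not reproduced here):

```python
import numpy as np

# 1-D FDTD (Yee) in vacuum, normalized units c = eps0 = mu0 = 1.
n = 400
ez = np.zeros(n)        # E at integer grid points
hy = np.zeros(n - 1)    # H staggered between them
dx = 1.0
dt = 0.99 * dx          # Courant-stable in 1-D: dt <= dx / c

for step in range(300):
    hy += (dt / dx) * (ez[1:] - ez[:-1])         # H update (half step ahead)
    ez[1:-1] += (dt / dx) * (hy[1:] - hy[:-1])   # E update
    # A (nonlinear) Lorentz polarization P would be advanced here by its own
    # explicit local update, and the E update would gain a -dP/dt source
    # term; no global implicit solve is needed.
    ez[n // 2] += np.exp(-((step - 30.0) / 10.0) ** 2)  # soft Gaussian source

peak = np.abs(ez).max()
# The explicit update is stable: the fields remain bounded for the whole run.
```

The point of the abstract's method is that the extra polarization step keeps this per-cell, per-step structure, in contrast to iterative implicit formulations.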
Contracting Officer Workload and Contractual Terms: Theory and Evidence
2012-08-30
should be conducted in accordance with simplified acquisition procedures and are explicitly set aside for small businesses. These awards are known as...analyze a set of California Highway Procurement auctions and find that the ex-post adaptation costs make up between 7 and 13% of the winning bid...and simply assume he is facing some set of incentives that leads him to value saving time and money on the project and on its procurement. Having him
NASA Astrophysics Data System (ADS)
Couderc, F.; Duran, A.; Vila, J.-P.
2017-08-01
We present an explicit scheme for a two-dimensional multilayer shallow water model with density stratification, for general meshes and collocated variables. The proposed strategy is based on a regularized model where the transport velocity in the advective fluxes is shifted proportionally to the pressure potential gradient. Using a similar strategy for the potential forces, we show the stability of the method in the sense of a discrete dissipation of the mechanical energy, in general multilayer and non-linear frames. These results are obtained at first-order in space and time and extended using a second-order MUSCL extension in space and Heun's method in time. With the objective of minimizing the diffusive losses in realistic contexts, sufficient conditions are exhibited on the regularizing terms to ensure the scheme's linear stability at first and second-order in time and space. The other main result is consistency with the asymptotics reached at small and large time scales in low-Froude regimes, which govern large-scale oceanic circulation. Additionally, robustness and well-balanced results for motionless steady states are also ensured. These stability properties tend to provide a very robust and efficient approach, easy to implement and particularly well suited for large-scale simulations. Some numerical experiments are proposed to highlight the scheme's efficiency: an experiment of fast gravitational modes, a smooth surface wave propagation, an initial propagating surface water elevation jump considering a non-trivial topography, and a final experiment with slow Rossby modes simulating the displacement of a baroclinic vortex subject to the Coriolis force.
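The second-order extension mentioned (MUSCL in space, Heun in time) can be sketched generically. The toy below applies a minmod-limited MUSCL reconstruction and a Heun step to scalar advection on a periodic grid; it is an illustrative stand-in under my own assumptions, not the authors' multilayer scheme:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: take the smaller slope when signs agree, zero otherwise."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_rhs(u, dx, a=1.0):
    """MUSCL upwind RHS for u_t + a u_x = 0 (a > 0, periodic)."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    u_face = u + 0.5 * s            # reconstructed value at each cell's right face
    flux = a * u_face               # upwind: the left cell supplies the face flux
    return -(flux - np.roll(flux, 1)) / dx

def heun_step(u, dt, rhs):
    """Heun's method: explicit predictor-corrector, second order in time."""
    u_star = u + dt * rhs(u)
    return 0.5 * (u + u_star + dt * rhs(u_star))

n = 100
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # square wave
for _ in range(50):
    u = heun_step(u, 0.4 * dx, lambda v: muscl_rhs(v, dx))
# The limiter keeps the advected jump essentially monotone (no significant
# new maxima or minima), and the telescoping fluxes conserve total mass.
```

Heun's method is a convex combination of forward-Euler steps, which is why the limited scheme retains its stability properties at second order, mirroring the first-to-second-order extension strategy in the abstract.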
The dynamics of adapting, unregulated populations and a modified fundamental theorem.
O'Dwyer, James P
2013-01-06
A population in a novel environment will accumulate adaptive mutations over time, and the dynamics of this process depend on the underlying fitness landscape: the fitness of and mutational distance between possible genotypes in the population. Despite its fundamental importance for understanding the evolution of a population, inferring this landscape from empirical data has been problematic. We develop a theoretical framework to describe the adaptation of a stochastic, asexual, unregulated, polymorphic population undergoing beneficial, neutral and deleterious mutations on a correlated fitness landscape. We generate quantitative predictions for the change in the mean fitness and within-population variance in fitness over time, and find a simple, analytical relationship between the distribution of fitness effects arising from a single mutation, and the change in mean population fitness over time: a variant of Fisher's 'fundamental theorem' which explicitly depends on the form of the landscape. Our framework can therefore be thought of in three ways: (i) as a set of theoretical predictions for adaptation in an exponentially growing phase, with applications in pathogen populations, tumours or other unregulated populations; (ii) as an analytically tractable problem to potentially guide theoretical analysis of regulated populations; and (iii) as a basis for developing empirical methods to infer general features of a fitness landscape.
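Schematically, the relationship described can be set beside the classic statement. This is a paraphrase under my own notation, since the paper's exact expression depends on its landscape-correlation terms: Fisher's theorem equates the change in mean fitness to the variance in fitness, and for a mutating, unregulated population a mutational-flux term is added.

```latex
% Classic fundamental theorem (selection only), for mean fitness \bar{m}:
\frac{d\bar{m}}{dt} = \operatorname{Var}(m)

% With mutations arriving at rate U and a distribution \rho(s) of fitness
% effects s (a schematic form of the modified theorem; the paper's version
% depends explicitly on the form of the fitness landscape):
\frac{d\bar{m}}{dt} = \operatorname{Var}(m) + U \int s\,\rho(s)\,ds
```

The second term is what makes the relationship landscape-dependent: the distribution of available single-mutation effects feeds directly into the trajectory of the population mean.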
Wolff, Sebastian; Bucher, Christian
2013-01-01
This article presents asynchronous collision integrators and a simple asynchronous method treating nodal restraints. Asynchronous discretizations allow individual time step sizes for each spatial region, improving the efficiency of explicit time stepping for finite element meshes with heterogeneous element sizes. The article first introduces asynchronous variational integration being expressed by drift and kick operators. Linear nodal restraint conditions are solved by a simple projection of the forces that is shown to be equivalent to RATTLE. Unilateral contact is solved by an asynchronous variant of decomposition contact response. Therein, velocities are modified to avoid penetrations. Although decomposition contact response solves a large system of linear equations (critical for the numerical efficiency of explicit time-stepping schemes) and needs special treatment regarding overconstraint and linear dependency of the contact constraints (for example, from double-sided node-to-surface contact or self-contact), the asynchronous strategy handles these situations efficiently and robustly. Only a single constraint involving a very small number of degrees of freedom is considered at once, leading to a very efficient solution. The treatment of friction is exemplified for the Coulomb model. The contact of nodes that are subject to restraints needs special care. Together with the aforementioned projection for restraints, a novel efficient solution scheme can be presented. The collision integrator does not influence the critical time step. Hence, the time step can be chosen independently from the underlying time-stepping scheme. The time step may be fixed or time-adaptive. New demands on global collision detection are discussed, exemplified by position codes and node-to-segment integration. Numerical examples illustrate convergence and efficiency of the new contact algorithm. Copyright © 2013 The Authors. 
International Journal for Numerical Methods in Engineering published by John Wiley & Sons, Ltd. PMID:23970806
Compiling global name-space programs for distributed execution
NASA Technical Reports Server (NTRS)
Koelbel, Charles; Mehrotra, Piyush
1990-01-01
Distributed memory machines do not provide hardware support for a global address space. Thus programmers are forced to partition the data across the memories of the architecture and use explicit message passing to communicate data between processors. The compiler support required to allow programmers to express their algorithms using a global name-space is examined. A general method is presented for the analysis of a high-level source program and its translation to a set of independently executing tasks communicating via messages. If the compiler has enough information, this translation can be carried out at compile-time. Otherwise, run-time code is generated to implement the required data movement. The analysis required in both situations is described, and the performance of the generated code on the Intel iPSC/2 is presented.
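The translation described here can be caricatured for a one-dimensional stencil: a loop written over global indices is compiled into per-processor local loops plus explicit transfers of boundary ("halo") values. The sketch below simulates this on one machine; the block distribution and message buffers are illustrative assumptions, not the paper's actual compiler output.

```python
# Simulated translation of a global-index loop
#   for i in 1..N-2: a[i] = b[i-1] + b[i+1]
# into per-processor local loops with explicit halo messages,
# under a block distribution of N elements over P processors.
N, P = 12, 3
blk = N // P
b = list(range(N))                                  # global array (reference only)
local_b = [b[p*blk:(p+1)*blk] for p in range(P)]    # partitioned data

# "Message passing": each processor sends its boundary element to its
# neighbor; modeled here as explicit receive buffers.
left_halo  = [local_b[p-1][-1] if p > 0   else None for p in range(P)]
right_halo = [local_b[p+1][0]  if p < P-1 else None for p in range(P)]

a = [None] * N
for p in range(P):                  # each processor's local loop
    for j in range(blk):
        i = p*blk + j               # recover the global index
        if i in (0, N-1):
            continue                # global boundary: untouched
        lo = local_b[p][j-1] if j > 0       else left_halo[p]
        hi = local_b[p][j+1] if j < blk-1   else right_halo[p]
        a[i] = lo + hi
```

When the distribution is known at compile time, as here, the halo sends and receives can be generated statically; otherwise run-time code must inspect the index sets to schedule the same transfers.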
Functional brain networks for learning predictive statistics.
Giorgio, Joseph; Karlaftis, Vasilis M; Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew; Kourtzi, Zoe
2017-08-18
Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. This skill relies on extracting regular patterns in space and time by mere exposure to the environment (i.e., without explicit feedback). Yet, we know little about the functional brain networks that mediate this type of statistical learning. Here, we test whether changes in the processing and connectivity of functional brain networks due to training relate to our ability to learn temporal regularities. By combining behavioral training and functional brain connectivity analysis, we demonstrate that individuals adapt to the environment's statistics as they change over time from simple repetition to probabilistic combinations. Further, we show that individual learning of temporal structures relates to decision strategy. Our fMRI results demonstrate that learning-dependent changes in fMRI activation within and functional connectivity between brain networks relate to individual variability in strategy. In particular, extracting the exact sequence statistics (i.e., matching) relates to changes in brain networks known to be involved in memory and stimulus-response associations, while selecting the most probable outcomes in a given context (i.e., maximizing) relates to changes in frontal and striatal networks. Thus, our findings provide evidence that dissociable brain networks mediate individual ability in learning behaviorally-relevant statistics. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Jaggers, R. F.
1974-01-01
An optimum powered explicit guidance algorithm capable of handling all space shuttle exoatmospheric maneuvers is presented. The theoretical and practical basis for the currently baselined space shuttle powered flight guidance equations and logic is documented. Detailed flow diagrams for implementing the steering computations for all shuttle phases, including powered return to launch site (RTLS) abort, are also presented. Derivation of the powered RTLS algorithm is provided, as well as detailed flow diagrams for implementing the option. The flow diagrams and equations are compatible with the current powered flight documentation.
Deep Space Network Antenna Monitoring Using Adaptive Time Series Methods and Hidden Markov Models
NASA Technical Reports Server (NTRS)
Smyth, Padhraic; Mellstrom, Jeff
1993-01-01
The Deep Space Network (DSN), designed and operated by the Jet Propulsion Laboratory for the National Aeronautics and Space Administration (NASA), provides end-to-end telecommunication capabilities between Earth and various interplanetary spacecraft throughout the solar system.
Explicit and implicit motor learning in children with unilateral cerebral palsy.
van der Kamp, John; Steenbergen, Bert; Masters, Rich S W
2017-07-30
The current study aimed to investigate the capacity for explicit and implicit learning in children with unilateral cerebral palsy. Children with left and right unilateral cerebral palsy and typically developing children shuffled disks toward a target. A prism-adaptation design was implemented, consisting of pre-exposure, prism exposure, and post-exposure phases. Half of the participants were instructed about the function of the prism glasses, while the other half were not. For each trial, the distance between the target and the shuffled disk was determined. Explicit learning was indicated by the rate of adaptation during the prism exposure phase, whereas implicit learning was indicated by the magnitude of the negative after-effect at the start of the post-exposure phase. No significant differences were revealed between typically developing participants and participants with unilateral cerebral palsy. Comparison of participants with left and right unilateral cerebral palsy demonstrated that participants with right unilateral cerebral palsy had a significantly lower rate of adaptation than participants with left unilateral cerebral palsy, but only when no instructions were provided. The magnitude of the negative after-effects did not differ significantly between participants with right and left unilateral cerebral palsy. The capacity for explicit motor learning is reduced among individuals with right unilateral cerebral palsy when accumulation of declarative knowledge is unguided (i.e., discovery learning). In contrast, the capacity for implicit learning appears to remain intact among individuals with left as well as right unilateral cerebral palsy. Implications for rehabilitation: implicit motor learning interventions are recommended for individuals with cerebral palsy, particularly for individuals with right unilateral cerebral palsy. Explicit motor learning interventions for individuals with cerebral palsy, if used, are best limited to a single verbal instruction.
Shryock, Daniel F.; Havrilla, Caroline A.; DeFalco, Lesley; Esque, Todd C.; Custer, Nathan; Wood, Troy E.
2015-01-01
Local adaptation influences plant species’ responses to climate change and their performance in ecological restoration. Fine-scale physiological or phenological adaptations that direct demographic processes may drive intraspecific variability when baseline environmental conditions change. Landscape genomics characterizes adaptive differentiation by identifying environmental drivers of adaptive genetic variability and mapping the associated landscape patterns. We applied such an approach to Sphaeralcea ambigua, an important restoration plant in the arid southwestern United States, by analyzing variation at 153 amplified fragment length polymorphism loci in the context of environmental gradients separating 47 Mojave Desert populations. We identified 37 potentially adaptive loci through a combination of genome scan approaches. We then used a generalized dissimilarity model (GDM) to relate variability in potentially adaptive loci to spatial gradients in temperature, precipitation, and topography. We identified non-linear thresholds in loci frequencies driven by summer maximum temperature and water stress, along with continuous variation corresponding to temperature seasonality. Two GDM-based approaches for mapping predicted patterns of local adaptation are compared. Additionally, we assess uncertainty in spatial interpolations through a novel spatial bootstrapping approach. Our study presents robust, accessible methods for deriving spatially explicit models of adaptive genetic variability in non-model species that will inform climate change modelling and ecological restoration.
DynEarthSol3D: numerical studies of basal crevasses and calving blocks
NASA Astrophysics Data System (ADS)
Logan, E.; Lavier, L. L.; Choi, E.; Tan, E.; Catania, G. A.
2014-12-01
DynEarthSol3D (DES) is a thermomechanical model for the simulation of dynamic ice flow. We present the application of DES to two case studies - basal crevasses and calving blocks - to illustrate the potential of the model to aid in understanding calving processes. Among the advantages of using DES are its unstructured meshes, which adaptively resolve zones of high interest; its use of multiple rheologies to simulate different types of dynamic behavior; and its explicit and parallel numerical core, which both makes the implementation of different boundary conditions easy and makes the model highly scalable. We examine the initiation and development of both basal crevasses and calving blocks through time using a visco-elasto-plastic rheology. Employing a brittle-to-ductile transition zone (BDTZ) based on local strain rate shows that the style and development of brittle features like crevasses depend markedly on the rheological parameters. Brittle and ductile behavior are captured by Mohr-Coulomb elastoplasticity and Maxwell viscoelasticity, respectively. We explore the parameter spaces which define these rheologies (including temperature) as well as the BDTZ threshold (reported in the literature as 10⁻⁷ Pa s), using time-to-failure as a metric for accuracy within the model. As the time it takes for a block of ice to fail can determine an iceberg's size, this work has implications for calving laws.
Immune response during space flight.
Criswell-Hudak, B S
1991-01-01
The health status of an astronaut prior to and following space flight has been a prime concern of NASA throughout the Apollo series of lunar landings, Skylab, the Apollo-Soyuz Test Project (ASTP), and the new Spacelab-Shuttle missions. Both humoral and cellular immunity have been studied using classical clinical procedures. Serum proteins show fluctuations that can be explained by adaptation to flight. Conversely, cellular immune responses of lymphocytes appear to be depressed both in vivo and in vitro. If this depression in vivo and in vitro is a result of the same cause, then man's adaptation to living in outer space will present interesting challenges in the future. Since the cause may be reduced gravity, perhaps the designs of experiments for space flight will offer insights at the cellular level that will facilitate development of mechanisms for adaptation. Further, if the aging process is viewed as an adaptational concept or model and not as a disease process, then space flight could easily supply some information on our biological clocks.
A family of compact high order coupled time-space unconditionally stable vertical advection schemes
NASA Astrophysics Data System (ADS)
Lemarié, Florian; Debreu, Laurent
2016-04-01
Recent papers by Shchepetkin (2015) and Lemarié et al. (2015) have emphasized that the time step of an oceanic model with an Eulerian vertical coordinate and an explicit time-stepping scheme is very often restricted by vertical advection in a few hot spots (i.e., most of the grid points are integrated with small Courant numbers, compared to the Courant-Friedrichs-Lewy (CFL) condition, except at just a few spots where numerical instability of the explicit scheme occurs first). The consequence is that the numerics for vertical advection must have good stability properties while remaining accurate and robust to changes in Courant number. Another constraint for oceanic models is the strict control of numerical mixing imposed by the highly adiabatic nature of the oceanic interior (i.e., mixing must be very small in the vertical direction below the boundary layer). We examine in this talk the possibility of mitigating the vertical CFL restriction while avoiding the numerical inaccuracies associated with standard implicit advection schemes (i.e., large sensitivity of the solution to the Courant number, large phase delay, and possibly an excess of numerical damping with unphysical orientation). Most regional oceanic models have been successfully using fourth-order compact schemes for vertical advection. In this talk we present a new general framework to derive generic expressions for (one-step) coupled time and space high-order compact schemes (see Daru & Tenaud (2004) for a thorough description of coupled time and space schemes). Among other properties, we show that those schemes are unconditionally stable and have very good accuracy properties even for large Courant numbers, while having a very reasonable computational cost.
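The compact coupled time-space schemes of the talk are not reproduced here, but the stability property they target can be illustrated with the simplest unconditionally stable alternative: backward-Euler upwind vertical advection, which remains bounded at Courant numbers far above the explicit CFL limit (at the price of the strong first-order damping that the authors' schemes are designed to avoid). The discretization below is a generic textbook sketch, not the schemes from the talk.

```python
def implicit_upwind_step(c, w, dt, dz):
    """One backward-Euler upwind step for dc/dt + w dc/dz = 0 (w > 0).

    The implicit system (1 + nu) c_k - nu c_{k-1} = c_k_old is lower
    bidiagonal and solved by a single forward sweep; the scheme is
    stable for any Courant number nu = w*dt/dz, at the cost of
    first-order numerical diffusion.
    """
    nu = w * dt / dz
    out = c[:]
    out[0] = c[0] / (1.0 + nu)          # inflow boundary: c_{-1} = 0
    for k in range(1, len(c)):
        out[k] = (c[k] + nu * out[k - 1]) / (1.0 + nu)
    return out

# Courant number nu = 5: far beyond the explicit CFL limit, yet the
# solution stays bounded and monotone.
c = [0.0] * 20
c[5] = 1.0
for _ in range(50):
    c = implicit_upwind_step(c, w=1.0, dt=5.0, dz=1.0)
```

The update is a convex combination of old and upstream values, so a discrete maximum principle holds for every Courant number; an explicit upwind step with nu = 5 would blow up immediately.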
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bezerra de Mello, E.R.
2006-01-15
In this paper we present, in an integral form, the Euclidean Green function associated with a massless scalar field in the five-dimensional Kaluza-Klein magnetic monopole superposed on a global monopole, admitting a nontrivial coupling between the field and the geometry. This Green function is expressed as the sum of two contributions: the first, related to the uncharged component of the field, is similar to the Green function associated with a scalar field in a four-dimensional global monopole space-time; the second contains the information of all the other components. Using this Green function it is possible to study the vacuum polarization effects on this space-time. Explicitly, we calculate the renormalized vacuum expectation value ⟨φ*(x)φ(x)⟩_Ren, which in turn is also expressed as the sum of two contributions.
Fully implicit adaptive mesh refinement MHD algorithm
NASA Astrophysics Data System (ADS)
Philip, Bobby
2005-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof of principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations will be presented on a variety of problems.
Fully implicit adaptive mesh refinement algorithm for reduced MHD
NASA Astrophysics Data System (ADS)
Philip, Bobby; Pernice, Michael; Chacon, Luis
2006-10-01
In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof of principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully implicit, dynamically adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]
Adaptive density trajectory cluster based on time and space distance
NASA Astrophysics Data System (ADS)
Liu, Fagui; Zhang, Zhijie
2017-10-01
Several open problems remain in trajectory clustering for discovering regularities in mobile behavior, such as computing the distance between sub-trajectories, setting the parameter values of the clustering algorithm, and handling the uncertainty/boundary problem of the data set. Based on time and space, this paper defines a method for calculating the distance between sub-trajectories. The significance of this distance calculation is that it clearly reveals the differences between moving trajectories and improves the accuracy of the clustering algorithm. In addition, a novel adaptive density trajectory clustering algorithm is proposed, in which the cluster radius is computed from the density of the data distribution, the cluster centers and their number are selected automatically by a given strategy, and the uncertainty/boundary problem of the data set is solved by a weighted rough c-means design. Experimental results demonstrate that the proposed algorithm performs fuzzy trajectory clustering effectively on the basis of the time and space distance, and adaptively obtains optimal cluster centers and rich clustering information for mining the features of mobile behavior in mobile and social networks.
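As an illustration of combining time and space in a sub-trajectory distance, the following sketch uses a weighted sum of the mean spatial separation and the mean temporal offset between equal-length sample sequences. The weights and the specific functional form are hypothetical stand-ins, not the definition derived in the paper.

```python
import math

def spatiotemporal_distance(t1, t2, w_space=0.7, w_time=0.3):
    """Distance between two sub-trajectories given as equal-length
    lists of (x, y, t) samples.

    A hypothetical weighted sum of mean spatial separation and mean
    temporal offset -- one plausible way of combining space and time,
    not the paper's definition.
    """
    assert len(t1) == len(t2) and t1
    d_space = sum(math.hypot(a[0] - b[0], a[1] - b[1])
                  for a, b in zip(t1, t2)) / len(t1)
    d_time = sum(abs(a[2] - b[2]) for a, b in zip(t1, t2)) / len(t1)
    return w_space * d_space + w_time * d_time

ta = [(0, 0, 0), (1, 0, 1), (2, 0, 2)]
tb = [(0, 1, 0), (1, 1, 1), (2, 1, 2)]   # parallel track, same timing
tc = [(0, 1, 5), (1, 1, 6), (2, 1, 7)]   # same track as tb, 5 time units later
```

Two objects following the same path at different times (tb vs. tc) are then separated by the temporal term alone, which is exactly the distinction a purely spatial distance would miss.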
Seakeeping with the semi-Lagrangian particle finite element method
NASA Astrophysics Data System (ADS)
Nadukandi, Prashanth; Servan-Camas, Borja; Becker, Pablo Agustín; Garcia-Espinosa, Julio
2017-07-01
The application of the semi-Lagrangian particle finite element method (SL-PFEM) for the seakeeping simulation of the wave adaptive modular vehicle under spray generating conditions is presented. The time integration of the Lagrangian advection is done using the explicit integration of the velocity and acceleration along the streamlines (X-IVAS). Despite the suitability of the SL-PFEM for the considered seakeeping application, small time steps were needed in the X-IVAS scheme to control the solution accuracy. A preliminary proposal to overcome this limitation of the X-IVAS scheme for seakeeping simulations is presented.
Closed-Loop Optimal Control Implementations for Space Applications
2016-12-01
Through the analyses of a series of optimal control problems, several real-time optimal control algorithms are developed that continuously adapt to feedback on the …
Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis
NASA Technical Reports Server (NTRS)
Warren, Gary Patrick
1990-01-01
A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code which uses the same flux integration techniques of the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2 1/2 percent of the structured code for the sub-critical case and within 4 1/2 percent of the structured code for the transonic case using approximately one-third the number of grid points.
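The explicit two-stage time advancement mentioned above can be sketched generically as a predictor to an intermediate state followed by a corrector. The model problem and the stage coefficient below are illustrative; they are not the thesis's scheme or its flux-split spatial discretization.

```python
def two_stage_step(u, residual, dt, alpha=0.5):
    """Generic two-stage explicit time advancement for du/dt = R(u):
    a predictor to the alpha-point followed by a full corrector, of
    the kind commonly used to drive Euler solvers to steady state."""
    u1 = [ui + alpha * dt * ri for ui, ri in zip(u, residual(u))]
    return [ui + dt * ri for ui, ri in zip(u, residual(u1))]

# Model problem: du/dt = -u, i.e. each "cell" relaxing toward the
# steady state u = 0 (a stand-in for a converging flow solution).
residual = lambda u: [-ui for ui in u]
u = [1.0, 2.0, -3.0]
for _ in range(100):
    u = two_stage_step(u, residual, dt=0.1)
```

In a multigrid setting, steps like this serve as the smoother on each grid level, with coarse-grid corrections accelerating convergence to the steady state.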
Construction of non-Abelian gauge theories on noncommutative spaces
NASA Astrophysics Data System (ADS)
Jurčo, B.; Möller, L.; Schraml, S.; Schupp, P.; Wess, J.
We present a formalism to explicitly construct non-Abelian gauge theories on noncommutative spaces (induced via a star product with a constant Poisson tensor) from a consistency relation. This results in an expansion of the gauge parameter, the noncommutative gauge potential and fields in the fundamental representation, in powers of a parameter of the noncommutativity. This allows the explicit construction of actions for these gauge theories.
Using Model-Based Reasoning for Autonomous Instrument Operation - Lessons Learned From IMAGE/LENA
NASA Technical Reports Server (NTRS)
Johnson, Michael A.; Rilee, Michael L.; Truszkowski, Walt; Bailin, Sidney C.
2001-01-01
Model-based reasoning has been applied as an autonomous control strategy on the Low Energy Neutral Atom (LENA) instrument currently flying on board the Imager for Magnetosphere-to-Aurora Global Exploration (IMAGE) spacecraft. Explicit models of instrument subsystem responses have been constructed and are used to dynamically adapt the instrument to the spacecraft's environment. These functions are cast as part of a Virtual Principal Investigator (VPI) that autonomously monitors and controls the instrument. In the VPI's current implementation, LENA's command uplink volume has been decreased significantly from its previous volume; typically, no uplinks are required for operations. This work demonstrates that a model-based approach can be used to enhance science instrument effectiveness. The components of LENA are common in space science instrumentation, and lessons learned by modeling this system may be applied to other instruments. Future work involves the extension of these methods to cover more aspects of LENA operation and the generalization to other space science instrumentation.
Emotional valence and physical space: limits of interaction.
de la Vega, Irmgard; de Filippis, Mónica; Lachmair, Martin; Dudschig, Carolin; Kaup, Barbara
2012-04-01
According to the body-specificity hypothesis, people associate positive things with the side of space that corresponds to their dominant hand and negative things with the side corresponding to their nondominant hand. Our aim was to find out whether this association also holds true in a response-time study using linguistic stimuli, and whether such an association is activated automatically. Four experiments explored this association using positive and negative words. In Exp. 1, right-handers made a lexical judgment by pressing a left or right key. Attention was not explicitly drawn to the valence of the stimuli. No valence-by-side interaction emerged. In Exp. 2 and 3, right-handers and left-handers made a valence judgment by pressing a left or a right key. A valence-by-side interaction emerged: for positive words, responses were faster when participants responded with their dominant hand, whereas for negative words, responses were faster for the nondominant hand. Exp. 4 required a valence judgment without stating an explicit mapping of valence and side. No valence-by-side interaction emerged. The experiments provide evidence for an association between response side and valence which, however, does not seem to be activated automatically but rather requires a task with an explicit response mapping.
Adaptive Tunable Laser Spectrometer for Space Applications
NASA Technical Reports Server (NTRS)
Flesch, Gregory; Keymeulen, Didier
2010-01-01
An architecture and process for the rapid prototyping and subsequent development of an adaptive tunable laser absorption spectrometer (TLS) are described. Our digital hardware/firmware/software platform is both reconfigurable at design time as well as autonomously adaptive in real-time for both post-integration and post-launch situations. The design expands the range of viable target environments and enhances tunable laser spectrometer performance in extreme and even unpredictable environments. Through rapid prototyping with a commercial RTOS/FPGA platform, we have implemented a fully operational tunable laser spectrometer (using a highly sensitive second harmonic technique). With this prototype, we have demonstrated autonomous real-time adaptivity in the lab with simulated extreme environments.
Jet Noise Physics and Modeling Using First-principles Simulations
NASA Technical Reports Server (NTRS)
Freund, Jonathan B.
2003-01-01
An extensive analysis of our jet DNS database has provided for the first time the complex correlations that are the core of many statistical jet noise models, including MGBK. We have also for the first time explicitly computed the noise from different components of a commonly used noise source as proposed in many modeling approaches. Key findings are: (1) While two-point (space and time) velocity statistics are well fitted by decaying exponentials, even for our low-Reynolds-number jet, spatially integrated fourth-order space/retarded-time correlations, which constitute the noise "source" in MGBK, are instead well fitted by Gaussians. The width of these Gaussians depends (by a factor of 2) on which components are considered. This is counter to current modeling practice. (2) A standard decomposition of the Lighthill source is shown by direct evaluation to be somewhat artificial, since the noise from these nominally separate components is in fact highly correlated. We anticipate that the same will be the case for the Lilley source. (3) The far-field sound is computed in a way that explicitly includes all quadrupole cancellations, yet evaluating the Lighthill integral for only a small part of the jet yields a far-field noise far louder than that from the whole jet due to missing nonquadrupole cancellations. Details of this study are discussed in a draft of a paper included as appendix A.
NASA Astrophysics Data System (ADS)
Herda, Maxime; Rodrigues, L. Miguel
2018-03-01
The present contribution investigates the dynamics generated by the two-dimensional Vlasov-Poisson-Fokker-Planck equation for charged particles in a steady inhomogeneous background of opposite charges. We provide global-in-time estimates that are uniform with respect to initial data taken in a bounded set of a weighted L^2 space, and where dependencies on the mean free path τ and the Debye length δ are made explicit. In our analysis the mean free path covers the full range of possible values: from the regime of evanescent collisions τ → ∞ to the strongly collisional regime τ → 0. As a counterpart, the largeness of the Debye length, which enforces a weakly nonlinear regime, is used to close our nonlinear estimates. Accordingly, we pay special attention to relaxing as much as possible the τ-dependent constraint on δ ensuring exponential decay with explicit τ-dependent rates towards the stationary solution. In the strongly collisional limit τ → 0, we also examine all possible asymptotic regimes selected by a choice of observation time scale. Here also, our emphasis is on strong convergence, and on uniformity with respect to time and to initial data in bounded sets of an L^2 space. Our proofs rely on a detailed study of the nonlinear elliptic equation defining stationary solutions and a careful tracking and optimization of parameter dependencies of hypocoercive/hypoelliptic estimates.
An information theoretic approach of designing sparse kernel adaptive filters.
Liu, Weifeng; Park, Il; Principe, José C
2009-12-01
This paper discusses an information theoretic approach of designing sparse kernel adaptive filters. To determine useful data to be learned and remove redundant ones, a subjective information measure called surprise is introduced. Surprise captures the amount of information a datum contains which is transferable to a learning system. Based on this concept, we propose a systematic sparsification scheme, which can drastically reduce the time and space complexity without harming the performance of kernel adaptive filters. Nonlinear regression, short term chaotic time-series prediction, and long term time-series forecasting examples are presented.
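A minimal sketch of the idea, with a simplified admission rule standing in for the paper's Gaussian-process-based surprise measure: a datum is added to the kernel expansion only when it is both poorly predicted and far from the existing dictionary, so the dictionary (and hence the time and space cost) grows far more slowly than the data stream. The class name, thresholds, and the scalar-input restriction are all illustrative assumptions.

```python
import math
import random

class SparseKLMS:
    """Kernel least-mean-squares with a simplified surprise-style
    sparsification: a datum joins the dictionary only when its
    prediction error and its distance to the dictionary are both
    large enough (a proxy for the paper's surprise measure,
    not the exact criterion)."""

    def __init__(self, step=0.5, width=1.0, err_thresh=0.05, dist_thresh=0.1):
        self.step, self.width = step, width
        self.err_thresh, self.dist_thresh = err_thresh, dist_thresh
        self.centers, self.weights = [], []

    def _kernel(self, a, b):
        # Gaussian kernel on scalar inputs.
        return math.exp(-((a - b) ** 2) / (2 * self.width ** 2))

    def predict(self, x):
        return sum(w * self._kernel(x, c)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)
        dist = min((abs(x - c) for c in self.centers), default=float("inf"))
        # "Surprising" datum: poorly predicted and far from the dictionary.
        if abs(err) > self.err_thresh and dist > self.dist_thresh:
            self.centers.append(x)
            self.weights.append(self.step * err)
        return err

# Learn f(x) = sin(x) from a stream of 2000 random samples.
random.seed(0)
net = SparseKLMS()
for _ in range(2000):
    x = random.uniform(-3, 3)
    net.update(x, math.sin(x))
```

Because admitted centers are pairwise separated by more than dist_thresh, the dictionary size is bounded by the input range rather than by the stream length, which is the complexity reduction the sparsification is after.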
The Integration of Delta Prime (f)in a Multidimensional Space
NASA Technical Reports Server (NTRS)
Farassat, F.
1999-01-01
Consideration is given to the thickness noise term of the Ffowcs Williams-Hawkings equation when the time derivative is taken explicitly. An interpretation is presented of the integral I = ∫ φ(x) δ′(f) dx, where it is initially assumed that |∇f| is not equal to 1 on the surface f = 0.
Mark D. Nelson; Sean Healey; W. Keith Moser; J.G. Masek; Warren Cohen
2011-01-01
We assessed the consistency across space and time of spatially explicit models of forest presence and biomass in southern Missouri, USA, for adjacent, partially overlapping satellite image Path/Rows, and for coincident satellite images from the same Path/Row acquired in different years. Such consistency in satellite image-based classification and estimation is critical...
Control Law Design in a Computational Aeroelasticity Environment
NASA Technical Reports Server (NTRS)
Newsom, Jerry R.; Robertshaw, Harry H.; Kapania, Rakesh K.
2003-01-01
A methodology for designing active control laws in a computational aeroelasticity environment is given. The methodology involves employing a systems identification technique to develop an explicit state-space model for control law design from the output of a computational aeroelasticity code. The particular computational aeroelasticity code employed in this paper solves the transonic small disturbance aerodynamic equation using a time-accurate, finite-difference scheme. Linear structural dynamics equations are integrated simultaneously with the computational fluid dynamics equations to determine the time responses of the structure. These structural responses are employed as the input to a modern systems identification technique that determines the Markov parameters of an "equivalent linear system". The Eigensystem Realization Algorithm is then employed to develop an explicit state-space model of the equivalent linear system. The Linear Quadratic Gaussian control law design technique is employed to design a control law. The computational aeroelasticity code is modified to accept control laws and perform closed-loop simulations. Flutter control of a rectangular wing model is chosen to demonstrate the methodology. Various cases are used to illustrate the usefulness of the methodology as the nonlinearity of the aeroelastic system is increased through increased angle-of-attack changes.
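The Eigensystem Realization Algorithm step can be sketched for a single-input/single-output system: the Markov parameters fill a Hankel matrix whose SVD yields a balanced state-space realization. The routine and the two-state test system below are generic illustrations, not the paper's aeroelastic model.

```python
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Eigensystem Realization Algorithm (SISO sketch): recover a
    discrete-time state-space model (A, B, C) from Markov parameters
    h_k = C A^k B via the SVD of a Hankel matrix."""
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    # Truncate to the requested model order (balanced realization).
    Ur, sr, Vr = U[:, :order], np.sqrt(s[:order]), Vt[:order, :].T
    A = np.diag(1 / sr) @ Ur.T @ H1 @ Vr @ np.diag(1 / sr)
    B = (np.diag(sr) @ Vr.T)[:, :1]
    C = (Ur @ np.diag(sr))[:1, :]
    return A, B, C

# Markov parameters of a known 2-state system, then identification.
A0 = np.array([[0.9, 0.2], [0.0, 0.5]])
B0 = np.array([[1.0], [1.0]])
C0 = np.array([[1.0, 0.0]])
h = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(25)]
A, B, C = era(h, order=2)
```

The realization is only determined up to a similarity transform, so the recovered (A, B, C) differ from (A0, B0, C0) while reproducing the same eigenvalues and Markov parameters; in practice the truncation order would be chosen from the decay of the Hankel singular values.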
Coates, Peter S.; Casazza, Michael L.; Brussee, Brianne E.; Ricca, Mark A.; Gustafson, K. Benjamin; Sanchez-Chopitea, Erika; Mauch, Kimberly; Niell, Lara; Gardner, Scott; Espinosa, Shawn; Delehanty, David J.
2016-05-20
Successful adaptive management hinges largely upon integrating new and improved sources of information as they become available. As a timely example of this tenet, we updated a management decision support tool that was previously developed for greater sage-grouse (Centrocercus urophasianus, hereinafter referred to as “sage-grouse”) populations in Nevada and California. Specifically, recently developed spatially explicit habitat maps derived from empirical data played a key role in the conservation of this species facing listing under the Endangered Species Act. This report provides an updated process for mapping relative habitat suitability and management categories for sage-grouse in Nevada and northeastern California (Coates and others, 2014, 2016). These updates include: (1) adding radio and GPS telemetry locations from sage-grouse monitored at multiple sites during 2014 to the original location dataset beginning in 1998; (2) integrating output from high resolution maps (1–2 m2) of sagebrush and pinyon-juniper cover as covariates in resource selection models; (3) modifying the spatial extent of the analyses to match newly available vegetation layers; (4) explicit modeling of relative habitat suitability during three seasons (spring, summer, winter) that corresponded to critical life history periods for sage-grouse (breeding, brood-rearing, over-wintering); (5) accounting for differences in habitat availability between more mesic sagebrush steppe communities in the northern part of the study area and drier Great Basin sagebrush in more southerly regions by categorizing continuous region-wide surfaces of habitat suitability index (HSI) with independent locations falling within two hydrological zones; (6) integrating the three seasonal maps into a composite map of annual relative habitat suitability; (7) deriving updated land management categories based on previously determined cut-points for intersections of habitat suitability and an updated index of sage-grouse 
abundance and space-use (AUI); and (8) masking urban footprints and major roadways out of the final map products. Seasonal habitat maps were generated based on model-averaged resource selection functions (RSF) derived for 10 project areas (813 sage-grouse, 14,085 locations) during the spring season, 10 during the summer season (591 sage-grouse, 11,743 locations), and 7 during the winter season (288 sage-grouse, 4,862 locations). RSF surfaces were transformed to HSIs and averaged in a GIS framework for every pixel for each season. Validation analyses of categorized HSI surfaces using a suite of independent datasets resulted in an agreement of 93–97 percent for habitat versus non-habitat on an annual basis. Spring and summer maps validated similarly well at 94–97 percent, while winter maps validated slightly less accurately at 87–93 percent. We then provide an updated example of how space use models can be integrated with habitat models to help inform conservation planning. We used updated lek count data to calculate a composite abundance and space use index (AUI) that combined probabilistic breeding density with a non-linear probability of occurrence relative to distance to the nearest lek. The AUI was then classified into two categories of use (high and low-to-no) and intersected with the HSI categories to create potential management prioritization scenarios based on information about sage-grouse occupancy coupled with habitat suitability. Compared to Coates and others (2014, 2016), the amount of area classified as habitat across the region increased by 6.5 percent (approximately 1,700,000 acres).
For management categories, core increased by 7.2 percent (approximately 865,000 acres), priority increased by 9.6 percent (approximately 855,000 acres), and general increased by 9.2 percent (approximately 768,000 acres), while non-habitat (that is, classified non-habitat occurring outside of areas of concentrated use) decreased by 11.9 percent (approximately 2,500,000 acres). Importantly, seasonal and annual maps represent habitat for all age and sex classes of sage-grouse (that is, sample sizes of marked grouse were insufficient to construct models for reproductive females alone). This revised sage-grouse habitat mapping product helps improve adaptive application of conservation planning tools based on intersections of spatially explicit habitat suitability, abundance, and space use indices.
Digital adaptive controllers for VTOL vehicles. Volume 2: Software documentation
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Stein, G.; Pratt, S. G.
1979-01-01
The VTOL approach and landing test (VALT) adaptive software is documented. Two self-adaptive algorithms, one based on an implicit model reference design and the other on an explicit parameter estimation technique, were evaluated. The organization of the software, user options, and a nominal set of input data are presented along with a flow chart and program listing of each algorithm.
An adaptive signal-processing approach to online adaptive tutoring.
Bergeron, Bryan; Cline, Andrew
2011-01-01
Conventional intelligent or adaptive tutoring online systems rely on domain-specific models of learner behavior based on rules, deep domain knowledge, and other resource-intensive methods. We have developed and studied a domain-independent methodology of adaptive tutoring based on domain-independent signal-processing approaches that obviate the need for the construction of explicit expert and student models. A key advantage of our method over conventional approaches is a lower barrier to entry for educators who want to develop adaptive online learning materials.
Grüneis, Heidelinde; Penker, Marianne; Höferl, Karl-Michael
2016-01-01
Our scientific view on climate change adaptation (CCA) is unsatisfying in many ways: It is often dominated by a modernistic perspective of planned pro-active adaptation, with a selective focus on measures directly responding to climate change impacts, and is thus far from the real-life conditions of those who are actually affected by climate change. Farmers have to adapt to multiple changes simultaneously. Empirical climate change adaptation research therefore needs a more integrative perspective on real-life climate change adaptations. This also has to consider "hidden" adaptations, which are not explicitly and directly motivated by CCA but actually contribute to the sector's adaptability to climate change. The aim of the present study is to develop and test an analytic framework that contributes to a broader understanding of CCA and bridges the gap between scientific expertise and practical action. The framework distinguishes three types of CCA according to their climate-related motivations: explicit adaptations, multi-purpose adaptations, and hidden adaptations. Although agriculture is among the sectors most affected by climate change, results from the case study of Tyrolean mountain agriculture show that climate change is ranked behind other, more pressing "real-life challenges" such as changing agricultural policies or market conditions. We identified numerous hidden adaptations which make a valuable contribution when dealing with climate change impacts. We conclude that these hidden adaptations must not only be considered to obtain an integrative and more realistic view of CCA; they also provide a great opportunity for linking adaptation strategies to farmers' realities.
A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery
NASA Technical Reports Server (NTRS)
Mathur, Sanjay R.; Madavan, Nateri K.; Rajagopalan, R. G.
1993-01-01
A solution-adaptive method for the time-accurate analysis of two-dimensional flows in turbomachinery is described. The method employs a hybrid structured-unstructured zonal grid topology in conjunction with appropriate modeling equations and solution techniques in each zone. The viscous flow region in the immediate vicinity of the airfoils is resolved on structured O-type grids while the rest of the domain is discretized using an unstructured mesh of triangular cells. Implicit, third-order accurate, upwind solutions of the Navier-Stokes equations are obtained in the inner regions. In the outer regions, the Euler equations are solved using an explicit upwind scheme that incorporates a second-order reconstruction procedure. An efficient and robust grid adaptation strategy, including both grid refinement and coarsening capabilities, is developed for the unstructured grid regions. Grid adaptation is also employed to facilitate information transfer at the interfaces between unstructured grids in relative motion. Results for grid adaptation to various features pertinent to turbomachinery flows are presented. Good comparisons between the present results and experimental measurements and earlier structured-grid results are obtained.
Limited evolutionary rescue of locally adapted populations facing climate change.
Schiffers, Katja; Bourne, Elizabeth C; Lavergne, Sébastien; Thuiller, Wilfried; Travis, Justin M J
2013-01-19
Dispersal is a key determinant of a population's evolutionary potential. It facilitates the propagation of beneficial alleles throughout the distributional range of spatially extended populations and increases the speed of adaptation. However, when habitat is heterogeneous and individuals are locally adapted, dispersal may, at the same time, reduce fitness through increasing maladaptation. Here, we use a spatially explicit, allelic simulation model to quantify how these equivocal effects of dispersal affect a population's evolutionary response to changing climate. Individuals carry a diploid set of chromosomes, with alleles coding for adaptation to non-climatic environmental conditions and climatic conditions, respectively. Our model results demonstrate that the interplay between gene flow and habitat heterogeneity may decrease effective dispersal and population size to such an extent that the likelihood of evolutionary rescue is substantially reduced. Importantly, even when evolutionary rescue saves a population from extinction, its spatial range following climate change may be strongly narrowed; that is, the rescue is only partial. These findings emphasize that neglecting the impact of non-climatic, local adaptation might lead to a considerable overestimation of a population's evolvability under rapid environmental change.
The influence of vertical motor responses on explicit and incidental processing of power words.
Jiang, Tianjiao; Sun, Lining; Zhu, Lei
2015-07-01
There is increasing evidence demonstrating that power judgment is affected by vertical information. Such interaction between vertical space and power (i.e., response facilitation under space-power congruent conditions) is generally elicited in paradigms that require participants to explicitly evaluate the power of the presented words. The current research explored the possibility that explicit evaluative processing is not a prerequisite for the emergence of this effect. Here we compared the influence of vertical information on a standard explicit power evaluation task with influence on a task that linked power with stimuli in a more incidental manner, requiring participants to report whether the words represented people or animals or the font of the words. The results revealed that although the effect is more modest, the interaction between responses and power is also evident in an incidental task. Furthermore, we also found that explicit semantic processing is a prerequisite to ensure such an effect. Copyright © 2015 Elsevier Inc. All rights reserved.
Moderating Effects of Mathematics Anxiety on the Effectiveness of Explicit Timing
ERIC Educational Resources Information Center
Grays, Sharnita D.; Rhymer, Katrina N.; Swartzmiller, Melissa D.
2017-01-01
Explicit timing is an empirically validated intervention to increase problem completion rates by exposing individuals to a stopwatch and explicitly telling them of the time limit for the assignment. Though explicit timing has proven to be effective for groups of students, some students may not respond well to explicit timing based on factors such…
Isomonodromy for the Degenerate Fifth Painlevé Equation
NASA Astrophysics Data System (ADS)
Acosta-Humánez, Primitivo B.; van der Put, Marius; Top, Jaap
2017-05-01
This is a sequel to papers by the last two authors making the Riemann-Hilbert correspondence and isomonodromy explicit. For the degenerate fifth Painlevé equation, the moduli spaces for connections and for monodromy are explicitly computed. It is proven that the extended Riemann-Hilbert morphism is an isomorphism. As a consequence these equations have the Painlevé property and the Okamoto-Painlevé space is identified with a moduli space of connections. Using MAPLE computations, one obtains formulas for the degenerate fifth Painlevé equation, for the Bäcklund transformations.
Weck, Florian; Höfling, Volkmar
2015-01-01
Two adaptations of the Implicit Association Task were used to assess implicit anxiety (IAT-Anxiety) and implicit health attitudes (IAT-Hypochondriasis) in patients with hypochondriasis (n = 58) and anxiety patients (n = 71). Explicit anxieties and health attitudes were assessed using questionnaires. The analysis of several multitrait-multimethod models indicated that the low correlation between explicit and implicit measures of health attitudes is due to the substantial methodological differences between the IAT and the self-report questionnaire. Patients with hypochondriasis displayed significantly more dysfunctional explicit and implicit health attitudes than anxiety patients, but no differences were found regarding explicit and implicit anxieties. The study demonstrates the specificity of explicit and implicit dysfunctional health attitudes among patients with hypochondriasis.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
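The central device, updating the iterative value function only on a subset of the state space in each iteration, can be sketched on a toy problem. This is a minimal illustration assuming a 1-D shortest-path grid in place of the paper's general nonlinear system; the problem, subsets, and sweep count are all hypothetical:

```python
import numpy as np

# Toy "local" value iteration: on a 1-D grid, each move costs 1 unless it
# lands on the goal. Each sweep updates V only on one half of the state
# space, alternating halves between sweeps.
n_states, goal = 10, 9
actions = (-1, +1)

def step(s, a):
    s2 = min(max(s + a, 0), n_states - 1)      # clamp to the grid
    return s2, (0.0 if s2 == goal else 1.0)    # (next state, stage cost)

V = np.zeros(n_states)
for sweep in range(50):
    half = n_states // 2
    subset = range(half) if sweep % 2 == 0 else range(half, n_states)
    for s in subset:                           # local update only
        V[s] = min(cost + V[s2] for s2, cost in (step(s, a) for a in actions))
# V converges to the optimal cost-to-go even though no single sweep
# touches the whole state space.
```

Despite each sweep visiting only half of the states, the iterates still converge to the optimal cost-to-go, a small-scale analogue of the admissibility and termination properties analyzed in the paper.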
Macular Bioaccelerometers on Earth and in Space
NASA Technical Reports Server (NTRS)
Ross, M. D.; Cutler, L.; Meyer, G.; Vazin, P.; Lam, T.
1991-01-01
Space flight offers the opportunity to study linear bioaccelerometers (vestibular maculas) in the virtual absence of a primary stimulus, gravitational acceleration. Macular research in space is particularly important to NASA because the bioaccelerometers are proving to be weighted neural networks in which information is distributed for parallel processing. Neural networks are plastic and highly adaptive to new environments. Combined morphological-physiological studies of maculas fixed in space and following flight should reveal macular adaptive responses to microgravity, and their time-course. Ground-based research, already begun, using computer-assisted, 3-dimensional reconstruction of macular terminal fields will lead to development of computer models of functioning maculas. This research should continue in conjunction with physiological studies, including work with multichannel electrodes. The results of such a combined effort could usher in a new era in understanding vestibular function on Earth and in space. They can also provide a rational basis for counter-measures to space motion sickness, which may prove troublesome as space voyagers encounter new gravitational fields on planets, or must re-adapt to 1 g upon return to Earth.
NASA Technical Reports Server (NTRS)
Thomas, Claudine
1995-01-01
The generation and dissemination of International Atomic Time, TAI, and of Coordinated Universal Time, UTC, are explicitly mentioned in the list of the principal tasks of the BIPM, recalled in the Comptes Rendus of the 18th Conference Generale des Poids et Mesures, in 1987. These tasks are fulfilled by the BIPM Time Section, thanks to international cooperation with national timing centers, which maintain, under metrological conditions, the clocks used to generate TAI. Besides the current work of data collection and processing, research activities are carried out in order to adapt the computation of TAI to the most recent improvements occurring in the time and frequency domains. Studies concerning the application of general relativity and pulsar timing to time metrology are also actively pursued. This paper summarizes the work done in all these fields and outlines future projects.
Genetic algorithms for adaptive real-time control in space systems
NASA Technical Reports Server (NTRS)
Vanderzijp, J.; Choudry, A.
1988-01-01
Genetic Algorithms used for learning are discussed as one way to control the combinatorial explosion associated with the generation of new rules. The Genetic Algorithm approach tends to work best when it can be applied to a domain-independent knowledge representation. Applications to real-time control in space systems are discussed.
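As an illustration of the domain independence noted above, here is a minimal genetic algorithm over plain bitstrings; the selection scheme, operators, and parameters are hypothetical choices for illustration, not those of the system described in the abstract:

```python
import random

def evolve(fitness, n_bits=16, pop_size=30, generations=60, p_mut=0.02, seed=0):
    """Evolve bitstrings toward high fitness: select, cross over, mutate."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = [bit ^ (rng.random() < p_mut)      # bit-flip mutation
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# "OneMax" fitness (count of ones): a standard domain-independent benchmark.
best = evolve(sum)
```

Because the representation is a raw bitstring and the fitness function is passed in, nothing in the loop depends on the problem domain, which is the property the abstract highlights.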
NASA Technical Reports Server (NTRS)
Dubos, Gregory F.; Cornford, Steven
2012-01-01
While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can, however, become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedule as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".
Modeling Didactic Knowledge by Storyboarding
ERIC Educational Resources Information Center
Knauf, Rainer; Sakurai, Yoshitaka; Tsuruta, Setsuo; Jantke, Klaus P.
2010-01-01
University education often suffers from a lack of an explicit and adaptable didactic design. Students complain about the insufficient adaptability to the learners' needs. Learning content and services need to reach their audience according to their different prerequisites, needs, and different learning styles and conditions. A way to overcome such…
Analysis of Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol
NASA Technical Reports Server (NTRS)
Woo, Simon S.
2011-01-01
To synchronize clocks between spacecraft in proximity, the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol has been proposed. PITS is based on the NTP Interleaved On-Wire Protocol and is capable of being adapted and integrated into CCSDS Proximity-1 Space Link Protocol with minimal modifications. In this work, we will discuss the correctness and liveness of PITS. Further, we analyze and evaluate the performance of time synchronization latency with various channel error rates in different PITS operational modes.
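For background, the NTP-style four-timestamp exchange that PITS builds on computes clock offset and path delay as follows. This is a generic sketch of the standard on-wire arithmetic (the interleaved variant defers some timestamps to the following exchange but uses the same formulas):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Classic four-timestamp exchange: t1 = client send, t2 = server
    receive, t3 = server send, t4 = client receive (t1/t4 on the client
    clock, t2/t3 on the server clock)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated server-client offset
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Server clock 5 s ahead of the client, 1 s one-way latency each way:
offset, delay = ntp_offset_delay(10.0, 16.0, 17.0, 13.0)
# offset → 5.0 s, delay → 2.0 s
```

The offset formula averages the apparent offsets of the two legs, so a symmetric path delay cancels out, which is why the technique tolerates the long, variable proximity-link latencies the abstract analyzes.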
Adaptive real time selection for quantum key distribution in lossy and turbulent free-space channels
NASA Astrophysics Data System (ADS)
Vallone, Giuseppe; Marangon, Davide G.; Canale, Matteo; Savorgnan, Ilaria; Bacco, Davide; Barbieri, Mauro; Calimani, Simon; Barbieri, Cesare; Laurenti, Nicola; Villoresi, Paolo
2015-04-01
The unconditional security in the creation of cryptographic keys obtained by quantum key distribution (QKD) protocols will induce a quantum leap in free-space communication privacy in the same way that we are beginning to realize secure optical fiber connections. However, free-space channels, in particular those with long links and the presence of atmospheric turbulence, are affected by losses, fluctuating transmissivity, and background light that impair the conditions for secure QKD. Here we introduce a method to counteract the effects of atmospheric turbulence in QKD experiments. Our adaptive real time selection (ARTS) technique at the receiver is based on the selection of the intervals with higher channel transmissivity. We demonstrate, using data from the Canary Island 143-km free-space link, that conditions with an unacceptable average quantum bit error rate, which would prevent the generation of a secure key, can be used once parsed according to the instantaneous scintillation using the ARTS technique.
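The selection principle can be sketched as follows. This is a hedged illustration of the idea only: the slot structure, threshold, and counts are invented, and the real technique parses the instantaneous scintillation rather than a fixed cutoff:

```python
def arts_select(slots, threshold):
    """slots: (transmissivity, signal_counts, error_counts) per time slot.
    Keep only slots above the transmissivity threshold, then compute the
    quantum bit error rate (QBER) over the kept slots."""
    kept = [(s, e) for t, s, e in slots if t >= threshold]
    signal = sum(s for s, _ in kept)
    errors = sum(e for _, e in kept)
    return (errors / signal if signal else None), signal

slots = [
    (0.9, 1000, 20),   # clear-air slot: errors are a small fraction
    (0.1,  100, 40),   # deep turbulent fade: background light dominates
    (0.8,  800, 24),
]
qber, kept_signal = arts_select(slots, threshold=0.5)
# Discarding the faded slot lowers the QBER from 84/1900 to 44/1800.
```

Because background counts arrive at a roughly constant rate while signal counts track the transmissivity, discarding low-transmissivity intervals preferentially removes errors, lowering the average QBER below the security threshold.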
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Gravitational Scattering Amplitudes and Closed String Field Theory in the Proper-Time Gauge
NASA Astrophysics Data System (ADS)
Lee, Taejin
2018-01-01
We construct a covariant closed string field theory by extending recent works on the covariant open string field theory in the proper-time gauge. Rewriting the string scattering amplitudes generated by the closed string field theory in terms of the Polyakov string path integrals, we identify the Fock space representations of the closed string vertices. We show that the Fock space representations of the closed string field theory may be completely factorized into those of the open string field theory. It implies that the well known Kawai-Lewellen-Tye (KLT) relations of the first quantized string theory may be promoted to the second quantized closed string theory. We explicitly calculate the scattering amplitudes of three gravitons by using the closed string field theory in the proper-time gauge.
Heart Fibrillation and Parallel Supercomputers
NASA Technical Reports Server (NTRS)
Kogan, B. Y.; Karplus, W. J.; Chudin, E. E.
1997-01-01
The Luo and Rudy 3 cardiac cell mathematical model is implemented on the parallel supercomputer CRAY T3D. The splitting algorithm, combined with a variable time step and an explicit method of integration, provides reasonable solution times and almost perfect scaling for rectilinear wave propagation. The computer simulation makes it possible to observe new phenomena: the break-up of spiral waves caused by intracellular calcium dynamics and the non-uniformity of the calcium distribution in space during the onset of the spiral wave.
A class of generalized Ginzburg-Landau equations with random switching
NASA Astrophysics Data System (ADS)
Wu, Zheng; Yin, George; Lei, Dongxia
2018-09-01
This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.
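The random-switching mechanism can be illustrated with a toy model. Here a scalar ODE du/dt = a_r·u − b_r·u³ stands in for the Ginzburg-Landau equation, and the regime index r(t) is a two-state continuous-time Markov chain with exponential holding times; the rates and coefficients are assumptions for illustration only:

```python
import random

def simulate(T=10.0, dt=1e-3, rates=(1.0, 1.0), seed=1):
    """Euler integration of du/dt = a_r*u - b_r*u**3 under a two-state
    continuous-time Markov chain r(t) with exponential holding times."""
    rng = random.Random(seed)
    params = {0: (1.0, 1.0), 1: (-0.5, 1.0)}    # (a_r, b_r) per regime
    r, u, t = 0, 0.5, 0.0
    next_switch = rng.expovariate(rates[r])     # first switching time
    while t < T:
        if t >= next_switch:                    # regime change
            r = 1 - r
            next_switch = t + rng.expovariate(rates[r])
        a, b = params[r]
        u += dt * (a * u - b * u ** 3)          # explicit Euler step
        t += dt
    return u
```

In regime 0 the state is attracted toward 1, in regime 1 toward 0, so the trajectory wanders between the two equilibria as the chain switches; the cubic damping keeps it bounded, a one-dimensional echo of the stochastic-ultimate-boundedness results the paper proves.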
NASA Astrophysics Data System (ADS)
Chen, Guangye; Chacón, Luis; CoCoMans Team
2014-10-01
For decades, the Vlasov-Darwin model has been recognized to be attractive for PIC simulations (to avoid radiative noise issues) in non-radiative electromagnetic regimes. However, the Darwin model results in elliptic field equations that render explicit time integration unconditionally unstable. Improving on linearly implicit schemes, fully implicit PIC algorithms for both electrostatic and electromagnetic regimes, with exact discrete energy and charge conservation properties, have recently been developed in 1D. This study builds on these recent algorithms to develop an implicit, orbit-averaged, time-space-centered finite difference scheme for the particle-field equations in multiple dimensions. The algorithm conserves energy, charge, and canonical momentum exactly, even with grid packing. A simple fluid preconditioner allows efficient use of large timesteps, O(√(m_i/m_e) c/v_eT) larger than the explicit CFL limit. We demonstrate the accuracy and efficiency properties of the algorithm with various numerical experiments in 2D3V.
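The payoff of implicit time integration can be seen in miniature on a stiff scalar test equation du/dt = −λu. This is a generic stability illustration, not the paper's scheme:

```python
def explicit_euler(u, lam, dt, n):
    for _ in range(n):
        u = u + dt * (-lam * u)      # amplification factor (1 - lam*dt)
    return u

def implicit_euler(u, lam, dt, n):
    for _ in range(n):
        u = u / (1.0 + lam * dt)     # amplification factor 1/(1 + lam*dt)
    return u

lam, dt = 100.0, 0.1                 # dt is 5x the explicit limit 2/lam
diverged = explicit_euler(1.0, lam, dt, 50)   # |1 - 10| = 9 per step: blows up
stable = implicit_euler(1.0, lam, dt, 50)     # 1/11 per step: decays to ~0
```

Explicit Euler is stable only for dt < 2/λ, while the implicit factor 1/(1 + λ·dt) has magnitude below one for any positive timestep, which is the property that lets the implicit PIC scheme take steps far beyond the explicit CFL limit.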
Deformable Mirrors Correct Optical Distortions
NASA Technical Reports Server (NTRS)
2010-01-01
By combining the high sensitivity of space telescopes with revolutionary imaging technologies consisting primarily of adaptive optics, the Terrestrial Planet Finder is slated to have imaging power 100 times greater than the Hubble Space Telescope. To this end, Boston Micromachines Corporation, of Cambridge, Massachusetts, received Small Business Innovation Research (SBIR) contracts from the Jet Propulsion Laboratory for space-based adaptive optical technology. The work resulted in a microelectromechanical systems (MEMS) deformable mirror (DM) called the Kilo-DM. The company now offers a full line of MEMS DMs, which are being used in observatories across the world, in laser communication, and microscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, John; Jankovsky, Zachary; Metzroth, Kyle G
2018-04-04
The purpose of the ADAPT code is to generate Dynamic Event Trees (DET) using a user specified set of simulators. ADAPT can utilize any simulation tool which meets a minimal set of requirements. ADAPT is based on the concept of DET which uses explicit modeling of the deterministic dynamic processes that take place during a nuclear reactor plant system (or other complex system) evolution along with stochastic modeling. When DET are used to model various aspects of Probabilistic Risk Assessment (PRA), all accident progression scenarios starting from an initiating event are considered simultaneously. The DET branching occurs at user specified times and/or when an action is required by the system and/or the operator. These outcomes then decide how the dynamic system variables will evolve in time for each DET branch. Since two different outcomes at a DET branching may lead to completely different paths for system evolution, the next branching for these paths may occur not only at separate times, but can be based on different branching criteria. The computational infrastructure allows for flexibility in ADAPT to link with different system simulation codes, parallel processing of the scenarios under consideration, on-line scenario management (initiation as well as termination), analysis of results, and user friendly graphical capabilities. The ADAPT system is designed for a distributed computing environment; the scheduler can track multiple concurrent branches simultaneously. The scheduler is modularized so that the DET branching strategy can be modified (e.g. biasing towards the worst-case scenario/event). Independent database systems store data from the simulation tasks and the DET structure so that the event tree can be constructed and analyzed later. ADAPT is provided with a user-friendly client which can easily sort through and display the results of an experiment, precluding the need for the user to manually inspect individual simulator runs.
Finite-error metrological bounds on multiparameter Hamiltonian estimation
NASA Astrophysics Data System (ADS)
Kura, Naoto; Ueda, Masahito
2018-01-01
Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ . The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.
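The structure of the lower bound can be sketched in a single-parameter, time-independent special case; the norms and constants here are a generic textbook form, not necessarily the paper's exact statement:

```latex
% Quantum Cramer-Rao bound after \nu independent repetitions:
\delta \;\ge\; \frac{1}{\sqrt{\nu\, F_Q(\theta)}}\,,
% and for unitary evolution U_\theta = e^{-iH(\theta)t} with
% time-independent H, the quantum Fisher information grows at most
% quadratically in the evolution time:
\qquad
F_Q(\theta) \;\le\; 4\, t^2\, \lVert \partial_\theta H \rVert^2 .
```

Combining the two inequalities shows why the required evolution time must grow as the tolerance δ shrinks, which is the shape of the time-resource bounds the paper establishes for the multiparameter case.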
Coadaptive aiding and automation enhance operator performance.
Christensen, James C; Estepp, Justin R
2013-10-01
In this work, we expand on the theory of adaptive aiding by measuring the effectiveness of coadaptive aiding, wherein we explicitly allow for both system and user to adapt to each other. Adaptive aiding driven by psychophysiological monitoring has been demonstrated to be a highly effective means of controlling task allocation and system functioning. Psychophysiological monitoring is uniquely well suited for coadaptation, as malleable brain activity may be used as a continuous input to the adaptive system. To establish the efficacy of the coadaptive system, physiological activation of adaptation was directly compared with manual activation or no activation of the same automation and cuing systems. We used interface adaptations and automation that are plausible for real-world operations, presented in the context of a multi-remotely piloted aircraft control simulation. Each participant completed 3 days of testing during 1 week. Performance was assessed via proportion of targets successfully engaged. In the first 2 days of testing, there were no significant differences in performance between the conditions. However, in the third session, physiological adaptation produced the highest performance. By extending the data collection across multiple days, we offered enough time and repeated experience for user adaptation as well as online system adaptation, hence demonstrating coadaptive aiding. The results of this work may be employed to implement more effective adaptive workstations in a variety of work domains.
Chemodiversity and molecular plasticity: recognition processes as explored by property spaces.
Vistoli, Giulio; Pedretti, Alessandro; Testa, Bernard
2011-06-01
In the last few years, a need to account for molecular flexibility in drug-design methodologies has emerged, even if the dynamic behavior of molecular properties is seldom made explicit. For a flexible molecule, it is indeed possible to compute different values for a given conformation-dependent property and the ensemble of such values defines a property space that can be used to describe its molecular variability; a most representative case is the lipophilicity space. In this review, a number of applications of lipophilicity space and other property spaces are presented, showing that this concept can be fruitfully exploited: to investigate the constraints exerted by media of different levels of structural organization, to examine processes of molecular recognition and binding at an atomic level, to derive informative descriptors to be included in quantitative structure-activity relationships and to analyze protein simulations extracting the relevant information. Much molecular information is neglected in the descriptors used by medicinal chemists, while the concept of property space can fill this gap by accounting for the often-disregarded dynamic behavior of both small ligands and biomacromolecules. Property space also introduces some innovative concepts such as molecular sensitivity and plasticity, which appear best suited to explore the ability of a molecule to adapt itself to the environment variously modulating its property and conformational profiles. Globally, such concepts can enhance our understanding of biological phenomena providing fruitful descriptors in drug-design and pharmaceutical sciences.
Random element method for numerical modeling of diffusional processes
NASA Technical Reports Server (NTRS)
Ghoniem, A. F.; Oppenheim, A. K.
1982-01-01
The random element method is a generalization of the random vortex method that was developed for the numerical modeling of momentum transport processes as expressed in terms of the Navier-Stokes equations. The method is based on the concept that random walk, as exemplified by Brownian motion, is the stochastic manifestation of diffusional processes. The algorithm based on this method is grid-free and does not require the diffusion equation to be discretized over a mesh; it is thus devoid of the numerical diffusion associated with finite difference methods. Moreover, the algorithm is self-adaptive in space and explicit in time, resulting in an improved numerical resolution of gradients as well as a simple and efficient computational procedure. The method is applied here to an assortment of problems of diffusion of momentum and energy in one dimension, as well as heat conduction in two dimensions, in order to assess its validity and accuracy. The numerical solutions obtained are found to be in good agreement with exact solutions, except for a statistical error introduced by using a finite number of elements; the error can be reduced by increasing the number of elements or by using ensemble averaging over a number of solutions.
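The random-walk idea at the heart of the method can be sketched for 1-D diffusion of heat released at a point. This is an illustrative, grid-free Monte Carlo sketch, not the authors' code; the element count and step sizes are arbitrary:

```python
import math
import random

# N elements share a unit of heat released at x = 0; each time step adds
# a Gaussian increment of variance 2*D*dt, the stochastic counterpart of
# du/dt = D * d2u/dx2. No mesh is ever constructed.
def diffuse(n_elements=20000, D=1.0, dt=0.01, n_steps=100, seed=0):
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    x = [0.0] * n_elements
    for _ in range(n_steps):
        x = [xi + rng.gauss(0.0, sigma) for xi in x]
    return x

positions = diffuse()
variance = sum(xi * xi for xi in positions) / len(positions)
# Exact solution: positions ~ Normal(0, 2*D*t) with t = 1.0, so the
# variance approaches 2, up to the statistical error the abstract notes.
```

The sample variance matches the exact Green's function variance 2·D·t to within a statistical error that shrinks as the number of elements grows, exactly the convergence behavior described above.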
Statistical Quality Control of Moisture Data in GEOS DAS
NASA Technical Reports Server (NTRS)
Dee, D. P.; Rukhovets, L.; Todling, R.
1999-01-01
A new statistical quality control algorithm was recently implemented in the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The final step in the algorithm consists of an adaptive buddy check that either accepts or rejects outlier observations based on a local statistical analysis of nearby data. A basic assumption in any such test is that the observed field is spatially coherent, in the sense that nearby data can be expected to confirm each other. However, the buddy check resulted in excessive rejection of moisture data, especially during the Northern Hemisphere summer. The analysis moisture variable in GEOS DAS is water vapor mixing ratio. Observational evidence shows that the distribution of mixing ratio errors is far from normal. Furthermore, spatial correlations among mixing ratio errors are highly anisotropic and difficult to identify. Both factors contribute to the poor performance of the statistical quality control algorithm. To alleviate the problem, we applied the buddy check to relative humidity data instead. This variable explicitly depends on temperature and therefore exhibits a much greater spatial coherence. As a result, reject rates of moisture data are much more reasonable and homogeneous in time and space.
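The essence of a buddy check can be conveyed with a small sketch (an illustrative toy, not the GEOS DAS implementation; the radius, tolerance, and spread floor are assumed parameters): each observation is compared against the statistics of its spatial neighbors and rejected when it deviates by more than a few local standard deviations, which presumes the observed field is spatially coherent.

```python
import numpy as np

def buddy_check(x, v, radius=1.0, n_sigma=3.0, floor=0.1):
    """Simplified buddy check: an observation is rejected when it deviates
    from the mean of its spatial neighbours by more than n_sigma local
    standard deviations.  `floor` guards against a spuriously small
    neighbour spread."""
    x = np.asarray(x, float)
    v = np.asarray(v, float)
    accept = np.ones(v.size, dtype=bool)
    for i in range(v.size):
        near = (np.abs(x - x[i]) <= radius) & (np.arange(v.size) != i)
        if not near.any():
            continue                       # no buddies: leave accepted
        spread = max(v[near].std(), floor)
        accept[i] = abs(v[i] - v[near].mean()) <= n_sigma * spread
    return accept
```

On a smooth field with one gross outlier, only the outlier fails the check; on a field with weak spatial coherence (as with mixing ratio errors), the local spread inflates and the test loses power, which is what motivated the switch to relative humidity.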
How landscape ecology informs global land-change science and policy
Audrey L. Mayer; Brian Buma; Amélie Davis; Sara A. Gagné; E. Louise Loudermilk; Robert M. Scheller; Fiona K.A. Schmiegelow; Yolanda F. Wiersma; Janet Franklin
2016-01-01
Landscape ecology is a discipline that explicitly considers the influence of time and space on the environmental patterns we observe and the processes that create them. Although many of the topics studied in landscape ecology have public policy implications, three are of particular concern: climate change; land use–land cover change (LULCC); and a particular type of...
Towards Zero-Waste Furniture Design.
Koo, Bongjin; Hergel, Jean; Lefebvre, Sylvain; Mitra, Niloy J
2017-12-01
In traditional design, shapes are first conceived and then fabricated. While this decoupling simplifies the design process, it can result in unwanted material wastage, especially where off-cut pieces are hard to reuse. In the absence of explicit feedback on material usage, the designer remains unable to adapt the design effectively, even when design variabilities exist. We investigate waste-minimizing furniture design wherein, based on the current design, the user is presented with design variations that result in less wastage of materials. Technically, we dynamically analyze the material space layout to determine which parts to change and how, while maintaining the original design intent specified in the form of design constraints. We evaluate the approach on various design scenarios and demonstrate effective material usage that is difficult, if not impossible, to achieve without computational support.
An approach to integrating and creating flexible software environments
NASA Technical Reports Server (NTRS)
Bellman, Kirstie L.
1992-01-01
Engineers and scientists are attempting to represent, analyze, and reason about increasingly complex systems. Many researchers have been developing new ways of creating increasingly open environments. In this research on VEHICLES, a conceptual design environment for space systems, an approach to flexibility and integration, called 'wrapping', was developed, based on the collection and subsequent processing of explicit qualitative descriptions of all the software resources in the environment. Currently, a simulation, VSIM, is available and is used to study both the types of wrapping descriptions and the processes necessary to use the metaknowledge to combine, select, adapt, and explain some of the software resources used in VEHICLES. What was learned about the types of knowledge necessary for the wrapping approach is described, along with the implications of wrapping for several key software engineering issues.
Neuroadaptive technology enables implicit cursor control based on medial prefrontal cortex activity.
Zander, Thorsten O; Krol, Laurens R; Birbaumer, Niels P; Gramann, Klaus
2016-12-27
The effectiveness of today's human-machine interaction is limited by a communication bottleneck as operators are required to translate high-level concepts into a machine-mandated sequence of instructions. In contrast, we demonstrate effective, goal-oriented control of a computer system without any form of explicit communication from the human operator. Instead, the system generated the necessary input itself, based on real-time analysis of brain activity. Specific brain responses were evoked by violating the operators' expectations to varying degrees. The evoked brain activity demonstrated detectable differences reflecting congruency with or deviations from the operators' expectations. Real-time analysis of this activity was used to build a user model of those expectations, thus representing the optimal (expected) state as perceived by the operator. Based on this model, which was continuously updated, the computer automatically adapted itself to the expectations of its operator. Further analyses showed this evoked activity to originate from the medial prefrontal cortex and to exhibit a linear correspondence to the degree of expectation violation. These findings extend our understanding of human predictive coding and provide evidence that the information used to generate the user model is task-specific and reflects goal congruency. This paper demonstrates a form of interaction without any explicit input by the operator, enabling computer systems to become neuroadaptive, that is, to automatically adapt to specific aspects of their operator's mindset. Neuroadaptive technology significantly widens the communication bottleneck and has the potential to fundamentally change the way we interact with technology.
Stability analysis of Eulerian-Lagrangian methods for the one-dimensional shallow-water equations
Casulli, V.; Cheng, R.T.
1990-01-01
In this paper, stability and error analyses are discussed for some finite difference methods applied to the one-dimensional shallow-water equations. Two finite difference formulations, based on a combined Eulerian-Lagrangian approach, are discussed. In the first part of this paper, the results of numerical analyses for an explicit Eulerian-Lagrangian method (ELM) show that the method is unconditionally stable. This method, a generalized fixed-grid method of characteristics, covers the Courant-Isaacson-Rees method as a special case. Some artificial viscosity is introduced by this scheme. However, because the method is unconditionally stable, the artificial viscosity can be brought under control either by reducing the spatial increment or by increasing the size of the time step. The second part of the paper discusses a class of semi-implicit finite difference methods for the one-dimensional shallow-water equations. This method, when the Eulerian-Lagrangian approach is used for the convective terms, is also unconditionally stable and highly accurate for small space increments or large time steps. The semi-implicit methods seem to be more computationally efficient than the explicit ELM: at each time step, only a single tridiagonal system of linear equations needs to be solved. The combined explicit and implicit ELM is best used in formulating a solution strategy for solving a network of interconnected channels. The explicit ELM is used at channel junctions for each time step; the semi-implicit method is then applied to the interior points in each channel segment. Following this solution strategy, the channel network problem can be reduced to a set of independent one-dimensional open-channel flow problems. Numerical results support the properties given by the stability and error analyses. © 1990.
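The computational kernel that makes the semi-implicit scheme cheap, a single tridiagonal solve per time step, can be sketched with the standard Thomas algorithm (a generic sketch, not the authors' shallow-water code):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a (a[0] unused),
    diagonal b, super-diagonal c (c[-1] unused), right-hand side d."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Each semi-implicit time step then costs O(n) work per channel segment, which is why large time steps remain affordable compared with a fully explicit update.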
Ahirwal, M K; Kumar, Anil; Singh, G K
2013-01-01
This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques employed in the field of electroencephalogram (EEG)/event-related potential (ERP) noise cancellation or extraction. A new approach is proposed in the form of a controlled search space to stabilize the randomness of swarm intelligence techniques, especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and the Cuckoo Optimization Algorithm, with their variants, are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-squares algorithms are also implemented for comparison. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, owing to their physiological importance in various EEG studies. The average computational time and shape measure of the evolutionary techniques are observed to be 8.21E-01 s and 1.73E-01, respectively. Although the traditional algorithms take negligible time, they are unable to offer good shape preservation of the ERP, as seen in their average computational time and shape measure of 1.41E-02 s and 2.60E+00, respectively.
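For reference, the traditional least-mean-square adaptive noise canceler that the swarm-based designs are benchmarked against can be sketched as follows (a textbook LMS form, not the paper's optimized design; the step size and tap count are arbitrary assumptions):

```python
import numpy as np

def lms_anc(primary, reference, mu=0.01, taps=8):
    """LMS adaptive noise canceler: the filter learns to predict the noise
    component of `primary` from the correlated `reference` input, and the
    prediction error is the cleaned signal."""
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]  # most recent sample first
        e = primary[n] - w @ x                   # error = signal estimate
        w += 2 * mu * e * x                      # LMS weight update
        out[n] = e
    return out
```

Because the desired signal is uncorrelated with the reference, the weights converge toward the Wiener solution and the error output retains the signal while canceling the correlated noise.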
A time reversal algorithm in acoustic media with Dirac measure approximations
NASA Astrophysics Data System (ADS)
Bretin, Élie; Lucas, Carine; Privat, Yannick
2018-04-01
This article is devoted to the study of a photoacoustic tomography model, in which one is led to consider the solution of the acoustic wave equation with a source term written as a separated-variables function of time and space, whose temporal component is, in some sense, close to the derivative of the Dirac distribution at t = 0. This models a continuous-wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the spatial component of the source term from the measure of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows an explicit reconstruction formula to be derived, followed by a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elastic wave systems.
Classical space-times from the S-matrix
NASA Astrophysics Data System (ADS)
Neill, Duff; Rothstein, Ira Z.
2013-12-01
We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin-two particle. As an explicit example, we derive the Schwarzschild space-time as a series in G_N. At no point in the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements, which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation, where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other stationary space-times, such as Kerr, follow in a similar manner. Furthermore, given that the procedure is action independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three-point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint, our methodology can also be utilized to calculate quantities relevant to the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.
NASA Astrophysics Data System (ADS)
Tryfonidis, Michail
It has been observed that during orbital spaceflight the absence of gravitation related sensory inputs causes incongruence between the expected and the actual sensory feedback resulting from voluntary movements. This incongruence results in a reinterpretation or neglect of gravity-induced sensory input signals. Over time, new internal models develop, gradually compensating for the loss of spatial reference. The study of adaptation of goal-directed movements is the main focus of this thesis. The hypothesis is that during the adaptive learning process the neural connections behave in ways that can be described by an adaptive control method. The investigation presented in this thesis includes two different sets of experiments. A series of dart throwing experiments took place onboard the space station Mir. Experiments also took place at the Biomechanics lab at MIT, where the subjects performed a series of continuous trajectory tracking movements while a planar robotic manipulandum exerted external torques on the subjects' moving arms. The experimental hypothesis for both experiments is that during the first few trials the subjects will perform poorly trying to follow a prescribed trajectory, or trying to hit a target. A theoretical framework is developed that is a modification of the sliding control method used in robotics. The new control framework is an attempt to explain the adaptive behavior of the subjects. Numerical simulations of the proposed framework are compared with experimental results and predictions from competitive models. The proposed control methodology extends the results of the sliding mode theory to human motor control. The resulting adaptive control model of the motor system is robust to external dynamics, even those of negative gain, uses only position and velocity feedback, and achieves bounded steady-state error without explicit knowledge of the system's nonlinearities. 
In addition, the experimental and modeling results demonstrate that visuomotor learning is important not only for error correction through internal model adaptation on the ground or in microgravity, but also for the minimization of the total mean-square error in the presence of random variability. Thus, human intelligent decision-making displays certain attributes that seem to conform to Bayesian statistical games.
Yet another family of diagonal metrics for de Sitter and anti-de Sitter spacetimes
NASA Astrophysics Data System (ADS)
Podolský, Jiří; Hruška, Ondřej
2017-06-01
In this work we present and analyze a new class of coordinate representations of de Sitter and anti-de Sitter spacetimes for which the metrics are diagonal and (typically) static and axially symmetric. Contrary to the well-known forms of these fundamental geometries, which usually correspond to a 1+3 foliation with the 3-space of constant spatial curvature, the new metrics are adapted to a 2+2 foliation, and are warped products of two 2-spaces of constant curvature. This new class of (anti-)de Sitter metrics depends on the value of the cosmological constant Λ and two discrete parameters +1, 0, -1 related to the curvature of the 2-spaces. The class admits 3 distinct subcases for Λ > 0 and 8 subcases for Λ < 0. We systematically study all these possibilities. In particular, we explicitly present the corresponding parametrizations of the (anti-)de Sitter hyperboloid, visualize the coordinate lines and surfaces within the global conformal cylinder, investigate their mutual relations, present some closely related forms of the metrics, and give transformations to standard de Sitter and anti-de Sitter metrics. Using these results, we also provide a physical interpretation of B-metrics as exact gravitational fields of a tachyon.
Water solvent effects using continuum and discrete models: The nitromethane molecule, CH3NO2.
Modesto-Costa, Lucas; Uhl, Elmar; Borges, Itamar
2015-11-15
The first three valence transitions of the two nitromethane conformers (CH3NO2) are two dark n → π* transitions and a very intense π → π* transition. In this work, these transitions in gas-phase and solvated in water of both conformers were investigated theoretically. The polarizable continuum model (PCM), two conductor-like screening (COSMO) models, and the discrete sequential quantum mechanics/molecular mechanics (S-QM/MM) method were used to describe the solvation effect on the electronic spectra. Time dependent density functional theory (TDDFT), configuration interaction including all single substitutions and perturbed double excitations (CIS(D)), the symmetry-adapted-cluster CI (SAC-CI), the multistate complete active space second order perturbation theory (CASPT2), and the algebraic-diagrammatic construction (ADC(2)) electronic structure methods were used. Gas-phase CASPT2, SAC-CI, and ADC(2) results are in very good agreement with published experimental and theoretical spectra. Among the continuum models, PCM combined either with CASPT2, SAC-CI, or B3LYP provided good agreement with available experimental data. COSMO combined with ADC(2) described the overall trends of the transition energy shifts. The effect of increasing the number of explicit water molecules in the S-QM/MM approach was discussed and the formation of hydrogen bonds was clearly established. By including explicitly 24 water molecules corresponding to the complete first solvation shell in the S-QM/MM approach, the ADC(2) method gives more accurate results as compared to the TDDFT approach and with similar computational demands. The ADC(2) with S-QM/MM model is, therefore, the best compromise for accurate solvent calculations in a polar environment. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Huttenlau, Matthias; Schneeberger, Klaus; Winter, Benjamin; Pazur, Robert; Förster, Kristian; Achleitner, Stefan; Bolliger, Janine
2017-04-01
Devastating flood events have caused substantial economic damage across Europe during past decades. Flood risk management has therefore become a topic of crucial interest across state agencies, research communities and the public sector, including insurers. There is consensus that mitigating flood risk relies on impact assessments which quantitatively account for a broad range of aspects in a (changing) environment. Flood risk assessments which take into account the interaction between the drivers climate change, land-use change and socio-economic change might bring new insights to the understanding of the magnitude and spatial characteristics of flood risks. Furthermore, the comparative assessment of different adaptation measures can give valuable information for decision-making. With this contribution we present an inter- and transdisciplinary research project aiming at developing and applying such an impact assessment relying on a coupled modelling framework for the Province of Vorarlberg in Austria. Stakeholder engagement ensures that the final outcomes of our study are accepted and successfully implemented in flood management practice. The study addresses three key questions: (i) What are scenarios of land-use and climate change for the study area? (ii) How will the magnitude and spatial characteristics of future flood risk change as a result of changes in climate and land use? (iii) Are there spatial planning and building-protection measures which effectively reduce future flood risk? The modelling framework has a modular structure comprising the modules (i) climate change, (ii) land-use change, (iii) hydrologic modelling, (iv) flood risk analysis, and (v) adaptation measures. Meteorological time series are coupled with spatially explicit scenarios of land-use change to model runoff time series. The runoff time series are combined with impact indicators such as building damages, and the results are statistically assessed to analyse flood risk scenarios.
Thus, the regional flood risk can be expressed in terms of expected annual damage and damages associated with a low probability of occurrence. We consider building protection measures explicitly as part of the consequence analysis of flood risk whereas spatial planning measures are already considered as explicit scenarios in the course of land-use change modelling.
Adaptive Management of Computing and Network Resources for Spacecraft Systems
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)
2000-01-01
It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.
Delay-dependent coupling for a multi-agent LTI consensus system with inter-agent delays
NASA Astrophysics Data System (ADS)
Qiao, Wei; Sipahi, Rifat
2014-01-01
Delay-dependent coupling (DDC) is considered in this paper in a broadly studied linear time-invariant multi-agent consensus system in which agents communicate with each other under homogeneous delays, while attempting to reach consensus. The coupling among the agents is designed here as an explicit parameter of this delay, allowing couplings to autonomously adapt based on the delay value, and in order to guarantee stability and a certain degree of robustness in the network despite the destabilizing effect of delay. Design procedures, analysis of convergence speed of consensus, comprehensive numerical studies for the case of time-varying delay, and limitations are presented.
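The flavor of delay-dependent coupling can be conveyed with a toy discrete-time consensus simulation (an illustrative sketch, not the paper's design procedure; the gain rule eps = 0.3/(1 + delay) is an assumed choice): larger delays get weaker coupling, preserving stability despite the destabilizing effect of delay.

```python
import numpy as np

def delayed_consensus(x0, L, delay, steps, gain=None):
    """Discrete-time consensus with a homogeneous communication delay:
    x[k+1] = x[k] - eps * L @ x[k - delay].  The coupling `gain` is made
    an explicit function of the delay, so couplings adapt to the delay
    value instead of being fixed."""
    eps = gain if gain is not None else 0.3 / (1 + delay)
    hist = [np.array(x0, float)] * (delay + 1)  # delayed-state buffer
    for _ in range(steps):
        x = hist[-1] - eps * L @ hist[0]        # hist[0] is x[k - delay]
        hist = hist[1:] + [x]
    return hist[-1]
```

For a symmetric graph Laplacian the average state is invariant, so the agents converge to the mean of the initial conditions whenever the delay-adjusted gain keeps the delayed difference equation stable.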
NASA Astrophysics Data System (ADS)
Yang, Haijian; Sun, Shuyu; Yang, Chao
2017-03-01
Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
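A semismooth Newton iteration for a complementarity system of the kind arising from box-constrained saturations can be sketched as follows (a generic min-function reformulation, not the paper's preconditioned implementation; F and its Jacobian J are user-supplied, and the example problem below is invented for illustration):

```python
import numpy as np

def semismooth_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Semismooth Newton method for the complementarity system
    0 <= x, F(x) >= 0, x * F(x) = 0, reformulated componentwise as
    Phi(x) = min(x, F(x)) = 0."""
    x = np.array(x0, float)
    for _ in range(max_iter):
        Fx = F(x)
        phi = np.minimum(x, Fx)
        if np.linalg.norm(phi) < tol:
            break
        # Generalized Jacobian of Phi: identity row where x is the
        # active part of the min, the row of F's Jacobian otherwise.
        G = np.where((x < Fx)[:, None], np.eye(len(x)), J(x))
        x = x - np.linalg.solve(G, phi)
    return x
```

Away from degenerate points the iteration converges locally at a quadratic rate, which is the property the nonlinear preconditioning in the paper is designed to protect globally.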
Clipping in neurocontrol by adaptive dynamic programming.
Fairbank, Michael; Prokhorov, Danil; Alonso, Eduardo
2014-10-01
In adaptive dynamic programming, neurocontrol, and reinforcement learning, the objective is for an agent to learn to choose actions so as to minimize a total cost function. In this paper, we show that when discretized time is used to model the motion of the agent, it can be very important to do clipping on the motion of the agent in the final time step of the trajectory. By clipping, we mean that the final time step of the trajectory is to be truncated such that the agent stops exactly at the first terminal state reached, and no distance further. We demonstrate that when clipping is omitted, learning performance can fail to reach the optimum, and when clipping is done properly, learning performance can improve significantly. The clipping problem we describe affects algorithms that use explicit derivatives of the model functions of the environment to calculate a learning gradient. These include backpropagation through time for control and methods based on dual heuristic programming. However, the clipping problem does not significantly affect methods based on heuristic dynamic programming, temporal differences learning, or policy-gradient learning algorithms.
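A one-dimensional toy rollout makes the clipping effect concrete (an illustration of the idea only, not the paper's neurocontrol setup): without clipping, the agent overshoots the terminal state and the accumulated cost is biased upward.

```python
def rollout(x0, v, dt, goal, clip):
    """Integrate dx/dt = v with fixed time steps until `goal` is reached;
    the cost is the elapsed time.  With `clip`, the final step is
    truncated so the trajectory stops exactly at the terminal state."""
    x, cost = x0, 0.0
    while x < goal:
        step = v * dt
        if clip and x + step > goal:
            frac = (goal - x) / step      # fraction of the step needed
            return x + frac * step, cost + frac * dt
        x += step
        cost += dt
    return x, cost
```

With x0 = 0, speed 1, dt = 0.3 and goal = 1, the unclipped rollout stops at x ≈ 1.2 with cost ≈ 1.2, while the clipped rollout stops at 1.0 with the true minimum-time cost of 1.0; a learning gradient computed from the unclipped trajectory inherits that bias.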
Kongsager, Rico; Locatelli, Bruno; Chazarin, Florie
2016-02-01
Adaptation and mitigation share the ultimate purpose of reducing climate change impacts. However, they tend to be considered separately in projects and policies because of their different objectives and scales. Agriculture and forestry are related to both adaptation and mitigation: they contribute to greenhouse gas emissions and removals, are vulnerable to climate variations, and form part of adaptive strategies for rural livelihoods. We assessed how climate change project design documents (PDDs) considered a joint contribution to adaptation and mitigation in forestry and agriculture in the tropics, by analyzing 201 PDDs from adaptation funds, mitigation instruments, and project standards [e.g., climate, community and biodiversity (CCB)]. We analyzed whether PDDs established for one goal reported an explicit contribution to the other (i.e., whether mitigation PDDs contributed to adaptation and vice versa). We also examined whether the proposed activities or expected outcomes allowed for potential contributions to the two goals. Despite the separation between the two goals in international and national institutions, 37% of the PDDs explicitly mentioned a contribution to the other objective, although only half of those substantiated it. In addition, most adaptation PDDs (90%) and all mitigation PDDs could potentially contribute, at least partially, to the other goal. Some adaptation project developers were interested in mitigation for the prospect of carbon funding, whereas mitigation project developers integrated adaptation to achieve greater long-term sustainability or to attain CCB certification. International and national institutions can provide incentives for projects to harness synergies and avoid trade-offs between adaptation and mitigation.
Action recognition using mined hierarchical compound features.
Gilbert, Andrew; Illingworth, John; Bowden, Richard
2011-05-01
The field of action recognition has seen a large increase in activity in recent years. Much of the progress has come through incorporating ideas from single-frame object recognition and adapting them for temporal-based action recognition. Inspired by the success of interest points in the 2D spatial domain, their 3D (space-time) counterparts typically form the basic components used to describe actions, and in action recognition the features used are often engineered to fire sparsely. This is to ensure that the problem is tractable; however, it can sacrifice recognition accuracy, as it cannot be assumed that the optimum features in terms of class discrimination are obtained from this approach. In contrast, we propose to initially use an overcomplete set of simple 2D corners in both space and time. These are grouped spatially and temporally using a hierarchical process, with an increasing search area. At each stage of the hierarchy, the most distinctive and descriptive features are learned efficiently through data mining. This allows large amounts of data to be searched for frequently reoccurring patterns of features. At each level of the hierarchy, the mined compound features become more complex, discriminative, and sparse. This results in fast, accurate recognition with real-time performance on high-resolution video. As the compound features are constructed and selected based upon their ability to discriminate, their speed and accuracy increase at each level of the hierarchy. The approach is tested on four state-of-the-art data sets: the popular KTH data set, to provide a comparison with other state-of-the-art approaches; the Multi-KTH data set, to illustrate performance at simultaneous multi-action classification, despite no explicit localization information being provided during training; and finally, the recent Hollywood and Hollywood2 data sets, which provide challenging complex actions taken from commercial movie sequences.
For all four data sets, the proposed hierarchical approach outperforms all other methods reported thus far in the literature and can achieve real-time operation.
NASA Technical Reports Server (NTRS)
Mulavara, A. P.; Seidler, R. D.; Feiveson, A.; Oddsson, L.; Zanello, S.; Oman, C. M.; Ploutz-Snyder, L.; Peters, B.; Cohen, H. S.; Reschke, M.;
2014-01-01
Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the re-adaptation phase following a return to an earth-gravitational environment. These alterations may disrupt the ability to perform mission-critical functional tasks requiring ambulation, manual control and gaze stability. Interestingly, astronauts who return from space flight show substantial differences in their abilities to readapt to a gravitational environment. The ability to predict the manner and degree to which individual astronauts would be affected would improve the effectiveness of countermeasure training programs designed to enhance sensorimotor adaptability. For such an approach to succeed, we must develop predictive measures of sensorimotor adaptability that will allow us to foresee, before actual space flight, which crewmembers are likely to experience the greatest challenges to their adaptive capacities. The goals of this project are to identify and characterize this set of predictive measures, which include: 1) behavioral tests to assess sensory bias and adaptability, quantified using both strategic and plastic-adaptive responses; 2) imaging to determine individual brain morphological and functional features using structural magnetic resonance imaging (MRI), diffusion tensor imaging, resting-state functional connectivity MRI, and sensorimotor adaptation task-related functional brain activation; 3) genotype markers for genetic polymorphisms in catechol-O-methyl transferase, dopamine receptor D2, brain-derived neurotrophic factor, and the alpha2-adrenergic receptor that play a role in the neural pathways underlying sensorimotor adaptation. We anticipate these predictive measures will be significantly correlated with individual differences in sensorimotor adaptability after long-duration space flight and in an analog bed rest environment.
We will be conducting a retrospective study leveraging data already collected from relevant ongoing and completed bed rest and space flight studies. These data will be combined with predictor metrics collected prospectively (behavioral, brain imaging and genomic measures) from the returning subjects to build models for predicting post-mission (bed rest for non-astronauts, or space flight for astronauts) adaptive capability as manifested in their outcome measures. Comparisons of model performance will allow us to better design and implement sensorimotor adaptability training countermeasures that are customized for each crewmember's sensory biases, adaptive capacity, brain structure and functional capacities, and genetic predispositions against decrements in post-mission adaptive capability. This ability will allow more efficient use of crew time during training and will optimize training prescriptions for astronauts to ensure expected outcomes.
NASA Technical Reports Server (NTRS)
Baumeister, K. J.; Kreider, K. L.
1996-01-01
An explicit finite difference iteration scheme is developed to study harmonic sound propagation in ducts. To reduce storage requirements for large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
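The core idea of the abstract above — recasting a steady-state frequency-domain problem as a parabolic equation that is marched in pseudo-time until it stops changing — can be illustrated on a toy 1D diffusion problem. This is a hedged sketch of the general technique, not the authors' acoustic-potential scheme; the grid size, step sizes, and boundary values are arbitrary illustrative choices.

```python
import numpy as np

def march_to_steady_state(n=51, dx=0.02, dt=1e-4, tol=1e-8, max_steps=200000):
    """March u_t = u_xx with fixed boundary values to steady state.

    The steady solution of u_xx = 0 with u(0)=0, u(1)=1 is the straight
    line u(x) = x, which the pseudo-time iteration should approach.
    dt/dx**2 = 0.25 satisfies the explicit stability limit of 0.5."""
    u = np.zeros(n)
    u[-1] = 1.0  # boundary forcing, analogous to the incident source amplitude
    for _ in range(max_steps):
        u_new = u.copy()
        # forward-time central-space update of the interior points
        u_new[1:-1] = u[1:-1] + dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        if np.max(np.abs(u_new - u)) < tol:  # steady state reached
            return u_new
        u = u_new
    return u
```

The stopping test on the per-step change is what replaces the large matrix solve of a direct frequency-domain method: no system matrix is ever stored, only the current field.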
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1996-01-01
An explicit finite difference iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for large 3D problems, the time dependent potential form of the acoustic wave equation is used. To insure that the finite difference scheme is both explicit and stable, time is introduced into the Fourier transformed (steady-state) acoustic potential field as a parameter. Under a suitable transformation, the time dependent governing equation in frequency space is simplified to yield a parabolic partial differential equation, which is then marched through time to attain the steady-state solution. The input to the system is the amplitude of an incident harmonic sound source entering a quiescent duct at the input boundary, with standard impedance boundary conditions on the duct walls and duct exit. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady-state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
NASA Technical Reports Server (NTRS)
Mulavara, A. P.; Wood, S. J.; Cohen, H. S.; Bloomberg, J. J.
2012-01-01
Exposure to the microgravity conditions of space flight induces adaptive modification in sensorimotor function allowing astronauts to operate in this unique environment. This adaptive state, however, is inappropriate for a 1-g environment. Consequently, astronauts must spend time readapting to Earth's gravity following their return to Earth. During this readaptation period, alterations in sensorimotor function cause various disturbances in astronaut gait during postflight walking. They often rely more on vision for postural and gait stability, and many report the need for greater cognitive supervision of motor actions that prior to space flight were fully automated. Over the last several years our laboratory has investigated postflight astronaut locomotion with the aim of better understanding how adaptive changes in underlying sensorimotor mechanisms contribute to postflight gait dysfunction. Exposure to the microgravity conditions of space flight induces adaptive modification in the control of vestibularly-mediated reflexive head movement during locomotion after space flight. Furthermore, during motor learning, adaptive transitions are composed of two main mechanisms: strategic and plastic. Strategic mechanisms represent immediate and transitory modifications in control to deal with changes in the prevailing environment that, if prolonged, induce plastic mechanisms designed to automate new behavioral responses. The goal of the present study was to examine the contributions of sensorimotor subsystems such as the vestibular and body load sensing (BLS) somatosensory influences on head movement control during locomotion after long-duration space flight. Further, we present data on the two motor learning processes during readaptation of locomotor function after long-duration space flight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. The multiscale expansion of the distribution function therefore yields a sparse representation of the data, saving memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments.
Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character in phase space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more spatially local numerical schemes such as compact finite difference schemes, the discontinuous Galerkin method, or finite element residual schemes, which are well suited for parallel domain decomposition techniques.
TIME CALIBRATED OSCILLOSCOPE SWEEP
Owren, H.M.; Johnson, B.M.; Smith, V.L.
1958-04-22
A time calibrator for an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equal time spaced markers upon a signal displayed upon an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of a suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage having a wave shape which is substantially linear with respect to time between equal time spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equal time spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, which indications are therefore suitable for direct time calibration purposes.
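The sweep-voltage construction in this patent — a linear ramp plus a sinusoid whose amplitude is chosen so that the total slope just reaches zero once per cycle, creating equally spaced plateau regions — is easy to reproduce numerically. A sketch under assumed values (the ramp rate and marker frequency below are arbitrary, not from the patent):

```python
import numpy as np

def calibrated_sweep(t, ramp_rate=1.0, f_marker=10.0):
    """Linear ramp plus a sinusoid producing equal-time plateau markers.

    Choosing the sinusoid amplitude a = ramp_rate / (2*pi*f_marker) makes
    the total slope ramp_rate * (1 + cos(2*pi*f_marker*t)), which touches
    zero exactly once per marker period: a momentary plateau every
    1/f_marker seconds, while the sweep never runs backwards."""
    a = ramp_rate / (2 * np.pi * f_marker)
    return ramp_rate * t + a * np.sin(2 * np.pi * f_marker * t)
```

Plotting the returned voltage against a test signal reproduces the patent's effect: the trace dwells briefly at each plateau, intensifying the display at equally spaced instants.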
Mistry, Pankaj; Dunn, Janet A; Marshall, Andrea
2017-07-18
The application of adaptive design methodology within a clinical trial setting is becoming increasingly popular. However, the application of these methods is often not reported as an adaptive design, making it more difficult to capture their emerging use. Within this review, we aim to understand how adaptive design methodology is being reported, whether these methods are explicitly stated as an 'adaptive design' or have to be inferred, and to identify whether these methods are applied prospectively or concurrently. Three databases (Embase, Ovid, and PubMed) were chosen to conduct the literature search. The inclusion criteria for the review were phase II, phase III and phase II/III randomised controlled trials within the field of oncology that published trial results in 2015. A variety of search terms related to adaptive designs were used. A total of 734 results were identified; after screening, 54 were eligible. Adaptive designs were more commonly applied in phase III confirmatory trials. The majority of the papers performed an interim analysis, which included some sort of stopping criteria. Additionally, only two papers explicitly stated the term 'adaptive design', and therefore for most of the papers it had to be inferred that adaptive methods were applied. Sixty-five applications of adaptive design methods were identified, of which the most common was an adaptation using group sequential methods. This review indicated that the reporting of adaptive design methodology within clinical trials needs improving. The proposed extension to the current CONSORT 2010 guidelines could help capture adaptive design methods and, furthermore, provide an essential aid to those involved with clinical trials.
End-to-end Coronagraphic Modeling Including a Low-order Wavefront Sensor
NASA Technical Reports Server (NTRS)
Krist, John E.; Trauger, John T.; Unwin, Stephen C.; Traub, Wesley A.
2012-01-01
To evaluate space-based coronagraphic techniques, end-to-end modeling is necessary to simulate realistic fields containing speckles caused by wavefront errors. Real systems will suffer from pointing errors and thermal and motion-induced mechanical stresses that introduce time-variable wavefront aberrations that can reduce the field contrast. A low-order wavefront sensor (LOWFS) is needed to measure these changes at a sufficiently high rate to maintain the contrast level during observations. We implement here a LOWFS and corresponding low-order wavefront control subsystem (LOWFCS) in end-to-end models of a space-based coronagraph. Our goal is to be able to accurately duplicate the effect of the LOWFS+LOWFCS without explicitly evaluating the end-to-end model at numerous time steps.
Sampayan, Stephen E.
2016-11-22
Apparatus, systems, and methods that provide an X-ray interrogation system having a plurality of stationary X-ray point sources arranged to substantially encircle an area or space to be interrogated. A plurality of stationary detectors are arranged to substantially encircle the area or space to be interrogated. A controller is adapted to control the stationary X-ray point sources to emit X-rays one at a time, and to control the stationary detectors to detect the X-rays emitted by the stationary X-ray point sources.
NASA Technical Reports Server (NTRS)
Price, Jennifer; Harris, Philip; Hochstetler, Bruce; Guerra, Mark; Mendez, Israel; Healy, Matthew; Khan, Ahmed
2013-01-01
International Space Station Live! (ISSLive!) is a Web application that uses a proprietary commercial technology called Lightstreamer to push data across the Internet using the standard http port (port 80). ISSLive! uses the push technology to display real-time telemetry and mission timeline data from the space station in any common Web browser or Internet-enabled mobile device. ISSLive! is designed to fill a unique niche in the education and outreach areas by providing access to real-time space station data without a physical presence in the mission control center. The technology conforms to Internet standards, supports the throughput needed for real-time space station data, and is flexible enough to work on a large number of Internet-enabled devices. ISSLive! consists of two custom components: (1) a series of data adapters that resides server-side in the mission control center at Johnson Space Center, and (2) a set of public HTML pages that render the data pushed from the data adapters. A third component, the Lightstreamer server, is commercially available from a third party and acts as an intermediary between custom components (1) and (2). Lightstreamer also provides proprietary software libraries that are required to use the custom components. At the time of this reporting, this is the first usage of Web-based, push streaming technology in the aerospace industry.
Adaptive multi-time-domain subcycling for crystal plasticity FE modeling of discrete twin evolution
NASA Astrophysics Data System (ADS)
Ghosh, Somnath; Cheng, Jiahao
2018-02-01
Crystal plasticity finite element (CPFE) models that account for discrete micro-twin nucleation and propagation have recently been developed for studying the complex deformation behavior of hexagonal close-packed (HCP) materials (Cheng and Ghosh in Int J Plast 67:148-170, 2015, J Mech Phys Solids 99:512-538, 2016). A major difficulty with conducting high-fidelity, image-based CPFE simulations of polycrystalline microstructures with explicit twin formation is the prohibitively high demand on computing time. High strain localization within fast-propagating twin bands requires very fine simulation time steps and leads to enormous computational cost. To mitigate this shortcoming and improve simulation efficiency, this paper proposes a multi-time-domain subcycling algorithm. It is based on adaptive partitioning of the evolving computational domain into twinned and untwinned domains. Based on the local deformation rate, the algorithm accelerates simulations by adopting different time steps for each sub-domain. The sub-domains are coupled back after coarse time increments using a predictor-corrector algorithm at the interface. The subcycling-augmented CPFEM is validated with a comprehensive set of numerical tests. Significant speed-up is observed with this novel algorithm without any loss of accuracy, which is advantageous for predicting twinning in polycrystalline microstructures.
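The subcycling idea — advance the fast sub-domain with many small steps per coarse step while the slow sub-domain's interface value is held by a predictor — can be caricatured on a pair of coupled ODEs. This is a minimal sketch of the time-integration pattern only, not the CPFE algorithm; the rate constants, coupling, and frozen-interface predictor below are illustrative assumptions (a real scheme adds a corrector pass at the interface).

```python
def subcycled_step(x_slow, x_fast, dt, n_sub, k_slow=1.0, k_fast=50.0, c=0.1):
    """One coarse step of a two-domain relaxation system.

    dx_s/dt = -k_slow*x_s + c*x_f   (slow domain, one coarse step dt)
    dx_f/dt = -k_fast*x_f + c*x_s   (fast domain, n_sub substeps dt/n_sub)"""
    h = dt / n_sub
    xf = x_fast
    # subcycle the fast domain with the slow interface value frozen (predictor)
    for _ in range(n_sub):
        xf = xf + h * (-k_fast * xf + c * x_slow)
    # advance the slow domain once, using the updated fast value
    xs = x_slow + dt * (-k_slow * x_slow + c * xf)
    return xs, xf
```

The payoff mirrors the paper's: the stiff (fast) part pays for small steps only on its own sub-domain, while the rest of the system marches at the coarse step.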
Environmental Controls on Space-Time Biodiversity Patterns in the Amazon
NASA Astrophysics Data System (ADS)
Porporato, A. M.; Bonetti, S.; Feng, X.
2014-12-01
The Amazon/Andes territory is characterized by the highest biodiversity on Earth, and understanding how all these ecological niches and different species originated and developed is an open challenge. The niche perspective assumes that species have evolved to occupy deterministically different roles within their environment. This view differs from that of the neutral theories, which assume ecological equivalence between all species but incorporate stochastic demographic processes along with long-term migration and speciation rates. Both approaches have demonstrated tremendous power in predicting aspects of species biodiversity. By combining tools from both approaches, we use modified birth and death processes to simulate plant species diversification in the Amazon/Andes and their space-time ecohydrological controls. By defining parameters related to births and deaths as functions of available resources, we incorporate the role of space-time resource variability on niche formation and community composition. We also explicitly include the role of a heterogeneous landscape and topography. The results are discussed in relation to transect datasets from neotropical forests.
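The modeling device described — a birth-death process whose rates depend on available resources — can be sketched in a few lines. This is a generic resource-limited birth-death toy, not the authors' model; the per-capita probabilities and the resource-set carrying capacity are illustrative assumptions.

```python
import random

def birth_death(n0, resource, steps, seed=0):
    """Discrete-time birth-death sketch with resource-dependent births.

    Per-capita birth probability 0.1*(1 - n/resource) declines as the
    population approaches the resource-set capacity; the per-capita death
    probability is a constant 0.05.  The expected equilibrium is where
    the two balance, n = resource/2."""
    rng = random.Random(seed)
    n = n0
    for _ in range(steps):
        p_birth = 0.1 * max(0.0, 1.0 - n / resource)
        births = sum(rng.random() < p_birth for _ in range(n))
        deaths = sum(rng.random() < 0.05 for _ in range(n))
        n = max(0, n + births - deaths)
    return n
```

Making `resource` vary in space and time, as the abstract describes, is then a matter of passing a field rather than a constant.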
Gauge interaction as periodicity modulation
NASA Astrophysics Data System (ADS)
Dolce, Donatello
2012-06-01
The paper is devoted to a geometrical interpretation of gauge invariance in terms of the formalism of field theory in compact space-time dimensions (Dolce, 2011) [8]. In this formalism, the kinematic information of an interacting elementary particle is encoded on the relativistic geometrodynamics of the boundary of the theory through local transformations of the underlying space-time coordinates. Therefore gauge interactions are described as invariance of the theory under local deformations of the boundary. The resulting local variations of the field solution are interpreted as internal transformations. The internal symmetries of the gauge theory turn out to be related to corresponding space-time local symmetries. In the approximation of local infinitesimal isometric transformations, Maxwell's kinematics and gauge invariance are inferred directly from the variational principle. Furthermore we explicitly impose periodic conditions at the boundary of the theory as semi-classical quantization condition in order to investigate the quantum behavior of gauge interaction. In the abelian case the result is a remarkable formal correspondence with scalar QED.
Dirac Hamiltonian and Reissner-Nordström metric: Coulomb interaction in curved space-time
NASA Astrophysics Data System (ADS)
Noble, J. H.; Jentschura, U. D.
2016-03-01
We investigate the spin-1/2 relativistic quantum dynamics in the curved space-time generated by a central massive charged object (black hole). This necessitates a study of the coupling of a Dirac particle to the Reissner-Nordström space-time geometry and the simultaneous covariant coupling to the central electrostatic field. The relativistic Dirac Hamiltonian for the Reissner-Nordström geometry is derived. A Foldy-Wouthuysen transformation reveals the presence of gravitational and electrogravitational spin-orbit coupling terms which generalize the Fokker precession terms found for the Dirac-Schwarzschild Hamiltonian, and other electrogravitational correction terms to the potential proportional to α^n G, where α is the fine-structure constant and G is the gravitational coupling constant. The particle-antiparticle symmetry found for the Dirac-Schwarzschild geometry (and for other geometries which do not include electromagnetic interactions) is shown to be explicitly broken due to the electrostatic coupling. The resulting spectrum of radially symmetric, electrostatically bound systems (with gravitational corrections) is evaluated for example cases.
NASA Technical Reports Server (NTRS)
Burken, John J.
2005-01-01
This viewgraph presentation covers the following topics: 1) Brief explanation of Generation II Flight Program; 2) Motivation for Neural Network Adaptive Systems; 3) Past/ Current/ Future IFCS programs; 4) Dynamic Inverse Controller with Explicit Model; 5) Types of Neural Networks Investigated; and 6) Brief example
Forest Management Under Uncertainty for Multiple Bird Population Objectives
Clinton T. Moore; W. Todd Plummer; Michael J. Conroy
2005-01-01
We advocate adaptive programs of decision making and monitoring for the management of forest birds when responses by populations to management, and particularly management trade-offs among populations, are uncertain. Models are necessary components of adaptive management. Under this approach, uncertainty about the behavior of a managed system is explicitly captured in...
Anelli, Filomena; Ciaramelli, Elisa; Arzy, Shahar; Frassinetti, Francesca
2016-11-01
Accumulating evidence suggests that humans process time and space in similar ways. Humans represent time along a spatial continuum, and perception of temporal durations can be altered through manipulations of spatial attention by prismatic adaptation (PA). Here, we investigated whether PA-induced manipulations of spatial attention can also influence more conceptual aspects of time, such as humans' ability to travel mentally back and forward in time (mental time travel, MTT). Before and after leftward- and rightward-PA, participants projected themselves in the past, present or future time (i.e., self-projection), and, for each condition, determined whether a series of events were located in the past or the future with respect to that specific self-location in time (i.e., self-reference). The results demonstrated that leftward and rightward shifts of spatial attention facilitated recognition of past and future events, respectively. These findings suggest that spatial attention affects the temporal processing of the human self. Copyright © 2016 Elsevier B.V. All rights reserved.
AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING
Sharif, Behzad; Bresler, Yoram
2013-01-01
We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159
Liu, Jian; Liu, Kexin; Liu, Shutang
2017-01-01
In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure this type of CVCSs asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and back-stepping technique, sufficient criteria on stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431
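The flavor of the scheme above — a certainty-equivalence controller plus a Lyapunov-derived parameter update for a complex-variable system — can be sketched for the simplest scalar case. This is not the paper's n-dimensional backstepping design; the system, gains, and update law below are illustrative assumptions.

```python
def adaptive_stabilize(theta, z0, gamma=2.0, k=1.0, dt=1e-3, steps=20000):
    """Stabilize dz/dt = theta*z + u with unknown complex theta.

    Control: u = -(theta_hat + k)*z  (certainty equivalence plus margin k).
    Update:  d(theta_hat)/dt = gamma*|z|**2, obtained from the Lyapunov
    function V = |z|**2 + (Re(theta) - theta_hat)**2 / gamma, which gives
    dV/dt = -2*k*|z|**2 <= 0; only the real part of theta needs
    estimating for stabilization."""
    z, theta_hat = complex(z0), 0.0
    for _ in range(steps):
        u = -(theta_hat + k) * z
        z = z + dt * (theta * z + u)                # explicit Euler step
        theta_hat = theta_hat + dt * gamma * abs(z) ** 2
    return abs(z)
```

The full paper replaces this scalar closed form with complex update laws and backstepping through a strict-feedback chain, but the Lyapunov bookkeeping is the same idea.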
NASA Astrophysics Data System (ADS)
Henneaux, Marc; Lekeu, Victor; Matulich, Javier; Prohazka, Stefan
2018-06-01
The action of the free (3, 1) theory in six spacetime dimensions is explicitly constructed. The variables of the variational principle are prepotentials adapted to the self-duality conditions on the fields. The (3, 1) supersymmetry variations are given and the invariance of the action is verified. The action is first-order in time derivatives. It is also Poincaré invariant but not manifestly so, just like the Hamiltonian action of more familiar relativistic field theories.
Adaptive Sampling using Support Vector Machines
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. Mandelli; C. Smith
2012-11-01
Reliability/safety analysis of stochastic dynamic systems (e.g., nuclear power plants, airplanes, chemical plants) is currently performed through a combination of Event-Trees and Fault-Trees. However, these conventional methods suffer from certain drawbacks: timing of events is not explicitly modeled; ordering of events is preset by the analyst; and the modeling of complex accident scenarios is driven by expert judgment. For these reasons, there is currently an increasing interest in the development of dynamic PRA methodologies, since they can be used to address the deficiencies of conventional methods listed above.
KEWPIE: A dynamical cascade code for decaying excited compound nuclei
NASA Astrophysics Data System (ADS)
Bouriquet, Bertrand; Abe, Yasuhisa; Boilley, David
2004-05-01
A new dynamical cascade code for decaying hot nuclei is proposed and specially adapted to the synthesis of super-heavy nuclei. In such a case, the interesting channel is the tiny fraction that decays through particle emission, so the code avoids classical Monte-Carlo methods and proposes a new numerical scheme. The time dependence is explicitly taken into account in order to cope with the fact that the fission decay rate might not be constant. The code evaluates both statistical and dynamical observables. Results are successfully compared to experimental data.
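The key numerical point — that a time-dependent decay rate Γ(t) makes the survival probability exp(-∫Γ(t)dt) rather than a simple exponential — can be sketched with a direct integration. A toy illustration of that idea, not the KEWPIE scheme itself:

```python
import math

def survival_fraction(gamma, t_end, dt=1e-4):
    """Survival fraction for dN/dt = -gamma(t)*N with a time-dependent
    decay rate: accumulate the integral of gamma(t) in log-space, which
    stays well-conditioned even when the fraction becomes tiny."""
    log_n, t = 0.0, 0.0
    while t < t_end:
        log_n -= gamma(t) * dt   # explicit Euler accumulation of -int(gamma)
        t += dt
    return math.exp(log_n)
```

With a constant rate this reduces to the familiar exp(-Γt); with a rate that switches on only after a transient delay — the situation the abstract describes for fission — the surviving fraction is correspondingly larger.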
Local Operators in the Eternal Black Hole.
Papadodimas, Kyriakos; Raju, Suvrat
2015-11-20
In the AdS/CFT correspondence, states obtained by Hamiltonian evolution of the thermofield doubled state are also dual to an eternal black-hole geometry, which is glued to the boundary with a time shift generated by a large diffeomorphism. We describe gauge-invariant relational observables that probe the black hole interior in these states and constrain their properties using effective field theory. By adapting recent versions of the information paradox we show that these observables are necessarily described by state-dependent bulk-boundary maps, which we construct explicitly.
Zayit-Soudry, Shiri; Duncan, Jacque L; Syed, Reema; Menghini, Moreno; Roorda, Austin J
2013-11-15
To evaluate cone spacing using adaptive optics scanning laser ophthalmoscopy (AOSLO) in eyes with nonneovascular AMD, and to correlate progression of AOSLO-derived cone measures with standard measures of macular structure. Adaptive optics scanning laser ophthalmoscopy images were obtained over 12 to 21 months from seven patients with AMD including four eyes with geographic atrophy (GA) and four eyes with drusen. Adaptive optics scanning laser ophthalmoscopy images were overlaid with color, infrared, and autofluorescence fundus photographs and spectral domain optical coherence tomography (SD-OCT) images to allow direct correlation of cone parameters with macular structure. Cone spacing was measured for each visit in selected regions including areas over drusen (n = 29), at GA margins (n = 14), and regions without drusen or GA (n = 13) and compared with normal, age-similar values. Adaptive optics scanning laser ophthalmoscopy imaging revealed continuous cone mosaics up to the GA edge and overlying drusen, although reduced cone reflectivity often resulted in hyporeflective AOSLO signals at these locations. Baseline cone spacing measures were normal in 13/13 unaffected regions, 26/28 drusen regions, and 12/14 GA margin regions. Although standard clinical measures showed progression of GA in all study eyes, cone spacing remained within normal ranges in most drusen regions and all GA margin regions. Adaptive optics scanning laser ophthalmoscopy provides adequate resolution for quantitative measurement of cone spacing at the margin of GA and over drusen in eyes with AMD. Although cone spacing was often normal at baseline and remained normal over time, these regions showed focal areas of decreased cone reflectivity. These findings may provide insight into the pathophysiology of AMD progression. (ClinicalTrials.gov number, NCT00254605).
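Cone spacing itself is a simple geometric quantity once cone centers have been identified in the AOSLO montage. A simplified stand-in for the paper's metric (published analyses typically use density-recovery profiles or Voronoi-based statistics; the nearest-neighbor average below is only an illustrative sketch):

```python
import math

def mean_cone_spacing(coords):
    """Mean nearest-neighbor distance for a list of (x, y) cone centers.

    For a perfect unit-pitch lattice this returns 1.0; disease-related
    cone loss would show up as an increase in the mean spacing."""
    spacings = []
    for i, (xi, yi) in enumerate(coords):
        d = min(math.hypot(xi - xj, yi - yj)
                for j, (xj, yj) in enumerate(coords) if j != i)
        spacings.append(d)
    return sum(spacings) / len(spacings)
```

Comparing such a statistic against normal age-similar values is what lets the study say cone spacing "remained within normal ranges" even as reflectivity dropped.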
Critical Skills of Marine Corps Infantry Small Unit Leaders
2008-11-17
Skills including leadership, assertiveness, adaptability, and time management were rated by all 6 raters with the maximum possible rating (5.00, SD 0.00; p<.05 and practically significant). Time Management Skills are defined as managing one's own time and the time of others to accomplish tasks efficiently, and keeping one's work space neat and tidy.
Nonparametric Bayesian Segmentation of a Multivariate Inhomogeneous Space-Time Poisson Process.
Ding, Mingtao; He, Lihan; Dunson, David; Carin, Lawrence
2012-12-01
A nonparametric Bayesian model is proposed for segmenting time-evolving multivariate spatial point process data. An inhomogeneous Poisson process is assumed, with a logistic stick-breaking process (LSBP) used to encourage piecewise-constant spatial Poisson intensities. The LSBP explicitly favors spatially contiguous segments, and infers the number of segments based on the observed data. The temporal dynamics of the segmentation and of the Poisson intensities are modeled with exponential correlation in time, implemented in the form of a first-order autoregressive model for uniformly sampled discrete data, and via a Gaussian process with an exponential kernel for general temporal sampling. We consider and compare two different inference techniques: a Markov chain Monte Carlo sampler, which has relatively high computational complexity; and an approximate and efficient variational Bayesian analysis. The model is demonstrated with a simulated example and a real example of space-time crime events in Cincinnati, Ohio, USA.
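The generative side of such a model — drawing events from an inhomogeneous Poisson process given an intensity function — can be sketched with the standard Lewis-Shedler thinning algorithm. This is a generic sampler, not the paper's LSBP inference code; the intensity function is whatever the caller supplies.

```python
import random

def sample_inhomogeneous_poisson(intensity, t_max, lam_max, seed=0):
    """Thinning (Lewis-Shedler) sampler for an inhomogeneous Poisson
    process on [0, t_max]: draw candidate events from a homogeneous
    process at the dominating rate lam_max, then keep each candidate at
    time t with probability intensity(t)/lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)   # next candidate arrival
        if t > t_max:
            return events
        if rng.random() < intensity(t) / lam_max:
            events.append(t)
```

A piecewise-constant `intensity`, like the segment-wise intensities the LSBP infers, plugs in directly.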
NASA Technical Reports Server (NTRS)
Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.
1978-01-01
Steady state solutions to two time-dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward-time central-space explicit method, and (2) the stream function-vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to 'set-up' time, solution by MOL is an attractive alternative to techniques with complicated algorithms, since much of the programming difficulty is eliminated.
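The first test case — a forward-time central-space (FTCS) explicit march for Burgers' equation — is simple to reproduce. A hedged sketch (the grid, viscosity, and boundary data below are illustrative, not the report's actual configuration):

```python
import numpy as np

def burgers_ftcs(nu=0.1, nx=101, dt=2e-4, t_end=0.5):
    """FTCS march for viscous Burgers' equation u_t + u*u_x = nu*u_xx
    on [0, 1] with u(0)=1, u(1)=0.  dt is chosen so the diffusion number
    nu*dt/dx**2 = 0.2 stays under the explicit limit of 0.5."""
    dx = 1.0 / (nx - 1)
    u = 1.0 - np.linspace(0.0, 1.0, nx)  # linear initial profile matching BCs
    for _ in range(int(t_end / dt)):
        un = u.copy()
        u[1:-1] = (un[1:-1]
                   - dt * un[1:-1] * (un[2:] - un[:-2]) / (2 * dx)   # u*u_x
                   + nu * dt * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx**2)  # nu*u_xx
    return u
```

Under MOL the same right-hand side would instead be handed, line by line, to an ODE integrator — which is exactly the trade the abstract weighs: less programming effort against the FTCS scheme's raw efficiency.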
The study of Thai stock market across the 2008 financial crisis
NASA Astrophysics Data System (ADS)
Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik
2016-11-01
The cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in the case of a financial market crash acts like a gravitational field in financial market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by using the degree of a cohomology group of the sphere over a tensor field in the correlation matrix over all possible dominated stocks underlying the Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time scale decomposition of the correlation matrix, and the calculation of the closeness centrality of a planar graph.
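The starting object for this kind of analysis — the correlation matrix of returns across stocks — is straightforward to compute from price histories. A minimal sketch of that first step only (the paper's cohomological, network-filtering, and centrality machinery is not reproduced here):

```python
import numpy as np

def correlation_matrix(prices):
    """Correlation matrix of log-returns for a panel of price series
    (rows = time, columns = assets) - the raw object on which network
    filters such as planar graphs and centrality measures then operate."""
    rets = np.diff(np.log(prices), axis=0)   # log-returns, one row shorter
    return np.corrcoef(rets, rowvar=False)
```

Thresholding or planar-filtering this matrix yields the graph on which, e.g., closeness centrality is computed.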
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Yusuf, Abdullahi; Isa Aliyu, Aliyu; Baleanu, Dumitru
2018-03-01
This research presents the symmetry analysis, explicit solutions, and convergence analysis for the time-fractional Cahn-Allen (CA) and time-fractional Klein-Gordon (KG) equations with the Riemann-Liouville (RL) derivative. The time-fractional CA and KG equations are reduced to nonlinear ordinary differential equations of fractional order. We solve the reduced fractional ODEs using an explicit power series method. The convergence of the obtained explicit solutions is investigated. Some figures for the obtained explicit solutions are also presented.
Human Pathophysiological Adaptations to the Space Environment
Demontis, Gian C.; Germani, Marco M.; Caiani, Enrico G.; Barravecchia, Ivana; Passino, Claudio; Angeloni, Debora
2017-01-01
Space is an extreme environment for the human body, where during long-term missions microgravity and high radiation levels represent major threats to crew health. Intriguingly, space flight (SF) imposes on the body of highly selected, well-trained, and healthy individuals (astronauts and cosmonauts) pathophysiological adaptive changes akin to an accelerated aging process and to some diseases. Such effects, becoming manifest over a time span of weeks (i.e., cardiovascular deconditioning) to months (i.e., loss of bone density and muscle atrophy) of exposure to weightlessness, can be reduced through proper countermeasures during SF and in due time are mostly reversible after landing. Based on these considerations, it is increasingly accepted that SF might provide a mechanistic insight into certain pathophysiological processes, a concept of interest to pre-nosological medicine. In this article, we will review the main stress factors encountered in space and their impact on the human body and will also discuss the possible lessons learned with space exploration in reference to human health on Earth. In fact, this is a productive, cross-fertilized, endeavor in which studies performed on Earth yield countermeasures for protection of space crew health, and space research is translated into health measures for Earth-bound population. PMID:28824446
The a(3) Scheme--A Fourth-Order Space-Time Flux-Conserving and Neutrally Stable CESE Solver
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
2008-01-01
The CESE development is driven by a belief that a solver should (i) enforce conservation laws in both space and time, and (ii) be built from a non-dissipative (i.e., neutrally stable) core scheme so that the numerical dissipation can be controlled effectively. To initiate a systematic CESE development of high order schemes, in this paper we provide a thorough discussion of the structure, consistency, stability, phase error, and accuracy of a new 4th-order space-time flux-conserving and neutrally stable CESE solver of a 1D scalar advection equation. The space-time stencil of this two-level explicit scheme is formed by one point at the upper time level and three points at the lower time level. Because it is associated with three independent mesh variables (the numerical analogues of the dependent variable and its 1st-order and 2nd-order spatial derivatives, respectively) and three equations per mesh point, the new scheme is referred to as the a(3) scheme. Through the von Neumann analysis, it is shown that the a(3) scheme is stable if and only if the Courant number is less than 0.5. Moreover, it is established numerically that the a(3) scheme is 4th-order accurate.
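The von Neumann analysis mentioned above can be illustrated on a simpler scheme. The sketch below computes the amplification factor of first-order upwind for u_t + a*u_x = 0, stable for Courant numbers c ≤ 1; the a(3) scheme itself carries a 3×3 amplification matrix (three mesh variables per point) and the sharper bound c < 0.5, which is not reproduced here:

```python
import numpy as np

def upwind_amplification(c, n=512):
    """Max |G(theta)| of the von Neumann amplification factor of first-order
    upwind for u_t + a*u_x = 0 at Courant number c = a*dt/dx:
    G(theta) = 1 - c*(1 - exp(-i*theta)); the scheme is stable iff 0 <= c <= 1."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    G = 1.0 - c * (1.0 - np.exp(-1j * theta))
    return np.abs(G).max()
```

Since |G|² = 1 − 2c(1−c)(1−cos θ), the factor stays on or inside the unit circle exactly when 0 ≤ c ≤ 1; a neutrally stable scheme, by contrast, keeps |G| = 1 identically.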
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exist a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically ''batches of one''. Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a ''generic code'' to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of controls, kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is useable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real time changes in number and type of constraints and in task objectives, and can adapt to changes in kinematics configurations (change of module, change of tool, joint failure adaptation, etc.).
New multigrid approach for three-dimensional unstructured, adaptive grids
NASA Technical Reports Server (NTRS)
Parthasarathy, Vijayan; Kallinderis, Y.
1994-01-01
A new multigrid method with adaptive unstructured grids is presented. The three-dimensional Euler equations are solved on tetrahedral grids that are adaptively refined or coarsened locally. The multigrid method is employed to propagate the fine grid corrections more rapidly by redistributing the changes-in-time of the solution from the fine grid to the coarser grids to accelerate convergence. A new approach is employed that uses the parent cells of the fine grid cells in an adapted mesh to generate successively coarser levels of multigrid. This obviates the need to generate a sequence of independent, nonoverlapping grids, as well as the relatively complicated operations needed to interpolate the solution and the residuals between the independent grids. The solver is an explicit, vertex-based, finite volume scheme that employs edge-based data structures and operations. Spatial discretization is of central-differencing type combined with special upwind-like smoothing operators. Application cases include adaptive solutions obtained with multigrid acceleration for supersonic and subsonic flow over a bump in a channel, as well as transonic flow around the ONERA M6 wing. Two levels of multigrid reduced the number of iterations by a factor of five.
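The coarse-grid-correction idea can be sketched on a 1-D Poisson model problem. This is a generic geometric V-cycle with damped-Jacobi smoothing, not the paper's parent-cell coarsening of adaptive tetrahedral grids:

```python
import numpy as np

def v_cycle(u, f, h):
    """One geometric multigrid V-cycle for -u'' = f on [0,1], u(0)=u(1)=0,
    on a uniform grid with N = len(u)-1 intervals (N a power of two).
    Damped-Jacobi smoothing, full-weighting restriction, linear interpolation."""
    def smooth(u, iters):
        for _ in range(iters):                     # damped Jacobi, weight 2/3
            unew = u.copy()
            unew[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            u = u + (2.0 / 3.0) * (unew - u)
        return u

    N = len(u) - 1
    if N == 2:                                     # one interior unknown: exact solve
        u[1] = 0.5 * h * h * f[1]
        return u
    u = smooth(u, 3)                               # pre-smoothing
    r = np.zeros_like(u)                           # residual r = f - A u
    r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
    rc = np.zeros(N // 2 + 1)                      # full-weighting restriction
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-2:2] + 0.25 * r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)   # recurse on the coarse grid
    e = np.zeros_like(u)                           # interpolate correction back
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, 3)                        # post-smoothing
```

The smoother removes the oscillatory error components; the coarse grid absorbs the smooth ones, which is why a handful of cycles suffices regardless of grid size.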
The nonlinear modified equation approach to analyzing finite difference schemes
NASA Technical Reports Server (NTRS)
Klopfer, G. H.; Mcrae, D. S.
1981-01-01
The nonlinear modified equation approach is taken in this paper to analyze the generalized Lax-Wendroff explicit scheme approximation to the unsteady one- and two-dimensional equations of gas dynamics. Three important applications of the method are demonstrated. The nonlinear modified equation analysis is used to (1) generate higher order accurate schemes, (2) obtain more accurate estimates of the discretization error for nonlinear systems of partial differential equations, and (3) generate an adaptive mesh procedure for the unsteady gas dynamic equations. Results are obtained for all three areas. For the adaptive mesh procedure, mesh point requirements for equal resolution of discontinuities were reduced by a factor of five for a 1-D shock tube problem solved by the explicit MacCormack scheme.
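The adaptive-mesh ingredient above can be sketched in miniature: points are redistributed to equidistribute a monitor function so that regions of rapid variation (shocks, fronts) attract points. The curvature-based monitor below is a generic stand-in, not the paper's modified-equation error estimate:

```python
import numpy as np

def adapt_mesh(x, u, n_new):
    """Redistribute n_new mesh points by equidistributing the monitor
    w = sqrt(1 + u_xx^2), so regions of large second derivative (fronts,
    discontinuities) receive proportionally more points."""
    uxx = np.gradient(np.gradient(u, x), x)
    w = np.sqrt(1.0 + uxx**2)
    # cumulative integral of the monitor (trapezoid rule), normalized to [0,1]
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    W /= W[-1]
    # invert: place points at equal increments of the cumulative monitor
    return np.interp(np.linspace(0.0, 1.0, n_new), W, x)
```

Equidistributing an error-derived monitor is what yields the factor-of-five reduction in point count quoted for the shock-tube problem: uniform grids pay everywhere for resolution needed only at the discontinuity.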
Adaptive Bayes classifiers for remotely sensed data
NASA Technical Reports Server (NTRS)
Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.
1975-01-01
An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.
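The estimation procedure can be caricatured by a decision-directed nearest-mean classifier whose class means are tracked by a stochastic-approximation update. The constant gain and the omission of the time/space projection step are simplifications of this sketch, not features of the paper's algorithm:

```python
import numpy as np

class AdaptiveGaussianClassifier:
    """Nearest-mean classifier for Gaussian classes (equal covariance) whose
    means drift in time.  A decision-directed stochastic-approximation update
    tracks each class mean; the constant gain trades estimation noise for
    tracking speed."""
    def __init__(self, means, gain=0.1):
        self.means = [np.asarray(m, dtype=float) for m in means]
        self.gain = gain

    def classify(self, x):
        d = [np.sum((np.asarray(x) - m) ** 2) for m in self.means]
        return int(np.argmin(d))

    def classify_and_adapt(self, x):
        k = self.classify(x)
        # move the winning mean a fraction of the way toward the new sample
        self.means[k] += self.gain * (np.asarray(x) - self.means[k])
        return k
```

A decaying gain 1/k gives the optimal stochastic approximation for a stationary mean; the constant gain used here is the usual choice when, as in the abstract, the mean varies with time or position.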
NASA Astrophysics Data System (ADS)
Re, B.; Dobrzynski, C.; Guardone, A.
2017-07-01
A novel strategy to solve the finite volume discretization of the unsteady Euler equations within the Arbitrary Lagrangian-Eulerian framework over tetrahedral adaptive grids is proposed. The volume changes due to local mesh adaptation are treated as continuous deformations of the finite volumes, and they are taken into account by adding fictitious numerical fluxes to the governing equation. This interpretation makes it possible to avoid any explicit interpolation of the solution between different grids and to compute grid velocities such that the Geometric Conservation Law is automatically fulfilled even for connectivity changes. The solution on the new grid is obtained through standard ALE techniques, thus preserving the underlying scheme properties, such as conservativeness, stability and monotonicity. The adaptation procedure includes node insertion, node deletion, edge swapping and point relocation, and it is exploited both to enhance grid quality after the boundary movement and to modify the grid spacing to increase solution accuracy. The presented approach is assessed by three-dimensional simulations of steady and unsteady flow fields. The capability of dealing with large boundary displacements is demonstrated by computing the flow around translating infinite- and finite-span NACA 0012 wings moving through the domain at flight speed. The proposed adaptive scheme is also applied to the simulation of a pitching infinite-span wing, where the two-dimensional character of the flow is well reproduced despite the three-dimensional unstructured grid. Finally, the scheme is exploited in a piston-induced shock-tube problem to take into account simultaneously the large deformation of the domain and the shock wave. In all tests, mesh adaptation plays a crucial role.
Achieving Rigorous Accelerated Conformational Sampling in Explicit Solvent.
Doshi, Urmi; Hamelberg, Donald
2014-04-03
Molecular dynamics simulations can provide valuable atomistic insights into biomolecular function. However, the accuracy of molecular simulations on general-purpose computers depends on the time scale of the events of interest. Advanced simulation methods, such as accelerated molecular dynamics, have shown tremendous promise in sampling the conformational dynamics of biomolecules, where standard molecular dynamics simulations are nonergodic. Here we present a sampling method based on accelerated molecular dynamics in which rotatable dihedral angles and nonbonded interactions are boosted separately. This method (RaMD-db) is a different implementation of the dual-boost accelerated molecular dynamics introduced earlier. The advantage is that this method speeds up sampling of the conformational space of biomolecules in explicit solvent, as the degrees of freedom most relevant for conformational transitions are accelerated. We tested RaMD-db on one of the most difficult sampling problems: protein folding. Starting from fully extended polypeptide chains, two fast-folding α-helical proteins (Trp-cage and the double mutant of the C-terminal fragment of the Villin headpiece) and a designed β-hairpin (Chignolin) were completely folded to their native structures in very short simulation time. Multiple folding/unfolding transitions could be observed in a single trajectory. Our results show that RaMD-db is a promising, fast and efficient sampling method for conformational transitions in explicit solvent. RaMD-db thus opens new avenues for understanding biomolecular self-assembly and functional dynamics occurring on long time and length scales.
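The boost potential underlying accelerated MD takes the standard Hamelberg-McCammon form ΔV = (E − V)²/(α + E − V) below a threshold energy E. The sketch below applies a single such boost term; RaMD-db's separate boosting of rotatable dihedrals and nonbonded interactions is not modeled here:

```python
import numpy as np

def amd_boost(V, E, alpha):
    """Standard accelerated-MD boost (Hamelberg et al.): below the threshold
    energy E, raise the potential by dV = (E - V)^2 / (alpha + E - V); above
    it, leave the potential untouched.  Returns the modified potential and dV,
    whose Boltzmann factor exp(beta*dV) reweights observables back to the
    unbiased ensemble."""
    V = np.asarray(V, dtype=float)
    dV = np.zeros_like(V)
    low = V < E
    dV[low] = (E - V[low]) ** 2 / (alpha + E - V[low])
    return V + dV, dV
```

Raising the basins while leaving the barrier tops untouched flattens the landscape, so transitions that would be rare on the original surface occur in short boosted trajectories.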
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
NASA Astrophysics Data System (ADS)
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
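The ADMM splitting described above can be sketched on the generic ℓ1-regularized least-squares problem. The paper's STAP weights are complex-valued and estimated from clutter data; this real-valued lasso is only an illustration of the same splitting-variable machinery:

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the splitting x = z:
    a ridge-type x-update (Cholesky factored once and reused), a
    soft-thresholding z-update, and dual ascent on the running residual u."""
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))       # factor once, reuse
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = soft(x + u, lam / rho)                          # z-update (shrinkage)
        u = u + x - z                                       # dual update
    return z
```

Because the expensive factorization is computed once, each iteration costs only triangular solves plus elementwise shrinkage, which is the source of the convergence-speed advantage the abstract claims over generic solvers.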
Adaptive single-pixel imaging with aggregated sampling and continuous differential measurements
NASA Astrophysics Data System (ADS)
Huo, Yaoran; He, Hongjie; Chen, Fan; Tai, Heng-Ming
2018-06-01
This paper proposes an adaptive compressive imaging technique with a single-pixel detector and a single arm. The aggregated sampling (AS) method enables reduction of the resolution of the reconstructed images, aiming to reduce time and space consumption. A target image with a resolution up to 1024 × 1024 can be reconstructed successfully at a 20% sampling rate. The continuous differential measurement (CDM) method, combined with a ratio factor of significant coefficient (RFSC), improves imaging quality. Moreover, RFSC reduces human intervention in parameter setting. This technique enhances the practicability of single-pixel imaging through lower time and space consumption, better imaging quality, and less human intervention.
State-space based analysis and forecasting of macroscopic road safety trends in Greece.
Antoniou, Constantinos; Yannis, George
2013-11-01
In this paper, macroscopic road safety trends in Greece are analyzed using state-space models and data for 52 years (1960-2011). Seemingly unrelated time series equations (SUTSE) models are developed first, followed by richer latent risk time-series (LRT) models. As reliable estimates of vehicle-kilometers are not available for Greece, the number of vehicles in circulation is used as a proxy to the exposure. Alternative considered models are presented and discussed, including diagnostics for the assessment of their model quality and recommendations for further enrichment of this model. Important interventions were incorporated in the models developed (1986 financial crisis, 1991 old-car exchange scheme, 1996 new road fatality definition) and found statistically significant. Furthermore, the forecasting results using data up to 2008 were compared with final actual data (2009-2011) indicating that the models perform properly, even in unusual situations, like the current strong financial crisis in Greece. Forecasting results up to 2020 are also presented and compared with the forecasts of a model that explicitly considers the currently on-going recession. Modeling the recession, and assuming that it will end by 2013, results in more reasonable estimates of risk and vehicle-kilometers for the 2020 horizon. This research demonstrates the benefits of using advanced state-space modeling techniques for modeling macroscopic road safety trends, such as allowing the explicit modeling of interventions. The challenges associated with the application of such state-of-the-art models for macroscopic phenomena, such as traffic fatalities in a region or country, are also highlighted. Furthermore, it is demonstrated that it is possible to apply such complex models using the relatively short time-series that are available in macroscopic road safety analysis.
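The state-space machinery behind SUTSE/LRT models can be illustrated with the simplest member of the family: a univariate local-level model filtered by the Kalman recursions. The paper's models are multivariate and include intervention variables, which this sketch omits:

```python
import numpy as np

def local_level_filter(y, q, r):
    """Kalman filter for the local-level state-space model
        y_t = mu_t + eps_t  (observation variance r),
        mu_{t+1} = mu_t + xi_t  (level-disturbance variance q),
    with a diffuse prior on the initial level.  Returns the filtered level
    estimates and their variances."""
    n = len(y)
    mu = np.zeros(n)
    P = np.zeros(n)
    m, p = 0.0, 1e7                     # diffuse initialization
    for t in range(n):
        p_pred = p + q                  # time update (random-walk level)
        k = p_pred / (p_pred + r)       # Kalman gain
        m = m + k * (y[t] - m)          # measurement update
        p = (1.0 - k) * p_pred
        mu[t] = m
        P[t] = p
    return mu, P
```

Interventions such as the 1996 fatality-definition change enter such models as additional regression terms in the observation or state equation, shifting the level without contaminating the disturbance variances.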
Explicit blow-up solutions to the Schrödinger maps from R² to the hyperbolic 2-space H²
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Qing
2009-10-15
In this article, we prove that the equation of the Schrödinger maps from R² to the hyperbolic 2-space H² is SU(1,1)-gauge equivalent to the following 1+2 dimensional nonlinear Schrödinger-type system of three unknown complex functions p, q, r and a real function u: iq_t + q_{zz} − 2uq + 2(pq)_z − 2pq_z − 4|p|²q = 0, ir_t − r_{zz} + 2ur + 2(pr)_z − 2pr_z + 4|p|²r = 0, ip_t + (qr)_z − u_z = 0, p_z + p_z = −|q|² + |r|², −r_z + q_z = −2(pr + pq), where z is a complex coordinate of the plane R² and z̄ is its complex conjugate. Although this nonlinear Schrödinger-type system looks complicated, it admits a class of explicit blow-up smooth solutions: p = 0, q = (e^{i b z z̄/(2(a+bt))}/(a+bt)) α z, r = (e^{−i b z z̄/(2(a+bt))}/(a+bt)) α z, u = 2α² z z̄/(a+bt)², where a and b are real numbers with ab < 0 and α satisfies α² = b²/16. From these facts, we explicitly construct smooth solutions to the Schrödinger maps from R² to the hyperbolic 2-space H² by using gauge transformations, such that the absolute values of their gradients blow up in finite time. This reveals a blow-up phenomenon of Schrödinger maps.
A new heterogeneous asynchronous explicit-implicit time integrator for nonsmooth dynamics
NASA Astrophysics Data System (ADS)
Fekak, Fatima-Ezzahra; Brun, Michael; Gravouil, Anthony; Depale, Bruno
2017-07-01
In computational structural dynamics, particularly in the presence of nonsmooth behavior, the choice of the time step and the time integrator has a critical impact on the feasibility of the simulation. Furthermore, in some cases, as for a bridge crane under seismic loading, multiple time scales coexist in the same problem, and the use of multi-time-scale methods is suitable. Here, we propose a new explicit-implicit heterogeneous asynchronous time integrator (HATI) for nonsmooth transient dynamics with frictionless unilateral contacts and impacts. Furthermore, we present a new explicit time integrator for contact/impact problems where the contact constraints are enforced using a Lagrange multiplier method. In other words, the aim of this paper is to use an explicit time integrator with a fine time scale in the contact area to reproduce high-frequency phenomena, while an implicit time integrator is adopted in the other parts to reproduce much lower-frequency phenomena and to optimize the CPU time. In a first step, the explicit time integrator is tested on a one-dimensional example and compared to Moreau-Jean's event-capturing schemes. The explicit algorithm is found to be very accurate; the scheme generally has a higher order of convergence than Moreau-Jean's schemes and also exhibits excellent energy behavior. Then, the two-time-scale explicit-implicit HATI is applied to the numerical example of a bridge crane under seismic loading. The results are validated against a fine-scale, fully explicit computation. The energy dissipated in the implicit-explicit interface is well controlled, and the computational time is lower than that of a fully explicit simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Błaszak, Maciej, E-mail: blaszakm@amu.edu.pl; Domański, Ziemowit, E-mail: ziemowit@amu.edu.pl
This paper presents an invariant quantization procedure for classical mechanics on the phase space over a flat configuration space. The passage to an operator representation of quantum mechanics in a Hilbert space over the configuration space is then derived. An explicit form of the position and momentum operators, as well as their appropriate ordering in arbitrary curvilinear coordinates, is demonstrated. Finally, the extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed. -- Highlights: •An invariant quantization procedure of classical mechanics on the phase space over a flat configuration space is presented. •The passage to an operator representation of quantum mechanics in a Hilbert space over the configuration space is derived. •An explicit form of the position and momentum operators and their appropriate ordering in curvilinear coordinates is shown. •The invariant form of Hamiltonian operators quadratic and cubic in momenta is derived. •The extension of the presented formalism to the non-flat case and the related ambiguities of the quantization process are discussed.
Mining geographic variations of Plasmodium vivax for active surveillance: a case study in China.
Shi, Benyun; Tan, Qi; Zhou, Xiao-Nong; Liu, Jiming
2015-05-27
Geographic variations of an infectious disease characterize the spatial differentiation of disease incidences caused by various impact factors, such as environmental, demographic, and socioeconomic factors. Some factors may directly determine the force of infection of the disease (explicit factors), while many other factors may indirectly affect the number of disease incidences via certain unmeasurable processes (implicit factors). In this study, the impact of heterogeneous factors on geographic variations of Plasmodium vivax incidences is systematically investigated in Tengchong, Yunnan province, China. A space-time model, which combines a P. vivax transmission model with a hidden time-dependent process, is presented, taking into consideration both explicit and implicit factors. Specifically, the transmission model is built upon relevant demographic, environmental, and biophysical factors to describe the local infections of P. vivax, while the hidden time-dependent process is assessed by several socioeconomic factors to account for the imported cases of P. vivax. To quantitatively assess the impact of heterogeneous factors on geographic variations of P. vivax infections, a Markov chain Monte Carlo (MCMC) simulation method is developed to estimate the model parameters by fitting the space-time model to the reported spatial-temporal disease incidences. Since there is no ground-truth information available, the performance of the MCMC method is first evaluated against a synthetic dataset. The results show that the model parameters can be well estimated using the proposed MCMC method. Then, the proposed model is applied to investigate the geographic variations of P. vivax incidences among all 18 towns in Tengchong, Yunnan province, China. Based on the geographic variations, the 18 towns can be further classified into five groups with similar socioeconomic causality for P. vivax incidences. Although this study focuses mainly on the transmission of P. vivax, the proposed space-time model is general and can readily be extended to investigate geographic variations of other diseases. Practically, such a computational model will offer new insights into active surveillance and strategic planning for disease surveillance and control.
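The MCMC parameter estimation can be sketched in miniature with a random-walk Metropolis sampler for the rate of Poisson-distributed case counts. The real model couples a transmission model with a hidden process over many parameters; this single-parameter toy shows only the accept/reject mechanics:

```python
import numpy as np

def metropolis_poisson_rate(counts, iters=5000, step=0.3, seed=0):
    """Random-walk Metropolis sampler for the rate lambda of Poisson counts,
    proposing in log(lambda) with a flat prior there.  Returns the chain
    (on the lambda scale) with the first half discarded as burn-in."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    s, n = counts.sum(), len(counts)
    log_post = lambda ll: s * ll - n * np.exp(ll)   # Poisson log-likelihood (+const)
    ll = np.log(counts.mean() + 1.0)                # start near the MLE
    chain = []
    for _ in range(iters):
        prop = ll + step * rng.standard_normal()
        if np.log(rng.random()) < log_post(prop) - log_post(ll):
            ll = prop                               # accept the proposal
        chain.append(np.exp(ll))
    return np.array(chain[iters // 2:])
```

The synthetic-data validation strategy of the abstract corresponds to exactly this pattern: simulate counts from known parameters, run the sampler, and check that the posterior concentrates near the truth.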
Modeling heterogeneous processor scheduling for real time systems
NASA Technical Reports Server (NTRS)
Leathrum, J. F.; Mielke, R. R.; Stoughton, J. W.
1994-01-01
A new model is presented to describe dataflow algorithms implemented in a multiprocessing system. Called the resource/data flow graph (RDFG), the model explicitly represents cyclo-static processor schedules as circuits of processor arcs which reflect the order that processors execute graph nodes. The model also allows the guarantee of meeting hard real-time deadlines. When unfolded, the model identifies statically the processor schedule. The model therefore is useful for determining the throughput and latency of systems with heterogeneous processors. The applicability of the model is demonstrated using a space surveillance algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furuuchi, Kazuyuki; Sperling, Marcus, E-mail: kazuyuki.furuuchi@manipal.edu, E-mail: marcus.sperling@univie.ac.at
2017-05-01
We study quantum tunnelling in Dante's Inferno model of large field inflation. Such a tunnelling process, which will terminate inflation, becomes problematic if the tunnelling rate is rapid compared to the Hubble time scale at the time of inflation. Consequently, we constrain the parameter space of Dante's Inferno model by demanding a suppressed tunnelling rate during inflation. The constraints are derived and explicit numerical bounds are provided for representative examples. Our considerations are at the level of an effective field theory; hence, the presented constraints have to hold regardless of any UV completion.
Mission definition study for a VLBI station utilizing the Space Shuttle
NASA Technical Reports Server (NTRS)
Burke, B. F.
1982-01-01
The uses of the Space Shuttle transportation system for orbiting Very-Long-Baseline Interferometry (OVLBI) were examined, both with respect to technical feasibility and scientific possibilities. The study consisted of a critical look at the adaptability of current technology to an orbiting environment, the suitability of current data reduction facilities for the new technique, and a review of the new science that is made possible by using the Space Shuttle as a moving platform for a VLBI terminal in space. The conclusions are positive in all respects: no technological deficiencies exist that would need remedy, the data processing problem can be handled easily by straightforward adaptations of existing systems, and there is a significant new research frontier to be explored, with the Space Shuttle providing the first step. The VLBI technique utilizes the great frequency stability of modern atomic time standards, the power of integrated circuitry to perform real-time signal conditioning, and the ability of magnetic tape recorders to provide essentially error-free data recording, all of which combine to permit the realization of radio interferometry at arbitrarily large baselines.
Fitzgerald, Matthew; Sagi, Elad; Morbiwala, Tasnim A.; Tan, Chin-Tuan; Svirsky, Mario A.
2013-01-01
Objectives Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This might be particularly important for post-lingually deafened recipients of cochlear implants (CI), who must adapt to a signal where there may be a mismatch between the frequencies of an input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools that can be used to identify whether an individual has adapted fully to a mismatch in the frequency-to-place relationship and if so, to find a frequency table that ameliorates any negative effects of an unadapted mismatch. The goal of the proposed investigation is to test the feasibility of whether real-time selection of frequency tables can be used to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatches, even at the expense of reducing the information provided by this frequency table. Design 34 normal-hearing adults listened to a noise-vocoded acoustic simulation of a cochlear implant and adjusted the frequency table in real time until they obtained a frequency table that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed us to explicitly control the degree of frequency mismatch present in the simulation. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure could be used to identify cases in which a listener has not adapted to a frequency mismatch. 
After obtaining a self-selected table, we measured CNC word-recognition scores with that self-selected table and two other frequency tables: a “frequency-matched” table that matched the analysis filters with the noisebands of the noise-vocoder simulation, and a “right information” table that is similar to that used in most cochlear implant speech processors, but in this simulation results in a frequency shift equivalent to 6.5 mm of cochlear space. Results Listeners tended to select a table that was very close to, but shifted slightly lower in frequency from the frequency-matched table. The real-time selection process took on average 2–3 minutes for each trial, and the between-trial variability was comparable to that previously observed with closely-related procedures. The word-recognition scores with the self-selected table were clearly higher than with the right-information table and slightly higher than with the frequency-matched table. Conclusions Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and to find a frequency table that is more appropriate for them. Moreover, the small but significant improvements in word-recognition ability observed with the self-selected table suggest that these listeners based their selections on intelligibility rather than some other factor. The within-subject variability in the real-time selection procedure was comparable to that of a genetic algorithm, and the speed of the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure. PMID:23807089
Reinforced dynamics for enhanced sampling in large atomic and molecular systems
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Wang, Han; E, Weinan
2018-03-01
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
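The adaptive-bias ingredient shared by metadynamics and the method above can be sketched in one dimension: a Langevin walker in a double well deposits small repulsive Gaussians at visited positions, which drives barrier crossing. All parameters below are illustrative, and reinforced dynamics replaces this explicit Gaussian sum with a neural-network bias trained on the fly:

```python
import numpy as np

def biased_walk(n_steps=20000, deposit=True, seed=4):
    """Overdamped Langevin dynamics in the double well V(x) = (x^2 - 1)^2 with
    a metadynamics-style adaptive bias: small repulsive Gaussians are deposited
    at visited positions so the walker is pushed out of basins it has already
    explored."""
    rng = np.random.default_rng(seed)
    w, sig, dt, beta = 0.1, 0.2, 1e-3, 8.0          # illustrative parameters
    centers = []

    def bias_force(x):          # -d/dx of the sum of deposited Gaussians
        if not centers:
            return 0.0
        c = np.asarray(centers)
        return np.sum(w * (x - c) / sig**2 * np.exp(-(x - c) ** 2 / (2 * sig**2)))

    x = -1.0                                         # start in the left well
    traj = np.empty(n_steps)
    for i in range(n_steps):
        f = -4.0 * x * (x * x - 1.0) + bias_force(x)   # physical + bias force
        x += dt * f + np.sqrt(2.0 * dt / beta) * rng.standard_normal()
        if deposit and i % 50 == 0:
            centers.append(x)                        # grow the adaptive bias
        traj[i] = x
    return traj
```

At beta = 8 the unit-height barrier makes unbiased crossings rare on this time scale; the accumulated bias fills the starting basin and pushes the walker over. Parameterizing the bias with a network instead of a Gaussian sum is what lets the full method scale to 20 collective variables.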
Decentralized Adaptive Neural Output-Feedback DSC for Switched Large-Scale Nonlinear Systems.
Lijun Long; Jun Zhao
2017-04-01
In this paper, for a class of switched large-scale uncertain nonlinear systems with unknown control coefficients and unmeasurable states, a switched-dynamic-surface-based decentralized adaptive neural output-feedback control approach is developed. The proposed approach extends the classical dynamic surface control (DSC) technique from the nonswitched to the switched setting by designing switched first-order filters, which overcomes the problem of multiple "explosions of complexity." Also, a dual common coordinate transformation of all subsystems is exploited to avoid the individual coordinate transformations for subsystems that are required when applying the backstepping recursive design scheme. Nussbaum-type functions are utilized to handle the unknown control coefficients, and a switched neural network observer is constructed to estimate the unmeasurable states. By combining the average dwell time method with backstepping and the DSC technique, decentralized adaptive neural controllers for the subsystems are explicitly designed. It is proved that the proposed approach guarantees semiglobal uniform ultimate boundedness of all the signals in the closed-loop system under a class of switching signals with average dwell time, and convergence of the tracking errors to a small neighborhood of the origin. A two-inverted-pendulum system is provided to demonstrate the effectiveness of the proposed method.
He, Pingan; Jagannathan, S
2007-04-01
A novel adaptive-critic-based neural network (NN) controller in discrete time is designed to deliver a desired tracking performance for a class of nonlinear systems in the presence of actuator constraints. The constraints of the actuator are treated in the controller design as the saturation nonlinearity. The adaptive critic NN controller architecture based on state feedback includes two NNs: the critic NN is used to approximate the "strategic" utility function, whereas the action NN is employed to minimize both the strategic utility function and the unknown nonlinear dynamic estimation errors. The critic and action NN weight updates are derived by minimizing certain quadratic performance indexes. Using the Lyapunov approach and with novel weight updates, the uniformly ultimate boundedness of the closed-loop tracking error and weight estimates is shown in the presence of NN approximation errors and bounded unknown disturbances. The proposed NN controller works in the presence of multiple nonlinearities, unlike other schemes that normally approximate one nonlinearity. Moreover, the adaptive critic NN controller does not require an explicit offline training phase, and the NN weights can be initialized at zero or random. Simulation results justify the theoretical analysis.
Adaptive mesh refinement and load balancing based on multi-level block-structured Cartesian mesh
NASA Astrophysics Data System (ADS)
Misaka, Takashi; Sasaki, Daisuke; Obayashi, Shigeru
2017-11-01
We developed a framework for a distributed-memory parallel computer that enables dynamic data management for adaptive mesh refinement and load balancing. We employed the simple data structure of the building cube method (BCM), in which a computational domain is divided into multi-level cubic domains and each cube has the same number of grid points inside, realising a multi-level block-structured Cartesian mesh. Solution-adaptive mesh refinement, which works efficiently with the help of dynamic load balancing, was implemented by dividing cubes based on mesh refinement criteria. The framework was investigated with the Laplace equation in terms of adaptive mesh refinement, load balancing and parallel efficiency. It was then applied to the incompressible Navier-Stokes equations to simulate a turbulent flow around a sphere. We considered wall-adaptive cube refinement, where the non-dimensional wall distance y+ near the sphere is used as the criterion for mesh refinement. The results showed that the load imbalance due to y+-adaptive mesh refinement was corrected by the present approach. To utilise the BCM framework more effectively, we also tested cube-wise algorithm switching, where explicit and implicit time integration schemes are switched depending on the local Courant-Friedrichs-Lewy (CFL) condition in each cube.
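The interaction between cube refinement and load balancing can be illustrated with a toy model. The bookkeeping (cubes as level/cost pairs) and the greedy assignment are illustrative assumptions, not the BCM implementation.

```python
# Toy sketch: cubes are (level, cost) pairs; refining a cube replaces it with
# 2**dim children at the next level, each holding the same number of grid
# points, so refinement immediately unbalances per-rank work.
def refine(cubes, needs_refinement, dim=3):
    out = []
    for level, cost in cubes:
        if needs_refinement(level):
            out.extend([(level + 1, cost)] * 2**dim)
        else:
            out.append((level, cost))
    return out

# Greedy load balancing: hand each cube (heaviest first) to the rank with the
# least accumulated work.
def balance(cubes, nranks):
    loads = [0.0] * nranks
    for _, cost in sorted(cubes, key=lambda c: -c[1]):
        loads[loads.index(min(loads))] += cost
    return loads

cubes = refine([(0, 1.0)] * 8, needs_refinement=lambda lv: lv < 1)
loads = balance(cubes, nranks=4)
```

Refining all eight level-0 cubes yields 64 equal-cost cubes, which the greedy pass distributes evenly over the four ranks; with a y+-based criterion the refined cubes would cluster near the wall, and the same redistribution step corrects the resulting imbalance.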
Adaptive restoration of river terrace vegetation through iterative experiments
Dela Cruz, Michelle P.; Beauchamp, Vanessa B.; Shafroth, Patrick B.; Decker, Cheryl E.; O’Neil, Aviva
2014-01-01
Restoration projects can involve a high degree of uncertainty and risk, which can ultimately result in failure. An adaptive restoration approach can reduce uncertainty through controlled, replicated experiments designed to test specific hypotheses and alternative management approaches. Key components of adaptive restoration include willingness of project managers to accept the risk inherent in experimentation, interest of researchers, availability of funding for experimentation and monitoring, and ability to restore sites as iterative experiments where results from early efforts can inform the design of later phases. This paper highlights an ongoing adaptive restoration project at Zion National Park (ZNP), aimed at reducing the cover of exotic annual Bromus on riparian terraces, and revegetating these areas with native plant species. Rather than using a trial-and-error approach, ZNP staff partnered with academic, government, and private-sector collaborators to conduct small-scale experiments to explicitly address uncertainties concerning biomass removal of annual bromes, herbicide application rates and timing, and effective seeding methods for native species. Adaptive restoration has succeeded at ZNP because managers accept the risk inherent in experimentation and ZNP personnel are committed to continue these projects over a several-year period. Techniques that result in exotic annual Bromus removal and restoration of native plant species at ZNP can be used as a starting point for adaptive restoration projects elsewhere in the region.
NASA Technical Reports Server (NTRS)
Kaufman, Howard
1998-01-01
Many papers relevant to reconfigurable flight control have appeared over the past fifteen years. In general these have consisted of theoretical issues, simulation experiments, and in some cases, actual flight tests. Results indicate that reconfiguration of flight controls is certainly feasible for a wide class of failures. However, many of the proposed procedures, although quite attractive, need further analytical and experimental studies for meaningful validation. Many procedures assume the availability of failure detection and identification logic that will supply, sufficiently quickly, the dynamics corresponding to the failed aircraft. This in general implies that the failure detection and fault identification logic must have access to all possible anticipated faults and the corresponding dynamical equations of motion. Unless some sort of explicit on-line parameter identification is included, the computational demands could be excessive. This suggests the need for some form of adaptive control, either by itself as the prime procedure for control reconfiguration or in conjunction with the failure detection logic. If explicit or indirect adaptive control is used, then it is important that the identified models be such that the corresponding computed controls deliver adequate performance to the actual aircraft. Unknown changes in trim should be modelled, and parameter identification needs to be adequately insensitive to noise and at the same time capable of tracking abrupt changes. If, however, both failure detection and system parameter identification turn out to be too time consuming in an emergency situation, then the concepts of direct adaptive control should be considered. If direct model reference adaptive control is to be used (on a linear model) with stability assurances, then a positive real or passivity condition needs to be satisfied for all possible configurations. This condition is often satisfied with a feedforward compensator around the plant.
This compensator must be robustly designed such that the compensated plant satisfies the required positive real conditions over all expected parameter values. Furthermore, with the feedforward only around the plant, a nonzero (but bounded error) will exist in steady state between the plant and model outputs. This error can be removed by placing the compensator also in the reference model. Design of such a compensator should not be too difficult a problem since for flight control it is generally possible to feedback all the system states.
Adaptive time steps in trajectory surface hopping simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spörkel, Lasse, E-mail: spoerkel@kofo.mpg.de; Thiel, Walter, E-mail: thiel@kofo.mpg.de
2016-05-21
Trajectory surface hopping (TSH) simulations are often performed in combination with active-space multi-reference configuration interaction (MRCI) treatments. Technical problems may arise in such simulations if active and inactive orbitals strongly mix and switch in some particular regions. We propose to use adaptive time steps when such regions are encountered in TSH simulations. For this purpose, we present a computational protocol that is easy to implement and increases the computational effort only in the critical regions. We test this procedure through TSH simulations of a GFP chromophore model (OHBI) and a light-driven rotary molecular motor (F-NAIBP) on semiempirical MRCI potential energy surfaces, by comparing the results from simulations with adaptive time steps to analogous ones with constant time steps. For both test molecules, the number of successful trajectories without technical failures rises significantly, from 53% to 95% for OHBI and from 25% to 96% for F-NAIBP. The computed excited-state lifetime remains essentially the same for OHBI and increases somewhat for F-NAIBP, and there is almost no change in the computed quantum efficiency for internal rotation in F-NAIBP. We recommend the general use of adaptive time steps in TSH simulations with active-space CI methods because this will help to avoid technical problems, increase the overall efficiency and robustness of the simulations, and allow for a more complete sampling.
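The control flow of such a protocol can be sketched generically: take steps with the default time step, and when a diagnostic signals a critical region, reject the step, shrink the step size, and restore the default once the region is passed. The toy "orbital overlap" function, the threshold, and the shrink factor below are illustrative assumptions, not the authors' protocol.

```python
def propagate(t, dt):
    # Toy stand-in for one nuclear-dynamics step: returns the new time and an
    # "orbital overlap" diagnostic that drops inside a critical mixing region
    # (placed around t = 1 purely for illustration).
    overlap = 0.2 if 0.95 < t + dt < 1.05 else 0.99
    return t + dt, overlap

def run(t_end, dt0, threshold=0.8, shrink=4):
    t, dt, steps, reductions = 0.0, dt0, 0, 0
    while t < t_end:
        t_new, overlap = propagate(t, dt)
        if overlap < threshold and dt > dt0 / shrink**2:
            dt /= shrink                   # critical region: retry, smaller step
            reductions += 1
            continue
        t, steps = t_new, steps + 1
        if overlap >= threshold:
            dt = min(dt0, dt * shrink)     # restore the default step once clear
    return steps, reductions

steps, reductions = run(2.0, 0.1)
```

The floor `dt0 / shrink**2` guarantees termination: once the step cannot shrink further, the step is accepted, so extra work is spent only inside the critical region.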
NASA Astrophysics Data System (ADS)
Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.
2012-12-01
Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (i.e. no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it somewhat faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving the precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validation of measurements. Finally, the closed-form explicit solution allows for direct calculation of the propagation of uncertainties in measurement errors and parameter estimates, providing insight about error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes.
The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions supported by examples with data for illustration and validation.
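The inputs to both explicit relations are the amplitude decay and phase delay of the diurnal harmonic at two depths, and extracting them from paired temperature records is a short exercise. The synthetic series below are illustrative; the Peclet-number and diffusivity formulas themselves are the paper's and are not reproduced here.

```python
import numpy as np

def diurnal_harmonic(temps, dt_hours):
    # Amplitude and phase of the one-cycle-per-day component via the DFT.
    n = len(temps)
    freqs = np.fft.rfftfreq(n, d=dt_hours)       # cycles per hour
    spec = np.fft.rfft(temps - temps.mean())
    k = np.argmin(np.abs(freqs - 1.0 / 24.0))    # bin nearest the diurnal frequency
    return 2 * np.abs(spec[k]) / n, np.angle(spec[k])

# Synthetic two-depth records: the deeper signal is damped and delayed, as in
# the advection-diffusion solution (values are illustrative).
t = np.arange(0, 24 * 10, 0.25)                  # hours: 10 days at 15-min sampling
shallow = 10 + 4.0 * np.sin(2 * np.pi * t / 24)
deep = 10 + 1.5 * np.sin(2 * np.pi * (t - 3.0) / 24)   # 3 h lag

A1, p1 = diurnal_harmonic(shallow, 0.25)
A2, p2 = diurnal_harmonic(deep, 0.25)
amp_ratio = A2 / A1                                     # amplitude decay input
phase_lag = (p1 - p2) % (2 * np.pi) * 24 / (2 * np.pi)  # phase delay in hours
```

Because the record spans a whole number of diurnal cycles, the harmonic falls exactly on a DFT bin and the recovered ratio (0.375) and lag (3 h) are exact to rounding; with field data, windowing or harmonic regression would be used instead.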
Functional near-infrared spectroscopy for adaptive human-computer interfaces
NASA Astrophysics Data System (ADS)
Yuksel, Beste F.; Peck, Evan M.; Afergan, Daniel; Hincks, Samuel W.; Shibata, Tomoki; Kainerstorfer, Jana; Tgavalekos, Kristen; Sassaroli, Angelo; Fantini, Sergio; Jacob, Robert J. K.
2015-03-01
We present a brain-computer interface (BCI) that detects, analyzes and responds to user cognitive state in real-time using machine learning classifications of functional near-infrared spectroscopy (fNIRS) data. Our work is aimed at increasing the narrow communication bandwidth between the human and computer by implicitly measuring users' cognitive state without any additional effort on the part of the user. Traditionally, BCIs have been designed to explicitly send signals as the primary input. However, such systems are usually designed for people with severe motor disabilities and are too slow and inaccurate for the general population. In this paper, we demonstrate with previous work [1] that a BCI that implicitly measures cognitive workload can improve user performance and awareness compared to a control condition by adapting to user cognitive state in real-time. We also discuss some of the other applications we have used in this field to measure and respond to cognitive states such as cognitive workload, multitasking, and user preference.
ERIC Educational Resources Information Center
Mongeon, David; Blanchet, Pierre; Messier, Julie
2013-01-01
The capacity to learn new visuomotor associations is fundamental to adaptive motor behavior. Evidence suggests visuomotor learning deficits in Parkinson's disease (PD). However, the exact nature of these deficits and the ability of dopamine medication to improve them are under-explored. Previous studies suggested that learning driven by large and…
ERIC Educational Resources Information Center
Komlenov, Zivana; Budimac, Zoran; Ivanovic, Mirjana
2010-01-01
In order to improve the learning process for students with different pre-knowledge, personal characteristics and preferred learning styles, a certain degree of adaptability must be introduced to online courses. In learning environments that support such kind of functionalities students can explicitly choose different paths through course contents…
Laura Phillips-Mao; Susan M. Galatowitsch; Stephanie A. Snyder; Robert G. Haight
2016-01-01
Incorporating climate change into conservation decision-making at site and population scales is challenging due to uncertainties associated with localized climate change impacts and population responses to multiple interacting impacts and adaptation strategies. We explore the use of spatially explicit population models to facilitate scenario analysis, a conservation...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Yidong; Liu, Xiaodong; Luo, Hong
2015-06-01
Here, a space and time third-order discontinuous Galerkin method based on a Hermite weighted essentially non-oscillatory reconstruction is presented for the unsteady compressible Euler and Navier–Stokes equations. At each time step, a lower-upper symmetric Gauss–Seidel preconditioned generalized minimal residual solver is used to solve the systems of linear equations arising from an explicit first stage, single diagonal coefficient, diagonally implicit Runge–Kutta time integration scheme. The performance of the developed method is assessed through a variety of unsteady flow problems. Numerical results indicate that this method is able to deliver the designed third-order accuracy of convergence in both space and time, while requiring remarkably less storage than the standard third-order discontinuous Galerkin methods, and less computing time than the lower-order discontinuous Galerkin methods to achieve the same level of temporal accuracy for computing unsteady flow problems.
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs–10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimations of the current state and a reduction of the convergence time. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
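The two embedding choices named in the abstract have standard estimators. Below is a sketch of the time-delay selection via the minimum of the auto-mutual-information curve on a toy signal, using a simple histogram MI estimator; the false-nearest-neighbors step for the embedding dimension is omitted for brevity, and all details are illustrative rather than the VIO implementation.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    # Histogram estimate of I(X;Y) in nats.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

def embedding_delay(signal, max_lag):
    # Delay at the minimum of the auto-mutual-information curve; for this
    # smooth toy signal the minimum over the scanned lags coincides with the
    # usual "first minimum" criterion.
    mi = [mutual_information(signal[:-k], signal[k:]) for k in range(1, max_lag + 1)]
    return int(np.argmin(mi)) + 1

t = np.arange(0, 200, 0.05)
x = np.sin(t)                    # toy measurement; period ~126 samples
tau = embedding_delay(x, max_lag=60)
```

For a sinusoid the mutual information is minimized near a quarter period (about 31 samples here), which is the classic rule-of-thumb delay for building time-delayed coordinates.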
Adaptive temporal refinement in injection molding
NASA Astrophysics Data System (ADS)
Karyofylli, Violeta; Schmitz, Mauritius; Hopmann, Christian; Behr, Marek
2018-05-01
Mold filling is an injection molding stage of great significance, because many defects of the plastic components (e.g. weld lines, burrs or insufficient filling) can occur during this process step. Therefore, it plays an important role in determining the quality of the produced parts. Our goal is the temporal refinement in the vicinity of the evolving melt front, in the context of 4D simplex-type space-time grids [1, 2]. This novel discretization method has an inherent flexibility to employ completely unstructured meshes with varying levels of resolution both in spatial dimensions and in the time dimension, thus allowing the use of local time-stepping during the simulations. This can lead to a higher simulation precision, while preserving calculation efficiency. A 3D benchmark case, which concerns the filling of a plate-shaped geometry, is used for verifying our numerical approach [3]. The simulation results obtained with the fully unstructured space-time discretization are compared to those obtained with the standard space-time method and to Moldflow simulation results. This example also serves for providing reliable timing measurements and the efficiency aspects of the filling simulation of complex 3D molds while applying adaptive temporal refinement.
NASA Technical Reports Server (NTRS)
Warren, L. E.; Mulavara, A. P.; Peters, B. T.; Cohen, H. S.; Richards, J. T.; Miller, C. A.; Brady, R.; Ruttley, T. M.; Bloomberg, J. J.
2006-01-01
Space flight induces adaptive modification in sensorimotor function, allowing crewmembers to operate in the unique microgravity environment. This adaptive state, however, is inappropriate for a terrestrial environment. During a re-adaptation period upon their return to Earth, crewmembers experience alterations in sensorimotor function, causing various disturbances in perception, spatial orientation, posture, gait, and eye-head coordination. Following long duration space flight, sensorimotor dysfunction could prevent an emergency egress from the vehicle, or extend the time required to make one, compromising crew safety and mission objectives. We are investigating two types of motor learning that may interact with each other and influence a crewmember's ability to re-adapt to Earth's gravity environment. In strategic learning, crewmembers make rapid modifications in their motor control strategy emphasizing error reduction. This type of learning may be critical during the first minutes and hours after landing. In adaptive learning, long-term plastic transformations occur, involving morphological changes and synaptic modification. In recent literature these two behavioral components have been associated with separate brain structures that control the execution of motor strategies: the strategic component was linked to the posterior parietal cortex and the adaptive component was linked to the cerebellum (Pisella, et al. 2004). The goal of this paper was to demonstrate the relative contributions of the strategic and adaptive components to the re-adaptation process in locomotor control after long duration space flight missions on the International Space Station (ISS). The Functional Mobility Test (FMT) was developed to assess crewmembers' ability to ambulate postflight from an operational and functional perspective. Sixteen crewmembers were tested preflight (3 sessions) and postflight (days 1, 2, 4, 7, 25) following a long duration space flight (approx. 6 months) on the ISS.
We have further analyzed the FMT data to characterize strategic and adaptive components during the postflight re-adaptation period. Crewmembers walked at a preferred pace through an obstacle course set up on a base of 10 cm thick medium density foam (Sunmate Foam, Dynamic Systems, Inc., Leicester, NC). The 6.0 m x 4.0 m course consisted of several pylons made of foam; a Styrofoam barrier 46.0 cm high that crewmembers stepped over; and a portal constructed of two Styrofoam blocks, each 31 cm high, with a horizontal bar covered by foam and suspended from the ceiling, which was adjusted to the height of the crewmember's shoulder. The portal required crewmembers to bend at the waist and step over a barrier simultaneously. All obstacles were lightweight, soft and easily knocked over. Crewmembers were instructed to walk through the course as quickly and as safely as possible without touching any of the objects on the course. This task was performed three times in the clockwise direction and three times in the counterclockwise direction, in a randomly chosen order. The dependent measures for each trial were the time to complete the course (seconds) and the number of obstacles touched or knocked down. For each crewmember, the times to complete each FMT trial from postflight days 1, 2, 4, 7 and 25 were further analyzed. A single logarithmic curve using a least squares calculation was fit through these data to produce a single comprehensive curve (macro). This macro curve, composed of data spanning 25 days, illustrates the re-adaptive learning function over the longer time scale. Additionally, logarithmic curves were fit to the 6 data trials within each individual postflight test day to produce 5 separate daily curves. These micro curves, produced from data obtained over the course of minutes, illustrate the strategic learning function exhibited over a relatively shorter time scale. The macro curve for all subjects exhibited adaptive motor learning patterns over the 25 day period.
However, 9 of 16 crewmembers exhibited significant strategic motor learning patterns in their micro curves, as defined by m > 1 in the equation of the line y = m*ln(x) + b. These data indicate that postflight recovery in locomotor function involves both strategic and adaptive mechanisms. Future countermeasures will be designed to enhance both recovery processes.
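The macro and micro curve fitting described above reduces to a degree-1 least-squares fit against the logarithm of the trial index. A minimal sketch with synthetic course times (the data and coefficients are illustrative, not the crew data, and sign conventions for the criterion on m depend on how improvement is coded):

```python
import numpy as np

# Synthetic course-completion times that improve logarithmically over trials.
trial = np.arange(1, 7)               # six trials within one test day
time_s = 12.0 - 2.5 * np.log(trial)   # seconds, noiseless for illustration

# Least-squares fit of y = m*ln(x) + b: linear regression on ln(trial).
m, b = np.polyfit(np.log(trial), time_s, deg=1)
```

On noiseless data the fit recovers the generating slope and intercept exactly, so the same two-line recipe serves for both the five daily micro curves and the 25-day macro curve.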
Ratliff, Kristin R; Newcombe, Nora S
2008-03-01
Being able to reorient to the spatial environment after disorientation is a basic adaptive challenge. There is clear evidence that reorientation uses geometric information about the shape of the surrounding space. However, there has been controversy concerning whether use of geometry is a modular function, and whether use of features is dependent on human language. A key argument for the role of language comes from shadowing findings where adults engaged in a linguistic task during reorientation ignored a colored wall feature and only used geometric information to reorient [Hermer-Vazquez, L., Spelke, E., & Katsnelson, A. (1999). Sources of flexibility in human cognition: Dual task studies of space and language. Cognitive Psychology, 39, 3-36]. We report three studies showing: (a) that the results of Hermer-Vazquez et al. (1999) are obtained in incidental learning but not with explicit instructions, (b) that a spatial task impedes use of features at least as much as a verbal shadowing task, and (c) that neither secondary task impedes use of features in a room larger than that used by Hermer-Vazquez et al. These results suggest that language is not necessary for successful use of features in reorientation. In fact, whether or not there is an encapsulated geometric module is currently unsettled. The current findings support an alternative to modularity; the adaptive combination view hypothesizes that geometric and featural information are utilized in varying degrees, dependent upon the certainty and variance with which the two kinds of information are encoded, along with their salience and perceived usefulness.
Combining Model-driven and Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Denney, Ewen; Whittle, John
2004-01-01
We describe ongoing work which aims to extend the schema-based program synthesis paradigm with explicit models. In this context, schemas can be considered as model-to-model transformations. The combination of schemas with explicit models offers a number of advantages, namely, that building synthesis systems becomes much easier since the models can be used in verification and in adaptation of the synthesis systems. We illustrate our approach using an example from signal processing.
Adaptive time-stepping Monte Carlo integration of Coulomb collisions
NASA Astrophysics Data System (ADS)
Särkimäki, K.; Hirvijoki, E.; Terävä, J.
2018-01-01
We report an accessible and robust tool for evaluating the effects of Coulomb collisions on a test particle in a plasma that obeys Maxwell-Jüttner statistics. The implementation is based on the Beliaev-Budker collision integral, which allows both the test particle and the background plasma to be relativistic. The integration method supports adaptive time stepping, which is shown to greatly improve the computational efficiency. The Monte Carlo method is implemented for both the three-dimensional particle momentum space and the five-dimensional guiding center phase space. A detailed description is provided for both the physics and the implementation of the operator. The focus is on adaptive integration of stochastic differential equations, which is an overlooked aspect among existing Monte Carlo implementations of Coulomb collision operators. We verify that our operator converges to known analytical results and demonstrate that careless implementation of the adaptive time step can lead to severely erroneous results. The operator is provided as a self-contained Fortran 95 module and can be included into existing orbit-following tools that trace either the full Larmor motion or the guiding center dynamics. The adaptive time-stepping algorithm is expected to be useful in situations where the collision frequencies vary greatly over the course of a simulation. Examples include the slowing-down of fusion products or other fast ions, and the Dreicer generation of runaway electrons, as well as the generation of fast ions or electrons with ion or electron cyclotron resonance heating.
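The warning about careless adaptive stepping can be made concrete: when a stochastic step is rejected, the already-drawn Wiener increment should be subdivided with a Brownian bridge rather than discarded and redrawn, because the rejection event is correlated with the noise. Below is a minimal step-doubling sketch on an Ornstein-Uhlenbeck test problem; it is illustrative (toy tolerances, Euler-Maruyama steps), not the authors' Fortran operator.

```python
import numpy as np

rng = np.random.default_rng(1)
a, s = 1.0, 0.5  # Ornstein-Uhlenbeck test problem: dX = -a*X dt + s dW

def em_step(x, dt, dW):
    # One Euler-Maruyama step.
    return x - a * x * dt + s * dW

def adaptive_path(x0, t_end, dt0, tol=1e-3):
    # Step-doubling error control. Pending (dt, dW) intervals live on a stack;
    # splitting an interval subdivides its already-drawn Wiener increment with
    # a Brownian bridge, so drawn noise is never discarded or redrawn.
    x, t = x0, 0.0
    first = min(dt0, t_end)
    pending = [(first, rng.normal(0.0, np.sqrt(first)))]
    while pending:
        dt, dW = pending.pop()
        mid = dW / 2 + rng.normal(0.0, np.sqrt(dt / 4))  # bridge midpoint
        coarse = em_step(x, dt, dW)
        fine = em_step(em_step(x, dt / 2, mid), dt / 2, dW - mid)
        if abs(coarse - fine) > tol and dt > 1e-6:
            pending.append((dt / 2, dW - mid))   # second half, kept for later
            pending.append((dt / 2, mid))        # first half, retried next
            continue
        x, t = fine, t + dt
        if not pending and t < t_end - 1e-12:
            ndt = min(dt0, t_end - t)
            pending.append((ndt, rng.normal(0.0, np.sqrt(ndt))))
    return x

xs = [adaptive_path(1.0, 1.0, 0.1) for _ in range(400)]
```

The sample mean of X(1) should approach the analytical value exp(-1) for this process; swapping the bridge for a fresh draw after each rejection is exactly the kind of careless implementation that skews such statistics.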
Vrancken, Bram; Suchard, Marc A; Lemey, Philippe
2017-07-01
Analyses of virus evolution in known transmission chains have the potential to elucidate the impact of transmission dynamics on the viral evolutionary rate and its difference within and between hosts. Lin et al. (2015, Journal of Virology, 89/7: 3512-22) recently investigated the evolutionary history of hepatitis B virus in a transmission chain and postulated that the 'colonization-adaptation-transmission' model can explain the differential impact of transmission on synonymous and non-synonymous substitution rates. Here, we revisit this dataset using a full probabilistic Bayesian phylogenetic framework that adequately accounts for the non-independence of sequence data when estimating evolutionary parameters. Examination of the transmission chain data under a flexible coalescent prior reveals a general inconsistency between the estimated timings and clustering patterns and the known transmission history, highlighting the need to incorporate host transmission information in the analysis. Using an explicit genealogical transmission chain model, we find strong support for a transmission-associated decrease of the overall evolutionary rate. However, in contrast to the initially reported larger transmission effect on non-synonymous substitution rate, we find a similar decrease in both non-synonymous and synonymous substitution rates that cannot be adequately explained by the colonization-adaptation-transmission model. An alternative explanation may involve a transmission/establishment advantage of hepatitis B virus variants that have accumulated fewer within-host substitutions, perhaps by spending more time in the covalently closed circular DNA state between each round of viral replication. More generally, this study illustrates that ignoring phylogenetic relationships can lead to misleading evolutionary estimates.
Ensemble Data Assimilation Without Ensembles: Methodology and Application to Ocean Data Assimilation
NASA Technical Reports Server (NTRS)
Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume
2013-01-01
Two methods to estimate background error covariances for data assimilation are introduced. While both share properties with the ensemble Kalman filter (EnKF), they differ from it in that they do not require the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The first method is referred to as SAFE (Space Adaptive Forecast error Estimation) because it estimates error covariances from the spatial distribution of model variables within a single state vector. It can thus be thought of as sampling an ensemble in space. The second method, named FAST (Flow Adaptive error Statistics from a Time series), constructs an ensemble sampled from a moving window along a model trajectory. The underlying assumption in these methods is that forecast errors in data assimilation are primarily phase errors in space and/or time.
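The FAST idea, sampling an ensemble from a moving window along a single trajectory, can be sketched in a few lines. A toy advecting wave stands in for the ocean model; the window length and state size are illustrative assumptions.

```python
import numpy as np

# Toy single-trajectory "model": a wave advecting one grid cell per step, so
# lagged states differ mainly by phase, matching the methods' core assumption.
nx, nt = 64, 40
x = np.arange(nx)
trajectory = np.stack([np.sin(2 * np.pi * (x - ct) / nx) for ct in range(nt)])

# FAST-style ensemble: the last few states along the trajectory act as
# ensemble members; their anomalies yield a background error covariance.
window = trajectory[-10:]
anoms = window - window.mean(axis=0)
B = anoms.T @ anoms / (len(window) - 1)
```

A SAFE-style variant would instead build the sample from spatially shifted copies of a single state vector; either way, no additional model integrations are needed, which is the methods' selling point over the EnKF.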
Negative specific heat of black-holes from fluid-gravity correspondence
NASA Astrophysics Data System (ADS)
Bhattacharya, Swastik; Shankaranarayanan, S.
2017-04-01
Black holes in asymptotically flat space-times have negative specific heat—they get hotter as they lose energy. A clear statistical mechanical understanding of this has remained a challenge. In this work, we address this issue using the fluid-gravity correspondence, which aims to associate fluid degrees of freedom with the horizon. Using linear response theory and the teleological nature of the event horizon, we show explicitly that the fluctuations of the horizon-fluid lead to negative specific heat for a Schwarzschild black hole. We also point out how the specific heat can be positive for Kerr-Newman or AdS black holes. Our approach constitutes an important advance as it allows us to apply the canonical ensemble approach to study the thermodynamics of asymptotically flat black hole space-times.
Territory surveillance and prey management: Wolves keep track of space and time.
Schlägel, Ulrike E; Merrill, Evelyn H; Lewis, Mark A
2017-10-01
Identifying behavioral mechanisms that underlie observed movement patterns is difficult when animals employ sophisticated cognitive-based strategies. Such strategies may arise when timing of return visits is important, for instance to allow for resource renewal or territorial patrolling. We fitted spatially explicit random-walk models to GPS movement data of six wolves (Canis lupus Linnaeus, 1758) from Alberta, Canada, to investigate the importance of the following: (1) territorial surveillance likely related to renewal of scent marks along territorial edges, to reduce intraspecific risk among packs, and (2) delay in return to recently hunted areas, which may be related to anti-predator responses of prey under varying prey densities. The movement models incorporated the spatiotemporal variable "time since last visit," which acts as a wolf's memory index of its travel history and is integrated into the movement decision along with its position in relation to territory boundaries and information on local prey densities. We used a model selection framework to test hypotheses about the combined importance of these variables in wolf movement strategies. Time-dependent movement for territory surveillance was supported by all wolf movement tracks. Wolves generally avoided territory edges, but this avoidance was reduced as time since last visit increased. Time-dependent prey management was weak except in one wolf. This wolf selected locations with longer time since last visit and lower prey density, which led to a longer delay in revisiting high prey density sites. Our study shows that we can use spatially explicit random walks to identify behavioral strategies that merge environmental information and explicit spatiotemporal information on past movements (i.e., "when" and "where") to make movement decisions. The approach allows us to better understand cognition-based movement in relation to dynamic environments and resources.
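The memory-dependent movement rule can be sketched as a grid random walk in which step weights combine "time since last visit" with edge avoidance (the grid size and the two selection coefficients below are hypothetical, not fitted values from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20                           # hypothetical square territory, n x n cells
tsv = np.full((n, n), 100.0)     # "time since last visit": the memory index
pos = (n // 2, n // 2)
beta_tsv, beta_edge = 0.05, 0.5  # illustrative selection coefficients

def step(pos):
    """Choose the next cell: weight neighbours by time since last visit
    (favouring overdue cells) and by distance from the territory edge
    (edge avoidance)."""
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    cand = [(min(max(pos[0] + di, 0), n - 1),
             min(max(pos[1] + dj, 0), n - 1)) for di, dj in moves]
    w = np.array([np.exp(beta_tsv * tsv[i, j]
                         + beta_edge * min(i, j, n - 1 - i, n - 1 - j))
                  for i, j in cand])
    return cand[rng.choice(len(cand), p=w / w.sum())]

for _ in range(500):
    tsv += 1.0        # memory index grows everywhere...
    pos = step(pos)
    tsv[pos] = 0.0    # ...and resets where the wolf just walked
```

Fitting, as in the paper, would maximize the likelihood of observed steps under such weights rather than simulate them forward.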
Patterns in clinicians' responses to patient emotion in cancer care.
Finset, Arnstein; Heyn, Lena; Ruland, Cornelia
2013-10-01
To investigate how patient, clinician and relationship characteristics may predict how oncologists and nurses respond to patients' emotional expressions. Observational study of audiotapes of 196 consultations in cancer care. The consultations were coded according to Verona Coding Definitions of Emotional Sequences (VR-CoDES). Associations were tested in multilevel analyses. There were 471 cues and 109 concerns with a mean number of 3.0 (SD=3.2) cues and concerns per consultation. Nurses in admission interviews were five times more likely to provide space for further disclosure of cues and concerns (according to VR-CoDES definitions) than oncologists in out-patient follow-up consultations. Oncologists gave more room for disclosure to the first cue or concern in the consultation, to more explicit and doctor-initiated cues/concerns and when the doctor and/or patient was female. Nurses gave room for further disclosure to explicit and nurse-initiated cues/concerns, but the effects were smaller than for oncologists. Responses of clinicians which provide room for further disclosure do not occur at random and are systematically dependent on the source, explicitness and timing of the cue or concern. Knowledge of which factors influence responses to cues and concerns may be useful in communication skills training. Copyright © 2013. Published by Elsevier Ireland Ltd.
A New Time-Space Accurate Scheme for Hyperbolic Problems. 1; Quasi-Explicit Case
NASA Technical Reports Server (NTRS)
Sidilkover, David
1998-01-01
This paper presents a new discretization scheme for hyperbolic systems of conservation laws. It satisfies the TVD property and relies on a new high-resolution mechanism which is compatible with the genuinely multidimensional approach proposed recently. This work can be regarded as a first step towards extending the genuinely multidimensional approach to unsteady problems. Discontinuity-capturing capabilities and accuracy of the scheme are verified by a set of numerical tests.
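The abstract gives no scheme details, but a generic TVD high-resolution discretization (not the author's scheme) can be sketched as second-order upwind advection with a minmod limiter:

```python
import numpy as np

def minmod(a, b):
    """Slope limiter that enforces the TVD property."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u, c, steps):
    """Second-order upwind scheme with minmod limiting for the model
    equation u_t + a u_x = 0 (periodic grid, CFL number 0 < c <= 1)."""
    for _ in range(steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        uface = u + 0.5 * (1.0 - c) * du       # limited value at i+1/2
        u = u - c * (uface - np.roll(uface, 1))
    return u

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)  # square wave
u1 = advect(u0.copy(), c=0.5, steps=100)
```

The limiter keeps the total variation from growing, so the discontinuities advect without spurious oscillations; the conservative flux form preserves the integral of u exactly.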
The Fermionic Signature Operator and Hadamard States in the Presence of a Plane Electromagnetic Wave
NASA Astrophysics Data System (ADS)
Finster, Felix; Reintjes, Moritz
2017-05-01
We give a non-perturbative construction of a distinguished state for the quantized Dirac field in Minkowski space in the presence of a time-dependent external field of the form of a plane electromagnetic wave. By explicit computation of the fermionic signature operator, it is shown that the Dirac operator has the strong mass oscillation property. We prove that the resulting fermionic projector state is a Hadamard state.
LCMS landscape change monitoring system—results from an information needs assessment
Kevin Megown; Brian Schwind; Don Evans; Mark Finco
2015-01-01
Understanding changes in land use and land cover over space and time provides an important means to evaluate complex interactions between human and biophysical systems, to project future conditions, and to design mitigation and adaptive management strategies. Assessing and monitoring landscape change is evolving into a foundational element of climate change adaptation...
Orion EM-1 Crew Module Adapter Lift & Move to Stand
2016-11-11
The Orion crew module adapter (CMA) for Exploration Mission 1 was lifted for the first and only time, Nov. 11, during its processing flow inside the Neil Armstrong Operations and Checkout (O&C) Building high bay at the agency's Kennedy Space Center in Florida. The CMA is now undergoing secondary structure outfitting.
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods largely tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence.
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
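The learning-rate trade-off can be reproduced with a minimal scalar learner, a simplified stand-in for the paper's adaptive Bayesian filters (the update rule, noise level, and learning rates here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def adapt(eta, w_true=2.0, sigma=1.0, steps=5000):
    """Scalar adaptive learner: each noisy observation y nudges the
    parameter estimate w by a fraction eta of the innovation."""
    w, hist = 0.0, []
    for _ in range(steps):
        y = w_true + sigma * rng.standard_normal()
        w += eta * (y - w)
        hist.append(w)
    return np.array(hist)

slow = adapt(eta=0.01)   # slow convergence, small steady-state error
fast = adapt(eta=0.30)   # fast convergence, large steady-state error
# For this toy model the trade-off is explicit: the steady-state error
# variance is eta * sigma^2 / (2 - eta), while the convergence time
# constant is roughly 1 / eta steps.
var_slow = np.var(slow[2500:])
var_fast = np.var(fast[2500:])
```

A calibration procedure in the spirit of the paper would invert such explicit error/convergence functions to pick eta subject to a desired bound.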
NASA Technical Reports Server (NTRS)
Jagge, Amy
2016-01-01
With ever-changing landscapes and environmental conditions due to human-induced climate change, adaptability is imperative for the long-term success of facilities and Federal agency missions. To mitigate the effects of climate change, indicators such as above-ground biomass change must be identified to establish a comprehensive monitoring effort. Researching the varying effects of climate change on ecosystems can provide a scientific framework that will help produce informative, strategic and tactical policies for environmental adaptation. As a proactive approach to climate change mitigation, NASA tasked the Climate Change Adaptation Science Investigators Workgroup (CASI) to provide climate change expertise and data to Center facility managers and planners in order to ensure sustainability based on predictive models and current research. Generation of historical datasets that will be used in an agency-wide effort to establish strategies for climate change mitigation and adaptation at NASA facilities is part of the CASI strategy. Using time series of historical remotely sensed data is a well-established means of measuring change over time. CASI investigators have acquired multispectral and hyperspectral optical and LiDAR remotely sensed datasets from NASA Earth Observation Satellites (including the International Space Station), airborne sensors, and astronaut photography using hand-held digital cameras to create a historical dataset for the Johnson Space Center, as well as the Houston and Galveston area. The raster imagery within each dataset has been georectified, and the multispectral and hyperspectral imagery has been atmospherically corrected. Using ArcGIS for Server, the CASI-Regional Remote Sensing data has been published as an image service, and can be visualized through a basic web mapping application.
Future work will include a customized web mapping application created using a JavaScript Application Programming Interface (API), and inclusion of the CASI data for the NASA Johnson Space Center into a NASA-Wide GIS Institutional Portal.
Adaptive Neural Network Based Control of Noncanonical Nonlinear Systems.
Zhang, Yanjun; Tao, Gang; Chen, Mou
2016-09-01
This paper presents a new study on the adaptive neural network-based control of a class of noncanonical nonlinear systems with large parametric uncertainties. Unlike commonly studied canonical form nonlinear systems whose neural network approximation system models have explicit relative degree structures, which can directly be used to derive parameterized controllers for adaptation, noncanonical form nonlinear systems usually do not have explicit relative degrees, and thus their approximation system models are also in noncanonical forms. It is well-known that the adaptive control of noncanonical form nonlinear systems involves the parameterization of system dynamics. As demonstrated in this paper, it is also the case for noncanonical neural network approximation system models. Effective control of such systems is an open research problem, especially in the presence of uncertain parameters. This paper shows that it is necessary to reparameterize such neural network system models for adaptive control design, and that such reparameterization can be realized using a relative degree formulation, a concept yet to be studied for general neural network system models. This paper then derives the parameterized controllers that guarantee closed-loop stability and asymptotic output tracking for noncanonical form neural network system models. An illustrative example is presented with the simulation results to demonstrate the control design procedure, and to verify the effectiveness of such a new design method.
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling by Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that can tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
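The on-the-fly idea can be sketched generically (this is not the authors' code): a Gauss-Seidel sweep that requests each matrix row from a generator function, so the matrix is never stored explicitly:

```python
import numpy as np

def gauss_seidel_otf(row_fn, b, n, sweeps=100):
    """Gauss-Seidel iteration in which each matrix row is produced on
    the fly by row_fn(i) -> (indices, values); A is never stored."""
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            idx, val = row_fn(i)
            diag = val[idx == i][0]
            off = np.sum(val[idx != i] * x[idx[idx != i]])
            x[i] = (b[i] - off) / diag
    return x

def row_fn(i, n=50):
    """Row generator for a diagonally dominant tridiagonal test matrix
    (a stand-in for rows generated directly from a high-level model)."""
    idx = np.array([j for j in (i - 1, i, i + 1) if 0 <= j < n])
    return idx, np.where(idx == i, 4.0, -1.0)

b = np.ones(50)
x = gauss_seidel_otf(row_fn, b, 50)   # solves Ax = b without storing A
```

In the authors' setting the generator would compute a row of the transition-rate matrix from the stochastic activity network itself, trading computation for memory.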
Adaptivity in Agent-Based Routing for Data Networks
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Kirshner, Sergey; Merz, Chris J.; Turner, Kagan
2000-01-01
Adaptivity, both of the individual agents and of the interaction structure among the agents, seems indispensable for scaling up multi-agent systems (MASs) in noisy environments. One important consideration in designing adaptive agents is choosing their action spaces to be as amenable as possible to machine learning techniques, especially to reinforcement learning (RL) techniques. One important way to have the interaction structure connecting agents itself be adaptive is to have the intentions and/or actions of the agents be in the input spaces of the other agents, much as in Stackelberg games. We consider both kinds of adaptivity in the design of a MAS to control network packet routing. We demonstrate on the OPNET event-driven network simulator the perhaps surprising fact that simply changing the action space of the agents to be better suited to RL can result in very large improvements in their potential performance: at their best settings, our learning-amenable router agents achieve throughputs up to three and one half times better than that of the standard Bellman-Ford routing algorithm, even when the Bellman-Ford protocol traffic is maintained. We then demonstrate that much of that potential improvement can be realized by having the agents learn their settings when the agent interaction structure is itself adaptive.
Brown, Iain
2018-06-13
Climate change policy requires prioritization of adaptation actions across many diverse issues. The policy agenda for the natural environment includes not only biodiversity, soils and water, but also associated human benefits through agriculture, forestry, water resources, hazard alleviation, climate regulation and amenity value. To address this broad agenda, the use of comparative risk assessment is investigated with reference to statutory requirements of the UK Climate Change Risk Assessment. Risk prioritization was defined by current adaptation progress relative to risk magnitude and implementation lead times. Use of an ecosystem approach provided insights into risk interactions, but challenges remain in quantifying ecosystem services. For all risks, indirect effects and potential systemic risks were identified from land-use change, responding to both climate and socio-economic drivers, and causing increased competition for land and water resources. Adaptation strategies enhancing natural ecosystem resilience can buffer risks and sustain ecosystem services but require improved cross-sectoral coordination and recognition of dynamic change. To facilitate this, risk assessments need to be reflexive and explicitly assess decision outcomes contingent on their riskiness and adaptability, including required levels of human intervention, influence of uncertainty and ethical dimensions. More national-scale information is also required on adaptation occurring in practice and its efficacy in moderating risks. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'. © 2018 The Author(s).
Dupas, Laura; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre; Boulant, Nicolas
2015-06-01
The spokes method combined with parallel transmission is a promising technique to mitigate the B1+ inhomogeneity at ultra-high field in 2D imaging. To date, however, the spokes placement optimization combined with the magnitude least squares pulse design has never been done in direct conjunction with the explicit Specific Absorption Rate (SAR) and hardware constraints. In this work, the joint optimization of 2-spoke trajectories and RF subpulse weights is performed under these constraints explicitly and in the small tip angle regime. The problem is first considerably simplified by observing that only the vector between the 2 spokes is relevant in the magnitude least squares cost function, thereby reducing the size of the parameter space and allowing a more exhaustive search. The algorithm starts from a set of initial k-space candidates and, in parallel for all of them, optimizes the RF subpulse weights and the k-space locations simultaneously, under explicit SAR and power constraints, using an active-set algorithm. The dimensionality of the spoke placement parameter space being low, the RF pulse performance is computed for every location in k-space to study the robustness of the proposed approach with respect to initialization, by looking at the probability of converging towards a possible global minimum. Moreover, the optimization of the spoke placement is repeated with an increased pulse bandwidth in order to investigate the impact of the constraints on the result. Bloch simulations and in vivo T2*-weighted images acquired at 7 T validate the approach. The algorithm returns simulated normalized root mean square errors systematically smaller than 5% in 10 s. Copyright © 2015 Elsevier Inc. All rights reserved.
Lotka-Volterra competition models for sessile organisms.
Spencer, Matthew; Tanner, Jason E
2008-04-01
Markov models are widely used to describe the dynamics of communities of sessile organisms, because they are easily fitted to field data and provide a rich set of analytical tools. In typical ecological applications, at any point in time, each point in space is in one of a finite set of states (e.g., species, empty space). The models aim to describe the probabilities of transitions between states. In most Markov models for communities, these transition probabilities are assumed to be independent of state abundances. This assumption is often suspected to be false and is rarely justified explicitly. Here, we start with simple assumptions about the interactions among sessile organisms and derive a model in which transition probabilities depend on the abundance of destination states. This model is formulated in continuous time and is equivalent to a Lotka-Volterra competition model. We fit this model and a variety of alternatives in which transition probabilities do not depend on state abundances to a long-term coral reef data set. The Lotka-Volterra model describes the data much better than all models we consider other than a saturated model (a model with a separate parameter for each transition at each time interval, which by definition fits the data perfectly). Our approach provides a basis for further development of stochastic models of sessile communities, and many of the methods we use are relevant to other types of community. We discuss possible extensions to spatially explicit models.
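A minimal sketch of the key modeling point, transition rates that depend on the abundance of the destination state, is an Euler-discretized Lotka-Volterra competition system for cover proportions (the growth rates and competition matrix below are invented for illustration, not the coral reef estimates):

```python
import numpy as np

def lv_step(p, r, alpha, dt=0.01):
    """One Euler step for the cover proportions p of sessile states.
    Recruitment into empty space (1 - sum(p)) scales with the abundance
    of the destination state, so transition rates are density-dependent,
    unlike a constant-probability Markov chain."""
    empty = 1.0 - p.sum()
    growth = r * p * empty          # colonization of free space
    comp = p * (alpha @ p)          # competitive losses
    return p + dt * (growth - comp)

r = np.array([1.0, 0.8])            # illustrative growth rates
alpha = np.array([[1.0, 0.5],       # illustrative competition matrix:
                  [0.7, 1.0]])      # intraspecific > interspecific
p = np.array([0.1, 0.1])
for _ in range(5000):               # integrate toward equilibrium
    p = lv_step(p, r, alpha)
```

With intraspecific competition exceeding interspecific competition, the two states coexist at a stable equilibrium; a fixed-probability Markov chain cannot produce this density dependence.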
How adaptive optics may have won the Cold War
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
2013-05-01
While there are many theories and studies concerning the end of the Cold War, circa 1990, I postulate that one of the contributors to the result was the development of adaptive optics. The emergence of directed energy weapons, specifically space-based and ground-based high energy lasers made practicable with adaptive optics, showed that a successful defense against inter-continental ballistic missiles was not only possible, but achievable in a reasonable period of time.
NASA Astrophysics Data System (ADS)
Mancho, Ana M.; Wiggins, Stephen; Curbelo, Jezabel; Mendoza, Carolina
2013-11-01
Lagrangian descriptors are a recent technique which reveals geometrical structures in phase space and which is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and a "heuristic argument" that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors with both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We thank CESGA for computing facilities. This research was supported by MINECO grants: MTM2011-26696, I-Math C3-0104, ICMAT Severo Ochoa project SEV-2011-0087, and CSIC grant OCEANTECH. SW acknowledges the support of the ONR (Grant No. N00014-01-1-0769).
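A common Lagrangian descriptor is the M-function, the arc length of a trajectory integrated forward and backward in time; here is a minimal sketch on an assumed forced-Duffing benchmark (the field, window, and step size are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def velocity(t, z):
    """Aperiodically forced Duffing-type field (assumed benchmark)."""
    x, y = z
    return np.array([y, x - x**3 + 0.1 * np.sin(t)])

def lagrangian_descriptor(z0, t0=0.0, tau=5.0, dt=0.01):
    """M-function: arc length of the trajectory through z0, integrated
    forward and backward over [t0 - tau, t0 + tau] (Euler integration)."""
    M = 0.0
    for sign in (+1.0, -1.0):
        z, t = np.array(z0, dtype=float), t0
        for _ in range(int(tau / dt)):
            v = velocity(t, z)
            M += np.linalg.norm(v) * dt   # accumulate arc length
            z = z + sign * dt * v
            t += sign * dt
    return M

# Contrasting values of M over a grid of initial conditions reveal
# phase-space structure, e.g. near the saddle at the origin versus
# near a center of the unforced system.
M_saddle = lagrangian_descriptor((0.0, 0.0))
M_center = lagrangian_descriptor((1.0, 0.0))
```

Sharp changes of M across neighbouring initial conditions mark stable and unstable manifolds, which is how the descriptor reveals geometry without computing FTLE fields.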
NASA Technical Reports Server (NTRS)
Carey, L. D.; Petersen, W. A.; Deierling, W.; Roeder, W. P.
2009-01-01
A new weather radar is being acquired for use in support of America's space program at Cape Canaveral Air Force Station, NASA Kennedy Space Center, and Patrick AFB on the east coast of central Florida. This new radar replaces the modified WSR-74C at Patrick AFB that has been in use since 1984. The new radar is a Radtec TDR 43-250, which has Doppler and dual-polarization capability. A new fixed scan strategy was designed to best support the space program. The fixed scan strategy represents a complex compromise between many competing factors and relies on climatological heights of various temperatures that are important for improved lightning forecasting and evaluation of Lightning Launch Commit Criteria (LCC), which are the weather rules to avoid lightning strikes to in-flight rockets. The 0 °C to -20 °C layer is vital since most generation of electric charge occurs within it, so it is critical in evaluating Lightning LCC and in forecasting lightning; these are two of the most important duties of the 45 WS. While the fixed scan strategy that covers most of the climatological variation of the 0 °C to -20 °C levels with high resolution ensures that these critical temperatures are well covered most of the time, it also means that on any particular day the radar spends precious time scanning at angles covering less important heights. The goal of this project is to develop a user-friendly Interactive Data Language (IDL) computer program that will automatically generate optimized radar scan strategies that adapt to user input of the temperature profile and other important parameters. By using only the required scan angles output by the temperature-profile-adaptive scan strategy program, faster update times for volume scans and/or collection of more samples per gate for better data quality is possible, while maintaining high resolution at the critical temperature levels.
The temperature profile adaptive technique will also take into account earth curvature and refraction when geo-locating the radar beam (i.e., beam height and arc distance), including non-standard refraction based on the user-input temperature profile. In addition to temperature profile adaptivity, this paper will also summarize the other requirements for this scan strategy program such as detection of low-level boundaries, detection of anvil clouds, reducing the Cone Of Silence, and allowing for times when deep convective clouds will not occur. The adaptive technique will be carefully compared to and benchmarked against the new fixed scan strategy. Specific environmental scenarios in which the adaptive scan strategy is able to optimize and improve coverage and resolution at critical heights, scan time, and/or sample numbers relative to the fixed scan strategy will be presented.
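Beam geolocation under refraction is commonly handled with the effective-earth-radius model (k = 4/3 under standard refraction); the sketch below, with hypothetical elevation angles and target height, shows how a scan-angle list might be trimmed to the lowest elevation that reaches a critical temperature level:

```python
import numpy as np

def beam_height(r_km, elev_deg, k=4.0 / 3.0, Re_km=6371.0):
    """Radar beam-centre height (km) above the antenna at slant range
    r_km, using the effective-earth-radius model; k = 4/3 represents
    standard refraction, and a user-supplied k can stand in for
    non-standard refraction derived from a temperature profile."""
    ke = k * Re_km
    return np.sqrt(r_km**2 + ke**2
                   + 2.0 * r_km * ke * np.sin(np.radians(elev_deg))) - ke

def lowest_elevation_reaching(height_km, r_km, elevs):
    """Smallest scan elevation whose beam reaches a target height
    (e.g. a climatological temperature level) at the given range."""
    for e in sorted(elevs):
        if beam_height(r_km, e) >= height_km:
            return e
    return None

elevs = [0.5, 1.5, 2.4, 3.5, 5.0]                 # hypothetical scan angles
e = lowest_elevation_reaching(4.0, 100.0, elevs)  # reach 4 km at 100 km range
```

An adaptive scan generator would apply this kind of test across the climatologically important height band and drop elevations that contribute no coverage there.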
The effect of space flight on spatial orientation
NASA Technical Reports Server (NTRS)
Reschke, Millard F.; Bloomberg, Jacob J.; Harm, Deborah L.; Paloski, William H.; Satake, Hirotaka
1992-01-01
Both during and following early space missions, little neurosensory change in the astronauts was noted as a result of their exposure to microgravity. It is believed that this lack of in-flight adaptation in the spatial orientation and perceptual-motor system resulted from short exposure times and limited interaction with the new environment. Parker and Parker (1990) have suggested that while spatial orientation and motion information can be detected by a passive observer, adaptation to stimulus rearrangement is greatly enhanced when the observer moves through or acts on the environment. Experience with the actual consequences of action can be compared with those consequences expected on the basis of prior experience. Space flight today is of longer duration, and spacecraft volume has increased. These changes have forced the astronauts to interact with the new environment of microgravity, and as a result substantial changes occur in the perceptual and sensory-motor responses, reflecting adaptation to the stimulus rearrangement of space flight. We are currently evaluating spatial orientation and the perceptual-motor systems' adaptation to microgravity by examining responses of postural control, head and gaze stability during locomotion, goal-oriented vestibulo-ocular reflex (VOR), and structured quantitative perceptual reports. Evidence suggests that humans can successfully replace the gravitational reference available on Earth with cues available within the spacecraft or within themselves, but that adaptation to microgravity is not appropriate for a return to Earth. Countermeasures for optimal performance on-orbit and a successful return to Earth will require development of preflight and in-flight training to help the astronauts acquire and maintain a dual adaptive state. An understanding of spatial orientation and motion perception, postural control, locomotion, and the VOR will aid in this process.
The Interface Theory of Perception.
Hoffman, Donald D; Singh, Manish; Prakash, Chetan
2015-12-01
Perception is a product of evolution. Our perceptual systems, like our limbs and livers, have been shaped by natural selection. The effects of selection on perception can be studied using evolutionary games and genetic algorithms. To this end, we define and classify perceptual strategies and allow them to compete in evolutionary games in a variety of worlds with a variety of fitness functions. We find that veridical perceptions--strategies tuned to the true structure of the world--are routinely dominated by nonveridical strategies tuned to fitness. Veridical perceptions escape extinction only if fitness varies monotonically with truth. Thus, a perceptual strategy favored by selection is best thought of not as a window on truth but as akin to a windows interface of a PC. Just as the color and shape of an icon for a text file do not entail that the text file itself has a color or shape, so also our perceptions of space-time and objects do not entail (by the Invention of Space-Time Theorem) that objective reality has the structure of space-time and objects. An interface serves to guide useful actions, not to resemble truth. Indeed, an interface hides the truth; for someone editing a paper or photo, seeing transistors and firmware is an irrelevant hindrance. For the perceptions of H. sapiens, space-time is the desktop and physical objects are the icons. Our perceptions of space-time and objects have been shaped by natural selection to hide the truth and guide adaptive behaviors. Perception is an adaptive interface.
Work, exercise, and space flight. 2: Modification of adaptation by exercise (exercise prescription)
NASA Technical Reports Server (NTRS)
Thornton, William
1989-01-01
The fundamentals of exercise theory on earth must be rigorously understood and applied to prevent adaptation to long periods of weightlessness. Locomotor activity, not weight, determines the capacity or condition of the largest muscles and bones in the body and usually also determines cardio-respiratory capacity. Absence of this activity results in rapid atrophy of muscle, bone, and cardio-respiratory capacity. Upper body muscle and bone are less affected depending upon the individual's usual, or 1-g, activities. Methodology is available to prevent these changes but space operations demand that it be done in the most efficient fashion, i.e., shortest time. At this point in time we can reasonably select the type of exercise and methods of obtaining it, but additional work in 1-g will be required to optimize the time.
NASA Astrophysics Data System (ADS)
Veprik, A.; Zechtzer, S.; Pundak, N.; Kirkconnell, C.; Freeman, J.; Riabzev, S.
2011-06-01
Cryogenic coolers are often used in modern spacecraft in conjunction with sensitive electronics and sensors of military, commercial and scientific instrumentation. The typical space requirements are: power efficiency, low vibration export, proven reliability, ability to survive launch vibration/shock and long-term exposure to space radiation. A long-standing paradigm of exclusively using "space heritage" equipment has become the standard practice for delivering high reliability components. Unfortunately, this conservative "space heritage" practice can result in using outdated, oversized, overweight and overpriced cryogenic coolers and is becoming increasingly unacceptable for space agencies now operating within tough monetary and time constraints. The recent trend in developing mini and micro satellites for relatively inexpensive missions has prompted attempts to adapt leading-edge tactical cryogenic coolers for suitability in the space environment. The primary emphasis has been on reducing cost, weight and size. The authors are disclosing theoretical and practical aspects of a collaborative effort to develop a space qualified cryogenic refrigerator system based on the tactical cooler model Ricor K527 and the Iris Technology radiation hardened Low Cost Cryocooler Electronics (LCCE). The K527/LCCE solution is ideal for applications where cost, size, weight, power consumption, vibration export, reliability and time to spacecraft integration are of concern.
Neutral signature Walker-CSI metrics
NASA Astrophysics Data System (ADS)
Coley, A.; Musoke, N.
2015-03-01
We will construct explicit examples of four-dimensional neutral signature Einstein Walker spaces for which all of the polynomial scalar curvature invariants are constant. We show that these Einstein Walker spaces are Kundt. We then investigate the mathematical properties of the spaces, including holonomy and universality.
Finding joy in poor health: The leisure-scapes of chronic illness.
McQuoid, Julia
2017-06-01
Globally, increasing numbers of people face the challenge of enjoying life while living with long-term illness. Little research addresses leisure participation for people with chronic illness despite its links with mental and physical health and self-rated quality of life. I use a space-time geographical approach to explore experiences with leisure in everyday life for 26 individuals with chronic kidney disease (CKD) in Australia. I examine ways in which the spatial and temporal characteristics of illness management and symptoms shape where, when, and how participants can enjoy leisure, focusing on: 1) logistical conflicts between illness and leisure; 2) rhythmic interferences with the force of habit in skilful leisure performance; and 3) absorbing experiences of encounter with self and place through leisure. Data were collected from 2013 to 2014. Participants kept diaries over two sample days and then participated in semi-structured interviews. Findings show that the voluntary nature of leisure offered participants important benefits in coping with and managing illness over the long-term, including opportunities to experience greater sense of control, an alternative experience of one's body to the 'sick body', and knowledge creation that supports adaptation to the uncertainties of illness trajectories. The ability to engage in meaningful leisure was constrained by the shaping forces of illness symptoms and management on participants' leisure-scapes. Illness treatment regimens should therefore be adapted to better accommodate leisure participation for chronically ill patients, and leisure should be explicitly incorporated into illness management plans negotiated between patients and health practitioners. Finally, greater understanding of the transformative capacity of habit in activities of experimentation and play may have wider-reaching implications for leisure's potential applications in public health. 
Leisure should be taken seriously as a vehicle for enhancing wellbeing and adaptation to life with long-term illness. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ryabenkii, V. S.; Turchaninov, V. I.; Tsynkov, S. V.
1999-01-01
We propose a family of algorithms for solving numerically a Cauchy problem for the three-dimensional wave equation. The sources that drive the equation (i.e., the right-hand side) are compactly supported in space for any given time; they, however, may actually move in space with a subsonic speed. The solution is calculated inside a finite domain (e.g., sphere) that also moves with a subsonic speed and always contains the support of the right-hand side. The algorithms employ a standard consistent and stable explicit finite-difference scheme for the wave equation. They allow one to calculate the solution for arbitrarily long time intervals without error accumulation and with a fixed, non-growing amount of CPU time and memory required for advancing one time step. The algorithms are inherently three-dimensional; they rely on the presence of lacunae in the solutions of the wave equation in odd-dimensional spaces. The methodology presented in the paper is, in fact, a building block for constructing the nonlocal highly accurate unsteady artificial boundary conditions to be used for the numerical simulation of waves propagating with finite speed over unbounded domains.
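The explicit finite-difference building block these algorithms rely on can be sketched in one dimension. This is only an illustration of the underlying update: the lacunae mechanism itself requires odd-dimensional (here, 3D) space, so this 1D sketch shows the standard scheme, not the lacunae-based error-accumulation control.

```python
import numpy as np

def step_wave_1d(u_prev, u_curr, c, dx, dt, source):
    """One explicit leapfrog step for u_tt = c^2 u_xx + f.

    Standard second-order scheme; stable when the CFL number
    c*dt/dx <= 1.  Boundary points keep a zero Laplacian here,
    which is adequate while the solution is compactly supported
    away from the boundary.
    """
    lap = np.zeros_like(u_curr)
    lap[1:-1] = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    return 2 * u_curr - u_prev + dt**2 * (c**2 * lap + source)

# A compactly supported (to machine precision) pulse on a fixed grid.
n, c, dx = 201, 1.0, 0.01
dt = 0.5 * dx / c                       # CFL number 0.5
x = np.linspace(0.0, 2.0, n)
u0 = np.exp(-((x - 1.0) / 0.05) ** 2)   # initial bump centered at x = 1
u1 = u0.copy()                          # zero initial velocity
for _ in range(100):
    u0, u1 = u1, step_wave_1d(u0, u1, c, dx, dt, np.zeros(n))
```

With zero initial velocity the pulse splits into two half-amplitude pulses travelling in opposite directions, which the final state reflects.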
NASA Astrophysics Data System (ADS)
Bertrand, Sophie; Díaz, Erich; Lengaigne, Matthieu
2008-10-01
Peruvian anchovy (Engraulis ringens) stock abundance is tightly driven by the high and unpredictable variability of the Humboldt Current Ecosystem. Management of the fishery therefore cannot rely on mid- or long-term management policy alone but needs to be adaptive at relatively short time scales. Regular acoustic surveys are performed on the stock at intervals of 2 to 4 times a year, but there is a need for more time continuous monitoring indicators to ensure that management can respond at suitable time scales. Existing literature suggests that spatially explicit data on the location of fishing activities could be used as a proxy for target stock distribution. Spatially explicit commercial fishing data could therefore guide adaptive management decisions at shorter time scales than is possible through scientific stock surveys. In this study we therefore aim to (1) estimate the position of fishing operations for the entire fleet of Peruvian anchovy purse-seiners using the Peruvian satellite vessel monitoring system (VMS), and (2) quantify the extent to which the distribution of purse-seine sets describes anchovy distribution. To estimate fishing set positions from vessel tracks derived from VMS data we developed a methodology based on artificial neural networks (ANN) trained on a sample of fishing trips with known fishing set positions (exact fishing positions are known for approximately 1.5% of the fleet from an at-sea observer program). The ANN correctly identified 83% of the real fishing sets and largely outperformed comparative linear models. This network is then used to forecast fishing operations for those trips where no observers were onboard. 
To quantify the extent to which fishing set distribution was correlated to stock distribution we compared three metrics describing features of the distributions (the mean distance to the coast, the total area of distribution, and a clustering index) for concomitant acoustic survey observations and fishing set positions identified from VMS. For two of these metrics (mean distance to the coast and clustering index), fishing and survey data were significantly correlated. We conclude that the location of purse-seine fishing sets yields significant and valuable information on the distribution of the Peruvian anchovy stock and ultimately on its vulnerability to the fishery. For example, a high concentration of sets in the near coastal zone could potentially be used as a warning signal of high levels of stock vulnerability and trigger appropriate management measures aimed at reducing fishing effort.
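The classification step can be illustrated with a toy sketch: a logistic classifier (a simple stand-in for the paper's artificial neural network) separating hypothetical "fishing set" from "steaming" track points. The two features, vessel speed and turning angle, and all values below are synthetic assumptions, not the paper's actual VMS feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical VMS-derived features per track point: speed (knots) and
# absolute turning angle (degrees).  Fishing sets typically show low
# speed and erratic heading; steaming shows high speed, straight track.
n = 400
fishing = np.column_stack([rng.normal(1.0, 0.5, n), rng.normal(90.0, 30.0, n)])
steaming = np.column_stack([rng.normal(9.0, 1.0, n), rng.normal(10.0, 10.0, n)])
X = np.vstack([fishing, steaming])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Standardize features, then fit a logistic classifier by gradient descent.
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(2), 0.0
for _ in range(2000):
    z = np.clip(X @ w + b, -30.0, 30.0)   # clip logits to avoid overflow
    p = 1.0 / (1.0 + np.exp(-z))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = (((X @ w + b) > 0) == (y == 1)).mean()
```

On such well-separated synthetic clusters a linear decision boundary already suffices; the paper's ANN is needed because real VMS tracks are far noisier.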
Fast time- and frequency-domain finite-element methods for electromagnetic analysis
NASA Astrophysics Data System (ADS)
Lee, Woochan
Fast electromagnetic analysis in time and frequency domain is of critical importance to the design of integrated circuits (IC) and other advanced engineering products and systems. Many IC structures constitute a very large scale problem in modeling and simulation, the size of which also continuously grows with the advancement of the processing technology. This results in numerical problems beyond the reach of even the most powerful existing computational resources. Different from many other engineering problems, the structure of most ICs is special in the sense that its geometry is of Manhattan type and its dielectrics are layered. Hence, it is important to develop structure-aware algorithms that take advantage of the structure specialties to speed up the computation. In addition, among existing time-domain methods, explicit methods can avoid solving a matrix equation. However, their time step is traditionally restricted by the space step for ensuring the stability of a time-domain simulation. Therefore, making explicit time-domain methods unconditionally stable is important to accelerate the computation. In addition to time-domain methods, frequency-domain methods have suffered from an indefinite system that makes it difficult for an iterative solution to converge quickly. The first contribution of this work is a fast time-domain finite-element algorithm for the analysis and design of very large-scale on-chip circuits. The structure specialty of on-chip circuits such as Manhattan geometry and layered permittivity is preserved in the proposed algorithm. As a result, the large-scale matrix solution encountered in the 3-D circuit analysis is turned into a simple scaling of the solution of a small 1-D matrix, which can be obtained in linear (optimal) complexity with negligible cost. Furthermore, the time step size is not sacrificed, and the total number of time steps to be simulated is also significantly reduced, thus achieving a total cost reduction in CPU time. 
The second contribution is a new method for making an explicit time-domain finite-element method (TDFEM) unconditionally stable for general electromagnetic analysis. In this method, for a given time step, we find the unstable modes that are the root cause of instability, and deduct them directly from the system matrix resulting from a TDFEM based analysis. As a result, an explicit TDFEM simulation is made stable for an arbitrarily large time step irrespective of the space step. The third contribution is a new method for full-wave applications from low to very high frequencies in a TDFEM based on matrix exponential. In this method, we directly deduct the eigenmodes having large eigenvalues from the system matrix, thus achieving a significantly increased time step in the matrix exponential based TDFEM. The fourth contribution is a new method for transforming the indefinite system matrix of a frequency-domain FEM to a symmetric positive definite one. We deduct non-positive definite component directly from the system matrix resulting from a frequency-domain FEM-based analysis. The resulting new representation of the finite-element operator ensures an iterative solution to converge in a small number of iterations. We then add back the non-positive definite component to synthesize the original solution with negligible cost.
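The unstable-mode deduction idea of the second contribution can be sketched numerically. This is a simplified illustration under stated assumptions: a symmetric system matrix, a leapfrog-type stability bound of (2/dt)^2 on the eigenvalues, and a dense eigendecomposition for clarity (a practical TDFEM code would need a scalable way to find only the offending modes).

```python
import numpy as np

def deflate_unstable_modes(A, dt):
    """Deduct from the symmetric system matrix A the eigenmodes that
    violate the explicit stability limit for the chosen time step dt.

    For a leapfrog-type update, stable modes satisfy
    eigenvalue <= (2/dt)**2; modes above that bound are subtracted
    directly from A, so the explicit simulation remains stable for
    this dt regardless of the space step.
    """
    lam, V = np.linalg.eigh(A)
    unstable = lam > (2.0 / dt) ** 2
    # Subtract the rank-k contribution of the unstable modes.
    return A - (V[:, unstable] * lam[unstable]) @ V[:, unstable].T

# A stiff 1D Laplacian-like matrix whose largest eigenvalues would
# otherwise force a much smaller explicit time step.
n = 50
A = 1e6 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
dt = 2e-3                      # deliberately larger than A's stability limit
A_stable = deflate_unstable_modes(A, dt)
lam_max = np.linalg.eigvalsh(A_stable).max()
```

After deflation, every remaining eigenvalue sits below the stability bound for the chosen step, so the explicit update no longer blows up; the price is that the deducted high-frequency modes are no longer represented.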
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.
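Why motion adaptation helps the temporal regularizer can be shown with a toy 1-D example: differencing motion-compensated neighboring frames yields a far sparser signal than plain frame differencing, and sparsity is exactly what an l1-type penalty rewards. The shift-by-one "motion" below is a hypothetical stand-in for real inter-frame motion.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm; zeroes out small entries."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Two frames of a "dynamic image": the second is the first shifted by
# one pixel plus a little noise (a crude stand-in for motion).
rng = np.random.default_rng(1)
frame0 = rng.random(64)
frame1 = np.roll(frame0, 1) + 0.01 * rng.standard_normal(64)

# Plain temporal difference is dense; the motion-adaptive difference
# (compensating the known shift before differencing) is nearly sparse.
plain_diff = frame1 - frame0
adapted_diff = frame1 - np.roll(frame0, 1)

plain_energy = np.abs(soft_threshold(plain_diff, 0.02)).sum()
adapted_energy = np.abs(soft_threshold(adapted_diff, 0.02)).sum()
```

The motion-adapted residual survives thresholding with almost no energy, so a reconstruction that penalizes it converges to the dynamic image with far fewer k-space samples.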
Affective Interface Adaptations in the Musickiosk Interactive Entertainment Application
NASA Astrophysics Data System (ADS)
Malatesta, L.; Raouzaiou, A.; Pearce, L.; Karpouzis, K.
The current work presents the affective interface adaptations in the Musickiosk application. Adaptive interaction poses several open questions, since there is no unique way of mapping affective factors of user behaviour to the output of the system. Musickiosk uses a non-contact interface and implicit interaction through emotional affect rather than explicit interaction, where a gesture, sound or other input directly maps to an output behaviour, as in traditional entertainment applications. The PAD model is used for characterizing the different affective states and emotions.
Space station human productivity study, volume 1
NASA Technical Reports Server (NTRS)
1985-01-01
The primary goal was to develop design and operations requirements for direct support of intra-vehicular activity (IVA) crew performance and productivity. It was recognized that much work had already been accomplished which provided sufficient data for the definition of the desired requirements. It was necessary, therefore, to assess the status of such data to extract definable requirements, and then to define the remaining study needs. The explicit objectives of the study were to: review existing data to identify potential problems of space station crew productivity and to define requirements for support of productivity insofar as they could be justified by current information; identify those areas that lack adequate data; and prepare plans for managing studies to develop the lacking data, so that results can be input to the space station program in a timely manner.
Distant Operational Care Centre: Design Project Report
NASA Technical Reports Server (NTRS)
1996-01-01
The goal of this project is to outline the design of the Distant Operational Care Centre (DOCC), a modular medical facility to maintain human health and performance in space, that is adaptable to a range of remote human habitats. The purpose of this project is to outline a design, not to go into a complete technical specification of a medical facility for space. This project involves a process to produce a concise set of requirements, addressing the fundamental problems and issues regarding all aspects of a space medical facility for the future. The ideas presented here are at a high level, based on existing, researched, and hypothetical technologies. Given the long development times for space exploration, the outlined concepts from this project embody a collection of identified problems, and corresponding proposed solutions and ideas, ready to contribute to future space exploration efforts. To keep extrapolation and speculation about the future of space medicine well grounded, this project's vision extends roughly two decades ahead. The Distant Operational Care Centre (DOCC) is a modular medical facility for space. That is, its function is to maintain human health and performance in space environments, through prevention, diagnosis, and treatment. Furthermore, the DOCC must be adaptable to meet the environmental requirements of different remote human habitats, and support a high quality of human performance. To meet a diverse range of remote human habitats, the DOCC concentrates on a core medical capability that can then be adapted. Adaptation would make use of the DOCC's functional modularity, providing the ability to replace, add, and modify core functions of the DOCC by updating hardware, operations, and procedures. 
Some of the challenges to be addressed by this project include what constitutes the core medical capability in terms of hardware, operations, and procedures, and how DOCC can be adapted to different remote habitats.
Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco
2016-07-05
In vivo micro-computed tomography (µCT) scanning of small rodents is a powerful method for longitudinal monitoring of bone adaptation. However, the life-time bone growth in small rodents makes it a challenge to quantify local bone adaptation. Therefore, the aim of this study was to develop a protocol, which can take into account large bone growth, to quantify local bone adaptations over space and time. The entire right tibiae of eight 14-week-old C57BL/6J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size of 10.4 µm. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration. 80% of tibia length (starting from the endpoint of the proximal growth plate) was selected as the volume of interest and partitioned into 40 regions along the tibial long axis (10 divisions) and in the cross-section (4 sectors). The bone mineral content (BMC) was used to quantify bone adaptation and was calculated in each region. All local BMCs have precision errors (PE%CV) of less than 3.5% (24 out of 40 regions have PE%CV of less than 2%), least significant changes (LSCs) of less than 3.8%, and 38 out of 40 regions have intraclass correlation coefficients (ICCs) of over 0.8. The proposed protocol makes it possible to quantify local bone adaptations over an entire tibia in longitudinal studies with high reproducibility, an essential requirement for reducing the number of animals needed to achieve the necessary statistical power. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
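The 40-region partition (10 divisions along the long axis, 4 sectors in the cross-section) can be sketched as follows. This is a simplified stand-in for the paper's protocol: the voxel data are synthetic, the tibial long axis is assumed to lie along z, and the rigid-registration step is omitted.

```python
import numpy as np

def regional_bmc(coords, mineral, n_axial=10, n_sectors=4):
    """Partition bone voxels into n_axial x n_sectors regions and sum
    bone mineral content (BMC) in each.

    coords: (N, 3) voxel positions, long axis assumed along z;
    mineral: (N,) mineral content per voxel.
    """
    z = coords[:, 2]
    span = z.max() - z.min() + 1e-12
    axial = np.clip(((z - z.min()) / span * n_axial).astype(int),
                    0, n_axial - 1)
    # Sectors from the polar angle about the in-plane centroid.
    centroid = coords[:, :2].mean(0)
    theta = np.arctan2(coords[:, 1] - centroid[1], coords[:, 0] - centroid[0])
    sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    bmc = np.zeros((n_axial, n_sectors))
    np.add.at(bmc, (axial, sector), mineral)   # accumulate per region
    return bmc

rng = np.random.default_rng(2)
coords = rng.random((5000, 3))    # synthetic voxel positions
mineral = rng.random(5000)        # synthetic per-voxel mineral content
bmc = regional_bmc(coords, mineral)
```

The partition is exhaustive and disjoint, so the 40 regional values sum back to the total BMC; this conservation is a useful sanity check in a real pipeline too.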
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
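The SBP property underlying these energy estimates can be checked concretely with the classical second-order operator D = H^{-1}Q. The paper uses sixth-order interior stencils; this low-order example only illustrates the defining identity Q + Q^T = B = diag(-1, 0, ..., 0, 1), the discrete analogue of integration by parts.

```python
import numpy as np

def sbp_first_derivative(n, dx):
    """Second-order-accurate SBP first-derivative operator D = H^{-1} Q
    on n grid points with spacing dx.

    H is a diagonal norm (quadrature) matrix; Q satisfies
    Q + Q^T = B = diag(-1, 0, ..., 0, 1), which is what makes
    discrete energy estimates mimic the continuous ones.
    """
    H = dx * np.eye(n)
    H[0, 0] = H[-1, -1] = dx / 2          # boundary quadrature weights
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
    Q[0, 0], Q[-1, -1] = -0.5, 0.5        # one-sided boundary closures
    return np.linalg.inv(H) @ Q, H, Q

n, dx = 20, 0.1
D, H, Q = sbp_first_derivative(n, dx)
x = np.linspace(0.0, (n - 1) * dx, n)
exact_on_linear = np.allclose(D @ x, np.ones(n))   # D differentiates x exactly
B = np.diag([-1.0] + [0.0] * (n - 2) + [1.0])
sbp_property = np.allclose(Q + Q.T, B)
```

Because Q + Q^T reduces to boundary terms only, the semi-discrete energy rate depends solely on boundary data, which is exactly where the weak (penalty) enforcement of boundary and friction conditions enters.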
NASA Astrophysics Data System (ADS)
Most, S.; Dentz, M.; Bolster, D.; Bijeljic, B.; Nowak, W.
2017-12-01
Transport in real porous media shows non-Fickian characteristics. In the Lagrangian perspective this leads to skewed distributions of particle arrival times. The skewness is triggered by particles' memory of velocity that persists over a characteristic length. Capturing process memory is essential to represent non-Fickianity thoroughly. Classical non-Fickian models (e.g., CTRW models) simulate the effects of memory but not the mechanisms leading to process memory. CTRWs have been applied successfully in many studies but nonetheless they have drawbacks. In classical CTRWs each particle makes a spatial transition for which it adopts a random transit time. Consecutive transit times are drawn independently from each other, and this is only valid for sufficiently large spatial transitions. If we want to apply a finer numerical resolution than that, we have to implement memory into the simulation. Recent CTRW methods use transition matrices to simulate correlated transit times. However, deriving such transition matrices requires transport data from a fine-scale transport simulation, and the obtained transition matrix is valid solely for that single Péclet regime. The CTRW method we propose overcomes all three drawbacks: 1) We simulate transport without restrictions in transition length. 2) We parameterize our CTRW without requiring a transport simulation. 3) Our parameterization scales across Péclet regimes. We do so by sampling the pore-scale velocity distribution to generate correlated transit times as a Lévy flight on the CDF-axis of velocities with reflection at 0 and 1. The Lévy flight is parametrized only by the correlation length. We explicitly model memory including the evolution and decay of non-Fickianity, so it extends from local via pre-asymptotic to asymptotic scales.
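The sampling idea can be sketched as a random walk on the CDF axis with reflecting boundaries at 0 and 1. Two simplifications relative to the abstract: Gaussian increments stand in for the Lévy flight, and the exponential velocity distribution at the end is a hypothetical choice of inverse CDF.

```python
import numpy as np

def correlated_velocity_series(n_steps, corr_len, rng):
    """Sample a correlated series of velocity CDF values u in [0, 1]
    by a random walk with reflecting boundaries at 0 and 1.

    The step size is tied to the correlation length: a shorter
    correlation length gives larger steps and faster decorrelation.
    """
    u = np.empty(n_steps)
    u[0] = rng.random()
    step = 1.0 / corr_len
    for i in range(1, n_steps):
        v = abs(u[i - 1] + step * rng.standard_normal()) % 2.0
        u[i] = 2.0 - v if v > 1.0 else v   # reflect back into [0, 1]
    return u

rng = np.random.default_rng(3)
u = correlated_velocity_series(20000, corr_len=50.0, rng=rng)
# Map CDF values through the inverse CDF of a velocity distribution;
# an exponential distribution is used here purely for illustration.
velocities = -np.log1p(-np.clip(u, 0.0, 1.0 - 1e-12))
lag1_corr = np.corrcoef(u[:-1], u[1:])[0, 1]
```

Successive CDF values (and hence successive velocities and transit times) stay strongly correlated over roughly `corr_len` steps, which is the memory that independent-increment CTRWs cannot represent.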
Deep learning with domain adaptation for accelerated projection-reconstruction MR.
Han, Yoseob; Yoo, Jaejun; Kim, Hak Hee; Shin, Hee Jung; Sung, Kyunghyun; Ye, Jong Chul
2018-09-01
The radial k-space trajectory is a well-established sampling trajectory used in conjunction with magnetic resonance imaging. However, the radial k-space trajectory requires a large number of radial lines for high-resolution reconstruction. Increasing the number of radial lines causes longer acquisition time, making it more difficult for routine clinical use. On the other hand, if we reduce the number of radial lines, streaking artifact patterns are unavoidable. To solve this problem, we propose a novel deep learning approach with domain adaptation to restore high-resolution MR images from under-sampled k-space data. The proposed deep network removes the streaking artifacts from the artifact corrupted images. To address the situation given the limited available data, we propose a domain adaptation scheme that employs a pre-trained network using a large number of X-ray computed tomography (CT) or synthesized radial MR datasets, which is then fine-tuned with only a few radial MR datasets. The proposed method outperforms existing compressed sensing algorithms, such as the total variation and PR-FOCUSS methods. In addition, the calculation time is several orders of magnitude faster than the total variation and PR-FOCUSS methods. Moreover, we found that pre-training using CT or MR data from a similar organ is more important than pre-training using data from the same modality for a different organ. We demonstrate the possibility of domain adaptation when only a limited amount of MR data is available. The proposed method surpasses the existing compressed sensing algorithms in terms of the image quality and computation time. © 2018 International Society for Magnetic Resonance in Medicine.
Interdependence of geomorphic and ecologic resilience properties in a geographic context
NASA Astrophysics Data System (ADS)
Anthony Stallins, J.; Corenblit, Dov
2018-03-01
Ecology and geomorphology recognize the dynamic aspects of resistance and resilience. However, formal resilience theory in ecology has tended to deemphasize the geomorphic habitat template. Conversely, landscape sensitivity and state-and-transition models in geomorphology downweight mechanisms of biotic adaptation operative in fluctuating, spatially explicit environments. Adding to the interdisciplinary challenge of understanding complex biogeomorphic systems is that environmental heterogeneity and overlapping gradients of disturbance complicate inference of the geographic patterns of resistance and resilience. We develop a conceptual model for comparing the resilience properties among barrier dunes. The model illustrates how adaptive cycles and panarchies, the formal building blocks of resilience recognized in ecology, can be expressed as a set of hierarchically nested geomorphic and ecological metrics. The variance structure of these data is proposed as a means to delineate different kinds and levels of resilience. Specifically, it is the dimensionality of these data and how geomorphic and ecological variables load on the first and succeeding axes that facilitates the delineation of resistance and resilience. The construction of dune topographic state space from observations among different barrier islands is proposed as a way to measure the interdependence of geomorphic and ecological resilience properties.
Building Science-Relevant Literacy with Technical Writing in High School
DOE Office of Scientific and Technical Information (OSTI.GOV)
Girill, T R
2006-06-02
By drawing on the in-class work of an on-going literacy outreach project, this paper explains how well-chosen technical writing activities can earn time in high-school science courses by enabling underperforming students (including ESL students) to learn science more effectively. We adapted basic research-based text-design and usability techniques into age-appropriate exercises and cases using the cognitive apprenticeship approach. This enabled high-school students, aided by explicit guidelines, to build their cognitive maturity, learn how to craft good instructions and descriptions, and apply those skills to better note taking and technical talks in their science classes.
Asymptotic charges cannot be measured in finite time
Bousso, Raphael; Chandrasekaran, Venkatesa; Halpern, Illan F.; ...
2018-02-28
To study quantum gravity in asymptotically flat spacetimes, one would like to understand the algebra of observables at null infinity. Here we show that the Bondi mass cannot be observed in finite retarded time, and so is not contained in the algebra on any finite portion of I+. This follows immediately from recently discovered asymptotic entropy bounds. We verify this explicitly, and we find that attempts to measure a conserved charge at arbitrarily large radius in fixed retarded time are thwarted by quantum fluctuations. We comment on the implications of our results for flat space holography and the BMS charges at I+.
Simulating the Risk of Liver Fluke Infection using a Mechanistic Hydro-epidemiological Model
NASA Astrophysics Data System (ADS)
Beltrame, Ludovica; Dunne, Toby; Rose, Hannah; Walker, Josephine; Morgan, Eric; Vickerman, Peter; Wagener, Thorsten
2016-04-01
Liver Fluke (Fasciola hepatica) is a common parasite found in livestock and responsible for considerable economic losses throughout the world. Risk of infection is strongly influenced by climatic and hydrological conditions, which characterise the host environment for parasite development and transmission. Despite on-going control efforts, increases in fluke outbreaks have been reported in recent years in the UK, and have often been attributed to climate change. Currently used fluke risk models are based on empirical relationships derived from historical climate and incidence data. However, hydro-climate conditions are becoming increasingly non-stationary due to climate change and direct anthropogenic impacts such as land use change, making empirical models unsuitable for simulating future risk. In this study we introduce a mechanistic hydro-epidemiological model for Liver Fluke, which explicitly simulates habitat suitability for disease development in space and time, representing the parasite life cycle in connection with key environmental conditions. The model is used to assess patterns of Liver Fluke risk for two catchments in the UK under current and potential future climate conditions. Comparisons are made with a widely used empirical model employing different datasets, including data from regional veterinary laboratories. Results suggest that mechanistic models can achieve adequate predictive ability and support adaptive fluke control strategies under climate change scenarios.
Evolutionarily stable range limits set by interspecific competition.
Price, Trevor D; Kirkpatrick, Mark
2009-04-22
A combination of abiotic and biotic factors probably restricts the range of many species. Recent evolutionary models and tests of those models have asked how a gradual change in environmental conditions can set the range limit, with a prominent idea being that gene flow disrupts local adaptation. We investigate how biotic factors, explicitly competition for limited resources, result in evolutionarily stable range limits even in the absence of the disruptive effect of gene flow. We model two competing species occupying different segments of the resource spectrum. If one segment of the resource spectrum declines across space, a species that specializes on that segment can be driven to extinction, even though in the absence of competition it would evolve to exploit other abundant resources and so be saved. The result is that a species range limit is set in both evolutionary and ecological time, as the resources associated with its niche decline. Factors promoting this outcome include: (i) inherent gaps in the resource distribution, (ii) relatively high fitness of the species when in its own niche, and low fitness in the alternative niche, even when resource abundances are similar in each niche, (iii) strong interspecific competition, and (iv) asymmetric interspecific competition. We suggest that these features are likely to be common in multispecies communities, thereby setting evolutionarily stable range limits.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kovalevskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
NASA Astrophysics Data System (ADS)
Temme, F. P.
1991-06-01
For many-body spin cluster problems, dual-symmetry recoupled tensors over Liouville space provide suitable bases for a generalized torque formalism using the Sn-adapted density operator in which to discuss NMR and related techniques. The explicit structure of such tensors is considered in the context of the Cayley algebra of scalar invariants over a field, specified by the inner k_i rank labels of the T^k_q(k_1...k_n) tensors. The pertinence of both lexical combinatorial architectures over inner rank sets and SU(2) propagative topologies in specifying the structure of dual recoupling tensors is considered in the context of the Sn partitional aspects of spin clusters. The form of Heisenberg superoperator generators whose algebra underlies the Gel'fand pattern algebra of SU(2) and SU(2)×Sn tensor bases over Liouville space is presented together with both the related s-boson algebras and a description of the associated {||2k 0>>} pattern sets of CF29H carrier space under the appropriate symmetry. These concepts are correlated with recent work on SU(2)×Sn induced symmetry hierarchies over Liouville spin space. The pertinence of this theoretical work to an understanding of multiquantum NMR in Liouville space formalisms is stressed in a discussion of the nature of pathways for intracluster J coupling, which also gives a valuable physical insight into the nature of coherence transfer in more general spin-1/2 systems.
Identifying sighting clusters of endangered taxa with historical records.
Duffy, Karl J
2011-04-01
The probability and time of extinction of taxa is often inferred from statistical analyses of historical records. Many of these analyses require the exclusion of multiple records within a unit of time (i.e., a month or a year). Nevertheless, spatially explicit, temporally aggregated data may be useful for identifying clusters of sightings (i.e., sighting clusters) in space and time. Identification of sighting clusters highlights changes in the historical recording of endangered taxa. I used two methods to identify sighting clusters in historical records: the Ederer-Myers-Mantel (EMM) test and the space-time permutation scan (STPS). I applied these methods to the spatially explicit sighting records of three species of orchids that are listed as endangered in the Republic of Ireland under the Wildlife Act (1976): Cephalanthera longifolia, Hammarbya paludosa, and Pseudorchis albida. Results with the EMM test were strongly affected by the choice of the time interval, and thus the number of temporal samples, used to examine the records. For example, sightings of P. albida clustered when the records were partitioned into 20-year temporal samples, but not when they were partitioned into 22-year temporal samples. Because the statistical power of EMM was low, it will not be useful when data are sparse. Nevertheless, the STPS identified regions that contained sighting clusters because it uses a flexible scanning window (defined by cylinders of varying size that move over the study area and evaluate the likelihood of clustering) to detect them, and it identified regions with high and regions with low rates of orchid sightings. The STPS analyses can be used to detect sighting clusters of endangered species that may be related to regions of extirpation and may assist in the categorization of threat status. ©2010 Society for Conservation Biology.
NASA Technical Reports Server (NTRS)
Moffitt, William L.
2003-01-01
As missions have become increasingly more challenging over the years, the most adaptable and capable element of space shuttle operations has proven time and again to be human beings. Human space flight provides unique aspects of observation, interaction, and intervention that can reduce risk and improve mission success. No other launch vehicle - in development or in operation today - can match the space shuttle's human space flight capabilities. Preserving U.S. leadership in human space flight requires a strategy to meet those challenges. The ongoing development of next generation vehicles, along with upgrades to the space shuttle, is the most effective means for assuring our access to space.
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
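The extended-phase-space leapfrog can be sketched compactly. The splitting below follows the construction summarized in the abstract (duplicate the phase space, then alternate the exactly integrable flows of H(q, y) and H(x, p)); the test Hamiltonian, step size, and final averaging are my own illustrative choices, and the coordinate-mixing transformations discussed in the paper are omitted.

```python
# Extended phase space: canonical pairs (q, p) and (x, y), auxiliary
# Hamiltonian H(q, y) + H(x, p). Each term is exactly integrable even when
# H itself is inseparable, because its two arguments stay frozen.
def flow_A(q, p, x, y, dt, Hq, Hp):
    # exact flow of H(q, y): q and y are constants of this sub-flow
    return q, p - dt * Hq(q, y), x + dt * Hp(q, y), y

def flow_B(q, p, x, y, dt, Hq, Hp):
    # exact flow of H(x, p): x and p are constants of this sub-flow
    return q + dt * Hp(x, p), p, x, y - dt * Hq(x, p)

def leapfrog(q, p, x, y, dt, Hq, Hp):
    q, p, x, y = flow_A(q, p, x, y, dt / 2, Hq, Hp)
    q, p, x, y = flow_B(q, p, x, y, dt, Hq, Hp)
    return flow_A(q, p, x, y, dt / 2, Hq, Hp)

# Illustrative inseparable Hamiltonian H(q, p) = (q*p)**2 / 2.
Hq = lambda q, p: q * p * p        # dH/dq
Hp = lambda q, p: q * q * p        # dH/dp

q = x = 1.0
p = y = 1.0
for _ in range(1000):
    q, p, x, y = leapfrog(q, p, x, y, 0.001, Hq, Hp)
qa, pa = 0.5 * (q + x), 0.5 * (p + y)   # project back by averaging the copies
energy = 0.5 * (qa * pa) ** 2           # should stay near H(1, 1) = 0.5
print(energy)
```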
Gravitational radiation quadrupole formula is valid for gravitationally interacting systems
NASA Technical Reports Server (NTRS)
Walker, M.; Will, C. M.
1980-01-01
An argument is presented for the validity of the quadrupole formula for gravitational radiation energy loss in the far field of nearly Newtonian (e.g., binary stellar) systems. This argument differs from earlier ones in that it determines beforehand the formal accuracy of approximation required to describe gravitationally self-interacting systems, uses the corresponding approximate equation of motion explicitly, and evaluates the appropriate asymptotic quantities by matching along the correct space-time light cones.
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.
Ingram, James N; Howard, Ian S; Flanagan, J Randall; Wolpert, Daniel M
2011-09-01
Motor learning has been extensively studied using dynamic (force-field) perturbations. These induce movement errors that result in adaptive changes to the motor commands. Several state-space models have been developed to explain how trial-by-trial errors drive the progressive adaptation observed in such studies. These models have been applied to adaptation involving novel dynamics, which typically occurs over tens to hundreds of trials, and which appears to be mediated by a dual-rate adaptation process. In contrast, when manipulating objects with familiar dynamics, subjects adapt rapidly within a few trials. Here, we apply state-space models to familiar dynamics, asking whether adaptation is mediated by a single-rate or dual-rate process. Previously, we reported a task in which subjects rotate an object with known dynamics. By presenting the object at different visual orientations, adaptation was shown to be context-specific, with limited generalization to novel orientations. Here we show that a multiple-context state-space model, with a generalization function tuned to visual object orientation, can reproduce the time-course of adaptation and de-adaptation as well as the observed context-dependent behavior. In contrast to the dual-rate process associated with novel dynamics, we show that a single-rate process mediates adaptation to familiar object dynamics. The model predicts that during exposure to the object across multiple orientations, there will be a degree of independence for adaptation and de-adaptation within each context, and that the states associated with all contexts will slowly de-adapt during exposure in one particular context. We confirm these predictions in two new experiments. Results of the current study thus highlight similarities and differences in the processes engaged during exposure to novel versus familiar dynamics. In both cases, adaptation is mediated by multiple context-specific representations. In the case of familiar object dynamics, however, the representations can be engaged based on visual context, and are updated by a single-rate process.
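The dual-rate state-space account that the abstract contrasts with its single-rate finding can be sketched in a few lines: a fast process that learns and forgets quickly and a slow process that learns and forgets slowly sum to the net adaptation. Parameter values below are illustrative, not fitted to any dataset.

```python
def dual_rate(perturbation, Af=0.92, Bf=0.25, As=0.996, Bs=0.05):
    """Trial-by-trial two-state model: retention factors Af, As and
    learning rates Bf, Bs for the fast and slow processes."""
    xf = xs = 0.0
    net = []
    for p in perturbation:
        e = p - (xf + xs)          # trial error drives both processes
        xf = Af * xf + Bf * e
        xs = As * xs + Bs * e
        net.append(xf + xs)
    return net

trials = [1.0] * 200               # constant force-field perturbation
x = dual_rate(trials)
print(round(x[0], 2), round(x[-1], 2))
```

Setting one pair of rates to zero collapses this to the single-rate process the study finds sufficient for familiar object dynamics.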
Space Adaptation Back Pain: A Retrospective Study
NASA Technical Reports Server (NTRS)
Kerstman, E. L.; Scheuring, R. A.; Barnes, M. G.; DeKorse, T. B.; Saile, L. G.
2008-01-01
Back pain is frequently reported by astronauts during the early phase of space flight as they adapt to the microgravity environment. However, the epidemiology of space adaptation back pain has not been well defined. The purpose of this retrospective study was to develop a case definition of space adaptation back pain, determine the incidence of space adaptation back pain, and determine the effectiveness of available treatments. Medical records from the Mercury, Apollo, Apollo-Soyuz Test Project (ASTP), Skylab, Mir, International Space Station (ISS), and Shuttle programs were reviewed. All episodes of in-flight back pain that met the criteria for space adaptation back pain were recorded. Pain characteristics, including intensity, location, and duration of the pain were noted. The effectiveness of specific treatments also was recorded. The incidence of space adaptation back pain among astronauts was determined to be 53% (384/722). Most of the affected astronauts reported mild pain (85%). Moderate pain was reported by 11% of the affected astronauts and severe pain was reported by only 4% of the affected astronauts. The most effective treatments were fetal positioning (91% effective) and the use of analgesic medications (85% effective). This retrospective study aids in the development of a case definition of space adaptation back pain and examines the epidemiology of space adaptation back pain. Space adaptation back pain is usually mild and self-limited. However, there is a risk of functional impairment and mission impact in cases of moderate or severe pain that do not respond to currently available treatments. Therefore, the development of preventive measures and more effective treatments should be pursued.
NASA Astrophysics Data System (ADS)
Mädler, Thomas
2013-05-01
Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.
Covariant electrodynamics in linear media: Optical metric
NASA Astrophysics Data System (ADS)
Thompson, Robert T.
2018-03-01
While the postulate of covariance of Maxwell's equations for all inertial observers led Einstein to special relativity, it was the further demand of general covariance—form invariance under general coordinate transformations, including between accelerating frames—that led to general relativity. Several lines of inquiry over the past two decades, notably the development of metamaterial-based transformation optics, have spurred a greater interest in the role of geometry and space-time covariance for electrodynamics in ponderable media. I develop a generally covariant, coordinate-free framework for electrodynamics in general dielectric media residing in curved background space-times. In particular, I derive a relation for the spatial medium parameters measured by an arbitrary timelike observer. In terms of those medium parameters I derive an explicit expression for the pseudo-Finslerian optical metric of birefringent media and show how it reduces to a pseudo-Riemannian optical metric for nonbirefringent media. This formulation provides a basis for a unified approach to ray and congruence tracing through media in curved space-times that may smoothly vary among positively refracting, negatively refracting, and vacuum.
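For the nonbirefringent case mentioned above, the pseudo-Riemannian optical metric takes the well-known Gordon form; this is standard background, not the paper's full pseudo-Finslerian result:

```latex
% Gordon optical metric for an impedance-matched, nonbirefringent medium
% with refractive index n = \sqrt{\varepsilon\mu} and medium 4-velocity
% u^\mu (signature (-,+,+,+), u_\mu u^\mu = -1):
\bar{g}^{\mu\nu} = g^{\mu\nu} + \left(1 - \frac{1}{n^{2}}\right) u^{\mu}u^{\nu}
```

Light rays in the medium follow null geodesics of this effective metric rather than of the background metric g.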
Unruh effect for general trajectories
NASA Astrophysics Data System (ADS)
Obadia, N.; Milgrom, M.
2007-03-01
We consider two-level detectors coupled to a scalar field and moving on arbitrary trajectories in Minkowski space-time. We first derive a generic expression for the response function using a (novel) regularization procedure based on the Feynman prescription that is explicitly causal, and we compare it to other expressions used in the literature. We then use this expression to study, analytically and numerically, the time dependence of the response function in various nonstationarity situations. We show that, generically, the response function decreases like a power in the detector’s level spacing, E, for high E. It is only for stationary worldlines that the response function decays faster than any power law, in keeping with the known exponential behavior for some stationary cases. Under some conditions the (time-dependent) response function for a nonstationary worldline is well approximated by the value of the response function for a stationary worldline having the same instantaneous acceleration, torsion, and hypertorsion. While we cannot offer general conditions for this to apply, we discuss special cases; in particular, the low-energy limit for linear space trajectories.
Space, time, and the third dimension (model error)
Moss, Marshall E.
1979-01-01
The space-time tradeoff of hydrologic data collection (the ability to substitute spatial coverage for temporal extension of records or vice versa) is controlled jointly by the statistical properties of the phenomena that are being measured and by the model that is used to meld the information sources. The control exerted on the space-time tradeoff by the model and its accompanying errors has seldom been studied explicitly. The technique, known as Network Analyses for Regional Information (NARI), permits such a study of the regional regression model that is used to relate streamflow parameters to the physical and climatic characteristics of the drainage basin. The NARI technique shows that model improvement is a viable and sometimes necessary means of improving regional data collection systems. Model improvement provides an immediate increase in the accuracy of regional parameter estimation and also increases the information potential of future data collection. Model improvement, which can only be measured in a statistical sense, cannot be quantitatively estimated prior to its achievement; thus an attempt to upgrade a particular model entails a certain degree of risk on the part of the hydrologist.
Cosmic time and reduced phase space of general relativity
NASA Astrophysics Data System (ADS)
Ita, Eyo Eyo; Soo, Chopin; Yu, Hoi-Lai
2018-05-01
In an ever-expanding spatially closed universe, the fractional change of the volume is the preeminent intrinsic time interval to describe evolution in general relativity. The expansion of the universe serves as a subsidiary condition which transforms Einstein's theory from a first class to a second class constrained system when the physical degrees of freedom (d.o.f.) are identified with transverse traceless excitations. The super-Hamiltonian constraint is solved by eliminating the trace of the momentum in terms of the other variables, and spatial diffeomorphism symmetry is tackled explicitly by imposing transversality. The theorems of Maskawa-Nishijima appositely relate the reduced phase space to the physical variables in canonical functional integral and Dirac's criterion for second class constraints to nonvanishing Faddeev-Popov determinants in the phase space measures. A reduced physical Hamiltonian for intrinsic time evolution of the two physical d.o.f. emerges. Freed from the first class Dirac algebra, deformation of the Hamiltonian constraint is permitted, and natural extension of the Hamiltonian while maintaining spatial diffeomorphism invariance leads to a theory with Cotton-York term as the ultraviolet completion of Einstein's theory.
The Deployment of a Commercial RGA to the International Space Station
NASA Technical Reports Server (NTRS)
Kowitt, Matt; Hawk, Doug; Rossetti, Dino; Woronowicz, Michael
2015-01-01
The International Space Station (ISS) uses ammonia as a medium for heat transport in its Active Thermal Control System. Over time, there have been intermittent component failures and leaks in the ammonia cooling loop. One specific challenge in dealing with an ammonia leak on the exterior of the ISS is determining the exact location from which ammonia is escaping before addressing the problem. Together, researchers and engineers from Stanford Research Systems (SRS) and NASA's Johnson Space Center and Goddard Space Flight Center have adapted a commercial off-the-shelf (COTS) residual gas analyzer (RGA) for repackaging and operation outside the ISS as a core component in the ISS Robotic External Leak Locator, a technology demonstration payload currently scheduled for launch during 2015. The packaging and adaptation of the COTS RGA to the Leak Locator will be discussed. The collaborative process of adapting a commercial instrument for spaceflight will also be reviewed, including the build-up of the flight units. Measurements from a full-scale thermal vacuum test will also be presented demonstrating the absolute and directional sensitivity of the RGA.
Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.
Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene
2016-11-01
Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns; however, heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.
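The space-time kernel density at the core of this method can be sketched in pure Python. The product-kernel form and bandwidth values below are illustrative assumptions; the paper's adaptive domain decomposition would partition the event list into space-time blocks evaluated in parallel.

```python
def stkde(events, x, y, t, hs=1.0, ht=7.0):
    """Space-time kernel density at (x, y, t): each event contributes the
    product of a 2D spatial and a 1D temporal Epanechnikov kernel, with
    spatial bandwidth hs and temporal bandwidth ht."""
    total = 0.0
    for xi, yi, ti in events:
        ds2 = ((x - xi) ** 2 + (y - yi) ** 2) / hs ** 2
        dt2 = ((t - ti) / ht) ** 2
        if ds2 < 1.0 and dt2 < 1.0:          # event inside the kernel support
            total += (1.0 - ds2) * (1.0 - dt2)
    return total / (len(events) * hs * hs * ht)

# Toy case records: (x, y, day-of-onset)
events = [(0.0, 0.0, 0.0), (0.2, 0.1, 2.0), (5.0, 5.0, 30.0)]
near = stkde(events, 0.1, 0.0, 1.0)   # inside a small space-time cluster
far = stkde(events, 10.0, 10.0, 100.0)  # far from every event
print(near, far)
```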
Wang, Xingmei; Hao, Wenqian; Li, Qiming
2017-12-18
This paper proposes an adaptive cultural algorithm with improved quantum-behaved particle swarm optimization (ACA-IQPSO) to detect the underwater sonar image. In the population space, to improve the searching ability of particles, the iteration count and the fitness values of particles are used as factors to adaptively adjust the contraction-expansion coefficient of the quantum-behaved particle swarm optimization algorithm (QPSO). The improved quantum-behaved particle swarm optimization algorithm (IQPSO) can make particles adjust their behaviours according to their quality. In the belief space, a new update strategy is adopted to update cultural individuals according to the idea of the update strategy in the shuffled frog leaping algorithm (SFLA). Moreover, to enhance the utilization of information in the population space and belief space, the acceptance and influence functions are redesigned in the new communication protocol. The experimental results show that ACA-IQPSO can obtain good clustering centres according to the grey distribution information of underwater sonar images, and accurately complete underwater object detection. Compared with other algorithms, the proposed ACA-IQPSO has good effectiveness, excellent adaptability, a powerful searching ability and high convergence efficiency. Meanwhile, the experimental results of the benchmark functions can further demonstrate that the proposed ACA-IQPSO has better searching ability, convergence efficiency and stability.
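A generic QPSO skeleton makes the role of the contraction-expansion coefficient concrete. The linear decay rule for beta below is a common textbook choice, not the paper's adaptive fitness-and-iteration rule, and the sphere function stands in for the sonar-image clustering objective.

```python
import math
import random

def qpso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal quantum-behaved PSO: particles are drawn toward a random
    blend of personal and global bests, with jump scale set by beta."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for it in range(iters):
        beta = 1.0 - 0.5 * it / iters    # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / n for d in range(dim)]
        for i, x in enumerate(xs):
            for d in range(dim):
                phi = rng.random()
                attract = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = rng.random() or 1e-12
                sign = 1.0 if rng.random() < 0.5 else -1.0
                x[d] = attract + sign * beta * abs(mbest[d] - x[d]) * math.log(1.0 / u)
            if f(x) < f(pbest[i]):
                pbest[i] = x[:]
                if f(x) < f(gbest):
                    gbest = x[:]
    return gbest

sphere = lambda v: sum(c * c for c in v)   # stand-in objective
best = qpso(sphere, dim=2)
print(sphere(best))
```

Shrinking beta trades early global exploration for late local refinement; the paper's contribution is to make that schedule adaptive per particle rather than fixed.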
tESA: a distributional measure for calculating semantic relatedness.
Rybinski, Maciej; Aldana-Montes, José Francisco
2016-12-28
Semantic relatedness is a measure that quantifies the strength of a semantic link between two concepts. Often, it can be efficiently approximated with methods that operate on words, which represent these concepts. Approximating semantic relatedness between texts and concepts represented by these texts is an important part of many text and knowledge processing tasks of crucial importance in the ever growing domain of biomedical informatics. The problem of most state-of-the-art methods for calculating semantic relatedness is their dependence on highly specialized, structured knowledge resources, which makes these methods poorly adaptable for many usage scenarios. On the other hand, the domain knowledge in the Life Sciences has become more and more accessible, but mostly in its unstructured form - as texts in large document collections, which makes its use more challenging for automated processing. In this paper we present tESA, an extension to a well known Explicit Semantic Relatedness (ESA) method. In our extension we use two separate sets of vectors, corresponding to different sections of the articles from the underlying corpus of documents, as opposed to the original method, which only uses a single vector space. We present an evaluation of Life Sciences domain-focused applicability of both tESA and domain-adapted Explicit Semantic Analysis. The methods are tested against a set of standard benchmarks established for the evaluation of biomedical semantic relatedness quality. Our experiments show that the propsed method achieves results comparable with or superior to the current state-of-the-art methods. Additionally, a comparative discussion of the results obtained with tESA and ESA is presented, together with a study of the adaptability of the methods to different corpora and their performance with different input parameters. Our findings suggest that combined use of the semantics from different sections (i.e. 
extending the original ESA methodology with the use of title vectors) of the documents of scientific corpora may be used to enhance the performance of distributional semantic relatedness measures, which can be observed in the largest reference datasets. We also present the impact of the proposed extension on the size of distributional representations.
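The two-vector-space idea above lends itself to a compact sketch. Below is a minimal, hypothetical illustration of ESA-style relatedness extended with a second (title-derived) vector space; the TF-IDF weighting, the `alpha` mixing parameter, and the linear combination rule are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    # docs: list of token lists; returns, per term, a TF-IDF vector over documents
    # (the "concept space" of ESA, with documents as dimensions).
    df = Counter()
    for d in docs:
        df.update(set(d))
    n = len(docs)
    vecs = {}
    for j, d in enumerate(docs):
        for t, c in Counter(d).items():
            vecs.setdefault(t, [0.0] * n)[j] = c * math.log(n / df[t])
    return vecs

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def tesa_relatedness(term_a, term_b, body_vecs, title_vecs, alpha=0.5):
    # Combine similarities from the two vector spaces; the blend is an assumption.
    s_body = cosine(body_vecs.get(term_a, []), body_vecs.get(term_b, []))
    s_title = cosine(title_vecs.get(term_a, []), title_vecs.get(term_b, []))
    return alpha * s_body + (1 - alpha) * s_title
```

A term absent from one space simply contributes zero similarity there, so the two spaces can disagree without breaking the measure.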
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nottale, Laurent; Célérier, Marie-Noëlle
One of the main results of scale relativity as regards the foundation of quantum mechanics is its explanation of the origin of the complex nature of the wave function. The scale relativity theory introduces an explicit dependence of physical quantities on scale variables, founding itself on the theorem according to which a continuous and non-differentiable space-time is fractal (i.e., scale-divergent). In the present paper, the nature of the scale variables and their relations to resolutions and differential elements are specified in the non-relativistic case (fractal space). We show that, owing to the scale-dependence which it induces, non-differentiability involves a fundamental two-valuedness of the mean derivatives. Since, in the scale relativity framework, the wave function is a manifestation of the velocity field of fractal space-time geodesics, the two-valuedness of velocities leads to writing them in terms of complex numbers, and therefore yields the complex nature of the wave function, from which the usual expression of the Schrödinger equation can be derived.
Quantization of wave equations and hermitian structures in partial differential varieties
Paneitz, S. M.; Segal, I. E.
1980-01-01
Sufficiently close to 0, the solution variety of a nonlinear relativistic wave equation—e.g., of the form □ϕ + m2ϕ + gϕp = 0—admits a canonical Lorentz-invariant hermitian structure, uniquely determined by the consideration that the action of the differential scattering transformation in each tangent space be unitary. Similar results apply to linear time-dependent equations or to equations in a curved asymptotically flat space-time. A close relation of the Riemannian structure to the determination of vacuum expectation values is developed and illustrated by an explicit determination of a perturbative 2-point function for the case of interaction arising from curvature. The theory underlying these developments is in part a generalization of that of M. G. Krein and collaborators concerning stability of differential equations in Hilbert space and in part a precise relation between the unitarization of given symplectic linear actions and their full probabilistic quantization. The unique causal structure in the infinite symplectic group is instrumental in these developments. PMID:16592923
Moduli stabilising in heterotic nearly Kähler compactifications
NASA Astrophysics Data System (ADS)
Klaput, Michael; Lukas, Andre; Matti, Cyril; Svanes, Eirik E.
2013-01-01
We study heterotic string compactifications on nearly Kähler homogeneous spaces, including the gauge field effects which arise at order α'. Using Abelian gauge fields, we are able to solve the Bianchi identity and supersymmetry conditions to this order. The four-dimensional external space-time consists of a domain wall solution with moduli fields varying along the transverse direction. We find that the inclusion of α' corrections improves the moduli stabilization features of this solution. In this case, a combination of the dilaton and the volume modulus asymptotes to a constant value away from the domain wall. It is further shown that the inclusion of non-perturbative effects can stabilize the remaining modulus and "lift" the domain wall to an AdS vacuum. The coset SU(3)/U(1)² is used as an explicit example to demonstrate the validity of this AdS vacuum. Our results show that heterotic nearly Kähler compactifications can lead to maximally symmetric four-dimensional space-times at the non-perturbative level.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas
2003-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
NASA Technical Reports Server (NTRS)
Wang, Nanbor; Kircher, Michael; Schmidt, Douglas C.
2000-01-01
Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively: (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.
Prism adaptation does not alter object-based attention in healthy participants.
Bultitude, Janet H; List, Alexandra; Aimola Davies, Anne M
2013-01-01
Hemispatial neglect ('neglect') is a disabling condition that can follow damage to the right side of the brain, in which patients show difficulty in responding to or orienting towards objects and events that occur on the left side of space. Symptoms of neglect can manifest in both space- and object-based frames of reference. Although patients can show a combination of these two forms of neglect, they are considered separable and have distinct neurological bases. In recent years considerable evidence has emerged to demonstrate that spatial symptoms of neglect can be reduced by an intervention called prism adaptation. Patients point towards objects viewed through prismatic lenses that shift the visual image to the right. Approximately five minutes of repeated pointing results in a leftward recalibration of pointing and improved performance on standard clinical tests for neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Here we examined the effect of prism adaptation on the performance of healthy participants who completed a computerised test of space- and object-based attention. Participants underwent adaptation to leftward- or rightward-shifting prisms, or performed neutral pointing according to a between-groups design. Significant pointing after-effects were found for both prism groups, indicating successful adaptation. In addition, the results of the computerised test revealed larger reaction-time costs associated with shifts of attention between two objects compared to shifts of attention within the same object, replicating previous work. However there were no differences in the performance of the three groups, indicating that prism adaptation did not influence space- or object-based attention for this task. 
When combined with existing literature, the results are consistent with the proposal that prism adaptation may only perturb cognitive functions for which normal baseline performance is already biased.
Prism adaptation does not alter object-based attention in healthy participants
Bultitude, Janet H.
2013-01-01
Hemispatial neglect (‘neglect’) is a disabling condition that can follow damage to the right side of the brain, in which patients show difficulty in responding to or orienting towards objects and events that occur on the left side of space. Symptoms of neglect can manifest in both space- and object-based frames of reference. Although patients can show a combination of these two forms of neglect, they are considered separable and have distinct neurological bases. In recent years considerable evidence has emerged to demonstrate that spatial symptoms of neglect can be reduced by an intervention called prism adaptation. Patients point towards objects viewed through prismatic lenses that shift the visual image to the right. Approximately five minutes of repeated pointing results in a leftward recalibration of pointing and improved performance on standard clinical tests for neglect. The understanding of prism adaptation has also been advanced through studies of healthy participants, in whom adaptation to leftward prismatic shifts results in temporary neglect-like performance. Here we examined the effect of prism adaptation on the performance of healthy participants who completed a computerised test of space- and object-based attention. Participants underwent adaptation to leftward- or rightward-shifting prisms, or performed neutral pointing according to a between-groups design. Significant pointing after-effects were found for both prism groups, indicating successful adaptation. In addition, the results of the computerised test revealed larger reaction-time costs associated with shifts of attention between two objects compared to shifts of attention within the same object, replicating previous work. However there were no differences in the performance of the three groups, indicating that prism adaptation did not influence space- or object-based attention for this task. 
When combined with existing literature, the results are consistent with the proposal that prism adaptation may only perturb cognitive functions for which normal baseline performance is already biased. PMID:24715960
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal.
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D; Hubbi, Basil; Liu, Xuan
2015-11-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans had an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of function XCC(δx) on variable δx. We demonstrated the magnitude of the derivative can be maximized. In other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired A-scans and A-scans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement by the time interval between A-scans involved in XCC calculation. We implemented the above-described algorithm in real-time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on phantom as well as in vivo skin tissue.
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D.; Hubbi, Basil; Liu, Xuan
2015-01-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that the cross-correlation coefficient (XCC) between A-scans had an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of function XCC(δx) on variable δx. We demonstrated the magnitude of the derivative can be maximized. In other words, the sensitivity of OCT speckle tracking can be optimized by using signals with an appropriate amount of decorrelation for XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired A-scans and A-scans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement by the time interval between A-scans involved in XCC calculation. We implemented the above-described algorithm in real-time using a graphics processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on phantom as well as in vivo skin tissue. PMID:26600996
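The adaptive interval-selection step described in the abstract can be sketched as follows. This is an illustrative reconstruction, assuming a Gaussian decorrelation model XCC(δx) = exp(−(δx/w)²); the speckle width `w` and the target decorrelation level are made-up parameters, not the calibration used in the paper.

```python
import numpy as np

def xcc(a, b):
    # normalized cross-correlation coefficient between two A-scans
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def adaptive_displacement(ascans, t0, dt, w=10.0, target=np.exp(-0.5)):
    # Compare A-scan t0 against later A-scans at growing time intervals and
    # keep the pair whose decorrelation is closest to a sensitivity-optimal
    # level (here: the point of steepest slope of the assumed Gaussian model).
    best = None
    for k in range(1, len(ascans) - t0):
        c = xcc(ascans[t0], ascans[t0 + k])
        if best is None or abs(c - target) < abs(best[1] - target):
            best = (k, c)
    k, c = best
    c = min(max(c, 1e-6), 1.0 - 1e-9)   # clamp before inverting the model
    dx = w * np.sqrt(-np.log(c))         # invert XCC(dx) = exp(-(dx/w)^2)
    speed = dx / (k * dt)                # displacement over elapsed time
    return dx, speed
```

Larger intervals are only consulted when adjacent A-scans are still too correlated to give a sensitive estimate, which mirrors the adaptive strategy in the abstract.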
Orsini, Luisa; Schwenk, Klaus; De Meester, Luc; Colbourne, John K.; Pfrender, Michael E.; Weider, Lawrence J.
2013-01-01
Evolutionary changes are determined by a complex assortment of ecological, demographic and adaptive histories. Predicting how evolution will shape the genetic structures of populations coping with current (and future) environmental challenges has principally relied on investigations through space, in lieu of time, because long-term phenotypic and molecular data are scarce. Yet, dormant propagules in sediments, soils and permafrost are convenient natural archives of population-histories from which to trace adaptive trajectories along extended time periods. DNA sequence data obtained from these natural archives, combined with pioneering methods for analyzing both ecological and population genomic time-series data, are likely to provide predictive models to forecast evolutionary responses of natural populations to environmental changes resulting from natural and anthropogenic stressors, including climate change. PMID:23395434
Direct Coupling Method for Time-Accurate Solution of Incompressible Navier-Stokes Equations
NASA Technical Reports Server (NTRS)
Soh, Woo Y.
1992-01-01
A noniterative finite difference numerical method is presented for the solution of the incompressible Navier-Stokes equations with second order accuracy in time and space. Explicit treatment of convection and diffusion terms and implicit treatment of the pressure gradient give a single pressure Poisson equation when the discretized momentum and continuity equations are combined. A pressure boundary condition is not needed on solid boundaries in the staggered mesh system. The solution of the pressure Poisson equation is obtained directly by Gaussian elimination. This method is tested on flow problems in a driven cavity and a curved duct.
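The splitting described above can be written schematically (shown first-order in time for brevity; the paper's scheme is second-order):

```latex
\mathbf{u}^{*} = \mathbf{u}^{n} + \Delta t\left[-(\mathbf{u}^{n}\cdot\nabla)\mathbf{u}^{n} + \nu\,\nabla^{2}\mathbf{u}^{n}\right]
```

```latex
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla p^{n+1},
\qquad
\nabla\cdot\mathbf{u}^{n+1} = 0
\;\Rightarrow\;
\nabla^{2} p^{n+1} = \frac{1}{\Delta t}\,\nabla\cdot\mathbf{u}^{*}
```

Substituting the implicit pressure-gradient update into the continuity constraint is what yields the single pressure Poisson equation, which the paper then solves directly by Gaussian elimination rather than iteratively.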
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Y.; Pang, N.; Halpin-Healy, T.
1994-12-01
The linear Langevin equation proposed by Edwards and Wilkinson [Proc. R. Soc. London A 381, 17 (1982)] is solved in closed form for noise of arbitrary space and time correlation. Furthermore, the temporal development of the full probability functional describing the height fluctuations is derived exactly, exhibiting an interesting evolution between two distinct Gaussian forms. We determine explicitly the dynamic scaling function for the interfacial width for any given initial condition, isolate the early-time behavior, and discover an invariance that was unsuspected in this problem of arbitrary spatiotemporal noise.
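For reference, the Edwards-Wilkinson equation in standard notation, with a generic correlator for the noise to match the arbitrary space and time correlations treated above (symbols follow common convention, not necessarily the paper's):

```latex
\partial_t h(\mathbf{x},t) = \nu\,\nabla^{2} h(\mathbf{x},t) + \eta(\mathbf{x},t),
\qquad
\langle \eta(\mathbf{x},t)\,\eta(\mathbf{x}',t') \rangle = 2D\,C(\mathbf{x}-\mathbf{x}',\,t-t')
```

Because the equation is linear, the height field is a linear functional of the Gaussian noise, which is why the full probability functional remains Gaussian and can be tracked in closed form.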
Lunar architecture and urbanism, 2nd ed
NASA Technical Reports Server (NTRS)
Sherwood, Brent
2005-01-01
As the space population grows over time, persistent issues of human urbanism will eclipse within a historically short time the technical challenges of space exploration that dominate current efforts. Although urban design teams will have to integrate many new disciplines into their already renaissance array of expertise, doing so will enable them to adapt ancient, proven solutions to opportunities afforded by expanding urbanism offworld. This paper updates the author's original 1988 treatment of the subject.
On the performance of explicit and implicit algorithms for transient thermal analysis
NASA Astrophysics Data System (ADS)
Adelman, H. M.; Haftka, R. T.
1980-09-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, have been selected and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped parameter program and a special purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail such as avoiding thin or short high-conducting elements can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
Visible, invisible and trapped ghosts as sources of wormholes and black universes
NASA Astrophysics Data System (ADS)
Bolokhov, S. V.; Bronnikov, K. A.; Korolyov, P. A.; Skvortsova, M. V.
2016-02-01
We construct explicit examples of globally regular static, spherically symmetric solutions in general relativity with scalar and electromagnetic fields, describing traversable wormholes with flat and AdS asymptotics and regular black holes, in particular, black universes. (A black universe is a regular black hole with an expanding, asymptotically isotropic space-time beyond the horizon.) Such objects exist in the presence of scalar fields with negative kinetic energy (“phantoms”, or “ghosts”), which are not observed under usual physical conditions. To account for that, we consider what we call “trapped ghosts” (scalars whose kinetic energy is only negative in a strong-field region of space-time) and “invisible ghosts”, i.e., phantom scalar fields sufficiently rapidly decaying in the weak-field region. The resulting configurations contain different numbers of Killing horizons, from zero to four.
Exactly energy conserving semi-implicit particle in cell formulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, Giovanni, E-mail: giovanni.lapenta@kuleuven.be
We report a new particle in cell (PIC) method based on the semi-implicit approach. The novelty of the new method is that unlike any of its semi-implicit predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. Recent research has presented fully implicit methods where energy conservation is obtained as part of a non-linear iteration procedure. The new method (referred to as Energy Conserving Semi-Implicit Method, ECSIM), instead, does not require any non-linear iteration and its computational cycle is similar to that of explicit PIC. The properties of the new method are: i) it conserves energy exactly to round-off for any time step or grid spacing; ii) it is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency and allowing the user to select any desired time step; iii) it eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length; iv) the particle mover has a computational complexity identical to that of the explicit PIC, only the field solver has an increased computational cost. The new ECSIM is tested in a number of benchmarks where accuracy and computational performance are tested. - Highlights: • We present a new fully energy conserving semi-implicit particle in cell (PIC) method based on the implicit moment method (IMM). The new method is called Energy Conserving Implicit Moment Method (ECIMM). • The novelty of the new method is that unlike any of its predecessors at the same time it retains the explicit computational cycle and conserves energy exactly. • The new method is unconditionally stable in time, freeing the user from the need to resolve the electron plasma frequency. • The new method eliminates the constraint of the finite grid instability, allowing the user to select any desired resolution without being forced to resolve the Debye length. 
• These features are achieved at a reduced cost compared with either previous IMM or fully implicit implementation of PIC.
Real-time adaptive finite element solution of time-dependent Kohn-Sham equation
NASA Astrophysics Data System (ADS)
Bao, Gang; Hu, Guanghui; Liu, Di
2015-01-01
In our previous paper (Bao et al., 2012 [1]), a general framework of using adaptive finite element methods to solve the Kohn-Sham equation has been presented. This work is concerned with solving the time-dependent Kohn-Sham equations. The numerical methods are studied in the time domain, which can be employed to explain both the linear and the nonlinear effects. A Crank-Nicolson scheme and linear finite element space are employed for the temporal and spatial discretizations, respectively. To resolve the trouble regions in the time-dependent simulations, a heuristic error indicator is introduced for the mesh adaptive methods. An algebraic multigrid solver is developed to efficiently solve the complex-valued system derived from the semi-implicit scheme. A mask function is employed to remove or reduce the boundary reflection of the wavefunction. The effectiveness of our method is verified by numerical simulations for both linear and nonlinear phenomena, in which the effectiveness of the mesh adaptive methods is clearly demonstrated.
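The Crank-Nicolson update amounts to solving (I + iΔtH/2)ψⁿ⁺¹ = (I − iΔtH/2)ψⁿ at each step. Below is a minimal sketch with a toy 1-D harmonic Hamiltonian; the dense matrices, grid, and parameter values are illustrative only, whereas the paper uses adaptive finite elements and an algebraic multigrid solver for the resulting complex-valued system.

```python
import numpy as np

# Toy 1-D grid and Hamiltonian H = -1/2 d2/dx2 + V (NOT the Kohn-Sham operator)
n, dx, dt = 200, 0.1, 0.01
x = dx * (np.arange(n) - n // 2)
V = 0.5 * x**2                                   # harmonic potential
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2
H = -0.5 * lap + np.diag(V)

A = np.eye(n) + 0.5j * dt * H                    # (I + i dt H / 2)
B = np.eye(n) - 0.5j * dt * H                    # (I - i dt H / 2)

# normalized Gaussian initial state
psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(50):
    psi = np.linalg.solve(A, B @ psi)            # semi-implicit CN update

norm = np.sum(np.abs(psi)**2) * dx               # CN is unitary: norm preserved
```

Because A⁻¹B is a Cayley transform of a Hermitian matrix, the update is unitary, so the wavefunction norm is preserved to solver accuracy regardless of the step size.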
Fisher, Moria E; Huang, Felix C; Wright, Zachary A; Patton, James L
2014-01-01
Manipulation of error feedback has been of great interest to recent studies in motor control and rehabilitation. Typically, motor adaptation is shown as a change in performance with a single scalar metric for each trial, yet such an approach might overlook details about how error evolves through the movement. We believe that statistical distributions of movement error through the extent of the trajectory can reveal unique patterns of adaptation and possibly reveal clues to how the motor system processes information about error. This paper describes different possible ordinate domains, focusing on representations in time and state-space, used to quantify reaching errors. We hypothesized that the domain with the lowest amount of variability would lead to a predictive model of reaching error with the highest accuracy. Here we showed that errors represented in a time domain demonstrate the least variance and allow for the highest predictive model of reaching errors. These predictive models will give rise to more specialized methods of robotic feedback and improve previous techniques of error augmentation.
Time and motion, experiment M151. [human performance and space flight stress
NASA Technical Reports Server (NTRS)
Kubis, J. F.; Elrod, J. T.; Rusnak, R.; Mcbride, G. H.; Barnes, J. E.; Saxon, S. C.
1973-01-01
Astronaut work performance during the preparation and execution of experiments in simulated Skylab tests was analyzed according to time and motion in order to evaluate the efficiency and consistency of performance (adaptation function) for several different types of activity over the course of the mission; to evaluate the procedures to be used by the same experiment in Skylab; to generate characteristic adaptation functions for later comparison with Skylab data; and to examine astronaut performance for any behavioral stress due to the environment. The overall results indicate that the anticipated adaptation function was obtained both for individual and for averaged data.
Local matrix learning in clustering and applications for manifold visualization.
Arnonkijpanich, Banchar; Hasenfuss, Alexander; Hammer, Barbara
2010-05-01
Electronic data sets are increasing rapidly with respect to both size of the data sets and data resolution, i.e. dimensionality, such that adequate data inspection and data visualization have become central issues of data mining. In this article, we present an extension of classical clustering schemes by local matrix adaptation, which allows a better representation of data by means of clusters with an arbitrary ellipsoidal shape. Unlike previous proposals, the method is derived from a global cost function. The focus of this article is to demonstrate the applicability of this matrix clustering scheme to low-dimensional data embedding for data inspection. The proposed method is based on matrix learning for neural gas and manifold charting. This provides an explicit mapping of a given high-dimensional data space to low dimensionality. We demonstrate the usefulness of this method for data inspection and manifold visualization. 2009 Elsevier Ltd. All rights reserved.
Integrating diffusion maps with umbrella sampling: Application to alanine dipeptide
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.; Panagiotopoulos, Athanassios Z.; Debenedetti, Pablo G.; Kevrekidis, Ioannis G.
2011-04-01
Nonlinear dimensionality reduction techniques can be applied to molecular simulation trajectories to systematically extract a small number of variables with which to parametrize the important dynamical motions of the system. For molecular systems exhibiting free energy barriers exceeding a few kBT, inadequate sampling of the barrier regions between stable or metastable basins can lead to a poor global characterization of the free energy landscape. We present an adaptation of a nonlinear dimensionality reduction technique known as the diffusion map that extends its applicability to biased umbrella sampling simulation trajectories in which restraining potentials are employed to drive the system into high free energy regions and improve sampling of phase space. We then propose a bootstrapped approach to iteratively discover good low-dimensional parametrizations by interleaving successive rounds of umbrella sampling and diffusion mapping, and we illustrate the technique through a study of alanine dipeptide in explicit solvent.
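A bare-bones diffusion map, without the umbrella-sampling reweighting the paper introduces, can be sketched as below; the Gaussian kernel and the α = 1 density normalization follow the standard construction, and the bandwidth is a free parameter.

```python
import numpy as np

def diffusion_map(X, eps):
    # Gaussian kernel over pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    q = K.sum(1)
    K = K / np.outer(q, q)              # remove sampling-density bias (alpha = 1)
    d = K.sum(1)
    S = K / np.sqrt(np.outer(d, d))     # symmetric conjugate of the Markov matrix
    w, v = np.linalg.eigh(S)
    order = np.argsort(w)[::-1]
    w, v = w[order], v[:, order]
    psi = v / np.sqrt(d)[:, None]       # right eigenvectors of the Markov matrix
    return w, psi
```

The embedding coordinates are w[k] * psi[:, k] for k = 1, 2, ...; the top pair (w[0] = 1, constant psi[:, 0]) is trivial. Extending this to biased trajectories requires undoing the restraining-potential bias in the kernel weights, which is the contribution described in the abstract and is omitted here.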
No Need for Conspiracy: Self-Organized Cartel Formation in a Modified Trust Game
NASA Astrophysics Data System (ADS)
Peixoto, Tiago P.; Bornholdt, Stefan
2012-05-01
We investigate the dynamics of a trust game on a mixed population, where individuals with the role of buyers are forced to play against a predetermined number of sellers whom they choose dynamically. Agents with the role of sellers are also allowed to adapt the level of value for money of their products, based on payoff. The dynamics undergoes a transition at a specific value of the strategy update rate, above which an emergent cartel organization is observed, where sellers have similar values of below-optimal value for money. This cartel organization is not due to an explicit collusion among agents; instead, it arises spontaneously from the maximization of the individual payoffs. This dynamics is marked by large fluctuations and a high degree of unpredictability for most of the parameter space and serves as a plausible qualitative explanation for observed elevated levels and fluctuations of certain commodity prices.
Design for Verification: Enabling Verification of High Dependability Software-Intensive Systems
NASA Technical Reports Server (NTRS)
Mehlitz, Peter C.; Penix, John; Markosian, Lawrence Z.; Koga, Dennis (Technical Monitor)
2003-01-01
Strategies to achieve confidence that high-dependability applications are correctly implemented include testing and automated verification. Testing deals mainly with a limited number of expected execution paths. Verification usually attempts to deal with a larger number of possible execution paths. While the impact of architecture design on testing is well known, its impact on most verification methods is not as well understood. The Design for Verification approach considers verification from the application development perspective, in which system architecture is designed explicitly according to the application's key properties. The D4V-hypothesis is that the same general architecture and design principles that lead to good modularity, extensibility and complexity/functionality ratio can be adapted to overcome some of the constraints on verification tools, such as the production of hand-crafted models and the limits on dynamic and static analysis caused by state space explosion.
Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions
Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic
2009-01-01
Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919
Michael S. Balshi; A. David McGuire; Paul Duffy; Mike Flannigan; John Walsh; Jerry Melillo
2009-01-01
We developed temporally and spatially explicit relationships between air temperature and fuel moisture codes derived from the Canadian Fire Weather Index System to estimate annual area burned at 2.5° (latitude × longitude) resolution using a Multivariate Adaptive Regression Spline (MARS) approach across Alaska and Canada. Burned area was...
He, Fei; Fromion, Vincent; Westerhoff, Hans V
2013-11-21
Metabolic control analysis (MCA) and supply-demand theory have led to appreciable understanding of the systems properties of metabolic networks that are subject exclusively to metabolic regulation. Supply-demand theory has not yet considered gene-expression regulation explicitly, whilst a variant of MCA, i.e. Hierarchical Control Analysis (HCA), has done so. Existing analyses based on control engineering approaches have not been very explicit about whether metabolic or gene-expression regulation is involved, but have designed different ways in which regulation could be organized, with the potential of making adaptation perfect. This study integrates control engineering and classical MCA augmented with supply-demand theory and HCA. Because gene-expression regulation involves time integration, it is identified as a natural instantiation of the 'integral control' (or near-integral control) known in control engineering. This study then focuses on robustness against, and adaptation to, perturbations of process activities in the network, which could result from environmental perturbations, mutations or slow noise. It is shown, however, that this type of 'integral control' should rarely be expected to lead to 'perfect adaptation': although gene-expression regulation increases the robustness of important metabolite concentrations, it rarely makes them infinitely robust. For perfect adaptation to occur, the protein degradation reactions should be zero order in the concentration of the protein, which may be rare biologically for cells growing steadily. A proposed new framework integrating the methodologies of control engineering and metabolic and hierarchical control analysis improves the understanding of biological systems that are regulated both metabolically and by gene expression.
In particular, the new approach enables one to address the issue whether the intracellular biochemical networks that have been and are being identified by genomics and systems biology, correspond to the 'perfect' regulatory structures designed by control engineering vis-à-vis optimal functions such as robustness. To the extent that they are not, the analyses suggest how they may become so and this in turn should facilitate synthetic biology and metabolic engineering.
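The integral-control intuition behind this abstract can be illustrated with a toy simulation (all parameters invented, not from the paper). A controller state z stands in for an enzyme level set by gene expression; because z purely integrates the deviation of a metabolite x from its setpoint (the idealized limit that, per the abstract, corresponds to zero-order protein degradation), a persistent step perturbation is eventually cancelled exactly:

```python
# Illustrative sketch of "perfect adaptation" via integral control.
# x: metabolite concentration, z: integrating (gene-expression) state.
# Dynamics dx/dt = p + z - x, dz/dt = ki*(setpoint - x) are invented
# stand-ins; at steady state dz/dt = 0 forces x back to the setpoint
# regardless of the perturbation p.

def simulate_integral(p, t_end=200.0, dt=0.01, ki=0.1, setpoint=1.0):
    x, z = setpoint, 0.0
    t = 0.0
    while t < t_end:
        dx = p + z - x                # perturbed flux balance
        dz = ki * (setpoint - x)      # integral action on the error
        x += dx * dt
        z += dz * dt
        t += dt
    return x

x = simulate_integral(p=0.5)
print(abs(x - 1.0) < 1e-3)  # True: the step perturbation is rejected exactly
```

With proportional (non-integrating) feedback instead, x would settle at an offset from the setpoint; the pure integrator is what makes the adaptation perfect.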
Bell-Kochen-Specker theorem for any finite dimension?
NASA Astrophysics Data System (ADS)
Cabello, Adán; García-Alcaine, Guillermo
1996-03-01
The Bell-Kochen-Specker theorem against non-contextual hidden variables can be proved by constructing a finite set of `totally non-colourable' directions, as Kochen and Specker did in a Hilbert space of dimension n = 3. We generalize Kochen and Specker's set to Hilbert spaces of any finite dimension, in a three-step process that shows the relationship between different kinds of proofs (`continuum', `probabilistic', `state-specific' and `state-independent') of the Bell-Kochen-Specker theorem. At the same time, this construction of a totally non-colourable set of directions in any dimension explicitly solves the question raised by Zimba and Penrose about the existence of such a set for n = 5.
Abdelnour, A. Farras; Huppert, Theodore
2009-01-01
Near-infrared spectroscopy is a non-invasive neuroimaging method which uses light to measure changes in cerebral blood oxygenation associated with brain activity. In this work, we demonstrate the ability to record and analyze images of brain activity in real-time using a 16-channel continuous wave optical NIRS system. We propose a novel real-time analysis framework using an adaptive Kalman filter and a state–space model based on a canonical general linear model of brain activity. We show that our adaptive model has the ability to estimate single-trial brain activity events as we apply this method to track and classify experimental data acquired during an alternating bilateral self-paced finger tapping task. PMID:19457389
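The adaptive Kalman filtering at the core of this real-time framework can be sketched in scalar form (this is a generic illustration, not the authors' 16-channel GLM pipeline; the state model, noise variances, and data are invented):

```python
# Minimal scalar Kalman filter: track a slowly varying signal level from
# noisy measurements, as a stand-in for estimating a hemodynamic amplitude.
import random

def kalman_track(measurements, q=1e-3, r=0.5):
    """Random-walk state model: x_k = x_{k-1} + w, z_k = x_k + v."""
    x, p = 0.0, 1.0          # state estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q            # predict: variance grows by process noise
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with the measurement innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_level = 2.0
zs = [true_level + random.gauss(0.0, 0.7) for _ in range(200)]
est = kalman_track(zs)
print(abs(est[-1] - true_level) < 0.5)  # True: the estimate settles near 2.0
```

Because each update uses only the newest sample, the estimate is available in real time, which is the property the abstract exploits for single-trial analysis.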
Optimal coordination and control of posture and movements.
Johansson, Rolf; Fransson, Per-Anders; Magnusson, Måns
2009-01-01
This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.
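The paper's route of obtaining optimal control by solving an algebraic matrix equation can be shown in the scalar case, where the (Riccati) equation has a closed-form root (a toy stand-in for the rigid-body model; dynamics and weights are invented):

```python
# Scalar LQR sketch: for dx/dt = a*x + b*u with cost integral(q*x^2 + r*u^2),
# the algebraic Riccati equation 2*a*p - (b**2/r)*p**2 + q = 0 is solved in
# closed form and the optimal feedback u = -k*x follows.
import math

def scalar_lqr(a, b, q, r):
    p = (a + math.sqrt(a * a + q * b * b / r)) * r / (b * b)  # positive root
    return b * p / r                                          # gain k

k = scalar_lqr(a=1.0, b=1.0, q=1.0, r=1.0)
print(1.0 - k < 0)  # True: the closed loop dx/dt = (a - b*k)*x is stable
```

The positive Riccati root p also yields the Lyapunov function V(x) = p*x**2, mirroring the abstract's global asymptotic stability argument.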
Aerodynamics of Engine-Airframe Interaction
NASA Technical Reports Server (NTRS)
Caughey, D. A.
1986-01-01
The report describes progress in research directed towards the efficient solution of the inviscid Euler and Reynolds-averaged Navier-Stokes equations for transonic flows through engine inlets, and past complete aircraft configurations, with emphasis on the flowfields in the vicinity of engine inlets. The research focusses upon the development of solution-adaptive grid procedures for these problems, and the development of multi-grid algorithms in conjunction with both implicit and explicit time-stepping schemes for the solution of three-dimensional problems. The work includes further development of mesh systems suitable for inlet and wing-fuselage-inlet geometries using a variational approach. Work during this reporting period concentrated upon two-dimensional problems, and has been in two general areas: (1) the development of solution-adaptive procedures to cluster the grid cells in regions of high (truncation) error; and (2) the development of a multigrid scheme for solution of the two-dimensional Euler equations using a diagonalized alternating direction implicit (ADI) smoothing algorithm.
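The solution-adaptive idea in item (1) can be sketched in one dimension: flag cells whose local error indicator is large and subdivide them (the indicator and threshold here are a simple second-difference illustration, not the report's truncation-error estimator):

```python
# 1D solution-adaptive refinement sketch: add grid points where a crude
# error indicator (second difference of the solution) exceeds a tolerance.

def refine(xs, u, tol):
    """xs: grid points, u: solution samples; returns the refined grid."""
    out = [xs[0]]
    for i in range(1, len(xs) - 1):
        indicator = abs(u[i - 1] - 2.0 * u[i] + u[i + 1])
        if indicator > tol:
            out.append(0.5 * (xs[i - 1] + xs[i]))  # midpoint near high error
        out.append(xs[i])
    out.append(xs[-1])
    return out

xs = [i * 0.1 for i in range(11)]
u = [0.0 if x < 0.5 else 1.0 for x in xs]   # a step: error localized there
refined = refine(xs, u, tol=0.5)
print(len(refined) > len(xs))  # True: points were clustered near the step
```

A smooth solution leaves the grid unchanged, so resolution is spent only where the flow (here, the step) demands it.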
Reengineering observatory operations for the time domain
NASA Astrophysics Data System (ADS)
Seaman, Robert L.; Vestrand, W. T.; Hessman, Frederic V.
2014-07-01
Observatories are complex scientific and technical institutions serving diverse users and purposes. Their telescopes, instruments, software, and human resources engage in interwoven workflows over a broad range of timescales. These workflows have been tuned to be responsive to concepts of observatory operations that were applicable when various assets were commissioned, years or decades in the past. The astronomical community is entering an era of rapid change increasingly characterized by large time domain surveys, robotic telescopes and automated infrastructures, and - most significantly - by operating modes and scientific consortia that span our individual facilities, joining them into complex network entities. Observatories must adapt, and numerous initiatives are in progress that focus on redesigning individual components of the astronomical toolkit. New instrumentation is both more capable and more complex than ever, and even simple instruments may have powerful observation scripting capabilities. Remote and queue observing modes are now widespread. Data archives are becoming ubiquitous. Virtual observatory standards and protocols, and the astroinformatics data-mining techniques layered on these, are areas of active development. Indeed, new large-aperture ground-based telescopes may be as expensive as space missions and have similarly formal project management processes and large data management requirements. This piecewise approach is not enough. Whatever challenges of funding or politics face the national and international astronomical communities, it will be more efficient - scientifically as well as in the usual figures of merit of cost, schedule, performance, and risks - to explicitly address the systems engineering of the astronomical community as a whole.
Modeling Forest Biomass and Growth: Coupling Long-Term Inventory and Lidar Data
NASA Technical Reports Server (NTRS)
Babcock, Chad; Finley, Andrew O.; Cook, Bruce D.; Weiskittel, Andrew; Woodall, Christopher W.
2016-01-01
Combining spatially-explicit long-term forest inventory and remotely sensed information from Light Detection and Ranging (LiDAR) datasets through statistical models can be a powerful tool for predicting and mapping above-ground biomass (AGB) at a range of geographic scales. We present and examine a novel modeling approach to improve prediction of AGB and estimate AGB growth using LiDAR data. The proposed model accommodates temporal misalignment between field measurements and remotely sensed data - a problem pervasive in such settings - by including multiple time-indexed measurements at plot locations to estimate AGB growth. We pursue a Bayesian modeling framework that allows for appropriately complex parameter associations and uncertainty propagation through to prediction. Specifically, we identify a space-varying coefficients model to predict and map AGB and its associated growth simultaneously. The proposed model is assessed using LiDAR data acquired from NASA Goddard's LiDAR, Hyper-spectral & Thermal imager and field inventory data from the Penobscot Experimental Forest in Bradley, Maine. The proposed model outperformed the time-invariant counterpart models in predictive performance as indicated by a substantial reduction in root mean squared error. The proposed model adequately accounts for temporal misalignment through the estimation of forest AGB growth and accommodates residual spatial dependence. Results from this analysis suggest that future AGB models informed using remotely sensed data, such as LiDAR, may be improved by adapting traditional modeling frameworks to account for temporal misalignment and spatial dependence using random effects.
Numerical simulation of phase transition problems with explicit interface tracking
Hu, Yijing; Shi, Qiangqiang; de Almeida, Valmor F.; ...
2015-12-19
Phase change is ubiquitous in nature and industrial processes. Starting from the Stefan problem, it is a topic with a long history in applied mathematics and sciences and continues to generate outstanding mathematical problems. For instance, the explicit tracking of the Gibbs dividing surface between phases is still a grand challenge. Our work has been motivated by such challenges, and here we report on progress made in solving the governing equations of continuum transport in the presence of a moving interface by the front tracking method. The most pressing issue is the accounting of topological changes suffered by the interface between phases wherein break-up and/or merging takes place. The underlying physics of topological changes requires the incorporation of space-time subscales not at reach at the moment. Therefore we use heuristic geometrical arguments to reconnect phases in space. This heuristic approach provides new insight in various applications and it is extensible to include subscale physics and chemistry in the future. We demonstrate the method on applications such as simulating freezing, melting, dissolution, and precipitation. The latter examples also include the coupling of the phase transition solution with the Navier-Stokes equations for the effect of flow convection.
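Explicit interface tracking can be illustrated on the simplest Stefan-type setting: a quasi-steady one-phase melting front whose velocity follows the Stefan condition ds/dt = beta/s, giving the classical square-root-of-time growth (a textbook toy, not the front tracking method of the paper; the constants are invented):

```python
# 1D explicit front tracking sketch for a quasi-steady Stefan problem:
# the interface position s is advanced with its own evolution equation
# ds/dt = beta/s, whose exact solution is s(t) = sqrt(s0**2 + 2*beta*t).
import math

def track_front(beta=0.5, s0=0.1, t_end=4.0, dt=1e-4):
    s, t = s0, 0.0
    while t < t_end:
        s += (beta / s) * dt     # explicit update of the tracked interface
        t += dt
    return s

s_num = track_front()
s_exact = math.sqrt(0.1 ** 2 + 2 * 0.5 * 4.0)
print(abs(s_num - s_exact) < 1e-2)  # True: tracking matches sqrt(t) growth
```

Real front tracking carries a full mesh of interface markers and must handle merging and break-up, which is exactly the topological difficulty the abstract highlights.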
Integrating spatially explicit representations of landscape perceptions into land change research
Dorning, Monica; Van Berkel, Derek B.; Semmens, Darius J.
2017-01-01
Purpose of Review: Human perceptions of the landscape can influence land-use and land-management decisions. Recognizing the diversity of landscape perceptions across space and time is essential to understanding land change processes and emergent landscape patterns. We summarize the role of landscape perceptions in the land change process, demonstrate advances in quantifying and mapping landscape perceptions, and describe how these spatially explicit techniques have benefited and may benefit land change research. Recent Findings: Mapping landscape perceptions is becoming increasingly common, particularly in research focused on quantifying ecosystem services provision. Spatial representations of landscape perceptions, often measured in terms of landscape values and functions, provide an avenue for matching social and environmental data in land change studies. Integrating these data can provide new insights into land change processes, contribute to landscape planning strategies, and guide the design and implementation of land change models. Summary: Challenges remain in creating spatial representations of human perceptions. Maps must be accompanied by descriptions of whose perceptions are being represented and the validity and uncertainty of those representations across space. With these considerations, rapid advancements in mapping landscape perceptions hold great promise for improving representation of human dimensions in landscape ecology and land change research.
de Lamare, Rodrigo C; Sampaio-Neto, Raimundo
2008-11-01
A space-time adaptive decision feedback (DF) receiver using recurrent neural networks (RNNs) is proposed for joint equalization and interference suppression in direct-sequence code-division multiple-access (DS-CDMA) systems equipped with antenna arrays. The proposed receiver structure employs dynamically driven RNNs in the feedforward section for equalization and multiaccess interference (MAI) suppression and a finite impulse response (FIR) linear filter in the feedback section for performing interference cancellation. A data selective gradient algorithm, based upon the set-membership (SM) design framework, is proposed for the estimation of the coefficients of RNN structures and is applied to the estimation of the parameters of the proposed neural receiver structure. Simulation results show that the proposed techniques achieve significant performance gains over existing schemes.
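The data-selective, set-membership idea in this abstract can be sketched with a linear FIR filter (the paper applies it to recurrent-network receivers; this simplified SM-NLMS-style illustration with invented data shows the selective-update mechanism only):

```python
# Set-membership (data-selective) adaptive filter sketch: coefficients are
# updated only when the output error magnitude exceeds a bound gamma.
import random

def sm_nlms(xs, ds, taps=2, gamma=0.1, eps=1e-8):
    w = [0.0] * taps
    updates = 0
    for n in range(taps - 1, len(xs)):
        x = xs[n - taps + 1:n + 1][::-1]          # regressor, newest first
        y = sum(wi * xi for wi, xi in zip(w, x))  # filter output
        e = ds[n] - y
        if abs(e) > gamma:                        # set-membership test
            norm2 = sum(xi * xi for xi in x) + eps
            step = (1.0 - gamma / abs(e)) * e / norm2
            w = [wi + step * xi for wi, xi in zip(w, x)]
            updates += 1
    return w, updates

# identify a known 2-tap channel h = [0.5, -0.3] from its own output
random.seed(1)
xs = [random.uniform(-1, 1) for _ in range(500)]
ds = [0.0] + [0.5 * xs[n] - 0.3 * xs[n - 1] for n in range(1, len(xs))]
w, updates = sm_nlms(xs, ds)
print(updates < len(xs))  # True: only a fraction of samples trigger updates
```

Skipping updates when the error is already within gamma is what gives set-membership schemes their reduced computational cost relative to filters that adapt at every sample.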
Improving carbon monitoring and reporting in forests using spatially-explicit information.
Boisvenue, Céline; Smiley, Byron P; White, Joanne C; Kurz, Werner A; Wulder, Michael A
2016-12-01
Understanding and quantifying carbon (C) exchanges between the biosphere and the atmosphere - specifically the process of C removal from the atmosphere, and how this process is changing - is the basis for developing appropriate adaptation and mitigation strategies for climate change. Monitoring forest systems and reporting on greenhouse gas (GHG) emissions and removals are now required components of international efforts aimed at mitigating rising atmospheric GHG. Spatially-explicit information about forests can improve the estimates of GHG emissions and removals. However, at present, remotely-sensed information on forest change is not commonly integrated into GHG reporting systems. New, detailed (30-m spatial resolution) forest change products derived from satellite time series informing on location, magnitude, and type of change, at an annual time step, have recently become available. Here we estimate the forest GHG balance using these new Landsat-based change data, a spatial forest inventory, and developed yield curves as inputs to the Carbon Budget Model of the Canadian Forest Sector (CBM-CFS3) to estimate GHG emissions and removals at a 30 m resolution for a 13 Mha pilot area in Saskatchewan, Canada. Our results depict the forests as a cumulative C sink (17.98 Tg C, or 0.64 Tg C year⁻¹) between 1984 and 2012 with an average C density of 206.5 (±0.6) Mg C ha⁻¹. Comparisons between our estimates and estimates from Canada's National Forest Carbon Monitoring, Accounting and Reporting System (NFCMARS) were possible only on a subset of our study area. In our simulations the area was a C sink, while in the official reporting simulations it was a C source. Forest area and overall C stock estimates also differ between the two simulated estimates. Both estimates have similar uncertainties, but the spatially-explicit results we present here better quantify the potential improvement brought on by spatially-explicit modelling.
We discuss the source of the differences between these estimates. This study represents an important first step towards the integration of spatially-explicit information into Canada's NFCMARS.
Open space preservation, property value, and optimal spatial configuration
Yong Jiang; Stephen K. Swallow
2007-01-01
The public has increasingly demonstrated a strong support for open space preservation. How to finance the socially efficient level of open space with the optimal spatial structure is of high policy relevance to local governments. In this study, we developed a spatially explicit open space model to help identify the socially optimal amount and optimal spatial...
Trajectory Specification for Automation of Terminal Air Traffic Control
NASA Technical Reports Server (NTRS)
Paielli, Russell A.
2016-01-01
"Trajectory specification" is the explicit bounding and control of aircraft trajectories such that the position at each point in time is constrained to a precisely defined volume of space. The bounding space is defined by cross-track, along-track, and vertical tolerances relative to a reference trajectory that specifies position as a function of time. The tolerances are dynamic and will be based on the aircraft navigation capabilities and the current traffic situation. A standard language will be developed to represent these specifications and to communicate them by datalink. Assuming conformance, trajectory specification can guarantee safe separation for an arbitrary period of time even in the event of an air traffic control (ATC) system or datalink failure, hence it can help to achieve the high level of safety and reliability needed for ATC automation. As a more proactive form of ATC, it can also maximize airspace capacity and reduce the reliance on tactical backup systems during normal operation. It applies to both enroute airspace and the terminal area around airports, but this paper focuses on arrival spacing in the terminal area and presents ATC algorithms and software for achieving a specified delay of runway arrival time.
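A conformance test against such a specification can be sketched directly from the definition: resolve the deviation from the reference position into along-track, cross-track, and vertical components and compare each with its tolerance (the geometry, units, and tolerance values below are invented for illustration):

```python
# Trajectory-specification conformance sketch: is the aircraft inside the
# bounding volume around the reference position at this instant?
import math

def conforms(actual, ref_pos, ref_heading_deg, tol_along, tol_cross, tol_vert):
    dx = actual[0] - ref_pos[0]
    dy = actual[1] - ref_pos[1]
    dz = actual[2] - ref_pos[2]
    h = math.radians(ref_heading_deg)
    ux, uy = math.sin(h), math.cos(h)   # unit along-track vector (east-north)
    along = dx * ux + dy * uy           # projection onto the track direction
    cross = dx * uy - dy * ux           # signed lateral deviation
    return (abs(along) <= tol_along and abs(cross) <= tol_cross
            and abs(dz) <= tol_vert)

# aircraft 0.5 nmi ahead, 0.2 nmi off-track, 150 ft low of the reference point
ok = conforms((0.5, -0.2, -150.0), (0.0, 0.0, 0.0), 90.0,
              tol_along=1.0, tol_cross=0.5, tol_vert=200.0)
print(ok)  # True
```

Since the tolerances are dynamic in the paper's scheme, a real implementation would look them up per aircraft and per traffic situation rather than pass constants.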
Rand, Miya K.; Rentsch, Sebastian
2016-01-01
This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task under the use of terminal visual feedback. Young adults made reaching movements to targets on a digitizer while looking at targets on a monitor where the rotated feedback (a cursor) of hand movements appeared after each movement. Three rotation angles (30°, 75° and 150°) were examined in three groups in order to vary the task difficulty. The results showed that the 30° group gradually reduced direction errors of reaching with practice and adapted well to the visuomotor rotation. The 75° group made large direction errors of reaching, and the 150° group applied a 180° reversal shift from early practice. The 75° and 150° groups, however, overcompensated the respective rotations at the end of practice. Despite these group differences in adaptive changes of reaching, all groups gradually adapted gaze directions prior to reaching from the target area to the areas related to the final positions of reaching during the course of practice. The adaptive changes of both hand and eye movements in all groups mainly reflected adjustments of movement directions based on explicit knowledge of the applied rotation acquired through practice. Only the 30° group showed small implicit adaptation in both effectors. The results suggest that by adapting gaze directions from the target to the final position of reaching based on explicit knowledge of the visuomotor rotation, the oculomotor system supports the limb-motor system to make precise preplanned adjustments of reaching directions during learning of visuomotor rotation under terminal visual feedback. PMID:27812093
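The distinction between explicit strategy and implicit adaptation drawn in this abstract can be caricatured in a few lines (a generic textbook-style model with invented learning rate, not the authors' analysis): an explicit re-aimer cancels the rotation immediately, while a purely implicit, error-driven learner reduces the direction error only gradually over trials:

```python
# Toy visuomotor rotation model: the cursor direction is the aimed direction
# plus the imposed rotation; an implicit state adapts in proportion to the
# observed error, while an explicit strategy subtracts the known rotation.

def simulate_adaptation(rotation_deg, n_trials, lr=0.02, explicit=True):
    implicit = 0.0
    errors = []
    for _ in range(n_trials):
        aim = (-rotation_deg if explicit else 0.0) - implicit
        cursor = aim + rotation_deg      # terminal feedback is rotated
        error = cursor - 0.0             # target direction is 0 degrees
        implicit += lr * error           # slow, error-driven adaptation
        errors.append(error)
    return errors

errs = simulate_adaptation(30.0, 100, explicit=False)
print(abs(errs[-1]) < abs(errs[0]))  # True: errors shrink with practice
```

With explicit=True the first-trial error is already zero, mirroring the paper's finding that performance changes were dominated by explicit knowledge of the rotation.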
NASA Astrophysics Data System (ADS)
Meng, R.; Wu, J.; Zhao, F. R.; Kathy, S. L.; Dennison, P. E.; Cook, B.; Hanavan, R. P.; Serbin, S.
2016-12-01
As a primary disturbance agent, fire significantly influences forest ecosystems, including the modification or resetting of vegetation composition and structure, which can then significantly impact landscape-scale plant function and carbon stocks. Most ecological processes associated with fire effects (e.g. tree damage, mortality, and vegetation recovery) display fine-scale, species-specific responses but can also vary spatially within the boundary of the perturbation. For example, both oak and pine species are fire-adapted, but fire can still induce changes in composition, structure, and dominance in a mixed pine-oak forest, mainly because of their varying degrees of fire adaptation. Evidence of post-fire shifts in dominance between oak and pine species has been documented in mixed pine-oak forests, but these processes have been poorly investigated in a spatially explicit manner. In addition, traditional field-based means of quantifying the response of partially damaged trees across space and time is logistically challenging. Here we show how combining high resolution satellite imagery (i.e. WorldView-2, WV-2) and airborne imaging spectroscopy and LiDAR (i.e. NASA Goddard's Lidar, Hyperspectral and Thermal airborne imager, G-LiHT) can be effectively used to remotely quantify spatial and temporal patterns of vegetation recovery following a top-killing fire that occurred in 2012 within mixed pine-oak forests in the Long Island Central Pine Barrens Region, New York. We explore the following questions: 1) what are the impacts of fire on species composition, dominance, plant health, and vertical structure; 2) what are the recovery trajectories of forest biomass, structure, and spectral properties for three years following the fire; and 3) to what extent can fire impacts be captured and characterized by multi-sensor remote sensing techniques from active and passive optical remote sensing?
Method and system for determining induction motor speed
Parlos, Alexander G.; Bharadwaj, Raj M.
2004-03-30
A non-linear, semi-parametric neural network-based adaptive filter, utilized to determine the dynamic speed of a rotating rotor within an induction motor without the explicit use of a speed sensor such as a tachometer, is disclosed. The neural network-based filter is developed using actual motor current measurements, voltage measurements, and nameplate information. The neural network-based adaptive filter is trained using an estimated speed calculator derived from the actual current and voltage measurements. The neural network-based adaptive filter uses voltage and current measurements to determine the instantaneous speed of a rotating rotor. The neural network-based adaptive filter also includes an on-line adaptation scheme that permits the filter to be readily adapted for new operating conditions during operations.
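For context, a sketch of the kind of model-based estimate an "estimated speed calculator" could supply as a training target: synchronous speed from nameplate data, reduced by slip (standard induction-motor relations with invented example values; the patent's actual calculator is derived from current and voltage measurements):

```python
# Nameplate-based speed relations for an induction motor:
# synchronous speed n_s = 120*f/p, rotor speed n = n_s*(1 - slip).

def synchronous_speed_rpm(line_freq_hz, poles):
    return 120.0 * line_freq_hz / poles

def rotor_speed_rpm(line_freq_hz, poles, slip):
    return synchronous_speed_rpm(line_freq_hz, poles) * (1.0 - slip)

print(round(rotor_speed_rpm(60.0, 4, 0.03), 1))  # 1746.0 rpm, a typical 4-pole motor
```

The neural filter in the patent refines such estimates on-line from measured signals, which is what removes the need for a physical speed sensor.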
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations in time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometers to several micrometers while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time dependent dynamic equations on particle size and further apply them to the spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations in time and particle size, because the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve this very efficiently. Daubechies' wavelets are considered in the study because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form due to the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics.
On the theoretical side, the global existence and uniqueness of solutions of continuous-time wavelet numerical methods for the nonlinear aerosol dynamics are proved using Schauder's fixed point theorem and the variational technique. Optimal error estimates are derived for both continuous and discrete time wavelet Galerkin schemes. We further derive a reliable and efficient a posteriori error estimate, based on stable multiresolution wavelet bases, together with an adaptive space-time algorithm for the efficient solution of linear parabolic differential equations. The adaptive space refinement strategies, based on the locality of the corresponding multiresolution processes, are proved to converge. Finally, we develop efficient numerical methods by combining the wavelet methods proposed in the previous parts with a splitting technique to solve the spatial aerosol dynamic equations. Wavelet methods along the particle size direction and the upstream finite difference method along the spatial direction are alternately used in each time interval. Numerical experiments demonstrate the effectiveness of the developed methods.
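The multiresolution idea underlying the adaptive scheme can be shown with the simplest wavelet, the Haar basis (the thesis uses Daubechies wavelets of higher order; this toy with an invented signal illustrates only why adaptivity pays off):

```python
# One Haar analysis level: split a signal into coarse averages and fine
# details; smooth regions produce (near-)zero details, so an adaptive method
# needs to keep fine-scale information only where the solution varies sharply.
import math

def haar_step(signal):
    """One analysis level; signal length must be even."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

sig = [1.0, 1.0, 1.0, 1.0, 5.0, 1.0, 1.0, 1.0]   # smooth except one spike
approx, detail = haar_step(sig)
significant = [abs(d) > 1e-12 for d in detail]
print(significant)  # only the pair containing the spike has a nonzero detail
```

Thresholding the small details and refining only where they are large is precisely the size-direction adaptivity the thesis exploits for sharply peaked aerosol distributions.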
NASA Astrophysics Data System (ADS)
Yang, Xinxin; Ge, Shuzhi Sam; He, Wei
2018-04-01
In this paper, both the closed-form dynamics and adaptive robust tracking control of a space robot with two-link flexible manipulators under unknown disturbances are developed. The dynamic model of the system is described with the assumed modes approach and the Lagrangian method. The flexible manipulators are represented as Euler-Bernoulli beams. Based on the singular perturbation technique, the displacements/joint angles and flexible modes are modelled as slow and fast variables, respectively. A sliding mode control is designed for trajectory tracking of the slow subsystem under unknown but bounded disturbances, and an adaptive sliding mode control is derived for the slow subsystem under unknown slowly time-varying disturbances. An optimal linear quadratic regulator method is proposed for the fast subsystem to damp out the vibrations of the flexible manipulators. Theoretical analysis validates the stability of the proposed composite controller. Numerical simulation results demonstrate the performance of the closed-loop flexible space robot system.
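Sliding mode control of the kind used for the slow subsystem can be sketched on a disturbed double integrator (a generic stand-in for the paper's dynamics; gains, disturbance, and time step are invented): the sliding variable s = de + lam*e is driven to zero by a switching term, after which the error decays along the surface despite the disturbance:

```python
# Sliding mode control sketch for error dynamics dde = u + d with a bounded
# unknown disturbance d. The switching gain k must dominate |d|.
import math

def simulate_smc(t_end=10.0, dt=1e-3, lam=2.0, k=3.0):
    e, de = 1.0, 0.0                   # tracking error and its rate
    t = 0.0
    while t < t_end:
        s = de + lam * e               # sliding variable
        d = math.sin(2.0 * t)          # unknown but bounded disturbance
        u = -lam * de - k * (1 if s > 0 else -1 if s < 0 else 0)
        e += de * dt
        de += (u + d) * dt             # dde = u + d
        t += dt
    return e, de

e, de = simulate_smc()
print(abs(e) < 0.05)  # True: the error is driven near zero despite d(t)
```

The adaptive variant in the paper essentially tunes the switching gain on-line instead of assuming a known disturbance bound.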
NASA Technical Reports Server (NTRS)
Burchard, E. C.
1975-01-01
The physiological and psychological factors of manned space flight had a particular significance in the Skylab missions, during which astronauts were subjected to life in a space environment for longer periods of time than on previous space missions. The Skylab missions demonstrated again the great adaptability of human physiology to the environment of man. The results of Skylab also indicated approaches for enhancing the capability of man to tolerate the physiological and psychological stresses of space flight.