Science.gov

Sample records for adaptive background model

  1. Suppression of Background Odor Effect in Odor Sensing System Using Olfactory Adaptation Model

    NASA Astrophysics Data System (ADS)

    Ohba, Tsuneaki; Yamanaka, Takao

    In this study, a new method for suppressing the background odor effect is proposed. Since odor sensors respond to background odors in addition to a target odor, it is difficult to extract the target odor information. In conventional odor sensing systems, the effect of background odors is compensated by subtracting the response to the background odors (the baseline response). Although this simple subtraction is effective for constant background odors, it fails to compensate for time-varying background odors. The proposed background-suppression method is effective even for time-varying background odors.
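    The failure of static baseline subtraction under drift can be illustrated with a toy tracker. The sketch below is not the paper's model; the function name, the `alpha` value, and the exponential-moving-average scheme are illustrative assumptions.

```python
def ema_corrected(readings, alpha=0.05):
    """Subtract an exponentially weighted moving-average baseline.

    A toy illustration of compensating a drifting (time-varying)
    background; static subtraction of the first reading cannot do this.
    """
    baseline = readings[0]
    corrected = []
    for r in readings:
        corrected.append(r - baseline)                 # residual after compensation
        baseline = (1 - alpha) * baseline + alpha * r  # track the drift
    return corrected
```

    For a linearly drifting background the tracked baseline lags by roughly drift_rate / alpha, whereas static subtraction of the initial baseline accumulates the full drift.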

  2. In-Depth Functional Diagnostics of Mouse Models by Single-Flash and Flicker Electroretinograms without Adapting Background Illumination.

    PubMed

    Tanimoto, Naoyuki; Michalakis, Stylianos; Weber, Bernhard H F; Wahl-Schott, Christian A; Hammes, Hans-Peter; Seeliger, Mathias W

    2016-01-01

    Electroretinograms (ERGs) are commonly recorded at the cornea to assess the functional status of the retina in mouse models. Full-field ERGs can be elicited by single-flash as well as flicker light stimulation, although in most laboratories flicker ERGs are recorded much less frequently than single-flash ERGs. Whereas conventional single-flash ERGs contain information about retinal layers, i.e., the outer and inner retina, flicker ERGs permit functional assessment of the vertical pathways of the retina, i.e., the rod system, cone ON-pathway, and cone OFF-pathway, when the responses are evoked at a relatively high luminance (0.5 log cd·s/m²) with varying frequency (from 0.5 to 30 Hz) without any adapting background illumination. Therefore, both types of ERGs complement an in-depth functional characterization of the mouse retina, allowing for a discrimination of the underlying functional pathology. Here, we introduce the systematic interpretation of single-flash and flicker ERGs by demonstrating several different patterns of functional phenotype in genetic mouse models in which photoreceptors and/or bipolar cells are primarily or secondarily affected. PMID:26427467

  3. The GLAST Background Model

    SciTech Connect

    Ormes, J.F.; Atwood, W.; Burnett, T.; Grove, E.; Longo, F.; McEnery, J.; Mizuno, T.; Ritz, S.; /NASA, Goddard

    2007-10-17

    In order to estimate the ability of the GLAST/LAT to reject unwanted background of charged particles, optimize the on-board processing, size the required telemetry and optimize the GLAST orbit, we developed a detailed model of the background particles that would affect the LAT. In addition to the well-known components of the cosmic radiation, we included splash and reentrant components of protons and electrons (e+ and e-) from 10 MeV upward, as well as the albedo gamma rays produced by cosmic ray interactions with the atmosphere. We made estimates of the irreducible background components produced by positrons and hadrons interacting in the multilayered micrometeorite shield and spacecraft surrounding the LAT, and note that because orbital debris has increased, the required shielding, and hence the background, are larger than they were for EGRET. Improvements to the model are currently being made to include the east-west effect.

  4. Temporal dark adaptation to spatially complex backgrounds: effect of an additional light source.

    PubMed

    Stokkermans, M G M; Heynderickx, I E J

    2014-07-01

    Visual adaptation (and especially dark adaptation) has been studied extensively in the past; however, these studies mainly addressed adaptation to fully dark backgrounds. It remains unclear whether their results can be applied to complex situations, such as predicting the adaptation of a motorist driving at night. To fill this gap we set up a study investigating how spatially complex backgrounds influence temporal dark adaptation. Our results showed that dark adaptation to spatially complex backgrounds leads to much longer adaptation times than dark adaptation to spatially uniform backgrounds. We therefore conclude that adaptation models based on past studies overestimate the visual system's sensitivity to detect luminance variations in spatially complex environments. Our results also showed large variations in adaptation times when varying the degree of spatial complexity of the background. Hence, when predicting dark adaptation for complex environments, it is important to use models that are based on spatially complex backgrounds.

  5. Sensorimotor adaptation is influenced by background music.

    PubMed

    Bock, Otmar

    2010-06-01

    It is well established that listening to music can modify subjects' cognitive performance. The present study evaluates whether this so-called Mozart Effect extends beyond cognitive tasks and includes sensorimotor adaptation. Three subject groups listened to musical pieces that in the author's judgment were serene, neutral, or sad, respectively. This judgment was confirmed by the subjects' introspective reports. While listening to music, subjects engaged in a pointing task that required them to adapt to rotated visual feedback. All three groups adapted successfully, but the speed and magnitude of adaptive improvement were more pronounced with serene music than with the other two music types. In contrast, aftereffects upon restoration of normal feedback were independent of music type. These findings support the existence of a "Mozart effect" for strategic movement control, but not for adaptive recalibration. Possibly, listening to music modifies neural activity in an intertwined cognitive-emotional network.

  6. Psychological Adaptation of Adolescents with Immigrant Backgrounds.

    ERIC Educational Resources Information Center

    Sam, David Lackland

    2000-01-01

    Examines three theoretical perspectives (family values, acculturation strategies, and social group identity) as predictors of the psychological well-being of adolescents from immigrant backgrounds. Reveals that the perspectives accounted for between 12% and 22% of variance of mental health, life satisfaction, and self-esteem, while social group…

  7. An auto-adaptive background subtraction method for Raman spectra

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-01

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual parameter tuning is necessary for efficient removal of the background, which makes the quality of the spectrum operator-dependent. To avoid this artificial bias, we propose an auto-adaptive background subtraction method that requires no parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of the background subtraction. Both simulated data and measured Raman spectra were used to evaluate the proposed method. Compared with the backgrounds obtained from three widely applied methods (the polynomial, Baek's, and airPLS), the auto-adaptive method meets the demands of practical applications in terms of efficiency and accuracy.
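    The three-step idea (local-minima selection, interpolation, iteration) can be sketched in a few lines of Python; the function name, the linear interpolation, and the iteration count are assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_background(x, y, n_iter=10):
    """Iteratively estimate a Raman background.

    Each pass keeps the local minima of the current estimate (peaks are
    progressively excluded), linearly interpolates between them, and
    clips the estimate from above, so it sinks under the peaks.
    """
    bg = y.astype(float).copy()
    for _ in range(n_iter):
        idx = [0] + [i for i in range(1, len(bg) - 1)
                     if bg[i] <= bg[i - 1] and bg[i] <= bg[i + 1]] + [len(bg) - 1]
        interp = np.interp(x, x[idx], bg[idx])   # background through the minima
        bg = np.minimum(bg, interp)              # never rise above the spectrum
    return bg
```

    Subtracting the returned estimate from the raw spectrum leaves the peaks on an approximately flat baseline.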

  8. Improved visual background extractor using an adaptive distance threshold

    NASA Astrophysics Data System (ADS)

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2014-11-01

    Camouflage is a challenging issue in moving object detection. Even a recent and advanced background subtraction technique, the visual background extractor (ViBe), cannot effectively deal with it. To better handle camouflage according to the perception characteristics of the human visual system (HVS), in terms of the minimum change of intensity detectable under a certain background illumination, we propose an improved ViBe method using an adaptive distance threshold, named IViBe for short. Unlike the original ViBe, which uses a fixed distance threshold for background matching, our approach adaptively sets a distance threshold for each background sample based on its intensity. Through analyzing the performance of the HVS in discriminating intensity changes, we determine a reasonable ratio between the intensity of a background sample and its corresponding distance threshold. We also analyze the impacts of our adaptive threshold together with an update mechanism on detection results. Experimental results demonstrate that our method outperforms ViBe even when the foreground and background share similar intensities. Furthermore, in a scenario where foreground objects are motionless for several frames, our IViBe not only reduces the initial false negatives, but also suppresses the diffusion of misclassification caused by those false negatives serving as erroneous background seeds, and hence shows an improved performance compared to ViBe.
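    The core change relative to ViBe, a per-sample threshold that grows with sample intensity, can be sketched as follows; the ratio and floor values here are illustrative guesses, not the parameters of the paper.

```python
import numpy as np

def is_background(pixel, samples, ratio=0.12, floor=12.0, min_matches=2):
    """Adaptive-threshold background match in the spirit of IViBe.

    Each background sample gets its own distance threshold that grows
    with its intensity (Weber-like behaviour of the HVS), instead of
    ViBe's single fixed radius.
    """
    thresholds = np.maximum(floor, ratio * samples)   # per-sample radii
    matches = np.abs(samples - pixel) < thresholds
    return int(np.count_nonzero(matches)) >= min_matches
```

    For example, a pixel of 215 against samples near 200 matches within the adaptive radius (about 24) but would be flagged as foreground under a fixed radius of 12, which is how a wider tolerance on bright backgrounds mirrors the HVS.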

  9. Background stratospheric aerosol reference model

    NASA Technical Reports Server (NTRS)

    Mccormick, M. P.; Wang, P.

    1989-01-01

    In this analysis, a reference background stratospheric aerosol optical model is developed based on the nearly global SAGE 1 satellite observations in the non-volcanic period from March 1979 to February 1980. Zonally averaged profiles of the 1.0 micron aerosol extinction for the tropics and the mid- and high-latitudes of both hemispheres are obtained and presented in graphical and tabulated form for the different seasons. In addition, analytic expressions for these seasonal global zonal means, as well as the yearly global mean, are determined according to a third-order polynomial fit to the vertical profile data set. This proposed background stratospheric aerosol model can be useful in modeling studies of stratospheric aerosols and for simulations of atmospheric radiative transfer and radiance calculations in atmospheric remote sensing.
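    The third-order polynomial representation of a vertical extinction profile can be sketched in a few lines. The profile below is synthetic (the SAGE data are not reproduced here), and fitting in log space is an assumption of this sketch.

```python
import numpy as np

# Synthetic 1.0-micron extinction profile, roughly exponential with altitude
z = np.arange(15.0, 31.0)                 # altitude grid, km (assumed range)
ext = 1e-4 * np.exp(-(z - 15.0) / 5.0)    # extinction, km^-1 (made-up values)

# Fit log-extinction with a third-order polynomial, as in the reference model
coeffs = np.polyfit(z, np.log10(ext), deg=3)

def model_extinction(zz):
    """Evaluate the fitted analytic background-aerosol profile."""
    return 10.0 ** np.polyval(coeffs, zz)
```

    The tabulated seasonal zonal means of the paper would replace the synthetic `ext` array, giving one set of coefficients per season and latitude band.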

  10. Motor adaptation to a small force field superimposed on a large background force.

    PubMed

    Liu, Jiayin; Reinkensmeyer, David J

    2007-04-01

    The human motor system adapts to novel force field perturbations during reaching by forming an internal model of the external dynamics and by modulating arm impedance. We studied whether it uses similar strategies when the perturbation is superimposed on a much larger background force. Consistent with the Weber-Fechner law for force perception, subjects had greater difficulty consciously perceiving the force field perturbation when it was superimposed on the large background force. However, they still adapted to the perturbation, decreasing trajectory distortion with repeated reaching and demonstrating kinematic aftereffects when the perturbation was unexpectedly removed. They also adapted by increasing their arm impedance when the background force was not present, but did not vary the arm impedance when the background force was present. The identified parameters of a previously proposed mathematical model of motor adaptation changed significantly with the presence of the background force. These results indicate that the motor system maintains its sensitivity for internal model formation even when there are large background forces that mask perception. Further, the motor system modulates arm impedance differently in response to the same perturbation depending on the background force onto which that perturbation is superimposed. Finally, these results suggest that computational models of motor adaptation will likely need to include force-dependent parameters to accurately predict errors.
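    Trial-by-trial adaptation of this kind is often described by a simple error-driven state-space update. The sketch below is a generic form of such a model; the retention factor A and learning rate B are illustrative values, not the parameters identified in the study.

```python
def simulate_adaptation(n_trials=40, A=0.98, B=0.3, perturbation=1.0):
    """Generic error-driven adaptation model: x tracks the perturbation.

    e_n = p - x_n (kinematic error on trial n); x_{n+1} = A*x_n + B*e_n.
    """
    x = 0.0
    errors = []
    for _ in range(n_trials):
        e = perturbation - x      # error experienced on this trial
        errors.append(e)
        x = A * x + B * e         # internal-model update
    return errors
```

    Removing the perturbation after adaptation (setting `perturbation=0` while x is still large) produces the negative aftereffect that such studies measure; force-dependent parameters would make A and B functions of the background force.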

  11. Adaptive response modelling

    NASA Astrophysics Data System (ADS)

    Campa, Alessandro; Esposito, Giuseppe; Belli, Mauro

    Cellular response to radiation is often modified by a previous delivery of a small "priming" dose: a smaller amount of damage, as defined by the end point being investigated, is observed, and for this reason the effect is called adaptive response. An improved understanding of this effect is essential (as much as for the bystander effect) for reliable radiation risk assessment when low-dose irradiations are involved. Experiments on adaptive response have shown that a number of factors strongly influence the occurrence (and the level) of the adaptation. In particular, priming doses and dose rates have to fall within defined ranges; the same is true for the time interval between the delivery of the small priming dose and the irradiation with the main, larger dose (called in this case the challenging dose). Different hypotheses can be formulated about the main mechanism(s) determining the adaptive response: an increased efficiency of DNA repair, an increased level of antioxidant enzymes, an alteration of cell-cycle progression, or a chromatin conformation change. Clear-cut experimental evidence pointing definitely toward one of these explanations is not yet available. Modelling can be done at different levels. Simple models, relating the amount of damage, through elementary differential equations, to the dose and dose rate experienced by the cell, are relatively easy to handle, and they can be modified to account for the priming irradiation. However, this can hardly be of decisive help in explaining the mechanisms, since each parameter of these models often incorporates, in an effective way, several cellular processes related to the response to radiation. In this presentation we show our attempts to describe the adaptive response with models that explicitly contain, as a dynamical variable, the inducible adaptive agent. At the price of a more difficult treatment, this approach is more likely to give support to the experimental studies.

  12. Effect of chromatic adaptation on the achromatic locus: the role of contrast, luminance and background color.

    PubMed

    Werner, J S; Walraven, J

    1982-01-01

    Two superposed annular test lights of complementary spectral composition were presented as 60-90' incremental test flashes on 480' steady backgrounds. Two observers adjusted the ratio of the two test lights to maintain an achromatic appearance under conditions of adaptation that varied with respect to background luminance, chromaticity and stimulus contrast. The shift in chromaticity of the achromatic point was in the direction of the chromaticity of the background, while the magnitude of the shift increased with background luminance and decreased with contrast. These data confirm and extend a model of chromatic adaptation that has the following properties: (1) non-additivity of transient test and steady background fields, in the sense that the background, although physically adding to the test flash, only affects its hue by altering the gain of cone pathways; (2) Vos-Walraven cone spectral sensitivities; and (3) adaptation sites in the cone pathways having the same action spectra as Stiles' π5, π4 and (modified) π1 mechanisms, which generate receptor-specific attenuation factors (von Kries coefficients) according to Stiles' generalized threshold-vs-intensity function, ζ(x).
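    Property (3), receptor-specific attenuation factors, is the classical von Kries rule. A minimal sketch follows, using the simplest reciprocal gains rather than Stiles' ζ(x) function.

```python
import numpy as np

def von_kries_adapt(cone_signal, background_signal):
    """Scale each cone class by a gain set by the adapting background.

    Simplest von Kries form: gain_i = 1 / background_i, so a test light
    with the background's chromaticity maps to equal (achromatic) cone
    responses, i.e. the achromatic point shifts toward the background.
    """
    background_signal = np.asarray(background_signal, dtype=float)
    gains = 1.0 / background_signal                 # receptor-specific gains
    return np.asarray(cone_signal, dtype=float) * gains
```

    Stiles' formulation replaces the reciprocal with gains derived from the threshold-vs-intensity function, but the receptor-specific structure is the same.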

  13. Colour vision and background adaptation in a passerine bird, the zebra finch (Taeniopygia guttata)

    PubMed Central

    2016-01-01

    Today, there is good knowledge of the physiological basis of bird colour vision and of how mathematical models can be used to predict visual thresholds. However, we still know little about how colour vision changes between different viewing conditions. This limits the understanding of how colour signalling is configured in habitats where the illumination and the background may shift dramatically. I examined how colour discrimination in the zebra finch (Taeniopygia guttata) is affected by adaptation to different backgrounds. I trained finches in a two-alternative choice task to choose between red discs displayed on backgrounds with different colours. I found that discrimination thresholds correlate with stimulus contrast to the background. Thresholds are low, and in agreement with model predictions, for a background with a red colour similar to the discs. For the most contrasting green background, thresholds are about five times higher than this. Subsequently, I trained the finches to detect single discs on a grey background. Detection thresholds are about 2.5 to 3 times higher than discrimination thresholds. This study demonstrates close similarities in human and bird colour vision, and the quantitative data offer a new possibility to account for shifting viewing conditions in colour vision models. PMID:27703702

  14. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model, such as the expected background and its decomposition in the signal region. According to the model, the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  15. Background modeling for the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  16. Adaptive and Background-Aware GAL4 Expression Enhancement of Co-registered Confocal Microscopy Images.

    PubMed

    Trapp, Martin; Schulze, Florian; Novikov, Alexey A; Tirian, Laszlo; J Dickson, Barry; Bühler, Katja

    2016-04-01

    GAL4 gene expression imaging using confocal microscopy is a common and powerful technique used to study the nervous system of a model organism such as Drosophila melanogaster. Recent research projects have focused on high-throughput screenings of thousands of different driver lines, resulting in large image databases. The amount of data generated makes manual assessment tedious or even impossible. The first and most important step in any automatic image-processing and data-extraction pipeline is to enhance areas with relevant signal. However, data acquired via high-throughput imaging tend to be less than ideal for this task, often showing high amounts of background signal. Furthermore, neuronal structures, and in particular thin and elongated projections with a weak staining signal, are easily lost. In this paper we present a method for enhancing the relevant signal by utilizing a Hessian-based filter to augment thin and weak tube-like structures in the image. To get optimal results, we present a novel adaptive background-aware enhancement filter parametrized with the local background intensity, which is estimated based on a common background model. We also integrate recent research on adaptive image enhancement into our approach, allowing us to propose an effective solution for known problems present in confocal microscopy images. We provide an evaluation based on annotated image data and compare our results against current state-of-the-art algorithms. The results show that our algorithm clearly outperforms the existing solutions. PMID:26743993
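    The Hessian-based tube enhancement step can be sketched in 2-D. This is a generic bright-ridge filter in the spirit of the paper: the background-aware parametrization is omitted, and the response formula is a deliberate simplification.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tube_enhance(img, sigma=2.0):
    """Enhance bright tube-like structures via Hessian eigenvalues.

    For a bright ridge, one eigenvalue is strongly negative (across the
    tube) and the other is near zero (along it).
    """
    Hxx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2
    Hyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    root = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    lam_small = (Hxx + Hyy - root) / 2.0              # more negative eigenvalue
    lam_large = (Hxx + Hyy + root) / 2.0
    resp = np.where(lam_small < 0, np.abs(lam_small) - np.abs(lam_large), 0.0)
    return np.maximum(resp, 0.0)
```

    A background-aware variant would additionally scale `resp` by a local background estimate, suppressing spurious ridges in noisy, brightly stained regions.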

  17. Cosmic microwave background probes models of inflation

    NASA Technical Reports Server (NTRS)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity-wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large angular scales, then the value of ΔT/T predicted at 1° is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  18. TIMSS 2011 User Guide for the International Database. Supplement 2: National Adaptations of International Background Questionnaires

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed.

    2013-01-01

    This supplement describes national adaptations made to the international version of the TIMSS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the TIMSS 2011 background variables. Background questionnaire adaptations…

  19. Influence of background size, luminance and eccentricity on different adaptation mechanisms.

    PubMed

    Gloriani, Alejandro H; Matesanz, Beatriz M; Barrionuevo, Pablo A; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2016-08-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06-110 cd/m²) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1°/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m². In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range.

  20. Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2012-01-01

    Background noise due to flow in wind tunnels contaminates desired data by decreasing the signal-to-noise ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The proposed technique modifies the classical processing configuration based on the cross-correlation between the reference and primary microphones. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and the desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary-signal recovery that can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
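    The adaptive machinery underlying such noise cancellers is typically an LMS filter. Below is a generic single-channel sketch, not the authors' cross-correlation-delay modification; the tap count and step size are arbitrary.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.01):
    """Classic LMS adaptive noise canceller.

    The filter learns to predict the part of `primary` that is
    correlated with `reference` (the background noise); the error
    output `e` retains the uncorrelated desired signal.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # most recent sample first
        y = w @ x                                   # background-noise estimate
        e = primary[n] - y                          # desired signal + residual
        w += 2.0 * mu * e * x                       # gradient-descent update
        out[n] = e
    return out
```

    The paper's contribution is choosing the delay and correlation window so that the reference effectively contains only background noise; with a contaminated reference, plain LMS would also cancel part of the desired signal.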

  1. Increasing the Meaningfulness of Quantitative Material by Adapting Context to Student Background.

    ERIC Educational Resources Information Center

    Ross, Steven M.

    1983-01-01

    The focus of the present experiments was to examine the effect of adapting the context of a presentation to a student's background. The results showed familiarity of context to be an influential factor in learning quantitative material. (Author/PN)

  2. Metal mixtures modeling evaluation project: 1. Background.

    PubMed

    Meyer, Joseph S; Farley, Kevin J; Garman, Emily R

    2015-04-01

    Despite more than 5 decades of aquatic toxicity tests conducted with metal mixtures, there is still a need to understand how metals interact in mixtures and to predict their toxicity more accurately than what is currently done. The present study provides a background for understanding the terminology, regulatory framework, qualitative and quantitative concepts, experimental approaches, and visualization and data-analysis methods for chemical mixtures, with an emphasis on bioavailability and metal-metal interactions in mixtures of waterborne metals. In addition, a Monte Carlo-type randomization statistical approach to test for nonadditive toxicity is presented, and an example with a binary-metal toxicity data set demonstrates the challenge involved in inferring statistically significant nonadditive toxicity. This background sets the stage for the toxicity results, data analyses, and bioavailability models related to metal mixtures that are described in the remaining articles in this special section from the Metal Mixture Modeling Evaluation project and workshop. It is concluded that although qualitative terminology such as additive and nonadditive toxicity can be useful to convey general concepts, failure to expand beyond that limited perspective could impede progress in understanding and predicting metal mixture toxicity. Instead of focusing on whether a given metal mixture causes additive or nonadditive toxicity, effort should be directed to develop models that can accurately predict the toxicity of metal mixtures.
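    The Monte Carlo randomization idea can be illustrated generically. The sketch below is a plain two-sample permutation test on observed mixture responses versus additive-model predictions, not the exact procedure of the workshop papers.

```python
import numpy as np

def randomization_p(observed, additive_pred, n_perm=2000, seed=1):
    """Two-sided randomization test for nonadditive toxicity.

    Under the null hypothesis (strictly additive toxicity), observed
    mixture responses and additive-model predictions are exchangeable,
    so shuffling the labels generates the null distribution of the
    mean-difference statistic.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([observed, additive_pred]).astype(float)
    n = len(observed)
    stat = abs(np.mean(observed) - np.mean(additive_pred))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n].mean() - pooled[n:].mean()) >= stat:
            hits += 1
    return hits / n_perm
```

    With few replicates the null distribution is wide, so small nonadditive effects fail to reach significance, which mirrors the inferential challenge noted above.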

  3. PIRLS 2011 User Guide for the International Database. Supplement 2: National Adaptations of International Background Questionnaires

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Drucker, Kathleen T., Ed.

    2013-01-01

    This supplement describes national adaptations made to the international version of the PIRLS/prePIRLS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the PIRLS/prePIRLS 2011 background variables. Background…

  4. Background adaptation and water acidification affect pigmentation and stress physiology of tilapia, Oreochromis mossambicus.

    PubMed

    van der Salm, A L; Spanings, F A T; Gresnigt, R; Bonga, S E Wendelaar; Flik, G

    2005-10-01

    The ability to adjust skin darkness to the background is a common phenomenon in fish. The hormone α-melanophore-stimulating hormone (α-MSH) enhances skin darkening. In Mozambique tilapia, Oreochromis mossambicus L., α-MSH acts as a corticotropic hormone during adaptation to water with a low pH, in addition to its role in skin colouration. In the current study, we investigated the responses of this fish when it is exposed to both of these environmental challenges simultaneously. The skin darkening of tilapia on a black background and the lightening on grey and white backgrounds are compromised in water with a low pH, indicating that these two vastly different processes both rely on α-MSH regulatory mechanisms. If the water is acidified after 25 days of undisturbed background adaptation, fish show a transient pigmentation change but recover after two days and continue adapting their skin darkness to match the background. Black backgrounds are experienced by tilapia as more stressful than grey or white backgrounds, both in neutral and in low-pH water. A decrease of water pH from 7.8 to 4.5 applied over a two-day period was not experienced as stressful when combined with background adaptation, based on unchanged plasma pH, plasma α-MSH and Na levels. However, when water pH was lowered after 25 days of undisturbed background adaptation, α-MSH levels in particular increased chronically. In these fish, plasma pH and Na levels had decreased, indicating a reduced capacity to maintain ion homeostasis and implying that the fish do experience stress. We conclude that simultaneous exposure to these two types of stressor has a lower impact on the physiology of tilapia than sequential exposure to them.

  5. Model of aircraft noise adaptation

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Coates, G. D.; Cawthorn, J. M.

    1977-01-01

    Development of an aircraft noise adaptation model, which would account for much of the variability in the responses of subjects participating in human-response-to-noise experiments, was studied. A description of the model development is presented. The principal concept of the model was the determination of an aircraft noise adaptation level, which represents an annoyance calibration for each individual. Results showed a direct correlation between the noise level of the stimuli and annoyance reactions. Attitude-personality variables were found to account for varying annoyance judgments.

  6. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by on-line parameter estimation of uncertain plant dynamics, based on two methods. The first parameter estimation method is an indirect adaptive law based on Lyapunov theory, and the second is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
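    The second estimation method, a recursive least-squares law, has a standard form. Here is a generic RLS parameter-update sketch; the regressor structure and forgetting factor are illustrative, not the paper's flight-control formulation.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least-squares update.

    theta: current parameter estimate, P: inverse-correlation matrix,
    phi: regressor vector, y: measured output, lam: forgetting factor.
    """
    Pphi = P @ phi
    gain = Pphi / (lam + phi @ Pphi)          # Kalman-style gain vector
    theta = theta + gain * (y - phi @ theta)  # correct by prediction error
    P = (P - np.outer(gain, Pphi)) / lam      # update inverse correlation
    return theta, P
```

    In an indirect adaptive architecture, the converged `theta` would re-parametrize the model inversion controller at each step.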

  7. ADAPTIVE EYE MODEL - Poster Paper

    NASA Astrophysics Data System (ADS)

    Galetskiy, Sergey O.; Kudryashov, Alexey V.

    2008-01-01

    We propose an experimental adaptive eye model based on a flexible 18-electrode bimorph mirror that reproduces human eye aberrations up to the 4th radial order of Zernike polynomials at a frequency of 10 Hz. The accuracy of aberration reproduction is in most cases better than λ/10 RMS. The model is introduced into an aberrometer to compensate human eye aberrations and improve the visual acuity test.

  8. Observations and Modeling of Seismic Background Noise

    USGS Publications Warehouse

    Peterson, Jon R.

    1993-01-01

    INTRODUCTION The preparation of this report had two purposes. One was to present a catalog of seismic background noise spectra obtained from a worldwide network of seismograph stations. The other purpose was to refine and document models of seismic background noise that have been in use for several years. The second objective was, in fact, the principal reason that this study was initiated and influenced the procedures used in collecting and processing the data. With a single exception, all of the data used in this study were extracted from the digital data archive at the U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL). This archive dates from 1972 when ASL first began deploying digital seismograph systems and collecting and distributing digital data under the sponsorship of the Defense Advanced Research Projects Agency (DARPA). There have been many changes and additions to the global seismograph networks during the past twenty years, but perhaps none as significant as the current deployment of very broadband seismographs by the U.S. Geological Survey (USGS) and the University of California San Diego (UCSD) under the scientific direction of the IRIS consortium. The new data acquisition systems have extended the bandwidth and resolution of seismic recording, and they utilize high-density recording media that permit the continuous recording of broadband data. The data improvements and continuous recording greatly benefit and simplify surveys of seismic background noise. Although there are many other sources of digital data, the ASL archive data were used almost exclusively because of accessibility and because the data systems and their calibration are well documented for the most part. Fortunately, the ASL archive contains high-quality data from other stations in addition to those deployed by the USGS. Included are data from UCSD IRIS/IDA stations, the Regional Seismic Test Network (RSTN) deployed by Sandia National Laboratories (SNL), and the

  9. An efficient background modeling approach based on vehicle detection

    NASA Astrophysics Data System (ADS)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The Gaussian Mixture Model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs quite a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C and D), determined by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area to decrease the total number of distributions and save memory space; different threshold values and different numbers of Gaussian distributions are thus adopted for different areas. The results show that both the learning speed and the accuracy of the model using the proposed algorithm surpass the traditional GMM. By about the 50th frame, interference from vehicles has essentially been eliminated; the number of model distributions is only 35% to 43% of the standard GMM, and the per-frame processing speed is approximately 20% faster than the standard. The proposed algorithm performs well in shadow elimination and processing speed for vehicle detection; it can promote the development of intelligent transportation and is informative for other background modeling methods.
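    A minimal sketch of the per-area idea: a running single-Gaussian background model (a simplification of the per-area GMM in the paper) with a per-pixel learning-rate map, so that regions with frequent vehicle traffic absorb shadows faster. The region layout, rates, and thresholds are illustrative.

```python
import numpy as np

def update_background(mean, var, frame, alpha_map):
    """Running single-Gaussian background update with a per-pixel
    learning-rate map (stand-in for the paper's per-area GMM rates)."""
    diff = frame - mean
    mean = mean + alpha_map * diff
    var = (1 - alpha_map) * (var + alpha_map * diff ** 2)
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """Pixels deviating more than k standard deviations are foreground."""
    return np.abs(frame - mean) > k * np.sqrt(var)

# Two regions: a busy lane (fast learning) and a quiet verge (slow learning).
h, w = 4, 4
alpha = np.full((h, w), 0.01)
alpha[:, :2] = 0.1                  # faster adaptation where shadows linger
mean = np.zeros((h, w))
var = np.full((h, w), 1.0)
for _ in range(100):
    mean, var = update_background(mean, var, np.full((h, w), 10.0), alpha)
```

    After identical frames, the fast-learning region has converged to the scene value while the slow region lags, which is exactly the effect the per-area rates exploit.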

  10. Background Noise Mitigation in Deep-Space Optical Communications Using Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Lee, S.; Wilson, K. E.; Troy, M.

    2005-05-01

    Over the last decade, adaptive optics technology has moved from the purview of a Department of Defense laboratory to astronomical telescopes around the world, and recently to industry, where adaptive optics systems have been developed to correct atmospheric-induced signal fades on high-bandwidth horizontal-path optical links. As JPL develops optical communications technology for high-bandwidth optical links from its deep-space probes, we are exploring the application of adaptive optics to the optical deep-space receiver to improve the quality of the link under turbulent atmospheric and high-background conditions. To provide maximum communications support, the operational deep-space optical communications receiver will need to point close to the Sun or to a bright Sun-illuminated planet. Under these conditions, the background noise from the sky degrades the quality of the optical link, especially when the atmospheric seeing is poor. In this work, we analyze how adaptive optics could be used to mitigate the effects of sky and planetary background noise on the deep-space optical communications receiver's performance in poor seeing conditions. Our results show that, under nominal background sky conditions, gains of 4 dB can be achieved for the uncoded bit-error rate of 0.01.

  11. An Adapted Dialogic Reading Program for Turkish Kindergarteners from Low Socio-Economic Backgrounds

    ERIC Educational Resources Information Center

    Ergül, Cevriye; Akoglu, Gözde; Sarica, Ayse D.; Karaman, Gökçe; Tufan, Mümin; Bahap-Kudret, Zeynep; Zülfikar, Deniz

    2016-01-01

    The study aimed to examine the effectiveness of the Adapted Dialogic Reading Program (ADR) on the language and early literacy skills of Turkish kindergarteners from low socio-economic (SES) backgrounds. The effectiveness of ADR was investigated across six different treatment conditions including classroom and home based implementations in various…

  12. Method For Model-Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    A relatively simple method of model-reference adaptive control (MRAC) is developed from two prior classes of MRAC techniques: the signal-synthesis method and the parameter-adaptation method. The two are incorporated into a unified theory, which yields a more general adaptation scheme.

  13. Background model for the Majorana Demonstrator

    SciTech Connect

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, III, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y -D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V.; Gusev, K.; Hallin, A.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, W. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. K.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C. -H.; Yumatov, V.

    2015-01-01

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  14. Background model for the Majorana Demonstrator

    DOE PAGES

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, III, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; et al

    2015-01-01

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  15. Background Model for the Majorana Demonstrator

    SciTech Connect

    Cuesta, C.; Abgrall, N.; Aguayo, Estanislao; Avignone, Frank T.; Barabash, Alexander S.; Bertrand, F.; Boswell, M.; Brudanin, V.; Busch, Matthew; Byram, D.; Caldwell, A. S.; Chan, Yuen-Dat; Christofferson, Cabot-Ann; Combs, Dustin C.; Detwiler, Jason A.; Doe, Peter J.; Efremenko, Yuri; Egorov, Viatcheslav; Ejiri, H.; Elliott, S. R.; Fast, James E.; Finnerty, P.; Fraenkle, Florian; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, Vincente; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, Reyco; Hoppe, Eric W.; Howard, Stanley; Howe, M. A.; Keeter, K.; Kidd, M. F.; Kochetov, Oleg; Konovalov, S.; Kouzes, Richard T.; Laferriere, Brian D.; Leon, Jonathan D.; Leviner, L.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Nomachi, Masaharu; Orrell, John L.; O'Shaughnessy, C.; Overman, Nicole R.; Phillips, D.; Poon, Alan; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, Keith; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, Alexis G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, Kyle J.; Snyder, N.; Suriano, Anne-Marie; Thompson, J.; Timkin, V.; Tornow, Werner; Trimble, J. E.; Varner, R. L.; Vasilyev, Sergey; Vetter, Kai; Vorren, Kris R.; White, Brandon R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A.; Yu, Chang-Hong; Yumatov, Vladimir

    2015-06-01

    The Majorana Collaboration is constructing a prototype system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment to search for neutrinoless double-beta (0νββ) decay in 76Ge. In view of the requirement that the next generation of tonne-scale Ge-based 0νββ-decay experiments be capable of probing the neutrino mass scale in the inverted-hierarchy region, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using Geant4 simulations of the different background components whose purity levels are constrained from radioassay measurements.

  16. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds.

    PubMed

    Hillis, James M; Brainard, David H

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.
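    The adaptation model this abstract describes (a saturating response with multiplicative gain control and a subtractive term) can be sketched with a Naka-Rushton-style response function. The function form and parameter names below are illustrative stand-ins, not the fitted model from the paper.

```python
import numpy as np

def visual_response(I, g=1.0, s=0.0, r_max=1.0, sigma=1.0, n=2.0):
    """Saturating response to intensity I after adaptation by a
    multiplicative gain g and a subtractive term s (illustrative
    Naka-Rushton form; the paper fits its own variant to data)."""
    x = np.maximum(g * I - s, 0.0)
    return r_max * x ** n / (x ** n + sigma ** n)
```

    Lowering the gain (a stronger adapting background) reduces the response to the same test stimulus, which is the mechanism proposed to control both discriminability and appearance: `visual_response(1.0, g=0.5)` is smaller than `visual_response(1.0, g=1.0)`.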

  17. Chromo-natural model in anisotropic background

    SciTech Connect

    Maleknejad, Azadeh; Erfani, Encieh E-mail: eerfani@ipm.ir

    2014-03-01

    In this work we study the chromo-natural inflation model in the anisotropic setup. Initiating inflation from Bianchi type-I cosmology, we analyze the system thoroughly during the slow-roll inflation, from both analytical and numerical points of view. We show that the isotropic FRW inflation is an attractor of the system. In other words, anisotropies are damped within a few e-folds and the chromo-natural model respects the cosmic no-hair conjecture. Furthermore, we demonstrate that in the slow-roll limit, the anisotropies in both chromo-natural and gauge-flation models share the same dynamics.

  18. Adaptive contour-based statistical background subtraction method for moving target detection in infrared video sequences

    NASA Astrophysics Data System (ADS)

    Akula, Aparna; Khanna, Nidhi; Ghosh, Ripul; Kumar, Satish; Das, Amitava; Sardana, H. K.

    2014-03-01

    A robust contour-based statistical background subtraction method for the detection of non-uniform thermal targets in infrared imagery is presented. The first step of the method is the generation of a background frame using statistical information from an initial set of frames containing no targets. The background frame is made adaptive by continuously updating it with the motion information of the scene. Background subtraction followed by a clutter rejection stage ensures the detection of foreground objects. The next step is the detection of contours and the discrimination of target boundaries from the noisy background. This is achieved by using the Canny edge detector to extract contours, followed by a k-means clustering approach to differentiate object contours from background contours. The post-processing step uses a morphological edge-linking approach to close any broken contours, and finally flood fill is performed to generate the silhouettes of moving targets. The method is validated on infrared video data containing a variety of moving targets. Experimental results demonstrate a high detection rate with minimal false alarms, establishing the robustness of the proposed method.
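    The statistical background stage of such a pipeline can be sketched as follows: build a per-pixel mean and standard deviation from target-free frames, then flag pixels that deviate by more than k sigma. The later Canny / k-means contour stages are omitted here, and all sizes and thresholds are illustrative.

```python
import numpy as np

def build_background(frames):
    """Per-pixel statistical background from an initial target-free set."""
    stack = np.stack(frames)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6

def detect(frame, bg_mean, bg_std, k=3.0):
    """Pixels more than k sigma from the background model are candidate
    targets (the contour extraction and clustering stages are omitted)."""
    return np.abs(frame - bg_mean) > k * bg_std

rng = np.random.default_rng(1)
frames = [20 + rng.normal(0, 1, (16, 16)) for _ in range(30)]
bg_mean, bg_std = build_background(frames)
scene = 20 + rng.normal(0, 1, (16, 16))
scene[4:8, 4:8] += 15          # warm target against a cooler background
mask = detect(scene, bg_mean, bg_std)
```

    The synthetic warm patch is flagged while the rest of the scene, which matches the background statistics, stays largely clear.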

  19. Modelling of mercury emissions from background soils.

    PubMed

    Scholtz, M T; Van Heyst, B J; Schroeder, W H

    2003-03-20

    Emissions of volatile mercury species from natural soils are believed to be a significant contributor to the atmospheric burden of mercury, but only order-of-magnitude estimates of emissions from these sources are available. The scaling-up of mercury flux measurements to regional or global scales is confounded by a limited understanding of the physical, chemical and biochemical processes that occur in the soil, a complex environmental matrix. This study is a first step toward the development of an air-surface exchange model for mercury (known as the mercury emission model (MEM)). The objective of the study is to model the partitioning and movement of inorganic Hg(II) and Hg(0) in open field soils, and to use MEM to interpret published data on mercury emissions to the atmosphere. MEM is a multi-layered, dynamic finite-element soil and atmospheric surface-layer model that simulates the exchange of heat, moisture and mercury between soils and the atmosphere. The model includes a simple formulation of the reduction of inorganic Hg(II) to Hg(0). Good agreement was found between the meteorological dependence of observed mercury emission fluxes and hourly modelled fluxes, and it is concluded that MEM is able to simulate well the soil and atmospheric processes influencing the emission of Hg(0) to the atmosphere. The heretofore unexplained close correlation between soil temperature and mercury emission flux is fully modelled by MEM and is attributed to the temperature dependence of the Hg(0) Henry's Law coefficient and the control of the volumetric soil-air fraction on the diffusion of Hg(0) near the surface. The observed correlation between solar radiation intensity and mercury flux appears in part to be due to the surface-energy balance between radiation and sensible and latent heat fluxes, which determines the soil temperature. The modelled results imply that empirical correlations based only on flux chamber data may not extend to the open atmosphere for all

  20. Quadtree-adaptive tsunami modelling

    NASA Astrophysics Data System (ADS)

    Popinet, Stéphane

    2011-09-01

    The well-balanced, positivity-preserving scheme of Audusse et al. (SIAM J Sci Comput 25(6):2050-2065, 2004), for the solution of the Saint-Venant equations with wetting and drying, is generalised to an adaptive quadtree spatial discretisation. The scheme is validated using an analytical solution for the oscillation of a fluid in a parabolic container, as well as the classic Monai tsunami laboratory benchmark. An efficient database system able to dynamically reconstruct a multiscale bathymetry based on extremely large datasets is also described. This combination of methods is successfully applied to the adaptive modelling of the 2004 Indian ocean tsunami. Adaptivity is shown to significantly decrease the exponent of the power law describing computational cost as a function of spatial resolution. The new exponent is directly related to the fractal dimension of the geometrical structures characterising tsunami propagation. The implementation of the method as well as the data and scripts necessary to reproduce the results presented are freely available as part of the open-source Gerris Flow Solver framework.
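    The quadtree adaptivity at the heart of this approach can be illustrated with a toy recursive refinement: cells are split into four children wherever a refinement criterion holds, concentrating resolution near features such as a coastline. The criterion and depths below are illustrative, not Gerris internals.

```python
def refine(x0, y0, size, criterion, max_depth, depth=0):
    """Recursively split a square cell into four children while the
    refinement criterion holds; return the resulting leaf cells
    (a toy version of quadtree adaptivity)."""
    if depth < max_depth and criterion(x0, y0, size):
        h = size / 2
        leaves = []
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            leaves += refine(x0 + dx, y0 + dy, h, criterion, max_depth, depth + 1)
        return leaves
    return [(x0, y0, size)]

# Refine the unit square only near a "coastline" at x = 0.5, down to 4 levels.
near_coast = lambda x, y, s: abs((x + s / 2) - 0.5) < s
leaves = refine(0.0, 0.0, 1.0, near_coast, max_depth=4)
```

    The leaves tile the unit square exactly, with the smallest cells (size 1/16) hugging the coastline and coarse cells elsewhere, which is why adaptivity lowers the cost exponent relative to a uniform grid.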

  1. Modulation of prism adaptation by a shift of background in the monkey.

    PubMed

    Inoue, Masato; Harada, Hiroyuki; Fujisawa, Masahiro; Uchimura, Motoaki; Kitazawa, Shigeru

    2016-01-15

    Recent human behavioral studies have shown that the position of a visual target is instantly represented relative to the background (e.g., a large square) and used for evaluating the error in reaching the target. In the present study, we examined whether the same allocentric mechanism is shared by the monkey. We trained two monkeys to perform a fast and accurate reaching movement toward a visual target with a square in the background. Then, a visual shift (20 mm or 4.1°) was introduced by wedge prisms to examine the process of decreasing the error during an exposure period (30 trials) and the size of the error upon removal of the prisms (aftereffect). The square was shifted during each movement, either in the direction of the visual displacement or in the opposite direction, by an amount equal to the size of the visual shift. The ipsilateral shift of the background increased the asymptote during the exposure period and decreased the aftereffect, i.e., prism adaptation was attenuated by the ipsilateral shift. By contrast, a contralateral shift enhanced adaptation. We further tested whether the shift of the square alone could cause an increase in the motor error. Although the target did not move, the shift of the square increased the motor error in the direction of the shift. These results were generally consistent with the results reported in human subjects, suggesting that the monkey and the human share the same neural mechanisms for representing a target relative to the background. PMID:26431765

  2. Hydraulically interconnected vehicle suspension: background and modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Nong; Smith, Wade A.; Jeyakumaran, Jeku

    2010-01-01

    This paper presents a novel approach for the frequency domain analysis of a vehicle fitted with a general hydraulically interconnected suspension (HIS) system. Ideally, interconnected suspensions have the capability, unique among passive systems, to provide stiffness and damping characteristics dependent on the all-wheel suspension mode in operation. A basic, lumped-mass, four-degree-of-freedom half-car model is used to illustrate the proposed methodology. The mechanical-fluid boundary condition in the double-acting cylinders is modelled as an external force on the mechanical system and a moving boundary on the fluid system. The fluid system itself is modelled using the hydraulic impedance method, in which the relationships between the dynamic fluid states, i.e. pressures and flows, at the extremities of a single fluid circuit are determined by the transfer matrix method. A set of coupled, frequency-dependent equations, which govern the dynamics of the integrated half-car system, are then derived and the application of these equations to both free and forced vibration analysis is explained. The fluid system impedance matrix for the two general wheel-pair interconnection types (anti-synchronous and anti-oppositional) is also given. To further outline the application of the proposed methodology, the paper finishes with an example using a typical anti-roll HIS system. The integrated half-car system's free vibration solutions and frequency response functions are then obtained and discussed in some detail. The presented approach provides a scientific basis for investigating the dynamic characteristics of HIS-equipped vehicles, and the results offer further confirmation that interconnected suspension schemes can provide, at least to some extent, individual control of modal stiffness and damping characteristics.
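    The transfer matrix method mentioned above relates the pressure/flow state at one end of a fluid line to the other end via a 2x2 matrix, and series-connected lines simply multiply. A generic lossless-line sketch (wave speed and impedance values are illustrative, not the paper's):

```python
import numpy as np

def line_matrix(omega, l, c=1400.0, Z0=1.0e9):
    """2x2 transfer matrix of a lossless fluid line of length l relating
    (pressure, flow) at its two ends; c is the wave speed and Z0 the
    characteristic impedance (values here are illustrative)."""
    beta = omega / c
    return np.array([[np.cos(beta * l), -1j * Z0 * np.sin(beta * l)],
                     [-1j * np.sin(beta * l) / Z0, np.cos(beta * l)]])

def chain(*mats):
    """Overall transfer matrix of lines connected in series."""
    out = np.eye(2, dtype=complex)
    for m in mats:
        out = out @ m
    return out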

  3. Background Models for Muons and Neutrons Underground

    SciTech Connect

    Formaggio, Joseph A.

    2005-09-08

    Cosmogenic-induced activity is an issue of great concern for many sensitive experiments sited underground. A variety of different archetypal experiments - such as those geared toward the detection of dark matter, neutrinoless double beta decay and solar neutrinos - have reached levels of cleanliness and sensitivity that warrant careful consideration of secondary activity induced by cosmic rays. This paper reviews some of the main issues associated with the modeling of cosmogenic activity underground. Comparison with data, when such data are available, is also presented.

  4. Simple method for model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained using a simple derivation based on an improved Liapunov function.

  5. Adaptively smoothed background seismicity rates in the Intermountain West, United States

    NASA Astrophysics Data System (ADS)

    Moschetti, M. P.

    2013-05-01

    Spatially smoothed seismicity rates are an important seismic source for seismic hazard calculations across much of the Intermountain West (IMW). The U.S. national seismic hazard maps have historically used smoothed seismicity rate models generated with fixed-bandwidth smoothing methods (Frankel, 1996; Petersen et al., 2008); however, recent tests using the California earthquake catalog indicate that adapting the smoothing bandwidth to the local seismicity density (e.g., Helmstetter et al., 2007; Werner et al., 2011) produces improved seismic source models relative to models with fixed smoothing bandwidths (Schorlemmer et al., 2010). To test the ability of adaptively smoothed seismicity models to match epicenter locations from later parts of the IMW earthquake catalog, I generate time-independent maps of smoothed seismicity rates by spatially smoothing the seismicity rates of M4+ earthquake epicenters using fixed-radius and adaptive smoothing methods. I evaluate the 'forecast' smoothed seismicity models generated from the early part of the earthquake catalog by comparing the locations of earthquakes that occur in the later times of the catalog with the forecast seismicity rates. Forecasts are generated from a de-clustered catalog (Gardner and Knopoff, 1974) with completeness levels ranging from M4-6. The forecasts assume that the Gutenberg-Richter relation describes the magnitude-frequency distribution and that the locations of smaller earthquakes (M4+) can identify the locations of future large, and damaging, earthquakes. Spatially smoothed seismicity rate models are generated with isotropic Gaussian and power-law smoothing kernels using fixed and adaptive bandwidths; the adaptive smoothing bandwidths are calculated with the method of Helmstetter et al. (2007). 
To identify optimal smoothing methods for long-term earthquake rates, I calculate likelihood values for all smoothed seismicity models by using a Poisson distribution for earthquake occurrence and select the
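    A minimal sketch of the adaptive-bandwidth smoothing described above: each epicenter is spread with a 2-D Gaussian whose bandwidth is its distance to the k-th nearest other epicenter (the Helmstetter-style rule). The function name, k, and the toy catalog are illustrative.

```python
import numpy as np

def adaptive_rates(epicenters, grid, k=2):
    """Smoothed seismicity rate at grid points: each epicenter contributes
    a 2-D Gaussian kernel whose bandwidth adapts to the local seismicity
    density (distance to the k-th nearest other epicenter)."""
    eps = np.asarray(epicenters, float)
    d = np.linalg.norm(eps[:, None, :] - eps[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-distances
    bw = np.sort(d, axis=1)[:, k - 1]           # adaptive bandwidths
    rate = np.zeros(len(grid))
    for (ex, ey), h in zip(eps, bw):
        r2 = (grid[:, 0] - ex) ** 2 + (grid[:, 1] - ey) ** 2
        rate += np.exp(-r2 / (2 * h ** 2)) / (2 * np.pi * h ** 2)
    return rate

# A tight cluster plus one isolated event: the cluster gets narrow kernels
# (high peak rates), the isolated event a broad, low-amplitude kernel.
epi = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
grid = np.array([[0.0, 0.0], [5.0, 5.0]])
rate = adaptive_rates(epi, grid)
```

    This is the behaviour that favours adaptive over fixed bandwidths: dense seismicity is smoothed tightly while sparse seismicity is smoothed broadly.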

  6. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and cross-spectral matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
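    The core of time-domain ANC is an LMS-adapted FIR filter driven by the noise-only reference channel; its output cancels the correlated noise in the primary channel, leaving the source signal in the error. A generic sketch (filter length, step size, and the synthetic signals are illustrative):

```python
import numpy as np

def anc_lms(primary, reference, n_taps=16, mu=0.005):
    """Time-domain LMS adaptive noise canceller: an FIR filter on the
    reference (noise-only) channel adapts so that its output tracks the
    correlated noise in the primary channel; the error signal is the
    recovered source."""
    w = np.zeros(n_taps)
    recovered = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]    # most recent reference samples
        e = primary[n] - w @ x               # subtract the predicted noise
        w += 2 * mu * e * x                  # LMS weight update
        recovered[n] = e
    return recovered

rng = np.random.default_rng(2)
t = np.arange(20000)
tone = np.sin(2 * np.pi * 0.05 * t)          # desired source at the array
noise = rng.normal(size=t.size)              # broadband tunnel noise
primary = tone + 0.9 * np.roll(noise, 3)     # noise reaches the mic delayed
recovered = anc_lms(primary, noise)
```

    After convergence the residual noise power is far below that of the raw primary channel, mirroring the SNR recovery the paper reports at the spectral level.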

  7. Adaptive Urban Dispersion Integrated Model

    SciTech Connect

    Wissink, A; Chand, K; Kosovic, B; Chan, S; Berger, M; Chow, F K

    2005-11-03

    Numerical simulations represent a unique predictive tool for understanding the three-dimensional flow fields and associated concentration distributions from contaminant releases in complex urban settings (Britter and Hanna 2003). Utilization of the most accurate urban models, based on fully three-dimensional computational fluid dynamics (CFD) that solve the Navier-Stokes equations with incorporated turbulence models, presents many challenges. We address two in this work: first, a fast but accurate way to incorporate the complex urban terrain, buildings, and other structures to enforce proper boundary conditions in the flow solution; second, ways to achieve a level of computational efficiency that allows the models to be run in an automated fashion so that they may be used for emergency response and event reconstruction applications. We have developed a new integrated urban dispersion modeling capability based on FEM3MP (Gresho and Chan 1998, Chan and Stevens 2000), a CFD model from Lawrence Livermore National Lab. The integrated capability incorporates fast embedded boundary mesh generation for geometrically complex problems and full three-dimensional Cartesian adaptive mesh refinement (AMR). Parallel AMR and embedded boundary gridding support are provided through the SAMRAI library (Wissink et al. 2001, Hornung and Kohn 2002). Embedded boundary mesh generation has been demonstrated to be an automatic, fast, and efficient approach for problem setup. It has been used for a variety of geometrically complex applications, including urban applications (Pullen et al. 2005). The key technology we introduce in this work is the application of AMR, which allows the application of high-resolution modeling to certain important features, such as individual buildings and high-resolution terrain (including important vegetative and land-use features). It also allows the urban scale model to be readily interfaced with coarser resolution meso or regional scale models. This talk

  8. Background noise cancellation of manatee vocalizations using an adaptive line enhancer.

    PubMed

    Yan, Zheng; Niezrecki, Christopher; Cattafesta, Louis N; Beusse, Diedrich O

    2006-07-01

    The West Indian manatee (Trichechus manatus latirostris) has become an endangered species partly because of an increase in the number of collisions with boats. A device to alert boaters of the presence of manatees is desired. Previous research has shown that background noise limits the manatee vocalization detection range (which is critical for practical implementation). By improving the signal-to-noise ratio of the measured manatee vocalization signal, it is possible to extend the detection range. The finite impulse response (FIR) structure of the adaptive line enhancer (ALE) can detect and track narrow-band signals buried in broadband noise. In this paper, a constrained infinite impulse response (IIR) ALE, called a feedback ALE (FALE), is implemented to reduce the background noise. In addition, a bandpass filter is used as a baseline for comparison. A library consisting of 100 manatee calls spanning ten different signal categories is used to evaluate the performance of the bandpass filter, FIR-ALE, and FALE. The results show that the FALE is capable of reducing background noise by about 6.0 and 21.4 dB better than that of the FIR-ALE and bandpass filter, respectively, when the signal-to-noise ratio (SNR) of the original manatee call is -5 dB. PMID:16875212
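    An FIR adaptive line enhancer of the kind this paper compares against can be sketched as a self-referenced LMS predictor: the filter predicts the current sample from a delayed copy of the same signal, so tonal (predictable) components such as vocalizations pass to the output while broadband noise is rejected. The delay, filter length, and test signals are illustrative.

```python
import numpy as np

def ale_fir(x, delay=5, n_taps=32, mu=0.002):
    """FIR adaptive line enhancer: predict the current sample from a
    delayed copy of the same signal. Narrow-band components remain
    predictable across the delay and appear at the filter output;
    broadband noise does not."""
    w = np.zeros(n_taps)
    enhanced = np.zeros(len(x))
    for n in range(delay + n_taps, len(x)):
        u = x[n - delay - n_taps + 1:n - delay + 1][::-1]
        y = w @ u                    # predicted (narrow-band) component
        e = x[n] - y                 # unpredictable (broadband) residue
        w += 2 * mu * e * u          # LMS weight update
        enhanced[n] = y
    return enhanced

rng = np.random.default_rng(3)
t = np.arange(20000)
tone = np.sin(2 * np.pi * 0.05 * t)      # tonal "call" component
x = tone + rng.normal(size=t.size)       # buried in broadband noise
enhanced = ale_fir(x)
```

    The enhanced output tracks the tonal component with substantially less noise than the raw input, the same SNR-improvement mechanism the FALE variant in the paper builds on with an IIR structure.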

  9. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  10. Reference analysis of the signal + background model in counting experiments

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2012-01-01

    The model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for searches of rare or exotic events in the presence of a background source, as for example in the searches performed by high-energy physics experiments. In the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with a few examples.
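    The counting model itself is compact enough to sketch numerically: integrate the background out of the Poisson likelihood under a Gamma prior and normalize on a grid. Note that this illustration uses a flat prior on the signal for simplicity, whereas the paper's contribution is deriving the proper (non-flat) reference prior; the event counts and prior parameters below are invented.

```python
import math

def signal_posterior(n, s_grid, alpha=5.0, beta=1.0):
    """Posterior density for the signal yield s when n = s + b events
    are observed, with the background b integrated out numerically
    under a Gamma(alpha, beta) prior (e.g. from a sideband study).
    A flat prior on s >= 0 is assumed here for simplicity."""
    b_grid = [0.05 * i for i in range(1, 401)]   # background integration grid
    db, ds = 0.05, s_grid[1] - s_grid[0]
    log_nfact = math.lgamma(n + 1)
    log_gam_norm = alpha * math.log(beta) - math.lgamma(alpha)
    post = []
    for s in s_grid:
        acc = 0.0
        for b in b_grid:
            log_pois = n * math.log(s + b) - (s + b) - log_nfact
            log_prior_b = log_gam_norm + (alpha - 1) * math.log(b) - beta * b
            acc += math.exp(log_pois + log_prior_b) * db
        post.append(acc)
    z = sum(post) * ds                           # normalize on the grid
    return [p / z for p in post]

# Demo: 15 events observed, background expected around alpha/beta = 5,
# so the signal posterior concentrates near s = 10.
s_grid = [0.1 * i for i in range(301)]
post = signal_posterior(15, s_grid)
mean_s = sum(s * p for s, p in zip(s_grid, post)) * 0.1
```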

  11. Adaptation to background light enables contrast coding at rod bipolar cell synapses.

    PubMed

    Ke, Jiang-Bin; Wang, Yanbin V; Borghuis, Bart G; Cembrowski, Mark S; Riecke, Hermann; Kath, William L; Demb, Jonathan B; Singer, Joshua H

    2014-01-22

    Rod photoreceptors contribute to vision over an ∼ 6-log-unit range of light intensities. The wide dynamic range of rod vision is thought to depend upon light intensity-dependent switching between two parallel pathways linking rods to ganglion cells: a rod → rod bipolar (RB) cell pathway that operates at dim backgrounds and a rod → cone → cone bipolar cell pathway that operates at brighter backgrounds. We evaluated this conventional model of rod vision by recording rod-mediated light responses from ganglion and AII amacrine cells and by recording RB-mediated synaptic currents from AII amacrine cells in mouse retina. Contrary to the conventional model, we found that the RB pathway functioned at backgrounds sufficient to activate the rod → cone pathway. As background light intensity increased, the RB's role changed from encoding the absorption of single photons to encoding contrast modulations around mean luminance. This transition is explained by the intrinsic dynamics of transmission from RB synapses.

  12. Adaptation to background light enables contrast coding at rod bipolar cell synapses

    PubMed Central

    Ke, Jiang-Bin; Wang, Yanbin V.; Borghuis, Bart G.; Cembrowski, Mark S.; Riecke, Hermann; Kath, William L.; Demb, Jonathan B.; Singer, Joshua H.

    2013-01-01

    Rod photoreceptors contribute to vision over a ~6 log-unit range of light intensities. The wide dynamic range of rod vision is thought to depend upon light intensity-dependent switching between two parallel pathways linking rods to ganglion cells: a rod→rod bipolar (RB) cell pathway that operates at dim backgrounds and a rod→cone→cone bipolar cell pathway that operates at brighter backgrounds. We evaluated this conventional model of rod vision by recording rod-mediated light responses from ganglion and AII amacrine cells and by recording RB-mediated synaptic currents from AII amacrine cells in mouse retina. Contrary to the conventional model, we found that the RB pathway functioned at backgrounds sufficient to activate the rod→cone pathway. As background light intensity increased, the RB’s role changed from encoding the absorption of single photons to encoding contrast modulations around mean luminance. This transition is explained by the intrinsic dynamics of transmission from RB synapses. PMID:24373883

  13. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we presented a state-predictor based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. Efficiency of the design was demonstrated using the short-period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.
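    For readers unfamiliar with the baseline being improved upon, a minimal first-order MRAC loop can be simulated directly. This is the classical scalar MRAC with Lyapunov-rule adaptation, not the predictor-based PMRAC of the paper; the plant, reference model, and gains are invented for illustration.

```python
# Classical MRAC for a scalar plant xdot = a*x + b*u with a, b unknown
# (only sign(b) known). Reference model: xmdot = am*xm + bm*r, am < 0.
# The Lyapunov-rule updates drive the tracking error e = x - xm to zero.
dt, T = 0.001, 30.0
a, b = 1.0, 3.0          # unknown, open-loop-unstable plant (invented)
am, bm = -4.0, 4.0       # stable reference model
gamma = 1.0              # adaptation gain
x = xm = 0.0
kx = kr = 0.0            # adaptive feedback / feedforward gains
errors = []
for step in range(int(T / dt)):
    r = 1.0                          # constant reference command
    u = kx * x + kr * r              # adaptive control law
    e = x - xm                       # tracking error
    # Lyapunov-rule parameter updates (sign(b) = +1 assumed known)
    kx -= gamma * e * x * dt
    kr -= gamma * e * r * dt
    # Explicit Euler integration of plant and reference model
    x += (a * x + b * u) * dt
    xm += (am * xm + bm * r) * dt
    errors.append(abs(e))
```

    The Lyapunov analysis guarantees that the tracking error decays even though the plant parameters are never identified; the transient behavior of exactly this kind of loop is what PMRAC is designed to improve.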

  14. Quantifying the CV: Adapting an Impact Assessment Model to Astronomy

    NASA Astrophysics Data System (ADS)

    Bohémier, K. A.

    2015-04-01

    We present the process and results of applying the Becker Model to the curriculum vitae of a Yale University astronomy professor. As background, in July 2013, the Becker Medical Library at Washington Univ. in St. Louis held a workshop for librarians on the Becker Model, a framework developed by research assessment librarians for quantifying medical researchers' individual and group outputs. Following the workshop, the model was analyzed for content to adapt it to the physical sciences.

  15. Neural network approach to background modeling for video object segmentation.

    PubMed

    Culibrk, Dubravko; Marques, Oge; Socek, Daniel; Kalva, Hari; Furht, Borko

    2007-11-01

    This paper presents a novel background modeling and subtraction approach for video object segmentation. A neural network (NN) architecture is proposed to form an unsupervised Bayesian classifier for this application domain. The constructed classifier efficiently handles the segmentation in natural-scene sequences with complex background motion and changes in illumination. The weights of the proposed NN serve as a model of the background and are temporally updated to reflect the observed statistics of background. The segmentation performance of the proposed NN is qualitatively and quantitatively examined and compared to two extant probabilistic object segmentation algorithms, based on a previously published test pool containing diverse surveillance-related sequences. The proposed algorithm is parallelized on a subpixel level and designed to enable efficient hardware implementation.

  16. Image Discrimination Models Predict Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single channel model with a contrast sensitivity function filter, and the Root-Mean-Square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
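    The simplest of the three predictors is easy to state concretely. In the study's mixing paradigm the target image is blended into the background in a chosen proportion, and the RMS-difference predictor is then linear in that proportion. The tiny flat "images" below are invented for the sketch.

```python
def rms_difference(img_a, img_b):
    """Root-mean-square difference between two images (flattened to
    lists of gray levels): the simplest detectability predictor."""
    n = len(img_a)
    return (sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / n) ** 0.5

def mix(target, background, proportion):
    """Blend the target image into the background, as in the experiment."""
    return [proportion * t + (1 - proportion) * b
            for t, b in zip(target, background)]

# Invented 2x2 "images" as flat lists of gray levels.
background = [100.0, 110.0, 105.0, 95.0]
target = [100.0, 180.0, 160.0, 95.0]   # background plus a bright object
full = rms_difference(mix(target, background, 1.0), background)
half = rms_difference(mix(target, background, 0.5), background)
```

    Halving the mixing proportion halves the predicted detectability, which is why the mixing proportion could be tuned to make the target neither perfectly recognizable nor unrecognizable.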

  17. Adaptive Modeling of the International Space Station Electrical Power System

    NASA Technical Reports Server (NTRS)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  18. On fractional Model Reference Adaptive Control.

    PubMed

    Shi, Bao; Yuan, Jian; Dong, Chao

    2014-01-01

    This paper extends the conventional Model Reference Adaptive Control systems to fractional ones based on the theory of fractional calculus. A control law and an incommensurate fractional adaptation law are designed for the fractional plant and the fractional reference model. The stability and tracking convergence are analyzed using the frequency distributed fractional integrator model and Lyapunov theory. Moreover, numerical simulations of both linear and nonlinear systems are performed to exhibit the viability and effectiveness of the proposed methodology. PMID:24574897

  19. Gravitoinertial force background level affects adaptation to coriolis force perturbations of reaching movements.

    PubMed

    Lackner, J R; Dizio, P

    1998-08-01

    We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.
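    The quoted force levels follow directly from rotational kinematics; a quick check (taking g = 9.81 m/s², a value not stated in the abstract) reproduces the 0.25 g centrifugal load at the off-center seat and the 1.031 g resultant.

```python
import math

omega = 10.0 * 2.0 * math.pi / 60.0   # 10 rpm converted to rad/s
r = 2.23                              # off-center seat radius, m
g = 9.81                              # assumed gravitational acceleration, m/s^2

centrifugal_g = omega ** 2 * r / g                  # ~0.25 g sideways
resultant_g = math.sqrt(1.0 + centrifugal_g ** 2)   # ~1.031 g on the hand
```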

  20. Gravitoinertial force background level affects adaptation to coriolis force perturbations of reaching movements

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Dizio, P.

    1998-01-01

    We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.

  1. Background noise model development for seismic stations of KMA

    NASA Astrophysics Data System (ADS)

    Jeon, Youngsoo

    2010-05-01

    Background noise recorded by a seismometer is present in any seismic signal because of natural phenomena in the medium the signal passes through. Reducing seismic noise is very important for improving data quality in seismic studies, but the most important step is to find an appropriate site before installing the seismometer. For this reason, NIMR (National Institute of Meteorological Research) has begun developing a standard background noise model for the broadband seismic stations of the KMA (Korea Meteorological Administration) using a continuous data set obtained from 13 broadband stations during 2007 and 2008. We also developed a model using short-period seismic data from 10 stations for the year 2009. The method of McNamara and Buland (2004) is applied to analyse the background noise of the Korean Peninsula. Borehole seismometer records show lower noise levels at frequencies above 1 Hz than records at the surface, indicating that cultural noise in the inland Korean Peninsula must be considered when processing the seismic data. The double-frequency peak must also be taken into account because the Korean Peninsula is bordered by seas to the east, west, and south. The development of the KMA background model shows that the Peterson model (1993) does not fit the background noise observed on the Korean Peninsula.

  2. Modeling surface backgrounds from radon progeny plate-out

    SciTech Connect

    Perumpilly, G.; Guiseppe, V. E.; Snyder, N.

    2013-08-08

    The next generation low-background detectors operating deep underground aim for unprecedented low levels of radioactive backgrounds. The surface deposition and subsequent implantation of radon progeny in detector materials will be a source of energetic background events. We investigate Monte Carlo and model-based simulations to understand the surface implantation profile of radon progeny. Depending on the material and region of interest of a rare event search, these partial energy depositions can be problematic. Motivated by the use of Ge crystals for the detection of neutrinoless double-beta decay, we wish to understand the detector response of surface backgrounds from radon progeny. We look at the simulation of surface decays using a validated implantation distribution based on nuclear recoils and a realistic surface texture. Results of the simulations and measured α spectra are presented.

  3. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Almond, Russell G.; Mislevy, Robert J.

    1999-01-01

    Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)

  4. Comparative Study of MHD Modeling of the Background Solar Wind

    NASA Astrophysics Data System (ADS)

    Gressl, C.; Veronig, A. M.; Temmer, M.; Odstrčil, D.; Linker, J. A.; Mikić, Z.; Riley, P.

    2014-05-01

    Knowledge about the background solar wind plays a crucial role in the framework of space-weather forecasting. In-situ measurements of the background solar wind are only available for a few points in the heliosphere where spacecraft are located, therefore we have to rely on heliospheric models to derive the distribution of solar-wind parameters in interplanetary space. We test the performance of different solar-wind models, namely Magnetohydrodynamic Algorithm outside a Sphere/ENLIL (MAS/ENLIL), Wang-Sheeley-Arge/ENLIL (WSA/ENLIL), and MAS/MAS, by comparing model results with in-situ measurements from spacecraft located at 1 AU distance to the Sun (ACE, Wind). To exclude the influence of interplanetary coronal mass ejections (ICMEs), we chose the year 2007 as a time period with low solar activity for our comparison. We found that the general structure of the background solar wind is well reproduced by all models. The best model results were obtained for the parameter solar-wind speed. However, the predicted arrival times of high-speed solar-wind streams have typical uncertainties of the order of about one day. Comparison of model runs with synoptic magnetic maps from different observatories revealed that the choice of the synoptic map significantly affects the model performance.

  5. Adaptive Modeling Procedure Selection by Data Perturbation*

    PubMed Central

    Zhang, Yongli; Shen, Xiaotong

    2015-01-01

    Summary Many procedures have been developed to deal with the high-dimensional problem that is emerging in various business and economics areas. To evaluate and compare these procedures, modeling uncertainty caused by model selection and parameter estimation has to be assessed and integrated into a modeling process. To do this, a data perturbation method estimates the modeling uncertainty inherited in a selection process by perturbing the data. Critical to data perturbation is the size of perturbation, as the perturbed data should resemble the original dataset. To account for the modeling uncertainty, we derive the optimal size of perturbation, which adapts to the data, the model space, and other relevant factors in the context of linear regression. On this basis, we develop an adaptive data-perturbation method that, unlike its nonadaptive counterpart, performs well in different situations. This leads to a data-adaptive model selection method. Both theoretical and numerical analysis suggest that the data-adaptive model selection method adapts to distinct situations in that it yields consistent model selection and optimal prediction, without knowing which situation exists a priori. The proposed method is applied to real data from the commodity market and outperforms its competitors in terms of price forecasting accuracy. PMID:26640319

  6. Cosmic microwave background observables of small field models of inflation

    SciTech Connect

    Ben-Dayan, Ido; Brustein, Ram E-mail: ramyb@bgu.ac.il

    2010-09-01

    We construct a class of single small field models of inflation that can predict, contrary to popular wisdom, an observable gravitational wave signal in the cosmic microwave background anisotropies. The spectral index, its running, the tensor to scalar ratio and the number of e-folds can cover all the parameter space currently allowed by cosmological observations. A unique feature of models in this class is their ability to predict a negative spectral index running in accordance with recent cosmic microwave background observations. We discuss the new class of models from an effective field theory perspective and show that if the dimensionless trilinear coupling is small, as required for consistency, then the observed spectral index running implies a high scale of inflation and hence an observable gravitational wave signal. All the models share a distinct prediction of higher power at smaller scales, making them easy targets for detection.

  7. Arbitrary cylinder color model for the codebook based background subtraction.

    PubMed

    Zeng, Zhi; Jia, Jianyuan

    2014-09-01

    The codebook background subtraction approach is widely used in computer vision applications. One of its distinguishing features is the cylinder color model used to cope with illumination changes. The performance of this approach depends strongly on the color model. However, we have found that this color model is valid only if the spectrum components of the light source change in the same proportion. In fact, this is not true in many practical cases, and in these cases the performance of the approach is degraded significantly. To tackle this problem, we propose an arbitrary cylinder color model with a highly efficient updating strategy. This model uses cylinders whose axes need not pass through the origin, so the cylinder color model is extended to much more general cases. Experimental results show that, with no loss of real-time performance, the proposed model reduces the misclassification rate of the cylinder color model by more than fifty percent.
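    The classic test being generalized here is worth seeing concretely: each codeword defines a cylinder in RGB space whose axis runs from the origin through the codeword color, so a pure brightness change (movement along the axis) is tolerated while a hue shift (distance from the axis) is flagged as foreground. The sketch below follows the standard origin-axis formulation, not the arbitrary-cylinder model proposed in the paper, and the thresholds are invented.

```python
import math

def color_distortion(pixel, codeword):
    """Distance of an RGB pixel from the codeword's cylinder axis
    (the line from the origin through the codeword color)."""
    dot = sum(p * c for p, c in zip(pixel, codeword))
    norm_x2 = sum(p * p for p in pixel)
    norm_v2 = sum(c * c for c in codeword)
    proj2 = dot * dot / norm_v2      # squared projection onto the axis
    return math.sqrt(max(norm_x2 - proj2, 0.0))

def matches(pixel, codeword, eps=10.0, i_lo=0.5, i_hi=1.5):
    """Origin-axis cylinder test: background if the hue distortion is
    small and the brightness stays within a band around the codeword."""
    dist = color_distortion(pixel, codeword)
    brightness = math.sqrt(sum(p * p for p in pixel))
    ref = math.sqrt(sum(c * c for c in codeword))
    return dist <= eps and i_lo * ref <= brightness <= i_hi * ref

# Same hue but 1.4x brighter: accepted as an illumination change.
brighter = matches((112.0, 56.0, 28.0), (80.0, 40.0, 20.0))
# Blue-shifted pixel: large axis distance, rejected as foreground.
hue_shift = matches((80.0, 40.0, 80.0), (80.0, 40.0, 20.0))
```

    The paper's observation is that when the light source's spectral components change in different proportions, the background pixel moves off any axis through the origin, which is exactly the case its arbitrary-axis cylinders are built to handle.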

  8. Modeling the solar irradiance background via numerical simulation

    NASA Astrophysics Data System (ADS)

    Viticchié, B.; Vantaggiato, M.; Berrilli, F.; Del Moro, D.; Penza, V.; Pietropaolo, E.; Rast, M.

    2010-07-01

    Various small scale photospheric processes are responsible for spatial and temporal variations of solar emergent intensity. The contribution to total irradiance fluctuations of such small scale features is the solar irradiance background. Here we examine the statistical properties of the irradiance background computed via an n-body numerical scheme mimicking photospheric space-time correlations and calibrated by means of IBIS/DST spectro-polarimetric data. These computed properties are compared with experimental results derived from the analysis of VIRGO/SPM data. A future application of the model presented here could be the interpretation of stellar irradiance power spectra observed by new missions such as Kepler.

  9. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate the camera motion between the consecutive frames. Unlike previous methods, our algorithm does not need any preprocess step for computing the dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporal coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy, while being much faster than the state-of-the-art competing methods.
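    The sample-consensus idea the appearance model rests on can be shown for a single gray-level pixel. This minimal sketch follows the ViBe-style consensus-and-random-replacement scheme in spirit only, with invented thresholds, and omits the paper's motion compensation and MRF refinement entirely.

```python
import random

def is_background(value, samples, radius=20, min_matches=2):
    """Nonparametric consensus test: an observation is background if at
    least `min_matches` stored samples lie within `radius` of it."""
    return sum(abs(value - s) <= radius for s in samples) >= min_matches

def update_model(samples, value, subsample=16, rng=random):
    """Conservative stochastic update: on a background match, occasionally
    replace a random stored sample so the model absorbs slow change."""
    if rng.random() < 1.0 / subsample:
        samples[rng.randrange(len(samples))] = value

# One pixel's model: recent background gray levels (invented values).
model = [100, 102, 98, 101, 99, 103, 97, 100]
steady = is_background(103, model)   # small fluctuation -> background
moving = is_background(180, model)   # large jump -> foreground object
```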

  10. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  11. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell G.

    This paper synthesizes ideas from the fields of graphical modeling and education testing, particularly item response theory (IRT) applied to computerized adaptive testing (CAT). Graphical modeling can offer IRT a language for describing multifaceted skills and knowledge, and disentangling evidence from complex performances. IRT-CAT can offer…

  12. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  13. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F; Hansen, Glen

    2008-01-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  14. An Assessment of a Technique for Modeling Lidar Background Measurements

    NASA Astrophysics Data System (ADS)

    Powell, K. A.; Hunt, W. H.; Vaughan, M. A.; Hair, J. W.; Butler, C. F.; Hostetler, C. A.

    2015-12-01

    A high-fidelity lidar simulation tool has been developed to generate synthetic lidar backscatter data that closely matches the expected performance of various lidars, including the noise characteristics inherent to analog detection and uncertainties related to the measurement environment. This tool supports performance trade studies and scientific investigations for both the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), which flies aboard Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), and the NASA Langley Research Center airborne High Spectral Resolution Lidar (HSRL). The simulation tool models the lidar instrument characteristics, the backscatter signals generated from aerosols, clouds, ocean surface and subsurface, and the solar background signals. The background signals are derived from the simulated aerosol and cloud characteristics, the surface type, and solar zenith angle, using a look-up table of upwelling radiance vs scene type. The upwelling radiances were derived from the CALIOP RMS background noise and were correlated with measurements of the particulate intensive and extensive optical properties, including surface scattering for transparent layers. Tests were conducted by tuning the tool for both HSRL and CALIOP instrument settings and the atmospheres were defined using HSRL measurements from underflights of CALIPSO. For similar scenes, the simulated and measured backgrounds were compared. Overall, comparisons showed good agreement, verifying the accuracy of the tool to support studies involving instrument characterization and advanced data analysis techniques.

  15. Sigma models for genuinely non-geometric backgrounds

    NASA Astrophysics Data System (ADS)

    Chatzistavrakidis, Athanasios; Jonke, Larisa; Lechtenfeld, Olaf

    2015-11-01

    The existence of genuinely non-geometric backgrounds, i.e. ones without geometric dual, is an important question in string theory. In this paper we examine this question from a sigma model perspective. First we construct a particular class of Courant algebroids as protobialgebroids with all types of geometric and non-geometric fluxes. For such structures we apply the mathematical result that any Courant algebroid gives rise to a 3D topological sigma model of the AKSZ type and we discuss the corresponding 2D field theories. It is found that these models are always geometric, even when both 2-form and 2-vector fields are neither vanishing nor inverse of one another. Taking a further step, we suggest an extended class of 3D sigma models, whose world volume is embedded in phase space, which allow for genuinely non-geometric backgrounds. Adopting the doubled formalism such models can be related to double field theory, albeit from a world sheet perspective.

  16. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the objective function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables; it can also be applied to find the global extremum of polymodal functions and to optimize scalarized forms of vector objective functions.
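The contracting-domain idea can be illustrated with a one-variable sketch. This is not Voronin's actual procedure: the quadratic surrogate, the three-point sampling, and the 0.25 contraction factor are all assumptions made for illustration.

```python
# Illustrative sketch of optimization by iterative refinement of an
# approximation model on a domain that contracts toward the extremum
# (one variable for clarity; surrogate and contraction factor assumed).
def adaptive_quadratic_min(f, lo, hi, iters=40):
    for _ in range(iters):
        x0, x1, x2 = lo, (lo + hi) / 2.0, hi
        y0, y1, y2 = f(x0), f(x1), f(x2)
        # Fit y = a*x^2 + b*x + c through the three samples and take
        # the parabola's vertex as the predicted extremum.
        denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
        a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
        b = (x2 ** 2 * (y0 - y1) + x1 ** 2 * (y2 - y0) + x0 ** 2 * (y1 - y2)) / denom
        xv = -b / (2.0 * a) if a > 0 else x1
        xv = min(max(xv, lo), hi)
        # Contract the approximation domain around the predicted extremum.
        width = (hi - lo) * 0.25
        lo, hi = max(lo, xv - width), min(hi, xv + width)
    return (lo + hi) / 2.0
```

On a unimodal function the interval halves each iteration, so the evaluation count grows only logarithmically with the desired accuracy, which is the efficiency property the abstract highlights.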

  17. Multiple model adaptive control with mixing

    NASA Astrophysics Data System (ADS)

    Kuipers, Matthew

    Despite the remarkable theoretical accomplishments and successful applications of adaptive control, the field is not sufficiently mature to solve challenging control problems requiring strict performance and safety guarantees. Towards addressing these issues, a novel deterministic multiple-model adaptive control approach called adaptive mixing control is proposed. In this approach, adaptation comes from a high-level system called the supervisor that mixes into feedback a number of candidate controllers, each finely tuned to a subset of the parameter space. The mixing signal, the supervisor's output, is generated by estimating the unknown parameters and, at every instant of time, calculating the contribution level of each candidate controller based on certainty equivalence. The proposed architecture provides two characteristics relevant to solving stringent, performance-driven applications. First, the full suite of linear time invariant control tools is available. A disadvantage of conventional adaptive control is its restriction to utilizing only those control laws whose solutions can be feasibly computed in real-time, such as model reference and pole-placement type controllers. Because its candidate controllers are computed off line, the proposed approach suffers no such restriction. Second, the supervisor's output is smooth and does not necessarily depend on explicit a priori knowledge of the disturbance model. These characteristics can lead to improved performance by avoiding the unnecessary switching and chattering behaviors associated with some other multiple adaptive control approaches. The stability and robustness properties of the adaptive scheme are analyzed. It is shown that the mean-square regulation error is of the order of the modeling error. And when the parameter estimate converges to its true value, which is guaranteed if a persistence of excitation condition is satisfied, the adaptive closed-loop system converges exponentially fast to a closed
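The mixing step can be sketched in a few lines. Everything below is hypothetical: the parameter-subset centers, the gain values (standing in for controllers designed off line), and the Gaussian weighting; the actual supervisor derives its weights from a parameter estimator via certainty equivalence.

```python
import math

# Toy sketch of adaptive mixing: candidate gains, each tuned to a region
# of the parameter space, blended by smooth weights centered on the
# current parameter estimate theta_hat. All numbers are illustrative.
CENTERS = [-1.0, 0.0, 1.0]     # centers of the parameter subsets
GAINS   = [3.0, 2.0, 1.0]      # candidate feedback gains (pre-computed)

def mixed_gain(theta_hat, width=0.5):
    w = [math.exp(-((theta_hat - c) / width) ** 2) for c in CENTERS]
    return sum(wi * ki for wi, ki in zip(w, GAINS)) / sum(w)
```

Because the weights vary smoothly with the estimate, the blended gain interpolates between candidates instead of switching abruptly, which is the chattering-avoidance property described above.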

  18. Adaptive modelling of structured molecular representations for toxicity prediction

    NASA Astrophysics Data System (ADS)

    Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Tiné, Maria Rosaria

    2012-12-01

    We investigated the possibility of modelling structure-toxicity relationships by direct treatment of the molecular structure (without using descriptors) through an adaptive model able to retain the appropriate structural information. With respect to traditional descriptor-based approaches, this provides a more general and flexible way to tackle prediction problems that is particularly suitable when little or no background knowledge is available. Our method employs a tree-structured molecular representation, which is processed by a recursive neural network (RNN). To explore the realization of RNN modelling in toxicological problems, we employed a data set containing growth impairment concentrations (IGC50) for Tetrahymena pyriformis.

  19. Roy’s Adaptation Model-Based Patient Education for Promoting the Adaptation of Hemodialysis Patients

    PubMed Central

    Afrasiabifar, Ardashir; Karimi, Zohreh; Hassani, Parkhideh

    2013-01-01

    Background In addition to physical adaptation and psychosocial adjustment to chronic renal disease, hemodialysis (HD) patients must also adapt to the dialysis therapy plan. Objectives The aim of the present study was to examine the effect of Roy’s adaptation model-based patient education on adaptation of HD patients. Patients and Methods This study is a semi-experimental research that was conducted with the participation of all patients with end-stage renal disease referred to the dialysis unit of Shahid Beheshti Hospital of Yasuj city, 2010. A total of 59 HD patients were randomly allocated to test and control groups. Data were collected by a questionnaire based on the Roy’s Adaptation Model (RAM). Validity and reliability of the questionnaire were approved. Patient education was delivered in eight one-hour sessions over eight weeks. At the end of the education plan, the patients were given an educational booklet containing the main points of self-care for HD patients. The effectiveness of the education plan was assessed two months after plan completion and data were compared with the pre-education scores. All analyses were conducted using the SPSS software (version 16) through descriptive and inferential statistics including correlation, t-test, ANOVA and ANCOVA tests. Results The results showed significant differences in the mean scores of the physiological and self-concept modes between the test and control groups (P = 0.01 and P = 0.03 respectively). Also a statistical difference (P = 0.04) was observed in the mean scores of the role function mode of both groups. There was no significant difference in the mean scores of the interdependence mode between the two groups. Conclusions RAM-based patient education could improve the patients’ adaptation in the physiologic and self-concept modes. In addition to suggesting further research in this area, nurses are recommended to pay more attention to applying RAM in dialysis centers. PMID:24396575

  20. Retinal mesopic adaptation model for brightness perception under transient glare.

    PubMed

    Barrionuevo, Pablo Alejandro; Colombo, Elisa Margarita; Issolio, Luis Alberto

    2013-06-01

    A glare source in the visual field modifies the brightness of a test patch surrounded by a mesopic background. In this study, we investigated the effect of two levels of transient glare on brightness perception for several combinations of mesopic reference test luminances (Lts) and background luminances (Lbs). While brightness perception was affected by Lb, there were no appreciable effects for changes in the Lt. The highest brightness reduction was found for Lbs in the low mesopic range. Considering the main proposal that brightness can be inferred from contrast and the Lb sets the mesopic luminance adaptation, we hypothesized that contrast gain and retinal adaptation mechanisms would act when a transient glare source was present in the visual field. A physiology-based model that adequately fitted the present and previous results was developed.

  1. Fourier models and the loci of adaptation.

    PubMed

    Makous, W L

    1997-09-01

    First, measures of sensitivity and the need for a model to interpret them are addressed. Then modeling in the Fourier domain is promoted by a demonstration of how such an approach explains spatial sensitization and its dependence on luminance. Then the retinal illuminance and receptor absorptions produced by various stimuli are derived to foster interpretation of the neural mechanisms underlying various psychophysical phenomena. Finally, the sequence and the anatomical loci of the processes controlling visual sensitivity are addressed. It is concluded that multiplicative adaptation often has effects identical to response compression followed by subtractive adaptation and that, perhaps as a consequence, there is no evidence of retinal gain changes in human cone vision until light levels are well above those available in natural scenes and in most contemporary psychophysical experiments; that contrast gain control fine tunes sensitivity to patterns at all luminances; and that response compression, modulated by subtractive adaptation, predominates in the control of sensitivity in human cone vision.

  2. Peaks in the Cosmic Microwave Background: Flat versus Open Models

    NASA Astrophysics Data System (ADS)

    Barreiro, R. B.; Sanz, J. L.; Martínez-González, E.; Cayón, L.; Silk, Joseph

    1997-03-01

    We present properties of the peaks (maxima) of the microwave background anisotropies expected in flat and open cold dark matter models. We obtain analytical expressions of several topological descriptors: mean number of maxima and the probability distribution of the Gaussian curvature and the eccentricity of the peaks. These quantities are calculated as functions of the radiation power spectrum, assuming a Gaussian distribution of temperature anisotropies. We present results for angular resolutions ranging from 5' to 20' (antenna FWHM), scales that are relevant for the MAP and COBRAS/SAMBA space missions and the ground-based interferometer experiments. Our analysis also includes the effects of noise. We find that the number of peaks can discriminate between standard cold dark matter models and that the Gaussian curvature distribution provides a useful test for these various models, whereas the eccentricity distribution cannot distinguish between them.

  3. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508
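The central idea, letting the available data choose the model's complexity, can be sketched with a hypothetical validation-based selection between two candidates. This stands in for, and greatly simplifies, the paper's actual inference machinery.

```python
# Sketch of adapting model complexity to data: pick the candidate
# (constant vs. straight line) with the lowest held-out error.
# Purely illustrative; the paper's approach is far more general.
def choose_model(train, val):
    xs, ys = zip(*train)
    c = sum(ys) / len(ys)                      # candidate 1: constant model
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx                            # candidate 2: least-squares line
    models = {"constant": lambda x: c, "linear": lambda x: a + b * x}
    def mse(f):
        return sum((f(x) - y) ** 2 for x, y in val) / len(val)
    return min(models, key=lambda k: mse(models[k]))
```

With sparse or noisy data the simpler candidate wins and overfitting is avoided; with clearly structured data the richer candidate is selected, mirroring the adaptive-complexity behaviour the abstract describes.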

  4. Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses

    SciTech Connect

    Bastos, Rodrigo O.; Appoloni, Carlos R.

    2008-08-07

    In laboratory gamma spectrometric analyses, the procedures for estimating background usually overestimate it. If an empty container similar to that used to hold samples is measured, it does not consider the background attenuation by sample matrix. If a 'blank' sample is measured, the hypothesis that this sample will be free of radionuclides is generally not true. The activity of this 'blank' sample is frequently sufficient to mask or to overwhelm the effect of attenuation so that the background remains overestimated. In order to overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. In addition to some reasonable hypotheses, the model presumes knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and sample density. An evaluation of the effects of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector that has a 66% relative efficiency. The results are presented for energies in the range of 63 to 2614 keV, for sample densities varying from 1.5 to 2.5 g·cm⁻³, and for heights of the material on the detector of 2 cm and 5 cm. For a sample density of 2.0 g·cm⁻³ and a 2 cm height, the method allowed for a lowering of the LLD by 3.4% for the 1460 keV energy of ⁴⁰K, 3.9% for the 911 keV energy of ²²⁸Ac, 4.5% for the 609 keV energy of ²¹⁴Bi, and 8.3% for the 92 keV energy of ²³⁴Th. For a sample density of 1.75 g·cm⁻³ and a 5 cm height, the method indicates a lowering of the LLD by 6.5%, 7.4%, 8.3% and 12.9% for the same respective energies.
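At its simplest, the correction amounts to scaling the empty-container background channel by channel with the sample's transmission. A minimal sketch under strong simplifications: the mass attenuation coefficients below are made up, real values depend on energy and matrix composition, and a single straight-path length replaces the geometry averaging a real detector setup requires.

```python
import math

# Attenuate an empty-container background spectrum with the
# Beer-Lambert law: transmission = exp(-mu_rho * density * path).
# mu_rho holds hypothetical mass attenuation coefficients (cm^2/g),
# one per spectral channel.
def attenuated_background(counts, mu_rho, density_g_cm3, height_cm):
    return [c * math.exp(-mr * density_g_cm3 * height_cm)
            for c, mr in zip(counts, mu_rho)]
```

Because low-energy channels attenuate most strongly, the correction lowers the estimated background (and hence the LLD) most at low energies, consistent with the 92 keV line benefiting more than the 1460 keV line above.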

  5. Adaptive-network models of collective dynamics

    NASA Astrophysics Data System (ADS)

    Zschaler, G.

    2012-09-01

    Complex systems can often be modelled as networks, in which their basic units are represented by abstract nodes and the interactions among them by abstract links. This network of interactions is the key to understanding emergent collective phenomena in such systems. In most cases, it is an adaptive network, which is defined by a feedback loop between the local dynamics of the individual units and the dynamical changes of the network structure itself. This feedback loop gives rise to many novel phenomena. Adaptive networks are a promising concept for the investigation of collective phenomena in different systems. However, they also present a challenge to existing modelling approaches and analytical descriptions due to the tight coupling between local and topological degrees of freedom. In this work, which is essentially my PhD thesis, I present a simple rule-based framework for the investigation of adaptive networks, using which a wide range of collective phenomena can be modelled and analysed from a common perspective. In this framework, a microscopic model is defined by the local interaction rules of small network motifs, which can be implemented in stochastic simulations straightforwardly. Moreover, an approximate emergent-level description in terms of macroscopic variables can be derived from the microscopic rules, which we use to analyse the system's collective and long-term behaviour by applying tools from dynamical systems theory. We discuss three adaptive-network models for different collective phenomena within our common framework. First, we propose a novel approach to collective motion in insect swarms, in which we consider the insects' adaptive interaction network instead of explicitly tracking their positions and velocities. We capture the experimentally observed onset of collective motion qualitatively in terms of a bifurcation in this non-spatial model. We find that three-body interactions are an essential ingredient for collective motion to emerge
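The feedback loop between node states and topology can be made concrete with an adaptive voter model, a standard example of such rule-based dynamics. The parameters, the random initial graph, and the rewire-to-random-node rule below are illustrative choices, not taken from the thesis.

```python
import random

# Adaptive voter model sketch: a discordant link (endpoints disagree)
# is resolved either by the node adopting its neighbour's state or by
# rewiring the link, coupling state dynamics to topology dynamics.
def adaptive_voter(n=100, m=200, p_rewire=0.5, steps=20000, seed=1):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
    for _ in range(steps):
        k = rng.randrange(m)
        a, b = edges[k]
        if state[a] == state[b]:
            continue                           # only discordant links evolve
        if rng.random() < p_rewire:
            edges[k] = (a, rng.randrange(n))   # rewire the discordant link
        else:
            state[a] = state[b]                # adopt the neighbour's opinion
    # Number of discordant links remaining (0 at consensus/fragmentation).
    return sum(1 for a, b in edges if state[a] != state[b])
```

Depending on the rewiring probability, such systems either reach consensus or fragment into internally uniform components, the kind of emergent-level behaviour the macroscopic descriptions in this framework aim to capture.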

  6. Adaptive Behaviour Assessment System: Indigenous Australian Adaptation Model (ABAS: IAAM)

    ERIC Educational Resources Information Center

    du Plessis, Santie

    2015-01-01

    The study objectives were to develop, trial and evaluate a cross-cultural adaptation of the Adaptive Behavior Assessment System-Second Edition Teacher Form (ABAS-II TF) ages 5-21 for use with Indigenous Australian students ages 5-14. This study introduced a multiphase mixed-method design with semi-structured and informal interviews, school…

  7. Adaptive importance sampling for network growth models

    PubMed Central

    Holmes, Susan P.

    2016-01-01

    Network Growth Models such as Preferential Attachment and Duplication/Divergence are popular generative models with which to study complex networks in biology, sociology, and computer science. However, analyzing them within the framework of model selection and statistical inference is often complicated and computationally difficult, particularly when comparing models that are not directly related or nested. In practice, ad hoc methods are often used with uncertain results. If possible, the use of standard likelihood-based statistical model selection techniques is desirable. With this in mind, we develop an Adaptive Importance Sampling algorithm for estimating likelihoods of Network Growth Models. We introduce the use of the classic Plackett-Luce model of rankings as a family of importance distributions. Updates to importance distributions are performed iteratively via the Cross-Entropy Method with an additional correction for degeneracy/over-fitting inspired by the Minimum Description Length principle. This correction can be applied to other estimation problems using the Cross-Entropy method for integration/approximate counting, and it provides an interpretation of Adaptive Importance Sampling as iterative model selection. Empirical results for the Preferential Attachment model are given, along with a comparison to an alternative established technique, Annealed Importance Sampling. PMID:27182098
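The Cross-Entropy update at the heart of the method can be sketched on a toy continuous problem. Here a Gaussian proposal stands in for the Plackett-Luce importance family used in the paper, and the degeneracy/MDL correction is omitted.

```python
import random

# Generic Cross-Entropy iteration: sample from the proposal, keep the
# elite fraction, refit the proposal to the elite. Gaussian family and
# all parameters are illustrative stand-ins.
def cross_entropy_max(score, mu=0.0, sigma=5.0, n=200, elite_frac=0.1, iters=30):
    random.seed(0)
    for _ in range(iters):
        xs = [random.gauss(mu, sigma) for _ in range(n)]
        xs.sort(key=score, reverse=True)
        elite = xs[:max(1, int(n * elite_frac))]
        mu = sum(elite) / len(elite)                     # refit proposal mean
        var = sum((x - mu) ** 2 for x in elite) / len(elite)
        sigma = max(1e-3, var ** 0.5)                    # refit proposal spread
    return mu
```

Each refit concentrates the importance distribution on high-scoring regions; in the paper the same loop concentrates a distribution over node orderings on those most consistent with the observed network.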

  8. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising of several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  9. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  10. Adaptive Control with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to improve transient performance of the input and output signals of uncertain systems. A simple modification of the reference model is proposed by feeding back the tracking error signal. It is shown that the proposed approach guarantees tracking of the given reference command and the reference control signal (one that would be designed if the system were known) not only asymptotically but also in transient. Moreover, it prevents generation of high frequency oscillations, which are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track reference commands of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated with a simulation example.
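The modification can be sketched for a first-order system: the tracking error x − xm is fed back into the reference model with a gain l, so the commanded trajectory bends toward the plant during transients. All numbers below are illustrative, and a fixed stabilizing controller stands in for the adaptive law of the paper.

```python
# Minimal sketch of a reference model modified by tracking-error
# feedback (gain l). First-order plant and reference model; forward
# Euler integration. Values are illustrative only.
def simulate(l=5.0, dt=0.001, steps=5000, r=1.0):
    a, b = -1.0, 1.0      # plant dynamics (uncertain in practice)
    am, bm = -2.0, 2.0    # reference model dynamics
    k = 2.0               # fixed feedback gain (stand-in for the adaptive law)
    x = xm = 0.0
    for _ in range(steps):
        u = k * (xm - x) + r
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r + l * (x - xm))   # error feedback term
    return x, xm
```

With l = 0 this reduces to conventional MRAC's open-loop reference model; with l > 0 the reference trajectory slows down when the plant lags, which is the mechanism that tempers transients and high-frequency oscillation.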

  11. Adaptive cyber-attack modeling system

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Dougherty, Edward T.

    2006-05-01

    The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides an ample opportunity not only for the nefarious exploits of lone wolf computer hackers, but for more systematic software attacks from organized entities. Much effort and focus has been placed on preventing and ameliorating network and OS attacks; a concomitant emphasis is required to address protection of mission critical software. Typical evaluation and verification and validation (V&V) of software protection techniques and methodologies involves the use of a team of subject matter experts (SMEs) to mimic potential attackers or hackers. This manpower intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the necessary multiple non-subjective analyses required to support quantifying software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition is supported via BN evidence propagation and model parameter learning.
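At its core, evidence propagation in such a network reduces to repeated Bayesian updating. A single-node sketch with placeholder probabilities (a real attack model chains many such updates through the belief network):

```python
# Bayes update for one binary hypothesis node given one observation:
#   P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
# The probabilities used are placeholders, not calibrated values.
def posterior(prior, lik_given_h, lik_given_not_h):
    num = lik_given_h * prior
    return num / (num + lik_given_not_h * (1.0 - prior))
```

For example, posterior(0.1, 0.9, 0.2) raises a 10% prior belief in a successful attack to 1/3 after observing evidence 4.5 times more likely under attack than otherwise; parameter learning then adjusts these conditional probabilities from accumulated observations.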

  12. Modeling and adaptive control of acoustic noise

    NASA Astrophysics Data System (ADS)

    Venugopal, Ravinder

    Active noise control is a problem that receives significant attention in many areas including aerospace and manufacturing. The advent of inexpensive high performance processors has made it possible to implement real-time control algorithms to effect active noise control. Both fixed-gain and adaptive methods may be used to design controllers for this problem. For fixed-gain methods, it is necessary to obtain a mathematical model of the system to design controllers. In addition, models help us gain phenomenological insights into the dynamics of the system. Models are also necessary to perform numerical simulations. However, models are often inadequate for the purpose of controller design because they involve parameters that are difficult to determine and also because there are always unmodeled effects. This fact motivates the use of adaptive algorithms for control since adaptive methods usually require significantly less model information than fixed-gain methods. The first part of this dissertation deals with derivation of a state space model of a one-dimensional acoustic duct. Two types of actuation, namely, a side-mounted speaker (interior control) and an end-mounted speaker (boundary control), are considered. The techniques used to derive the model of the acoustic duct are extended to the problem of fluid surface wave control. A state space model of small amplitude surface waves of a fluid in a rectangular container is derived, and two types of control methods, namely, surface pressure control and map actuator based control, are proposed and analyzed. The second part of this dissertation deals with the development of an adaptive disturbance rejection algorithm that is applied to the problem of active noise control. ARMARKOV models, which have the same structure as predictor models, are used for system representation. The algorithm requires knowledge of only one path of the system, from control to performance, and does not require a measurement of the disturbance nor

  13. Adaptive human behavior in epidemiological models.

    PubMed

    Fenichel, Eli P; Castillo-Chavez, Carlos; Ceddia, M G; Chowell, Gerardo; Parra, Paula A Gonzalez; Hickling, Graham J; Holloway, Garth; Horan, Richard; Morin, Benjamin; Perrings, Charles; Springborn, Michael; Velazquez, Leticia; Villalobos, Cristina

    2011-04-12

    The science and management of infectious disease are entering a new stage. Increasingly, public policy to manage epidemics focuses on motivating people, through social distancing policies, to alter their behavior to reduce contacts and reduce public disease risk. Person-to-person contacts drive human disease dynamics. People value such contacts and are willing to accept some disease risk to gain contact-related benefits. The cost-benefit trade-offs that shape contact behavior, and hence the course of epidemics, are often only implicitly incorporated in epidemiological models. This approach creates difficulty in parsing out the effects of adaptive behavior. We use an epidemiological-economic model of disease dynamics to explicitly model the trade-offs that drive person-to-person contact decisions. Results indicate that including adaptive human behavior significantly changes the predicted course of epidemics and that this inclusion has implications for parameter estimation and interpretation and for the development of social distancing policies. Acknowledging adaptive behavior requires a shift in thinking about epidemiological processes and parameters.
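The qualitative effect can be sketched by letting the contact rate fall with prevalence in a discrete-time SIR model. The functional form and all parameters below are assumptions for illustration; the paper derives contact behavior from an explicit economic trade-off rather than an ad hoc response function.

```python
# Toy SIR model with an adaptive contact rate: transmission beta falls
# as prevalence i rises, mimicking behavioral contact reduction.
# Forward Euler; all parameter values are illustrative.
def sir_adaptive(beta0=0.5, gamma=0.2, alpha=20.0, dt=0.1, steps=2000):
    s, i = 0.99, 0.01
    peak = i
    for _ in range(steps):
        beta = beta0 / (1.0 + alpha * i)   # contacts drop with prevalence
        new_inf = beta * s * i
        s += dt * (-new_inf)
        i += dt * (new_inf - gamma * i)
        peak = max(peak, i)
    return peak
```

Setting alpha = 0 recovers the fixed-behavior SIR model; any alpha > 0 flattens and delays the epidemic peak, which is why ignoring adaptive behavior biases parameter estimates fit to observed epidemic curves.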

  14. Involvement of melanin-concentrating hormone 2 in background color adaptation of barfin flounder Verasper moseri.

    PubMed

    Mizusawa, Kanta; Kawashima, Yusuke; Sunuma, Toshikazu; Hamamoto, Akie; Kobayashi, Yuki; Kodera, Yoshio; Saito, Yumiko; Takahashi, Akiyoshi

    2015-04-01

    In teleosts, melanin-concentrating hormone (MCH) plays a key role in skin color changes. MCH is released into general circulation from the neurohypophysis, which causes pigment aggregation in the skin chromatophores. Recently, a novel MCH (MCH2) precursor gene, which is orthologous to the mammalian MCH precursor gene, has been identified in some teleosts using genomic data mining. The physiological function of MCH2 remains unclear. In the present study, we cloned the cDNA for MCH2 from barfin flounder, Verasper moseri. The putative prepro-MCH2 contains a 25-amino-acid MCH2 peptide region. Liquid chromatography-electrospray ionization mass spectrometry with a high resolution mass analyzer was used to confirm the amino acid sequences of the MCH1 and MCH2 peptides from the pituitary extract. In vitro synthesized MCH1 and MCH2 induced pigment aggregation in a dose-dependent manner. A mammalian cell-based assay indicated that both MCH1 and MCH2 functionally interacted with both MCH receptor types 1 and 2. Mch1 and mch2 are exclusively expressed in the brain and pituitary. The levels of brain mch2 transcript were three times higher in fish that were chronically acclimated to a white background than in those acclimated to a black background. These results suggest that in V. moseri, MCH1 and MCH2 are involved in the response to changes in background colors, during the process of chromatophore control.

  15. A quadtree-adaptive spectral wave model

    NASA Astrophysics Data System (ADS)

    Popinet, Stéphane; Gorman, Richard M.; Rickard, Graham J.; Tolman, Hendrik L.

    A spectral wave model coupling a quadtree-adaptive discretisation of the two spatial dimensions with a standard discretisation of the two spectral dimensions is described. The implementation is greatly simplified by reusing components of the Gerris solver (for spatial advection on quadtrees) and WAVEWATCH III (for spectral advection and source terms). Strict equivalence between the anisotropic diffusion and spatial filtering methods for alleviation of the Garden Sprinkler Effect (GSE) is demonstrated. This equivalence facilitates the generalisation of GSE alleviation techniques to quadtree grids. For the case of a cyclone-generated wave field, the cost of the adaptive method increases linearly with spatial resolution, compared to quadratically for constant-resolution methods. This leads to a decrease in runtimes of one to two orders of magnitude for practical spatial resolutions. Similar efficiency gains are shown to be possible for global spectral wave forecasting.

  16. An adaptive contextual quantum language model

    NASA Astrophysics Data System (ADS)

    Li, Jingfei; Zhang, Peng; Song, Dawei; Hou, Yuexian

    2016-08-01

    User interactions in a search system represent a rich source of implicit knowledge about the user's cognitive state and information need, which continuously evolves over time. Despite massive efforts to exploit and incorporate this implicit knowledge in information retrieval, it is still a challenge to effectively capture the term dependencies and the user's dynamic information need (reflected by query modifications) in the context of user interaction. To tackle these issues, motivated by the recent Quantum Language Model (QLM), we develop a QLM-based retrieval model for session search, which uses density matrices to naturally incorporate the complex term dependencies occurring in the user's historical queries and clicked documents. In order to capture the dynamic information within a user's search session, we propose a density matrix transformation framework and further develop an adaptive QLM ranking model. Extensive comparative experiments show the effectiveness of our session quantum language models.

  17. Gravitational wave background from Standard Model physics: qualitative features

    NASA Astrophysics Data System (ADS)

    Ghiglieri, J.; Laine, M.

    2015-07-01

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  18. Gravitational wave background from Standard Model physics: qualitative features

    SciTech Connect

    Ghiglieri, J.; Laine, M.

    2015-07-16

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T>160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  19. Gravitational wave background from Standard Model physics: qualitative features

    SciTech Connect

    Ghiglieri, J.; Laine, M. E-mail: laine@itp.unibe.ch

    2015-07-01

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  20. Synaptic dynamics: linear model and adaptation algorithm.

    PubMed

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

    In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses are analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper presents a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-layer Neural Network and GMM classifiers while having fewer free parameters and
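    The key property of such a linear approximation is superposition: the response to a spike train is the sum of the responses to its individual spikes. The sketch below illustrates this with an invented two-exponential kernel (the paper's actual linear model and its fitted parameters are not reproduced here).

```python
import math

def linear_synapse_response(spike_times, t_grid, tau_fast=0.01, tau_slow=0.1,
                            w_fast=1.0, w_slow=-0.4):
    """Sum of exponential kernels -> output is linear in the spike train."""
    out = []
    for t in t_grid:
        y = 0.0
        for ts in spike_times:
            if ts <= t:
                y += w_fast * math.exp(-(t - ts) / tau_fast)
                y += w_slow * math.exp(-(t - ts) / tau_slow)
        out.append(y)
    return out

t_grid = [i * 0.005 for i in range(20)]
y_a = linear_synapse_response([0.00], t_grid)
y_b = linear_synapse_response([0.02], t_grid)
y_ab = linear_synapse_response([0.00, 0.02], t_grid)

# Superposition holds, which is what simplifies analysis and training.
print(all(abs(a + b - c) < 1e-12 for a, b, c in zip(y_a, y_b, y_ab)))  # True
```

By contrast, the non-linear facilitation-depression model couples each spike's effect to the synapse's recent history, which is what the linear model trades away for tractability.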

  1. Adaptive neuro-fuzzy inference system for classification of background EEG signals from ESES patients and controls.

    PubMed

    Yang, Zhixian; Wang, Yinghua; Ouyang, Gaoxiang

    2014-01-01

    Background electroencephalography (EEG), recorded with scalp electrodes, in children with electrical status epilepticus during slow-wave sleep (ESES) syndrome and control subjects has been analyzed. We considered 10 ESES patients, all right-handed and aged 3-9 years. The 10 control individuals had the same characteristics as the ESES patients but presented a normal EEG. Recordings were undertaken in the awake and relaxed states with their eyes open. The complexity of background EEG was evaluated using the permutation entropy (PE) and sample entropy (SampEn) in combination with the ANOVA test. The entropy measures of EEG are significantly different between the ESES patients and normal control subjects. Then, a classification framework based on entropy measures and an adaptive neuro-fuzzy inference system (ANFIS) classifier is proposed to distinguish ESES and normal EEG signals. The results are promising and a classification accuracy of about 89% is achieved. PMID:24790547
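    Of the two complexity measures used here, permutation entropy is the simplest to reproduce. A minimal sketch follows (the order and delay values are illustrative, not the study's settings; SampEn and the ANFIS classifier are omitted).

```python
import math
import random

def permutation_entropy(x, order=3, delay=1):
    """Normalised Shannon entropy of ordinal patterns (0 = regular, ~1 = random)."""
    counts = {}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = tuple(x[i + j * delay] for j in range(order))
        # Ordinal pattern: the ranks of the samples within the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = sum(-p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))

ramp = list(range(100))                        # fully ordered signal
random.seed(0)
noise = [random.random() for _ in range(1000)]
print(permutation_entropy(ramp))               # 0.0 (a single ordinal pattern)
print(permutation_entropy(noise) > 0.9)        # close to 1 for white noise
```

Lower values on patient EEG relative to controls would indicate a more regular, less complex signal, which is the kind of group difference the ANOVA test is then applied to.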

  2. Adaptive Neuro-Fuzzy Inference System for Classification of Background EEG Signals from ESES Patients and Controls

    PubMed Central

    Yang, Zhixian; Wang, Yinghua; Ouyang, Gaoxiang

    2014-01-01

    Background electroencephalography (EEG), recorded with scalp electrodes, in children with electrical status epilepticus during slow-wave sleep (ESES) syndrome and control subjects has been analyzed. We considered 10 ESES patients, all right-handed and aged 3–9 years. The 10 control individuals had the same characteristics as the ESES patients but presented a normal EEG. Recordings were undertaken in the awake and relaxed states with their eyes open. The complexity of background EEG was evaluated using the permutation entropy (PE) and sample entropy (SampEn) in combination with the ANOVA test. The entropy measures of EEG are significantly different between the ESES patients and normal control subjects. Then, a classification framework based on entropy measures and an adaptive neuro-fuzzy inference system (ANFIS) classifier is proposed to distinguish ESES and normal EEG signals. The results are promising and a classification accuracy of about 89% is achieved. PMID:24790547

  3. Adaptive Estimation with Partially Overlapping Models

    PubMed Central

    Shin, Sunyoung; Fine, Jason; Liu, Yufeng

    2015-01-01

    In many problems, one has several models of interest that capture key parameters describing the distribution of the data. Partially overlapping models are taken as models in which at least one covariate effect is common to the models. A priori knowledge of such structure enables efficient estimation of all model parameters. However, in practice, this structure may be unknown. We propose adaptive composite M-estimation (ACME) for partially overlapping models using a composite loss function, which is a linear combination of loss functions defining the individual models. Penalization is applied to pairwise differences of parameters across models, resulting in data-driven identification of the overlap structure. Further penalization is imposed on the individual parameters, enabling sparse estimation in the regression setting. The recovery of the overlap structure enables more efficient parameter estimation. An oracle result is established. Simulation studies illustrate the advantages of ACME over existing methods that fit individual models separately or make strong a priori assumptions about the overlap structure. PMID:26917931
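    The composite loss plus the two penalties can be written down in a few lines. The sketch below uses squared-error losses and fixed penalty weights; `lam_fuse` and `lam_sparse` are invented names, and ACME's adaptive (data-dependent) weighting of the penalties is omitted.

```python
def acme_objective(beta1, beta2, data1, data2, lam_fuse=0.1, lam_sparse=0.05):
    """Composite loss for two overlapping linear models, plus penalties."""
    def sq_loss(beta, data):
        return sum((y - sum(b * xj for b, xj in zip(beta, x))) ** 2
                   for x, y in data) / len(data)
    composite = sq_loss(beta1, data1) + sq_loss(beta2, data2)
    # Fusion penalty on pairwise parameter differences: identifies overlap
    fuse = sum(abs(b1 - b2) for b1, b2 in zip(beta1, beta2))
    # Lasso-type penalty on individual parameters: enables sparsity
    sparse = sum(abs(b) for b in beta1 + beta2)
    return composite + lam_fuse * fuse + lam_sparse * sparse

data = [([1.0, 2.0], 5.0)]   # y = 1*x1 + 2*x2 exactly
print(acme_objective([1.0, 2.0], [1.0, 2.0], data, data))  # ~0.3 (penalties only)
```

When the two coefficient vectors coincide, the fusion term vanishes, so minimising this objective pulls genuinely shared effects together while letting distinct effects differ.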

  4. Effects of background adaptation on the pituitary and plasma concentrations of some pro-opiomelanocortin-related peptides in the rainbow trout (Salmo gairdneri).

    PubMed

    Rodrigues, K T; Sumpter, J P

    1984-06-01

    Radioimmunoassays for alpha-MSH, beta-MSH, ACTH and endorphin were used to measure pituitary concentrations of these peptides in rainbow trout during adaptation to black and white backgrounds. There was no difference in the pituitary content of any of these peptides between long-term black- and white-adapted trout. Plasma levels of alpha-MSH immunoreactivity were significantly higher in black-adapted trout than in white-adapted trout. Time-course studies revealed that although the body colour of trout showed an initial rapid adaptation to background colour, this was not paralleled by a corresponding change in plasma alpha-MSH levels. These only showed significant changes after 7 or more days of background adaptation, when melanophore recruitment or degradation occurred on black or white backgrounds respectively. Intravenous administration of mammalian alpha-MSH, salmon beta-MSH I or antibodies to these peptides did not affect short-term background adaptation. However, long-term administration of mammalian alpha-MSH via osmotic minipump maintained melanophore numbers in grey-adapted trout transferred to a white background, although this observation was based on only two fish. It is concluded that peptides derived from pro-opiomelanocortin do not appear to be involved in controlling physiological colour change but may be involved in regulating morphological colour change of the rainbow trout.

  5. Physic-mathematical modeling of atmospheric tides influence on background circulation and background temperature of lower Earth thermosphere.

    NASA Astrophysics Data System (ADS)

    Gavrilov, Anatoliy; Kapitsa, Andrey

    A nonstationary semiempirical model is presented of: 1) atmospheric thermal tides (ATT) in the middle Earth atmosphere driven by ozone and water vapor absorption; 2) tidal disturbances (TD) generated by global ozone anomalies. A geophysical phenomenon not known earlier is described: distant wave action (teleconnection) of the Antarctic ozone anomaly on the thermal tidal wind structure, background circulation and background temperature in the middle-latitude and polar lower thermosphere of the northern hemisphere, found by means of numerical experiments on the constructed model and confirmed by observations. Mean zonal numerical correcting models of background circulation and background temperature in the lower thermosphere due to semidiurnal and diurnal ATT dissipation at these heights are given. It is noted that background temperature corrections reach a maximum value of 40-50 degrees in the polar lower thermosphere of both hemispheres at 110-120 km height due to "heating" caused by the semidiurnal ATT, both during equinox and solstice.

  6. A model of incomplete chromatic adaptation for calculating corresponding colors

    SciTech Connect

    Fairchild, M.D.

    1990-01-01

    A new mathematical model of chromatic adaptation for calculating corresponding colors across changes in illumination is formulated and tested. This model consists of a modified von Kries transform that accounts for incomplete levels of adaptation. The model predicts that adaptation will be less complete as the saturation of the adapting stimulus increases and more complete as the luminance of the adapting stimulus increases. The model is tested with experimental results from two different studies and found to be significantly better at predicting corresponding colors than other proposed models. This model represents a first step toward the specification of color appearance across varying conditions. 30 refs., 3 figs., 1 tab.
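    A hedged sketch of the underlying idea: a von Kries scaling of cone responses, blended with the identity by an adaptation-completeness factor p. In Fairchild's model p itself depends on the luminance and saturation of the adapting stimulus; here it is left as a free parameter, and the numbers are invented.

```python
def incomplete_von_kries(lms, lms_white, p=1.0):
    """Scale cone responses toward the adapting white.

    p = 1.0 reproduces the classic (complete) von Kries transform;
    p = 0.0 applies no adaptation at all.
    """
    return [c / (p * w + (1.0 - p)) for c, w in zip(lms, lms_white)]

stimulus = [0.4, 0.5, 0.3]        # cone responses to a test colour
white = [0.8, 1.0, 0.6]           # cone responses to the adapting white
full = incomplete_von_kries(stimulus, white, p=1.0)   # -> [0.5, 0.5, 0.5]
none = incomplete_von_kries(stimulus, white, p=0.0)   # unchanged
```

Corresponding colors across two illuminants are then obtained by adapting under the first white and inverting the transform under the second, each with its own level of completeness.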

  7. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled; however, a steady-state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.

  8. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
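    The claim describes a filter-then-recalibrate loop. A minimal sketch follows, with the "learned scope of normal operation" reduced to a running min/max band and an invented range criterion standing in for the patent's data-quality tests.

```python
def recalibrate(model, new_data, quality_ok):
    """Filter new samples by a quality criterion, then widen the learned band."""
    good = [x for x in new_data if quality_ok(x)]   # reject bad-quality values
    lo, hi = model
    for x in good:
        lo, hi = min(lo, x), max(hi, x)             # adjust the learned scope
    return (lo, hi)

model = (10.0, 20.0)                        # previously learned normal band
data = [12.0, 25.0, -999.0, 18.0]           # -999.0 mimics a sensor dropout
model = recalibrate(model, data, lambda x: -100 < x < 100)
print(model)  # (10.0, 25.0): dropout rejected, scope adjusted
```

The point of the filtering step is that a recalibration driven by raw data would have absorbed the -999.0 dropout into the "normal" scope and ruined subsequent anomaly detection.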

  9. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  10. Complex interplay between neutral and adaptive evolution shaped differential genomic background and disease susceptibility along the Italian peninsula

    PubMed Central

    Sazzini, Marco; Gnecchi Ruscone, Guido Alberto; Giuliani, Cristina; Sarno, Stefania; Quagliariello, Andrea; De Fanti, Sara; Boattini, Alessio; Gentilini, Davide; Fiorito, Giovanni; Catanoso, Mariagrazia; Boiardi, Luigi; Croci, Stefania; Macchioni, Pierluigi; Mantovani, Vilma; Di Blasio, Anna Maria; Matullo, Giuseppe; Salvarani, Carlo; Franceschi, Claudio; Pettener, Davide; Garagnani, Paolo; Luiselli, Donata

    2016-01-01

    The Italian peninsula has long represented a natural hub for human migrations across the Mediterranean area, being involved in several prehistoric and historical population movements. Coupled with a patchy environmental landscape entailing different ecological/cultural selective pressures, this might have produced peculiar patterns of population structure and local adaptations responsible for the heterogeneous genomic background of present-day Italians. To disentangle this complex scenario, genome-wide data from 780 Italian individuals were generated and set into the context of European/Mediterranean genomic diversity by comparison with genotypes from 50 populations. To maximize the possibility of pinpointing functional genomic regions that have played adaptive roles during Italian natural history, our survey also included ~250,000 exomic markers and ~20,000 coding/regulatory variants with well-established clinical relevance. This enabled fine-grained dissection of Italian population structure through the identification of clusters of genetically homogeneous provinces and of genomic regions underlying their local adaptations. Description of such patterns disclosed crucial implications for understanding the differential susceptibility to some inflammatory/autoimmune disorders, coronary artery disease and type 2 diabetes of diverse Italian subpopulations, suggesting the evolutionary causes that made some of them particularly exposed to the metabolic and immune challenges imposed by the dietary and lifestyle shifts that involved western societies in the last centuries. PMID:27582244

  11. Complex interplay between neutral and adaptive evolution shaped differential genomic background and disease susceptibility along the Italian peninsula.

    PubMed

    Sazzini, Marco; Gnecchi Ruscone, Guido Alberto; Giuliani, Cristina; Sarno, Stefania; Quagliariello, Andrea; De Fanti, Sara; Boattini, Alessio; Gentilini, Davide; Fiorito, Giovanni; Catanoso, Mariagrazia; Boiardi, Luigi; Croci, Stefania; Macchioni, Pierluigi; Mantovani, Vilma; Di Blasio, Anna Maria; Matullo, Giuseppe; Salvarani, Carlo; Franceschi, Claudio; Pettener, Davide; Garagnani, Paolo; Luiselli, Donata

    2016-01-01

    The Italian peninsula has long represented a natural hub for human migrations across the Mediterranean area, being involved in several prehistoric and historical population movements. Coupled with a patchy environmental landscape entailing different ecological/cultural selective pressures, this might have produced peculiar patterns of population structure and local adaptations responsible for the heterogeneous genomic background of present-day Italians. To disentangle this complex scenario, genome-wide data from 780 Italian individuals were generated and set into the context of European/Mediterranean genomic diversity by comparison with genotypes from 50 populations. To maximize the possibility of pinpointing functional genomic regions that have played adaptive roles during Italian natural history, our survey also included ~250,000 exomic markers and ~20,000 coding/regulatory variants with well-established clinical relevance. This enabled fine-grained dissection of Italian population structure through the identification of clusters of genetically homogeneous provinces and of genomic regions underlying their local adaptations. Description of such patterns disclosed crucial implications for understanding the differential susceptibility to some inflammatory/autoimmune disorders, coronary artery disease and type 2 diabetes of diverse Italian subpopulations, suggesting the evolutionary causes that made some of them particularly exposed to the metabolic and immune challenges imposed by the dietary and lifestyle shifts that involved western societies in the last centuries. PMID:27582244

  12. Plant adaptive behaviour in hydrological models (Invited)

    NASA Astrophysics Data System (ADS)

    van der Ploeg, M. J.; Teuling, R.

    2013-12-01

    Models that will be able to cope with future precipitation and evaporation regimes need a solid base that describes the essence of the processes involved [1]. Micro-behaviour in the soil-vegetation-atmosphere system may have a large impact on patterns emerging at larger scales. A complicating factor in the micro-behaviour is the constant interaction between vegetation and geology, in which water plays a key role. The resilience of the coupled vegetation-soil system critically depends on its sensitivity to environmental changes. As a result of environmental changes vegetation may wither and die, but such environmental changes may also trigger gene adaptation. Constant exposure to environmental stresses, biotic or abiotic, influences plant physiology, gene adaptations, and flexibility in gene adaptation [2-6]. Gene expression as a result of different environmental conditions may profoundly impact drought responses across the same plant species. Differences in response to an environmental stress have consequences for the way species are currently treated in models (single plant to global scale). In particular, model parameters that control root water uptake and plant transpiration are generally assumed to be a property of the plant functional type. Assigning plant functional types does not allow for local plant adaptation to be reflected in the model parameters, nor does it allow for correlations that might exist between root parameters and soil type. Models potentially provide a means to link root water uptake and transport to large-scale processes (e.g. Rosnay and Polcher 1998, Feddes et al. 2001, Jung 2010), especially when powered with an integrated hydrological, ecological and physiological base. We explore the experimental evidence from natural vegetation to formulate possible alternative modeling concepts. [1] Seibert, J. 2000. Multi-criteria calibration of a conceptual runoff model using a genetic algorithm. Hydrology and Earth System Sciences 4(2): 215

  13. The Adaptive Calibration Model of stress responsivity

    PubMed Central

    Ellis, Bruce J.; Shirtcliff, Elizabeth A.

    2010-01-01

    This paper presents the Adaptive Calibration Model (ACM), an evolutionary-developmental theory of individual differences in the functioning of the stress response system. The stress response system has three main biological functions: (1) to coordinate the organism’s allostatic response to physical and psychosocial challenges; (2) to encode and filter information about the organism’s social and physical environment, mediating the organism’s openness to environmental inputs; and (3) to regulate the organism’s physiology and behavior in a broad range of fitness-relevant areas including defensive behaviors, competitive risk-taking, learning, attachment, affiliation and reproductive functioning. The information encoded by the system during development feeds back on the long-term calibration of the system itself, resulting in adaptive patterns of responsivity and individual differences in behavior. Drawing on evolutionary life history theory, we build a model of the development of stress responsivity across life stages, describe four prototypical responsivity patterns, and discuss the emergence and meaning of sex differences. The ACM extends the theory of biological sensitivity to context (BSC) and provides an integrative framework for future research in the field. PMID:21145350

  14. European upper mantle tomography: adaptively parameterized models

    NASA Astrophysics Data System (ADS)

    Schäfer, J.; Boschi, L.

    2009-04-01

    We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e. the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage in using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, parameterization is everywhere optimal, both in terms of its computational cost and of model resolution. This is especially important for data sets with inhomogeneous data coverage, as is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35 s to 150 s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of resolution higher than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625°, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810,000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. We validate our implementation of adaptive

  15. DANA: distributed numerical and adaptive modelling framework.

    PubMed

    Rougier, Nicolas P; Fix, Jérémy

    2012-01-01

    DANA is a python framework (http://dana.loria.fr) whose computational paradigm is grounded on the notion of a unit, which is essentially a set of time-dependent values varying under the influence of other units via adaptive weighted connections. The evolution of a unit's values is defined by a set of differential equations expressed in standard mathematical notation, which greatly eases their definition. The units are organized into groups that form a model. Each unit can be connected to any other unit (including itself) using a weighted connection. The DANA framework offers a set of core objects needed to design and run such models. The modeler only has to define the equations of a unit as well as the equations governing the training of the connections. The simulation is completely transparent to the modeler and is handled by DANA. This allows DANA to be used for a wide range of numerical and distributed models as long as they fit the proposed framework (e.g. cellular automata, reaction-diffusion systems, decentralized neural networks, recurrent neural networks, kernel-based image processing, etc.).
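    A toy version of the core idea (this mimics the concept, not DANA's actual API): unit values evolving under a differential equation over weighted connections, integrated with explicit Euler steps.

```python
def step(values, weights, ext, tau=1.0, dt=0.1):
    """One Euler step of dV_i/dt = (-V_i + sum_j W_ij V_j + I_i) / tau."""
    n = len(values)
    drive = [ext[i] + sum(weights[i][j] * values[j] for j in range(n))
             for i in range(n)]
    return [v + dt * (-v + d) / tau for v, d in zip(values, drive)]

# Two units: unit 0 is driven externally and excites unit 1.
W = [[0.0, 0.0],
     [0.5, 0.0]]
I = [1.0, 0.0]
v = [0.0, 0.0]
for _ in range(200):
    v = step(v, W, I)
print(v)  # approaches the fixed point [1.0, 0.5]
```

In DANA itself the modeler writes the differential equation as a string in mathematical notation and the framework handles the integration; the loop above just makes the "values varying under weighted influence" semantics concrete.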

  16. Genetic background influences adaptation to cardiac hypertrophy and Ca(2+) handling gene expression.

    PubMed

    Waters, Steve B; Diak, Douglass M; Zuckermann, Matthew; Goldspink, Paul H; Leoni, Lara; Roman, Brian B

    2013-01-01

    Genetic variability has a profound effect on the development of cardiac hypertrophy in response to stress. Consequently, a variety of inbred mouse strains with known genetic profiles may provide powerful models for studying the response to cardiovascular stress. To explore this approach we looked at male C57BL/6J and 129/SvJ mice. Hemodynamic analyses of left ventricular pressures (LVPs) indicated significant differences in 129/SvJ and C57BL/6J mice that implied altered Ca(2+) handling. Specifically, 129/SvJ mice demonstrated reduced rates of relaxation and insensitivity to dobutamine (Db). We hypothesized that altered expression of genes controlling the influx and efflux of Ca(2+) from the sarcoplasmic reticulum (SR) was responsible and investigated the expression of several genes involved in maintaining the intracellular and sarcoluminal Ca(2+) concentration using quantitative real-time PCR analyses (qRT-PCR). We observed significant differences in baseline gene expression as well as different responses in expression to isoproterenol (ISO) challenge. In untreated control animals, 129/SvJ mice expressed 1.68× more ryanodine receptor 2 (Ryr2) mRNA than C57BL/6J mice but only 0.37× as much calsequestrin 2 (Casq2). After treatment with ISO, sarco(endo)plasmic reticulum Ca(2+)-ATPase (Serca2) expression was reduced nearly two-fold in 129/SvJ while expression in C57BL/6J was stable. Interestingly, β(1)-adrenergic receptor (Adrb1) expression was lower in 129/SvJ compared to C57BL/6J at baseline and lower in both strains after treatment. Metabolically, the brain isoform of creatine kinase (Ckb) was up-regulated in response to ISO in C57BL/6J but not in 129/SvJ. These data suggest that the two strains of mice regulate Ca(2+) homeostasis via different mechanisms and may be useful in developing personalized therapies in human patients.

  17. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.
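    The idea of a generic intra-object model adapted per object can be cartooned in one dimension: estimate a shared slope from mean-centred within-object data (so inter-object offsets cancel), then adapt only an object-specific offset from a few reference measurements on each new object. All numbers are invented; the paper's setting is multivariate NIR spectra, not a scalar predictor.

```python
def fit_shared_slope(objects):
    """Pool mean-centred (x, y) pairs so inter-object offsets cancel out."""
    num = den = 0.0
    for pts in objects:
        mx = sum(x for x, _ in pts) / len(pts)
        my = sum(y for _, y in pts) / len(pts)
        for x, y in pts:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

def adapt_offset(slope, ref_pts):
    """Per-object calibration: fit only the intercept from reference points."""
    return sum(y - slope * x for x, y in ref_pts) / len(ref_pts)

# Two calibration objects share slope 2 but have different offsets.
objects = [[(0, 5), (1, 7), (2, 9)], [(0, -1), (1, 1), (2, 3)]]
slope = fit_shared_slope(objects)            # -> 2.0
offset_new = adapt_offset(slope, [(1, 12)])  # one reference point on a new object
print(slope, offset_new)  # 2.0 10.0
```

The payoff is that the hard-to-model inter-object variation never has to be captured by the global model; each new object contributes only the handful of reference measurements needed to set its own offset.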

  18. Patterns of coral bleaching: Modeling the adaptive bleaching hypothesis

    USGS Publications Warehouse

    Ware, J.R.; Fautin, D.G.; Buddemeier, R.W.

    1996-01-01

    Bleaching - the loss of symbiotic dinoflagellates (zooxanthellae) from animals normally possessing them - can be induced by a variety of stresses, of which temperature has received the most attention. Bleaching is generally considered detrimental, but Buddemeier and Fautin have proposed that bleaching is also adaptive, providing an opportunity for recombining hosts with alternative algal types to form symbioses that might be better adapted to altered circumstances. Our mathematical model of this "adaptive bleaching hypothesis" provides insight into how animal-algae symbioses might react under various circumstances. It emulates many aspects of the coral bleaching phenomenon including: corals bleaching in response to a temperature only slightly greater than their average local maximum temperature; background bleaching; bleaching events being followed by bleaching of lesser magnitude in the subsequent one to several years; higher thermal tolerance of corals subject to environmental variability compared with those living under more constant conditions; patchiness in bleaching; and bleaching at temperatures that had not previously resulted in bleaching. © 1996 Elsevier Science B.V. All rights reserved.

  19. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well-known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
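For context, the baseline scheme this paper improves on can be sketched as a classic gradient-based (MIT-rule) adaptive law for a scalar plant. This is a hedged illustration of standard model-reference adaptive control, not the paper's optimal control modification; all plant numbers and the function name are invented:

```python
# Hedged sketch: MIT-rule model-reference adaptation for a scalar plant.
# The adaptive gain theta is driven by gradient descent on the squared
# tracking error, with the reference-model state as sensitivity surrogate.
def simulate_mrac(gamma=5.0, dt=1e-3, t_end=20.0):
    a, b = -1.0, 2.0              # "unknown" plant: x' = a*x + b*u
    am, bm = -4.0, 4.0            # reference model: xm' = am*xm + bm*r
    x, xm, theta = 0.0, 0.0, 0.0  # theta: adaptive feedforward gain
    r = 1.0                       # step reference
    for _ in range(int(t_end / dt)):
        u = theta * r
        e = x - xm                # tracking error to be driven to zero
        theta -= gamma * e * xm * dt   # gradient update on e**2
        x += (a * x + b * u) * dt      # forward-Euler integration
        xm += (am * xm + bm * r) * dt
    return x, xm, theta

x, xm, theta = simulate_mrac()    # theta settles near the ideal gain 0.5
```

Raising `gamma` speeds convergence but induces exactly the oscillatory high-gain behavior the abstract warns about; the paper's contribution is a modification that damps this effect.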

  20. A novel approach to model EPIC variable background

    NASA Astrophysics Data System (ADS)

    Marelli, M.; De Luca, A.; Salvetti, D.; Belfiore, A.; Pizzocaro, D.

    2016-06-01

    In the past years XMM-Newton revolutionized our way to look at the X-ray sky. With more than 200 Ms of exposure, it allowed for numerous discoveries in every field of astronomy. Unfortunately, about 35% of the observing time is badly affected by soft proton flares, with the background increasing by orders of magnitude and hampering any classical analysis of field sources. One of the main aims of the EXTraS ("Exploring the X-ray Transient and variable Sky") project is to characterise the variability of XMM-Newton sources within each single observation, including periods of high background. This posed severe challenges. I will describe a novel approach that we implemented within the EXTraS project to produce background-subtracted light curves, which allows us to treat the case of very faint sources and very large proton flares. EXTraS light curves will soon be released to the community, together with new tools that will allow the user to reproduce EXTraS results, as well as to extend a similar analysis to future data. Results of this work (including an unprecedented characterisation of the soft proton phenomenon and instrument response) will also serve as a reference for future missions and will be particularly relevant for the Athena observatory.

  1. Image Discrimination Models for Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.

    2000-01-01

    This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendants and extensions. The application of image discrimination models to target detection will be described and results reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates and methods for elucidating those templates.

  2. A Roy model study of adapting to being HIV positive.

    PubMed

    Perrett, Stephanie E; Biley, Francis C

    2013-10-01

    Roy's adaptation model outlines a generic process of adaptation useful to nurses in any situation where a patient is facing change. To advance nursing practice, nursing theories and frameworks must be constantly tested and developed through research. This article describes how the results of a qualitative grounded theory study have been used to test components of the Roy adaptation model. A framework for "negotiating uncertainty" was the result of a grounded theory study exploring adaptation to HIV. This framework has been compared to the Roy adaptation model, strengthening concepts such as focal and contextual stimuli, Roy's definition of adaptation and her description of adaptive modes, while suggesting areas for further development including the role of perception. The comparison described in this article demonstrates the usefulness of qualitative research in developing nursing models, specifically highlighting opportunities to continue refining Roy's work. PMID:24085671

  3. Computational modeling of multispectral remote sensing systems: Background investigations

    NASA Technical Reports Server (NTRS)

    Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic process of remote sensing has been developed based upon the results of the investigations presented. The model is used in studying concepts for improving worldwide environment and resource monitoring. A review of various atmospheric radiative transfer models is presented as well as details of the selected model. Functional forms for spectral diffuse reflectance with variability introduced are also presented. A cloud detection algorithm and the stochastic nature of remote sensing data with its implications are considered.

  4. An Online Adaptive Model for Location Prediction

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, Theodoros; Anagnostopoulos, Christos; Hadjiefthymiades, Stathes

    Context-awareness is viewed as one of the most important aspects in the emerging pervasive computing paradigm. Mobile context-aware applications are required to sense and react to changing environment conditions. Such applications, usually, need to recognize, classify and predict context in order to act efficiently, beforehand, for the benefit of the user. In this paper, we propose a mobility prediction model, which deals with context representation and location prediction of moving users. Machine Learning (ML) techniques are used for trajectory classification. Spatial and temporal on-line clustering is adopted. We rely on Adaptive Resonance Theory (ART) for location prediction. Location prediction is treated as a context classification problem. We introduce a novel classifier that applies a Hausdorff-like distance over the extracted trajectories handling location prediction. Since our approach is time-sensitive, the Hausdorff distance is considered more advantageous than a simple Euclidean norm. A learning method is presented and evaluated. We compare ART with Offline kMeans and Online kMeans algorithms. Our findings are very promising for the use of the proposed model in mobile context-aware applications.
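The distance underlying the classifier above can be illustrated with the plain symmetric Hausdorff distance between two trajectories given as point sequences. This is a hedged sketch: the paper uses a time-sensitive "Hausdorff-like" variant whose exact form is not given here, and the trajectories are invented:

```python
import math

# Directed Hausdorff distance: for each point of A, find its nearest point
# in B, then take the worst case; the symmetric version takes the maximum
# of the two directions.
def directed_hausdorff(A, B):
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

t1 = [(0, 0), (1, 0), (2, 0)]   # two parallel toy trajectories
t2 = [(0, 1), (1, 1), (2, 1)]
d = hausdorff(t1, t2)           # every point is 1 unit from the other path
```

Unlike a pointwise Euclidean norm, this measure compares whole point sets, which is why it tolerates trajectories sampled at different times or rates.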

  5. Transitional Jobs: Background, Program Models, and Evaluation Evidence

    ERIC Educational Resources Information Center

    Bloom, Dan

    2010-01-01

    The budget for the U.S. Department of Labor for Fiscal Year 2010 includes a total of $45 million to support and study transitional jobs. This paper describes the origins of the transitional jobs models that are operating today, reviews the evidence on the effectiveness of this approach and other subsidized employment models, and offers some…

  6. Photovoltaic market analysis program: Background, model development, applications and extensions

    NASA Astrophysics Data System (ADS)

    Lilien, G. L.; Fuller, F. H.

    1981-04-01

    Tools and procedures to help guide government spending decisions associated with stimulating photovoltaic market penetration were developed. The program has three main components: (1) theoretical analysis aimed at understanding qualitatively what general types of policies are likely to be most cost effective in stimulating PV market penetration; (2) operational model development (PV1), providing a user-oriented tool to study quantitatively the relative effectiveness of specific government spending options; and (3) field measurements, aimed at providing objective estimates of the parameters used in the diffusion model used in PV1. Existing models of solar technology diffusion are reviewed and the structure of the PV1 model is described. Theoretical results on optimal strategies for spending federal market development and subsidy funds are reviewed. The validity of these results is checked by comparing them with PV1 projections of penetration and cost forecasts for 15 government policy strategies which are simulated on the PV1 model.

  7. Adapting the ALP Model for Student and Institutional Needs

    ERIC Educational Resources Information Center

    Sides, Meredith

    2016-01-01

    With the increasing adoption of accelerated models of learning comes the necessary step of adapting these models to fit the unique needs of the student population at each individual institution. One such college adapted the ALP (Accelerated Learning Program) model and made specific changes to the target population, structure and scheduling, and…

  8. A Sharing Item Response Theory Model for Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Segall, Daniel O.

    2004-01-01

    A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…

  9. A Comparison between High-Energy Radiation Background Models and SPENVIS Trapped-Particle Radiation Models

    NASA Technical Reports Server (NTRS)

    Krizmanic, John F.

    2013-01-01

    We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and cosmic-ray experiments, in particular for the International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow for useful operation of the experiment. One area we are investigating is how well the SPENVIS flux predictions at higher energies match the fluxes at the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.

  10. Adaptive h -refinement for reduced-order models: ADAPTIVE h -refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
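The splitting step described above can be sketched in a few lines: one basis vector is replaced by children with disjoint support, given a partition of the state indices. This is a hedged illustration; offline, the paper builds the partition by recursive k-means clustering of snapshot data, whereas here it is hard-coded:

```python
import numpy as np

# Split one reduced-basis vector into child vectors with disjoint support.
# Each child keeps the parent's entries on its own index set and is zero
# elsewhere, so the refined space always contains the parent vector.
def split_basis_vector(phi, partition):
    children = []
    for idx in partition:
        ii = sorted(idx)
        child = np.zeros_like(phi)
        child[ii] = phi[ii]
        children.append(child)
    return np.stack(children)

phi = np.array([1.0, 2.0, 3.0, 4.0])   # parent basis vector
partition = [{0, 1}, {2, 3}]           # disjoint index sets covering all dofs
children = split_basis_vector(phi, partition)
```

Because the children sum back to the parent, each split only enlarges the subspace; repeated splitting down to singleton index sets recovers the full-order space, matching the paper's error-tolerance argument.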

  11. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
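The FAST idea above, an ensemble sampled from a moving window along one trajectory, can be sketched as follows. This is a hedged toy illustration (invented function names, random toy trajectory), not the GMAO implementation:

```python
import numpy as np

# Treat a moving window of states along a single model trajectory as an
# ensemble, and estimate a background-error covariance from its anomalies.
def window_covariance(trajectory, center, half_width):
    window = trajectory[center - half_width : center + half_width + 1]
    anomalies = window - window.mean(axis=0)          # remove window mean
    return anomalies.T @ anomalies / (len(window) - 1)

rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(100, 3)), axis=0)   # toy 3-variable trajectory
B = window_covariance(traj, center=50, half_width=5)  # 3x3 covariance estimate
```

Since the window slides with the model state, the resulting covariance is flow dependent, which is the property shared with true ensemble methods, at the cost of only a single model integration.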

  12. Radiation Background and Attenuation Model Validation and Development

    SciTech Connect

    Peplow, Douglas E.; Santiago, Claudio P.

    2015-08-05

    This report describes the initial results of a study being conducted as part of the Urban Search Planning Tool project. The study is comparing the Urban Scene Simulator (USS), a one-dimensional (1D) radiation transport model developed at LLNL, with the three-dimensional (3D) radiation transport model from ORNL using the MCNP, SCALE/ORIGEN and SCALE/MAVRIC simulation codes. In this study, we have analyzed the differences between the two approaches at every step, from source term representation, to estimating flux and detector count rates at a fixed distance from a simple surface (slab), and at points throughout more complex 3D scenes.

  13. Modeling Background Radiation in our Environment Using Geochemical Data

    SciTech Connect

    Malchow, Russell L.; Marsac, Kara; Burnley, Pamela; Hausrath, Elisabeth; Haber, Daniel; Adcock, Christopher

    2015-02-01

    Radiation occurs naturally in bedrock and soil. Gamma rays are released from the decay of the radioactive isotopes K, U, and Th. Gamma rays observed at the surface come from the first 30 cm of rock and soil. The energy of gamma rays is specific to each isotope, allowing identification. For this research, data was collected from national databases, private companies, scientific literature, and field work. Data points were then evaluated for self-consistency. A model was created by converting concentrations of U, K, and Th for each rock and soil unit into a ground exposure rate using the following equation: D = 1.32K + 0.548U + 0.272Th. The first objective of this research was to compare the original Aerial Measurement System gamma ray survey to results produced by the model. The second objective was to improve the method and learn the constraints of the model. Future work will include sample data analysis from field work with a goal of improving the geochemical model.
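The conversion quoted in this abstract is a simple weighted sum and can be sketched directly. The coefficients are taken from the abstract; the units are an assumption here (commonly K in % and U, Th in ppm equivalent, giving an exposure rate in µR/h), and the example concentrations are illustrative only:

```python
# Ground exposure rate from radioelement concentrations,
# D = 1.32*K + 0.548*U + 0.272*Th (coefficients as given in the abstract).
def exposure_rate(k, u, th):
    return 1.32 * k + 0.548 * u + 0.272 * th

# Illustrative upper-crust-like concentrations (assumed values).
d = exposure_rate(k=2.0, u=2.7, th=10.5)
```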

  14. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  15. Adapting of the Background-Oriented Schlieren (BOS) Technique in the Characterization of the Flow Regimes in Thermal Spraying Processes

    NASA Astrophysics Data System (ADS)

    Tillmann, W.; Abdulgader, M.; Rademacher, H. G.; Anjami, N.; Hagen, L.

    2014-01-01

    In the thermal spraying technique, changes in the in-flight particle velocities are considered to be only a function of the drag forces caused by the dominating flow regimes in the spray jet. Therefore, a correct understanding of the aerodynamic phenomena occurring at the nozzle outlet and at the substrate interface is an important task in the targeted improvement of the nozzle and air-cap design as well as of the spraying process as a whole. The presented work deals with adapting an innovative flow-characterization technique called background-oriented Schlieren. The flow regimes in twin wire arc spraying (TWAS) and high velocity oxygen fuel (HVOF) spraying were analyzed with this technique. In the TWAS process, the interference of the atomization gas flow with the intersecting wires causes a deformation of the jet shape. It also leads to areas with different aerodynamic forces. The configurations of the outlet air-caps in TWAS predominantly affect the outlet flow characteristics. The ratio between fuel and oxygen determines the dominating flow regimes in the HVOF spraying jet. An enhanced understanding of the aerodynamics at the outlet and at the substrate interface could lead to a targeted improvement in thermal spraying processes.

  16. Background model systematics for the Fermi GeV excess

    NASA Astrophysics Data System (ADS)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power-law with break energy Ebreak = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b̄b final states a dark matter mass of mχ = 49 +6.4/−5.4 GeV.

  17. Background model systematics for the Fermi GeV excess

    SciTech Connect

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power-law with break energy E(break) = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b̄b final states a dark matter mass of m(χ) = 49 +6.4/−5.4 GeV.

  18. Modeling Adaptation as a Flow and Stock Decision with Mitigation

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-liv...

  19. Modeling Adaptation as a Flow and Stock Decision with Mitigation

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-live...

  20. Modeling Two Types of Adaptation to Climate Change

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-live...

  1. Gradient-based adaptation of continuous dynamic model structures

    NASA Astrophysics Data System (ADS)

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.
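The central measure above, the model's parameter sensitivity, can be illustrated numerically even though the paper evaluates it symbolically. This hedged toy uses central finite differences on an invented two-parameter model; the function names and model form are assumptions:

```python
# Sensitivity of a model output with respect to one parameter, estimated by
# central finite differences; the paper obtains such sensitivities in
# symbolic form and uses them to quantify structural changes to the model.
def sensitivity(f, theta, i, h=1e-6):
    up, dn = list(theta), list(theta)
    up[i] += h
    dn[i] -= h
    return (f(up) - f(dn)) / (2.0 * h)

def toy_model(th):
    # Toy model output at a fixed input: y = 3*theta0 + theta1^2.
    return 3.0 * th[0] + th[1] ** 2

s0 = sensitivity(toy_model, [1.0, 2.0], 0)  # dy/dtheta0, analytically 3
s1 = sensitivity(toy_model, [1.0, 2.0], 1)  # dy/dtheta1, analytically 4
```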

  2. Identifying traits for genotypic adaptation using crop models.

    PubMed

    Ramirez-Villegas, Julian; Watson, James; Challinor, Andrew J

    2015-06-01

    Genotypic adaptation involves the incorporation of novel traits in crop varieties so as to enhance food productivity and stability and is expected to be one of the most important adaptation strategies to future climate change. Simulation modelling can provide the basis for evaluating the biophysical potential of crop traits for genotypic adaptation. This review focuses on the use of models for assessing the potential benefits of genotypic adaptation as a response strategy to projected climate change impacts. Some key crop responses to the environment, as well as the role of models and model ensembles for assessing impacts and adaptation, are first reviewed. Next, the review describes how crop-climate models can help focus the development of future-adapted crop germplasm in breeding programmes. While recently published modelling studies have demonstrated the potential of genotypic adaptation strategies and ideotype design, it is argued that, for model-based studies of genotypic adaptation to be used in crop breeding, it is critical that modelled traits are better grounded in genetic and physiological knowledge. To this aim, two main goals need to be pursued in future studies: (i) a better understanding of plant processes that limit productivity under future climate change; and (ii) a coupling between genetic and crop growth models, perhaps at the expense of the number of traits analysed. Importantly, the latter may imply additional complexity (and likely uncertainty) in crop modelling studies. Hence, appropriately constraining processes and parameters in models and a shift from simply quantifying uncertainty to actually quantifying robustness towards modelling choices are two key aspects that need to be included into future crop model-based analyses of genotypic adaptation.

  3. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking scheme based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from the wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance masking is then combined with contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the perceptual model can better suit the differing characteristics of various images. The weighting function is chosen such that fidelity, imperceptibility and robustness are preserved without making any perceptual difference to the image quality.

  4. Consensus time and conformity in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Rogers, Tim; Gross, Thilo

    2013-09-01

    The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.

  5. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    SciTech Connect

    Archer, Daniel E.; Hornback, Donald Eric; Johnson, Jeffrey O.; Nicholson, Andrew D.; Patton, Bruce W.; Peplow, Douglas E.; Miller, Thomas Martin; Ayaz-Maierhafer, Birsen

    2015-01-01

    This report summarizes the findings of a two-year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first-year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second-year effort focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  6. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
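The online background model described above can be illustrated in miniature. This hedged sketch reduces the paper's per-pixel Gaussian mixture to a single Gaussian per pixel (running mean and variance with learning rate alpha) for brevity; the class name, thresholds, and initial variance are all invented for illustration:

```python
import numpy as np

# Single-Gaussian-per-pixel background model, updated online: a pixel is
# foreground when its squared deviation exceeds k^2 times the modeled
# variance, and only background pixels update the model so that moving
# objects are not absorbed into it.
class RunningGaussianBackground:
    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(self.mean.shape, 25.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        foreground = d2 > (self.k ** 2) * self.var
        bg = ~foreground
        self.mean[bg] += self.alpha * (frame - self.mean)[bg]
        self.var[bg] += self.alpha * (d2 - self.var)[bg]
        return foreground

bg_model = RunningGaussianBackground(np.zeros((4, 4)))
mask = bg_model.apply(np.full((4, 4), 100.0))  # bright object: all foreground
```

A full GMM version keeps several (mean, variance, weight) triples per pixel and matches each frame against all of them, which is what lets the paper's tracker adapt to complex, multimodal scenes.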

  7. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  8. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505
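
    The GMM background model mentioned here can be sketched with a classic per-pixel online mixture update in the spirit of Stauffer-Grimson. This is a minimal illustrative sketch, not the authors' implementation; the function name, learning rate, match threshold, and the variance assigned to new components are all assumptions.

```python
import numpy as np

def update_gmm_pixel(means, variances, weights, x, lr=0.05, thresh=2.5):
    """One online update of a per-pixel Gaussian mixture background model.
    Returns True if intensity x is explained by the background mixture."""
    dist = np.abs(x - means) / np.sqrt(variances)   # normalized distance per component
    if (dist < thresh).any():                       # x matches an existing component
        k = int(np.argmin(dist))                    # update the closest component
        weights *= (1.0 - lr)
        weights[k] += lr
        rho = lr / weights[k]
        delta = x - means[k]
        means[k] += rho * delta
        variances[k] += rho * (delta * delta - variances[k])
        is_background = True
    else:                                           # no match: x starts a new mode,
        k = int(np.argmin(weights))                 # replacing the weakest component
        means[k], variances[k], weights[k] = x, 30.0 ** 2, lr
        is_background = False
    weights /= weights.sum()                        # keep the weights a distribution
    return is_background
```

    A sample near an existing mode reinforces that mode and is labeled background; an outlying sample seeds a new component and is labeled foreground, which is one way such a model can be "initialized offline and updated online" at the pixel level.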

  9. Modeling Family Adaptation to Fragile X Syndrome

    ERIC Educational Resources Information Center

    Raspa, Melissa; Bailey, Donald, Jr.; Bann, Carla; Bishop, Ellen

    2014-01-01

    Using data from a survey of 1,099 families who have a child with Fragile X syndrome, we examined adaptation across 7 dimensions of family life: parenting knowledge, social support, social life, financial impact, well-being, quality of life, and overall impact. Results illustrate that although families report a high quality of life, they struggle…

  10. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2008-01-01

    This report presents a pilot study of an integration of particle swarm algorithm, social knowledge adaptation and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation for the dynamic environment and to provide insight and understanding of social group knowledge discovery and strategic searching. A new adaptive environment model, which dynamically reacts to the group collective searching behaviors, is proposed in this research. The simulations in this research indicate that effective communication between groups is not necessary for self-organized groups as a whole to achieve efficient collective searching in the adaptive environment.
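
    The particle swarm metaphor above can be made concrete with the canonical PSO update, where each particle blends its own memory (personal best) with the group's shared knowledge (global best). A minimal sketch under assumed coefficients (inertia w, attractions c1, c2), not the report's multi-agent model:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical particle swarm optimization of f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))    # particle positions
    v = np.zeros((n_particles, dim))                  # particle velocities
    pbest = x.copy()                                  # personal best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()        # shared group knowledge
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# collective search on a simple quadratic landscape
best, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=2)
```

    The global-best term is the "effective communication" channel; removing or localizing it (neighborhood topologies) is how swarm variants study searching with limited communication.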

  12. Adapted Lethality: What We Can Learn from Guinea Pig-Adapted Ebola Virus Infection Model

    PubMed Central

    Cheresiz, S. V.; Semenova, E. A.; Chepurnov, A. A.

    2016-01-01

    Establishment of small animal models of Ebola virus (EBOV) infection is important both for the study of genetic determinants involved in the complex pathology of EBOV disease and for the preliminary screening of antivirals, production of therapeutic heterologic immunoglobulins, and experimental vaccine development. Since the wild-type EBOV is avirulent in rodents, the adaptation series of passages in these animals are required for the virulence/lethality to emerge in these models. Here, we provide an overview of our several adaptation series in guinea pigs, which resulted in the establishment of guinea pig-adapted EBOV (GPA-EBOV) variants different in their characteristics, while uniformly lethal for the infected animals, and compare the virologic, genetic, pathomorphologic, and immunologic findings with those obtained in the adaptation experiments of the other research groups. PMID:26989413

  13. Context aware adaptive security service model

    NASA Astrophysics Data System (ADS)

    Tunia, Marcin A.

    2015-09-01

    Modern systems and devices are usually protected against various threats to digital data processing. The protection mechanisms consume resources, which are either highly limited or intensively utilized by many entities, so optimizing their usage is advantageous: resources saved through optimization may be utilized by other mechanisms or may last longer. Protection is usually required to provide a specific quality and attack resistance. By interpreting the context of business services - both the users and the services themselves - it is possible to adapt security service parameters to counter the threats associated with the current situation. This approach optimizes resource usage while maintaining a sufficient security level. This paper presents the architecture of an adaptive security service that is context-aware and takes the quality of context data into account.

  14. Domain Adaptation of Deformable Part-Based Models.

    PubMed

    Xu, Jiaolong; Ramos, Sebastian; Vázquez, David; López, Antonio M

    2014-12-01

    The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors. PMID:26353145
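
    The core idea of adapting a pre-learned detector without revisiting source-domain data can be sketched as a prior-regularized linear SVM: stay close to the source weights while fitting a few target examples. This is a deliberate simplification of A-SSVM (plain hinge loss instead of the structured formulation, subgradient descent instead of a QP solver); all names, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def adapt_svm(w_src, X_tgt, y_tgt, C=1.0, lr=0.01, epochs=300):
    """Minimize 0.5*||w - w_src||^2 + C * sum(hinge(y * w.x)) by subgradient
    descent: the regularizer anchors the adapted classifier to the source one."""
    w = w_src.copy()
    for _ in range(epochs):
        margins = y_tgt * (X_tgt @ w)
        active = margins < 1.0                 # target examples violating the margin
        grad = (w - w_src) - C * (y_tgt[active, None] * X_tgt[active]).sum(axis=0)
        w -= lr * grad
    return w

# illustrative target domain whose labels depend on a direction the
# source detector w_src only partially captures
rng = np.random.default_rng(1)
X_t = rng.normal(size=(100, 2))
y_t = np.sign(X_t @ np.array([0.3, 1.0]))
w_src = np.array([1.0, 0.0])
w_adapted = adapt_svm(w_src, X_t, y_t)
```

    Raising C lets the few target-domain examples pull the detector further from the source; lowering it keeps the adaptation conservative.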

  15. Fantastic animals as an experimental model to teach animal adaptation

    PubMed Central

    Guidetti, Roberto; Baraldi, Laura; Calzolai, Caterina; Pini, Lorenza; Veronesi, Paola; Pederzoli, Aurora

    2007-01-01

    Background Science curricula and teachers should emphasize evolution in a manner commensurate with its importance as a unifying concept in science. The concept of adaptation represents a first step to understand the results of natural selection. We settled an experimental project of alternative didactic to improve knowledge of organism adaptation. Students were involved and stimulated in learning processes by creative activities. To set adaptation in a historic frame, fossil records as evidence of past life and evolution were considered. Results The experimental project is schematized in nine phases: review of previous knowledge; lesson on fossils; lesson on fantastic animals; planning an imaginary world; creation of an imaginary animal; revision of the imaginary animals; adaptations of real animals; adaptations of fossil animals; and public exposition. A rubric to evaluate the student's performances is reported. The project involved professors and students of the University of Modena and Reggio Emilia and of the "G. Marconi" Secondary School of First Degree (Modena, Italy). Conclusion The educational objectives of the project are in line with the National Indications of the Italian Ministry of Public Instruction: knowledge of the characteristics of living beings, the meanings of the term "adaptation", the meaning of fossils, the definition of ecosystem, and the particularity of the different biomes. At the end of the project, students will be able to grasp particular adaptations of real organisms and to deduce information about the environment in which the organism evolved. This project allows students to review previous knowledge and to form their personalities. PMID:17767729

  16. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
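
    The FAST idea - sampling an ensemble from a moving window along a single model trajectory - reduces, in its simplest form, to a windowed sample covariance. A minimal sketch (the function name and toy state are assumptions; the operational method applies this within the GEOS-5 assimilation machinery):

```python
import numpy as np

def fast_background_covariance(trajectory, window):
    """Flow-dependent background error covariance from one model trajectory:
    the last `window` states act as a pseudo-ensemble (FAST-style).

    trajectory: (n_times, n_state) array of model states along a single run."""
    ens = trajectory[-window:]                      # pseudo-ensemble from the window
    anomalies = ens - ens.mean(axis=0)              # deviations from the window mean
    return anomalies.T @ anomalies / (window - 1)   # unbiased sample covariance
```

    The cross terms of the returned matrix are what let an observed variable update an unobserved one, the property highlighted in the abstract.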

  17. Post-Revolution Egypt: The Roy Adaptation Model in Community.

    PubMed

    Buckner, Britton S; Buckner, Ellen B

    2015-10-01

    The 2011 Arab Spring swept across the Middle East creating profound instability in Egypt, a country already challenged with poverty and internal pressures. To respond to this crisis, Catholic Relief Services led a community-based program called "Egypt Works" that included community improvement projects and psychosocial support. Following implementation, program outcomes were analyzed using the middle-range theory of adaptation to situational life events, based on the Roy adaptation model. The comprehensive, community-based approach facilitated adaptation, serving as a model for applying theory in post-crisis environments. PMID:26396214

  18. Adaptive Input Reconstruction with Application to Model Refinement, State Estimation, and Adaptive Control

    NASA Astrophysics Data System (ADS)

    D'Amato, Anthony M.

    Input reconstruction is the process of using the output of a system to estimate its input. In some cases, input reconstruction can be accomplished by determining the output of the inverse of a model of the system whose input is the output of the original system. Inversion, however, requires an exact and fully known analytical model, and is limited by instabilities arising from nonminimum-phase zeros. The main contribution of this work is a novel technique for input reconstruction that does not require model inversion. This technique is based on a retrospective cost, which requires a limited number of Markov parameters. Retrospective cost input reconstruction (RCIR) does not require knowledge of nonminimum-phase zero locations or an analytical model of the system. RCIR provides a technique that can be used for model refinement, state estimation, and adaptive control. In the model refinement application, data are used to refine or improve a model of a system. It is assumed that the difference between the model output and the data is due to an unmodeled subsystem whose interconnection with the modeled system is inaccessible, that is, the interconnection signals cannot be measured and thus standard system identification techniques cannot be used. Using input reconstruction, these inaccessible signals can be estimated, and the inaccessible subsystem can be fitted. We demonstrate input reconstruction in a model refinement framework by identifying unknown physics in a space weather model and by estimating an unknown film growth in a lithium ion battery. The same technique can be used to obtain estimates of states that cannot be directly measured. Adaptive control can be formulated as a model-refinement problem, where the unknown subsystem is the idealized controller that minimizes a measured performance variable. Minimal modeling input reconstruction for adaptive control is useful for applications where modeling information may be difficult to obtain. We demonstrate

  19. Location- and lesion-dependent estimation of background tissue complexity for anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Avanaki, Ali R. N.; Espig, Kathryn; Knippel, Eddie; Kimpe, Tom R. L.; Xthona, Albert; Maidment, Andrew D. A.

    2016-03-01

    In this paper, we specify a notion of background tissue complexity (BTC) as perceived by a human observer that is suited for use with model observers. This notion of BTC is a function of image location and lesion shape and size. We propose four unsupervised BTC estimators based on: (i) perceived pre- and post-lesion similarity of images, (ii) lesion border analysis (LBA; a conspicuous lesion should be brighter than its surround), (iii) tissue anomaly detection, and (iv) mammogram density measurement. The latter two are existing methods we adapt for location- and lesion-dependent BTC estimation. To validate the BTC estimators, we ask human observers to measure BTC as the visibility threshold amplitude of an inserted lesion at specified locations in a mammogram. Both human-measured and computationally estimated BTC varied with lesion shape (from circular to oval), size (from small circular to larger circular), and location (different points across a mammogram). BTCs measured by different human observers are correlated (ρ=0.67). The BTC estimators are highly correlated to each other (ρ ≥ 0.84). These estimators can be used to construct a BTC-aware model observer, with applications such as optimization of contrast-enhanced medical imaging systems and creation of a diversified image dataset with characteristics of a desired population.

  20. Hierarchical ensemble of background models for PTZ-based video surveillance.

    PubMed

    Liu, Ning; Wu, Hefeng; Lin, Liang

    2015-01-01

    In this paper, we study a novel hierarchical background model for intelligent video surveillance with the pan-tilt-zoom (PTZ) camera, and give rise to an integrated system consisting of three key components: background modeling, observed frame registration, and object tracking. First, we build the hierarchical background model by separating the full range of continuous focal lengths of a PTZ camera into several discrete levels and then partitioning the wide scene at each level into many partial fixed scenes. In this way, the wide scenes captured by a PTZ camera through rotation and zoom are represented by a hierarchical collection of partial fixed scenes. A new robust feature is presented for background modeling of each partial scene. Second, we locate the partial scenes corresponding to the observed frame in the hierarchical background model. Frame registration is then achieved by feature descriptor matching via fast approximate nearest neighbor search. Afterwards, foreground objects can be detected using background subtraction. Last, we configure the hierarchical background model into a framework to facilitate existing object tracking algorithms under the PTZ camera. Foreground extraction is used to assist tracking an object of interest. The tracking outputs are fed back to the PTZ controller for adjusting the camera properly so as to maintain the tracked object in the image plane. We apply our system on several challenging scenarios and achieve promising results.
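
    The frame-registration step - matching feature descriptors of the observed frame against stored partial scenes - can be sketched with brute-force nearest-neighbor search plus Lowe's ratio test. The system itself uses fast approximate nearest-neighbor search; this exhaustive version and its names are illustrative.

```python
import numpy as np

def match_descriptors(desc_query, desc_scene, ratio=0.75):
    """Match each query descriptor to its nearest scene descriptor, keeping
    only unambiguous matches (nearest much closer than second nearest)."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_scene - d, axis=1)
        j, k = np.argsort(dists)[:2]          # nearest and second-nearest neighbors
        if dists[j] < ratio * dists[k]:       # Lowe's ratio test
            matches.append((i, int(j)))
    return matches
```

    Enough accepted matches localize the observed frame within a stored partial scene, after which background subtraction against that scene's model can proceed.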

  1. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinement in accurate prediction of damage levels and failure time.
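
    The predictor-corrector time marching with accuracy-controlled step selection can be illustrated on a scalar ODE: the gap between predictor and corrector serves as the local error indicator that grows or shrinks the step. The step-control rules and tolerances below are assumptions, not the paper's scheme.

```python
import numpy as np

def integrate_adaptive(f, y0, t0, t1, tol=1e-5, dt=0.1):
    """Euler predictor / trapezoidal corrector with adaptive time-stepping:
    accept a step when the predictor-corrector gap is within tolerance."""
    t, y = t0, float(y0)
    while t < t1:
        dt = min(dt, t1 - t)
        pred = y + dt * f(t, y)                              # explicit Euler predictor
        corr = y + 0.5 * dt * (f(t, y) + f(t + dt, pred))    # trapezoidal corrector
        err = abs(corr - pred)                               # local error indicator
        if err <= tol or dt < 1e-12:
            t, y = t + dt, corr                              # accept the step
            if err < 0.1 * tol:
                dt *= 2.0                                    # error small: grow the step
        else:
            dt *= 0.5                                        # error large: retry smaller
    return y

# exponential decay y' = -y, y(0) = 1, integrated to t = 1
y_end = integrate_adaptive(lambda t, y: -y, 1.0, 0.0, 1.0)
```

    The same accept/grow/shrink logic carries over to structural time marching, with the error indicator built from the problem's principal variables instead of a scalar gap.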

  2. Modeling Students' Memory for Application in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  3. Internal models in sensorimotor integration: perspectives from adaptive control theory.

    PubMed

    Tin, Chung; Poon, Chi-Sang

    2005-09-01

    Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally developed for robotic manipulator applications. The modular internal models architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. The possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems, such as sensorimotor prediction or the resolution of vestibular sensory ambiguity, is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in the future.
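
    A minimal concrete instance of the adaptive-control paradigms surveyed here is MIT-rule model-reference adaptation of a single feedforward gain. The static plant, reference model, gains, and excitation signal are illustrative assumptions.

```python
import numpy as np

def mit_rule_demo(k_plant=2.0, k_model=1.0, gamma=0.5, dt=0.01, steps=4000):
    """Tune a gain theta so the plant output y = k_plant * theta * u tracks
    the reference model y_m = k_model * u (scalar MIT rule)."""
    theta = 0.0
    for i in range(steps):
        u = np.sin(0.05 * i) + 1.5      # persistently exciting command (u > 0)
        y = k_plant * theta * u         # plant output through the adjustable gain
        y_m = k_model * u               # internal (reference) model output
        e = y - y_m                     # tracking error
        theta -= gamma * e * y_m * dt   # MIT rule: gradient descent on 0.5*e^2
    return theta

theta_hat = mit_rule_demo()             # should approach k_model / k_plant = 0.5
```

    The reference model here plays the role of an internal model: adaptation is driven entirely by the mismatch between predicted and actual output, with no direct knowledge of the plant gain.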

  4. Modeling-Error-Driven Performance-Seeking Direct Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V.; Kaneshige, John; Krishnakumar, Kalmanje; Burken, John

    2008-01-01

    This paper presents a stable discrete-time adaptive law that targets modeling errors in a direct adaptive control framework. The update law was developed in our previous work for the adaptive disturbance rejection application. The approach is based on the philosophy that without modeling errors, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work towards getting this performance even in the face of modeling uncertainties/errors. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to the typical Lyapunov-based adaptive approaches that guarantee only stability, the current approach investigates conditions for stability as well as performance. A high-fidelity F-15 model is used to illustrate the overall approach.

  5. Simple reminiscence: a stress-adaptation model of the phenomenon.

    PubMed

    Puentes, William J

    2002-01-01

    The phenomenon of Simple Reminiscence may play an important role in the individual's ability to adapt to anxiety-provoking stressors across the life span. However, a clearly articulated model of the underlying psychodynamics of the phenomenon has not been developed. In this paper, a proposed model of the phenomenon of Simple Reminiscence is presented. The important components of the model (developmental issues, triggers, uses, processes, and outcomes) are interpreted within the context of Peplau's conceptualization of stress and stress adaptation. Implications of the model for future empirical investigations of Simple Reminiscence are discussed.

  6. Modeling hospitals' adaptive capacity during a loss of infrastructure services.

    PubMed

    Vugrin, Eric D; Verzi, Stephen J; Finley, Patrick D; Turnquist, Mark A; Griffin, Anne R; Ricci, Karen A; Wyte-Lake, Tamar

    2015-01-01

    Resilience in hospitals - their ability to withstand, adapt to, and rapidly recover from disruptive events - is vital to their role as part of national critical infrastructure. This paper presents a model to provide planning guidance to decision makers about how to make hospitals more resilient against possible disruption scenarios. This model represents a hospital's adaptive capacities that are leveraged to care for patients during loss of infrastructure services (power, water, etc.). The model is an optimization that reallocates and substitutes resources to keep patients in a high care state or allocates resources to allow evacuation if necessary. An illustrative example demonstrates how the model might be used in practice.
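
    A toy version of the paper's resource-reallocation idea: divide scarce backup power among hospital units to keep as many patients as possible in a high-care state. Treating power as divisible makes this a fractional knapsack, for which greedy allocation by patients-per-kW is optimal; the units and numbers are invented for illustration (the actual model is a richer optimization with substitution and evacuation).

```python
def allocate_backup_power(units, capacity_kw):
    """Greedy allocation of backup power.
    units: list of (name, kw_needed, patients_supported) tuples.
    Returns {name: kw_allocated}, favoring units with the best
    patients-per-kW ratio (optimal for divisible allocations)."""
    plan, remaining = {}, capacity_kw
    for name, kw, patients in sorted(units, key=lambda u: u[2] / u[1], reverse=True):
        give = min(kw, remaining)       # give the unit what it needs, if available
        plan[name] = give
        remaining -= give
    return plan

units = [("ICU", 40, 12), ("OR", 30, 6), ("wards", 80, 20)]
plan = allocate_backup_power(units, capacity_kw=100)  # ICU first, then wards
```

    With 100 kW against 150 kW of demand, the ICU is fully powered, the wards partially, and the OR not at all; indivisible loads or substitution options would require the integer or network formulations the paper alludes to.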

  7. Adaptive tracking for complex systems using reduced-order models

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.

    1990-01-01

    Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track the desired position trajectory of a payload using a four-parameter model instead of a full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.

  8. Background modeling for moving object detection in long-distance imaging through turbulent medium.

    PubMed

    Elkabetz, Adiel; Yitzhaky, Yitzhak

    2014-02-20

    A basic step in automatic moving objects detection is often modeling the background (i.e., the scene excluding the moving objects). The background model describes the temporal intensity distribution expected at different image locations. Long-distance imaging through atmospheric turbulent medium is affected mainly by blur and spatiotemporal movements in the image, which have contradicting effects on the temporal intensity distribution, mainly at edge locations. This paper addresses this modeling problem theoretically, and experimentally, for various long-distance imaging conditions. Results show that a unimodal distribution is usually a more appropriate model. However, if image deblurring is performed, a multimodal modeling might be more appropriate. PMID:24663312

  9. Background model for a NaI (Tl) detector devoted to dark matter searches

    NASA Astrophysics Data System (ADS)

    Cebrián, S.; Cuesta, C.; Amaré, J.; Borjabad, S.; Fortuño, D.; García, E.; Ginestra, C.; Gómez, H.; Martínez, M.; Oliván, M. A.; Ortigoza, Y.; Ortiz de Solórzano, A.; Pobes, C.; Puimedón, J.; Sarsa, M. L.; Villar, J. A.

    2012-09-01

    NaI (Tl) is a well known high light yield scintillator. Very large crystals can be grown to be used in a wide range of applications. In particular, such large crystals are very good-performing detectors in the search for dark matter, where they have been used for a long time and reported first evidence of the presence of an annual modulation in the detection rate, compatible with that expected for a dark matter signal. In the frame of the ANAIS (Annual modulation with NaI Scintillators) dark matter search project, a large and long effort has been carried out in order to characterize the background of sodium iodide crystals. In this paper we present in detail our background model for a 9.6 kg NaI (Tl) detector taking data at the Canfranc Underground Laboratory (LSC): most of the contaminations contributing to the background have been precisely identified and quantified by different complementary techniques such as HPGe spectrometry, discrimination of alpha particles vs. beta/gamma background by Pulse Shape Analysis (PSA) and coincidence techniques; then, Monte Carlo (MC) simulations using Geant4 package have been carried out for the different contributions. Only a few assumptions are required in order to explain most of the measured background at high energy, supporting the goodness of the proposed model for the present ANAIS prototype whose background is dominated by 40K bulk contamination. At low energy, some non-explained background components are still present and additional work is required to improve background understanding, but some plausible background sources contributing in this range have been studied in this work. Prospects of achievable backgrounds, at low and high energy, for the ANAIS-upgraded detectors, relying on the proposed background model conveniently scaled, are also presented.

  10. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  11. The reduced order model problem in distributed parameter systems adaptive identification and control. [adaptive control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.

    1981-01-01

    The reduced order model problem in distributed parameter systems adaptive identification and control is investigated. A comprehensive examination of real-time centralized adaptive control options for flexible spacecraft is provided.

  12. Modelling global multi-conjugated adaptive optics

    NASA Astrophysics Data System (ADS)

    Viotto, Valentina; Ragazzoni, Roberto; Magrin, Demetrio; Bergomi, Maria; Dima, Marco; Farinato, Jacopo; Marafatto, Luca; Greggio, Davide

    2014-08-01

    The recently proposed concept of Global MCAO (GMCAO) looks for natural guide stars in a very wide technical field of view (FoV) to increase the overall sky coverage, and deals with the consequent reduction in depth of focus by numerically introducing a fairly large number of Virtual Deformable Mirrors (VDMs), which then serve as the starting point for an optimization of the real DM shapes to correct the (smaller) scientific FoV. To translate the GMCAO concept into a real system, a number of parameters need to be analyzed and optimized, such as the number of references and VDMs to be used, the technical FoV size, the spatial sampling, and the sensing wavelength. These and other major choices, such as the open-loop WFS concept and design, will then drive the requirements and the performance of the system (e.g., limiting magnitude, linear response, and sensitivity). This paper collects some major results of the ongoing study on the feasibility of an Adaptive Optics system for the E-ELT based on GMCAO, with particular emphasis on the sky coverage issue. Besides the sensitivity analysis for the optimization of the parameters mentioned above, this topic involves the implementation of an IDL simulation tool to estimate the system performance in terms of Strehl ratio in a 2×2 arcmin FoV when a variable number of NGSs and VDMs are used. Different technical FoV diameters for the reference selection and various constellations can also be compared. This study could be the starting point for dedicated laboratory testing and, in the future, an on-sky experiment at an 8 m telescope with a scaled-down demonstrator.

  13. Adaptive network models of collective decision making in swarming systems.

    PubMed

    Chen, Li; Huepe, Cristián; Gross, Thilo

    2016-08-01

    We consider a class of adaptive network models where links can only be created or deleted between nodes in different states. These models provide an approximate description of a set of systems where nodes represent agents moving in physical or abstract space, the state of each node represents the agent's heading direction, and links indicate mutual awareness. We show analytically that the adaptive network description captures a phase transition to collective motion in some swarming systems, such as the Vicsek model, and that the properties of this transition are determined by the number of states (discrete heading directions) that can be accessed by each agent.
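    The create/delete rule described above is easy to prototype. The sketch below is a toy illustration only (the agent count, the rewiring rule, and all parameter values are our own assumptions, not the paper's exact dynamics): nodes carry one of q discrete heading states, and a randomly chosen link between unequal states either triggers adoption of a neighbour's state or is deleted and recreated elsewhere.

```python
import random

def adaptive_network_sim(n=100, q=3, p_adopt=0.9, steps=2000, seed=1):
    """Toy adaptive-network sketch: links only change between unequal-state
    nodes. With probability p_adopt one endpoint adopts the other's state
    (the link then joins equal states); otherwise the link is deleted and a
    new link is created elsewhere."""
    rng = random.Random(seed)
    state = [rng.randrange(q) for _ in range(n)]          # discrete headings
    links = {tuple(sorted(rng.sample(range(n), 2))) for _ in range(2 * n)}
    for _ in range(steps):
        i, j = rng.choice(sorted(links))                  # pick a random link
        if state[i] == state[j]:
            continue                                      # equal states: inert
        if rng.random() < p_adopt:
            state[j] = state[i]                           # adoption move
        else:
            links.discard((i, j))                         # delete the link
            k = rng.randrange(n)
            if k != i:
                links.add(tuple(sorted((i, k))))          # create a new one
    return state
```

Sweeping p_adopt (or q, the number of accessible heading states) in such a sketch is one way to probe the transition the authors analyse.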

  16. Nutrient-dependent/pheromone-controlled adaptive evolution: a model

    PubMed Central

    Kohl, James Vaughn

    2013-01-01

    Background The prenatal migration of gonadotropin-releasing hormone (GnRH) neurosecretory neurons allows nutrients and human pheromones to alter GnRH pulsatility, which modulates the concurrent maturation of the neuroendocrine, reproductive, and central nervous systems, thus influencing the development of ingestive behavior, reproductive sexual behavior, and other behaviors. Methods This model details how chemical ecology drives adaptive evolution via: (1) ecological niche construction, (2) social niche construction, (3) neurogenic niche construction, and (4) socio-cognitive niche construction. This model exemplifies the epigenetic effects of olfactory/pheromonal conditioning, which alters genetically predisposed, nutrient-dependent, hormone-driven mammalian behavior and choices for pheromones that control reproduction via their effects on luteinizing hormone (LH) and systems biology. Results Nutrients are metabolized to pheromones that condition behavior in the same way that food odors condition behavior associated with food preferences. The epigenetic effects of olfactory/pheromonal input calibrate and standardize molecular mechanisms for genetically predisposed receptor-mediated changes in intracellular signaling and stochastic gene expression in GnRH neurosecretory neurons of brain tissue. For example, glucose and pheromones alter the hypothalamic secretion of GnRH and LH. A form of GnRH associated with sexual orientation in yeasts links control of the feedback loops and developmental processes required for nutrient acquisition, movement, reproduction, and the diversification of species from microbes to man. Conclusion An environmental drive evolved from that of nutrient ingestion in unicellular organisms to that of pheromone-controlled socialization in insects. 
In mammals, food odors and pheromones cause changes in hormones such as LH, which has developmental effects on pheromone-controlled sexual behavior in nutrient-dependent, reproductively fit individuals.

  17. Object detection in natural backgrounds predicted by discrimination performance and models

    NASA Technical Reports Server (NTRS)

    Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
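    The summation stage shared by all three models can be sketched as follows. This is a minimal single-channel illustration, not the authors' implementation: the Cortex-transform or CSF filtering front end is omitted, and the contrast gain factor is modelled here simply as division by the background's RMS contrast (our assumption, not the paper's exact form).

```python
import numpy as np

def discriminability(bg, obj_bg, beta=4.0, contrast_gain=False):
    """Minkowski (beta-norm) pooling of absolute differences between the
    background image and the object-plus-background image. beta=2, 4, or
    np.inf correspond to the three summation exponents in the study."""
    bg = np.asarray(bg, dtype=float)
    diff = np.abs(np.asarray(obj_bg, dtype=float) - bg)
    if contrast_gain:
        mean = bg.mean()
        rms = np.sqrt(((bg - mean) ** 2).mean()) / max(mean, 1e-9)
        diff = diff / max(rms, 1e-9)        # illustrative gain normalization
    if np.isinf(beta):
        return float(diff.max())            # beta = infinity: peak difference
    return float((diff ** beta).sum() ** (1.0 / beta))
```

Larger beta weights the largest local differences more heavily; beta = infinity reduces pooling to the single worst-case pixel difference.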

  18. Hardware performance versus video quality trade-off for Gaussian mixture model based background identification systems

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore; Petra, Nicola

    2014-04-01

    Background identification is a fundamental task in many video processing systems. The Gaussian Mixture Model is a background identification algorithm that models the pixel luminance with a mixture of K Gaussian distributions. The number of Gaussian distributions determines the accuracy of the background model and the computational complexity of the algorithm. This paper compares two hardware implementations of the Gaussian Mixture Model that use three and five Gaussians per pixel. A trade-off analysis is carried out by evaluating the quality of the processed video sequences and the hardware performance. The circuits are implemented on FPGA by exploiting state-of-the-art, hardware-oriented formulations of the Gaussian Mixture Model equations and by using truncated binary multipliers. The results suggest that the circuit that uses three Gaussian distributions provides video with good accuracy while requiring significantly fewer resources than the option that uses five Gaussian distributions per pixel.
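    For readers unfamiliar with the algorithm being put into hardware, here is a floating-point software sketch of a per-pixel K-Gaussian background model in the style of Stauffer and Grimson. The constants and the matching rule are illustrative; the paper's circuits replace this arithmetic with fixed-point operations and truncated multipliers.

```python
import numpy as np

class PixelGMM:
    """Per-pixel background model with K Gaussians (illustrative sketch)."""

    def __init__(self, k=3, alpha=0.05, var0=36.0, t_bg=0.7):
        self.w = np.ones(k) / k                 # mixture weights
        self.mu = np.linspace(0.0, 255.0, k)    # component means (luminance)
        self.var = np.full(k, var0)             # component variances
        self.alpha, self.var0, self.t_bg = alpha, var0, t_bg

    def update(self, x):
        """Absorb luminance x; return True if x matches the background."""
        d2 = (x - self.mu) ** 2
        match = d2 < 6.25 * self.var            # within 2.5 std deviations
        if match.any():
            j = int(np.argmin(np.where(match, d2, np.inf)))  # closest match
            self.w *= 1.0 - self.alpha
            self.w[j] += self.alpha
            self.mu[j] += self.alpha * (x - self.mu[j])
            self.var[j] += self.alpha * (d2[j] - self.var[j])
        else:
            j = int(np.argmin(self.w))          # replace weakest component
            self.w[j], self.mu[j], self.var[j] = self.alpha, x, self.var0
        self.w /= self.w.sum()
        order = np.argsort(-self.w / np.sqrt(self.var))   # rank by w/sigma
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.t_bg)) + 1
        return bool(match.any() and j in order[:n_bg])
```

Using k=3 versus k=5 here changes only the array sizes, which is exactly the accuracy-versus-resources trade-off the paper quantifies in hardware.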

  19. A Model of Adaptive Language Learning

    ERIC Educational Resources Information Center

    Woodrow, Lindy J.

    2006-01-01

    This study applies theorizing from educational psychology and language learning to hypothesize a model of language learning that takes into account affect, motivation, and language learning strategies. The study employed a questionnaire to assess variables of motivation, self-efficacy, anxiety, and language learning strategies. The sample…

  20. Modeling Developmental Transitions in Adaptive Resonance Theory

    ERIC Educational Resources Information Center

    Raijmakers, Maartje E. J.; Molenaar, Peter C. M.

    2004-01-01

    Neural networks are applied to a theoretical subject in developmental psychology: modeling developmental transitions. Two issues that are involved will be discussed: discontinuities and acquiring qualitatively new knowledge. We will argue that by the appearance of a bifurcation, a neural network can show discontinuities and may acquire…

  1. Hybrid and adaptive meta-model-based global optimization

    NASA Astrophysics Data System (ADS)

    Gu, J.; Li, G. Y.; Dong, Z.

    2012-01-01

    As an efficient and robust technique for global optimization, meta-model-based search methods have been increasingly used in solving complex and computation-intensive design optimization problems. In this work, a hybrid and adaptive meta-model-based global optimization method that can automatically select appropriate meta-modelling techniques during the search process to improve search efficiency is introduced. The search initially applies three representative meta-models concurrently. The search then progresses towards a better-performing model by selecting sample data points adaptively, according to the calculated values of the three meta-models, to improve modelling accuracy and search efficiency. To demonstrate the superior performance of the new algorithm over existing search methods, the new method is tested on various benchmark global optimization problems and applied to a real industrial design optimization example involving vehicle crash simulation. The method is particularly suitable for design problems involving computation-intensive, black-box analyses and simulations.
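    A stripped-down, one-dimensional sketch of the meta-model-based search loop may help fix ideas. It uses a single polynomial surrogate rather than the paper's three adaptively selected meta-models, so it is illustrative only: fit a surrogate to the points evaluated so far, sample the expensive function at the surrogate's minimizer, and repeat.

```python
import numpy as np

def surrogate_minimize(f, lo, hi, n_init=7, iters=15, seed=0):
    """Illustrative meta-model-based search: each iteration refits a cheap
    polynomial surrogate to all evaluated points and spends the next
    expensive evaluation at the surrogate's predicted minimum."""
    rng = np.random.default_rng(seed)
    xs = list(np.linspace(lo, hi, n_init))      # space-filling initial design
    ys = [f(x) for x in xs]
    for _ in range(iters):
        coef = np.polyfit(xs, ys, deg=min(4, len(xs) - 1))   # surrogate fit
        grid = np.linspace(lo, hi, 401)
        x_next = grid[np.argmin(np.polyval(coef, grid))]     # surrogate minimum
        x_next = float(np.clip(x_next + rng.normal(0, 1e-3), lo, hi))  # jitter
        xs.append(x_next)
        ys.append(f(x_next))                    # one expensive evaluation
    i = int(np.argmin(ys))
    return xs[i], ys[i]
```

In the paper the surrogate family itself is chosen adaptively at each step; swapping `np.polyfit` for competing model types and picking the best performer is the natural extension of this loop.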

  2. The Importance of Formalizing Computational Models of Face Adaptation Aftereffects

    PubMed Central

    Ross, David A.; Palmeri, Thomas J.

    2016-01-01

    Face adaptation is widely used as a means to probe the neural representations that support face recognition. While the theories that relate face adaptation to behavioral aftereffects may seem conceptually simple, our work has shown that testing computational instantiations of these theories can lead to unexpected results. Instantiating a model of face adaptation not only requires specifying how faces are represented and how adaptation shapes those representations but also specifying how decisions are made, translating hidden representational states into observed responses. Considering the high-dimensionality of face representations, the parallel activation of multiple representations, and the non-linearity of activation functions and decision mechanisms, intuitions alone are unlikely to succeed. If the goal is to understand mechanism, not simply to examine the boundaries of a behavioral phenomenon or correlate behavior with brain activity, then formal computational modeling must be a component of theory testing. To illustrate, we highlight our recent computational modeling of face adaptation aftereffects and discuss how models can be used to understand the mechanisms by which faces are recognized. PMID:27378960

  3. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code reuse difficult, and increases software complexity.

  4. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  5. Comparison of background ozone estimates over the western United States based on two separate model methodologies

    NASA Astrophysics Data System (ADS)

    Dolwick, Pat; Akhtar, Farhan; Baker, Kirk R.; Possiel, Norm; Simon, Heather; Tonnesen, Gail

    2015-05-01

    Two separate air quality model methodologies for estimating background ozone levels over the western U.S. are compared in this analysis. The first approach is a direct sensitivity modeling approach that considers the ozone levels that would remain after certain emissions are entirely removed (i.e., zero-out modeling). The second approach is based on an instrumented air quality model which tracks the formation of ozone within the simulation and assigns the source of that ozone to pre-identified categories (i.e., source apportionment modeling). This analysis focuses on a definition of background referred to as U.S. background (USB) which is designed to represent the influence of all sources other than U.S. anthropogenic emissions. Two separate modeling simulations were completed for an April-October 2007 period, both focused on isolating the influence of sources other than domestic manmade emissions. The zero-out modeling was conducted with the Community Multiscale Air Quality (CMAQ) model and the source apportionment modeling was completed with the Comprehensive Air Quality Model with Extensions (CAMx). Our analysis shows that the zero-out and source apportionment techniques provide relatively similar estimates of the magnitude of seasonal mean daily 8-h maximum U.S. background ozone at locations in the western U.S. when base case model ozone biases are considered. The largest differences between the two sets of USB estimates occur in urban areas where interactions with local NOx emissions can be important, especially when ozone levels are relatively low. Both methodologies conclude that seasonal mean daily 8-h maximum U.S. background ozone levels can be as high as 40-45 ppb over rural portions of the western U.S. Background fractions tend to decrease as modeled total ozone concentrations increase, with typical fractions of 75-100 percent on the lowest ozone days (<25 ppb) and typical fractions between 30 and 50% on days with ozone above 75 ppb. The finding that

  6. Adaptation of the microdosimetric kinetic model to hypoxia

    NASA Astrophysics Data System (ADS)

    Bopp, C.; Hirayama, R.; Inaniwa, T.; Kitagawa, A.; Matsufuji, N.; Noda, K.

    2016-11-01

    Ion beams present a potential advantage for the treatment of lesions with hypoxic regions. In order to use this potential, it is important to accurately model the survival of oxic as well as hypoxic cells. In this work, an adaptation of the microdosimetric kinetic (MK) model that makes it possible to account for cell hypoxia is presented. The adaptation relies on the modification of the quantity of damage (double strand breaks and more complex lesions) due to the radiation. Model parameters such as domain size and nucleus size are then adapted through a fitting procedure. We applied this approach to two cell lines, HSG and V79, for helium, carbon and neon ions. A similar behaviour of the parameters was found for the two cell lines, namely a reduction of the domain size and an increase in the sensitive nuclear volume of hypoxic cells compared to those of oxic cells. In terms of oxygen enhancement ratio (OER), the behaviour of the experimental data can be reproduced, including its dependence on particle type at the same linear energy transfer (LET). Errors on the cell survival prediction are of the same order of magnitude as for the original MK model. Our adaptation makes it possible to account for hypoxia without modelling the OER as a function of the LET of the particles, instead directly accounting for hypoxic cell survival data.

  7. Subjective quality assessment of an adaptive video streaming model

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Shahid, Muhammad; Garcia, Narciso

    2014-01-01

    With the recent increased popularity and high usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area, generally focused on the technical enhancement of HAS technology and applications. However, the lack of a common HAS standard led to multiple proprietary approaches developed by major Internet companies. The emerging MPEG-DASH standard specifies the packaging of the video content and the HTTP syntax, but all the details of the adaptation behavior are left to the client implementation. Nevertheless, to design an adaptation algorithm that optimizes the viewing experience of the end user, multimedia service providers need to know the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment was carried out through a subjective study of the end-user response to various possible client behaviors for changing the video quality, taking different QoE influence factors into account. The experimental conclusions provide good insight into the QoE of different adaptation schemes, which can be exploited by HAS clients for designing adaptation algorithms.

  8. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    SciTech Connect

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
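    As an illustration of the analysis step described above, here is a compact global (non-localized) ensemble transform Kalman filter update in the Hunt et al. (2007) formulation. ADAPT applies this kind of update locally around each model grid point, so this sketch deliberately omits the localization and all model-specific details; the variable names and test values are ours.

```python
import numpy as np

def letkf_update(Xf, y, H, R, rho=1.0):
    """One ensemble transform Kalman filter analysis step.
    Xf: (n, m) forecast ensemble; y: (p,) observations;
    H: (p, n) linear observation operator; R: (p, p) obs-error covariance;
    rho: multiplicative covariance inflation factor."""
    n, m = Xf.shape
    xb = Xf.mean(axis=1)
    Xp = (Xf - xb[:, None]) * rho                       # inflated perturbations
    Yp = H @ Xp                                         # perturbations in obs space
    C = Yp.T @ np.linalg.inv(R)                         # (m, p)
    Pa = np.linalg.inv((m - 1) * np.eye(m) + C @ Yp)    # ensemble-space covariance
    wbar = Pa @ C @ (y - H @ xb)                        # mean-update weights
    lam, Q = np.linalg.eigh((m - 1) * Pa)
    W = Q @ np.diag(np.sqrt(np.maximum(lam, 0.0))) @ Q.T  # symmetric square root
    xa = xb + Xp @ wbar                                 # analysis mean
    return xa[:, None] + Xp @ W                         # analysis ensemble
```

The local (LETKF) variant runs this same update once per grid point, restricting `y`, `H`, and `R` to nearby observations, which is what keeps spurious long-range ensemble correlations out of the analysis.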

  9. Modeling neural adaptation in the frog auditory system

    NASA Astrophysics Data System (ADS)

    Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea

    2005-09-01

    Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation to ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they show frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate-and-fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
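    The mechanism described here, a leaky integrate-and-fire neuron whose threshold is incremented at each spike, can be sketched in a few lines. All parameter values below are illustrative rather than taken from the paper; with a constant input current the inter-spike intervals lengthen over time, which is the short-term adaptation being modelled.

```python
def lif_adapting(i_in, dt=1e-4, tau_m=0.01, r_m=1.0, v_th0=1.0,
                 d_th=0.5, tau_th=0.05):
    """Leaky integrate-and-fire neuron with an adapting threshold.
    i_in: input current per time step; returns spike times in seconds.
    Each spike raises the threshold by d_th; between spikes the threshold
    relaxes back toward v_th0 with time constant tau_th."""
    v, th = 0.0, v_th0
    spikes = []
    for step, cur in enumerate(i_in):
        v += dt / tau_m * (-v + r_m * cur)    # leaky membrane integration
        th += dt / tau_th * (v_th0 - th)      # threshold decays toward rest
        if v >= th:
            spikes.append(step * dt)
            v = 0.0                           # reset membrane potential
            th += d_th                        # spike-triggered threshold rise
    return spikes
```

Stacking three such layers, with the output spikes of one layer driving the input current of the next, mirrors the nerve-fiber / medullary / toral architecture the abstract describes.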

  10. The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization

    PubMed Central

    van der Steen, M. C. (Marieke); Keller, Peter E.

    2013-01-01

    A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM. PMID:23772211
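    The two ingredients of ADAM can be caricatured in a few lines of code. The formulation below is our own illustrative reading, not the published model equations: adaptation is a proportional correction of the last asynchrony, and anticipation is a linear extrapolation of the stimulus tempo from the last two inter-onset intervals.

```python
def adam_taps(stim, alpha=0.5, beta=1.0, base_period=0.5):
    """Toy adaptation-plus-anticipation tapping model.
    stim: list of stimulus onset times (s); returns model tap times.
    alpha scales the reactive phase correction (adaptation); beta scales
    the tempo extrapolation (anticipation)."""
    taps = [stim[0]]                          # assume the first tap is aligned
    for n in range(1, len(stim)):
        err = taps[-1] - stim[n - 1]          # most recent asynchrony
        if n >= 3:
            last = stim[n - 1] - stim[n - 2]
            prev = stim[n - 2] - stim[n - 3]
            period = last + beta * (last - prev)   # anticipate tempo change
        elif n == 2:
            period = stim[1] - stim[0]
        else:
            period = base_period              # assumed initial period
        taps.append(taps[-1] + period - alpha * err)
    return taps
```

With beta = 0 the model is purely reactive and lags behind a steadily accelerating sequence; with beta = 1 the extrapolated period tracks a linear tempo change, which is the benefit of the anticipatory component.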

  11. Communicating to Farmers about Skin Cancer: The Behavior Adaptation Model.

    ERIC Educational Resources Information Center

    Parrott, Roxanne; Monahan, Jennifer; Ainsworth, Stuart; Steiner, Carol

    1998-01-01

    States health campaign messages designed to encourage behavior adaptation have greater likelihood of success than campaigns promoting avoidance of at-risk behaviors that cannot be avoided. Tests a model of health risk behavior using four different behaviors in a communication campaign aimed at reducing farmers' risk for skin cancer--questions…

  12. Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies.

    PubMed

    Yu, Chao; Tan, Guozhen; Lv, Hongtao; Wang, Zhen; Meng, Jun; Hao, Jianye; Ren, Fenghui

    2016-01-01

    Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed, in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, which is considered to be the most successful opinion in the neighbourhood, can be generated based on the principle of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model lies in that it captures the essential features of people's adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and some critical factors such as size of opinion space and network topology can have significant influences on opinion dynamics. PMID:27282089

  14. Why Reinvent the Wheel? Let's Adapt Our Institutional Assessment Model.

    ERIC Educational Resources Information Center

    Aguirre, Francisco; Hawkins, Linda

    This paper reports on the implementation of an Integrated Assessment and Strategic Planning (IASP) process to comply with accountability requirements at the community college of New Mexico State University at Alamogordo. The IASP model adapted an existing compliance matrix and applied it to the business college program in 1995 to assess and…

  15. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  17. A Model of Internal Communication in Adaptive Communication Systems.

    ERIC Educational Resources Information Center

    Williams, M. Lee

    A study identified and categorized different types of internal communication systems and developed an applied model of internal communication in adaptive organizational systems. Twenty-one large organizations were selected for their varied missions and diverse approaches to managing internal communication. Individual face-to-face or telephone…

  18. Adapting the Transtheoretical Model of Change to the Bereavement Process

    ERIC Educational Resources Information Center

    Calderwood, Kimberly A.

    2011-01-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals…

  19. A Context-Adaptive Model for Program Evaluation.

    ERIC Educational Resources Information Center

    Lynch, Brian K.

    1990-01-01

    Presents an adaptable, context-sensitive model for ESL/EFL program evaluation, consisting of seven steps that guide an evaluator through consideration of relevant issues, information, and design elements. Examples from an evaluation of the Reading for Science and Technology Project at the University of Guadalajara, Mexico are given. (31…

  20. Background Modelling in Very-High-Energy Gamma-Ray Astronomy

    SciTech Connect

    Berge, David; Funk, S.; Hinton, J.; /Heidelberg, Max Planck Inst. /Heidelberg Observ. /Leeds U.

    2006-11-07

Ground based Cherenkov telescope systems measure astrophysical γ-ray emission against a background of cosmic-ray induced air showers. The subtraction of this background is a major challenge for the extraction of spectra and morphology of γ-ray sources. The unprecedented sensitivity of the new generation of ground based very-high-energy γ-ray experiments such as H.E.S.S. has led to the discovery of many previously unknown extended sources. The analysis of such sources requires a range of different background modeling techniques. Here we describe some of the techniques that have been applied to data from the H.E.S.S. instrument and compare their performance. Each background model is introduced and discussed in terms of suitability for image generation or spectral analysis, and possible caveats are mentioned. We show that there is no single multi-purpose model; different models are appropriate for different tasks. To keep systematic uncertainties under control it is important to apply several models to the same data set and compare the results.

  1. Bayesian inference with an adaptive proposal density for GARCH models

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2010-04-01

We perform the Bayesian inference of a GARCH model by the Metropolis-Hastings algorithm with an adaptive proposal density. The adaptive proposal density is assumed to be the Student's t-distribution and the distribution parameters are evaluated by using the data sampled during the simulation. We apply the method to the QGARCH model, which is one of the asymmetric GARCH models, and make empirical studies of the Nikkei 225, DAX and Hang Seng indices. We find that autocorrelation times from our method are very small; thus the method is very efficient for generating uncorrelated Monte Carlo data. The results from the QGARCH model show that all three indices exhibit the leverage effect, i.e. the volatility is high after negative observations.
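The sampler can be sketched as an independence Metropolis-Hastings chain whose Student's t proposal re-estimates its location and scale from the draws collected so far. This is a 1-D toy with a standard-normal target and ν = 5; the adaptation schedule and all names are illustrative assumptions, not the paper's implementation.

```python
import math, random

def t_logpdf(x, mu, sigma, nu):
    """Log density of a Student-t with location mu, scale sigma, dof nu."""
    z = (x - mu) / sigma
    c = (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
         - 0.5 * math.log(nu * math.pi) - math.log(sigma))
    return c - (nu + 1) / 2 * math.log1p(z * z / nu)

def t_sample(rng, mu, sigma, nu):
    z = rng.gauss(0.0, 1.0)
    v = rng.gammavariate(nu / 2, 2.0)        # chi-squared with nu dof
    return mu + sigma * z / math.sqrt(v / nu)

def adaptive_mh(logpost, n=5000, nu=5.0, seed=0):
    """Independence MH with a Student-t proposal whose location/scale are
    periodically re-estimated from the chain (toy 1-D version)."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    mu, sigma = 0.0, 1.0
    for i in range(n):
        y = t_sample(rng, mu, sigma, nu)
        a = (logpost(y) - logpost(x)
             + t_logpdf(x, mu, sigma, nu) - t_logpdf(y, mu, sigma, nu))
        if math.log(rng.random()) < a:
            x = y
        chain.append(x)
        if i % 500 == 499:                   # periodic adaptation step
            m = sum(chain) / len(chain)
            s = math.sqrt(sum((c - m) ** 2 for c in chain) / len(chain))
            mu, sigma = m, max(s, 1e-3)
    return chain

chain = adaptive_mh(lambda x: -0.5 * x * x)  # target: standard normal
print(round(sum(chain) / len(chain), 2))     # sample mean, should be near 0
```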

  2. Evaluating mallard adaptive management models with time series

    USGS Publications Warehouse

    Conn, P.B.; Kendall, W.L.

    2004-01-01

Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when the model is not used to generate data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these…
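The model-weighting step at the core of AHM is Bayes' rule: each predictor model's weight is multiplied by the likelihood of the newly observed population state under that model, then the weights are renormalized. A sketch with purely illustrative likelihood values:

```python
def update_model_weights(weights, likelihoods):
    """One Bayesian weight update: weight_i ∝ weight_i * L_i(observation).
    Better-predicting models (higher likelihood) gain weight over time."""
    w = [wi * li for wi, li in zip(weights, likelihoods)]
    total = sum(w)
    return [wi / total for wi in w]

# four equally weighted models; the first predicted the new count best
w = update_model_weights([0.25, 0.25, 0.25, 0.25], [0.8, 0.4, 0.2, 0.2])
print([round(x, 3) for x in w])  # → [0.5, 0.25, 0.125, 0.125]
```

Iterating this update over a time series is what drives the convergence rates examined in the study.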

  3. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into 13 dimensional space constructed by geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all N possible models [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
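The GRNN itself is the Nadaraya-Watson kernel estimator; anisotropy means one bandwidth per input dimension, which is what lets a very large bandwidth effectively switch an irrelevant feature off. A minimal sketch with illustrative toy data (not the paper's precipitation or wind data):

```python
import math

def grnn_predict(x, X, Y, h):
    """Nadaraya-Watson / GRNN estimate with an anisotropic Gaussian kernel:
    one bandwidth h[d] per input dimension, so an irrelevant feature can
    be suppressed by assigning it a huge bandwidth."""
    w = []
    for xi in X:
        d2 = sum(((a - b) / hd) ** 2 for a, b, hd in zip(x, xi, h))
        w.append(math.exp(-0.5 * d2))
    return sum(wi * yi for wi, yi in zip(w, Y)) / sum(w)

# toy 2-D data: y depends on the first coordinate only
X = [(0.0, 5.0), (1.0, -3.0), (2.0, 9.0), (3.0, 0.0)]
Y = [0.0, 1.0, 2.0, 3.0]
# small bandwidth on the relevant axis, huge on the irrelevant one
print(round(grnn_predict((1.5, 100.0), X, Y, h=(0.3, 1e6)), 2))  # → 1.5
```

Ranking candidate bandwidth vectors by leave-one-out error is the feature-selection mechanism the abstract describes.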

  4. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive, reflecting patient-specific temporal behaviors well even when the available patient-specific data are sparse and span only a short time. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189
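The division of labour between a population-level stage and an online patient-specific adjustment can be caricatured in a few lines. This is a hypothetical toy, not the paper's model: a fixed population trend plus an exponentially adapted patient-specific offset.

```python
def adaptive_forecast(pop_trend, observations, alpha=0.5):
    """Two-stage toy forecaster: predict the population trend plus a
    patient-specific offset, and adapt the offset after each new
    observation arrives (alpha is an illustrative adaptation rate)."""
    offset, preds = 0.0, []
    for trend, y in zip(pop_trend, observations):
        pred = trend + offset
        preds.append(pred)
        offset += alpha * (y - pred)   # adjust to the new observation
    return preds

# flat population trend; this patient consistently runs 2.0 higher
print(adaptive_forecast([1.0] * 5, [3.0] * 5))
# → [1.0, 2.0, 2.5, 2.75, 2.875]
```

The predictions start at the population trend and converge toward the patient's own level, which is the adaptation behaviour point (3) of the abstract describes.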

  5. OMEGA: The operational multiscale environment model with grid adaptivity

    SciTech Connect

    Bacon, D.P.

    1995-07-01

This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge databases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.

  6. Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.
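Of the three predictors compared, the Root-Mean-Square difference metric is simple enough to state exactly; the pixel values below are illustrative.

```python
import math

def rms_difference(img_a, img_b):
    """Root-mean-square difference between two images given as flat lists
    of digital pixel values: the simplest predictor in the comparison."""
    n = len(img_a)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / n)

background = [10, 12, 11, 13]
with_target = [10, 20, 11, 13]   # one pixel changed by the target
print(rms_difference(background, with_target))  # → 4.0
```

Because it ignores spatial frequency and masking, this metric predicts detectability less accurately than the filter-based models the abstract describes.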

  7. Modeled summer background concentration nutrients and suspended sediment in the mid-continent (USA) great rivers

    EPA Science Inventory

We used regression models to predict background concentration of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. F...

  8. Missile guidance law design using adaptive cerebellar model articulation controller.

    PubMed

    Lin, Chih-Min; Peng, Ya-Fu

    2005-05-01

    An adaptive cerebellar model articulation controller (CMAC) is proposed for command to line-of-sight (CLOS) missile guidance law design. In this design, the three-dimensional (3-D) CLOS guidance problem is formulated as a tracking problem of a time-varying nonlinear system. The adaptive CMAC control system is comprised of a CMAC and a compensation controller. The CMAC control is used to imitate a feedback linearization control law and the compensation controller is utilized to compensate the difference between the feedback linearization control law and the CMAC control. The online adaptive law is derived based on the Lyapunov stability theorem to learn the weights of receptive-field basis functions in CMAC control. In addition, in order to relax the requirement of approximation error bound, an estimation law is derived to estimate the error bound. Then the adaptive CMAC control system is designed to achieve satisfactory tracking performance. Simulation results for different engagement scenarios illustrate the validity of the proposed adaptive CMAC-based guidance law.
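A CMAC is essentially tile coding over the input space with an LMS update shared across the active receptive fields. A minimal 1-D sketch (this is not the guidance law itself; the tiling sizes, learning rate, and toy target y = x are illustrative assumptions):

```python
def cmac_features(x, n_tilings=4, n_tiles=8, lo=0.0, hi=1.0):
    """Active-cell indices for a 1-D CMAC: several overlapping tilings,
    each offset by a fraction of a tile width."""
    width = (hi - lo) / n_tiles
    idx = []
    for t in range(n_tilings):
        off = t * width / n_tilings
        cell = int((x - lo + off) / width)
        cell = min(max(cell, 0), n_tiles)        # clamp at the edges
        idx.append(t * (n_tiles + 1) + cell)
    return idx

def cmac_train(samples, epochs=200, lr=0.1, n_tilings=4, n_tiles=8):
    """LMS training: the error is shared across the active cells."""
    w = [0.0] * (n_tilings * (n_tiles + 1))
    for _ in range(epochs):
        for x, y in samples:
            idx = cmac_features(x, n_tilings, n_tiles)
            err = y - sum(w[i] for i in idx)
            for i in idx:
                w[i] += lr * err / n_tilings
    return w

# learn the toy mapping y = x on [0, 1]
w = cmac_train([(i / 10, i / 10) for i in range(11)])
print(round(sum(w[i] for i in cmac_features(0.55)), 2))  # near 0.55
```

The overlapping receptive fields give the local generalization that makes CMAC attractive for online control, with stability of the weight update handled in the paper by a Lyapunov-derived adaptive law rather than plain LMS.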

  9. A model of excitation and adaptation in bacterial chemotaxis.

    PubMed Central

    Hauri, D C; Ross, J

    1995-01-01

We present a model of the chemotactic mechanism of Escherichia coli that exhibits both initial excitation and eventual complete adaptation to any and all levels of stimulus ("exact" adaptation). In setting up the reaction network, we use only known interactions and experimentally determined cytosolic concentrations. Whenever possible, rate coefficients are first assigned experimentally measured values; second, we permit some variation in these rate coefficients by using a multiple-well optimization technique and incremental adjustment to obtain values that are sufficient to engender initial response to stimuli (excitation) and an eventual return of behavior to baseline (adaptation). The predictions of the model are similar to the observed behavior of wild-type bacteria in regard to the time scale of excitation in the presence of both attractant and repellent. The model predicts a weaker response to attractant than that observed experimentally, and the time scale of adaptation does not depend as strongly upon stimulant concentration as does that for wild-type bacteria. The mechanism responsible for long-term adaptation is local rather than global: on addition of a repellent or attractant, the receptor types not sensitive to that attractant or repellent do not change their average methylation level in the long term, although transient changes do occur. By carrying out a phenomenological simulation of bacterial chemotaxis, we find that the model is insufficiently sensitive to effect taxis in a gradient of attractant. However, by arbitrarily increasing the sensitivity of the motor to the tumble effector (phosphorylated CheY), we can obtain chemotactic behavior. PMID:7696522

  10. Nonresonant Background in Isobaric Models of Photoproduction of η-Mesons on Nucleons

    NASA Astrophysics Data System (ADS)

    Tryasuchev, V. A.; Alekseev, B. A.; Yakovleva, V. S.; Kondratyeva, A. G.

    2016-07-01

Within the framework of isobaric models of pseudoscalar meson photoproduction, the nonresonant background of photoproduction of η-mesons on nucleons is investigated as a function of energy. A bound on the magnitude of the pseudoscalar coupling constant of the η-meson with a nucleon is obtained, gηNN²/4π ≤ 0.01, and a bound on vector meson exchange models is also obtained.

  11. A generalized spatiotemporal covariance model for stationary background in analysis of MEG data.

    PubMed

    Plis, S M; Schmidt, D M; Jun, S C; Ranken, D M

    2006-01-01

    Using a noise covariance model based on a single Kronecker product of spatial and temporal covariance in the spatiotemporal analysis of MEG data was demonstrated to provide improvement in the results over that of the commonly used diagonal noise covariance model. In this paper we present a model that is a generalization of all of the above models. It describes models based on a single Kronecker product of spatial and temporal covariance as well as more complicated multi-pair models together with any intermediate form expressed as a sum of Kronecker products of spatial component matrices of reduced rank and their corresponding temporal covariance matrices. The model provides a framework for controlling the tradeoff between the described complexity of the background and computational demand for the analysis using this model. Ways to estimate the value of the parameter controlling this tradeoff are also discussed.
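The model family can be written as cov = Σ_k C_space,k ⊗ C_time,k, with the single Kronecker product as the one-pair special case and reduced-rank spatial components in the extra pairs. A small pure-Python sketch (all matrices illustrative):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of rows: the
    building block of the spatiotemporal model, cov = C_space ⊗ C_time."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# toy spatial (2 sensors) and temporal (3 samples) covariance factors
C_s = [[2.0, 0.5], [0.5, 1.0]]
C_t = [[1.0, 0.3, 0.0], [0.3, 1.0, 0.3], [0.0, 0.3, 1.0]]

# single-pair model: one Kronecker product, a 6 x 6 covariance
C1 = kron(C_s, C_t)

# multi-pair generalization: add a second pair with a reduced-rank
# (here rank-1) spatial component and its own temporal covariance
C_s2 = [[1.0, 1.0], [1.0, 1.0]]
I_t = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
C2 = mat_add(C1, kron(C_s2, I_t))

print(len(C1), len(C1[0]))  # → 6 6
print(all(C2[i][j] == C2[j][i] for i in range(6) for j in range(6)))  # → True
```

The number of pairs and the rank of each spatial component are the knobs controlling the complexity/computation tradeoff the abstract discusses.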

  12. Career Development and Older Workers: Study Evaluating Adaptability in Older Workers Using Hall's Model

    ERIC Educational Resources Information Center

    Strate, Merwyn L.; Torraco, Richard J.

    2005-01-01

    This qualitative case study described the development of adaptive competence in older workers using a Model of Adaptability and Adaptation developed by Dr. Douglas T. Hall (2002). Few studies have focused on the development of adaptability in workers when faced with change and no studies have focused on the development of adaptability in older…

  13. A model of adaptive population migration in South Africa.

    PubMed

    Hattingh, P S

    1989-06-01

In South Africa, political factors, as well as socioeconomic forces, have traditionally shaped the distribution pattern of the population. Economic and political realities have recently brought adaptive changes in government policy with concomitant migration responses. In explaining the model, the author describes 3 recent movements. 2 stem from policy changes as reflected in the national and urban distributional patterns of blacks, and the movement of Indians to the Orange Free State. The 3rd deals with the movement of elderly whites in the city of Pretoria. In the case of the blacks, migration into the white area has been a spontaneous evolutionary adaptation to the presence of strong push factors in the homelands and pull factors in the white area. Since 1910, governments have tried to restrict the influx of blacks by formulating and implementing normative policies of intervention, and since the 1960s, by actively promoting urban development in the homelands. Despite these measures, the number of blacks in the white area has swelled to such an extent that the government has adapted by increasing the rights of blacks. Blacks, Asians, and coloreds have also filtered into exclusive white suburbs, ignoring government legislation. Currently, the government is reacting adaptively by proposing to create free settlement areas, but also normatively by placing more emphasis on areas reserved for specific racial groups. The 2nd example shows that despite efforts by Indians to move into the Orange Free State, progress is very slow. However, the process for adaptive migration to and within the Orange Free State has been set in motion. The 3rd example, that of elderly whites in Pretoria, reflects the migratory behavior of this group in response to the natural process of aging. Here there are no normative policies, but the authorities will probably formulate adaptive policies as the white South African population ages rapidly. Both normative and adaptive government policies…

  14. ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment

    SciTech Connect

    Parton, William; Hanson, Paul J; Swanston, Chris; Torn, Margaret S.; Trumbore, Susan E.; Riley, William J.; Kelly, Robin

    2010-01-01

The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  15. ForCent model development and testing using the Enriched Background Isotope Study experiment

    SciTech Connect

    Parton, W.J.; Hanson, P. J.; Swanston, C.; Torn, M.; Trumbore, S. E.; Riley, W.; Kelly, R.

    2010-10-01

The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  16. Adaptive filters and internal models: multilevel description of cerebellar function.

    PubMed

    Porrill, John; Dean, Paul; Anderson, Sean R

    2013-11-01

    Cerebellar function is increasingly discussed in terms of engineering schemes for motor control and signal processing that involve internal models. To address the relation between the cerebellum and internal models, we adopt the chip metaphor that has been used to represent the combination of a homogeneous cerebellar cortical microcircuit with individual microzones having unique external connections. This metaphor indicates that identifying the function of a particular cerebellar chip requires knowledge of both the general microcircuit algorithm and the chip's individual connections. Here we use a popular candidate algorithm as embodied in the adaptive filter, which learns to decorrelate its inputs from a reference ('teaching', 'error') signal. This algorithm is computationally powerful enough to be used in a very wide variety of engineering applications. However, the crucial issue is whether the external connectivity required by such applications can be implemented biologically. We argue that some applications appear to be in principle biologically implausible: these include the Smith predictor and Kalman filter (for state estimation), and the feedback-error-learning scheme for adaptive inverse control. However, even for plausible schemes, such as forward models for noise cancellation and novelty-detection, and the recurrent architecture for adaptive inverse control, there is unlikely to be a simple mapping between microzone function and internal model structure. This initial analysis suggests that cerebellar involvement in particular behaviours is therefore unlikely to have a neat classification into categories such as 'forward model'. It is more likely that cerebellar microzones learn a task-specific adaptive-filter operation which combines a number of signal-processing roles.
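The candidate algorithm itself, an adaptive linear filter trained to decorrelate its output from the reference ('teaching') signal, fits in a few lines. A hypothetical LMS sketch (the covariance-style learning rule and toy teaching signal are illustrative, not a claim about the cerebellar circuitry):

```python
import random

def lms_decorrelate(inputs, reference, lr=0.05):
    """Adaptive-filter sketch: learn weights on the inputs so the filter
    output cancels (decorrelates from) the reference/teaching signal.
    Returns the learned weights and the residual error history."""
    w = [0.0] * len(inputs[0])
    residuals = []
    for x, t in zip(inputs, reference):
        y = sum(wi * xi for wi, xi in zip(w, x))        # filter output
        e = t - y                                        # teaching signal
        w = [wi + lr * e * xi for wi, xi in zip(w, x)]   # LMS update
        residuals.append(e)
    return w, residuals

# reference is a fixed linear mix of the inputs, so the filter should
# recover the mixing weights and drive the residual toward zero
rng = random.Random(0)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2000)]
T = [2.0 * x[0] - 1.0 * x[2] for x in X]
w, r = lms_decorrelate(X, T)
print([round(wi, 2) for wi in w], round(abs(r[-1]), 3))
```

Under this view, what distinguishes one microzone from another is not the algorithm (the same filter everywhere) but which input and teaching signals its external connections supply.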

  17. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to the damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0-238 N s m^-1 through the viscous and electromagnetic components, respectively.

  18. Adaptive deployment of model reductions for tau-leaping simulation

    NASA Astrophysics Data System (ADS)

    Wu, Sheng; Fu, Jin; Petzold, Linda R.

    2015-05-01

    Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups.
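For reference, the unreduced tau-leaping step that such reductions accelerate: each reaction channel fires a Poisson-distributed number of times per leap of length tau, with mean propensity × tau. A toy sketch for reversible isomerization A <-> B (the adaptive reduction machinery itself is not shown; all sizes and rates are illustrative):

```python
import math, random

def tau_leap(x0, stoich, rates, t_end, tau, seed=0):
    """Plain tau-leaping: advance the state by Poisson-distributed
    reaction firings over fixed leaps of length tau."""
    rng = random.Random(seed)
    x, t = list(x0), 0.0

    def poisson(lam):
        # Knuth's method; fine for the small means used here
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    while t < t_end:
        props = [r(x) for r in rates]
        fires = [poisson(a * tau) for a in props]
        for f, nu in zip(fires, stoich):
            for i, d in enumerate(nu):
                x[i] += f * d
        x = [max(v, 0) for v in x]       # crude negativity guard
        t += tau
    return x

# A <-> B with k1 = k2 = 1: 1000 molecules split roughly evenly
x = tau_leap([1000, 0],
             stoich=[(-1, 1), (1, -1)],
             rates=[lambda s: 1.0 * s[0], lambda s: 1.0 * s[1]],
             t_end=5.0, tau=0.01)
print(sum(x), x[0] > 0 and x[1] > 0)  # → 1000 True
```

With stiff systems the leap size tau is forced down by the fastest reactions, which is exactly the inefficiency the model reductions target.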

  19. Adaptive deployment of model reductions for tau-leaping simulation

    PubMed Central

    Fu, Jin; Petzold, Linda R.

    2015-01-01

    Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups. PMID:26026435

  20. Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods

    NASA Astrophysics Data System (ADS)

    Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.

    2016-09-01

    In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.

  1. Adaptive deployment of model reductions for tau-leaping simulation.

    PubMed

    Wu, Sheng; Fu, Jin; Petzold, Linda R

    2015-05-28

    Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups.

  2. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    NASA Astrophysics Data System (ADS)

    Casadei, D.

    2014-10-01

The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameter space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such a limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce the reference prior extremely well for any background prior. Thus, it can be useful in applications requiring the reference prior to be evaluated many times.
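A numeric sketch of the limiting form discussed in the abstract: with perfectly known background b, the limiting reference prior is p(s) ∝ (s + b)^(-1/2), so after observing n counts the approximate reference posterior is proportional to (s + b)^(n - 1/2) e^-(s + b), a Gamma density in s + b with shape n + 1/2 truncated to s ≥ 0. The grid normalization and example counts below are illustrative.

```python
import math

def approx_reference_posterior(n, b, s_grid):
    """Posterior density p(s | n) ∝ (s + b)^(n - 1/2) * exp(-(s + b)) on a
    uniform grid of signal values, normalized by the trapezoid rule
    (a simple numeric stand-in for the closed-form Gamma normalization)."""
    logf = [(n - 0.5) * math.log(s + b) - (s + b) for s in s_grid]
    m = max(logf)                            # guard against underflow
    f = [math.exp(v - m) for v in logf]
    h = s_grid[1] - s_grid[0]
    z = h * (sum(f) - 0.5 * (f[0] + f[-1]))
    return [v / z for v in f]

grid = [i * 0.01 for i in range(3001)]       # s from 0 to 30
post = approx_reference_posterior(n=10, b=3.0, s_grid=grid)
mean = sum(s * p for s, p in zip(grid, post)) * 0.01
print(round(mean, 2))                        # posterior mean of the signal
```

For n = 10 observed counts over an expected background of 3, the posterior mass sits near s ≈ 7, as expected from the untruncated Gamma mean n + 1/2 minus the background.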

  3. Phase structure of a Yukawa-like model in the presence of magnetic background and boundaries

    NASA Astrophysics Data System (ADS)

    Abreu, L. M.; Nery, E. S.

    2016-08-01

    In this paper, we investigate the thermodynamics of a Yukawa-like model, constituted of a complex scalar field interacting with real scalar and vector fields, in the presence of an external magnetic field and boundaries. Making use of the mean-field approximation, we analyze the phase structure of this model at effective chemical equilibrium under changes of the relevant parameters of the model, focusing on the influence of the boundaries on the phase structure. The findings reveal a strong dependence of the nature of the phase structure on temperature, magnetic background and size of the compactified coordinate, with the possibility of a two-step phase transition.

  4. Regional and global modeling estimates of policy relevant background ozone over the United States

    NASA Astrophysics Data System (ADS)

    Emery, Christopher; Jung, Jaegun; Downey, Nicole; Johnson, Jeremiah; Jimenez, Michele; Yarwood, Greg; Morris, Ralph

    2012-02-01

    Policy Relevant Background (PRB) ozone, as defined by the US Environmental Protection Agency (EPA), refers to ozone concentrations that would occur in the absence of all North American anthropogenic emissions. PRB enters into the calculation of health risk benefits, and as the US ozone standard approaches background levels, PRB is increasingly important in determining the feasibility and cost of compliance. As PRB is a hypothetical construct, modeling is a necessary tool. Since 2006 EPA has relied on global modeling to establish PRB for their regulatory analyses. Recent assessments with higher resolution global models exhibit improved agreement with remote observations and modest upward shifts in PRB estimates. This paper shifts the paradigm to a regional model (CAMx) run at 12 km resolution, for which North American boundary conditions were provided by a low-resolution version of the GEOS-Chem global model. We conducted a comprehensive model inter-comparison, from which we elucidate differences in predictive performance against ozone observations and differences in temporal and spatial background variability over the US. In general, CAMx performed better in replicating observations at remote monitoring sites, and performance remained better at higher concentrations. While spring and summer mean PRB predicted by GEOS-Chem ranged over 20-45 ppb, CAMx-predicted PRB ranged over 25-50 ppb and reached well over 60 ppb in the west due to event-oriented phenomena such as stratospheric intrusion and wildfires. CAMx showed a higher correlation between modeled PRB and total observed ozone, which is significant for health risk assessments. A case study during April 2006 suggests that stratospheric exchange of ozone is underestimated in both models on an event basis. We conclude that wildfires, lightning NOx and stratospheric intrusions contribute a significant level of uncertainty in estimating PRB, and that PRB will require careful consideration in the ozone standard setting process.

  5. Framework for dynamic background modeling and shadow suppression for moving object segmentation in complex wavelet domain

    NASA Astrophysics Data System (ADS)

    Kushwaha, Alok Kumar Singh; Srivastava, Rajeev

    2015-09-01

    Moving object segmentation using change detection in the wavelet domain under continuous variations of lighting conditions is a challenging problem in video surveillance systems. Several methods have been proposed in the literature for change detection in the wavelet domain for moving object segmentation with static backgrounds, but dynamic background changes have not been addressed effectively. The methods proposed in the literature suffer from various problems, such as ghost-like appearance, object shadows, and noise. To deal with these issues, a framework for dynamic background modeling and shadow suppression under rapidly changing illumination conditions for moving object segmentation in the complex wavelet domain is proposed. The proposed method consists of eight steps applied to the given video frames: wavelet decomposition of the frame using the complex wavelet transform; change detection on the detail coefficients (LH, HL, and HH); improved Gaussian mixture-based dynamic background modeling on the approximate coefficients (LL subband); cast shadow suppression; soft thresholding for noise removal; strong edge detection; inverse wavelet transformation for reconstruction; and finally a closing morphology operator. A comparative analysis of the proposed method is presented both qualitatively and quantitatively against other standard methods available in the literature on six datasets in terms of various performance measures. Experimental results demonstrate the efficacy of the proposed method.
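A heavily simplified sketch of the background-modeling step: a per-pixel single-Gaussian running background model with foreground masking. The paper uses an improved Gaussian mixture on the LL subband after a complex wavelet transform; the single-Gaussian model, learning rate, and threshold below are illustrative simplifications operating on raw frames:

```python
import numpy as np

class RunningGaussianBackground:
    """Per-pixel single-Gaussian background model (a simplification of the
    paper's Gaussian-mixture stage; operates on raw frames, not LL subbands)."""
    def __init__(self, first_frame, lr=0.05, k=2.5):
        self.mean = first_frame.astype(float)
        self.var = np.full(first_frame.shape, 15.0 ** 2)
        self.lr, self.k = lr, k

    def apply(self, frame):
        frame = frame.astype(float)
        d2 = (frame - self.mean) ** 2
        fg = d2 > (self.k ** 2) * self.var          # foreground: outside k sigma
        bg = ~fg
        # adapt mean/variance only where the pixel matched the background
        self.mean[bg] += self.lr * (frame - self.mean)[bg]
        self.var[bg] += self.lr * (d2 - self.var)[bg]
        return fg

bgm = RunningGaussianBackground(np.full((4, 4), 100.0))
for _ in range(20):                      # learn a static background
    bgm.apply(np.full((4, 4), 100.0))
moved = np.full((4, 4), 100.0)
moved[1, 2] = 220.0                      # one pixel changes
mask = bgm.apply(moved)                  # only the changed pixel is flagged
```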

  6. Network and adaptive system of systems modeling and analysis.

    SciTech Connect

    Lawton, Craig R.; Campbell, James E. Dr.; Anderson, Dennis James; Eddy, John P.

    2007-05-01

    This report documents the results of an LDRD program entitled ''Network and Adaptive System of Systems Modeling and Analysis'' that was conducted during FY 2005 and FY 2006. The purpose of this study was to determine and implement ways to incorporate network communications modeling into existing System of Systems (SoS) modeling capabilities. Current SoS modeling, particularly for the Future Combat Systems (FCS) program, is conducted under the assumption that communication between the various systems is always possible and occurs instantaneously. A more realistic representation of these communications allows for better, more accurate simulation results. The current approach to meeting this objective has been to use existing capabilities to model network hardware reliability and adding capabilities to use that information to model the impact on the sustainment supply chain and operational availability.

  7. Language Model Combination and Adaptation Using Weighted Finite State Transducers

    NASA Technical Reports Server (NTRS)

    Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.

    2010-01-01

    In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can either be used to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
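As a stand-in for the WFST-based combination (which requires an FST toolkit), the sketch below shows plain linear interpolation of two toy unigram LMs, one of the combination configurations such a framework can encode; the vocabularies and mixture weight are illustrative:

```python
# Linear interpolation of two (toy) unigram language models -- one of the LM
# combination schemes that can also be expressed via WFST operations.
def interpolate(lm_a, lm_b, w):
    """Return the convex combination w*lm_a + (1-w)*lm_b over the union vocabulary."""
    vocab = set(lm_a) | set(lm_b)
    return {t: w * lm_a.get(t, 0.0) + (1 - w) * lm_b.get(t, 0.0) for t in vocab}

news_lm = {"market": 0.5, "report": 0.3, "the": 0.2}
chat_lm = {"lol": 0.6, "the": 0.4}
mixed = interpolate(news_lm, chat_lm, w=0.7)
total = sum(mixed.values())   # still a valid distribution: sums to 1
```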

  8. An adaptive distance measure for use with nonparametric models

    SciTech Connect

    Garvey, D. R.; Hines, J. W.

    2006-07-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm, have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This makes the distance calculation robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
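The adaptive distance idea can be sketched directly: query dimensions falling outside the training minimum/maximum are dropped before the distance computation. The snippet pairs it with a simple Gaussian-kernel weighted average as a stand-in for the locally weighted regression step (exemplar data and bandwidth are illustrative):

```python
import numpy as np

def adaptive_euclidean(query, exemplars, lo, hi):
    """Euclidean distance that drops query dimensions lying outside the
    training range [lo, hi], in the spirit of the Adaptive Euclidean measure."""
    ok = (query >= lo) & (query <= hi)            # in-range input channels
    d = exemplars[:, ok] - query[ok]
    return np.sqrt((d ** 2).sum(axis=1))

def aakr_predict(query, exemplars, lo, hi, h=1.0):
    """AAKR-style prediction: Gaussian-kernel weights over exemplar distances
    (a simplified stand-in for the locally weighted regression step)."""
    dist = adaptive_euclidean(query, exemplars, lo, hi)
    w = np.exp(-((dist / h) ** 2))
    return (w[:, None] * exemplars).sum(axis=0) / w.sum()

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])   # historical exemplars
lo, hi = X.min(axis=0), X.max(axis=0)
bad_query = np.array([2.0, 999.0])     # second sensor failed high
pred = aakr_predict(bad_query, X, lo, hi, h=2.0)
# the failed channel is ignored, so the prediction stays within the exemplars
```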

  9. The method of narrow-band audio classification based on universal noise background model

    NASA Astrophysics Data System (ADS)

    Rui, Rui; Bao, Chang-chun

    2013-03-01

    Audio classification is the basis of content-based audio analysis and retrieval. Conventional classification methods mainly depend on feature extraction from the whole audio clip, which considerably increases the time required for classification. An approach for classifying a narrow-band audio stream based on frame-level feature extraction is presented in this paper. The audio signals are divided into speech, instrumental music, song with accompaniment, and noise using the Gaussian mixture model (GMM). To cope with changing acoustic environments, a universal noise background model (UNBM) for white noise, street noise, factory noise and car interior noise is built. In addition, three feature schemes are considered to optimize feature selection. The experimental results show that the proposed algorithm achieves high accuracy for audio classification, especially under each of the noise backgrounds considered, and keeps the classification time under one second.
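The background-normalized scoring can be illustrated with single diagonal Gaussians standing in for the GMMs; the class means, variances, and the 2-D feature space below are toy assumptions, not the paper's features or models:

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log-density of a diagonal Gaussian (single-component stand-in for a GMM)."""
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var).sum()

# Toy per-class models plus a "universal noise background model": each frame is
# scored against every class after normalizing by the background model score.
classes = {"speech": (np.array([0.0, 1.0]), np.array([1.0, 1.0])),
           "music":  (np.array([3.0, -1.0]), np.array([1.0, 1.0]))}
ubm = (np.array([1.5, 0.0]), np.array([4.0, 4.0]))

def classify(x):
    ubm_score = log_gauss(x, *ubm)
    scores = {c: log_gauss(x, m, v) - ubm_score for c, (m, v) in classes.items()}
    return max(scores, key=scores.get)

label = classify(np.array([0.1, 0.9]))   # a frame near the "speech" mean
```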

  10. Model Adaptation for Prognostics in a Particle Filtering Framework

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai Frank

    2011-01-01

    One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
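The key idea, appending a model parameter to the particle filter's state vector so it is adapted during tracking, can be sketched minimally. The exponential-decay "health" model, noise levels, and particle count are illustrative assumptions, not the paper's battery model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Particle filter with the model parameter (decay rate k) appended to the
# state vector, so state tracking and model adaptation happen jointly.
true_k, n_part, n_steps = 0.10, 2000, 40
x_true, obs_sig = 1.0, 0.02

parts = np.column_stack([np.full(n_part, 1.0),             # state x
                         rng.uniform(0.01, 0.3, n_part)])  # parameter k
for _ in range(n_steps):
    x_true *= np.exp(-true_k)                              # hidden truth
    z = x_true + rng.normal(0.0, obs_sig)                  # noisy measurement
    parts[:, 0] *= np.exp(-parts[:, 1])                    # propagate each particle
    parts[:, 1] += rng.normal(0.0, 0.002, n_part)          # parameter roughening
    w = np.exp(-0.5 * ((z - parts[:, 0]) / obs_sig) ** 2) + 1e-12
    w /= w.sum()                                           # likelihood weights
    parts = parts[rng.choice(n_part, n_part, p=w)]         # resample

k_est = parts[:, 1].mean()   # adapted parameter; should approach true_k = 0.10
```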

  11. CMAQ (Community Multi-Scale Air Quality) atmospheric distribution model adaptation to region of Hungary

    NASA Astrophysics Data System (ADS)

    Lázár, Dóra; Weidinger, Tamás

    2016-04-01

    Nowadays it is important to measure and predict the concentration of harmful atmospheric pollutants such as dust, aerosol particles of different size ranges, nitrogen compounds, and ozone. The Department of Meteorology at Eötvös Loránd University has been applying the WRF (Weather Research and Forecasting) model for several years; it is suitable for weather forecasting tasks and provides input data for various environmental models (e.g. DNDC). By adapting the CMAQ (Community Multi-scale Air Quality) model we have designed a combined air quality-meteorological model (WRF-CMAQ). In this research it is important to apply different emission databases and a background model describing the initial distribution of the pollutants. We used the SMOKE (Sparse Matrix Operator Kernel Emissions) model to construct the emission dataset from EMEP (European Monitoring and Evaluation Programme) inventories and the GEOS-Chem model for initial and boundary conditions. Our model settings were the CMAQ CB05 (Carbon Bond 2005) chemical mechanism with 108 x 108 km, 36 x 36 km and 12 x 12 km grids for the regions of Europe, the Carpathian Basin and Hungary, respectively. i) The structure of the model system and ii) a case study for the Carpathian Basin (an anticyclonic weather situation on 21 September 2012) are presented. iii) Verification of the ozone forecast is provided based on the measurements of background air pollution stations. iv) Effects of model attributes (e.g. transition time, emission dataset, parameterizations) on the ozone forecast for Hungary are also investigated.

  12. Integrated modeling of the GMT laser tomography adaptive optics system

    NASA Astrophysics Data System (ADS)

    Piatrou, Piotr

    2014-08-01

    Laser Tomography Adaptive Optics (LTAO) is one of the adaptive optics systems planned for the Giant Magellan Telescope (GMT). End-to-end simulation tools that are able to cope with the complexity and computational burden of the AO systems to be installed on extremely large telescopes such as GMT prove to be an integral part of the GMT LTAO system development endeavors. SL95, the Fortran 95 Simulation Library, is one of the software tools successfully used for the LTAO system end-to-end simulations. The goal of the SL95 project is to provide a complete set of generic, richly parameterized mathematical models for key elements of segmented telescope wavefront control systems, including both active and adaptive optics, as well as models for atmospheric turbulence, extended light sources like Laser Guide Stars (LGS), light propagation engines and closed-loop controllers. The library is implemented as a hierarchical collection of classes capable of mutual interaction, which allows one to assemble complex wavefront control system configurations with multiple interacting control channels. In this paper we demonstrate the SL95 capabilities by building an integrated end-to-end model of the GMT LTAO system with 7 control channels: LGS tomography with Adaptive Secondary and on-instrument deformable mirrors, tip-tilt and vibration control, LGS stabilization, LGS focus control, truth sensor-based dynamic noncommon path aberration rejection, pupil position control, and a SLODAR-like embedded turbulence profiler. The rich parameterization of the SL95 classes allows one to build detailed error budgets propagating through the system multiple errors and perturbations such as turbulence-, telescope-, telescope misalignment-, segment phasing error-, and non-common path-induced aberrations, sensor noises, deformable mirror-to-sensor mis-registration, vibration, temporal errors, etc. We will present a short description of the SL95 architecture, as well as the sample GMT LTAO system simulation

  13. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed.

  14. The Family Adaptation Model: A Life Course Perspective. Technical Report 880.

    ERIC Educational Resources Information Center

    Bowen, Gary L.

    This conceptual model for explaining the factors and processes that underlie family adaptation in the Army relies heavily upon two traditions: the "Double ABCX" model of family stress and adaptation and the "Person-Environment Fit" model. The new model has three major parts: the environmental system, the personal system, and family adaptation.…

  15. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    DOE PAGES

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
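The ensemble update at the heart of such schemes can be sketched in a few lines. Note that ADAPT uses the deterministic LETKF with localization; the perturbed-observation ensemble Kalman update below is a simpler stochastic variant, shown only to illustrate how an ensemble is pulled toward an observation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal perturbed-observation ensemble Kalman update for a scalar quantity
# observed directly (illustrative; not the LETKF used by ADAPT).
n_ens = 50
prior = rng.normal(2.0, 0.5, n_ens)       # prior (forecast) ensemble
obs, obs_err = 3.0, 0.2                   # observation and its std. dev.

P = prior.var(ddof=1)                     # ensemble forecast variance
K = P / (P + obs_err ** 2)                # Kalman gain
perturbed = obs + rng.normal(0.0, obs_err, n_ens)
posterior = prior + K * (perturbed - prior)
# the analysis mean moves from the prior mean toward the observation,
# and the analysis spread shrinks
```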

  16. Numerical modeling of seismic waves using frequency-adaptive meshes

    NASA Astrophysics Data System (ADS)

    Hu, Jinyin; Jia, Xiaofeng

    2016-08-01

    An improved modeling algorithm using frequency-adaptive meshes is applied to meet the computational requirements of all seismic frequency components. It automatically adopts coarse meshes for low-frequency computations and fine meshes for high-frequency computations. The grid intervals are adaptively calculated based on a smooth inversely proportional function of grid size with respect to frequency. In regular grid-based methods, a uniform or non-uniform mesh is used for frequency-domain wave propagators and is fixed for all frequencies. Too coarse a mesh results in inaccurate high-frequency wavefields and unacceptable numerical dispersion; on the other hand, an overly fine mesh may cause storage and computational overburdens as well as invalid propagation angles of low-frequency wavefields. Experiments on the Padé generalized screen propagator indicate that the adaptive mesh effectively overcomes these drawbacks of regular fixed-mesh methods, accurately computing the wavefield and its propagation angle in a wide frequency band. Several synthetic examples also demonstrate its feasibility for seismic modeling and migration.
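The spacing rule can be sketched as a grid interval inversely proportional to frequency, capped for very low frequencies; the points-per-wavelength count and the cap below are illustrative assumptions, not the paper's exact smooth function:

```python
import numpy as np

def grid_interval(freq, v_min, ppw=8.0, h_max=50.0):
    """Grid interval inversely proportional to frequency: fine meshes for high
    frequencies, coarse (capped at h_max) for low ones.  ppw = points per
    minimum wavelength; both ppw and h_max are illustrative choices."""
    h = v_min / (ppw * np.asarray(freq, dtype=float))   # wavelength / ppw
    return np.minimum(h, h_max)

freqs = np.array([1.0, 5.0, 10.0, 20.0, 40.0])          # Hz
h = grid_interval(freqs, v_min=2000.0)                  # m/s minimum velocity
# spacing shrinks monotonically as frequency grows
```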

  17. Modeling electrostrictive deformable mirrors in adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Hom, Craig L.; Dean, Peter D.; Winzer, Stephen R.

    2000-06-01

    Adaptive optics correct light wavefront distortion caused by atmospheric turbulence or internal heating of optical components. This distortion often limits performance in ground-based astronomy, space-based earth observation and high energy laser applications. The heart of the adaptive optics system is the deformable mirror. In this study, an electromechanical model of a deformable mirror was developed as a design tool. The model consisted of a continuous, mirrored face sheet driven with multilayered, electrostrictive actuators. A fully coupled constitutive law simulated the nonlinear, electromechanical behavior of the actuators, while finite element computations determined the mirror's mechanical stiffness observed by the array. Static analysis of the mirror/actuator system related different electrical inputs to the array with the deformation of the mirrored surface. The model also examined the nonlinear influence of internal stresses on the active array's electromechanical performance and quantified crosstalk between neighboring elements. The numerical predictions of the static version of the model agreed well with experimental measurements made on an actual mirror system. The model was also used to simulate the systems level performance of a deformable mirror correcting a thermally bloomed laser beam. The nonlinear analysis determined the commanded actuator voltages required for the phase compensation and the resulting wavefront error.

  18. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    SciTech Connect

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
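The layered idea can be sketched as a two-level search: score a handful of cluster centroids first, then evaluate only the mixture components under the most promising cluster(s). The spherical unit-variance Gaussians (for which nearest mean means highest likelihood) and the quadrant-based clustering below are illustrative; the paper builds its tree with Runnalls' mixture reduction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer tree over mixture components: a coarse top layer of cluster
# centroids, with the full components hanging under them.
means = rng.normal(0, 10, size=(64, 2))             # 64 component means
labels = (means[:, 0] > 0).astype(int) * 2 + (means[:, 1] > 0).astype(int)
centroids = np.array([means[labels == c].mean(axis=0) for c in range(4)])

def top_component(x, n_best=1):
    """Return a best-component index, evaluating only promising clusters."""
    d_clust = ((centroids - x) ** 2).sum(axis=1)
    keep = np.argsort(d_clust)[:n_best]             # descend into best cluster(s)
    cand = np.where(np.isin(labels, keep))[0]
    d = ((means[cand] - x) ** 2).sum(axis=1)
    return cand[np.argmin(d)], len(cand)            # index + number of evaluations

x = np.array([7.0, 8.0])
best, n_eval = top_component(x)
# far fewer component evaluations than the 64 needed by a full pass
```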

  20. Consistent depth video segmentation using adaptive surface models.

    PubMed

    Husain, Farzad; Dellen, Babette; Torras, Carme

    2015-02-01

    We propose a new approach for the segmentation of 3-D point clouds into geometric surfaces using adaptive surface models. Starting from an initial configuration, the algorithm converges to a stable segmentation through a new iterative split-and-merge procedure, which includes an adaptive mechanism for the creation and removal of segments. This allows the segmentation to adjust to changing input data over the course of the video, leading to stable, temporally coherent, and traceable segments. We tested the method on a large variety of data acquired with different range imaging devices, including a structured-light sensor and a time-of-flight camera, and successfully segmented the videos into surface segments. We further demonstrated the feasibility of the approach using quantitative evaluations based on ground-truth data.

  1. Extending the radial diffusion model of Falthammar to non-dipole background field

    SciTech Connect

    Cunningham, Gregory Scott

    2015-05-26

    A model for radial diffusion caused by electromagnetic disturbances was published by Falthammar (1965) using a two-parameter model of the disturbance perturbing a background dipole magnetic field. Schulz and Lanzerotti (1974) extended this model by recognizing the two-parameter perturbation as the leading (non-dipole) terms of the Mead-Williams magnetic field model. They emphasized that the magnetic perturbation in such a model induces an electric field that can be calculated from the motion of field lines on which the particles are ‘frozen’. Roederer and Zhang (2014) describe how the field lines on which the particles are frozen can be calculated by tracing the unperturbed field lines from the minimum-B location to the ionospheric footpoint, and then tracing the perturbed field (which shares the same ionospheric footpoint due to the frozen-in condition) from the ionospheric footpoint back to a perturbed minimum-B location. The instantaneous change in Roederer L*, dL*/dt, can then be computed as the product (dL*/dphi)*(dphi/dt). dL*/dphi is linearly dependent on the perturbation parameters (to first order) and is obtained by computing the drift across L*-labeled perturbed field lines, while dphi/dt is related to the bounce-averaged gradient-curvature drift velocity. The advantage of assuming a dipole background magnetic field, as in these previous studies, is that the instantaneous dL*/dt can be computed analytically (with some approximations), as can the DL*L* that results from integrating dL*/dt over time and computing the expected value of (dL*)^2. The approach can also be applied to complex background magnetic field models like T89 or TS04, on top of which the small perturbations are added, but an analytical solution is not possible and so a numerical solution must be implemented. In this talk, I discuss our progress in implementing a numerical solution to the calculation of DL*L* using arbitrary background field models with simple electromagnetic

  2. Mouse model of Sanfilippo syndrome type B: relation of phenotypic features to background strain.

    PubMed

    Gografe, Sylvia I; Garbuzova-Davis, Svitlana; Willing, Alison E; Haas, Ken; Chamizo, Wilfredo; Sanberg, Paul R

    2003-12-01

    Sanfilippo syndrome type B or mucopolysaccharidosis type III B (MPS IIIB) is a lysosomal storage disorder that is inherited in an autosomal recessive manner. It is characterized by systemic heparan sulfate accumulation in lysosomes due to deficiency of the enzyme alpha-N-acetylglucosaminidase (Naglu). Devastating clinical abnormalities with severe central nervous system involvement and somatic disease lead to premature death. A mouse model of Sanfilippo syndrome type B was created by targeted disruption of the gene encoding Naglu, providing a powerful tool for understanding pathogenesis and developing novel therapeutic strategies. However, the JAX GEMM strain B6.129S6-Naglu(tm1Efn) mouse, although showing biochemical similarities to humans with Sanfilippo syndrome, exhibits aging and behavioral differences. We observed idiosyncrasies, such as skeletal dysmorphism, hydrocephalus, ocular abnormalities, organomegaly, growth retardation, and anomalies of the integument, in our breeding colony of Naglu mutant mice and determined that several of them were at least partially related to the background strain C57BL/6. These background strain abnormalities, therefore, potentially mimic or overlap signs of the induced syndrome in our mice. Our observations may prove useful in studies of Naglu mutant mice. The necessity for distinguishing background anomalies from signs of the modeled disease is apparent. PMID:14727810

  3. A Comparison of Three Programming Models for Adaptive Applications

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from the poor spatial locality of physically distributed shared data on a large number of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.

  4. Prediction of Conductivity by Adaptive Neuro-Fuzzy Model

    PubMed Central

    Akbarzadeh, S.; Arof, A. K.; Ramesh, S.; Khanmirzaei, M. H.; Nor, R. M.

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference from basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The validation of the model was tested on four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that modeling with an adaptive neuro-fuzzy inference system is powerful enough for the prediction of conductivity. PMID:24658582

  5. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network scale-free non-hierarchical networks with well defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902
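The adaptive wiring rule, connecting dynamical elements with similar internal states, can be sketched minimally; the node count, state distribution, and similarity threshold below are illustrative, and the node dynamics of the actual model are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Adaptive wiring sketch: nodes carry an internal state, and links are created
# only between nodes whose states are similar (the model's connection rule;
# the internal dynamics that would update these states are omitted here).
n, eps = 30, 0.15
state = rng.random(n)                     # internal state of each "region"

edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if abs(state[i] - state[j]) < eps]
# every created link joins similar-state nodes
max_gap = max(abs(state[i] - state[j]) for i, j in edges)
```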

  6. Prequential Analysis of Complex Data with Adaptive Model Reselection†

    PubMed Central

    Clarke, Jennifer; Clarke, Bertrand

    2010-01-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e., the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance tradeoff in statistical modeling. PMID:20617104
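    An illustrative sketch of the convex-combination idea (the two moving averages below merely stand in for the paper's model averages, and the weight-update rule is an assumption, not the ACAP algorithm):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    y = np.cumsum(rng.normal(size=200))   # synthetic time series

    def avg_short(t):   # stand-in "model average" 1
        return y[max(0, t - 3):t].mean()

    def avg_long(t):    # stand-in "model average" 2
        return y[max(0, t - 10):t].mean()

    w, errs = 0.5, []
    for t in range(10, len(y)):
        pred = w * avg_short(t) + (1 - w) * avg_long(t)  # convex combination
        errs.append((pred - y[t]) ** 2)
        # shift the mixing weight toward whichever average predicted better
        e1 = (avg_short(t) - y[t]) ** 2
        e2 = (avg_long(t) - y[t]) ** 2
        w = min(1.0, max(0.0, w + 0.05 * (1 if e1 < e2 else -1)))
    ```

    In the actual ACAP scheme the constituent models are also re-chosen adaptively at each step, which this sketch omits.
    
    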

  7. A heuristic model on the role of plasticity in adaptive evolution: plasticity increases adaptation, population viability and genetic variation.

    PubMed

    Gomez-Mestre, Ivan; Jovani, Roger

    2013-11-22

    An ongoing new synthesis in evolutionary theory is expanding our view of the sources of heritable variation beyond point mutations of fixed phenotypic effects to include environmentally sensitive changes in gene regulation. This expansion of the paradigm is necessary given ample evidence for a heritable ability to alter gene expression in response to environmental cues. In consequence, single genotypes are often capable of adaptively expressing different phenotypes in different environments, i.e. are adaptively plastic. We present an individual-based heuristic model to compare the adaptive dynamics of populations composed of plastic or non-plastic genotypes under a wide range of scenarios where we modify environmental variation, mutation rate and costs of plasticity. The model shows that adaptive plasticity contributes to the maintenance of genetic variation within populations, reduces bottlenecks when facing rapid environmental changes and confers an overall faster rate of adaptation. In fluctuating environments, plasticity is favoured by selection and maintained in the population. However, if the environment stabilizes and costs of plasticity are high, plasticity is reduced by selection, leading to genetic assimilation, which could result in species diversification. More broadly, our model shows that adaptive plasticity is a common consequence of selection under environmental heterogeneity, and hence a potentially common phenomenon in nature. Thus, taking adaptive plasticity into account substantially extends our view of adaptive evolution.

  8. Oxidative DNA damage background estimated by a system model of base excision repair

    SciTech Connect

    Sokhansanj, B A; Wilson, III, D M

    2004-05-13

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.

  9. A background error covariance model of significant wave height employing Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Guo, Yanyou; Hou, Yijun; Zhang, Chunmei; Yang, Jie

    2012-09-01

    The quality of background error statistics is one of the key components for successful assimilation of observations in a numerical model. The background error covariance (BEC) of ocean waves is generally estimated under an assumption that it is stationary over a period of time and uniform over a domain. However, error statistics are in fact functions of the physical processes governing the meteorological situation and vary with the wave condition. In this paper, we simulated the BEC of the significant wave height (SWH) employing Monte Carlo methods. An interesting result is that the BEC varies consistently with the mean wave direction (MWD). In the model domain, the BEC of the SWH decreases significantly when the MWD changes abruptly. A new BEC model of the SWH based on the correlation between the BEC and MWD was then developed. A case study of regional data assimilation was performed, where the SWH observations of buoy 22001 were used to assess the SWH hindcast. The results show that the new BEC model benefits wave prediction and allows reasonable approximations of anisotropy and inhomogeneous errors.

  10. The Role of Scale and Model Bias in ADAPT's Photospheric Estimation

    SciTech Connect

    Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas; Henney, Carl

    2015-05-20

    The Air Force Assimilative Photospheric flux Transport (ADAPT) model is a magnetic flux propagation model based on the Worden-Harvey (WH) model. ADAPT is used to provide a global map of the photospheric magnetic flux. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation tied with Kalman filtering, is used in calculating the ADAPT models.

  11. Systematic spectral analysis of GX 339-4: Influence of Galactic background and reflection models

    NASA Astrophysics Data System (ADS)

    Clavel, M.; Rodriguez, J.; Corbel, S.; Coriat, M.

    2016-05-01

    Black hole X-ray binaries display large outbursts, during which their properties are strongly variable. We develop a systematic spectral analysis of the 3-40 keV {RXTE}/PCA data in order to study the evolution of these systems and apply it to GX 339-4. Using the low count rate observations, we provide a precise model of the Galactic background at GX 339-4's location and discuss its possible impact on the source spectral parameters. At higher fluxes, the use of a Gaussian line to model the reflection component can lead to the detection of a high-temperature disk, in particular in the high-hard state. We demonstrate that this component is an artifact arising from an incomplete modeling of the reflection spectrum.

  12. MT Response of a 1D Earth Model Employing the Born Approximation with Variable Background Conductivities

    NASA Astrophysics Data System (ADS)

    Tejero, A.; Chavez, R. E.

    2001-12-01

    The Born approximation method has been commonly employed to study the electromagnetic field response. Other interpretative techniques based upon the Born approximation have also been employed, such as the extended Born approximation (EBA), which uses the total field instead of the primary field; the quasi-linear approximation (QLA) method is in turn an extension of the EBA. In the present work, we propose an alternative technique that employs the Born approximation with variable background conductivities (BAVBC). The Green function is represented as a Born perturbation of zero order, such that the reference-medium conductivity is a parameter selected according to the working frequency. A similar procedure has been reported for stratified 1D-earth seismic models. The BAVBC technique has been applied to computational models with reasonable results, as compared with available computational packages on the market. This method permits variations in the conductivity contrast of up to 80%, which provides solutions with 30% error with respect to the analytical solution.
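    Schematically, the idea can be written in the standard first-order Born form (the notation here is assumed for illustration, not taken from the paper): the field scattered by a conductivity perturbation about a chosen background is

    ```latex
    E(\mathbf{r}) \;\approx\; E_b(\mathbf{r})
      + \int_V G_b(\mathbf{r},\mathbf{r}')\,
        \Delta\sigma(\mathbf{r}')\, E_b(\mathbf{r}')\, d\mathbf{r}',
    \qquad \Delta\sigma = \sigma - \sigma_b ,
    ```

    where $E_b$ and $G_b$ are the field and Green function of the background medium. In BAVBC the background conductivity $\sigma_b$ (and hence $G_b$ and $E_b$) is re-selected per working frequency rather than held fixed, which keeps $\Delta\sigma$ small enough for the single-scattering approximation to remain accurate.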

  13. A Model of the Soft X-ray Background as a Blast Wave Viewed from Inside

    NASA Technical Reports Server (NTRS)

    Edgar, R. J.; Cox, D. P.

    1984-01-01

    The suggestion that the soft X-ray background arises in part from the Sun being inside a large supernova blast wave was examined by models of spherical blast waves. The models can produce quantitative fits to both surface brightnesses and energy band ratios when t = 10^5, E_o = 5 × 10^50 ergs, and n ≈ 0.004 cm^-3. The models are generalized by varying the relative importance of factors such as thermal conduction, Coulomb heating of electrons, and external pressure; by allowing the explosions to occur in preexisting cavities with steep density gradients; and by examination of the effects of large obstructions or other anisotropies in the ambient medium.

  14. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.

  15. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high fidelity geometry based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., which facilitated rapid finite element analysis, sizing study and weight optimization. The high quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications for the structural design of a conventional aircraft and a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for design and development of unconventional aircraft, the techniques for generation of accurate geometry and internal sub-structure and the automated interface with the high fidelity analysis codes could also be applied towards the design of vehicles for the NASA Exploration and Space Science Mission projects.

  16. Model of adaptive temporal development of structured finite systems

    NASA Astrophysics Data System (ADS)

    Patera, Jiri; Shaw, Gordon L.; Slansky, Richard; Leng, Xiaodan

    1989-07-01

    The weight systems of level-zero representations of affine Kac-Moody algebras provide an appropriate kinematical framework for studying structured finite systems with adaptive temporal development. Much of the structure is determined by Lie algebra theory, so it is possible to restrict greatly the connection space and analytic results are possible. The time development of these systems often evolves to cyclic temporal-spatial patterns, depending on the definition of the dynamics. The purpose of this paper is to set up the mathematical formalism for this ``memory in Lie algebras'' class of models. An illustration is used to show the kinds of complex behavior that occur in simple cases.

  17. Adaptive mesh refinement techniques for 3-D skin electrode modeling.

    PubMed

    Sawicki, Bartosz; Okoniewski, Michal

    2010-03-01

    In this paper, we develop a 3-D adaptive mesh refinement technique. The algorithm is constructed with an electric impedance tomography forward problem and the finite-element method in mind, but is applicable to a much wider class of problems. We use the method to evaluate the distribution of currents injected into a model of a human body through skin contact electrodes. We demonstrate that the technique leads to a significantly improved solution, particularly near the electrodes. We discuss error estimation, efficiency, and quality of the refinement algorithm and methods that allow for preserving mesh attributes in the refinement process.

  18. Model-free adaptive control of advanced power plants

    SciTech Connect

    Cheng, George Shu-Xing; Mulkey, Steven L.; Wang, Qiang

    2015-08-18

    A novel 3-Input-3-Output (3×3) Model-Free Adaptive (MFA) controller with a set of artificial neural networks as part of the controller is introduced. A 3×3 MFA control system using the inventive 3×3 MFA controller is described to control key process variables including Power, Steam Throttle Pressure, and Steam Temperature of boiler-turbine-generator (BTG) units in conventional and advanced power plants. Those advanced power plants may comprise Once-Through Supercritical (OTSC) Boilers, Circulating Fluidized-Bed (CFB) Boilers, and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  19. Accurate Modeling of the Terrestrial Gamma-Ray Background for Homeland Security Applications

    SciTech Connect

    Sandness, Gerald A.; Schweppe, John E.; Hensley, Walter K.; Borgardt, James D.; Mitchell, Allison L.

    2009-10-24

    The Pacific Northwest National Laboratory has developed computer models to simulate the use of radiation portal monitors to screen vehicles and cargo for the presence of illicit radioactive material. The gamma radiation emitted by the vehicles or cargo containers must often be measured in the presence of a relatively large gamma-ray background mainly due to the presence of potassium, uranium, and thorium (and progeny isotopes) in the soil and surrounding building materials. This large background is often a significant limit to the detection sensitivity for items of interest and must be modeled accurately for analyzing homeland security situations. Calculations of the expected gamma-ray emission from a disk of soil and asphalt were made using the Monte Carlo transport code MCNP and were compared to measurements made at a seaport with a high-purity germanium detector. Analysis revealed that the energy spectrum of the measured background could not be reproduced unless the model included gamma rays coming from the ground out to distances of at least 300 m. The contribution from beyond about 50 m was primarily due to gamma rays that scattered in the air before entering the detectors rather than passing directly from the ground to the detectors. These skyshine gamma rays contribute tens of percent to the total gamma-ray spectrum, primarily at energies below a few hundred keV. The techniques that were developed to efficiently calculate the contributions from a large soil disk and a large air volume in a Monte Carlo simulation are described and the implications of skyshine in portal monitoring applications are discussed.

  20. Quadratic adaptive algorithm for solving cardiac action potential models.

    PubMed

    Chen, Min-Hung; Chen, Po-Yuan; Luo, Ching-Hsing

    2016-10-01

    An adaptive integration method is proposed for computing cardiac action potential models accurately and efficiently. Time steps are adaptively chosen by solving a quadratic formula involving the first and second derivatives of the membrane action potential. To improve the numerical accuracy, we devise an extremum-locator (el) function to predict the local extremum when approaching the peak amplitude of the action potential. In addition, the time step restriction (tsr) technique is designed to limit the increase in time steps, and thus prevent the membrane potential from changing abruptly. The performance of the proposed method is tested using the Luo-Rudy phase 1 (LR1), dynamic (LR2), and human O'Hara-Rudy dynamic (ORd) ventricular action potential models, and the Courtemanche atrial model incorporating a Markov sodium channel model. Numerical experiments demonstrate that the action potential generated using the proposed method is more accurate than that using the traditional Hybrid method, especially near the peak region. The traditional Hybrid method may choose large time steps near the peak region, and sometimes causes the action potential to become distorted. In contrast, the proposed method chooses very fine time steps in the peak region but large time steps in the smooth region, and the profiles are smoother and closer to the reference solution. In the test on the stiff Markov ionic channel model, the Hybrid method blows up if the allowable time step is set greater than 0.1 ms. In contrast, our method can adjust the time step size automatically and is stable. Overall, the proposed method is more accurate than and as efficient as the traditional Hybrid method, especially for the human ORd model. The proposed method shows improvement for action potentials with a non-smooth morphology, and further investigation is needed to determine whether the method is helpful during propagation of the action potential. PMID:27639239
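    A hedged sketch of a quadratic step-size rule in the spirit described above (the paper's exact formula is not reproduced; here we pick the largest dt satisfying |V'·dt + ½·V''·dt²| ≤ tol, i.e. the positive root of ½|V''|·dt² + |V'|·dt − tol = 0, capped in the manner of the tsr restriction):

    ```python
    import math

    def quadratic_step(dV, d2V, tol=0.1, dt_max=1.0):
        """Step size from first/second derivatives of the membrane potential."""
        a, b = 0.5 * abs(d2V), abs(dV)
        if a < 1e-12:                      # nearly linear segment
            dt = tol / b if b > 1e-12 else dt_max
        else:                              # positive root of a*dt^2 + b*dt - tol = 0
            dt = (-b + math.sqrt(b * b + 4.0 * a * tol)) / (2.0 * a)
        return min(dt, dt_max)             # tsr-like cap on the step size
    ```

    A steep upstroke (large |V'| or |V''|) thus yields very fine steps near the peak, while the smooth plateau yields steps near dt_max, matching the behavior reported above.
    
    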

  1. Modelling the flux distribution function of the extragalactic gamma-ray background from dark matter annihilation

    SciTech Connect

    Feyereisen, Michael R.; Ando, Shin'ichiro; Lee, Samuel K. E-mail: s.ando@uva.nl

    2015-09-01

    The one-point function (i.e., the isotropic flux distribution) is a complementary method to (anisotropic) two-point correlations in searches for a gamma-ray dark matter annihilation signature. Using analytical models of structure formation and dark matter halo properties, we compute the gamma-ray flux distribution due to annihilations in extragalactic dark matter halos, as it would be observed by the Fermi Large Area Telescope. Combining the central limit theorem and Monte Carlo sampling, we show that the flux distribution takes the form of a narrow Gaussian of 'diffuse' light, with an 'unresolved point source' power-law tail as a result of bright halos. We argue that this background due to dark matter constitutes an irreducible and significant background component for point-source annihilation searches with galaxy clusters and dwarf spheroidal galaxies, modifying the predicted signal-to-noise ratio. A study of astrophysical backgrounds to this signal reveals that the shape of the total gamma-ray flux distribution is very sensitive to the contribution of a dark matter component, allowing us to forecast promising one-point upper limits on the annihilation cross-section. We show that by using the flux distribution at only one energy bin, one can probe the canonical cross-section required for explaining the relic density, for dark matter of masses around tens of GeV.

  2. Knowledge fusion: Time series modeling followed by pattern recognition applied to unusual sections of background data

    SciTech Connect

    Burr, T.; Doak, J.; Howell, J.A.; Martinez, D.; Strittmatter, R.

    1996-03-01

    This report describes work performed during FY 95 for the Knowledge Fusion Project, which was funded by the Department of Energy, Office of Nonproliferation and National Security. The project team selected satellite sensor data as the main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features that make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series in a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report describes the implementation and application of this two-step process for separating events from unusual background. As a fortunate by-product of this activity, it is possible to gain a better understanding of the natural background.
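    The two-step process can be illustrated on synthetic data (everything below is a toy stand-in: a moving-average forecaster, a 3-sigma alarm criterion, and a crude clustering check in place of the report's pattern-recognition stage):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    x = rng.normal(0.0, 1.0, 300)
    x[150:155] += 8.0                         # injected "event" of interest

    # Step 1: forecast and flag suspicious points with an alarm criterion.
    alarms = []
    for t in range(20, len(x)):
        forecast = x[t - 20:t].mean()         # simple forecaster
        resid = x[t] - forecast
        if abs(resid) > 3.0 * x[t - 20:t].std():
            alarms.append(t)

    # Step 2: vet the flagged sections; here, keep only alarms that cluster
    # (a crude stand-in for pattern recognition on suspicious sections).
    events = [t for t in alarms
              if any(abs(t - s) <= 2 for s in alarms if s != t)]
    ```

    Step 1 trades false alarms for sensitivity; step 2 then prunes isolated false alarms while retaining clustered, event-like sections.
    
    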

  3. Face detection in complex background based on Adaboost algorithm and YCbCr skin color model

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Face detection is a fundamental and important research theme in the topics of Pattern Recognition and Computer Vision, and remarkable results have been achieved. Among the existing methods, statistics-based methods hold a dominant position. In this paper, the Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin-color-model detection with Adaboost is investigated: the skin detection is used to validate the detection results obtained by the Adaboost algorithm, overcoming Adaboost's false-detection problem. Experimental results show that nearly all non-face areas are removed, improving the detection rate.
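    A minimal sketch of a YCbCr skin test that could serve as such a validation stage (the conversion is the standard BT.601 full-range formula; the Cb/Cr bounds are commonly cited values, not necessarily those used by the authors):

    ```python
    def is_skin(r, g, b):
        """Classify an RGB pixel as skin using YCbCr chrominance bounds."""
        # BT.601 full-range RGB -> YCbCr conversion
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
        cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
        # widely used skin-chrominance box (illustrative thresholds)
        return 77 <= cb <= 127 and 133 <= cr <= 173
    ```

    A face candidate reported by Adaboost would then be accepted only if a sufficient fraction of its pixels pass this test.
    
    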

  4. Model observer design for detecting multiple abnormalities in anatomical background images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Park, Subok

    2016-03-01

    As psychophysical studies are resource-intensive to conduct, model observers are commonly used to assess and optimize medical imaging quality. Existing model observers were typically designed to detect at most one signal. However, in clinical practice, there may be multiple abnormalities in a single image set (e.g., multifocal and multicentric breast cancers (MMBC)), which can impact treatment planning. Prevalence of signals can be different across anatomical regions, and human observers do not know the number or location of signals a priori. As new imaging techniques have the potential to improve multiple-signal detection (e.g., digital breast tomosynthesis may be more effective for diagnosis of MMBC than planar mammography), image quality assessment approaches addressing such tasks are needed. In this study, we present a model-observer mechanism to detect multiple signals in the same image dataset. To handle the high dimensionality of images, a novel implementation of partial least squares (PLS) was developed to estimate different sets of efficient channels directly from the images. Without any prior knowledge of the background or the signals, the PLS channels capture interactions between signals and the background which provide discriminant image information. Corresponding linear decision templates are employed to generate both image-level and location-specific scores on the presence of signals. Our preliminary results show that the model observer using PLS channels, compared to our first attempts with Laguerre-Gauss channels, can achieve high performance with a reasonably small number of channels, and the optimal design of the model observer may vary as the tasks of clinical interest change.

  5. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems including (but not limited to): fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring to bear AMR capabilities on reactive transport problems from the pore scale to the flood plain scale.
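    The core AMR idea, refining only where the solution varies sharply, can be sketched in 1-D (the sigmoid concentration front and gradient threshold are illustrative choices):

    ```python
    import numpy as np

    # Coarse grid with a sharp concentration front at x = 0.5.
    x = np.linspace(0.0, 1.0, 33)
    c = 1.0 / (1.0 + np.exp(-80.0 * (x - 0.5)))

    # Flag cells whose local gradient exceeds a refinement criterion.
    grad = np.abs(np.diff(c)) / np.diff(x)
    refine = grad > 5.0

    # Subdivide only the flagged cells (one extra level of resolution).
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if refine[i]:
            new_x.append(0.5 * (x[i] + x[i + 1]))
        new_x.append(x[i + 1])
    ```

    Only cells near the front are split, so the steep gradient (and, in a reactive transport setting, the reaction rates it drives) is resolved without refining the whole domain.
    
    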

  6. FPGA implementation for real-time background subtraction based on Horprasert model.

    PubMed

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments and offers low degradation (introduced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.
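    A minimal per-pixel sketch of a Horprasert-style decision, including the shadow class mentioned above (the thresholds are illustrative constants; the actual model normalizes by per-pixel statistics and is more involved):

    ```python
    import numpy as np

    def classify(pixel, bg_mean, cd_thresh=20.0, shadow_alpha=0.6):
        """Horprasert-style pixel classification against a background mean."""
        I = np.asarray(pixel, dtype=float)
        E = np.asarray(bg_mean, dtype=float)
        alpha = I.dot(E) / E.dot(E)          # brightness distortion
        cd = np.linalg.norm(I - alpha * E)   # chromaticity distortion
        if cd > cd_thresh:
            return "foreground"              # color differs from background
        if alpha < shadow_alpha:
            return "shadow"                  # same chromaticity, but darker
        return "background"
    ```

    Separating brightness from chromaticity distortion is what lets the model tell cast shadows (darker, same color) apart from true moving objects.
    
    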

  7. CBSD Version II component models of the IR celestial background. Technical report

    SciTech Connect

    Kennealy, J.P.; Glaudell, G.A.

    1990-12-07

    CBSD Version II addresses the development of algorithms and software which implement realistic models of all the primary celestial background phenomenologies, including solar system, galactic, and extra-galactic features. During 1990, the CBSD program developed and refined IR scene generation models for the zodiacal emission, thermal emission from asteroids and planets, and the galactic point source background. Chapters in this report are devoted to each of those areas. Ongoing extensions to the point source module for extended source descriptions of nebulae and HII regions are briefly discussed. Treatment of small galaxies will also be a natural extension of the current CBSD point source module. Although no CBSD module yet exists for interstellar IR cirrus, MRC has been working closely with the Royal Aerospace Establishment in England to achieve a data-base understanding of cirrus fractal characteristics. The CBSD modules discussed in Chapters 2, 3, and 4 are all now operational and have been employed to generate a significant variety of scenes. CBSD scene generation capability has been well accepted by both the IR astronomy community and the DOD user community and directly supports the SDIO SSGM program.

  8. FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    PubMed Central

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J.; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments and offers low degradation (introduced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W. PMID:22368487

  9. Adaptation in tunably rugged fitness landscapes: the rough Mount Fuji model.

    PubMed

    Neidhart, Johannes; Szendro, Ivan G; Krug, Joachim

    2014-10-01

    Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage.
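    A minimal RMF-type landscape on binary sequences can be built directly from its definition: an additive slope toward a reference genotype plus i.i.d. random roughness (the sequence length L, slope c, and Gaussian roughness scale below are illustrative choices; c = 0 recovers an uncorrelated, MLM-like landscape):

    ```python
    import itertools
    import random

    random.seed(3)
    L, c = 6, 1.0
    peak = (1,) * L

    def fitness(g, _cache={}):
        """RMF fitness: -c * HammingDistance(g, peak) + Gaussian roughness."""
        if g not in _cache:
            d = sum(a != b for a, b in zip(g, peak))
            _cache[g] = -c * d + random.gauss(0.0, 0.5)
        return _cache[g]

    genotypes = list(itertools.product((0, 1), repeat=L))

    def neighbours(g):
        """All one-mutant neighbours of genotype g."""
        return [g[:i] + (1 - g[i],) + g[i + 1:] for i in range(L)]

    # Count local fitness maxima (fitter than every one-mutant neighbour).
    n_max = sum(all(fitness(g) > fitness(n) for n in neighbours(g))
                for g in genotypes)
    ```

    Increasing the roughness scale relative to c raises the expected number of local maxima, which is the tunable-ruggedness behavior the article analyzes in closed form.
    
    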

  10. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507

  11. Dynamic modeling and adaptive control for space stations

    NASA Technical Reports Server (NTRS)

    Ih, C. H. C.; Wang, S. J.

    1985-01-01

    Of all large space structural systems, space stations present a unique challenge and requirement for advanced control technology. Their operations require control system stability over an extremely broad range of parameter changes and high levels of disturbance. During shuttle docking the system mass may suddenly increase by more than 100%, and during station assembly the mass may vary even more drastically. These changes, coupled with the inherent dynamic model uncertainties associated with large space structural systems, require highly sophisticated control systems that can grow as the stations evolve and cope with the uncertainties and time-varying elements to maintain the stability and pointing of the space stations. The aspects of space station operational properties are first examined, including configurations, dynamic models, shuttle docking contact dynamics, solar panel interaction, and load reduction, to yield a set of system models and conditions. A model reference adaptive control algorithm, along with an inner-loop plant augmentation design, for controlling the space stations under the severe operational conditions of shuttle docking, excessive model parameter errors, and model truncation is then investigated. The instability problem caused by the zero-frequency rigid-body modes and a proposed solution using plant augmentation are addressed. Two sets of sufficient conditions which guarantee the globally asymptotic stability of the space station systems are obtained.

  12. Adaptive finite difference for seismic wavefield modelling in acoustic media.

    PubMed

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-01-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we present a novel scheme for implementing finite difference via a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is able to maximise the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang's optimised finite difference scheme. PMID:27491333
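
    The coefficient-optimisation idea can be sketched as a plain least-squares fit of central-difference coefficients over a wavenumber band; the paper's wavelet mapping and angle weighting are replaced here by this simpler criterion, so the code below is only a hypothetical illustration (grid spacing h = 1 assumed).

```python
# Hypothetical sketch: choose central finite-difference coefficients for the
# first derivative by least-squares matching of the operator's spectral
# response, 2 * sum_m c_m * sin(m*k), to the exact response k on a band.
import numpy as np

def taylor_coeffs():
    """Standard 4th-order central coefficients for f' (M = 2)."""
    return np.array([2 / 3, -1 / 12])

def optimised_coeffs(M, kmax, n=200):
    """Least-squares fit of c_1..c_M over the wavenumber band (0, kmax]."""
    k = np.linspace(kmax / n, kmax, n)
    A = 2 * np.sin(np.outer(k, np.arange(1, M + 1)))
    c, *_ = np.linalg.lstsq(A, k, rcond=None)
    return c

def max_dispersion_error(c, kmax, n=200):
    """Worst-case spectral-response error of the operator on the band."""
    k = np.linspace(kmax / n, kmax, n)
    approx = 2 * np.sin(np.outer(k, np.arange(1, len(c) + 1))) @ c
    return float(np.max(np.abs(approx - k)))

kmax = 2.0   # push accuracy toward high wavenumbers (coarse grids)
err_taylor = max_dispersion_error(taylor_coeffs(), kmax)
err_opt = max_dispersion_error(optimised_coeffs(2, kmax), kmax)
print(err_opt < err_taylor)   # band-optimised coefficients win at high k
```

The same trade-off drives the paper's scheme: Taylor coefficients are optimal near zero wavenumber, while fitting over the band used by the source wavelet reduces dispersion where it matters.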

  13. Modelling interactions between mitigation, adaptation and sustainable development

    NASA Astrophysics Data System (ADS)

    Reusser, D. E.; Siabatto, F. A. P.; Garcia Cantu Ros, A.; Pape, C.; Lissner, T.; Kropp, J. P.

    2012-04-01

    Managing the interdependence of climate mitigation, adaptation and sustainable development requires a good understanding of the dominant socio-ecological processes that have determined the pathways in the past. Key variables include water and food availability, which depend on climate and overall ecosystem services, as well as energy supply and social, political and economic conditions. We present our initial steps to build a system dynamics model of nations that represents a minimal set of relevant variables of socio-ecological development. The ultimate goal of the modelling exercise is to derive possible future scenarios and test them for their compatibility with sustainability boundaries. Where the dynamics cross sustainability boundaries, intervention points in the dynamics can be sought.

  14. Direct model reference adaptive control of robotic arms

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-01-01

    The results of controlling a PUMA 560 robotic manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Direct Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum-jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in simulation. Satisfactory performance was achieved both in simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate its applicability to simulated dynamics of the shuttle RMS.
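
    As a hedged illustration of model-reference adaptive control in general (not the CGT-based DMRAC of the paper), the sketch below adapts a feedforward and a feedback gain on a first-order plant with the standard Lyapunov-rule updates; all plant and adaptation parameters are invented for the example.

```python
# Toy model-reference adaptive control of a first-order plant.
# Plant: dy/dt = -a*y + b*u; reference model: dym/dt = -am*ym + am*r.
# Control law u = th1*r - th2*y, with Lyapunov-rule gain adaptation.
import math

def simulate(a=1.0, b=2.0, am=4.0, gamma=1.0, dt=1e-3, T=40.0):
    y = ym = th1 = th2 = 0.0
    errs = []
    n = int(T / dt)
    for i in range(n):
        t = i * dt
        r = 1.0 if math.sin(0.5 * t) >= 0 else -1.0   # square-wave reference
        u = th1 * r - th2 * y                          # adaptive control law
        e = y - ym                                     # model-following error
        th1 -= gamma * e * r * dt                      # Lyapunov-rule updates
        th2 += gamma * e * y * dt
        y += (-a * y + b * u) * dt                     # Euler integration
        ym += (-am * ym + am * r) * dt
        errs.append(abs(e))
    q = n // 4
    return sum(errs[:q]) / q, sum(errs[-q:]) / q       # early vs late error

early, late = simulate()
print(late < early)   # tracking error shrinks as the gains adapt
```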

  15. Adaptive finite difference for seismic wavefield modelling in acoustic media

    NASA Astrophysics Data System (ADS)

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-08-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we present a novel scheme for implementing finite difference via a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is able to maximise the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang’s optimised finite difference scheme.

  16. Adaptive finite difference for seismic wavefield modelling in acoustic media

    PubMed Central

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-01-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we present a novel scheme for implementing finite difference via a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is able to maximise the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang’s optimised finite difference scheme. PMID:27491333

  17. Direct model reference adaptive control of robotic arms

    NASA Astrophysics Data System (ADS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-12-01

    The results of controlling a PUMA 560 robotic manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Direct Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum-jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in simulation. Satisfactory performance was achieved both in simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate its applicability to simulated dynamics of the shuttle RMS.

  19. Carving and adaptive drainage enforcement of grid digital elevation models

    NASA Astrophysics Data System (ADS)

    Soille, Pierre; Vogt, Jürgen; Colombo, Roberto

    2003-12-01

    An effective and widely used method for removing spurious pits in digital elevation models consists of filling them until they overflow. However, this method sometimes creates large flat regions, which in turn pose a problem for the determination of accurate flow directions. In this study, we propose to suppress each pit by creating a descending path from it to the nearest point having a lower elevation value. This is achieved by carving, i.e., lowering, the terrain elevations along the detected path. Carving paths are identified through a flooding simulation starting from the river outlets. The proposed approach allows for adaptive drainage enforcement, whereby river networks coming from other data sources are imposed on the digital elevation model only in places where the automatic river network extraction deviates substantially from the known networks. An improvement to methods for routing flow over flat regions is also introduced. Detailed results are presented over test areas of the Danube basin.
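
    The carving-versus-filling contrast is easiest to see in one dimension. The sketch below fills pits to their spill level and, alternatively, carves a strictly descending path from each pit bottom toward the lower barrier; it is a 1-D toy illustration, not the paper's flooding-simulation algorithm on 2-D grids.

```python
# 1-D illustration of "filling" vs "carving" spurious pits in an elevation
# profile (water drains off both ends of the profile).

def fill_pits(z):
    """Fill each pit to its spill level: min over the two escape directions
    of the highest barrier (1-D water-level formula). Creates flats."""
    n = len(z)
    lmax = [max(z[:i + 1]) for i in range(n)]
    rmax = [max(z[i:]) for i in range(n)]
    return [min(lmax[i], rmax[i]) for i in range(n)]

def carve_pits(z, eps=1e-6):
    """Lower a strictly descending path from each pit bottom toward the
    side with the lower barrier, instead of raising the pit."""
    out = list(z)
    for i in range(1, len(out) - 1):
        if out[i - 1] > out[i] < out[i + 1]:          # pit bottom
            v = out[i]
            go_left = max(out[:i]) <= max(out[i + 1:])
            j, step = (i - 1, -1) if go_left else (i + 1, 1)
            while 0 <= j < len(out) and out[j] >= v:
                v -= eps                               # keep the path descending
                out[j] = v
                j += step
    return out

z = [3, 6, 2, 5, 1, 4, 0]                 # two pits behind barriers
filled, carved = fill_pits(z), carve_pits(z)
print(any(a == b for a, b in zip(filled, filled[1:])))   # True: filling made flats
print(any(a == b for a, b in zip(carved, carved[1:])))   # False: carving did not
```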

  20. The reduced order model problem in distributed parameter systems adaptive identification and control

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.

    1980-01-01

    The research concerning the reduced order model problem in distributed parameter systems is reported. The adaptive control strategy was chosen for investigation in the annular momentum control device. It is noted that, with no observation spillover and no model errors, an indirect adaptive control strategy can be globally stable. Recent publications concerning adaptive control are included.

  1. A new adaptive data transfer library for model coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Liu, Li; Yang, Guangwen; Li, Ruizhe; Wang, Bin

    2016-06-01

    Data transfer means transferring data fields from a sender to a receiver. It is a fundamental and frequently used operation of a coupler. Most state-of-the-art couplers currently use an implementation based on the point-to-point (P2P) communication of the message passing interface (MPI) (referred to as "P2P implementation" hereafter). In this paper, we reveal the drawbacks of the P2P implementation when the parallel decompositions of the sender and the receiver differ, including low communication bandwidth due to small message size, a variable and high number of MPI messages, and network contention. To overcome these drawbacks, we propose a butterfly implementation for data transfer. Although the butterfly implementation outperforms the P2P implementation in many cases, it degrades performance when the sender and the receiver have similar parallel decompositions or when the number of processes used for running the models is small. To ensure data transfer with optimal performance, we design and implement an adaptive data transfer library that combines the advantages of both the butterfly implementation and the P2P implementation. As the adaptive data transfer library automatically uses the best implementation for data transfer, it outperforms the P2P implementation in many cases while never decreasing performance. The adaptive data transfer library is now open to the public and has been imported into the C-Coupler1 coupler to improve the performance of data transfer. We believe that other couplers can also benefit from it.
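
    The butterfly pattern the authors exploit can be illustrated with simulated ranks: each of P = 2^k ranks exchanges its accumulated data with an XOR partner at every stage, so all chunks reach all ranks in log2(P) stages instead of up to P-1 point-to-point messages per rank. This is a toy model in plain Python, not the library's MPI implementation.

```python
# Butterfly (recursive-doubling) exchange simulated with Python sets.

def butterfly_allgather(P):
    assert P & (P - 1) == 0, "P must be a power of two"
    data = [{r} for r in range(P)]   # each rank starts with its own chunk
    stage, stages = 1, 0
    while stage < P:
        new = [set(d) for d in data]
        for r in range(P):
            new[r] |= data[r ^ stage]   # merge data from the XOR partner
        data, stage, stages = new, stage * 2, stages + 1
    return data, stages

data, stages = butterfly_allgather(8)
print(stages)                                 # 3 stages = log2(8)
print(all(d == set(range(8)) for d in data))  # every rank holds all chunks
```

Each stage moves exponentially larger messages, which is exactly why the butterfly wins when the P2P pattern would otherwise send many small messages.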

  2. The adaptive FEM elastic model for medical image registration.

    PubMed

    Zhang, Jingya; Wang, Jiajun; Wang, Xiuying; Feng, Dagan

    2014-01-01

    This paper proposes an adaptive mesh refinement strategy for the finite element method (FEM) based elastic registration model. The signature matrix for mesh refinement takes into account the regional intensity variance and the local deformation displacement. The regional intensity variance reflects detailed information for improving registration accuracy, while the deformation displacement fine-tunes the mesh refinement for a more efficient algorithm. The gradient flows of two different similarity metrics, the sum of squared differences and the spatially encoded mutual information for mono-modal and multi-modal registrations respectively, are used to derive external forces that drive the model to the equilibrium state. We compared our approach to three other models: (1) the conventional multi-resolution FEM registration algorithm; (2) the FEM elastic method that uses variation information for mesh refinement; and (3) the robust block-matching-based registration. Comparisons among the different methods on a dataset of 20 CT image pairs with artificial deformations demonstrate that our registration method achieved significant improvement in accuracy. Experimental results on another dataset of 40 real medical image pairs, for both mono-modal and multi-modal registrations, also show that our model outperforms the other three models in accuracy.
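
    A minimal sketch of the signature-driven refinement described above: a cell splits into quadrants when a weighted sum of regional intensity variance and mean displacement magnitude exceeds a threshold. The weights, threshold and quadtree bookkeeping are all illustrative assumptions, not the paper's.

```python
# One refinement sweep of a quadtree-style mesh driven by an intensity-
# variance + displacement signature (hypothetical weights and threshold).
import numpy as np

def signature(intensity_patch, disp_patch, w_var=1.0, w_disp=1.0):
    return w_var * np.var(intensity_patch) + w_disp * np.mean(np.abs(disp_patch))

def refine(cells, img, disp, tau):
    """Split each cell (y0, y1, x0, x1) into quadrants if its signature
    exceeds tau; homogeneous cells stay coarse."""
    out = []
    for y0, y1, x0, x1 in cells:
        if (y1 - y0 > 1 and x1 - x0 > 1 and
                signature(img[y0:y1, x0:x1], disp[y0:y1, x0:x1]) > tau):
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            out += [(y0, ym, x0, xm), (y0, ym, xm, x1),
                    (ym, y1, x0, xm), (ym, y1, xm, x1)]
        else:
            out.append((y0, y1, x0, x1))
    return out

img = np.zeros((8, 8)); img[:4, :4] = 100.0   # detail in one quadrant only
disp = np.zeros((8, 8))
cells = refine([(0, 8, 0, 8)], img, disp, tau=10.0)
print(len(cells))                                  # 4: the mixed root cell split
print(len(refine(cells, img, disp, tau=10.0)))     # 4: uniform quadrants stay coarse
```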

  3. Adapting a weather forecast model for greenhouse gas simulation

    NASA Astrophysics Data System (ADS)

    Polavarapu, S. M.; Neish, M.; Tanguay, M.; Girard, C.; de Grandpré, J.; Gravel, S.; Semeniuk, K.; Chan, D.

    2015-12-01

    The ability to simulate greenhouse gases on the global domain is useful for providing boundary conditions for regional flux inversions, as well as for providing reference data for bias correction of satellite measurements. Given the existence of operational weather and environmental prediction models and assimilation systems at Environment Canada, it makes sense to use these tools for greenhouse gas simulations. In this work, we describe the adaptations needed to reasonably simulate CO2 with a weather forecast model. The main challenges were the implementation of a mass conserving advection scheme, and the careful implementation of a mixing ratio defined with respect to dry air. The transport of tracers through convection was also added, and the vertical mixing through the boundary layer was slightly modified. With all these changes, the model conserves CO2 mass well on the annual time scale, and the high resolution (0.9 degree grid spacing) permits a good description of synoptic scale transport. The use of a coupled meteorological/tracer transport model also permits an assessment of approximations needed in offline transport model approaches, such as the neglect of water vapour mass when computing a tracer mixing ratio with respect to dry air.

  4. Matrix model and holographic baryons in the D0-D4 background

    NASA Astrophysics Data System (ADS)

    Li, Si-wen; Jia, Tuo

    2015-08-01

    We study the spectrum and short-distance two-body force of holographic baryons by the matrix model, which is derived from the Sakai-Sugimoto model in the D0-D4 background (D0-D4/D8 system). The matrix model is derived by using the standard technique in string theory, and it can describe multibaryon systems. We rederive the action of the matrix model from open string theory on the baryon vertex, which is embedded in the D0-D4/D8 system. The matrix model offers a more systematic approach to the dynamics of the baryons at short distances. In our system, we find that the matrix model describes stable baryonic states only if ζ = U_{Q0}^3/U_{KK}^3 < 2, where U_{Q0}^3 is related to the number density of smeared D0-branes. This result is exactly the same as some previous results obtained for this system, presented in [W. Cai, C. Wu, and Z. Xiao, Phys. Rev. D 90, 106001 (2014)]. We also compute the baryon spectrum (k = 1 case) and the short-distance two-body force of baryons (k = 2 case). The baryon spectrum is modified and could fit the experimental data if we choose a suitable value for ζ. The short-distance two-body force of baryons is also modified, relative to the original Sakai-Sugimoto model, by the appearance of smeared D0-branes. If ζ > 2, we find that the baryon spectrum becomes entirely complex and an attractive force appears in the short-distance interaction of baryons, which may consistently correspond to the existence of unstable baryonic states.

  5. Modes of climate variability under different background conditions: concepts, data, modelling

    NASA Astrophysics Data System (ADS)

    Lohmann, G.

    2011-12-01

    Through its nonlinear dynamics and involvement in past abrupt climate shifts, the thermohaline circulation represents a key element for the understanding of rapid climate changes. By applying various statistical techniques to surface temperature data, several variability modes on decadal to millennial timescales are identified. The distinction between the modes provides a frame for interpreting past abrupt climate changes. Abrupt shifts associated with the ocean circulation are detected around 1970 and during the last millennium, i.e. the Medieval Warm Period. Such oscillations are analyzed for longer time scales covering the last glacial-interglacial cycle. During the Holocene such events seem to be Poisson distributed, indicating an internal mode. Statistical-conceptual and dynamical model concepts are proposed and tested for millennial to orbital time scales, showing the dominant role of the ocean circulation. New GCM results indicate a strong sensitivity of long-term variability to background conditions. A transition from full glacial (with a strongly stratified ocean) to interglacial conditions is attempted. Finally, climate sensitivity on glacial-interglacial and shorter time scales is evaluated using SST alkenone data and GCM simulations. It is shown that the models underestimate the climate sensitivity compared to the data by a factor of 3. It is argued that the models possibly underestimate the response to obliquity forcing.

  6. Adapting bump model for ventral photoreceptors of Limulus

    PubMed Central

    1982-01-01

    Light-evoked current fluctuations have been recorded from ventral photoreceptors of Limulus for light intensities from threshold up to 10^5 times threshold. These data are analyzed in terms of the adapting bump noise model, which postulates that (a) the response to light is a summation of bumps; and (b) the average bump size decreases with light intensity, and this is the major mechanism of light adaptation. It is shown here that this model can account for the data well. Furthermore, the model provides a convenient framework to characterize, in terms of bump parameters, the effects of calcium ions, which are known to affect photoreceptor functions. From responses to very dim light, it is found that the average impulse response (average of a large number of responses to dim flashes) can be predicted from knowledge of both the noise characteristics under steady light and the dispersion of latencies of individual bumps. Over the range of light intensities studied, it is shown that (a) the bump rate increases in strict proportionality to light intensity, up to approximately 10^5 bumps per second; and (b) the bump height decreases approximately as the -0.7 power of light intensity; at rates greater than 10^5 bumps per second, the conductance change associated with a single bump seems to reach a minimum value of approximately 10^-11 reciprocal ohms; (c) from the lowest to the highest light intensity, the bump duration decreases approximately by a factor of 2, and the time scale of the dispersion of latencies of individual bumps decreases approximately by a factor of 3; (d) removal of calcium ions from the bath lengthens the latency process and causes an increase in bump height but appears to have no effect on either the bump rate or the bump duration. PMID:7108487
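
    The bump-summation picture lends itself to a quick shot-noise calculation (Campbell's theorem) using the exponents quoted above; the base rate, bump height and duration below are invented values, and bumps are idealised as rectangles.

```python
# Shot-noise statistics of a bump superposition: rate grows linearly with
# intensity I, bump height shrinks as I**-0.7 (adaptation). Illustrative
# parameter values, not the paper's fits.
import math

def bump_stats(I, base_rate=100.0, base_height=1.0, duration=0.05):
    rate = base_rate * I                 # bumps/s, proportional to I
    height = base_height * I ** -0.7     # adaptation shrinks bump size
    mean = rate * height * duration      # Campbell's theorem: mean response
    var = rate * height ** 2 * duration  # ... and variance (rectangular bumps)
    return mean, math.sqrt(var) / mean   # mean, coefficient of variation

m1, cv1 = bump_stats(1.0)
m10, _ = bump_stats(10.0)
_, cv100 = bump_stats(100.0)
print(round(m10 / m1, 2))     # ~2.0: tenfold light gives only ~10**0.3 response
print(round(cv1 / cv100, 1))  # 10.0: relative noise falls as sqrt(intensity)
```

This compressive scaling (response gain falling as intensity rises) is the essence of light adaptation in the model.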

  7. An adaptive radiation model for the origin of new gene functions

    SciTech Connect

    Francino, M. Pilar

    2004-10-18

    The evolution of new gene functions is one of the keys to evolutionary innovation. Most novel functions result from gene duplication followed by divergence. However, the models hitherto proposed to account for this process are not fully satisfactory. The classic model of neofunctionalization holds that the two paralogous gene copies resulting from a duplication are functionally redundant, such that one of them can evolve under no functional constraints and occasionally acquire a new function. This model lacks a convincing mechanism for the new gene copies to increase in frequency in the population and survive the mutational load expected to accumulate under neutrality, before the acquisition of the rare beneficial mutations that would confer new functionality. The subfunctionalization model has been proposed as an alternative way to generate genes with altered functions. This model also assumes that new paralogous gene copies are functionally redundant and therefore neutral, but it predicts that relaxed selection will affect both gene copies such that some of the capabilities of the parent gene will disappear in one of the copies and be retained in the other. Thus, the functions originally present in a single gene will be partitioned between the two descendant copies. However, although this model can explain increases in gene number, it does not really address the main evolutionary question, which is the development of new biochemical capabilities. Recently, a new concept has been introduced into the gene evolution literature which is most likely to help solve this dilemma. The key point is to allow for a period of natural selection for the duplication per se, before new function evolves, rather than considering gene duplication to be neutral as in the previous models. Here, I suggest a new model that draws on the advantage of postulating selection for gene duplication, and proposes that bursts of adaptive gene amplification in response to specific selection

  8. Turnaround Management Strategies: The Adaptive Model and the Constructive Model. ASHE 1983 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Chaffee, Ellen E.

    The use of two management strategies by 14 liberal arts and comprehensive colleges attempting to recover from serious financial decline during 1973-1976 was studied. The adaptive model of strategy, based on resource dependence, involves managing demands in order to satisfy critical-resource providers. The constructive model of strategy, based on…

  9. A region-appearance-based adaptive variational model for 3D liver segmentation

    SciTech Connect

    Peng, Jialin; Dong, Fangfang; Chen, Yunmei; Kong, Dexing

    2014-04-15

    Purpose: Liver segmentation from computed tomography images is a challenging task owing to pixel intensity overlapping, ambiguous edges, and complex backgrounds. The authors address this problem with a novel active surface scheme, which minimizes an energy functional combining both edge- and region-based information. Methods: In this semiautomatic method, the evolving surface is principally attracted to strong edges but is facilitated by the region-based information where edge information is missing. As avoiding oversegmentation is the primary challenge, the authors take into account multiple features and appearance context information. Discriminative cues, such as multilayer consecutiveness and local organ deformation are also implicitly incorporated. Case-specific intensity and appearance constraints are included to cope with the typically large appearance variations over multiple images. Spatially adaptive balancing weights are employed to handle the nonuniformity of image features. Results: Comparisons and validations on difficult cases showed that the authors’ model can effectively discriminate the liver from adhering background tissues. Boundaries weak in gradient or with no local evidence (e.g., small edge gaps or parts with similar intensity to the background) were delineated without additional user constraint. With an average surface distance of 0.9 mm and an average volume overlap of 93.9% on the MICCAI data set, the authors’ model outperformed most state-of-the-art methods. Validations on eight volumes with different initial conditions had segmentation score variances mostly less than unity. Conclusions: The proposed model can efficiently delineate ambiguous liver edges from complex tissue backgrounds with reproducibility. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.

  10. Fluidity: A New Adaptive, Unstructured Mesh Geodynamics Model

    NASA Astrophysics Data System (ADS)

    Davies, D. R.; Wilson, C. R.; Kramer, S. C.; Piggott, M. D.; Le Voci, G.; Collins, G. S.

    2010-05-01

    Fluidity is a sophisticated fluid dynamics package, which has been developed by the Applied Modelling and Computation Group (AMCG) at Imperial College London. It has many environmental applications, from nuclear reactor safety to simulations of ocean circulation. Fluidity has state-of-the-art features that place it at the forefront of computational fluid dynamics. The code dynamically optimizes the mesh, providing increased resolution in areas of dynamic importance and thus allowing for accurate simulations across a range of length scales within a single model. It uses an unstructured mesh, which enables the representation of complex geometries; mesh optimization is further enhanced by anisotropic elements, which are particularly useful for resolving one-dimensional flow features and material interfaces. It uses implicit solvers, thus allowing for large time-steps with minimal loss of accuracy; PETSc provides some of these, though multigrid preconditioning methods have been developed in-house. It is optimized to run on parallel processors and has the ability to perform parallel mesh adaptivity - the subdomains used in parallel computing automatically adjust themselves to balance the computational load on each processor as the mesh evolves. It has a novel interface-preserving advection scheme for maintaining sharp interfaces between multiple materials/components, and an automated test-bed for verification of model developments. Such attributes provide an extremely powerful base on which to build a new geodynamical model. Incorporating into Fluidity the necessary physics and numerical technology for geodynamical flows is an ongoing task, though progress to date includes: development and implementation of parallel, scalable solvers for Stokes flow, which can handle sharp, orders-of-magnitude variations in viscosity and, significantly, an anisotropic viscosity tensor; and modification of the multi-material interface-preserving scheme to allow for tracking of chemical

  11. Adaptive elastic networks as models of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Yan, Le; Wyart, Matthieu

    2015-08-01

    The thermodynamics and dynamics of supercooled liquids correlate with their elasticity. In particular for covalent networks, the jump of specific heat is small and the liquid is strong near the threshold valence where the network acquires rigidity. By contrast, the jump of specific heat and the fragility are large away from this threshold valence. In a previous work [Proc. Natl. Acad. Sci. USA 110, 6307 (2013), 10.1073/pnas.1300534110], we could explain these behaviors by introducing a model of supercooled liquids in which local rearrangements interact via elasticity. However, in that model the disorder characterizing elasticity was frozen, whereas it is itself a dynamic variable in supercooled liquids. Here we study numerically and theoretically adaptive elastic network models where polydisperse springs can move on a lattice, thus allowing for the geometry of the elastic network to fluctuate and evolve with temperature. We show numerically that our previous results on the relationship between structure and thermodynamics hold in these models. We introduce an approximation where redundant constraints (highly coordinated regions where the frustration is large) are treated as an ideal gas, leading to analytical predictions that are accurate in the range of parameters relevant for real materials. Overall, these results lead to a description of supercooled liquids, in which the distance to the rigidity transition controls the number of directions in phase space that cost energy and the specific heat.

  12. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
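
    A drastically simplified 1-D shallow-water finite-volume step (Lax-Friedrichs flux, flat bed, reflective walls) illustrates the depth-averaged equations that GeoClaw treats with far more care (Riemann solvers, well-balancing, AMR, wetting and drying). The sketch only demonstrates discrete mass conservation and exact preservation of a lake at rest on a flat bed.

```python
# Minimal 1-D shallow-water finite-volume solver (illustrative toy).
g = 9.81

def lf_flux(hL, huL, hR, huR, dx, dt):
    """Lax-Friedrichs numerical flux for (h, hu)."""
    def phys(h, hu):
        return hu, hu * hu / h + 0.5 * g * h * h
    f1L, f2L = phys(hL, huL)
    f1R, f2R = phys(hR, huR)
    a = dx / dt
    return (0.5 * (f1L + f1R) - 0.5 * a * (hR - hL),
            0.5 * (f2L + f2R) - 0.5 * a * (huR - huL))

def step(h, hu, dx, dt):
    n = len(h)
    H = [h[0]] + h + [h[-1]]          # reflective walls: mirror depth,
    HU = [-hu[0]] + hu + [-hu[-1]]    # negate momentum in the ghost cells
    F = [lf_flux(H[i], HU[i], H[i+1], HU[i+1], dx, dt) for i in range(n + 1)]
    h2 = [h[i] - dt / dx * (F[i+1][0] - F[i][0]) for i in range(n)]
    hu2 = [hu[i] - dt / dx * (F[i+1][1] - F[i][1]) for i in range(n)]
    return h2, hu2

# Dam break: deep water on the left, shallow on the right.
n, dx, dt = 100, 1.0, 0.02
h = [2.0] * (n // 2) + [1.0] * (n // 2)
hu = [0.0] * n
mass0 = sum(h) * dx
for _ in range(200):
    h, hu = step(h, hu, dx, dt)
print(abs(sum(h) * dx - mass0) < 1e-6)   # finite-volume form conserves mass

# Lake at rest on a flat bed is preserved exactly (trivial well-balanced case).
hr, hur = step([1.0] * 4, [0.0] * 4, dx, dt)
print(hr == [1.0] * 4)
```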

  13. The National Astronomy Consortium - An Adaptable Model for OAD?

    NASA Astrophysics Data System (ADS)

    Sheth, Kartik

    2015-08-01

    The National Astronomy Consortium (NAC) is a program led by the National Radio Astronomy Observatory (NRAO) and Associated Universities, Inc. (AUI) in partnership with the National Society of Black Physicists (NSBP) and a number of minority and majority universities to increase the number of students from underrepresented groups, and those otherwise overlooked by the traditional academic pipeline, entering STEM or STEM-related careers. The seed for the NAC was a partnership between NRAO and Howard University, which began with an exchange of a few summer students five years ago. Since then the NAC has grown tremendously. Today the NAC aims to host four to five cohorts nationally in an innovative model in which students are mentored year-round by multiple mentors and peers, with continued engagement in research and professional development/career training throughout the academic year and their careers. The NAC model has already shown success and is a very promising and innovative model for increasing participation of young people in STEM and STEM-related careers. I will discuss how this model could be adapted in various countries at all levels of education.

  14. Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2012-01-01

    Control-theoretic modeling of the human operator's dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal, in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy of either estimator method in estimating pilot stick input. These methods were also able to predict pilot stick input during changing aircraft dynamics, and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
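The record mentions a least squares estimator with exponential forgetting. A common form of that estimator is recursive least squares (RLS) with a forgetting factor; the sketch below applies it to a hypothetical linear operator model whose gains switch mid-run, standing in for changing vehicle dynamics. The regressors, gains, and signal lengths are illustrative assumptions, not the experiment's data.

```python
import numpy as np

def rls_forgetting(phi_seq, y_seq, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting factor lam.
    phi_seq: (T, n) regressors; y_seq: (T,) measured outputs.
    Returns one-step-ahead predictions and the final parameter estimate."""
    n = phi_seq.shape[1]
    theta = np.zeros(n)        # parameter estimate
    P = delta * np.eye(n)      # covariance-like matrix
    preds = np.zeros(len(y_seq))
    for t, (phi, y) in enumerate(zip(phi_seq, y_seq)):
        preds[t] = phi @ theta                    # predict before seeing y[t]
        k = P @ phi / (lam + phi @ P @ phi)       # gain vector
        theta = theta + k * (y - phi @ theta)     # error-driven update
        P = (P - np.outer(k, phi @ P)) / lam      # discount old information
    return preds, theta

# Hypothetical operator: output is a linear map of the last two stick inputs,
# with gains that switch halfway through (simulating changed vehicle dynamics).
rng = np.random.default_rng(0)
u = rng.normal(size=400)
y = np.empty(400)
for t in range(400):
    a, b = (0.8, 0.3) if t < 200 else (0.2, 0.7)
    y[t] = a * u[t] + b * u[t - 1]
phi = np.column_stack([u, np.roll(u, 1)])
preds, theta = rls_forgetting(phi, y)
```

With lam = 0.98 the estimator effectively weights roughly the last 50 samples, so it tracks the gain switch rather than averaging over the whole record; this is the property that lets such an estimator follow a pilot adapting to new dynamics.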

  15. A Model for Making Decisions about Text Adaptations.

    ERIC Educational Resources Information Center

    Dyck, Norma; Pemberton, Jane B.

    2002-01-01

    This article examines a process for teachers to use when deciding whether to adapt a text for a student. The following five options for text adaptations are described: bypass reading, decrease reading, support reading, organize reading, and guide reading. Adaptations for student work products and for tests are also addressed. (Contains…

  16. Extended adiabatic blast waves and a model of the soft X-ray background. [interstellar matter

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Anderson, P. R.

    1981-01-01

    An analytical approximation is generated which follows the development of an adiabatic spherical blast wave in a homogeneous ambient medium of finite pressure. An analytical approximation is also presented for the electron temperature distribution resulting from Coulomb collisional heating. The dynamical, thermal, ionization, and spectral structures are calculated for blast waves of energy E0 = 5 × 10^50 ergs in a hot low-density interstellar environment. A formula is presented for estimating the luminosity evolution of such explosions. The B and C bands of the soft X-ray background, it is shown, are reproduced by such a model explosion if the ambient density is about 4 × 10^-6 cm^-3, the blast radius is roughly 100 pc, and the solar system is located inside the shocked region. Evolution in a pre-existing cavity with a strong density gradient may, it is suggested, remove both the M band and O VI discrepancies.
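The paper's own analytical approximation is not reproduced in this summary, but the scaling it builds on is the standard adiabatic Sedov-Taylor blast wave radius, R = ξ (E t² / ρ)^(1/5). The similarity constant ξ, the mean molecular weight, and the example age below are illustrative assumptions, not values from the paper.

```python
import math

def sedov_radius_pc(E_ergs, n_cm3, t_yr, xi=1.15):
    """Sedov-Taylor blast radius R = xi * (E t^2 / rho)^(1/5), in parsecs.
    rho = mu * m_H * n with mu ~ 1.4 (assumed, to account for helium)."""
    m_H = 1.67e-24            # hydrogen mass, g
    rho = 1.4 * m_H * n_cm3   # ambient mass density, g/cm^3
    t_s = t_yr * 3.156e7      # years -> seconds
    R_cm = xi * (E_ergs * t_s**2 / rho) ** 0.2
    return R_cm / 3.086e18    # cm -> parsec

# Example: E0 = 5e50 erg in a hot, very low-density medium, at an assumed age
R = sedov_radius_pc(5e50, 4e-6, 1e6)
```

The weak one-fifth power means the radius is insensitive to the exact energy and density, which is why soft X-ray band ratios, rather than size alone, constrain such models.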

  17. Modeling common dynamics in multichannel signals with applications to artifact and background removal in EEG recordings.

    PubMed

    De Clercq, Wim; Vanrumste, Bart; Papy, Jean-Michel; Van Paesschen, Wim; Van Huffel, Sabine

    2005-12-01

    Removing artifacts and background electroencephalography (EEG) from multichannel interictal and ictal EEG has become a major research topic in EEG signal processing in recent years. For this purpose, we applied a recently developed subspace-based method for modeling the common dynamics in multichannel signals. When the epileptiform activity is common to the majority of channels and the artifacts appear in only a few channels, the proposed method can be used to remove the latter. The performance of the method was tested on simulated data for different noise levels. For high noise levels the method was still able to identify the common dynamics. In addition, the method was applied to real-life EEG recordings containing interictal and ictal activity contaminated with muscle artifact. The muscle artifacts were removed successfully. For both the synthetic data and the analyzed real-life data, the results were compared with those obtained with principal component analysis (PCA). In both cases, the proposed method performed better than PCA.
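The subspace method itself is not specified in this summary, but the PCA baseline it is compared against can be sketched: compute the SVD of a channels-by-samples matrix and project out the dominant component(s), which capture activity shared across channels. The toy signals below are illustrative, not EEG data.

```python
import numpy as np

def remove_top_components(X, k=1):
    """Project out the k largest-variance principal components of a
    channels-x-samples matrix X (a common PCA baseline for removing
    activity shared across many channels)."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center each channel
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    s[:k] = 0.0                              # zero the dominant components
    return U @ np.diag(s) @ Vt

# Toy example: one common oscillation on all channels plus channel noise;
# removing the top principal component strips the shared part.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
common = np.sin(2 * np.pi * 10 * t)
X = np.vstack([common * g for g in (3.0, 2.5, 2.0)]) + 0.1 * rng.normal(size=(3, 500))
cleaned = remove_top_components(X, k=1)
```

PCA removes whatever is largest in variance, whether artifact or signal of interest; the paper's subspace method instead models the common *dynamics*, which is why it can outperform this baseline.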

  18. Durability-Based Design Guide for an Automotive Structural Composite: Part 2. Background Data and Models

    SciTech Connect

    Corum, J.M.; Battiste, R.L.; Brinkman, C.R.; Ren, W.; Ruggles, M.B.; Weitsman, Y.J.; Yahr, G.T.

    1998-02-01

    This background report is a companion to the document entitled ''Durability-Based Design Criteria for an Automotive Structural Composite: Part 1. Design Rules'' (ORNL-6930). The rules and the supporting material characterization and modeling efforts described here are the result of a U.S. Department of Energy Advanced Automotive Materials project entitled ''Durability of Lightweight Composite Structures.'' The overall goal of the project is to develop experimentally based, durability-driven design guidelines for automotive structural composites. The project is closely coordinated with the Automotive Composites Consortium (ACC). The initial reference material addressed by the rules and this background report was chosen and supplied by ACC. The material is a structural reaction injection-molded isocyanurate (urethane), reinforced with continuous-strand, swirl-mat, E-glass fibers. This report consists of 16 position papers, each summarizing the observations and results of a key area of investigation carried out to provide the basis for the durability-based design guide. The durability issues addressed include the effects of cyclic and sustained loadings, temperature, automotive fluids, vibrations, and low-energy impacts (e.g., tool drops and roadway kickups) on deformation, strength, and stiffness. The position papers cover these durability issues. Topics include (1) tensile, compressive, shear, and flexural properties; (2) creep and creep rupture; (3) cyclic fatigue; (4) the effects of temperature, environment, and prior loadings; (5) a multiaxial strength criterion; (6) impact damage and damage tolerance design; (7) stress concentrations; (8) a damage-based predictive model for time-dependent deformations; (9) confirmatory subscale component tests; and (10) damage development and growth observations.

  19. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    NASA Astrophysics Data System (ADS)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using aerodynamic models of varying complexity is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from the literature. The level of agreement between CFD and experimental results is reasonable.

  20. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  1. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  2. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    NASA Technical Reports Server (NTRS)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as a first attempt, the extended Kalman filter (EKF) provides sufficient solutions to issues arising from nonlinear and non-Gaussian estimation problems, but these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods, and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized; advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid-based methods, multiple-model approaches, and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step in the selection of the individual component weights. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating

  3. Nonlinear geometrically adaptive finite element model of the coilbox

    SciTech Connect

    Troyani, N.

    1996-12-01

    Hot bar heat loss in the transfer table, the rolling stage between rougher stands and finishing stands in a hot mill, is of major concern for reasons of energy consumption, metallurgical uniformity, and rollability. A mathematical model, as well as the corresponding numerical solution, is presented for the evolution of temperature in a coiling and uncoiling bar in hot mills, in the form of a parabolic partial differential equation for a shape-changing domain. The space discretization is achieved via a computationally efficient geometrically adaptive finite element scheme that accommodates the change in shape of the domain, using a computationally novel treatment of the resulting thermal contact problem due to coiling. Time is discretized according to a Crank-Nicolson scheme. Finally, some numerical results are presented.

  4. Evolution of Background Noise and Development of Station Noise Models for GSN Stations ANMO and TUC

    NASA Astrophysics Data System (ADS)

    Hutt, C. R.; McNamara, D. E.; Gee, L. S.

    2006-12-01

    Quality control (QC) of seismic data is an essential component of the operation of the Global Seismographic Network (GSN). In particular, the evaluation of the ambient background noise levels at each station plays a critical role in identifying potential problems, generally by comparison with historical levels. Current practice is to rely on the QC analyst's experience with each station's "fingerprint." The analyst compares current noise with noise levels seen in the past, in his or her experience. In order to formalize this activity and introduce a degree of automation, we explore the development of a "station noise model" for each station. Using the probability density function analysis of McNamara and Buland (2004), we selected GSN stations ANMO and TUC as examples for this initial study of methods for developing individualized noise models. The cities of Albuquerque and Tucson have both grown rapidly in the past 20 years or more, resulting in gradually increasing seismic noise, especially in the short period band. We present the evolution of seismic noise in various frequency bands at these two GSN stations, in particular, looking at those bands strongly affected by cultural encroachment. We also develop a method for establishing a station-specific noise model for each station that will be usable by automated QC algorithms for flagging out-of-character noise levels for that station.
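A simplified stand-in for the probability-density-function-of-PSDs idea of McNamara and Buland (2004): estimate the power spectrum over many short segments and summarize each frequency bin by quantiles, yielding a per-station noise envelope against which new data can be flagged as out-of-character. The segment count, windowing, and the synthetic "cultural" tone below are illustrative assumptions, not GSN processing parameters.

```python
import numpy as np

def station_noise_model(x, fs, nseg=64, quantiles=(0.1, 0.5, 0.9)):
    """Per-frequency power quantiles over many short segments: a simplified
    sketch of the PSD probability-density-function approach."""
    seglen = len(x) // nseg
    segs = x[: seglen * nseg].reshape(nseg, seglen) * np.hanning(seglen)
    psd = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / (fs * seglen)
    freqs = np.fft.rfftfreq(seglen, d=1.0 / fs)
    return freqs, {q: np.quantile(psd, q, axis=0) for q in quantiles}

# Synthetic "station": white noise plus a persistent 1 Hz cultural tone
rng = np.random.default_rng(2)
fs = 20.0
n = 64 * 1024
t = np.arange(n) / fs
x = rng.normal(size=n) + 0.5 * np.sin(2 * np.pi * 1.0 * t)
freqs, model = station_noise_model(x, fs)
```

An automated QC check could then flag any new segment whose power at some frequency falls outside, say, the 10th-90th percentile envelope for that station.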

  5. Data-driven modeling of background and mine-related acidity and metals in river basins.

    PubMed

    Friedel, Michael J

    2014-01-01

    A novel application of self-organizing map (SOM) and multivariate statistical techniques is used to model the nonlinear interaction among basin mineral resources, mining activity, and surface-water quality. First, the SOM is trained using sparse measurements from 228 sample sites in the Animas River Basin, Colorado. The model performance is validated by comparing stochastic predictions of basin-alteration assemblages and mining activity at 104 independent sites. The SOM correctly predicts (>98%) the predominant type of basin hydrothermal alteration and the presence (or absence) of mining activity. Second, application of the Davies-Bouldin criterion to k-means clustering of SOM neurons identified ten unique environmental groups. Median statistics of these groups define a nonlinear water-quality response along the spatiotemporal hydrothermal alteration-mining gradient. These results reveal that it is possible to differentiate along the continuum between background and mine-related inputs of acidity and metals, and they provide a basis for future research and empirical model development.
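The clustering step described above (k-means with the Davies-Bouldin criterion selecting the number of groups) can be sketched directly on raw points; the SOM stage is omitted, and the deterministic farthest-point seeding below is an assumption made for reproducibility, not part of the paper's procedure.

```python
import numpy as np

def farthest_point_init(X, k):
    """Deterministic seeding: start at X[0], then repeatedly take the point
    farthest from all chosen centers."""
    C = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in C], axis=0)
        C.append(X[int(np.argmax(d))])
    return np.array(C)

def kmeans(X, k, iters=50):
    C = farthest_point_init(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return labels, C

def davies_bouldin(X, labels, C):
    """Davies-Bouldin index: for each cluster, the worst ratio of summed
    within-cluster scatter to between-center separation; lower is better."""
    k = len(C)
    S = np.array([np.linalg.norm(X[labels == j] - C[j], axis=1).mean() for j in range(k)])
    total = 0.0
    for i in range(k):
        total += max((S[i] + S[j]) / np.linalg.norm(C[i] - C[j])
                     for j in range(k) if j != i)
    return total / k

# Three well-separated synthetic groups; the DB index should prefer k = 3
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(mu, 0.3, size=(50, 2)) for mu in [(0, 0), (5, 0), (0, 5)]])
scores = {}
for k in (2, 3, 4):
    labels, C = kmeans(X, k)
    scores[k] = davies_bouldin(X, labels, C)
best_k = min(scores, key=scores.get)
```

Sweeping k and taking the minimum Davies-Bouldin score is the same model-selection logic the paper applies to SOM neurons to arrive at its ten environmental groups.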

  6. Modelling MEMS deformable mirrors for astronomical adaptive optics

    NASA Astrophysics Data System (ADS)

    Blain, Celia

    As of July 2012, 777 exoplanets have been discovered, mainly using indirect detection techniques. The direct imaging of exoplanets is the next goal for astronomers, because it will reveal the diversity of planets and planetary systems and will give access to an exoplanet's chemical composition via spectroscopy. With this spectroscopic knowledge, astronomers will be able to determine whether a planet is terrestrial and possibly even find evidence of life. With so much potential, this branch of astronomy has also captivated the general public's attention. The direct imaging of exoplanets remains a challenging task, due to (i) the extremely high contrast between the parent star and the orbiting exoplanet and (ii) their small angular separation. For ground-based observatories, this task is made even more difficult by the presence of atmospheric turbulence. High Contrast Imaging (HCI) instruments have been designed to meet this challenge. HCI instruments are usually composed of a coronagraph coupled with the full on-axis corrective capability of an Extreme Adaptive Optics (ExAO) system. An efficient coronagraph separates the faint planet's light from the much brighter starlight, but the dynamic boiling speckles created by the stellar image make exoplanet detection impossible without the help of a wavefront correction device. The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system is a high-performance HCI instrument developed at Subaru Telescope. The wavefront control system of SCExAO consists of three wavefront sensors (WFS) coupled with a 1024-actuator Micro-Electro-Mechanical-System (MEMS) deformable mirror (DM). MEMS DMs offer a large actuator density, allowing high-count DMs to be deployed in small beams. MEMS DMs are therefore an attractive technology for Adaptive Optics (AO) systems and are particularly well suited for HCI instruments employing ExAO technologies. SCExAO uses coherent light modulation in the focal plane introduced by the DM, for

  7. Modeling the behavioral substrates of associative learning and memory - Adaptive neural models

    NASA Technical Reports Server (NTRS)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.
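The three single-neuron models are not specified in this summary, but the behavior described, learning to anticipate reinforcement and favoring the most nonredundant predictor, is the signature of error-driven learning rules such as the classic Rescorla-Wagner model, sketched below on a blocking paradigm. Trial counts, the learning rate, and the stimulus coding are illustrative assumptions.

```python
import numpy as np

def rescorla_wagner(trials, n_stimuli, alpha=0.3, lam=1.0):
    """Rescorla-Wagner rule: dV_i = alpha * x_i * (lam*r - sum_j V_j x_j).
    trials: list of (stimulus_vector, reinforced?) pairs.
    Returns the associative strength V of each stimulus."""
    V = np.zeros(n_stimuli)
    for x, r in trials:
        x = np.asarray(x, dtype=float)
        error = (lam if r else 0.0) - V @ x   # shared prediction error
        V = V + alpha * x * error
    return V

# Blocking: pretrain stimulus A alone, then train the compound A+B.
# A already predicts reinforcement, so B (the redundant predictor)
# acquires almost no associative strength.
pretraining = [([1, 0], True)] * 40
compound = [([1, 1], True)] * 40
V = rescorla_wagner(pretraining + compound, n_stimuli=2)
```

Because the update is driven by the *shared* prediction error, a stimulus that adds no new predictive information gains little strength, matching the abstract's claim that the models find the most nonredundant predictor of reinforcement.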

  8. Adaptive Weibull Multiplicative Model and Multilayer Perceptron neural networks for dark-spot detection from SAR imagery.

    PubMed

    Taravat, Alireza; Oppelt, Natascha

    2014-12-02

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the fixed filter parameter settings of the non-adaptive WMM by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective, whereas the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies.

  9. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with those of a model combining a non-adaptive WMM and pulse coupled neural networks. The presented approach overcomes the fixed filter parameter settings of the non-adaptive WMM by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective, whereas the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  10. A photoviscoplastic model for photoactivated covalent adaptive networks

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Mu, Xiaoming; Bowman, Christopher N.; Sun, Youyi; Dunn, Martin L.; Qi, H. Jerry; Fang, Daining

    2014-10-01

    Light activated polymers (LAPs) are a class of contemporary materials that respond to light irradiation with mechanical deformation. Among the different molecular mechanisms of photoactuation, here we study radical-induced bond exchange reactions (BERs) that alter macromolecular chains through an addition-fragmentation process, in which the active end group of a free chain attaches to and then breaks a network chain. Thus the BER yields a polymer with a covalently adaptable network. When a LAP sample is loaded, the macroscopic consequence of BERs is stress relaxation and plastic deformation. Furthermore, if light penetration through the sample is nonuniform, resulting in nonuniform stress relaxation, the sample will deform after unloading in order to achieve equilibrium. In the past, this light activation mechanism was modeled as a phase evolution process, in which the chain addition-fragmentation process was considered a phase transformation between stressed phases and newly born phases that are undeformed and stress-free at birth. Such a modeling scheme describes the underlying physics with reasonable fidelity but is computationally expensive. In this paper, we propose a new approach in which the BER-induced macromolecular network alteration is modeled as a viscoplastic deformation process, based on the observation that stress relaxation due to light irradiation is a time-dependent process similar to that in viscoelastic solids, with an irrecoverable deformation after light irradiation. This modeling concept is further translated into a finite-deformation photomechanical constitutive model. The rheological representation of this model is a photoviscoplastic element placed in series with a standard linear solid model in viscoelasticity. A two-step iterative implicit scheme is developed for time integration of the two time-dependent elements. We carry out a series of experiments to determine material parameters in our model as well as to validate the performance of the model in

  11. Modeling the distribution of Mg II absorbers around galaxies using background galaxies and quasars

    SciTech Connect

    Bordoloi, R.; Lilly, S. J.; Kacprzak, G. G.; Churchill, C. W.

    2014-04-01

    We present joint constraints on the distribution of Mg II absorption around high redshift galaxies obtained by combining two orthogonal probes, the integrated Mg II absorption seen in stacked background galaxy spectra and the distribution of parent galaxies of individual strong Mg II systems as seen in the spectra of background quasars. We present a suite of models that can be used to predict, for different two- and three-dimensional distributions, how the projected Mg II absorption will depend on a galaxy's apparent inclination, the impact parameter b, and the azimuthal angle between the projected vector to the line of sight and the projected minor axis. In general, we find that variations in the absorption strength with azimuthal angle provide much stronger constraints on the intrinsic geometry of the Mg II absorption than the dependence on the inclination of the galaxies. In addition to the clear azimuthal dependence in the integrated Mg II absorption that we reported earlier in Bordoloi et al., we show that strong equivalent width Mg II absorbers (W_r(2796) ≥ 0.3 Å) are also asymmetrically distributed in azimuth around their host galaxies: 72% of the absorbers in Kacprzak et al., and 100% of the close-in absorbers within 35 kpc of the center of their host galaxies, are located within 50° of the host galaxy's projected semiminor axis. It is shown that either composite models consisting of a simple bipolar component plus a spherical or disk component, or a single highly softened bipolar distribution, can well represent the azimuthal dependencies observed in both the stacked spectrum and quasar absorption-line data sets within 40 kpc. Simultaneously fitting both data sets, we find that in the composite model the bipolar cone has an opening angle of ∼100° (i.e., confined to within 50° of the disk axis) and contains about two-thirds of the total Mg II absorption in the system. The single softened cone model has an exponential fall off with azimuthal

  12. Industry Cluster's Adaptive Co-competition Behavior Modeling Inspired by Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Xiang, Wei; Ye, Feifan

    Adaptation helps the individual enterprise to adjust its behavior to uncertainties in the environment and hence determines a healthy growth of both the individual enterprises and the whole industry cluster. This paper focuses on the co-competition adaptation behavior of industry clusters, inspired by swarm intelligence mechanisms. By reference to ant cooperative transportation and ant foraging behavior and their related swarm intelligence approaches, cooperative adaptation and competitive adaptation behaviors are studied and relevant models are proposed. These adaptive co-competition behavior models can be integrated into the multi-agent system of an industry cluster to make the industry cluster model more realistic.

  13. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    PubMed Central

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs, capturing surrounding information using in-vehicle and smartphone sensors and later publishing it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects the cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems. PMID:26389905

  14. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-09-15

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs, capturing surrounding information using in-vehicle and smartphone sensors and later publishing it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects the cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems.

  15. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs, capturing surrounding information using in-vehicle and smartphone sensors and later publishing it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects the cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model, which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems. PMID:26389905

  16. Modeling high-resolution broadband discourse in complex adaptive systems.

    PubMed

    Dooley, Kevin J; Corman, Steven R; McPhee, Robert D; Kuhn, Timothy

    2003-01-01

    Numerous researchers and practitioners have turned to complexity science to better understand human systems. Simulation can be used to observe how the microlevel actions of many human agents create emergent structures and novel behavior in complex adaptive systems. In such simulations, communication between human agents is often modeled simply as message passing, where a message or text may transfer data, trigger action, or inform context. Human communication involves more than the transmission of texts and messages, however. Such a perspective is likely to limit the effectiveness and insight that we can gain from simulations, and from complexity science itself. In this paper, we propose a model of how discursive processes between individuals (high resolution), occurring simultaneously across a human system (broadband), dynamically evolve. We propose six different processes that describe how evolutionary variation can occur in texts: recontextualization, pruning, chunking, merging, appropriation, and mutation. These process models can facilitate the simulation of high-resolution, broadband discourse processes, and can aid in the analysis of data from such processes. Examples are used to illustrate each process. We make the tentative suggestion that discourse may evolve to the "edge of chaos." We conclude with a discussion of how high-resolution, broadband discourse data could actually be collected. PMID:12876447

  17. An Eden model for the growth of adaptive networks

    NASA Astrophysics Data System (ADS)

    Meakin, Paul

    1991-12-01

    An adaptive growth model based on the Eden model has been investigated using computer simulations. In this model a "score" associated with all the sites along the shortest path from the newly added site to the initial seed or growth site is incremented by an amount δ1 = 1/(l+1)^η, where l is the path length, and the score associated with all the sites in the cluster is decreased by a fixed amount δ2 = 1/N_m after each growth event. If the score associated with a site falls below zero, it is removed from the cluster. In the asymptotic limit t → ∞, where t is the number of growth events, the cluster size fluctuates about a constant value proportional to N_m^ν, where the exponent ν is given by the empirical relationship ν = 2/(2+η), which is supported by simple theoretical considerations. The growth of the number of occupied sites, s(t), can be represented by the scaling form s(t) = N_m^ν f(t/N_m^ν).
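    The scoring scheme above is simple to simulate. Below is a minimal Python sketch, assuming the path to the seed is taken along the growth tree (a convenient stand-in for the shortest lattice path); all names and parameters are illustrative:

```python
import random

def eden_adaptive(n_steps, eta=1.0, Nm=100, seed=0):
    """Toy adaptive Eden growth on a square lattice: each growth event
    rewards the path back to the seed by delta1 = 1/(l+1)**eta and taxes
    every cluster site by delta2 = 1/Nm; negative-score sites are removed."""
    rng = random.Random(seed)
    origin = (0, 0)
    parent = {origin: None}                 # growth tree: child -> parent
    score = {origin: 1.0}
    delta2 = 1.0 / Nm
    sizes = []
    for _ in range(n_steps):
        x, y = rng.choice(list(score))      # Eden rule: random occupied site
        empty = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (x + dx, y + dy) not in score]
        if empty:
            new = rng.choice(empty)
            parent[new] = (x, y)
            score[new] = 0.0
            path, p = [], new               # walk the tree back to the seed
            while p is not None:
                path.append(p)
                p = parent[p]
            delta1 = 1.0 / len(path) ** eta  # len(path) = l + 1
            for p in path:
                if p in score:              # ancestors may have been pruned
                    score[p] += delta1
        for p in list(score):               # global decay, then pruning
            score[p] -= delta2
            if score[p] < 0 and p != origin:
                del score[p]
        sizes.append(len(score))
    return sizes
```

    With η = 1 the relationship ν = 2/(2+η) predicts an asymptotic size scaling like N_m^(2/3); the sketch only illustrates the bookkeeping, not the asymptotics.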

  18. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    SciTech Connect

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    Manganese (Mn) is an essential nutrient. Mn deficiency is associated with altered lipid (Kawano et al. 1987) and carbohydrate metabolism (Baly et al. 1984; Baly et al. 1985), abnormal skeletal cartilage development (Keen et al. 2000), decreased reproductive capacity, and brain dysfunction. Occupational and accidental inhalation exposures to aerosols containing high concentrations of Mn produce neurological symptoms with Parkinson-like characteristics in workers. At present, there is also concern about the use of the manganese-containing compound methylcyclopentadienyl manganese tricarbonyl (MMT) in unleaded gasoline as an octane enhancer. Combustion of MMT produces aerosols containing a mixture of manganese salts (Lynam et al. 1999). These Mn particulates may be inhaled at low concentrations by the general public in areas using MMT. Risk assessments for essential elements need to acknowledge that risks occur with either excesses or deficiencies, and that significant amounts of these nutrients are present in the body even in the absence of any exogenous exposure. With Mn there is an added complication: the primary risk is associated with inhalation, while Mn is an essential dietary nutrient. Exposure standards for inhaled Mn will need to consider the substantial background uptake from normal ingestion. Andersen et al. (1999) suggested a generic approach for essential nutrient risk assessment. An acceptable exposure limit could be based on some 'tolerable' change in tissue concentration in normal and exposed individuals, i.e., a change somewhere from 10 to 25% of the individual variation in tissue concentration seen in a large human population. A reliable multi-route, multi-species pharmacokinetic model would be necessary for the implementation of this type of dosimetry-based risk assessment approach for Mn. Physiologically-based pharmacokinetic (PBPK) models for various xenobiotics have proven valuable in contributing to a variety of chemical-specific risk

  19. Towards a High Temporal Frequency Grass Canopy Thermal IR Model for Background Signatures

    NASA Technical Reports Server (NTRS)

    Ballard, Jerrell R., Jr.; Smith, James A.; Koenig, George G.

    2004-01-01

    In this paper, we present our first results towards understanding high temporal frequency thermal infrared response from a dense plant canopy and compare the application of our model, driven both by slowly varying, time-averaged meteorological conditions and by high frequency measurements of local and within canopy profiles of relative humidity and wind speed, to high frequency thermal infrared observations. Previously, we have employed three-dimensional ray tracing to compute the intercepted and scattered radiation fluxes and for final scene rendering. For the turbulent fluxes, we employed simple resistance models for latent and sensible heat with one-dimensional profiles of relative humidity and wind speed. Our modeling approach has proven successful in capturing the directional and diurnal variation in background thermal infrared signatures. We hypothesize that at these scales, where the model is typically driven by time-averaged, local meteorological conditions, the primary source of thermal variance arises from the spatial distribution of sunlit and shaded foliage elements within the canopy and the associated radiative interactions. In recent experiments, we have begun to focus on the high temporal frequency response of plant canopies in the thermal infrared at 1 second to 5 minute intervals. At these scales, we hypothesize turbulent mixing plays a more dominant role. Our results indicate that in the high frequency domain, the vertical profile of temperature change is tightly coupled to the within-canopy wind speed. In the results reported here, the canopy cools from the top down with increased wind velocities and heats from the bottom up at low wind velocities.
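    The resistance formulation mentioned above reduces to a one-line flux law. A minimal sketch, with an illustrative resistance-versus-wind relation (the constants are placeholders, not the paper's parameterization):

```python
RHO_AIR = 1.2     # air density, kg m^-3
CP_AIR = 1005.0   # specific heat of air, J kg^-1 K^-1

def aerodynamic_resistance(u, k=50.0):
    """Toy aerodynamic resistance (s m^-1): stronger wind, lower resistance."""
    return k / max(u, 0.1)

def sensible_heat_flux(t_leaf, t_air, u):
    """Resistance model for sensible heat: H = rho * cp * (T_leaf - T_air) / r_a,
    in W m^-2, for leaf and air temperatures in K or degC and wind speed u in m/s."""
    return RHO_AIR * CP_AIR * (t_leaf - t_air) / aerodynamic_resistance(u)
```

    With a 5 K leaf-air temperature difference, the sensible heat flux grows tenfold as wind speed rises from 0.5 to 5 m/s, illustrating the tighter canopy-air coupling at high wind speeds.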

  20. Modeling estimates of the effect of acid rain on background radiation dose.

    PubMed

    Sheppard, S C; Sheppard, M I

    1988-06-01

    Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the "background" radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable. PMID:3203639
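    The proportionality argument reduces to simple arithmetic. A sketch, with the per-pH-unit mobility factor left as a free parameter (2 to 7 in the reviewed literature):

```python
def dose_scaling(delta_ph, factor_per_unit=2.0):
    """Dose relative to background after a soil pH drop of delta_ph units,
    assuming dose is directly proportional to nuclide mobility and that
    mobility grows by a fixed factor per pH unit (2 to 7 per the review)."""
    return factor_per_unit ** delta_ph

conservative = dose_scaling(1.0, 2.0)  # factor of 2, already above Okamoto's 1.3
upper = dose_scaling(1.0, 7.0)         # factor of 7 at the high end
```

    A 1-unit pH drop then gives at least a doubling of the mobility-driven dose, compared with Okamoto's estimated factor of 1.3.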

  2. Modeling estimates of the effect of acid rain on background radiation dose.

    PubMed Central

    Sheppard, S C; Sheppard, M I

    1988-01-01

    Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the "background" radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable. PMID:3203639

  3. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

    Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, nonlinear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper first thoroughly presents the baseline controller, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).
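    The model-reference idea at the core of PMRAC can be illustrated with a scalar example. This is a plain Lyapunov-rule MRAC sketch, not the paper's predictor-based, dynamic-inversion architecture; the plant, gains, and step sizes are all illustrative:

```python
def simulate_mrac(a=1.0, b=1.0, am=2.0, gamma=5.0, dt=0.001, steps=20000):
    """Scalar model-reference adaptive control:
    plant      xdot  = a*x + b*u        (a, b unknown to the controller)
    reference  xmdot = -am*xm + am*r    (desired closed-loop behavior)
    control    u = kx*x + kr*r, with Lyapunov-rule gain adaptation."""
    x = xm = 0.0
    kx = kr = 0.0                       # adaptive gains, start from zero
    r = 1.0                             # constant reference command
    errs = []
    for _ in range(steps):
        u = kx * x + kr * r
        e = x - xm                      # tracking error
        x += dt * (a * x + b * u)       # Euler step of the plant
        xm += dt * (-am * xm + am * r)  # Euler step of the reference model
        kx += dt * (-gamma * e * x)     # adaptation laws drive e toward 0
        kr += dt * (-gamma * e * r)
        errs.append(abs(e))
    return errs
```

    The plant here is open-loop unstable (a > 0), yet the tracking error decays as the gains approach their matching values kx* = -(a + am)/b and kr* = am/b.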

  4. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques into 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method into the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
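    The refinement idea can be sketched with a 2-D quadtree (the paper builds on an octree data structure; here the field's variation across a cell's corners stands in for a real error estimator, and all tolerances are illustrative):

```python
import math

def refine(x0, y0, size, field, tol, max_depth, depth=0):
    """Split a square cell while the field varies more than tol across it;
    returns the leaf cells (x, y, size) of the adaptive mesh."""
    corners = [field(x0, y0), field(x0 + size, y0),
               field(x0, y0 + size), field(x0 + size, y0 + size)]
    if depth >= max_depth or max(corners) - min(corners) <= tol:
        return [(x0, y0, size)]
    h = size / 2
    cells = []
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        cells += refine(x0 + dx, y0 + dy, h, field, tol, max_depth, depth + 1)
    return cells

# A sharp "chemical boundary" at x = 0.5 attracts fine cells:
cells = refine(0.0, 0.0, 1.0, lambda x, y: math.tanh(20 * (x - 0.5)),
               tol=0.1, max_depth=5)
```

    The fine cells cluster around the sharp boundary at x = 0.5 while smooth regions keep coarse cells, which is the efficiency argument made above.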

  5. A regional adaptive and assimilative three-dimensional ionospheric model

    NASA Astrophysics Data System (ADS)

    Sabbagh, Dario; Scotto, Carlo; Sgrigna, Vittorio

    2016-03-01

    A regional adaptive and assimilative three-dimensional (3D) ionospheric model is proposed. It is able to ingest real-time data from different ionosondes, providing the ionospheric bottomside plasma frequency fp over the Italian area. The model is constructed on the basis of empirical values for a set of ionospheric parameters Pi[base] over the considered region, some of which have an assigned variation ΔPi. The values for the ionospheric parameters actually observed at a given time at a given site will thus be Pi = Pi[base] + ΔPi. These Pi values are used as input for an electron density N(h) profiler. The latter is derived from the Advanced Ionospheric Profiler (AIP), which is software used by Autoscala as part of the process of automatic inversion of ionogram traces. The 3D model ingests ionosonde data by minimizing the root-mean-square deviation between the observed and modeled values of fp(h) profiles obtained from the associated N(h) values at the points where observations are available. The ΔPi values are obtained from this minimization procedure. The 3D model is tested using data collected at the ionospheric stations of Rome (41.8N, 12.5E) and Gibilmanna (37.9N, 14.0E), and then comparing the results against data from the ionospheric station of San Vito dei Normanni (40.6N, 18.0E). The software developed is able to produce maps of the critical frequencies foF2 and foF1, and of fp at a fixed altitude, with transverse and longitudinal cross-sections of the bottomside ionosphere in a color scale. fp(h) and associated simulated ordinary ionogram traces can easily be produced for any geographic location within the Italian region. fp values within the volume in question can also be provided.
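    The assimilation step (choosing the corrections ΔPi to minimize the RMS deviation between observed and modeled fp(h)) can be sketched with a single parameter. Here a Chapman-layer profile stands in for the AIP profiler and a coarse grid search for the minimization; every value is illustrative:

```python
import math

def chapman_fp(h, foF2, hmF2=250.0, H=50.0):
    """Chapman-layer plasma-frequency profile (MHz) at height h (km):
    a stand-in for the electron density profiler."""
    z = (h - hmF2) / H
    return foF2 * math.exp(0.5 * (1 - z - math.exp(-z)))

def ingest(heights, observed, foF2_base=6.0):
    """Grid-search the correction dP to the base parameter that minimizes
    the RMS deviation between observed and modeled fp(h)."""
    deltas = [i * 0.05 for i in range(-40, 41)]   # dP in [-2, +2] MHz
    def rmse(d):
        return math.sqrt(sum((chapman_fp(h, foF2_base + d) - o) ** 2
                             for h, o in zip(heights, observed)) / len(heights))
    return min(deltas, key=rmse)

heights = list(range(150, 400, 10))
observed = [chapman_fp(h, 6.8) for h in heights]  # synthetic "ionosonde" data
best_delta = ingest(heights, observed)            # recovers dP close to +0.8
```

    In the real model the same minimization runs over several parameters at every station where observations are available.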

  6. An Adaptive Mixing Depth Model for an Industrialized Shoreline Area.

    NASA Astrophysics Data System (ADS)

    Dunk, Richard H.

    1993-01-01

    Internal boundary layer characteristics are often overlooked in atmospheric diffusion modeling applications but are essential for accurate air quality assessment. This study focuses on a unique air pollution problem that is partially resolved by representative internal boundary layer description and prediction. Emissions from a secondary non-ferrous smelter located adjacent to a large waterway, which is situated near a major coastal zone, became suspect in causing adverse air quality. In an effort to prove or disprove this allegation, "accepted" air quality modeling was performed. Predicted downwind concentrations indicated that the smelter plume was not responsible for causing regulatory standards to be exceeded. However, chronic community complaints continued to be directed toward the smelter facility. Further investigation into the problem revealed that complaint occurrences coincided with onshore southeasterly flows. Internal boundary layer development during onshore flow was assumed to produce a mixing depth conducive to plume trapping or fumigation. The preceding premise led to the utilization of estimated internal boundary layer depths for dispersion model input in an attempt to improve prediction accuracy. Monitored downwind ambient air concentrations showed that model predictions were still substantially lower than actual values. After analyzing the monitored values and comparing them with actual plume observations conducted during several onshore flow occurrences, the author hypothesized that the waterway could cause a damping effect on internal boundary layer development. This effective decrease in mixing depths would explain the abnormally high ambient air concentrations experienced during onshore flows. Therefore, a full-scale field study was designed and implemented to study the waterway's influence on mixing depth characteristics. The resultant data were compiled and formulated into an area-specific mixing depth model that can be adapted to

  7. Simulating the quartic Galileon gravity model on adaptively refined meshes

    SciTech Connect

    Li, Baojiu; Barreira, Alexandre; Baugh, Carlton M.; Hellwing, Wojciech A.; Koyama, Kazuya; Zhao, Gong-Bo; Pascoli, Silvia E-mail: baojiu.li@durham.ac.uk E-mail: wojciech.hellwing@durham.ac.uk E-mail: silvia.pascoli@durham.ac.uk

    2013-11-01

    We develop a numerical algorithm to solve the high-order nonlinear derivative-coupling equation associated with the quartic Galileon model, and implement it in a modified version of the ramses N-body code to study the effect of the Galileon field on the large-scale matter clustering. The algorithm is tested for several matter field configurations with different symmetries, and works very well. This enables us to perform the first simulations for a quartic Galileon model which provides a good fit to the cosmic microwave background (CMB) anisotropy, supernovae and baryonic acoustic oscillations (BAO) data. Our result shows that the Vainshtein mechanism in this model is very efficient in suppressing the spatial variations of the scalar field. However, the time variation of the effective Newtonian constant caused by the curvature coupling of the Galileon field cannot be suppressed by the Vainshtein mechanism. This leads to a significant weakening of the strength of gravity in high-density regions at late times, and therefore a weaker matter clustering on small scales. We also find that without the Vainshtein mechanism the model would have behaved in a completely different way, which shows the crucial role played by nonlinearities in modified gravity theories and the importance of performing self-consistent N-body simulations for these theories.

  8. Visual model of human blur perception for scene adaptive capturing

    NASA Astrophysics Data System (ADS)

    Kim, Sung-Su; Chung, DaeSu; Park, Byung-Kwan; Kim, Jung-Bae; Lee, Seong-Deok

    2009-01-01

    Despite the fast spread of digital cameras, many people cannot take the high-quality pictures they want due to a lack of photographic skill. To help users under unfavorable capturing environments, e.g. 'Night', 'Backlighting', 'Indoor', or 'Portrait', the automatic mode of cameras provides parameter sets chosen by manufacturers. Unfortunately, this automatic functionality does not give pleasing image quality in general. In particular, the length of exposure (shutter speed) is a critical factor in taking high-quality pictures at night. One of the key factors causing this poor night-time quality is image blur, which mainly comes from hand shake during long exposures. In this study, to circumvent this problem and to enhance the image quality of automatic cameras, we propose an intelligent camera processing core having SABE (Scene Adaptive Blur Estimation) and VisBLE (Visual Blur Limitation Estimation). SABE analyzes the high-frequency component in the DCT (Discrete Cosine Transform) domain. VisBLE determines the acceptable blur level on the basis of human visual tolerance and a Gaussian model. This visual tolerance model is developed on the basis of the physiological mechanism of human perception. In experiments, the proposed method outperforms existing imaging systems in evaluations by general users as well as photographers.
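    The blur-estimation step can be mimicked in a few lines. This sketch scores sharpness by finite-difference high-frequency energy instead of the paper's DCT-domain analysis, and a box filter is a simple stand-in for hand-shake blur:

```python
def sharpness(img):
    """High-frequency energy proxy: mean squared difference between
    horizontally and vertically adjacent pixels; blur lowers the score."""
    h, w = len(img), len(img[0])
    energy = count = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                energy += (img[y][x + 1] - img[y][x]) ** 2
                count += 1
            if y + 1 < h:
                energy += (img[y + 1][x] - img[y][x]) ** 2
                count += 1
    return energy / count

def box_blur(img):
    """3x3 mean filter, emulating blur from camera shake."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            patch = [img[j][i] for j in range(max(0, y - 1), min(h, y + 2))
                               for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(patch) / len(patch)
    return out
```

    A camera core could compare such a score against a visual-tolerance threshold to decide whether a longer exposure remains acceptable.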

  9. An adaptive correspondence algorithm for modeling scenes with strong interreflections.

    PubMed

    Xu, Yi; Aliaga, Daniel G

    2009-01-01

    Modeling real-world scenes, beyond diffuse objects, plays an important role in computer graphics, virtual reality, and other commercial applications. One active approach is projecting binary patterns in order to obtain correspondence and reconstruct a densely sampled 3D model. In such structured-light systems, determining whether a pixel is directly illuminated by the projector is essential to decoding the patterns. When a scene has abundant indirect light, this process is especially difficult. In this paper, we present a robust pixel classification algorithm for this purpose. Our method correctly establishes the lower and upper bounds of the possible intensity values of an illuminated pixel and of a non-illuminated pixel. Based on the two intervals, our method classifies a pixel by determining whether its intensity is within one interval but not in the other. Our method performs better than the standard method because it avoids gross errors during the decoding process caused by strong inter-reflections. For the remaining uncertain pixels, we apply an iterative algorithm to reduce the inter-reflection within the scene. Thus, more points can be decoded and reconstructed after each iteration. Moreover, the iterative algorithm is carried out in an adaptive fashion for fast convergence.
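    The interval test described above reduces to a small decision rule; the bounds passed in here are placeholders for the per-pixel bounds the method derives:

```python
def classify_pixel(intensity, lit_range, unlit_range):
    """Classify a structured-light pixel as 'lit', 'unlit', or 'uncertain':
    a pixel is decided only if its intensity falls inside exactly one of
    the two (lo, hi) intervals."""
    in_lit = lit_range[0] <= intensity <= lit_range[1]
    in_unlit = unlit_range[0] <= intensity <= unlit_range[1]
    if in_lit and not in_unlit:
        return "lit"
    if in_unlit and not in_lit:
        return "unlit"
    return "uncertain"   # left for the iterative refinement pass
```

    Pixels that fall in both intervals (or neither) are exactly the "remaining uncertain pixels" handled by the iterative inter-reflection reduction.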

  10. Barley: a translational model for adaptation to climate change.

    PubMed

    Dawson, Ian K; Russell, Joanne; Powell, Wayne; Steffenson, Brian; Thomas, William T B; Waugh, Robbie

    2015-05-01

    Barley (Hordeum vulgare ssp. vulgare) is an excellent model for understanding agricultural responses to climate change. Its initial domestication over 10 millennia ago and subsequent wide migration provide striking evidence of adaptation to different environments, agro-ecologies and uses. A bottleneck in the selection of modern varieties has resulted in a reduction in total genetic diversity and a loss of specific alleles relevant to climate-smart agriculture. However, extensive and well-curated collections of landraces, wild barley accessions (H. vulgare ssp. spontaneum) and other Hordeum species exist and are important new allele sources. A wide range of genomic and analytical tools have entered the public domain for exploring and capturing this variation, and specialized populations, mutant stocks and transgenics facilitate the connection between genetic diversity and heritable phenotypes. These lay the biological, technological and informational foundations for developing climate-resilient crops tailored to specific environments that are supported by extensive environmental and geographical databases, new methods for climate modelling and trait/environment association analyses, and decentralized participatory improvement methods. Case studies of important climate-related traits and their constituent genes - including examples that are indicative of the complexities involved in designing appropriate responses - are presented, and key developments for the future highlighted.

  11. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Astrophysics Data System (ADS)

    Lam, Quang; Ray, Surendra N.

    1995-05-01

    Attitude determination has been considered a permanent topic of active research, and perhaps a forever-lasting interest, for spacecraft system designers. Its role is to provide a reference for controls such as pointing directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and G. Despite its poor performance in terms of estimation accuracy, LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation': the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework for handling unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics for each submodel (per operating condition) as the traditional multiple-model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the proposed advanced system identifier is further reinforced by a learning system implemented (in the outer loop) using neural networks to identify other unknown
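    The bank-of-filters idea can be sketched for a scalar random-walk state: each Kalman filter assumes a different process-noise level q, and per-measurement likelihoods update the model weights. This is a static multiple-model scheme, simpler than the interacting multiple-model design described above; all noise levels are illustrative:

```python
import math

def mm_estimate(zs, q_candidates=(0.01, 1.0, 100.0), r=1.0):
    """Bank of scalar Kalman filters for a random-walk state x_k = x_{k-1} + w,
    z_k = x_k + v, one filter per process-noise hypothesis q; measurement
    likelihoods update the model probabilities."""
    filters = [{"x": 0.0, "p": 10.0, "q": q, "w": 1.0 / len(q_candidates)}
               for q in q_candidates]
    for z in zs:
        total = 0.0
        for f in filters:
            p = f["p"] + f["q"]          # predict
            s = p + r                    # innovation variance
            k = p / s                    # Kalman gain
            innov = z - f["x"]
            f["x"] += k * innov          # update state estimate
            f["p"] = (1 - k) * p
            lik = math.exp(-0.5 * innov ** 2 / s) / math.sqrt(2 * math.pi * s)
            f["w"] *= lik + 1e-300       # guard against total underflow
            total += f["w"]
        for f in filters:
            f["w"] /= total              # renormalize model probabilities
    best = max(filters, key=lambda f: f["w"])
    return best["q"], sum(f["w"] * f["x"] for f in filters)
```

    Fed measurements from a true random walk with unit process noise, the q = 1.0 hypothesis accumulates nearly all of the weight, which is the 'identification' of the system noise level the abstract refers to.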

  12. Space weather circulation model of plasma clouds as background radiation medium of space environment.

    NASA Astrophysics Data System (ADS)

    Kalu, A. E.

    A model for Space Weather (SW) circulation, with plasma clouds as the background radiation medium of the Space Environment, has been proposed and discussed. Major characteristics of the model are outlined; the model assumes a baroclinic Space Environment in view of the observed pronounced horizontal electron temperature gradient with a prevailing weak vertical temperature gradient. The primary objective of the study is to be able to monitor and realistically predict, in real or near real time, SW and Space Storms (SWS) affecting human economic systems on Earth, as well as the safety and physiologic comfort of human payloads in the Space Environment, in relation to the planned increase in human space flights, especially with reference to the ISS Space Shuttle Taxi (ISST) programme and other prolonged deep space missions. Although considerable discussions are now available in the literature on SW issues, routine operational meteorological applications of SW forecast data and information for the Space Environment have yet to receive adequate attention. The paper attempts to fill this gap in the SW literature. The paper examines the sensitivity and variability of the 3-D continuum of plasmas in response to solar radiation inputs into the magnetosphere under disturbed-Sun conditions. Specifically, the presence of plasma clouds in the form of Coronal Mass Ejections (CMEs) is stressed as a major source of danger to space crews, of spacecraft instrumentation and architecture charging problems, and of impacts on numerous radiation-sensitive human economic systems on Earth. Finally, the paper considers the application of model results to the effective monitoring of the two major phases of manned spaceflight, take-off and re-entry, where all-time assessment of the spacecraft's transient ambient micro in-cabin and outside Space Environment is vital for all manned spaceflights, as recently evidenced by the loss of vital information during take-off of the February 1, 2003 US Columbia

  13. Adaptive invasive species distribution models: A framework for modeling incipient invasions

    USGS Publications Warehouse

    Uden, Daniel R.; Allen, Craig R.; Angeler, David G.; Corral, Lucia; Fricke, Kent A.

    2015-01-01

    The utilization of species distribution model(s) (SDM) for approximating, explaining, and predicting changes in species’ geographic locations is increasingly promoted for proactive ecological management. Although frameworks for modeling non-invasive species distributions are relatively well developed, their counterparts for invasive species—which may not be at equilibrium within recipient environments and often exhibit rapid transformations—are lacking. Additionally, adaptive ecological management strategies address the causes and effects of biological invasions and other complex issues in social-ecological systems. We conducted a review of biological invasions, species distribution models, and adaptive practices in ecological management, and developed a framework for adaptive, niche-based, invasive species distribution model (iSDM) development and utilization. This iterative, 10-step framework promotes consistency and transparency in iSDM development, allows for changes in invasive drivers and filters, integrates mechanistic and correlative modeling techniques, balances the avoidance of type 1 and type 2 errors in predictions, encourages the linking of monitoring and management actions, and facilitates incremental improvements in models and management across space, time, and institutional boundaries. These improvements are useful for advancing coordinated invasive species modeling, management and monitoring from local scales to the regional, continental and global scales at which biological invasions occur and harm native ecosystems and economies, as well as for anticipating and responding to biological invasions under continuing global change.

  14. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (ν) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic-inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.

  15. Use of Time Information in Models behind Adaptive System for Building Fluency in Mathematics

    ERIC Educational Resources Information Center

    Rihák, Jirí

    2015-01-01

    In this work we introduce the system for adaptive practice of foundations of mathematics. Adaptivity of the system is primarily provided by selection of suitable tasks, which uses information from a domain model and a student model. The domain model does not use prerequisites but works with splitting skills into more concrete sub-skills. The student…

  16. Constraints on Dark Matter Interactions with Standard Model Particles from Cosmic Microwave Background Spectral Distortions.

    PubMed

    Ali-Haïmoud, Yacine; Chluba, Jens; Kamionkowski, Marc

    2015-08-14

    We propose a new method to constrain elastic scattering between dark matter (DM) and standard model particles in the early Universe. Direct or indirect thermal coupling of nonrelativistic DM with photons leads to a heat sink for the latter. This results in spectral distortions of the cosmic microwave background (CMB), the amplitude of which can be as large as a few times the DM-to-photon-number ratio. We compute CMB spectral distortions due to DM-proton, DM-electron, and DM-photon scattering for generic energy-dependent cross sections and DM mass m_{χ}≳1 keV. Using Far-Infrared Absolute Spectrophotometer measurements, we set constraints on the cross sections for m_{χ}≲0.1 MeV. In particular, for energy-independent scattering we obtain σ_{DM-proton}≲10^{-24} cm^{2} (keV/m_{χ})^{1/2}, σ_{DM-electron}≲10^{-27} cm^{2} (keV/m_{χ})^{1/2}, and σ_{DM-photon}≲10^{-39} cm^{2} (m_{χ}/keV). An experiment with the characteristics of the Primordial Inflation Explorer would extend the regime of sensitivity up to masses m_{χ}~1 GeV.

  17. Detection of Bird Nests during Mechanical Weeding by Incremental Background Modeling and Visual Saliency

    PubMed Central

    Steen, Kim Arild; Therkildsen, Ole Roland; Green, Ole; Karstoft, Henrik

    2015-01-01

    Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower the use of pesticides and an increased number of pesticide-resistant weeds. Ground nesting birds are highly susceptible to farming operations, like mechanical weeding, which may destroy the nests and reduce the survival of chicks and incubating females. This problem has received limited attention within agricultural engineering. However, as the number of machines increases, destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology in order to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, as it detects 28 of 30 nests at an average distance of 3.8 m, with a true positive rate of 0.75. PMID:25738766
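
    The incremental background modeling component lends itself to a compact sketch. The running-average model below is a generic stand-in (the paper's exact formulation and the saliency fusion are not reproduced): the background estimate is updated frame by frame, and foreground pixels are those deviating from it by more than a threshold.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponentially weighted running-average background update."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, tau=0.2):
    """Pixels deviating from the background model by more than tau."""
    return np.abs(frame - bg) > tau

# Synthetic demo: static scene, then a bright object appears.
scene = 0.3 * np.ones((40, 40))
bg = scene.copy()
for _ in range(50):                      # learn the static background
    bg = update_background(bg, scene)
frame = scene.copy()
frame[10:20, 10:20] = 0.9                # foreground object
mask = foreground_mask(bg, frame)
print(mask[15, 15], mask[0, 0])          # True False
```

    For a moving camera, as in the paper, the background model would additionally have to be motion-compensated before each update.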

  18. Detection of bird nests during mechanical weeding by incremental background modeling and visual saliency.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Green, Ole; Karstoft, Henrik

    2015-03-02

    Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower the use of pesticides and an increased number of pesticide-resistant weeds. Ground nesting birds are highly susceptible to farming operations, like mechanical weeding, which may destroy the nests and reduce the survival of chicks and incubating females. This problem has received limited attention within agricultural engineering. However, as the number of machines increases, destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology in order to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, as it detects 28 of 30 nests at an average distance of 3.8 m, with a true positive rate of 0.75.

  19. Cosmic string parameter constraints and model analysis using small scale Cosmic Microwave Background data

    SciTech Connect

    Urrestilla, Jon; Bevis, Neil; Hindmarsh, Mark; Kunz, Martin

    2011-12-01

    We present a significant update of the constraints on the Abelian Higgs cosmic string tension by cosmic microwave background (CMB) data, enabled both by the use of new high-resolution CMB data from suborbital experiments as well as the latest results of the WMAP satellite, and by improved predictions for the impact of Abelian Higgs cosmic strings on the CMB power spectra. The new cosmic string spectra [1] were improved especially for small angular scales, through the use of larger Abelian Higgs string simulations and careful extrapolation. If Abelian Higgs strings are present then we find improved bounds on their contribution to the CMB anisotropies, f_d^AH < 0.095, and on their tension, Gμ^AH < 0.57 × 10^−6, both at 95% confidence level using WMAP7 data; and f_d^AH < 0.048 and Gμ^AH < 0.42 × 10^−6 using all the CMB data. We also find that using all the CMB data, a scale-invariant initial perturbation spectrum, n_s = 1, is now disfavoured at 2.4σ even if strings are present. A Bayesian model selection analysis no longer indicates a preference for strings.

  20. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models to TOPEX/POSEIDON (T/P) sea level anomaly data in the North Pacific (5-60 deg N, 132-252 deg E), acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
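
    The covariance matching idea admits a minimal sketch: write the expected covariance of model-data residuals as a linear combination of known covariance contributions and fit the unknown scale factors by least squares. The matrices below are synthetic stand-ins, not the GCM or altimetry operators of the thesis.

```python
import numpy as np

# Two known "basis" covariances (e.g. model-error and measurement-error
# contributions to the residual covariance), built synthetically here.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A @ A.T
B = rng.standard_normal((5, 5)); B = B @ B.T
S = 2.0 * A + 3.0 * B            # "observed" sample residual covariance

# Covariance matching: least squares over vectorized matrices, S ~ qA*A + qB*B.
M = np.column_stack([A.ravel(), B.ravel()])
q, *_ = np.linalg.lstsq(M, S.ravel(), rcond=None)
print(np.round(q, 6))            # recovers the scale factors [2., 3.]
```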

  1. Data-driven estimations of Standard Model backgrounds to SUSY searches in ATLAS

    SciTech Connect

    Legger, F.

    2008-11-23

    At the Large Hadron Collider (LHC), the strategy for the observation of supersymmetry in the early days is mainly based on inclusive searches. Major backgrounds are constituted by mismeasured multi-jet events and W, Z and t quark production in association with jets. We describe recent work performed in the ATLAS Collaboration to derive these backgrounds from the first ATLAS data.

  2. A JOINT MODEL OF THE X-RAY AND INFRARED EXTRAGALACTIC BACKGROUNDS. I. MODEL CONSTRUCTION AND FIRST RESULTS

    SciTech Connect

    Shi, Yong; Helou, George; Armus, Lee; Stierwalt, Sabrina; Dale, Daniel

    2013-02-10

    We present an extragalactic population model of the cosmic background light to interpret the rich high-quality survey data in the X-ray and IR bands. The model incorporates star formation and supermassive black hole (SMBH) accretion in a co-evolution scenario to fit simultaneously 617 data points of number counts, redshift distributions, and local luminosity functions (LFs) with 19 free parameters. The model has four main components, the total IR LF, the SMBH accretion energy fraction in the IR band, the star formation spectral energy distribution (SED), and the unobscured SMBH SED extinguished with a H I column density distribution. As a result of the observational uncertainties about the star formation and SMBH SEDs, we present several variants of the model. The best-fit reduced χ² reaches as small as 2.7-2.9 of which a significant amount (>0.8) is contributed by cosmic variances or caveats associated with data. Compared to previous models, the unique result of this model is to constrain the SMBH energy fraction in the IR band that is found to increase with the IR luminosity but decrease with redshift up to z ~ 1.5; this result is separately verified using aromatic feature equivalent-width data. The joint modeling of X-ray and mid-IR data allows for improved constraints on the obscured active galactic nucleus (AGN), especially the Compton-thick AGN population. All variants of the model require that Compton-thick AGN fractions decrease with the SMBH luminosity but increase with redshift while the type 1 AGN fraction has the reverse trend.

  3. The Radio Language Arts Project: adapting the radio mathematics model.

    PubMed

    Christensen, P R

    1985-01-01

    Kenya's Radio Language Arts Project, directed by the Academy for Educational Development in cooperation with the Kenya Institute of Education in 1980-85, sought to teach English to rural school children in grades 1-3 through an intensive, radio-based instructional system. Daily half-hour lessons are broadcast throughout the school year and supported by teachers and print materials. The project was further aimed at testing the feasibility of adapting the successful Nicaraguan Radio Math Project to a new subject area. Difficulties were encountered in articulating a language curriculum with the precision required for a media-based instructional system. Also a challenge was defining an acceptable regional standard for pronunciation and grammar; British English was finally selected. An important modification of the Radio Math model concerned the role of the teacher. While Radio Math sought to reduce the teacher's responsibilities during the broadcast, Radio Language Arts teachers played an important instructional role during the English lesson broadcasts by providing translation and checks on work. Evaluations of the Radio Language Arts Project suggest significant gains in speaking, listening, and reading skills as well as high levels of satisfaction on the part of parents and teachers.

  4. Adaptable Information Models in the Global Change Information System

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Buddenberg, A.; Aulenbach, S.; Wolfe, R.; Goldstein, J.

    2014-12-01

    The US Global Change Research Program has sponsored the creation of the Global Change Information System (GCIS) to provide a web based source of accessible, usable, and timely information about climate and global change for use by scientists, decision makers, and the public. The GCIS played multiple roles during the assembly and release of the Third National Climate Assessment. It provided human and programmable interfaces, relational and semantic representations of information, and discrete identifiers for various types of resources, which could then be manipulated by a distributed team with a wide range of specialties. The GCIS also served as a scalable backend for the web based version of the report. In this talk, we discuss the infrastructure decisions made during the design and deployment of the GCIS, as well as ongoing work to adapt to new types of information. Both a constrained relational database and an open ended triple store are used to ensure data integrity while maintaining fluidity. Using natural primary keys allows identifiers to propagate through both models. Changing identifiers are accommodated through fine grained auditing and explicit mappings to external lexicons. A practical RESTful API is used whose endpoints are also URIs in an ontology. Both the relational schema and the ontology are malleable, and stability is ensured through test driven development and continuous integration testing using modern open source techniques. Content is also validated through continuous testing techniques. A high degree of scalability is achieved through caching.

  5. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era

    PubMed Central

    2015-01-01

    Background The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public’s health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. Objective The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. Methods To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. Results The results showed strong support for the study hypotheses at both levels of engagement that framed the proposed adaptations. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. 
The influence

  6. Temporal adaptability and the inverse relationship to sensitivity: a parameter identification model.

    PubMed

    Langley, Keith

    2005-01-01

    Following a prolonged period of visual adaptation to a temporally modulated sinusoidal luminance pattern, the threshold contrast of a similar visual pattern is elevated. The adaptive elevation in threshold contrast is selective for spatial frequency, may saturate at low adaptor contrast, and increases as a function of the spatio-temporal frequency of the adapting signal. A model for signal extraction that is capable of explaining these threshold contrast effects of adaptation is proposed. Contrast adaptation in the model is explained by the identification of the parameters of an environmental model: the autocorrelation function of the visualized signal. The proposed model predicts that the adaptability of threshold contrast is governed by unpredicted signal variations present in the visual signal, and thus represents an internal adjustment by the visual system that takes into account these unpredicted signal variations given the additional possibility for signal corruption by additive noise.

  7. Decentralized Adaptive Control of Systems with Uncertain Interconnections, Plant-Model Mismatch and Actuator Failures

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has the knowledge of all reference model states. The mismatch is estimated online for each subsystem and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.

  8. Comparison of Measured Galactic Background Radiation at L-Band with Model

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, William J.; Skou, Niels; Sobjaerg, Sten

    2004-01-01

    Radiation from the celestial sky in the spectral window at 1.413 GHz is strong, and an accurate accounting of this background radiation is needed for calibration and retrieval algorithms. Modern radio astronomy measurements in this window have been converted into a brightness temperature map of the celestial sky at L-band suitable for such applications. This paper presents a comparison of the background predicted by this map with the measurements of several modern L-band remote sensing radiometers.

  9. A Method for Estimating Urban Background Concentrations in Support of Hybrid Air Pollution Modeling for Environmental Health Studies

    PubMed Central

    Arunachalam, Saravanan; Valencia, Alejandro; Akita, Yasuyuki; Serre, Marc L.; Omary, Mohammad; Garcia, Valerie; Isakov, Vlad

    2014-01-01

    Exposure studies rely on detailed characterization of air quality, either from sparsely located routine ambient monitors or from central monitoring sites that may lack spatial representativeness. Alternatively, some studies use models of various complexities to characterize local-scale air quality, but often with poor representation of background concentrations. A hybrid approach that addresses this drawback combines a regional-scale model to provide background concentrations and a local-scale model to assess impacts of local sources. However, this approach may double-count sources in the study regions. To address these limitations, we carefully define the background concentration as the concentration that would be measured if local sources were not present, and to estimate these background concentrations we developed a novel technique that combines space-time ordinary kriging (STOK) of observations with outputs from a detailed chemistry-transport model with local sources zeroed out. We applied this technique to support an exposure study in Detroit, Michigan, for several pollutants (including NOx and PM2.5), and evaluated the estimated hybrid concentrations (calculated by combining the background estimates that addresses this issue of double counting with local-scale dispersion model estimates) using observations. Our results demonstrate the strength of this approach specifically by eliminating the problem of double-counting reported in previous hybrid modeling approaches leading to improved estimates of background concentrations, and further highlight the relative importance of NOx vs. PM2.5 in their relative contributions to total concentrations. While a key limitation of this approach is the requirement for another detailed model simulation to avoid double-counting, STOK improves the overall characterization of background concentrations at very fine spatial scales. PMID:25321872
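
    The background-plus-local decomposition can be sketched compactly. Below, a background field is interpolated from "local-sources-removed" observations by plain 1-D ordinary kriging with an assumed exponential covariance (STOK and the dispersion model used in the paper are far richer), and a hypothetical local-source increment is added to form the hybrid estimate.

```python
import numpy as np

def ordinary_krige(xs, zs, x0, sigma2=1.0, rng_len=2.0):
    """1-D ordinary kriging with an assumed exponential covariance model."""
    cov = lambda h: sigma2 * np.exp(-np.abs(h) / rng_len)
    n = len(xs)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(xs[:, None] - xs[None, :])
    K[:n, n] = K[n, :n] = 1.0            # unbiasedness constraint
    K[n, n] = 0.0
    rhs = np.append(cov(xs - x0), 1.0)
    w = np.linalg.solve(K, rhs)[:n]      # kriging weights
    return w @ zs

xs = np.array([0.0, 1.0, 2.0, 4.0])
zs = np.array([5.0, 6.0, 5.5, 4.0])      # background observations (no local sources)
background = ordinary_krige(xs, zs, x0=1.0)
local = 1.2                               # hypothetical local dispersion-model increment
hybrid = background + local
print(background, hybrid)                 # kriging is exact at an observation site
```

    Because local sources are removed before the background is estimated, adding the local-model increment afterwards does not double-count them, which is the point of the approach described above.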

  10. Ampholytes as background electrolytes in capillary zone electrophoresis: sense or nonsense? Histidine as a model ampholyte.

    PubMed

    Beckers, Jozef L

    2003-01-01

    Many phenomena occurring in capillary zone electrophoresis (CZE) are linked with the ionic concentration of the background electrolyte (BGE). If weak bases and acids are used as BGEs in CZE at a pH where they are scarcely ionized, the ionic concentration of the BGE is very low, which causes strong peak broadening, limited sample stacking, and low sample load. Moreover, because electromigration dispersion increases sharply, the existence of low-conductivity BGEs in CZE is a contradiction in terms. The behavior of ampholytes as BGEs in CZE is examined using histidine as a model ampholyte. For BGEs consisting of histidine, important parameters, including the ionic concentration, buffer capacity, transfer ratio, and the indicator for electromigration dispersion E(1)m(1)/E(2)m(2), are calculated at various pH. Although the transfer ratio is fairly constant over the whole pH range, the ionic concentration and buffer capacity decrease whereas the electromigration dispersion strongly increases near the pI of histidine. That is, ampholytes can be applied as BGEs in CZE, but not at a pH near their pI, unless the difference between the pK values of the basic and acidic groups (the ΔpK value) is very small. For ampholytes with a low ΔpK value or at high concentrations, all the aforementioned effects are less severe, but in that case one can no longer speak of a truly low-conductivity BGE. If ampholytes are used at a pH near their pK values, their use as BGEs offers no advantage over simple weak bases and acids. This has been confirmed by calculations and experiments. PMID:12569544
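
    The kind of calculation described can be sketched from the two pK values bracketing histidine's pI (imidazole ≈ 6.0, alpha-amino ≈ 9.2; textbook approximations, not the paper's exact constants). The cation/neutral/anion species fractions give the ionic concentration of the BGE directly and show its collapse near the pI.

```python
# Species fractions of an ampholyte H2A+ <-> HA <-> A- around its pI,
# using approximate histidine pK values (assumption: carboxyl group ignored).
def his_fractions(pH, pK1=6.0, pK2=9.2):
    H = 10.0 ** (-pH)
    K1, K2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    D = H * H + K1 * H + K1 * K2
    return H * H / D, K1 * H / D, K1 * K2 / D   # cation, neutral, anion

def ionic_concentration(pH, c_total=0.01):
    """Total charged-species concentration for a c_total molar BGE."""
    f_pos, _, f_neg = his_fractions(pH)
    return c_total * (f_pos + f_neg)

pI = (6.0 + 9.2) / 2.0                            # ~7.6
print(ionic_concentration(pI) < ionic_concentration(6.0))  # True: minimum near pI
```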

  11. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

    The Best Linear Unbiased Estimator (BLUE) has widely been used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), minimizing the mean square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positivity of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating it to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds for skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess of error variance, under the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. Application has been performed to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. 
In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
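
    At their simplest, the innovation diagnostics mentioned above reduce to the sample skewness and excess kurtosis of the innovation sequence; the sketch below computes them (the maximum-entropy pdf reconstruction itself is not reproduced here).

```python
import numpy as np

def skew_kurt(d):
    """Sample skewness and excess kurtosis as non-Gaussianity measures."""
    d = np.asarray(d, dtype=float)
    z = (d - d.mean()) / d.std()
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0     # excess kurtosis (0 for a Gaussian)
    return skew, kurt

symmetric = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
right_skewed = np.array([1.0, 1.0, 1.0, 1.0, 10.0])
print(skew_kurt(symmetric)[0], skew_kurt(right_skewed)[0] > 0)
```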

  12. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines

    PubMed Central

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements give an impression of elegance, including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines. PMID:23408775
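
    A two-neuron recurrent network with tanh units (an SO(2)-type oscillator, a common CPG building block in this line of work) captures the rhythm-generating core; the weight values below are illustrative, not the controller's tuned parameters.

```python
import numpy as np

def cpg_trajectory(alpha=1.2, phi=0.3, steps=1000):
    """SO(2)-type two-neuron CPG: scaled rotation matrix through tanh units."""
    w = alpha * np.array([[np.cos(phi),  np.sin(phi)],
                          [-np.sin(phi), np.cos(phi)]])
    o = np.array([0.1, 0.1])
    out = []
    for _ in range(steps):
        o = np.tanh(w @ o)        # alpha > 1 destabilizes the origin; tanh bounds it
        out.append(o[0])
    return np.array(out)

traj = cpg_trajectory()
# Bounded, self-sustained oscillation: many zero crossings, |output| <= 1.
crossings = int(np.sum(np.diff(np.sign(traj)) != 0))
print(crossings > 10, float(np.max(np.abs(traj))) <= 1.0)
```

    In the walking-machine setting, the two outputs would drive leg joints with phase shifts, and parameters such as phi (the oscillation frequency) would be modulated by higher-level and sensory signals.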

  13. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines.

    PubMed

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements give an impression of elegance, including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines. PMID:23408775

  14. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
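
    The preprocessing stage described, per-channel high-pass filtering with channel-specific time constants followed by half-wave rectification, can be sketched directly on a spectrogram; the time constants and input here are illustrative, not the fitted model's values.

```python
import numpy as np

def ic_adapt(spec, taus, dt=0.01):
    """spec: (channels, time) spectrogram; taus: per-channel time constants (s)."""
    n_ch, n_t = spec.shape
    mean_est = np.zeros(n_ch)
    out = np.empty_like(spec)
    for t in range(n_t):
        mean_est += (dt / taus) * (spec[:, t] - mean_est)   # leaky mean tracker
        out[:, t] = np.maximum(spec[:, t] - mean_est, 0.0)  # half-wave rectify
    return out

spec = np.ones((3, 500))                 # constant (DC) input in every channel
taus = np.array([0.05, 0.2, 1.0])
y = ic_adapt(spec, taus)
# Adaptation: the response to a constant sound decays toward zero, faster for
# channels with shorter time constants.
print(y[0, -1] < 1e-6, y[0, 100] < y[2, 100])
```

    The rectified, mean-subtracted spectrogram would then be fed to the linear-nonlinear stage in place of the raw spectrogram.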

  15. Modeling Lost-Particle Backgrounds in PEP-II Using LPTURTLE

    SciTech Connect

    Fieguth, T.; Barlow, R.; Kozanecki, W.

    2005-05-17

    Background studies during the design, construction, commissioning, operation and improvement of BaBar and PEP-II have been greatly influenced by results from a program referred to as LPTURTLE (Lost Particle TURTLE) which was originally conceived for the purpose of studying gas background for SLC. This venerable program is still in use today. We describe its use, capabilities and improvements and refer to current results now being applied to BaBar.

  16. A Model of Family Background, Family Process, Youth Self-Control, and Delinquent Behavior in Two-Parent Families

    ERIC Educational Resources Information Center

    Jeong, So-Hee; Eamon, Mary Keegan

    2009-01-01

    Using data from a national sample of two-parent families with 11- and 12-year-old youths (N = 591), we tested a structural model of family background, family process (marital conflict and parenting), youth self-control, and delinquency four years later. Consistent with the conceptual model, marital conflict and youth self-control are directly…

  17. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes the testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
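    The idea of a simple adaptive augmentation with a gradient-based parameter update law can be illustrated with a textbook scalar model reference adaptive control (MRAC) example. This is a generic classroom sketch, not the NASA controllers described in the record; all plant, reference-model, and gain values are invented for illustration.

```python
import numpy as np

def simulate_mrac(a=-1.0, b=1.0, am=-2.0, bm=2.0, gamma=2.0,
                  r=1.0, dt=1e-3, t_end=30.0):
    """Scalar MRAC: plant x' = a*x + b*u with a unknown (b > 0 assumed known
    in sign); reference model xm' = am*xm + bm*r; control u = kx*x + kr*r.
    The gains adapt via gradient laws driven by tracking error e = x - xm."""
    n = int(t_end / dt)
    x = xm = kx = kr = 0.0
    e_hist = np.empty(n)
    for i in range(n):
        e = x - xm
        u = kx * x + kr * r
        # plant and reference model (forward Euler integration)
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
        # gradient adaptation laws (sign(b) = +1 assumed)
        kx += dt * (-gamma * e * x)
        kr += dt * (-gamma * e * r)
        e_hist[i] = e
    return e_hist
```

    The tracking error converges even though the plant pole `a` is never identified explicitly, which is the property that makes such simple augmentations attractive as a baseline add-on.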

  18. Modeling the Raman spectrum of graphitic material in rock samples with fluorescence backgrounds: accuracy of fitting and uncertainty estimation.

    PubMed

    Gasda, Patrick J; Ogliore, Ryan C

    2014-01-01

    We propose a robust technique called Savitzky-Golay second-derivative (SGSD) fitting for modeling the in situ Raman spectrum of graphitic materials in rock samples such as carbonaceous chondrite meteorites. In contrast to non-derivative techniques, with assumed locally linear or nth-order polynomial fluorescence backgrounds, SGSD produces consistently good fits of spectra with variable background fluorescence of any slowly varying form, without fitting or subtracting the background. In combination with a Monte Carlo technique, SGSD calculates Raman parameters (such as peak width and intensity) with robust uncertainties. To explain why SGSD fitting is more accurate, we compare how different background subtraction techniques model the background fluorescence with the wide and overlapping peaks present in a real Raman spectrum of carbonaceous material. Then, the utility of SGSD is demonstrated with a set of real and simulated data compared to commonly used linear background techniques. Researchers may find the SGSD technique useful if their spectra contain intense background interference with unknown functional form or wide overlapping peaks, and when the uncertainty of the spectral data is not well understood.
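    The core trick, working in second-derivative space where any locally linear fluorescence background vanishes, can be sketched in a few lines. This is a simplified illustration of the Savitzky-Golay second-derivative idea, not the authors' implementation (which additionally performs peak fitting and Monte Carlo uncertainty estimation); the window and polynomial order are assumed values.

```python
import numpy as np

def savgol_second_derivative(y, window=11, polyorder=3):
    """Savitzky-Golay second derivative: fit a polynomial to each sliding
    window and evaluate its second derivative at the window centre.
    A locally linear background contributes zero to the result."""
    half = window // 2
    x = np.arange(-half, half + 1)
    d2 = np.full(len(y), np.nan, dtype=float)
    for i in range(half, len(y) - half):
        # np.polyfit returns coefficients highest degree first
        coeffs = np.polyfit(x, y[i - half:i + half + 1], polyorder)
        # second derivative of the fitted polynomial at x = 0 is 2*c2
        d2[i] = 2.0 * coeffs[-3]
    return d2
```

    In practice one would use `scipy.signal.savgol_filter(y, window, polyorder, deriv=2)`, which computes the same quantity far more efficiently.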

  19. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as in the state-tying level hybrid method; however, for the acoustic model adaptation, the triphone acoustic models are then re-estimated based on the adapted pronunciation models and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. From the Korean-spoken English speech recognition experiments, it is shown that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rate (WER) by a relative 17.1% and 22.1% for non-native speech, respectively, when compared to a baseline ASR system.

  20. Command generator tracker based direct model reference adaptive control of a PUMA 560 manipulator. Thesis

    NASA Technical Reports Server (NTRS)

    Swift, David C.

    1992-01-01

    This project dealt with the application of a Direct Model Reference Adaptive Control algorithm to the control of a PUMA 560 Robotic Manipulator. This chapter will present some motivation for using Direct Model Reference Adaptive Control, followed by a brief historical review, the project goals, and a summary of the subsequent chapters.

  1. A Systematic Ecological Model for Adapting Physical Activities: Theoretical Foundations and Practical Examples

    ERIC Educational Resources Information Center

    Hutzler, Yeshayahu

    2007-01-01

    This article proposes a theory- and practice-based model for adapting physical activities. The ecological frame of reference includes Dynamic and Action System Theory, World Health Organization International Classification of Function and Disability, and Adaptation Theory. A systematic model is presented addressing (a) the task objective, (b) task…

  2. Characterization of background air pollution exposure in urban environments using a metric based on Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Gómez-Losada, Álvaro; Pires, José Carlos M.; Pino-Mejías, Rafael

    2016-02-01

    Urban area air pollution results from local air pollutants (from different sources) and horizontal transport (background pollution). Understanding urban air pollution background (lowest) concentration profiles is key in population exposure assessment and epidemiological studies. To this end, air pollution registered at background monitoring sites is studied, but background pollution levels are given as the average of the air pollutant concentrations measured at these sites over long periods of time. This short communication shows how a metric based on Hidden Markov Models (HMMs) can characterise the air pollutant background concentration profiles. HMMs were applied to daily average concentrations of CO, NO2, PM10 and SO2 at thirteen urban monitoring sites from three cities from 2010 to 2013. Using the proposed metric, the mean values of background and ambient air pollution registered at these sites for these primary pollutants were estimated and the ratio of ambient to background air pollution and the difference between them were studied. The ratio indicator for the studied air pollutants during the four-year study sets the background air pollution at 48%-69% of the ambient air pollution, while the difference between these values ranges from 101 to 193 μg/m3, 7-12 μg/m3, 11-13 μg/m3 and 2-3 μg/m3 for CO, NO2, PM10 and SO2, respectively.

  3. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central-limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  4. Characterizing Decision-Analysis Performances of Risk Prediction Models Using ADAPT Curves

    PubMed Central

    Lee, Wen-Chung; Wu, Yun-Chun

    2016-01-01

    Abstract The area under the receiver operating characteristic curve is a widely used index to characterize the performance of diagnostic tests and prediction models. However, the index does not explicitly acknowledge the utilities of risk predictions. Moreover, for most clinical settings, what counts is whether a prediction model can guide therapeutic decisions in a way that improves patient outcomes, rather than to simply update probabilities. Based on decision theory, the authors propose an alternative index, the “average deviation about the probability threshold” (ADAPT). An ADAPT curve (a plot of ADAPT value against the probability threshold) neatly characterizes the decision-analysis performances of a risk prediction model. Several prediction models can be compared for their ADAPT values at a chosen probability threshold, for a range of plausible threshold values, or for the whole ADAPT curves. This should greatly facilitate the selection of diagnostic tests and prediction models. PMID:26765451

  5. Visual Tracking Based on the Adaptive Color Attention Tuned Sparse Generative Object Model.

    PubMed

    Tian, Chunna; Gao, Xinbo; Wei, Wei; Zheng, Hong

    2015-12-01

    This paper presents a new visual tracking framework based on an adaptive color attention tuned local sparse model. The histograms of sparse coefficients of all patches in an object are pooled together according to their spatial distribution. A particle filter methodology is used as the location model to predict candidates for object verification during tracking. Since color is an important visual cue to distinguish objects from background, we calculate the color similarity between objects in the previous frames and the candidates in the current frame, which is adopted as color attention to tune the local sparse representation-based appearance similarity measurement between the object template and candidates. The color similarity can be calculated efficiently with hash coded color names, which helps the tracker find more reliable objects during tracking. We use a flexible local sparse coding of the object to evaluate the degeneration degree of the appearance model, based on which we build a model-updating mechanism to alleviate drifting caused by temporally varying factors. Experiments on 76 challenging benchmark color sequences and the evaluation under the object tracking benchmark protocol demonstrate the superiority of the proposed tracker over the state-of-the-art methods in accuracy. PMID:26390460
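    The location model mentioned above is a particle filter; its bootstrap variant can be sketched in one dimension. This is a generic illustration with hypothetical parameters, not the paper's tracker (which propagates image-patch states rather than scalars).

```python
import numpy as np

def particle_filter(observations, n_particles=500, proc_std=0.1,
                    obs_std=0.5, rng=None):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise: predict, weight by likelihood, resample, estimate."""
    rng = np.random.default_rng(0) if rng is None else rng
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for z in observations:
        particles = particles + rng.normal(0.0, proc_std, n_particles)  # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)             # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)                 # resample
        particles = particles[idx]
        estimates.append(particles.mean())                              # estimate
    return np.array(estimates)
```

    In a tracker, the weighting step is where an appearance model (here, the color-attention-tuned sparse similarity) scores each candidate.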

  6. Modelling the background aerosol climatologies (1989-2010) for the Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Jimenez-Guerrero, Pedro; Jerez, Sonia

    2014-05-01

    seasonally; here the sea spray clearly follows the wind speed variation. The results confirm the capability of the modelling strategies to reproduce the particulate matter levels, composition and variation in the Mediterranean area. This kind of information is useful for establishing improvement strategies for the prediction of aerosols and for achieving the standards set in European Directives for modeling applications. Kulmala, M., Asmi, A., Lappalainen, H.K., Carslaw, K.S., Pöschl, U., Baltensperger, U., Hov, O., Brenquier, J.-L., Pandis, S.N., Facchini, M.C., Hanson, H.-C., Wiedensohler, A., O'Dowd, C.D., 2009. Introduction: European Integrated Project on Aerosol Cloud Climate and Air Quality interactions (EUCAARI) - integrating aerosol research from nano to global scales. Atmos. Chem. Phys., 9, 2825-2841. Querol, X., Alastuey, A., Pey, J., Cusack, M., Pérez, N., Mihalopoulos, N., Theodosi, C., Gerasopoulos, E., Kubilay, N., Koçak, M., 2009. Variability in regional background aerosols within the Mediterranean. Atmos. Chem. Phys., 9, 4575-4591.

  7. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  8. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of proposed controllers consists in having a dynamical adaptive control gain to establish a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

  9. Fully nonlinear and exact perturbations of the Friedmann world model: non-flat background

    NASA Astrophysics Data System (ADS)

    Noh, Hyerim

    2014-07-01

    We extend the fully non-linear and exact cosmological perturbation equations in a Friedmann background universe to include the background curvature. The perturbation equations are presented in a gauge ready form, so any temporal gauge condition can be adopted freely depending on the problem to be solved. We consider the scalar, and vector perturbations without anisotropic stress. As an application, we analyze the equations in the special case of irrotational zero-pressure fluid in the comoving gauge condition. We also present the fully nonlinear formulation for a minimally coupled scalar field.

  10. Video Adaptation Model Based on Cognitive Lattice in Ubiquitous Computing

    NASA Astrophysics Data System (ADS)

    Kim, Svetlana; Yoon, Yong-Ik

    The multimedia service delivery chain poses many challenges today. There is increasing terminal diversity, network heterogeneity, and pressure to satisfy user preferences. This situation creates a need for personalized content that provides the user with the best possible experience in ubiquitous computing. This paper introduces a personalized content preparation and delivery framework for multimedia service. Personalized video adaptation is expected to satisfy individual users' needs for video content. The cognitive lattice plays a significant role in video annotation to meet users' preferences on video content. In this paper, a comprehensive solution for PVA (Personalized Video Adaptation) is proposed based on the cognitive lattice concept. The PVA is implemented based on the MPEG-21 Digital Item Adaptation framework. One of the challenges is how to quantify users' preferences on video content.

  11. Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees

    SciTech Connect

    Bremer, P; Pascucci, V; Hamann, B

    2008-12-08

    We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported. In particular, it is guaranteed to be of logarithmic height.

  12. Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model

    NASA Technical Reports Server (NTRS)

    Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.

    2006-01-01

    An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics when in the engine differential mode. Based on this model an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation, rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.

  13. Management of bulimia nervosa: a case study with the Roy adaptation model.

    PubMed

    Seah, Xin Yi; Tham, Xiang Cong

    2015-04-01

    Bulimia nervosa is a crippling and chronic disorder, with individuals experiencing repeated binge-purge episodes. It is not widely understood by society. The use of the Roy adaptation model for the management of bulimia nervosa is examined in this article. Nursing models are utilized to provide a structure for planning and implementing patient management. The Roy adaptation model focuses on individuals' capacity to adapt to their changing environments. This model can be useful in managing patients with bulimia nervosa.

  14. A model for practice guideline adaptation and implementation: empowerment of the physician.

    PubMed

    Wise, C G; Billi, J E

    1995-09-01

    The Medical Center model of practice guideline adaptation and implementation uses local clinical leaders to evaluate nationally endorsed guidelines, adapt those guidelines for use in the local setting, work with support staff to develop and apply methods for guideline implementation, and assist the evaluation of clinical practice and outcomes data. The model described here combines the guideline dissemination techniques of clinical leadership, implementation, and data support and feedback. This model overcomes the failures of previous models by incorporating local physician involvement during every step of practice guideline selection, adaptation, implementation, and evaluation, and by supporting the physician leaders with quality data, resources to support guideline implementation, and outcomes assessment and feedback.

  15. An adapted Coffey model for studying susceptibility losses in interacting magnetic nanoparticles

    PubMed Central

    Osaci, Mihaela

    2015-01-01

    Background: Nanoparticles can be used in biomedical applications, such as contrast agents for magnetic resonance imaging, in tumor therapy or against cardiovascular diseases. Single-domain nanoparticles dissipate heat through susceptibility losses in two modes: Néel relaxation and Brownian relaxation. Results: Since a consistent theory for the Néel relaxation time that is applicable to systems of interacting nanoparticles has not yet been developed, we adapted the Coffey theoretical model for the Néel relaxation time in external magnetic fields in order to consider local dipolar magnetic fields. Then, we obtained the effective relaxation time. The effective relaxation time is further used for obtaining values of specific loss power (SLP) through linear response theory (LRT). A comparative analysis between our model and the discrete orientation model, more often used in literature, and a comparison with experimental data from literature have been carried out, in order to choose the optimal magnetic parameters of a nanoparticle system. Conclusion: In this way, we can study effects of the nanoparticle concentration on SLP in an acceptable range of frequencies and amplitudes of external magnetic fields for biomedical applications, especially for tumor therapy by magnetic hyperthermia. PMID:26665090
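    For non-interacting particles, the two relaxation channels named in the record combine in the standard textbook way. The formulas below are the conventional Néel-Arrhenius and Brownian expressions, not the paper's adapted Coffey model (which modifies the Néel term for dipolar interactions); all parameter values are illustrative.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def neel_time(K, V, T, tau0=1e-9):
    """Neel-Arrhenius relaxation time: tau_N = tau0 * exp(K*V / (kB*T)).
    K: anisotropy constant (J/m^3), V: magnetic core volume (m^3)."""
    return tau0 * math.exp(K * V / (KB * T))

def brownian_time(eta, V_h, T):
    """Brownian rotational relaxation: tau_B = 3*eta*V_h / (kB*T).
    eta: carrier viscosity (Pa*s), V_h: hydrodynamic volume (m^3)."""
    return 3.0 * eta * V_h / (KB * T)

def effective_time(tau_n, tau_b):
    """Parallel combination 1/tau_eff = 1/tau_N + 1/tau_B:
    the faster mechanism dominates the susceptibility losses."""
    return tau_n * tau_b / (tau_n + tau_b)
```

    The effective time is always shorter than either channel alone, which is why the faster mechanism controls the heating for a given field frequency.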

  16. Application of the Bifactor Model to Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Seo, Dong Gi

    2011-01-01

    Most computerized adaptive tests (CAT) have been studied under the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CAT. In addition, a number of psychological variables (e.g., quality of life, depression) can be conceptualized…

  17. Energetic Metabolism and Biochemical Adaptation: A Bird Flight Muscle Model

    ERIC Educational Resources Information Center

    Rioux, Pierre; Blier, Pierre U.

    2006-01-01

    The main objective of this class experiment is to measure the activity of two metabolic enzymes in crude extract from bird pectoral muscle and to relate the differences to their mode of locomotion and ecology. The laboratory is adapted to stimulate the interest of wildlife management students to biochemistry. The enzymatic activities of cytochrome…

  18. A Comprehensive and Systematic Model of User Evaluation of Web Search Engines: I. Theory and Background.

    ERIC Educational Resources Information Center

    Su, Louise T.

    2003-01-01

    Reports on a project that proposes and tests a comprehensive and systematic model of user evaluation of Web search engines. This article describes the model, including a set of criteria and measures and a method for implementation. A literature review portrays settings for developing the model and places applications of the model in contemporary…

  19. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    Proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multiresolution presentation of heterogeneity as well as all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multiscale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since "state of the art" multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where the solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  20. A model for homeopathic remedy effects: low dose nanoparticles, allostatic cross-adaptation, and time-dependent sensitization in a complex adaptive system

    PubMed Central

    2012-01-01

    Background This paper proposes a novel model for homeopathic remedy action on living systems. Research indicates that homeopathic remedies (a) contain measurable source and silica nanoparticles heterogeneously dispersed in colloidal solution; (b) act by modulating the biological function of the allostatic stress response network; (c) evoke biphasic actions on living systems via organism-dependent adaptive and endogenously amplified effects; and (d) improve systemic resilience. Discussion The proposed active components of homeopathic remedies are nanoparticles of source substance in water-based colloidal solution, not bulk-form drugs. Nanoparticles have unique biological and physico-chemical properties, including increased catalytic reactivity, protein and DNA adsorption, bioavailability, dose-sparing, electromagnetic, and quantum effects different from bulk-form materials. Trituration and/or liquid succussions during classical remedy preparation create “top-down” nanostructures. Plants can biosynthesize remedy-templated silica nanostructures. Nanoparticles stimulate hormesis, a beneficial low-dose adaptive response. Homeopathic remedies prescribed in low doses spaced intermittently over time act as biological signals that stimulate the organism’s allostatic biological stress response network, evoking nonlinear modulatory, self-organizing change. Potential mechanisms include time-dependent sensitization (TDS), a type of adaptive plasticity/metaplasticity involving progressive amplification of host responses, which reverse direction and oscillate at physiological limits. To mobilize hormesis and TDS, the remedy must be appraised as a salient, but low level, novel threat, stressor, or homeostatic disruption for the whole organism. Silica nanoparticles adsorb remedy source and amplify effects. Properly-timed remedy dosing elicits disease-primed compensatory reversal in direction of maladaptive dynamics of the allostatic network, thus promoting resilience and recovery from

  1. Modeling of neutron induced backgrounds in x-ray framing cameras

    NASA Astrophysics Data System (ADS)

    Hagmann, C.; Izumi, N.; Bell, P.; Bradley, D.; Conder, A.; Eckart, M.; Khater, H.; Koch, J.; Moody, J.; Stone, G.

    2010-10-01

    Fast neutrons from inertial confinement fusion implosions pose a severe background to conventional microchannel plate (MCP)-based x-ray framing cameras for deuterium-tritium yields greater than 10^13. Nuclear reactions of neutrons in photosensitive elements (charge-coupled device or film) cause some of the image noise. In addition, inelastic neutron collisions in the detector and nearby components create a large gamma pulse. The background from the resulting secondary charged particles is twofold: (1) production of light through the Cherenkov effect in optical components and by excitation of the MCP phosphor and (2) direct excitation of the photosensitive elements. We give theoretical estimates of the various contributions to the overall noise and present mitigation strategies for operating in high yield environments.

  2. Comparison of Model Prediction with Measurements of Galactic Background Noise at L-Band

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, Willam J.; Skou, Niels; Sobjaerg, S.

    2004-01-01

    The spectral window at L-band (1.413 GHz) is important for passive remote sensing of surface parameters such as soil moisture and sea surface salinity that are needed to understand the hydrological cycle and ocean circulation. Radiation from celestial (mostly galactic) sources is strong in this window and an accurate accounting for this background radiation is often needed for calibration. Modern radio astronomy measurements in this spectral window have been converted into a brightness temperature map of the celestial sky at L-band suitable for use in correcting passive measurements. This paper presents a comparison of the background radiation predicted by this map with measurements made with several modern L-band remote sensing radiometers. The agreement validates the map and the procedure for locating the source of down-welling radiation.

  3. Dynamic modeling, property investigation, and adaptive controller design of serial robotic manipulators modeled with structural compliance

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her

    1990-01-01

    Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first and second order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, which is defined as inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller is also built for general serial robot arms and is adaptive in nature: it can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking properties. The relation between manipulator motion tracking capability and convergence of parameter estimation properties is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.

  4. Generalized Galileons: All scalar models whose curved background extensions maintain second-order field equations and stress tensors

    SciTech Connect

    Deffayet, C.; Deser, S.; Esposito-Farese, G.

    2009-09-15

    We extend to curved backgrounds all flat-space scalar field models that obey purely second-order equations, while maintaining their second-order dependence on both field and metric. This extension also restores the originally higher-derivative stress tensors to second order. The process is transparent and uniform for all dimensions.

  5. Estimating North American background ozone in U.S. surface air with two independent global models: Variability, uncertainties, and recommendations

    EPA Science Inventory

    Accurate estimates for North American background (NAB) ozone (O3) in surface air over the United States are needed for setting and implementing an attainable national O3 standard. These estimates rely on simulations with atmospheric chemistry-transport models that set North Amer...

  6. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The application was investigated of control theoretic ideas to the design of flight control systems for the F-8 aircraft. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.

  7. Illness behavior, social adaptation, and the management of illness. A comparison of educational and medical models.

    PubMed

    Mechanic, D

    1977-08-01

    Motivational needs and coping are important aspects of illness response. Clinicians must help guide illness response by suggesting constructive adaptive opportunities and by avoiding reinforcement of maladaptive patterns. This paper examines how the patient's search for meaning, social attributions, and social comparisons shapes adaptation to illness and subsequent disability. It proposes a coping-adaptation model involving the following five resources relevant to rehabilitation: economic assets, abilities and skills, defensive techniques, social supports, and motivational impetus. It is maintained that confusion between illness and illness behavior obfuscates the alternatives available to guide patients through smoother adaptations and resumption of social roles. PMID:328824

  8. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Omega sub 0 + lambda = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega sub 0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  9. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    PubMed

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of the model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal
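
    The adaptation and anticipation mechanisms described above can be sketched in a few lines. This is not the authors' ADAM implementation; the function name, gains, and the linear tempo extrapolation below are illustrative assumptions.

```python
def simulate_tapping(onsets, alpha=0.7, beta=1.0):
    """Tap along with `onsets` using adaptation (phase correction by a
    fraction `alpha` of the last asynchrony) plus anticipation (linear
    extrapolation of the inter-onset interval, with gain `beta`)."""
    taps = [onsets[0], onsets[1]]      # assume the first two taps are in phase
    for n in range(2, len(onsets)):
        last = onsets[n - 1] - onsets[n - 2]
        if n >= 3:
            prev = onsets[n - 2] - onsets[n - 3]
            predicted_ioi = last + beta * (last - prev)  # anticipate tempo change
        else:
            predicted_ioi = last                         # nothing to extrapolate yet
        asynchrony = taps[n - 1] - onsets[n - 1]
        taps.append(taps[n - 1] + predicted_ioi - alpha * asynchrony)
    return taps

# accelerating pacing sequence: the inter-onset interval shrinks by 10 ms per event
onsets, ioi = [0.0], 0.6
for _ in range(20):
    onsets.append(onsets[-1] + ioi)
    ioi -= 0.01

taps = simulate_tapping(onsets)
asyncs = [t - s for t, s in zip(taps, onsets)]
```

    With beta = 1 the extrapolation tracks a linearly changing tempo exactly, so the one asynchrony introduced before anticipation becomes possible decays geometrically at rate 1 - alpha; with adaptation only (beta = 0), a persistent asynchrony remains throughout the tempo change.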

  10. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language.

  11. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler*

    PubMed Central

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2014-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency. PMID:24653788
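
    The exchange-algorithm idea underlying the sampler can be illustrated on a toy one-parameter model rather than an ERGM: auxiliary data drawn exactly at the proposed parameter cancel the intractable normalizing constant in the acceptance ratio. All names and settings below are illustrative, and the model is Exponential(theta) with a flat prior, not a network model.

```python
import math
import random

def exchange_sampler(data, iters=4000, step=0.3, seed=7):
    """Toy exchange algorithm for an Exponential(theta) model whose
    normalizer we pretend is intractable: q(x | theta) = exp(-theta * sum(x)).
    Auxiliary data drawn exactly from the model at the proposed theta
    cancel the unknown normalizing constant in the acceptance ratio."""
    rng = random.Random(seed)
    n, sx = len(data), sum(data)
    theta, samples = 1.0, []
    for _ in range(iters):
        prop = theta + rng.gauss(0.0, step)
        if prop > 0.0:                       # reject proposals outside the support
            # exact auxiliary draws y ~ model(prop)
            sy = sum(rng.expovariate(prop) for _ in range(n))
            # log ratio of unnormalized terms (flat prior on theta > 0)
            log_a = (theta - prop) * (sx - sy)
            if math.log(rng.random()) < log_a:
                theta = prop
        samples.append(theta)
    return samples

rng = random.Random(0)
data = [rng.expovariate(2.0) for _ in range(200)]   # true rate theta = 2
draws = exchange_sampler(data)
est = sum(draws[1000:]) / len(draws[1000:])         # posterior-mean estimate
```

    The adaptive exchange sampler in the paper replaces the exact auxiliary draw, which is infeasible for networks, with importance-weighted samples from a parallel auxiliary chain; the acceptance-ratio cancellation shown here is the common core.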

  12. Adaptation to stroke using a model of successful aging.

    PubMed

    Donnellan, C; Hevey, D; Hickey, A; O'Neill, D

    2012-01-01

    The process of adaptation to the physical and psychosocial consequences after stroke is a major challenge for many individuals affected. The aim of this study was to examine if stroke patients within 1 month of admission (n = 153) and followed up at 1 year (n = 107) engage in selection, optimization, and compensation (SOC) adaptive strategies and the relationship of these strategies with functional ability, health-related quality of life (HRQOL) and depression 1 year later. Adaptive strategies were measured using a 15-item SOC questionnaire. Internal and external resources were assessed including recovery locus of control, stroke severity, and socio-demographics. Outcome measures were the Stroke Specific Quality of Life Questionnaire (SS-QoL), the Nottingham Extended Activities of Daily Living Scale and the Depression Subscale of the Hospital Anxiety and Depression Scale. Findings indicated that stroke patients engaged in the use of SOC strategies but the use of these strategies was not predictive of HRQOL, functional ability or depression 1 year after stroke. The use of SOC strategies was not age specific and was consistent over time, with the exception of the compensation subscale. Results indicate that SOC strategies may potentially be used in response to loss regulation after stroke and that an individual's initial HRQOL, functional ability, level of depression and socio-economic status are important factors in determining outcome 1 year after stroke. A stroke-specific measure of SOC may be warranted in order to detect significant differences in determining outcomes for a stroke population.

  13. Background and Derivation of ANS-5.4 Standard Fission Product Release Model

    SciTech Connect

    Beyer, Carl E.; Turnbull, Andrew J.

    2010-01-29

    This background report describes the technical basis for the newly proposed American Nuclear Society (ANS) 5.4 standard, Methods for Calculating the Fractional Release of Volatile Fission Products from Oxide Fuels. The proposed ANS 5.4 standard provides a methodology for determining the radioactive fission product releases from the fuel for use in assessing radiological consequences of postulated accidents that do not involve abrupt power transients. When coupled with isotopic yields, this method establishes the 'gap activity,' which is the inventory of volatile fission products that are released from the fuel rod if the cladding is breached.

  14. Receptor modelling of both particle composition and size distribution from a background site in London, UK

    NASA Astrophysics Data System (ADS)

    Beddows, D. C. S.; Harrison, R. M.; Green, D. C.; Fuller, G. W.

    2015-09-01

    Positive matrix factorisation (PMF) analysis was applied to PM10 chemical composition and particle number size distribution (NSD) data measured at an urban background site (North Kensington) in London, UK, for the whole of 2011 and 2012. The PMF analyses for these 2 years revealed six and four factors respectively which described seven sources or aerosol types. These included nucleation, traffic, urban background, secondary, fuel oil, marine and non-exhaust/crustal sources. Urban background, secondary and traffic sources were identified by both the chemical composition and particle NSD analysis, but a nucleation source was identified only from the particle NSD data set. Analysis of the PM10 chemical composition data set revealed fuel oil, marine, non-exhaust traffic/crustal sources which were not identified from the NSD data. The two methods appear to be complementary, as the analysis of the PM10 chemical composition data is able to distinguish components contributing largely to particle mass, whereas the particle number size distribution data set - although limited to detecting sources of particles below the diameter upper limit of the SMPS (604 nm) - is more effective for identifying components making an appreciable contribution to particle number. Analysis was also conducted on the combined chemical composition and NSD data set, revealing five factors representing urban background, nucleation, secondary, aged marine and traffic sources. However, the combined analysis appears not to offer any additional power to discriminate sources above that of the aggregate of the two separate PMF analyses. Day-of-the-week and month-of-the-year associations of the factors proved consistent with their assignment to source categories, and bivariate polar plots which examined the wind directional and wind speed association of the different factors also proved highly consistent with their inferred sources. Source attribution according to the air mass back trajectory showed, as
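
    The nonnegativity constraint that makes PMF factors interpretable as sources can be illustrated with a plain multiplicative-update factorization (Lee-Seung style). Real PMF additionally down-weights each residual by its measurement uncertainty, which this simplified sketch omits; the data and names below are synthetic.

```python
import random

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def nmf(V, k, iters=500, seed=3):
    """Factor V (m x n) into nonnegative W (m x k) and H (k x n) with
    multiplicative updates, so rows of H act as 'source profiles' and
    columns of W as their contributions to each sample."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(k)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        WH = matmul(W, H)
        for t in range(k):                  # H <- H * (W^T V) / (W^T W H)
            for j in range(n):
                num = sum(W[i][t] * V[i][j] for i in range(m))
                den = sum(W[i][t] * WH[i][j] for i in range(m)) + 1e-12
                H[t][j] *= num / den
        WH = matmul(W, H)
        for i in range(m):                  # W <- W * (V H^T) / (W H H^T)
            for t in range(k):
                num = sum(V[i][j] * H[t][j] for j in range(n))
                den = sum(WH[i][j] * H[t][j] for j in range(n)) + 1e-12
                W[i][t] *= num / den
    return W, H

# two synthetic "sources" mixed into six "samples"
src = [[1.0, 0.0, 2.0, 0.0],
       [0.0, 3.0, 0.0, 1.0]]
mix = [(1, 0), (0, 1), (2, 1), (1, 2), (3, 0), (0, 2)]
V = [[sum(mix[i][t] * src[t][j] for t in range(2)) for j in range(4)]
     for i in range(6)]

W, H = nmf(V, 2)
WH = matmul(W, H)
err = sum((V[i][j] - WH[i][j]) ** 2 for i in range(6) for j in range(4))
```

    Because every entry of W and H stays nonnegative, each factor can only add mass to a sample, which is what lets factors be read off as physical source contributions.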

  15. Development and extension of an aggregated scale model: Part 1 - Background to ASMITA

    NASA Astrophysics Data System (ADS)

    Townend, Ian; Wang, Zheng Bing; Stive, Marcel; Zhou, Zeng

    2016-07-01

    Whilst much attention has been given to models that describe wave, tide and sediment transport processes in sufficient detail to determine the local changes in bed level over a relatively detailed representation of the bathymetry, far less attention has been given to models that consider the problem at a much larger scale (e.g. that of geomorphological elements such as a tidal flat and tidal channel). Such aggregated or lumped models tend not to represent the processes in detail but rather capture the behaviour at the scale of interest. One such model developed using the concept of an equilibrium concentration is the Aggregated Scale Morphological Interaction between Tidal basin and Adjacent coast (ASMITA). In this paper we provide some new insights into the concepts of equilibrium, and horizontal and vertical exchange that are key components of this modelling approach. In a companion paper, we summarise a range of developments that have been undertaken to extend the original model concept, to illustrate the flexibility and power of the conceptual framework. However, adding detail progressively moves the model in the direction of the more detailed process-based models and we give some consideration to the boundary between the two. Highlights: the concept of aggregating model scales is explored, and the basis of the ASMITA model is outlined in detail

  16. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications.

    PubMed

    Arons, Alexander M M; Krabbe, Paul F M

    2013-02-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method to derive values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarity and dissimilarity. The objective of the article is to elucidate the position of each model and their applications for health-state valuation.
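
    For the random utility theory mentioned above, the standard result is that iid Gumbel-distributed error terms yield the multinomial logit choice probabilities. A minimal sketch, with made-up systematic utilities for three hypothetical health states:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: with U_i = V_i + eps_i and iid Gumbel eps_i,
    P(choose i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)                  # subtract the max for numerical stability
    expu = [math.exp(u - m) for u in utilities]
    s = sum(expu)
    return [e / s for e in expu]

# hypothetical systematic values of three health states on a latent scale
V = [0.8, 0.1, -0.5]
P = choice_probabilities(V)
```

    Health-state valuation then runs this logic in reverse: observed choice frequencies between states are used to estimate the latent values V.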

  17. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.
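
    The state-tracking adaptation this approach builds on can be sketched for a matched scalar plant. This is a textbook model-reference adaptive control loop, not the paper's hybrid controller: the mismatch-estimation and reference-model-update layers are not reproduced, and all gains, signals, and the Euler step are illustrative.

```python
import math

def simulate_mrac(a=1.0, b=2.0, am=-4.0, bm=4.0, gamma=5.0, dt=1e-3, steps=20000):
    """Scalar model-reference adaptive control in the matched case:
    plant x' = a*x + b*u, reference xm' = am*xm + bm*r, u = kx*x + kr*r,
    with gradient adaptation driven by e = x - xm (valid here because
    the sign of b is known and positive)."""
    x = xm = kx = kr = 0.0
    errs = []
    for i in range(steps):
        r = math.sin(0.5 * i * dt)      # persistently exciting reference
        e = x - xm
        u = kx * x + kr * r
        kx -= gamma * e * x * dt        # adaptation laws
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt       # Euler integration
        xm += (am * xm + bm * r) * dt
        errs.append(abs(e))
    return errs, kx, kr

errs, kx, kr = simulate_mrac()
tail = max(errs[-5000:])   # worst tracking error over the final 5 s
# ideal matching gains would be kx* = (am - a)/b = -2.5 and kr* = bm/b = 2.0
```

    The paper's contribution concerns exactly the case where no such matching gains exist; the sketch shows the baseline whose matching conditions are then violated.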

  18. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  19. Parent Management Training-Oregon Model (PMTO™) in Mexico City: Integrating Cultural Adaptation Activities in an Implementation Model

    PubMed Central

    Baumann, Ana A.; Domenech Rodríguez, Melanie M.; Amador, Nancy G.; Forgatch, Marion S.; Parra-Cardona, J. Rubén

    2015-01-01

    This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States. PMID:26052184

  20. Integrated optimal allocation model for complex adaptive system of water resources management (I): Methodologies

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Ye, Yushi

    2015-12-01

    Due to the adaptive, dynamic and multi-objective characteristics of complex water resources systems, it is a considerable challenge to manage water resources in an efficient, equitable and sustainable way. An integrated optimal allocation model is proposed for complex adaptive system of water resources management. The model consists of three modules: (1) an agent-based module for revealing evolution mechanism of complex adaptive system using agent-based, system dynamic and non-dominated sorting genetic algorithm II methods, (2) an optimal module for deriving decision set of water resources allocation using multi-objective genetic algorithm, and (3) a multi-objective evaluation module for evaluating the efficiency of the optimal module and selecting the optimal water resources allocation scheme using the projection pursuit method. This study has provided a theoretical framework for adaptive allocation, dynamic allocation and multi-objective optimization for a complex adaptive system of water resources management.

  1. Cosmic microwave background anisotropies in cold dark matter models with cosmological constant: The intermediate versus large angular scales

    NASA Technical Reports Server (NTRS)

    Stompor, Radoslaw; Gorski, Krzysztof M.

    1994-01-01

    We obtain predictions for cosmic microwave background anisotropies at angular scales near 1 deg in the context of cold dark matter models with a nonzero cosmological constant, normalized to the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) detection. The results are compared to those computed in the matter-dominated models. We show that the coherence length of the Cosmic Microwave Background (CMB) anisotropy is almost insensitive to cosmological parameters, and the rms amplitude of the anisotropy increases moderately with decreasing total matter density, while being most sensitive to the baryon abundance. We apply these results in the statistical analysis of the published data from the UCSB South Pole (SP) experiment (Gaier et al. 1992; Schuster et al. 1993). We reject most of the Cold Dark Matter (CDM)-Lambda models at the 95% confidence level when both SP scans are simulated together (although the combined data set renders less stringent limits than the Gaier et al. data alone). However, the Schuster et al. data considered alone as well as the results of some other recent experiments (MAX, MSAM, Saskatoon), suggest that typical temperature fluctuations on degree scales may be larger than is indicated by the Gaier et al. scan. If so, CDM-Lambda models may indeed provide, from a point of view of CMB anisotropies, an acceptable alternative to flat CDM models.

  2. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  3. On the role of model-based monitoring for adaptive planning under uncertainty

    NASA Astrophysics Data System (ADS)

    Raso, Luciano; Kwakkel, Jan; Timmermans, Jos; Haasnoot, Mariolijn

    2016-04-01

    , triggered by the challenge of uncertainty in operational control, may offer solutions from which monitoring for adaptive planning can benefit. Specifically: (i) in control, observations are incorporated into the model through data assimilation, updating the present state, boundary conditions, and parameters based on new observations, diminishing the shadow of the past; (ii) adaptive control is a way to modify the characteristics of the internal model, incorporating new knowledge on the system, countervailing the inhibition of learning; and (iii) in closed-loop control, a continuous system update equips the controller with "inherent robustness", i.e. the capacity to adapt to new conditions even when these were not initially considered. We aim to explore how inherent robustness addresses the challenge of surprise. Innovations in model-based control might help to improve and adapt the models used to support adaptive delta management to new information (reducing uncertainty). Moreover, this would offer a starting point for using these models not only in the design of adaptive plans, but also as part of the monitoring. The proposed research requires multidisciplinary cooperation between control theory, the policy sciences, and integrated assessment modeling.

  4. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-02-06

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively.
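
    A stripped-down sketch of the idea: a bank of play (backlash) operators whose output weights are adapted by LMS. The paper's tapped delay line and generalized envelope functions are omitted, the "actuator" below is synthetic, and all names and settings are illustrative.

```python
import math

def play(x_seq, r):
    """Play (backlash) operator with threshold r and initial state 0."""
    y, out = 0.0, []
    for x in x_seq:
        y = max(x - r, min(x + r, y))
        out.append(y)
    return out

def lms_fit(x_seq, d_seq, thresholds, mu=0.05, epochs=200):
    """Adapt the output weights of a play-operator bank by LMS."""
    banks = [play(x_seq, r) for r in thresholds]
    w = [0.0] * len(thresholds)
    for _ in range(epochs):
        for n in range(len(x_seq)):
            yhat = sum(w[k] * banks[k][n] for k in range(len(w)))
            err = d_seq[n] - yhat
            for k in range(len(w)):
                w[k] += mu * err * banks[k][n]   # LMS weight update
    return w

# synthetic hysteretic response: a known weighted sum of play operators
x = [math.sin(0.05 * n) for n in range(400)]
rs, true_w = [0.0, 0.2, 0.4], [0.6, 0.3, 0.1]
banks = [play(x, r) for r in rs]
d = [sum(tw * b[n] for tw, b in zip(true_w, banks)) for n in range(len(x))]

w = lms_fit(x, d, rs)
resid = max(abs(d[n] - sum(w[k] * banks[k][n] for k in range(3)))
            for n in range(len(x)))
```

    Because the target here lies exactly in the span of the operator bank, LMS drives the residual toward zero; rate dependence in the full filter comes from additionally feeding delayed input samples into the network.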

  5. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349

  6. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.

  7. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...
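
    MARS builds its models from paired hinge functions max(0, x - t) and max(0, t - x). A toy forward step that scans candidate knots by least squares conveys the idea; the "energy expenditure" data and knot grid below are fabricated for illustration.

```python
def hinge_design(xs, t):
    """Design matrix [1, max(0, x - t), max(0, t - x)] for knot t."""
    return [[1.0, max(0.0, x - t), max(0.0, t - x)] for x in xs]

def lstsq(X, y):
    """Solve the normal equations (X^T X) w = X^T y by Gauss-Jordan."""
    p = len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(len(X))) for b in range(p)]
         + [sum(X[i][a] * y[i] for i in range(len(X)))] for a in range(p)]
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(p):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [v - f * u for v, u in zip(A[r], A[c])]
    return [A[c][p] / A[c][c] for c in range(p)]

def sse(X, y, w):
    return sum((yi - sum(wi * xi for wi, xi in zip(w, row))) ** 2
               for row, yi in zip(X, y))

# toy "energy expenditure" curve whose slope changes at x = 2
xs = [i * 0.1 for i in range(60)]
ys = [1.0 + 0.5 * x if x < 2 else 2.0 + 1.5 * (x - 2) for x in xs]

candidates = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
knot, w = min(((t, lstsq(hinge_design(xs, t), ys)) for t in candidates),
              key=lambda tw: sse(hinge_design(xs, tw[0]), ys, tw[1]))
```

    The scan recovers the knot at x = 2, where the simulated slope changes; full MARS repeats this forward search over many variables and knots and then prunes basis functions backward.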

  8. Physical modeling of the feedback path in hearing aids with application to adaptive feedback cancellation

    NASA Astrophysics Data System (ADS)

    Hayes, Joanna L.; Rafaely, Boaz

    2002-05-01

    Hearing aid system modeling based on two-port network theory has been used previously to study the forward gain and the feedback path in hearing aids. The two-port modeling approach is employed in this work to develop an analytic model of the feedback path by reducing the model matrices to simplified analytic expressions. Such an analytic model can simulate the frequency response of the feedback path given the values of relatively few physical parameters such as vent dimensions. The model was extended to include variability in the feedback path due to slit leaks, for example. The analytic model was then incorporated in an adaptive feedback cancellation system, where the physical parameters of the model were adapted to match the actual feedback path and cancel the feedback signal. In the initial stage of this study, the ability of the model to match the frequency response of various measured feedback paths was studied using numerical optimization. Then, an adaptive filtering configuration based on the physical model was developed and studied using computer simulations. Results show that this new approach to adaptive feedback cancellation has the potential to improve both adaptation speed and performance robustness.

  9. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequentially, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.

  10. Study of Facial Features Combination Using a Novel Adaptive Fuzzy Integral Fusion Model

    NASA Astrophysics Data System (ADS)

    Ardakani, M. Mahdi Ghazaei; Shokouhi, Shahriar Baradaran

    A new adaptive model based on fuzzy integrals has been presented and used for combining three well-known methods, Eigenface, Fisherface and SOMface, for face classification. After training the competence estimation functions, the adaptive mechanism enables our system to filter out classifiers' unsure judgments for a specific input. Comparison with classical and non-adaptive approaches proves the superiority of this model. Also we examined how these features contribute to the combined result and whether they can together establish a more robust feature.

  11. Modeling the performance of direct-detection Doppler lidar systems including cloud and solar background variability.

    PubMed

    McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D

    1999-10-20

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design. PMID:18324169

  12. Demand modelling of passenger air travel: An analysis and extension. Volume 1: Background and summary

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.

    1978-01-01

    The framework for a model of travel demand which will be useful in predicting the total market for air travel between two cities is discussed. Variables to be used in determining the need for air transportation where none currently exists and the effect of changes in system characteristics on attracting latent demand are identified. Existing models are examined in order to provide insight into their strong points and shortcomings. Much of the existing behavioral research in travel demand is incorporated to allow the inclusion of non-economic factors, such as convenience. The model developed is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach both pedagogically and mathematically is discussed.

  13. Direct Adaptive Control Methodologies for Flexible-Joint Space Manipulators with Uncertainties and Modeling Errors

    NASA Astrophysics Data System (ADS)

    Ulrich, Steve

This work addresses the direct adaptive trajectory tracking control problem associated with lightweight space robotic manipulators that exhibit elastic vibrations in their joints and are subject to parametric uncertainties and modeling errors. Unlike existing adaptive control methodologies, the proposed flexible-joint control techniques do not require identification of unknown parameters or mathematical models of the system to be controlled. The direct adaptive controllers developed in this work are based on the model reference adaptive control approach and manage modeling errors and parametric uncertainties by varying the controller gains in time using new adaptation mechanisms, thereby reducing the errors between an ideal model and the actual robot system. More specifically, new decentralized adaptation mechanisms derived from the simple adaptive control technique and fuzzy logic control theory are considered in this work. Numerical simulations compare the performance of the adaptive controllers with a nonadaptive and a conventional model-based controller, in the context of 12.6 m × 12.6 m square trajectory tracking. To validate the robustness of the controllers to modeling errors, a new dynamics formulation that includes several nonlinear effects usually neglected in flexible-joint dynamics models is proposed. Results obtained with the adaptive methodologies demonstrate an increased robustness to both uncertainties in joint stiffness coefficients and dynamics modeling errors, as well as highly improved tracking performance compared with the nonadaptive and model-based strategies. Finally, this work considers the partial state feedback problem related to flexible-joint space robotic manipulators equipped only with sensors that provide noisy measurements of motor positions and velocities. An extended Kalman filter-based estimation strategy is developed to estimate all state variables in real-time. The state estimation filter is combined with an adaptive

  14. Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2009-12-01

To better understand insurgent activities and asymmetric warfare, a social adaptive model for multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study that uses particle swarm modeling, a widely used nonlinear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary for insurgents to attain their objective efficiently.

  15. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

With the increasing importance and pervasiveness of Internet services, providing performance guarantees under extreme overload is becoming a challenge for the proliferating electronic commerce services. This paper describes a real-time optimization modeling and scheduling approach for performance guarantees in electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. The robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques based on adaptive admission control are used to achieve timing guarantees. We evaluate the performance of the model using a complex simulation subjected to varying model parameters and massive overload.
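The adaptive admission control idea can be sketched as a simple feedback loop. This is an illustrative sketch only: the integral-style update, the controller gain, and the toy latency model are all assumptions, not the paper's actual design.

```python
def make_admission_controller(target_latency, gain=0.1, p_min=0.05):
    """Integral-style adaptive admission control (illustrative sketch).

    Returns a step function that nudges the admission probability so
    that measured latency tracks the target under overload."""
    p = 1.0  # start by admitting every request

    def step(measured_latency):
        nonlocal p
        error = (target_latency - measured_latency) / target_latency
        p = min(1.0, max(p_min, p + gain * error))
        return p

    return step

def toy_latency(load, admit_p):
    # Hypothetical server: latency grows linearly with admitted load.
    return 0.1 + 0.9 * load * admit_p

ctrl = make_admission_controller(target_latency=0.5)
p = 1.0
for _ in range(200):
    # Overload scenario: admitting everything would give 1.9 s latency.
    p = ctrl(toy_latency(load=2.0, admit_p=p))
```

After a few iterations the admission probability settles near the value at which the toy server meets the 0.5 s latency target, which is the essence of trading admission rate for timing guarantees.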

  17. Public Knowledge of Oral Cancer and Modelling of Demographic Background Factors Affecting this Knowledge in Khartoum State, Sudan

    PubMed Central

    Al-Hakimi, Hamdi A.; Othman, Abdulqaher E.; Mohamed, Omima G.; Saied, Abdulaal M.; Ahmed, Waled A.

    2016-01-01

    Objectives: Knowledge of oral cancer affects early detection and diagnosis of this disease. This study aimed to assess the current level of public knowledge of oral cancer in Khartoum State, Sudan, and examine how demographic background factors affect this knowledge. Methods: This cross-sectional study involved 501 participants recruited by systematic random sampling from the outpatient records of three major hospitals in Khartoum State between November 2012 and February 2013. A pretested structured questionnaire was designed to measure knowledge levels. A logistic regression model was utilised with demographic background variables as independent variables and knowledge of oral cancer as the dependent variable. A path analysis was conducted to build a structural model. Results: Of the 501 participants, 42.5% had no knowledge of oral cancer, while 5.4%, 39.9% and 12.2% had low, moderate and high knowledge levels, respectively. Logistic regression modelling showed that age, place of residence and education levels were significantly associated with knowledge levels (P = 0.009, 0.017 and <0.001, respectively). According to the structural model, age and place of residence had a prominent direct effect on knowledge, while age and residence also had a prominent indirect effect mediated through education levels. Conclusion: Education levels had the most prominent positive effect on knowledge of oral cancer among outpatients at major hospitals in Khartoum State. Moreover, education levels were found to mediate the effect of other background variables. PMID:27606114

  18. Sensitivities of eyewall replacement cycle to model physics, vortex structure, and background winds in numerical simulations of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenduo; Zhu, Ping

    2015-01-01

A series of sensitivity experiments with the Weather Research and Forecasting (WRF) model is used to investigate the impact of model physics, vortex axisymmetric radial structure, and background wind on secondary eyewall formation (SEF) and the eyewall replacement cycle (ERC) in three-dimensional full-physics numerical simulations. It is found that the vertical turbulent mixing parameterization can substantially affect the concentric-ring structure of tangential wind associated with SEF through a complicated interaction among eyewall and outer-rainband heating, radial inflow in the boundary layer, surface-layer processes, and shallow convection in the moat. Large snow terminal velocity can substantially change the vertical distribution of eyewall diabatic heating, resulting in a strong radial inflow in the boundary layer; this favors the development of shallow convection in the moat, allowing the outer-rainband convection to move closer to the inner eyewall, which may leave little room both temporally and spatially for a full development of a secondary maximum of tangential wind. A small radius of maximum wind (RMW) and a small potential vorticity (PV) skirt outside the RMW tend to generate double-eyewall replacement and may lead to an ERC without a clean secondary concentric maximum of tangential wind. A sufficiently large background wind can smooth out an ERC that would otherwise occur without background wind for a vortex with a small or moderate PV skirt. However, background wind does not appear to have an impact on an ERC if the vortex has a sufficiently large PV skirt.

  19. Response normalization and blur adaptation: Data and multi-scale model

    PubMed Central

    Elliott, Sarah L.; Georgeson, Mark A.; Webster, Michael A.

    2011-01-01

    Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log–log) slopes from −2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels. PMID:21307174

  20. Extended adiabatic blast waves and a model of the soft X-ray background

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Anderson, P. R.

    1982-01-01

It has been suggested that much of the soft X-ray background observed in X-ray astronomy might arise because the solar system lies inside a very large supernova blast wave propagating in the hot, low-density component of the interstellar medium (ISM). An investigation is conducted to study this possibility. An analytic approximation is presented for the nonsimilar time evolution of the dynamic structure of an adiabatic blast wave generated by a point explosion in a homogeneous ambient medium. A scheme is provided for evaluating the electron-temperature distribution for the evolving structure, and a procedure is presented for following the state of a given fluid element through the evolving dynamical and thermal structures. The results show that, if the solar system were located within such a blast wave, the Wisconsin soft X-ray rocket payload would measure the B and C band count rates that it does measure, provided conditions correspond to the values calculated in the investigation.

  1. Adaptation of a general circulation model to ocean dynamics

    NASA Technical Reports Server (NTRS)

    Turner, R. E.; Rees, T. H.; Woodbury, G. E.

    1976-01-01

A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than conventional stream-function models, and the resulting model can be applied to both global-ocean and limited-region situations. Strengths and weaknesses of the model are also presented.

  2. Simulation of the dispersion of nuclear contamination using an adaptive Eulerian grid model.

    PubMed

    Lagzi, I; Kármán, D; Turányi, T; Tomlin, A S; Haszpra, L

    2004-01-01

Application of an Eulerian model using layered adaptive unstructured grids coupled to a meso-scale meteorological model is presented for modelling the dispersion of nuclear contamination following an accidental release to the atmosphere from a single but strong source. The model automatically places a finer-resolution grid, adaptively in time, in regions where high spatial numerical error is expected. The high-resolution grid region follows the movement of the contaminated air over time. Using this method, grid resolutions of the order of 6 km can be achieved in a computationally effective way. The concept is illustrated by the simulation of hypothetical nuclear accidents at the Paks NPP in Central Hungary. The paper demonstrates that the adaptive model can achieve accuracy comparable to that of a high-resolution Eulerian model using significantly fewer grid points and less computation time. PMID:15149762
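The core adaptive-gridding idea, placing finer cells only where steep gradients make large numerical error likely, can be sketched in one dimension. This is a toy illustration: the endpoint-difference refinement criterion and the Gaussian "plume" field are assumptions, not the paper's actual error estimator.

```python
import math

def refine(grid, field, tol):
    """One adaptation pass: halve any cell whose endpoint values differ
    by more than tol, i.e. refine where high spatial error is expected."""
    out = [grid[0]]
    for a, b in zip(grid, grid[1:]):
        if abs(field(b) - field(a)) > tol:
            out.append(0.5 * (a + b))  # insert midpoint -> finer cell
        out.append(b)
    return out

# Toy contaminant plume: steep near x = 0, flat far away.
plume = lambda x: math.exp(-x * x)

grid = [i / 2 for i in range(-10, 11)]  # uniform grid, spacing 0.5
for _ in range(3):                      # re-adapt repeatedly, as in time-stepping
    grid = refine(grid, plume, tol=0.05)

spacings = [b - a for a, b in zip(grid, grid[1:])]
```

After three passes the cells near the plume are several times finer than the untouched far-field cells, which is how such schemes gain accuracy without refining the whole domain.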

  3. Adaptive Ambient Illumination Based on Color Harmony Model

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ayano; Hirai, Keita; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    We investigated the relationship between ambient illumination and psychological effect by applying a modified color harmony model. We verified the proposed model by analyzing correlation between psychological value and modified color harmony score. Experimental results showed the possibility to obtain the best color for illumination using this model.

  4. Adapting the Sport Education Model for Children with Disabilities

    ERIC Educational Resources Information Center

    Presse, Cindy; Block, Martin E.; Horton, Mel; Harvey, William J.

    2011-01-01

    The sport education model (SEM) has been widely used as a curriculum and instructional model to provide children with authentic and active sport experiences in physical education. In this model, students are assigned various roles to gain a deeper understanding of the sport or activity. This article provides a brief overview of the SEM and…

  5. Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.

    2010-01-01

This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model (GTM) developed by NASA Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure, while the adaptive one has a direct, model-reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need to cancel the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high-fidelity simulations of a flight-validated controller and of its adaptive augmentation.

  6. DATA FOR ENVIRONMENTAL MODELING (D4EM): BACKGROUND AND EXAMPLE APPLICATIONS OF DATA AUTOMATION

    EPA Science Inventory

    Data is a basic requirement for most modeling applications. Collecting data is expensive and time consuming. High speed internet connections and growing databases of online environmental data go a long way to overcoming issues of data scarcity. Among the obstacles still remaining...

  7. Human search for a target on a textured background is consistent with a stochastic model.

    PubMed

    Clarke, Alasdair D F; Green, Patrick; Chantler, Mike J; Hunt, Amelia R

    2016-05-01

    Previous work has demonstrated that search for a target in noise is consistent with the predictions of the optimal search strategy, both in the spatial distribution of fixation locations and in the number of fixations observers require to find the target. In this study we describe a challenging visual-search task and compare the number of fixations required by human observers to find the target to predictions made by a stochastic search model. This model relies on a target-visibility map based on human performance in a separate detection task. If the model does not detect the target, then it selects the next saccade by randomly sampling from the distribution of saccades that human observers made. We find that a memoryless stochastic model matches human performance in this task. Furthermore, we find that the similarity in the distribution of fixation locations between human observers and the ideal observer does not replicate: Rather than making the signature doughnut-shaped distribution predicted by the ideal search strategy, the fixations made by observers are best described by a central bias. We conclude that, when searching for a target in noise, humans use an essentially random strategy, which achieves near optimal behavior due to biases in the distributions of saccades we have a tendency to make. The findings reconcile the existence of highly efficient human search performance with recent studies demonstrating clear failures of optimality in single and multiple saccade tasks. PMID:27145531
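The memoryless model described above can be sketched directly: detect the target with a visibility-dependent probability, otherwise draw the next fixation at random. All specifics here, the Gaussian visibility map, the target location, and the central-bias saccade sampler, are illustrative assumptions, not the study's fitted distributions.

```python
import math
import random

def stochastic_search(p_detect, sample_saccade, rng, max_fix=1000):
    """Memoryless stochastic search: at each fixation, detect with the
    local visibility probability; otherwise sample the next fixation
    (no memory, no inhibition of return)."""
    loc = (0.0, 0.0)  # first fixation at the screen centre
    for n in range(1, max_fix + 1):
        if rng.random() < p_detect(loc):
            return n                  # fixations needed to find the target
        loc = sample_saccade(rng)
    return max_fix

def p_detect(loc):
    # Hypothetical visibility map: detection is easy near the target at (1, 1).
    d2 = (loc[0] - 1.0) ** 2 + (loc[1] - 1.0) ** 2
    return 0.8 * math.exp(-d2)

def sample_saccade(rng):
    # Central bias: fixations cluster around the screen centre.
    return (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))

rng = random.Random(0)
trials = [stochastic_search(p_detect, sample_saccade, rng) for _ in range(500)]
mean_fixations = sum(trials) / len(trials)
```

Because the centrally biased sampler keeps revisiting the high-visibility region, the random strategy finds the target in a modest number of fixations, illustrating how biased-but-random search can approach efficient performance.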

  8. Forecasting Library Futures: Participative Decisionmaking with a Microcomputer Model. Background Paper. Workshop 3.

    ERIC Educational Resources Information Center

    Mason, Thomas R.; Newton, Evan

    This paper describes the use of a microcomputer model program to predict library collection growth at Cornell University, particularly in Olin Library, which is Cornell's central research facility. The possible effects of increased online information retrieval and microform or videodisc usage on library storage needs are also briefly discussed. A…

  9. Revising Item Responses in Computerized Adaptive Tests: A Comparison of Three Models.

    ERIC Educational Resources Information Center

    Stocking, Martha L.

    1997-01-01

    Investigated three models that permit restricted examinee control over revising previous answers in the context of adaptive testing, using simulation. Two models permitting item revisions worked well in preserving test fairness and accuracy, and one model may preserve some cognitive processing styles developed by examinees for a linear testing…

  10. The Targowski and Bowman Model of Communication: Problems and Proposals for Adaptation.

    ERIC Educational Resources Information Center

    van Hoorde, Johan

    1990-01-01

    Outlines and analyzes the Targowski/Bowman model of communication. Suggests adaptations for the model, noting that these changes increase the model's explanatory power and its capacity to predict the communicative outcome of a message given in a business situation. (MM)

  11. REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory

    NASA Astrophysics Data System (ADS)

    Tin, Chung; Poon, Chi-Sang

    2005-09-01

    Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally for robotic manipulator applications. Modular internal models' architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. Possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems—such as sensorimotor prediction or the resolution of vestibular sensory ambiguity—is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in future.
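One of the simplest adaptive-control mechanisms in this literature, the MIT rule for model reference adaptive control, can be sketched on a static-gain plant. This is a textbook toy under assumed gains, not a specific model from the review.

```python
def mit_rule_mrac(kp=2.0, km=1.0, gamma=0.5, steps=400, dt=0.05):
    """Model reference adaptive control with the MIT rule (sketch).

    Plant y = kp * theta * u should track reference model ym = km * u;
    the adjustable gain theta is adapted by gradient descent on e^2."""
    theta = 0.0                        # adjustable feedforward gain
    for _ in range(steps):
        u = 1.0                        # unit reference command
        y = kp * theta * u             # plant output
        ym = km * u                    # reference-model output
        e = y - ym                     # tracking error
        theta -= gamma * e * ym * dt   # MIT rule update
    return theta                       # converges toward km / kp

theta = mit_rule_mrac()
```

The adapted gain settles at km/kp = 0.5, the value that makes the plant match the reference model, mirroring how internal-model parameters are presumed to be tuned by error feedback in sensorimotor learning.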

  12. MGGPOD: a Monte Carlo Suite for Modeling Instrumental Line and Continuum Backgrounds in Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Weidenspointner, G.; Harris, M. J.; Sturner, S.; Teegarden, B. J.; Ferguson, C.

    2004-01-01

Intense and complex instrumental backgrounds, against which the much smaller signals from celestial sources have to be discerned, are a notorious problem for low- and intermediate-energy gamma-ray astronomy (approximately 50 keV - 10 MeV). Therefore a detailed qualitative and quantitative understanding of instrumental line and continuum backgrounds is crucial for most stages of gamma-ray astronomy missions, ranging from the design and development of new instrumentation through performance prediction to data reduction. We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package, to simulate ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds. The MGGPOD package and documentation are publicly available for download. We demonstrate the capabilities of the MGGPOD suite by modeling high-resolution gamma-ray spectra recorded by the Transient Gamma-Ray Spectrometer (TGRS) on board Wind during 1995. The TGRS is a Ge spectrometer operating in the 40 keV to 8 MeV range. Due to its fine energy resolution, these spectra reveal the complex instrumental background in formidable detail, particularly the many prompt and delayed gamma-ray lines. We evaluate the successes and failures of the MGGPOD package in reproducing TGRS data, and provide identifications for the numerous instrumental lines.

  13. Neuro- and sensoriphysiological Adaptations to Microgravity using Fish as Model System

    NASA Astrophysics Data System (ADS)

    Anken, R.

The phylogenetic development of all organisms took place under constant gravity conditions, against which they developed specific countermeasures for compensation and adaptation. Against this background, it is still an open question to what extent altered gravity such as hyper- or microgravity (centrifuge/spaceflight) affects normal individual development, either at the systemic level of the whole organism or at the level of individual organs or even single cells. The present review provides information on this topic, focusing on the effects of altered gravity on developing fish as model systems even for higher vertebrates including humans, with special emphasis on the effect of altered gravity on behaviour and particularly on the developing brain and vestibular system. Overall, the results speak in favour of the following concept: short-term altered gravity (~1 day) can induce transient sensorimotor disorders (kinetoses) due to malfunctions of the inner ear, originating from asymmetric otoliths. The regaining of normal postural control is likely due to a reweighting of sensory inputs. During long-term altered gravity (several days and more), complex adaptations occur at the level of the central and peripheral vestibular system. This work was financially supported by the German Aerospace Center (DLR) e.V. (FKZ: 50 WB 9997).

  14. Cosmic-Ray Background Flux Model Based on a Gamma-Ray Large Area Space Telescope Balloon Flight Engineering Model

    NASA Technical Reports Server (NTRS)

    2002-01-01

Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV) and the abundant components (proton, alpha, e-, e+, μ-, μ+ and gamma). It is expressed in analytic functions in which modulations due to solar activity and the Earth's geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high-energy astrophysics missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.

  15. Using box models to quantify zonal distributions and emissions of halocarbons in the background atmosphere.

    NASA Astrophysics Data System (ADS)

    Elkins, J. W.; Nance, J. D.; Dutton, G. S.; Montzka, S. A.; Hall, B. D.; Miller, B.; Butler, J. H.; Mondeel, D. J.; Siso, C.; Moore, F. L.; Hintsa, E. J.; Wofsy, S. C.; Rigby, M. L.

    2015-12-01

The Halocarbons and other Atmospheric Trace Species (HATS) group of NOAA's Global Monitoring Division started measurements of the major chlorofluorocarbons and nitrous oxide in 1977 from flask samples collected at five remote sites around the world. Our program has expanded to over 40 compounds at twelve sites, which include six in situ instruments and twelve flask sites. The Montreal Protocol on Substances that Deplete the Ozone Layer and its subsequent amendments have helped to decrease the concentrations of many ozone-depleting compounds in the atmosphere. In this presentation, our goal is to provide zonal emission estimates for these trace gases from multi-box models and their estimated atmospheric lifetimes, and to make the emission values available on our web site. We plan to use our airborne measurements to calibrate the exchange times between the boxes for the 5-box and 12-box models using sulfur hexafluoride, whose emissions are better understood.
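A minimal version of the box-model idea can be sketched with two hemispheric boxes. The numbers below (the emission split, the 50-year lifetime, and the ~1-year interhemispheric exchange time) are generic illustrations, not NOAA's fitted values.

```python
def two_box_model(emis_nh, emis_sh, lifetime_yr, exch_yr=1.0, years=50.0, dt=0.01):
    """Two-box (NH/SH) tracer model: emissions into each box, first-order
    loss set by the atmospheric lifetime, and interhemispheric exchange
    mixing the two boxes (forward Euler integration)."""
    nh = sh = 0.0
    t = 0.0
    while t < years:
        flux = (nh - sh) / exch_yr                    # net NH -> SH transport
        nh += (emis_nh - nh / lifetime_yr - flux) * dt
        sh += (emis_sh - sh / lifetime_yr + flux) * dt
        t += dt
    return nh, sh

# CFC-like gas: emissions mostly in the northern hemisphere.
nh, sh = two_box_model(emis_nh=0.9, emis_sh=0.1, lifetime_yr=50.0)
```

Inverting such a model, i.e. adjusting the emission terms until the boxes match observed zonal concentrations, is the basic route from station measurements to zonal emission estimates; airborne profiles then constrain the exchange times.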

  16. Modeling the fluctuations of the cosmic infrared background: what did we learn from Planck?

    NASA Astrophysics Data System (ADS)

    Bethermin, Matthieu

    2015-08-01

The CIB is the relic emission of dust heated by young stars across cosmic time. It is a powerful probe of the star formation history of the Universe. The distribution of star-forming galaxies in the large-scale structures is imprinted in the anisotropies of the CIB, which are thus one of the keys to understanding how large-scale structures shaped the evolution of galaxies. Planck measured these anisotropies with unprecedented accuracy. However, the CIB is an integrated emission, and a model is necessary to disentangle the contributions of the different redshifts. Large-scale anisotropies can be interpreted using a linear model; this simple approach relies on a minimal number of hypotheses. We found a star formation history consistent with the extrapolation of the Herschel luminosity function, ruling out any major contribution from faint IR galaxies. We also constrained the mean mass of the dark matter halos hosting the galaxies that emit the CIB. This mass is almost constant from z=4 to z=0, while dark matter halos grew very quickly during this interval of time; the structures hosting star formation are thus not the same at low and high redshift. This also suggests the existence of a halo mass at which star formation is most efficient. Halo occupation models can describe in detail how dark matter halos are populated by infrared galaxies. We coupled a phenomenological model of galaxy evolution calibrated on Herschel data with a halo model, using the technique of abundance matching. This approach naturally reproduces the CIB anisotropies. We found that the efficiency of halos in converting accreted baryons into stars varies strongly with halo mass, but not with time, highlighting the role played by host halos as regulators of star formation in galaxies. I will finally explain how we could gain access to 3D information with future instruments and isolate the highest redshifts more efficiently using intensity mapping of bright sub-millimeter lines.

  17. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    NASA Astrophysics Data System (ADS)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) in the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the AID model, while it is almost constant but noisy in the ASIS model.

  18. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

    Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving the water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5-10^6), making parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges, which includes the following steps: (1) using parameter screening to reduce the number of adjustable parameters; (2) using surrogate models to emulate the responses of dynamic models to the variation of adjustable parameters; (3) using an adaptive strategy to improve the efficiency of surrogate modeling-based optimization; (4) using a weighting function to transform the multi-objective optimization into a single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM - the Common Land Model (CoLM) - and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can efficiently achieve optimal parameters. Moreover, this result implies the possibility of calibrating other large, complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.
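
    Steps (3) and (4) can be illustrated with a toy one-dimensional sketch: a weighting function collapses two objectives into a single one, and a Gaussian RBF surrogate is refined adaptively by re-sampling the true model at the surrogate's current minimizer. This is a minimal illustration of the idea, not the authors' framework; all function names, the RBF kernel, and the settings are assumptions:

```python
import numpy as np

def weighted_objective(objectives, weights):
    """Step (4): collapse several objectives into one via a weighting function."""
    def f(x):
        return sum(w * obj(x) for w, obj in zip(weights, objectives))
    return f

def rbf_fit(xs, ys, eps=1.0):
    """Gaussian RBF interpolant through the sampled points (toy surrogate)."""
    A = np.exp(-((xs[:, None] - xs[None, :]) / eps) ** 2)
    coef = np.linalg.solve(A + 1e-10 * np.eye(len(xs)), ys)
    return lambda x: np.exp(-((x[:, None] - xs[None, :]) / eps) ** 2) @ coef

def adaptive_surrogate_minimize(f, lo, hi, n_init=5, n_iter=10):
    """Step (3): adaptively refine the surrogate where it predicts a minimum."""
    xs = np.linspace(lo, hi, n_init)
    ys = np.array([f(x) for x in xs])
    grid = np.linspace(lo, hi, 401)
    for _ in range(n_iter):
        surrogate = rbf_fit(xs, ys)
        x_new = grid[np.argmin(surrogate(grid))]       # surrogate's best guess
        if np.min(np.abs(xs - x_new)) < 1e-9:          # already sampled: converged
            break
        xs = np.append(xs, x_new)
        ys = np.append(ys, f(x_new))                   # one expensive model run
    return xs[np.argmin(ys)]
```

    The point of the sketch is the budget: each iteration costs one "expensive" evaluation of f, while the cheap surrogate is interrogated densely, which is what makes the approach attractive for costly LSM runs.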

  19. High energy neutrino emission and neutrino background from gamma-ray bursts in the internal shock model

    SciTech Connect

    Murase, Kohta; Nagataki, Shigehiro

    2006-03-15

    High energy neutrino emission from gamma-ray bursts (GRBs) is discussed. In this paper, using the simulation kit GEANT4, we calculate the proton cooling efficiency, including pion multiplicity and proton inelasticity in photomeson production. First, we estimate the maximum energy of protons accelerated in GRBs. Using the obtained results, the neutrino flux from one burst and the diffuse neutrino background are evaluated quantitatively. We also take account of the cooling processes of pions and muons, which are crucial for the resulting neutrino spectra. We confirm the validity of approximate analytic treatments for fiducial GRB parameter sets, but also find that the effects of multiplicity and high inelasticity can be important for both proton cooling and the resulting spectra in some cases. Finally, assuming that the GRB rate traces the star formation rate, we obtain a diffuse neutrino background spectrum from GRBs for specific parameter sets. We introduce the nonthermal baryon-loading factor, rather than assuming that GRBs are the main sources of ultra-high energy cosmic rays (UHECRs). We find that the obtained neutrino background can be comparable with the prediction of Waxman and Bahcall, although the grounds for our estimation are different from theirs. Because the model has many parameters, we study a wide range of parameter values. The detection of high energy neutrinos from GRBs would be strong evidence that protons are accelerated to very high energies in GRBs. Furthermore, observations of a neutrino background have the potential not only to test the internal shock model of GRBs but also to give us information about the parameters of the model and about whether GRBs are sources of UHECRs.

  20. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes

  1. Cosmic microwave background and large-scale structure constraints on a simple quintessential inflation model

    SciTech Connect

    Rosenfeld, Rogerio; Frieman, Joshua A.; /Fermilab /Chicago U., Astron. Astrophys. Ctr.

    2006-11-01

    We derive constraints on a simple quintessential inflation model, based on a spontaneously broken Φ^4 theory, imposed by the Wilkinson Microwave Anisotropy Probe three-year data (WMAP3) and by galaxy clustering results from the Sloan Digital Sky Survey (SDSS). We find that the scale of symmetry breaking must be larger than about 3 Planck masses in order for inflation to generate acceptable values of the scalar spectral index and of the tensor-to-scalar ratio. We also show that the resulting quintessence equation of state can evolve rapidly at recent times and hence can potentially be distinguished from a simple cosmological constant in this parameter regime.

  2. Competition and fixation of cohorts of adaptive mutations under Fisher geometrical model.

    PubMed

    Moura de Sousa, Jorge A; Alpedrinha, João; Campos, Paulo R A; Gordo, Isabel

    2016-01-01

    One of the simplest models of adaptation to a new environment is Fisher's Geometric Model (FGM), in which populations move on a multidimensional landscape defined by the traits under selection. The predictions of this model have been found to be consistent with current observations of patterns of fitness increase in experimentally evolved populations. Recent studies investigated the dynamics of allele frequency change along adaptation of microbes to simple laboratory conditions and unveiled a dramatic pattern of competition between cohorts of mutations, i.e., multiple mutations simultaneously segregating and ultimately reaching fixation. Here, using simulations, we study the dynamics of phenotypic and genetic change as asexual populations under clonal interference climb a Fisherian landscape, and ask about the conditions under which FGM can display the simultaneous increase and fixation of multiple mutations-mutation cohorts-along the adaptive walk. We find that FGM under clonal interference, and with varying levels of pleiotropy, can reproduce the experimentally observed competition between different cohorts of mutations, some of which have a high probability of fixation along the adaptive walk. Overall, our results show that the surprising dynamics of mutation cohorts recently observed during experimental adaptation of microbial populations can be expected under one of the oldest and simplest theoretical models of adaptation-FGM. PMID:27547562
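
    The landscape ingredient of FGM can be sketched in a few lines: fitness decays with distance from a phenotypic optimum, random mutations displace the phenotype, and selection keeps the beneficial ones. This toy adaptive walk deliberately ignores population dynamics and clonal interference, which are the paper's actual subject; the dimensionality, mutation size, and starting point are arbitrary assumptions:

```python
import math
import random

def fgm_walk(n_dims=3, steps=200, sigma=0.3, rng=None):
    """Minimal Fisher geometric model walk: Gaussian fitness around the
    origin, random mutation vectors, beneficial mutations are kept."""
    rng = rng or random.Random(0)
    z = [2.0] + [0.0] * (n_dims - 1)                 # start away from the optimum
    fitness = lambda v: math.exp(-0.5 * sum(c * c for c in v))
    trajectory = [fitness(z)]
    for _ in range(steps):
        mutation = [rng.gauss(0.0, sigma) for _ in range(n_dims)]
        candidate = [a + b for a, b in zip(z, mutation)]
        if fitness(candidate) > fitness(z):          # selection keeps beneficial steps
            z = candidate
        trajectory.append(fitness(z))
    return trajectory
```

    Even this stripped-down walk shows the characteristic FGM pattern of diminishing-returns fitness gains as the phenotype approaches the optimum.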

  3. An Adaptive Presentation Model for Hypermedia Information Systems.

    ERIC Educational Resources Information Center

    Hekmatpour, Amir

    1995-01-01

    Proposes a model for online hypermedia courseware that presents information in a controlled and predefined structure. Measures and evaluates the model's impact on design time and student study time. Finds positive improvements in delivery and dissemination of technical subject matter and provides a better understanding of required development…

  4. Conceptual Models To Study the Adaptation of the Oldest Old.

    ERIC Educational Resources Information Center

    Martin, Peter

    In recent years there has been an increased awareness about the growing number of the oldest old. A structural model for the study of the oldest old was introduced by Lehr (1987) and was built on experience with data from the Bonn Longitudinal Study of Aging. In the Lehr model, genetic, environmental, and ecological factors affect longevity…

  5. A structural model of the adaptive human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1979-01-01

    A compensatory tracking model of the human pilot is offered which attempts to provide a more realistic representation of the human's signal processing structure than that which is exhibited by pilot models currently in use. Two features of the model distinguish it from other representations of the human pilot. First, proprioceptive information from the control stick or manipulator constitutes one of the major feedback paths in the model, providing feedback of vehicle output rate due to control activity. Implicit in this feedback loop is a model of the vehicle dynamics which is valid in and beyond the region of crossover. Second, error-rate information is continuously derived and independently but intermittently controlled. An output injected remnant model is offered and qualitatively justified on the basis of providing a measure of the effect of inaccuracies such as time variations in the pilot's internal model of the controlled element dynamics. The data from experimental tracking tasks involving five different controlled element dynamics and one nonideal viewing condition were matched with model generated describing functions and remnant power spectral densities.

  6. Columbia River Statistical Update Model, Version 4.0 (COLSTAT4): Background documentation and user's guide

    SciTech Connect

    Whelan, G.; Damschen, D.W.; Brockhaus, R.D.

    1987-08-01

    Daily-averaged temperature and flow information on the Columbia River just downstream of Priest Rapids Dam and upstream of river mile 380 were collected and stored in a data base. The flow information corresponds to discharges that were collected daily from October 1, 1959, through July 28, 1986. The temperature information corresponds to values that were collected daily from January 1, 1965, through May 27, 1986. The computer model, COLSTAT4 (Columbia River Statistical Update - Version 4.0 model), uses the temperature-discharge data base to statistically analyze temperature and flow conditions by computing the frequency of occurrence and duration of selected temperatures and flow rates for the Columbia River. The COLSTAT4 code analyzes the flow and temperature information in a sequential time frame (i.e., a continuous analysis over a given time period); it also analyzes this information in a seasonal time frame (i.e., a periodic analysis over a specific season from year to year). A provision is included to enable the user to edit and/or extend the data base of temperature and flow information. This report describes the COLSTAT4 code and the information contained in its data base.
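
    The frequency-of-occurrence and duration statistics that COLSTAT4 computes over a daily series can be illustrated with a short sketch. This is a generic illustration of the statistics, not the COLSTAT4 algorithm itself, and the function name is an assumption:

```python
def exceedance_stats(daily_values, threshold):
    """Frequency of occurrence and run durations of values above a threshold,
    in the spirit of a sequential (continuous) analysis of a daily record."""
    n = len(daily_values)
    above = [v > threshold for v in daily_values]
    frequency = sum(above) / n if n else 0.0          # fraction of days exceeding
    durations, run = [], 0
    for flag in above:
        if flag:
            run += 1                                  # extend the current run
        elif run:
            durations.append(run)                     # close a run of exceedances
            run = 0
    if run:
        durations.append(run)                         # run extending to the end
    return frequency, durations
```

    A seasonal (periodic) analysis, as described above, would simply apply the same function to the subset of days falling in the season of interest across years.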

  7. Adaptive Work Strategy for Evaluating a Conceptual Site Model

    NASA Astrophysics Data System (ADS)

    Dietrich, P.; Utom, A. U.; Werban, U.

    2015-12-01

    A comprehensive, diagnostic, procedural and adaptive scheme combining geophysical and direct push methods was developed and applied at the Wurmlingen study site in the region of Baden-Württemberg (southwest Germany). The goal of the study was to test the applicability of the electrical resistivity method for imaging resistivity contrasts and for mapping the depth and lateral extent of field-scale subsurface structures, as well as the existence of flow paths that may control concentration gradients of groundwater solutes. Based on a relatively fast and cost-effective areal mapping with the vertical electrical sounding technique, a northwest-southeast trending stream-channel-like depression (a low apparent resistivity feature) through a Pleistocene aquifer was detected. For a more detailed characterization, we applied the electrical resistivity tomography method followed by direct push (DP) technologies. Besides using DP to verify structures identified by the geophysical tools, we used it for multi-level groundwater sampling. Results from the groundwater chemistry indicate zones of steep nitrate concentration gradients associated with the feature.

  8. Modeling bee swarming behavior through diffusion adaptation with asymmetric information sharing

    NASA Astrophysics Data System (ADS)

    Li, Jinchao; Sayed, Ali H.

    2012-12-01

    Honeybees swarm when they move to a new site for their hive. During the process of swarming, their behavior can be analyzed by classifying them as informed bees or uninformed bees, where the informed bees have some information about the destination while the uninformed bees follow the informed bees. The swarm's movement can be viewed as a network of mobile nodes with asymmetric information exchange about their destination. In these networks, adaptive and mobile agents share information on the fly and adapt their estimates in response to local measurements and data shared with neighbors. Diffusion adaptation is used to model the adaptation process in the presence of asymmetric nodes and noisy data. The simulations indicate that the models are able to emulate the swarming behavior of bees under varied conditions such as a small number of informed bees, sharing of target location, sharing of target direction, and noisy measurements.
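
    The adapt-then-combine flavor of diffusion adaptation described above can be sketched as follows: informed agents adapt toward a noisy observation of the target, and every agent then averages the estimates of itself and its neighbors. The LMS-style update, uniform combination weights, and all names are illustrative assumptions, not the authors' exact formulation:

```python
import random

def diffusion_step(estimates, neighbors, informed, target, mu, noise, rng):
    """One adapt-then-combine (ATC) diffusion step (illustrative sketch).

    estimates: dict node -> current estimate of the target location;
    neighbors: dict node -> set of neighboring nodes; informed: set of
    nodes that can observe the (noisy) target.
    """
    # Adapt: informed agents move toward a noisy observation of the target.
    adapted = {}
    for i, est in estimates.items():
        if i in informed:
            obs = [t + rng.gauss(0.0, noise) for t in target]
            adapted[i] = [e + mu * (o - e) for e, o in zip(est, obs)]  # LMS-style step
        else:
            adapted[i] = list(est)                                     # uninformed: no data
    # Combine: every agent averages its own and its neighbors' estimates.
    combined = {}
    for i in estimates:
        group = [adapted[j] for j in (neighbors[i] | {i})]
        combined[i] = [sum(vals) / len(vals) for vals in zip(*group)]
    return combined
```

    Iterating this step on a connected network drives all agents, including the uninformed ones, toward the informed agents' target estimate, which is the asymmetric-information-sharing effect the model captures.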

  9. Improvement in adaptive nonuniformity correction method with nonlinear model for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Rui, Lai; Yin-Tang, Yang; Qing, Li; Hui-Xin, Zhou

    2009-09-01

    The scene-adaptive nonuniformity correction (NUC) technique is commonly used to reduce the fixed pattern noise (FPN) in infrared focal plane arrays (IRFPA). However, the correction precision of existing scene-adaptive NUC methods is seriously degraded by the nonlinear response of IRFPA detectors. In this paper, an improved scene-adaptive NUC method that employs an S-curve model to approximate the detector response is presented. The performance of the proposed method is tested on real infrared video sequences, and the experimental results confirm that our method considerably improves the correction precision.
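
    The idea of replacing a linear detector model with an S-curve can be sketched as follows: with a logistic response, per-pixel gain/offset differences produce fixed-pattern noise on a uniform scene, and inverting the assumed curve removes it. Here the per-pixel parameters are taken as known, whereas a real scene-adaptive method must estimate them from the video; all parameter values are illustrative:

```python
import math

def s_curve(x, gain, offset, sat=1.0):
    """Toy nonlinear detector response modeled as a logistic S-curve."""
    return sat / (1.0 + math.exp(-gain * (x - offset)))

def invert_s_curve(y, gain, offset, sat=1.0):
    """Correct a raw reading by inverting the assumed per-pixel S-curve."""
    y = min(max(y, 1e-9), sat - 1e-9)          # keep the log argument valid
    return offset - math.log(sat / y - 1.0) / gain

# Per-pixel (gain, offset) parameters differ, so a uniform scene produces
# different raw readings across pixels: fixed-pattern noise.
params = [(1.2, 0.1), (0.9, -0.2), (1.5, 0.0)]
scene = 0.4
raw = [s_curve(scene, g, o) for g, o in params]
corrected = [invert_s_curve(y, g, o) for (g, o), y in zip(params, raw)]
```

    A linear correction applied to the same readings would leave a residual error that grows toward the saturated ends of the response, which is exactly the precision loss the S-curve model is meant to avoid.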

  10. An object-oriented, technology-adaptive information model

    NASA Technical Reports Server (NTRS)

    Anyiwo, Joshua C.

    1995-01-01

    The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), Worldwide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro; and work has started on a full implementation of the SERVER engine. The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG

  11. Effect of mouse strain as a background for Alzheimer’s disease models on the clearance of amyloid-β

    PubMed Central

    Qosa, Hisham; Kaddoumi, Amal

    2016-01-01

    Novel animal models of Alzheimer's disease (AD) are relentlessly being developed and existing ones are being fine-tuned; however, these models face multiple challenges associated with the complexity of the disease, and most of them do not reproduce the full phenotypical disease spectrum. Moreover, different AD models express different phenotypes that could affect their validity to recapitulate disease pathogenesis and/or response to a drug. One of the most important and understudied differences between AD models is differences in the phenotypic characteristics of the background species. Here, we used the brain clearance index (BCI) method to investigate the effect of strain differences on the clearance of amyloid-β (Aβ) from the brains of four mouse strains. These strains, namely C57BL/6, FVB/N, BALB/c and SJL/J, are widely used as backgrounds for the development of AD mouse models. Findings showed that while Aβ clearance across the blood-brain barrier (BBB) was comparable between the four strains, the level of LRP1, an Aβ clearance protein, was significantly lower in SJL/J mice than in the other strains. Furthermore, the strains responded significantly differently to rifampicin treatment with regard to Aβ clearance and brain levels of clearance-related proteins. Our results provide, for the first time, evidence for strain differences that could affect the ability of AD mouse models to recapitulate response to a drug, and they open a new research avenue that requires further investigation to successfully develop mouse models that simulate clinically important phenotypic characteristics of AD. PMID:27478623

  12. FEMHD: An adaptive finite element method for MHD and edge modelling

    SciTech Connect

    Strauss, H.R.

    1995-07-01

    This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to study neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.

  13. "Your Model Is Predictive-- but Is It Useful?" Theoretical and Empirical Considerations of a New Paradigm for Adaptive Tutoring Evaluation

    ERIC Educational Resources Information Center

    González-Brenes, José P.; Huang, Yun

    2015-01-01

    Classification evaluation metrics are often used to evaluate adaptive tutoring systems-- programs that teach and adapt to humans. Unfortunately, it is not clear how intuitive these metrics are for practitioners with little machine learning background. Moreover, our experiments suggest that existing convention for evaluating tutoring systems may…

  14. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge about contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered: it facilitates acceptance and understanding by giving users shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
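
    The core bookkeeping of such a relevance network, recording feedback per (profile, query) and generalizing to similar queries, can be sketched with a simple token-overlap similarity. This is a loose illustration rather than the authors' model; the Jaccard measure and all names are assumptions:

```python
from collections import defaultdict

class RelevanceNetwork:
    """Toy relevance network: store per-(profile, query, document) feedback
    and generalize to similar queries via token-overlap similarity."""

    def __init__(self):
        self.scores = defaultdict(float)   # (profile, query tokens, doc) -> score

    def feedback(self, profile, query, doc, delta=1.0):
        """Record user feedback for a document retrieved under this query."""
        self.scores[(profile, frozenset(query.split()), doc)] += delta

    def rank(self, profile, query):
        """Rank documents for this profile, generalizing from similar queries."""
        q = set(query.split())
        agg = defaultdict(float)
        for (p, stored_q, doc), s in self.scores.items():
            if p != profile or not (q & stored_q):
                continue
            sim = len(q & stored_q) / len(q | stored_q)   # Jaccard similarity
            agg[doc] += sim * s
        return sorted(agg, key=agg.get, reverse=True)
```

    Note that, as in the paper's user-centered design, the structure needs no prior training: rankings emerge only from accumulated feedback, and adaptations are naturally partitioned per profile.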

  15. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control has been proposed for developing low-computational-cost transfer trajectories between libration orbits around the L1 and L2 libration points in the Sun-Earth system. A new structure for the multi-objective transfer trajectory optimization model has been established that divides the transfer trajectory into several segments and lets invariant manifolds and low-thrust control dominate in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, an adaptive surrogate model based on a mixed sampling strategy has been proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with those obtained using direct multi-objective optimization methods, while its computational workload is only approximately 10% of that of direct multi-objective optimization. Furthermore, its efficiency in generating Pareto points is approximately 8 times that of the direct multi-objective optimization. The proposed adaptive surrogate-based multi-objective optimization therefore offers clear advantages over direct multi-objective optimization methods.

  16. Global solution for a kinetic chemotaxis model with internal dynamics and its fast adaptation limit

    NASA Astrophysics Data System (ADS)

    Liao, Jie

    2015-12-01

    A nonlinear kinetic chemotaxis model with internal dynamics incorporating signal transduction and adaptation is considered. This paper is concerned with: (i) the global solution for this model, and (ii) its fast adaptation limit to an Othmer-Dunbar-Alt type model. This limit gives some insight into the molecular origin of chemotaxis behaviour. First, by using the Schauder fixed point theorem, the global existence of a weak solution is proved based on detailed a priori estimates, under quite general assumptions. However, the Schauder theorem does not provide uniqueness, so additional analysis is required to establish uniqueness. Next, the fast adaptation limit of this model is derived by extracting a weakly convergent subsequence in measure space. For this limit, the first difficulty is to show the concentration effect on the internal state. Another difficulty is the strong compactness argument for the chemical potential, which is essential for passing the nonlinear kinetic equation to the weak limit.

  17. Adaptive color image watermarking based on the just noticeable distortion model in balanced multiwavelet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Yuan; Ding, Yong

    2011-10-01

    In this paper, a novel adaptive color image watermarking scheme based on the just noticeable distortion (JND) model in balanced multiwavelet domain is proposed. The balanced multiwavelet transform can achieve orthogonality, symmetry, and high order of approximation simultaneously without requiring any input prefiltering, which makes it a good choice for image processing. According to the properties of the human visual system, a novel multiresolution JND model is proposed in balanced multiwavelet domain. This model incorporates the spatial contrast sensitivity function, the luminance adaptation effect, and the contrast masking effect via separating the sharp edge and the texture. Then, based on this model, the watermark is adaptively inserted into the most distortion tolerable locations of the luminance and chrominance components without introducing the perceivable distortions. Experimental results show that the proposed watermarking scheme is transparent and has a high robustness to various attacks such as low-pass filtering, noise attacking, JPEG and JPEG2000 compression.

  18. Numerical simulations of internal solitary waves interacting with uniform slopes using an adaptive model

    NASA Astrophysics Data System (ADS)

    Rickard, Graham; O'Callaghan, Joanne; Popinet, Stéphane

    Two-dimensional, non-linear, Boussinesq, non-hydrostatic simulations of internal solitary waves breaking and running up uniform slopes have been performed using an adaptive, finite volume fluid code "Gerris". It is demonstrated that the Gerris dynamical core performs well in this specific but important geophysical context. The "semi-structured" nature of Gerris is exploited to enhance model resolution along the slope where wave breaking and run-up occur. Comparison with laboratory experiments reveals that the generation of single and multiple turbulent surges ("boluses") as a function of slope angle is consistently reproduced by the model, comparable with observations and previous numerical simulations, suggesting aspects of the dynamical energy transfers are being represented by the model in two dimensions. Adaptivity is used to explore model convergence of the wave breaking dynamics, and it is shown that significant CPU memory and time savings are possible with adaptivity.

  19. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.

  20. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation.

    PubMed

    Hillis, James M; Brainard, David H

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.

  1. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model

    PubMed Central

    Minnier, Jessica; Yuan, Ming; Liu, Jun S.; Cai, Tianxi

    2014-01-01

    Genetic studies of complex traits have uncovered only a small number of risk markers explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects, and most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models. PMID:26236061
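
    The two-stage structure can be sketched as: stage I fits a kernel machine per gene set, and stage II combines the per-set predictions under a ridge penalty. This uses generic kernel ridge regression as a stand-in, not the authors' naive Bayes KM estimator; the Gaussian kernel, penalties, and simulated data are assumptions:

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    """Stage I sketch: Gaussian-kernel ridge fit for a single gene set."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    # Return a predictor that evaluates the fitted kernel expansion.
    return lambda Z: np.exp(-gamma * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ alpha

def two_stage_risk_model(gene_sets, y, lam1=0.1, lam2=0.1):
    """Stage II sketch: ridge-regularized aggregation of per-set predictions."""
    preds = np.column_stack([kernel_ridge_fit(X, y, lam1)(X) for X in gene_sets])
    w = np.linalg.solve(preds.T @ preds + lam2 * np.eye(preds.shape[1]), preds.T @ y)
    return w, preds @ w
```

    The appeal of the kernel stage is visible even in this sketch: a non-linear effect within a gene set is captured without specifying its functional form, and the second-stage weights indicate how much each set contributes to the aggregated risk score.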

  2. Semantic Description of Educational Adaptive Hypermedia Based on a Conceptual Model

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Retalis, Symeon; Papaspyrou, Nikolaos

    2004-01-01

    The role of conceptual modeling in Educational Adaptive Hypermedia Applications (EAHA) is especially important. A conceptual model of an educational application depicts the instructional solution that is implemented, containing information about concepts that must be acquired by learners, tasks in which learners must be involved and resources…

  3. Adaptive Remodeling of Achilles Tendon: A Multi-scale Computational Model

    PubMed Central

    Rubenson, Jonas; Umberger, Brian

    2016-01-01

    While it is known that musculotendon units adapt to their load environments, there is only a limited understanding of tendon adaptation in vivo. Here we develop a computational model of tendon remodeling based on the premise that mechanical damage and tenocyte-mediated tendon damage and repair processes modify the distribution of its collagen fiber lengths. We explain how these processes enable the tendon to geometrically adapt to its load conditions. Based on known biological processes, mechanical and strain-dependent proteolytic fiber damage are incorporated into our tendon model. Using a stochastic model of fiber repair, it is assumed that mechanically damaged fibers are repaired longer, whereas proteolytically damaged fibers are repaired shorter, relative to their pre-damage length. To study adaptation of tendon properties to applied load, our model musculotendon unit is a simplified three-component Hill-type model of the human Achilles-soleus unit. Our model results demonstrate that the geometric equilibrium state of the Achilles tendon can coincide with minimization of the total metabolic cost of muscle activation. The proposed tendon model independently predicts rates of collagen fiber turnover that are in general agreement with in vivo experimental measurements. While the computational model here only represents a first step in a new approach to understanding the complex process of tendon remodeling in vivo, given these findings, it appears likely that the proposed framework may itself provide a useful theoretical foundation for developing valuable qualitative and quantitative insights into tendon physiology and pathology. PMID:27684554

  4. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    ERIC Educational Resources Information Center

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  5. A Standard-Based Model for Adaptive E-Learning Platform for Mauritian Academic Institutions

    ERIC Educational Resources Information Center

    Kanaksabee, P.; Odit, M. P.; Ramdoyal, A.

    2011-01-01

    The key aim of this paper is to introduce a standard-based model for an adaptive e-learning platform for Mauritian academic institutions and to investigate the conditions and tools required to implement this model. The main forces of the system are that it allows collaborative learning and communication among users, and reduces considerable paperwork.…

  6. Families of Chronically Ill Children: A Systems and Social-Ecological Model of Adaptation and Challenge.

    ERIC Educational Resources Information Center

    Kazak, Anne E.

    1989-01-01

    Presents family systems model for understanding adaptation and coping in childhood chronic illness. Provides overview of systems and social-ecological theories relevant to this population. Reviews literature on stress and coping in these families. Examines unique issues and discusses importance of these models for responding to families with…

  7. Heuristic-Leadership Model: Adapting to Current Training and Changing Times.

    ERIC Educational Resources Information Center

    Danielson, Mary Ann

    A model was developed for training individuals to adapt better to the changing work environment by focusing on the subordinate to supervisor relationship and providing a heuristic approach to leadership. The model emphasizes a heuristic approach to decision-making through the active participation of both members of the dyad. The demand among…

  8. Towards Increased Relevance: Context-Adapted Models of the Learning Organization

    ERIC Educational Resources Information Center

    Örtenblad, Anders

    2015-01-01

    Purpose: The purposes of this paper are to take a closer look at the relevance of the idea of the learning organization for organizations in different generalized organizational contexts; to open up for the existence of multiple, context-adapted models of the learning organization; and to suggest a number of such models.…

  9. Computerized Adaptive Testing Using a Class of High-Order Item Response Theory Models

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung

    2012-01-01

    In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…

  10. Coastal Adaptation Planning for Sea Level Rise and Extremes: A Global Model for Adaptation Decision-making at the Local Level Given Uncertain Climate Projections

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2014-12-01

    Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. 
Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.
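The protect-or-retreat decision described above can be illustrated with a toy cost comparison for a single coastal segment. This is a minimal sketch, not CIAM itself: the GEV parameters, the linear damage function, and all segment attributes (`dike_cost_per_m`, `relocation_cost`, `capital_value`) are hypothetical stand-ins.

```python
import math

def gev_quantile(p, mu, sigma, xi):
    # Surge height with non-exceedance probability p under GEV(mu, sigma, xi != 0)
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

def choose_strategy(segment, slr, horizon=50):
    """Pick the cheapest of three stylized options for one coastal segment."""
    surge = gev_quantile(0.99, *segment["gev"])            # ~100-year surge
    depth = max(0.0, slr + surge - segment["elevation"])   # flood depth (m)
    costs = {
        # annual expected damage (linear in depth) over the planning horizon
        "no action": 0.01 * depth * segment["capital_value"] * horizon,
        # build a dike tall enough for SLR plus the 100-year surge
        "protect": segment["dike_cost_per_m"] * (slr + surge),
        # one-off cost of relocating people and capital
        "retreat": segment["relocation_cost"],
    }
    return min(costs, key=costs.get), costs
```

The real model optimizes such choices jointly over 12,000+ DIVA segments and repeats the decision in an "act then learn then act" loop; this sketch only shows the per-segment cost trade-off.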

  11. Trauma and Victimization: A Model of Psychological Adaptation.

    ERIC Educational Resources Information Center

    McCann, I. Lisa; And Others

    1988-01-01

    Synthesizes theoretical and empirical findings about psychological responses to traumatization across survivors of rape, childhood sexual or physical abuse, domestic violence, crime, disasters, and the Vietnam War. Describes five major categories of response and presents new theoretical model for understanding individual variations in victim…

  12. An Adaptation Dilemma Caused by Impacts-Modeling Uncertainty

    NASA Astrophysics Data System (ADS)

    Frieler, K.; Müller, C.; Elliott, J. W.; Heinke, J.; Arneth, A.; Bierkens, M. F.; Ciais, P.; Clark, D. H.; Deryng, D.; Doll, P. M.; Falloon, P.; Fekete, B. M.; Folberth, C.; Friend, A. D.; Gosling, S. N.; Haddeland, I.; Khabarov, N.; Lomas, M. R.; Masaki, Y.; Nishina, K.; Neumann, K.; Oki, T.; Pavlick, R.; Ruane, A. C.; Schmid, E.; Schmitz, C.; Stacke, T.; Stehfest, E.; Tang, Q.; Wisser, D.

    2013-12-01

    Ensuring future well-being for a growing population under either strong climate change or an aggressive mitigation strategy requires a subtle balance of potentially conflicting response measures. In the case of competing goals, uncertainty in impact estimates plays a central role when high confidence in achieving a primary objective (such as food security) directly implies an increased probability of uncertainty-induced failure with regard to a competing target (such as climate protection). We use cross-sectorally consistent multi-impact model simulations from the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP, www.isi-mip.org) to illustrate this uncertainty dilemma: RCP projections from 7 global crop, 11 hydrological, and 7 biome models are combined to analyze irrigation and land use changes as possible responses to climate change and increasing crop demand due to population growth and economic development. We show that - while a no-regrets option with regard to climate protection - additional irrigation alone is not expected to balance the demand increase by 2050. In contrast, a strong expansion of cultivated land closes the projected production-demand gap in some crop models. However, it comes at the expense of a loss of natural carbon sinks on the order of 50%. Given the large uncertainty of state-of-the-art crop model projections, even these strong land use changes would not bring us 'on the safe side' with respect to food supply. In a world where increasing carbon emissions continue to shrink the overall solution space, we demonstrate that current impacts-modeling uncertainty is a luxury we cannot afford. ISI-MIP is intended to provide cross-sectorally consistent impact projections for model intercomparison and improvement as well as cross-sectoral integration. The results presented here were generated within the first Fast-Track phase of the project covering global impact projections. The second phase will also include regional projections. It is the aim…

  13. Symmetry-adapted digital modeling I. Axial symmetric proteins.

    PubMed

    Janner, A

    2016-05-01

    Considered are axial symmetric proteins exemplified by the octameric mitochondrial creatine kinase, the Pyr RNA-binding attenuation protein, the D-aminopeptidase and the cyclophilin A-cyclosporin complex, with tetragonal (422), trigonal (32), pentagonal (52) and pentagonal (52) point-group symmetry, respectively. One starts from the protein enclosing form, which is characterized by vertices at points of a lattice (the form lattice) whose dimension depends on the point group. This allows the indexing of Cα's at extreme radial positions. The indexing is extended to additional residues on the basis of a finer lattice, the digital modeling lattice Λ, which includes the form lattice as a sublattice. This leads to a coarse-grained description of the protein. In the crystallographic point-group case, the planar indices are obtained from a projection of atomic positions along the rotation axis, taken as the z axis. The planar indices of a Cα are then those of the nearest projected lattice point. In the non-crystallographic case, low indices are an additional requirement. The coarse-grained bead follows from the condition imposed on the residues selected to have a z coordinate within a band of value δ above and below the height of lattice points. The choice of δ permits a variation of the coarse-grained bead model. For example, the value δ = 0.5 leads to a fine-grained indexing of the full set of residues, whereas with δ = 0.25 one gets a coarse-grained model which includes only about half of these residues. Within this procedure, the indexing of the Cα only depends on the choice of the digital modeling lattice and not on the value of δ. The characteristics which distinguish the present approach from other coarse-grained models of proteins on lattices are summarized at the end. PMID:27126107
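The nearest-lattice-point indexing step described above can be sketched as follows. The square and hexagonal basis vectors and the band-selection rule are generic illustrations, not the paper's specific digital modeling lattice Λ.

```python
def planar_indices(point, basis):
    """Integer indices of the 2-D lattice point nearest a projected position.

    basis: two 2-D lattice basis vectors ((b1x, b1y), (b2x, b2y)).
    """
    (b1x, b1y), (b2x, b2y) = basis
    det = b1x * b2y - b2x * b1y          # determinant of the basis matrix
    px, py = point
    h = (b2y * px - b2x * py) / det      # fractional coordinate along b1
    k = (b1x * py - b1y * px) / det      # fractional coordinate along b2
    return round(h), round(k)

def in_band(z, plane_heights, delta):
    # Keep residues whose z lies within +/- delta of a lattice-plane height
    return min(abs(z - h) for h in plane_heights) <= delta
```

The choice of `delta` reproduces the coarse-graining knob described in the abstract: a wide band keeps nearly all residues, a narrow one only a subset, while `planar_indices` itself depends only on the lattice.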

  14. Adaptation of influenza A(H1N1)pdm09 virus in experimental mouse models.

    PubMed

    Prokopyeva, E A; Sobolev, I A; Prokopyev, M V; Shestopalov, A M

    2016-04-01

    In the present study, three mouse-adapted variants of influenza A(H1N1)pdm09 virus were obtained by lung-to-lung passages of BALB/c, C57BL/6z and CD1 mice. The significantly increased virulence and pathogenicity of all of the mouse-adapted variants induced 100% mortality in the adapted mice. Genetic analysis indicated that the increased virulence of all of the mouse-adapted variants reflected the incremental acquisition of several mutations in PB2, PB1, HA, NP, NA, and NS2 proteins. Identical amino acid substitutions were also detected in all of the mouse-adapted variants of A(H1N1)pdm09 virus, including PB2 (K251R), PB1 (V652A), NP (I353V), NA (I106V, N248D) and NS1 (G159E). Apparently, influenza A(H1N1)pdm09 virus easily adapted to the host after serial passages in the lungs, inducing 100% lethality in the last experimental group. However, cross-challenge revealed that not all adapted variants are pathogenic for different laboratory mice. Such important results should be considered when using the influenza mouse model. PMID:26829383

  16. Modeling light adaptation in circadian clock: prediction of the response that stabilizes entrainment.

    PubMed

    Tsumoto, Kunichika; Kurosawa, Gen; Yoshinaga, Tetsuya; Aihara, Kazuyuki

    2011-01-01

    Periods of biological clocks are close to but often different from the rotation period of the earth. Thus, the clocks of organisms must be adjusted to synchronize with day-night cycles. The primary signal that adjusts the clocks is light. In Neurospora, light transiently up-regulates the expression of specific clock genes. This molecular response to light is called light adaptation. Does light adaptation occur in other organisms? Using published experimental data, we first estimated the time course of the up-regulation rate of gene expression by light. Intriguingly, the estimated up-regulation rate was transient during the light period in mice as well as Neurospora. Next, we constructed a computational model to consider how light adaptation affects the entrainment of circadian oscillation to 24-h light-dark cycles. We found that cellular oscillations are more likely to be destabilized without light adaptation, especially when light intensity is very high. From the present results, we predict that the instability of circadian oscillations under 24-h light-dark cycles can be experimentally observed if light adaptation is altered. We conclude that the functional consequence of light adaptation is to increase the adjustability to 24-h light-dark cycles and thus adapt to fluctuating environments in nature.

  17. Inhibitory effect of natural organic matter or other background constituents on photocatalytic advanced oxidation processes: Mechanistic model development and validation.

    PubMed

    Brame, Jonathon; Long, Mingce; Li, Qilin; Alvarez, Pedro

    2015-11-01

    The ability of reactive oxygen species (ROS) to interact with priority pollutants is crucial for efficient water treatment by photocatalytic advanced oxidation processes (AOPs). However, background compounds in water such as natural organic matter (NOM) can significantly hinder targeted reactions and removal efficiency. This inhibition can be complex, interfering with degradation in solution and at the photocatalyst surface as well as hindering illumination efficiency and ROS production. We developed an analytical model to account for various inhibition mechanisms in catalytic AOPs, including competitive adsorption of inhibitors, scavenging of produced ROS at the surface and in solution, and the inner filtering of the excitation illumination, which combine to decrease ROS-mediated degradation. This model was validated with batch experiments using a variety of ROS producing systems (OH-generating TiO2 photocatalyst and H2O2-UV; (1)O2-generating photosensitive functionalized fullerenes and rose bengal) and inhibitory compounds (NOM, tert-butyl alcohol). Competitive adsorption by NOM and ROS scavenging were the most influential inhibitory mechanisms. Overall, this model enables accurate simulation of photocatalytic AOP performance when one or more inhibitory mechanisms are at work in a wide variety of application scenarios, and underscores the need to consider the effects of background constituents on degradation efficiency.
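The three inhibition mechanisms can be composed multiplicatively in a stylized rate law. The Langmuir-Hinshelwood adsorption term, the Beer-Lambert inner-filter factor, and every constant in `p` below are illustrative assumptions, not the authors' fitted model.

```python
import math

def degradation_rate(c_target, c_inhib, p):
    """Stylized ROS-mediated degradation rate with three inhibition terms."""
    # 1. Inner filtering: the inhibitor absorbs part of the excitation light
    light = math.exp(-p["eps_i"] * c_inhib)
    # 2. Competitive adsorption on the catalyst (Langmuir-Hinshelwood form)
    theta = p["K_t"] * c_target / (1.0 + p["K_t"] * c_target
                                   + p["K_i"] * c_inhib)
    # 3. ROS scavenging: fraction of produced ROS that reacts with the target
    reach = p["k_t"] * c_target / (p["k_t"] * c_target + p["k_s"] * c_inhib)
    return p["k_max"] * light * theta * reach
```

Each factor equals 1 when the inhibitor concentration is zero, so the terms can be switched on individually to see which mechanism (adsorption, scavenging, or inner filtering) dominates the loss of rate.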

  18. Adaptivity Assessment of Regional Semi-Parametric VTEC Modeling to Different Data Distributions

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Onur Karslıoǧlu, Mahmut

    2014-05-01

    Semi-parametric modelling of Vertical Total Electron Content (VTEC) combines parametric and non-parametric models into a single regression model for estimating the parameters and functions from Global Positioning System (GPS) observations. The parametric part is related to the Differential Code Biases (DCBs), which are fixed unknown parameters of the geometry-free linear combination (or the so-called ionospheric observable). On the other hand, the non-parametric component refers to the spatio-temporal distribution of VTEC, which is estimated by applying the method of Multivariate Adaptive Regression B-Splines (BMARS). The BMARS algorithm builds an adaptive model by using tensor products of univariate B-splines that are derived from the data. The algorithm searches for the best-fitting B-spline basis functions in a scale-by-scale strategy, where it starts by adding large-scale B-splines to the model and adaptively decreases the scale to include smaller-scale features through a modified Gram-Schmidt ortho-normalization process. The algorithm is then extended to include the receiver DCBs, so that estimates of the receiver DCBs and the spatio-temporal VTEC distribution can be obtained together in an adaptive semi-parametric model. In this work, the adaptivity of regional semi-parametric modelling of VTEC based on BMARS is assessed for different ground-station and data distribution scenarios. To evaluate the level of adaptivity, the resulting DCBs and VTEC maps from different scenarios are compared not only with each other but also with CODE-distributed GIMs and DCB estimates.
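The scale-by-scale greedy selection can be sketched as a matching-pursuit loop over candidate basis functions. Triangular "hats" stand in for univariate B-splines, and the full BMARS tensor products and modified Gram-Schmidt ortho-normalization are omitted; this only illustrates the greedy pick-the-best-correlated-basis idea.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def hat(center, width):
    # Triangular bump standing in for a univariate B-spline of one scale
    return lambda x: max(0.0, 1.0 - abs(x - center) / width)

def greedy_fit(xs, ys, candidates, n_terms):
    """Greedy scale-by-scale basis selection (matching-pursuit sketch)."""
    residual = list(ys)
    model = []
    for _ in range(n_terms):
        cols = [[f(x) for x in xs] for f in candidates]
        norms = [dot(c, c) for c in cols]
        # normalized correlation of each candidate with the current residual
        scores = [abs(dot(c, residual)) / n ** 0.5 if n > 0 else 0.0
                  for c, n in zip(cols, norms)]
        i = scores.index(max(scores))
        coef = dot(cols[i], residual) / norms[i]
        model.append((candidates[i], coef))
        residual = [r - coef * v for r, v in zip(residual, cols[i])]
    return model

def predict(model, x):
    return sum(coef * f(x) for f, coef in model)
```

Seeding the candidate set with wide hats first and narrower ones later mimics the coarse-to-fine strategy the abstract describes.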

  19. A Predictive Model of Fragmentation using Adaptive Mesh Refinement and a Hierarchical Material Model

    SciTech Connect

    Koniges, A E; Masters, N D; Fisher, A C; Anderson, R W; Eder, D C; Benson, D; Kaiser, T B; Gunney, B T; Wang, P; Maddox, B R; Hansen, J F; Kalantar, D H; Dixit, P; Jarmakani, H; Meyers, M A

    2009-03-03

    Fragmentation is a fundamental material process that naturally spans spatial scales from microscopic to macroscopic. We developed a mathematical framework using an innovative combination of hierarchical material modeling (HMM) and adaptive mesh refinement (AMR) to connect the continuum to microstructural regimes. This framework has been implemented in a new multi-physics, multi-scale, 3D simulation code, NIF ALE-AMR. New multi-material volume fraction and interface reconstruction algorithms were developed for this new code, which is leading the world effort in hydrodynamic simulations that combine AMR with ALE (Arbitrary Lagrangian-Eulerian) techniques. The interface reconstruction algorithm is also used to produce fragments following material failure. In general, the material strength and failure models have history vector components that must be advected along with other properties of the mesh during the remap stage of the ALE hydrodynamics. The fragmentation models are validated against an electromagnetically driven expanding ring experiment and dedicated laser-based fragmentation experiments conducted at the Jupiter Laser Facility. As part of the exit plan, the NIF ALE-AMR code was applied to a number of fragmentation problems of interest to the National Ignition Facility (NIF). One example shows the added benefit of multi-material ALE-AMR that relaxes the requirement that material boundaries must be along mesh boundaries.

  20. A model of the two-dimensional quantum harmonic oscillator in an AdS_3 background

    NASA Astrophysics Data System (ADS)

    Frick, R.

    2016-10-01

    In this paper we study a model of the two-dimensional quantum harmonic oscillator in a three-dimensional anti-de Sitter background. We use a generalized Schrödinger picture in which the analogs of the Schrödinger operators of the particle are independent of both the time and the space coordinates in different representations. The spacetime independent operators of the particle induce the Lie algebra of Killing vector fields of the AdS_3 spacetime. In this picture, we have a metamorphosis of the Heisenberg uncertainty relations.

  1. An adaptive atmospheric transport model for the Nevada Test Site

    SciTech Connect

    Pepper, D.W.; Randerson, D.

    1998-12-31

    The need to accurately calculate the transport of hazardous material is paramount to environmental safety and health activities, as well as to establish a sound emergency response capability, in the western United States and at the Nevada Test Site (NTS). Current efforts are under way at the University of Nevada, Las Vegas (UNLV) and the NOAA Air Resources Laboratory in Las Vegas to develop a state-of-the-art atmospheric flow and species transport model that will accurately calculate wind fields and atmospheric particulate transport over complex terrain. In addition, research efforts are needed to improve predictive capabilities for catastrophic events, e.g., volcanic eruptions, thunderstorms, heavy rains and floods, and dust storms. The model has a wide range of environmental, safety, and health applications as required by the US Department of Energy for NTS programs, including those activities associated with emergency response, the Hazard Material Spill Center, and site restoration and remediation.

  2. Adapting molar data (without density) for molal models

    NASA Astrophysics Data System (ADS)

    Marion, Giles M.

    2007-06-01

    Theoretical geochemical models for electrolyte solutions based on classical thermodynamic principles rely largely upon molal concentrations as input because molality (wt/wt) is independent of temperature and pressure. On the other hand, there are countless studies in the literature where concentrations are expressed as molarity (wt/vol) because these units are more easily measured. To convert from molarity to molality requires an estimate of solution density. Unfortunately, in many, if not most, cases where molarity is the concentration of choice, solution densities are not measured. For concentrated brines such as seawater or even more dense brines, the difference between molarity and molality is significant. Without knowledge of density, these brinish, molar-based studies are closed to theoretical electrolyte solution models. The objective of this paper is to present an algorithm that can accurately calculate the density of molar-based solutions, and, as a consequence, molality. The algorithm consists of molar inputs into a molal-based model (FREZCHEM) that can calculate density. The algorithm uses an iterative process for calculating absolute salinity (SA), density (ρ), and the conversion factor (CF) for molarity to molality. Three cases were examined ranging in density from 1.023 to 1.203 kg(soln.)/l. In all three cases, the SA, ρ, and CF values converged to within 1 ppm by nine iterations. In all three cases, the calculated densities agreed with experimental measurements to within ±0.1%. This algorithm opens a large literature based on molar concentrations to exploration with theoretical models based on molal concentrations and classical thermodynamic principles.
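The iterative molarity-to-molality conversion can be sketched for a single solute. The fixed-point structure (guess molality, compute density, recompute molality) follows the abstract; the linear `density` function below is a made-up stand-in for the molal density model (FREZCHEM's role), not a real equation of state.

```python
def molar_to_molal(c, molar_mass, density_of, tol=1e-9, max_iter=100):
    """Iteratively convert molarity c (mol/L) to molality (mol/kg water).

    density_of(b) returns solution density (kg/L) as a function of molality;
    it stands in for the model-computed density.
    """
    b = c                                   # dilute-limit initial guess
    for _ in range(max_iter):
        rho = density_of(b)                 # kg of solution per litre
        # one litre holds c mol of solute (c * molar_mass kg); rest is water
        b_new = c / (rho - c * molar_mass)
        if abs(b_new - b) < tol:
            break
        b = b_new
    return b_new

# Hypothetical linear density model for an NaCl-like brine (made-up slope)
density = lambda b: 0.997 + 0.040 * b
```

The conversion factor CF of the abstract is simply `b / c` at convergence; for brines with density above that of water, molality comes out noticeably larger than molarity.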

  3. Adaptive cyclic physiologic noise modeling and correction in functional MRI.

    PubMed

    Beall, Erik B

    2010-03-30

    Physiologic noise in BOLD-weighted MRI data is known to be a significant source of variance, reducing the statistical power and specificity in fMRI and functional connectivity analyses. We show a dramatic improvement over current noise correction methods in both fMRI and fcMRI data that avoids overfitting. The traditional noise model is a Fourier series expansion superimposed on the periodicity of parallel measured breathing and cardiac cycles. Correction using this model results in removal of variance matching the periodicity of the physiologic cycles. Using this framework allows easy modeling of noise. However, using a large number of regressors comes at the cost of removing variance unrelated to physiologic noise, such as variance due to the signal of functional interest (overfitting the data). It is our hypothesis that there are a small variety of fits that describe all of the significantly coupled physiologic noise. If this is true, we can replace a large number of regressors used in the model with a smaller number of the fitted regressors and thereby account for the noise sources with a smaller reduction in variance of interest. We describe these extensions and demonstrate that we can preserve variance in the data unrelated to physiologic noise while removing physiologic noise equivalently, resulting in data with a higher effective SNR than with current correction techniques. Our results demonstrate a significant improvement in the sensitivity of fMRI (up to a 17% increase in activation volume for fMRI compared with higher order traditional noise correction) and functional connectivity analyses.
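The "Fourier series expansion superimposed on the periodicity" of a measured physiologic cycle can be sketched for the cardiac trace as follows (a RETROICOR-style expansion). This is a generic illustration, not the paper's adaptive fitting extension; scan times are assumed to fall between the first and last recorded peaks.

```python
import bisect, math

def cardiac_phase(t, peaks):
    """Phase in [0, 2*pi) of time t within its cardiac cycle.

    peaks: sorted R-peak times; t must lie between the first and last peak.
    """
    i = bisect.bisect_right(peaks, t)
    t0, t1 = peaks[i - 1], peaks[i]
    return 2.0 * math.pi * (t - t0) / (t1 - t0)

def fourier_regressors(scan_times, peaks, order=2):
    # One design-matrix row per scan:
    # [cos(phi), sin(phi), cos(2*phi), sin(2*phi), ...] up to the given order
    rows = []
    for t in scan_times:
        phi = cardiac_phase(t, peaks)
        rows.append([f(m * phi) for m in range(1, order + 1)
                     for f in (math.cos, math.sin)])
    return rows
```

Each harmonic adds a cos/sin regressor pair; the paper's point is that many such regressors remove signal of interest too, motivating a smaller set of fitted regressors.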

  4. Addressing potential local adaptation in species distribution models: implications for conservation under climate change

    USGS Publications Warehouse

    Hällfors, Maria Helena; Liao, Jishan; Dzurisin, Jason D. K.; Grundel, Ralph; Hyvärinen, Marko; Towle, Kevin; Wu, Grace C.; Hellmann, Jessica J.

    2016-01-01

    Species distribution models (SDMs) have been criticized for involving assumptions that ignore or categorize many ecologically relevant factors such as dispersal ability and biotic interactions. Another potential source of model error is the assumption that species are ecologically uniform in their climatic tolerances across their range. Typically, SDMs treat a species as a single entity, although populations of many species differ due to local adaptation or other genetic differentiation. Not taking local adaptation into account may lead to incorrect range prediction and therefore misplaced conservation efforts. A constraint, however, is that we often do not know the degree to which populations are locally adapted. Lacking experimental evidence, we still can evaluate niche differentiation within a species' range to promote better conservation decisions. We explore possible conservation implications of making type I or type II errors in this context. For each of two species, we construct three separate MaxEnt models, one considering the species as a single population and two of disjunct populations. PCA analyses and response curves indicate different climate characteristics in the current environments of the populations. Model projections into future climates indicate minimal overlap between areas predicted to be climatically suitable by the whole species versus population-based models. We present a workflow for addressing uncertainty surrounding local adaptation in SDM application and illustrate the value of conducting population-based models to compare with whole-species models. These comparisons might result in more cautious management actions when alternative range outcomes are considered.

  5. Addressing potential local adaptation in species distribution models: implications for conservation under climate change.

    PubMed

    Hällfors, Maria Helena; Liao, Jishan; Dzurisin, Jason; Grundel, Ralph; Hyvärinen, Marko; Towle, Kevin; Wu, Grace C; Hellmann, Jessica J

    2016-06-01

    Species distribution models (SDMs) have been criticized for involving assumptions that ignore or categorize many ecologically relevant factors such as dispersal ability and biotic interactions. Another potential source of model error is the assumption that species are ecologically uniform in their climatic tolerances across their range. Typically, SDMs treat a species as a single entity, although populations of many species differ due to local adaptation or other genetic differentiation. Not taking local adaptation into account may lead to incorrect range prediction and therefore misplaced conservation efforts. A constraint is that we often do not know the degree to which populations are locally adapted. Lacking experimental evidence, we still can evaluate niche differentiation within a species' range to promote better conservation decisions. We explore possible conservation implications of making type I or type II errors in this context. For each of two species, we construct three separate Max-Ent models, one considering the species as a single population and two of disjunct populations. Principal component analyses and response curves indicate different climate characteristics in the current environments of the populations. Model projections into future climates indicate minimal overlap between areas predicted to be climatically suitable by the whole species vs. population-based models. We present a workflow for addressing uncertainty surrounding local adaptation in SDM application and illustrate the value of conducting population-based models to compare with whole-species models. These comparisons might result in more cautious management actions when alternative range outcomes are considered. PMID:27509755

  7. Competition and fixation of cohorts of adaptive mutations under Fisher geometrical model

    PubMed Central

    Alpedrinha, João; Campos, Paulo R.A.; Gordo, Isabel

    2016-01-01

    One of the simplest models of adaptation to a new environment is Fisher’s Geometric Model (FGM), in which populations move on a multidimensional landscape defined by the traits under selection. The predictions of this model have been found to be consistent with current observations of patterns of fitness increase in experimentally evolved populations. Recent studies investigated the dynamics of allele frequency change along adaptation of microbes to simple laboratory conditions and unveiled a dramatic pattern of competition between cohorts of mutations, i.e., multiple mutations simultaneously segregating and ultimately reaching fixation. Here, using simulations, we study the dynamics of phenotypic and genetic change as asexual populations under clonal interference climb a Fisherian landscape, and ask about the conditions under which FGM can display the simultaneous increase and fixation of multiple mutations—mutation cohorts—along the adaptive walk. We find that FGM under clonal interference, and with varying levels of pleiotropy, can reproduce the experimentally observed competition between different cohorts of mutations, some of which have a high probability of fixation along the adaptive walk. Overall, our results show that the surprising dynamics of mutation cohorts recently observed during experimental adaptation of microbial populations can be expected under one of the oldest and simplest theoretical models of adaptation—FGM. PMID:27547562
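A minimal FGM adaptive walk can be sketched in the origin-fixation regime, i.e. one mutation at a time and deliberately without the clonal interference and cohort dynamics the study simulates. The Gaussian fitness function, mutation scale, and fixation rule are standard textbook choices, not the authors' simulation parameters.

```python
import math, random

def fitness(z):
    # Gaussian fitness peak at the origin of the n-dimensional trait space
    return math.exp(-0.5 * sum(x * x for x in z))

def adaptive_walk(z, n_mut=10000, sigma=0.1, seed=1):
    """Origin-fixation walk on Fisher's geometric model.

    Beneficial mutants fix with probability ~2s (Haldane approximation);
    deleterious mutants are discarded, so fitness never decreases.
    """
    rng = random.Random(seed)
    for _ in range(n_mut):
        cand = [x + rng.gauss(0.0, sigma) for x in z]
        s = fitness(cand) / fitness(z) - 1.0      # selection coefficient
        if s > 0 and rng.random() < 2.0 * s:
            z = cand                               # mutation fixes
    return z
```

Reproducing the cohort behavior of the abstract would require tracking multiple segregating lineages simultaneously; this sketch only shows the underlying landscape and fixation step.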

  8. Adaptive discrete-time sliding-mode control of nonlinear systems described by Wiener models

    NASA Astrophysics Data System (ADS)

    Salhi, Houda; Kamoun, Samira; Essounbouli, Najib; Hamzaoui, Abdelaziz

    2016-03-01

    In this paper, we propose an adaptive control scheme that can be applied to nonlinear systems with unknown parameters. The considered class of nonlinear systems is described by block-oriented models, specifically Wiener models, which consist of a dynamic linear block in series with a static nonlinear block. The proposed adaptive control method is based on the inverse of the nonlinear block and on a discrete-time sliding-mode controller. Parameter adaptation is performed using a new recursive parametric estimation algorithm, developed using the adjustable-model method and the least-squares technique. A recursive least squares (RLS) algorithm is used to estimate the inverse nonlinear function. A time-varying gain is proposed in the discrete-time sliding-mode controller to reduce chattering. The stability of the closed-loop nonlinear system with the proposed adaptive control scheme is proved. An application to a pH neutralisation process has been carried out, and the simulation results clearly show the effectiveness of the proposed adaptive control scheme.
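
The Wiener structure and RLS step described above can be sketched as follows. The particular plant, the cubic nonlinearity, its bisection-based inverse, and the regressor choice are illustrative assumptions, not the paper's exact algorithm: the idea is simply that once the output is passed through the inverted nonlinearity, the linear block's parameters can be estimated by standard RLS.

```python
import numpy as np

# Hypothetical Wiener system: linear block x(k) = a*x(k-1) + b*u(k-1),
# followed by a static, invertible nonlinearity y = f(x).
a_true, b_true = 0.7, 0.5
f = lambda x: x + 0.2 * x**3            # static nonlinear block (monotone)

def f_inv(y):
    # Invert the monotone nonlinearity numerically by bisection.
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 200)
x = np.zeros(201)
for k in range(1, 201):
    x[k] = a_true * x[k-1] + b_true * u[k-1]
y = f(x)                                 # measured Wiener-system output

# RLS estimation of [a, b] from the de-nonlinearized signal f_inv(y).
theta = np.zeros(2)
P = 1e3 * np.eye(2)
for k in range(1, 201):
    phi = np.array([f_inv(y[k-1]), u[k-1]])   # regressor vector
    e = f_inv(y[k]) - phi @ theta             # prediction error
    K = P @ phi / (1.0 + phi @ P @ phi)       # RLS gain
    theta += K * e
    P -= np.outer(K, phi) @ P

a_hat, b_hat = theta
```

With noise-free data the estimates recover the linear block's parameters essentially exactly; in the paper's setting the same recursion runs alongside the sliding-mode controller.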

  9. Theoretical models of adaptive energy management in small wintering birds.

    PubMed

    Brodin, Anders

    2007-10-29

    Many small passerines are resident in forests with very cold winters. Considering their size and the adverse conditions, this is a remarkable feat that requires optimal energy management in several respects, for example regulation of body fat reserves, food hoarding and night-time hypothermia. Besides their beneficial effect on survival, these behaviours also entail various costs. The scenario is complex with many potentially important factors, and this has made 'the little bird in winter' a popular topic for theoretic modellers. Many predictions could have been made intuitively, but models have been especially important when many factors interact. Predictions that hardly could have been made without models include: (i) the minimum mortality occurs at the fat level where the marginal values of starvation risk and predation risk are equal; (ii) starvation risk may also decrease when food requirement increases; (iii) mortality from starvation may correlate positively with fat reserves; (iv) the existence of food stores can increase fitness substantially even if the food is not eaten; (v) environmental changes may induce increases or decreases in the level of reserves depending on whether changes are temporary or permanent; and (vi) hoarding can also evolve under seemingly group-selectionistic conditions.

  10. Block-structured adaptive meshes and reduced grids for atmospheric general circulation models.

    PubMed

    Jablonowski, Christiane; Oehmke, Robert C; Stout, Quentin F

    2009-11-28

    Adaptive mesh refinement techniques offer a flexible framework for future variable-resolution climate and weather models since they can focus their computational mesh on certain geographical areas or atmospheric events. Adaptive meshes can also be used to coarsen a latitude-longitude grid in polar regions. This allows for the so-called reduced grid setups. A spherical, block-structured adaptive grid technique is applied to the Lin-Rood finite-volume dynamical core for weather and climate research. This hydrostatic dynamics package is based on a conservative and monotonic finite-volume discretization in flux form with vertically floating Lagrangian layers. The adaptive dynamical core is built upon a flexible latitude-longitude computational grid and tested in two- and three-dimensional model configurations. The discussion is focused on static mesh adaptations and reduced grids. The two-dimensional shallow water setup serves as an ideal testbed and allows the use of shallow water test cases like the advection of a cosine bell, moving vortices, a steady-state flow, the Rossby-Haurwitz wave or cross-polar flows. It is shown that reduced grid configurations are viable candidates for pure advection applications but should be used moderately in nonlinear simulations. In addition, static grid adaptations can be successfully used to resolve three-dimensional baroclinic waves in the storm-track region.
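
The reduced-grid idea above (fewer longitude points toward the poles so cells keep a roughly uniform physical east-west width) can be illustrated with a toy calculation. The cosine-based column-count rule below is an assumption for illustration, not the dynamical core's actual coarsening scheme:

```python
import math

def reduced_grid_columns(nlon_equator, lats_deg, min_cols=4):
    """Longitude point count per latitude row, shrinking with cos(lat)
    so grid cells keep a roughly constant physical east-west width."""
    cols = {}
    for lat in lats_deg:
        n = max(min_cols, round(nlon_equator * math.cos(math.radians(lat))))
        cols[lat] = n
    return cols

# 128 columns at the equator, progressively fewer toward the pole.
cols = reduced_grid_columns(128, [0, 30, 60, 85])
```

At 60 degrees latitude the row holds half as many points as the equator, which is the kind of polar coarsening the abstract describes.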

  11. An adaptive time-stepping strategy for solving the phase field crystal model

    SciTech Connect

    Zhang, Zhengru; Ma, Yuan; Qiao, Zhonghua

    2013-09-15

    In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. Numerical simulation of the PFC model requires long integration times to reach a steady state, so large time steps are necessary. Unconditionally energy stable schemes are used to solve the PFC model, and the time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady-state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that CPU time is significantly reduced for long time simulations.
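
The energy-based adaptivity can be sketched on a toy gradient flow. The step-size formula below (the step shrinks when the energy changes fast and grows toward a cap near steady state) follows the spirit of the strategy described above; the energy, constants, and explicit integrator are chosen arbitrarily for illustration:

```python
import math

# Toy gradient flow u' = -E'(u) with double-well energy E(u) = (u^2 - 1)^2 / 4.
E  = lambda u: (u * u - 1.0) ** 2 / 4.0
dE = lambda u: u * (u * u - 1.0)

dt_min, dt_max, alpha = 1e-3, 0.5, 100.0
u, t = 2.0, 0.0
history = []
for _ in range(2000):
    # For a gradient flow, |dE/dt| = E'(u)^2; take small steps while the
    # energy changes quickly and large steps as a steady state nears.
    rate = dE(u) ** 2
    dt = max(dt_min, dt_max / math.sqrt(1.0 + alpha * rate))
    u = u - dt * dE(u)                 # explicit gradient-descent step
    t += dt
    history.append((t, u, dt))
    if abs(dE(u)) < 1e-10:             # effectively at steady state
        break

final_u = u
```

The run starts with small steps during the fast transient and finishes with steps near `dt_max` once the solution settles at the energy minimum.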

  12. Adaptive h-refinement for reduced-order models

    SciTech Connect

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
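
The offline tree construction can be sketched with a recursive two-means clustering of state-variable snapshot rows. This toy version (pure NumPy, fixed depth, deterministic seeding by row means) is an illustrative stand-in for the paper's actual k-means procedure, but it shows how the hierarchy of disjoint supports arises:

```python
import numpy as np

def two_means(rows, data, iters=10):
    """Split state-variable indices into two clusters (Lloyd's algorithm),
    seeding the centers with the rows of smallest and largest mean."""
    X = data[rows]
    m = X.mean(axis=1)
    centers = np.vstack([X[m.argmin()], X[m.argmax()]]).astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(2):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return rows[labels == 0], rows[labels == 1]

def build_tree(rows, data, depth):
    """Recursively split state-variable indices into a hierarchy of
    disjoint supports, as used offline for the basis-splitting scheme."""
    if depth == 0 or len(rows) < 2:
        return {"support": rows.tolist()}
    left, right = two_means(rows, data)
    if len(left) == 0 or len(right) == 0:      # degenerate split: stop
        return {"support": rows.tolist()}
    return {"support": rows.tolist(),
            "children": [build_tree(left, data, depth - 1),
                         build_tree(right, data, depth - 1)]}

# Toy snapshot matrix: 8 state variables x 5 snapshots, two clear groups.
rng = np.random.default_rng(1)
snap = np.vstack([rng.normal(0.0, 0.1, (4, 5)),
                  rng.normal(5.0, 0.1, (4, 5))])
tree = build_tree(np.arange(8), snap, depth=1)
```

Online, a basis vector supported on a parent node would be "split" into vectors supported on the children, which is what makes the refined model able to approach full-order fidelity.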

  13. Testing the Nanoparticle-Allostatic Cross Adaptation-Sensitization Model for Homeopathic Remedy Effects

    PubMed Central

    Bell, Iris R.; Koithan, Mary; Brooks, Audrey J.

    2012-01-01

    Key concepts of the Nanoparticle-Allostatic Cross-Adaptation-Sensitization (NPCAS) Model for the action of homeopathic remedies in living systems include source nanoparticles as low-level environmental stressors, heterotypic hormesis, cross-adaptation, allostasis (the stress response network), time-dependent sensitization with endogenous amplification and bidirectional change, and self-organizing complex adaptive systems. The model accommodates the requirement for measurable physical agents in the remedy (source nanoparticles and/or source adsorbed to silica nanoparticles); hormetic adaptive responses in the organism, triggered by nanoparticles, with bipolar, metaplastic change dependent on the history of the organism; clinical matching of the patient's symptom picture, including modalities, to the symptom pattern that the source material can cause (cross-adaptation and cross-sensitization); and evidence for nanoparticle-related quantum macro-entanglement in homeopathic pathogenetic trials. This paper examines research implications of the model, discussing the following hypotheses: variability in nanoparticle size, morphology, and aggregation affects remedy properties and reproducibility of findings; homeopathic remedies modulate adaptive allostatic responses, with multiple dynamic short- and long-term effects; and simillimum remedy nanoparticles, as novel mild stressors corresponding to the organism's dysfunction, initiate time-dependent cross-sensitization, reversing the direction of dysfunctional reactivity to environmental stressors. The NPCAS model suggests a way forward for systematic research on homeopathy. The central proposition is that homeopathic treatment is a form of nanomedicine acting by modulation of endogenous adaptation and metaplastic amplification processes in the organism to enhance long-term systemic resilience and health. PMID:23290882

  14. An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics

    PubMed Central

    Gupta, Priti; Markan, C. M.

    2014-01-01

    The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real-world applications. In this paper we use a novel time-staggered Winner Take All circuit, which exploits the adaptation dynamics of floating gate transistors, to model an adaptive cortical cell that demonstrates orientation selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling different oriented bars and becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, and response to spatial frequency and periodic patterns, reveal close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively, exhibiting cluster formation and making way for adaptively building orientation-selective maps in silicon. PMID
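
The competitive-learning behaviour the abstract describes can be sketched in software (the paper itself implements it with floating-gate dynamics in silicon). The bar patterns, learning rate, winner-take-all update, and slightly biased initialization below are illustrative assumptions:

```python
import numpy as np

# Two 3x3 input patterns: a horizontal bar and a vertical bar (normalized).
horiz = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float).ravel()
vert  = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], float).ravel()
horiz /= np.linalg.norm(horiz)
vert  /= np.linalg.norm(vert)
patterns = [horiz, vert]

# Two competing cells, initialized with a slight orientation bias
# (a standard trick to avoid "dead" units in competitive learning).
W = np.vstack([0.6 * horiz + 0.4 * vert,
               0.4 * horiz + 0.6 * vert])
W /= np.linalg.norm(W, axis=1, keepdims=True)

eta = 0.1
for t in range(200):
    x = patterns[t % 2]
    winner = np.argmax(W @ x)               # winner-take-all competition
    W[winner] += eta * (x - W[winner])      # winner moves toward the input
    W[winner] /= np.linalg.norm(W[winner])  # weight normalization

resp_h, resp_v = W @ horiz, W @ vert        # orientation tuning after training
```

After training, each cell responds strongly to one orientation and weakly to the other, the software analogue of the orientation tuning measured on the hardware cell.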

  15. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    NASA Astrophysics Data System (ADS)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems in monthly reservoir inflow forecasting is examined by developing and comparing monthly reservoir inflow prediction models based on autoregressive (AR), artificial neural network (ANN), and adaptive neural-based fuzzy inference system (ANFIS) approaches. To account for monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors. To evaluate the performance of the selected ANN and ANFIS models, a comparison is made with the AR models. The ANFIS model trained with an input vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy over ANFIS models trained with input vectors of previous inflows alone. In all cases, ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with the cyclic terms is shown to provide a better representation of monthly inflow forecasting for the planning and operation of reservoirs.
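
The benefit of adding cyclic terms to an inflow model can be sketched on synthetic monthly data. The lag order, the sin/cos harmonic form, and the synthetic series below are illustrative assumptions; the study's ANFIS machinery is replaced here by a plain least-squares fit to isolate the effect of the periodicity terms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 240                                          # 20 years of monthly flows
months = np.arange(n)
# Synthetic monthly inflow: annual cycle plus AR(1) persistence and noise.
cycle = 100 + 60 * np.sin(2 * np.pi * months / 12)
q = np.zeros(n)
q[0] = cycle[0]
for t in range(1, n):
    q[t] = cycle[t] + 0.5 * (q[t-1] - cycle[t-1]) + rng.normal(0, 5)

y = q[1:]
# AR(1)-only design matrix vs. AR(1) plus cyclic (sin/cos) terms.
X_ar = np.column_stack([np.ones(n - 1), q[:-1]])
X_cyc = np.column_stack([
    X_ar,
    np.sin(2 * np.pi * months[1:] / 12),         # cyclic terms encode the
    np.cos(2 * np.pi * months[1:] / 12),         # monthly periodicity
])

def fit_rmse(X):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ coef) ** 2)))

rmse_ar, rmse_cyc = fit_rmse(X_ar), fit_rmse(X_cyc)
```

On this synthetic series the cyclic terms absorb the seasonal signal the lagged inflow alone cannot represent, mirroring the study's finding that models with cyclic inputs forecast more accurately.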

  16. A Compositional Relevance Model for Adaptive Information Retrieval

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.
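
The core bookkeeping described above (record relevance feedback per query and context, then generalize to similar contexts) might be sketched as follows. The scoring rule, Jaccard similarity measure, and class names are invented for illustration and are not the Computer Integrated Documentation system's actual model:

```python
from collections import defaultdict

class RelevanceNetwork:
    """Records user feedback on (query, context) pairs and generalizes
    scores to similar contexts by feature overlap. Illustrative sketch."""

    def __init__(self):
        # (query, frozenset(context)) -> {reference: cumulative score}
        self.feedback = defaultdict(lambda: defaultdict(float))

    def record(self, query, context, reference, useful):
        key = (query, frozenset(context))
        self.feedback[key][reference] += 1.0 if useful else -1.0

    def rank(self, query, context):
        """Score references for a query, weighting stored feedback by
        Jaccard similarity between stored and current contexts."""
        context = set(context)
        scores = defaultdict(float)
        for (q, ctx), refs in self.feedback.items():
            if q != query:
                continue
            union = ctx | context
            sim = len(ctx & context) / len(union) if union else 1.0
            for ref, s in refs.items():
                scores[ref] += sim * s
        return sorted(scores, key=scores.get, reverse=True)

# Hypothetical usage: feedback recorded in two different contexts, then a
# query arrives in a new but partially overlapping context.
net = RelevanceNetwork()
net.record("engine start", {"task:maintenance", "role:technician"}, "doc-12", True)
net.record("engine start", {"task:maintenance", "role:technician"}, "doc-12", True)
net.record("engine start", {"task:training"}, "doc-7", True)
ranking = net.rank("engine start", {"task:maintenance", "role:pilot"})
```

No training period or a priori statistics are needed: scores accumulate incrementally from feedback, which matches the incremental character the abstract claims for the model.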

  17. An exploration of the adaptation and development after persecution and trauma (ADAPT) model with resettled refugee adolescents in Australia: A qualitative study.

    PubMed

    McGregor, Lucy S; Melvin, Glenn A; Newman, Louise K

    2016-06-01

    Refugee adolescents endure high rates of traumatic exposure, as well as subsequent resettlement and adaptational stressors. Research on the effects of trauma in refugee populations has focussed on psychopathological outcomes, in particular posttraumatic stress disorder. However, this approach does not address the psychosocial and adaptive dimensions of the refugee experience. The ADAPT model proposes an alternate conceptualization of the refugee experience, theorizing that refugee trauma challenges five core psychosocial adaptive systems and that the impact on these systems leads to psychological difficulties. This study investigated the application of the ADAPT model to adolescents' accounts of their refugee and resettlement experiences. Deductive thematic analysis was used to analyse responses of 43 adolescent refugees to a semistructured interview. The ADAPT model was found to be a useful paradigm for conceptualizing the impact of adolescents' refugee and resettlement journeys, capturing individual variation in the salience of particular adaptive systems to individuals' experiences. Findings are discussed in light of current understandings of the psychological impact of the refugee experience on adolescents.

  18. Shape-model-based adaptation of 3D deformable meshes for segmentation of medical images

    NASA Astrophysics Data System (ADS)

    Pekar, Vladimir; Kaus, Michael R.; Lorenz, Cristian; Lobregt, Steven; Truyen, Roel; Weese, Juergen

    2001-07-01

    Segmentation methods based on adaptation of deformable models have found numerous applications in medical image analysis. Many efforts have been made in recent years to improve their robustness and reliability. In particular, increasingly many methods use a priori information about the shape of the anatomical structure to be segmented. This reduces the risk of the model being attracted to false features in the image and, as a consequence, reduces the dependence on close initialization, which remains the principal limitation of elastically deformable models. In this paper, we present a novel segmentation approach which uses a 3D anatomical statistical shape model to initialize the adaptation process of a deformable model represented by a triangular mesh. As the first step, the anatomical shape model is parametrically fitted to the structure of interest in the image. The result of this global adaptation is used to initialize the local mesh refinement based on an energy minimization. We applied our approach to segment spine vertebrae in CT datasets. The segmentation quality was quantitatively assessed for six vertebrae, from two datasets, by computing the mean and maximum distance between the adapted mesh and a manually segmented reference shape. The results of the study show that the presented method is a promising approach for segmentation of complex anatomical structures in medical images.
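
The local refinement step (energy minimization balancing image features against mesh smoothness) can be sketched in one dimension. The energies, weights, and synthetic "feature" data below are illustrative assumptions standing in for the paper's 3D triangular-mesh formulation:

```python
import numpy as np

# Toy 1D analogue of mesh refinement: vertices v_i start at a coarse
# initialization (here, from a "shape model" at zero) and are pulled toward
# noisy feature points while an internal smoothness energy keeps
# neighbouring vertices coherent.
rng = np.random.default_rng(0)
n = 50
target = np.sin(np.linspace(0, np.pi, n))       # "true" boundary
features = target + rng.normal(0, 0.05, n)      # detected image features
v = np.zeros(n)                                 # initialization

alpha, beta, lr = 1.0, 4.0, 0.01
for _ in range(500):
    ext = v - features                          # external (image) term
    lap = np.zeros(n)                           # internal (smoothness) term:
    lap[1:-1] = 2 * v[1:-1] - v[:-2] - v[2:]    # discrete Laplacian
    v -= lr * (alpha * ext + beta * lap)        # gradient-descent step

err = float(np.abs(v - target).max())
```

The smoothness term damps the feature noise while the external term drags the contour onto the boundary, the same trade-off the mesh-refinement energy controls in 3D.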

  19. Highly Adaptable Triple-Negative Breast Cancer Cells as a Functional Model for Testing Anticancer Agents

    PubMed Central

    Singh, Balraj; Shamsnia, Anna; Raythatha, Milan R.; Milligan, Ryan D.; Cady, Amanda M.; Madan, Simran; Lucci, Anthony

    2014-01-01

    A major obstacle in developing effective therapies against solid tumors stems from an inability to adequately model the rare subpopulation of panresistant cancer cells that may often drive the disease. We describe a strategy for optimally modeling highly abnormal and highly adaptable human triple-negative breast cancer cells, and evaluating therapies for their ability to eradicate such cells. To overcome the shortcomings often associated with cell culture models, we incorporated several features in our model including a selection of highly adaptable cancer cells based on their ability to survive a metabolic challenge. We have previously shown that metabolically adaptable cancer cells efficiently metastasize to multiple organs in nude mice. Here we show that the cancer cells modeled in our system feature an embryo-like gene expression and amplification of the fat mass and obesity associated gene FTO. We also provide evidence of upregulation of ZEB1 and downregulation of GRHL2 indicating increased epithelial to mesenchymal transition in metabolically adaptable cancer cells. Our results obtained with a variety of anticancer agents support the validity of the model of realistic panresistance and suggest that it could be used for developing anticancer agents that would overcome panresistance. PMID:25279830

  20. Indirect model reference adaptive control for a class of fractional order systems

    NASA Astrophysics Data System (ADS)

    Chen, Yuquan; Wei, Yiheng; Liang, Shu; Wang, Yong

    2016-10-01

    This article focuses on the indirect model reference adaptive control problem for fractional order systems. A constrained gradient estimation method is established first, since parameter estimation is an integral part of the overall control problem. A novel adaptive control law is then designed that unifies the two problems of parameter estimation and reference tracking. On this basis, an effective control scheme is established. The stability of the resulting closed-loop system is analyzed rigorously via the indirect Lyapunov method and the frequency-distributed model. Finally, a careful simulation study illustrates the effectiveness of the proposed scheme.
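
The model-reference idea can be sketched for an integer-order first-order plant with the classic MIT-rule gradient update; the paper's fractional-order calculus and constrained estimator are beyond this toy, whose plant, gains, and step size are all assumptions:

```python
# Discrete-time MIT-rule sketch: adapt a feedforward gain `theta` so the
# plant output y tracks the reference-model output ym.
dt, gamma = 0.01, 5.0
a, b = 1.0, 0.5            # plant:           dy/dt  = -a*y  + b*theta*r
am, bm = 1.0, 1.0          # reference model: dym/dt = -am*ym + bm*r
theta, y, ym = 0.0, 0.0, 0.0
r = 1.0                    # constant reference input

for _ in range(20000):
    e = y - ym
    # MIT rule: d(theta)/dt = -gamma * e * ym  (the output sensitivity is
    # approximated by the reference-model output, a standard simplification).
    theta += dt * (-gamma * e * ym)
    y  += dt * (-a * y + b * theta * r)     # forward-Euler plant step
    ym += dt * (-am * ym + bm * r)          # forward-Euler model step

# The ideal gain makes the closed loop match the model: theta* = bm/b = 2.
```

The gain converges to the matching value and the plant output settles on the reference-model output, which is the tracking/estimation unification the abstract describes, here in its simplest integer-order form.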