Science.gov

Sample records for adaptive background model

  1. Video object segmentation via adaptive threshold based on background model diversity

    NASA Astrophysics Data System (ADS)

    Boubekeur, Mohamed Bachir; Luo, SenLin; Labidi, Hocine; Benlefki, Tarek

    2015-03-01

    The background subtraction could be presented as classification process when investigating the upcoming frames in a video stream, taking in consideration in some cases: a temporal information, in other cases the spatial consistency, and these past years both of the considerations above. The classification often relied in most of the cases on a fixed threshold value. In this paper, a framework for background subtraction and moving object detection based on adaptive threshold measure and short/long frame differencing procedure is proposed. The presented framework explored the case of adaptive threshold using mean squared differences for a sampled background model. In addition, an intuitive update policy which is neither conservative nor blind is presented. The algorithm succeeded on extracting the moving foreground and isolating an accurate background.

  2. Regularized background adaptation: a novel learning rate control scheme for gaussian mixture modeling.

    PubMed

    Lin, Horng-Horng; Chuang, Jen-Hui; Liu, Tyng-Luh

    2011-03-01

    To model a scene for background subtraction, Gaussian mixture modeling (GMM) is a popular choice for its capability of adaptation to background variations. However, GMM often suffers from a tradeoff between robustness to background changes and sensitivity to foreground abnormalities and is inefficient in managing the tradeoff for various surveillance scenarios. By reviewing the formulations of GMM, we identify that such a tradeoff can be easily controlled by adaptive adjustments of the GMM's learning rates for image pixels at different locations and of distinct properties. A new rate control scheme based on high-level feedback is then developed to provide better regularization of background adaptation for GMM and to help resolving the tradeoff. Additionally, to handle lighting variations that change too fast to be caught by GMM, a heuristic rooting in frame difference is proposed to assist the proposed rate control scheme for reducing false foreground alarms. Experiments show the proposed learning rate control scheme, together with the heuristic for adaptation of over-quick lighting change, gives better performance than conventional GMM approaches.

  3. In-Depth Functional Diagnostics of Mouse Models by Single-Flash and Flicker Electroretinograms without Adapting Background Illumination.

    PubMed

    Tanimoto, Naoyuki; Michalakis, Stylianos; Weber, Bernhard H F; Wahl-Schott, Christian A; Hammes, Hans-Peter; Seeliger, Mathias W

    2016-01-01

    Electroretinograms (ERGs) are commonly recorded at the cornea for an assessment of the functional status of the retina in mouse models. Full-field ERGs can be elicited by single-flash as well as flicker light stimulation although in most laboratories flicker ERGs are recorded much less frequently than singleflash ERGs. Whereas conventional single-flash ERGs contain information about layers, i.e., outer and inner retina, flicker ERGs permit functional assessment of the vertical pathways of the retina, i.e., rod system, cone ON-pathway, and cone OFF-pathway, when the responses are evoked at a relatively high luminance (0.5 log cd s/m(2)) with varying frequency (from 0.5 to 30 Hz) without any adapting background illumination. Therefore, both types of ERGs complement an in-depth functional characterization of the mouse retina, allowing for a discrimination of an underlying functional pathology. Here, we introduce the systematic interpretation of the single-flash and flicker ERGs by demonstrating several different patterns of functional phenotype in genetic mouse models, in which photoreceptors and/or bipolar cells are primarily or secondarily affected.

  4. The GLAST Background Model

    SciTech Connect

    Ormes, J. F.; Atwood, W.; Burnett, T.; Grove, E.; Longo, F.; McEnery, J.; Ritz, S.; Mizuno, T.

    2007-07-12

    In order to estimate the ability of the GLAST/LAT to reject unwanted background of charged particles, optimize the on-board processing, size the required telemetry and optimize the GLAST orbit, we developed a detailed model of the background particles that would affect the LAT. In addition to the well-known components of the cosmic radiation, we included splash and reentrant components of protons, electrons (e+ and e-) from 10 MeV and beyond as well as the albedo gamma rays produced by cosmic ray interactions with the atmosphere. We made estimates of the irreducible background components produced by positrons and hadrons interacting in the multilayered micrometeorite shield and spacecraft surrounding the LAT and note that because the orbital debris has increased, the shielding required and hence the background are larger than were present in EGRET. Improvements to the model are currently being made to include the east-west effect.

  5. The GLAST Background Model

    SciTech Connect

    Ormes, J.F.; Atwood, W.; Burnett, T.; Grove, E.; Longo, F.; McEnery, J.; Mizuno, T.; Ritz, S.; /NASA, Goddard

    2007-10-17

    In order to estimate the ability of the GLAST/LAT to reject unwanted background of charged particles, optimize the on-board processing, size the required telemetry and optimize the GLAST orbit, we developed a detailed model of the background particles that would affect the LAT. In addition to the well-known components of the cosmic radiation, we included splash and reentrant components of protons, electrons (e+ and e-) from 10 MeV and beyond as well as the albedo gamma rays produced by cosmic ray interactions with the atmosphere. We made estimates of the irreducible background components produced by positrons and hadrons interacting in the multilayered micrometeorite shield and spacecraft surrounding the LAT and note that because the orbital debris has increased, the shielding required and hence the background are larger than were present in EGRET. Improvements to the model are currently being made to include the east-west effect.

  6. Adaptation of a clustered lumpy background model for task-based image quality assessment in x-ray phase-contrast mammography

    PubMed Central

    Zysk, Adam M.; Brankov, Jovan G.; Wernick, Miles N.; Anastasio, Mark A.

    2012-01-01

    Purpose: Since the introduction of clinical x-ray phase-contrast mammography (PCM), a technique that exploits refractive-index variations to create edge enhancement at tissue boundaries, a number of optimization studies employing physical image-quality metrics have been performed. Ideally, task-based assessment of PCM would have been conducted with human readers. These studies have been limited, however, in part due to the large parameter-space of PCM system configurations and the difficulty of employing expert readers for large-scale studies. It has been proposed that numerical observers can be used to approximate the statistical performance of human readers, thus enabling the study of task-based performance over a large parameter-space. Methods: Methods are presented for task-based image quality assessment of PCM images with a numerical observer, the most significant of which is an adapted lumpy background from the conventional mammography literature that accounts for the unique wavefield propagation physics of PCM image formation and will be used with a numerical observer to assess image quality. These methods are demonstrated by performing a PCM task-based image quality study using a numerical observer. This study employs a signal-known-exactly, background-known-statistically Bayesian ideal observer method to assess the detectability of a calcification object in PCM images when the anode spot size and calcification diameter are varied. Results: The first realistic model for the structured background in PCM images has been introduced. A numerical study demonstrating the use of this background model has compared PCM and conventional mammography detection of calcification objects. The study data confirm the strong PCM calcification detectability dependence on anode spot size. These data can be used to balance the trade-off between enhanced image quality and the potential for motion artifacts that comes with use of a reduced spot size and increased exposure time

  7. Sensorimotor adaptation is influenced by background music.

    PubMed

    Bock, Otmar

    2010-06-01

    It is well established that listening to music can modify subjects' cognitive performance. The present study evaluates whether this so-called Mozart Effect extends beyond cognitive tasks and includes sensorimotor adaptation. Three subject groups listened to musical pieces that in the author's judgment were serene, neutral, or sad, respectively. This judgment was confirmed by the subjects' introspective reports. While listening to music, subjects engaged in a pointing task that required them to adapt to rotated visual feedback. All three groups adapted successfully, but the speed and magnitude of adaptive improvement was more pronounced with serene music than with the other two music types. In contrast, aftereffects upon restoration of normal feedback were independent of music type. These findings support the existence of a "Mozart effect" for strategic movement control, but not for adaptive recalibration. Possibly, listening to music modifies neural activity in an intertwined cognitive-emotional network.

  8. Adaptation in chemoreceptor cells. II. The effects of cross-adapting backgrounds depend on spectral tuning.

    PubMed

    Borroni, P F; Atema, J

    1989-09-01

    1. The cross-adapting effects of chemical backgrounds on the response of primary chemoreceptor cells to superimposed stimuli were studied using NH(4) receptor cells, of known spectral tuning from the lobster (Homarus americanus). 2. Spectrum experiments: The spectral tuning of NH(4) receptor cells was investigated using NH(4)C1 and 7 other compounds selected as the most stimulatory non-best compounds for NH(4) cells from a longer list of compounds tested in previous studies. Based on their responses to the compounds tested, 3 spectral subpopulations of NH(4) Bet cells which responded second-best to Betaine (Bet; and 'pure' NH(4) cells, which responded to NH(4)C1 only (Fig.1). 3. Cross-adaptation experiments: Overall, cross-adaptation with Glu and Bet backgrounds caused suppression of response of NH(4) receptor cells to various concentrations of NH(4)C1. However, the different subpopulations of NH(4) cells were affected differently: (a) The stimulus-response functions of NH(4)-Glu cells were significantly suppressed by both a 3 micrometre (G3) and 300 micrometre (G300) Glu backgrounds. (b) The stimulus-response functions of NH(4)-Bet cells was not affected by a 3 micrometre (B3), but significantly suppressed by a 300 micrometre (B300) Bet background. (c) The stimulus-response functions of pure NH(4) cells were not affected by any of the Glu or Bet back grounds (Figs. 3, 4). 4. The stimulus-response functions of 5 cells from all different subpopulations were enhanced by cross-adaptation with the G300 and B300 back-grounds (Fig 4, Table 1). 5. Whereas self-adaptation caused parallel shifts in stimulus-response functions (Borroni and Atema 1988), cross-adaptation caused a decrease in slope of stimulus-response functions. Implications of the results from cross- and self-adaptation experiments on NH(4) receptor cells, for a receptor cell model are discussed.

  9. Robust background modelling in DIALS

    PubMed Central

    Parkhurst, James M.; Winter, Graeme; Waterman, David G.; Fuentes-Montero, Luis; Gildea, Richard J.; Murshudov, Garib N.; Evans, Gwyndaf

    2016-01-01

    A method for estimating the background under each reflection during integration that is robust in the presence of pixel outliers is presented. The method uses a generalized linear model approach that is more appropriate for use with Poisson distributed data than traditional approaches to pixel outlier handling in integration programs. The algorithm is most applicable to data with a very low background level where assumptions of a normal distribution are no longer valid as an approximation to the Poisson distribution. It is shown that traditional methods can result in the systematic underestimation of background values. This then results in the reflection intensities being overestimated and gives rise to a change in the overall distribution of reflection intensities in a dataset such that too few weak reflections appear to be recorded. Statistical tests performed during data reduction may mistakenly attribute this to merohedral twinning in the crystal. Application of the robust generalized linear model algorithm is shown to correct for this bias. PMID:27980508

  10. Adaptive threshold selection for background removal in fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Li, Weishi; Yan, Jianwen; Yu, Liandong; Pan, Chengliang

    2017-03-01

    In fringe projection profilometry, background and shadow are inevitable in the image of an object, and must be identified and removed. In existing methods, it is nontrivial to determine a proper threshold to segment the background and shadow regions, especially when the gray-level histogram of the image is close to unimodal, and an improper threshold generally results in misclassification of the object and the background/shadow. In this paper, an adaptive threshold method is proposed to tackle the problem. Different from the existing automatic methods, the modulation-level histogram, instead of the gray-level histogram, of the image is employed to determine the threshold. Furthermore, a new weighting factor is proposed to improve Otsu's method to segment the image with a histogram close to unimodal, and the modulation difference of the object pixels and the background/shadow pixels is intensified significantly by the weighting factor. Moreover, the weighting factor is adaptive to the image. The proposed method outperforms existing methods either in accuracy, efficiency or automation. Experimental results are given to demonstrate the feasibility and effectiveness of the proposed method.

  11. Do weak adapting backgrounds uncover multiple components in the electroretinogram of the horseshoe crab?

    PubMed

    Lucas, J C; Weiner, W W; Ahmed, J

    2003-01-01

    The lateral eye of the horseshoe crab, Limulus polyphemus, has been used as a model system for over a century to study visual and circadian processes. One advantage of this system is the relative simplicity of the retina. The input pathway of the retina consists of photoreceptor cells that are electrically coupled to the dendrite of a second-order cell, which sends action potentials to the brain. Electroretinograms (ERGs) recorded from the lateral eye show a biphasic shape, with a leading negative wave and a later positive peak. The purpose of these experiments was to determine whether adapting backgrounds could be used to uncover multiple adaptation mechanisms within the ERG. To test this idea, ERGs were elicited using variable intensity flashes presented under dark-adapted conditions, as well as in the presence of weak adapting backgrounds. Flashes and backgrounds were generated using green LEDs (lambda max = 525 nm) under software control. ERGs were recorded using a corneal wick electrode placed on the lateral eye of the horseshoe crab. Preliminary results suggest that ERGs recorded in the presence of adapting backgrounds are linearly scaled versions of dark-adapted FRGs. This suggests that there is a single adaptation stage in the Limulus retina. This is in contrast with analogous results from mammals, including mouse, cat and monkey, which show multiple stages of adaptation within their more complex retinas.

  12. Incremental principal component pursuit for video background modeling

    DOEpatents

    Rodriquez-Valderrama, Paul A.; Wohlberg, Brendt

    2017-03-14

    An incremental Principal Component Pursuit (PCP) algorithm for video background modeling that is able to process one frame at a time while adapting to changes in background, with a computational complexity that allows for real-time processing, having a low memory footprint and is robust to translational and rotational jitter.

  13. Adaptive response modelling

    NASA Astrophysics Data System (ADS)

    Campa, Alessandro; Esposito, Giuseppe; Belli, Mauro

    Cellular response to radiation is often modified by a previous delivery of a small "priming" dose: a smaller amount of damage, defined by the end point being investigated, is observed, and for this reason the effect is called adaptive response. An improved understanding of this effect is essential (as much as for the case of the bystander effect) for a reliable radiation risk assessment when low dose irradiations are involved. Experiments on adaptive response have shown that there are a number of factors that strongly influence the occurrence (and the level) of the adaptation. In particular, priming doses and dose rates have to fall in defined ranges; the same is true for the time interval between the delivery of the small priming dose and the irradiation with the main, larger, dose (called in this case challenging dose). Different hypotheses can be formulated on the main mechanism(s) determining the adaptive response: an increased efficiency of DNA repair, an increased level of antioxidant enzymes, an alteration of cell cycle progression, a chromatin conformation change. An experimental clearcut evidence going definitely in the direction of one of these explanations is not yet available. Modelling can be done at different levels. Simple models, relating the amount of damage, through elementary differential equations, to the dose and dose rate experienced by the cell, are relatively easy to handle, and they can be modified to account for the priming irradiation. However, this can hardly be of decisive help in the explanation of the mechanisms, since each parameter of these models often incorporates in an effective way several cellular processes related to the response to radiation. In this presentation we show our attempts to describe adaptive response with models that explicitly contain, as a dynamical variable, the inducible adaptive agent. 
At a price of a more difficult treatment, this approach is probably more prone to give support to the experimental studies

  14. Colour vision and background adaptation in a passerine bird, the zebra finch (Taeniopygia guttata)

    PubMed Central

    2016-01-01

    Today, there is good knowledge of the physiological basis of bird colour vision and how mathematical models can be used to predict visual thresholds. However, we still know only little about how colour vision changes between different viewing conditions. This limits the understanding of how colour signalling is configured in habitats where the light of the illumination and the background may shift dramatically. I examined how colour discrimination in zebra finch (Taeniopygia guttata) is affected by adaptation to different backgrounds. I trained finches in a two-alternative choice task, to choose between red discs displayed on backgrounds with different colours. I found that discrimination thresholds correlate with stimulus contrast to the background. Thresholds are low, and in agreement with model predictions, for a background with a red colour similar to the discs. For the most contrasting green background, thresholds are about five times higher than this. Subsequently, I trained the finches for the detection of single discs on a grey background. Detection thresholds are about 2.5 to 3 times higher than discrimination thresholds. This study demonstrates close similarities in human and bird colour vision, and the quantitative data offer a new possibility to account for shifting viewing conditions in colour vision models. PMID:27703702

  15. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Q{sub ββ} come from {sup 214}Bi, {sup 228}Th, {sup 42}K, {sup 60}Co and α emitting isotopes in the {sup 226}Ra decay chain, with a fraction depending on the assumed source positions.

  16. Cosmic microwave background probes models of inflation

    NASA Technical Reports Server (NTRS)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large-angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large-angular scales, then the value of DeltaT/T predicted on 1 deg is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  17. Modeling background radiation in Southern Nevada.

    PubMed

    Haber, Daniel A; Burnley, Pamela C; Adcock, Christopher T; Malchow, Russell L; Marsac, Kara E; Hausrath, Elisabeth M

    2017-02-06

    Aerial gamma ray surveys are an important tool for national security, scientific, and industrial interests in determining locations of both anthropogenic and natural sources of radioactivity. There is a relationship between radioactivity and geology and in the past this relationship has been used to predict geology from an aerial survey. The purpose of this project is to develop a method to predict the radiologic exposure rate of the geologic materials by creating a high resolution background model. The intention is for this method to be used in an emergency response scenario where the background radiation environment is unknown. Two study areas in Southern Nevada have been modeled using geologic data, images from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), geochemical data, and pre-existing low resolution aerial surveys from the National Uranium Resource Evaluation (NURE) Survey. Using these data, geospatial areas that are homogenous in terms of K, U, and Th, referred to as background radiation units, are defined and the gamma ray exposure rate is predicted. The prediction is compared to data collected via detailed aerial survey by the Department of Energy's Remote Sensing Lab - Nellis, allowing for the refinement of the technique. By using geologic units to define radiation background units of exposed bedrock and ASTER visualizations to subdivide and define radiation background units within alluvium, successful models have been produced for Government Wash, north of Lake Mead, and for the western shore of Lake Mohave, east of Searchlight, NV.

  18. Model-based target and background characterization

    NASA Astrophysics Data System (ADS)

    Mueller, Markus; Krueger, Wolfgang; Heinze, Norbert

    2000-07-01

    Up to now most approaches of target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms the main goal is the optimization of certain performance parameters. These parameters are measured during test runs while applying one algorithm with one parameter set to images that constitute of image domains with very different domain characteristics (targets and various types of background clutter). Model based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROE (Regions of Expectations) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man- Made Object) detection) use implicit target and/or background models. The detection algorithms of ROIs utilize gradient direction models that have to be matched with transformed image domain data. In most cases simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS (Geographical Information System) data base. Apart from geo-coding the algorithms can be also used for image-to-image registration (multi sensor and data fusion) and may be used for creation and validation of geographical maps.

  19. TIMSS 2011 User Guide for the International Database. Supplement 2: National Adaptations of International Background Questionnaires

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed.

    2013-01-01

    This supplement describes national adaptations made to the international version of the TIMSS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the TIMSS 2011 background variables. Background questionnaire adaptations…

  20. Diffusion Background Model for Moving Objects Detection

    NASA Astrophysics Data System (ADS)

    Vishnyakov, B. V.; Sidyakin, S. V.; Vizilter, Y. V.

    2015-05-01

    In this paper, we propose a new approach for moving objects detection in video surveillance systems. It is based on construction of the regression diffusion maps for the image sequence. This approach is completely different from the state of the art approaches. We show that the motion analysis method, based on diffusion maps, allows objects that move with different speed or even stop for a short while to be uniformly detected. We show that proposed model is comparable to the most popular modern background models. We also show several ways of speeding up diffusion maps algorithm itself.

  1. Influence of background size, luminance and eccentricity on different adaptation mechanisms.

    PubMed

    Gloriani, Alejandro H; Matesanz, Beatriz M; Barrionuevo, Pablo A; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2016-08-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06-110cd/m(2)) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5cd/m(2). In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range.

  2. Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2012-01-01

    Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The technique proposed modifies the classical processing configuration based on the cross-correlation between the reference and primary microphone. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the crosscorrelation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.

  3. Adaptive log-quadratic approach for target detection in nonhomogeneous backgrounds perturbed with speckle fluctuations.

    PubMed

    Magraner, Eric; Bertaux, Nicolas; Réfrégier, Philippe

    2008-12-01

    An approach for point target detection in the presence of speckle fluctuations with nonhomogeneous backgrounds is proposed. This approach is based on an automatic selection between the standard constant background model and a quadratic model for the logarithm of the background values. An improvement of the regulation of the false alarm probability in nonhomogeneous backgrounds is demonstrated.

  4. PIRLS 2011 User Guide for the International Database. Supplement 2: National Adaptations of International Background Questionnaires

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Drucker, Kathleen T., Ed.

    2013-01-01

    This supplement describes national adaptations made to the international version of the PIRLS/prePIRLS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the PIRLS/prePIRLS 2011 background variables. Background…

  5. Background adaptation and water acidification affect pigmentation and stress physiology of tilapia, Oreochromis mossambicus.

    PubMed

    van der Salm, A L; Spanings, F A T; Gresnigt, R; Bonga, S E Wendelaar; Flik, G

    2005-10-01

    The ability to adjust skin darkness to the background is a common phenomenon in fish. The hormone alpha-melanophore-stimulating hormone (alphaMSH) enhances skin darkening. In Mozambique tilapia, Oreochromis mossambicus L., alphaMSH acts as a corticotropic hormone during adaptation to water with a low pH, in addition to its role in skin colouration. In the current study, we investigated the responses of this fish to these two environmental challenges when it is exposed to both simultaneously. The skin darkening of tilapia on a black background and the lightening on grey and white backgrounds are compromised in water with a low pH, indicating that the two vastly different processes both rely on alphaMSH-regulatory mechanisms. If the water is acidified after 25 days of undisturbed background adaptation, fish showed a transient pigmentation change but recovered after two days and continued the adaptation of their skin darkness to match the background. Black backgrounds are experienced by tilapia as more stressful than grey or white backgrounds both in neutral and in low pH water. A decrease of water pH from 7.8 to 4.5 applied over a two-day period was not experienced as stressful when combined with background adaptation, based on unchanged plasma pH and plasma alphaMSH, and Na levels. However, when water pH was lowered after 25 days of undisturbed background adaptation, particularly alphaMSH levels increased chronically. In these fish, plasma pH and Na levels had decreased, indicating a reduced capacity to maintain ion-homeostasis, implicating that the fish indeed experience stress. We conclude that simultaneous exposure to these two types of stressor has a lower impact on the physiology of tilapia than subsequent exposure to the stressors.

  6. Observations and Modeling of Seismic Background Noise

    USGS Publications Warehouse

    Peterson, Jon R.

    1993-01-01

    INTRODUCTION The preparation of this report had two purposes. One was to present a catalog of seismic background noise spectra obtained from a worldwide network of seismograph stations. The other purpose was to refine and document models of seismic background noise that have been in use for several years. The second objective was, in fact, the principal reason that this study was initiated and influenced the procedures used in collecting and processing the data. With a single exception, all of the data used in this study were extracted from the digital data archive at the U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL). This archive dates from 1972 when ASL first began deploying digital seismograph systems and collecting and distributing digital data under the sponsorship of the Defense Advanced Research Projects Agency (DARPA). There have been many changes and additions to the global seismograph networks during the past twenty years, but perhaps none as significant as the current deployment of very broadband seismographs by the U.S. Geological Survey (USGS) and the University of California San Diego (UCSD) under the scientific direction of the IRIS consortium. The new data acquisition systems have extended the bandwidth and resolution of seismic recording, and they utilize high-density recording media that permit the continuous recording of broadband data. The data improvements and continuous recording greatly benefit and simplify surveys of seismic background noise. Although there are many other sources of digital data, the ASL archive data were used almost exclusively because of accessibility and because the data systems and their calibration are well documented for the most part. Fortunately, the ASL archive contains high-quality data from other stations in addition to those deployed by the USGS. Included are data from UCSD IRIS/IDA stations, the Regional Seismic Test Network (RSTN) deployed by Sandia National Laboratories (SNL), and the

  7. Model of aircraft noise adaptation

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Coates, G. D.; Cawthorn, J. M.

    1977-01-01

Development of an aircraft noise adaptation model, which would account for much of the variability in the responses of subjects participating in human-response-to-noise experiments, was studied. A description of the model development is presented. The principal concept of the model was the determination of an aircraft adaptation level, which represents an annoyance calibration for each individual. Results showed a direct correlation between the noise level of the stimuli and annoyance reactions. Attitude-personality variables were found to account for varying annoyance judgements.

  8. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced that directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
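The recursive least-squares indirect adaptive law mentioned above can be sketched in a few lines. This is a generic textbook RLS parameter estimator, not the paper's flight-control implementation; the two-parameter plant, forgetting factor, and function name below are all illustrative assumptions.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.99):
    """One recursive least-squares step for the model y ≈ phi @ theta.

    theta : current parameter estimate (n,)
    P     : covariance-like matrix (n, n)
    phi   : regressor vector (n,)
    y     : scalar measurement
    lam   : forgetting factor (1.0 = no forgetting)
    """
    denom = lam + phi @ P @ phi
    K = P @ phi / denom                      # gain vector
    theta = theta + K * (y - phi @ theta)    # correct estimate by prediction error
    P = (P - np.outer(K, phi @ P)) / lam     # update covariance
    return theta, P

# Identify a hypothetical two-parameter plant y = 1.5*x1 - 0.7*x2 from noisy data
rng = np.random.default_rng(0)
true_theta = np.array([1.5, -0.7])
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.normal(size=2)
    y = phi @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, phi, y)
print(np.round(theta, 2))  # converges close to [ 1.5 -0.7]
```

In the hybrid scheme described, an estimate like `theta` would be fed back into the model inversion controller, while a separate direct adaptive element absorbs residual tracking error.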

  9. ADAPTIVE EYE MODEL - Poster Paper

    NASA Astrophysics Data System (ADS)

    Galetskiy, Sergey O.; Kudryashov, Alexey V.

    2008-01-01

We propose an experimental adaptive eye model based on a flexible 18-electrode bimorph mirror that reproduces human eye aberrations up to the 4th radial order of Zernike polynomials at a frequency of 10 Hz. The accuracy of aberration reproduction is in most cases better than λ/10 RMS. The model is introduced into an aberrometer to compensate human eye aberrations and improve visual acuity testing.

  10. An efficient background modeling approach based on vehicle detection

    NASA Astrophysics Data System (ADS)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

The existing Gaussian mixture model (GMM), which is widely used in vehicle detection, is inefficient at detecting the foreground during the modeling phase, because it needs quite a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C and D), determined by the frequency and scale of vehicle access. For each area, different learning rates for the weight, mean and variance are applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area to decrease the total number of distributions and save memory effectively. With this method, a different threshold value and a different number of Gaussian distributions are adopted for each area. The results show that both the learning speed and the accuracy of the model using the proposed algorithm surpass the traditional GMM. By roughly the 50th frame, interference from vehicle shadows has essentially been eliminated, the number of model distributions is only 35% to 43% of the standard GMM's, and the per-frame processing speed is approximately 20% higher than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, which can promote the development of intelligent transportation and is also instructive for other background modeling methods.
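The region-dependent learning-rate idea can be illustrated with a deliberately simplified single-Gaussian-per-pixel model (the paper uses a full mixture per pixel; the area layout, rates, and threshold below are invented for this sketch):

```python
import numpy as np

def update_background(frame, mean, var, region_alpha, k=2.5):
    """Single-Gaussian-per-pixel simplification of a region-adaptive background model.

    frame        : grayscale frame, float array (H, W)
    mean, var    : per-pixel background mean and variance (H, W)
    region_alpha : per-pixel learning rate, larger in areas where
                   shadows should be absorbed faster (H, W)
    Returns a foreground mask and the updated (mean, var).
    """
    fg = np.abs(frame - mean) > k * np.sqrt(var)   # classify against the model
    a = np.where(fg, 0.0, region_alpha)            # conservative: update background pixels only
    mean = (1 - a) * mean + a * frame              # blend new observation into the mean
    var = (1 - a) * var + a * (frame - mean) ** 2  # and into the variance
    return fg, mean, var

# Toy run: static background at gray level 100, one bright object pixel
H, W = 4, 4
mean, var = np.full((H, W), 100.0), np.full((H, W), 4.0)
alpha = np.full((H, W), 0.05)
alpha[:, 2:] = 0.2                                 # faster learning in a busier "area B"
frame = np.full((H, W), 100.0)
frame[1, 1] = 200.0                                # bright object
fg, mean, var = update_background(frame, mean, var, alpha)
print(fg[1, 1], fg[0, 0])  # True False
```

Raising `region_alpha` in high-traffic areas is what lets shadows get folded into the background model more quickly there, at the cost of absorbing slow-moving objects sooner.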

  11. Dynamics of background adaptation in Xenopus laevis: role of catecholamines and melanophore-stimulating hormone.

    PubMed

    van Zoest, I D; Heijmen, P S; Cruijsen, P M; Jenks, B G

    1989-10-01

    The pars intermedia of the pituitary gland in Xenopus laevis secretes alpha-melanophore-stimulating hormone (alpha-MSH), which causes dispersion of pigment in dermal melanophores in animals on a black background. In the present study we have determined plasma levels of alpha-MSH in animals undergoing adaptation to white and black backgrounds. Plasma values of black-adapted animals were high and decreased rapidly after transfer to a white background, as did the degree of pigment dispersion in dermal melanophores. Plasma MSH values of white-adapted animals were below the detection limit of our radioimmunoassay. Transfer of white animals to a black background resulted in complete dispersion of melanophore pigment within a few hours, but plasma MSH levels remained low for at least 24 hr. This discrepancy between plasma MSH and degree of pigment dispersion suggested the involvement of an additional factor for stimulating dispersion. Results of in vitro and in vivo experiments with receptor agonists and antagonists indicated that a beta-adrenergic mechanism, functioning at the level of the melanophore, is involved in the stimulation of pigment dispersion during the early stages of background adaptation.

  12. Background model for the Majorana Demonstrator

    DOE PAGES

    Cuesta, C.; Abgrall, N.; Aguayo, E.; ...

    2015-01-01

The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  13. Background model for the Majorana Demonstrator

    SciTech Connect

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, III, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y -D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V.; Gusev, K.; Hallin, A.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, W. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. K.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C. -H.; Yumatov, V.

    2015-01-01

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  14. An Adapted Dialogic Reading Program for Turkish Kindergarteners from Low Socio-Economic Backgrounds

    ERIC Educational Resources Information Center

    Ergül, Cevriye; Akoglu, Gözde; Sarica, Ayse D.; Karaman, Gökçe; Tufan, Mümin; Bahap-Kudret, Zeynep; Zülfikar, Deniz

    2016-01-01

    The study aimed to examine the effectiveness of the Adapted Dialogic Reading Program (ADR) on the language and early literacy skills of Turkish kindergarteners from low socio-economic (SES) backgrounds. The effectiveness of ADR was investigated across six different treatment conditions including classroom and home based implementations in various…

  15. Background Model for the Majorana Demonstrator

    SciTech Connect

    Cuesta, C.; Abgrall, N.; Aguayo, Estanislao; Avignone, Frank T.; Barabash, Alexander S.; Bertrand, F.; Boswell, M.; Brudanin, V.; Busch, Matthew; Byram, D.; Caldwell, A. S.; Chan, Yuen-Dat; Christofferson, Cabot-Ann; Combs, Dustin C.; Detwiler, Jason A.; Doe, Peter J.; Efremenko, Yuri; Egorov, Viatcheslav; Ejiri, H.; Elliott, S. R.; Fast, James E.; Finnerty, P.; Fraenkle, Florian; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, Vincente; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, Reyco; Hoppe, Eric W.; Howard, Stanley; Howe, M. A.; Keeter, K.; Kidd, M. F.; Kochetov, Oleg; Konovalov, S.; Kouzes, Richard T.; Laferriere, Brian D.; Leon, Jonathan D.; Leviner, L.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Nomachi, Masaharu; Orrell, John L.; O'Shaughnessy, C.; Overman, Nicole R.; Phillips, D.; Poon, Alan; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, Keith; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, Alexis G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, Kyle J.; Snyder, N.; Suriano, Anne-Marie; Thompson, J.; Timkin, V.; Tornow, Werner; Trimble, J. E.; Varner, R. L.; Vasilyev, Sergey; Vetter, Kai; Vorren, Kris R.; White, Brandon R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A.; Yu, Chang-Hong; Yumatov, Vladimir

    2015-06-01

The Majorana Collaboration is constructing a prototype system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment to search for neutrinoless double-beta (0νββ) decay in 76Ge. In view of the requirement that the next generation of tonne-scale Ge-based 0νββ-decay experiments be capable of probing the neutrino mass scale in the inverted-hierarchy region, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using Geant4 simulations of the different background components whose purity levels are constrained from radioassay measurements.

  16. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds.

    PubMed

    Hillis, James M; Brainard, David H

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.
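The kind of adaptation model described, a saturating response with multiplicative gain control and a subtractive term, can be sketched numerically. The particular Naka-Rushton-style saturating form and all parameter values below are assumptions for illustration, not the paper's fitted model:

```python
import numpy as np

def response(s, gain, sub, semi=1.0, n=2.0):
    """Illustrative saturating visual response: the stimulus s is first
    adapted by a subtractive term and a multiplicative gain, then passed
    through a Naka-Rushton-style compressive nonlinearity."""
    d = np.maximum(gain * (s - sub), 0.0)
    return d**n / (d**n + semi**n)

# Under a Fechner-style linking hypothesis, a test on background A matches a
# test on background B when the two (adapted) responses are equal.
sA = 2.0
rA = response(sA, gain=1.0, sub=0.5)          # response on background A
grid = np.linspace(0.0, 10.0, 100001)
# Stimulus on background B (different adaptation state) producing the same response
sB = grid[np.argmin(np.abs(response(grid, gain=0.6, sub=1.2) - rA))]
print(round(sB, 2))  # 3.7
```

The matching stimulus satisfies gain_B·(sB − sub_B) = gain_A·(sA − sub_A), which is exactly the structure used to test whether one adaptation mechanism can account for both appearance matches and discrimination thresholds.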

  17. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.

  18. Chromo-natural model in anisotropic background

    SciTech Connect

    Maleknejad, Azadeh; Erfani, Encieh E-mail: eerfani@ipm.ir

    2014-03-01

In this work we study the chromo-natural inflation model in an anisotropic setup. Initiating inflation from Bianchi type-I cosmology, we analyze the system thoroughly during slow-roll inflation, from both analytical and numerical points of view. We show that isotropic FRW inflation is an attractor of the system. In other words, anisotropies are damped within a few e-folds and the chromo-natural model respects the cosmic no-hair conjecture. Furthermore, we demonstrate that in the slow-roll limit, the anisotropies in both the chromo-natural and gauge-flation models share the same dynamics.

  19. Background Error Correlation Modeling with Diffusion Operators

    DTIC Science & Technology

    2013-01-01

In this chapter, a general procedure for constructing a BEC model as a rational function of the diffusion operator D is presented and analytic expressions for the... Under the assumption of local homogeneity of D, a heuristic method for computing the diagonal elements of B is proposed. It is shown that the...

  20. Solar Parameters for Modeling the Interplanetary Background

    NASA Astrophysics Data System (ADS)

    Bzowski, Maciej; Sokół, Justyna M.; Tokumaru, Munetoshi; Fujiki, Kenichi; Quémerais, Eric; Lallement, Rosine; Ferron, Stéphane; Bochsler, Peter; McComas, David J.

The goal of the working group on cross-calibration of past and present ultraviolet (UV) datasets of the International Space Science Institute (ISSI) in Bern, Switzerland, was to establish a photometric cross-calibration of various UV and extreme ultraviolet (EUV) heliospheric observations. Realization of this goal required a credible and up-to-date model of the spatial distribution of neutral interstellar hydrogen in the heliosphere, and to that end, a credible model of the radiation pressure and ionization processes was needed. This chapter describes the latter part of the project: the solar factors responsible for shaping the distribution of neutral interstellar H in the heliosphere. In this paper we present the solar Lyman-α flux and discuss the solar Lyman-α resonant radiation pressure force acting on neutral H atoms in the heliosphere. We also discuss solar EUV radiation and the resulting photoionization of heliospheric hydrogen, along with their evolution in time and their still-hypothetical variation with heliolatitude. Furthermore, the solar wind and its evolution with solar activity are presented, mostly in the context of charge-exchange ionization of heliospheric neutral hydrogen and dynamic pressure variations. Electron-impact ionization of neutral heliospheric hydrogen and its variation with time, heliolatitude, and solar distance are also discussed. After a review of the state of the art in all of those topics, we proceed to present an interim model of the solar wind and the other solar factors based on up-to-date in situ and remote sensing observations. This model was used by Izmodenov et al. (2013, this volume) to calculate the distribution of heliospheric hydrogen, which in turn was the basis for intercalibrating the heliospheric UV and EUV measurements discussed in Quémerais et al. (2013, this volume). Results of this joint effort will also be used to improve the model of the solar wind evolution, which will be an invaluable asset in interpretation of

  1. Hydraulically interconnected vehicle suspension: background and modelling

    NASA Astrophysics Data System (ADS)

    Zhang, Nong; Smith, Wade A.; Jeyakumaran, Jeku

    2010-01-01

This paper presents a novel approach for the frequency domain analysis of a vehicle fitted with a general hydraulically interconnected suspension (HIS) system. Ideally, interconnected suspensions have the capability, unique among passive systems, to provide stiffness and damping characteristics dependent on the all-wheel suspension mode in operation. A basic, lumped-mass, four-degree-of-freedom half-car model is used to illustrate the proposed methodology. The mechanical-fluid boundary condition in the double-acting cylinders is modelled as an external force on the mechanical system and a moving boundary on the fluid system. The fluid system itself is modelled using the hydraulic impedance method, in which the relationships between the dynamic fluid states, i.e. pressures and flows, at the extremities of a single fluid circuit are determined by the transfer matrix method. A set of coupled, frequency-dependent equations, which govern the dynamics of the integrated half-car system, are then derived and the application of these equations to both free and forced vibration analysis is explained. The fluid system impedance matrix for the two general wheel-pair interconnection types (anti-synchronous and anti-oppositional) is also given. To further outline the application of the proposed methodology, the paper finishes with an example using a typical anti-roll HIS system. The integrated half-car system's free vibration solutions and frequency response functions are then obtained and discussed in some detail. The presented approach provides a scientific basis for investigating the dynamic characteristics of HIS-equipped vehicles, and the results offer further confirmation that interconnected suspension schemes can provide, at least to some extent, individual control of modal stiffness and damping characteristics.

  2. Background Models for Muons and Neutrons Underground

    SciTech Connect

    Formaggio, Joseph A.

    2005-09-08

    Cosmogenic-induced activity is an issue of great concern for many sensitive experiments sited underground. A variety of different arch-type experiments - such as those geared toward the detection of dark matter, neutrinoless double beta decay and solar neutrinos - have reached levels of cleanliness and sensitivity that warrant careful consideration of secondary activity induced by cosmic rays. This paper reviews some of the main issues associated with the modeling of cosmogenic activity underground. Comparison with data, when such data is available, is also presented.

  3. Evaluation of active appearance models in varying background conditions

    NASA Astrophysics Data System (ADS)

    Kowalski, Marek; Naruniec, Jacek

    2013-10-01

In this paper we present an evaluation of chosen versions of Active Appearance Models (AAM) under varying background conditions. The algorithms were tested on a subset of the CMU PIE database and chosen background images. Our experiments show that the accuracy of these methods is strongly correlated with the background used, with success rates differing by up to 50%.

  4. An analog retina model for detecting dim moving objects against a bright moving background

    NASA Technical Reports Server (NTRS)

    Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.

    1991-01-01

    We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.

  5. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time-domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral and cross-spectral matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
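Time-domain ANC of this sort is commonly realized with an LMS adaptive filter that predicts the coupled noise at the primary sensor from the reference channel and subtracts it. The following is a textbook sketch, with the filter length, step size, and simulated coupling path all assumed rather than taken from the experiment:

```python
import numpy as np

def lms_cancel(primary, reference, taps=16, mu=0.002):
    """Time-domain LMS adaptive noise cancellation (textbook form).

    primary   : desired signal + coupled background noise
    reference : background-noise-only channel
    Returns the error signal, i.e. the cleaned primary channel.
    """
    w = np.zeros(taps)
    out = np.zeros(len(primary))
    for i in range(taps - 1, len(primary)):
        x = reference[i - taps + 1:i + 1][::-1]  # reference tap vector (newest first)
        y = w @ x                                # estimate of coupled noise
        e = primary[i] - y                       # cleaned sample
        w += 2 * mu * e * x                      # LMS weight update
        out[i] = e
    return out

rng = np.random.default_rng(1)
n = 20000
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 50)                   # desired tone at the array
noise = rng.normal(size=n)                            # reference channel
coupled = np.convolve(noise, [0.8, 0.4, 0.2])[:n]     # causal coupling path to the array
cleaned = lms_cancel(signal + coupled, noise)
# Residual noise power after convergence is far below the coupled-noise power
print(np.var(coupled[n // 2:]), np.var((cleaned - signal)[n // 2:]))
```

Because the tone is uncorrelated with the reference noise, the filter converges to cancel only the coupled noise, leaving the tone in the error signal.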

  6. Adaptive Urban Dispersion Integrated Model

    SciTech Connect

    Wissink, A; Chand, K; Kosovic, B; Chan, S; Berger, M; Chow, F K

    2005-11-03

Numerical simulations represent a unique predictive tool for understanding the three-dimensional flow fields and associated concentration distributions from contaminant releases in complex urban settings (Britter and Hanna 2003). Utilization of the most accurate urban models, based on fully three-dimensional computational fluid dynamics (CFD) that solve the Navier-Stokes equations with incorporated turbulence models, presents many challenges. We address two in this work: first, a fast but accurate way to incorporate the complex urban terrain, buildings, and other structures to enforce proper boundary conditions in the flow solution; second, ways to achieve a level of computational efficiency that allows the models to be run in an automated fashion such that they may be used for emergency response and event reconstruction applications. We have developed a new integrated urban dispersion modeling capability based on FEM3MP (Gresho and Chan 1998, Chan and Stevens 2000), a CFD model from Lawrence Livermore National Lab. The integrated capability incorporates fast embedded boundary mesh generation for geometrically complex problems and full three-dimensional Cartesian adaptive mesh refinement (AMR). Parallel AMR and embedded boundary gridding support are provided through the SAMRAI library (Wissink et al. 2001, Hornung and Kohn 2002). Embedded boundary mesh generation has been demonstrated to be an automatic, fast, and efficient approach for problem setup. It has been used for a variety of geometrically complex applications, including urban applications (Pullen et al. 2005). The key technology we introduce in this work is the application of AMR, which allows the application of high-resolution modeling to certain important features, such as individual buildings and high-resolution terrain (including important vegetative and land-use features). It also allows the urban scale model to be readily interfaced with coarser-resolution meso- or regional-scale models. This talk

  7. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  8. Background first- and second-order modeling for point target detection.

    PubMed

    Genin, Laure; Champagnat, Frédéric; Le Besnerais, Guy

    2012-11-01

    This paper deals with point target detection in nonstationary backgrounds such as cloud scenes in aerial or satellite imaging. We propose an original spatial detection method based on first- and second-order modeling (i.e., mean and covariance) of local background statistics. We first show that state-of-the-art nonlocal denoising methods can be adapted with minimal effort to yield edge-preserving background mean estimates. These mean estimates lead to very efficient background suppression (BS) detection. However, we propose that BS be followed by a matched filter based on an estimate of the local spatial covariance matrix. The identification of these matrices derives from a robust classification of pixels in classes with homogeneous second-order statistics based on a Gaussian mixture model. The efficiency of the proposed approaches is demonstrated by evaluation on two cloudy sky background databases.
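The first-order (mean-estimate) stage of such a detector can be sketched as follows. A median filter stands in for the paper's edge-preserving nonlocal-denoising mean estimate, the window size, threshold, and synthetic scene are assumptions, and the second-order (covariance matched-filter) stage is omitted for brevity:

```python
import numpy as np
from scipy.ndimage import median_filter

def point_target_bs(img, size=5, k=5.0):
    """Background suppression (BS) detection sketch for point targets.

    A robust local smoother estimates the (possibly nonstationary)
    background mean; the residual is thresholded against a robust
    noise-scale estimate.
    """
    bg_mean = median_filter(img, size=size)       # local background mean estimate
    resid = img - bg_mean                         # background-suppressed image
    sigma = 1.4826 * np.median(np.abs(resid))     # robust noise scale (MAD)
    return resid > k * sigma

rng = np.random.default_rng(3)
img = rng.normal(0.0, 1.0, (64, 64))
img += np.linspace(0, 10, 64)[None, :]            # smooth nonstationary background
img[32, 32] += 12.0                               # dim point target
mask = point_target_bs(img)
print(bool(mask[32, 32]))  # True
```

The median filter, unlike a box mean, is barely perturbed by the single-pixel target itself, so the target survives subtraction at nearly full amplitude while the smooth background gradient is removed.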

  9. Wind profiling for a coherent wind Doppler lidar by an auto-adaptive background subtraction approach.

    PubMed

    Wu, Yanwei; Guo, Pan; Chen, Siying; Chen, He; Zhang, Yinchao

    2017-04-01

Auto-adaptive background subtraction (AABS) is proposed as a denoising method for data processing of coherent Doppler lidar (CDL). The method is designed specifically for the low-signal-to-noise-ratio regime, in which the power spectral density of the CDL data drifts. Unlike the periodogram maximum (PM) and adaptive iteratively reweighted penalized least squares (airPLS) methods, the proposed method presents reliable peaks and is thus advantageous in identifying peak locations. According to the analysis of simulated and measured data, the proposed method outperforms the airPLS and PM methods in furthest detectable range, improving the detection range by approximately 16.7% and 40%, respectively. It also yields smaller mean wind velocity and standard error values than the airPLS and PM methods. The AABS approach improves the quality of Doppler shift estimates and can be applied to obtain full wind profiles with the CDL.

  10. Target detection using the background model from the topological anomaly detection algorithm

    NASA Astrophysics Data System (ADS)

    Dorado Munoz, Leidy P.; Messinger, David W.; Ziemann, Amanda K.

    2013-05-01

    The Topological Anomaly Detection (TAD) algorithm has been used as an anomaly detector in hyperspectral and multispectral images. TAD is an algorithm based on graph theory that constructs a topological model of the background in a scene and computes an anomalousness ranking for all of the pixels in the image with respect to the background, in order to identify pixels with uncommon or strange spectral signatures. The pixels that are modeled as background are clustered into groups, or connected components, which could be representative of spectral signatures of materials present in the background. Therefore, the idea of using the background components given by TAD in target detection is explored in this paper. These connected components are characterized in three different approaches: the mean signature and endmembers for each component are calculated and used as background basis vectors in Orthogonal Subspace Projection (OSP) and Adaptive Subspace Detector (ASD), and the covariance matrix of those connected components is estimated and used in the detectors Constrained Energy Minimization (CEM) and Adaptive Coherence Estimator (ACE). The performance of these approaches and the different detectors is compared with a global approach, in which the background characterization is derived directly from the image. Experiments and results using the self-test data set provided as part of the RIT blind-test target detection project are shown.
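
    A minimal sketch of the ACE detector driven by a covariance estimated from one background component (the toy 4-band data and the target signature below are assumptions, not values from the paper):

```python
import numpy as np

def ace_detector(x, s, cov_inv):
    """Adaptive Coherence Estimator score for pixel spectrum x,
    target signature s, and inverse background covariance."""
    num = (s @ cov_inv @ x) ** 2
    den = (s @ cov_inv @ s) * (x @ cov_inv @ x)
    return num / den

# estimate the background covariance from pixels of one background component
rng = np.random.default_rng(0)
bg_pixels = rng.normal(size=(500, 4))        # toy 4-band background cluster
cov = np.cov(bg_pixels, rowvar=False)
cov_inv = np.linalg.inv(cov)

s = np.array([1.0, 0.8, 0.2, 0.1])           # hypothetical target signature
print(ace_detector(3.0 * s, s, cov_inv))     # scaled target: score is exactly 1.0
print(ace_detector(bg_pixels[0], s, cov_inv))  # background pixel: lower score
```

The ACE score is invariant to the overall scaling of the pixel spectrum, which is why the scaled target still scores 1.0.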

  11. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion-based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  12. Image Discrimination Models Predict Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single-channel model with a contrast sensitivity function filter, and the Root-Mean-Square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single-channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
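
    The RMS-difference predictor is simple enough to sketch directly (the image sizes, the vehicle-shaped increment, and the mixing proportion below are hypothetical):

```python
import numpy as np

def rms_difference(target_img, background_img):
    """RMS difference between the target-present and background-only images."""
    diff = np.asarray(target_img, float) - np.asarray(background_img, float)
    return np.sqrt(np.mean(diff ** 2))

# a vehicle-shaped patch mixed into the background in proportion alpha
bg = np.full((8, 8), 50.0)
vehicle = np.zeros((8, 8))
vehicle[3:5, 2:6] = 10.0            # hypothetical target increment
alpha = 0.4
mixed = bg + alpha * vehicle        # partial mixing of the target
print(rms_difference(mixed, bg))    # scales linearly with alpha
```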

  13. Quantifying the CV: Adapting an Impact Assessment Model to Astronomy

    NASA Astrophysics Data System (ADS)

    Bohémier, K. A.

    2015-04-01

    We present the process and results of applying the Becker Model to the curriculum vitae of a Yale University astronomy professor. As background, in July 2013, the Becker Medical Library at Washington Univ. in St. Louis held a workshop for librarians on the Becker Model, a framework developed by research assessment librarians for quantifying medical researchers' individual and group outputs. Following the workshop, the model was analyzed for content to adapt it to the physical sciences.

  14. Optimised spectral merge of the background model in seismic inversion

    NASA Astrophysics Data System (ADS)

    White, Roy; Zabihi Naeini, Ehsan

    2017-01-01

    The inversion of seismic reflection data to absolute impedance generates low-frequency deviations around the true impedance if the frequency content of the background impedance model does not merge seamlessly into the spectrum of the inverted seismic data. We present a systematic method of selecting a background model that minimises the mismatch between the background model and the relative impedance obtained by inverting the seismic data at wells. At each well a set of well-log relative impedances is formed by passing the impedance log through a set of zero-phase high-pass filters. The corresponding background models are constructed by passing the impedance log through the complementary zero-phase low-pass filters, and a set of seismic relative impedances is computed by inverting the seismic data using these background models. If the inverted seismic data are to merge perfectly with the background model, they should correspond at the well to the well-log relative impedance. This correspondence is the basis of a procedure for finding the optimum combination of background model and inverted seismic data. It is difficult to predict the low-frequency content of inverted seismic data. These low frequencies are affected by the uncertainties in (1) measuring the low-frequency response of the seismic wavelet and (2) knowing how inversion protects the signal-to-noise ratio at low frequencies. Uncertainty (1) becomes acute for broadband seismic data; the low-frequency phase is especially difficult to estimate. Moreover, we show that a mismatch of low-frequency phase is a serious source of inversion artefacts. We also show that relative impedance can estimate the low-frequency phase where a well tie cannot. Consequently, we include a low-frequency phase shift, applied to the seismic relative impedances, in the search for the best spectral merge. The background models are specified by a low-cut corner frequency and the phase shifts by a phase intercept at zero frequency.
A scan of
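
    The complementary zero-phase split of an impedance log can be sketched with a brick-wall FFT filter standing in for the paper's filter set (the corner frequency and the toy log are assumptions for illustration):

```python
import numpy as np

def zero_phase_split(signal, corner_bin):
    """Split a log into complementary zero-phase low-pass and high-pass parts."""
    spec = np.fft.rfft(signal)
    low_spec = spec.copy()
    low_spec[corner_bin:] = 0.0     # zero-phase low-pass: keep bins below the corner
    low = np.fft.irfft(low_spec, n=len(signal))
    high = signal - low             # complementary high-pass
    return low, high

# toy impedance log: slow trend plus reflectivity-like detail
n = 256
t = np.arange(n)
log_imp = 1.0 + 0.002 * t + 0.05 * np.sin(2 * np.pi * t / 16)
background, relative = zero_phase_split(log_imp, corner_bin=8)
assert np.allclose(background + relative, log_imp)  # perfect reconstruction
```

Because the filters are complementary by construction, the background model and the relative impedance always sum back to the original log, which is the property the optimisation exploits.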

  15. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture. Simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we presented a state-predictor-based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. Efficiency of the design was demonstrated using the short-period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.

  16. A diversified portfolio model of adaptability.

    PubMed

    Chandra, Siddharth; Leong, Frederick T L

    2016-12-01

    A new model of adaptability, the diversified portfolio model (DPM) of adaptability, is introduced. In the 1950s, Markowitz developed the financial portfolio model by demonstrating that investors could optimize the ratio of risk and return on their portfolios through risk diversification. The DPM integrates attractive features of a variety of models of adaptability, including Linville's self-complexity model, the risk and resilience model, and Bandura's social cognitive theory. The DPM draws on the concept of portfolio diversification, positing that diversified investment in multiple life experiences, life roles, and relationships promotes positive adaptation to life's challenges. The DPM provides a new integrative model of adaptability across the biopsychosocial levels of functioning. More importantly, the DPM addresses a gap in the literature by illuminating the antecedents of adaptive processes studied in a broad array of psychological models. The DPM is described in relation to the biopsychosocial model and propositions are offered regarding its utility in increasing adaptiveness. Recommendations for future research are also offered.

  17. Adaptive Modeling of the International Space Station Electrical Power System

    NASA Technical Reports Server (NTRS)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions address the impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  18. Modeling surface backgrounds from radon progeny plate-out

    SciTech Connect

    Perumpilly, G.; Guiseppe, V. E.; Snyder, N.

    2013-08-08

    The next generation of low-background detectors operating deep underground aims for unprecedented low levels of radioactive backgrounds. The surface deposition and subsequent implantation of radon progeny in detector materials will be a source of energetic background events. We investigate Monte Carlo and model-based simulations to understand the surface implantation profile of radon progeny. Depending on the material and the region of interest of a rare-event search, the partial energy depositions from these surface decays can be problematic. Motivated by the use of Ge crystals for the detection of neutrinoless double-beta decay, we wish to understand the detector response to surface backgrounds from radon progeny. We look at the simulation of surface decays using a validated implantation distribution based on nuclear recoils and a realistic surface texture. Results of the simulations and measured α spectra are presented.

  19. Gravitoinertial force background level affects adaptation to coriolis force perturbations of reaching movements.

    PubMed

    Lackner, J R; Dizio, P

    1998-08-01

    We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.

  20. Gravitoinertial force background level affects adaptation to coriolis force perturbations of reaching movements

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Dizio, P.

    1998-01-01

    We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.

  1. On Fractional Model Reference Adaptive Control

    PubMed Central

    Shi, Bao; Dong, Chao

    2014-01-01

    This paper extends the conventional Model Reference Adaptive Control systems to fractional ones based on the theory of fractional calculus. A control law and an incommensurate fractional adaptation law are designed for the fractional plant and the fractional reference model. The stability and tracking convergence are analyzed using the frequency distributed fractional integrator model and Lyapunov theory. Moreover, numerical simulations of both linear and nonlinear systems are performed to exhibit the viability and effectiveness of the proposed methodology. PMID:24574897

  2. Cosmic microwave background observables of small field models of inflation

    SciTech Connect

    Ben-Dayan, Ido; Brustein, Ram

    2010-09-01

    We construct a class of single small field models of inflation that can predict, contrary to popular wisdom, an observable gravitational wave signal in the cosmic microwave background anisotropies. The spectral index, its running, the tensor to scalar ratio and the number of e-folds can cover all the parameter space currently allowed by cosmological observations. A unique feature of models in this class is their ability to predict a negative spectral index running in accordance with recent cosmic microwave background observations. We discuss the new class of models from an effective field theory perspective and show that if the dimensionless trilinear coupling is small, as required for consistency, then the observed spectral index running implies a high scale of inflation and hence an observable gravitational wave signal. All the models share a distinct prediction of higher power at smaller scales, making them easy targets for detection.

  3. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Almond, Russell G.; Mislevy, Robert J.

    1999-01-01

    Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)

  4. Linear model for fast background subtraction in oligonucleotide microarrays

    PubMed Central

    2009-01-01

    Background: One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. Results: We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. Conclusion: The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry. PMID:19917117
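
    The core idea, a cost quadratic in the coefficients minimized by solving a linear system, can be sketched as follows (a generic feature matrix stands in for the paper's neighbor-intensity and nearest-neighbor affinity terms, and all numbers are invented):

```python
import numpy as np

def fit_linear_background(features, intensities):
    """Least-squares fit of background as a linear model in the features.

    Because the cost is quadratic in the coefficients, the minimizer is
    the solution of a linear system (computed here via lstsq).
    """
    coef, *_ = np.linalg.lstsq(features, intensities, rcond=None)
    return coef

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))            # stand-in for neighbor/affinity features
true_w = np.array([0.5, -1.2, 2.0])      # synthetic ground-truth coefficients
y = X @ true_w + 0.01 * rng.normal(size=200)
w = fit_linear_background(X, y)
print(w)                                 # close to true_w
```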

  5. Adolescents with Turkish background in Norway and Sweden: a comparative study of their psychological adaptation.

    PubMed

    Virta, Erkki; Sam, David L; Westin, Charles

    2004-02-01

    Using a questionnaire survey, this study compared the psychological adaptation (self-esteem, life satisfaction, and mental health problems) of Turkish adolescents in Norway and Sweden, and examined to what extent ethnic and majority identities, acculturation strategies, and perceived discrimination accounted for adaptation among Turkish adolescents. The samples consisted of 407 Turks (111 in Norway and 296 in Sweden) with a mean age of 15.2 years and 433 host adolescents (207 in Norway, 226 in Sweden) with a mean age of 15.6 years. Turks in Norway reported poorer psychological adaptation than Turks in Sweden. Predictors of good adaptation were Turkish identity and integration, whereas poor adaptation was related to marginalization and perceived discrimination. The results indicated that the poorer adaptation of Turks in Norway compared to that of Turks in Sweden could be due to a lower degree of Turkish identity and a higher degree of perceived discrimination.

  6. User Modeling in Adaptive Hypermedia Educational Systems

    ERIC Educational Resources Information Center

    Martins, Antonio Constantino; Faria, Luiz; Vaz de Carvalho, Carlos; Carrapatoso, Eurico

    2008-01-01

    This document is a survey of the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: to define what a User Model is; to present existing and well-known User Models; to analyze the existing standards related to UM; and to compare existing systems. In the scientific area of User Modeling…

  7. Changes in formaldehyde-induced fluorescence of the hypothalamus and pars intermedia in the frog, Rana temporaria, following background adaptation.

    PubMed

    Prasada Rao, P D

    1982-01-01

    Adaptation of the frog, Rana temporaria, to a white background for 12 hr resulted in an intense formaldehyde-induced fluorescence (FIF) in the neurons of the preoptic recess organ (PRO), paraventricular organ (PVO), and nucleus infundibularis dorsalis (NID) and in their basal processes, permitting visualization of the PRO- and PVO-hypophysial tracts that extend into the median eminence (ME) and pars intermedia (PI); the FIF was reduced in all these structures by 3 days. In frogs adapted to a black background for 12 hr or 3 days, there was a general reduction in the FIF of the PRO neurons and the PRO-hypophysial tract. After 12 hr of black-background adaptation, the PVO/NID neurons and only their adjacent basal processes showed FIF, which was sharply reduced by 3 days, making the PVO-hypophysial tract undetectable. In the PI fibers the fluorescence was more intense in black-adapted frogs than in white-adapted ones at both intervals studied. The simultaneous changes in the FIF of the hypothalamic nuclei, tracts, and PI suggest that the PRO and PVO/NID neurons participate in PI control through release of neurotransmitter(s) at the axonal ends.

  8. On background-independent renormalization of spin foam models

    NASA Astrophysics Data System (ADS)

    Bahr, Benjamin

    2017-04-01

    In this article we discuss an implementation of renormalization group ideas to spin foam models, where there is no a priori length scale with which to define the flow. In the context of the continuum limit of these models, we show how the notion of cylindrical consistency of path integral measures gives a natural analogue of Wilson’s RG flow equations for background-independent systems. We discuss the conditions for the continuum measures to be diffeomorphism-invariant, and consider both exact and approximate examples.

  9. A heuristic model of sensory adaptation.

    PubMed

    McBurney, Donald H; Balaban, Carey D

    2009-11-01

    Adaptation is a universal process in organisms as diverse as bacteria and humans, and across the various senses. This article proposes a simple, heuristic, mathematical model containing tonic and phasic processes. The model demonstrates properties not commonly associated with adaptation, such as increased sensitivity to changes, range shifting, and phase lead. Changes in only four parameters permit the model to predict empirical psychophysical data from different senses. The relatively prolonged time courses of responses to oral and topical capsaicin are used to illustrate and validate this mathematical modeling approach for different stimulus profiles. Other examples of phenomena elucidated by this modeling approach include the time courses of taste sensation, brightness perception, loudness perception, cross-adaptation to oral irritants, and cutaneous mechanoreception. It also predicts such apparently unrelated phenomena as perceived alcohol intoxication, habituation, and drug tolerance. Because the integration of phasic and tonic components is a conservative, highly efficacious solution to a ubiquitous biological challenge, sensory adaptation is seen as an evolutionary adaptation, and as a prominent feature of Mother Nature's small bag of tricks.
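
    The tonic and phasic components can be sketched as a simple discrete-time model (the update rule and parameter values below are assumed for illustration; the paper's exact formulation may differ):

```python
def tonic_phasic_response(stimulus, k_tonic=0.3, k_phasic=1.0, tau=5.0):
    """Heuristic tonic + phasic adaptation model (assumed discrete-time form).

    An internal adaptation state slowly tracks the stimulus with time
    constant tau; the phasic term responds to changes (stimulus minus
    state), the tonic term to the stimulus itself.
    """
    state = 0.0
    out = []
    for s in stimulus:
        out.append(k_tonic * s + k_phasic * (s - state))
        state += (s - state) / tau   # slow tracking produces adaptation
    return out

# step stimulus: the response overshoots at onset, then decays toward
# the sustained tonic level, illustrating increased sensitivity to changes
resp = tonic_phasic_response([0.0] * 5 + [1.0] * 30)
print(resp[5], resp[-1])
```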

  10. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  11. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  12. Interval-Valued Model Level Fuzzy Aggregation-Based Background Subtraction.

    PubMed

    Chiranjeevi, Pojala; Sengupta, Somnath

    2016-07-29

    In a recent work, the effectiveness of neighborhood-supported, model-level fuzzy aggregation was shown under dynamic background conditions. The multi-feature fuzzy aggregation used in that approach uses real fuzzy similarity values and is robust for low- and medium-scale dynamic background conditions such as swaying vegetation, sprinkling water, etc. The technique, however, exhibited some limitations under heavily dynamic background conditions, as features have high uncertainty under such noisy conditions and these uncertainties were not captured by real fuzzy similarity values. Our proposed algorithm is particularly focused on improving the detection under heavy dynamic background conditions by modeling the uncertainties in the data with an interval-valued fuzzy set. In this paper, real-valued fuzzy aggregation has been extended to interval-valued fuzzy aggregation by considering uncertainties over the real similarity values. We build up a procedure to calculate the uncertainty, which varies for each feature, at each pixel, and at each time instant. We adaptively determine membership values at each pixel by a Gaussian of the uncertainty value instead of the fixed membership values used in recent fuzzy approaches, thereby giving importance to a feature based on its uncertainty. An interval-valued Choquet integral is evaluated using the interval similarity values and the membership values in order to calculate the interval-valued fuzzy similarity between the model and the current frame. Adequate qualitative and quantitative studies are carried out to illustrate the effectiveness of the proposed method in mitigating heavily dynamic background situations compared to the state-of-the-art.
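
    A sketch of the interval-valued idea, widening each real similarity by its per-feature uncertainty (a weighted mean stands in here for the paper's interval-valued Choquet integral, and all numbers are invented):

```python
def interval_similarity(sim, uncertainty):
    """Widen a real similarity value into an interval by its uncertainty."""
    return (max(0.0, sim - uncertainty), min(1.0, sim + uncertainty))

def aggregate_intervals(intervals, weights):
    """Aggregate per-feature interval similarities (a weighted mean is used
    as a simple stand-in for the interval-valued Choquet integral)."""
    total = sum(weights)
    lo = sum(w * iv[0] for w, iv in zip(weights, intervals)) / total
    hi = sum(w * iv[1] for w, iv in zip(weights, intervals)) / total
    return lo, hi

# per-feature similarities between background model and current pixel,
# each with its own estimated uncertainty
sims = [0.8, 0.6, 0.9]
uncs = [0.05, 0.20, 0.10]
intervals = [interval_similarity(s, u) for s, u in zip(sims, uncs)]
lo, hi = aggregate_intervals(intervals, weights=[1.0, 1.0, 1.0])
print(lo, hi)   # a wide interval signals an uncertain (noisy) pixel
```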

  13. Climatic influence of background and volcanic stratosphere aerosol models

    NASA Technical Reports Server (NTRS)

    Deschamps, P. Y.; Herman, M.; Lenoble, J.; Tanre, D.

    1982-01-01

    A simple model of the earth-atmosphere system including tropospheric and stratospheric aerosols has been derived and tested. Analytical expressions are obtained for the albedo variation due to a thin stratospheric aerosol layer. Also outlined are the physical processes involved and the respective influence of the main parameters: aerosol optical thickness, single-scattering albedo and asymmetry factor, and sublayer albedo. The method is applied to compute the variation of the zonal and planetary albedos due to a stratospheric layer of background H2SO4 particles and of volcanic ash.

  14. Testing coupled dark energy models with their cosmological background evolution

    NASA Astrophysics Data System (ADS)

    van de Bruck, Carsten; Mifsud, Jurgen; Morrice, Jack

    2017-02-01

    We consider a cosmology in which dark matter and a quintessence scalar field responsible for the acceleration of the Universe are allowed to interact. Allowing for both conformal and disformal couplings, we perform a global analysis of the constraints on our model using Hubble parameter measurements, baryon acoustic oscillation distance measurements, and a Supernovae Type Ia data set. We find that the additional disformal coupling relaxes the conformal coupling constraints. Moreover, we show that, at the background level, a disformal interaction within the dark sector is preferred to both ΛCDM and uncoupled quintessence, hence favoring interacting dark energy.

  15. Adaptive approximation models in optimization

    SciTech Connect

    Voronin, A.N.

    1995-05-01

    The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.

  16. Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses

    SciTech Connect

    Bastos, Rodrigo O.; Appoloni, Carlos R.

    2008-08-07

    In laboratory gamma spectrometric analyses, the procedures for estimating background usually overestimate it. If an empty container similar to that used to hold samples is measured, it does not account for the background attenuation by the sample matrix. If a 'blank' sample is measured, the hypothesis that this sample will be free of radionuclides is generally not true. The activity of this 'blank' sample is frequently sufficient to mask or to overwhelm the effect of attenuation, so that the background remains overestimated. In order to overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. Beyond reasonable hypotheses, the model presumes knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and sample density. An evaluation of the effects of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector with a 66% relative efficiency. The results are presented for energies in the range of 63 to 2614 keV, for sample densities varying from 1.5 to 2.5 g·cm⁻³, and for material heights on the detector of 2 cm and 5 cm. For a sample density of 2.0 g·cm⁻³ and a 2 cm height, the method allowed for a lowering of the LLD by 3.4% for the 1460 keV energy of ⁴⁰K, 3.9% for the 911 keV energy of ²²⁸Ac, 4.5% for the 609 keV energy of ²¹⁴Bi, and 8.3% for the 92 keV energy of ²³⁴Th. For a sample density of 1.75 g·cm⁻³ and a 5 cm height, the method indicates a lowering of the LLD by 6.5%, 7.4%, 8.3%, and 12.9% for the same respective energies.
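
    Under the strong simplification of a uniform slab and normally incident photons (the numerical values below are hypothetical, not the paper's geometry), the attenuated background can be sketched as:

```python
import math

def attenuated_background(b_empty, mu_mass, density, height):
    """Background rate seen through the sample, assuming a uniform slab
    and normal incidence (a strong simplification of the real geometry).

    b_empty  -- background rate measured with the empty container
    mu_mass  -- mass attenuation coefficient at this energy (cm^2/g)
    density  -- sample density (g/cm^3)
    height   -- sample height on the detector (cm)
    """
    return b_empty * math.exp(-mu_mass * density * height)

# hypothetical values: 100 counts with the empty container, mu = 0.05 cm^2/g,
# density 2.0 g/cm^3, 2 cm of material on the detector
print(attenuated_background(100.0, 0.05, 2.0, 2.0))
```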

  17. CBSD version 2 component models of the IR celestial background

    NASA Astrophysics Data System (ADS)

    Kennealy, John P.; Glaudell, Gene A.

    1990-12-01

    CBSD Version 2 addresses the development of algorithms and software which implement realistic models of all the primary celestial background phenomenologies, including solar system, galactic, and extra-galactic features. During 1990, the CBSD program developed and refined IR scene generation models for the zodiacal emission, thermal emission from asteroids and planets, and the galactic point source background. Chapters in this report are devoted to each of those areas. Ongoing extensions to the point source module for extended source descriptions of nebulae and HII regions are briefly discussed. Treatment of small galaxies will also be a natural extension of the current CBSD point source module. Although no CBSD module yet exists for interstellar IR cirrus, MRC has been working closely with the Royal Aerospace Establishment in England to develop a database-driven understanding of cirrus fractal characteristics. The CBSD modules discussed in Chapters 2, 3, and 4 are all now operational and have been employed to generate a significant variety of scenes. CBSD scene generation capability has been well accepted by both the IR astronomy community and the DoD user community and directly supports the SDIO SSGM program.

  18. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are that it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; automatically updates/calibrates system models using the latest streaming sensor data; creates device-specific models that capture the exact behavior of devices of the same type; adapts to evolving systems; and can reduce computational complexity (faster simulations).
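The abstract does not disclose the learning algorithm; as a generic illustration of continuously recalibrating a device model from streaming telemetry, here is an exponentially weighted running mean/variance update (the function name and the choice of statistic are assumptions, not the invention's method):

```python
def update_model(model, sample, alpha=0.05):
    """Exponentially weighted running mean and variance: a generic
    stand-in for updating a device model from the latest streaming
    sensor value. `model` is a (mean, variance) pair."""
    mean, var = model
    delta = sample - mean
    mean += alpha * delta                           # drift toward new data
    var = (1 - alpha) * (var + alpha * delta * delta)  # EW variance update
    return mean, var
```

Feeding the model a steady sensor value drives the mean to that value and the variance toward zero, i.e. the model calibrates itself to the device's current behavior.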

  19. A Model for Optimal Constrained Adaptive Testing.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Reese, Lynda M.

    1998-01-01

    Proposes a model for constrained computerized adaptive testing in which the information in the test at the trait level (theta) estimate is maximized subject to the number of possible constraints on the content of the test. Test assembly relies on a linear-programming approach. Illustrates the approach through simulation with items from the Law…

  20. Automated adaptive inference of phenomenological dynamical models

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan C.; Nemenman, Ilya

    2015-08-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.

  1. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  2. Characterization and modeling of a low background HPGe detector

    NASA Astrophysics Data System (ADS)

    Dokania, N.; Singh, V.; Mathimalar, S.; Nanal, V.; Pal, S.; Pillay, R. G.

    2014-05-01

    A high efficiency, low background counting setup has been made at TIFR consisting of a special HPGe detector (~70%) surrounded by a low activity copper+lead shield. Detailed measurements are performed with point and extended geometry sources to obtain a complete response of the detector. An effective model of the detector has been made with GEANT4 based Monte Carlo simulations which agrees with experimental data within 5%. This setup will be used for qualification and selection of radio-pure materials to be used in a cryogenic bolometer for the study of Neutrinoless Double Beta Decay in ¹²⁴Sn as well as for other rare event studies. Using this setup, radio-impurities in the rock sample from the India-based Neutrino Observatory (INO) site have been estimated.

  3. The dependence of global ocean modeling on background diapycnal mixing.

    PubMed

    Deng, Zengan

    2014-01-01

    The Argo-derived background diapycnal mixing (BDM) proposed by Deng et al. (in press) is introduced into and applied in the Hybrid Coordinate Ocean Model (HYCOM). Sensitivity experiments are carried out using HYCOM to detect the responses of ocean surface temperature and the Meridional Overturning Circulation (MOC) to BDM in a global context. Preliminary results show that using a constant BDM, with the same order of magnitude as the realistic one, may cause significant deviations in temperature and MOC. The dependence of surface temperature and MOC on BDM is found to be prominent. Surface temperature decreases as BDM increases, because diapycnal mixing promotes the return of deep cold water to the upper ocean. Compared to the control run, larger variations in BDM cause more striking MOC changes.

  4. Adaptive Behaviour Assessment System: Indigenous Australian Adaptation Model (ABAS: IAAM)

    ERIC Educational Resources Information Center

    du Plessis, Santie

    2015-01-01

    The study objectives were to develop, trial and evaluate a cross-cultural adaptation of the Adaptive Behavior Assessment System-Second Edition Teacher Form (ABAS-II TF) ages 5-21 for use with Indigenous Australian students ages 5-14. This study introduced a multiphase mixed-method design with semi-structured and informal interviews, school…

  5. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  6. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical schemes.

  7. Adaptive Control with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture intended to improve the transient performance of the input and output signals of uncertain systems. A simple modification of the reference model is proposed: feeding back the tracking error signal. It is shown that the proposed approach guarantees tracking of the given reference command and of the reference control signal (the one that would be designed if the system were known) not only asymptotically but also in the transient regime. Moreover, it prevents the generation of high-frequency oscillations, which are unavoidable in conventional MRAC systems at large adaptation rates. The provided design guideline makes it possible to track reference commands of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated with a simulation example.
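A minimal scalar sketch of the scheme described — a reference model modified by feeding back the tracking error e = x − xm, combined with a standard gradient adaptive law (the first-order plant and all gains here are illustrative, not taken from the paper):

```python
def simulate_mrac(a=1.0, am=2.0, k=2.0, gamma=5.0, r=1.0, dt=0.001, steps=20000):
    """Scalar plant x' = a*x + u with unknown a.  Modified reference
    model xm' = -am*xm + am*r + k*e feeds back the tracking error
    e = x - xm; the adaptive law theta' = gamma*e*x estimates a."""
    x = xm = theta = 0.0
    for _ in range(steps):
        e = x - xm
        u = -(theta + am) * x + am * r      # certainty-equivalence control
        dx = a * x + u
        dxm = -am * xm + am * r + k * e     # error-feedback modification
        dtheta = gamma * e * x
        x += dt * dx
        xm += dt * dxm
        theta += dt * dtheta
    return x, xm, theta
```

With the Lyapunov function V = e²/2 + (a − θ)²/(2γ) one gets V̇ = −(am + k)e² ≤ 0, so the error feedback term k·e strictly adds damping to the error dynamics relative to conventional MRAC (k = 0).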

  8. Adaptive cyber-attack modeling system

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Dougherty, Edward T.

    2006-05-01

    The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides ample opportunity not only for the nefarious exploits of lone-wolf computer hackers, but for more systematic software attacks from organized entities. Much effort and focus has been placed on preventing and ameliorating network and OS attacks; a concomitant emphasis is required to address protection of mission-critical software. Typical evaluation, verification, and validation (V&V) of software protection techniques and methodologies involves a team of subject matter experts (SMEs) who mimic potential attackers or hackers. This manpower-intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the multiple non-subjective analyses required to support quantifying software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition is supported via BN evidence propagation and model parameter learning.

  9. Adaptive human behavior in epidemiological models

    PubMed Central

    Fenichel, Eli P.; Castillo-Chavez, Carlos; Ceddia, M. G.; Chowell, Gerardo; Parra, Paula A. Gonzalez; Hickling, Graham J.; Holloway, Garth; Horan, Richard; Morin, Benjamin; Perrings, Charles; Springborn, Michael; Velazquez, Leticia; Villalobos, Cristina

    2011-01-01

    The science and management of infectious disease are entering a new stage. Increasingly public policy to manage epidemics focuses on motivating people, through social distancing policies, to alter their behavior to reduce contacts and reduce public disease risk. Person-to-person contacts drive human disease dynamics. People value such contacts and are willing to accept some disease risk to gain contact-related benefits. The cost–benefit trade-offs that shape contact behavior, and hence the course of epidemics, are often only implicitly incorporated in epidemiological models. This approach creates difficulty in parsing out the effects of adaptive behavior. We use an epidemiological–economic model of disease dynamics to explicitly model the trade-offs that drive person-to-person contact decisions. Results indicate that including adaptive human behavior significantly changes the predicted course of epidemics and that this inclusion has implications for parameter estimation and interpretation and for the development of social distancing policies. Acknowledging adaptive behavior requires a shift in thinking about epidemiological processes and parameters. PMID:21444809

  10. Adaptive human behavior in epidemiological models.

    PubMed

    Fenichel, Eli P; Castillo-Chavez, Carlos; Ceddia, M G; Chowell, Gerardo; Parra, Paula A Gonzalez; Hickling, Graham J; Holloway, Garth; Horan, Richard; Morin, Benjamin; Perrings, Charles; Springborn, Michael; Velazquez, Leticia; Villalobos, Cristina

    2011-04-12

    The science and management of infectious disease are entering a new stage. Increasingly public policy to manage epidemics focuses on motivating people, through social distancing policies, to alter their behavior to reduce contacts and reduce public disease risk. Person-to-person contacts drive human disease dynamics. People value such contacts and are willing to accept some disease risk to gain contact-related benefits. The cost-benefit trade-offs that shape contact behavior, and hence the course of epidemics, are often only implicitly incorporated in epidemiological models. This approach creates difficulty in parsing out the effects of adaptive behavior. We use an epidemiological-economic model of disease dynamics to explicitly model the trade-offs that drive person-to-person contact decisions. Results indicate that including adaptive human behavior significantly changes the predicted course of epidemics and that this inclusion has implications for parameter estimation and interpretation and for the development of social distancing policies. Acknowledging adaptive behavior requires a shift in thinking about epidemiological processes and parameters.
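The qualitative effect can be illustrated with an SIR model in which the contact rate falls as prevalence (and hence perceived risk) rises — a reduced-form stand-in for the paper's utility-driven contact decisions (the functional form and all parameters here are hypothetical):

```python
def sir_peak(adaptive, beta=0.5, gamma=0.1, alpha=20.0, dt=0.1, steps=2000):
    """Euler-integrated SIR on unit population.  With `adaptive`,
    contacts scale as 1/(1 + alpha*I): people reduce contacts when
    prevalence I is high.  Returns the epidemic's peak prevalence."""
    s, i = 0.999, 0.001
    peak = i
    for _ in range(steps):
        contact = 1.0 / (1.0 + alpha * i) if adaptive else 1.0
        new_inf = beta * contact * s * i
        s += dt * (-new_inf)
        i += dt * (new_inf - gamma * i)
        peak = max(peak, i)
    return peak
```

Because the behavioral response suppresses transmission exactly when prevalence is high, the adaptive run peaks lower and later than the fixed-behavior run — the qualitative change in epidemic course that the abstract describes.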

  11. Gravitational wave background from Standard Model physics: qualitative features

    SciTech Connect

    Ghiglieri, J.; Laine, M. E-mail: laine@itp.unibe.ch

    2015-07-01

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T > 160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  12. Gravitational wave background from Standard Model physics: qualitative features

    SciTech Connect

    Ghiglieri, J.; Laine, M.

    2015-07-16

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T>160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  13. A Model for Climate Change Adaptation

    NASA Astrophysics Data System (ADS)

    Pasqualini, D.; Keating, G. N.

    2009-12-01

    Climate models predict serious impacts on the western U.S. in the next few decades, including increased temperatures and reduced precipitation. In combination, these changes are linked to profound impacts on fundamental systems, such as water and energy supplies, agriculture, population stability, and the economy. Global and national imperatives for climate change mitigation and adaptation are made actionable at the state level, for instance through greenhouse gas (GHG) emission regulations and incentives for renewable energy sources. However, adaptation occurs at the local level, where energy and water usage can be understood relative to local patterns of agriculture, industry, and culture. In response to the greenhouse gas emission reductions required by California’s Assembly Bill 32 (2006), Sonoma County has committed to sharp emissions reductions across several sectors, including water, energy, and transportation. To assist Sonoma County in developing a renewable energy (RE) portfolio that achieves this goal, we have developed an integrated assessment model, CLEAR (CLimate-Energy Assessment for Resiliency). Building on Sonoma County’s existing baseline studies of energy use, carbon emissions and potential RE sources, the CLEAR model simulates the complex interactions among technology deployment, economics and social behavior. The model enables assessment of these and other components, with specific analysis of their coupling and feedbacks, because the interrelated sectors of this complex problem cannot be studied independently. The goal is an approach to climate change mitigation and adaptation that is replicable for use by other interested communities. The model's user interface helps stakeholders and policymakers understand options for technology implementation.

  14. Adaptive Neuro-Fuzzy Inference System for Classification of Background EEG Signals from ESES Patients and Controls

    PubMed Central

    Yang, Zhixian; Wang, Yinghua; Ouyang, Gaoxiang

    2014-01-01

    Background electroencephalography (EEG), recorded with scalp electrodes, in children with electrical status epilepticus during slow-wave sleep (ESES) syndrome and control subjects has been analyzed. We considered 10 ESES patients, all right-handed and aged 3–9 years. The 10 control individuals had the same characteristics as the ESES patients but presented a normal EEG. Recordings were undertaken in an awake, relaxed state with eyes open. The complexity of the background EEG was evaluated using permutation entropy (PE) and sample entropy (SampEn) in combination with the ANOVA test. The entropy measures of EEG are significantly different between ESES patients and normal control subjects. A classification framework based on entropy measures and an adaptive neuro-fuzzy inference system (ANFIS) classifier is then proposed to distinguish ESES from normal EEG signals. The results are promising and a classification accuracy of about 89% is achieved. PMID:24790547
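Of the two complexity measures, permutation entropy has a particularly compact definition via Bandt–Pompe ordinal patterns; a minimal sketch (the embedding order and normalization choices here are ours):

```python
from collections import Counter
import math

def permutation_entropy(series, order=3, normalize=True):
    """Bandt-Pompe permutation entropy: count the ordinal patterns of
    consecutive windows of length `order`, then take the Shannon
    entropy of their relative frequencies."""
    counts = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # Ordinal pattern = indices of the window sorted by value
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    if normalize:
        h /= math.log(math.factorial(order))  # scale into [0, 1]
    return h
```

A monotone signal uses a single ordinal pattern and scores 0, while a signal that visits many patterns scores closer to 1 — the kind of contrast the classifier exploits between patient and control EEG.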

  15. Sample selection bias and presence-only distribution models: implications for background and pseudo-absence data.

    PubMed

    Phillips, Steven J; Dudík, Miroslav; Elith, Jane; Graham, Catherine H; Lehmann, Anthony; Leathwick, John; Ferrier, Simon

    2009-01-01

    Most methods for modeling species distributions from occurrence records require additional data representing the range of environmental conditions in the modeled region. These data, called background or pseudo-absence data, are usually drawn at random from the entire region, whereas occurrence collection is often spatially biased toward easily accessed areas. Since the spatial bias generally results in environmental bias, the difference between occurrence collection and background sampling may lead to inaccurate models. To correct the estimation, we propose choosing background data with the same bias as occurrence data. We investigate theoretical and practical implications of this approach. Accurate information about spatial bias is usually lacking, so explicit biased sampling of background sites may not be possible. However, it is likely that an entire target group of species observed by similar methods will share similar bias. We therefore explore the use of all occurrences within a target group as biased background data. We compare model performance using target-group background and randomly sampled background on a comprehensive collection of data for 226 species from diverse regions of the world. We find that target-group background improves average performance for all the modeling methods we consider, with the choice of background data having as large an effect on predictive performance as the choice of modeling method. The performance improvement due to target-group background is greatest when there is strong bias in the target-group presence records. Our approach applies to regression-based modeling methods that have been adapted for use with occurrence data, such as generalized linear or additive models and boosted regression trees, and to Maxent, a probability density estimation method. We argue that increased awareness of the implications of spatial bias in surveys, and possible modeling remedies, will substantially improve predictions of species distributions.
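The target-group idea reduces to a simple data-assembly step: the background for one species is every locality recorded for the whole group, which therefore carries the same survey bias as the presences. A sketch (the dictionary layout is illustrative):

```python
def target_group_background(occurrences, focal_species):
    """occurrences: dict mapping species name -> list of (lon, lat)
    records collected by similar survey methods.  The background for
    the focal species is every locality visited for the whole target
    group, so it shares the focal species' spatial sampling bias."""
    presences = occurrences[focal_species]
    background = [site for sites in occurrences.values() for site in sites]
    return presences, background
```

The presence/background pair can then be passed to any presence-only modeler (e.g. a logistic regression or Maxent-style density estimator) in place of randomly drawn background points.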

  16. Income distribution: An adaptive heterogeneous model

    NASA Astrophysics Data System (ADS)

    da Silva, L. C.; de Figueirêdo, P. H.

    2014-02-01

    In this communication an adaptive process is introduced into a many-agent model of a closed economic system in order to establish general features of income distribution. In this new version, agents are able to modify their resource-exchange parameter ωi through an adaptive process. The conclusions indicate that assuming instantaneous learning by all agents reproduces a Γ-distribution for income, while frozen behavior establishes a Pareto distribution for income with an exponent ν=0.94±0.02. A third case occurs when a heterogeneous “inertia” behavior is introduced, leading to a Γ-distribution in the low-income regime and a power-law decay for large income values with an exponent ν=2.05±0.05. This method enables investigation of the resource flux in the economic environment and also produces bounding values for the Gini index comparable with empirical data.
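The abstract does not give the exchange rule for ωi; as a stand-in, here is a standard conservative kinetic-exchange sketch with a per-agent saving propensity, a family of models known to produce Γ-like wealth distributions (the rule and parameter names are ours, not the paper's):

```python
import random

def kinetic_exchange(wealth, saving, steps, rng):
    """Conservative pairwise exchange: each agent keeps the fraction
    saving[i] of its wealth and the remainder of the pair's wealth is
    randomly redistributed between the two (a stand-in for the
    abstract's exchange parameter omega_i)."""
    n = len(wealth)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        eps = rng.random()
        pool = (1 - saving[i]) * wealth[i] + (1 - saving[j]) * wealth[j]
        wealth[i] = saving[i] * wealth[i] + eps * pool
        wealth[j] = saving[j] * wealth[j] + (1 - eps) * pool
    return wealth
```

Each trade conserves the pair's total, so total wealth is an invariant of the dynamics, which is what makes the stationary distribution (and quantities such as the Gini index) well defined.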

  17. An adaptive contextual quantum language model

    NASA Astrophysics Data System (ADS)

    Li, Jingfei; Zhang, Peng; Song, Dawei; Hou, Yuexian

    2016-08-01

    User interactions in a search system represent a rich source of implicit knowledge about the user's cognitive state and an information need that continuously evolves over time. Despite massive efforts to exploit and incorporate this implicit knowledge in information retrieval, it is still a challenge to effectively capture term dependencies and the user's dynamic information need (reflected by query modifications) in the context of user interaction. To tackle these issues, motivated by the recent Quantum Language Model (QLM), we develop a QLM-based retrieval model for session search, which naturally incorporates the complex term dependencies occurring in the user's historical queries and clicked documents via density matrices. To capture the dynamic information within a user's search session, we propose a density matrix transformation framework and further develop an adaptive QLM ranking model. Extensive comparative experiments show the effectiveness of our session quantum language models.
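The central object in a QLM is a density matrix built from term vectors, ρ = Σᵢ wᵢ|vᵢ⟩⟨vᵢ|; a pure-Python sketch of its construction (the weighting scheme and vector layout are illustrative, not the paper's):

```python
import math

def density_matrix(vectors, weights):
    """rho = sum_i w_i |v_i><v_i| over L2-normalized term vectors;
    with weights summing to 1, rho is symmetric with unit trace."""
    dim = len(vectors[0])
    rho = [[0.0] * dim for _ in range(dim)]
    for v, w in zip(vectors, weights):
        norm = math.sqrt(sum(x * x for x in v))
        u = [x / norm for x in v]
        for a in range(dim):
            for b in range(dim):
                rho[a][b] += w * u[a] * u[b]  # weighted outer product
    return rho
```

The off-diagonal entries are what encode term dependencies: a projector onto a vector with two nonzero components couples those two terms, which a bag-of-words probability vector cannot express.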

  18. Cancelling prism adaptation by a shift of background: a novel utility of allocentric coordinates for extracting motor errors.

    PubMed

    Uchimura, Motoaki; Kitazawa, Shigeru

    2013-04-24

    Many previous studies have reported that our brains are able to encode a target position not only in body-centered coordinates but also in terms of landmarks in the background. The importance of such allocentric memory increases when we are forced to complete a delayed reaching task after the target has disappeared. However, the merit of allocentric memory in natural situations in which we are free to make an immediate reach toward a target has remained elusive. We hypothesized that allocentric memory is essential even in an immediate reach for dissociating between error attributable to the motor system and error attributable to target motion. We show here in humans that prism adaptation, that is, adaptation of reaching movements in response to errors attributable to displacement of the visual field, can be cancelled or enhanced simply by moving the background in mid-flight of the reaching movement. The results provide direct evidence for the novel contribution of allocentric memory in providing information on "where I intended to go," thereby discriminating the effect of target motion from the error resulting from the issued motor control signals.

  19. Complex interplay between neutral and adaptive evolution shaped differential genomic background and disease susceptibility along the Italian peninsula

    PubMed Central

    Sazzini, Marco; Gnecchi Ruscone, Guido Alberto; Giuliani, Cristina; Sarno, Stefania; Quagliariello, Andrea; De Fanti, Sara; Boattini, Alessio; Gentilini, Davide; Fiorito, Giovanni; Catanoso, Mariagrazia; Boiardi, Luigi; Croci, Stefania; Macchioni, Pierluigi; Mantovani, Vilma; Di Blasio, Anna Maria; Matullo, Giuseppe; Salvarani, Carlo; Franceschi, Claudio; Pettener, Davide; Garagnani, Paolo; Luiselli, Donata

    2016-01-01

    The Italian peninsula has long represented a natural hub for human migrations across the Mediterranean area, being involved in several prehistoric and historical population movements. Coupled with a patchy environmental landscape entailing different ecological/cultural selective pressures, this might have produced peculiar patterns of population structure and local adaptations responsible for the heterogeneous genomic background of present-day Italians. To disentangle this complex scenario, genome-wide data from 780 Italian individuals were generated and set into the context of European/Mediterranean genomic diversity by comparison with genotypes from 50 populations. To maximize the possibility of pinpointing functional genomic regions that have played adaptive roles during Italian natural history, our survey also included ~250,000 exomic markers and ~20,000 coding/regulatory variants with well-established clinical relevance. This enabled fine-grained dissection of Italian population structure through the identification of clusters of genetically homogeneous provinces and of genomic regions underlying their local adaptations. Description of such patterns disclosed crucial implications for understanding the differential susceptibility of diverse Italian subpopulations to some inflammatory/autoimmune disorders, coronary artery disease and type 2 diabetes, suggesting the evolutionary causes that made some of them particularly exposed to the metabolic and immune challenges imposed by the dietary and lifestyle shifts that have involved Western societies in recent centuries. PMID:27582244

  20. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT-based MRAC algorithms had several problems. The original algorithm guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm expanded the class of plants that could be controlled; however, a steady-state error was introduced in the response. These problems led to the introduction of modifications to the algorithms so that they would be able to control a wider class of plants and, at the same time, would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned above. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.

  1. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  2. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  3. Pattern-based video coding with dynamic background modeling

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

The existing video coding standard H.264 cannot provide the expected rate-distortion (RD) performance for macroblocks (MBs) containing both moving objects and static background, or for MBs with uncovered background (previously occluded areas). The pattern-based video coding (PVC) technique partially addresses the first problem by separating and encoding the moving area and skipping the background area at the block level using binary pattern templates. However, existing PVC schemes cannot outperform H.264 by a significant margin at high bit rates because relatively few MBs are classified under the pattern mode. Moreover, both H.264 and the PVC schemes fail to provide the expected RD performance for uncovered background areas because the reference areas are unavailable in the existing approaches. In this paper, we propose a new PVC technique which uses the most common frame in a scene (McFIS) as a reference frame to overcome these problems. Apart from the use of McFIS as a reference frame, we also introduce a content-dependent pattern generation strategy for better RD performance. The experimental results confirm the superiority of the proposed schemes in comparison with the existing PVC and McFIS-based methods by achieving significant image quality gains over a wide range of bit rates.

  4. Plant adaptive behaviour in hydrological models (Invited)

    NASA Astrophysics Data System (ADS)

    van der Ploeg, M. J.; Teuling, R.

    2013-12-01

Models that will be able to cope with future precipitation and evaporation regimes need a solid base that describes the essence of the processes involved [1]. Micro-behaviour in the soil-vegetation-atmosphere system may have a large impact on patterns emerging at larger scales. A complicating factor in the micro-behaviour is the constant interaction between vegetation and geology, in which water plays a key role. The resilience of the coupled vegetation-soil system critically depends on its sensitivity to environmental changes. As a result of environmental changes vegetation may wither and die, but such environmental changes may also trigger gene adaptation. Constant exposure to environmental stresses, biotic or abiotic, influences plant physiology, gene adaptations, and flexibility in gene adaptation [2-6]. Gene expression as a result of different environmental conditions may profoundly impact drought responses across the same plant species. Differences in response to an environmental stress have consequences for the way species are currently treated in models (from single plant to global scale). In particular, model parameters that control root water uptake and plant transpiration are generally assumed to be a property of the plant functional type. Assigning plant functional types does not allow for local plant adaptation to be reflected in the model parameters, nor does it allow for correlations that might exist between root parameters and soil type. Models potentially provide a means to link root water uptake and transport to large-scale processes (e.g. Rosnay and Polcher 1998, Feddes et al. 2001, Jung 2010), especially when powered with an integrated hydrological, ecological and physiological base. We explore the experimental evidence from natural vegetation to formulate possible alternative modeling concepts. [1] Seibert, J. 2000. Multi-criteria calibration of a conceptual runoff model using a genetic algorithm. 
Hydrology and Earth System Sciences 4(2): 215

  5. Adaptive Model of Wastewater Aeration Tank

    NASA Astrophysics Data System (ADS)

    Sniders, Andris; Laizans, Aigars

    2011-01-01

The paper discusses the methodology of virtual simulation of oxygen transfer in a wastewater biological treatment process, using MATLAB/SIMULINK technology. A self-tuning adaptive model of a wastewater aeration tank is expounded, treating the tank as a non-stationary object with variable, time-dependent sensitivity and inertia indexes that are functions of the input variable (air pneumatic supply capacity Lg(t), m3/min), the output variable (dissolved oxygen concentration C(t), g/m3) and the load (oxygen expenditure q(t), g/min, required for complete wastewater purification). Virtual models, applying Laplace transforms and the SIMULINK block library, are composed in order to compare the transient processes of dissolved oxygen concentration in the simplified stationary model, with constant sensitivity and inertia coefficients, and in the non-stationary model, with variable sensitivity and inertia indexes. A simulation block diagram for adapting the non-stationary model to the variable parameters is developed, using informative links from the input variable Lg(t), the variable load q(t) and feedback from the output variable C(t) as inputs to a calculation module, allowing the variable indexes to be recalculated instantly during simulation. Comparison of the simplified stationary model and the non-stationary model shows that their simulation results for oxygen transfer differ by up to 50%.
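The contrast between the stationary and non-stationary formulations can be sketched numerically. Below is a minimal, illustrative Euler simulation of a first-order dissolved-oxygen model in which the gain and time constant depend on the aeration input; the functional forms and coefficients are invented placeholders, not the paper's identified indexes.

```python
import numpy as np

def simulate_do(t_end=60.0, dt=0.01, lg=2.0, q=50.0):
    """Euler simulation of a first-order dissolved-oxygen model whose
    gain k(lg) and time constant tau(lg) depend on the aeration input.
    The functional forms below are illustrative placeholders only."""
    n = int(t_end / dt)
    c = 0.0                            # dissolved oxygen C, g/m^3
    hist = np.empty(n)
    for i in range(n):
        k = 4.0 / (1.0 + 0.5 * lg)     # hypothetical gain, (g/m^3)/(m^3/min)
        tau = 10.0 / (1.0 + 0.2 * lg)  # hypothetical time constant, min
        # first-order response to aeration, minus oxygen uptake load
        dc = (k * lg - c) / tau - 0.001 * q
        c += dc * dt
        hist[i] = c
    return hist
```

In a non-stationary run, `lg` (and hence `k` and `tau`) would itself vary with time; the stationary simplification freezes both, which is the source of the discrepancy the paper quantifies.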

  6. A probabilistic cell model in background corrected image sequences for single cell analysis

    PubMed Central

    2010-01-01

    Background Methods of manual cell localization and outlining are so onerous that automated tracking methods would seem mandatory for handling huge image sequences; nevertheless, manual tracking is, astonishingly, still widely practiced in areas such as cell biology, which lie outside the influence of most image processing research. The goal of our research is to address this gap by developing automated methods of cell tracking, localization, and segmentation. Since even an optimal frame-to-frame association method cannot compensate for and recover from poor detection, it is clear that the quality of cell tracking depends on the quality of cell detection within each frame. Methods Cell detection performs poorly where the background is not uniform and includes temporal illumination variations, spatial non-uniformities, and stationary objects such as well boundaries (which confine the cells under study). To improve cell detection, the signal-to-noise ratio of the input image can be increased via accurate background estimation. In this paper we investigate background estimation for the purpose of cell detection. We propose a cell model and a method for background estimation, driven by the proposed cell model, such that well structure can be identified, and explicitly rejected, when estimating the background. Results The resulting background-removed images have fewer artifacts and allow cells to be localized and detected more reliably. The experimental results generated by applying the proposed method to different Hematopoietic Stem Cell (HSC) image sequences are quite promising. Conclusion The understanding of cell behavior relies on precise information about the temporal dynamics and spatial distribution of cells. Such information may play a key role in disease research and regenerative medicine, so automated methods for observation and measurement of cells from microscopic images are in high demand. 
The proposed method in this paper is capable of localizing single cells
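As a toy illustration of the background-estimation step (not the model-driven method proposed in the paper, which explicitly detects and rejects well structure), a per-pixel temporal median over the frame stack already removes moving cells from the background estimate:

```python
import numpy as np

def estimate_background(frames):
    """Per-pixel temporal median over a (T, H, W) frame stack.
    Moving cells occupy any given pixel only briefly, so the median
    recovers the static background at that pixel."""
    return np.median(frames, axis=0)

def subtract_background(frame, background):
    """Background-removed image; negative residuals are clipped to zero."""
    return np.clip(frame.astype(float) - background, 0.0, None)
```

The median is robust only while each pixel is background in most frames; stationary structures such as well boundaries survive it, which is why the paper's explicit rejection step is needed.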

  7. The Adaptive Calibration Model of stress responsivity

    PubMed Central

    Ellis, Bruce J.; Shirtcliff, Elizabeth A.

    2010-01-01

    This paper presents the Adaptive Calibration Model (ACM), an evolutionary-developmental theory of individual differences in the functioning of the stress response system. The stress response system has three main biological functions: (1) to coordinate the organism’s allostatic response to physical and psychosocial challenges; (2) to encode and filter information about the organism’s social and physical environment, mediating the organism’s openness to environmental inputs; and (3) to regulate the organism’s physiology and behavior in a broad range of fitness-relevant areas including defensive behaviors, competitive risk-taking, learning, attachment, affiliation and reproductive functioning. The information encoded by the system during development feeds back on the long-term calibration of the system itself, resulting in adaptive patterns of responsivity and individual differences in behavior. Drawing on evolutionary life history theory, we build a model of the development of stress responsivity across life stages, describe four prototypical responsivity patterns, and discuss the emergence and meaning of sex differences. The ACM extends the theory of biological sensitivity to context (BSC) and provides an integrative framework for future research in the field. PMID:21145350

  8. Adaptation dynamics of the quasispecies model

    NASA Astrophysics Data System (ADS)

    Jain, Kavita

    2009-02-01

We study the adaptation dynamics of an initially maladapted population evolving via the elementary processes of mutation and selection. The evolution occurs on rugged fitness landscapes which are defined on the multi-dimensional genotypic space and have many local peaks separated by low fitness valleys. We mainly focus on Eigen's model, which describes the deterministic dynamics of an infinite number of self-replicating molecules. In the stationary state, for small mutation rates, such a population forms a quasispecies which consists of the fittest genotype and its closely related mutants. The quasispecies dynamics on rugged fitness landscapes follow a punctuated (or step-like) pattern in which a population jumps from a low fitness peak to a higher one, stays there for a considerable time before shifting peaks again, and eventually reaches the global maximum of the fitness landscape. We calculate exactly several properties of this dynamical process within a simplified version of the quasispecies model.
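For a finite genotype space, the stationary quasispecies of Eigen's deterministic model is the leading eigenvector of the mutation-selection matrix. The sketch below computes it by power iteration for binary sequences; the single-peak landscape in the usage is an arbitrary illustration, not the rugged landscapes studied in the paper.

```python
import numpy as np
from itertools import product

def quasispecies_stationary(fitness, mu, L):
    """Stationary distribution of Eigen's quasispecies model for binary
    sequences of length L with per-site mutation rate mu.
    fitness: dict mapping genotype tuple -> replication rate."""
    genotypes = list(product((0, 1), repeat=L))
    n = len(genotypes)
    # mutation matrix: Q[i, j] = prob. that replicating j yields i
    Q = np.empty((n, n))
    for i, gi in enumerate(genotypes):
        for j, gj in enumerate(genotypes):
            d = sum(a != b for a, b in zip(gi, gj))   # Hamming distance
            Q[i, j] = (mu ** d) * ((1 - mu) ** (L - d))
    W = Q * np.array([fitness[g] for g in genotypes])  # mutation * selection
    x = np.full(n, 1.0 / n)
    for _ in range(2000):              # power iteration -> Perron eigenvector
        x = W @ x
        x /= x.sum()
    return dict(zip(genotypes, x))
```

For small `mu` the distribution concentrates on the fittest genotype and its one-mutant neighbours, which is the quasispecies structure described above.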

  9. European upper mantle tomography: adaptively parameterized models

    NASA Astrophysics Data System (ADS)

    Schäfer, J.; Boschi, L.

    2009-04-01

We have devised a new algorithm for upper-mantle surface-wave tomography based on adaptive parameterization: i.e., the size of each parameterization pixel depends on the local density of seismic data coverage. The advantage of using this kind of parameterization is that a high resolution can be achieved in regions with dense data coverage while a lower (and cheaper) resolution is kept in regions with low coverage. This way, the parameterization is everywhere optimal, both in terms of its computational cost and of model resolution. This is especially important for data sets with inhomogeneous data coverage, as is usually the case for global seismic databases. The data set we use has an especially good coverage around Switzerland and over central Europe. We focus on periods from 35 s to 150 s. The final goal of the project is to determine a new model of seismic velocities for the upper mantle underlying Europe and the Mediterranean Basin, of resolution higher than what is currently found in the literature. Our inversions involve regularization via norm and roughness minimization, and this in turn requires that discrete norm and roughness operators associated with our adaptive grid be precisely defined. The discretization of the roughness damping operator in the case of adaptive parameterizations is not as trivial as it is for uniform ones; important complications arise from the significant lateral variations in the size of pixels. We chose to first define the roughness operator in a spherical harmonic framework, and subsequently translate it to discrete pixels via a linear transformation. Since the smallest pixels we allow in our parameterization have a size of 0.625°, the spherical-harmonic roughness operator has to be defined up to harmonic degree 899, corresponding to 810,000 harmonic coefficients. This results in considerable computational costs: we conduct the harmonic-pixel transformations on a small Beowulf cluster. 
We validate our implementation of adaptive
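The core idea, pixel size tied to local data density, can be sketched with a simple recursive quadtree split (illustrative only; the actual parameterization, roughness damping and spherical geometry are far more involved). The 0.625° floor mirrors the smallest pixel size quoted above.

```python
def adapt_grid(points, lon0, lon1, lat0, lat1, max_pts=10, min_size=0.625):
    """Recursively split a rectangular cell into four children until each
    cell holds at most max_pts data points or reaches the minimum pixel
    size (degrees). Returns a list of (lon0, lon1, lat0, lat1) leaves."""
    inside = [(x, y) for x, y in points
              if lon0 <= x < lon1 and lat0 <= y < lat1]
    size = lon1 - lon0
    if len(inside) <= max_pts or size <= min_size:
        return [(lon0, lon1, lat0, lat1)]
    xm, ym = (lon0 + lon1) / 2, (lat0 + lat1) / 2
    cells = []
    for a, b in ((lon0, xm), (xm, lon1)):
        for c, d in ((lat0, ym), (ym, lat1)):
            cells += adapt_grid(inside, a, b, c, d, max_pts, min_size)
    return cells
```

Densely sampled regions end up tiled by many small pixels, sparsely sampled regions by a few large ones, which is exactly the resolution/cost trade described in the abstract.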

  10. A novel approach to model EPIC variable background

    NASA Astrophysics Data System (ADS)

    Marelli, M.; De Luca, A.; Salvetti, D.; Belfiore, A.; Pizzocaro, D.

    2016-06-01

In the past years XMM-Newton has revolutionized our way of looking at the X-ray sky. With more than 200 Ms of exposure, it has allowed numerous discoveries in every field of astronomy. Unfortunately, about 35% of the observing time is badly affected by soft proton flares, with background increasing by orders of magnitude, hampering any classical analysis of field sources. One of the main aims of the EXTraS ("Exploring the X-ray Transient and variable Sky") project is to characterise the variability of XMM-Newton sources within each single observation, including periods of high background. This posed severe challenges. I will describe a novel approach that we implemented within the EXTraS project to produce background-subtracted light curves, which allows us to treat the case of very faint sources and very large proton flares. EXTraS light curves will soon be released to the community, together with new tools that will allow the user to reproduce EXTraS results, as well as to extend a similar analysis to future data. Results of this work (including an unprecedented characterisation of the soft proton phenomenon and instrument response) will also serve as a reference for future missions and will be particularly relevant for the Athena observatory.

  11. Image Discrimination Models for Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.

    2000-01-01

This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendants and extensions. The application of image discrimination models to target detection will be described and results reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates and methods for elucidating those templates.

  12. Patterns of coral bleaching: Modeling the adaptive bleaching hypothesis

    USGS Publications Warehouse

    Ware, J.R.; Fautin, D.G.; Buddemeier, R.W.

    1996-01-01

Bleaching - the loss of symbiotic dinoflagellates (zooxanthellae) from animals normally possessing them - can be induced by a variety of stresses, of which temperature has received the most attention. Bleaching is generally considered detrimental, but Buddemeier and Fautin have proposed that bleaching is also adaptive, providing an opportunity for recombining hosts with alternative algal types to form symbioses that might be better adapted to altered circumstances. Our mathematical model of this "adaptive bleaching hypothesis" provides insight into how animal-algae symbioses might react under various circumstances. It emulates many aspects of the coral bleaching phenomenon including: corals bleaching in response to a temperature only slightly greater than their average local maximum temperature; background bleaching; bleaching events being followed by bleaching of lesser magnitude in the subsequent one to several years; higher thermal tolerance of corals subject to environmental variability compared with those living under more constant conditions; patchiness in bleaching; and bleaching at temperatures that had not previously resulted in bleaching. © 1996 Elsevier Science B.V. All rights reserved.

  13. Adapting the buccal micronucleus cytome assay for use in wild birds: age and sex affect background frequency in pigeons.

    PubMed

    Shepherd, G L; Somers, C M

    2012-03-01

Micronucleus (MN) formation has been used extensively as a biomarker of damage from genotoxic exposures. The Buccal MN Cytome (BMCyt) assay provides a noninvasive means of quantifying MN frequency in humans, but it has not been developed for use in wildlife. We adapted the BMCyt assay for use in wild birds, with a focus on feral pigeons (Columba livia) as a potential indicator species. Five of six urban bird species sampled using oral cavity swabs produced sufficient buccal cells for the BMCyt assay. The body size of species sampled ranged almost 100-fold (~60 to 5,000 g), but was not a major factor influencing the number of buccal cells collected. Pigeon cells were stained and scored following published BMCyt assay protocols for humans, but with a modified fixation approach using heat and methanol. Pigeons had the same common nuclear abnormalities reported in human studies, and a similar background MN formation frequency of 0.88 MN/1,000 cells. Adult pigeons had on average a threefold higher rate of MN formation than juveniles, and males had a 1.4- to 2.2-fold higher frequency than females. Domestic and feral pigeons did not differ in overall MN frequency. Our results indicate that the BMCyt assay can be used on wild birds, and could provide a means of assessing environmental genotoxicity in pigeons, a useful indicator species. However, bird age and sex are important factors affecting background MN frequency, and thereby the design of environmental studies.

  14. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  15. Computational modeling of multispectral remote sensing systems: Background investigations

    NASA Technical Reports Server (NTRS)

    Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic process of remote sensing has been developed based upon the results of the investigations presented. The model is used in studying concepts for improving worldwide environment and resource monitoring. A review of various atmospheric radiative transfer models is presented as well as details of the selected model. Functional forms for spectral diffuse reflectance with variability introduced are also presented. A cloud detection algorithm and the stochastic nature of remote sensing data with its implications are considered.

  16. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
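The tradeoff described here can be reproduced in a few lines for a scalar plant. The sketch below uses the standard Lyapunov-based adaptive law plus an optional sigma-modification damping term, which is a simpler stand-in for the paper's optimal control modification, included only to show where a damping term enters the adaptive law; the plant parameters and gains are arbitrary.

```python
import numpy as np

def simulate_mrac(gamma=50.0, sigma=0.0, t_end=10.0, dt=1e-3):
    """Scalar MRAC: plant xdot = a*x + u with unknown a; the control
    u = -k*x + r should make x follow the reference model
    xmdot = -am*xm + r. Adaptive law: kdot = gamma*e*x - sigma*k
    (sigma > 0 adds damping; sigma = 0 is the standard law).
    Ideal gain is k* = a + am."""
    a, am, r = 2.0, 3.0, 1.0       # true plant pole, model pole, command
    x = xm = 0.0
    k = 0.0                        # adaptive gain estimate
    errs = []
    for _ in range(int(t_end / dt)):
        e = x - xm                 # tracking error
        u = -k * x + r
        x += (a * x + u) * dt      # forward-Euler plant step
        xm += (-am * xm + r) * dt  # reference model step
        k += (gamma * e * x - sigma * k) * dt
        errs.append(abs(e))
    return np.array(errs), k
```

With a large `gamma` the standard law (`sigma=0`) adapts quickly; the damping term trades a small bias in `k` for smoother high-gain behavior, which is the spirit (though not the mechanics) of the optimal control modification.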

  17. Transitional Jobs: Background, Program Models, and Evaluation Evidence

    ERIC Educational Resources Information Center

    Bloom, Dan

    2010-01-01

    The budget for the U.S. Department of Labor for Fiscal Year 2010 includes a total of $45 million to support and study transitional jobs. This paper describes the origins of the transitional jobs models that are operating today, reviews the evidence on the effectiveness of this approach and other subsidized employment models, and offers some…

  18. Statistical modeling of natural backgrounds in hyperspectral LWIR data

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph

    2016-09-01

    Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.
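The kind of surface-parameter mixture model described here can be sketched as follows: temperature and emissivity are drawn from a two-component Gaussian mixture (with made-up component means and weights, not parameters fitted to any data set) and converted to 10 µm surface radiance via the Planck function.

```python
import numpy as np

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance in W / (m^2 sr um)."""
    h, c, kb = 6.626e-34, 2.998e8, 1.381e-23
    lam = wavelength_um * 1e-6                  # um -> m
    b = 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kb * temp_k))
    return b * 1e-6                             # per metre -> per micron

def sample_surface_radiance(n, seed=0):
    """Draw (temperature, emissivity) pairs from a two-component
    Gaussian mixture (illustrative parameters) and return simulated
    surface-leaving radiances at 10 um."""
    rng = np.random.default_rng(seed)
    warm = rng.random(n) < 0.6                  # mixture component label
    temp = np.where(warm, rng.normal(300.0, 5.0, n),
                          rng.normal(285.0, 3.0, n))
    emis = np.clip(np.where(warm, rng.normal(0.95, 0.02, n),
                                  rng.normal(0.80, 0.05, n)), 0.0, 1.0)
    return emis * planck_radiance(10.0, temp)
```

Samples like these stand in for the surface term only; propagating them to at-sensor radiance would additionally require an atmospheric transmittance/path-radiance model.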

  19. Photovoltaic market analysis program: Background, model development, applications and extensions

    NASA Astrophysics Data System (ADS)

    Lilien, G. L.; Fuller, F. H.

    1981-04-01

Tools and procedures to help guide government spending decisions associated with stimulating photovoltaic market penetration were developed. The program has three main components: (1) theoretical analysis aimed at understanding qualitatively what general types of policies are likely to be most cost-effective in stimulating PV market penetration; (2) operational model development (PV1), providing a user-oriented tool to study quantitatively the relative effectiveness of specific government spending options; and (3) field measurements, aimed at providing objective estimates of the parameters of the diffusion model used in PV1. Existing models of solar technology diffusion are reviewed and the structure of the PV1 model is described. Theoretical results on optimal strategies for spending federal market development and subsidy funds are reviewed. The validity of these results is checked by comparing them with PV1 projections of penetration and cost forecasts for 15 government policy strategies simulated on the PV1 model.

  20. Roy’s Adaptation Model-Guided Education and Promoting the Adaptation of Veterans With Lower Extremities Amputation

    PubMed Central

    Azarmi, Somayeh; Farsi, Zahra

    2015-01-01

Background: Any defect in extremities of the body can affect different life aspects. Objectives: The purpose of this study was to investigate the effect of Roy’s adaptation model-guided education on promoting the adaptation of veterans with lower extremities amputation. Patients and Methods: In a randomized clinical trial, 60 veterans with lower extremities amputation referring to Kowsar Orthotics and Prosthetics Center of veterans clinic in Tehran, Iran, were recruited by convenience sampling and randomly assigned to intervention and control groups during 2013 - 2014. For data collection, Roy’s adaptation model questionnaire was used. After completing the questionnaires in both groups, maladaptive behaviors were determined in the intervention group and an education program based on Roy’s adaptation model was implemented. After two months, both groups completed the questionnaires again. Data was analyzed with SPSS software. Results: Independent t-test showed statistically significant differences between the two groups in the post-test stage in terms of the total score of adaptation (P = 0.001) as well as physiologic (P = 0.0001) and role function modes (P = 0.004). The total score of adaptation (139.43 ± 5.45 to 127.54 ± 14.55, P = 0.006) as well as the scores of physiologic (60.26 ± 5.45 to 53.73 ± 7.79, P = 0.001) and role function (20.30 ± 2.42 to 18.13 ± 3.18, P = 0.01) modes in the intervention group significantly increased, whereas the scores of self-concept (42.10 ± 4.71 to 39.40 ± 5.67, P = 0.21) and interdependence (16.76 ± 2.22 to 16.30 ± 2.57, P = 0.44) modes in the two stages did not have a significant difference. Conclusions: Findings of this research indicated that the Roy’s adaptation model-guided education promoted the adaptation level of physiologic and role function modes in veterans with lower extremities amputation. However, this intervention could not promote adaptation in self-concept and interdependence modes. More

  1. A Comparison between High-Energy Radiation Background Models and SPENVIS Trapped-Particle Radiation Models

    NASA Technical Reports Server (NTRS)

    Krizmanic, John F.

    2013-01-01

We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and cosmic-ray experiments, in particular for the International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow useful operation of the experiment. One area we are investigating is how well SPENVIS flux predictions at higher energies match the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.

  2. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
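The FAST idea, forming an ensemble from a moving window along a single trajectory, reduces to a few lines of linear algebra. This sketch (with an arbitrary window length) covers only the covariance-estimation step, not the full assimilation:

```python
import numpy as np

def fast_covariance(trajectory, window=10):
    """FAST-style background error covariance: treat the last `window`
    states of a single model trajectory, shape (T, n_state), as an
    ensemble and return its (n_state, n_state) sample covariance."""
    ens = trajectory[-window:]          # moving-window "ensemble"
    anom = ens - ens.mean(axis=0)       # anomalies about the window mean
    return anom.T @ anom / (window - 1)
```

Because the covariance is rebuilt from recent states at each analysis time, the estimate is flow dependent without ever integrating more than one trajectory.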

  3. Radiation Background and Attenuation Model Validation and Development

    SciTech Connect

    Peplow, Douglas E.; Santiago, Claudio P.

    2015-08-05

    This report describes the initial results of a study being conducted as part of the Urban Search Planning Tool project. The study is comparing the Urban Scene Simulator (USS), a one-dimensional (1D) radiation transport model developed at LLNL, with the three-dimensional (3D) radiation transport model from ORNL using the MCNP, SCALE/ORIGEN and SCALE/MAVRIC simulation codes. In this study, we have analyzed the differences between the two approaches at every step, from source term representation, to estimating flux and detector count rates at a fixed distance from a simple surface (slab), and at points throughout more complex 3D scenes.

  4. Modeling Background Radiation in our Environment Using Geochemical Data

    SciTech Connect

    Malchow, Russell L.; Marsac, Kara; Burnley, Pamela; Hausrath, Elisabeth; Haber, Daniel; Adcock, Christopher

    2015-02-01

    Radiation occurs naturally in bedrock and soil. Gamma rays are released from the decay of the radioactive isotopes K, U, and Th. Gamma rays observed at the surface come from the first 30 cm of rock and soil. The energy of gamma rays is specific to each isotope, allowing identification. For this research, data was collected from national databases, private companies, scientific literature, and field work. Data points were then evaluated for self-consistency. A model was created by converting concentrations of U, K, and Th for each rock and soil unit into a ground exposure rate using the following equation: D=1.32 K+ 0.548 U+ 0.272 Th. The first objective of this research was to compare the original Aerial Measurement System gamma ray survey to results produced by the model. The second objective was to improve the method and learn the constraints of the model. Future work will include sample data analysis from field work with a goal of improving the geochemical model.
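
The conversion quoted in the abstract is a simple linear combination, sketched below. The coefficients are taken directly from the abstract; the input units (K in weight percent, U and Th in ppm) and the example concentrations are assumptions for illustration, since the abstract does not state them.

```python
def exposure_rate(k_pct, u_ppm, th_ppm):
    """Ground exposure rate from K, U, and Th concentrations using the
    linear model quoted in the abstract: D = 1.32 K + 0.548 U + 0.272 Th.
    Units of the inputs and of D follow the study's convention (assumed
    here to be wt% for K and ppm for U and Th)."""
    return 1.32 * k_pct + 0.548 * u_ppm + 0.272 * th_ppm

# Example: illustrative concentrations (not values from the study)
d = exposure_rate(k_pct=3.5, u_ppm=4.0, th_ppm=15.0)
```

Mapping each rock and soil unit's concentrations through this function yields the modeled exposure-rate map that is compared against the aerial gamma-ray survey.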

  5. A Sharing Item Response Theory Model for Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Segall, Daniel O.

    2004-01-01

    A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…

  6. Adapting the ALP Model for Student and Institutional Needs

    ERIC Educational Resources Information Center

    Sides, Meredith

    2016-01-01

    With the increasing adoption of accelerated models of learning comes the necessary step of adapting these models to fit the unique needs of the student population at each individual institution. One such college adapted the ALP (Accelerated Learning Program) model and made specific changes to the target population, structure and scheduling, and…

  7. A Conceptual Model of Childhood Adaptation to Type 1 Diabetes

    PubMed Central

    Whittemore, Robin; Jaser, Sarah; Guo, Jia; Grey, Margaret

    2010-01-01

    The Childhood Adaptation Model to Chronic Illness: Diabetes Mellitus was developed to identify factors that influence childhood adaptation to type 1 diabetes (T1D). Since this model was proposed, considerable research has been completed. The purpose of this paper is to update the model on childhood adaptation to T1D using research conducted since the original model was proposed. The framework suggests that individual and family characteristics, such as age, socioeconomic status, and in children with T1D, treatment modality (pump vs. injections), psychosocial responses (depressive symptoms and anxiety), and individual and family responses (self-management, coping, self-efficacy, family functioning, social competence) influence the level of adaptation. Adaptation has both physiologic (metabolic control) and psychosocial (QOL) components. This revised model provides greater specificity to the factors that influence adaptation to chronic illness in children. Research and clinical implications are discussed. PMID:20934079

  8. High-resolution subgrid models: background, grid generation, and implementation

    NASA Astrophysics Data System (ADS)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented, including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals, including a few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires particular treatment; based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with those of a comparatively highly resolved classical unstructured-grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
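
The mass-balance idea behind the subgrid approach can be sketched in a few lines: a coarse cell's wet volume is computed from the high-resolution bathymetry it contains, so a cell can be wet, partially wet, or dry with no drying threshold. The function names and the tiny four-point bathymetry below are invented for illustration; they are not from the cited algorithm.

```python
import numpy as np

def cell_volume(eta, z_bed, cell_area):
    """Wet volume of one coarse cell from its subgrid bathymetry: the
    mean subgrid water depth times the cell area. The cell is dry when
    eta is below every bed point and partially wet in between."""
    depth = np.maximum(eta - z_bed, 0.0)   # water depth at each subgrid point
    return depth.mean() * cell_area

def wet_fraction(eta, z_bed):
    """Fraction of the coarse cell that is wet (0 = dry, 1 = fully wet)."""
    return float(np.mean(z_bed < eta))

z = np.array([-2.0, -1.0, 0.5, 1.5])       # hypothetical subgrid bed elevations (m)
v = cell_volume(eta=1.0, z_bed=z, cell_area=100.0)   # partially wet cell
f = wet_fraction(1.0, z)
```

Because the volume is evaluated from the subgrid bed elevations directly, the representation is correct up to the resolution of the bathymetric data, which is the property the abstract highlights.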

  9. Background model systematics for the Fermi GeV excess

    SciTech Connect

    Calore, Francesca; Weniger, Christoph; Cholis, Ilias E-mail: cholis@fnal.gov

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power-law of break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b b̄ final states a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.

  10. Background model systematics for the Fermi GeV excess

    SciTech Connect

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power-law of break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b b̄ final states a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.
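
A broken power-law spectrum of the kind fitted in these two records is straightforward to write down. The sketch below uses the quoted break energy of 2.1 GeV; the spectral indices and normalization are placeholders for illustration, not the fitted values from the papers.

```python
import numpy as np

def broken_power_law(e, norm, e_break, idx_lo, idx_hi):
    """dN/dE for a broken power law, pivoted at the break energy:
    below e_break the spectrum falls as e^-idx_lo, above it as e^-idx_hi."""
    e = np.asarray(e, dtype=float)
    return np.where(e < e_break,
                    norm * (e / e_break) ** (-idx_lo),
                    norm * (e / e_break) ** (-idx_hi))

# Illustrative spectrum with the quoted break energy of 2.1 GeV
energies = np.array([0.5, 2.1, 10.0])       # GeV
flux = broken_power_law(energies, norm=1.0, e_break=2.1, idx_lo=1.4, idx_hi=2.6)
```

Pivoting at the break energy makes the two power-law segments join continuously, so the normalization is the flux density at the break itself.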

  11. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.

  12. Applicable Adaptive Testing Models for School Teachers.

    ERIC Educational Resources Information Center

    Wang, Albert Chang-hwa; Chuang, Chi-lin

    2002-01-01

    Describes a study conducted in Taipei (Taiwan) that investigated the attitudinal effects of SPRT (Sequential Probability Ratio Test) adaptive testing environment on junior high school students. Discusses test anxiety; student preferences; test adaptability; acceptance of test results; number of items answered; and computer experience. (Author/LRW)

  13. Modeling adaptive non-repudiation security services

    NASA Astrophysics Data System (ADS)

    Tunia, Marcin A.

    2016-09-01

    A non-repudiation security service helps to protect an electronic system against false denial, by the participants of communication involving that system, of having performed certain actions. In the development of such a security service it is important to implement all the necessary elements and adapt them to the defined protection scope. There are several types of non-repudiation and several cases of their use. In this paper the author focuses on the situation where a non-repudiation service is implemented on an application server. All necessary actions of the users, i.e., of people and/or machines, are recorded by the service. Currently, a new type of security service, called a "context-aware security service", is under study. This type of service acquires and processes additional information in order to provide flexible protection. This paper presents the elements of a non-repudiation security service and shows how those elements can be modeled with context-awareness support, including reputation systems.

  14. Multiclient Identification System Using Adaptive Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Siana, Linda; Shou, Yu-Wen; Yang, Chien-Ting

    2010-12-01

    This paper aims at integrating detection and identification of human faces into a more practical, real-time face recognition system. The proposed face detection system is based on the cascade Adaboost method to improve precision and robustness under unstable surrounding lighting. Our Adaboost method is novel in compensating for environmental lighting conditions by histogram lighting normalization and in accurately locating face regions by a region-based clustering process. We also address the problem of multi-scale faces by using 12 different scales of searching windows and 5 different orientations for each client, in pursuit of multi-view independent face identification. There are two main methodological parts in our face identification system: PCA (principal component analysis) facial feature extraction and an adaptive probabilistic model (APM). Our implemented APM, a weighted combination of simple probabilistic functions, constructs the likelihood functions through the probabilistic constraint in the similarity measures. In addition, the constructed APM allows our method to add a new client and to update the information of registered clients online. The experimental results show the superior performance of our proposed system for both offline and real-time online testing.
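
The two methodological parts named in the abstract (PCA feature extraction and a weighted combination of simple probabilistic functions) can be sketched as follows. This is a generic stand-in, not the paper's APM: the isotropic-Gaussian components, the random stand-in "features", and all names are assumptions for illustration.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD: returns the data mean and the top principal axes (rows)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:n_components]

def pca_project(x, mu, axes):
    """Project one sample onto the principal axes."""
    return axes @ (x - mu)

def apm_likelihood(feat, weights, means, sigmas):
    """APM-style likelihood: a weighted combination of simple probabilistic
    functions (isotropic Gaussians used here as a stand-in)."""
    like = 0.0
    for w, m, s in zip(weights, means, sigmas):
        d2 = float(np.sum((feat - m) ** 2))
        norm = (np.sqrt(2.0 * np.pi) * s) ** len(feat)
        like += w * np.exp(-0.5 * d2 / s**2) / norm
    return like

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))           # stand-in face feature vectors
mu, axes = pca_fit(X, n_components=3)
f0 = pca_project(X[0], mu, axes)
f1 = pca_project(X[1], mu, axes)
like_near = apm_likelihood(f0, [0.5, 0.5], [f0, f1], [1.0, 1.0])
like_far = apm_likelihood(f0 + 50.0, [0.5, 0.5], [f0, f1], [1.0, 1.0])
```

Enrolling a new client then amounts to adding a weighted component to the mixture, which is the online-registration property the abstract attributes to the APM.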

  15. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    SciTech Connect

    Archer, Daniel E.; Hornback, Donald Eric; Johnson, Jeffrey O.; Nicholson, Andrew D.; Patton, Bruce W.; Peplow, Douglas E.; Miller, Thomas Martin; Ayaz-Maierhafer, Birsen

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  16. Moving object detection using a background modeling based on entropy theory and quad-tree decomposition

    NASA Astrophysics Data System (ADS)

    Elharrouss, Omar; Moujahid, Driss; Elkah, Samah; Tairi, Hamid

    2016-11-01

    A particular algorithm for moving object detection using a background subtraction approach is proposed. We generate the background model by combining quad-tree decomposition with entropy theory. In general, many background subtraction approaches are sensitive to sudden illumination changes in the scene and cannot update the background image accordingly. The proposed background modeling approach addresses this illumination change problem. After performing background subtraction based on the proposed background model, the moving targets can be accurately detected in each frame of the image sequence. To achieve high accuracy in motion detection, the binary motion mask is computed by the proposed threshold function. Experimental analysis based on statistical measurements demonstrates the efficiency of the proposed method in terms of quality and quantity, and it substantially outperforms existing methods in perceptual evaluation.
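
The overall pipeline (subtract background, threshold adaptively, update the background only outside detected motion) can be sketched as below. This is a minimal stand-in: the paper's actual threshold function is built from entropy measures and quad-tree decomposition, whereas the sketch uses simple mean/std statistics of the difference image; all names and parameters are assumptions.

```python
import numpy as np

def detect_moving(frame, background, k=2.5):
    """Binary motion mask via background subtraction with an adaptive
    threshold derived from the difference statistics (a stand-in for the
    paper's entropy/quad-tree threshold function)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    thresh = diff.mean() + k * diff.std()    # adapts to global illumination shifts
    return diff > thresh

def update_background(background, frame, mask, alpha=0.05):
    """Selective running-average update: only non-foreground pixels adapt,
    so moving objects are not absorbed into the background."""
    bg = background.astype(float)
    bg[~mask] = (1 - alpha) * bg[~mask] + alpha * frame[~mask].astype(float)
    return bg

bg0 = np.zeros((64, 64), dtype=np.uint8)
frame = bg0.copy()
frame[10:18, 10:18] = 200                    # a bright moving object
mask = detect_moving(frame, bg0)
bg1 = update_background(bg0, frame, mask)
```

Because the threshold is recomputed from each frame's difference statistics, a global brightness shift raises the threshold instead of flooding the mask with false foreground.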

  17. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data File (CDF) format served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  18. Modeling Adaptation as a Flow and Stock Decision with Mitigation

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-live...

  19. Modeling Two Types of Adaptation to Climate Change

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-live...

  20. Modeling Adaptation as a Flow and Stock Decision with Mitigation

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-liv...

  1. Adaptive Networks Foundations: Modeling, Dynamics, and Applications

    DTIC Science & Technology

    2013-02-13

    …22-Mar. 2, 2012. • Shaw, L.B., Long, Y., and Gross, T. Simultaneous spread of infection and information in adaptive networks. International Workshop on Mathematical Biology, Casablanca, Morocco, Jun. 20-24, 2011. • Tunc, I. and Shaw, L.B. Dynamics of infection spreading in adaptive…

  2. Gradient-based adaptation of continuous dynamic model structures

    NASA Astrophysics Data System (ADS)

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.

  3. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  4. Consensus time and conformity in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Rogers, Tim; Gross, Thilo

    2013-09-01

    The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.
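
The baseline dynamics referenced here (the standard two-opinion adaptive voter model, without the paper's conflict-resolution extension) can be simulated in a few lines. The network construction, parameters, and seed below are invented for illustration.

```python
import random

def adaptive_voter_step(edges, opinion, p_rewire, rng):
    """One update of the standard adaptive voter model: pick a random edge;
    if its endpoints disagree, with probability p_rewire node a drops the
    discordant link and reattaches to a like-minded node, otherwise the
    neighbor adopts a's opinion."""
    i = rng.randrange(len(edges))
    a, b = edges[i]
    if opinion[a] == opinion[b]:
        return
    if rng.random() < p_rewire:
        same = [n for n in opinion if opinion[n] == opinion[a] and n != a]
        if same:
            edges[i] = (a, rng.choice(same))
    else:
        opinion[b] = opinion[a]

def run_to_consensus(n, m, p_rewire, seed=0, max_steps=100_000):
    """Steps until no edge connects disagreeing nodes (an absorbing state)."""
    rng = random.Random(seed)
    opinion = {v: rng.randrange(2) for v in range(n)}
    edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
    for step in range(max_steps):
        if all(opinion[a] == opinion[b] for a, b in edges):
            return step
        adaptive_voter_step(edges, opinion, p_rewire, rng)
    return max_steps

steps = run_to_consensus(n=12, m=18, p_rewire=0.2, seed=3)
```

Averaging such consensus times over many seeds, as a function of p_rewire, is the kind of quantity whose scaling behavior the paper analyzes.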

  5. An Adaptive Discontinuous Galerkin Method for Modeling Atmospheric Convection (Preprint)

    DTIC Science & Technology

    2011-04-13

    Giraldo and Volkmar Wirth … One important question for each adaptive numerical model is: how accurate is the adaptive method? … this criterion is used later for some sensitivity studies. These studies include a comparison between a simulation on an adaptive mesh and a simulation on a uniform mesh, and a sensitivity study concerning the size of the refinement region. … 5.1 Comparison Criterion. For comparing different…

  6. Modeling Family Adaptation to Fragile X Syndrome

    ERIC Educational Resources Information Center

    Raspa, Melissa; Bailey, Donald, Jr.; Bann, Carla; Bishop, Ellen

    2014-01-01

    Using data from a survey of 1,099 families who have a child with Fragile X syndrome, we examined adaptation across 7 dimensions of family life: parenting knowledge, social support, social life, financial impact, well-being, quality of life, and overall impact. Results illustrate that although families report a high quality of life, they struggle…

  7. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2008-01-01

    This report presents a pilot study integrating particle swarm algorithms, social knowledge adaptation, and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation to a dynamic environment and to provide insight into social group knowledge discovery and strategic searching. A new adaptive environment model, which dynamically reacts to the group's collective searching behaviors, is proposed. The simulations indicate that effective communication between groups is not a necessary requirement for self-organized groups to achieve efficient collective searching behavior in the adaptive environment.
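
The particle swarm metaphor underlying both of these records blends each agent's personal best position (individual knowledge) with the group's best (social knowledge). The minimal sketch below is textbook PSO on a static objective, not the report's adaptive-environment model; the bounds, coefficients, and test function are illustrative assumptions.

```python
import random

def pso_search(fitness, dim, n_particles=20, iters=100, seed=0,
               w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation: velocities blend inertia,
    attraction to the personal best (c1), and attraction to the group
    best (c2); returns the best position and value found."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Search for the minimum of a simple sphere function
best, best_val = pso_search(lambda p: sum(x * x for x in p), dim=2)
```

In the report's setting the fitness landscape itself reacts to the swarm, so the objective would change between iterations rather than stay fixed as it does here.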

  8. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2007-01-01

    This report presents a pilot study integrating particle swarm algorithms, social knowledge adaptation, and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation to a dynamic environment and to provide insight into social group knowledge discovery and strategic searching. A new adaptive environment model, which dynamically reacts to the group's collective searching behaviors, is proposed. The simulations indicate that effective communication between groups is not a necessary requirement for self-organized groups to achieve efficient collective searching behavior in the adaptive environment.

  9. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  10. Adaptive User Model for Web-Based Learning Environment.

    ERIC Educational Resources Information Center

    Garofalakis, John; Sirmakessis, Spiros; Sakkopoulos, Evangelos; Tsakalidis, Athanasios

    This paper describes the design of an adaptive user model and its implementation in an advanced Web-based Virtual University environment that encompasses combined and synchronized adaptation between educational material and well-known communication facilities. The Virtual University environment has been implemented to support a postgraduate…

  11. Validity of Greenspan's Models of Adaptive and Social Intelligence.

    ERIC Educational Resources Information Center

    Mathias, Jane L.; Nettelbeck, Ted

    1992-01-01

    Two studies assessed the construct validity of Greenspan's models of adaptive and social intelligence with 75 adolescents with mental retardation. Factor analysis of measures of conceptual intelligence, adaptive behavior, and social intelligence yielded a practice-interpersonal competence construct. The second study, however, failed to establish the…

  12. Background-Error Correlation Model Based on the Implicit Solution of a Diffusion Equation

    DTIC Science & Technology

    2010-01-01

    Matthew J. Carrier and Hans Ngodock … (2001), which sought to model error correlations based on the explicit solution of a generalized diffusion equation. The implicit solution is…

  13. Improving nonlinear modeling capabilities of functional link adaptive filters.

    PubMed

    Comminiello, Danilo; Scarpiniti, Michele; Scardapane, Simone; Parisi, Raffaele; Uncini, Aurelio

    2015-09-01

    The functional link adaptive filter (FLAF) represents an effective solution for online nonlinear modeling problems. In this paper, we take into account a FLAF-based architecture, which separates the adaptation of linear and nonlinear elements, and we focus on the nonlinear branch to improve the modeling performance. In particular, we propose a new model that involves an adaptive combination of filters downstream of the nonlinear expansion. Such combination leads to a cooperative behavior of the whole architecture, thus yielding a performance improvement, particularly in the presence of strong nonlinearities. An advanced architecture is also proposed involving the adaptive combination of multiple filters on the nonlinear branch. The proposed models are assessed in different nonlinear modeling problems, in which their effectiveness and capabilities are shown.
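
The FLAF building block (a fixed nonlinear functional-link expansion followed by linearly adapted weights) can be sketched with a plain LMS update. This is a memoryless single-branch toy, not the paper's combined architecture; the trigonometric expansion, step size, and target nonlinearity are illustrative assumptions.

```python
import math
import random

def flaf_expand(x):
    """Trigonometric functional-link expansion of a single input sample."""
    return [x, math.sin(math.pi * x), math.cos(math.pi * x)]

def flaf_lms(inputs, targets, mu=0.1):
    """LMS adaptation of a functional-link filter: the fixed nonlinear
    expansion supplies the modeling power, while the adapted weights
    remain linear-in-the-parameters."""
    w = [0.0] * 3
    for x, d in zip(inputs, targets):
        z = flaf_expand(x)
        y = sum(wi * zi for wi, zi in zip(w, z))   # filter output
        e = d - y                                  # a-priori error
        w = [wi + mu * e * zi for wi, zi in zip(w, z)]
    return w

# Identify a memoryless nonlinearity d = 0.5*x + 0.3*sin(pi*x)
rng = random.Random(0)
xs = [rng.uniform(-1.0, 1.0) for _ in range(2000)]
ds = [0.5 * x + 0.3 * math.sin(math.pi * x) for x in xs]
w = flaf_lms(xs, ds)
```

The paper's contribution sits on top of this scheme: several such filters run in parallel downstream of the expansion, and their outputs are adaptively combined so that the stronger branch dominates when the nonlinearity is strong.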

  14. Adaptive filter design using recurrent cerebellar model articulation controller.

    PubMed

    Lin, Chih-Min; Chen, Li-Yang; Yeung, Daniel S

    2010-07-01

    A novel adaptive filter is proposed using a recurrent cerebellar-model-articulation-controller (CMAC). The proposed locally recurrent globally feedforward recurrent CMAC (RCMAC) has favorable properties of small size, good generalization, rapid learning, and dynamic response, thus it is more suitable for high-speed signal processing. To provide fast training, an efficient parameter learning algorithm based on the normalized gradient descent method is presented, in which the learning rates are on-line adapted. Then the Lyapunov function is utilized to derive the conditions of the adaptive learning rates, so the stability of the filtering error can be guaranteed. To demonstrate the performance of the proposed adaptive RCMAC filter, it is applied to a nonlinear channel equalization system and an adaptive noise cancelation system. The advantages of the proposed filter over other adaptive filters are verified through simulations.

  15. Context aware adaptive security service model

    NASA Astrophysics Data System (ADS)

    Tunia, Marcin A.

    2015-09-01

    Present systems and devices are usually protected against various threats to digital data processing. The protection mechanisms consume resources, which are either highly limited or intensively utilized by many entities, so optimizing their usage is advantageous. Resources saved through optimization may be utilized by other mechanisms or may last longer. It is usually assumed that protection has to provide a specific quality and attack resistance. By interpreting the context of business services - both the users and the services themselves - it is possible to adapt security service parameters to counter the threats associated with the current situation. This approach optimizes the resources used while maintaining a sufficient security level. This paper presents the architecture of an adaptive security service that is context-aware and addresses the issue of context data quality.

  16. Location- and lesion-dependent estimation of background tissue complexity for anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Avanaki, Ali R. N.; Espig, Kathryn; Knippel, Eddie; Kimpe, Tom R. L.; Xthona, Albert; Maidment, Andrew D. A.

    2016-03-01

    In this paper, we specify a notion of background tissue complexity (BTC), as perceived by a human observer, that is suited for use with model observers. This notion of BTC is a function of image location and lesion shape and size. We propose four unsupervised BTC estimators based on: (i) perceived pre- and post-lesion similarity of images; (ii) lesion border analysis (LBA; a conspicuous lesion should be brighter than its surround); (iii) tissue anomaly detection; and (iv) mammogram density measurement. The latter two are existing methods we adapt for location- and lesion-dependent BTC estimation. To validate the BTC estimators, we ask human observers to measure BTC as the visibility threshold amplitude of an inserted lesion at specified locations in a mammogram. Both human-measured and computationally estimated BTC varied with lesion shape (from circular to oval), size (from small circular to larger circular), and location (different points across a mammogram). BTCs measured by different human observers are correlated (ρ=0.67). BTC estimators are highly correlated to each other (ρ ≥ 0.84). The proposed estimators may be used to supply location- and lesion-dependent BTC to an anthropomorphic model observer, with applications such as optimization of contrast-enhanced medical imaging systems and creation of a diversified image dataset with characteristics of a desired population.

  17. A model for culturally adapting a learning system.

    PubMed

    Del Rosario, M L

    1975-12-01

    The Cross-Cultural Adaptation Model (XCAM) is designed to help identify cultural values contained in the text, narration, or visual components of a learning instrument. It enables the adapter to evaluate the adapted version so that it can be modified or revised, and to assess the modified version by actually measuring the amount of cultural conflict still present in it. Such a model would permit worldwide adaptation of learning materials in population regulation. A random sample of the target group is selected. The adapter develops a measuring instrument, the cross-cultural adaptation scale (XCA): a set of statements about the cultural affinity of the object evaluated. The pretest portion of the sample tests the clarity and understandability of the rating scale to be used for evaluating the instructional materials; the pilot group analyzes the original version of the instructional materials, determines the criteria for change, and analyzes the adapted version in terms of these criteria; the control group is administered the original version of the learning materials; and the experimental group is administered the adapted version. Finally, the responses obtained from the XCA rating scale and discussions of both the experimental and control groups are studied, and group differences are evaluated according to the cultural conflicts met with each version. With these data, the preferred combination of elements is constructed.

  18. Fantastic animals as an experimental model to teach animal adaptation

    PubMed Central

    Guidetti, Roberto; Baraldi, Laura; Calzolai, Caterina; Pini, Lorenza; Veronesi, Paola; Pederzoli, Aurora

    2007-01-01

    Background Science curricula and teachers should emphasize evolution in a manner commensurate with its importance as a unifying concept in science. The concept of adaptation represents a first step to understand the results of natural selection. We set up an experimental project of alternative didactics to improve knowledge of organism adaptation. Students were involved and stimulated in learning processes by creative activities. To set adaptation in a historic frame, fossil records as evidence of past life and evolution were considered. Results The experimental project is schematized in nine phases: review of previous knowledge; lesson on fossils; lesson on fantastic animals; planning an imaginary world; creation of an imaginary animal; revision of the imaginary animals; adaptations of real animals; adaptations of fossil animals; and public exposition. A rubric to evaluate the students' performances is reported. The project involved professors and students of the University of Modena and Reggio Emilia and of the "G. Marconi" Secondary School of First Degree (Modena, Italy). Conclusion The educational objectives of the project are in line with the National Indications of the Italian Ministry of Public Instruction: knowledge of the characteristics of living beings, the meanings of the term "adaptation", the meaning of fossils, the definition of ecosystem, and the particularity of the different biomes. At the end of the project, students will be able to grasp particular adaptations of real organisms and to deduce information about the environment in which the organism evolved. This project allows students to review previous knowledge and to form their personalities. PMID:17767729

  19. The cerebellum as an adaptive filter: a general model?

    PubMed

    Dean, Paul; Porrill, John

    2010-01-01

    Many functional models of the cerebellar microcircuit are based on the adaptive-filter model first proposed by Fujita. The adaptive filter has powerful signal processing capacities that are suitable for both sensory and motor tasks, and uses a simple and intuitively plausible decorrelation learning rule that offers an account of the evolution of the inferior olive. Moreover, in those cases where the input-output transformations of cerebellar microzones have been sufficiently characterised, they appear to conform to those predicted by the adaptive-filter model. However, these cases are few in number, and comparing the model with the internal operations of the microcircuit itself has not proved straightforward. Whereas some microcircuit features appear compatible with adaptive-filter function, others, such as simple granular-layer processing or Purkinje cell bistability, do not. How far these seeming incompatibilities indicate additional computational roles for the cerebellar microcircuit remains to be determined.

  20. Modeling adaptation of carbon use efficiency in microbial communities.

    PubMed

    Allison, Steven D

    2014-01-01

    In new microbial-biogeochemical models, microbial carbon use efficiency (CUE) is often assumed to decline with increasing temperature. Under this assumption, soil carbon losses under warming are small because microbial biomass declines. Yet there is also empirical evidence that CUE may adapt (i.e., become less sensitive) to warming, thereby mitigating negative effects on microbial biomass. To analyze potential mechanisms of CUE adaptation, I used two theoretical models to implement a tradeoff between microbial uptake rate and CUE. This rate-yield tradeoff is based on thermodynamic principles and suggests that microbes with greater investment in resource acquisition should have lower CUE. Microbial communities or individuals could adapt to warming by reducing investment in enzymes and uptake machinery. Consistent with this idea, a simple analytical model predicted that adaptation can offset 50% of the warming-induced decline in CUE. To assess the ecosystem implications of the rate-yield tradeoff, I quantified CUE adaptation in a spatially-structured simulation model with 100 microbial taxa and 12 soil carbon substrates. This model predicted much lower CUE adaptation, likely due to additional physiological and ecological constraints on microbes. In particular, specific resource acquisition traits are needed to maintain stoichiometric balance, and taxa with high CUE and low enzyme investment rely on low-yield, high-enzyme neighbors to catalyze substrate degradation. In contrast to published microbial models, simulations with greater CUE adaptation also showed greater carbon storage under warming. This pattern occurred because microbial communities with stronger CUE adaptation produced fewer degradative enzymes, despite increases in biomass. Thus, the rate-yield tradeoff prevents CUE adaptation from driving ecosystem carbon loss under climate warming.
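
    The 50% offset predicted by the simple analytical model can be illustrated with a toy linear CUE-temperature relation; all parameter values below are hypothetical and not taken from the paper.

```python
# Toy linear CUE-temperature model (all parameter values are hypothetical
# illustrations, not taken from the paper).

def cue(temp_c, cue_ref=0.31, slope=-0.016, t_ref=20.0, adaptation=0.0):
    """Carbon use efficiency at temperature temp_c (degrees C).

    adaptation in [0, 1]: 0 = CUE fully sensitive to warming,
    1 = fully adapted (no decline with warming); 0.5 offsets half of
    the warming-induced decline.
    """
    return cue_ref + (1.0 - adaptation) * slope * (temp_c - t_ref)

print(round(cue(25.0), 4))                   # 5 C warming, no adaptation
print(round(cue(25.0, adaptation=0.5), 4))   # half the decline is offset
```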

  1. Stability of Wilkinson's linear model of prism adaptation over time for various targets.

    PubMed

    Wallace, B

    1977-01-01

    Prism adaptation as measured by negative aftereffects (NA), proprioceptive shifts (PS), and visual shifts (VS) was assessed as a function of amount of exposure time and target specificity (whether the exposure and test-target backgrounds were the same or different) to determine the validity of Wilkinson's linear model (NA = PS + VS). With few exceptions, the model was found to hold well up to 40 min of prism viewing regardless of type of exposure background. In addition, target specificity affected the magnitude of the NA component of adaptation but not the PS and VS components.

  2. Dynamics modeling and adaptive control of flexible manipulators

    NASA Technical Reports Server (NTRS)

    Sasiadek, J. Z.

    1991-01-01

    An application of Model Reference Adaptive Control (MRAC) to the position and force control of flexible manipulators and robots is presented. A single-link flexible manipulator is analyzed. The problem is to develop an accurate mathematical model of a flexible robot. The objective is to show that the adaptive control works better than 'conventional' systems and is suitable for flexible structure control.

  3. Computational quantum chemistry and adaptive ligand modeling in mechanistic QSAR.

    PubMed

    De Benedetti, Pier G; Fanelli, Francesca

    2010-10-01

    Drugs are adaptive molecules. They realize this peculiarity by generating different ensembles of prototropic forms and conformers that depend on the environment. Among the impressive array of available computational drug discovery technologies, quantitative structure-activity relationship approaches that rely on computational quantum chemistry descriptors are the most appropriate to model adaptive drugs. Indeed, computational quantum chemistry descriptors are able to account for the variation of the intramolecular interactions of the training compounds, which reflect their adaptive intermolecular interaction propensities. This enables the development of causative, interpretive and reasonably predictive quantitative structure-activity relationship models, and, hence, sound chemical information finalized to drug design and discovery.

  4. Post-Revolution Egypt: The Roy Adaptation Model in Community.

    PubMed

    Buckner, Britton S; Buckner, Ellen B

    2015-10-01

    The 2011 Arab Spring swept across the Middle East creating profound instability in Egypt, a country already challenged with poverty and internal pressures. To respond to this crisis, Catholic Relief Services led a community-based program called "Egypt Works" that included community improvement projects and psychosocial support. Following implementation, program outcomes were analyzed using the middle-range theory of adaptation to situational life events, based on the Roy adaptation model. The comprehensive, community-based approach facilitated adaptation, serving as a model for applying theory in post-crisis environments.

  5. A conforming to interface structured adaptive mesh refinement technique for modeling fracture problems

    NASA Astrophysics Data System (ADS)

    Soghrati, Soheil; Xiao, Fei; Nagarajan, Anand

    2016-12-01

    A Conforming to Interface Structured Adaptive Mesh Refinement (CISAMR) technique is introduced for the automated transformation of a structured grid into a conforming mesh with appropriate element aspect ratios. The CISAMR algorithm is composed of three main phases: (i) Structured Adaptive Mesh Refinement (SAMR) of the background grid; (ii) r-adaptivity of the nodes of elements cut by the crack; (iii) sub-triangulation of the elements deformed during the r-adaptivity process and those with hanging nodes generated during the SAMR process. The required considerations for the treatment of crack tips and branching cracks are also discussed in this manuscript. Regardless of the complexity of the problem geometry and without using iterative smoothing or optimization techniques, CISAMR ensures that aspect ratios of conforming elements are lower than three. Multiple numerical examples are presented to demonstrate the application of CISAMR for modeling linear elastic fracture problems with intricate morphologies.

  7. Adaptive Input Reconstruction with Application to Model Refinement, State Estimation, and Adaptive Control

    NASA Astrophysics Data System (ADS)

    D'Amato, Anthony M.

    Input reconstruction is the process of using the output of a system to estimate its input. In some cases, input reconstruction can be accomplished by determining the output of the inverse of a model of the system whose input is the output of the original system. Inversion, however, requires an exact and fully known analytical model, and is limited by instabilities arising from nonminimum-phase zeros. The main contribution of this work is a novel technique for input reconstruction that does not require model inversion. This technique is based on a retrospective cost, which requires a limited number of Markov parameters. Retrospective cost input reconstruction (RCIR) does not require knowledge of nonminimum-phase zero locations or an analytical model of the system. RCIR provides a technique that can be used for model refinement, state estimation, and adaptive control. In the model refinement application, data are used to refine or improve a model of a system. It is assumed that the difference between the model output and the data is due to an unmodeled subsystem whose interconnection with the modeled system is inaccessible, that is, the interconnection signals cannot be measured and thus standard system identification techniques cannot be used. Using input reconstruction, these inaccessible signals can be estimated, and the inaccessible subsystem can be fitted. We demonstrate input reconstruction in a model refinement framework by identifying unknown physics in a space weather model and by estimating an unknown film growth in a lithium ion battery. The same technique can be used to obtain estimates of states that cannot be directly measured. Adaptive control can be formulated as a model-refinement problem, where the unknown subsystem is the idealized controller that minimizes a measured performance variable. Minimal modeling input reconstruction for adaptive control is useful for applications where modeling information may be difficult to obtain. We demonstrate

  8. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
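
    The combination of a predictor-corrector time marching algorithm with accuracy-controlled adaptive time-stepping can be sketched on a scalar ODE; this minimal scheme (not the paper's implementation) uses the predictor-corrector gap as the local error indicator.

```python
import numpy as np

def integrate_adaptive(f, y0, t0, t1, tol=1e-5, h=0.1):
    """Predictor-corrector time marching with adaptive step control.

    Euler predictor, trapezoidal corrector; the predictor-corrector gap
    serves as the error indicator: the step is halved when the gap
    exceeds tol and doubled when the gap is comfortably below it.
    """
    t, y = t0, y0
    while t < t1:
        h = min(h, t1 - t)
        yp = y + h * f(t, y)                          # predictor
        yc = y + 0.5 * h * (f(t, y) + f(t + h, yp))   # corrector
        err = abs(yc - yp)                            # error indicator
        if err > tol and h > 1e-12:
            h *= 0.5                                  # reject and refine
            continue
        t, y = t + h, yc                              # accept the step
        if err < 0.25 * tol:
            h *= 2.0                                  # coarsen
    return y

# Decay problem y' = -y, y(0) = 1; exact solution is exp(-t).
y1 = integrate_adaptive(lambda t, y: -y, 1.0, 0.0, 1.0)
print(y1, np.exp(-1.0))
```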

  9. Modeling Students' Memory for Application in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  10. Modeling-Error-Driven Performance-Seeking Direct Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V.; Kaneshige, John; Krishnakumar, Kalmanje; Burken, John

    2008-01-01

    This paper presents a stable discrete-time adaptive law that targets modeling errors in a direct adaptive control framework. The update law was developed in our previous work for the adaptive disturbance rejection application. The approach is based on the philosophy that without modeling errors, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work towards getting this performance even in the face of modeling uncertainties/errors. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to the typical Lyapunov-based adaptive approaches that guarantee only stability, the current approach investigates conditions for stability as well as performance. A high-fidelity F-15 model is used to illustrate the overall approach.

  11. Modeling hospitals' adaptive capacity during a loss of infrastructure services.

    PubMed

    Vugrin, Eric D; Verzi, Stephen J; Finley, Patrick D; Turnquist, Mark A; Griffin, Anne R; Ricci, Karen A; Wyte-Lake, Tamar

    2015-01-01

    Resilience in hospitals - their ability to withstand, adapt to, and rapidly recover from disruptive events - is vital to their role as part of national critical infrastructure. This paper presents a model to provide planning guidance to decision makers about how to make hospitals more resilient against possible disruption scenarios. This model represents a hospital's adaptive capacities that are leveraged to care for patients during loss of infrastructure services (power, water, etc.). The model is an optimization that reallocates and substitutes resources to keep patients in a high care state or allocates resources to allow evacuation if necessary. An illustrative example demonstrates how the model might be used in practice.

  12. Adaptive tracking for complex systems using reduced-order models

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.

    1990-01-01

    Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track a payload trajectory using a four-parameter model instead of the full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.

  13. Adaptive tracking for complex systems using reduced-order models

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.

    1990-01-01

    Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track the desired position trajectory of a payload using a four-parameter model instead of a full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.

  14. Optical and control modeling for adaptive beam-combining experiments

    SciTech Connect

    Gruetzner, J.K.; Tucker, S.D.; Neal, D.R.; Bentley, A.E.; Simmons-Potter, K.

    1995-08-01

    The development of modeling algorithms for adaptive optics systems is important for evaluating both performance and design parameters prior to system construction. Two of the most critical subsystems to be modeled are the binary optic design and the adaptive control system. Since these two are intimately related, it is beneficial to model them simultaneously. Optic modeling techniques have some significant limitations. Diffraction effects directly limit the utility of geometrical ray-tracing models, and transform techniques such as the fast Fourier transform can be both cumbersome and memory intensive. The authors have developed a hybrid system incorporating elements of both ray-tracing and Fourier transform techniques. In this paper they present an analytical model of wavefront propagation through a binary optic lens system developed and implemented at Sandia. This model is unique in that it solves the transfer function for each portion of a diffractive optic analytically. The overall performance is obtained by a linear superposition of each result. The model has been successfully used in the design of a wide range of binary optics, including an adaptive optic for a beam combining system consisting of an array of rectangular mirrors, each controllable in tip/tilt and piston. Wavefront sensing and the control models for a beam combining system have been integrated and used to predict overall system performance. Applicability of the model for design purposes is demonstrated with several lens designs through a comparison of model predictions with actual adaptive optics results.

  15. Object detection in natural backgrounds predicted by discrimination performance and models

    NASA Technical Reports Server (NTRS)

    Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
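
    The Minkowski distance metric shared by all three models is easy to state in code; the sketch below uses a synthetic image pair rather than the study's natural-image stimuli.

```python
import numpy as np

def minkowski_pool(img_a, img_b, beta=4.0):
    """Minkowski summation (generalized vector magnitude) of absolute
    differences between two images: (sum |a - b|**beta) ** (1/beta).

    beta = 2 is an RMS-like energy metric, beta = 4 weights the largest
    differences more heavily, and beta = inf reduces to the maximum
    absolute difference.
    """
    d = np.abs(np.asarray(img_a, float) - np.asarray(img_b, float)).ravel()
    if np.isinf(beta):
        return float(d.max())
    return float((d ** beta).sum() ** (1.0 / beta))

# Synthetic background with an embedded additive "object".
rng = np.random.default_rng(2)
bg = rng.random((32, 32))
obj = bg.copy()
obj[12:20, 12:20] += 0.2           # 8x8 patch raised by 0.2
for beta in (2.0, 4.0, np.inf):
    print(beta, minkowski_pool(bg, obj, beta))
```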

  16. The reduced order model problem in distributed parameter systems adaptive identification and control. [adaptive control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.

    1981-01-01

    The reduced order model problem in distributed parameter systems adaptive identification and control is investigated. A comprehensive examination of real-time centralized adaptive control options for flexible spacecraft is provided.

  17. Dynamic model of heat inactivation kinetics for bacterial adaptation.

    PubMed

    Corradini, Maria G; Peleg, Micha

    2009-04-01

    The Weibullian-log logistic (WeLL) inactivation model was modified to account for heat adaptation by introducing a logistic adaptation factor, which rendered its "rate parameter" a function of both temperature and heating rate. The resulting model is consistent with the observation that adaptation is primarily noticeable in slow heat processes in which the cells are exposed to sublethal temperatures for a sufficiently long time. Dynamic survival patterns generated with the proposed model were in general agreement with those of Escherichia coli and Listeria monocytogenes as reported in the literature. Although the modified model's rate equation has a cumbersome appearance, especially for thermal processes having a variable heating rate, it can be solved numerically with commercial mathematical software. The dynamic model has five survival/adaptation parameters whose determination will require a large experimental database. However, with assumed or estimated parameter values, the model can simulate survival patterns of adapting pathogens in cooked foods that can be used in risk assessment and the establishment of safe preparation conditions.
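
    A sketch of the WeLL survival curve with a simplified adaptation term: the constant factor below stands in for the paper's logistic, heating-history-dependent one, and all parameter values are hypothetical.

```python
import numpy as np

def log10_survival(t_min, temp_c, k=0.3, tc=60.0, n=1.5, adaptation=1.0):
    """Weibullian-log-logistic (WeLL) survival model:

        log10 S(t) = -b(T) * t**n,   b(T) = ln(1 + exp(k * (T - Tc)))

    A constant 'adaptation' factor below 1 scales b(T) down, standing in
    for the paper's logistic, heating-history-dependent factor. All
    parameter values here are hypothetical illustrations.
    """
    b = adaptation * np.log1p(np.exp(k * (temp_c - tc)))
    return -b * t_min ** n

# 10 min at 65 C: an adapted population shows a smaller log reduction.
print(round(log10_survival(10.0, 65.0), 2))
print(round(log10_survival(10.0, 65.0, adaptation=0.6), 2))
```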

  18. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  19. A model of the gamma-ray background on the BATSE experiment.

    NASA Astrophysics Data System (ADS)

    Rubin, B. C.; Lei, F.; Fishman, G. J.; Finger, M. H.; Harmon, B. A.; Kouveliotou, C.; Paciesas, W. S.; Pendleton, G. N.; Wilson, R. B.; Zhang, S. N.

    1996-12-01

    The BATSE experiment on the Compton Gamma-Ray Observatory is a nearly uninterrupted all-sky monitor in the hard X-ray/gamma-ray energy range. Count rate data continuously transmitted to the ground from Low Earth Orbit (altitude ~450 km) is dominated, in the 20-300 keV energy range, by diffuse cosmic background modulated by blocking effects of the Earth. Other background sources include atmospheric gamma-rays and the decay of radionuclides created in cosmic ray and radiation belt trapped particle interactions with the detector. Numerous discrete cosmic sources are also present in these data. In this paper we describe a semi-empirical background model which has been used to reduce the effect of dominant background sources. The use of this model can increase the sensitivity of the experiment to sources observed with the Earth occultation technique; to long period pulsed sources; to analysis of flickering noise; and to transient events.

  20. Adaptive Modeling and Real-Time Simulation

    DTIC Science & Technology

    1984-01-01

    Artificial Intelligence, Vol. 13, pp. 27-39 (1980). Describes circumscription, which is the assumption that everything that is known to have a particular property [...]. Keywords: Artificial Intelligence; Truth Maintenance; Planning; Resolution; Modeling; World Models. The approach represents a marriage of (1) the procedural-network planning technology developed in artificial intelligence with (2) the PERT/CPM technology developed in [...]

  1. Nutrient-dependent/pheromone-controlled adaptive evolution: a model

    PubMed Central

    Kohl, James Vaughn

    2013-01-01

    Background The prenatal migration of gonadotropin-releasing hormone (GnRH) neurosecretory neurons allows nutrients and human pheromones to alter GnRH pulsatility, which modulates the concurrent maturation of the neuroendocrine, reproductive, and central nervous systems, thus influencing the development of ingestive behavior, reproductive sexual behavior, and other behaviors. Methods This model details how chemical ecology drives adaptive evolution via: (1) ecological niche construction, (2) social niche construction, (3) neurogenic niche construction, and (4) socio-cognitive niche construction. This model exemplifies the epigenetic effects of olfactory/pheromonal conditioning, which alters genetically predisposed, nutrient-dependent, hormone-driven mammalian behavior and choices for pheromones that control reproduction via their effects on luteinizing hormone (LH) and systems biology. Results Nutrients are metabolized to pheromones that condition behavior in the same way that food odors condition behavior associated with food preferences. The epigenetic effects of olfactory/pheromonal input calibrate and standardize molecular mechanisms for genetically predisposed receptor-mediated changes in intracellular signaling and stochastic gene expression in GnRH neurosecretory neurons of brain tissue. For example, glucose and pheromones alter the hypothalamic secretion of GnRH and LH. A form of GnRH associated with sexual orientation in yeasts links control of the feedback loops and developmental processes required for nutrient acquisition, movement, reproduction, and the diversification of species from microbes to man. Conclusion An environmental drive evolved from that of nutrient ingestion in unicellular organisms to that of pheromone-controlled socialization in insects. 
In mammals, food odors and pheromones cause changes in hormones such as LH, which has developmental effects on pheromone-controlled sexual behavior in nutrient-dependent, reproductively fit individuals.

  2. Adaptive gravity modelling from GOCE gradient data over the Himalaya

    NASA Astrophysics Data System (ADS)

    Hayn, M.; Holschneider, M.; Panet, I.

    2013-12-01

    GOCE data deliver information about static gravity anomalies with a spatial resolution of about 90 km. The derived gravity models are used to investigate the Earth's interior structure. Most models for the GOCE data are limited by two facts: (1) they have a uniform resolution, and (2) this resolution is defined with respect to a sphere. One can improve the stability and decrease the numerical cost by adapting the resolution to the local scales of variability of the modelled field. Furthermore, respecting the ellipsoidal shape of the Earth, and even its topography, allows the resolution of the models to be increased. We thus propose a new adaptive modelling approach. In this approach, the modelled gravity potential is represented as a superposition of potentials of poles located closely below the Earth's surface. The distribution of the poles below the surface determines the local scale of the model. It is adapted to prior knowledge of the local roughness of the gravity field. The prior knowledge is built by estimating characteristic local scales from the wavelet transform of a prior gravity model, EGM2008. After positioning the poles by means of an iterative approach, we build a gravity model by maximising the Bayesian posterior distribution of the model. This Bayesian approach also allows estimating the model's uncertainty. We apply this modelling approach to an area over the Himalaya and show our preliminary results. The model will be used to study the dynamic processes in this active area.

  3. Adapting a Diagnostic Problem-Solving Model to Information Retrieval.

    ERIC Educational Resources Information Center

    Syu, Inien; Lang, S. D.

    2000-01-01

    Explains how a competition-based connectionist model for diagnostic problem-solving is adapted to information retrieval. Topics include probabilistic causal networks; Bayesian networks; the neural network model; empirical studies of test collections that evaluated retrieval performance; precision results; and the use of a thesaurus to provide…

  4. Adapting the Kirkpatrick Model to Technical Communication Products and Services.

    ERIC Educational Resources Information Center

    Carliner, Saul

    1997-01-01

    Proposes a four-level model for adapting the Kirkpatrick model of training evaluation to suit technical manuals and services assessing: (1) user satisfaction; (2) user performance; (3) client performance; and (4) client satisfaction. Discusses assessing of the value of work, limitations in evaluating technical communication products, and the…

  5. Multiple Model Parameter Adaptive Control for In-Flight Simulation.

    DTIC Science & Technology

    1988-03-01

    dynamics of an aircraft. The plant is controllable by a proportional-plus-integral (PI) control law. This section describes two methods of calculating...adaptive model-following PI control law [20-24]. The control law bases its control gains upon the parameters of a linear difference equation model which

  6. Comparison of background ozone estimates over the western United States based on two separate model methodologies

    NASA Astrophysics Data System (ADS)

    Dolwick, Pat; Akhtar, Farhan; Baker, Kirk R.; Possiel, Norm; Simon, Heather; Tonnesen, Gail

    2015-05-01

    Two separate air quality model methodologies for estimating background ozone levels over the western U.S. are compared in this analysis. The first approach is a direct sensitivity modeling approach that considers the ozone levels that would remain after certain emissions are entirely removed (i.e., zero-out modeling). The second approach is based on an instrumented air quality model which tracks the formation of ozone within the simulation and assigns the source of that ozone to pre-identified categories (i.e., source apportionment modeling). This analysis focuses on a definition of background referred to as U.S. background (USB) which is designed to represent the influence of all sources other than U.S. anthropogenic emissions. Two separate modeling simulations were completed for an April-October 2007 period, both focused on isolating the influence of sources other than domestic manmade emissions. The zero-out modeling was conducted with the Community Multiscale Air Quality (CMAQ) model and the source apportionment modeling was completed with the Comprehensive Air Quality Model with Extensions (CAMx). Our analysis shows that the zero-out and source apportionment techniques provide relatively similar estimates of the magnitude of seasonal mean daily 8-h maximum U.S. background ozone at locations in the western U.S. when base case model ozone biases are considered. The largest differences between the two sets of USB estimates occur in urban areas where interactions with local NOx emissions can be important, especially when ozone levels are relatively low. Both methodologies conclude that seasonal mean daily 8-h maximum U.S. background ozone levels can be as high as 40-45 ppb over rural portions of the western U.S. Background fractions tend to decrease as modeled total ozone concentrations increase, with typical fractions of 75-100 percent on the lowest ozone days (<25 ppb) and typical fractions between 30 and 50% on days with ozone above 75 ppb. The finding that

  7. Statistical Models of Adaptive Immune populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  8. A Monte Carlo Method for Summing Modeled and Background Pollutant Concentrations.

    PubMed

    Dhammapala, Ranil; Bowman, Clint; Schulte, Jill

    2017-02-23

    Air quality analyses for permitting new pollution sources often involve modeling dispersion of pollutants using models like AERMOD. Representative background pollutant concentrations must be added to modeled concentrations to determine compliance with air quality standards. Summing 98th (or 99th) percentiles of two independent distributions that are unpaired in time overestimates air quality impacts and could needlessly burden sources with restrictive permit conditions. This problem is exacerbated when emissions and background concentrations peak during different seasons. Existing methods addressing this matter either require much input data, disregard source and background seasonality, or disregard the variability of the background by utilizing a single concentration for each season, month, hour-of-day, day-of-week, or wind direction. The availability of representative background concentrations is another limitation. Here we report on work to improve permitting analyses, with the development of (1) daily gridded background concentrations interpolated from 12-km CMAQ forecasts and monitored data (a two-step interpolation reproduced measured background concentrations to within 6.2%); and (2) a Monte Carlo (MC) method to combine AERMOD output and background concentrations while respecting their seasonality. The MC method randomly combines, with replacement, data from the same months, and calculates 1000 estimates of the 98th or 99th percentile. The design concentration of background + new source is the median of these 1000 estimates. We found that the AERMOD design value (DV) + background DV lay at the upper end of the distribution of these thousand 99th percentiles, while measured DVs were at the lower end. Our MC method sits between these two metrics and is sufficiently protective of public health in that it somewhat overestimates design concentrations. We also calculated probabilities of exceeding specified thresholds at each receptor, better informing
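    The month-matched resampling described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction from the abstract, not the authors' implementation; the function name and array layout are assumptions.

    ```python
    import numpy as np

    def mc_design_concentration(modeled, background, months, percentile=99,
                                n_draws=1000, seed=0):
        """Monte Carlo combination of modeled and background concentrations.

        modeled, background : daily concentration arrays of equal length
        months              : month index (1-12) for each day, used so that
                              background values are only paired with modeled
                              values from the same month (seasonality)
        Returns the median of n_draws estimates of the given percentile of
        the month-matched sums.
        """
        rng = np.random.default_rng(seed)
        modeled = np.asarray(modeled, dtype=float)
        background = np.asarray(background, dtype=float)
        months = np.asarray(months)
        estimates = np.empty(n_draws)
        for k in range(n_draws):
            total = np.empty(modeled.size)
            for m in np.unique(months):
                idx = np.where(months == m)[0]
                # draw background values, with replacement, from the SAME month
                bg = rng.choice(background[idx], size=idx.size, replace=True)
                total[idx] = modeled[idx] + bg
            estimates[k] = np.percentile(total, percentile)
        return np.median(estimates)
    ```

    With anti-correlated seasonal peaks (source in summer, background in winter), this estimate falls below the naive sum of the two individual design values, as the abstract describes.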

  9. Roy's adaptation model. Interview by Pamela Clarke.

    PubMed

    Barone, Stacey H; Hanna, Debra; Senesac, Pamela M

    2011-10-01

    This column presents a dialogue with three Roy scholars. They discuss research, practice, administration, and education issues in nursing from a Roy perspective and present data on curriculum in schools across the United States in relation to the use of nursing theory and Roy's model.

  10. Modeling Developmental Transitions in Adaptive Resonance Theory

    ERIC Educational Resources Information Center

    Raijmakers, Maartje E. J.; Molenaar, Peter C. M.

    2004-01-01

    Neural networks are applied to a theoretical subject in developmental psychology: modeling developmental transitions. Two issues that are involved will be discussed: discontinuities and acquiring qualitatively new knowledge. We will argue that by the appearance of a bifurcation, a neural network can show discontinuities and may acquire…

  11. A Model of Adaptive Language Learning

    ERIC Educational Resources Information Center

    Woodrow, Lindy J.

    2006-01-01

    This study applies theorizing from educational psychology and language learning to hypothesize a model of language learning that takes into account affect, motivation, and language learning strategies. The study employed a questionnaire to assess variables of motivation, self-efficacy, anxiety, and language learning strategies. The sample…

  12. Model-adaptive hybrid dynamic control for robotic assembly tasks

    SciTech Connect

    Austin, D.J.; McCarragher, B.J.

    1999-10-01

    A new task-level adaptive controller is presented for the hybrid dynamic control of robotic assembly tasks. Using a hybrid dynamic model of the assembly task, velocity constraints are derived from which satisfactory velocity commands are obtained. Due to modeling errors and parametric uncertainties, the velocity commands may be erroneous and may result in suboptimal performance. Task-level adaptive control schemes, based on the occurrence of discrete events, are used to change the model parameters from which the velocity commands are determined. Two adaptive schemes are presented: the first is based on intuitive reasoning about the vector spaces involved whereas the second uses a search region that is reduced with each iteration. For the first adaptation law, asymptotic convergence to the correct model parameters is proven except for one case. This weakness motivated the development of the second adaptation law, for which asymptotic convergence is proven in all cases. Automated control of a peg-in-hole assembly task is given as an example, and simulations and experiments for this task are presented. These results demonstrate the success of the method and also indicate properties for rapid convergence.

  13. The Importance of Formalizing Computational Models of Face Adaptation Aftereffects

    PubMed Central

    Ross, David A.; Palmeri, Thomas J.

    2016-01-01

    Face adaptation is widely used as a means to probe the neural representations that support face recognition. While the theories that relate face adaptation to behavioral aftereffects may seem conceptually simple, our work has shown that testing computational instantiations of these theories can lead to unexpected results. Instantiating a model of face adaptation not only requires specifying how faces are represented and how adaptation shapes those representations but also specifying how decisions are made, translating hidden representational states into observed responses. Considering the high-dimensionality of face representations, the parallel activation of multiple representations, and the non-linearity of activation functions and decision mechanisms, intuitions alone are unlikely to succeed. If the goal is to understand mechanism, not simply to examine the boundaries of a behavioral phenomenon or correlate behavior with brain activity, then formal computational modeling must be a component of theory testing. To illustrate, we highlight our recent computational modeling of face adaptation aftereffects and discuss how models can be used to understand the mechanisms by which faces are recognized. PMID:27378960

  14. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data-parallel adaptive PDE computations. Unfortunately, multithreading does not always reduce program complexity, often makes code re-use difficult, and increases software complexity.

  15. DNDO Report: Predicting Solar Modulation Potentials for Modeling Cosmic Background Radiation

    SciTech Connect

    Behne, Patrick Alan

    2016-08-08

    The modeling of the detectability of special nuclear material (SNM) at ports and border crossings requires accurate knowledge of the background radiation at those locations. Background radiation originates from two main sources, cosmic and terrestrial. Cosmic background is produced by high-energy galactic cosmic rays (GCR) entering the atmosphere and inducing a cascade of particles that eventually impact the Earth's surface. The solar modulation potential represents one of the primary inputs to modeling cosmic background radiation. Usoskin et al. formally define solar modulation potential as “the mean energy loss [per unit charge] of a cosmic ray particle inside the heliosphere…” Modulation potential, a function of elevation, location, and time, shares an inverse relationship with cosmic background radiation. As a result, radiation detector thresholds require adjustment to account for differing background levels, caused partly by differing solar modulations. Failure to do so can result in higher rates of false positives and failed detection of SNM for low and high levels of solar modulation potential, respectively. This study focuses on solar modulation's time dependence, and seeks the best method to predict modulation for future dates using Python. To address the task of predicting future solar modulation, we utilize both non-linear least-squares sinusoidal curve fitting and cubic spline interpolation. This material will be published in the Transactions of the ANS Winter Meeting, November 2016.
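    The two prediction approaches named above can be illustrated with a short SciPy sketch on synthetic data. The 11-year cycle, amplitude, and noise level below are illustrative assumptions, not values from the report.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.interpolate import CubicSpline

    def sinusoid(t, mean, amp, period, phase):
        return mean + amp * np.sin(2 * np.pi * t / period + phase)

    # synthetic monthly series with an ~11-year, solar-cycle-like oscillation
    t = np.arange(0.0, 40.0, 1.0 / 12.0)                 # time in years
    rng = np.random.default_rng(0)
    phi = sinusoid(t, 650.0, 250.0, 11.0, 0.3) + rng.normal(0.0, 20.0, t.size)

    # non-linear least-squares sinusoidal fit, then extrapolate to a future date
    p0 = (phi.mean(), phi.std() * np.sqrt(2.0), 11.0, 0.0)  # rough initial guess
    popt, _ = curve_fit(sinusoid, t, phi, p0=p0)
    future = sinusoid(45.0, *popt)

    # cubic spline: exact at the data points, but unreliable beyond the data,
    # which is why the fitted sinusoid is the natural tool for future dates
    spline = CubicSpline(t, phi)
    ```

    The fitted period comes back close to the true 11 years, while the spline is better suited to interpolation within the observed record.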

  16. Modeling and Adaptive Control of Magnetostrictive Actuators

    DTIC Science & Technology

    1999-01-01

    in using these and other smart actuators is at a high frequency – for producing large displacements with mechanical rectification, producing sonar...effects; and mechanical damping. We show rigorously that the system with the initial state at the origin has a periodic orbit as its Ω limit set. [List-of-figures fragments: Root locus of example system (p. 195); 5.18 Mechanical system model at high frequencies (p. 195); F.1]

  17. A geometric view of adaptive optics control: boiling atmosphere model

    NASA Astrophysics Data System (ADS)

    Wiberg, Donald M.; Max, Claire E.; Gavel, Donald T.

    2004-10-01

    The separation principle of optimal adaptive optics control is derived, and definitions of controllability and observability are introduced. An exact finite dimensional state space representation of the control system dynamics is obtained without the need for truncation in modes such as Zernikes. The uncertainty of sensing uncontrollable modes confuses present adaptive optics controllers. This uncertainty can be modeled by a Kalman filter. Reducing this uncertainty permits increased gain, increasing the Strehl, which is done by an optimal control law derived here. A general model of the atmosphere is considered, including boiling.

  18. Audibility of time-varying signals in time-varying backgrounds: Model and data

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.; Glasberg, Brian R.

    2004-05-01

    We have described a model for calculating the partial loudness of a steady signal in the presence of a steady background sound [Moore et al., J. Audio Eng. Soc. 45, 224-240 (1997)]. We have also described a model for calculating the loudness of time-varying signals [B. R. Glasberg and B. C. J. Moore, J. Audio Eng. Soc. 50, 331-342 (2002)]. These two models have been combined to allow calculation of the partial loudness of a time-varying signal in the presence of a time-varying background. To evaluate the model, psychometric functions for the detection of a variety of time-varying signals (e.g., telephone ring tones) have been measured in a variety of background sounds sampled from everyday listening situations, using a two-alternative forced-choice task. The different signals and backgrounds were interleaved, to create stimulus uncertainty, as would occur in everyday life. The data are used to relate the detectability index, d', to the calculated partial loudness. In this way, the model can be used to predict the detectability of any signal, based on its calculated partial loudness. [Work supported by MRC (UK) and by Nokia.]

  19. Background Modelling in Very-High-Energy Gamma-Ray Astronomy

    SciTech Connect

    Berge, David; Funk, S.; Hinton, J.; /Heidelberg, Max Planck Inst. /Heidelberg Observ. /Leeds U.

    2006-11-07

    Ground-based Cherenkov telescope systems measure astrophysical gamma-ray emission against a background of cosmic-ray induced air showers. The subtraction of this background is a major challenge for the extraction of spectra and morphology of gamma-ray sources. The unprecedented sensitivity of the new generation of ground-based very-high-energy gamma-ray experiments such as H.E.S.S. has led to the discovery of many previously unknown extended sources. The analysis of such sources requires a range of different background modeling techniques. Here we describe some of the techniques that have been applied to data from the H.E.S.S. instrument and compare their performance. Each background model is introduced and discussed in terms of suitability for image generation or spectral analysis, and possible caveats are mentioned. We show that there is no single multi-purpose model; different models are appropriate for different tasks. To keep systematic uncertainties under control it is important to apply several models to the same data set and compare the results.

  20. The adaptive cruise control vehicles in the cellular automata model

    NASA Astrophysics Data System (ADS)

    Jiang, Rui; Wu, Qing-Song

    2006-11-01

    This Letter presents a cellular automata model in which adaptive cruise control (ACC) vehicles are modelled. In this model, the constant time headway policy is adopted. The fundamental diagram is presented, and the simulation results are in good agreement with the analytical ones. The mixture of ACC vehicles with manually driven vehicles is investigated. It is shown that with the introduction of ACC vehicles, jams can be suppressed.
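    A constant-time-headway rule inside a ring-road cellular automaton can be sketched as below. This is a deliberately simplified, deterministic illustration (no random braking, all vehicles ACC-equipped), not the Letter's exact update rule; cell size, vmax, and T are assumed values.

    ```python
    import numpy as np

    def step(pos, vel, L, vmax=5, T=1.0):
        """One parallel update of a ring road of L cells where every vehicle
        obeys a constant-time-headway rule: v <= gap / T.
        pos must be given in cyclic (ring) order; no passing can occur."""
        gaps = (np.roll(pos, -1) - pos - 1) % L        # empty cells to leader
        vel = np.minimum(vel + 1, vmax)                # accelerate one cell/step
        vel = np.minimum(vel, (gaps / T).astype(int))  # headway constraint
        pos = (pos + vel) % L
        return pos, vel

    # ring of 100 cells, 20 vehicles started uniformly spaced and at rest
    L, N = 100, 20
    pos = np.arange(0, L, L // N)
    vel = np.zeros(N, dtype=int)
    for _ in range(100):
        pos, vel = step(pos, vel, L)
    flow = vel.mean() * N / L          # vehicles per cell per time step
    ```

    With uniform spacing the system relaxes to a collision-free steady state whose speed is capped by the headway constraint, which is the mechanism behind the smooth branch of the fundamental diagram.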

  1. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  2. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  3. Adaptation of the microdosimetric kinetic model to hypoxia

    NASA Astrophysics Data System (ADS)

    Bopp, C.; Hirayama, R.; Inaniwa, T.; Kitagawa, A.; Matsufuji, N.; Noda, K.

    2016-11-01

    Ion beams present a potential advantage in terms of treatment of lesions with hypoxic regions. In order to use this potential, it is important to accurately model the cell survival of oxic as well as hypoxic cells. In this work, an adaptation of the microdosimetric kinetic (MK) model making it possible to account for cell hypoxia is presented. The adaptation relies on the modification of the damage quantity (double strand breaks and more complex lesions) due to the radiation. Model parameters such as domain size and nucleus size are then adapted through a fitting procedure. We applied this approach to two cell lines, HSG and V79, for helium, carbon and neon ions. A similar behaviour of the parameters was found for the two cell lines, namely a reduction of the domain size and an increase in the sensitive nuclear volume of hypoxic cells compared to those of oxic cells. In terms of oxygen enhancement ratio (OER), the behaviour of the experimental data can be reproduced, including the dependence on particle type at the same linear energy transfer (LET). Errors in the cell survival prediction are of the same order of magnitude as for the original MK model. Our adaptation makes it possible to account for hypoxia without modelling the OER as a function of the LET of the particles, instead directly accounting for hypoxic cell survival data.

  4. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    NASA Astrophysics Data System (ADS)

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick

    2015-04-01

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.

  5. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    SciTech Connect

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.

  6. Achieving runtime adaptability through automated model evolution and variant selection

    NASA Astrophysics Data System (ADS)

    Mosincat, Adina; Binder, Walter; Jazayeri, Mehdi

    2014-01-01

    Dynamically adaptive systems propose adaptation by means of variants that are specified in the system model at design time and allow for a fixed set of different runtime configurations. However, in a dynamic environment, unanticipated changes may result in the inability of the system to meet its quality requirements. To allow the system to react to these changes, this article proposes a solution for automatically evolving the system model by integrating new variants and periodically validating the existing ones based on updated quality parameters. To illustrate this approach, the article presents a BPEL-based framework using a service composition model to represent the functional requirements of the system. The framework estimates quality of service (QoS) values based on information provided by a monitoring mechanism, ensuring that changes in QoS are reflected in the system model. The article shows how the evolved model can be used at runtime to increase the system's autonomic capabilities and delivered QoS.

  7. Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.

  8. Extracting harmonic signal from a chaotic background with local linear model

    NASA Astrophysics Data System (ADS)

    Li, Chenlong; Su, Liyun

    2017-02-01

    In this paper, the problems of blind detection and estimation of a harmonic signal in a strong chaotic background are analyzed, and new methods using a local linear (LL) model are put forward. The LL model has been exhaustively researched and successfully applied to fitting and forecasting chaotic signals in many fields. We enlarge its modeling capacity substantially. Firstly, we predict the short-term chaotic signal and obtain the fitting error based on the LL model. We then detect the frequencies in the fitting error by periodogram; a property of the fitting error that has not been addressed before is proposed, and this property ensures that the detected frequencies are similar to those of the harmonic signal. Secondly, we establish a two-layer LL model to estimate the determinate harmonic signal in a strong chaotic background. To estimate this simply and effectively, we develop an efficient backfitting algorithm to select and optimize the parameters that are hard to search exhaustively. In the method, based on the sensitivity of chaotic motion to initial values, the minimum fitting error criterion is used as the objective function to estimate the parameters of the two-layer LL model. Simulation shows that the two-layer LL model and its estimation technique have appreciable flexibility to model the determinate harmonic signal in different chaotic backgrounds (Lorenz, Henon and Mackey-Glass (M-G) equations). Specifically, the harmonic signal can be extracted well at low SNR, and the backfitting algorithm satisfies the condition of convergence within 3-5 iterations.
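    The first step (fit a local linear predictor, then take a periodogram of the fitting error) can be sketched as follows with NumPy, using a Henon-map background and a weak sinusoid. The embedding dimension, neighbour count, and signal amplitude are illustrative choices, not the paper's settings.

    ```python
    import numpy as np

    def henon(n, a=1.4, b=0.3):
        """Generate a Henon-map time series to act as the chaotic background."""
        x = np.empty(n); y = np.empty(n)
        x[0] = y[0] = 0.1
        for i in range(n - 1):
            x[i + 1] = 1.0 - a * x[i] ** 2 + y[i]
            y[i + 1] = b * x[i]
        return x

    n, f0 = 2000, 0.07                 # series length, harmonic frequency
    s = henon(n) + 0.05 * np.sin(2 * np.pi * f0 * np.arange(n))

    # step 1: one-step local linear prediction in a 2-d delay embedding
    m, k = 2, 20                       # embedding dimension, neighbours
    Z = np.column_stack([s[i:n - m + i] for i in range(m)])   # delay vectors
    y = s[m:]                          # one-step-ahead targets
    resid = np.empty(y.size)
    for i in range(y.size):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                  # exclude the point itself
        nb = np.argsort(d)[:k]         # k nearest neighbours
        A = np.column_stack([Z[nb], np.ones(k)])
        coef, *_ = np.linalg.lstsq(A, y[nb], rcond=None)      # local linear fit
        resid[i] = y[i] - np.append(Z[i], 1.0) @ coef         # fitting error

    # step 2: a periodogram of the fitting error exposes the buried harmonic
    spec = np.abs(np.fft.rfft(resid)) ** 2
    freqs = np.fft.rfftfreq(resid.size)
    peak = freqs[np.argmax(spec[1:]) + 1]
    ```

    The local model absorbs the predictable chaotic dynamics, so the fitting error is small and its spectrum is dominated by the harmonic the chaos was hiding.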

  9. The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization

    PubMed Central

    van der Steen, M. C. (Marieke); Keller, Peter E.

    2013-01-01

    A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM. PMID:23772211
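The adaptation half of models like ADAM is commonly built on first-order phase correction of tap timing. A minimal sketch of that ingredient alone (not ADAM itself; the correction gain and timekeeper noise level are assumed values) is:

```python
import numpy as np

rng = np.random.default_rng(1)
ioi = 500.0          # metronome inter-onset interval (ms)
alpha = 0.5          # phase-correction gain (assumed value)
n = 200

onsets = ioi * np.arange(n)          # isochronous pacing sequence
taps = np.empty(n)
taps[0] = 30.0                       # start with a 30 ms asynchrony
for i in range(n - 1):
    e = taps[i] - onsets[i]          # current asynchrony (tap minus onset)
    # next tap: one interval later, minus a correction proportional to the
    # asynchrony, plus timekeeper noise
    taps[i + 1] = taps[i] + ioi - alpha * e + rng.normal(0.0, 5.0)

asyn = taps - onsets
print(f"mean asynchrony, last 50 taps: {asyn[-50:].mean():.1f} ms")
```

With 0 < alpha < 2 the asynchrony is pulled back toward zero; anticipation mechanisms, which this sketch omits, become necessary once the pacing sequence contains tempo changes.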

  10. Adjoint Methods for Guiding Adaptive Mesh Refinement in Tsunami Modeling

    NASA Astrophysics Data System (ADS)

    Davis, B. N.; LeVeque, R. J.

    2016-12-01

    One difficulty in developing numerical methods for tsunami modeling is the fact that solutions contain time-varying regions where much higher resolution is required than elsewhere in the domain, particularly when tracking a tsunami propagating across the ocean. The open source GeoClaw software deals with this issue by using block-structured adaptive mesh refinement to selectively refine around propagating waves. For problems where only a target area of the total solution is of interest (e.g., one coastal community), a method that allows identifying and refining the grid only in regions that influence this target area would significantly reduce the computational cost of finding a solution. In this work, we show that solving the time-dependent adjoint equation and using a suitable inner product with the forward solution allows more precise refinement of the relevant waves. We present the adjoint methodology first in one space dimension for illustration and in a broad context since it could also be used in other adaptive software, and potentially for other tsunami applications beyond adaptive refinement. We then show how this adjoint method has been integrated into the adaptive mesh refinement strategy of the open source GeoClaw software and present tsunami modeling results showing that the accuracy of the solution is maintained and the computational time required is significantly reduced through the integration of the adjoint method into adaptive mesh refinement.
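A toy illustration of the adjoint-flagging idea, under strongly simplifying assumptions (1D constant-speed advection, Gaussian pulses, a hypothetical target location): cells are flagged for refinement where the inner product of the forward and adjoint solutions is significant, i.e. where a wave present now will actually influence the target area later.

```python
import numpy as np

# Forward solution q: a right-going pulse released at x=10 at t=0.
# Adjoint solution q_hat: propagated backward in time from the target
# location, so at time t it marks where a wave must be NOW in order to
# reach the target at t_final.
x = np.linspace(0.0, 100.0, 1001)

def pulse(center, width=3.0):
    return np.exp(-((x - center) / width) ** 2)

u, t, t_final, target = 1.0, 20.0, 80.0, 90.0   # assumed speed and setup
q = pulse(10.0 + u * t)                         # forward wave at time t
q_hat = pulse(target - u * (t_final - t))       # adjoint solution at time t

inner = np.abs(q * q_hat)
flag = inner > 1e-3 * inner.max()               # refinement flags
print(f"{int(flag.sum())} of {x.size} cells flagged")
```

Only cells where both factors are non-negligible get refined; waves that never reach the target leave the product, and hence the flags, near zero.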

  11. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  12. A Model of Internal Communication in Adaptive Communication Systems.

    ERIC Educational Resources Information Center

    Williams, M. Lee

    A study identified and categorized different types of internal communication systems and developed an applied model of internal communication in adaptive organizational systems. Twenty-one large organizations were selected for their varied missions and diverse approaches to managing internal communication. Individual face-to-face or telephone…

  13. Adapting the Transtheoretical Model of Change to the Bereavement Process

    ERIC Educational Resources Information Center

    Calderwood, Kimberly A.

    2011-01-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals…

  14. Communicating to Farmers about Skin Cancer: The Behavior Adaptation Model.

    ERIC Educational Resources Information Center

    Parrott, Roxanne; Monahan, Jennifer; Ainsworth, Stuart; Steiner, Carol

    1998-01-01

    States health campaign messages designed to encourage behavior adaptation have greater likelihood of success than campaigns promoting avoidance of at-risk behaviors that cannot be avoided. Tests a model of health risk behavior using four different behaviors in a communication campaign aimed at reducing farmers' risk for skin cancer--questions…

  15. Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Tan, Guozhen; Lv, Hongtao; Wang, Zhen; Meng, Jun; Hao, Jianye; Ren, Fenghui

    2016-06-01

    Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed, in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, which is considered to be the most successful opinion in the neighbourhood, can be generated based on the principle of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people’s adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and some critical factors such as size of opinion space and network topology can have significant influences on opinion dynamics.
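A much-simplified sketch of this kind of dynamics, with local-majority imitation standing in for the paper's evolutionary-game-based guiding opinion, and with all parameters (network, opinion count, adoption probability) assumed rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_opinions, rounds = 200, 4, 60

# Ring network: each agent observes 4 neighbours (2 on each side).
ops = rng.integers(0, n_opinions, n_agents)
neigh = np.array([[(i - 2) % n_agents, (i - 1) % n_agents,
                   (i + 1) % n_agents, (i + 2) % n_agents]
                  for i in range(n_agents)])

for _ in range(rounds):
    for i in rng.permutation(n_agents):
        votes = np.bincount(ops[neigh[i]], minlength=n_opinions)
        guiding = int(np.argmax(votes))     # most common neighbourhood opinion
        if ops[i] != guiding and rng.random() < 0.8:
            ops[i] = guiding                # adapt toward the local norm

majority_share = np.bincount(ops, minlength=n_opinions).max() / n_agents
print(f"largest opinion share after {rounds} rounds: {majority_share:.2f}")
```

Repeated local adaptation coarsens the opinion pattern into large consensus domains; changing the neighbourhood size or opinion count changes how quickly (and whether) a single global norm emerges.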

  16. Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies

    PubMed Central

    Yu, Chao; Tan, Guozhen; Lv, Hongtao; Wang, Zhen; Meng, Jun; Hao, Jianye; Ren, Fenghui

    2016-01-01

    Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed, in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, which is considered to be the most successful opinion in the neighbourhood, can be generated based on the principle of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people’s adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and some critical factors such as size of opinion space and network topology can have significant influences on opinion dynamics. PMID:27282089

  17. Modeling of beam loss in Tevatron and backgrounds in the BTeV detector

    SciTech Connect

    Alexandr I. Drozhdin; Nikolai V. Mokhov

    2004-07-07

    Detailed STRUCT simulations are performed on beam loss rates in the vicinity of the BTeV detector in the Tevatron CO interaction region due to beam-gas nuclear elastic interactions and out-scattering from the collimation system. Corresponding showers induced in the machine components and background rates in BTeV are modeled with the MARS14 code. It is shown that the combination of a steel collimator and concrete shielding wall located in front of the detector can reduce the accelerator-related background rates in the detector by an order of magnitude.

  18. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACMs) have been extensively applied to image segmentation; conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistic features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, an adaptive weight coefficient is used to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistic features, with a signed pressure function based on global information. Experimental results demonstrate that the proposed method makes up for the inadequacies of the original methods and achieves desirable results in segmenting infrared images.

  19. On fractional order composite model reference adaptive control

    NASA Astrophysics Data System (ADS)

    Wei, Yiheng; Sun, Zhenyuan; Hu, Yangsheng; Wang, Yong

    2016-08-01

    This paper presents a novel composite model reference adaptive control approach for a class of fractional order linear systems with unknown constant parameters. The method is extended from model reference adaptive control. The parameter estimation error of our method depends on both the tracking error and the prediction error, whereas the existing method depends only on the tracking error; this gives our method better transient performance in the sense of generating smooth system output. With the aid of the continuous frequency distributed model, stability of the proposed approach is established in the Lyapunov sense. Furthermore, the convergence property of the model parameter estimation is presented, on the premise that the closed-loop control system is stable. Finally, numerical simulation examples are given to demonstrate the effectiveness of the proposed schemes.
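A minimal integer-order, scalar sketch of the composite idea, in which the parameter update is driven by both the tracking error and a predictor's prediction error. The gains, plant parameter, and reference signal are assumed values, and the paper's fractional-order machinery is omitted entirely.

```python
import numpy as np

dt, T = 1e-3, 20.0
steps = int(T / dt)
theta_true = 2.0       # unknown plant parameter (assumed for the demo)
gamma = 5.0            # adaptation gain (assumed)

x = x_m = x_hat = 0.0  # plant, reference model, and predictor states
theta = 0.0            # parameter estimate
hist_e, hist_th = [], []

for step in range(steps):
    r = np.sin(step * dt)                       # reference input
    e = x - x_m                                 # tracking error
    eps = x - x_hat                             # prediction error
    u = -theta * x - e + (-x_m + r)             # certainty-equivalence control
    theta += dt * gamma * (e * x + eps * x)     # composite update: both errors
    x_hat += dt * (theta * x + u + 10.0 * eps)  # series-parallel predictor
    x += dt * (theta_true * x + u)              # plant: dx/dt = theta*x + u
    x_m += dt * (-x_m + r)                      # reference model dx_m/dt
    hist_e.append(e)
    hist_th.append(theta)

print(f"final tracking error {hist_e[-1]:.4f}, estimate {hist_th[-1]:.3f}")
```

With the sinusoidal reference providing excitation, both the tracking error and the parameter error decay; the prediction-error term is what distinguishes the composite update from plain MRAC.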

  20. Evaluating mallard adaptive management models with time series

    USGS Publications Warehouse

    Conn, P.B.; Kendall, W.L.

    2004-01-01

    Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when the model is not used to generate data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets.
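The model-weighting mechanism at the core of AHM can be sketched as Bayesian weight updating over candidate models. The two toy population models below are illustrative caricatures of additive vs. compensatory harvest mortality with assumed parameters, not the actual AHM model set.

```python
import numpy as np

rng = np.random.default_rng(3)
K, r_max = 150.0, 0.3        # carrying capacity and growth rate (assumed)

# Two caricature models: fully additive harvest mortality vs partially
# compensatory (first 10% of harvest compensated by reduced natural loss).
def predict_additive(n, h):
    return n + r_max * n * (1.0 - n / K) - h * n

def predict_compensatory(n, h):
    return n + r_max * n * (1.0 - n / K) - max(0.0, h - 0.1) * n

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

w = np.array([0.5, 0.5])     # model weights, uniform prior
n, obs_sd = 100.0, 3.0
for year in range(25):
    h = rng.uniform(0.05, 0.25)                         # harvest-rate decision
    preds = np.array([predict_additive(n, h), predict_compensatory(n, h)])
    n = preds[0] + rng.normal(0.0, 1.0)                 # truth: additive model
    obs = n + rng.normal(0.0, obs_sd)                   # monitoring estimate
    like = normal_pdf(obs, preds, obs_sd)               # likelihood per model
    w = w * like / np.sum(w * like)                     # Bayes weight update

print(f"weight on the (true) additive model after 25 years: {w[0]:.2f}")
```

Because the data were generated under the additive model, its weight converges toward 1; inflating the assumed observation variance slows this convergence, mirroring the stability-vs-speed trade-off the record describes.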

  1. ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment

    SciTech Connect

    Parton, William; Hanson, Paul J; Swanston, Chris; Torn, Margaret S.; Trumbore, Susan E.; Riley, William J.; Kelly, Robin

    2010-01-01

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ 14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ 14C data, and with soil respiration Δ 14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ 14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  2. ForCent model development and testing using the Enriched Background Isotope Study experiment

    SciTech Connect

    Parton, W.J.; Hanson, P. J.; Swanston, C.; Torn, M.; Trumbore, S. E.; Riley, W.; Kelly, R.

    2010-10-01

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ 14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ 14C data, and with soil respiration Δ 14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ 14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  3. ForCent model development and testing using the Enriched Background Isotope Study experiment

    NASA Astrophysics Data System (ADS)

    Parton, William J.; Hanson, Paul J.; Swanston, Chris; Torn, Margaret; Trumbore, Susan E.; Riley, William; Kelly, Robin

    2010-12-01

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ 14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ 14C data, and with soil respiration Δ 14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ 14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  4. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13 dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all N possible models [in the case of wind fields, N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
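The core GRNN estimator is the Nadaraya-Watson kernel regression mentioned in the record; a sketch with one bandwidth per input dimension shows how anisotropic kernels act as soft feature selection. The data, bandwidths, and train/test split are assumed toy values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy data: y depends on x0 only; x1 is an irrelevant input feature.
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.1, 300)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

def grnn_predict(Xtr, ytr, Xte, sigmas):
    """Nadaraya-Watson / GRNN with one bandwidth per input dimension."""
    d2 = (((Xte[:, None, :] - Xtr[None, :, :]) / sigmas) ** 2).sum(-1)
    w = np.exp(-0.5 * d2)                  # Gaussian kernel weights
    return (w @ ytr) / w.sum(axis=1)       # weighted average of targets

# A very large bandwidth on a dimension effectively removes it from the
# kernel distance, so anisotropic bandwidths act as soft feature selection.
mse_iso = np.mean((grnn_predict(Xtr, ytr, Xte, np.array([0.05, 0.05])) - yte) ** 2)
mse_anis = np.mean((grnn_predict(Xtr, ytr, Xte, np.array([0.05, 10.0])) - yte) ** 2)
print(f"isotropic MSE {mse_iso:.4f} vs anisotropic MSE {mse_anis:.4f}")
```

Inflating the bandwidth of the irrelevant feature lets each prediction average over many more effectively-near neighbours, which is exactly why cross-validated anisotropic bandwidths reveal feature relevance.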

  5. Modeling background radiation using geochemical data: A case study in and around Cameron, Arizona.

    PubMed

    Marsac, Kara E; Burnley, Pamela C; Adcock, Christopher T; Haber, Daniel A; Malchow, Russell L; Hausrath, Elisabeth M

    2016-12-01

    This study compares high resolution forward models of the natural gamma-ray background with backgrounds measured by high resolution aerial gamma-ray surveys. The ability to predict variations in natural background radiation levels should prove useful for those engaged in measuring anthropogenic contributions to background radiation for the purpose of emergency response and homeland security operations. The forward models are based on geologic maps and remote sensing multi-spectral imagery combined with two different sources of data: 1) bedrock geochemical data (uranium, potassium and thorium concentrations) collected from national databases, the scientific literature and private companies, and 2) the low spatial resolution NURE (National Uranium Resource Evaluation) aerial gamma-ray survey. The study area near Cameron, Arizona, is located in an arid region with minimal vegetation and, due to the presence of abandoned uranium mines, was the subject of a previous high resolution gamma-ray survey. We found that, in general, geologic map units form a good basis for predicting the geographic distribution of the gamma-ray background. Predictions of background gamma-radiation levels based on bedrock geochemical analyses were not as successful as those based on the NURE aerial survey data sorted by geologic unit. The less successful result of the bedrock geochemical model is most likely due to a number of factors, including the need to take into account the evolution of soil geochemistry during chemical weathering and the influence of aeolian addition. Refinements to the forward models were made using ASTER visualizations to create subunits of similar exposure rate within the Chinle Formation, which contains multiple lithologies, and by grouping alluvial units by drainage basin rather than age.

  6. OccuPeak: ChIP-Seq Peak Calling Based on Internal Background Modelling

    PubMed Central

    van den Boogaard, Malou; Christoffels, Vincent M.; Barnett, Phil; Ruijter, Jan M.

    2014-01-01

    ChIP-seq has become a major tool for the genome-wide identification of transcription factor binding or histone modification sites. Most peak-calling algorithms require input control datasets to model the occurrence of background reads to account for local sequencing and GC bias. However, the GC-content of reads in Input-seq datasets deviates significantly from that in ChIP-seq datasets. Moreover, we observed that a commonly used peak calling program performed equally well when a simulated uniform background set was used in place of an Input-seq dataset. This contradicts the assumption that input control datasets are necessary to faithfully reflect the background read distribution. Because the GC-content of the abundant single reads in ChIP-seq datasets is similar to that of randomly sampled regions, we designed a peak-calling algorithm with a background model based on overlapping single reads. The application, OccuPeak, uses the abundant low frequency tags present in each ChIP-seq dataset to model the background, thereby avoiding the need for additional datasets. Analysis of the performance of OccuPeak showed robust model parameters. Its measure of peak significance, the excess ratio, is only dependent on the tag density of a peak and the global noise levels. Compared to the commonly used peak-calling applications MACS and CisGenome, OccuPeak had the highest sensitivity in an enhancer identification benchmark test, and performed similarly in overlap tests of transcription factor occupation with DNase I hypersensitive sites and H3K27ac sites. Moreover, peaks called by OccuPeak were significantly enriched with cardiac disease-associated SNPs. OccuPeak runs as a standalone application and does not require extensive tweaking of parameters, making its use straightforward and user friendly. Availability: http://occupeak.hfrc.nl PMID:24936875

  7. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data.

    PubMed

    Liu, Zitao; Hauskrecht, Milos

    2016-02-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors, even when the available patient-specific data are sparse and cover a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy.
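A heavily simplified sketch of the two-stage idea: a population trend learned from past patients, plus an online patient-specific correction. The exponentially weighted offset, the regular time grid, and all constants are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated cohort: each past patient follows a shared population trend
# plus a patient-specific offset and observation noise.
t = np.arange(24)                        # regular hourly grid, for simplicity
trend = 10.0 + 2.0 * np.log1p(t)         # assumed population trend shape
cohort = trend + rng.normal(0, 1.5, (50, 1)) + rng.normal(0, 0.3, (50, 24))

# Stage 1: learn the population trend from past patients.
pop_trend = cohort.mean(axis=0)

# Stage 2: for a new patient, track the residual from the population trend
# with an exponentially weighted offset, updated after each observation.
new_patient = trend + 3.0 + rng.normal(0, 0.3, 24)   # true offset of +3
offset, lam = 0.0, 0.6                   # running estimate, smoothing factor
preds_pop, preds_adapt = [], []
for i in range(24):
    preds_pop.append(pop_trend[i])                   # population-only forecast
    preds_adapt.append(pop_trend[i] + offset)        # adapted forecast
    offset = lam * offset + (1 - lam) * (new_patient[i] - pop_trend[i])

mae_pop = np.mean(np.abs(np.array(preds_pop) - new_patient))
mae_adapt = np.mean(np.abs(np.array(preds_adapt) - new_patient))
print(f"MAE population-only {mae_pop:.2f} vs adaptive {mae_adapt:.2f}")
```

After a handful of observations the running offset absorbs the patient's deviation from the cohort, which is the benefit the record attributes to step (3) of the approach.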

  8. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding of the patient condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors, even when the available patient-specific data are sparse and cover a short time span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  9. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the prediction of internal one-, two-dimensional and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of the ALUNS computer code. The calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the developed scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes, and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will be presented at the conference as well. This research work is being supported by NASA/MSFC.

  10. Adaptive Modeling: An Approach for Incorporating Nonlinearity in Regression Analyses.

    PubMed

    Knafl, George J; Barakat, Lamia P; Hanlon, Alexandra L; Hardie, Thomas; Knafl, Kathleen A; Li, Yimei; Deatrick, Janet A

    2017-02-01

    Although regression relationships commonly are treated as linear, this often is not the case. An adaptive approach is described for identifying nonlinear relationships based on power transforms of predictor (or independent) variables and for assessing whether or not relationships are distinctly nonlinear. It is also possible to model adaptively both means and variances of continuous outcome (or dependent) variables and to adaptively power transform positive-valued continuous outcomes, along with their predictors. Example analyses are provided of data from parents in a nursing study on emotional-health-related quality of life for childhood brain tumor survivors as a function of the effort to manage the survivors' condition. These analyses demonstrate that relationships, including moderation relationships, can be distinctly nonlinear, that conclusions about means can be affected by accounting for non-constant variances, and that outcome transformation along with predictor transformation can provide distinct improvements and can resolve skewness problems. © 2017 Wiley Periodicals, Inc.
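The basic move, power-transforming a predictor and comparing fits, can be sketched as a grid search over powers scored by the sum of squared errors. The grid, the SSE criterion, and the simulated quadratic data are illustrative assumptions; the paper's adaptive method is considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated data with a distinctly nonlinear (quadratic) relationship.
x = rng.uniform(0.5, 3.0, 200)
y = 1.0 + 0.8 * x**2 + rng.normal(0, 0.3, 200)

def fit_sse(xp, y):
    """Sum of squared errors of a straight-line fit of y on xp."""
    A = np.column_stack([np.ones_like(xp), xp])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return float(r @ r)

# Try a grid of power transforms of the predictor and keep the best.
powers = np.round(np.arange(0.5, 3.01, 0.25), 2)
sse = {p: fit_sse(x**p, y) for p in powers}
best_p = min(sse, key=sse.get)

# Compare against the plain linear fit (p = 1) to judge whether the
# relationship is distinctly nonlinear.
improvement = 1.0 - sse[best_p] / sse[1.0]
print(f"selected transform x^{best_p}, SSE reduction vs linear: {improvement:.1%}")
```

A large SSE reduction relative to the untransformed fit is the kind of evidence the record describes for declaring a relationship distinctly nonlinear.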

  11. OMEGA: The operational multiscale environment model with grid adaptivity

    SciTech Connect

    Bacon, D.P.

    1995-07-01

    This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. Omega employs a 3D mesh of wedge shaped elements (triangles when viewed from above) that adapt with time. Because wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model, near geographic features such as coastlines where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge data bases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.

  12. The Pattern of Neutral Molecular Variation under the Background Selection Model

    PubMed Central

    Charlesworth, D.; Charlesworth, B.; Morgan, M. T.

    1995-01-01

    Stochastic simulations of the infinite sites model were used to study the behavior of genetic diversity at a neutral locus in a genomic region without recombination, but subject to selection against deleterious alleles maintained by recurrent mutation (background selection). In large populations, the effect of background selection on the number of segregating sites approaches the effect on nucleotide site diversity, i.e., the reduction in genetic variability caused by background selection resembles that caused by a simple reduction in effective population size. We examined, by coalescence-based methods, the power of several tests for the departure from neutral expectation of the frequency spectra of alleles in samples from randomly mating populations (TAJIMA's, FU and LI's, and WATTERSON's tests). All of the tests have low power unless the selection against mutant alleles is extremely weak. In Drosophila, significant TAJIMA's tests are usually not obtained with empirical data sets from loci in genomic regions with restricted recombination frequencies and that exhibit low genetic diversity. This is consistent with the operation of background selection as opposed to selective sweeps. It remains to be decided whether background selection is sufficient to explain the observed extent of reduction in diversity in regions of restricted recombination. PMID:8601499
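    Tajima's test, mentioned above, compares two estimators of the population mutation rate: nucleotide diversity (pi) and Watterson's estimator from the number of segregating sites. A minimal sketch of the statistic from a 0/1 haplotype matrix, using the standard constants (not code from the paper):

    ```python
    import numpy as np

    def tajimas_d(haps):
        """Tajima's D from a (samples x sites) 0/1 haplotype matrix."""
        n, _ = haps.shape
        counts = haps.sum(axis=0)
        seg = (counts > 0) & (counts < n)          # segregating sites only
        S = int(seg.sum())
        if S == 0:
            return 0.0
        k = counts[seg]
        # mean number of pairwise differences across all sample pairs
        pi = np.sum(k * (n - k)) / (n * (n - 1) / 2)
        i = np.arange(1, n)
        a1, a2 = np.sum(1.0 / i), np.sum(1.0 / i**2)
        b1 = (n + 1) / (3.0 * (n - 1))
        b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
        c1 = b1 - 1.0 / a1
        c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
        e1, e2 = c1 / a1, c2 / (a1**2 + a2)
        return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))
    ```

    An excess of rare variants (e.g. all mutations as singletons) drives D negative, the signature whose detectability under background selection the paper evaluates.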

  13. Nonhydrostatic adaptive mesh dynamics for multiscale climate models (Invited)

    NASA Astrophysics Data System (ADS)

    Collins, W.; Johansen, H.; McCorquodale, P.; Colella, P.; Ullrich, P. A.

    2013-12-01

    Many of the atmospheric phenomena with the greatest potential impact in future warmer climates are inherently multiscale. Such meteorological systems include hurricanes and tropical cyclones, atmospheric rivers, and other types of hydrometeorological extremes. These phenomena are challenging to simulate in conventional climate models because uniform model resolutions are relatively coarse compared with the native nonhydrostatic scales of the phenomenological dynamics. To enable studies of these systems with sufficient local resolution for the multiscale dynamics yet with sufficient speed for climate-change studies, we have adapted existing adaptive mesh dynamics for the DOE-NSF Community Atmosphere Model (CAM). In this talk, we present an adaptive, conservative finite volume approach for moist nonhydrostatic atmospheric dynamics. The approach is based on the compressible Euler equations on 3D thin spherical shells, where the radial direction is treated implicitly (using a fourth-order Runge-Kutta IMEX scheme) to eliminate time step constraints from vertical acoustic waves. Refinement is performed only in the horizontal directions. The spatial discretization is the equiangular cubed-sphere mapping, with a fourth-order accurate discretization to compute flux averages on faces. By using both space- and time-adaptive mesh refinement, the solver allocates computational effort only where greater accuracy is needed. The resulting method is demonstrated to be fourth-order accurate for model problems, robust at solution discontinuities, and stable for large aspect ratios. We present dycore comparisons of moist physics using a simplified physics package: a Hadley cell lifting an advected tracer into the upper atmosphere, with horizontal adaptivity.

  14. Career Development and Older Workers: Study Evaluating Adaptability in Older Workers Using Hall's Model

    ERIC Educational Resources Information Center

    Strate, Merwyn L.; Torraco, Richard J.

    2005-01-01

    This qualitative case study described the development of adaptive competence in older workers using a Model of Adaptability and Adaptation developed by Dr. Douglas T. Hall (2002). Few studies have focused on the development of adaptability in workers when faced with change and no studies have focused on the development of adaptability in older…

  15. Framework for dynamic background modeling and shadow suppression for moving object segmentation in complex wavelet domain

    NASA Astrophysics Data System (ADS)

    Kushwaha, Alok Kumar Singh; Srivastava, Rajeev

    2015-09-01

    Moving object segmentation using change detection in the wavelet domain under continuous variations of lighting conditions is a challenging problem in video surveillance systems. Several methods have been proposed in the literature for change detection in the wavelet domain for moving object segmentation with static backgrounds, but dynamic background changes have not been addressed effectively. The methods proposed in the literature suffer from various problems, such as ghostlike appearance, object shadows, and noise. To deal with these issues, a framework for dynamic background modeling and shadow suppression under rapidly changing illumination conditions for moving object segmentation in the complex wavelet domain is proposed. The proposed method consists of eight steps applied to the given video frames: wavelet decomposition of each frame using the complex wavelet transform; change detection on the detail coefficients (LH, HL, and HH); improved Gaussian mixture-based dynamic background modeling on the approximate coefficients (LL subband); cast shadow suppression; soft thresholding for noise removal; strong edge detection; inverse wavelet transformation for reconstruction; and finally a closing morphology operator. A comparative analysis of the proposed method is presented both qualitatively and quantitatively against other standard methods from the literature on six datasets in terms of various performance measures. Experimental results demonstrate the efficacy of the proposed method.
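    The first two stages of the pipeline above can be sketched with a single-level Haar transform standing in for the complex wavelet transform, and a running-average background on the LL subband standing in for the Gaussian mixture model. Both simplifications, and all thresholds, are assumptions for illustration:

    ```python
    import numpy as np

    def haar_dwt2(frame):
        """Single-level 2D Haar transform (LL, LH, HL, HH) via 2x2 blocks."""
        a = frame[0::2, 0::2]; b = frame[0::2, 1::2]
        c = frame[1::2, 0::2]; d = frame[1::2, 1::2]
        LL = (a + b + c + d) / 4
        LH = (a + b - c - d) / 4
        HL = (a - b + c - d) / 4
        HH = (a - b - c + d) / 4
        return LL, LH, HL, HH

    def update_background(bg, LL, alpha=0.05, thresh=10.0):
        """Running-average background on the LL subband with a change mask;
        pixels flagged as foreground leave the background untouched."""
        mask = np.abs(LL - bg) > thresh
        bg = np.where(mask, bg, (1 - alpha) * bg + alpha * LL)
        return bg, mask
    ```

    Doing the change detection on subband coefficients rather than raw pixels is what gives wavelet-domain methods their robustness to noise and gradual illumination drift.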

  16. Asymmetric adaptive modeling of central tarsal bones in racing greyhounds.

    PubMed

    Johnson, K A; Muir, P; Nicoll, R G; Roush, J K

    2000-08-01

    Fatigue fracture of the cuboidal bones of the foot, especially the navicular tarsal bone, is common in athletes and dancers. The racing greyhound is a naturally occurring animal model of this injury because both microcracking and complete fracture occur in the right central (navicular) tarsal bone (CTB). The right limb is on the outside when racing in a counter-clockwise direction on circular tracks, and is subjected to asymmetric cyclic compressive loading. We wished to study in more detail adaptive modeling in the right CTB in racing greyhounds. We hypothesized that cyclic asymmetric loading of a cuboidal bone induced by racing on a circular track would induce site-specific bone adaptation. We also hypothesized that such an adaptive response would be attenuated in greyhounds that were retired from racing and no longer subjected to cyclic asymmetric loading. Central tarsal bones from racing greyhounds (racing group, n = 6) and retired greyhounds being used for breeding (nonracing group, n = 4) were examined using quantitative computed tomography (CT). Bone mineral density (BMD) was determined in a 3-mm diameter region-of-interest (ROI) in six contiguous 1-mm-thick sagittal CT slices of each CTB. Bones were subsequently examined histomorphometrically and percentage bone area (B.Ar./T.Ar., %) was determined in 10 ROI from dorsal to plantar in a transverse plane, mid-way between the proximal and distal articular surfaces. The BMD of the right CTB was greater than the left in all greyhounds (p < 0.001). In comparing ipsilateral limbs between groups, BMD of the racing group was greater than the nonracing group for each side (p < 0.005). In sagittal plane histologic sections, bone in the dorsal region of the right CTB had undergone adaptive modeling, through thickening and compaction of trabeculae. B.Ar./T.Ar., % in the right CTB of the racing group was greater than in the contralateral CTB (p < 0.001), and the ipsilateral CTB of the nonracing group (p < 0.001).

  17. The method of narrow-band audio classification based on universal noise background model

    NASA Astrophysics Data System (ADS)

    Rui, Rui; Bao, Chang-chun

    2013-03-01

    Audio classification is the basis of content-based audio analysis and retrieval. Conventional classification methods mainly depend on feature extraction over a whole audio clip, which inevitably increases the time required for classification. An approach for classifying a narrow-band audio stream based on frame-level feature extraction is presented in this paper. The audio signals are divided into speech, instrumental music, song with accompaniment, and noise using Gaussian mixture models (GMMs). To cope with changing real-world environments, a universal noise background model (UNBM) covering white noise, street noise, factory noise, and car interior noise is built. In addition, three feature schemes are considered to optimize feature selection. The experimental results show that the proposed algorithm achieves high classification accuracy, especially under each of the noise backgrounds used, and keeps the classification time under one second.
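    The decision rule behind such a GMM classifier is simple: score each frame under every class model (the UNBM being just one more model) and pick the class with the highest average log-likelihood. A sketch with fixed, hand-set diagonal-covariance parameters; in practice each GMM is trained with EM on real features:

    ```python
    import numpy as np

    def diag_gmm_loglik(X, weights, means, variances):
        """Per-frame log-likelihood under a diagonal-covariance GMM.
        X: (frames, dims); weights: (K,); means/variances: (K, dims)."""
        d = X.shape[1]
        diff = X[:, None, :] - means[None, :, :]
        exponent = -0.5 * np.sum(diff**2 / variances, axis=2)
        lognorm = -0.5 * (d * np.log(2 * np.pi) + np.sum(np.log(variances), axis=1))
        logcomp = np.log(weights) + lognorm + exponent
        m = logcomp.max(axis=1, keepdims=True)          # log-sum-exp trick
        return (m + np.log(np.exp(logcomp - m).sum(axis=1, keepdims=True))).ravel()

    def classify(X, class_models):
        """Assign a clip to the class whose GMM gives the highest mean
        frame log-likelihood."""
        scores = {name: diag_gmm_loglik(X, *m).mean() for name, m in class_models.items()}
        return max(scores, key=scores.get)
    ```

    Because the score accumulates frame by frame, a decision can be made as soon as enough frames have arrived, which is what keeps the classification latency low.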

  18. Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.

    PubMed

    Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui

    2017-01-01

    To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as KalBHT hybrid method, introduced the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a self-adaptive regularization both spatially and temporally with change of temperature. Further, to decrease the sensitivity to accuracy of the BHTE model, Kalman filter is utilized to update the window at each iteration time. To investigate the effect of the proposed model, computer heating simulation, phantom microwave heating experiment and dynamic in-vivo model validation of liver and thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion and the temperature estimated also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to susceptibility problem surrounding the hot spot and more accuracy of temperature estimation. In the healthy liver experiment with heating simulation, the RMSE of the hot spot of the proposed model is reduced to about 50% compared to the RMSE of the original hybrid model and the convergence time becomes only about one fifth of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.
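    The Kalman-filter idea above, using a bio-heat model as the prediction step and MR measurements as the update step, can be illustrated with a scalar toy version. The relaxation constant, heating term, and noise covariances below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def kalman_temperature(measurements, q=0.01, r=0.25, k=0.1, heat=0.5):
        """Scalar Kalman filter whose prediction step is a toy bio-heat model:
        relaxation toward a 37 C baseline plus a constant heating term."""
        T, P = 37.0, 1.0                     # state estimate and its variance
        out = []
        for z in measurements:
            # predict: dT = -k*(T - 37) + heat per step
            T = T - k * (T - 37.0) + heat
            P = P + q
            # update with the (noisy) temperature measurement z
            K = P / (P + r)
            T = T + K * (z - T)
            P = (1 - K) * P
            out.append(T)
        return np.array(out)
    ```

    The gain K automatically balances trust between the model prediction and the measurement, which is the mechanism that makes the hybrid thermometry less sensitive to errors in the bio-heat model.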

  19. Adaptive fuzzy modeling of the hypnotic process in anesthesia.

    PubMed

    Marrero, A; Méndez, J A; Reboso, J A; Martín, I; Calvo, J L

    2017-04-01

    This paper addresses the problem of patient model synthesis in anesthesia. Recent advanced drug infusion mechanisms use a patient model to establish the proper drug dose. However, due to the inherent complexity and variability of patient dynamics, obtaining a good model is difficult. In this paper, a method based on fuzzy logic and genetic algorithms is proposed as an alternative to standard compartmental models. The model uses a Mamdani-type fuzzy inference system developed in a two-step procedure. First, an offline model is obtained using information from real patients. Then, an adaptive strategy that uses genetic algorithms is implemented. The modeling technique was validated using real data obtained from patients in the operating room. Results show that the proposed method based on artificial intelligence appears to be an improved alternative to existing compartmental methodologies.
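    A Mamdani-type inference step like the one used above can be sketched with triangular memberships and centroid defuzzification. The two rules, the BIS-like input ranges, and the output set centers are purely illustrative assumptions; the paper tunes its rule base from patient data via genetic algorithms:

    ```python
    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def infer_dose(bis):
        """Two-rule Mamdani sketch mapping a hypnosis index (0-100) to a
        normalized infusion level in [0, 1] (hypothetical rules/ranges)."""
        deep = tri(bis, 0, 20, 50)      # patient too deep  -> low dose
        light = tri(bis, 40, 80, 100)   # patient too light -> high dose
        # centroid defuzzification over output singletons at 0.2 and 0.8
        num = deep * 0.2 + light * 0.8
        den = deep + light
        return num / den if den else 0.5
    ```

    Adapting such a model then amounts to letting a genetic algorithm adjust the membership parameters so the fuzzy output tracks the observed patient response.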

  20. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0-238 N s m-1 through the viscous and electromagnetic components, respectively.
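    Using the coefficient ranges reported in the abstract, the hybrid force characteristic can be sketched as a fixed viscous term plus a bounded, adjustable electromagnetic term. Treating both as linear-in-velocity dampers is an assumption for illustration:

    ```python
    def damper_force(v, c_em, c_visc=1300.0, c_em_max=238.0):
        """Total hybrid damper force (N) at piston velocity v (m/s):
        fixed fail-safe viscous coefficient plus an electromagnetic
        coefficient adjustable in [0, c_em_max] N s/m (values from the
        abstract; linear damping is an illustrative simplification)."""
        c_em = min(max(c_em, 0.0), c_em_max)   # electromagnetic term saturates
        return (c_visc + c_em) * v
    ```

    The same current that sets `c_em` flows through the regeneration circuit, which is why adjusting the damping and harvesting energy are two views of one mechanism.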

  1. Direct Model Reference Adaptive Control for a Magnetic Bearing

    SciTech Connect

    Durling, Mike

    1999-11-01

    A Direct Model Reference Adaptive Controller (DMRAC) is applied to a magnetic bearing test stand. The bearing of interest is the MBC 500 Magnetic Bearing System manufactured by Magnetic Moments, LLC. The bearing model is presented in state space form and the system transfer function is measured directly using a closed-loop swept sine technique. Next, the bearing models are used to design a phase-lead controller, notch filter and then a DMRAC. The controllers are tuned in simulations and finally are implemented using a combination of MATLAB, SIMULINK and dSPACE. The results show a successful implementation of a DMRAC on the magnetic bearing hardware.
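    The model-reference adaptive idea can be illustrated with the textbook MIT-rule scheme on a first-order plant: the adjustable gain is driven by the error between the plant and a reference model. This is a generic sketch, not the DMRAC law applied to the MBC 500 bearing; all plant and adaptation constants are assumptions:

    ```python
    def mrac_sim(steps=10000, dt=0.001, gamma=50.0):
        """MIT-rule adaptation of a feedforward gain theta so the plant
        dy/dt = -a*y + b*theta*r tracks the reference dym/dt = -am*(ym - r).
        Forward-Euler simulation with a constant reference r = 1."""
        a, b, am = 1.0, 0.5, 2.0
        y = ym = 0.0
        theta = 0.0                          # adaptive gain, ideal value a/b = 2
        r = 1.0
        for _ in range(steps):
            u = theta * r
            y += dt * (-a * y + b * u)
            ym += dt * (-am * ym + am * r)
            e = y - ym
            theta += dt * (-gamma * e * r)   # MIT-rule gradient update
        return y, ym, theta
    ```

    The adaptation gain `gamma` plays the same tuning role as the DMRAC gains tuned in simulation before the dSPACE implementation: too small and convergence is slow, too large and the loop oscillates.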

  2. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    SciTech Connect

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade-off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
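    The computational saving comes from scoring a small top layer of merged Gaussians and descending only into the most promising node(s), instead of evaluating every mixture component. A two-level sketch of that pruned evaluation (the tree layout and beam are illustrative; the paper builds the top layer with Runnalls' mixture reduction):

    ```python
    import numpy as np

    def log_gauss(x, mean, var):
        """Log density of a diagonal-covariance Gaussian."""
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

    def tree_score(x, tree, beam=1):
        """Score a frame against a two-level tree-structured GMM: evaluate
        the merged top-layer Gaussians, then expand only the best `beam`
        nodes. tree: list of (parent_mean, parent_var, children), where
        children are (weight, mean, var) tuples."""
        parent_scores = [log_gauss(x, m, v) for m, v, _ in tree]
        best = np.argsort(parent_scores)[-beam:]
        comp = []
        for i in best:
            for w, m, v in tree[i][2]:
                comp.append(np.log(w) + log_gauss(x, m, v))
        m_ = max(comp)                       # log-sum-exp over expanded leaves
        return m_ + np.log(sum(np.exp(c - m_) for c in comp))
    ```

    Widening the beam trades computation back for accuracy, the same knob that governs the EER-versus-speedup trade-off reported above.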

  3. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE PAGES

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade-off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  4. On valuing information in adaptive-management models.

    PubMed

    Moore, Alana L; McCarthy, Michael A

    2010-08-01

    Active adaptive management looks at the benefit of using strategies that may be suboptimal in the near term but may provide additional information that will facilitate better management in the future. In many adaptive-management problems that have been studied, the optimal active and passive policies (accounting for learning when designing policies and designing policy on the basis of current best information, respectively) are very similar. This seems paradoxical; when faced with uncertainty about the best course of action, managers should spend very little effort on actively designing programs to learn about the system they are managing. We considered two possible reasons why active and passive adaptive solutions are often similar. First, the benefits of learning are often confined to the particular case study in the modeled scenario, whereas in reality information gained from local studies is often applied more broadly. Second, management objectives that incorporate the variance of an estimate may place greater emphasis on learning than more commonly used objectives that aim to maximize an expected value. We explored these issues in a case study of Merri Creek, Melbourne, Australia, in which the aim was to choose between two options for revegetation. We explicitly incorporated monitoring costs in the model. The value of the terminal rewards and the choice of objective both influenced the difference between active and passive adaptive solutions. Explicitly considering the cost of monitoring provided a different perspective on how the terminal reward and management objective affected learning. The states for which it was optimal to monitor did not always coincide with the states in which active and passive adaptive management differed. Our results emphasize that spending resources on monitoring is only optimal when the expected benefits of the options being considered are similar and when the pay-off for learning about their benefits is large.
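    The abstract's conclusion, that monitoring pays only when the options' expected benefits are similar but the payoff for learning is large, is the logic of the expected value of perfect information (EVPI). A minimal sketch with hypothetical payoffs for two revegetation options under two uncertain states:

    ```python
    import numpy as np

    def evpi(payoffs, probs):
        """Expected value of perfect information for choosing one option
        under state uncertainty. payoffs: rows = states, cols = options;
        probs: prior state probabilities. Learning (monitoring) is only
        worth paying for when EVPI exceeds its cost."""
        payoffs = np.asarray(payoffs, dtype=float)
        probs = np.asarray(probs, dtype=float)
        best_without = np.max(probs @ payoffs)     # commit to one option now
        best_with = probs @ payoffs.max(axis=1)    # learn the state, then choose
        return best_with - best_without

    # Hypothetical example: options with similar expected benefit (6.5 each)
    payoffs = [[10.0, 4.0],   # state A: option 1 much better
               [3.0, 9.0]]    # state B: option 2 much better
    value = evpi(payoffs, [0.5, 0.5])
    ```

    Here both options have the same expected benefit, yet learning the state is worth 3.0 payoff units, so monitoring costing less than that would be optimal; if one option dominated in every state, EVPI would be zero and monitoring would buy nothing.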

  5. A self-consistent 3D model of fluctuations in the helium-ionizing background

    NASA Astrophysics Data System (ADS)

    Davies, Frederick B.; Furlanetto, Steven R.; Dixon, Keri L.

    2017-03-01

    Large variations in the effective optical depth of the He II Lyα forest have been observed at z ≳ 2.7, but the physical nature of these variations is uncertain: either the Universe is still undergoing the process of He II reionization, or the Universe is highly ionized but the He II-ionizing background fluctuates significantly on large scales. In an effort to build upon our understanding of the latter scenario, we present a novel model for the evolution of ionizing background fluctuations. Previous models have assumed the mean free path of ionizing photons to be spatially uniform, ignoring the dependence of that scale on the local ionization state of the intergalactic medium (IGM). This assumption is reasonable when the mean free path is large compared to the average distance between the primary sources of He II-ionizing photons, ≳ L⋆ quasars. However, when this is no longer the case, the background fluctuations become more severe, and an accurate description of the average propagation of ionizing photons through the IGM requires additionally accounting for the fluctuations in opacity. We demonstrate the importance of this effect by constructing 3D semi-analytic models of the helium-ionizing background from z = 2.5-3.5 that explicitly include a spatially varying mean free path of ionizing photons. The resulting distribution of effective optical depths at large scales in the He II Lyα forest is very similar to the latest observations with HST/COS at 2.5 ≲ z ≲ 3.5.
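    The paper's key coupling, a mean free path that depends on the local ionization state, which in turn depends on the radiation field, calls for a self-consistent (fixed-point) solution. A 1-D toy sketch of that iteration; the power-law coupling, source layout, and all numbers are illustrative assumptions, not the paper's 3D semi-analytic model:

    ```python
    import numpy as np

    def ionizing_bg(grid, src_pos, mfp0, beta, n_iter=30):
        """Fixed-point iteration for a toy 1-D ionizing background where the
        local mean free path scales with the local ionization rate,
        lam(x) = mfp0 * (Gamma(x)/<Gamma>)**beta. beta = 0 recovers the
        usual uniform-mean-free-path assumption."""
        gamma = np.ones_like(grid)
        for _ in range(n_iter):
            lam = np.clip(mfp0 * (gamma / gamma.mean()) ** beta, 1e-3, None)
            new = sum(np.exp(-np.abs(grid - p) / lam) for p in src_pos)
            gamma = 0.5 * gamma + 0.5 * new       # relaxed update for stability
        return gamma

    grid = np.arange(0.0, 101.0)
    g_uniform = ionizing_bg(grid, [20.0, 80.0], mfp0=15.0, beta=0.0)
    g_coupled = ionizing_bg(grid, [20.0, 80.0], mfp0=15.0, beta=0.67)
    ```

    Regions far from sources see a lower radiation field, hence a shorter mean free path, hence an even lower field: the coupled solution shows larger contrast than the uniform-mean-free-path one, mirroring the more severe fluctuations the paper describes.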

  6. On a combined adaptive tetrahedral tracing and edge diffraction model

    NASA Astrophysics Data System (ADS)

    Hart, Carl R.

    A major challenge in architectural acoustics is the unification of diffraction models and geometric acoustics. For example, geometric acoustics is insufficient to quantify the scattering characteristics of acoustic diffusors. Typically the time-independent boundary element method (BEM) is the method of choice. In contrast, time-domain computations are of interest for characterizing both the spatial and temporal scattering characteristics of acoustic diffusors. Hence, a method is sought that predicts acoustic scattering in the time-domain. A prediction method, which combines an advanced image source method and an edge diffraction model, is investigated for the prediction of time-domain scattering. Adaptive tetrahedral tracing is an advanced image source method that generates image sources through an adaptive process. Propagating tetrahedral beams adapt to ensonified geometry mapping the geometric sound field in space and along boundaries. The edge diffraction model interfaces with the adaptive tetrahedral tracing process by the transfer of edge geometry and visibility information. Scattering is quantified as the contribution of secondary sources along a single or multiple interacting edges. Accounting for a finite number of diffraction permutations approximates the scattered sound field. Superposition of the geometric and scattered sound fields results in a synthesized impulse response between a source and a receiver. Evaluation of the prediction technique involves numerical verification and numerical validation. Numerical verification is based upon a comparison with analytic and numerical (BEM) solutions for scattering geometries. Good agreement is shown for the selected scattering geometries. Numerical validation is based upon experimentally determined scattered impulse responses of acoustic diffusors. Experimental data suggests that the predictive model is appropriate for high-frequency predictions. For the experimental determination of the scattered impulse

  7. Modeling Adaptive Middleware and Its Applications to Military Tactical Datalinks

    DTIC Science & Technology

    2005-06-01

    execution of the WSOA architecture to the On-Time QoS region, and associated Y offset. The above calculations are performed each time a complete image tile is … (Figure 3-3: Early, On-Time and Late QoS Boundaries; Table 3-4: WSOA QoS Adaptation Model.)

  8. Adaptive Control Law Design for Model Uncertainty Compensation

    DTIC Science & Technology

    1989-06-14

    AD-A211 712; WRDC-TR-89-3061: Adaptive Control Law Design for Model Uncertainty Compensation. J. E. Sorrells, Dynetics, Inc., 1000 Explorer Blvd.; monitoring organization: Wright Research and Development Center, Flight Dynamics Laboratory, AFSC. Controllers designed using Dynetics' innovative approach were able to equal or surpass the STR and MRAC controllers in terms of performance robustness.

  9. Modeling of Complex Adaptive Systems in Air Operations

    DTIC Science & Technology

    2006-09-01

    control of C3 in an increasingly complex military environment. Control theory is a multidisciplinary science associated with dynamic systems and, while … AFRL-IF-RS-TR-2006-282, In-House Final Technical Report, September 2006: Modeling of Complex Adaptive Systems in Air Operations.

  10. Goal-oriented model adaptivity for viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    van Opstal, T. M.; Bauman, P. T.; Prudhomme, S.; van Brummelen, E. H.

    2015-06-01

    In van Opstal et al. (Comput Mech 50:779-788, 2012), airbag inflation simulations were performed in which the flow was approximated by Stokes flow. Inside the intricately folded initial geometry the Stokes assumption is argued to hold. This linearity assumption leads to a boundary-integral representation, the key to bypassing mesh generation and remeshing. It therefore enables very large displacements with near-contact. However, such a coarse assumption cannot hold throughout the domain; where it breaks down, one needs to revert to the original model. The present work formalizes this idea. A model-adaptive approach is proposed, in which the coarse model (a Stokes boundary-integral equation) is locally replaced by the original high-fidelity model (Navier-Stokes) based on a-posteriori estimates of the error in a quantity of interest. This adaptive modeling framework aims at taking away the burden and heuristics of manually partitioning the domain while providing new insight into the physics. We elucidate how challenges pertaining to model disparity can be addressed. Essentially, the solution in the interior of the coarse-model domain is reconstructed as a post-processing step. We furthermore present a two-dimensional numerical experiment to show that the error estimator is reliable.

  11. Network and adaptive system of systems modeling and analysis.

    SciTech Connect

    Lawton, Craig R.; Campbell, James E. Dr.; Anderson, Dennis James; Eddy, John P.

    2007-05-01

    This report documents the results of an LDRD program entitled ''Network and Adaptive System of Systems Modeling and Analysis'' that was conducted during FY 2005 and FY 2006. The purpose of this study was to determine and implement ways to incorporate network communications modeling into existing System of Systems (SoS) modeling capabilities. Current SoS modeling, particularly for the Future Combat Systems (FCS) program, is conducted under the assumption that communication between the various systems is always possible and occurs instantaneously. A more realistic representation of these communications allows for better, more accurate simulation results. The current approach to meeting this objective has been to use existing capabilities to model network hardware reliability and to add capabilities that use that information to model the impact on the sustainment supply chain and operational availability.

  12. Extending the radial diffusion model of Falthammar to non-dipole background field

    SciTech Connect

    Cunningham, Gregory Scott

    2015-05-26

    A model for radial diffusion caused by electromagnetic disturbances was published by Falthammar (1965) using a two-parameter model of the disturbance perturbing a background dipole magnetic field. Schulz and Lanzerotti (1974) extended this model by recognizing the two-parameter perturbation as the leading (non-dipole) terms of the Mead-Williams magnetic field model. They emphasized that the magnetic perturbation in such a model induces an electric field that can be calculated from the motion of field lines on which the particles are ‘frozen’. Roederer and Zhang (2014) describe how the field lines on which the particles are frozen can be calculated by tracing the unperturbed field lines from the minimum-B location to the ionospheric footpoint, and then tracing the perturbed field (which shares the same ionospheric footpoint due to the frozen-in condition) from the ionospheric footpoint back to a perturbed minimum-B location. The instantaneous change in Roederer L*, dL*/dt, can then be computed as the product (dL*/dphi)*(dphi/dt). dL*/dphi is linearly dependent on the perturbation parameters (to first order) and is obtained by computing the drift across L*-labeled perturbed field lines, while dphi/dt is related to the bounce-averaged gradient-curvature drift velocity. The advantage of assuming a dipole background magnetic field, as in these previous studies, is that the instantaneous dL*/dt can be computed analytically (with some approximations), as can the DLL that results from integrating dL*/dt over time and computing the expected value of (dL*)^2. The approach can also be applied to complex background magnetic field models like T89 or TS04, on top of which the small perturbations are added, but an analytical solution is not possible and so a numerical solution must be implemented. In this talk, I discuss our progress in implementing a numerical solution to the calculation of DL*L* using arbitrary background field models with simple electromagnetic
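    The chain of quantities invoked above can be written out compactly (standard definitions, not reproduced from the talk):

    ```latex
    % Instantaneous drift of the Roederer L* and the resulting diffusion coefficient
    \frac{dL^{*}}{dt} = \frac{\partial L^{*}}{\partial \varphi}\,\frac{d\varphi}{dt},
    \qquad
    D_{L^{*}L^{*}} = \frac{\bigl\langle (\Delta L^{*})^{2} \bigr\rangle}{2\,\Delta t},
    \qquad
    \Delta L^{*} = \int_{0}^{\Delta t} \frac{dL^{*}}{dt}\,dt .
    ```

    With a dipole background both factors of dL*/dt admit closed forms; for T89- or TS04-like backgrounds the derivative and the drift rate must both be evaluated numerically along traced field lines.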

  13. Language Model Combination and Adaptation Using Weighted Finite State Transducers

    NASA Technical Reports Server (NTRS)

    Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.

    2010-01-01

    In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can either be used to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
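    At its core, LM combination is weighted interpolation of component probabilities; the WFST framework realizes this through transducer operations rather than code changes in the decoder. A toy sketch of the underlying interpolation, with hypothetical probabilities (the paper's LMs are full n-gram models compiled into transducers):

    ```python
    def interpolate_lms(lms, weights):
        """Linear interpolation of component LMs. Each LM is a dict mapping
        (history, word) -> probability; a tiny floor stands in for smoothing."""
        def prob(history, word):
            return sum(w * lm.get((history, word), 1e-10)
                       for w, lm in zip(weights, lms))
        return prob

    # Two toy component LMs with hypothetical conditional probabilities
    lm_news = {(("the",), "market"): 0.3, (("the",), "game"): 0.1}
    lm_sport = {(("the",), "market"): 0.05, (("the",), "game"): 0.4}
    p = interpolate_lms([lm_news, lm_sport], [0.7, 0.3])
    ```

    Adapting the combination to a new task then reduces to re-estimating the interpolation weights, which in the WFST formulation only changes arc weights, not the decoder.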

  14. CMAQ (Community Multi-Scale Air Quality) atmospheric distribution model adaptation to region of Hungary

    NASA Astrophysics Data System (ADS)

    Lázár, Dóra; Weidinger, Tamás

    2016-04-01

    It has become important to measure and predict the concentrations of harmful atmospheric pollutants such as dust, aerosol particles of different size ranges, nitrogen compounds, and ozone. The Department of Meteorology at Eötvös Loránd University has been applying the WRF (Weather Research and Forecasting) model for several years; it is suitable for weather forecasting tasks and provides input data for various environmental models (e.g. DNDC). By adapting the CMAQ (Community Multi-scale Air Quality) model we have designed a combined air quality-meteorological model (WRF-CMAQ). In this research it is important to apply different emission databases and a background model describing the initial distribution of the pollutants. We used the SMOKE (Sparse Matrix Operator Kernel Emissions) model to construct the emission dataset from EMEP (European Monitoring and Evaluation Programme) inventories and the GEOS-Chem model for initial and boundary conditions. Our model settings were the CMAQ CB05 (Carbon Bond 2005) chemical mechanism with 108 x 108 km, 36 x 36 km, and 12 x 12 km grids for the regions of Europe, the Carpathian Basin, and Hungary, respectively. i) The structure of the model system and ii) a case study for the Carpathian Basin (an anticyclonic weather situation on 21 September 2012) are presented. iii) Verification of the ozone forecast is provided based on measurements from background air pollution stations. iv) The effects of model attributes (e.g. transition time, emission dataset, parameterizations) on the ozone forecast for Hungary are also investigated.

  15. Mouse model of Sanfilippo syndrome type B: relation of phenotypic features to background strain.

    PubMed

    Gografe, Sylvia I; Garbuzova-Davis, Svitlana; Willing, Alison E; Haas, Ken; Chamizo, Wilfredo; Sanberg, Paul R

    2003-12-01

    Sanfilippo syndrome type B or mucopolysaccharidosis type III B (MPS IIIB) is a lysosomal storage disorder that is inherited in an autosomal recessive manner. It is characterized by systemic heparan sulfate accumulation in lysosomes due to deficiency of the enzyme alpha-N-acetylglucosaminidase (Naglu). Devastating clinical abnormalities with severe central nervous system involvement and somatic disease lead to premature death. A mouse model of Sanfilippo syndrome type B was created by targeted disruption of the gene encoding Naglu, providing a powerful tool for understanding pathogenesis and developing novel therapeutic strategies. However, the JAX GEMM Strain B6.129S6-Naglutm1Efn mouse, although showing biochemical similarities to humans with Sanfilippo syndrome, exhibits differences in aging and behavior. We observed idiosyncrasies, such as skeletal dysmorphism, hydrocephalus, ocular abnormalities, organomegaly, growth retardation, and anomalies of the integument, in our breeding colony of Naglu mutant mice and determined that several of them were at least partially related to the background strain C57BL/6. These background strain abnormalities, therefore, potentially mimic or overlap signs of the induced syndrome in our mice. Our observations may prove useful in studies of Naglu mutant mice. The necessity of distinguishing background anomalies from signs of the modeled disease is apparent.

  16. Detection of cosmic microwave background anisotropy at 1.8 deg: Theoretical implications on inflationary models

    NASA Astrophysics Data System (ADS)

    de Bernardis, Paolo; de Gasperis, Giancarlo; Masi, Silvia; Vittorio, Nicola

    1994-09-01

    Theoretical scenarios for the formation of large-scale structure in the universe are strongly constrained by ARGO (a balloon borne experiment reporting detection of cosmic microwave background (CMB) anisotropy at 1.8 deg) and Cosmic Background Explorer/Differential Microwave Radiometer (COBE/DMR). Here we consider flat hybrid models with either scale invariant or tilted (n not equal to 1) initial conditions. For n less than 1, we take into account the effect of a primordial background of gravitational waves, predicted by power-law inflation scenarios. The main result of our analysis is that the ARGO and COBE/DMR data select a range of values for the primordial spectral index: n = 0.95 (+0.25, -0.15); values of n outside this range can be rejected at more than 95% confidence level. These bounds are basically independent of the cosmological abundance of baryons (at least in the range allowed from primordial nucleosynthesis) and of the ratio of cold to hot dark matter. So, flat, cold, or mixed dark matter models, with scale-invariant initial conditions and a standard recombination history, successfully take into account the CMB anisotropy detected at intermediate and large angular scales.

  17. Model Adaptation for Prognostics in a Particle Filtering Framework

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai Frank

    2011-01-01

    One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds only for "well-designed" particle filters as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
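
    The core idea, augmenting the state vector with a model parameter so the filter tunes the model while tracking the state, can be sketched as follows. The exponential-decay "model", the noise levels, and all numbers are illustrative assumptions, not the paper's battery model:

    ```python
    import math
    import random

    def augmented_particle_filter(observations, dt=1.0, n_particles=2000, seed=0):
        """Toy particle filter whose state vector (x, k) includes the model
        parameter k, so state tracking and model adaptation happen jointly."""
        rng = random.Random(seed)
        particles = [(1.0, rng.uniform(0.05, 0.6)) for _ in range(n_particles)]
        for y in observations:
            # Propagate each particle through the (assumed) decay model,
            # jittering k slightly so the parameter can keep adapting.
            particles = [(x * math.exp(-k * dt),
                          max(1e-3, k + rng.gauss(0.0, 0.005)))
                         for x, k in particles]
            # Gaussian measurement likelihood (assumed obs. noise sigma = 0.02)
            weights = [math.exp(-0.5 * ((y - x) / 0.02) ** 2) for x, _ in particles]
            total = sum(weights)
            if total == 0.0:                    # degenerate step: keep particles
                continue
            # Systematic resampling
            cdf, c = [], 0.0
            for w in weights:
                c += w / total
                cdf.append(c)
            u0, idx, resampled = rng.random() / n_particles, 0, []
            for i in range(n_particles):
                u = u0 + i / n_particles
                while idx < n_particles - 1 and cdf[idx] < u:
                    idx += 1
                resampled.append(particles[idx])
            particles = resampled
        return sum(k for _, k in particles) / n_particles  # posterior mean of k

    # Synthetic data from a decay process with true k = 0.3 and initial x = 1.0
    true_k = 0.3
    obs = [math.exp(-true_k * t) for t in range(1, 11)]
    k_est = augmented_particle_filter(obs)
    ```

    Because the parameter rides along in each particle, the resampling step concentrates the ensemble on parameter values consistent with the observations, which is exactly the "tuned model" behaviour the abstract describes.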

  18. Evaluating the adaptive-filter model of the cerebellum.

    PubMed

    Dean, Paul; Porrill, John

    2011-07-15

    The adaptive-filter model of the cerebellar microcircuit is in widespread use, combining as it does an explanation of key microcircuit features with well-specified computational power. Here we consider two methods for its evaluation. One is to test its predictions concerning relations between cerebellar inputs and outputs. Where the relevant experimental data are available, e.g. for the floccular role in image stabilization, the predictions appear to be upheld. However, for the majority of cerebellar microzones these data have yet to be obtained. The second method is to test model predictions about details of the microcircuit. We focus on features apparently incompatible with the model, in particular non-linear patterns in Purkinje cell simple-spike firing. Analysis of these patterns suggests the following three conclusions. (i) It is important to establish whether they can be observed during task-related behaviour. (ii) Highly non-linear models based on these patterns are unlikely to be universal, because they would be incompatible with the (approximately) linear nature of floccular function. (iii) The control tasks for which these models are computationally suited need to be identified. At present, therefore, the adaptive filter remains a candidate model of at least some cerebellar microzones, and its evaluation suggests promising lines for future enquiry.

  19. Oxidative DNA damage background estimated by a system model of base excision repair

    SciTech Connect

    Sokhansanj, B A; Wilson, III, D M

    2004-05-13

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
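
    The equilibrium the abstract describes, endogenous damage production balanced by repair, can be illustrated with a one-compartment sketch. The rate constants below are invented placeholders, not the paper's fitted BER pathway parameters:

    ```python
    def steady_state_lesions(damage_rate, repair_rate_const, dt=0.01, t_end=200.0):
        """Integrate dN/dt = d - r*N (constant production minus first-order
        repair) and return the lesion count it settles to; analytically
        the steady state is N* = d / r."""
        n = 0.0
        for _ in range(int(t_end / dt)):
            n += dt * (damage_rate - repair_rate_const * n)
        return n

    # Hypothetical numbers: 1000 lesions produced per cell per day and a
    # repair rate constant of 20 per day give a steady state of 50
    # lesions/cell, i.e. on the low end ("at most ~100") discussed above.
    n_star = steady_state_lesions(damage_rate=1000.0, repair_rate_const=20.0)
    ```

    The point of the sketch is that the background level is set by the ratio of damage to repair rates, which is why uncertainty in either rate propagates directly into the estimated background.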

  20. Oxidative DNA damage background estimated by a system model of base excision repair.

    PubMed

    Sokhansanj, Bahrad A; Wilson, David M

    2004-08-01

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level, based on measuring 8-oxoguanine lesions as a biomarker, have led to estimates that vary over three to four orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our findings show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.

  1. Wind adaptive modeling of transmission lines using minimum description length

    NASA Astrophysics Data System (ADS)

    Jaw, Yoonseok; Sohn, Gunho

    2017-03-01

    Transmission lines are moving objects whose positions are dynamically affected by wind-induced conductor motion while they are acquired by airborne laser scanners. This wind effect results in a noisy distribution of laser points, which often hinders accurate representation of transmission lines and thus leads to various types of modeling errors. This paper presents a new method for complete 3D transmission line model reconstruction in the framework of inner and across span analysis. Notably, the proposed method is capable of indirectly estimating, through a linear regression analysis, the noise scale that corrupts the quality of laser observations under different wind speeds. In the inner span analysis, individual transmission line models of each span are evaluated based on Minimum Description Length theory, and erroneous transmission line segments are subsequently replaced by precise transmission line models with a wind-adaptive noise scale estimated. In the subsequent across span analysis, detecting the precise start and end positions of the transmission line models, known as the Points of Attachment, is the key issue for correcting partial modeling errors as well as refining transmission line models. Finally, geometric and topological completion of the transmission line models is achieved over the entire network. A performance evaluation was conducted over 138.5 km of corridor data. In a modest wind condition, the results demonstrate that the proposed method can improve non-wind-adaptive initial models, which have an average success rate of 48%, to complete transmission line models with success rates between 85% and 99.5%, with a positional accuracy (root-mean-square error) of 9.55 cm for the transmission line models and 28 cm for the Points of Attachment.
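
    The Minimum Description Length criterion used for span evaluation can be illustrated with a toy model-selection problem: MDL trades goodness of fit against model complexity, so a more complex model is selected only when it reduces the residual enough to pay for its extra parameters. The scoring form below (a k/2 · log n parameter penalty plus a Gaussian data cost) is one common MDL approximation, and the data are synthetic:

    ```python
    import math

    def mdl_score(rss, n, k):
        """Two-part MDL approximation: data encoding cost + parameter cost."""
        return 0.5 * n * math.log(rss / n) + 0.5 * k * math.log(n)

    def fit_constant(ys):
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)           # residual sum of squares

    def fit_line(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        b = sxy / sxx
        a = my - b * mx
        return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

    xs = list(range(20))
    ys = [0.5 * x + ((-1) ** x) * 0.05 for x in xs]   # clearly linear + small "noise"
    score_const = mdl_score(fit_constant(ys), len(ys), k=1)
    score_line = mdl_score(fit_line(xs, ys), len(ys), k=2)
    best = "line" if score_line < score_const else "constant"
    ```

    In the paper's setting the candidate models are catenary-like line segments rather than polynomials, but the selection principle, lower total description length wins, is the same.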

  2. A model for the emergence of adaptive subsystems.

    PubMed

    Dopazo, H; Gordon, M B; Perazzo, R; Risau-Gusman, S

    2003-01-01

    We investigate the interaction of learning and evolution in a changing environment. A stable learning capability is regarded as an emergent adaptive system evolved by natural selection of genetic variants. We consider the evolution of an asexual population. Each genotype can have 'fixed' and 'flexible' alleles. The former express themselves as synaptic connections that remain unchanged during ontogeny and the latter as synapses that can be adjusted through a learning algorithm. Evolution is modelled using genetic algorithms and the changing environment is represented by two optimal synaptic patterns that alternate a fixed number of times during the 'life' of the individuals. The amplitude of the change is related to the Hamming distance between the two optimal patterns and the rate of change to the frequency with which both exchange roles. This model is an extension of that of Hinton and Nowlan in which the fitness is given by a probabilistic measure of the Hamming distance to the optimum. We find that two types of evolutionary pathways are possible depending upon how difficult (costly) it is to cope with the changes of the environment. In one case the population loses the learning ability, and the individuals inherit fixed synapses that are optimal in only one of the environmental states. In the other case a flexible subsystem emerges that allows the individuals to adapt to the changes of the environment. The model helps us to understand how an adaptive subsystem can emerge as the result of the tradeoff between the exploitation of a congenital structure and the exploration of the adaptive capabilities practised by learning.

  3. Integrated modeling of the GMT laser tomography adaptive optics system

    NASA Astrophysics Data System (ADS)

    Piatrou, Piotr

    2014-08-01

    Laser Tomography Adaptive Optics (LTAO) is one of the adaptive optics systems planned for the Giant Magellan Telescope (GMT). End-to-end simulation tools that are able to cope with the complexity and computational burden of the AO systems to be installed on extremely large telescopes such as the GMT prove to be an integral part of the GMT LTAO system development endeavors. SL95, the Fortran 95 Simulation Library, is one of the software tools successfully used for the LTAO system end-to-end simulations. The goal of the SL95 project is to provide a complete set of generic, richly parameterized mathematical models for key elements of segmented telescope wavefront control systems, including both active and adaptive optics, as well as models for atmospheric turbulence, extended light sources like Laser Guide Stars (LGS), light propagation engines and closed-loop controllers. The library is implemented as a hierarchical collection of classes capable of mutual interaction, which allows one to assemble complex wavefront control system configurations with multiple interacting control channels. In this paper we demonstrate the SL95 capabilities by building an integrated end-to-end model of the GMT LTAO system with 7 control channels: LGS tomography with Adaptive Secondary and on-instrument deformable mirrors, tip-tilt and vibration control, LGS stabilization, LGS focus control, truth sensor-based dynamic noncommon path aberration rejection, pupil position control, and a SLODAR-like embedded turbulence profiler. The rich parameterization of the SL95 classes allows one to build detailed error budgets, propagating through the system multiple errors and perturbations such as turbulence-, telescope-, telescope misalignment-, segment phasing error-, and non-common path-induced aberrations, sensor noises, deformable mirror-to-sensor mis-registration, vibration, and temporal errors. We present a short description of the SL95 architecture, as well as a sample GMT LTAO system simulation.

  4. Background error covariance modelling for convective-scale variational data assimilation

    NASA Astrophysics Data System (ADS)

    Petrie, R. E.

    An essential component in data assimilation is the background error covariance matrix (B). This matrix regularizes the ill-posed data assimilation problem, describes the confidence of the background state and spreads information. Since the B-matrix is too large to represent explicitly, it must be modelled. In variational data assimilation it is essentially a climatological approximation of the true covariances. Such a conventional covariance model additionally relies on the imposition of balance conditions. A toy model which is derived from the Euler equations (by making appropriate simplifications and introducing tuneable parameters) is used as a convective-scale system to investigate these issues. Its behaviour is shown to exhibit large-scale geostrophic and hydrostatic balance while permitting small-scale imbalance. A control variable transform (CVT) approach to modelling the B-matrix, where the control variables are taken to be the normal modes (NM) of the linearized model, is investigated. This approach is attractive for convective-scale covariance modelling as it allows for unbalanced as well as appropriately balanced relationships. Although the NM-CVT is not applied to a data assimilation problem directly, it is shown to be a viable approach to convective-scale covariance modelling. A new mathematically rigorous method to incorporate flow-dependent error covariances with the otherwise static B-matrix estimate is also proposed. This is an extension to the reduced rank Kalman filter (RRKF) where its Hessian singular vector calculation is replaced by an ensemble estimate of the covariances, and is known as the ensemble RRKF (EnRRKF). Ultimately it is hoped that together the NM-CVT and the EnRRKF would improve the predictability of small-scale features in convective-scale weather forecasting through the relaxation of inappropriate balance and the inclusion of flow-dependent covariances.
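
    The control variable transform idea, never forming B explicitly but implying B = U U^T through a transform U applied to uncorrelated control variables, can be sketched with a tiny smoothing transform. The 5-point domain and the weights are arbitrary illustrations, not the thesis's normal-mode transform:

    ```python
    def implied_covariance(U):
        """B = U U^T: the background error covariance implied by a
        control variable transform U (rows: model space, cols: control space)."""
        n, m = len(U), len(U[0])
        return [[sum(U[i][k] * U[j][k] for k in range(m)) for j in range(n)]
                for i in range(n)]

    # A hypothetical 1-D smoothing transform: each model-space point is a
    # weighted blend of its control-space neighbours, so the implied B
    # correlates neighbouring grid points.
    n = 5
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        U[i][i] = 0.8
        if i > 0:
            U[i][i - 1] = 0.3
        if i < n - 1:
            U[i][i + 1] = 0.3
    B = implied_covariance(U)
    ```

    By construction B is symmetric and positive semi-definite, which is exactly why the CVT formulation is attractive: valid covariance structure comes for free from the choice of transform.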

  5. Model reference adaptive control with an augmented error signal

    NASA Technical Reports Server (NTRS)

    Monopoli, R. V.

    1974-01-01

    It is shown how globally stable model reference adaptive control systems may be designed when one has access to only the plant's input and output signals. Controllers for single input-single output, nonlinear, nonautonomous plants are developed based on Lyapunov's direct method and the Meyer-Kalman-Yacubovich lemma. Derivatives of the plant output are not required, but are replaced by filtered derivative signals. An augmented error signal replaces the error normally used, which is defined as the difference between the model and plant outputs. However, global stability is assured in the sense that the normally used error signal approaches zero asymptotically.
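
    The paper's Lyapunov-based design with an augmented error is more involved than can be shown briefly; as a hedged illustration of the model-reference idea itself, here is the simpler gradient ("MIT rule") scheme for a first-order plant with unknown gain, where a controller parameter is adapted until the plant output tracks the reference model. All gains, rates, and the plant itself are arbitrary choices, not the paper's scheme:

    ```python
    def mrac_mit_rule(k_plant=2.0, gamma=0.5, r=1.0, dt=0.01, t_end=60.0):
        """First-order plant  y' = -y + k*u,  reference model  ym' = -ym + r.
        Feedforward control u = theta*r; the MIT rule  theta' = -gamma*e*ym
        drives the tracking error e = y - ym toward zero (ideal theta = 1/k)."""
        y = ym = theta = 0.0
        for _ in range(int(t_end / dt)):
            u = theta * r
            e = y - ym
            y += dt * (-y + k_plant * u)       # plant (gain k unknown to designer)
            ym += dt * (-ym + r)               # reference model
            theta += dt * (-gamma * e * ym)    # parameter adaptation
        return theta, abs(y - ym)

    theta, err = mrac_mit_rule()
    ```

    For the assumed plant gain k = 2 the ideal feedforward gain is theta = 0.5, and the adaptation converges there as the tracking error decays.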

  6. Model reference adaptive control using only input and output signals

    NASA Technical Reports Server (NTRS)

    Monopoli, R. V.

    1973-01-01

    It is shown how globally stable model reference adaptive control systems may be designed using only the plant's input and output signals. Controllers for single input-single output, nonlinear, nonautonomous plants are developed based on Liapunov's direct method and the Meyer-Kalman-Yacubovich lemma. Filtered derivatives of the plant output replace pure derivatives which are normally required in these systems. An augmented error signal replaces the error previously used which is the difference between the model and plant outputs. However, global stability is assured in the sense that this difference approaches zero asymptotically.

  7. Digital adaptive controllers using second order models with transport lag

    NASA Technical Reports Server (NTRS)

    Joshi, S.; Kaufman, H.

    1975-01-01

    Design of a discrete optimal regulator requires the a priori knowledge of a mathematical model for the system of interest. Because a second-order model with transport lag is very amenable to control computations and because this type of model has been used previously to represent certain high order single input-single output processes, an adaptive controller was designed based upon adjustment of controls computed for such a model. An extended Kalman filter was utilized for tracking the model parameters which were subsequently used to update a set of optimal control gains. Favorable results were obtained in applying this procedure to the control of several examples including a ninth order nonlinear process.

  8. Modeling Pancreatic Endocrine Cell Adaptation and Diabetes in the Zebrafish

    PubMed Central

    Maddison, Lisette A.; Chen, Wenbiao

    2017-01-01

    Glucose homeostasis is an important element of energy balance and is conserved in organisms from fruit fly to mammals. Central to the control of circulating glucose levels in vertebrates are the endocrine cells of the pancreas, particularly the insulin-producing β-cells and the glucagon producing α-cells. A feature of α- and β-cells is their plasticity, an ability to adapt, in function and number as a response to physiological and pathophysiological conditions of increased hormone demand. The molecular mechanisms underlying these adaptive responses that maintain glucose homeostasis are incompletely defined. The zebrafish is an attractive model due to the low cost, high fecundity, and amenability to genetic and compound screens, and mechanisms governing the development of the pancreatic endocrine cells are conserved between zebrafish and mammals. Post development, both β- and α-cells of zebrafish display plasticity as in mammals. Here, we summarize the studies of pancreatic endocrine cell adaptation in zebrafish. We further explore the utility of the zebrafish as a model for diabetes, a relevant topic considering the increase in diabetes in the human population. PMID:28184214

  9. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    DOE PAGES

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; ...

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
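
    The ensemble Kalman analysis step at the heart of such a scheme can be sketched for a single scalar grid value. This is a stochastic (perturbed-observation) EnKF rather than the LETKF variant ADAPT uses, and all numbers are toy values, but it shows how the ensemble's own spread sets the weight given to an observation:

    ```python
    import random

    def enkf_update(ensemble, y_obs, obs_var, seed=1):
        """Stochastic ensemble Kalman filter analysis step for one scalar
        state observed directly (a toy stand-in for a localized LETKF)."""
        rng = random.Random(seed)
        n = len(ensemble)
        xm = sum(ensemble) / n
        # The ensemble's sample variance supplies the forecast uncertainty
        pxx = sum((x - xm) ** 2 for x in ensemble) / (n - 1)
        gain = pxx / (pxx + obs_var)          # Kalman gain for H = identity
        # Perturbed observations keep the analysis spread statistically consistent
        return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
                for x in ensemble]

    rng = random.Random(0)
    prior = [rng.gauss(0.0, 1.0) for _ in range(200)]       # forecast ensemble
    posterior = enkf_update(prior, y_obs=2.0, obs_var=0.01)
    post_mean = sum(posterior) / len(posterior)
    ```

    With an accurate observation (small obs_var) the gain approaches one and the analysis ensemble collapses toward the observed value, while its residual spread reflects the observation error.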

  10. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

    The stability and adaptability of models for near infrared spectra qualitative analysis were studied. Separate modeling can significantly improve the stability and adaptability of a model, but its ability to improve adaptability is limited. Joint modeling can improve not only the adaptability of the model but also its stability; at the same time, compared to separate modeling, it can shorten the modeling time, reduce the modeling workload, extend the term of validity of the model, and improve modeling efficiency. The model adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances the recognition effect. The model stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, and the method has good application value.

  11. A model for the distribution of material generating the soft X-ray background

    NASA Technical Reports Server (NTRS)

    Snowden, S. L.; Cox, D. P.; Mccammon, D.; Sanders, W. T.

    1990-01-01

    The observational evidence relating to the soft X-ray diffuse background is discussed, and a simple model for its source and spatial structure is presented. In this simple model with one free parameter, the observed 1/4 keV X-ray intensity originates as thermal emission from a uniform hot plasma filling a cavity in the neutral material of the Galactic disk which contains the sun. Variations in the observed X-ray intensity are due to variations in the extent of the emission volume and therefore the emission measure of the plasma. The model reproduces the observed negative correlation between X-ray intensity and H I column density and predicts reasonable values for interstellar medium parameters.

  12. A Model of the Soft X-ray Background as a Blast Wave Viewed from Inside

    NASA Technical Reports Server (NTRS)

    Edgar, R. J.; Cox, D. P.

    1984-01-01

    The suggestion that the soft X-ray background arises in part because the Sun is inside a large supernova blast wave was examined with models of spherical blast waves. The models can produce quantitative fits to both surface brightnesses and energy band ratios when t = 10^5, E_0 = 5 x 10^50 ergs, and n_0 ≈ 0.004 cm^-3. The models are generalized by varying the relative importance of factors such as thermal conduction, Coulomb heating of electrons, and external pressure; by allowing the explosions to occur in preexisting cavities with steep density gradients; and by examining the effects of large obstructions or other anisotropies in the ambient medium.

  13. A model for massless higher spin field interacting with a geometrical background

    NASA Astrophysics Data System (ADS)

    Bandelloni, Giuseppe

    2015-04-01

    We study a very general four-dimensional field theory model describing the dynamics of a massless higher spin N symmetric tensor field particle interacting with a geometrical background. This model is invariant under the action of an extended linear diffeomorphism. We investigate the consistency of the equations of motion, and the highest spin degrees of freedom are extracted by means of a set of covariant constraints. Moreover, the highest spin equations of motion (and in general all the highest spin field 1-PI irreducible Green functions) are invariant under a chain of transformations induced by a set of N - 2 Ward operators, while the auxiliary fields' equations of motion spoil this symmetry. The first steps toward a quantum extension of the model are discussed on the basis of algebraic field theory. Technical aspects are reported in the Appendices; in particular, one of them is devoted to illustrating the spin-2 case.

  14. A minimal empirical model for the cosmic far-infrared background anisotropies

    NASA Astrophysics Data System (ADS)

    Wu, Hao-Yi; Doré, Olivier

    2017-01-01

    Cosmic far-infrared background (CFIRB) probes unresolved dusty star-forming galaxies across cosmic time and is complementary to ultraviolet and optical observations of galaxy evolution. In this work, we interpret the observed CFIRB anisotropies using an empirical model based on resolved galaxies in ultraviolet and optical surveys. Our model includes stellar mass functions, the star-forming main sequence, and dust attenuation. We find that the commonly used linear Kennicutt relation between infrared luminosity and star-formation rate over-produces the observed CFIRB amplitudes. The observed CFIRB requires that low-mass galaxies have lower infrared luminosities than expected from the Kennicutt relation, implying that low-mass galaxies have lower dust content and weaker dust attenuation. Our results demonstrate that CFIRB not only provides a stringent consistency check for galaxy evolution models but also constrains the dust content of low-mass galaxies.

  15. Modeling the zeta potential of silica capillaries in relation to the background electrolyte composition.

    PubMed

    Berli, Claudio L A; Piaggio, María V; Deiber, Julio A

    2003-05-01

    A theoretical relation between the zeta potential of silica capillaries and the composition of the background electrolyte (BGE) is presented for use in capillary zone electrophoresis (CZE). This relation is derived on the basis of the Poisson-Boltzmann equation, considering the equilibrium dissociation of silanol groups at the capillary wall as the mechanism of charge generation. The resulting model involves the relevant physicochemical parameters of the BGE-capillary interface. Special attention is paid to the characterization of the BGE, which can be either salt or buffer solutions, or both. The model is successfully applied to electroosmotic flow (EOF) experimental data of different aqueous solutions, covering a wide range of pH and ionic strength. Numerical predictions are also presented, showing the capability of the model to quantify the EOF, the control of which is relevant to improving analyte separation performance in CZE.
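
    The model's core ingredients, silanol dissociation setting the surface charge and the Poisson-Boltzmann (Grahame) relation tying charge to potential, can be sketched as a root-finding problem for the surface potential. The site density, pKa, pH and concentration below are rough illustrative numbers, not the paper's fitted parameters:

    ```python
    import math

    # Physical constants (SI)
    E_CH = 1.602e-19          # elementary charge, C
    KB_T = 4.11e-21           # k_B * T at 298 K, J
    EPS = 78.5 * 8.854e-12    # permittivity of water, F/m
    N_A = 6.022e23

    def sigma_sites(psi, gamma_s=8e18, pKa=6.8, pH=7.0):
        """Surface charge from SiOH <-> SiO- + H+ dissociation; the local H+
        concentration is Boltzmann-enhanced at a negatively charged surface."""
        h_surf = 10.0 ** (-pH) * math.exp(-E_CH * psi / KB_T)
        ka = 10.0 ** (-pKa)
        return -E_CH * gamma_s * ka / (ka + h_surf)

    def sigma_grahame(psi, c_molar=0.01):
        """Grahame equation for a symmetric 1:1 electrolyte."""
        n0 = 1000.0 * N_A * c_molar        # ions per m^3
        return math.sqrt(8.0 * EPS * KB_T * n0) * math.sinh(E_CH * psi / (2.0 * KB_T))

    def surface_potential(lo=-0.3, hi=0.0):
        """Bisection on f(psi) = sigma_grahame(psi) - sigma_sites(psi)."""
        f = lambda p: sigma_grahame(p) - sigma_sites(p)
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    psi = surface_potential()
    ```

    The solution is the (negative) surface potential at which the charge demanded by the diffuse layer equals the charge the silanol chemistry can supply; the zeta potential is then identified with (or related to) this value.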

  16. Comparison of wavefront sensor models for simulation of adaptive optics.

    PubMed

    Wu, Zhiwen; Enmark, Anita; Owner-Petersen, Mette; Andersen, Torben

    2009-10-26

    The new generation of extremely large telescopes will have adaptive optics. Due to the complexity and cost of such systems, it is important to simulate their performance before construction. Most systems planned will have Shack-Hartmann wavefront sensors. Different mathematical models are available for simulation of such wavefront sensors. The choice of wavefront sensor model strongly influences computation time and simulation accuracy. We have studied the influence of three wavefront sensor models on performance calculations for a generic adaptive optics (AO) system designed for K-band operation of a 42 m telescope. The performance of this AO system has been investigated both for reduced wavelengths and for reduced r(0) in the K band. The telescope AO system was designed for K-band operation, that is, both the subaperture size and the actuator pitch were matched to a fixed value of r(0) in the K band. We find that under certain conditions, such as investigating limiting guide star magnitude for large Strehl ratios, a full model based on Fraunhofer propagation to the subimages is significantly more accurate. It does, however, require long computation times. The shortcomings of simpler models, based on either direct use of the average wavefront tilt over the subapertures for actuator control, or use of the average tilt to move a precalculated point spread function in the subimages, are most pronounced for studies of system limitations to operating parameter variations. In the long run, efficient parallelization techniques may be developed to overcome the problem.
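
    The simplest of the model classes compared here, using the average wavefront tilt over each subaperture, reduces to averaging phase gradients. A minimal sketch (square subapertures on a toy phase screen; the geometry and units are invented):

    ```python
    def average_tilts(phase, sub):
        """Split a square phase screen (2-D list of phase values) into
        sub x sub pixel subapertures and return the mean x/y phase gradient
        (finite differences) in each: the 'average tilt' sensor model."""
        n = len(phase)
        tilts = []
        for r0 in range(0, n, sub):
            for c0 in range(0, n, sub):
                gx = [phase[r][c + 1] - phase[r][c]
                      for r in range(r0, r0 + sub)
                      for c in range(c0, c0 + sub - 1)]
                gy = [phase[r + 1][c] - phase[r][c]
                      for r in range(r0, r0 + sub - 1)
                      for c in range(c0, c0 + sub)]
                tilts.append((sum(gx) / len(gx), sum(gy) / len(gy)))
        return tilts

    # A pure tip/tilt wavefront, phase = a*x + b*y: every subaperture should
    # report the same per-pixel gradient (a, b).
    a, b, n = 0.2, -0.1, 8
    screen = [[a * c + b * r for c in range(n)] for r in range(n)]
    tilts = average_tilts(screen, sub=4)
    ```

    The full Fraunhofer model the paper favours instead propagates the complex field in each subaperture to a spot image and centroids it, which captures diffraction and scintillation effects this gradient average ignores.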

  17. Modelling the flux distribution function of the extragalactic gamma-ray background from dark matter annihilation

    NASA Astrophysics Data System (ADS)

    Feyereisen, Michael R.; Ando, Shin'ichiro; Lee, Samuel K.

    2015-09-01

    The one-point function (i.e., the isotropic flux distribution) is a complementary method to (anisotropic) two-point correlations in searches for a gamma-ray dark matter annihilation signature. Using analytical models of structure formation and dark matter halo properties, we compute the gamma-ray flux distribution due to annihilations in extragalactic dark matter halos, as it would be observed by the Fermi Large Area Telescope. Combining the central limit theorem and Monte Carlo sampling, we show that the flux distribution takes the form of a narrow Gaussian of `diffuse' light, with an `unresolved point source' power-law tail as a result of bright halos. We argue that this background due to dark matter constitutes an irreducible and significant background component for point-source annihilation searches with galaxy clusters and dwarf spheroidal galaxies, modifying the predicted signal-to-noise ratio. A study of astrophysical backgrounds to this signal reveals that the shape of the total gamma-ray flux distribution is very sensitive to the contribution of a dark matter component, allowing us to forecast promising one-point upper limits on the annihilation cross-section. We show that by using the flux distribution at only one energy bin, one can probe the canonical cross-section required for explaining the relic density, for dark matter of masses around tens of GeV.
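
    The construction described here, a Poisson-distributed number of sources per sky pixel with power-law fluxes summed into a total pixel flux, can be mimicked with a toy Monte Carlo. The flux units, power-law slope and mean source count are arbitrary choices, not the paper's halo model:

    ```python
    import math
    import random

    def sample_pixel_fluxes(n_pixels, mean_sources, alpha, f_min, seed=0):
        """One-point flux distribution: each pixel sums the fluxes of a
        Poisson number of sources drawn from dN/dF ~ F^(-alpha), F >= f_min."""
        rng = random.Random(seed)
        fluxes = []
        for _ in range(n_pixels):
            # Poisson sample via Knuth's multiplication method
            L, k, p = math.exp(-mean_sources), 0, 1.0
            while True:
                p *= rng.random()
                if p <= L:
                    break
                k += 1
            # Inverse-CDF sampling of the power-law flux for each source
            total = sum(f_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
                        for _ in range(k))
            fluxes.append(total)
        return fluxes

    fluxes = sample_pixel_fluxes(n_pixels=4000, mean_sources=50, alpha=3.5, f_min=1.0)
    mean_flux = sum(fluxes) / len(fluxes)
    # Analytic expectation: mean_sources * f_min * (alpha-1)/(alpha-2)
    ```

    The many faint sources produce the near-Gaussian core (by the central limit theorem), while occasional bright draws populate the power-law tail, which is the structure of P(F) the abstract describes.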

  18. Modelling the flux distribution function of the extragalactic gamma-ray background from dark matter annihilation

    SciTech Connect

    Feyereisen, Michael R.; Ando, Shin'ichiro; Lee, Samuel K. E-mail: s.ando@uva.nl

    2015-09-01

    The one-point function (i.e., the isotropic flux distribution) is a complementary method to (anisotropic) two-point correlations in searches for a gamma-ray dark matter annihilation signature. Using analytical models of structure formation and dark matter halo properties, we compute the gamma-ray flux distribution due to annihilations in extragalactic dark matter halos, as it would be observed by the Fermi Large Area Telescope. Combining the central limit theorem and Monte Carlo sampling, we show that the flux distribution takes the form of a narrow Gaussian of 'diffuse' light, with an 'unresolved point source' power-law tail as a result of bright halos. We argue that this background due to dark matter constitutes an irreducible and significant background component for point-source annihilation searches with galaxy clusters and dwarf spheroidal galaxies, modifying the predicted signal-to-noise ratio. A study of astrophysical backgrounds to this signal reveals that the shape of the total gamma-ray flux distribution is very sensitive to the contribution of a dark matter component, allowing us to forecast promising one-point upper limits on the annihilation cross-section. We show that by using the flux distribution at only one energy bin, one can probe the canonical cross-section required for explaining the relic density, for dark matter of masses around tens of GeV.

  19. A Comparison of Three Programming Models for Adaptive Applications

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswas, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to performance gains. However, it may also suffer from poor spatial locality of physically distributed shared data on large numbers of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates a more balanced result for our application.

  20. Prediction of Conductivity by Adaptive Neuro-Fuzzy Model

    PubMed Central

    Akbarzadeh, S.; Arof, A. K.; Ramesh, S.; Khanmirzaei, M. H.; Nor, R. M.

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference from basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The model was validated with four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that modeling with an adaptive neuro-fuzzy system is powerful enough for the prediction of conductivity. PMID:24658582

  1. A heuristic model on the role of plasticity in adaptive evolution: plasticity increases adaptation, population viability and genetic variation.

    PubMed

    Gomez-Mestre, Ivan; Jovani, Roger

    2013-11-22

    An ongoing new synthesis in evolutionary theory is expanding our view of the sources of heritable variation beyond point mutations of fixed phenotypic effects to include environmentally sensitive changes in gene regulation. This expansion of the paradigm is necessary given ample evidence for a heritable ability to alter gene expression in response to environmental cues. In consequence, single genotypes are often capable of adaptively expressing different phenotypes in different environments, i.e. are adaptively plastic. We present an individual-based heuristic model to compare the adaptive dynamics of populations composed of plastic or non-plastic genotypes under a wide range of scenarios where we modify environmental variation, mutation rate and costs of plasticity. The model shows that adaptive plasticity contributes to the maintenance of genetic variation within populations, reduces bottlenecks when facing rapid environmental changes and confers an overall faster rate of adaptation. In fluctuating environments, plasticity is favoured by selection and maintained in the population. However, if the environment stabilizes and costs of plasticity are high, plasticity is reduced by selection, leading to genetic assimilation, which could result in species diversification. More broadly, our model shows that adaptive plasticity is a common consequence of selection under environmental heterogeneity, and hence a potentially common phenomenon in nature. Thus, taking adaptive plasticity into account substantially extends our view of adaptive evolution.
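
    A minimal numeric sketch of why plasticity pays off in a fluctuating environment: under Gaussian stabilizing selection, a genotype that shifts its expressed phenotype partway toward the current optimum always sits closer to it than a fixed genotype. The fitness function, the fractional plastic shift and the sinusoidal environment below are illustrative assumptions, not the authors' full individual-based model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Compare mean fitness of plastic vs non-plastic genotypes in a
# fluctuating environment. Fitness is a Gaussian function of the
# mismatch between the expressed phenotype and the current optimum.
n, gens, plast = 1000, 200, 0.7
geno = rng.normal(0, 1, n)              # genetically encoded phenotype

fit_plastic, fit_fixed = [], []
for t in range(gens):
    optimum = 2.0 * np.sin(2 * np.pi * t / 50)        # fluctuating optimum
    pheno_plastic = geno + plast * (optimum - geno)   # partial plastic shift
    fit_plastic.append(np.exp(-(pheno_plastic - optimum) ** 2).mean())
    fit_fixed.append(np.exp(-(geno - optimum) ** 2).mean())

print(np.mean(fit_plastic), np.mean(fit_fixed))
```

    In this sketch the plastic population maintains higher mean fitness throughout the environmental cycle, the ingredient behind the reduced bottlenecks and faster adaptation reported in the abstract.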

  2. Adaptive-filter models of the cerebellum: computational analysis.

    PubMed

    Dean, Paul; Porrill, John

    2008-01-01

    Many current models of the cerebellar cortical microcircuit are equivalent to an adaptive filter using the covariance learning rule. The adaptive filter is a development of the original Marr-Albus framework that deals naturally with continuous time-varying signals, thus addressing the issue of 'timing' in cerebellar function, and it can be connected in a variety of ways to other parts of the system, consistent with the microzonal organization of cerebellar cortex. However, its computational capacities are not well understood. Here we summarise the results of recent work that has focused on two of its intrinsic properties. First, an adaptive filter seeks to decorrelate its (mossy fibre) inputs from a (climbing fibre) teaching signal. This procedure can be used both for sensory processing, e.g. removal of interference from sensory signals, and for learning accurate motor commands, by decorrelating an efference copy of those commands from a sensory signal of inaccuracy. As a model of the cerebellum the adaptive filter thus forms a natural link between events at the cellular level, such as forms of synaptic plasticity and the learning rules they embody, and intelligent behaviour at the system level. Secondly, it has been shown that the covariance learning rule enables the filter to handle input and intrinsic noise optimally. Such optimality may underlie the recently described role of the cerebellum in producing accurate smooth pursuit eye movements in the face of sensory noise. Moreover, it has the consequence of driving most input weights to very small values, consistent with experimental data that many parallel-fibre synapses are normally silent. The effectiveness of silent synapses can only be altered by LTP, so learning tasks depending on a reduction of Purkinje cell firing require the synapses to be embedded in a second, inhibitory pathway from parallel fibre to Purkinje cell. This pathway and the appropriate climbing-fibre related plasticity have been described.
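
    The decorrelation idea can be sketched as an LMS-style adaptive filter whose weight changes are proportional to the correlation between each input and the teaching signal; learning stops once the teaching signal is decorrelated from the inputs. The signal model below (white-noise "mossy fibre" inputs and a fixed linear interference pathway) is an illustrative assumption, not a biophysical cerebellar model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Adaptive filter with the covariance (decorrelation) learning rule:
# each weight moves in proportion to the correlation between its input
# ("parallel fibre") and the teaching signal ("climbing fibre").
n_taps, lr, n_steps = 8, 0.01, 5000
w = np.zeros(n_taps)
x_buf = np.zeros(n_taps)
true_w = rng.normal(size=n_taps)        # unknown interference pathway

errs = []
for t in range(n_steps):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.normal()             # mossy-fibre input sample
    interference = true_w @ x_buf       # predictable component of teacher
    teach = interference - w @ x_buf    # residual teaching signal
    w += lr * teach * x_buf             # decorrelate inputs from teacher
    errs.append(teach**2)

# After learning, the teaching signal is nearly decorrelated from inputs.
print(np.mean(errs[:100]), np.mean(errs[-100:]))
```

    The same loop models motor learning if `interference` is read as the sensory consequence of an inaccurate command and `w @ x_buf` as the filter's correction.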

  3. Predicting the distribution of vulnerable marine ecosystems in the deep sea using presence-background models

    NASA Astrophysics Data System (ADS)

    Vierod, Alexander D. T.; Guinotte, John M.; Davies, Andrew J.

    2014-01-01

    In 2006 the United Nations called on states to implement measures to prevent significant adverse impacts to vulnerable marine ecosystems (VMEs) in the deep sea. It has been widely recognised that a major limitation to the effective application of these measures to date is uncertainty regarding the distribution of VMEs. Conservationists, researchers, resource managers and governmental bodies are increasingly turning to predictive species distribution models (SDMs) to identify the potential presence of species in areas that have not been sampled. In particular, the development of robust 'presence-background' model algorithms has accelerated the application of these techniques for working with presence-only species data. This has allowed scientists to exploit the large amounts of species data held in global biogeographic databases. Despite improvements in model algorithms, environmental data and species presences, there are still limitations to the reliability of these techniques, especially in poorly studied areas such as the deep sea. Recent studies have begun to address a key limitation, the quality of data, by using multibeam echosounder surveys and species data from video surveys to acquire high-resolution data. Whilst these data are often amongst the very best that can be acquired, the surveys are highly localised, often targeted towards known VME-containing areas, and are very expensive and time consuming. It is financially prohibitive to survey whole regions or ocean basins using these techniques, so alternative cost-effective approaches are required. Here, we review 'presence-background' SDMs in the context of those studies conducted in the deep sea. The issues of sampling bias, spatial autocorrelation, spatial scale, model evaluation and validation are considered in detail, and reference is made to recent developments in species distribution modelling literature. Further information is provided on how these approaches are being used to influence ocean

  4. Model observer design for detecting multiple abnormalities in anatomical background images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Park, Subok

    2016-03-01

    As psychophysical studies are resource-intensive to conduct, model observers are commonly used to assess and optimize medical imaging quality. Existing model observers were typically designed to detect at most one signal. However, in clinical practice, there may be multiple abnormalities in a single image set (e.g., multifocal and multicentric breast cancers (MMBC)), which can impact treatment planning. Prevalence of signals can be different across anatomical regions, and human observers do not know the number or location of signals a priori. As new imaging techniques have the potential to improve multiple-signal detection (e.g., digital breast tomosynthesis may be more effective for diagnosis of MMBC than planar mammography), image quality assessment approaches addressing such tasks are needed. In this study, we present a model-observer mechanism to detect multiple signals in the same image dataset. To handle the high dimensionality of images, a novel implementation of partial least squares (PLS) was developed to estimate different sets of efficient channels directly from the images. Without any prior knowledge of the background or the signals, the PLS channels capture interactions between signals and the background which provide discriminant image information. Corresponding linear decision templates are employed to generate both image-level and location-specific scores on the presence of signals. Our preliminary results show that the model observer using PLS channels, compared to our first attempts with Laguerre-Gauss channels, can achieve high performance with a reasonably small number of channels, and the optimal design of the model observer may vary as the tasks of clinical interest change.

  5. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902
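
    A crude sketch of similarity-based adaptive wiring: repeatedly link the most similar pair of unconnected nodes and let linked nodes partially synchronize, so clusters of near-identical states attract further links. The scalar state variable, averaging rule and network size below are invented for illustration and are not the model's actual dynamical units.

```python
import numpy as np

rng = np.random.default_rng(7)

# Adaptive attachment: connect the unconnected pair with the most
# similar internal states, then let the linked pair synchronize.
n, n_links = 60, 150
state = rng.normal(size=n)
adj = np.zeros((n, n), dtype=bool)

for _ in range(n_links):
    best, pair = np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            if not adj[i, j]:
                d = abs(state[i] - state[j])
                if d < best:
                    best, pair = d, (i, j)
    i, j = pair
    adj[i, j] = adj[j, i] = True
    # linked nodes partially synchronize, reshaping future attachments
    state[i] = state[j] = 0.5 * (state[i] + state[j])

degree = adj.sum(axis=1)
print(degree.max(), degree.mean())
```

    Because synchronized nodes share identical states, they keep attracting new links, producing the heterogeneous, community-structured degree distributions the abstract describes.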

  6. Prequential Analysis of Complex Data with Adaptive Model Reselection.

    PubMed

    Clarke, Jennifer; Clarke, Bertrand

    2009-11-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias-variance tradeoff in statistical modeling.
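
    The convex-combination step can be sketched with two stand-in "model averages" and a weight re-estimated at each time step from accumulated squared prediction errors. The two predictors and the inverse-error weighting rule below are assumptions of this illustration, not the ACAP updating scheme itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two simple stand-in "model averages": a running mean and a
# last-value predictor, combined convexly at each prequential step.
y = np.cumsum(rng.normal(size=500)) + 0.1 * np.arange(500)

w, eps = 0.5, 1e-9
se_a = se_b = 0.0
combined_errors = []
for t in range(5, len(y)):
    pred_a = y[:t].mean()                    # model average A
    pred_b = y[t - 1]                        # model average B
    pred = w * pred_a + (1 - w) * pred_b     # convex combination
    combined_errors.append((y[t] - pred) ** 2)
    se_a += (y[t] - pred_a) ** 2
    se_b += (y[t] - pred_b) ** 2
    # Inverse-error weighting keeps the combination convex: w in [0, 1].
    w = (1 / (se_a + eps)) / (1 / (se_a + eps) + 1 / (se_b + eps))

print(np.mean(combined_errors))
```

    On this drifting random walk the weight shifts toward the last-value predictor, illustrating how prequential scoring can adapt the combination as data complexity reveals itself.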

  7. The Role of Scale and Model Bias in ADAPT's Photospheric Estimation

    SciTech Connect

    Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas; Henney, Carl

    2015-05-20

    The Air Force Data Assimilative Photospheric flux Transport model (ADAPT) is a magnetic flux transport model based on the Worden-Harvey (WH) model. ADAPT is used to provide global maps of the Sun's photospheric magnetic field. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation to Kalman filtering, is used in calculating the ADAPT model ensembles.
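
    A textbook stochastic EnKF analysis step, shown here on a toy two-component state observed through a linear operator. This is the generic algorithm in its standard form, not ADAPT's implementation; the state, observation operator and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, obs, H, obs_var):
    """Stochastic EnKF analysis step (textbook form).

    ensemble: (n_members, n_state), obs: (n_obs,), H: (n_obs, n_state).
    """
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)           # state anomalies
    HX = ensemble @ H.T
    HA = HX - HX.mean(axis=0)                      # obs-space anomalies
    P_HT = X.T @ HA / (n_members - 1)              # cross-covariance
    S = HA.T @ HA / (n_members - 1) + obs_var * np.eye(len(obs))
    K = P_HT @ np.linalg.inv(S)                    # Kalman gain
    perturbed = obs + rng.normal(0, obs_var**0.5, (n_members, len(obs)))
    return ensemble + (perturbed - HX) @ K.T

# Estimate a 2-component state from noisy observations of its first entry.
truth = np.array([1.0, -2.0])
H = np.array([[1.0, 0.0]])
ens = rng.normal(0, 2.0, (200, 2))
for _ in range(20):
    obs = truth @ H.T + rng.normal(0, 0.1, 1)
    ens = enkf_update(ens, obs, H, 0.01)
print(ens.mean(axis=0))
```

    The ensemble mean of the observed component converges to the truth while the unobserved component is constrained only through sample cross-covariances, the same Monte Carlo mechanism the EnKF uses for photospheric flux maps.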

  8. Internal stress field at Mount Vesuvius: A model for background seismicity at a central volcano

    NASA Astrophysics Data System (ADS)

    de Natale, Giuseppe; Petrazzuoli, Stefano M.; Troise, Claudia; Pingue, Folco; Capuano, Paolo

    2000-07-01

    We propose a model to explain the background seismicity occurring at Somma-Vesuvius in its present, mostly quiescent period. A finite element procedure has been used to simulate the stress field due to gravitational body forces in an axisymmetric volcano characterized by a central high-rigidity anomaly. Results emphasize the important effect of axial high rigidity, which concentrates at its borders stresses resulting from the gravitational load of the volcanic edifice, as well as external (regional) stresses. The joint effect of the gravitational loading and of the presence of the anomaly produces stresses very close to or above the critical rupture threshold. The observed spatial concentrations of seismicity and moment release correlate well with peaks of computed maximum shear stress. Seismicity is then interpreted as due to small stress perturbations concentrated around the high-rigidity core and added to a system already close to the failure threshold. This model can explain the widely observed occurrence of background seismicity at central volcanoes worldwide.

  9. FPGA implementation for real-time background subtraction based on Horprasert model.

    PubMed

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments and offers low degradation (produced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan-3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W.
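
    The Horprasert model separates brightness distortion (how much the background colour must be scaled to match the pixel) from chromaticity distortion (the residual colour difference), which is what enables the shadow detection mentioned in the abstract. A minimal per-pixel sketch with fixed illustrative thresholds might look as follows; the original model instead normalizes both distortions by per-pixel statistics learned from a training sequence.

```python
import numpy as np

def horprasert_classify(pixel, mean, tol_bright=(0.6, 1.4), tol_chroma=10.0):
    """Classify one RGB pixel against a background mean colour.

    Simplified Horprasert-style test: brightness distortion `alpha`
    scales the background colour; chromaticity distortion `cd` is the
    residual. Thresholds here are illustrative constants, not the
    paper's statistically normalized ones.
    """
    denom = float(mean @ mean) or 1.0
    alpha = float(pixel @ mean) / denom          # brightness distortion
    cd = np.linalg.norm(pixel - alpha * mean)    # chromaticity distortion
    if cd > tol_chroma:
        return "foreground"
    if tol_bright[0] <= alpha <= tol_bright[1]:
        return "background"
    return "shadow" if alpha < tol_bright[0] else "highlight"

bg = np.array([100.0, 120.0, 90.0])
print(horprasert_classify(np.array([101.0, 119.0, 91.0]), bg))  # background
print(horprasert_classify(np.array([50.0, 60.0, 45.0]), bg))    # shadow
print(horprasert_classify(np.array([30.0, 200.0, 10.0]), bg))   # foreground
```

    Only a dot product and a vector norm per pixel are needed, which is part of what makes the model amenable to a fixed-point FPGA pipeline.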

  10. FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    PubMed Central

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J.; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded architecture on FPGA which is able to extract the background in resource-limited environments and offers low degradation (produced by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan-3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resources of a XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz, reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, with an estimated power consumption of 5.76 W. PMID:22368487

  11. CBSD Version II component models of the IR celestial background. Technical report

    SciTech Connect

    Kennealy, J.P.; Glaudell, G.A.

    1990-12-07

    CBSD Version II addresses the development of algorithms and software which implement realistic models of all the primary celestial background phenomenologies, including solar system, galactic, and extra-galactic features. During 1990, the CBSD program developed and refined IR scene generation models for the zodiacal emission, thermal emission from asteroids and planets, and the galactic point source background. Chapters in this report are devoted to each of those areas. Ongoing extensions to the point source module for extended source descriptions of nebulae and HII regions are briefly discussed. Treatment of small galaxies will also be a natural extension of the current CBSD point source module. Although no CBSD module yet exists for interstellar IR cirrus, MRC has been working closely with the Royal Aerospace Establishment in England to achieve a data-base understanding of cirrus fractal characteristics. The CBSD modules discussed in Chapters 2, 3, and 4 are all now operational and have been employed to generate a significant variety of scenes. CBSD scene generation capability has been well accepted by both the IR astronomy community and the DOD user community and directly supports the SDIO SSGM program.

  12. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling, and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high fidelity geometry based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., that facilitated rapid finite element analysis, sizing study and weight optimization. The high quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications to the structural design of a conventional aircraft and a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for design and development of unconventional aircraft, the techniques for generation of accurate geometry and internal sub-structure and the automated interface with the high fidelity analysis codes could also be applied towards the design of vehicles for the NASA Exploration and Space Science Mission projects.

  13. Adaptive modeling of compression hearing aids: Convergence and tracking issues

    NASA Astrophysics Data System (ADS)

    Parsa, Vijay; Jamieson, Donald

    2003-10-01

    Typical measurements of electroacoustic performance of hearing aids include frequency response, compression ratio, threshold and time constants, equivalent input noise, and total harmonic distortion. These measurements employ artificial test signals and do not relate well to perceptual indices of hearing aid performance. Speech-based electroacoustic measures provide a means to quantify the real-world performance of hearing aids and have been shown to correlate better with perceptual data. This paper investigates the application of a system identification paradigm for deriving the speech-based measures, where the hearing aid is modeled as a linear time-varying system and its response to speech stimuli is predicted using a linear adaptive filter. The performance of three adaptive filtering algorithms, viz. the Least Mean Square (LMS), Normalized LMS (NLMS), and the Affine Projection Algorithm (APA), was investigated using simulated and real digital hearing aids. In particular, the convergence and tracking behavior of these algorithms in modeling compression hearing aids was thoroughly investigated for a range of compression ratio and threshold parameters, and attack and release time constants. Our results show that the NLMS and APA algorithms are capable of modeling digital hearing aids under a variety of compression conditions, and are suitable for deriving speech-based metrics of hearing aid performance.
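
    The NLMS identification step the paper relies on can be sketched for a toy linear "hearing aid": drive an unknown FIR response with a noise stimulus and adapt a filter from the input/output pair alone. The 4-tap response, step size and signal length below are arbitrary illustration values; a real compression aid is time-varying, which is exactly what the tracking experiments in the abstract probe.

```python
import numpy as np

rng = np.random.default_rng(4)

# NLMS identification of an unknown FIR "hearing aid" response from
# its input/output signals (toy linear stand-in for the real device).
true_h = np.array([0.5, -0.3, 0.2, 0.1])
n, taps, mu, eps = 4000, 4, 0.5, 1e-8

x = rng.normal(size=n)                       # input "speech" stimulus
d = np.convolve(x, true_h)[:n]               # measured aid output

w = np.zeros(taps)
for t in range(taps, n):
    u = x[t - taps + 1:t + 1][::-1]          # most recent samples first
    e = d[t] - w @ u                         # prediction error
    w += mu * e * u / (u @ u + eps)          # normalized LMS update

print(np.round(w, 3))
```

    The normalization by `u @ u` is what makes NLMS robust to the level fluctuations of speech-like stimuli, compared with plain LMS.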

  14. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.

  15. Parallel adaptive discontinuous Galerkin approximation for thin layer avalanche modeling

    NASA Astrophysics Data System (ADS)

    Patra, A. K.; Nichita, C. C.; Bauer, A. C.; Pitman, E. B.; Bursik, M.; Sheridan, M. F.

    2006-08-01

    This paper describes the development of highly accurate adaptive discontinuous Galerkin schemes for the solution of the equations arising from a thin layer type model of debris flows. Such flows have wide applicability in the analysis of avalanches induced by many natural calamities, e.g. volcanoes, earthquakes, etc. These schemes are coupled with special parallel solution methodologies to produce a simulation tool capable of very high-order numerical accuracy. The methodology successfully replicates cold rock avalanches at Mount Rainier, Washington and hot volcanic particulate flows at Colima Volcano, Mexico.

  16. Discrete model reference adaptive control with an augmented error signal

    NASA Technical Reports Server (NTRS)

    Ionescu, T.; Monopoli, R.

    1975-01-01

    A method for designing discrete model reference adaptive control systems when one has access to only the plant's input and output signals is given. Controllers for single-input, single-output, nonlinear, nonautonomous plants are developed via Liapunov's second method. Anticipative values of the plant output are not required, but are replaced by signals easily obtained from a low-pass filter operating on the plant's output. The augmented error signal method is employed, ensuring finally that the normally used error signal also approaches zero asymptotically.

  17. Model-free adaptive control of advanced power plants

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L.; Wang, Qiang

    2015-08-18

    A novel 3-Input-3-Output (3×3) Model-Free Adaptive (MFA) controller with a set of artificial neural networks as part of the controller is introduced. A 3×3 MFA control system using the inventive 3×3 MFA controller is described to control key process variables including Power, Steam Throttle Pressure, and Steam Temperature of boiler-turbine-generator (BTG) units in conventional and advanced power plants. Those advanced power plants may comprise Once-Through Supercritical (OTSC) Boilers, Circulating Fluidized-Bed (CFB) Boilers, and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  18. An adaptive multigrid model for hurricane track prediction

    NASA Technical Reports Server (NTRS)

    Fulton, Scott R.

    1993-01-01

    This paper describes a simple numerical model for hurricane track prediction which uses a multigrid method to adapt the model resolution as the vortex moves. The model is based on the modified barotropic vorticity equation, discretized in space by conservative finite differences and in time by a Runge-Kutta scheme. A multigrid method is used to solve an elliptic problem for the streamfunction at each time step. Nonuniform resolution is obtained by superimposing uniform grids of different spatial extent; these grids move with the vortex as it moves. Preliminary numerical results indicate that the local mesh refinement allows accurate prediction of the hurricane track with substantially less computer time than required on a single uniform grid.
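
    The multigrid elliptic solve at the heart of the scheme can be illustrated with a textbook V-cycle for a 1D Poisson problem. Gauss-Seidel smoothing, injection restriction and linear prolongation below are generic textbook choices, not the paper's 2D streamfunction solver or its moving nonuniform grids.

```python
import numpy as np

def relax(u, f, h, sweeps=3):
    # Gauss-Seidel smoothing for -u'' = f with Dirichlet boundaries
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    u = relax(u, f, h)
    if len(u) <= 3:
        return u
    # residual of -u'' = f, restricted to the coarse grid by injection
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), 2 * h)
    # prolong the coarse-grid correction by linear interpolation
    e = np.zeros_like(u)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (e[:-2:2] + e[2::2])
    return relax(u + e, f, h)

n = 129                                    # 2^7 + 1 points on [0, 1]
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)           # -u'' = f  =>  u = sin(pi x)
u = np.zeros(n)
for _ in range(10):
    u = v_cycle(u, f, h)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)                                 # down at the discretization error
```

    Each V-cycle reduces the algebraic error by a roughly constant factor independent of the grid size, which is why the multigrid streamfunction solve stays cheap as resolution is refined around the vortex.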

  19. An Evolutionary Dynamics Model Adapted to Eusocial Insects

    PubMed Central

    van Oudenhove, Louise; Cerdá, Xim; Bernstein, Carlos

    2013-01-01

    This study aims to better understand the evolutionary processes allowing species coexistence in eusocial insect communities. We develop a mathematical model that applies adaptive dynamics theory to the evolutionary dynamics of eusocial insects, focusing on the colony as the unit of selection. The model links long-term evolutionary processes to ecological interactions among colonies and seasonal worker production within the colony. Colony population dynamics is defined by both worker production and colony reproduction. Random mutations occur in strategies, and mutant colonies enter the community. The interactions of colonies at the ecological timescale drive the evolution of strategies at the evolutionary timescale by natural selection. This model is used to study two specific traits in ants: worker body size and the degree of collective foraging. For both traits, trade-offs in competitive ability and other fitness components allow us to determine the conditions in which selection becomes disruptive. Our results illustrate that asymmetric competition underpins diversity in ant communities. PMID:23469162

  20. An evolutionary dynamics model adapted to eusocial insects.

    PubMed

    van Oudenhove, Louise; Cerdá, Xim; Bernstein, Carlos

    2013-01-01

    This study aims to better understand the evolutionary processes allowing species coexistence in eusocial insect communities. We develop a mathematical model that applies adaptive dynamics theory to the evolutionary dynamics of eusocial insects, focusing on the colony as the unit of selection. The model links long-term evolutionary processes to ecological interactions among colonies and seasonal worker production within the colony. Colony population dynamics is defined by both worker production and colony reproduction. Random mutations occur in strategies, and mutant colonies enter the community. The interactions of colonies at the ecological timescale drive the evolution of strategies at the evolutionary timescale by natural selection. This model is used to study two specific traits in ants: worker body size and the degree of collective foraging. For both traits, trade-offs in competitive ability and other fitness components allow us to determine the conditions in which selection becomes disruptive. Our results illustrate that asymmetric competition underpins diversity in ant communities.

  1. Multiple Model Adaptive Estimation Techniques for Adaptive Model-Based Robot Control

    DTIC Science & Technology

    1989-12-01

    Proportional Derivative (PD) or Proportional Integral Derivative (PID) feedback controller [6]. The PD or PID controllers feed back the measured... Unfortunately, as the speed of the trajectory increases or the configuration of the robot changes, the PD or PID controllers cannot maintain track along the desired trajectory. The main reason for poor tracking is that the PD and PID controllers were developed based on a simplified linear dynamics model

  2. Correlations of control variables for horizontal background error covariance modeling on cubed-sphere grid

    NASA Astrophysics Data System (ADS)

    Kwun, Jihye; Song, Hyo-Jong; Park, Jong-Im

    2013-04-01

    The background error covariance matrix is very important for a variational data assimilation system, determining how the information from observed variables is spread to unobserved variables and spatial points. The full representation of the matrix is impossible because of its huge size, so the matrix is constructed implicitly by means of a variable transformation. It is assumed that the forecast errors in the chosen control variables are statistically independent. We used a cubed-sphere geometry based on the spectral element method, which is better suited to parallel application. In cubed-sphere grids, the grid points are located at Gauss-Legendre-Lobatto points on each local element of the 6 faces on the sphere. Two stages of transformation were used in this study. The first is the variable transformation from model variables to a set of control variables whose errors are assumed to be uncorrelated, which was developed on the cubed sphere using a Galerkin method. Winds are decomposed into a rotational part and a divergent part by introducing the stream function and velocity potential as control variables. The dynamical constraint for balance between mass and wind was imposed by applying a linear balance operator. The second is a spectral transformation to remove the remaining spatial correlation. The bases for the spectral transform were generated for the cubed-sphere grid. 6-hr difference fields from shallow water equation (SWE) model runs initialized by the variational data assimilation system were used to obtain forecast error statistics. In the horizontal background error covariance modeling, a regression analysis of the control variables was performed to define the unbalanced variables as the difference between the full and correlated parts. The regression coefficients were used to remove the remaining correlations between variables.

  3. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
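    The RMF construction itself is compact enough to sketch: fitness is a linear slope toward a reference genotype plus i.i.d. roughness, and local maxima can be counted by brute force on small landscapes. The parameters below are arbitrary illustration values, not values from the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Rough Mount Fuji fitness on binary genotypes of length L:
#   F(g) = -c * d(g, g_ref) + eta(g)
# i.e. a deterministic slope toward a reference peak plus i.i.d. roughness.
# L, c, and the unit noise scale are arbitrary illustration choices.
L, c = 8, 0.5
genotypes = list(itertools.product([0, 1], repeat=L))
g_ref = (1,) * L
eta = {g: rng.standard_normal() for g in genotypes}

def fitness(g):
    d = sum(a != b for a, b in zip(g, g_ref))   # Hamming distance to the peak
    return -c * d + eta[g]

def neighbors(g):
    for i in range(L):
        yield g[:i] + (1 - g[i],) + g[i + 1:]

# A genotype is a local maximum if it beats every single-mutant neighbor
n_maxima = sum(all(fitness(g) > fitness(h) for h in neighbors(g))
               for g in genotypes)
```

For large c the landscape approaches a smooth single-peaked "Fuji"; for c = 0 it reduces to an uncorrelated landscape with many maxima, which is the tunable-ruggedness idea.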

  4. Distributed control in adaptive optics: deformable mirror and turbulence modeling

    NASA Astrophysics Data System (ADS)

    Ellenbroek, Rogier; Verhaegen, Michel; Doelman, Niek; Hamelinck, Roger; Rosielle, Nick; Steinbuch, Maarten

    2006-06-01

    Future large optical telescopes require adaptive optics (AO) systems whose deformable mirrors (DM) have ever more degrees of freedom. This paper describes advances that are made in a project aimed to design a new AO system that is extendible to meet tomorrow's specifications. Advances on the mechanical design are reported in a companion paper [6272-75], whereas this paper discusses the controller design aspects. The numerical complexity of controller designs often used for AO scales with the fourth power in the diameter of the telescope's primary mirror. For future large telescopes this will undoubtedly become a critical aspect. This paper demonstrates the feasibility of solving this issue with a distributed controller design. A distributed framework will be introduced in which each actuator has a separate processor that can communicate with a few direct neighbors. First, the DM will be modeled and shown to be compatible with the framework. Then, adaptive turbulence models that fit the framework will be shown to adequately capture the spatio-temporal behavior of the atmospheric disturbance, constituting a first step towards a distributed optimal control. Finally, the wavefront reconstruction step is fitted into the distributed framework such that the computational complexity for each processor increases only linearly with the telescope diameter.

  5. Adaptive Mesh Refinement in Reactive Transport Modeling of Subsurface Environments

    NASA Astrophysics Data System (ADS)

    Molins, S.; Day, M.; Trebotich, D.; Graves, D. T.

    2015-12-01

    Adaptive mesh refinement (AMR) is a numerical technique for locally adjusting the resolution of computational grids. AMR makes it possible to superimpose levels of finer grids on the global computational grid in an adaptive manner, allowing for more accurate calculations locally. AMR codes rely on the fundamental concept that the solution can be computed in different regions of the domain with different spatial resolutions. AMR codes have been applied to a wide range of problems, including (but not limited to): fully compressible hydrodynamics, astrophysical flows, cosmological applications, combustion, blood flow, heat transfer in nuclear reactors, and land ice and atmospheric models for climate. In subsurface applications, in particular reactive transport modeling, AMR may be particularly useful in accurately capturing concentration gradients (hence, reaction rates) that develop in localized areas of the simulation domain. Accurate evaluation of reaction rates is critical in many subsurface applications. In this contribution, we will discuss recent applications that bring to bear AMR capabilities on reactive transport problems from the pore scale to the flood plain scale.
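    A gradient-based refinement flag is the simplest version of the criterion described above. A schematic sketch, in which the front shape and the threshold are made up:

```python
import numpy as np

# Gradient-flagging sketch of an AMR refinement criterion: mark cells near a
# sharp concentration front for refinement, leave the rest on the coarse
# grid. The front profile and the threshold are made-up illustration values.
x = np.linspace(0.0, 1.0, 101)
conc = 1.0 / (1.0 + np.exp((x - 0.5) / 0.02))   # sharp front at x = 0.5
grad = np.abs(np.gradient(conc, x))
flagged = grad > 5.0                             # cells to refine

frac_refined = float(flagged.mean())
```

Only the small fraction of cells around the front gets flagged, which is why AMR pays off for localized reaction zones.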

  6. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
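    The core of the MDFT is the sliding-DFT recurrence, which updates a single monitored frequency bin in O(1) per sample. A sketch of that recurrence, cross-checked against a direct DFT of the final window (bin index, window length, and test signal are arbitrary):

```python
import numpy as np

# Sliding ("moving") DFT: update one monitored frequency bin per sample in
# O(1), instead of recomputing a full DFT over the window each time.
# Window length N, bin k, and the test signal are arbitrary choices.
N, k = 64, 5
w = np.exp(2j * np.pi * k / N)        # twiddle factor for bin k

x = np.random.default_rng(2).standard_normal(300)
S = 0.0 + 0.0j                        # running DFT value for bin k
buf = np.zeros(N)                     # circular buffer of the last N samples
for n, xn in enumerate(x):
    old = buf[n % N]                  # sample leaving the window (x[n-N], or 0)
    buf[n % N] = xn
    S = w * (S + xn - old)            # sliding-DFT recurrence

# Cross-check against a direct DFT of the final window
start = len(x) % N
window = np.concatenate([buf[start:], buf[:start]])
err = abs(S - np.fft.fft(window)[k])
```

Monitoring a handful of such bins is what lets the algorithm track the frequency-domain profile of the input and output signals continuously.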

  7. Dynamic modeling and adaptive control for space stations

    NASA Technical Reports Server (NTRS)

    Ih, C. H. C.; Wang, S. J.

    1985-01-01

    Of all large space structural systems, space stations present a unique challenge and requirement to advanced control technology. Their operations require control system stability over an extremely broad range of parameter changes and high levels of disturbance. During shuttle docking the system mass may suddenly increase by more than 100%, and during station assembly the mass may vary even more drastically. These changes, coupled with the inherent dynamic model uncertainties associated with large space structural systems, require highly sophisticated control systems that can grow as the stations evolve and cope with the uncertainties and time-varying elements to maintain the stability and pointing of the space stations. The aspects of space station operational properties are first examined, including configurations, dynamic models, shuttle docking contact dynamics, solar panel interaction, and load reduction to yield a set of system models and conditions. A model reference adaptive control algorithm, along with an inner-loop plant augmentation design, for controlling the space stations under severe operational conditions of shuttle docking, excessive model parameter errors, and model truncation is then investigated. The instability problem caused by the zero-frequency rigid body modes and a proposed solution using plant augmentation are addressed. Two sets of sufficient conditions which guarantee global asymptotic stability for the space station systems are obtained.

  8. Adaptive control using neural networks and approximate models.

    PubMed

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
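    The practical appeal of a model that is linear in the control input is that the tracking controller becomes an algebraic inversion. The toy sketch below assumes known closed-form f and g purely for illustration; in the paper's setting these would be neural-network approximations identified from data:

```python
import numpy as np

# Toy plant already in the control-affine ("linear in the control input")
# form  y(k+1) = f(y(k)) + g(y(k)) * u(k),  so the tracking controller is a
# simple algebraic inversion:  u(k) = (r(k+1) - f(y(k))) / g(y(k)).
# f and g below are made-up closed forms, not from the paper.
f = lambda y: 0.8 * np.sin(y)
g = lambda y: 1.5 + 0.5 * np.cos(y)      # bounded away from zero

ref = np.sin(0.1 * np.arange(50))        # reference trajectory
y = 0.0
errs = []
for k in range(len(ref) - 1):
    u = (ref[k + 1] - f(y)) / g(y)       # invert the linear-in-u model
    y = f(y) + g(y) * u                  # plant step
    errs.append(abs(y - ref[k + 1]))

max_err = max(errs)
```

With the exact NARMA model, solving for u at each step would instead require an iterative nonlinear solve, which is the computational burden the approximate models avoid.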

  9. Adaptive finite difference for seismic wavefield modelling in acoustic media.

    PubMed

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-08-05

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we introduce a novel scheme for implementing finite difference by introducing a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is able to maximise the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang's optimised finite difference scheme.
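    The band-limited fitting idea can be illustrated with an ordinary least-squares fit of second-derivative stencil coefficients over a chosen wavenumber band. Stencil width, band fraction, and spacing below are arbitrary, and the paper's wavelet-mapped, all-angle objective is more elaborate than this sketch:

```python
import numpy as np

# Fit 2nd-derivative finite-difference coefficients over a target wavenumber
# band instead of matching Taylor terms at k = 0 -- the adaptive idea of
# tailoring the operator to the band the source wavelet actually excites.
# Stencil half-width M, spacing h, and the band fraction are arbitrary.
M, h = 4, 1.0
kmax = 0.6 * np.pi / h                      # assumed usable band
k = np.linspace(1e-3, kmax, 200)

# A symmetric stencil applied to exp(ikx) gives
#   (2/h^2) * sum_m c_m * (cos(m*k*h) - 1),  which should approximate  -k^2
A = (2.0 / h**2) * (np.cos(np.outer(k, np.arange(1, M + 1)) * h) - 1.0)
c, *_ = np.linalg.lstsq(A, -k**2, rcond=None)

# Worst-case residual of the fitted operator inside the band
max_resid = float(np.max(np.abs(A @ c + k**2)))
```

Narrowing or widening the band (e.g. per local velocity and wavelet bandwidth) changes the fitted coefficients, which is the adaptivity the abstract describes.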

  10. Modelling interactions between mitigation, adaptation and sustainable development

    NASA Astrophysics Data System (ADS)

    Reusser, D. E.; Siabatto, F. A. P.; Garcia Cantu Ros, A.; Pape, C.; Lissner, T.; Kropp, J. P.

    2012-04-01

    Managing the interdependence of climate mitigation, adaptation and sustainable development requires a good understanding of the dominant socio-ecological processes that have determined the pathways in the past. Key variables include water and food availability, which depend on climate and overall ecosystem services, as well as energy supply and social, political and economic conditions. We present our initial steps to build a system dynamics model of nations that represents a minimal set of relevant variables of socio-ecological development. The ultimate goal of the modelling exercise is to derive possible future scenarios and test them for their compatibility with sustainability boundaries. Where the dynamics go beyond sustainability boundaries, intervention points in the dynamics can be sought.

  11. Adaptive finite difference for seismic wavefield modelling in acoustic media

    PubMed Central

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-01-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we introduce a novel scheme for implementing finite difference by introducing a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is able to maximise the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang’s optimised finite difference scheme. PMID:27491333

  12. Direct model reference adaptive control of robotic arms

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-01-01

    The results of controlling a PUMA 560 Robotic Manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Direct Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in the simulation. Satisfactory performance was achieved in both simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate the applicability of DMRAC to simulated dynamics of the shuttle RMS.
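    The flavor of model reference adaptive control can be conveyed with the classic first-order MIT-rule gain-adaptation example. This is a generic textbook sketch, not the paper's CGT-based DMRAC design:

```python
import math

# Classic first-order MIT-rule example of model reference adaptive control:
# plant y' = -y + k*u, reference model ym' = -ym + k0*r, control u = theta*r.
# The adjustable gain theta is driven toward k0/k = 0.5. All gains, the
# square-wave reference, and the Euler step are generic textbook choices.
dt, T = 0.01, 60.0
k, k0, gamma = 2.0, 1.0, 0.5
y = ym = theta = 0.0
for i in range(int(T / dt)):
    t = i * dt
    r = 1.0 if math.sin(t) >= 0.0 else -1.0   # square-wave reference
    e = y - ym                                # model-following error
    theta += dt * (-gamma * e * ym)           # MIT-rule gradient update
    y += dt * (-y + k * theta * r)            # plant with u = theta * r
    ym += dt * (-ym + k0 * r)                 # reference model

final_theta = theta
```

The controller needs no knowledge of the plant gain k; the adaptation law pulls the closed loop toward the reference model, which is the property exploited for robustness to sudden load changes.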

  13. Efficient Plasma Ion Source Modeling With Adaptive Mesh Refinement (Abstract)

    SciTech Connect

    Kim, J.S.; Vay, J.L.; Friedman, A.; Grote, D.P.

    2005-03-15

    Ion beam drivers for high energy density physics and inertial fusion energy research require high brightness beams, so there is little margin of error allowed for aberration at the emitter. Thus, accurate plasma ion source computer modeling is required to model the plasma sheath region and time-dependent effects correctly. A computer plasma source simulation module that can be used with a powerful heavy ion fusion code, WARP, or as a standalone code, is being developed. In order to treat the plasma sheath region accurately and efficiently, the module will have the capability of handling multiple spatial scale problems by using Adaptive Mesh Refinement (AMR). We will report on our progress on the project.

  14. Design optimization and background modeling of the HEX experiment on Chandrayaan-I

    NASA Astrophysics Data System (ADS)

    Sudhakar, Manju; Sreekumar, P.

    2012-11-01

    Spacecraft and their subsystem components are subject to a very hazardous radiation environment in both near-Earth and deep space orbits. Knowledge of the effects of this high energy particle and electromagnetic radiation is essential in designing sensors, electronic circuits and living habitats for humans in near Earth orbit, en route to and on the Moon and Mars. This paper discusses the use of Monte Carlo simulations to optimize system design, radiation source modeling, and determination of background in sensors due to galactic cosmic rays and radiation from the Moon. The results demonstrate the use of Monte Carlo particle transport toolkits to predict secondary production, determine dose rates in space and design required shielding geometry.

  15. Extended adiabatic blast waves and a model of the soft X-ray background. [interstellar matter

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Anderson, P. R.

    1981-01-01

    An analytical approximation is generated which follows the development of an adiabatic spherical blast wave in a homogeneous ambient medium of finite pressure. An analytical approximation is also presented for the electron temperature distribution resulting from coulomb collisional heating. The dynamical, thermal, ionization, and spectral structures are calculated for blast waves of energy E0 = 5 x 10^50 ergs in a hot low-density interstellar environment. A formula is presented for estimating the luminosity evolution of such explosions. The B and C bands of the soft X-ray background, it is shown, are reproduced by such a model explosion if the ambient density is about 0.000004 cm^-3, the blast radius is roughly 100 pc, and the solar system is located inside the shocked region. Evolution in a pre-existing cavity with a strong density gradient may, it is suggested, remove both the M band and OVI discrepancies.

  16. The reduced order model problem in distributed parameter systems adaptive identification and control

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.

    1980-01-01

    The research concerning the reduced-order model problem in distributed parameter systems is reported. The adaptive control strategy was chosen for investigation in the annular momentum control device. It is noted that, if there is no observation spillover and no model errors, an indirect adaptive control strategy can be globally stable. Recent publications concerning adaptive control are included.

  17. Durability-Based Design Guide for an Automotive Structural Composite: Part 2. Background Data and Models

    SciTech Connect

    Corum, J.M.; Battiste, R.L.; Brinkman, C.R.; Ren, W.; Ruggles, M.B.; Weitsman, Y.J.; Yahr, G.T.

    1998-02-01

    This background report is a companion to the document entitled ''Durability-Based Design Criteria for an Automotive Structural Composite: Part 1. Design Rules'' (ORNL-6930). The rules and the supporting material characterization and modeling efforts described here are the result of a U.S. Department of Energy Advanced Automotive Materials project entitled ''Durability of Lightweight Composite Structures.'' The overall goal of the project is to develop experimentally based, durability-driven design guidelines for automotive structural composites. The project is closely coordinated with the Automotive Composites Consortium (ACC). The initial reference material addressed by the rules and this background report was chosen and supplied by ACC. The material is a structural reaction injection-molded isocyanurate (urethane), reinforced with continuous-strand, swirl-mat, E-glass fibers. This report consists of 16 position papers, each summarizing the observations and results of a key area of investigation carried out to provide the basis for the durability-based design guide. The durability issues addressed include the effects of cyclic and sustained loadings, temperature, automotive fluids, vibrations, and low-energy impacts (e.g., tool drops and roadway kickups) on deformation, strength, and stiffness. The position papers cover these durability issues. Topics include (1) tensile, compressive, shear, and flexural properties; (2) creep and creep rupture; (3) cyclic fatigue; (4) the effects of temperature, environment, and prior loadings; (5) a multiaxial strength criterion; (6) impact damage and damage tolerance design; (7) stress concentrations; (8) a damage-based predictive model for time-dependent deformations; (9) confirmatory subscale component tests; and (10) damage development and growth observations.

  18. Cosmic Microwave Background Small-Scale Structure: II. Model of the Foreground Emission

    NASA Astrophysics Data System (ADS)

    Verschuur, Gerrit L.; Schmelz, Joan T.

    2017-01-01

    We have investigated the possibility that a population of galactic electrons may contribute to the small-scale structure in the cosmic microwave background (CMB) found by WMAP and PLANCK. Model calculations of free-free emission from these electrons which include beam dilution produce a nearly flat spectrum. Data at nine frequencies from 22 to 100 GHz were fit with the model, which resulted in excellent values of reduced chi squared. The model involves three unknowns: electron excitation temperature, angular extent of the sources of emission, and emission measure. The resulting temperatures agree with the observed temperatures of related HI features. The derived angular extent of the continuum sources corresponds well with the observed angular extent of HI filamentary structures in the areas under consideration. The derived emission measures can be used to determine the fractional ionization along the path lengths through the emitting volumes of space. Understanding the role that free-free emission plays in the small-scale features observed by PLANCK and WMAP should allow us to create better masks of the galactic foreground. Pursuing such discoveries may yet transform our understanding of the origins of the universe.

  19. Data-driven modeling of background and mine-related acidity and metals in river basins.

    PubMed

    Friedel, Michael J

    2014-01-01

    A novel application of self-organizing map (SOM) and multivariate statistical techniques is used to model the nonlinear interaction among basin mineral resources, mining activity, and surface-water quality. First, the SOM is trained using sparse measurements from 228 sample sites in the Animas River Basin, Colorado. The model performance is validated by comparing stochastic predictions of basin-alteration assemblages and mining activity at 104 independent sites. The SOM correctly predicts (>98%) the predominant type of basin hydrothermal alteration and presence (or absence) of mining activity. Second, application of the Davies-Bouldin criterion to k-means clustering of SOM neurons identified ten unique environmental groups. Median statistics of these groups define a nonlinear water-quality response along the spatiotemporal hydrothermal alteration-mining gradient. These results reveal that it is possible to differentiate along the continuum between background and mine-related inputs of acidity and metals, and they provide a basis for future research and empirical model development.
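    The model-selection step (Davies-Bouldin criterion applied to k-means clusters) can be sketched on synthetic data. The three made-up groups below stand in for distinct environmental clusters, and the SOM training stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in data: three well-separated groups of "samples" playing
# the role of distinct environmental clusters. This sketches only the
# k-means + Davies-Bouldin selection step, not the SOM itself.
X = np.vstack([rng.normal(c, 0.3, size=(60, 2))
               for c in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]])

def kmeans(X, k, iters=30, restarts=8):
    best = None
    for _ in range(restarts):                 # keep the lowest-SSE restart
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == i].mean(0) if np.any(labels == i)
                                else X[rng.integers(len(X))] for i in range(k)])
        sse = ((X - centers[labels]) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, labels, centers)
    return best[1], best[2]

def davies_bouldin(X, labels, centers):
    # Mean, over clusters, of the worst (s_i + s_j) / d(mu_i, mu_j) ratio;
    # lower values indicate compact, well-separated clusters.
    k = len(centers)
    s = np.array([np.linalg.norm(X[labels == i] - centers[i], axis=1).mean()
                  for i in range(k)])
    worst = [max((s[i] + s[j]) / np.linalg.norm(centers[i] - centers[j])
                 for j in range(k) if j != i) for i in range(k)]
    return float(np.mean(worst))

scores = {k: davies_bouldin(X, *kmeans(X, k)) for k in range(2, 7)}
best_k = min(scores, key=scores.get)
```

Minimizing the index over candidate k recovers the planted number of groups, analogous to the ten groups the study identified among SOM neurons.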

  20. Genomic resources for a model in adaptation and speciation research: characterization of the Poecilia mexicana transcriptome

    PubMed Central

    2012-01-01

    Background Elucidating the genomic basis of adaptation and speciation is a major challenge in natural systems with large quantities of environmental and phenotypic data, mostly because of the scarcity of genomic resources for non-model organisms. The Atlantic molly (Poecilia mexicana, Poeciliidae) is a small livebearing fish that has been extensively studied for evolutionary ecology research, particularly because this species has repeatedly colonized extreme environments in the form of caves and toxic hydrogen sulfide containing springs. In such extreme environments, populations show strong patterns of adaptive trait divergence and the emergence of reproductive isolation. Here, we used RNA-sequencing to assemble and annotate the first transcriptome of P. mexicana to facilitate ecological genomics studies in the future and aid the identification of genes underlying adaptation and speciation in the system. Description We provide the first annotated reference transcriptome of P. mexicana. Our transcriptome shows high congruence with other published fish transcriptomes, including that of the guppy, medaka, zebrafish, and stickleback. Transcriptome annotation uncovered the presence of candidate genes relevant in the study of adaptation to extreme environments. We describe general and oxidative stress response genes as well as genes involved in pathways induced by hypoxia or involved in sulfide metabolism. To facilitate future comparative analyses, we also conducted quantitative comparisons between P. mexicana from different river drainages. 106,524 single nucleotide polymorphisms were detected in our dataset, including potential markers that are putatively fixed across drainages. Furthermore, specimens from different drainages exhibited some consistent differences in gene regulation. Conclusions Our study provides a valuable genomic resource to study the molecular underpinnings of adaptation to extreme environments in replicated sulfide spring and cave environments. 

  1. Innate Response to Human Cancer Cells with or without IL-2 Receptor Common γ-Chain Function in NOD Background Mice Lacking Adaptive Immunity.

    PubMed

    Nishime, Chiyoko; Kawai, Kenji; Yamamoto, Takehiro; Katano, Ikumi; Monnai, Makoto; Goda, Nobuhito; Mizushima, Tomoko; Suemizu, Hiroshi; Nakamura, Masato; Murata, Mitsuru; Suematsu, Makoto; Wakui, Masatoshi

    2015-08-15

    Immunodeficient hosts exhibit high acceptance of xenogeneic or neoplastic cells mainly due to lack of adaptive immunity, although it still remains to be elucidated how the innate response affects the engraftment. IL-2R common γ-chain (IL-2Rγc) signaling is required for development of NK cells and a subset of dendritic cells producing IFN-γ. To better understand the innate response in the absence of adaptive immunity, we examined amounts of metastatic foci in the livers after intrasplenic transfer of human colon cancer HCT116 cells into NOD/SCID versus NOD/SCID/IL-2Rγc (null) (NOG) hosts. The intravital microscopic imaging of livers in the hosts depleted of NK cells and/or macrophages revealed that IL-2Rγc function critically contributes to elimination of cancer cells without the need for NK cells and macrophages. In the absence of IL-2Rγc, macrophages play a role in the defense against tumors despite the NOD Sirpa allele, which allows human CD47 to bind to the encoded signal regulatory protein α to inhibit macrophage phagocytosis of human cells. Analogous experiments using human pancreas cancer MIA PaCa-2 cells provided findings roughly similar to those from the experiments using HCT116 cells, except for the lack of suppression of metastases by macrophages in NOG hosts. Administration of mouse IFN-γ to NOG hosts appeared to partially compensate for the lack of IL-2Rγc-dependent elimination of transferred HCT116 cells. These results provide insights into the nature of the innate response in the absence of adaptive immunity, aiding in developing tumor xenograft models in experimental oncology.

  2. Modeling the distribution of Mg II absorbers around galaxies using background galaxies and quasars

    SciTech Connect

    Bordoloi, R.; Lilly, S. J.; Kacprzak, G. G.; Churchill, C. W.

    2014-04-01

    We present joint constraints on the distribution of Mg II absorption around high redshift galaxies obtained by combining two orthogonal probes, the integrated Mg II absorption seen in stacked background galaxy spectra and the distribution of parent galaxies of individual strong Mg II systems as seen in the spectra of background quasars. We present a suite of models that can be used to predict, for different two- and three-dimensional distributions, how the projected Mg II absorption will depend on a galaxy's apparent inclination, the impact parameter b, and the azimuthal angle between the projected vector to the line of sight and the projected minor axis. In general, we find that variations in the absorption strength with azimuthal angle provide much stronger constraints on the intrinsic geometry of the Mg II absorption than the dependence on the inclination of the galaxies. In addition to the clear azimuthal dependence in the integrated Mg II absorption that we reported earlier in Bordoloi et al., we show that strong equivalent width Mg II absorbers (W{sub r} (2796) ≥ 0.3 Å) are also asymmetrically distributed in azimuth around their host galaxies: 72% of the absorbers in Kacprzak et al., and 100% of the close-in absorbers within 35 kpc of the center of their host galaxies, are located within 50° of the host galaxy's projected semi-minor axis. It is shown that either composite models consisting of a simple bipolar component plus a spherical or disk component, or a single highly softened bipolar distribution, can well represent the azimuthal dependencies observed in both the stacked spectrum and quasar absorption-line data sets within 40 kpc. Simultaneously fitting both data sets, we find that in the composite model the bipolar cone has an opening angle of ∼100° (i.e., confined to within 50° of the disk axis) and contains about two-thirds of the total Mg II absorption in the system. The single softened cone model has an exponential fall-off with azimuthal

  3. A new adaptive data transfer library for model coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Liu, Li; Yang, Guangwen; Li, Ruizhe; Wang, Bin

    2016-06-01

    Data transfer means transferring data fields from a sender to a receiver. It is a fundamental and frequently used operation of a coupler. Most versions of state-of-the-art couplers currently use an implementation based on the point-to-point (P2P) communication of the message passing interface (MPI) (referred to as "P2P implementation" hereafter). In this paper, we reveal the drawbacks of the P2P implementation when the parallel decompositions of the sender and the receiver are different, including low communication bandwidth due to small message size, variable and high number of MPI messages, as well as network contention. To overcome these drawbacks, we propose a butterfly implementation for data transfer. Although the butterfly implementation outperforms the P2P implementation in many cases, it degrades the performance when the sender and the receiver have similar parallel decompositions or when the number of processes used for running models is small. To ensure data transfer with optimal performance, we design and implement an adaptive data transfer library that combines the advantages of both butterfly implementation and P2P implementation. As the adaptive data transfer library automatically uses the best implementation for data transfer, it outperforms the P2P implementation in many cases while it does not decrease the performance in any cases. Now, the adaptive data transfer library is open to the public and has been imported into the C-Coupler1 coupler for performance improvement of data transfer. We believe that other couplers can also benefit from this.
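    The butterfly pattern can be illustrated without MPI: in each of log2(P) stages, every process exchanges its accumulated chunks with the partner whose rank differs in one bit. A schematic simulation (P and the message accounting are illustrative; a real coupler would implement the stages with MPI sends and receives):

```python
import math

# Butterfly (hypercube) exchange among P simulated processes: in stage s,
# rank r swaps its accumulated chunks with partner r XOR 2**s, so every
# process holds all P chunks after log2(P) stages. Schematic only.
P = 8
data = {r: {r: f"chunk{r}"} for r in range(P)}

n_msgs = 0
for s in range(int(math.log2(P))):
    new = {r: dict(d) for r, d in data.items()}
    for r in range(P):
        partner = r ^ (1 << s)           # flip bit s to find the partner
        new[r].update(data[partner])     # receive the partner's accumulated set
        n_msgs += 1
    data = new

complete = all(len(d) == P for d in data.values())
```

Each process sends log2(P) = 3 messages here instead of the P - 1 = 7 of a naive point-to-point all-to-all, and each message carries an aggregated payload, which addresses the small-message bandwidth problem described above.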

  4. Adapting a weather forecast model for greenhouse gas simulation

    NASA Astrophysics Data System (ADS)

    Polavarapu, S. M.; Neish, M.; Tanguay, M.; Girard, C.; de Grandpré, J.; Gravel, S.; Semeniuk, K.; Chan, D.

    2015-12-01

    The ability to simulate greenhouse gases on the global domain is useful for providing boundary conditions for regional flux inversions, as well as for providing reference data for bias correction of satellite measurements. Given the existence of operational weather and environmental prediction models and assimilation systems at Environment Canada, it makes sense to use these tools for greenhouse gas simulations. In this work, we describe the adaptations needed to reasonably simulate CO2 with a weather forecast model. The main challenges were the implementation of a mass conserving advection scheme, and the careful implementation of a mixing ratio defined with respect to dry air. The transport of tracers through convection was also added, and the vertical mixing through the boundary layer was slightly modified. With all these changes, the model conserves CO2 mass well on the annual time scale, and the high resolution (0.9 degree grid spacing) permits a good description of synoptic scale transport. The use of a coupled meteorological/tracer transport model also permits an assessment of approximations needed in offline transport model approaches, such as the neglect of water vapour mass when computing a tracer mixing ratio with respect to dry air.
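    The mass-conservation requirement is why flux-form advection matters: each cell's update is a difference of interface fluxes, so the domain total telescopes. A 1D upwind sketch, with grid size, Courant number, and the initial tracer blob chosen arbitrarily:

```python
import numpy as np

# 1D periodic upwind advection in flux (conservative) form:
#   q_i <- q_i - c * (F_{i+1/2} - F_{i-1/2})  with  F_{i+1/2} = u * q_i (u > 0),
# so the domain total telescopes and tracer mass is conserved to round-off.
# Grid size, Courant number c = u*dt/dx, and the initial blob are arbitrary.
nx, c = 100, 0.5
q = np.exp(-0.5 * ((np.arange(nx) - 20.0) / 5.0) ** 2)
mass0 = q.sum()

for _ in range(200):
    q = q - c * (q - np.roll(q, 1))   # flux-difference update, periodic wrap

mass_err = abs(q.sum() - mass0) / mass0
```

A non-conservative (advective-form) update would drift in total mass over thousands of steps, which is unacceptable for annual-scale CO2 budgets.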

  5. A region-appearance-based adaptive variational model for 3D liver segmentation

    SciTech Connect

    Peng, Jialin; Dong, Fangfang; Chen, Yunmei; Kong, Dexing

    2014-04-15

    Purpose: Liver segmentation from computed tomography images is a challenging task owing to pixel intensity overlapping, ambiguous edges, and complex backgrounds. The authors address this problem with a novel active surface scheme, which minimizes an energy functional combining both edge- and region-based information. Methods: In this semiautomatic method, the evolving surface is principally attracted to strong edges but is facilitated by the region-based information where edge information is missing. As avoiding oversegmentation is the primary challenge, the authors take into account multiple features and appearance context information. Discriminative cues, such as multilayer consecutiveness and local organ deformation are also implicitly incorporated. Case-specific intensity and appearance constraints are included to cope with the typically large appearance variations over multiple images. Spatially adaptive balancing weights are employed to handle the nonuniformity of image features. Results: Comparisons and validations on difficult cases showed that the authors’ model can effectively discriminate the liver from adhering background tissues. Boundaries weak in gradient or with no local evidence (e.g., small edge gaps or parts with similar intensity to the background) were delineated without additional user constraint. With an average surface distance of 0.9 mm and an average volume overlap of 93.9% on the MICCAI data set, the authors’ model outperformed most state-of-the-art methods. Validations on eight volumes with different initial conditions had segmentation score variances mostly less than unity. Conclusions: The proposed model can efficiently delineate ambiguous liver edges from complex tissue backgrounds with reproducibility. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.

  6. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    SciTech Connect

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    Manganese (Mn) is an essential nutrient. Mn deficiency is associated with altered lipid (Kawano et al. 1987) and carbohydrate metabolism (Baly et al. 1984; Baly et al. 1985), abnormal skeletal cartilage development (Keen et al. 2000), decreased reproductive capacity, and brain dysfunction. Occupational and accidental inhalation exposures to aerosols containing high concentrations of Mn produce neurological symptoms with Parkinson-like characteristics in workers. At present, there is also concern about use of the manganese-containing compound, methylcyclopentadienyl manganese tricarbonyl (MMT), in unleaded gasoline as an octane enhancer. Combustion of MMT produces aerosols containing a mixture of manganese salts (Lynam et al. 1999). These Mn particulates may be inhaled at low concentrations by the general public in areas using MMT. Risk assessments for essential elements need to acknowledge that risks occur with either excesses or deficiencies and the presence of significant amounts of these nutrients in the body even in the absence of any exogenous exposures. With Mn there is an added complication, i.e., the primary risk is associated with inhalation while Mn is an essential dietary nutrient. Exposure standards for inhaled Mn will need to consider the substantial background uptake from normal ingestion. Andersen et al. (1999) suggested a generic approach for essential nutrient risk assessment. An acceptable exposure limit could be based on some ‘tolerable’ change in tissue concentration in normal and exposed individuals, i.e., a change somewhere from 10 to 25 % of the individual variation in tissue concentration seen in a large human population. A reliable multi-route, multi-species pharmacokinetic model would be necessary for the implementation of this type of dosimetry-based risk assessment approach for Mn. Physiologically-based pharmacokinetic (PBPK) models for various xenobiotics have proven valuable in contributing to a variety of chemical specific risk
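    The dosimetry idea in this abstract (a "tolerable" tissue increment on top of a large dietary background) can be sketched with a deliberately minimal one-compartment model. All rate constants and intakes below are hypothetical illustrations, not values from the Teeguarden et al. PBPK model:

```python
# Minimal one-compartment sketch: tissue Mn reflects a steady dietary
# background intake, and an inhalation exposure adds an increment on
# top of it. Units and rates are arbitrary illustrative choices.

def tissue_conc(diet_in, inhal_in, k_elim=0.1, days=365.0, dt=0.01):
    """Euler-integrate dC/dt = intake - k_elim * C to steady state."""
    c = 0.0
    for _ in range(int(days / dt)):
        c += (diet_in + inhal_in - k_elim * c) * dt
    return c

baseline = tissue_conc(diet_in=1.0, inhal_in=0.0)   # ingestion only
exposed = tissue_conc(diet_in=1.0, inhal_in=0.1)    # plus inhalation
rel_change = (exposed - baseline) / baseline         # fractional increment
```

    With these toy numbers the inhalation route raises the steady-state tissue concentration by 10% over background, the kind of fractional change the Andersen et al. (1999) approach would compare against population variability.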

  7. An adaptive radiation model for the origin of new gene functions

    SciTech Connect

    Francino, M. Pilar

    2004-10-18

    The evolution of new gene functions is one of the keys to evolutionary innovation. Most novel functions result from gene duplication followed by divergence. However, the models hitherto proposed to account for this process are not fully satisfactory. The classic model of neofunctionalization holds that the two paralogous gene copies resulting from a duplication are functionally redundant, such that one of them can evolve under no functional constraints and occasionally acquire a new function. This model lacks a convincing mechanism for the new gene copies to increase in frequency in the population and survive the mutational load expected to accumulate under neutrality, before the acquisition of the rare beneficial mutations that would confer new functionality. The subfunctionalization model has been proposed as an alternative way to generate genes with altered functions. This model also assumes that new paralogous gene copies are functionally redundant and therefore neutral, but it predicts that relaxed selection will affect both gene copies such that some of the capabilities of the parent gene will disappear in one of the copies and be retained in the other. Thus, the functions originally present in a single gene will be partitioned between the two descendant copies. However, although this model can explain increases in gene number, it does not really address the main evolutionary question, which is the development of new biochemical capabilities. Recently, a new concept has been introduced into the gene evolution literature which is most likely to help solve this dilemma. The key point is to allow for a period of natural selection for the duplication per se, before new function evolves, rather than considering gene duplication to be neutral as in the previous models. Here, I suggest a new model that draws on the advantage of postulating selection for gene duplication, and proposes that bursts of adaptive gene amplification in response to specific selection

  8. Towards a High Temporal Frequency Grass Canopy Thermal IR Model for Background Signatures

    NASA Technical Reports Server (NTRS)

    Ballard, Jerrell R., Jr.; Smith, James A.; Koenig, George G.

    2004-01-01

    In this paper, we present our first results towards understanding high temporal frequency thermal infrared response from a dense plant canopy and compare the application of our model, driven both by slowly varying, time-averaged meteorological conditions and by high frequency measurements of local and within canopy profiles of relative humidity and wind speed, to high frequency thermal infrared observations. Previously, we have employed three-dimensional ray tracing to compute the intercepted and scattered radiation fluxes and for final scene rendering. For the turbulent fluxes, we employed simple resistance models for latent and sensible heat with one-dimensional profiles of relative humidity and wind speed. Our modeling approach has proven successful in capturing the directional and diurnal variation in background thermal infrared signatures. We hypothesize that at these scales, where the model is typically driven by time-averaged, local meteorological conditions, the primary source of thermal variance arises from the spatial distribution of sunlit and shaded foliage elements within the canopy and the associated radiative interactions. In recent experiments, we have begun to focus on the high temporal frequency response of plant canopies in the thermal infrared at 1 second to 5 minute intervals. At these scales, we hypothesize turbulent mixing plays a more dominant role. Our results indicate that in the high frequency domain, the vertical profile of temperature change is tightly coupled to the within-canopy wind speed. In the results reported here, the canopy cools from the top down with increased wind velocities and heats from the bottom up at low wind velocities.

  9. A Universal Model of Giftedness--An Adaptation of the Munich Model

    ERIC Educational Resources Information Center

    Jessurun, J. H.; Shearer, C. B.; Weggeman, M. C. D. P.

    2016-01-01

    The Munich Model of Giftedness (MMG) by Heller and his colleagues, developed for the identification of gifted children, is adapted and expanded, with the aim of making it more universally usable as a model for the pathway from talents to performance. On the side of the talent-factors, the concept of multiple intelligences is introduced, and the…

  10. Turnaround Management Strategies: The Adaptive Model and the Constructive Model. ASHE 1983 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Chaffee, Ellen E.

    The use of two management strategies by 14 liberal arts and comprehensive colleges attempting to recover from serious financial decline during 1973-1976 were studied. The adaptive model of strategy, based on resource dependence, involves managing demands in order to satisfy critical-resource providers. The constructive model of strategy, based on…

  11. Adaptive Weibull Multiplicative Model and Multilayer Perceptron neural networks for dark-spot detection from SAR imagery.

    PubMed

    Taravat, Alireza; Oppelt, Natascha

    2014-12-02

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach eliminates the manual setting of the WMM filter parameters by developing an adaptive WMM model, a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies.
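    The core of dark-formation detection is an intensity threshold that adapts to each scene's statistics. The sketch below is only an illustrative stand-in for the adaptive WMM filter: it summarizes the backscatter distribution by a Gaussian mean and standard deviation rather than a Weibull fit, and the factor k is a hypothetical setting:

```python
import numpy as np

def dark_spot_mask(img, k=2.5):
    """Mark pixels lying more than k standard deviations below the
    scene mean as dark formations. The threshold adapts to the image
    statistics; k = 2.5 is an arbitrary illustrative choice."""
    threshold = img.mean() - k * img.std()
    return img < threshold

# Synthetic SAR-like scene: bright sea clutter with one dark slick.
rng = np.random.default_rng(0)
scene = 100.0 + rng.normal(0.0, 2.0, size=(64, 64))
scene[20:30, 20:30] -= 40.0          # low-backscatter "slick"
mask = dark_spot_mask(scene)
```

    In the paper's pipeline a classifier (the MLP) would then separate true oil spills from look-alike dark formations in the masked regions.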

  12. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
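    The well-balancing requirement mentioned above can be checked on a toy scheme: the ocean-at-rest state (uniform depth, zero momentum) must survive a time step unchanged, so that tiny tsunami perturbations are not swamped by numerical noise. The sketch below is a minimal 1D shallow-water finite-volume step with a Lax-Friedrichs flux over a flat bottom, not GeoClaw's Riemann solver; grid size and time step are illustrative:

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux(h, hu):
    """Shallow-water flux for conserved variables (h, hu)."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h ** 2])

def lf_step(h, hu, dx, dt):
    """One Lax-Friedrichs finite-volume step, periodic boundaries."""
    q = np.vstack([h, hu])
    f = np.array([flux(hi, hui) for hi, hui in zip(h, hu)]).T
    qm, qp = np.roll(q, 1, axis=1), np.roll(q, -1, axis=1)
    fm, fp = np.roll(f, 1, axis=1), np.roll(f, -1, axis=1)
    fl = 0.5 * (fm + f) - 0.5 * dx / dt * (q - qm)   # flux at i-1/2
    fr = 0.5 * (f + fp) - 0.5 * dx / dt * (qp - q)   # flux at i+1/2
    qn = q - dt / dx * (fr - fl)
    return qn[0], qn[1]

h = np.full(50, 2.0)   # lake at rest: uniform depth...
hu = np.zeros(50)      # ...and zero momentum
h2, hu2 = lf_step(h, hu, dx=0.1, dt=0.001)
```

    For a flat bottom this simple scheme preserves the rest state exactly; the harder well-balancing problem arises with variable bathymetry, where the flux gradient must cancel the bed-slope source term.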

  13. The National Astronomy Consortium - An Adaptable Model for OAD?

    NASA Astrophysics Data System (ADS)

    Sheth, Kartik

    2015-08-01

    The National Astronomy Consortium (NAC) is a program led by the National Radio Astronomy Observatory (NRAO) and Associated Universities, Inc. (AUI) in partnership with the National Society of Black Physicists (NSBP), and a number of minority and majority universities to increase the numbers of students from underrepresented groups and those otherwise overlooked by the traditional academic pipeline into STEM or STEM-related careers. The seed for the NAC was a partnership between NRAO and Howard University which began with an exchange of a few summer students five years ago. Since then the NAC has grown tremendously. Today the NAC aims to host 4 to 5 cohorts nationally in an innovative model in which the students are mentored throughout the year with multiple mentors and peer mentoring, continued engagement in research and professional development / career training throughout the academic year and throughout their careers. The NAC model has already shown success and is a very promising and innovative model for increasing participation of young people in STEM and STEM-related careers. I will discuss how this model could be adapted in various countries at all levels of education.

  14. Adaptive elastic networks as models of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Yan, Le; Wyart, Matthieu

    2015-08-01

    The thermodynamics and dynamics of supercooled liquids correlate with their elasticity. In particular for covalent networks, the jump of specific heat is small and the liquid is strong near the threshold valence where the network acquires rigidity. By contrast, the jump of specific heat and the fragility are large away from this threshold valence. In a previous work [Proc. Natl. Acad. Sci. USA 110, 6307 (2013), 10.1073/pnas.1300534110], we could explain these behaviors by introducing a model of supercooled liquids in which local rearrangements interact via elasticity. However, in that model the disorder characterizing elasticity was frozen, whereas it is itself a dynamic variable in supercooled liquids. Here we study numerically and theoretically adaptive elastic network models where polydisperse springs can move on a lattice, thus allowing for the geometry of the elastic network to fluctuate and evolve with temperature. We show numerically that our previous results on the relationship between structure and thermodynamics hold in these models. We introduce an approximation where redundant constraints (highly coordinated regions where the frustration is large) are treated as an ideal gas, leading to analytical predictions that are accurate in the range of parameters relevant for real materials. Overall, these results lead to a description of supercooled liquids, in which the distance to the rigidity transition controls the number of directions in phase space that cost energy and the specific heat.

  15. Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2012-01-01

    Control-theoretic modeling of the human operator dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy of either estimator method to estimate pilot stick input. These methods also were able to predict pilot stick input during changing aircraft dynamics and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
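    The second of the two estimators mentioned, recursive least squares with exponential forgetting, can be sketched compactly. The pilot-model structure below (stick input as a linear function of current and delayed tracking error, with made-up gains) is a simplified assumption for illustration, not the regressor used in the experiment:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.98):
    """One forgetting-factor RLS update: predict y from regressor phi.
    lam < 1 discounts old data so the estimate can track a pilot
    adapting to changing vehicle dynamics."""
    K = P @ phi / (lam + phi @ P @ phi)       # gain vector
    theta = theta + K * (y - phi @ theta)     # parameter update
    P = (P - np.outer(K, phi @ P)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(1)
true_theta = np.array([0.8, -0.3])            # hypothetical pilot gains
theta = np.zeros(2)
P = 1e3 * np.eye(2)
err = rng.normal(size=500)                    # tracking-error signal
for k in range(1, 500):
    phi = np.array([err[k], err[k - 1]])      # current + delayed error
    u = true_theta @ phi + 0.01 * rng.normal()  # measured stick input
    theta, P = rls_step(theta, P, phi, u)
```

    After a few hundred samples the estimate converges to the underlying gains; a shift in those gains (the pilot adapting) would show up as a transient in the prediction error.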

  16. Modeling the behavioral substrates of associative learning and memory - Adaptive neural models

    NASA Technical Reports Server (NTRS)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.
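    The "most nonredundant and earliest predictor" property described above is what a simple error-driven (Rescorla-Wagner-style) weight update produces: a stimulus that already predicts reinforcement blocks learning about a co-occurring one. This sketch is a generic delta-rule illustration, not the paper's specific neuron models; the learning rate and trial counts are arbitrary:

```python
import numpy as np

def blocking(trials=200, lr=0.1):
    """Two-phase Pavlovian blocking experiment with a delta rule."""
    w = np.zeros(2)                    # association weights for CS A, B
    # Phase 1: stimulus A alone is paired with reinforcement (US = 1).
    for _ in range(trials):
        cs = np.array([1.0, 0.0])
        w += lr * (1.0 - w @ cs) * cs  # error-driven update
    # Phase 2: A and B together; A already predicts the US,
    # so the prediction error is ~0 and B acquires no weight.
    for _ in range(trials):
        cs = np.array([1.0, 1.0])
        w += lr * (1.0 - w @ cs) * cs
    return w

w = blocking()
```

    The earlier, nonredundant predictor (A) ends with nearly all the associative strength, while the redundant later stimulus (B) is blocked.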

  17. Thermal modeling and adaptive control of scan welding

    SciTech Connect

    Doumanidis, C.C.

    1998-11-01

    This article introduces scan welding as a redesign of classical joining methods, employing automation technology to ensure the overall geometric, material and mechanical integrity of the joint. This is obtained by real-time control of the welding temperature field by a proper dynamic heat input distribution on the weld surface. This distribution is implemented in scan welding by a single torch, sweeping the joint surface by a controlled reciprocating motion, and power adjusted by feedback of infrared temperature measurements in-process. An off-line numerical simulation of the thermal field in scan welding is established, as well as a linearized multivariable model with real-time parameter identification. An adaptive thermal control scheme is thus implemented and validated--both computationally and experimentally on a robotic plasma arc welding (PAW) station. The resulting thermal features related to the generated material structure and properties of the joint are finally analyzed in scan welding tests and simulations.

  18. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    NASA Technical Reports Server (NTRS)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid based Bayesian methods and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread use was realized. Advanced nonlinear filtering methods currently benefit from advancements in computational speed, memory, and parallel processing. Grid based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple models to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step for the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation.
By adaptively updating
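    The Gaussian sum representation underlying this filter is easy to sketch: the state pdf is a weighted mixture of Gaussian densities, and each component is propagated through the nonlinear model EKF-style (mean through the map, variance scaled by the squared local slope). Weights, means, variances, and the tanh dynamics below are arbitrary illustrative choices, not the filter in Ref. [2]:

```python
import numpy as np

def gaussian(x, m, v):
    """Univariate Gaussian density with mean m and variance v."""
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

weights = np.array([0.3, 0.5, 0.2])      # must sum to 1
means = np.array([-1.0, 0.0, 2.0])
variances = np.array([0.5, 1.0, 0.8])

def mixture_pdf(x):
    """Evaluate the Gaussian sum approximation of the state pdf."""
    return sum(w * gaussian(x, m, v)
               for w, m, v in zip(weights, means, variances))

def propagate(means, variances, f=np.tanh,
              fprime=lambda x: 1.0 - np.tanh(x) ** 2):
    """Push each component through x' = f(x): means map through f,
    variances scale by the squared Jacobian (linearization)."""
    return f(means), variances * fprime(means) ** 2

new_means, new_vars = propagate(means, variances)
```

    Because each component stays Gaussian under linearization, the propagated mixture remains a valid Gaussian sum; the weight update discussed in the abstract then adjusts `weights` at this propagation stage.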

  19. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    NASA Astrophysics Data System (ADS)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.

  20. From dysfunction to adaptation: an interactionist model of dependency.

    PubMed

    Bornstein, Robert F

    2012-01-01

    Contrary to clinical lore, a dependent personality style is associated with active as well as passive behavior and may be adaptive in certain contexts (e.g., in fostering compliance with medical and psychotherapeutic treatment regimens). The cognitive/interactionist model conceptualizes dependency-related responding in terms of four components: (a) motivational (a marked need for guidance, support, and approval from others); (b) cognitive (a perception of oneself as powerless and ineffectual); (c) affective (a tendency to become anxious when required to function autonomously); and (d) behavioral (use of diverse self-presentation strategies to strengthen ties to potential caregivers). Clinicians' understanding of the etiology and dynamics of dependency has improved substantially in recent years; current challenges include delineating useful subtypes of dependency, developing valid symptom criteria for Dependent Personality Disorder in DSM-5 and beyond, and working effectively with dependent patients in the age of managed care.

  1. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.
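    The dual-rate idea in this patent can be sketched as follows: control inputs are sampled at a fast rate, while the network state vector is built at a slower rate from averages of the fast samples over a gapped window. The rates and gap length below are arbitrary illustrative choices, not values from the patent:

```python
import numpy as np

def gapped_averages(u_fast, slow_stride=10, gap=2):
    """Average fast-rate samples in blocks of length slow_stride,
    skipping the first `gap` samples of each block (the 'gapped'
    time period), to produce the slow-rate values."""
    n = len(u_fast) // slow_stride
    blocks = u_fast[: n * slow_stride].reshape(n, slow_stride)
    return blocks[:, gap:].mean(axis=1)

u_fast = np.arange(100, dtype=float)   # fast-rate control samples
u_slow = gapped_averages(u_fast)       # slow-rate state-vector inputs
```

    The slow-rate averages smooth out fast-rate noise before they enter the network state vector, at the cost of a coarser time resolution for the MPC model.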

  2. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  3. Modeling and experimental results of low-background extrinsic double-injection IR detector response

    NASA Astrophysics Data System (ADS)

    Zaletaev, N. B.; Filachev, A. M.; Ponomarenko, V. P.; Stafeev, V. I.

    2006-05-01

    Bias-dependent response of an extrinsic double-injection IR detector under irradiation from extrinsic and intrinsic responsivity spectral ranges was obtained analytically and through numerical modeling. The model includes the transient response and generation-recombination noise as well. It is shown that a great increase in current responsivity (by orders of magnitude) without essential change in detectivity can take place in the range of extrinsic responsivity for detectors on semiconductor materials with long-lifetime minority charge carriers if double-injection photodiodes are made on them instead of photoconductive detectors. Field dependence of the lifetimes and mobilities of charge carriers essentially influences detector characteristics, especially in the voltage range where the drift length of majority carriers is greater than the distance between the contacts. The model developed is in good agreement with experimental data obtained for n-Si:Cd, p-Ge:Au, and Ge:Hg diodes, as well as for diamond radiation detectors. A BLIP-detection responsivity of about 2000 A/W (for a wavelength of 10 micrometers) for Ge:Hg diodes has been reached in a frequency range of 500 Hz under a background of 6 × 10¹¹ cm⁻² s⁻¹ at a temperature of 20 K. Possibilities of optimization of detector performance are discussed. Extrinsic double-injection photodiodes and other radiation detectors with internal gain based on double injection are well suited to systems subject to strong disturbances, in particular vibrations, because high responsivity can ensure higher resistance to interference.

  4. Modelling MEMS deformable mirrors for astronomical adaptive optics

    NASA Astrophysics Data System (ADS)

    Blain, Celia

    As of July 2012, 777 exoplanets have been discovered utilizing mainly indirect detection techniques. The direct imaging of exoplanets is the next goal for astronomers, because it will reveal the diversity of planets and planetary systems, and will give access to the exoplanet's chemical composition via spectroscopy. With this spectroscopic knowledge, astronomers will be able to tell if a planet is terrestrial and, possibly, even find evidence of life. With so much potential, this branch of astronomy has also captivated the general public's attention. The direct imaging of exoplanets remains a challenging task, due to (i) the extremely high contrast between the parent star and the orbiting exoplanet and (ii) their small angular separation. For ground-based observatories, this task is made even more difficult, due to the presence of atmospheric turbulence. High Contrast Imaging (HCI) instruments have been designed to meet this challenge. HCI instruments are usually composed of a coronagraph coupled with the full on-axis corrective capability of an Extreme Adaptive Optics (ExAO) system. An efficient coronagraph separates the faint planet's light from the much brighter starlight, but the dynamic boiling speckles, created by the stellar image, make exoplanet detection impossible without the help of a wavefront correction device. The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system is a high performance HCI instrument developed at Subaru Telescope. The wavefront control system of SCExAO consists of three wavefront sensors (WFS) coupled with a 1024-actuator Micro-Electro-Mechanical-System (MEMS) deformable mirror (DM). MEMS DMs offer a large actuator density, allowing high count DMs to be deployed in small size beams. Therefore, MEMS DMs are an attractive technology for Adaptive Optics (AO) systems and are particularly well suited for HCI instruments employing ExAO technologies. 
SCExAO uses coherent light modulation in the focal plane introduced by the DM, for

  5. A photoviscoplastic model for photoactivated covalent adaptive networks

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Mu, Xiaoming; Bowman, Christopher N.; Sun, Youyi; Dunn, Martin L.; Qi, H. Jerry; Fang, Daining

    2014-10-01

    Light activated polymers (LAPs) are a class of contemporary materials that when irradiated with light respond with mechanical deformation. Among the different molecular mechanisms of photoactuation, here we study radical induced bond exchange reactions (BERs) that alter macromolecular chains through an addition-fragmentation process in which a free chain's active end group attaches to and then breaks a network chain. Thus the BER yields a polymer with a covalently adaptable network. When a LAP sample is loaded, the macroscopic consequence of BERs is stress relaxation and plastic deformation. Furthermore, if light penetration through the sample is nonuniform, resulting in nonuniform stress relaxation, the sample will deform after unloading in order to achieve equilibrium. In the past, this light activation mechanism was modeled as a phase evolution process where the chain addition-fragmentation process was considered as a phase transformation between stressed phases and newly-born phases that are undeformed and stress free at birth. Such a modeling scheme describes the underlying physics with reasonable fidelity but is computationally expensive. In this paper, we propose a new approach where the BER induced macromolecular network alteration is modeled as a viscoplastic deformation process, based on the observation that stress relaxation due to light irradiation is a time-dependent process similar to that in viscoelastic solids with an irrecoverable deformation after light irradiation. This modeling concept is further translated into a finite deformation photomechanical constitutive model. The rheological representation of this model is a photoviscoplastic element placed in series with a standard linear solid model in viscoelasticity. A two-step iterative implicit scheme is developed for time integration of the two time-dependent elements. 
We carry out a series of experiments to determine material parameters in our model as well as to validate the performance of the model in

  6. Comparing and evaluating model estimates of background ozone in surface air over North America

    NASA Astrophysics Data System (ADS)

    Oberman, J.; Fiore, A. M.; Lin, M.; Zhang, L.; Jacob, D. J.; Naik, V.; Horowitz, L. W.

    2011-12-01

    Tropospheric ozone adversely affects human health and vegetation, and is thus a criteria pollutant regulated by the U.S. Environmental Protection Agency (EPA) under the National Ambient Air Quality Standard (NAAQS). Ozone is produced in the atmosphere via photo-oxidation of volatile organic compounds (VOCs) and carbon monoxide (CO) in the presence of nitrogen oxides (NOx). The present EPA approach considers health risks associated with exposure to ozone enhancement above the policy-relevant background (PRB), which is currently defined as the surface concentration of ozone that would exist without North American anthropogenic emissions. PRB thus includes production by natural precursors, production by precursors emitted on foreign continents, and transport of stratospheric ozone into surface air. As PRB is not an observable quantity, it must be estimated using numerical models. We compare PRB estimates for the year 2006 from the GFDL Atmospheric Model 3 (AM3) chemistry-climate model (CCM) and the GEOS-Chem (GC) chemical transport model (CTM). We evaluate the skill of the models in reproducing total surface ozone observed at the U.S. Clean Air Status and Trends Network (CASTNet), dividing the stations into low-elevation (< 1.5 km in altitude, primarily eastern) and high-elevation (> 1.5 km in altitude, all western) subgroups. At the low-elevation sites AM3 estimates of PRB (38±9 ppbv in spring, 27±9 ppbv in summer) are higher than GC (27±7 ppbv in spring, 21±8 ppbv in summer) in both seasons. Analysis at these sites is complicated by a positive bias in AM3 total ozone with respect to the observed total ozone, the source of which is yet unclear. At high-elevation sites, AM3 PRB is higher in the spring (47±8 ppbv) than in the summer (33±8 ppbv). In contrast, GC simulates little seasonal variation at high elevation sites (39±5 ppbv in spring vs. 38±7 ppbv in summer). Seasonal average total ozone at these sites was within 4 ppbv of the observations for both

  7. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-09-15

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor-hub to capture surrounding information using the in-vehicle and Smartphone sensors and later publish them for the consumers. A cloud centric cyber-physical system better describes the SIoV model where physical sensing-actuation process affects the cloud based service sharing or computation in a feedback loop or vice versa. The cyber based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems.

  8. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    PubMed Central

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

The Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key component of a smart city, the vehicular domain of the SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles act as sensor hubs that capture surrounding information using in-vehicle and smartphone sensors and later publish it for consumers. A cloud-centric cyber-physical system best describes the SIoV model, in which the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, and vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable, and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data, and it is difficult to build a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of the various subsystems involved in the SIoV process. We present the basic model and then extend it to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems, which would foster deployment of intelligent transport systems. PMID:26389905

  9. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper first presents the baseline controller in detail, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).

  10. Detection of bird nests during mechanical weeding by incremental background modeling and visual saliency.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Green, Ole; Karstoft, Henrik

    2015-03-02

Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower the use of pesticides and an increased number of pesticide-resistant weeds. Ground-nesting birds are highly susceptible to farming operations like mechanical weeding, which may destroy nests and reduce the survival of chicks and incubating females. This problem has received limited attention within agricultural engineering. However, as the number of machines increases, the destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology in order to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, detecting 28 of 30 nests at an average distance of 3.8 m, with a true positive rate of 0.75.
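The fusion step described above can be sketched, under heavy simplification, as a weighted combination of a saliency map and a foreground-probability map; the weight and threshold below are invented for illustration and this is not the authors' exact algorithm:

```python
import numpy as np

# Illustrative fusion of a visual-saliency map with a foreground-probability
# map from a background model; the combined score is thresholded to flag
# candidate detections. Weight w and threshold thresh are hypothetical.
def fuse_and_detect(saliency, fg_prob, w=0.5, thresh=0.6):
    # normalize saliency to [0, 1] (epsilon avoids division by zero)
    saliency = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)
    score = w * saliency + (1 - w) * fg_prob
    return score > thresh  # boolean detection mask
```

In the paper the background model is incremental (it copes with a moving camera); here `fg_prob` simply stands in for its per-pixel foreground probability.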

  11. Cosmic string parameter constraints and model analysis using small scale Cosmic Microwave Background data

    SciTech Connect

    Urrestilla, Jon; Bevis, Neil; Hindmarsh, Mark; Kunz, Martin E-mail: n.bevis@imperial.ac.uk E-mail: martin.kunz@physics.unige.ch

    2011-12-01

We present a significant update of the constraints on the Abelian Higgs cosmic string tension by cosmic microwave background (CMB) data, enabled both by the use of new high-resolution CMB data from suborbital experiments as well as the latest results of the WMAP satellite, and by improved predictions for the impact of Abelian Higgs cosmic strings on the CMB power spectra. The new cosmic string spectra [1] were improved especially for small angular scales, through the use of larger Abelian Higgs string simulations and careful extrapolation. If Abelian Higgs strings are present then we find improved bounds on their contribution to the CMB anisotropies, f_d^AH < 0.095, and on their tension, Gμ_AH < 0.57 × 10^−6, both at 95% confidence level using WMAP7 data; and f_d^AH < 0.048 and Gμ_AH < 0.42 × 10^−6 using all the CMB data. We also find that using all the CMB data, a scale-invariant initial perturbation spectrum, n_s = 1, is now disfavoured at 2.4σ even if strings are present. A Bayesian model selection analysis no longer indicates a preference for strings.

  12. Ionospheric assimilation of radio occultation and ground-based GPS data using non-stationary background model error covariance

    NASA Astrophysics Data System (ADS)

    Lin, C. Y.; Matsuo, T.; Liu, J. Y.; Lin, C. H.; Tsai, H. F.; Araujo-Pradere, E. A.

    2015-01-01

    Ionospheric data assimilation is a powerful approach to reconstruct the 3-D distribution of the ionospheric electron density from various types of observations. We present a data assimilation model for the ionosphere, based on the Gauss-Markov Kalman filter with the International Reference Ionosphere (IRI) as the background model, to assimilate two different types of slant total electron content (TEC) observations from ground-based GPS and space-based FORMOSAT-3/COSMIC (F3/C) radio occultation. Covariance models for the background model error and observational error play important roles in data assimilation. The objective of this study is to investigate impacts of stationary (location-independent) and non-stationary (location-dependent) classes of the background model error covariance on the quality of assimilation analyses. Location-dependent correlations are modeled using empirical orthogonal functions computed from an ensemble of the IRI outputs, while location-independent correlations are modeled using a Gaussian function. Observing system simulation experiments suggest that assimilation of slant TEC data facilitated by the location-dependent background model error covariance yields considerably higher quality assimilation analyses. Results from assimilation of real ground-based GPS and F3/C radio occultation observations over the continental United States are presented as TEC and electron density profiles. Validation with the Millstone Hill incoherent scatter radar data and comparison with the Abel inversion results are also presented. Our new ionospheric data assimilation model that employs the location-dependent background model error covariance outperforms the earlier assimilation model with the location-independent background model error covariance, and can reconstruct the 3-D ionospheric electron density distribution satisfactorily from both ground- and space-based GPS observations.
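The two covariance classes compared above can be illustrated with a minimal sketch: a stationary (distance-only, Gaussian) covariance versus a non-stationary covariance estimated from an ensemble of model states, as an EOF-based model would be before truncation. The grid, length scale, and ensemble here are toy assumptions:

```python
import numpy as np

# Stationary (location-independent) covariance: correlation depends only on
# the distance between grid points, via a Gaussian correlation function.
def gaussian_covariance(x, sigma=1.0, L=2.0):
    d = np.abs(x[:, None] - x[None, :])
    return sigma**2 * np.exp(-0.5 * (d / L) ** 2)

# Non-stationary (location-dependent) covariance: estimated from an ensemble
# of model states (rows = ensemble members). In practice this would be
# truncated to its leading empirical orthogonal functions (EOFs).
def eof_covariance(ensemble):
    return np.cov(ensemble, rowvar=False)
```

The paper's result is that the second, flow-aware class yields considerably better assimilation analyses than the first.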

  13. Ionospheric assimilation of radio occultation and ground-based GPS data using non-stationary background model error covariance

    NASA Astrophysics Data System (ADS)

    Lin, C. Y.; Matsuo, T.; Liu, J. Y.; Lin, C. H.; Tsai, H. F.; Araujo-Pradere, E. A.

    2014-03-01

    Ionospheric data assimilation is a powerful approach to reconstruct the 3-D distribution of the ionospheric electron density from various types of observations. We present a data assimilation model for the ionosphere, based on the Gauss-Markov Kalman filter with the International Reference Ionosphere (IRI) as the background model, to assimilate two different types of total electron content (TEC) observations from ground-based GPS and space-based FORMOSAT-3/COSMIC (F3/C) radio occultation. Covariance models for the background model error and observational error play important roles in data assimilation. The objective of this study is to investigate impacts of stationary (location-independent) and non-stationary (location-dependent) classes of the background model error covariance on the quality of assimilation analyses. Location-dependent correlations are modeled using empirical orthogonal functions computed from an ensemble of the IRI outputs, while location-independent correlations are modeled using a Gaussian function. Observing System Simulation Experiments suggest that assimilation of TEC data facilitated by the location-dependent background model error covariance yields considerably higher quality assimilation analyses. Results from assimilation of real ground-based GPS and F3/C radio occultation observations over the continental United States are presented as TEC and electron density profiles. Validation with the Millstone Hill incoherent scatter radar data and comparison with the Abel inversion results are also presented. Our new ionospheric data assimilation model that employs the location-dependent background model error covariance outperforms the earlier assimilation model with the location-independent background model error covariance, and can reconstruct the 3-D ionospheric electron density distribution satisfactorily from both ground- and space-based GPS observations.

  14. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New techniques, adaptive mesh refinement (AMR), allow local mesh refinement wherever high resolution is needed, while leaving other regions at relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements, both for isoviscous cases (7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results of van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can easily be refined to scales of a few kilometers for the Earth's mantle, and the tracers are concentrated near the chemical boundaries to precisely trace their evolution. Our AMR code is thus well suited to thermal-chemical convection problems that require high resolution to resolve the evolution of chemical boundaries, such as entrainment problems [Sleep, 1988].
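A minimal sketch of the decision at the heart of AMR: flag cells where a local indicator, here the gradient magnitude of the temperature field, exceeds a tolerance. The criterion and tolerance are illustrative, not the paper's:

```python
import numpy as np

# Flag cells for refinement where the local temperature gradient is steep.
# A real AMR code would then subdivide flagged octree/quadtree cells and
# coarsen cells whose indicator falls below a second, lower tolerance.
def flag_for_refinement(T, tol):
    gy, gx = np.gradient(T)          # finite-difference gradients on the grid
    grad_mag = np.hypot(gx, gy)      # gradient magnitude per cell
    return grad_mag > tol            # boolean refinement mask
```

Applied to a temperature field with a sharp chemical or thermal boundary, only the cells straddling the boundary are flagged, which is exactly why AMR needs far fewer elements than a uniform mesh.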

  15. A regional adaptive and assimilative three-dimensional ionospheric model

    NASA Astrophysics Data System (ADS)

    Sabbagh, Dario; Scotto, Carlo; Sgrigna, Vittorio

    2016-03-01

    A regional adaptive and assimilative three-dimensional (3D) ionospheric model is proposed. It is able to ingest real-time data from different ionosondes, providing the ionospheric bottomside plasma frequency fp over the Italian area. The model is constructed on the basis of empirical values for a set of ionospheric parameters Pi[base] over the considered region, some of which have an assigned variation ΔPi. The values for the ionospheric parameters actually observed at a given time at a given site will thus be Pi = Pi[base] + ΔPi. These Pi values are used as input for an electron density N(h) profiler. The latter is derived from the Advanced Ionospheric Profiler (AIP), which is software used by Autoscala as part of the process of automatic inversion of ionogram traces. The 3D model ingests ionosonde data by minimizing the root-mean-square deviation between the observed and modeled values of fp(h) profiles obtained from the associated N(h) values at the points where observations are available. The ΔPi values are obtained from this minimization procedure. The 3D model is tested using data collected at the ionospheric stations of Rome (41.8N, 12.5E) and Gibilmanna (37.9N, 14.0E), and then comparing the results against data from the ionospheric station of San Vito dei Normanni (40.6N, 18.0E). The software developed is able to produce maps of the critical frequencies foF2 and foF1, and of fp at a fixed altitude, with transverse and longitudinal cross-sections of the bottomside ionosphere in a color scale. fp(h) and associated simulated ordinary ionogram traces can easily be produced for any geographic location within the Italian region. fp values within the volume in question can also be provided.
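The ingestion step, minimizing the RMS deviation between observed and modeled fp(h) profiles, reduces in the simplest case of a single additive parameter offset ΔP to the mean residual; this is a toy sketch of that special case, not the paper's multi-parameter minimization:

```python
import numpy as np

# Toy version of the data-ingestion step: if the modeled profile is
# fp_base(h) + delta for a single additive offset delta, the least-squares
# (minimum-RMS) estimate of delta is simply the mean observed residual.
def fit_offset(fp_observed, fp_base):
    return float(np.mean(fp_observed - fp_base))
```

In the actual model, several parameters Pi = Pi[base] + ΔPi are adjusted jointly, so the minimization is done numerically over all the ΔPi at the stations where observations are available.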

  16. Comparing PyMorph and SDSS photometry. I. Background sky and model fitting effects

    NASA Astrophysics Data System (ADS)

    Fischer, J.-L.; Bernardi, M.; Meert, A.

    2017-01-01

    A number of recent estimates of the total luminosities of galaxies in the SDSS are significantly larger than those reported by the SDSS pipeline. This is because of a combination of three effects: one is simply a matter of defining the scale out to which one integrates the fit when defining the total luminosity, and amounts on average to ≤0.1 mags even for the most luminous galaxies. The other two are less trivial and tend to be larger; they are due to differences in how the background sky is estimated and what model is fit to the surface brightness profile. We show that PyMorph sky estimates are fainter than those of the SDSS DR7 or DR9 pipelines, but are in excellent agreement with the estimates of Blanton et al. (2011). Using the SDSS sky biases luminosities by more than a few tenths of a magnitude for objects with half-light radii ≥7 arcseconds. In the SDSS main galaxy sample these are typically luminous galaxies, so they are not necessarily nearby. This bias becomes worse when allowing the model more freedom to fit the surface brightness profile. When PyMorph sky values are used, then two component Sersic-Exponential fits to E+S0s return more light than single component deVaucouleurs fits (up to ˜0.2 mag), but less light than single Sersic fits (0.1 mag). Finally, we show that PyMorph fits of Meert et al. (2015) to DR7 data remain valid for DR9 images. Our findings show that, especially at large luminosities, these PyMorph estimates should be preferred to the SDSS pipeline values.

  17. Proximal and Distal Influences on Development: The Model of Developmental Adaptation.

    ERIC Educational Resources Information Center

    Martin, Peter; Martin, Mike

    2002-01-01

    Presents a model of developmental adaptation that explains the process of adaptation to life stress on the basis of adverse childhood events and paternal care, and internal and external resources available for adaptation to current life events. The appraisal of past and current events, as well as coping behaviors, are hypothesized to influence the…

  18. Adaptation to mechanical load determines shape and properties of heart and circulation: the CircAdapt model.

    PubMed

    Arts, Theo; Delhaas, Tammo; Bovendeerd, Peter; Verbeek, Xander; Prinzen, Frits W

    2005-04-01

With circulatory pathology, patient-specific simulation of hemodynamics is required to minimize invasiveness for diagnosis, treatment planning, and follow-up. We investigated the advantages of a smart combination of generally known hemodynamic principles. The CircAdapt model was designed to simulate beat-to-beat dynamics of the four-chamber heart with systemic and pulmonary circulation while incorporating a realistic relation between pressure-volume load and tissue mechanics and adaptation of tissues to mechanical load. Adaptation was modeled by rules in which a locally sensed signal results in a local action of the tissue. The applied rules were as follows: for blood vessel walls, 1) flow shear stress dilates the wall and 2) tensile stress thickens the wall; for myocardial tissue, 3) strain dilates the wall material, 4) larger maximum sarcomere length increases contractility, and 5) contractility increases wall mass. The circulation was composed of active and passive compliances and inertias. A realistic circulation, developed by self-structuring through adaptation, provided mean levels of systemic pressure and flow. The ability to simulate a wide variety of patient-specific circumstances was demonstrated by applying the same adaptation rules to the conditions of fetal circulation, followed by a switch to the newborn circulation around birth. It was concluded that a few adaptation rules, directed at normalizing the mechanical load of the tissue, were sufficient to develop and maintain a realistic circulation automatically. Adaptation rules appear to be the key to dramatically reducing the number of input parameters needed to simulate circulation dynamics. The model may be used to simulate circulatory pathology and to predict effects of treatment.
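A CircAdapt-style rule, in which a locally sensed signal drives a local tissue action, can be sketched for rule 2 above (tensile stress thickens the wall); the gain, target stress, and iteration count are invented for illustration:

```python
# Illustrative adaptation rule (parameters hypothetical): wall thickness is
# nudged iteratively until the sensed tensile stress (load per thickness)
# reaches a target value -- a local signal driving a local tissue action.
def adapt_thickness(load, thickness, target_stress, gain=0.1, n_iter=200):
    for _ in range(n_iter):
        stress = load / thickness                      # locally sensed signal
        thickness *= 1 + gain * (stress / target_stress - 1)  # local action
    return thickness
```

The fixed point is thickness = load / target_stress, i.e. the rule self-structures the wall until its mechanical load is normalized, which is the essence of the paper's claim that a few such rules suffice.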

  19. A JOINT MODEL OF THE X-RAY AND INFRARED EXTRAGALACTIC BACKGROUNDS. I. MODEL CONSTRUCTION AND FIRST RESULTS

    SciTech Connect

    Shi, Yong; Helou, George; Armus, Lee; Stierwalt, Sabrina; Dale, Daniel

    2013-02-10

We present an extragalactic population model of the cosmic background light to interpret the rich high-quality survey data in the X-ray and IR bands. The model incorporates star formation and supermassive black hole (SMBH) accretion in a co-evolution scenario to fit simultaneously 617 data points of number counts, redshift distributions, and local luminosity functions (LFs) with 19 free parameters. The model has four main components: the total IR LF, the SMBH accretion energy fraction in the IR band, the star formation spectral energy distribution (SED), and the unobscured SMBH SED extinguished with a H I column density distribution. As a result of the observational uncertainties about the star formation and SMBH SEDs, we present several variants of the model. The best-fit reduced χ² reaches values as small as 2.7-2.9, of which a significant amount (>0.8) is contributed by cosmic variance or caveats associated with the data. Compared to previous models, the unique result of this model is to constrain the SMBH energy fraction in the IR band, which is found to increase with IR luminosity but decrease with redshift up to z ≈ 1.5; this result is separately verified using aromatic feature equivalent-width data. The joint modeling of X-ray and mid-IR data allows for improved constraints on the obscured active galactic nucleus (AGN), especially the Compton-thick AGN population. All variants of the model require that Compton-thick AGN fractions decrease with the SMBH luminosity but increase with redshift, while the type 1 AGN fraction has the reverse trend.

  20. Beyond Reactive Planning: Self Adaptive Software and Self Modeling Software in Predictive Deliberation Management

    DTIC Science & Technology

    2008-06-01

13th ICCRTS, “C2 for Complex Endeavors.” Paper title: Beyond Reactive Planning: Self Adaptive Software and Self Modeling Software in Predictive Deliberation Management. We present the following hypothesis: predictive deliberation management using self adapting and self modeling software will be required to provide

  1. An adaptive Gaussian model for satellite image deblurring.

    PubMed

    Jalobeanu, André; Blanc-Féraud, Laure; Zerubia, Josiane

    2004-04-01

    The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem, which can be regularized within a Bayesian context by using an a priori model of the reconstructed solution. Since real satellite data show spatially variant characteristics, we propose here to use an inhomogeneous model. We use the maximum likelihood estimator (MLE) to estimate its parameters and we show that the MLE computed on the corrupted image is not suitable for image deconvolution because it is not robust to noise. We then show that the estimation is correct only if it is made from the original image. Since this image is unknown, we need to compute an approximation of sufficiently good quality to provide useful estimation results. Such an approximation is provided by a wavelet-based deconvolution algorithm. Thus, a hybrid method is first used to estimate the space-variant parameters from this image and then to compute the regularized solution. The obtained results on high resolution satellite images simultaneously exhibit sharp edges, correctly restored textures, and a high SNR in homogeneous areas, since the proposed technique adapts to the local characteristics of the data.
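The role of the prior model as a regularizer can be illustrated with a generic Tikhonov/Wiener-style Fourier-domain deconvolution, where `lam` stands in for the prior weight that the paper estimates adaptively; this is a textbook sketch, not the authors' inhomogeneous estimator:

```python
import numpy as np

# Generic regularized deconvolution sketch: divide by the blur transfer
# function H in the Fourier domain, with a Tikhonov term lam that damps
# the noise amplification where |H| is small. The paper's contribution is,
# in effect, estimating such a regularization weight spatially adaptively.
def tikhonov_deconvolve(blurred, psf, lam=0.01):
    H = np.fft.fft2(psf, s=blurred.shape)   # zero-padded PSF transfer function
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))
```

With lam → 0 this degenerates to unstable inverse filtering; a larger lam trades sharpness for noise suppression, which is why choosing it from the data (rather than globally) matters for spatially variant images.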

  2. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but the image resolution generally does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but these require a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors, and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow that allows the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue, which allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow the most appropriate patterns to be selected from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.

  3. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Technical Reports Server (NTRS)

    Lam, Quang; Ray, Surendra N.

    1995-01-01

Attitude determination has been a permanent topic of active research and remains of lasting interest to spacecraft system designers. Its role is to provide a reference for controls such as pointing directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. Least-squares estimation (LSE) was utilized to provide attitude determination for Nimbus 6 and G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent needs and requirements of the 1970s. One reason for the poor performance of the LSE scheme is the lack of dynamic filtering or 'compensation'; in other words, the scheme is based entirely on the measurements, and no attempt is made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on the latest results on the interacting multiple model design framework for handling unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics of each submodel (per operating condition), as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance, the proposed advanced system identifier is further reinforced by a learning system, implemented (in the outer loop) using neural networks, to identify other unknown
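The bank-of-filters idea can be sketched in scalar form: several Kalman filters, each assuming a different process-noise level q, are run in parallel and weighted by their measurement likelihoods. This is a simplified MMAE-style sketch under a static-state model, not the proposed on-line noise identifier:

```python
import math

# Simplified multiple-model estimation: each filter in the bank assumes a
# different process-noise variance q; weights w are updated by the Gaussian
# likelihood of each filter's innovation, and the estimate is their blend.
def mmae_step(filters, z, r=1.0):
    """filters: list of dicts with keys x, p, q, w. Updated in place."""
    for f in filters:
        p_pred = f["p"] + f["q"]              # predict (static state model)
        s = p_pred + r                        # innovation variance
        k = p_pred / s                        # Kalman gain
        innov = z - f["x"]
        f["x"] += k * innov                   # measurement update
        f["p"] = (1 - k) * p_pred
        f["w"] *= math.exp(-0.5 * innov**2 / s) / math.sqrt(2 * math.pi * s)
    total = sum(f["w"] for f in filters)
    for f in filters:
        f["w"] /= total                       # renormalize model weights
    return sum(f["w"] * f["x"] for f in filters)
```

Over time the weight concentrates on the filter whose assumed noise level best explains the innovations, which is the behavior the abstract's noise identifier formalizes and extends.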

  4. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (ν) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
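A toy scalar analogue may help fix ideas: model-reference adaptation with an added damping term ν that keeps a large adaptive gain from destabilizing the update. The plant, gains, and the specific damping form below are invented for illustration; the actual optimal control modification law is derived in the paper:

```python
# Toy scalar MRAC sketch (all values hypothetical). Plant: x' = a*x + u.
# Reference model: xm' = am*xm + r. The adaptive parameter theta is driven
# by the tracking error e = x - xm; the nu*x^2*theta term damps the update,
# in the spirit of the damping term described in the abstract.
def simulate_mrac(gamma=50.0, nu=0.1, a=-1.0, a_m=-2.0, r=1.0,
                  dt=1e-3, steps=20000):
    x = x_m = theta = 0.0
    for _ in range(steps):
        e = x - x_m
        theta += dt * (-gamma * (x * e + nu * x * x * theta))  # damped update
        u = theta * x + r                 # adaptive feedback plus command
        x += dt * (a * x + u)             # Euler step of the plant
        x_m += dt * (a_m * x_m + r)       # Euler step of the reference model
    return x, x_m
```

With ν = 0 this is the classic Lyapunov-style update, which oscillates at large γ; the damping term trades a small steady-state tracking offset for a well-damped response.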

  5. Adaptive invasive species distribution models: A framework for modeling incipient invasions

    USGS Publications Warehouse

    Uden, Daniel R.; Allen, Craig R.; Angeler, David G.; Corral, Lucia; Fricke, Kent A.

    2015-01-01

    The utilization of species distribution model(s) (SDM) for approximating, explaining, and predicting changes in species’ geographic locations is increasingly promoted for proactive ecological management. Although frameworks for modeling non-invasive species distributions are relatively well developed, their counterparts for invasive species—which may not be at equilibrium within recipient environments and often exhibit rapid transformations—are lacking. Additionally, adaptive ecological management strategies address the causes and effects of biological invasions and other complex issues in social-ecological systems. We conducted a review of biological invasions, species distribution models, and adaptive practices in ecological management, and developed a framework for adaptive, niche-based, invasive species distribution model (iSDM) development and utilization. This iterative, 10-step framework promotes consistency and transparency in iSDM development, allows for changes in invasive drivers and filters, integrates mechanistic and correlative modeling techniques, balances the avoidance of type 1 and type 2 errors in predictions, encourages the linking of monitoring and management actions, and facilitates incremental improvements in models and management across space, time, and institutional boundaries. These improvements are useful for advancing coordinated invasive species modeling, management and monitoring from local scales to the regional, continental and global scales at which biological invasions occur and harm native ecosystems and economies, as well as for anticipating and responding to biological invasions under continuing global change.

  6. ADAPT: A Developmental, Asemantic, and Procedural Model for Transcoding From Verbal to Arabic Numerals

    ERIC Educational Resources Information Center

    Barrouillet, Pierre; Camos, Valerie; Perruchet, Pierre; Seron, Xavier

    2004-01-01

    This article presents a new model of transcoding numbers from verbal to arabic form. This model, called ADAPT, is developmental, asemantic, and procedural. The authors' main proposal is that the transcoding process shifts from an algorithmic strategy to the direct retrieval from memory of digital forms. Thus, the model is evolutive, adaptive, and…

  7. Use of Time Information in Models behind Adaptive System for Building Fluency in Mathematics

    ERIC Educational Resources Information Center

    Rihák, Jirí

    2015-01-01

    In this work we introduce the system for adaptive practice of foundations of mathematics. Adaptivity of the system is primarily provided by selection of suitable tasks, which uses information from a domain model and a student model. The domain model does not use prerequisites but works with splitting skills to more concrete sub-skills. The student…

  8. Comparison of Measured Galactic Background Radiation at L-Band with Model

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, William J.; Skou, Niels; Sobjaerg, Sten

    2004-01-01

Radiation from the celestial sky in the spectral window at 1.413 GHz is strong, and an accurate accounting of this background radiation is needed for calibration and retrieval algorithms. Modern radio astronomy measurements in this window have been converted into a brightness temperature map of the celestial sky at L-band suitable for such applications. This paper presents a comparison of the background predicted by this map with the measurements of several modern L-band remote sensing radiometers.

  9. Modelling non-Gaussianity of background and observational errors by the Maximum Entropy method

    NASA Astrophysics Data System (ADS)

    Pires, Carlos; Talagrand, Olivier; Bocquet, Marc

    2010-05-01

The Best Linear Unbiased Estimator (BLUE) has been widely used in atmospheric-oceanic data assimilation. However, when data errors have non-Gaussian pdfs, the BLUE differs from the absolute Minimum Variance Unbiased Estimator (MVUE), which minimizes the mean-square analysis error. The non-Gaussianity of errors can be due to the statistical skewness and positiveness of some physical observables (e.g. moisture, chemical species) or due to the nonlinearity of the data assimilation models and observation operators acting on Gaussian errors. Non-Gaussianity of assimilated data errors can be justified from a priori hypotheses or inferred from statistical diagnostics of innovations (observation minus background). Following this rationale, we compute measures of innovation non-Gaussianity, namely its skewness and kurtosis, relating them to: a) the non-Gaussianity of the individual errors themselves, b) the correlation between nonlinear functions of errors, and c) the heteroscedasticity of errors within diagnostic samples. Those relationships impose bounds on the skewness and kurtosis of errors which are critically dependent on the error variances, thus leading to a necessary tuning of error variances in order to accomplish consistency with innovations. We evaluate the sub-optimality of the BLUE as compared to the MVUE, in terms of excess error variance, in the presence of non-Gaussian errors. The error pdfs are obtained by the maximum entropy method constrained by error moments up to fourth order, from which the Bayesian probability density function and the MVUE are computed. The impact is higher for skewed extreme innovations and grows on average with the skewness of data errors, especially if those skewnesses have the same sign. The method has been applied to the quality-accepted ECMWF innovations of brightness temperatures of a set of High Resolution Infrared Sounder channels. 
In this context, the MVUE has led in some extreme cases to a potential reduction of 20-60% error
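The BLUE-versus-MVUE gap can be made concrete with a toy Monte Carlo sketch (all numbers invented, and a simple centred-exponential error standing in for the paper's maximum-entropy pdfs): the BLUE uses only the error variances, while the conditional mean, i.e. the MVUE, exploits the full skewed error density.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Truth: standard normal; observation error: centred exponential
# (mean 0, variance 1, skewness 2), a strongly non-Gaussian pdf.
x = rng.standard_normal(n)
e = rng.exponential(1.0, n) - 1.0
y = x + e

# BLUE: linear gain built from the two variances alone.
blue = 0.5 * y                            # var_x / (var_x + var_e) = 1/2

# MVUE: conditional mean E[x | y], integrating prior * likelihood on a grid.
grid = np.linspace(-6.0, 6.0, 601)
prior = np.exp(-0.5 * grid**2)
def mvue(yi):
    err = yi - grid                       # implied error e = y - x
    lik = np.where(err >= -1.0, np.exp(-(err + 1.0)), 0.0)
    post = prior * lik
    return np.sum(grid * post) / np.sum(post)

est = np.array([mvue(yi) for yi in y])
mse_blue = np.mean((blue - x) ** 2)
mse_mvue = np.mean((est - x) ** 2)
print(f"BLUE MSE {mse_blue:.3f}  MVUE MSE {mse_mvue:.3f}")
```

With this skewed error pdf the excess BLUE error variance is visible directly; with Gaussian errors the two estimators would coincide.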

  10. Solitons on a finite-gap background in Bullough-Dodd-Jiber-Shabat model

    SciTech Connect

    Cherdantzev, I.Y.; Sharipov, R.A.

    1990-08-10

    This paper presents the determinant formula for N-soliton solutions of the Bullough-Dodd-Jiber-Shabat equation on a finite-gap background. Nonsingularity conditions for them and their asymptotics are investigated.

  11. Modeling Lost-Particle Backgrounds in PEP-II Using LPTURTLE

    SciTech Connect

    Fieguth, T.; Barlow, R.; Kozanecki, W.; /DAPNIA, Saclay

    2005-05-17

    Background studies during the design, construction, commissioning, operation, and improvement of BaBar and PEP-II have been greatly influenced by results from a program referred to as LPTURTLE (Lost Particle TURTLE), which was originally conceived for the purpose of studying gas background for SLC. This venerable program is still in use today. We describe its use, capabilities, and improvements, and refer to current results now being applied to BaBar.

  12. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation, applying low-dimensional models of the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
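A drastically simplified scalar analogue of the covariance matching idea (all numbers invented): sample lag-0 and lag-1 covariances of the data are equated with their theoretical expectations and solved for the unknown process and measurement noise variances, without ever forming an innovations sequence.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q_true, r_true, n = 0.9, 0.5, 1.0, 200_000

# Simulate x_{k+1} = a x_k + w_k (var q), observed as y_k = x_k + v_k (var r).
w = rng.normal(0.0, np.sqrt(q_true), n)
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = a * x[i] + w[i]
y = x + rng.normal(0.0, np.sqrt(r_true), n)

# Theoretical expectations, with stationary state variance P = q / (1 - a^2):
#   c0 = E[y_k^2]       = P + r
#   c1 = E[y_k y_{k+1}] = a P
# Match the sample covariances to these and solve for q and r.
c0 = np.mean(y * y)
c1 = np.mean(y[:-1] * y[1:])
P_hat = c1 / a
r_hat = c0 - P_hat
q_hat = P_hat * (1.0 - a**2)
print(f"q_hat {q_hat:.2f} (true {q_true})  r_hat {r_hat:.2f} (true {r_true})")
```

The real CMA works with matrices of model-data residuals and a least-squares fit over many such moment equations; the scalar case shows the mechanism.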

  13. A Model of Family Background, Family Process, Youth Self-Control, and Delinquent Behavior in Two-Parent Families

    ERIC Educational Resources Information Center

    Jeong, So-Hee; Eamon, Mary Keegan

    2009-01-01

    Using data from a national sample of two-parent families with 11- and 12-year-old youths (N = 591), we tested a structural model of family background, family process (marital conflict and parenting), youth self-control, and delinquency four years later. Consistent with the conceptual model, marital conflict and youth self-control are directly…

  14. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era

    PubMed Central

    2015-01-01

    Background The foundation of best practice in health promotion is a robust theoretical base that informs the design, implementation, and evaluation of interventions that promote the public's health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation are derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. Objective The proposed adaptations to the agenda-setting model reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. Methods To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The final Twitter dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. Results The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptations. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. The influence

  15. Saccade adaptation as a model of learning in voluntary movements.

    PubMed

    Iwamoto, Yoshiki; Kaku, Yuki

    2010-07-01

    Motor learning ensures the accuracy of our daily movements. However, we know relatively little about its mechanisms, particularly for voluntary movements. Saccadic eye movements serve to bring the image of a visual target precisely onto the fovea. Their accuracy is maintained not by on-line sensory feedback but by a learning mechanism, called saccade adaptation. Recent studies on saccade adaptation have provided valuable additions to our knowledge of motor learning. This review summarizes what we know about the characteristics and neural mechanisms of saccade adaptation, emphasizing recent findings and new ideas. Long-term adaptation, distinct from its short-term counterpart, seems to be present in the saccadic system. Accumulating evidence indicates the involvement of the oculomotor cerebellar vermis as a learning site. The superior colliculus is now suggested not only to generate saccade commands but also to issue driving signals for motor learning. These and other significant contributions have advanced our understanding of saccade adaptation and motor learning in general.

  16. The Radio Language Arts Project: adapting the radio mathematics model.

    PubMed

    Christensen, P R

    1985-01-01

    Kenya's Radio Language Arts Project, directed by the Academy for Educational Development in cooperation with the Kenya Institute of Education in 1980-85, sought to teach English to rural school children in grades 1-3 through an intensive, radio-based instructional system. Daily half-hour lessons were broadcast throughout the school year and supported by teachers and print materials. The project was further aimed at testing the feasibility of adapting the successful Nicaraguan Radio Math Project to a new subject area. Difficulties were encountered in articulating a language curriculum with the precision required for a media-based instructional system. Defining an acceptable regional standard for pronunciation and grammar was also a challenge; British English was finally selected. An important modification of the Radio Math model concerned the role of the teacher. While Radio Math sought to reduce the teacher's responsibilities during the broadcast, Radio Language Arts teachers played an important instructional role during the English lesson broadcasts by providing translation and checks on work. Evaluations of the Radio Language Arts Project suggest significant gains in speaking, listening, and reading skills as well as high levels of satisfaction on the part of parents and teachers.

  17. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    SciTech Connect

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.; Vishwanath, Venkatram; Kumaran, Kalyan

    2015-01-01

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scale to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, neighboring inner loops may exhibit different concurrency patterns (e.g., Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.

  18. Adaptable Information Models in the Global Change Information System

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Buddenberg, A.; Aulenbach, S.; Wolfe, R.; Goldstein, J.

    2014-12-01

    The US Global Change Research Program has sponsored the creation of the Global Change Information System to provide a web-based source of accessible, usable, and timely information about climate and global change for use by scientists, decision makers, and the public. The GCIS played multiple roles during the assembly and release of the Third National Climate Assessment. It provided human and programmable interfaces, relational and semantic representations of information, and discrete identifiers for various types of resources, which could then be manipulated by a distributed team with a wide range of specialties. The GCIS also served as a scalable backend for the web-based version of the report. In this talk, we discuss the infrastructure decisions made during the design and deployment of the GCIS, as well as ongoing work to adapt to new types of information. Both a constrained relational database and an open-ended triple store are used to ensure data integrity while maintaining fluidity. Using natural primary keys allows identifiers to propagate through both models. Changing identifiers are accommodated through fine-grained auditing and explicit mappings to external lexicons. A practical RESTful API is used whose endpoints are also URIs in an ontology. Both the relational schema and the ontology are malleable, and stability is ensured through test-driven development and continuous integration testing using modern open-source techniques. Content is also validated through continuous testing techniques. A high degree of scalability is achieved through caching.

  19. Distinguishing Models of Professional Development: The Case of an Adaptive Model's Impact on Teachers' Knowledge, Instruction, and Student Achievement

    ERIC Educational Resources Information Center

    Koellner, Karen; Jacobs, Jennifer

    2015-01-01

    We posit that professional development (PD) models fall on a continuum from highly adaptive to highly specified, and that these constructs provide a productive way to characterize and distinguish among models. The study reported here examines the impact of an adaptive mathematics PD model on teachers' knowledge and instructional practices as well…

  20. Adapted Boolean network models for extracellular matrix formation

    PubMed Central

    Wollbold, Johannes; Huber, René; Pohlers, Dirk; Koczan, Dirk; Guthke, Reinhard; Kinne, Raimund W; Gausmann, Ulrike

    2009-01-01

    Background Due to the rapid data accumulation on pathogenesis and progression of chronic inflammation, there is an increasing demand for approaches to analyse the underlying regulatory networks. For example, rheumatoid arthritis (RA) is a chronic inflammatory disease, characterised by joint destruction and perpetuated by activated synovial fibroblasts (SFB). These abnormally express and/or secrete pro-inflammatory cytokines, collagens causing joint fibrosis, or tissue-degrading enzymes resulting in destruction of the extra-cellular matrix (ECM). We applied three methods to analyse ECM regulation: data discretisation to filter out noise and to reduce complexity, Boolean network construction to implement logic relationships, and formal concept analysis (FCA) for the formation of minimal, but complete rule sets from the data. Results First, we extracted literature information to develop an interaction network containing 18 genes representing ECM formation and destruction. Subsequently, we constructed an asynchronous Boolean network with biologically plausible time intervals for mRNA and protein production, secretion, and inactivation. Experimental gene expression data were obtained from SFB stimulated by TGFβ1 or by TNFα and discretised thereafter. The Boolean functions of the initial network were improved iteratively by the comparison of the simulation runs to the experimental data and by exploitation of expert knowledge. This resulted in adapted networks for both cytokine stimulation conditions. The simulations were further analysed by the attribute exploration algorithm of FCA, integrating the observed time series in a fine-tuned and automated manner. The resulting temporal rules yielded new contributions to controversially discussed aspects of fibroblast biology (e.g., considerable expression of TNF and MMP9 by stimulated fibroblasts) and corroborated previously known facts (e.g., co-expression of collagens and MMPs after TNFα stimulation), but also revealed
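A minimal sketch of an asynchronous Boolean network in this spirit, with four invented nodes and rules rather than the paper's 18-gene ECM model: stimuli hold their own state, TNFα induces MMP, TGFβ induces collagen, and MMP destroys the collagen read-out.

```python
import random

# Hypothetical four-node network (not the published 18-gene ECM network).
rules = {
    "TNFa":     lambda s: s["TNFa"],                 # external stimulus
    "TGFb":     lambda s: s["TGFb"],                 # external stimulus
    "MMP":      lambda s: s["TNFa"],
    "Collagen": lambda s: s["TGFb"] and not s["MMP"],
}

def simulate(state, steps=200, seed=0):
    """Asynchronous update: one randomly chosen node recomputed per step."""
    rng = random.Random(seed)
    s = dict(state)
    nodes = list(rules)
    for _ in range(steps):
        node = rng.choice(nodes)
        s[node] = rules[node](s)
    return s

# Under TNFa stimulation, MMP switches on and collagen is eventually lost.
final = simulate({"TNFa": True, "TGFb": True, "MMP": False, "Collagen": True})
print(final)
```

With 200 asynchronous steps over four nodes every node is updated many times, so the attractor is reached in practice; the paper additionally attaches biologically plausible time intervals to the updates.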

  1. Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL): adapting the Partial Phylogenetic Profiling algorithm to scan sequences for signatures that predict protein function

    PubMed Central

    2010-01-01

    Background Comparative genomics methods such as phylogenetic profiling can mine powerful inferences from inherently noisy biological data sets. We introduce Sites Inferred by Metabolic Background Assertion Labeling (SIMBAL), a method that applies the Partial Phylogenetic Profiling (PPP) approach locally within a protein sequence to discover short sequence signatures associated with functional sites. The approach is based on the basic scoring mechanism employed by PPP, namely the use of binomial distribution statistics to optimize sequence similarity cutoffs during searches of partitioned training sets. Results Here we illustrate and validate the ability of the SIMBAL method to find functionally relevant short sequence signatures by application to two well-characterized protein families. In the first example, we partitioned a family of ABC permeases using a metabolic background property (urea utilization). Thus, the TRUE set for this family comprised members whose genome of origin encoded a urea utilization system. By moving a sliding window across the sequence of a permease, and searching each subsequence in turn against the full set of partitioned proteins, the method found which local sequence signatures best correlated with the urea utilization trait. Mapping of SIMBAL "hot spots" onto crystal structures of homologous permeases reveals that the significant sites are gating determinants on the cytosolic face rather than, say, docking sites for the substrate-binding protein on the extracellular face. In the second example, we partitioned a protein methyltransferase family using gene proximity as a criterion. In this case, the TRUE set comprised those methyltransferases encoded near the gene for the substrate RF-1. SIMBAL identifies sequence regions that map onto the substrate-binding interface while ignoring regions involved in the methyltransferase reaction mechanism in general. Neither method for training set construction requires any prior experimental
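The binomial scoring at the heart of PPP/SIMBAL can be sketched on a toy partitioned training set (sequences and window size invented, and exact substring matching standing in for the similarity search with optimized cutoffs): a window slides along the query, and each subsequence is scored by how improbably its hits concentrate in the TRUE partition.

```python
from math import comb

def binom_tail(k, n, p):
    """P[X >= k] for X ~ Binomial(n, p): the PPP/SIMBAL score basis."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Toy partitioned training set (sequences invented): TRUE means the genome
# of origin carries the metabolic trait used to partition the family.
true_set  = ["MKLAVGGW", "MKLAVGGF", "QKLAVGGY"]
false_set = ["MPQRSTUV", "NNQRSTAV", "PPQRSTGW"]

def simbal_score(query, window=5):
    """Slide a window along the query; a low tail probability marks a
    subsequence whose hits concentrate in the TRUE partition."""
    p = len(true_set) / (len(true_set) + len(false_set))   # background rate
    scores = {}
    for i in range(len(query) - window + 1):
        sub = query[i:i + window]
        hits_true = sum(sub in s for s in true_set)
        hits_all = hits_true + sum(sub in s for s in false_set)
        if hits_all:
            scores[sub] = binom_tail(hits_true, hits_all, p)
    return scores

scores = simbal_score("MKLAVGGW")
print(scores)
```

Windows whose matches all fall in the TRUE partition get the smallest tail probabilities; these are the "hot spots" that would be mapped onto structures.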

  2. Decentralized Adaptive Control of Systems with Uncertain Interconnections, Plant-Model Mismatch and Actuator Failures

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has knowledge of all reference model states. The mismatch is estimated online for each subsystem, and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.
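The adaptation machinery in such schemes can be sketched for a single scalar subsystem (plant, gains, and reference command all invented; the paper's interconnected, mismatched, multi-subsystem setting is far richer). The feedback gain is updated by a Lyapunov-based law driven by the tracking error against the reference model.

```python
# Scalar model-reference adaptive control sketch (all numbers invented):
#   plant:      x'  = a*x + u, with a unknown to the controller
#   reference:  xm' = -am*xm + am*r
#   control:    u   = -k*x + am*r, ideal gain k* = a + am
#   adaptation: k'  = gamma * e * x, with e = x - xm  (Lyapunov-based law)
a, am, gamma = 2.0, 3.0, 5.0
dt, T = 1e-3, 40.0
x = xm = k = 0.0
errs = []
for i in range(int(T / dt)):
    t = i * dt
    r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference command
    u = -k * x + am * r
    e = x - xm                               # tracking error vs. reference model
    k += dt * (gamma * e * x)                # adapt the feedback gain
    x += dt * (a * x + u)                    # forward-Euler plant update
    xm += dt * (-am * xm + am * r)           # forward-Euler reference model
    errs.append(abs(e))
print(f"k -> {k:.2f} (ideal {a + am:.0f}), final |e| = {errs[-1]:.4f}")
```

The unstable plant (a > 0) is stabilized as k converges toward a + am and the tracking error decays; the reference switching supplies the excitation the adaptation needs.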

  3. Modelling the background aerosol climatologies (1989-2010) for the Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Jimenez-Guerrero, Pedro; Jerez, Sonia

    2014-05-01

    seasonally; here the sea spray clearly follows the wind speed variation. The results confirm the capability of the modelling strategies to reproduce particulate matter levels, composition, and variation in the Mediterranean area. This kind of information is useful for establishing strategies to improve aerosol prediction and to achieve the standards set in European Directives for modelling applications. Kulmala, M., Asmi, A., Lappalainen, H.K., Carslaw, K.S., Pöschl, U., Baltensperger, U., Hov, O., Brenquier, J.-L., Pandis, S.N., Facchini, M.C., Hanson, H.-C., Wiedensohler, A., O'Dowd, C.D., 2009. Introduction: European Integrated Project on Aerosol Cloud Climate and Air Quality interactions (EUCAARI) - integrating aerosol research from nano to global scales. Atmos. Chem. Phys., 9, 2825-2841. Querol, X., Alastuey, A., Pey, J., Cusack, M., Pérez, N., Mihalopoulos, N., Theodosi, C., Gerasopoulos, E., Kubilay, N., Koçak, M., 2009. Variability in regional background aerosols within the Mediterranean. Atmos. Chem. Phys., 9, 4575-4591.

  4. Measuring diffuse interstellar bands with cool stars. Improved line lists to model background stellar spectra

    NASA Astrophysics Data System (ADS)

    Monreal-Ibero, A.; Lallement, R.

    2017-03-01

    Context. Diffuse interstellar bands (DIBs) are ubiquitous in stellar spectra. Traditionally, they have been studied through their extraction from hot (early-type) stars because of their smooth continuum. In an era in which there are several ongoing or planned massive Galactic surveys using multi-object spectrographs, cool (late-type) stars constitute an appealing set of targets. However, from the technical point of view, the extraction of DIBs from their spectra is more challenging because of the complexity of the continuum. Aims: In this contribution we provide the community with an improved set of stellar lines in the spectral regions associated with the strong DIBs at λ6196.0, λ6269.8, λ6283.8, and λ6379.3. These lines allow for the creation of better stellar synthetic spectra that reproduce the background emission, and for a more accurate extraction of the quantities associated with a given DIB (e.g., equivalent width, radial velocity). Methods: The Sun and Arcturus were used as representative examples of dwarf and giant stars, respectively. A high quality spectrum for each of them was modeled using TURBOSPECTRUM and the Vienna Atomic Line Database (VALD) stellar line list. The oscillator strength log (gf) and wavelength of specific lines were modified to create synthetic spectra in which the residuals in both the Sun and Arcturus were minimized. Results: The TURBOSPECTRUM synthetic spectra, based on improved line lists, reproduce the observed spectra for the Sun and Arcturus in the mentioned spectral ranges with greater accuracy. Residuals between the synthetic and observed spectra are always ≲10%, which is much better than with previously existing options. We tested the new line lists with some characteristic spectra from a variety of stars, including both giant and dwarf stars, and under different degrees of extinction. As occurred with the Sun and Arcturus, residuals in the fits used to extract the DIB information are smaller when using synthetic spectra

  5. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines.

    PubMed

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements convey an impression of elegance, combining versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to depend largely not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). These are used to different degrees in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback, while internal models are used for sensory prediction and state estimation. Following this concept, we present adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns, including insect-like leg movements and gaits, as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, loss of ground contact during the stance phase, stepping on or hitting an obstacle during the swing phase, and leg damage, and even to promote cockroach-like climbing behavior. The results presented here thus show that the employed embodied neural closed-loop system can be a powerful way to develop robust and adaptable machines.
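The CPG component of such controllers is often a tiny recurrent network. A minimal sketch (weights, gains, and frequency all illustrative) is a two-neuron tanh pair whose weight matrix is a slightly expanded rotation, yielding a self-sustaining rhythm that could drive a leg joint.

```python
import math

phi = 0.2                      # rotation angle ~ oscillation frequency
alpha = 1.01                   # slight expansion sustains the limit cycle
w = [[alpha * math.cos(phi), alpha * math.sin(phi)],
     [-alpha * math.sin(phi), alpha * math.cos(phi)]]

o = [0.1, 0.0]                 # small kick to start the oscillation
trace = []
for _ in range(500):
    o = [math.tanh(w[0][0] * o[0] + w[0][1] * o[1]),
         math.tanh(w[1][0] * o[0] + w[1][1] * o[1])]
    trace.append(o[0])

# After a transient, the output settles into a bounded rhythm around zero.
late = trace[300:]
print(f"min {min(late):.2f}  max {max(late):.2f}")
```

The linearized map has eigenvalues of modulus alpha, so the oscillation grows until the tanh saturation balances the expansion; neuromodulation in the full controller amounts to adjusting such parameters online.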

  7. Center-surround interaction with adaptive inhibition: a computational model for contour detection.

    PubMed

    Zeng, Chi; Li, Yongjie; Li, Chaoyi

    2011-03-01

    The broad region outside the classical receptive field (CRF) of a neuron in the primary visual cortex (V1), namely the non-CRF (nCRF), exerts robust modulatory effects on the responses to visual stimuli presented within the CRF. This modulating effect is mostly suppressive and plays important roles in visual information processing. One possible role is to extract object contours from disorderly background textures. In this study, a two-scale contour extraction model, inspired by the inhibitory interactions between the CRF and nCRF of V1 neurons, is presented. The key idea is that the side and end subregions of the nCRF work in different manners: while the strength of side inhibition is consistently calculated from the local features in the side regions at a fine spatial scale alone, the strength of end inhibition varies adaptively with the local features in both the end and side regions at both fine and coarse scales. Computationally, the end regions exert weaker inhibition on the CRF at locations where a meaningful contour is more likely to exist in the local texture, and stronger inhibition at locations where the texture elements are mainly stochastic. Our results demonstrate that by introducing such an adaptive mechanism into the model, non-meaningful texture elements are removed dramatically while, at the same time, object contours are extracted effectively. Besides its superior performance in contour detection over other inhibition-based models, our model provides a better understanding of the roles of the nCRF and has potential applications in computer vision and pattern recognition.
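A toy sketch of the adaptive-inhibition idea (illustrative only; the published model uses oriented CRF/nCRF filters at two scales with distinct side and end regions): surround inhibition of a gradient-magnitude response is weakened where the surround's gradient field is orientation-coherent, i.e. contour-like, and kept strong in stochastic texture.

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(0, 0.3, (40, 40))       # random texture
img[:, 20:] += 2.0                       # vertical step edge = the "contour"

gy, gx = np.gradient(img)                # gradients along rows, then columns
resp = np.hypot(gx, gy)                  # CRF-like edge response

out = np.zeros_like(resp)
for i in range(3, 37):
    for j in range(3, 37):
        nb_gx = gx[i-3:i+4, j-3:j+4]
        nb_gy = gy[i-3:i+4, j-3:j+4]
        # Orientation coherence of the surround (structure-tensor style).
        jxx, jyy = (nb_gx**2).mean(), (nb_gy**2).mean()
        jxy = (nb_gx * nb_gy).mean()
        coh = np.sqrt((jxx - jyy)**2 + 4 * jxy**2) / (jxx + jyy + 1e-9)
        # Inhibition is scaled down where the surround is coherent.
        surround = resp[i-3:i+4, j-3:j+4].mean()
        out[i, j] = max(0.0, resp[i, j] - (1.0 - coh) * surround)

on_edge = out[5:35, 19:21].mean()        # response along the contour
off_edge = out[5:35, 5:15].mean()        # response in plain texture
print(f"edge response {on_edge:.2f} vs texture {off_edge:.2f}")
```

The texture response is largely cancelled by its own surround, while the coherent edge escapes inhibition, which is the qualitative behaviour the adaptive end-region mechanism is designed to produce.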

  8. A Hybrid Acoustic and Pronunciation Model Adaptation Approach for Non-native Speech Recognition

    NASA Astrophysics Data System (ADS)

    Oh, Yoo Rhee; Kim, Hong Kook

    In this paper, we propose a hybrid model adaptation approach in which pronunciation and acoustic models are adapted by incorporating the pronunciation and acoustic variabilities of non-native speech in order to improve the performance of non-native automatic speech recognition (ASR). Specifically, the proposed hybrid model adaptation can be performed at either the state-tying or the triphone-modeling level, depending on the level at which acoustic model adaptation is performed. In both methods, we first analyze the pronunciation variant rules of non-native speakers and then classify each rule as either a pronunciation variant or an acoustic variant. The state-tying level hybrid method then adapts pronunciation models and acoustic models by accommodating the pronunciation variants in the pronunciation dictionary and by clustering the states of triphone acoustic models using the acoustic variants, respectively. On the other hand, the triphone-modeling level hybrid method initially adapts pronunciation models in the same way as the state-tying level hybrid method; for the acoustic model adaptation, however, the triphone acoustic models are re-estimated based on the adapted pronunciation models, and the states of the re-estimated triphone acoustic models are clustered using the acoustic variants. Korean-spoken English speech recognition experiments show that ASR systems employing the state-tying and triphone-modeling level adaptation methods reduce the average word error rate (WER) for non-native speech by a relative 17.1% and 22.1%, respectively, when compared to a baseline ASR system.
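The pronunciation-model half of such a hybrid scheme can be sketched as dictionary adaptation (words, phone strings, and rules all invented): each observed variant rule is classified as a pronunciation variant, which spawns an extra lexicon entry, or as an acoustic variant, which is set aside for the later state clustering step.

```python
# Hypothetical lexicon and non-native variant rules, for illustration only.
lexicon = {"rice": ["r ay s"], "thin": ["th ih n"]}

variant_rules = [
    ("r", "l", "pronunciation"),   # /r/ -> /l/ substitution
    ("th", "s", "pronunciation"),  # /th/ -> /s/ substitution
    ("ay", "ay", "acoustic"),      # same phone, shifted acoustic realization
]

def adapt_lexicon(lexicon, rules):
    """Add variant pronunciations for pronunciation rules; collect acoustic
    rules for use in acoustic-model state clustering."""
    adapted = {w: list(prons) for w, prons in lexicon.items()}
    acoustic_variants = []
    for src, dst, kind in rules:
        if kind == "pronunciation":
            for word, prons in lexicon.items():
                for pron in prons:
                    phones = pron.split()
                    if src in phones:
                        new = " ".join(dst if p == src else p for p in phones)
                        if new not in adapted[word]:
                            adapted[word].append(new)
        else:
            acoustic_variants.append((src, dst))
    return adapted, acoustic_variants

adapted, acoustic = adapt_lexicon(lexicon, variant_rules)
print(adapted)
```

In the full method the acoustic variants then steer either the state tying or the triphone re-estimation; the sketch covers only the shared dictionary step.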

  9. Command generator tracker based direct model reference adaptive control of a PUMA 560 manipulator. Thesis

    NASA Technical Reports Server (NTRS)

    Swift, David C.

    1992-01-01

    This project dealt with the application of a Direct Model Reference Adaptive Control algorithm to the control of a PUMA 560 Robotic Manipulator. This chapter will present some motivation for using Direct Model Reference Adaptive Control, followed by a brief historical review, the project goals, and a summary of the subsequent chapters.

  10. A Context-Adaptive Teacher Training Model in a Ubiquitous Learning Environment

    ERIC Educational Resources Information Center

    Chen, Min; Chiang, Feng Kuang; Jiang, Ya Na; Yu, Sheng Quan

    2017-01-01

    In view of the discrepancies in teacher training and teaching practice, this paper put forward a context-adaptive teacher training model in a ubiquitous learning (u-learning) environment. The innovative model provides teachers of different subjects with adaptive and personalized learning content in a u-learning environment, implements intra- and…

  11. A Systematic Ecological Model for Adapting Physical Activities: Theoretical Foundations and Practical Examples

    ERIC Educational Resources Information Center

    Hutzler, Yeshayahu

    2007-01-01

    This article proposes a theory- and practice-based model for adapting physical activities. The ecological frame of reference includes Dynamic and Action System Theory, World Health Organization International Classification of Function and Disability, and Adaptation Theory. A systematic model is presented addressing (a) the task objective, (b) task…

  12. Regulation of Persistent Activity by Background Inhibition in an In Vitro Model of a Cortical Microcircuit

    PubMed Central

    Fellous, Jean-Marc; Sejnowski, Terrence J.

    2010-01-01

    We combined in vitro intracellular recording from prefrontal cortical neurons with simulated synaptic activity of a layer 5 prefrontal microcircuit using a dynamic clamp. During simulated in vivo background conditions, the cell responded to a brief depolarization with a sequence of spikes that outlasted the depolarization, mimicking the activity of a cell recorded during the delay period of a working memory task in the behaving monkey. The onset of sustained activity depended on the number of action potentials elicited by the cue-like depolarization. Too few spikes failed to provide enough NMDA drive to elicit sustained reverberations; too many spikes activated a slow intrinsic hyperpolarization current that prevented spiking; an intermediate number of spikes produced sustained activity. When high dopamine levels were simulated by depolarizing the cell and by increasing the amount of NMDA current, the cell exhibited spontaneous ‘up-states’ that were terminated by the activation of a slow intrinsic hyperpolarizing current. The firing rate during the delay period could be effectively modulated by the standard deviation of the inhibitory background synaptic noise without significant changes in the background firing rate before cue onset. These results suggest that the balance between fast feedback inhibition and slower AMPA and NMDA feedback excitation is critical in initiating persistent activity, and that the maintenance of persistent activity may be regulated by the amount of correlated background inhibition. PMID:14576214

  13. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes testing more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally-increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.

  14. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    NASA Astrophysics Data System (ADS)

    Rolland, Joran; Simonnet, Eric

    2015-02-01

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection-mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called committor as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
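
    The selection-mutation loop described above can be sketched on a toy escape problem. The code below is an illustrative assumption, not one of the paper's models: an overdamped Ornstein-Uhlenbeck walk that must reach a rare target level before falling back to a reservoir, with max(path) as the (non-optimal) reaction coordinate and the standard (n-1)/n reweighting per killed particle.

```python
import random

def extend(path, a=-1.0, b=3.0, dt=0.01, rng=random):
    """Continue a trajectory of dX = -X dt + sqrt(2 dt) * xi until it
    falls below the reservoir level a or reaches the target level b."""
    path = list(path)
    x = path[-1]
    while a < x < b:
        x += -x * dt + (2 * dt) ** 0.5 * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def ams(n=30, n_iter=200, b=3.0, seed=1):
    """Adaptive multilevel splitting with reaction coordinate max(path):
    kill the worst trajectory, branch a surviving one from its first
    crossing of the killed level, and reweight by (n-1)/n per kill."""
    rng = random.Random(seed)
    paths = [extend([0.1], rng=rng) for _ in range(n)]
    weight = 1.0
    for _ in range(n_iter):
        scores = [max(p) for p in paths]
        level = min(scores)
        if level >= b:                     # every particle reached the target
            break
        k = scores.index(level)            # index of the killed trajectory
        donor = rng.choice([p for i, p in enumerate(paths) if i != k])
        cut = next(i for i, x in enumerate(donor) if x >= level)
        paths[k] = extend(donor[:cut + 1], rng=rng)
        weight *= (n - 1) / n              # unbiased reweighting per kill
    reached = sum(max(p) >= b for p in paths)
    return weight * reached / n
```

The returned value estimates the small probability of reaching b before a; with a poor reaction coordinate the estimator's variance can blow up exactly as the abstract describes.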

  15. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran; Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called committor as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  16. Non-negative infrared patch-image model: Robust target-background separation via partial sum minimization of singular values

    NASA Astrophysics Data System (ADS)

    Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun

    2017-03-01

    To further enhance the small targets and suppress the heavy clutter simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which could provide a more accurate background estimation and eliminate almost all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which not only removes further undesirable components but also accelerates the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. Extensive experiments demonstrate that the proposed model achieves a significant improvement over the other nine competitive methods in terms of both clutter-suppressing performance and convergence rate.
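
    The key ingredient — keeping the leading singular values untouched and soft-thresholding only the tail — can be sketched numerically. The simple alternating scheme and all thresholds below are illustrative stand-ins for the paper's inexact-ALM solver, not its actual algorithm:

```python
import numpy as np

def prox_partial_nuclear(M, rank_keep, tau):
    """Proximal step for the partial sum of singular values: the
    largest `rank_keep` singular values are kept as-is, the rest are
    soft-thresholded by tau (a low-rank surrogate for the background)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_new = s.copy()
    s_new[rank_keep:] = np.maximum(s[rank_keep:] - tau, 0.0)
    return U @ np.diag(s_new) @ Vt

def separate(D, rank_keep=1, tau_b=0.5, tau_t=0.3, n_iter=50):
    """Naive alternating target-background separation: low-rank
    background via the partial-sum prox, sparse non-negative target
    via soft-thresholding with a non-negativity clamp."""
    B = np.zeros_like(D)
    T = np.zeros_like(D)
    for _ in range(n_iter):
        B = prox_partial_nuclear(D - T, rank_keep, tau_b)
        T = np.maximum(D - B - tau_t, 0.0)   # sparse + non-negative target
    return B, T
```

On a synthetic rank-1 "sky" with one bright pixel added, the non-negative thresholding recovers the target location while the partial-sum prox absorbs the smooth background.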

  17. Model predictive control with constraints for a nonlinear adaptive cruise control vehicle model in transition manoeuvres

    NASA Astrophysics Data System (ADS)

    Ali, Zeeshan; Popov, Atanas A.; Charles, Guy

    2013-06-01

    A vehicle following control law, based on the model predictive control method, to perform transition manoeuvres (TMs) for a nonlinear adaptive cruise control (ACC) vehicle is presented in this paper. The TM controller ultimately establishes a steady-state following distance behind a preceding vehicle to avoid collision, taking account of acceleration limits, safe distance, and state constraints. The vehicle dynamics model is formulated in the continuous-time domain and captures the real dynamics of the sub-vehicle models for steady-state and transient operations. The ACC vehicle can execute the TM successfully and achieves steady state in the presence of complex dynamics within the constraint boundaries.

  18. [Study on simplification of extraction kinetics model and adaptability of total flavonoids model of Scutellariae radix].

    PubMed

    Chen, Yang; Zhang, Jin; Ni, Jian; Dong, Xiao-Xu; Xu, Meng-Jie; Dou, Hao-Ran; Shen, Ming-Rui; Yang, Bo-Di; Fu, Jing

    2014-01-01

    Because of the irregular shapes of Chinese herbal pieces, we simplified the previously derived general extraction kinetics model for TCMs and absorbed the particle diameters of the herbs, which are difficult to determine directly, into the final parameter "a". Avoiding the direct determination of particle diameters increases the accuracy of the model, expands its application scope, and brings it closer to actual production conditions. Finally, a simplified model was established, with its corresponding experimental methods and data processing methods determined. With total flavonoids in Scutellariae Radix as the determination index, we studied the adaptability of the model for total flavonoids extracted from Scutellariae Radix by the water decoction method. The results showed a good linear correlation among the natural logarithm of the mass concentration of total flavonoids, the extraction time, and the natural logarithm of the solvent multiple. Through calculation and fitting, we established the kinetic model of extracting total flavonoids from Scutellariae Radix by water decoction and verified it, with a good degree of fit and deviation within the range of industrial production requirements. This indicates that the model established by this method has good adaptability.
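
    The reported log-linear relation can be illustrated with an ordinary least-squares fit of ln C against time and the log of the solvent multiple. All coefficient values, time ranges, and solvent ratios below are invented for the demonstration and are not from the study:

```python
import numpy as np

# Hypothetical data obeying the simplified kinetic form
# ln C = a + b * t + c * ln(M), with made-up coefficients.
rng = np.random.default_rng(1)
t = rng.uniform(10, 120, 60)           # extraction time, min (assumed)
M = rng.uniform(6, 14, 60)             # solvent-to-herb ratio (assumed)
true = np.array([0.8, 0.012, -0.55])   # hypothetical a, b, c
lnC = true[0] + true[1] * t + true[2] * np.log(M)  # noiseless for clarity

# Design matrix [1, t, ln M] and least-squares fit of the coefficients.
X = np.column_stack([np.ones_like(t), t, np.log(M)])
coef, *_ = np.linalg.lstsq(X, lnC, rcond=None)
```

With noiseless synthetic data the fit recovers the generating coefficients exactly, which is the same linearity check the abstract describes performing on measured concentrations.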

  19. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    NASA Astrophysics Data System (ADS)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To avoid updating the model too frequently, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.
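
    The time-difference idea can be sketched in a few lines: regress differences of the output on differences of the inputs, so a slowly drifting process bias cancels out. Plain least squares stands in here for the recursive PLS of the paper, and the drift and coefficients are invented for the demonstration:

```python
import numpy as np

# Synthetic process: linear in three inputs plus a slow drift that a
# model trained on raw values would absorb incorrectly.
rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=(n, 3))
drift = np.linspace(0.0, 5.0, n)           # slow time-varying bias (assumed)
w = np.array([1.0, -2.0, 0.5])             # hypothetical true gains
y = x @ w + drift

# Time-difference samples: differencing removes the (nearly constant
# per step) drift from the training targets.
dx, dy = np.diff(x, axis=0), np.diff(y)
coef, *_ = np.linalg.lstsq(dx, dy, rcond=None)

# Predict y_t as the previous output plus the modelled increment.
y_pred = y[:-1] + dx @ coef
```

The recovered coefficients match the true gains despite the drift, which is the robustness to time-variant behaviour the abstract claims for time-difference modelling.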

  20. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers is a dynamic adaptive control gain that establishes a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  1. Model-Based Nonrigid Motion Analysis Using Natural Feature Adaptive Mesh

    SciTech Connect

    Zhang, Y.; Goldgof, D.B.; Sarkar, S.; Tsap, L.V.

    2000-04-25

    The success of nonrigid motion analysis using a physical finite element model depends on the mesh that characterizes the object's geometric structure. We suggest a deformable mesh adapted to the natural features of images. The adaptive mesh requires far fewer nodes than the fixed mesh used in our previous work. We demonstrate the higher efficiency of the adaptive mesh in the context of estimating burn scar elasticity relative to normal skin elasticity using the observed 2D image sequence. Our results show that the scar assessment method based on the physical model using a natural feature adaptive mesh can be applied to images which do not have artificial markers.

  2. Tensor product model transformation based adaptive integral-sliding mode controller: equivalent control method.

    PubMed

    Zhao, Guoliang; Sun, Kaibiao; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, while upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers is a dynamic adaptive control gain that establishes a sliding mode right at the beginning of the process. Gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model.

  3. Digital adaptive model following flight control. [using fighter aircraft mathematical model-following algorithm

    NASA Technical Reports Server (NTRS)

    Alag, G. S.; Kaufman, H.

    1974-01-01

    Simple mechanical linkages are often unable to cope with the many control problems associated with high performance aircraft maneuvering over a wide flight envelope. One procedure for retaining uniform handling qualities over such an envelope is to implement a digital adaptive controller. Towards such an implementation an explicit adaptive controller, which makes direct use of online parameter identification, has been developed and applied to the linearized equations of motion for a typical fighter aircraft. The system is composed of an online weighted least squares identifier, a Kalman state filter, and a single stage real model following control law. The corresponding control gains are readily adjustable in accordance with parameter changes to ensure asymptotic stability if the conditions for perfect model following are satisfied and stability in the sense of boundedness otherwise.
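
    The online identifier component can be sketched as textbook recursive least squares with exponential forgetting. This is a generic form of such an estimator under assumed parameters, not the flight experiment's implementation:

```python
import numpy as np

class RecursiveLeastSquares:
    """Online exponentially-weighted least-squares parameter
    identifier for a linear regression y = phi . theta."""
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)       # parameter estimate
        self.P = np.eye(n_params) * 1e3       # large initial covariance
        self.lam = lam                        # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        # Gain, innovation correction, and covariance update.
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta
```

Fed noiseless regressor/output pairs from a fixed linear system, the estimate converges to the true parameters within a handful of updates; the forgetting factor is what lets the identifier track the time-varying dynamics the abstract refers to.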

  4. Comparison of Model Prediction with Measurements of Galactic Background Noise at L-Band

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, William J.; Skou, Niels; Sobjaerg, S.

    2004-01-01

    The spectral window at L-band (1.413 GHz) is important for passive remote sensing of surface parameters such as soil moisture and sea surface salinity that are needed to understand the hydrological cycle and ocean circulation. Radiation from celestial (mostly galactic) sources is strong in this window and an accurate accounting for this background radiation is often needed for calibration. Modern radio astronomy measurements in this spectral window have been converted into a brightness temperature map of the celestial sky at L-band suitable for use in correcting passive measurements. This paper presents a comparison of the background radiation predicted by this map with measurements made with several modern L-band remote sensing radiometers. The agreement validates the map and the procedure for locating the source of down-welling radiation.

  5. Modeling irrigation-based climate change adaptation in agriculture: Model development and evaluation in Northeast China

    NASA Astrophysics Data System (ADS)

    Okada, Masashi; Iizumi, Toshichika; Sakurai, Gen; Hanasaki, Naota; Sakai, Toru; Okamoto, Katsuo; Yokozawa, Masayuki

    2015-09-01

    Replacing a rainfed cropping system with an irrigated one is widely assumed to be an effective measure for climate change adaptation. However, many agricultural impact studies have not necessarily accounted for the space-time variations in the water availability under changing climate and land use. Moreover, many hydrologic and agricultural assessments of climate change impacts are not fully integrated. To overcome this shortcoming, a tool that can simultaneously simulate the dynamic interactions between crop production and water resources in a watershed is essential. Here we propose the regional production and circulation coupled model (CROVER) by embedding the PRYSBI-2 (Process-based Regional Yield Simulator with Bayesian Inference version 2) large-area crop model into the global water resources model (called H08), and apply this model to the Songhua River watershed in Northeast China. The evaluation reveals that the model's performance in capturing the major characteristics of historical change in surface soil moisture, river discharge, actual crop evapotranspiration, and soybean yield relative to the reference data during the interval 1979-2010 is satisfactorily accurate. The simulation experiments using the model demonstrated that subregional irrigation management, such as designating the area to which irrigation is primarily applied, has measurable influences on the regional crop production in a drought year. This finding suggests that reassessing climate change risk in agriculture using this type of modeling is crucial to avoid overestimating the potential of irrigation-based adaptation.

  6. The World Organisation for Animal Health and epidemiological modelling: background and objectives.

    PubMed

    Willeberg, P; Grubbe, T; Weber, S; Forde-Folle, K; Dubé, C

    2011-08-01

    The papers in this issue of the Scientific and Technical Review (the Review) examine uses of modelling as a tool to support the formulation of disease control policy and applications of models for various aspects of animal disease management. Different issues in model development and several types of models are described. The experience with modelling during the 2001 foot and mouth disease outbreak in the United Kingdom underlines how models might be appropriately applied by decision-makers when preparing for and dealing with animal health emergencies. This paper outlines the involvement of the World Organisation for Animal Health (OIE) in epidemiological modelling since 2005, with emphasis on the outcome of the 2007 questionnaire survey of model usage among Member Countries, the subsequent OIE General Session resolution and the 2008 epidemiological modelling workshop at the Centers for Epidemiology and Animal Health in the United States. Many of the workshop presentations were developed into the papers that are presented in this issue of the Review.

  7. Estimating North American background ozone in U.S. surface air with two independent global models: Variability, uncertainties, and recommendations

    EPA Science Inventory

    Accurate estimates for North American background (NAB) ozone (O3) in surface air over the United States are needed for setting and implementing an attainable national O3 standard. These estimates rely on simulations with atmospheric chemistry-transport models that set North Amer...

  8. An investigation of the thermal comfort adaptive model in a tropical upland climate

    SciTech Connect

    Malama, A.; Jitkhajornwanich, K.; Sharples, S.; Pitts, A.C.

    1998-10-01

    The results of two thermal comfort surveys performed in Zambia, which has a tropical upland climate, are presented and analyzed with special reference to the adaptive model. The main forms of adaptation and adjustment analyzed are: clothing, skin moisture, activity level, and environmental controls. Results show that in the cool season the main methods of adaptation used by the subjects were clothing and environmental controls, while in the warm season only environmental controls were used. It proved difficult to establish the impact of the various levels of adaptivity on thermal comfort standards. It would be useful if the adaptive model could be factored into thermal comfort standards, producing adaptive standards that allow for differences in culture and climate across the globe.

  9. Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model

    NASA Technical Reports Server (NTRS)

    Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.

    2006-01-01

    An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics when in the engine differential mode. Based on this model an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation, rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.

  10. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Omega sub 0 + lambda = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega sub 0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  11. An adapted Coffey model for studying susceptibility losses in interacting magnetic nanoparticles

    PubMed Central

    Osaci, Mihaela

    2015-01-01

    Summary Background: Nanoparticles can be used in biomedical applications, such as contrast agents for magnetic resonance imaging, in tumor therapy or against cardiovascular diseases. Single-domain nanoparticles dissipate heat through susceptibility losses in two modes: Néel relaxation and Brownian relaxation. Results: Since a consistent theory for the Néel relaxation time that is applicable to systems of interacting nanoparticles has not yet been developed, we adapted the Coffey theoretical model for the Néel relaxation time in external magnetic fields in order to consider local dipolar magnetic fields. Then, we obtained the effective relaxation time. The effective relaxation time is further used for obtaining values of specific loss power (SLP) through linear response theory (LRT). A comparative analysis between our model and the discrete orientation model, more often used in literature, and a comparison with experimental data from literature have been carried out, in order to choose the optimal magnetic parameters of a nanoparticle system. Conclusion: In this way, we can study effects of the nanoparticle concentration on SLP in an acceptable range of frequencies and amplitudes of external magnetic fields for biomedical applications, especially for tumor therapy by magnetic hyperthermia. PMID:26665090

  12. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated through models' marginal likelihood and prior probability. The heavy computation burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome the computation burden of BMA, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods estimated repeatedly by TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the required model executions of BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
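
    Two of the estimators compared above can be reproduced on a one-dimensional conjugate-normal toy model where the marginal likelihood is known in closed form. The model, sample sizes, and datum below are illustrative assumptions, and TIE/SHME are omitted for brevity:

```python
import math, random

rng = random.Random(0)
y0 = 1.0                                    # single observation (assumed)

def lik(theta):
    """Likelihood N(y0 | theta, 1)."""
    return math.exp(-0.5 * (y0 - theta) ** 2) / math.sqrt(2 * math.pi)

# With prior theta ~ N(0, 1), the marginal of y0 is N(0, 2) exactly.
true_ml = math.exp(-y0 ** 2 / 4) / math.sqrt(4 * math.pi)

n = 50_000
# Arithmetic mean estimator: average likelihood over prior draws.
ame = sum(lik(rng.gauss(0.0, 1.0)) for _ in range(n)) / n
# Harmonic mean estimator: harmonic average over posterior draws;
# the posterior here is N(y0/2, 1/2) by conjugacy.
post = [rng.gauss(y0 / 2, math.sqrt(0.5)) for _ in range(n)]
hme = n / sum(1.0 / lik(th) for th in post)
```

Both estimators are consistent, but the harmonic mean has a heavy-tailed (here infinite-variance) weight distribution, which is the kind of instability that motivates the abstract's preference for TIE.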

  13. Dynamics of Dual Prism Adaptation: Relating Novel Experimental Results to a Minimalistic Neural Model

    PubMed Central

    Arévalo, Orlando; Bornschlegl, Mona A.; Eberhardt, Sven; Ernst, Udo; Pawelzik, Klaus; Fahle, Manfred

    2013-01-01

    In everyday life, humans interact with a dynamic environment often requiring rapid adaptation of visual perception and motor control. In particular, new visuo–motor mappings must be learned while old skills have to be kept, such that after adaptation, subjects may be able to quickly change between two different modes of generating movements (‘dual–adaptation’). A fundamental question is how the adaptation schedule determines the acquisition speed of new skills. Given a fixed number of movements in two different environments, will dual–adaptation be faster if switches (‘phase changes’) between the environments occur more frequently? We investigated the dynamics of dual–adaptation under different training schedules in a virtual pointing experiment. Surprisingly, we found that acquisition speed of dual visuo–motor mappings in a pointing task is largely independent of the number of phase changes. Next, we studied the neuronal mechanisms underlying this result and other key phenomena of dual–adaptation by relating model simulations to experimental data. We propose a simple and yet biologically plausible neural model consisting of a spatial mapping from an input layer to a pointing angle which is subjected to a global gain modulation. Adaptation is performed by reinforcement learning on the model parameters. Despite its simplicity, the model provides a unifying account for a broad range of experimental data: It quantitatively reproduced the learning rates in dual–adaptation experiments for both direct effect, i.e. adaptation to prisms, and aftereffect, i.e. behavior after removal of prisms, and their independence of the number of phase changes. Several other phenomena, e.g. initial pointing errors that are far smaller than the induced optical shift, were also captured. Moreover, the underlying mechanisms, a local adaptation of a spatial mapping and a global adaptation of a gain factor, explained asymmetric spatial transfer and generalization of

  14. Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees

    SciTech Connect

    Bremer, P; Pascucci, V; Hamann, B

    2008-12-08

    We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported; in particular, it is guaranteed to be of logarithmic height.

  15. Receptor modelling of both particle composition and size distribution from a background site in London, UK

    NASA Astrophysics Data System (ADS)

    Beddows, D. C. S.; Harrison, R. M.; Green, D. C.; Fuller, G. W.

    2015-09-01

    Positive matrix factorisation (PMF) analysis was applied to PM10 chemical composition and particle number size distribution (NSD) data measured at an urban background site (North Kensington) in London, UK, for the whole of 2011 and 2012. The PMF analyses for these 2 years revealed six and four factors respectively which described seven sources or aerosol types. These included nucleation, traffic, urban background, secondary, fuel oil, marine and non-exhaust/crustal sources. Urban background, secondary and traffic sources were identified by both the chemical composition and particle NSD analysis, but a nucleation source was identified only from the particle NSD data set. Analysis of the PM10 chemical composition data set revealed fuel oil, marine, non-exhaust traffic/crustal sources which were not identified from the NSD data. The two methods appear to be complementary, as the analysis of the PM10 chemical composition data is able to distinguish components contributing largely to particle mass, whereas the particle number size distribution data set - although limited to detecting sources of particles below the diameter upper limit of the SMPS (604 nm) - is more effective for identifying components making an appreciable contribution to particle number. Analysis was also conducted on the combined chemical composition and NSD data set, revealing five factors representing urban background, nucleation, secondary, aged marine and traffic sources. However, the combined analysis appears not to offer any additional power to discriminate sources above that of the aggregate of the two separate PMF analyses. Day-of-the-week and month-of-the-year associations of the factors proved consistent with their assignment to source categories, and bivariate polar plots which examined the wind directional and wind speed association of the different factors also proved highly consistent with their inferred sources. Source attribution according to the air mass back trajectory showed, as
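
    The factorisation step at the heart of PMF can be illustrated with plain multiplicative-update non-negative matrix factorisation. Real PMF tools additionally weight every observation by its measurement uncertainty, which this sketch omits; the sample/species dimensions and factor count are illustrative:

```python
import numpy as np

def pmf(X, n_factors, n_iter=500, seed=0):
    """Minimal non-negative factorisation X ~ G @ F via Lee-Seung
    multiplicative updates: G holds source contributions per sample,
    F holds source profiles per chemical species."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors))
    F = rng.random((n_factors, m))
    for _ in range(n_iter):
        # Updates preserve non-negativity and decrease ||X - GF||_F.
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F
```

On synthetic data mixed from two non-negative source profiles, the factorisation reconstructs the data matrix closely, which is the decomposition underlying the source identification described in the abstract.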

  16. Background and Derivation of ANS-5.4 Standard Fission Product Release Model

    SciTech Connect

    Beyer, Carl E.; Turnbull, Andrew J.

    2010-01-29

    This background report describes the technical basis for the newly proposed American Nuclear Society (ANS) 5.4 standard, Methods for Calculating the Fractional Release of Volatile Fission Products from Oxide Fuels. The proposed ANS 5.4 standard provides a methodology for determining the radioactive fission product releases from the fuel for use in assessing radiological consequences of postulated accidents that do not involve abrupt power transients. When coupled with isotopic yields, this method establishes the 'gap activity,' which is the inventory of volatile fission products that would be released from the fuel rod if the cladding were breached.

  17. A model for homeopathic remedy effects: low dose nanoparticles, allostatic cross-adaptation, and time-dependent sensitization in a complex adaptive system

    PubMed Central

    2012-01-01

    Background This paper proposes a novel model for homeopathic remedy action on living systems. Research indicates that homeopathic remedies (a) contain measurable source and silica nanoparticles heterogeneously dispersed in colloidal solution; (b) act by modulating biological function of the allostatic stress response network; (c) evoke biphasic actions on living systems via organism-dependent adaptive and endogenously amplified effects; (d) improve systemic resilience. Discussion The proposed active components of homeopathic remedies are nanoparticles of source substance in water-based colloidal solution, not bulk-form drugs. Nanoparticles have unique biological and physico-chemical properties, including increased catalytic reactivity, protein and DNA adsorption, bioavailability, dose-sparing, electromagnetic, and quantum effects different from bulk-form materials. Trituration and/or liquid succussions during classical remedy preparation create “top-down” nanostructures. Plants can biosynthesize remedy-templated silica nanostructures. Nanoparticles stimulate hormesis, a beneficial low-dose adaptive response. Homeopathic remedies prescribed in low doses spaced intermittently over time act as biological signals that stimulate the organism’s allostatic biological stress response network, evoking nonlinear modulatory, self-organizing change. Potential mechanisms include time-dependent sensitization (TDS), a type of adaptive plasticity/metaplasticity involving progressive amplification of host responses, which reverse direction and oscillate at physiological limits. To mobilize hormesis and TDS, the remedy must be appraised as a salient, but low level, novel threat, stressor, or homeostatic disruption for the whole organism. Silica nanoparticles adsorb remedy source and amplify effects. Properly-timed remedy dosing elicits disease-primed compensatory reversal in direction of maladaptive dynamics of the allostatic network, thus promoting resilience and recovery from

  18. Parametric recursive system identification and self-adaptive modeling of the human energy metabolism for adaptive control of fat weight.

    PubMed

    Őri, Zsolt P

    2016-08-03

    A mathematical model has been developed to facilitate indirect measurements of difficult-to-measure variables of the human energy metabolism on a daily basis. The model performs recursive system identification of the parameters of the metabolic model of the human energy metabolism using the law of conservation of energy and the principle of indirect calorimetry. Self-adaptive models of the utilized energy intake prediction, macronutrient oxidation rates, and daily body composition changes were created utilizing the Kalman filter and the nominal trajectory methods. The accuracy of the models was tested in a simulation study utilizing data from the Minnesota starvation and overfeeding study. With biweekly macronutrient intake measurements, the average prediction error of the utilized carbohydrate intake was -23.2 ± 53.8 kcal/day, fat intake was 11.0 ± 72.3 kcal/day, and protein intake was 3.7 ± 16.3 kcal/day. The fat and fat-free mass changes were estimated with an error of 0.44 ± 1.16 g/day for fat and -2.6 ± 64.98 g/day for fat-free mass. The daily metabolized macronutrient energy intake and/or daily macronutrient oxidation rate and the daily body composition change from directly measured serial data are optimally predicted with a self-adaptive model with a Kalman filter that uses recursive system identification.
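    The scalar predict/update cycle at the heart of such a Kalman-filter tracker can be sketched as follows. This is a generic illustration with invented fat-mass numbers and noise variances, not the author's metabolic model:

```python
import random

def kalman_update(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter.

    x, P -- prior state estimate and its variance
    z    -- new noisy measurement
    Q, R -- process and measurement noise variances
    """
    P = P + Q                      # predict under a random-walk state model
    K = P / (P + R)                # Kalman gain
    x = x + K * (z - x)            # blend prediction with the measurement
    P = (1.0 - K) * P              # posterior variance
    return x, P

random.seed(0)
true_fat = 20000.0                 # hypothetical fat mass in grams
x, P = 18000.0, 1.0e6              # deliberately poor prior
for day in range(60):
    true_fat -= 30.0               # slow daily fat loss
    z = true_fat + random.gauss(0.0, 500.0)   # noisy daily "measurement"
    x, P = kalman_update(x, P, z, Q=100.0**2, R=500.0**2)
```

    Despite the poor prior, the recursive update converges to within a few hundred grams of the simulated truth while the posterior variance shrinks.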

  19. Development and extension of an aggregated scale model: Part 1 - Background to ASMITA

    NASA Astrophysics Data System (ADS)

    Townend, Ian; Wang, Zheng Bing; Stive, Marcel; Zhou, Zeng

    2016-07-01

    Whilst much attention has been given to models that describe wave, tide and sediment transport processes in sufficient detail to determine the local changes in bed level over a relatively detailed representation of the bathymetry, far less attention has been given to models that consider the problem at a much larger scale (e.g. that of geomorphological elements such as a tidal flat and tidal channel). Such aggregated or lumped models tend not to represent the processes in detail but rather capture the behaviour at the scale of interest. One such model developed using the concept of an equilibrium concentration is the Aggregated Scale Morphological Interaction between Tidal basin and Adjacent coast (ASMITA). In this paper we provide some new insights into the concepts of equilibrium, and horizontal and vertical exchange that are key components of this modelling approach. In a companion paper, we summarise a range of developments that have been undertaken to extend the original model concept, to illustrate the flexibility and power of the conceptual framework. However, adding detail progressively moves the model in the direction of the more detailed process-based models and we give some consideration to the boundary between the two. Highlights: The concept of aggregating model scales is explored, and the basis of the ASMITA model is outlined in detail.

  20. Energetic Metabolism and Biochemical Adaptation: A Bird Flight Muscle Model

    ERIC Educational Resources Information Center

    Rioux, Pierre; Blier, Pierre U.

    2006-01-01

    The main objective of this class experiment is to measure the activity of two metabolic enzymes in crude extract from bird pectoral muscle and to relate the differences to their mode of locomotion and ecology. The laboratory is adapted to stimulate the interest of wildlife management students in biochemistry. The enzymatic activities of cytochrome…

  1. An Adaptive Model of Student Performance Using Inverse Bayes

    ERIC Educational Resources Information Center

    Lang, Charles

    2014-01-01

    This article proposes a coherent framework for the use of Inverse Bayesian estimation to summarize and make predictions about student behaviour in adaptive educational settings. The Inverse Bayes Filter utilizes Bayes theorem to estimate the relative impact of contextual factors and internal student factors on student performance using time series…

  2. Application of the Bifactor Model to Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Seo, Dong Gi

    2011-01-01

    Most computerized adaptive tests (CAT) have been studied under the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CAT. In addition, a number of psychological variables (e.g., quality of life, depression) can be conceptualized…

  3. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales; 2) multiresolution presentation of heterogeneity as well as of all other input and output variables; 3) an accurate, adaptive and efficient strategy; and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also describes subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step: the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across…

  4. Probabilistic choice models in health-state valuation research: background, theories, assumptions and applications.

    PubMed

    Arons, Alexander M M; Krabbe, Paul F M

    2013-02-01

    Interest is rising in measuring subjective health outcomes, such as treatment outcomes that are not directly quantifiable (functional disability, symptoms, complaints, side effects and health-related quality of life). Health economists in particular have applied probabilistic choice models in the area of health evaluation. They increasingly use discrete choice models based on random utility theory to derive values for healthcare goods or services. Recent attempts have been made to use discrete choice models as an alternative method to derive values for health states. In this article, various probabilistic choice models are described according to their underlying theory. A historical overview traces their development and applications in diverse fields. The discussion highlights some theoretical and technical aspects of the choice models and their similarity and dissimilarity. The objective of the article is to elucidate the position of each model and their applications for health-state valuation.

  5. Simulation of the electrically stimulated cochlear neuron: modeling adaptation to trains of electric pulses.

    PubMed

    Woo, Jihwan; Miller, Charles A; Abbas, Paul J

    2009-05-01

    The Hodgkin-Huxley (HH) model does not simulate the significant changes in auditory nerve fiber (ANF) responses to sustained stimulation that are associated with neural adaptation. Given that the electric stimuli used by cochlear prostheses can result in adapted responses, a computational model incorporating an adaptation process is warranted if such models are to remain relevant and contribute to related research efforts. In this paper, we describe the development of a modified HH single-node model that includes potassium ion (K(+)) concentration changes in response to each action potential. This activity-related change results in an altered resting potential, and hence, excitability. Our implementation of K(+)-related changes uses a phenomenological approach based upon K(+) accumulation and dissipation time constants. Modeled spike times were computed using repeated presentations of modeled pulse-train stimuli. Spike-rate adaptation was characterized by rate decrements and time constants and compared against ANF data from animal experiments. Responses to low-rate (250 pulse/s) and high-rate (5000 pulse/s) trains were evaluated, and the novel adaptation model results were compared against model results obtained without the adaptation mechanism. In addition to spike-rate changes, jitter and spike intervals were evaluated and found to change with the addition of modeled adaptation. These results provide one means of incorporating a heretofore neglected (although important) aspect of ANF responses to electric stimuli. Future studies could include evaluation of alternative versions of the adaptation model elements and broadening the model to simulate a complete axon, and eventually, a spatially realistic model of the electrically stimulated nerve within extracochlear tissues.
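    The accumulation/dissipation idea can be illustrated with a toy discrete-time sketch. The increment, threshold, and time-constant values below are invented for illustration, not the paper's fitted parameters, and a simple excitability threshold stands in for the altered resting potential:

```python
import math

def simulate_adaptation(pulse_times, k_inc=1.0, tau_diss=0.1, k_max=3.0):
    """Toy spike-rate adaptation via K+ accumulation/dissipation.

    Each spike adds k_inc to an accumulated-potassium variable K, which
    dissipates exponentially with time constant tau_diss (s). A pulse
    elicits a spike only while K is below k_max, a crude stand-in for the
    depolarized resting potential and reduced excitability.
    """
    K, t_last, spikes = 0.0, 0.0, []
    for t in pulse_times:
        K *= math.exp(-(t - t_last) / tau_diss)   # dissipation since last pulse
        t_last = t
        if K < k_max:                             # fiber still excitable
            spikes.append(t)
            K += k_inc                            # accumulation per spike
    return spikes

# 5000 pulse/s train: strong adaptation, few spikes after the onset burst
fast = simulate_adaptation([i / 5000.0 for i in range(50)])
# 250 pulse/s train: inter-pulse dissipation keeps the fiber responsive
slow = simulate_adaptation([i / 250.0 for i in range(50)])
```

    As in the animal data the model was compared against, the high-rate train shows a much stronger rate decrement than the low-rate train.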

  6. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  7. Cosmic microwave background anisotropies in cold dark matter models with cosmological constant: The intermediate versus large angular scales

    NASA Technical Reports Server (NTRS)

    Stompor, Radoslaw; Gorski, Krzysztof M.

    1994-01-01

    We obtain predictions for cosmic microwave background anisotropies at angular scales near 1 deg in the context of cold dark matter models with a nonzero cosmological constant, normalized to the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) detection. The results are compared to those computed in the matter-dominated models. We show that the coherence length of the Cosmic Microwave Background (CMB) anisotropy is almost insensitive to cosmological parameters, and the rms amplitude of the anisotropy increases moderately with decreasing total matter density, while being most sensitive to the baryon abundance. We apply these results in the statistical analysis of the published data from the UCSB South Pole (SP) experiment (Gaier et al. 1992; Schuster et al. 1993). We reject most of the Cold Dark Matter (CDM)-Lambda models at the 95% confidence level when both SP scans are simulated together (although the combined data set renders less stringent limits than the Gaier et al. data alone). However, the Schuster et al. data considered alone as well as the results of some other recent experiments (MAX, MSAM, Saskatoon), suggest that typical temperature fluctuations on degree scales may be larger than is indicated by the Gaier et al. scan. If so, CDM-Lambda models may indeed provide, from a point of view of CMB anisotropies, an acceptable alternative to flat CDM models.

  8. Dynamic modeling, property investigation, and adaptive controller design of serial robotic manipulators modeled with structural compliance

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her

    1990-01-01

    Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first and second order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, which is defined as inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller, which is also built for general serial robot arms, is adaptive in nature and can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking properties. The relation between manipulator motion tracking capability and convergence of parameter estimation properties is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.

  9. How Career Variety Promotes the Adaptability of Managers: A Theoretical Model

    ERIC Educational Resources Information Center

    Karaevli, Ayse; Hall, Douglas T. (Tim)

    2006-01-01

    This paper presents a theoretical model showing how managerial adaptability develops from career variety over the span of the person's career. By building on the literature of career theory, adult learning and development, and career adjustment, we offer a new conceptualization of managerial adaptability by identifying its behavioral, cognitive,…

  10. An Algebraic Model of Adaptive Optics for Continuous-Wave Thermal Blooming.

    DTIC Science & Technology

    1979-01-01

    blooming. The aberrations modeled generally include those applied by an adaptive optics system to compensate the naturally occurring ones. For the...results when applied to thermal blooming. However, the analysis suggests novel remedies that will tend to optimize the corrections made, thus better realizing the full potential of adaptive optics. (Author)

  11. A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification

    ERIC Educational Resources Information Center

    Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi

    2012-01-01

    This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…

  12. THE HYDROCARBON SPILL SCREENING MODEL (HSSM), VOLUME 2: THEORETICAL BACKGROUND AND SOURCE CODES

    EPA Science Inventory

    A screening model for subsurface release of a nonaqueous phase liquid which is less dense than water (LNAPL) is presented. The model conceptualizes the release as consisting of 1) vertical transport from near the surface to the capillary fringe, 2) radial spreading of an LNAPL l...

  13. AN OVERVIEW OF THE LAKE MICHIGAN MASS BALANCE MODELING PROJECT: BACKGROUND, ACCOMPLISHMENTS, AND FUTURE WORK

    EPA Science Inventory

    Modeling associated with the Lake Michigan Mass Balance Project (LMMBP) is being conducted using WASP-type water quality models to gain a better understanding of the ecosystem transport and fate of polychlorinated biphenyls (PCBs), atrazine, mercury, and trans-nonachlor in Lake M...

  14. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  15. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
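    The background-rejection-plus-thresholding idea can be sketched in one dimension. The estimator and the constants (alpha, k_hi, k_lo) below are simplified assumptions, not the authors' exact algorithm; in particular, background and noise updates are suppressed at detected pixels so that moving objects do not bias the estimate:

```python
def detect_dim_targets(frames, alpha=0.05, k_hi=5.0, k_lo=2.5):
    """Per-pixel running background with a noise-adaptive double threshold.

    B is an exponential moving average of each pixel, S of its absolute
    residual. A pixel is flagged if its residual exceeds k_hi*S, or
    exceeds k_lo*S while adjacent to a k_hi detection (double threshold).
    """
    n = len(frames[0])
    B = list(frames[0])            # initial background = first frame
    S = [1.0] * n                  # per-pixel noise-level estimate
    masks = []
    for frame in frames:
        resid = [frame[i] - B[i] for i in range(n)]
        strong = [abs(r) > k_hi * s for r, s in zip(resid, S)]
        weak = [abs(r) > k_lo * s for r, s in zip(resid, S)]
        mask = [strong[i] or (weak[i] and (
                (i > 0 and strong[i - 1]) or (i < n - 1 and strong[i + 1])))
                for i in range(n)]
        for i in range(n):
            if not mask[i]:        # update only where no target is present
                B[i] += alpha * resid[i]
                S[i] += alpha * (abs(resid[i]) - S[i])
        masks.append(mask)
    return masks

# toy sequence: flat background of 10 with a dim two-pixel target entering
frames = [[10.0] * 8 for _ in range(6)]
for t in range(3, 6):
    frames[t][4] += 8.0            # hot target pixel
    frames[t][5] += 3.0            # dimmer neighbouring pixel
masks = detect_dim_targets(frames)
```

    The dim neighbouring pixel is recovered by the low threshold only because it touches a high-threshold detection, which is what keeps the false-alarm rate down on isolated noise spikes.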

  16. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    PubMed

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of the model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal…
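    The difference between adaptation alone and adaptation plus anticipation can be illustrated with a toy linear phase-correction tapper. This sketch is not ADAM itself; the correction gain alpha and the linear tempo extrapolation rule are assumed for illustration:

```python
def simulate_tapping(intervals, alpha=0.5, anticipate=False):
    """Toy paced-finger-tapping model.

    intervals: successive inter-onset intervals (s) of the pacing sequence.
    Adaptation: each tap interval equals the last observed pacing interval,
    corrected by -alpha times the current asynchrony (error correction).
    Anticipation: additionally extrapolate the tempo change linearly from
    the last two observed intervals.
    """
    tap = tone = 0.0
    prev_ioi = prev2 = intervals[0]        # tapper's initial tempo belief
    asynchronies = []
    for ioi in intervals:                  # ioi: actual interval to next tone
        pred = prev_ioi + (prev_ioi - prev2 if anticipate else 0.0)
        tap += pred - alpha * (tap - tone)
        tone += ioi
        asynchronies.append(tap - tone)
        prev2, prev_ioi = prev_ioi, ioi    # shift the observation window
    return asynchronies

# accelerating sequence: inter-onset interval shrinks from 600 ms to 400 ms
seq = [0.6 - 0.01 * k for k in range(21)]
err_adapt = sum(abs(a) for a in simulate_tapping(seq))
err_antic = sum(abs(a) for a in simulate_tapping(seq, anticipate=True))
```

    During the tempo ramp, adaptation alone settles into a persistent lag (asynchrony ~ rate of change / alpha), while the extrapolation term drives the asynchrony toward zero, mirroring the qualitative finding of the study.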

  17. Ensuring congruency in multiscale modeling: towards linking agent based and continuum biomechanical models of arterial adaptation.

    PubMed

    Hayenga, Heather N; Thorne, Bryan C; Peirce, Shayn M; Humphrey, Jay D

    2011-11-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue-level manifestations of cellular level mechanisms. Continuum-based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent-based models are well suited for representing biological processes at a cellular level, but not for describing tissue-level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent-based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent-based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent-based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations.

  18. Demand modelling of passenger air travel: An analysis and extension. Volume 1: Background and summary

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.

    1978-01-01

    The framework for a model of travel demand which will be useful in predicting the total market for air travel between two cities is discussed. Variables to be used in determining the need for air transportation where none currently exists and the effect of changes in system characteristics on attracting latent demand are identified. Existing models are examined in order to provide insight into their strong points and shortcomings. Much of the existing behavioral research in travel demand is incorporated to allow the inclusion of non-economic factors, such as convenience. The model developed is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach both pedagogically and mathematically is discussed.

  19. Forecasting societies' adaptive capacities through a demographic metabolism model

    NASA Astrophysics Data System (ADS)

    Lutz, Wolfgang; Muttarak, Raya

    2017-03-01

    In seeking to understand how future societies will be affected by climate change we cannot simply assume they will be identical to those of today, because climate and societies are both dynamic. Here we propose that the concept of demographic metabolism and the associated methods of multi-dimensional population projections provide an effective analytical toolbox to forecast important aspects of societal change that affect adaptive capacity. We present an example of how the changing educational composition of future populations can influence societies' adaptive capacity. Multi-dimensional population projections form the human core of the Shared Socioeconomic Pathways scenarios, and knowledge and analytical tools from demography have great value in assessing the likely implications of climate change on future human well-being.

  20. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language.

  1. Contributions to the Science Modeling Requirements Document; Earth Limb & Auroral Backgrounds

    DTIC Science & Technology

    2007-11-02

    ...measurements of atmospheric parameters (Hedin, 1983). MSIS-83 modeled magnetic storm variations in terms of the 3-hr ap index and extended its region of...stations. NOAA/SEL issues provisional estimates of the planetary range indices Ap and Kp. These indices derive from real-time measurements of...

  2. Modeling cognitive effects on visual search for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Ruda, Harald; Hoffman, James

    1998-07-01

    To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.
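    The combination of a bottom-up saliency term with a top-down horizon bias can be sketched as a normalized product map. The local-contrast measure and the Gaussian horizon weighting below are hypothetical choices, not the authors' fitted model:

```python
import math

def fixation_map(image, horizon_row, sigma=2.0):
    """Fixation probability ~ local contrast x horizon-proximity weight.

    image: 2D list of intensities; horizon_row: row index of the horizon.
    Returns a map normalized to sum to 1 (a probability distribution).
    """
    rows, cols = len(image), len(image[0])

    def contrast(r, c):
        # local contrast: max absolute difference to the 4-neighbours
        v = image[r][c]
        nbrs = [image[rr][cc] for rr, cc in
                ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                if 0 <= rr < rows and 0 <= cc < cols]
        return max(abs(v - u) for u in nbrs)

    raw = [[contrast(r, c) * math.exp(-((r - horizon_row) / sigma) ** 2 / 2.0)
            for c in range(cols)] for r in range(rows)]
    total = sum(sum(row) for row in raw) or 1.0
    return [[v / total for v in row] for row in raw]

# toy scene: two equally contrasting blobs, one at the horizon, one far below
img = [[0.0] * 7 for _ in range(7)]
img[2][2] = 1.0      # blob on the horizon (row 2)
img[6][5] = 1.0      # equally salient blob far from the horizon
pmap = fixation_map(img, horizon_row=2)
```

    Although both blobs are equally salient bottom-up, the top-down bias assigns the horizon blob a far higher fixation probability, which is the effect the eye-tracking validation exercised.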

  3. Adaptive Detection and Parameter Estimation for Multidimensional Signal Models

    DTIC Science & Technology

    1989-04-19

    expected value of the non-adaptive parameter array estimator directly from Equation (5-1), using the fact that .zP = dppH = d We obtain EbI = (e-H E eI 1...depend only on the dimensional parameters of the problem. We will derive these properties shortly, but first we wish to express the conditional pdf

  4. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.
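    For contrast with the hybrid direct-indirect scheme described above, the underlying direct model-reference adaptive controller can be sketched in the scalar case where exact plant-model matching is possible. All gains, the reference signal, and the simulation constants are illustrative assumptions:

```python
def mrac_sim(a=1.0, b=1.0, am=-2.0, bm=2.0, gamma=5.0, dt=0.001, T=10.0):
    """Scalar model-reference adaptive control (Lyapunov-rule) sketch.

    Plant: xdot = a*x + b*u with a, b unknown to the controller (b > 0).
    Reference model: xmdot = am*xm + bm*r. Control u = kx*x + kr*r, with
    gains adapted on the state-tracking error e = x - xm. Matching is
    exact here (ideal kx = (am - a)/b, kr = bm/b), unlike the mismatch
    case analyzed in the paper.
    """
    x = xm = 0.0
    kx = kr = 0.0
    for i in range(int(T / dt)):
        r = 1.0 if (i * dt) % 4.0 < 2.0 else -1.0   # square-wave reference
        e = x - xm
        u = kx * x + kr * r
        x += dt * (a * x + b * u)          # forward-Euler plant step
        xm += dt * (am * xm + bm * r)      # reference model step
        kx -= dt * gamma * e * x           # adaptive laws (sign(b) = +1)
        kr -= dt * gamma * e * r
    return x, xm, kx, kr

x, xm, kx, kr = mrac_sim()
```

    With the persistently exciting square-wave reference, the feedback gains drift toward the ideal matching values (kx = -3, kr = 2 here) and the state-tracking error shrinks; the paper's contribution concerns what to do when no such matching gains exist.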

  5. A model of a rapidly-adapting mechanosensitive current generated by a dorsal root ganglion neuron.

    PubMed

    Fujita, Kazuhisa

    2014-06-01

    I propose a model that replicates the kinetics of a rapidly-adapting mechanosensitive current generated by a dorsal root ganglion (DRG) neuron. When the DRG neuron is mechanically stimulated, an ionic current called a mechanosensitive current flows across its membrane. The kinetics of mechanosensitive currents are broadly classified into three types: rapidly adapting (RA), intermediately adapting, and slowly adapting. The kinetics of RA mechanosensitive currents are particularly intriguing. An RA mechanosensitive current is initially evoked by and rapidly adapts to a mechanical stimulus, but can also respond to an additional stimulus. Furthermore, an antecedent stimulus immediately followed by an additional stimulus suppresses reactivation of the current. The features of the kinetics depend on the characteristics of the mechanotransducer channels. Physiologists have proposed three factors associated with mechanotransducer channels, invoking activation, adaptation, and inactivation. In the present study, these factors are incorporated into an RA mechanosensitive current model. Computer simulations verified that the proposed model replicates the kinetics of real RA DRG mechanosensitive currents. The mechanosensitive current elicited by successive pulse-form stimuli was predominantly desensitized by the inactivating factor. Both the inactivating and adapting factors were involved in desensitization of a double-decker stimulus. The reduction of the sensitivity with decreasing velocity of the stimulus was mainly controlled by the adapting factor.
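    The interplay of activation, adaptation, and inactivation can be sketched with a toy first-order model. The gating rules and time constants below are invented for illustration, not the paper's equations:

```python
def ra_current(stimulus, dt=0.001, tau_adapt=0.02, tau_inact=0.05, tau_rec=0.2):
    """Phenomenological rapidly-adapting mechanosensitive current sketch.

    Activation follows the stimulus displacement minus a slowly adapting
    set-point xa; an inactivation gate h closes while the channel is
    activated and recovers slowly at rest, which suppresses reactivation
    by a quickly repeated stimulus.
    """
    xa, h = 0.0, 1.0
    out = []
    for x in stimulus:
        drive = max(0.0, x - xa)             # activation beyond the set-point
        out.append(drive * h)                # current = activation x availability
        xa += dt / tau_adapt * (x - xa)      # adaptation: set-point tracks x
        if drive > 0.0:
            h += dt / tau_inact * (0.0 - h)  # inactivation while activated
        else:
            h += dt / tau_rec * (1.0 - h)    # slow recovery at rest
    return out

# step stimulus: a sharp onset transient followed by rapid adaptation
step = [0.0] * 50 + [1.0] * 200
I = ra_current(step)
```

    The current peaks at stimulus onset and then collapses as the set-point catches up with the displacement, reproducing the rapid-adaptation signature; the slow recovery of h is what would suppress a closely spaced second response.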

  6. Parent Management Training-Oregon Model (PMTO™) in Mexico City: Integrating Cultural Adaptation Activities in an Implementation Model.

    PubMed

    Baumann, Ana A; Domenech Rodríguez, Melanie M; Amador, Nancy G; Forgatch, Marion S; Parra-Cardona, J Rubén

    2014-03-01

    This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States.

  7. Parent Management Training-Oregon Model (PMTO™) in Mexico City: Integrating Cultural Adaptation Activities in an Implementation Model

    PubMed Central

    Baumann, Ana A.; Domenech Rodríguez, Melanie M.; Amador, Nancy G.; Forgatch, Marion S.; Parra-Cardona, J. Rubén

    2015-01-01

    This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States. PMID:26052184

  8. Extended adiabatic blast waves and a model of the soft X-ray background

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Anderson, P. R.

    1982-01-01

It has been suggested that much of the soft X-ray background observed in X-ray astronomy might arise from the solar system being located inside a very large supernova blast wave propagating in the hot, low-density component of the interstellar medium (ISM). An investigation is conducted to study this possibility. An analytic approximation is presented for the nonsimilar time evolution of the dynamic structure of an adiabatic blast wave generated by a point explosion in a homogeneous ambient medium. A scheme is provided for evaluating the electron-temperature distribution for the evolving structure, and a procedure is presented for following the state of a given fluid element through the evolving dynamical and thermal structures. The results of the investigation show that, if the solar system were located within a blast wave, the Wisconsin soft X-ray rocket payload would measure the B and C band count rates that it does measure, provided conditions correspond to the values calculated in the investigation.

  9. Modeling time to detection for observers searching for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Ruda, Harald; Snorrason, Magnus

    1999-07-01

The purpose of this work is to provide a model for the average time to detection for observers searching for targets in photo-realistic images of cluttered scenes. The proposed model builds on previous work that constructs a fixation probability map (FPM) from the image. This FPM is constructed from bottom-up features, such as local contrast, but also includes top-down cognitive effects, such as the location of the horizon. The FPM is used to generate a set of conspicuous points that are likely to be fixation points, along with initial probabilities of fixation. These points are used to assemble fixation sequences. The order of these fixations is clearly crucial for determining the time to fixation. Recognizing that different observers (unconsciously) choose different orderings of the conspicuous points, the present model performs a Monte-Carlo simulation to find the probability of fixating each conspicuous point at each position in the sequence. The three main assumptions of this model are: the observer can only attend to the area of the image being fixated, each fixation has an approximately constant duration, and there is a short term memory for the locations of previous fixation points. This fixation point memory is an essential feature of the model, and the memory decay constant is a parameter of the model. Simulations show that the average time to fixation for a given conspicuous point in the image depends on the distribution of other conspicuous points. This is true even if the initial probability of fixation for a given point is the same across distributions, and only the initial probability of fixation of the other points is distributed differently.
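The Monte-Carlo procedure described above can be sketched minimally as follows. This is a hedged illustration, not the authors' code: the point probabilities, the multiplicative inhibition-of-return, and the geometric memory decay below are assumptions standing in for the paper's FPM machinery.

```python
import random

def mean_time_to_fixation(p_init, decay=0.5, n_runs=5000, seed=1):
    """Monte-Carlo estimate of the mean fixation index (a proxy for time
    to detection if each fixation has constant duration) for each
    conspicuous point, with a decaying memory of previous fixations."""
    rng = random.Random(seed)
    n = len(p_init)
    first = [0.0] * n
    cap = 4 * n                        # safety cap on sequence length
    for _ in range(n_runs):
        memory = [0.0] * n             # 1.0 = just fixated, fades toward 0
        seen = [False] * n
        for step in range(1, cap + 1):
            # fixation weights: initial probability suppressed by memory
            w = [p * (1.0 - m) for p, m in zip(p_init, memory)]
            i = rng.choices(range(n), weights=w)[0]
            memory = [m * decay for m in memory]   # memory decay constant
            memory[i] = 1.0            # suppress immediate refixation
            if not seen[i]:
                seen[i] = True
                first[i] += step
            if all(seen):
                break
        for i in range(n):             # charge the cap to unseen points (rare)
            if not seen[i]:
                first[i] += cap
    return [t / n_runs for t in first]
```

Running this with unequal initial probabilities reproduces the qualitative claim: more conspicuous points are fixated earlier on average, and each point's mean fixation time depends on the weights of the competing points.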

  10. Testing non-standard inflationary models with the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Landau, Susana J.

    2015-03-01

The emergence of the seeds of cosmic structure from an isotropic and homogeneous universe has not been clearly explained by the standard version of inflationary models. We review a proposal that attempts to deal with this problem by introducing "the self induced collapse hypothesis". As a consequence of this modification of standard inflationary scenarios, the predicted primordial power spectrum and the CMB spectrum are modified. We show the results of statistical analyses comparing the predictions of these models with recent CMB observations and the matter power spectrum from galaxy surveys.

  11. On the role of model-based monitoring for adaptive planning under uncertainty

    NASA Astrophysics Data System (ADS)

    Raso, Luciano; Kwakkel, Jan; Timmermans, Jos; Haasnoot, Mariolijn

    2016-04-01

, triggered by the challenge of uncertainty in operational control, may offer solutions from which monitoring for adaptive planning can benefit. Specifically: (i) in control, observations are incorporated into the model through data assimilation, updating the present state, boundary conditions, and parameters based on new observations, diminishing the shadow of the past; (ii) adaptive control is a way to modify the characteristics of the internal model, incorporating new knowledge of the system and countervailing the inhibition of learning; and (iii) in closed-loop control, a continuous system update equips the controller with "inherent robustness", i.e., the capacity to adapt to new conditions even when these were not initially considered. We aim to explore how inherent robustness addresses the challenge of surprise. Innovations in model-based control might help to improve and adapt the models used to support adaptive delta management in light of new information (reducing uncertainty). Moreover, this would offer a starting point for using these models not only in the design of adaptive plans, but also as part of the monitoring. The proposed research requires multidisciplinary cooperation between control theory, the policy sciences, and integrated assessment modeling.

  12. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-02-06

A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively.
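To make the construction concrete, here is a hedged sketch, not the authors' implementation: the classical play (backlash) operator stands in for the generalized play operator, and a bank of such operators at different thresholds is combined linearly with LMS weight adaptation, as the abstract describes.

```python
import numpy as np

def play_operator(x, r, y0=0.0):
    """Classical play (backlash) operator with threshold r."""
    y = np.empty_like(x)
    prev = y0
    for k, xk in enumerate(x):
        prev = max(xk - r, min(xk + r, prev))
        y[k] = prev
    return y

def fit_play_lms(x, d, thresholds, mu=0.05, epochs=5):
    """LMS adaptation of the weights combining play-operator outputs.

    x : input signal, d : desired (hysteretic) output,
    thresholds : play-operator thresholds forming the nonlinear network.
    """
    Phi = np.stack([play_operator(x, r) for r in thresholds])  # (m, N)
    w = np.zeros(len(thresholds))
    for _ in range(epochs):
        for k in range(x.size):
            phi = Phi[:, k]
            e = d[k] - w @ phi          # instantaneous prediction error
            w += mu * e * phi           # LMS weight update
    return w
```

Fitting a target that is itself a weighted sum of play operators recovers the weights closely; a real rate-dependent model would also feed delayed input samples into the network, which is omitted here for brevity.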

  13. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.

  14. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...

  15. MGGPOD: a Monte Carlo Suite for Modeling Instrumental Line and Continuum Backgrounds in Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Weidenspointner, G.; Harris, M. J.; Sturner, S.; Teegarden, B. J.; Ferguson, C.

    2004-01-01

Intense and complex instrumental backgrounds, against which the much smaller signals from celestial sources have to be discerned, are a notorious problem for low and intermediate energy gamma-ray astronomy (approximately 50 keV - 10 MeV). Therefore a detailed qualitative and quantitative understanding of instrumental line and continuum backgrounds is crucial for most stages of gamma-ray astronomy missions, ranging from the design and development of new instrumentation through performance prediction to data reduction. We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package, to simulate ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds. The MGGPOD package and documentation are publicly available for download. We demonstrate the capabilities of the MGGPOD suite by modeling high resolution gamma-ray spectra recorded by the Transient Gamma-Ray Spectrometer (TGRS) on board Wind during 1995. The TGRS is a Ge spectrometer operating in the 40 keV to 8 MeV range. Due to its fine energy resolution, these spectra reveal the complex instrumental background in formidable detail, particularly the many prompt and delayed gamma-ray lines. We evaluate the successes and failures of the MGGPOD package in reproducing TGRS data, and provide identifications for the numerous instrumental lines.

  16. MODELING EXTRAGALACTIC FOREGROUNDS AND SECONDARIES FOR UNBIASED ESTIMATION OF COSMOLOGICAL PARAMETERS FROM PRIMARY COSMIC MICROWAVE BACKGROUND ANISOTROPY

    SciTech Connect

    Millea, M.; Knox, L.; Dore, O.; Dudley, J.; Holder, G.; Shaw, L.; Song, Y.-S.; Zahn, O.

    2012-02-10

Using the latest physical modeling and constrained by the most recent data, we develop a phenomenological parameterized model of the contributions to intensity and polarization maps at millimeter wavelengths from external galaxies and Sunyaev-Zeldovich effects. We find such modeling to be necessary for estimation of cosmological parameters from Planck data. For example, ignoring the clustering of the infrared background would result in a bias in n_s of 7σ in the context of an eight-parameter cosmological model. We show that the simultaneous marginalization over a full foreground model can eliminate such biases, while increasing the statistical uncertainty in cosmological parameters by less than 20%. The small increases in uncertainty can be significantly reduced with the inclusion of higher-resolution ground-based data. The multi-frequency analysis we employ involves modeling 46 total power spectra and marginalization over 17 foreground parameters. We show that we can also reduce the data to a best estimate of the cosmic microwave background power spectra, with just two principal components (with constrained amplitudes) describing residual foreground contamination.

  17. DATA FOR ENVIRONMENTAL MODELING (D4EM): BACKGROUND AND EXAMPLE APPLICATIONS OF DATA AUTOMATION

    EPA Science Inventory

    Data is a basic requirement for most modeling applications. Collecting data is expensive and time consuming. High speed internet connections and growing databases of online environmental data go a long way to overcoming issues of data scarcity. Among the obstacles still remaining...

  18. Background studies in the modeling of extrusion cooking processes for soy flour doughs.

    PubMed

    Luxenburg, L A; Baird, D G; Joseph, E G

    1985-03-01

Soy flour is processed in single screw extruders to yield textured vegetable protein used as meat extenders and replacements. The fundamental processes which take place in extrusion cooking of soy doughs are poorly understood from an engineering point of view. This paper is concerned with gaining an understanding of extrusion cooking in order to develop a quantitative model of this process in single screw extruders. Rheological and thermodynamic data are obtained over a range of conditions found in the extrusion process, and these data are used both in understanding and in modeling the extrusion cooking process. In particular, differential scanning calorimetry (DSC) has been used to determine changes in the enthalpy of soy doughs at various moisture levels. It is observed that in general the enthalpy changes are small (e.g. of the order of 1.0 cal/g) and endothermic. However, if the dough is subjected to shear and thermal history, then the enthalpy changes become significant. The viscosity of the dough exhibits a mild increase in the temperature range where the endotherms are observed in the DSC data. Based on the results of this study, it is concluded that cooking does not involve cross-linking of the proteins but more likely a conformation change of the molecules. Also, it is found that for the 40% to 60% added moisture soy flour dough systems, the rheological properties can be modeled using a Bingham model modified with a shear rate dependent viscosity.

19. Cosmic-Ray Background Flux Model Based on a Gamma-Ray Large Area Space Telescope Balloon Flight Engineering

    NASA Technical Reports Server (NTRS)

    2002-01-01

Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (≈10 MeV to 100 GeV) and the abundant components (protons, alphas, e⁻, e⁺, μ⁻, μ⁺ and γ). It is expressed in analytic functions in which modulations due to the solar activity and the Earth geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.

  20. Solid modelling for the manipulative robot arm (power) and adaptive vision control for space station missions

    NASA Technical Reports Server (NTRS)

    Harrand, V.; Choudry, A.

    1987-01-01

The structure of a flexible arm derived from concatenation of Stewart-Table-based links was studied. Solid modeling provides not only a realistic simulation, but is also essential for studying vision algorithms. These algorithms could be used for the adaptive control of the arm, drawing on such well-known techniques as shape from shading, edge detection, and orientation estimation. Details of solid modeling and its relation to vision-based adaptive control are discussed.

  1. The software package CAOS 7.0: enhanced numerical modelling of astronomical adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Carbillet, Marcel; La Camera, Andrea; Folcher, Jean-Pierre; Perruchon-Monge, Ulysse; Sy, Adama

    2016-07-01

    The Software Package CAOS (acronym for Code for Adaptive Optics Systems) is a modular scientific package performing end-to-end numerical modelling of astronomical adaptive optics (AO) systems. It is IDL-based and developed within the eponymous CAOS Problem-Solving Environment, recently completely re-organized. In this paper we present version 7.0 of the Software Package CAOS, containing a number of enhancements and new modules, in particular for wide-field AO systems modelling.

  2. Modeling of oropharyngeal articulatory adaptation to compensate for the acoustic effects of nasalization.

    PubMed

    Rong, Panying; Kuehn, David P; Shosted, Ryan K

    2016-09-01

Hypernasality is one of the most detrimental speech disturbances that lead to declines in speech intelligibility. Velopharyngeal inadequacy, which is associated with anatomic defects such as cleft palate or neuromuscular disorders that affect velopharyngeal function, is the primary cause of hypernasality. A simulation study by Rong and Kuehn [J. Speech Lang. Hear. Res. 55(5), 1438-1448 (2012)] demonstrated that properly adjusted oropharyngeal articulation can reduce nasality for vowels synthesized with an articulatory model [Mermelstein, J. Acoust. Soc. Am. 53(4), 1070-1082 (1973)]. In this study, a speaker-adaptive articulatory model was developed to simulate speaker-customized oropharyngeal articulatory adaptation to compensate for the acoustic effects of nasalization on /a/, /i/, and /u/. The results demonstrated that (1) the oropharyngeal articulatory adaptation effectively counteracted the effects of nasalization on the second lowest formant frequency (F2) and partially compensated for the effects of nasalization on vowel space (e.g., shifting and constriction of vowel space) and (2) the articulatory adaptation strategies generated by the speaker-adaptive model might be more efficacious for counteracting the acoustic effects of nasalization compared to the adaptation strategies generated by the standard articulatory model in Rong and Kuehn. The findings of this study indicated the potential of using oropharyngeal articulatory adaptation as a means to correct maladaptive articulatory behaviors and to reduce nasality.

  3. Acoustic model adaptation for ortolan bunting (Emberiza hortulana L.) song-type classification.

    PubMed

    Tao, Jidong; Johnson, Michael T; Osiejuk, Tomasz S

    2008-03-01

    Automatic systems for vocalization classification often require fairly large amounts of data on which to train models. However, animal vocalization data collection and transcription is a difficult and time-consuming task, so that it is expensive to create large data sets. One natural solution to this problem is the use of acoustic adaptation methods. Such methods, common in human speech recognition systems, create initial models trained on speaker independent data, then use small amounts of adaptation data to build individual-specific models. Since, as in human speech, individual vocal variability is a significant source of variation in bioacoustic data, acoustic model adaptation is naturally suited to classification in this domain as well. To demonstrate and evaluate the effectiveness of this approach, this paper presents the application of maximum likelihood linear regression adaptation to ortolan bunting (Emberiza hortulana L.) song-type classification. Classification accuracies for the adapted system are computed as a function of the amount of adaptation data and compared to caller-independent and caller-dependent systems. The experimental results indicate that given the same amount of data, supervised adaptation significantly outperforms both caller-independent and caller-dependent systems.

  4. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.

  5. Wavelet detection of weak far-magnetic signal based on adaptive ARMA model threshold

    NASA Astrophysics Data System (ADS)

    Zhang, Ning; Lin, Chun-sheng; Fang, Shi

    2009-10-01

Based on the Mallat algorithm, a de-noising algorithm with an adaptive wavelet threshold is applied to the detection of the weak magnetic signal of a far moving target in a complex magnetic environment. The choice of threshold is the key problem. Using spectrum analysis of the target's magnetic field, a threshold algorithm based on an adaptive ARMA model filter is put forward to improve the wavelet filtering performance. A simulation of this algorithm on measured data is carried out. Compared to the Donoho threshold algorithm, the adaptive ARMA model threshold algorithm significantly improves the capability of weak magnetic signal detection in a complex magnetic environment.
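For orientation, here is a minimal sketch of the wavelet soft-thresholding baseline the paper compares against: a single-level Haar transform with the Donoho universal threshold and a MAD noise estimate. The ARMA-based adaptive threshold itself is not reproduced here; every parameter below is illustrative.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x):
    """Soft-threshold the detail band with the Donoho universal threshold."""
    a, d = haar_dwt(x)
    sigma = np.median(np.abs(d)) / 0.6745       # robust (MAD) noise estimate
    thr = sigma * np.sqrt(2 * np.log(x.size))   # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)
    return haar_idwt(a, d)
```

On a slowly varying signal buried in white noise, this baseline already reduces the mean squared error by suppressing the noise-dominated detail band; the paper's contribution is replacing the fixed universal threshold with one driven by an ARMA model of the measured field.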

  6. A Model for Designing Adaptive Laboratory Evolution Experiments.

    PubMed

    LaCroix, Ryan A; Palsson, Bernhard O; Feist, Adam M

    2017-04-15

The occurrence of mutations is a cornerstone of the evolutionary theory of adaptation, capitalizing on the rare chance that a mutation confers a fitness benefit. Natural selection is increasingly being leveraged in laboratory settings for industrial and basic science applications. Despite increasing deployment, there are no standardized procedures available for designing and performing adaptive laboratory evolution (ALE) experiments. Thus, there is a need to optimize the experimental design, specifically for determining when to consider an experiment complete and for balancing outcomes with available resources (i.e., laboratory supplies, personnel, and time). To design and to better understand ALE experiments, a simulator, ALEsim, was developed, validated, and applied to the optimization of ALE experiments. The effects of various passage sizes were experimentally determined and subsequently evaluated with ALEsim, to explain differences in experimental outcomes. Furthermore, a beneficial mutation rate of 10^-6.9 to 10^-8.4 mutations per cell division was derived. A retrospective analysis of ALE experiments revealed that passage sizes typically employed in serial passage batch culture ALE experiments led to inefficient production and fixation of beneficial mutations. ALEsim and the results described here will aid in the design of ALE experiments to fit the exact needs of a project while taking into account the resources required and will lower the barriers to entry for this experimental technique. IMPORTANCE: ALE is a widely used scientific technique to increase scientific understanding, as well as to create industrially relevant organisms. The manner in which ALE experiments are conducted is highly manual and uniform, with little optimization for efficiency. Such inefficiencies result in suboptimal experiments that can take multiple months to complete. With the availability of automation and computer simulations, we can now perform these experiments in an optimized
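The passage-size effect can be illustrated with a toy serial-passage simulation. This is not ALEsim; the growth factor, mutation rate, fitness advantage, and the deterministic-growth/stochastic-bottleneck split below are all illustrative assumptions.

```python
import random

def ale_run(passage_size, n_passages=60, growth=64, mu=1e-6, s=0.08, seed=0):
    """Toy serial-passage ALE model: returns the final mutant fraction.

    Each passage: deterministic batch growth (mutants grow (1+s)-fold
    faster), new beneficial mutants arising at rate mu per wild-type
    division, then a bottleneck back to passage_size cells.
    """
    rng = random.Random(seed)
    wt, mut = float(passage_size), 0.0
    for _ in range(n_passages):
        new_wt = wt * growth
        new_mut = mut * growth * (1 + s)
        new_mut += mu * (new_wt - wt)          # new mutants from wt divisions
        f = new_mut / (new_wt + new_mut)
        if passage_size <= 10_000:             # small bottleneck: drift matters
            mut = float(sum(rng.random() < f for _ in range(passage_size)))
        else:                                  # large bottleneck: expectation
            mut = f * passage_size
        wt = passage_size - mut
    return mut / passage_size
```

With these (hypothetical) parameters, a small passage size almost always loses nascent beneficial lineages at the bottleneck, while a large passage size lets them accumulate and sweep, echoing the paper's finding that typical passage sizes fix beneficial mutations inefficiently.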

  7. Using box models to quantify zonal distributions and emissions of halocarbons in the background atmosphere.

    NASA Astrophysics Data System (ADS)

    Elkins, J. W.; Nance, J. D.; Dutton, G. S.; Montzka, S. A.; Hall, B. D.; Miller, B.; Butler, J. H.; Mondeel, D. J.; Siso, C.; Moore, F. L.; Hintsa, E. J.; Wofsy, S. C.; Rigby, M. L.

    2015-12-01

The Halocarbons and other Atmospheric Trace Species (HATS) group of NOAA's Global Monitoring Division started measurements of the major chlorofluorocarbons and nitrous oxide in 1977 from flask samples collected at five remote sites around the world. Our program has expanded to over 40 compounds at twelve sites, which include six in situ instruments and twelve flask sites. The Montreal Protocol on Substances that Deplete the Ozone Layer and its subsequent amendments have helped to decrease the concentrations of many ozone-depleting compounds in the atmosphere. In this presentation, our goal is to provide zonal emission estimates for these trace gases from multi-box models together with their estimated atmospheric lifetimes, and to make the emission values available on our web site. We plan to use our airborne measurements to calibrate the exchange times between the boxes for the 5-box and 12-box models, using sulfur hexafluoride, whose emissions are better understood.
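The box-model idea can be sketched in its simplest two-box (hemispheric) form. This is a generic illustration, not NOAA's model: the lifetime, exchange time, and emission numbers are placeholders, and units are arbitrary.

```python
import numpy as np

def two_box(e_nh, e_sh, lifetime=50.0, t_exch=1.0, years=30, dt=0.05):
    """Forward two-box model for a long-lived trace gas.

    e_nh, e_sh : constant NH/SH emissions (arbitrary units per year)
    lifetime   : first-order atmospheric lifetime (years)
    t_exch     : interhemispheric exchange time (years)
    Returns the (NH, SH) burden time series.
    """
    c = np.zeros(2)
    e = np.array([e_nh, e_sh], dtype=float)
    out = []
    for _ in range(int(years / dt)):
        loss = c / lifetime                # first-order loss
        exch = (c[::-1] - c) / t_exch      # mixing between hemispheres
        c = c + dt * (e - loss + exch)     # explicit Euler step
        out.append(c.copy())
    return np.array(out)
```

Inverting such a model (adjusting emissions until the simulated gradient and trend match observed mixing ratios) is the essence of the zonal emission estimation described above; calibrating `t_exch` with a tracer of well-known emissions, such as SF6, anchors the inversion.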

  8. Direct Adaptive Control Methodologies for Flexible-Joint Space Manipulators with Uncertainties and Modeling Errors

    NASA Astrophysics Data System (ADS)

    Ulrich, Steve

This work addresses the direct adaptive trajectory tracking control problem associated with lightweight space robotic manipulators that exhibit elastic vibrations in their joints, and which are subject to parametric uncertainties and modeling errors. Unlike existing adaptive control methodologies, the proposed flexible-joint control techniques do not require identification of unknown parameters, or mathematical models of the system to be controlled. The direct adaptive controllers developed in this work are based on the model reference adaptive control approach, and manage modeling errors and parametric uncertainties by time-varying the controller gains using new adaptation mechanisms, thereby reducing the errors between an ideal model and the actual robot system. More specifically, new decentralized adaptation mechanisms derived from the simple adaptive control technique and fuzzy logic control theory are considered in this work. Numerical simulations compare the performance of the adaptive controllers with a nonadaptive and a conventional model-based controller, in the context of 12.6 m × 12.6 m square trajectory tracking. To validate the robustness of the controllers to modeling errors, a new dynamics formulation that includes several nonlinear effects usually neglected in flexible-joint dynamics models is proposed. Results obtained with the adaptive methodologies demonstrate an increased robustness to both uncertainties in joint stiffness coefficients and dynamics modeling errors, as well as highly improved tracking performance compared with the nonadaptive and model-based strategies. Finally, this work considers the partial state feedback problem related to flexible-joint space robotic manipulators equipped only with sensors that provide noisy measurements of motor positions and velocities. An extended Kalman filter-based estimation strategy is developed to estimate all state variables in real-time. The state estimation filter is combined with an adaptive

  9. Modelling T cell proliferation: Dynamics heterogeneity depending on cell differentiation, age, and genetic background

    PubMed Central

    2017-01-01

    Cell proliferation is the common characteristic of all biological systems. The immune system insures the maintenance of body integrity on the basis of a continuous production of diversified T lymphocytes in the thymus. This involves processes of proliferation, differentiation, selection, death and migration of lymphocytes to peripheral tissues, where proliferation also occurs upon antigen recognition. Quantification of cell proliferation dynamics requires specific experimental methods and mathematical modelling. Here, we assess the impact of genetics and aging on the immune system by investigating the dynamics of proliferation of T lymphocytes across their differentiation through thymus and spleen in mice. Our investigation is based on single-cell multicolour flow cytometry analysis revealing the active incorporation of a thymidine analogue during S phase after pulse-chase-pulse experiments in vivo, versus cell DNA content. A generic mathematical model of state transition simulates through Ordinary Differential Equations (ODEs) the evolution of single cell behaviour during various durations of labelling. It allows us to fit our data, to deduce proliferation rates and estimate cell cycle durations in sub-populations. Our model is simple and flexible and is validated with other durations of pulse/chase experiments. Our results reveal that T cell proliferation is highly heterogeneous but with a specific “signature” that depends upon genetic origins, is specific to cell differentiation stages in thymus and spleen and is altered with age. In conclusion, our model allows us to infer proliferation rates and cell cycle phase durations from complex experimental 5-ethynyl-2'-deoxyuridine (EdU) data, revealing T cell proliferation heterogeneity and specific signatures. PMID:28288157

  10. Modelling T cell proliferation: Dynamics heterogeneity depending on cell differentiation, age, and genetic background.

    PubMed

    Vibert, Julien; Thomas-Vaslin, Véronique

    2017-03-01

    Cell proliferation is the common characteristic of all biological systems. The immune system insures the maintenance of body integrity on the basis of a continuous production of diversified T lymphocytes in the thymus. This involves processes of proliferation, differentiation, selection, death and migration of lymphocytes to peripheral tissues, where proliferation also occurs upon antigen recognition. Quantification of cell proliferation dynamics requires specific experimental methods and mathematical modelling. Here, we assess the impact of genetics and aging on the immune system by investigating the dynamics of proliferation of T lymphocytes across their differentiation through thymus and spleen in mice. Our investigation is based on single-cell multicolour flow cytometry analysis revealing the active incorporation of a thymidine analogue during S phase after pulse-chase-pulse experiments in vivo, versus cell DNA content. A generic mathematical model of state transition simulates through Ordinary Differential Equations (ODEs) the evolution of single cell behaviour during various durations of labelling. It allows us to fit our data, to deduce proliferation rates and estimate cell cycle durations in sub-populations. Our model is simple and flexible and is validated with other durations of pulse/chase experiments. Our results reveal that T cell proliferation is highly heterogeneous but with a specific "signature" that depends upon genetic origins, is specific to cell differentiation stages in thymus and spleen and is altered with age. In conclusion, our model allows us to infer proliferation rates and cell cycle phase durations from complex experimental 5-ethynyl-2'-deoxyuridine (EdU) data, revealing T cell proliferation heterogeneity and specific signatures.
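A state-transition ODE model of the kind described in these two records can be sketched as a three-compartment cell-cycle model in which cells entering S phase during a label pulse are counted as labelled. This is a toy reconstruction with hypothetical rates, not the authors' model; a real analysis would fit the rates to the flow cytometry data.

```python
def cycle_model(k_g1s=0.4, k_sg2=0.5, k_g2g1=1.0, pulse_h=2.0, dt=0.01):
    """G1 -> S -> G2/M -> 2*G1 state-transition ODEs (explicit Euler).

    Rates are per hour (illustrative). Cells crossing G1->S during the
    pulse incorporate the thymidine analogue; returns the labelled
    fraction at the end of the pulse.
    """
    g1, s, g2 = 1.0, 0.0, 0.0          # start with all cells in G1
    labelled = 0.0
    for _ in range(int(pulse_h / dt)):
        f_g1s = k_g1s * g1             # flux into S: these cells label
        f_sg2 = k_sg2 * s
        f_g2g1 = k_g2g1 * g2
        g1 += dt * (2 * f_g2g1 - f_g1s)   # division doubles returning cells
        s += dt * (f_g1s - f_sg2)
        g2 += dt * (f_sg2 - f_g2g1)
        labelled += dt * f_g1s
    return labelled / (g1 + s + g2)
```

Fitting such a model to labelled fractions measured at several pulse durations is what lets the authors infer proliferation rates and cycle-phase durations per sub-population; longer pulses or faster G1-to-S transition both raise the labelled fraction, which the sketch reproduces.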

  11. [The mathematical modelling of population dynamics taking into account the adaptive behavior of individuals].

    PubMed

    Abakumov, A I

    2000-01-01

    A general approach for modelling the abundance dynamics of biological populations and communities is offered. The mechanisms of individual adaptation in a changing environment are considered. The approach is detailed for population models without structure and with age structure. The properties of solutions are investigated. As examples, the author studies concrete specializations of the general models by analogy with the models of Ricker and May. Theoretical analysis and calculations show that the survival of the model population in extreme situations increases if adaptive behaviour is taken into account.
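    The Ricker map mentioned above, N(t+1) = N(t)·exp(r(1 − N(t)/K)), illustrates the claim: a crude "adaptive" damping of the effective reproduction rate (a stand-in of my own devising, not the paper's mechanism) keeps a chaotic population away from the near-extinction crashes of the plain map.

```python
import math

def ricker(n, r, k=100.0):
    """One step of the Ricker map."""
    return n * math.exp(r * (1.0 - n / k))

def step_adaptive(n, r, k=100.0, gain=2.0):
    """Illustrative 'adaptive' step: individuals damp their effective
    reproduction rate when the population is far from capacity, a crude
    stand-in for individual adaptation in a harsh environment."""
    r_eff = r / (1.0 + gain * abs(n / k - 1.0))
    return ricker(n, r_eff, k)

def trajectory(step, n0, r, t):
    n, out = n0, [n0]
    for _ in range(t):
        n = step(n, r)
        out.append(n)
    return out

# At r = 3 the plain Ricker map is chaotic, with crashes close to zero;
# the damped variant settles into a mild oscillation near capacity.
plain = trajectory(ricker, 50.0, 3.0, 200)
adapt = trajectory(step_adaptive, 50.0, 3.0, 200)
print(min(plain[50:]), min(adapt[50:]))
```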

  12. Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2009-12-01

    To better understand insurgent activities and asymmetric warfare, a social adaptive model for modeling multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study using particle swarm modeling, a widely used non-linear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary requirements for insurgents to efficiently attain their objective.
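    For readers unfamiliar with the particle swarm metaphor the report borrows, here is the canonical optimization algorithm in minimal form. This is the generic method only, with arbitrary coefficients, not the report's insurgency simulation: each particle is pulled toward its own best position and the swarm's best.

```python
import random

random.seed(1)

def pso(f, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5, hi=5):
    """Minimal particle swarm optimizer minimizing f over a box."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia plus cognitive (pbest) and social (gbest) pulls.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

best, val = pso(lambda p: sum(x * x for x in p))  # sphere function
print(best, val)
```

    In the social-adaptation reading, the "fitness landscape" becomes the changing environment and each particle an insurgent group adapting its position to it.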

  13. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, it is becoming a challenge for electronic commerce services to provide performance guarantees under extreme overload. This paper describes a real-time optimization modeling and scheduling approach for the performance guarantee of electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. The robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation that is subjected to varying model parameters and massive overload.
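    The core idea of adaptive admission control under overload can be shown with a toy fluid model. Everything here (parameters, the PI-style update rule) is my own illustrative assumption, not the paper's multi-tank controller: when offered load exceeds capacity, the admission probability is adjusted each control period to hold the request queue at a setpoint.

```python
# Toy adaptive admission controller for an overloaded server: a
# PI-style rule drives the queue length to a target by throttling
# the fraction of arriving requests that are admitted.

def run(arrivals=200.0, capacity=100.0, target=50.0,
        periods=300, ki=0.002, kd=0.002):
    queue, admit_p, prev_queue = 0.0, 1.0, 0.0
    for _ in range(periods):
        # Fluid queue update: admitted load minus service capacity.
        queue = max(0.0, queue + admit_p * arrivals - capacity)
        # Shrink admission when over target; damp with the queue trend.
        admit_p -= ki * (queue - target) + kd * (queue - prev_queue)
        admit_p = min(1.0, max(0.0, admit_p))
        prev_queue = queue
    return queue, admit_p

queue, admit_p = run()
print(round(queue, 1), round(admit_p, 3))  # settles near 50 and 0.5
```

    At equilibrium the controller admits exactly the service capacity (admit_p = capacity/arrivals = 0.5) while the queue sits at the setpoint, which is the timing-guarantee idea in miniature.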

  14. HMM-Based Style Control for Expressive Speech Synthesis with Arbitrary Speaker's Voice Using Model Adaptation

    NASA Astrophysics Data System (ADS)

    Nose, Takashi; Tachibana, Makoto; Kobayashi, Takao

    This paper presents methods for controlling the intensity of emotional expressions and speaking styles of an arbitrary speaker's synthetic speech by using a small amount of his/her speech data in HMM-based speech synthesis. Model adaptation approaches are introduced into the style control technique based on the multiple-regression hidden semi-Markov model (MRHSMM). Two different approaches are proposed for training a target speaker's MRHSMMs. The first one is MRHSMM-based model adaptation in which the pretrained MRHSMM is adapted to the target speaker's model. For this purpose, we formulate the MLLR adaptation algorithm for the MRHSMM. The second method utilizes simultaneous adaptation of speaker and style from an average voice model to obtain the target speaker's style-dependent HSMMs which are used for the initialization of the MRHSMM. From the result of subjective evaluation using adaptation data of 50 sentences of each style, we show that the proposed methods outperform the conventional speaker-dependent model training when using the same size of speech data of the target speaker.

  15. Modeling the fluctuations of the cosmic infrared background: what did we learn from Planck?

    NASA Astrophysics Data System (ADS)

    Bethermin, Matthieu

    2015-08-01

    The CIB is the relic emission of the dust heated by young stars across cosmic time. It is a powerful probe of the star formation history in the Universe. The distribution of star-forming galaxies in the large-scale structures is imprinted in the anisotropies of the CIB. They are thus one of the keys to understanding how large-scale structures shaped the evolution of galaxies. Planck measured these anisotropies with an unprecedented accuracy. However, the CIB is an integrated emission, and a model is necessary to disentangle the contributions of the different redshifts. Large-scale anisotropies can be interpreted using a linear model. This simple approach relies on a minimal number of hypotheses. We found a star formation history consistent with the extrapolation of the Herschel luminosity function. This rules out any major contribution of faint IR galaxies. We also constrained the mean mass of the dark matter halos hosting the galaxies which emit the CIB. This mass is almost constant from z=4 to z=0, while dark matter halos grew very quickly during this interval of time. The structures hosting star formation are thus not the same at low and high redshifts. This also suggests the existence of a halo mass for which star formation is most efficient. Halo occupation models can describe in detail how dark matter halos are populated by infrared galaxies. We coupled a phenomenological model of galaxy evolution calibrated on Herschel data with a halo model, using the technique of abundance matching. This approach allows us to reproduce the CIB anisotropies naturally. We found that the efficiency of halos in converting accreted baryons into stars varies strongly with halo mass, but not with time. This highlights the role played by host halos as regulators of star formation in galaxies. I will finally explain how we could gain access to 3D information with future instruments and isolate the highest redshifts more efficiently using intensity mapping of bright sub-millimeter lines.

  16. Adaptive design clinical trials and trial logistics models in CNS drug development.

    PubMed

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2011-02-01

    In central nervous system therapeutic areas, there are general concerns with establishing efficacy, which are thought to be sources of the high attrition rate in drug development. For instance, efficacy endpoints are often subjective and highly variable. There is a lack of robust or operational biomarkers to substitute for soft endpoints. In addition, animal models are generally poor, unreliable or unpredictive. To increase the probability of success in central nervous system drug development programs, adaptive design has been considered as an alternative design that provides flexibility to the conventional fixed designs and has been viewed as having the potential to improve the efficiency of drug development processes. In addition, successful implementation of an adaptive design trial relies on establishment of a trustworthy logistics model that ensures integrity of the trial conduct. In accordance with the spirit of the recently released U.S. Food and Drug Administration draft guidance document on adaptive design, this paper lists the critical considerations, from both methodological and regulatory aspects, in reviewing an adaptive design proposal, and discusses two general types of adaptation: sample size planning and re-estimation, and two-stage adaptive designs. Literature examples of adaptive designs in the central nervous system area are used to highlight the principles laid out in the U.S. FDA draft guidance. Four logistics models seen in regulatory adaptive design applications are introduced. In general, complex adaptive designs require simulation studies to assess the design performance. For an adequate and well-controlled clinical trial, if a Learn-and-Confirm adaptive selection approach is considered, the study-wise type I error rate should be adhered to. However, it is controversial to use the simulated type I error rate to address a strong control of the study-wise type I error rate.
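    Why the study-wise type I error rate needs guarding in adaptive designs can be shown by simulation. The sketch below (my own toy two-look design, not one of the paper's examples) tests a z-statistic at an interim look and at the final look, both at the nominal |z| > 1.96, with no alpha adjustment; under the null the overall rejection rate inflates well above 0.05.

```python
import random, math

random.seed(3)

def two_stage_trial(n1=50, n2=50):
    """One trial under the null hypothesis: unadjusted z-tests at an
    interim look (n1 subjects) and at the final look (n1 + n2)."""
    x = [random.gauss(0, 1) for _ in range(n1)]
    z1 = sum(x) / math.sqrt(n1)          # standard normal under the null
    if abs(z1) > 1.96:
        return True                      # early rejection
    x += [random.gauss(0, 1) for _ in range(n2)]
    z2 = sum(x) / math.sqrt(n1 + n2)
    return abs(z2) > 1.96

sims = 20000
rate = sum(two_stage_trial() for _ in range(sims)) / sims
print(rate)  # noticeably above the nominal 0.05 (around 0.08)
```

    Group-sequential boundaries (e.g. Pocock or O'Brien-Fleming) exist precisely to pull this inflated rate back down to the nominal level.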

  17. Deconfinement in the presence of a strong magnetic background: An exercise within the MIT bag model

    NASA Astrophysics Data System (ADS)

    Fraga, Eduardo S.; Palhares, Letícia F.

    2012-07-01

    We study the effect of a very strong homogeneous magnetic field B on the thermal deconfinement transition within the simplest phenomenological approach: the MIT bag pressure for the quark-gluon plasma and a gas of pions for the hadronic sector. Even though the model is known to be crude in numerical precision and misses the correct nature of the (crossover) transition, it provides a simple setup for the discussion of some subtleties of vacuum and thermal contributions in each phase, and should provide a reasonable qualitative description of the critical temperature in the presence of B. We find that the critical temperature decreases, saturating for very large fields.
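    The B = 0 limit of the setup described above is a one-line pressure balance. The sketch below is a back-of-envelope estimate under standard assumptions (massless two-flavour QGP, massless pion gas, bag constant B^(1/4) = 200 MeV); the magnetic-field dependence studied in the paper requires the full Landau-level pressures and is omitted here.

```python
import math

# MIT bag estimate of the deconfinement temperature at B = 0.
# Pressure balance between the two phases:
#   pion gas (3 d.o.f.):            P_pi  =  3 * (pi^2/90) * T^4
#   QGP (2 flavours, 37 d.o.f.):    P_qgp = 37 * (pi^2/90) * T^4 - B_bag
# Setting P_pi = P_qgp gives  T_c = (90 * B_bag / (34 * pi^2))**(1/4).

def t_critical(bag_quarter_root_mev=200.0):
    b = bag_quarter_root_mev ** 4            # bag constant in MeV^4
    return (90.0 * b / (34.0 * math.pi ** 2)) ** 0.25

print(t_critical())  # roughly 144 MeV for B_bag^(1/4) = 200 MeV
```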

  18. Theoretical model for a background noise limited laser-excited optical filter for doubled Nd lasers

    NASA Astrophysics Data System (ADS)

    Shay, Thomas M.; Garcia, Daniel F.

    1990-06-01

    A simple theoretical model for the calculation of the dependence of filter quantum efficiency versus laser pump power in an atomic Rb vapor laser-excited optical filter is reported. Calculations for Rb filter transitions that can be used to detect the practical and important frequency-doubled Nd lasers are presented. The results of these calculations show the filter's quantum efficiency versus the laser pump power. The required laser pump powers range from 2.4 to 60 mW/sq cm of filter aperture.

  19. Theoretical model for a background noise limited laser-excited optical filter for doubled Nd lasers

    NASA Technical Reports Server (NTRS)

    Shay, Thomas M.; Garcia, Daniel F.

    1990-01-01

    A simple theoretical model for the calculation of the dependence of filter quantum efficiency versus laser pump power in an atomic Rb vapor laser-excited optical filter is reported. Calculations for Rb filter transitions that can be used to detect the practical and important frequency-doubled Nd lasers are presented. The results of these calculations show the filter's quantum efficiency versus the laser pump power. The required laser pump powers range from 2.4 to 60 mW/sq cm of filter aperture.

  20. Changing universe model of the cosmic microwave background, early type galaxies, redshift, and discrete redshifts

    NASA Astrophysics Data System (ADS)

    Hodge, John

    2005-04-01

    Developing the changing universe model (CUM) toward an alternate cosmological model provides motivation to investigate cosmological observations. The black body nature of the CMB is consistent with the CUM. Since the CUM posits that photons are quantized, positing quantum oscillators in the wall of the black body cavity is unnecessary. The CMB temperature and mass content of our universe are controlled by a feedback mechanism. If our universe is stable, the temperature of the CMB radiation should be 2.718 K. The CUM suggests the higher measured CMB temperature indicates an imbalance between the energy injection and energy ejection rates of the Sources and Sinks. Several differences among galaxy types suggest that spiral galaxies are Sources and that early type and irregular galaxies are Sinks. The redshift calculation explored previously (SESAPS '04, session GD 15) is improved. Further, the CUM suggests the discrete variations in redshift, reported by W. G. Tifft, 1997, Astrophys. J. 485, 465 (and references therein) and confirmed by others, are consistent with the Sink's effect on redshift in clusters. Full text: http://web.infoave.net/ scjh.

  1. Columbia River Statistical Update Model, Version 4.0 (COLSTAT4): Background documentation and user's guide

    SciTech Connect

    Whelan, G.; Damschen, D.W.; Brockhaus, R.D.

    1987-08-01

    Daily-averaged temperature and flow information on the Columbia River just downstream of Priest Rapids Dam and upstream of river mile 380 were collected and stored in a data base. The flow information corresponds to discharges that were collected daily from October 1, 1959, through July 28, 1986. The temperature information corresponds to values that were collected daily from January 1, 1965, through May 27, 1986. The computer model, COLSTAT4 (Columbia River Statistical Update - Version 4.0 model), uses the temperature-discharge data base to statistically analyze temperature and flow conditions by computing the frequency of occurrence and duration of selected temperatures and flow rates for the Columbia River. The COLSTAT4 code analyzes the flow and temperature information in a sequential time frame (i.e., a continuous analysis over a given time period); it also analyzes this information in a seasonal time frame (i.e., a periodic analysis over a specific season from year to year). A provision is included to enable the user to edit and/or extend the data base of temperature and flow information. This report describes the COLSTAT4 code and the information contained in its data base.
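    The kind of statistic COLSTAT4 reports, frequency of occurrence and duration of selected temperatures, is easy to sketch. The synthetic daily series and threshold below are invented for illustration; the real code works against the Columbia River data base described above.

```python
# Frequency of occurrence of days exceeding a temperature threshold,
# plus the durations (run lengths) of consecutive exceedance spells.

def exceedance_stats(series, threshold):
    n_exceed = sum(v > threshold for v in series)
    freq = n_exceed / len(series)
    runs, current = [], 0
    for v in series:
        if v > threshold:
            current += 1
        elif current:
            runs.append(current)     # an exceedance spell just ended
            current = 0
    if current:
        runs.append(current)         # series ended mid-spell
    return freq, runs

temps = [14, 15, 17, 18, 18, 16, 14, 19, 20, 19, 15, 14]  # daily degC
freq, runs = exceedance_stats(temps, threshold=16.5)
print(freq, runs)  # → 0.5 [3, 3]
```

    The "sequential" versus "seasonal" analyses in the report amount to running such statistics over the whole record versus over the same calendar window in each year.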

  2. Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.

    2010-01-01

    This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model developed by NASA-Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure while the adaptive one has a direct, model reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need for cancelling the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high fidelity simulations of a flight-validated controller and of its adaptive augmentation.
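    A direct model-reference structure of the kind mentioned can be illustrated on a scalar toy problem. This is the textbook MIT-rule scheme for a first-order plant with unknown gain, my own minimal example, not the GTM controller: the adaptive gain is driven by the error between the plant and a reference model until the plant tracks the model.

```python
# Direct model-reference adaptive control (MIT rule), scalar sketch.
# Plant:      dy/dt  = -y + k*u      (k unknown to the controller)
# Reference:  dym/dt = -ym + r
# Control:    u = theta * r,  adapted by  dtheta/dt = -gamma * e * r,
# where e = y - ym. Ideal gain is theta* = 1/k.

def simulate(k_plant=2.0, gamma=0.5, dt=0.001, t_end=40.0):
    y = ym = theta = 0.0
    errors = []
    for i in range(int(t_end / dt)):
        t = i * dt
        r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave reference
        u = theta * r
        e = y - ym
        y += dt * (-y + k_plant * u)            # plant (Euler step)
        ym += dt * (-ym + r)                    # reference model
        theta += dt * (-gamma * e * r)          # MIT-rule gain update
        errors.append(abs(e))
    return errors, theta

errors, theta = simulate()
print(theta)  # converges toward the ideal feedforward gain 1/k = 0.5
```

    The square wave supplies the persistent excitation needed for the gain to converge; with a constant reference the error can vanish while theta stalls short of 1/k.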

  3. REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory

    NASA Astrophysics Data System (ADS)

    Tin, Chung; Poon, Chi-Sang

    2005-09-01

    Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally for robotic manipulator applications. Modular internal models' architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. Possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems—such as sensorimotor prediction or the resolution of vestibular sensory ambiguity—is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in future.

  4. Neuro- and sensoriphysiological Adaptations to Microgravity using Fish as Model System

    NASA Astrophysics Data System (ADS)

    Anken, R.

    The phylogenetic development of all organisms took place under constant gravity conditions, against which they developed specific countermeasures for compensation and adaptation. Against this background, it is still an open question to what extent altered gravity such as hyper- or microgravity (centrifuge/spaceflight) affects normal individual development, either on the systemic level of the whole organism or on the level of individual organs or even single cells. The present review provides information on this topic, focusing on the effects of altered gravity on developing fish as model systems even for higher vertebrates including humans, with special emphasis on the effect of altered gravity on behaviour and particularly on the developing brain and vestibular system. Overall, the results speak in favour of the following concept: Short-term altered gravity (~1 day) can induce transient sensorimotor disorders (kinetoses) due to malfunctions of the inner ear, originating from asymmetric otoliths. The regaining of normal postural control is likely due to a reweighting of sensory inputs. During long-term altered gravity (several days or more), complex adaptations on the level of the central and peripheral vestibular system occur. This work was financially supported by the German Aerospace Center (DLR) e.V. (FKZ: 50 WB 9997).

  5. Effect of mouse strain as a background for Alzheimer's disease models on the clearance of amyloid-β.

    PubMed

    Qosa, Hisham; Kaddoumi, Amal

    2016-04-01

    Novel animal models of Alzheimer's disease (AD) are relentlessly being developed and existing ones are being fine-tuned; however, these models face multiple challenges associated with the complexity of the disease where most of these models do not reproduce the full phenotypical disease spectrum. Moreover, different AD models express different phenotypes that could affect their validity to recapitulate disease pathogenesis and/or response to a drug. One of the most important and understudied differences between AD models is differences in the phenotypic characteristics of the background species. Here, we used the brain clearance index (BCI) method to investigate the effect of strain differences on the clearance of amyloid β (Aβ) from the brains of four mouse strains. These mouse strains, namely C57BL/6, FVB/N, BALB/c and SJL/J, are widely used as a background for the development of AD mouse models. Findings showed that while Aβ clearance across the blood-brain barrier (BBB) was comparable between the 4 strains, levels of LRP1, an Aβ clearance protein, was significantly lower in SJL/J mice compared to other mouse strains. Furthermore, these mouse strains showed a significantly different response to rifampicin treatment with regard to Aβ clearance and effect on brain level of its clearance-related proteins. Our results provide for the first time an evidence for strain differences that could affect ability of AD mouse models to recapitulate response to a drug, and opens a new research avenue that requires further investigation to successfully develop mouse models that could simulate clinically important phenotypic characteristics of AD.

  6. Adaptation of a general circulation model to ocean dynamics

    NASA Technical Reports Server (NTRS)

    Turner, R. E.; Rees, T. H.; Woodbury, G. E.

    1976-01-01

    A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global ocean and limited region situations. Strengths and weaknesses of the model are also presented.

  7. Adapting the Sport Education Model for Children with Disabilities

    ERIC Educational Resources Information Center

    Presse, Cindy; Block, Martin E.; Horton, Mel; Harvey, William J.

    2011-01-01

    The sport education model (SEM) has been widely used as a curriculum and instructional model to provide children with authentic and active sport experiences in physical education. In this model, students are assigned various roles to gain a deeper understanding of the sport or activity. This article provides a brief overview of the SEM and…

  8. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  9. Adaptive Ambient Illumination Based on Color Harmony Model

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ayano; Hirai, Keita; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    We investigated the relationship between ambient illumination and psychological effect by applying a modified color harmony model. We verified the proposed model by analyzing correlation between psychological value and modified color harmony score. Experimental results showed the possibility to obtain the best color for illumination using this model.

  10. Crop plants as models for understanding plant adaptation and diversification

    PubMed Central

    Olsen, Kenneth M.; Wendel, Jonathan F.

    2013-01-01

    Since the time of Darwin, biologists have understood the promise of crop plants and their wild relatives for providing insight into the mechanisms of phenotypic evolution. The intense selection imposed by our ancestors during plant domestication and subsequent crop improvement has generated remarkable transformations of plant phenotypes. Unlike evolution in natural settings, descendent and antecedent conditions for crop plants are often both extant, providing opportunities for direct comparisons through crossing and other experimental approaches. Moreover, since domestication has repeatedly generated a suite of “domestication syndrome” traits that are shared among crops, opportunities exist for gaining insight into the genetic and developmental mechanisms that underlie parallel adaptive evolution. Advances in our understanding of the genetic architecture of domestication-related traits have emerged from combining powerful molecular technologies with advanced experimental designs, including nested association mapping, genome-wide association studies, population genetic screens for signatures of selection, and candidate gene approaches. These studies may be combined with high-throughput evaluations of the various “omics” involved in trait transformation, revealing a diversity of underlying causative mutations affecting phenotypes and their downstream propagation through biological networks. We summarize the state of our knowledge of the mutational spectrum that generates phenotypic novelty in domesticated plant species, and our current understanding of how domestication can reshape gene expression networks and emergent phenotypes. An exploration of traits that have been subject to similar selective pressures across crops (e.g., flowering time) suggests that a diversity of targeted genes and causative mutational changes can underlie parallel adaptation in the context of crop evolution. PMID:23914199

  11. From epidemics to information propagation: striking differences in structurally similar adaptive network models.

    PubMed

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) In the ASIS model a link is removed between two nodes if exactly one of the nodes is infected to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.

  12. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    NASA Astrophysics Data System (ADS)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) In the ASIS model a link is removed between two nodes if exactly one of the nodes is infected to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.
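    The ASIS-style link-breaking rule (i) is easy to demonstrate in a toy discrete-time simulation. All parameters below are arbitrary demonstration values, the S-S link-creation rule (ii) is omitted for brevity, and this is a simplification of my own rather than the authors' continuous-time model; it only shows qualitatively that breaking susceptible-infected links suppresses prevalence relative to a static network.

```python
import random

random.seed(4)

def run(n=30, beta=0.05, delta=0.1, omega=0.3, steps=200, adaptive=True):
    """Discrete-time SIS on a complete graph; if adaptive, each S-I link
    is broken with probability omega per step (no link re-creation)."""
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)}
    infected = set(random.sample(range(n), 3))
    prevalence = []
    for _ in range(steps):
        new_inf = set()
        for (i, j) in edges:                       # per-link transmission
            for s, t in ((i, j), (j, i)):
                if s in infected and t not in infected \
                        and random.random() < beta:
                    new_inf.add(t)
        infected |= new_inf
        infected -= {v for v in infected if random.random() < delta}
        if adaptive:                               # break S-I links
            edges = {e for e in edges
                     if not ((e[0] in infected) ^ (e[1] in infected))
                     or random.random() >= omega}
        prevalence.append(len(infected) / n)
    # Average prevalence over the second half of the run.
    return sum(prevalence[steps // 2:]) / (steps - steps // 2)

static = run(adaptive=False)
adapt = run()
print(static, adapt)  # adaptation lowers the metastable prevalence
```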

  13. Adapting Axelrod's cultural dissemination model for simulating peer effects.

    PubMed

    Hofer, Christian; Lechner, Gernot; Brudermann, Thomas; Füllsack, Manfred

    2017-01-01

    We present a generic method for considering incomplete but gradually expandable sociological data in agent-based modeling, based on the classic model of cultural dissemination by Axelrod. Our method extension was inspired by research on the diffusion of citizen photovoltaic initiatives, i.e. by initiatives in which citizens collectively invest in photovoltaic plants and share the profits. Owing to the absence of empirical interaction parameters, the Axelrod model was used as a basis for considering peer effects with contrived interaction data that can be updated from empirical surveys later on. The Axelrod model was extended to cover the following additional features:
    • Consideration of empirical social science data for concrete social interaction.
    • Development of a variable and fine-tunable interaction function for agents.
    • Deployment of a generic procedure for modeling peer effects in agent-based models.
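    For reference, the classic Axelrod model that the paper extends looks like this in bare-bones form (grid size, feature and trait counts are illustrative): agents interact with a random neighbour with probability equal to their cultural similarity, and on interaction copy one differing feature.

```python
import random

random.seed(5)

def distinct_cultures(grid):
    return len({tuple(v) for v in grid.values()})

SIZE, F, Q = 8, 3, 4     # torus grid, F features, Q traits per feature
grid = {(x, y): [random.randrange(Q) for _ in range(F)]
        for x in range(SIZE) for y in range(SIZE)}
cells = list(grid)
before = distinct_cultures(grid)

for _ in range(50000):
    x, y = random.choice(cells)
    dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    a, b = grid[(x, y)], grid[((x + dx) % SIZE, (y + dy) % SIZE)]
    shared = sum(ai == bi for ai, bi in zip(a, b))
    # Interact with probability = similarity, if not already identical.
    if shared < F and random.random() < shared / F:
        i = random.choice([i for i in range(F) if a[i] != b[i]])
        a[i] = b[i]      # adopt the neighbour's trait on one feature

after = distinct_cultures(grid)
print(before, after)     # cultural diversity shrinks over time
```

    The paper's extension replaces the uniform similarity-based interaction probability with a fine-tunable function that can absorb empirical survey data as it becomes available.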

  14. Stochastic stage-structured modeling of the adaptive immune system

    SciTech Connect

    Chao, D. L.; Davenport, M. P.; Forrest, S.; Perelson, Alan S.,

    2003-01-01

    We have constructed a computer model of the cytotoxic T lymphocyte (CTL) response to antigen and the maintenance of immunological memory. Because immune responses often begin with small numbers of cells and there is great variation among individual immune systems, we have chosen to implement a stochastic model that captures the life cycle of T cells more faithfully than deterministic models. Past models of the immune response have been differential equation based, which do not capture stochastic effects, or agent-based, which are computationally expensive. We use a stochastic stage-structured approach that has many of the advantages of agent-based modeling but is more efficient. Our model can provide insights into the effect infections have on the CTL repertoire and the response to subsequent infections.
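    The stochastic effect the abstract alludes to, that responses starting from small cell numbers can fizzle out by chance, is visible even in a one-stage Gillespie-style birth-death sketch. Rates and initial clone size below are illustrative assumptions; the paper's stage-structured model tracks many more cell stages than this.

```python
import random

random.seed(6)

def clone_size(n0=3, b=1.0, d=0.6, t_end=10.0):
    """Gillespie simulation of a birth-death process: each CTL divides
    at rate b and dies at rate d; returns the clone size at t_end."""
    n, t = n0, 0.0
    while n and t < t_end:
        rate = (b + d) * n
        t += random.expovariate(rate)           # time to next event
        if random.random() < b / (b + d):
            n += 1                              # division
        else:
            n -= 1                              # death
    return n

runs = [clone_size() for _ in range(500)]
extinct = sum(r == 0 for r in runs) / len(runs)
print(extinct)  # near the ultimate extinction prob. (d/b)**n0 ~ 0.22
```

    A deterministic ODE with the same rates would predict unconditional exponential growth; the roughly one-in-five chance extinctions are exactly what the stochastic stage-structured approach is built to capture.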

  15. Predicting Adaptive Performance in Multicultural Teams: A Causal Model

    DTIC Science & Technology

    2008-02-01

    Applied Psychology, 91, 1189-1207. [6] Byrne, B. M. (2001). Structural equation modeling with AMOS: Basic concepts, applications, and programming. Mahwah...means of Factor Analysis (FA), Multidimensional Scaling (MDS), and Structural Equation Modeling (LISREL). Unpublished manuscript; in process of being... equation modeling. New York, NY: Guilford Press. [14] Kozlowski, S. W. J., Gully, S. M., Brown, K. G., Salas, E., Smith, E. M., & Nason, E. R. (2001

  16. Adaptive model of plankton dynamics for the North Atlantic

    NASA Astrophysics Data System (ADS)

    Pahlow, Markus; Vézina, Alain F.; Casault, Benoit; Maass, Heidi; Malloch, Louise; Wright, Daniel G.; Lu, Youyu

    2008-02-01

    Plankton ecosystems in the North Atlantic display strong regional and interannual variability in productivity and trophic structure, which cannot be captured by simple plankton models. Additional compartments subdividing functional groups can increase predictive power, but the high number of parameters tends to compromise portability and robustness of model predictions. An alternative strategy is to use property state variables, such as cell size, normally considered constant parameters in ecosystem models, to define the structure of functional groups in terms of both behaviour and response to physical forcing. This strategy may allow us to simulate realistically regional and temporal differences among plankton communities while keeping model complexity at a minimum. We fit a model of plankton and DOM dynamics globally and individually to observed climatologies at three diverse locations in the North Atlantic. Introducing additional property state variables is shown to improve the model fit both locally and globally, make the model more portable, and help identify model deficiencies. The zooplankton formulation exerts strong control on model performance. Our results suggest that the current paradigm on zooplankton allometric functional relationships might be at odds with observed plankton dynamics. Our parameter estimation resulted in more realistic estimates of parameters important for primary production than previous data assimilation studies. Property state variables generate complex emergent functional relationships, and might be used like tracers to differentiate between locally produced and advected biomass. The model results suggest that the observed temperature dependence of heterotrophic growth efficiency [Rivkin, R.B., Legendre, L., 2001. Biogenic carbon cycling in the upper ocean: effects of microbial respiration. Science 291 (5512) 2398-2400] could be an emergent relation due to intercorrelations among temperature, nutrient concentration and growth

  17. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.
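    Fechner's linking hypothesis can be made concrete: if s(c) is discrimination sensitivity (inverse just-noticeable difference) at contrast c, apparent magnitude is its integral F(c) = ∫₀ᶜ s(x) dx. A small numerical check, assuming a Weber-like sensitivity function for illustration (not the authors' fitted model):

    ```python
    import numpy as np

    # Fechnerian linking sketch: apparent magnitude as the integral of
    # sensitivity.  Weber-like s(c) = 1/(a + b*c) is an assumed form.
    a, b = 0.02, 1.0

    def sensitivity(c):
        return 1.0 / (a + b * c)

    def appearance(c, n=20000):
        # Trapezoidal integration of sensitivity from 0 to c.
        x = np.linspace(0.0, c, n)
        y = sensitivity(x)
        dx = x[1] - x[0]
        return float(np.sum(0.5 * (y[:-1] + y[1:]) * dx))

    # Closed form for this particular s(c): F(c) = ln(1 + b*c/a) / b
    analytic = float(np.log(1.0 + b * 0.5 / a) / b)
    numeric = appearance(0.5)
    ```

    Under this hypothesis, fitting the same s(c) to both threshold data and appearance-matching data is exactly the test of a common adaptation mechanism that the study performs.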

  19. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes
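    Evaluations of this kind typically reduce to point-verification statistics of model output against local observations. A minimal sketch of the two most common scores, bias and RMSE (the sample values are invented for illustration, not KSC/CCAFS data):

    ```python
    import numpy as np

    # Simple point-forecast verification: bias and RMSE of forecast 2 m
    # temperature against tower observations (values are illustrative).
    def bias(forecast, obs):
        return float(np.mean(np.asarray(forecast) - np.asarray(obs)))

    def rmse(forecast, obs):
        d = np.asarray(forecast) - np.asarray(obs)
        return float(np.sqrt(np.mean(d ** 2)))

    fcst = [25.1, 24.3, 26.0, 23.8]   # model forecasts, deg C
    obsv = [24.8, 24.9, 25.2, 24.0]   # wind-tower observations, deg C
    ```

    Bias exposes systematic model error (e.g. a warm surface layer), while RMSE measures overall forecast accuracy; reporting both, stratified by season and time of day, is the usual way such assessments convey a model's capabilities and limitations to users.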

  20. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2017-01-01

    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control problem. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller designed from the remaining control inputs reduces the effect of this uncertainty while maintaining a notion of performance optimization in the adaptive control system.
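    The interaction between reference-model tracking and an uncertainty entering the plant can be illustrated with the simplest model-reference adaptive controller: a scalar plant with one unknown parameter and a Lyapunov-based adaptive law. This is a generic MRAC sketch, not the paper's multi-objective synthesis:

    ```python
    # Scalar MRAC sketch: plant xdot = a*x + u + theta*x with unknown theta,
    # reference model xmdot = am*xm + bm*r.  Control u = kx*x + kr*r - th_hat*x
    # with adaptive law th_hat_dot = gamma * x * e, where e = x - xm.
    a, am, bm = 1.0, -2.0, 2.0
    kx, kr = am - a, bm               # nominal model-matching gains
    theta_true, gamma = 2.0, 2.0      # unknown parameter, adaptation gain

    dt, T = 1e-3, 10.0
    x = xm = th_hat = 0.0
    r = 1.0                           # step reference command
    for _ in range(int(T / dt)):
        e = x - xm
        u = kx * x + kr * r - th_hat * x
        x += dt * (a * x + u + theta_true * x)    # plant with uncertainty
        xm += dt * (am * xm + bm * r)             # reference model
        th_hat += dt * gamma * x * e              # Lyapunov adaptive law
    ```

    The closed loop gives error dynamics e' = am*e + (theta - th_hat)*x, and the Lyapunov function V = e²/2 + (theta - th_hat)²/(2*gamma) yields V' = am*e² ≤ 0, so tracking error decays while the estimate absorbs the uncertainty. The paper's contribution layers a performance-optimization objective on top of this basic mechanism.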