Science.gov

Sample records for adaptive background model

  1. Adaptive background model

    NASA Astrophysics Data System (ADS)

    Lu, Xiaochun; Xiao, Yijun; Chai, Zhi; Wang, Bangping

    2007-11-01

    An adaptive background model aimed at outdoor vehicle detection is presented in this paper. The model is an improved version of PICA (pixel intensity classification algorithm): it classifies pixels into K distributions by color similarity, and then a hypothesis that the background pixel color appears in the image sequence with high frequency is used to evaluate all the distributions and determine which represents the current background color. As experiments show, the model presented in this paper is robust, adaptive, and flexible, and can handle situations such as camera motion and lighting changes.
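
    As a loose illustration of the kind of per-pixel classification the abstract describes, the sketch below (not the authors' implementation; the similarity threshold, maximum number of distributions, and running-mean update are assumptions) groups the colors observed at one pixel into a small set of clusters and takes the most frequently matched cluster as the current background color.

    ```python
    import numpy as np

    def update_pixel_model(clusters, counts, color, sim_thresh=20.0, max_k=4):
        """Assign one observed RGB color to the nearest cluster or open a new
        one, counting how often each cluster is hit; clusters/counts start as
        empty lists and are kept per pixel."""
        if clusters:
            d = np.linalg.norm(np.array(clusters) - color, axis=1)
            i = int(np.argmin(d))
            if d[i] < sim_thresh:
                counts[i] += 1
                # running mean keeps the cluster center adaptive to slow changes
                clusters[i] = clusters[i] + (color - clusters[i]) / counts[i]
                return clusters, counts
        if len(clusters) < max_k:
            clusters.append(color.astype(float))
            counts.append(1)
        return clusters, counts

    def background_color(clusters, counts):
        """Hypothesis from the abstract: the color that appears most frequently
        in the image sequence is taken as the background color."""
        return clusters[int(np.argmax(counts))]
    ```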

  2. Suppression of Background Odor Effect in Odor Sensing System Using Olfactory Adaptation Model

    NASA Astrophysics Data System (ADS)

    Ohba, Tsuneaki; Yamanaka, Takao

    In this study, a new method for suppressing the background odor effect is proposed. Since odor sensors respond to background odors in addition to a target odor, it is difficult to extract the target odor information. In conventional odor sensing systems, the effect of background odors is compensated by subtracting the response to the background odors (the baseline response). Although this simple subtraction is effective for constant background odors, it fails to compensate for time-varying background odors. The proposed method for background suppression remains effective even for time-varying background odors.

  3. In-Depth Functional Diagnostics of Mouse Models by Single-Flash and Flicker Electroretinograms without Adapting Background Illumination.

    PubMed

    Tanimoto, Naoyuki; Michalakis, Stylianos; Weber, Bernhard H F; Wahl-Schott, Christian A; Hammes, Hans-Peter; Seeliger, Mathias W

    2016-01-01

    Electroretinograms (ERGs) are commonly recorded at the cornea for an assessment of the functional status of the retina in mouse models. Full-field ERGs can be elicited by single-flash as well as flicker light stimulation, although in most laboratories flicker ERGs are recorded much less frequently than single-flash ERGs. Whereas conventional single-flash ERGs contain information about retinal layers, i.e., the outer and inner retina, flicker ERGs permit functional assessment of the vertical pathways of the retina, i.e., the rod system, cone ON-pathway, and cone OFF-pathway, when the responses are evoked at a relatively high luminance (0.5 log cd s/m²) with varying frequency (from 0.5 to 30 Hz) without any adapting background illumination. Therefore, both types of ERGs complement an in-depth functional characterization of the mouse retina, allowing for a discrimination of an underlying functional pathology. Here, we introduce the systematic interpretation of the single-flash and flicker ERGs by demonstrating several different patterns of functional phenotype in genetic mouse models, in which photoreceptors and/or bipolar cells are primarily or secondarily affected. PMID:26427467

  4. The GLAST Background Model

    SciTech Connect

    Ormes, J.F.; Atwood, W.; Burnett, T.; Grove, E.; Longo, F.; McEnery, J.; Mizuno, T.; Ritz, S.; /NASA, Goddard

    2007-10-17

    In order to estimate the ability of the GLAST/LAT to reject unwanted background of charged particles, optimize the on-board processing, size the required telemetry and optimize the GLAST orbit, we developed a detailed model of the background particles that would affect the LAT. In addition to the well-known components of the cosmic radiation, we included splash and reentrant components of protons, electrons (e+ and e-) from 10 MeV and beyond as well as the albedo gamma rays produced by cosmic ray interactions with the atmosphere. We made estimates of the irreducible background components produced by positrons and hadrons interacting in the multilayered micrometeorite shield and spacecraft surrounding the LAT and note that because the orbital debris has increased, the shielding required and hence the background are larger than were present in EGRET. Improvements to the model are currently being made to include the east-west effect.

  5. The GLAST Background Model

    SciTech Connect

    Ormes, J. F.; Atwood, W.; Burnett, T.; Grove, E.; Longo, F.; McEnery, J.; Ritz, S.; Mizuno, T.

    2007-07-12

    In order to estimate the ability of the GLAST/LAT to reject unwanted background of charged particles, optimize the on-board processing, size the required telemetry and optimize the GLAST orbit, we developed a detailed model of the background particles that would affect the LAT. In addition to the well-known components of the cosmic radiation, we included splash and reentrant components of protons, electrons (e+ and e-) from 10 MeV and beyond as well as the albedo gamma rays produced by cosmic ray interactions with the atmosphere. We made estimates of the irreducible background components produced by positrons and hadrons interacting in the multilayered micrometeorite shield and spacecraft surrounding the LAT and note that because the orbital debris has increased, the shielding required and hence the background are larger than were present in EGRET. Improvements to the model are currently being made to include the east-west effect.

  6. Sensorimotor adaptation is influenced by background music.

    PubMed

    Bock, Otmar

    2010-06-01

    It is well established that listening to music can modify subjects' cognitive performance. The present study evaluates whether this so-called Mozart Effect extends beyond cognitive tasks and includes sensorimotor adaptation. Three subject groups listened to musical pieces that in the author's judgment were serene, neutral, or sad, respectively. This judgment was confirmed by the subjects' introspective reports. While listening to music, subjects engaged in a pointing task that required them to adapt to rotated visual feedback. All three groups adapted successfully, but the speed and magnitude of adaptive improvement were more pronounced with serene music than with the other two music types. In contrast, aftereffects upon restoration of normal feedback were independent of music type. These findings support the existence of a "Mozart effect" for strategic movement control, but not for adaptive recalibration. Possibly, listening to music modifies neural activity in an intertwined cognitive-emotional network. PMID:20480363

  7. Real-Time Adaptive Foreground/Background Segmentation

    NASA Astrophysics Data System (ADS)

    Butler, Darren E.; Bove, V. Michael; Sridharan, Sridha

    2005-12-01

    The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing PAL video at full frame rate using only 35%-40% of a Pentium 4 computer.
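
    To make the cluster-group idea concrete, here is a minimal single-pixel sketch under assumed parameters (the match threshold, learning rate, and the fraction of total weight treated as background are hypothetical, not taken from the paper): the incoming pixel is matched against its cluster group, the matched cluster is adapted, and the pixel is labelled background only if that cluster carries enough accumulated weight.

    ```python
    import numpy as np

    def classify_and_adapt(centroids, weights, pixel, match_thresh=25.0,
                           lr=0.05, bg_weight=0.6):
        """One pixel, one frame: match, adapt, and classify.
        centroids: (K, 3) float array; weights: (K,) float array."""
        d = np.abs(centroids - pixel).sum(axis=1)        # L1 distance in RGB
        i = int(np.argmin(d))
        if d[i] < match_thresh:
            centroids[i] += lr * (pixel - centroids[i])  # track lighting drift
            weights *= (1.0 - lr)
            weights[i] += lr
        else:
            i = int(np.argmin(weights))                  # replace weakest cluster
            centroids[i], weights[i] = pixel.astype(float), lr
        weights /= weights.sum()
        # clusters sorted by weight: background clusters are those that jointly
        # account for most of the pixel's history
        order = np.argsort(weights)[::-1]
        cum = np.cumsum(weights[order])
        bg_set = set(order[:int(np.searchsorted(cum, bg_weight)) + 1])
        label = "background" if i in bg_set else "foreground"
        return label, centroids, weights
    ```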

  8. Psychological Adaptation of Adolescents with Immigrant Backgrounds.

    ERIC Educational Resources Information Center

    Sam, David Lackland

    2000-01-01

    Examines three theoretical perspectives (family values, acculturation strategies, and social group identity) as predictors of the psychological well-being of adolescents from immigrant backgrounds. Reveals that the perspectives accounted for between 12% and 22% of variance of mental health, life satisfaction, and self-esteem, while social group…

  9. An auto-adaptive background subtraction method for Raman spectra.

    PubMed

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-15

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual parameter tuning of the background subtraction method is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we proposed an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of background subtraction. Both simulated data and Raman spectra have been used to evaluate the proposed method. Compared with the backgrounds obtained from three widely applied methods (polynomial fitting, Baek's method, and airPLS), the auto-adaptive method meets the demands of practical applications in terms of efficiency and accuracy. PMID:26950502
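
    The three-step procedure lends itself to a compact sketch. The version below only approximates the published algorithm (the local-minimum window, the linear interpolation, and the fixed iteration count are assumptions; the paper's iteration scheme is adaptive):

    ```python
    import numpy as np
    from scipy.interpolate import interp1d
    from scipy.signal import argrelmin

    def estimate_background(x, y, n_iter=10):
        """Estimate a Raman background by interpolating through local minima
        and clipping the spectrum to the estimate, so that peak regions are
        progressively excluded from the background."""
        bg = y.astype(float).copy()
        for _ in range(n_iter):
            idx = argrelmin(bg, order=5)[0]                    # local minima
            idx = np.unique(np.concatenate(([0], idx, [len(x) - 1])))
            f = interp1d(x[idx], bg[idx], kind="linear",
                         fill_value="extrapolate")
            bg = np.minimum(bg, f(x))                          # clip peaks out
        return bg

    # corrected_spectrum = y - estimate_background(x, y)
    ```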

  10. An auto-adaptive background subtraction method for Raman spectra

    NASA Astrophysics Data System (ADS)

    Xie, Yi; Yang, Lidong; Sun, Xilong; Wu, Dewen; Chen, Qizhen; Zeng, Yongming; Liu, Guokun

    2016-05-01

    Background subtraction is a crucial step in the preprocessing of Raman spectra. Usually, manual parameter tuning of the background subtraction method is necessary for efficient removal of the background, which makes the quality of the spectrum empirically dependent. In order to avoid artificial bias, we proposed an auto-adaptive background subtraction method without parameter adjustment. The main procedure is: (1) select the local minima of the spectrum while preserving major peaks, (2) apply an interpolation scheme to estimate the background, and (3) design an iteration scheme to improve the adaptability of background subtraction. Both simulated data and Raman spectra have been used to evaluate the proposed method. Compared with the backgrounds obtained from three widely applied methods (polynomial fitting, Baek's method, and airPLS), the auto-adaptive method meets the demands of practical applications in terms of efficiency and accuracy.

  11. Improved visual background extractor using an adaptive distance threshold

    NASA Astrophysics Data System (ADS)

    Han, Guang; Wang, Jinkuan; Cai, Xi

    2014-11-01

    Camouflage is a challenging issue in moving object detection. Even the recent and advanced background subtraction technique, visual background extractor (ViBe), cannot effectively deal with it. To better handle camouflage according to the perception characteristics of the human visual system (HVS) in terms of minimum change of intensity under a certain background illumination, we propose an improved ViBe method using an adaptive distance threshold, named IViBe for short. Different from the original ViBe using a fixed distance threshold for background matching, our approach adaptively sets a distance threshold for each background sample based on its intensity. Through analyzing the performance of the HVS in discriminating intensity changes, we determine a reasonable ratio between the intensity of a background sample and its corresponding distance threshold. We also analyze the impacts of our adaptive threshold together with an update mechanism on detection results. Experimental results demonstrate that our method outperforms ViBe even when the foreground and background share similar intensities. Furthermore, in a scenario where foreground objects are motionless for several frames, our IViBe not only reduces the initial false negatives, but also suppresses the diffusion of misclassification caused by those false negatives serving as erroneous background seeds, and hence shows an improved performance compared to ViBe.
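
    A minimal sketch of the matching step, assuming a Weber-like ratio and a floor for dark samples (both values are hypothetical, not taken from the paper): each background sample gets its own distance threshold proportional to its intensity, instead of ViBe's single fixed radius.

    ```python
    import numpy as np

    def ivibe_match(samples, pixel, ratio=0.08, r_min=10.0, min_matches=2):
        """samples: intensities stored in the pixel's background model.
        A pixel is background if enough samples lie within their own
        intensity-dependent radius of the observed value."""
        radii = np.maximum(ratio * samples, r_min)      # adaptive thresholds
        matches = np.abs(samples - pixel) < radii
        return "background" if matches.sum() >= min_matches else "foreground"
    ```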

  12. Background stratospheric aerosol reference model

    NASA Technical Reports Server (NTRS)

    Mccormick, M. P.; Wang, P.

    1989-01-01

    In this analysis, a reference background stratospheric aerosol optical model is developed based on the nearly global SAGE I satellite observations in the non-volcanic period from March 1979 to February 1980. Zonally averaged profiles of the 1.0 micron aerosol extinction for the tropics and the mid- and high-latitudes of both hemispheres are obtained and presented in graphical and tabulated form for the different seasons. In addition, analytic expressions for these seasonal global zonal means, as well as the yearly global mean, are determined according to a third-order polynomial fit to the vertical profile data set. This proposed background stratospheric aerosol model can be useful in modeling studies of stratospheric aerosols and for simulations of atmospheric radiative transfer and radiance calculations in atmospheric remote sensing.
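
    The analytic form amounts to a third-order polynomial fit to each zonal-mean extinction profile. In the snippet below the numbers are placeholders, not SAGE I data, and fitting in log extinction is an assumption; it only illustrates the fitting step.

    ```python
    import numpy as np

    # placeholder profile: 1.0-micron aerosol extinction (km^-1) vs altitude (km)
    z = np.array([15.0, 17.5, 20.0, 22.5, 25.0, 27.5, 30.0])
    log_ext = np.log10([2e-4, 1.5e-4, 9e-5, 5e-5, 2.5e-5, 1.2e-5, 6e-6])

    coeffs = np.polyfit(z, log_ext, deg=3)   # third-order polynomial fit
    model = np.poly1d(coeffs)
    print(10 ** model(21.0))                 # analytic extinction estimate at 21 km
    ```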

  13. Background stratospheric aerosol reference model

    NASA Astrophysics Data System (ADS)

    McCormick, M. P.; Wang, Pi-Huan

    Nearly global SAGE I satellite observations in the nonvolcanic period from March 1979 to February 1980 are used to produce a reference background stratospheric aerosol optical model. Zonally averaged profiles of the 1.0-micron aerosol extinction for the tropics, midlatitudes, and high latitudes for both hemispheres are given in graphical and tabulated form for the different seasons. A third-order polynomial fit to the vertical profile data set is used to derive analytic expressions for the seasonal global means and the yearly global mean. The results have application to the simulation of atmospheric radiative transfer and radiance calculations in atmospheric remote sensing.

  14. Human vs model observers in anatomic backgrounds

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Abbey, Craig K.; Whiting, James S.

    1998-04-01

    Model observers have been compared to human performance in detecting low-contrast signals in a variety of computer-generated backgrounds, including white noise, correlated noise, lumpy backgrounds, and two-component noise. The purpose of the present paper is to extend this work by comparing a number of previously proposed model observers to human visual detection performance in real anatomic backgrounds. Human and model observer performance are compared as a function of increasing added white noise. Our results show that three of the four models are good predictors of human performance.

  15. Adaptive and Background-Aware GAL4 Expression Enhancement of Co-registered Confocal Microscopy Images.

    PubMed

    Trapp, Martin; Schulze, Florian; Novikov, Alexey A; Tirian, Laszlo; J Dickson, Barry; Bühler, Katja

    2016-04-01

    GAL4 gene expression imaging using confocal microscopy is a common and powerful technique used to study the nervous system of a model organism such as Drosophila melanogaster. Recent research projects focused on high throughput screenings of thousands of different driver lines, resulting in large image databases. The amount of data generated makes manual assessment tedious or even impossible. The first and most important step in any automatic image processing and data extraction pipeline is to enhance areas with relevant signal. However, data acquired via high throughput imaging tends to be less than ideal for this task, often showing high amounts of background signal. Furthermore, neuronal structures and in particular thin and elongated projections with a weak staining signal are easily lost. In this paper we present a method for enhancing the relevant signal by utilizing a Hessian-based filter to augment thin and weak tube-like structures in the image. To get optimal results, we present a novel adaptive background-aware enhancement filter parametrized with the local background intensity, which is estimated based on a common background model. We also integrate recent research on adaptive image enhancement into our approach, allowing us to propose an effective solution for known problems present in confocal microscopy images. We provide an evaluation based on annotated image data and compare our results against current state-of-the-art algorithms. The results show that our algorithm clearly outperforms the existing solutions. PMID:26743993
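
    As an illustration of the Hessian-based enhancement step, here is a generic Frangi-style vesselness sketch (not the paper's background-aware filter; sigma and the response constants are assumptions): the eigenvalues of the smoothed Hessian separate tube-like structures, which have one small and one large negative eigenvalue for bright tubes, from blob-like or flat regions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def tube_enhance(img, sigma=2.0, beta=0.5, c=15.0):
        """Frangi-like vesselness for a 2D slice of a confocal stack."""
        Hxx = gaussian_filter(img, sigma, order=(0, 2))
        Hyy = gaussian_filter(img, sigma, order=(2, 0))
        Hxy = gaussian_filter(img, sigma, order=(1, 1))
        tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
        l1 = 0.5 * (Hxx + Hyy + tmp)
        l2 = 0.5 * (Hxx + Hyy - tmp)
        swap = np.abs(l1) > np.abs(l2)                  # enforce |l1| <= |l2|
        l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
        rb = np.abs(l1) / (np.abs(l2) + 1e-12)          # deviation from a tube
        s = np.sqrt(l1 ** 2 + l2 ** 2)                  # structure strength
        v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
        return np.where(l2 < 0, v, 0.0)                 # bright tubes on dark background
    ```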

  16. Background modeling for the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  17. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  18. Cosmic microwave background probes models of inflation

    NASA Technical Reports Server (NTRS)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large-angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large-angular scales, then the value of ΔT/T predicted at 1° is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  19. Adaptive Models for Gene Networks

    PubMed Central

    Shin, Yong-Jun; Sayed, Ali H.; Shen, Xiling

    2012-01-01

    Biological systems are often treated as time-invariant by computational models that use fixed parameter values. In this study, we demonstrate that the behavior of the p53-MDM2 gene network in individual cells can be tracked using adaptive filtering algorithms and the resulting time-variant models can approximate experimental measurements more accurately than time-invariant models. Adaptive models with time-variant parameters can help reduce modeling complexity and can more realistically represent biological systems. PMID:22359614
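
    A small sketch of the general idea, not the authors' p53-MDM2 model: a least-mean-squares adaptive filter re-estimates time-variant model parameters at every measurement, so the model tracks a system whose behavior drifts over time. The regressor length and step size are arbitrary choices.

    ```python
    import numpy as np

    def lms_track(u, d, n_taps=2, mu=0.05):
        """Track time-variant parameters: u is the input sequence, d the
        measured output; returns the parameter history and predictions."""
        w = np.zeros(n_taps)
        w_hist, preds = [], []
        for k in range(n_taps, len(d)):
            x = u[k - n_taps:k][::-1]   # most recent inputs first
            y = w @ x                   # model prediction
            e = d[k] - y                # prediction error
            w = w + mu * e * x          # adapt parameters toward the data
            w_hist.append(w.copy())
            preds.append(y)
        return np.array(w_hist), np.array(preds)
    ```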

  20. Improving the EOTDA ocean background model

    NASA Astrophysics Data System (ADS)

    McGrath, Charles P.; Badzik, Gregory D.

    1997-09-01

    The Electro-Optical Tactical Decision Aid (EOTDA) is a strike warfare mission planning tool originally developed by the US Air Force. The US Navy has added navy sensors and targets to the EOTDA and installed it into current fleet mission planning and support systems. Fleet experience with the EOTDA and previous studies have noted the need for improvement, especially for scenarios involving ocean backgrounds. In order to test and improve the water background model in the EOTDA, a modified version has been created that replaces the existing semi-empirical model with the SeaRad model that was developed by Naval Command, Control and Ocean Surveillance Systems (NRaD). The SeaRad model is a more rigorous solution based on the Cox-Munk wave-slope probabilities. During the April 1996 Electrooptical Propagation Assessment in Coastal Environments (EOPACE) trials, data was collected to evaluate the effects of the SeaRad version of the EOTDA. Data was collected using a calibrated airborne infrared imaging system and operational FLIR systems against ship targets. A modified version of MODTRAN also containing the SeaRad model is used to correct the data for the influences of the atmosphere. This report uses these data along with the modified EOTDA to evaluate the effects of the SeaRad model on ocean background predictions under clear and clouded skies. Upon using the more accurate water reflection model, the significance of the sky and cloud radiance contributions becomes more apparent, leading to recommendations for further improvements.

  1. Model-based target and background characterization

    NASA Astrophysics Data System (ADS)

    Mueller, Markus; Krueger, Wolfgang; Heinze, Norbert

    2000-07-01

    Up to now most approaches of target and background characterization (and exploitation) concentrate solely on the information given by pixels. In many cases this is a complex and unprofitable task. During the development of automatic exploitation algorithms the main goal is the optimization of certain performance parameters. These parameters are measured during test runs while applying one algorithm with one parameter set to images that consist of image domains with very different domain characteristics (targets and various types of background clutter). Model based geocoding and registration approaches provide means for utilizing the information stored in GIS (Geographical Information Systems). The geographical information stored in the various GIS layers can define ROE (Regions of Expectations) and may allow for dedicated algorithm parametrization and development. ROI (Region of Interest) detection algorithms (in most cases MMO (Man-Made Object) detection) use implicit target and/or background models. The detection algorithms of ROIs utilize gradient direction models that have to be matched with transformed image domain data. In most cases simple threshold calculations on the match results discriminate target object signatures from the background. The geocoding approaches extract line-like structures (street signatures) from the image domain and match the graph constellation against a vector model extracted from a GIS (Geographical Information System) database. Apart from geo-coding the algorithms can be also used for image-to-image registration (multi sensor and data fusion) and may be used for creation and validation of geographical maps.

  2. TIMSS 2011 User Guide for the International Database. Supplement 2: National Adaptations of International Background Questionnaires

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Arora, Alka, Ed.; Stanco, Gabrielle M., Ed.

    2013-01-01

    This supplement describes national adaptations made to the international version of the TIMSS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the TIMSS 2011 background variables. Background questionnaire adaptations…

  3. Influence of background size, luminance and eccentricity on different adaptation mechanisms.

    PubMed

    Gloriani, Alejandro H; Matesanz, Beatriz M; Barrionuevo, Pablo A; Arranz, Isabel; Issolio, Luis; Mar, Santiago; Aparicio, Juan A

    2016-08-01

    Mechanisms of light adaptation have been traditionally explained with reference to psychophysical experimentation. However, the neural substrata involved in those mechanisms remain to be elucidated. Our study analyzed links between psychophysical measurements and retinal physiological evidence with consideration for the phenomena of rod-cone interactions, photon noise, and spatial summation. Threshold test luminances were obtained with steady background fields at mesopic and photopic light levels (i.e., 0.06-110 cd/m²) for retinal eccentricities from 0° to 15° using three combinations of background/test field sizes (i.e., 10°/2°, 10°/0.45°, and 1°/0.45°). A two-channel Maxwellian view optical system was employed to eliminate pupil effects on the measured thresholds. A model based on visual mechanisms that were described in the literature was optimized to fit the measured luminance thresholds in all experimental conditions. Our results can be described by a combination of visual mechanisms. We determined how spatial summation changed with eccentricity and how subtractive adaptation changed with eccentricity and background field size. According to our model, photon noise plays a significant role to explain contrast detection thresholds measured with the 1°/0.45° background/test size combination at mesopic luminances and at off-axis eccentricities. In these conditions, our data reflect the presence of rod-cone interaction for eccentricities between 6° and 9° and luminances between 0.6 and 5 cd/m². In spite of the increasing noise effects with eccentricity, results also show that the visual system tends to maintain a constant signal-to-noise ratio in the off-axis detection task over the whole mesopic range. PMID:27210038

  4. Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2012-01-01

    Background noise due to flow in wind tunnels contaminates desired data by decreasing the Signal-to-Noise Ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The technique proposed modifies the classical processing configuration based on the cross-correlation between the reference and primary microphone. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
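
    A simplified sketch of the configuration described above (not the authors' processing chain: the correlation window is not restricted to the background-only portion here, and the circular shift ignores edge effects): the lag of the cross-correlation peak between the reference and primary channels sets a matched delay, after which a standard LMS canceller estimates and subtracts the background noise.

    ```python
    import numpy as np

    def anc_with_matched_delay(primary, reference, n_taps=64, mu=1e-3):
        """Adaptive noise cancellation with a cross-correlation-matched delay."""
        xc = np.correlate(primary, reference, mode="full")
        lag = int(np.argmax(np.abs(xc))) - (len(reference) - 1)
        ref = np.roll(reference, lag)          # align background noise (wraps at edges)
        w = np.zeros(n_taps)
        out = np.zeros(len(primary))
        for k in range(n_taps, len(primary)):
            x = ref[k - n_taps:k][::-1]
            y = w @ x                          # estimated background noise
            e = primary[k] - y                 # residual = desired-signal estimate
            w = w + mu * e * x
            out[k] = e
        return out
    ```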

  5. Observer models for statistically-defined backgrounds

    NASA Astrophysics Data System (ADS)

    Burgess, Arthur E.

    1994-04-01

    Investigation of human signal-detection performance for noise-limited tasks with statistically defined signal or image parameters represents a step towards clinical realism. However, the ideal observer procedure is then usually nonlinear, and analysis becomes mathematically intractable. Two linear but suboptimal observer models, the Hotelling observer and the non-prewhitening (NPW) matched filter, have been proposed for mathematical convenience. Experiments by Rolland and Barrett involving detection of signals in white noise superimposed on statistically defined backgrounds showed that the Hotelling model gave a good fit while the simple NPW matched filter gave a poor fit. It will be shown that the NPW model can be modified to fit their data by adding a spatial frequency filter of shape similar to the human contrast sensitivity function. The best fit is obtained using an eye filter model, E(f) = f^{1.3} exp(-cf^{2}), with c selected to give a peak at 4 cycles per degree.
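
    One common way to write the resulting NPW-with-eye-filter (NPWE) observer is sketched below; the normalization constants and the discrete summation are simplifications, and c = 0.04 is chosen only so that the filter peaks near 4 cycles per degree as stated in the abstract.

    ```python
    import numpy as np

    def npwe_dprime(signal, noise_psd, pix_per_deg, c=0.04):
        """Detectability index of a non-prewhitening observer with an eye
        filter E(f) = f**1.3 * exp(-c*f**2), up to normalization constants.
        signal: square 2D expected signal profile; noise_psd: 2D noise power spectrum."""
        n = signal.shape[0]
        f = np.fft.fftfreq(n, d=1.0 / pix_per_deg)   # cycles per degree
        fx, fy = np.meshgrid(f, f)
        fr = np.hypot(fx, fy)
        eye = fr ** 1.3 * np.exp(-c * fr ** 2)       # peaks near 4 cyc/deg for c = 0.04
        S2 = np.abs(np.fft.fft2(signal)) ** 2
        num = np.sum(S2 * eye ** 2) ** 2
        den = np.sum(S2 * eye ** 4 * noise_psd) + 1e-20
        return np.sqrt(num / den)
    ```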

  6. ADAPT model: Model use, calibration and validation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper presents an overview of the Agricultural Drainage and Pesticide Transport (ADAPT) model and a case study to illustrate the calibration and validation steps for predicting subsurface tile drainage and nitrate-N losses from an agricultural system. The ADAPT model is a daily time step field ...

  7. PIRLS 2011 User Guide for the International Database. Supplement 2: National Adaptations of International Background Questionnaires

    ERIC Educational Resources Information Center

    Foy, Pierre, Ed.; Drucker, Kathleen T., Ed.

    2013-01-01

    This supplement describes national adaptations made to the international version of the PIRLS/prePIRLS 2011 background questionnaires. This information provides users with a guide to evaluate the availability of internationally comparable data for use in secondary analyses involving the PIRLS/prePIRLS 2011 background variables. Background…

  8. Hybrid Adaptive Flight Control with Model Inversion Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan

    2011-01-01

    This study investigates a hybrid adaptive flight control method as a design possibility for a flight control system that can enable an effective adaptation strategy to deal with off-nominal flight conditions. The hybrid adaptive control blends both direct and indirect adaptive control in a model inversion flight control architecture. The blending of both direct and indirect adaptive control provides a much more flexible and effective adaptive flight control architecture than that with either direct or indirect adaptive control alone. The indirect adaptive control is used to update the model inversion controller by an on-line parameter estimation of uncertain plant dynamics based on two methods. The first parameter estimation method is an indirect adaptive law based on the Lyapunov theory, and the second method is a recursive least-squares indirect adaptive law. The model inversion controller is therefore made to adapt to changes in the plant dynamics due to uncertainty. As a result, the modeling error is reduced, which directly leads to a decrease in the tracking error. In conjunction with the indirect adaptive control that updates the model inversion controller, a direct adaptive control is implemented as an augmented command to further reduce any residual tracking error that is not entirely eliminated by the indirect adaptive control.
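
    As a sketch of the second (recursive least-squares) indirect adaptive law, the snippet below shows a generic RLS parameter estimator of the kind used to update a model-inversion controller online; the regressor definition, forgetting factor, and initial covariance are assumptions, not values from the study.

    ```python
    import numpy as np

    class RLSEstimator:
        """Recursive least-squares estimate of uncertain plant parameters."""
        def __init__(self, n_params, forgetting=0.98):
            self.theta = np.zeros(n_params)       # parameter estimate
            self.P = 1e3 * np.eye(n_params)       # estimate covariance
            self.lam = forgetting

        def update(self, phi, y):
            """phi: regressor built from measured states/inputs; y: measured output."""
            Pphi = self.P @ phi
            gain = Pphi / (self.lam + phi @ Pphi)
            err = y - phi @ self.theta            # prediction error
            self.theta = self.theta + gain * err
            self.P = (self.P - np.outer(gain, Pphi)) / self.lam
            return self.theta                     # fed back into the model inversion
    ```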

  9. Model of aircraft noise adaptation

    NASA Technical Reports Server (NTRS)

    Dempsey, T. K.; Coates, G. D.; Cawthorn, J. M.

    1977-01-01

    Development of an aircraft noise adaptation model, which would account for much of the variability in the responses of subjects participating in human response to noise experiments, was studied. A description of the model development is presented. The principal concept of the model was the determination of an aircraft adaptation level which represents an annoyance calibration for each individual. Results showed a direct correlation between noise level of the stimuli and annoyance reactions. Attitude-personality variables were found to account for varying annoyance judgements.

  10. Adaptive detection method of infrared small target based on target-background separation via robust principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Chuanyun; Qin, Shiyin

    2015-03-01

    Motivated by robust principal component analysis, an infrared small target image is regarded as a low-rank background matrix corrupted by sparse target and noise matrices; thus, a new target-background separation model is designed and, subsequently, an adaptive detection method for infrared small targets is presented. Firstly, multi-scale transform and patch transform are used to generate an image patch set for infrared small target detection; secondly, target-background separation of each patch is achieved by recovering the low-rank and sparse matrices using an adaptive weighting parameter; thirdly, image reconstruction and fusion are carried out to obtain the entire separated background and target images; finally, infrared small target detection is realized by threshold segmentation of the template matching similarity measurement. In order to validate the performance of the proposed method, three experiments, target-background separation, background clutter suppression, and infrared small target detection, are performed over different clutter backgrounds with real infrared small targets in single-frame or sequence images. A series of experimental results demonstrates that the proposed method can not only suppress background clutter effectively, even under strong noise interference, but also detect targets accurately with a low false alarm rate.
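
    The core low-rank plus sparse split can be illustrated with a standard principal-component-pursuit iteration; the sketch below omits the paper's multi-scale patch transform, adaptive weighting, and template-matching stages, and the default lambda and mu values follow common RPCA conventions rather than the paper.

    ```python
    import numpy as np

    def shrink(M, tau):
        return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

    def svd_threshold(M, tau):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def rpca(D, lam=None, mu=None, n_iter=200, tol=1e-7):
        """Split data matrix D into a low-rank background L and a sparse
        component S (targets and noise) via a simple ADMM iteration."""
        D = np.asarray(D, dtype=float)
        m, n = D.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else 0.25 * m * n / np.abs(D).sum()
        L, S, Y = np.zeros_like(D), np.zeros_like(D), np.zeros_like(D)
        for _ in range(n_iter):
            L = svd_threshold(D - S + Y / mu, 1.0 / mu)   # low-rank update
            S = shrink(D - L + Y / mu, lam / mu)          # sparse update
            R = D - L - S
            Y = Y + mu * R                                # dual update
            if np.linalg.norm(R) <= tol * np.linalg.norm(D):
                break
        return L, S
    ```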

  11. Observations and Modeling of Seismic Background Noise

    USGS Publications Warehouse

    Peterson, Jon R.

    1993-01-01

    INTRODUCTION The preparation of this report had two purposes. One was to present a catalog of seismic background noise spectra obtained from a worldwide network of seismograph stations. The other purpose was to refine and document models of seismic background noise that have been in use for several years. The second objective was, in fact, the principal reason that this study was initiated and influenced the procedures used in collecting and processing the data. With a single exception, all of the data used in this study were extracted from the digital data archive at the U.S. Geological Survey's Albuquerque Seismological Laboratory (ASL). This archive dates from 1972 when ASL first began deploying digital seismograph systems and collecting and distributing digital data under the sponsorship of the Defense Advanced Research Projects Agency (DARPA). There have been many changes and additions to the global seismograph networks during the past twenty years, but perhaps none as significant as the current deployment of very broadband seismographs by the U.S. Geological Survey (USGS) and the University of California San Diego (UCSD) under the scientific direction of the IRIS consortium. The new data acquisition systems have extended the bandwidth and resolution of seismic recording, and they utilize high-density recording media that permit the continuous recording of broadband data. The data improvements and continuous recording greatly benefit and simplify surveys of seismic background noise. Although there are many other sources of digital data, the ASL archive data were used almost exclusively because of accessibility and because the data systems and their calibration are well documented for the most part. Fortunately, the ASL archive contains high-quality data from other stations in addition to those deployed by the USGS. Included are data from UCSD IRIS/IDA stations, the Regional Seismic Test Network (RSTN) deployed by Sandia National Laboratories (SNL), and the

  12. Method For Model-Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1990-01-01

    Relatively simple method of model-reference adaptive control (MRAC) developed from two prior classes of MRAC techniques: signal-synthesis method and parameter-adaptation method. Incorporated into unified theory, which yields more general adaptation scheme.

  13. An efficient background modeling approach based on vehicle detection

    NASA Astrophysics Data System (ADS)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The existing Gaussian Mixture Model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C, and D), where the areas are determined by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean, and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted to reduce the total number of distributions and save memory space effectively. With this method, different threshold values and different numbers of Gaussian distributions are adopted for different areas. The results show that both the learning speed and the accuracy of the model using the proposed algorithm surpass those of the traditional GMM. By approximately the 50th frame, interference from vehicles has largely been eliminated, the number of distributions is only 35% to 43% of that of the standard GMM, and the per-frame processing speed is approximately 20% higher than the standard. The proposed algorithm performs well in terms of shadow elimination and processing speed for vehicle detection, which can promote the development of intelligent transportation and is meaningful for other background modeling methods.
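
    A single-pixel sketch of the per-region update is given below; the learning rates, match test, and simplified ownership term are assumptions and only illustrate how the region label changes the adaptation speed.

    ```python
    import numpy as np

    # hypothetical learning rates for areas A-D, set by how often vehicles pass
    REGION_RATES = {"A": 0.05, "B": 0.02, "C": 0.01, "D": 0.005}

    def gmm_update(mu, var, w, pixel, region):
        """Update one pixel's Gaussian mixture (arrays mu, var, w of length K)
        with a learning rate chosen by the image region the pixel belongs to."""
        alpha = REGION_RATES[region]
        d2 = (pixel - mu) ** 2 / var
        matched = d2 < 2.5 ** 2                      # within 2.5 sigma of a mode
        w = (1 - alpha) * w + alpha * matched
        if matched.any():
            k = int(np.argmin(np.where(matched, d2, np.inf)))
            mu[k] = (1 - alpha) * mu[k] + alpha * pixel       # simplified ownership term
            var[k] = (1 - alpha) * var[k] + alpha * (pixel - mu[k]) ** 2
        else:
            k = int(np.argmin(w))                    # replace the weakest mode
            mu[k], var[k], w[k] = pixel, 15.0 ** 2, alpha
        w /= w.sum()
        return mu, var, w
    ```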

  14. An Adapted Dialogic Reading Program for Turkish Kindergarteners from Low Socio-Economic Backgrounds

    ERIC Educational Resources Information Center

    Ergül, Cevriye; Akoglu, Gözde; Sarica, Ayse D.; Karaman, Gökçe; Tufan, Mümin; Bahap-Kudret, Zeynep; Zülfikar, Deniz

    2016-01-01

    The study aimed to examine the effectiveness of the Adapted Dialogic Reading Program (ADR) on the language and early literacy skills of Turkish kindergarteners from low socio-economic (SES) backgrounds. The effectiveness of ADR was investigated across six different treatment conditions including classroom and home based implementations in various…

  15. Background Model for the Majorana Demonstrator

    NASA Astrophysics Data System (ADS)

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y.-D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V. E.; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, A. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. J.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R. L.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B. R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C.-H.; Yumatov, V.

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  16. Background model for the Majorana Demonstrator

    SciTech Connect

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, III, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; Caldwell, A. S.; Chan, Y -D.; Christofferson, C. D.; Combs, D. C.; Detwiler, J. A.; Doe, P. J.; Efremenko, Yu.; Egorov, V.; Ejiri, H.; Elliott, S. R.; Fast, J. E.; Finnerty, P.; Fraenkle, F. M.; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, V.; Gusev, K.; Hallin, A.; Hazama, R.; Hegai, A.; Henning, R.; Hoppe, E. W.; Howard, S.; Howe, M. A.; Keeter, K. J.; Kidd, M. F.; Kochetov, O.; Konovalov, S. I.; Kouzes, R. T.; LaFerriere, B. D.; Leon, J.; Leviner, L. E.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S.; Mertens, S.; Nomachi, M.; Orrell, J. L.; O'Shaughnessy, C.; Overman, N. R.; Phillips, D. G.; Poon, W. W. P.; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, K.; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, A. G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, K. K.; Snyder, N.; Suriano, A. M.; Thompson, J.; Timkin, V.; Tornow, W.; Trimble, J. E.; Varner, R.; Vasilyev, S.; Vetter, K.; Vorren, K.; White, B.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A. R.; Yu, C. -H.; Yumatov, V.

    2015-01-01

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  17. Background model for the Majorana Demonstrator

    DOE PAGES

    Cuesta, C.; Abgrall, N.; Aguayo, E.; Avignone, III, F. T.; Barabash, A. S.; Bertrand, F. E.; Boswell, M.; Brudanin, V.; Busch, M.; Byram, D.; et al

    2015-01-01

    The Majorana Collaboration is constructing a system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment capable of probing the neutrino mass scale in the inverted-hierarchy region. To realize this, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using simulations of the different background components whose purity levels are constrained from radioassay measurements.

  18. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds

    PubMed Central

    Hillis, James M.; Brainard, David H.

    2007-01-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism. PMID:16277280

  19. Do common mechanisms of adaptation mediate color discrimination and appearance? Uniform backgrounds

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2005-10-01

    Color vision is useful for detecting surface boundaries and identifying objects. Are the signals used to perform these two functions processed by common mechanisms, or has the visual system optimized its processing separately for each task? We measured the effect of mean chromaticity and luminance on color discriminability and on color appearance under well-matched stimulus conditions. In the discrimination experiments, a pedestal spot was presented in one interval and a pedestal + test in a second. Observers indicated which interval contained the test. In the appearance experiments, observers matched the appearance of test spots across a change in background. We analyzed the data using a variant of Fechner's proposal, that the rate of apparent stimulus change is proportional to visual sensitivity. We found that saturating visual response functions together with a model of adaptation that included multiplicative gain control and a subtractive term accounted for data from both tasks. This result suggests that effects of the contexts we studied on color appearance and discriminability are controlled by the same underlying mechanism.

  20. Background Model for the Majorana Demonstrator

    SciTech Connect

    Cuesta, C.; Abgrall, N.; Aguayo, Estanislao; Avignone, Frank T.; Barabash, Alexander S.; Bertrand, F.; Boswell, M.; Brudanin, V.; Busch, Matthew; Byram, D.; Caldwell, A. S.; Chan, Yuen-Dat; Christofferson, Cabot-Ann; Combs, Dustin C.; Detwiler, Jason A.; Doe, Peter J.; Efremenko, Yuri; Egorov, Viatcheslav; Ejiri, H.; Elliott, S. R.; Fast, James E.; Finnerty, P.; Fraenkle, Florian; Galindo-Uribarri, A.; Giovanetti, G. K.; Goett, J.; Green, M. P.; Gruszko, J.; Guiseppe, Vincente; Gusev, K.; Hallin, A. L.; Hazama, R.; Hegai, A.; Henning, Reyco; Hoppe, Eric W.; Howard, Stanley; Howe, M. A.; Keeter, K.; Kidd, M. F.; Kochetov, Oleg; Konovalov, S.; Kouzes, Richard T.; Laferriere, Brian D.; Leon, Jonathan D.; Leviner, L.; Loach, J. C.; MacMullin, J.; MacMullin, S.; Martin, R. D.; Meijer, S. J.; Mertens, S.; Nomachi, Masaharu; Orrell, John L.; O'Shaughnessy, C.; Overman, Nicole R.; Phillips, D.; Poon, Alan; Pushkin, K.; Radford, D. C.; Rager, J.; Rielage, Keith; Robertson, R. G. H.; Romero-Romero, E.; Ronquest, M. C.; Schubert, Alexis G.; Shanks, B.; Shima, T.; Shirchenko, M.; Snavely, Kyle J.; Snyder, N.; Suriano, Anne-Marie; Thompson, J.; Timkin, V.; Tornow, Werner; Trimble, J. E.; Varner, R. L.; Vasilyev, Sergey; Vetter, Kai; Vorren, Kris R.; White, Brandon R.; Wilkerson, J. F.; Wiseman, C.; Xu, W.; Yakushev, E.; Young, A.; Yu, Chang-Hong; Yumatov, Vladimir

    2015-06-01

    The Majorana Collaboration is constructing a prototype system containing 40 kg of HPGe detectors to demonstrate the feasibility and potential of a future tonne-scale experiment to search for neutrinoless double-beta (0νββ) decay in 76Ge. In view of the requirement that the next generation of tonne-scale Ge-based 0νββ-decay experiment be capable of probing the neutrino mass scale in the inverted-hierarchy region, a major goal of the Majorana Demonstrator is to demonstrate a path forward to achieving a background rate at or below 1 cnt/(ROI-t-y) in the 4 keV region of interest around the Q-value at 2039 keV. This goal is pursued through a combination of a significant reduction of radioactive impurities in construction materials with analytical methods for background rejection, for example using powerful pulse shape analysis techniques profiting from the p-type point contact HPGe detectors technology. The effectiveness of these methods is assessed using Geant4 simulations of the different background components whose purity levels are constrained from radioassay measurements.

  1. Model for adaptive multimedia services

    NASA Astrophysics Data System (ADS)

    Forstadius, Jari; Ala-Kurikka, Jussi; Koivisto, Antti T.; Sauvola, Jaakko J.

    2001-11-01

    Development towards high-bandwidth wireless devices that are capable of processing complex, streaming multimedia is enabling a new breed of network-based media services. Coping with the diversity of network and device capabilities requires services to be flexible and able to adapt to the needs and limitations of the environment at hand. Before efficient deployment, multi-platform services require additional issues to be considered, e.g. content handling, digital rights management, adaptability of content, user profiling, provisioning, and the available access methods. The key issue is how the content and the service is being modelled and stored for inauguration. We propose a new service content model based on persistent media objects able to store and manage XHTML-based multimedia services. In our approach, media, content summaries, and other meta-information are stored within media objects that can be queried from the object database. The content of the media objects can also specify queries to the database and links to other media objects. The final presentation is created dynamically according to the service request and user profiles. Our approach allows for dynamic updating of the service database together with user group management, and provides a method for notifying the registered users by different smart messaging methods, e.g. via e-mail or a SMS message. The model is demonstrated with an 'ice-hockey service' running in our platform called Princess. The service also utilizes SMIL and key frame techniques for the video representation.

  2. Chromo-natural model in anisotropic background

    SciTech Connect

    Maleknejad, Azadeh; Erfani, Encieh E-mail: eerfani@ipm.ir

    2014-03-01

    In this work we study the chromo-natural inflation model in the anisotropic setup. Initiating inflation from Bianchi type-I cosmology, we analyze the system thoroughly during the slow-roll inflation, from both analytical and numerical points of view. We show that the isotropic FRW inflation is an attractor of the system. In other words, anisotropies are damped within a few e-folds and the chromo-natural model respects the cosmic no-hair conjecture. Furthermore, we demonstrate that in the slow-roll limit, the anisotropies in both chromo-natural and gauge-flation models share the same dynamics.

  3. Coherent structures and modeling: Some background comments

    NASA Technical Reports Server (NTRS)

    Kline, S. J.

    1987-01-01

    Coherent structures are discussed as sequences of events (identifiable motions) in the flow that convert significant amounts of mechanical energy from the mean flow into turbulent fluctuations. The use of structure information in modeling is also discussed.

  4. Modulation of prism adaptation by a shift of background in the monkey.

    PubMed

    Inoue, Masato; Harada, Hiroyuki; Fujisawa, Masahiro; Uchimura, Motoaki; Kitazawa, Shigeru

    2016-01-15

    Recent human behavioral studies have shown that the position of a visual target is instantly represented relative to the background (e.g., a large square) and used for evaluating the error in reaching the target. In the present study, we examined whether the same allocentric mechanism is shared by the monkey. We trained two monkeys to perform a fast and accurate reaching movement toward a visual target with a square in the background. Then, a visual shift (20mm or 4.1°) was introduced by wedge prisms to examine the process of decreasing the error during an exposure period (30 trials) and the size of the error upon removal of the prisms (aftereffect). The square was shifted during each movement, either in the direction of the visual displacement or in the opposite direction, by an amount equal to the size of the visual shift. The ipsilateral shift of the background increased the asymptote during the exposure period and decreased the aftereffect, i.e., prism adaptation was attenuated by the ipsilateral shift. By contrast, a contralateral shift enhanced adaptation. We further tested whether the shift of the square alone could cause an increase in the motor error. Although the target did not move, the shift of the square increased the motor error in the direction of the shift. These results were generally consistent with the results reported in human subjects, suggesting that the monkey and the human share the same neural mechanisms for representing a target relative to the background. PMID:26431765

  5. Simple method for model reference adaptive control

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1989-01-01

    A simple method is presented for combined signal synthesis and parameter adaptation within the framework of model reference adaptive control theory. The results are obtained using a simple derivation based on an improved Liapunov function.

  6. Results from Modeling the Diffuse Ultraviolet Background

    NASA Astrophysics Data System (ADS)

    Murthy, Jayant

    2016-07-01

    I have used a Monte Carlo model for dust scattering in our Galaxy with multiple scattering to study the diffuse emission seen by the GALEX mission. I find that the emission at low and mid latitudes is fit well by scattering from dust grains with an albedo of 0.4. However, only about 30% of the diffuse radiation at high Galactic latitudes is due to dust scattering. There is an additional component of 500 - 600 ph cm^{-2} s^{-1} sr^{-1} Å^{-1} at all latitudes of an unknown origin.

  7. ADAPTATION OF MAMMALIAN PHOTORECEPTORS TO BACKGROUND LIGHT: PUTATIVE ROLE FOR DIRECT MODULATION OF PHOSPHODIESTERASE

    PubMed Central

    Fain, Gordon L

    2011-01-01

    All sensory receptors adapt. As the mean level of light or sound or odor is altered, the sensitivity of the receptor is adjusted to permit the cell to function over as wide a range of ambient stimulation as possible. In a rod photoreceptor, adaptation to maintained background light produces a decrease (or "sag") in the response to the prolonged illumination, as well as an acceleration in response decay time and a Weber-Fechner-like decrease in sensitivity. Earlier work on salamander indicated that adaptation is controlled by the intracellular concentration of Ca2+. Three Ca2+-dependent mechanisms were subsequently identified, namely regulation of guanylyl cyclase, modulation of activated rhodopsin lifetime, and alteration of channel opening probability, with the contribution of the cyclase thought to be the most important. Later experiments on mouse that exploit the powerful techniques of molecular genetics have shown that cyclase does indeed play a significant role in mammalian rods, but that much of adaptation remains even when regulation of cyclase and both of the other proposed pathways have been genetically deleted. The identity of the missing mechanism or mechanisms is unclear, but recent speculation has focused on direct modulation of spontaneous and light-activated phosphodiesterase. PMID:21922272

  8. Egyptian exploration: background, models, and future potential

    SciTech Connect

    Kanes, W.H.; Abdine, S.

    1983-03-01

    Egypt has proven to be an area with excellent exploration potential. Recent discoveries in the Western Desert tilted fault blocks are leading to a reevaluation of new play concepts based on an east-west Tethyan rift structure model. Facies favorable to hydrocarbon accumulation are associated with shallow-water marine depositional environments. Production has not been great on a per-well basis, but fields have consistently out-produced the original recoverable reserve estimates. The Gulf of Suez lies within the rift between North Africa and Arabia-Sinai. It remains a major producing area with production from sandstones which range in age from Carboniferous to Cretaceous. The Upper Cretaceous and Lower Tertiary carbonates are potentially attractive zones, as are the Miocene clastics and carbonates. Miocene marls and Upper Cretaceous shales are source rocks, and thermal maturation can be directly related to continental rifting with the oil window most attractive in the southern third of the Gulf of Suez. Structural style is strongly rift-influenced with tilted and locally eroded horsts prevalent. The central gulf has a general eastern dip, whereas the northern and southern areas have a regional westward dip. This has had a direct influence in isolating some major oil fields and has adversely affected reflection seismic surveys. Exploration has been difficult because of excessive Miocene and younger salt thicknesses. With increasingly refined technology, attractive targets now are being delineated in the hitherto unexplored lows between horsts within the gulf.

  9. A Bayesian Model of Sensory Adaptation

    PubMed Central

    Sato, Yoshiyuki; Aihara, Kazuyuki

    2011-01-01

    Recent studies reported two opposite types of adaptation in temporal perception. Here, we propose a Bayesian model of sensory adaptation that exhibits both types of adaptation. We regard adaptation as the adaptive updating of estimations of time-evolving variables, which determine the mean value of the likelihood function and that of the prior distribution in a Bayesian model of temporal perception. On the basis of certain assumptions, we can analytically determine the mean behavior in our model and identify the parameters that determine the type of adaptation that actually occurs. The results of our model suggest that we can control the type of adaptation by controlling the statistical properties of the stimuli presented. PMID:21541346
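
    As a loose, hedged sketch of the idea (not the paper's equations), the toy below treats adaptation as slow updating of two time-evolving estimates, the likelihood mean and the prior mean, in a Gaussian Bayesian observer; all gains, widths and the stimulus ensemble are invented for illustration.

        import numpy as np

        # Toy sketch: adaptation as updating of mu_like (mean of the likelihood, e.g.
        # a sensory bias) and mu_prior (mean of the prior over the stimulus).
        rng = np.random.default_rng(0)
        sigma_like, sigma_prior = 1.0, 2.0
        eta_like, eta_prior = 0.05, 0.02          # learning rates for the two estimates
        mu_like, mu_prior = 0.0, 0.0

        def percept(stimulus):
            """Posterior-mean estimate for a Gaussian likelihood times a Gaussian prior."""
            w = sigma_prior**2 / (sigma_prior**2 + sigma_like**2)
            return w * (stimulus - mu_like) + (1 - w) * mu_prior

        for t in range(500):
            s = 3.0 + rng.normal(0, 0.5)          # adapting stimuli clustered around 3.0
            p = percept(s)
            mu_like  += eta_like  * (s - p - mu_like)   # drift of the likelihood mean
            mu_prior += eta_prior * (p - mu_prior)      # prior tracks recent percepts

        print(f"after adaptation: mu_like={mu_like:.2f}, mu_prior={mu_prior:.2f}, "
              f"percept of s=3.0 -> {percept(3.0):.2f}")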

  10. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to possible low or negative Signal to Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral and Cross-Spectral Matrix subtraction. The method was seen to recover the primary signal level in SNRs as low as -29 dB and outperform the conventional methods. A second processing approach using the center array microphone as the noise reference was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
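
    A minimal time-domain LMS noise canceller illustrates the reference-microphone arrangement described above; the filter length, step size and synthetic signals are illustrative assumptions, not the parameters used in the experiment.

        import numpy as np

        # Minimal LMS noise canceller: a reference sensor measures the background
        # noise, an FIR filter adapts so that its output matches the noise component
        # at the primary (array) microphone, and the residual approximates the source.
        rng = np.random.default_rng(1)
        n, L, mu = 20000, 64, 1e-3
        noise_ref = rng.normal(size=n)                    # reference (background) channel
        h_path = rng.normal(size=32) * 0.2                # unknown path to the array mic
        noise_at_mic = np.convolve(noise_ref, h_path, mode="full")[:n]
        signal = 0.1 * np.sin(2 * np.pi * 0.05 * np.arange(n))   # weak tonal source
        primary = signal + noise_at_mic                   # low-SNR array measurement

        w = np.zeros(L)
        out = np.zeros(n)
        for k in range(L - 1, n):
            x = noise_ref[k - L + 1:k + 1][::-1]          # most recent reference samples
            e = primary[k] - w @ x                        # error = cleaned output
            w += 2 * mu * e * x                           # LMS weight update
            out[k] = e

        print("residual noise power after convergence:",
              np.mean((out[5000:] - signal[5000:])**2))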

  11. Adaptive Urban Dispersion Integrated Model

    SciTech Connect

    Wissink, A; Chand, K; Kosovic, B; Chan, S; Berger, M; Chow, F K

    2005-11-03

    Numerical simulations represent a unique predictive tool for understanding the three-dimensional flow fields and associated concentration distributions from contaminant releases in complex urban settings (Britter and Hanna 2003). Utilization of the most accurate urban models, based on fully three-dimensional computational fluid dynamics (CFD) that solve the Navier-Stokes equations with incorporated turbulence models, presents many challenges. We address two in this work; first, a fast but accurate way to incorporate the complex urban terrain, buildings, and other structures to enforce proper boundary conditions in the flow solution; second, ways to achieve a level of computational efficiency that allows the models to be run in an automated fashion such that they may be used for emergency response and event reconstruction applications. We have developed a new integrated urban dispersion modeling capability based on FEM3MP (Gresho and Chan 1998, Chan and Stevens 2000), a CFD model from Lawrence Livermore National Lab. The integrated capability incorporates fast embedded boundary mesh generation for geometrically complex problems and full three-dimensional Cartesian adaptive mesh refinement (AMR). Parallel AMR and embedded boundary gridding support are provided through the SAMRAI library (Wissink et al. 2001, Hornung and Kohn 2002). Embedded boundary mesh generation has been demonstrated to be an automatic, fast, and efficient approach for problem setup. It has been used for a variety of geometrically complex applications, including urban applications (Pullen et al. 2005). The key technology we introduce in this work is the application of AMR, which allows the application of high-resolution modeling to certain important features, such as individual buildings and high-resolution terrain (including important vegetative and land-use features). It also allows the urban scale model to be readily interfaced with coarser resolution meso or regional scale models. This talk

  12. Computerized Adaptive Testing under Nonparametric IRT Models

    ERIC Educational Resources Information Center

    Xu, Xueli; Douglas, Jeff

    2006-01-01

    Nonparametric item response models have been developed as alternatives to the relatively inflexible parametric item response models. An open question is whether it is possible and practical to administer computerized adaptive testing with nonparametric models. This paper explores the possibility of computerized adaptive testing when using…

  13. An analog retina model for detecting dim moving objects against a bright moving background

    NASA Technical Reports Server (NTRS)

    Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.

    1991-01-01

    We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.

  14. Background noise cancellation of manatee vocalizations using an adaptive line enhancer.

    PubMed

    Yan, Zheng; Niezrecki, Christopher; Cattafesta, Louis N; Beusse, Diedrich O

    2006-07-01

    The West Indian manatee (Trichechus manatus latirostris) has become an endangered species partly because of an increase in the number of collisions with boats. A device to alert boaters of the presence of manatees is desired. Previous research has shown that background noise limits the manatee vocalization detection range (which is critical for practical implementation). By improving the signal-to-noise ratio of the measured manatee vocalization signal, it is possible to extend the detection range. The finite impulse response (FIR) structure of the adaptive line enhancer (ALE) can detect and track narrow-band signals buried in broadband noise. In this paper, a constrained infinite impulse response (IIR) ALE, called a feedback ALE (FALE), is implemented to reduce the background noise. In addition, a bandpass filter is used as a baseline for comparison. A library consisting of 100 manatee calls spanning ten different signal categories is used to evaluate the performance of the bandpass filter, FIR-ALE, and FALE. The results show that the FALE is capable of reducing background noise by about 6.0 and 21.4 dB more than the FIR-ALE and bandpass filter, respectively, when the signal-to-noise ratio (SNR) of the original manatee call is -5 dB. PMID:16875212
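
    For illustration, the sketch below implements the classic FIR adaptive line enhancer (the FIR-ALE baseline, not the constrained IIR FALE of the paper): the input is delayed and an LMS filter predicts the current sample, so the predictable narrow-band call is enhanced relative to the broadband noise. The delay, filter length, step size and the synthetic call are assumptions.

        import numpy as np

        # FIR adaptive line enhancer: predict x[k] from samples delayed by D, so only
        # the narrow-band (predictable) part of the input survives at the output.
        rng = np.random.default_rng(2)
        n, D, L, mu = 30000, 16, 128, 5e-4
        t = np.arange(n)
        call = np.sin(2 * np.pi * 0.02 * t)               # stand-in narrow-band "call"
        x = call + rng.normal(scale=2.0, size=n)          # SNR well below 0 dB

        w = np.zeros(L)
        enhanced = np.zeros(n)
        for k in range(D + L, n):
            u = x[k - D - L:k - D][::-1]                  # delayed tap vector
            y = w @ u                                     # prediction = enhanced output
            e = x[k] - y
            w += 2 * mu * e * u
            enhanced[k] = y

        c_in = np.corrcoef(x[10000:], call[10000:])[0, 1]
        c_out = np.corrcoef(enhanced[10000:], call[10000:])[0, 1]
        print(f"correlation with clean tone: input {c_in:.2f}, ALE output {c_out:.2f}")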

  15. An Adaptive Critic Approach to Reference Model Adaptation

    NASA Technical Reports Server (NTRS)

    Krishnakumar, K.; Limes, G.; Gundy-Burlet, K.; Bryant, D.

    2003-01-01

    Neural networks have been successfully used for implementing control architectures for different applications. In this work, we examine a neural network augmented adaptive critic as a Level 2 intelligent controller for a C-17 aircraft. This intelligent control architecture utilizes an adaptive critic to tune the parameters of a reference model, which is then used to define the angular rate command for a Level 1 intelligent controller. The present architecture is implemented on a high-fidelity non-linear model of a C-17 aircraft. The goal of this research is to improve the performance of the C-17 under degraded conditions such as control failures and battle damage. Pilot ratings using a motion-based simulation facility are included in this paper. The benefits of using an adaptive critic are documented using time response comparisons for severe damage situations.

  16. Stacked Multilayer Self-Organizing Map for Background Modeling.

    PubMed

    Zhao, Zhenjie; Zhang, Xuebo; Fang, Yongchun

    2015-09-01

    In this paper, a new background modeling method called stacked multilayer self-organizing map background model (SMSOM-BM) is proposed, which presents several merits such as strong representative ability for complex scenarios, ease of use, and so on. In order to enhance the representative ability of the background model and make the parameters learned automatically, the recently developed idea of representation learning (or deep learning) is elegantly employed to extend the existing single-layer self-organizing map background model to a multilayer one (namely, the proposed SMSOM-BM). As a consequence, the SMSOM-BM gains several merits including strong representative ability to learn background models of challenging scenarios, and automatic determination of most network parameters. More specifically, every pixel is modeled by a SMSOM, and spatial consistency is considered at each layer. By introducing a novel over-layer filtering process, we can train the background model layer by layer in an efficient manner. Furthermore, for real-time performance considerations, we have implemented the proposed method using the NVIDIA CUDA platform. Comparative experimental results show the superior performance of the proposed approach. PMID:25935034
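
    A minimal single-layer, per-pixel self-organizing background model of the kind the paper stacks into multiple layers might look as follows; the codebook size, distance threshold and learning rate are illustrative, and the over-layer filtering and CUDA implementation are omitted.

        import numpy as np

        # Per-pixel codebook of grey values: the best-matching code decides background
        # vs. foreground, and background matches pull the winning code toward the
        # new observation.
        K, TAU, LR = 5, 15.0, 0.05

        def update(frame, codebook):
            """frame: (H, W) grey image; codebook: (K, H, W). Returns (mask, codebook)."""
            dist = np.abs(codebook - frame[None])
            fg = dist.min(axis=0) > TAU                    # no code close enough -> foreground
            winner = dist.argmin(axis=0)
            for k in range(K):                             # adapt winning background codes
                sel = (winner == k) & ~fg
                codebook[k][sel] += LR * (frame[sel] - codebook[k][sel])
            return fg, codebook

        rng = np.random.default_rng(8)
        bg = rng.uniform(0, 255, (90, 120))
        codebook = np.repeat(bg[None], K, axis=0) + rng.normal(0, 3, (K, 90, 120))
        frame = bg + rng.normal(0, 2, bg.shape)
        frame[30:50, 40:80] += 60                          # synthetic moving object
        fg, codebook = update(frame, codebook)
        print("foreground pixels:", int(fg.sum()))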

  17. Adaptation to background light enables contrast coding at rod bipolar cell synapses

    PubMed Central

    Ke, Jiang-Bin; Wang, Yanbin V.; Borghuis, Bart G.; Cembrowski, Mark S.; Riecke, Hermann; Kath, William L.; Demb, Jonathan B.; Singer, Joshua H.

    2013-01-01

    Rod photoreceptors contribute to vision over a ~6 log-unit range of light intensities. The wide dynamic range of rod vision is thought to depend upon light intensity-dependent switching between two parallel pathways linking rods to ganglion cells: a rod→rod bipolar (RB) cell pathway that operates at dim backgrounds and a rod→cone→cone bipolar cell pathway that operates at brighter backgrounds. We evaluated this conventional model of rod vision by recording rod-mediated light responses from ganglion and AII amacrine cells and by recording RB-mediated synaptic currents from AII amacrine cells in mouse retina. Contrary to the conventional model, we found that the RB pathway functioned at backgrounds sufficient to activate the rod→cone pathway. As background light intensity increased, the RB’s role changed from encoding the absorption of single photons to encoding contrast modulations around mean luminance. This transition is explained by the intrinsic dynamics of transmission from RB synapses. PMID:24373883

  18. Spatiotemporal models for the simulation of infrared backgrounds

    NASA Astrophysics Data System (ADS)

    Wilkes, Don M.; Cadzow, James A.; Peters, R. Alan, II; Li, Xingkang

    1992-09-01

    It is highly desirable for designers of automatic target recognizers (ATRs) to be able to test their algorithms on targets superimposed on a wide variety of background imagery. Background imagery in the infrared spectrum is expensive to gather from real sources, consequently, there is a need for accurate models for producing synthetic IR background imagery. We have developed a model for such imagery that will do the following: Given a real, infrared background image, generate another image, distinctly different from the one given, that has the same general visual characteristics as well as the first and second-order statistics of the original image. The proposed model consists of a finite impulse response (FIR) kernel convolved with an excitation function, and histogram modification applied to the final solution. A procedure for deriving the FIR kernel using a signal enhancement algorithm has been developed, and the histogram modification step is a simple memoryless nonlinear mapping that imposes the first order statistics of the original image onto the synthetic one, thus the overall model is a linear system cascaded with a memoryless nonlinearity. It has been found that the excitation function relates to the placement of features in the image, the FIR kernel controls the sharpness of the edges and the global spectrum of the image, and the histogram controls the basic coloration of the image. A drawback to this method of simulating IR backgrounds is that a database of actual background images must be collected in order to produce accurate FIR and histogram models. If this database must include images of all types of backgrounds obtained at all times of the day and all times of the year, the size of the database would be prohibitive. In this paper we propose improvements to the model described above that enable time-dependent modeling of the IR background. This approach can greatly reduce the number of actual IR backgrounds that are required to produce a
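
    A hedged sketch of the described synthesis pipeline (excitation field, FIR kernel convolution, then histogram specification as the memoryless nonlinearity) is given below; the kernel is an arbitrary low-pass stand-in rather than one derived from real IR imagery by the signal-enhancement algorithm, and the "real" image here is synthetic.

        import numpy as np

        # Excitation -> FIR kernel convolution (linear system) -> histogram
        # specification imposing the first-order statistics of a reference image.
        rng = np.random.default_rng(3)
        real = rng.gamma(shape=2.0, scale=30.0, size=(256, 256))   # placeholder "real" IR image

        excitation = rng.normal(size=(256, 256))
        k = np.outer(np.hanning(9), np.hanning(9))
        k /= k.sum()                                               # stand-in FIR kernel

        # 2-D convolution via FFT (circular, same support)
        synth = np.real(np.fft.ifft2(np.fft.fft2(excitation) * np.fft.fft2(k, excitation.shape)))

        # Histogram specification: map the ranks of the synthetic image onto the
        # sorted grey levels of the reference image.
        order = np.argsort(synth, axis=None)
        matched = np.empty_like(synth).ravel()
        matched[order] = np.sort(real, axis=None)
        matched = matched.reshape(synth.shape)

        print("synthetic image now has the reference first-order statistics:",
              np.allclose(np.sort(matched, axis=None), np.sort(real, axis=None)))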

  19. Adaptive Modeling of the International Space Station Electrical Power System

    NASA Technical Reports Server (NTRS)

    Thomas, Justin Ray

    2007-01-01

    Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.

  20. On Fractional Model Reference Adaptive Control

    PubMed Central

    Shi, Bao; Dong, Chao

    2014-01-01

    This paper extends the conventional Model Reference Adaptive Control systems to fractional ones based on the theory of fractional calculus. A control law and an incommensurate fractional adaptation law are designed for the fractional plant and the fractional reference model. The stability and tracking convergence are analyzed using the frequency distributed fractional integrator model and Lyapunov theory. Moreover, numerical simulations of both linear and nonlinear systems are performed to exhibit the viability and effectiveness of the proposed methodology. PMID:24574897

  1. On fractional Model Reference Adaptive Control.

    PubMed

    Shi, Bao; Yuan, Jian; Dong, Chao

    2014-01-01

    This paper extends the conventional Model Reference Adaptive Control systems to fractional ones based on the theory of fractional calculus. A control law and an incommensurate fractional adaptation law are designed for the fractional plant and the fractional reference model. The stability and tracking convergence are analyzed using the frequency distributed fractional integrator model and Lyapunov theory. Moreover, numerical simulations of both linear and nonlinear systems are performed to exhibit the viability and effectiveness of the proposed methodology. PMID:24574897

  2. Gravitoinertial force background level affects adaptation to coriolis force perturbations of reaching movements

    NASA Technical Reports Server (NTRS)

    Lackner, J. R.; Dizio, P.

    1998-01-01

    We evaluated the combined effects on reaching movements of the transient, movement-dependent Coriolis forces and the static centrifugal forces generated in a rotating environment. Specifically, we assessed the effects of comparable Coriolis force perturbations in different static force backgrounds. Two groups of subjects made reaching movements toward a just-extinguished visual target before rotation began, during 10 rpm counterclockwise rotation, and after rotation ceased. One group was seated on the axis of rotation, the other 2.23 m away. The resultant of gravity and centrifugal force on the hand was 1.0 g for the on-center group during 10 rpm rotation, and 1.031 g for the off-center group because of the 0.25 g centrifugal force present. For both groups, rightward Coriolis forces, approximately 0.2 g peak, were generated during voluntary arm movements. The endpoints and paths of the initial per-rotation movements were deviated rightward for both groups by comparable amounts. Within 10 subsequent reaches, the on-center group regained baseline accuracy and straight-line paths; however, even after 40 movements the off-center group had not resumed baseline endpoint accuracy. Mirror-image aftereffects occurred when rotation stopped. These findings demonstrate that manual control is disrupted by transient Coriolis force perturbations and that adaptation can occur even in the absence of visual feedback. An increase, even a small one, in background force level above normal gravity does not affect the size of the reaching errors induced by Coriolis forces nor does it affect the rate of reacquiring straight reaching paths; however, it does hinder restoration of reaching accuracy.

  3. Graphical Models and Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    Almond, Russell G.; Mislevy, Robert J.

    1999-01-01

    Considers computerized adaptive testing from the perspective of graphical modeling (GM). GM provides methods for making inferences about multifaceted skills and knowledge and for extracting data from complex performances. Provides examples from language-proficiency assessment. (SLD)

  4. Image Discrimination Models Predict Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Rohaly, A. M.; Watson, Andrew B.; Null, Cynthia H. (Technical Monitor)

    1994-01-01

    Object detection involves looking for one of a large set of object sub-images in a large set of background images. Image discrimination models only predict the probability that an observer will detect a difference between two images. In a recent study based on only six different images, we found that discrimination models can predict the relative detectability of objects in those images, suggesting that these simpler models may be useful in some object detection applications. Here we replicate this result using a new, larger set of images. Fifteen images of a vehicle in an otherwise natural setting were altered to remove the vehicle and mixed with the original image in a proportion chosen to make the target neither perfectly recognizable nor unrecognizable. The target was also rotated about a vertical axis through its center and mixed with the background. Sixteen observers rated these 30 target images and the 15 background-only images for the presence of a vehicle. The likelihoods of the observer responses were computed from a Thurstone scaling model with the assumption that the detectabilities are proportional to the predictions of an image discrimination model. Three image discrimination models were used: a cortex transform model, a single channel model with a contrast sensitivity function filter, and the Root-Mean-Square (RMS) difference of the digital target and background-only images. As in the previous study, the cortex transform model performed best; the RMS difference predictor was second best; and last, but still a reasonable predictor, was the single channel model. Image discrimination models can predict the relative detectabilities of objects in natural backgrounds.
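
    The simplest of the three predictors, the RMS difference between the target-plus-background and background-only images, can be written in a few lines; the images and mixing proportion below are placeholders, not the study's stimuli.

        import numpy as np

        # RMS-difference detectability predictor for a target embedded in a background.
        def rms_detectability(target_plus_bg, background):
            d = target_plus_bg.astype(float) - background.astype(float)
            return np.sqrt(np.mean(d ** 2))

        rng = np.random.default_rng(4)
        background = rng.uniform(0, 255, size=(128, 128))
        target = np.zeros_like(background)
        target[50:70, 40:90] = 40.0                       # faint vehicle-like patch
        alpha = 0.5                                       # illustrative mixing proportion
        mixed = background + alpha * target               # background mixed with target image

        print("predicted detectability (RMS difference):",
              round(rms_detectability(mixed, background), 2))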

  5. Background noise model development for seismic stations of KMA

    NASA Astrophysics Data System (ADS)

    Jeon, Youngsoo

    2010-05-01

    Background noise recorded by a seismometer is present in any seismic signal because of the natural phenomena of the medium through which the signal passed. Reducing seismic noise is very important for improving data quality in seismic studies, but the most important step in reducing seismic noise is to find an appropriate site before installing the seismometer. For this reason, NIMR (National Institute of Meteorological Research) started to develop a model of standard background noise for the broadband seismic stations of the KMA (Korea Meteorological Administration) using a continuous data set obtained from 13 broadband stations during 2007 and 2008. We also developed the model using short-period seismic data from 10 stations in 2009. The method of McNamara and Buland (2004) is applied to analyse the background noise of the Korean Peninsula. The fact that borehole seismometer records show a lower noise level at frequencies greater than 1 Hz than records at the surface indicates that the cultural noise of the inland Korean Peninsula should be considered when processing the seismic data set. The double-frequency peak should also be taken into account because the Korean Peninsula is surrounded by seas to the east, west and south. The development of the KMA background model shows that the Peterson model (1993) does not fit the background noise signal generated in the Korean Peninsula.
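
    A sketch of the McNamara and Buland style processing, estimating the PSD of many overlapping segments and building a per-period probability density of noise power, might look as follows; the sampling rate, segment lengths and the synthetic trace are assumptions standing in for real station data.

        import numpy as np
        from scipy.signal import welch

        # PSD probability-density estimation over many overlapping data segments.
        rng = np.random.default_rng(5)
        fs = 20.0                                        # Hz
        trace = rng.normal(size=int(fs * 3600 * 6))      # 6 h of synthetic ground motion

        seg = int(fs * 3600)                             # 1-hour segments, 50% overlap
        psds = []
        for start in range(0, len(trace) - seg, seg // 2):
            f, p = welch(trace[start:start + seg], fs=fs, nperseg=int(fs * 256))
            psds.append(10 * np.log10(p[1:]))            # to dB, drop the zero frequency
        psds = np.array(psds)
        periods = 1.0 / f[1:]

        idx = np.argmin(np.abs(periods - 1.0))           # period bin nearest 1 s
        lo, hi = np.floor(psds[:, idx].min()), np.ceil(psds[:, idx].max())
        hist, edges = np.histogram(psds[:, idx], bins=np.arange(lo, hi + 2, 1.0), density=True)
        print("mode of the 1-s noise power PDF (dB):", edges[np.argmax(hist)])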

  6. Modeling surface backgrounds from radon progeny plate-out

    SciTech Connect

    Perumpilly, G.; Guiseppe, V. E.; Snyder, N.

    2013-08-08

    The next generation low-background detectors operating deep underground aim for unprecedented low levels of radioactive backgrounds. The surface deposition and subsequent implantation of radon progeny in detector materials will be a source of energetic background events. We investigate Monte Carlo and model-based simulations to understand the surface implantation profile of radon progeny. Depending on the material and region of interest of a rare event search, these partial energy depositions can be problematic. Motivated by the use of Ge crystals for the detection of neutrinoless double-beta decay, we wish to understand the detector response of surface backgrounds from radon progeny. We look at the simulation of surface decays using a validated implantation distribution based on nuclear recoils and a realistic surface texture. Results of the simulations and measured α spectra are presented.

  7. Cosmic microwave background observables of small field models of inflation

    SciTech Connect

    Ben-Dayan, Ido; Brustein, Ram E-mail: ramyb@bgu.ac.il

    2010-09-01

    We construct a class of single small field models of inflation that can predict, contrary to popular wisdom, an observable gravitational wave signal in the cosmic microwave background anisotropies. The spectral index, its running, the tensor to scalar ratio and the number of e-folds can cover all the parameter space currently allowed by cosmological observations. A unique feature of models in this class is their ability to predict a negative spectral index running in accordance with recent cosmic microwave background observations. We discuss the new class of models from an effective field theory perspective and show that if the dimensionless trilinear coupling is small, as required for consistency, then the observed spectral index running implies a high scale of inflation and hence an observable gravitational wave signal. All the models share a distinct prediction of higher power at smaller scales, making them easy targets for detection.

  8. Adaptive Modeling Language and Its Derivatives

    NASA Technical Reports Server (NTRS)

    Chemaly, Adel

    2006-01-01

    Adaptive Modeling Language (AML) is the underlying language of an object-oriented, multidisciplinary, knowledge-based engineering framework. AML offers an advanced modeling paradigm with an open architecture, enabling the automation of the entire product development cycle, integrating product configuration, design, analysis, visualization, production planning, inspection, and cost estimation.

  9. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Khamayseh, Ahmed K; de Almeida, Valmor F; Hansen, Glen

    2008-01-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called "mesh motion" (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  10. Hybrid Surface Mesh Adaptation for Climate Modeling

    SciTech Connect

    Ahmed Khamayseh; Valmor de Almeida; Glen Hansen

    2008-10-01

    Solution-driven mesh adaptation is becoming quite popular for spatial error control in the numerical simulation of complex computational physics applications, such as climate modeling. Typically, spatial adaptation is achieved by element subdivision (h adaptation) with a primary goal of resolving the local length scales of interest. A second, less-popular method of spatial adaptivity is called “mesh motion” (r adaptation); the smooth repositioning of mesh node points aimed at resizing existing elements to capture the local length scales. This paper proposes an adaptation method based on a combination of both element subdivision and node point repositioning (rh adaptation). By combining these two methods using the notion of a mobility function, the proposed approach seeks to increase the flexibility and extensibility of mesh motion algorithms while providing a somewhat smoother transition between refined regions than is produced by element subdivision alone. Further, in an attempt to support the requirements of a very general class of climate simulation applications, the proposed method is designed to accommodate unstructured, polygonal mesh topologies in addition to the most popular mesh types.

  11. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate the camera motion between the consecutive frames. Unlike previous methods, our algorithm does not need any preprocess step for computing the dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporal coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy, while being much faster than the state-of-the-art competing methods.
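
    As a hedged illustration of the nonparametric sample consensus appearance model (a ViBe-style per-pixel test), the snippet below classifies pixels by counting close background samples; the camera-motion compensation by per-region homographies and the MRF optimization are omitted, and the thresholds and sample count are illustrative.

        import numpy as np

        # Per-pixel sample consensus: a pixel is background if enough of its stored
        # samples are within RADIUS of the current value.
        N_SAMPLES, RADIUS, MIN_MATCHES = 20, 20.0, 2

        def classify(frame, samples):
            """frame: (H, W) grey image; samples: (N_SAMPLES, H, W) background samples.
            Returns a boolean foreground mask."""
            dist = np.abs(samples - frame[None, :, :])
            matches = (dist < RADIUS).sum(axis=0)
            return matches < MIN_MATCHES                   # too few close samples -> foreground

        rng = np.random.default_rng(6)
        bg = rng.uniform(0, 255, size=(120, 160))
        samples = bg[None] + rng.normal(0, 3, size=(N_SAMPLES, 120, 160))
        frame = bg.copy()
        frame[40:60, 60:100] += 80                         # synthetic moving object
        print("foreground pixels detected:", int(classify(frame, samples).sum()))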

  12. Model Based Unsupervised Learning Guided by Abundant Background Samples

    PubMed Central

    Mahdi, Rami N.; Rouchka, Eric C.

    2010-01-01

    Many data sets contain an abundance of background data or samples belonging to classes not currently under consideration. We present a new unsupervised learning method based on Fuzzy C-Means to learn sub-models of a class using background samples to guide cluster split and merge operations. The proposed method demonstrates how background samples can be used to guide and improve the clustering process. The proposed method results in more accurate clusters and helps to escape local minima. In addition, the number of clusters is determined for the class under consideration. The method demonstrates remarkable performance on both synthetic 2D and real-world data from the MNIST dataset of handwritten digits. PMID:20436793

  13. Roy’s Adaptation Model-Based Patient Education for Promoting the Adaptation of Hemodialysis Patients

    PubMed Central

    Afrasiabifar, Ardashir; Karimi, Zohreh; Hassani, Parkhideh

    2013-01-01

    Background In addition to physical adaptation and psychosocial adjustment to chronic renal disease, hemodialysis (HD) patients must also adapt to the dialysis therapy plan. Objectives The aim of the present study was to examine the effect of Roy’s adaptation model-based patient education on the adaptation of HD patients. Patients and Methods This quasi-experimental study was conducted with the participation of all patients with end-stage renal disease referred to the dialysis unit of Shahid Beheshti Hospital of Yasuj city, 2010. A total of 59 HD patients were randomly allocated to test and control groups. Data were collected by a questionnaire based on the Roy’s Adaptation Model (RAM). Validity and reliability of the questionnaire were confirmed. Patient education consisted of eight one-hour sessions over eight weeks. At the end of the education plan, the patients were given an educational booklet containing the main points of self-care for HD patients. The effectiveness of the education plan was assessed two months after plan completion and the data were compared with the pre-education scores. All analyses were conducted using the SPSS software (version 16) through descriptive and inferential statistics including correlation, t-test, ANOVA and ANCOVA tests. Results The results showed significant differences in the mean scores of the physiological and self-concept modes between the test and control groups (P = 0.01 and P = 0.03, respectively). A statistical difference (P = 0.04) was also observed in the mean scores of the role function mode of both groups. There was no significant difference in the mean scores of the interdependence mode between the two groups. Conclusions RAM-based patient education could improve the patients’ adaptation in the physiologic and self-concept modes. In addition to suggesting further research in this area, nurses are recommended to pay more attention to applying RAM in dialysis centers. PMID:24396575

  14. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

    This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology. The software provides analysis tools to design the adaptive models. The software also provides the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: Creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior. Automatically updates/calibrates system models using the latest streaming sensor data. Creates device specific models that capture the exact behavior of devices of the same type. Adapts to evolving systems. Can reduce computational complexity (faster simulations).

  15. DiffuseModel: Modeling the diffuse ultraviolet background

    NASA Astrophysics Data System (ADS)

    Murthy, Jayant

    2015-12-01

    DiffuseModel calculates the scattered radiation from dust scattering in the Milky Way based on stars from the Hipparcos catalog. It uses a Monte Carlo method to implement multiple scattering and assumes a user-supplied grid for the dust distribution. The output is a FITS file with the diffuse light over the Galaxy. It is intended for use in the UV (900-3000 Å) but may be modified for use at other wavelengths and in other galaxies.

  16. Hybrid adaptive control of a dragonfly model

    NASA Astrophysics Data System (ADS)

    Couceiro, Micael S.; Ferreira, Nuno M. F.; Machado, J. A. Tenreiro

    2012-02-01

    Dragonflies show unique and superior flight performance compared with most other insect species and birds. They are equipped with two pairs of independently controlled wings, granting an unmatched flying performance and robustness. This paper presents an adaptive scheme for controlling a nonlinear model inspired by a dragonfly-like robot. A hybrid adaptive (HA) law is proposed for adjusting the parameters by analyzing the tracking error. At the current stage of the project, the development of computational simulation models based on the dynamics is considered essential for testing control strategies and algorithms, parts of the system (such as different wing configurations or the tail), as well as the complete system. The performance analysis proves the superiority of the HA law over the direct adaptive (DA) method in terms of faster and improved tracking and parameter convergence.

  17. Adaptive modelling of structured molecular representations for toxicity prediction

    NASA Astrophysics Data System (ADS)

    Bertinetto, Carlo; Duce, Celia; Micheli, Alessio; Solaro, Roberto; Tiné, Maria Rosaria

    2012-12-01

    We investigated the possibility of modelling structure-toxicity relationships by direct treatment of the molecular structure (without using descriptors) through an adaptive model able to retain the appropriate structural information. With respect to traditional descriptor-based approaches, this provides a more general and flexible way to tackle prediction problems that is particularly suitable when little or no background knowledge is available. Our method employs a tree-structured molecular representation, which is processed by a recursive neural network (RNN). To explore the realization of RNN modelling in toxicological problems, we employed a data set containing growth impairment concentrations (IGC50) for Tetrahymena pyriformis.

  18. Adaptive deformable model for mouth boundary detection

    NASA Astrophysics Data System (ADS)

    Mirhosseini, Ali R.; Yan, Hong; Lam, Kin-Man

    1998-03-01

    A new generalized algorithm is proposed to automatically extract a mouth boundary model from human face images. Such an algorithm can contribute to human face recognition and lip-reading-assisted speech recognition systems, in particular, and multimodal human-computer interaction systems, in general. The new model is an iterative algorithm based on a hierarchical model adaptation scheme using deformable templates, as a generalization of some previous works. The role of prior knowledge is essential for perceptual organization in the algorithm. The prior knowledge about the mouth shape is used to define and initialize a primary deformable model. Each primary boundary curve of a mouth is formed from three control points, including the two mouth corners, whose locations are optimized using a primary energy functional. This energy functional essentially captures the knowledge of the mouth shape to perceptually organize image information. The primary model is finely tuned in the second stage of the optimization algorithm using a generalized secondary energy functional. Basically, each boundary curve is finely tuned using more control points. The primary model is replaced by an adapted model if there is an increase in the secondary energy functional. The results indicate that the new model adaptation technique satisfactorily generalizes mouth boundary extraction in an automated fashion.

  19. An Assessment of a Technique for Modeling Lidar Background Measurements

    NASA Astrophysics Data System (ADS)

    Powell, K. A.; Hunt, W. H.; Vaughan, M. A.; Hair, J. W.; Butler, C. F.; Hostetler, C. A.

    2015-12-01

    A high-fidelity lidar simulation tool has been developed to generate synthetic lidar backscatter data that closely matches the expected performance of various lidars, including the noise characteristics inherent to analog detection and uncertainties related to the measurement environment. This tool supports performance trade studies and scientific investigations for both the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), which flies aboard Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), and the NASA Langley Research Center airborne High Spectral Resolution Lidar (HSRL). The simulation tool models the lidar instrument characteristics, the backscatter signals generated from aerosols, clouds, ocean surface and subsurface, and the solar background signals. The background signals are derived from the simulated aerosol and cloud characteristics, the surface type, and solar zenith angle, using a look-up table of upwelling radiance vs scene type. The upwelling radiances were derived from the CALIOP RMS background noise and were correlated with measurements of the particulate intensive and extensive optical properties, including surface scattering for transparent layers. Tests were conducted by tuning the tool for both HSRL and CALIOP instrument settings and the atmospheres were defined using HSRL measurements from underflights of CALIPSO. For similar scenes, the simulated and measured backgrounds were compared. Overall, comparisons showed good agreement, verifying the accuracy of the tool to support studies involving instrument characterization and advanced data analysis techniques.

  20. Sigma models for genuinely non-geometric backgrounds

    NASA Astrophysics Data System (ADS)

    Chatzistavrakidis, Athanasios; Jonke, Larisa; Lechtenfeld, Olaf

    2015-11-01

    The existence of genuinely non-geometric backgrounds, i.e. ones without geometric dual, is an important question in string theory. In this paper we examine this question from a sigma model perspective. First we construct a particular class of Courant algebroids as protobialgebroids with all types of geometric and non-geometric fluxes. For such structures we apply the mathematical result that any Courant algebroid gives rise to a 3D topological sigma model of the AKSZ type and we discuss the corresponding 2D field theories. It is found that these models are always geometric, even when both 2-form and 2-vector fields are neither vanishing nor inverse of one another. Taking a further step, we suggest an extended class of 3D sigma models, whose world volume is embedded in phase space, which allow for genuinely non-geometric backgrounds. Adopting the doubled formalism such models can be related to double field theory, albeit from a world sheet perspective.

  1. Automated adaptive inference of phenomenological dynamical models

    PubMed Central

    Daniels, Bryan C.; Nemenman, Ilya

    2015-01-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508

  2. Automated adaptive inference of phenomenological dynamical models

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan C.; Nemenman, Ilya

    2015-08-01

    Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.

  3. Model reference adaptive systems some examples.

    NASA Technical Reports Server (NTRS)

    Landau, I. D.; Sinner, E.; Courtiol, B.

    1972-01-01

    A direct design method is derived for several single-input single-output model reference adaptive systems (M.R.A.S.). The approach used helps to clarify the various steps involved in a design, which utilizes the hyperstability concept. An example of a multi-input, multi-output M.R.A.S. is also discussed. Attention is given to the problem of a series compensator. It is pointed out that a series compensator which contains derivative terms must generally be introduced in the adaptation mechanism in order to assure asymptotic hyperstability. Results obtained by the simulation of a M.R.A.S. on an analog computer are also presented.

  4. Colour matching of isoluminant samples and backgrounds: a model.

    PubMed

    Stanikunas, Rytis; Vaitkevicius, Henrikas; Kulikowski, Janus J; Murray, Ian J; Daugirdiene, Aušra

    2005-01-01

    A cone-opponent-based vector model is used to derive the activity in the red-green, yellow-blue, and achromatic channels during a sequential asymmetric colour-matching experiment. Forty Munsell samples, simulated under illuminant C, were matched with their appearance under eight test illuminants. The test samples and backgrounds were photometrically isoluminant with each other. According to the model, the orthogonality of the channels is revealed when test illuminants lie along either the red-green or yellow-blue cardinal axes. The red-green and yellow-blue outputs of the channels are described in terms of the hue of the sample. The fact that the three-channel model explains the data in a colour-matching experiment indicates that an early form of colour processing is mediated at a site where the three channels converge, probably the input layer of V1. PMID:16178154

  5. Peaks in the Cosmic Microwave Background: Flat versus Open Models

    NASA Astrophysics Data System (ADS)

    Barreiro, R. B.; Sanz, J. L.; Martínez-González, E.; Cayón, L.; Silk, Joseph

    1997-03-01

    We present properties of the peaks (maxima) of the microwave background anisotropies expected in flat and open cold dark matter models. We obtain analytical expressions of several topological descriptors: mean number of maxima and the probability distribution of the Gaussian curvature and the eccentricity of the peaks. These quantities are calculated as functions of the radiation power spectrum, assuming a Gaussian distribution of temperature anisotropies. We present results for angular resolutions ranging from 5' to 20' (antenna FWHM), scales that are relevant for the MAP and COBRAS/SAMBA space missions and the ground-based interferometer experiments. Our analysis also includes the effects of noise. We find that the number of peaks can discriminate between standard cold dark matter models and that the Gaussian curvature distribution provides a useful test for these various models, whereas the eccentricity distribution cannot distinguish between them.

  6. Adaptive Behaviour Assessment System: Indigenous Australian Adaptation Model (ABAS: IAAM)

    ERIC Educational Resources Information Center

    du Plessis, Santie

    2015-01-01

    The study objectives were to develop, trial and evaluate a cross-cultural adaptation of the Adaptive Behavior Assessment System-Second Edition Teacher Form (ABAS-II TF) ages 5-21 for use with Indigenous Australian students ages 5-14. This study introduced a multiphase mixed-method design with semi-structured and informal interviews, school…

  7. Adapting overcomplete wavelet models to natural images

    NASA Astrophysics Data System (ADS)

    Sallee, Phil; Olshausen, Bruno A.

    2003-11-01

    Overcomplete wavelet representations have become increasingly popular for their ability to provide highly sparse and robust descriptions of natural signals. We describe a method for incorporating an overcomplete wavelet representation as part of a statistical model of images which includes a sparse prior distribution over the wavelet coefficients. The wavelet basis functions are parameterized by a small set of 2-D functions. These functions are adapted to maximize the average log-likelihood of the model for a large database of natural images. When adapted to natural images, these functions become selective to different spatial orientations, and they achieve a superior degree of sparsity on natural images as compared with traditional wavelet bases. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Inference with the learned model is demonstrated for applications such as denoising, with results that compare favorably with other methods.

  8. Modeling Background Attenuation by Sample Matrix in Gamma Spectrometric Analyses

    SciTech Connect

    Bastos, Rodrigo O.; Appoloni, Carlos R.

    2008-08-07

    In laboratory gamma spectrometric analyses, the procedures for estimating background usually overestimate it. If an empty container similar to that used to hold samples is measured, it does not consider the background attenuation by the sample matrix. If a 'blank' sample is measured, the hypothesis that this sample will be free of radionuclides is generally not true. The activity of this 'blank' sample is frequently sufficient to mask or to overwhelm the effect of attenuation so that the background remains overestimated. In order to overcome this problem, a model was developed to obtain the attenuated background from the spectrum acquired with the empty container. Beyond reasonable hypotheses, the model presumes knowledge of the linear attenuation coefficient of the samples and its dependence on photon energy and sample density. An evaluation of the effects of this model on the Lowest Limit of Detection (LLD) is presented for geological samples placed in cylindrical containers that completely cover the top of an HPGe detector that has a 66% relative efficiency. The results are presented for energies in the range of 63 to 2614 keV, for sample densities varying from 1.5 to 2.5 g·cm^-3, and for heights of the material on the detector of 2 cm and 5 cm. For a sample density of 2.0 g·cm^-3 and a 2 cm height, the method allowed a lowering of the LLD by 3.4% for the 1460 keV energy of ^40K, 3.9% for the 911 keV energy of ^228Ac, 4.5% for the 609 keV energy of ^214Bi, and 8.3% for the 92 keV energy of ^234Th. For a sample density of 1.75 g·cm^-3 and a 5 cm height, the method indicates a lowering of 6.5%, 7.4%, 8.3% and 12.9% of the LLD for the same respective energies.
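
    A toy version of the central idea, attenuating the empty-container background spectrum with a Beer-Lambert factor before subtraction, is sketched below; the mass attenuation coefficients, density and single path length are placeholders, whereas the actual model accounts for the sample and detector geometry.

        import numpy as np

        # Beer-Lambert attenuation of the background spectrum by the sample matrix:
        # exp(-mu(E) * rho * x). Values below are illustrative only.
        def attenuated_background(counts, mu_rho_cm2_g, density_g_cm3, thickness_cm):
            """counts: background counts per energy bin measured with the empty container;
            mu_rho_cm2_g: mass attenuation coefficient per bin (cm^2/g)."""
            counts = np.asarray(counts, dtype=float)
            mu_rho = np.asarray(mu_rho_cm2_g, dtype=float)
            return counts * np.exp(-mu_rho * density_g_cm3 * thickness_cm)

        bg_counts = np.array([500.0, 300.0, 120.0, 40.0])   # e.g. bins near 92, 609, 911, 1460 keV
        mu_rho = np.array([0.20, 0.08, 0.07, 0.05])         # illustrative coefficients only
        print(attenuated_background(bg_counts, mu_rho, density_g_cm3=2.0, thickness_cm=2.0))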

  9. Individual-based model for quorum sensing with background flow.

    PubMed

    Uecker, Hannes; Müller, Johannes; Hense, Burkhard A

    2014-07-01

    Quorum sensing is a widespread mode of cell-cell communication among bacteria in which cells release a signalling substance at a low rate. The concentration of this substance allows the bacteria to gain information about population size or spatial confinement. We consider a model for N cells which communicate with each other via a signalling substance in a diffusive medium with a background flow. The model consists of an initial boundary value problem for a parabolic PDE describing the exterior concentration u of the signalling substance, coupled with N ODEs for the masses a_i of the substance within each cell. The cells are balls of radius R in R^3, and under some scaling assumptions we formally derive an effective system of N ODEs describing the behaviour of the cells. The reduced system is then used to study the effect of flow on communication in general, and in particular for a number of geometric configurations. PMID:24849771
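
    In the spirit of such a reduced system, the toy below integrates N coupled ODEs in which each cell's production responds to the exterior concentration it sees through a diffusive 1/r coupling kernel; the kernel, rates and feedback term are illustrative and do not include the background flow or the paper's derived coefficients.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy reduced quorum-sensing system: N cells exchange signal with the
        # exterior; the exterior concentration each cell sees is a superposition of
        # the contributions of all cells through a capped 1/(4*pi*r) kernel.
        N = 5
        rng = np.random.default_rng(7)
        pos = rng.uniform(0, 10, size=(N, 3))              # cell centres
        dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        coupling = 1.0 / (4 * np.pi * np.maximum(dist, 1.0))   # self-term capped at r = 1
        d_out, k_base, k_fb, K = 0.5, 0.1, 1.0, 2.0        # exchange, basal, feedback, half-saturation

        def rhs(t, a):
            u_at_cells = coupling @ a                      # exterior concentration seen by each cell
            production = k_base + k_fb * u_at_cells**2 / (K**2 + u_at_cells**2)
            return production - d_out * (a - u_at_cells)   # production minus net outflow

        sol = solve_ivp(rhs, (0.0, 50.0), y0=np.full(N, 0.01), rtol=1e-6)
        print("final intracellular masses:", np.round(sol.y[:, -1], 3))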

  10. Adaptive Numerical Algorithms in Space Weather Modeling

    NASA Technical Reports Server (NTRS)

    Toth, Gabor; vanderHolst, Bart; Sokolov, Igor V.; DeZeeuw, Darren; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Nakib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2010-01-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solar wind Roe Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamics (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit numerical

  11. Adaptive numerical algorithms in space weather modeling

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; van der Holst, Bart; Sokolov, Igor V.; De Zeeuw, Darren L.; Gombosi, Tamas I.; Fang, Fang; Manchester, Ward B.; Meng, Xing; Najib, Dalal; Powell, Kenneth G.; Stout, Quentin F.; Glocer, Alex; Ma, Ying-Juan; Opher, Merav

    2012-02-01

    Space weather describes the various processes in the Sun-Earth system that present danger to human health and technology. The goal of space weather forecasting is to provide an opportunity to mitigate these negative effects. Physics-based space weather modeling is characterized by disparate temporal and spatial scales as well as by different relevant physics in different domains. A multi-physics system can be modeled by a software framework comprising several components. Each component corresponds to a physics domain, and each component is represented by one or more numerical models. The publicly available Space Weather Modeling Framework (SWMF) can execute and couple together several components distributed over a parallel machine in a flexible and efficient manner. The framework also allows resolving disparate spatial and temporal scales with independent spatial and temporal discretizations in the various models. Several of the computationally most expensive domains of the framework are modeled by the Block-Adaptive Tree Solarwind Roe-type Upwind Scheme (BATS-R-US) code that can solve various forms of the magnetohydrodynamic (MHD) equations, including Hall, semi-relativistic, multi-species and multi-fluid MHD, anisotropic pressure, radiative transport and heat conduction. Modeling disparate scales within BATS-R-US is achieved by a block-adaptive mesh both in Cartesian and generalized coordinates. Most recently we have created a new core for BATS-R-US: the Block-Adaptive Tree Library (BATL) that provides a general toolkit for creating, load balancing and message passing in a 1, 2 or 3 dimensional block-adaptive grid. We describe the algorithms of BATL and demonstrate its efficiency and scaling properties for various problems. BATS-R-US uses several time-integration schemes to address multiple time-scales: explicit time stepping with fixed or local time steps, partially steady-state evolution, point-implicit, semi-implicit, explicit/implicit, and fully implicit

  12. Adaptive Control with Reference Model Modification

    NASA Technical Reports Server (NTRS)

    Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This paper presents a modification of the conventional model reference adaptive control (MRAC) architecture in order to improve the transient performance of the input and output signals of uncertain systems. A simple modification of the reference model is proposed by feeding back the tracking error signal. It is shown that the proposed approach guarantees tracking of the given reference command and of the reference control signal (the one that would be designed if the system were known) not only asymptotically but also in transient. Moreover, it prevents the generation of high-frequency oscillations, which are unavoidable in conventional MRAC systems for large adaptation rates. The provided design guideline makes it possible to track reference commands of any magnitude from any initial position without re-tuning. The benefits of the method are demonstrated with a simulation example.
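
    A hedged scalar sketch of the idea, a reference model modified by tracking-error feedback alongside a standard Lyapunov-based adaptation law, is shown below; the plant, gains and reference command are illustrative, not the paper's design.

        # Scalar plant xdot = a*x + b*u with a unknown to the controller; the reference
        # model is modified by feeding back the tracking error e = x - x_m:
        #     xdot_m = a_m*x_m + b_m*r + k_e*e.
        # All numerical values are illustrative.
        dt, T = 1e-3, 10.0
        a_true, b = -1.0, 1.0                  # plant (stable here, but unknown to the controller)
        a_m, b_m = -4.0, 4.0                   # desired reference dynamics
        k_e, gamma = 5.0, 20.0                 # error-feedback and adaptation gains
        x = x_m = 0.0
        theta = 0.0                            # adaptive feedback gain estimate

        for k in range(int(T / dt)):
            r = 1.0 if (k * dt) % 4 < 2 else -1.0          # square-wave reference command
            e = x - x_m
            u = theta * x + (b_m / b) * r                  # adaptive feedback + feedforward
            x += dt * (a_true * x + b * u)
            x_m += dt * (a_m * x_m + b_m * r + k_e * e)    # modified (error-fed) reference model
            theta += dt * (-gamma * e * x)                 # Lyapunov-based adaptation law

        print(f"final tracking error {abs(x - x_m):.4f}, adapted gain {theta:.2f} "
              f"(ideal (a_m - a)/b = {(a_m - a_true) / b:.2f})")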

  13. Adaptive cyber-attack modeling system

    NASA Astrophysics Data System (ADS)

    Gonsalves, Paul G.; Dougherty, Edward T.

    2006-05-01

    The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides an ample opportunity not only for the nefarious exploits of lone wolf computer hackers, but for more systematic software attacks from organized entities. Much effort and focus has been placed on preventing and ameliorating network and OS attacks; a concomitant emphasis is required to address the protection of mission-critical software. Evaluation, verification, and validation (V&V) of typical software protection techniques and methodologies involves the use of a team of subject matter experts (SMEs) to mimic potential attackers or hackers. This manpower-intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the necessary multiple non-subjective analyses required to support quantifying software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition is supported via BN evidence propagation and model parameter learning.
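
    The Bayesian belief network machinery mentioned above can be illustrated with a toy fragment; the node structure (system access -> exploit success -> IDS alert), the probability tables, and the enumeration-based inference below are invented for illustration and are not taken from the described system.

      # Toy Bayesian-network fragment for a cyber-attack model (illustrative numbers).
      p_access = 0.3                                  # P(attacker gains access)
      p_exploit_given = {True: 0.6, False: 0.05}      # P(exploit success | access)
      p_alert_given = {True: 0.8, False: 0.1}         # P(IDS alert | exploit success)

      # Posterior P(exploit | alert observed) by enumeration over the hidden nodes.
      num = den = 0.0
      for access in (True, False):
          pa = p_access if access else 1 - p_access
          for exploit in (True, False):
              pe = p_exploit_given[access] if exploit else 1 - p_exploit_given[access]
              joint = pa * pe * p_alert_given[exploit]
              den += joint
              if exploit:
                  num += joint
      print(f"P(exploit | IDS alert) = {num / den:.3f}")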

  14. Adaptive human behavior in epidemiological models.

    PubMed

    Fenichel, Eli P; Castillo-Chavez, Carlos; Ceddia, M G; Chowell, Gerardo; Parra, Paula A Gonzalez; Hickling, Graham J; Holloway, Garth; Horan, Richard; Morin, Benjamin; Perrings, Charles; Springborn, Michael; Velazquez, Leticia; Villalobos, Cristina

    2011-04-12

    The science and management of infectious disease are entering a new stage. Increasingly public policy to manage epidemics focuses on motivating people, through social distancing policies, to alter their behavior to reduce contacts and reduce public disease risk. Person-to-person contacts drive human disease dynamics. People value such contacts and are willing to accept some disease risk to gain contact-related benefits. The cost-benefit trade-offs that shape contact behavior, and hence the course of epidemics, are often only implicitly incorporated in epidemiological models. This approach creates difficulty in parsing out the effects of adaptive behavior. We use an epidemiological-economic model of disease dynamics to explicitly model the trade-offs that drive person-to-person contact decisions. Results indicate that including adaptive human behavior significantly changes the predicted course of epidemics and that this inclusion has implications for parameter estimation and interpretation and for the development of social distancing policies. Acknowledging adaptive behavior requires a shift in thinking about epidemiological processes and parameters. PMID:21444809
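
    A minimal sketch of how adaptive contact behavior can be grafted onto a standard SIR model is given below; the contact-reduction function and all parameter values are assumptions chosen for illustration, not the epidemiological-economic model of the paper.

      # Illustrative SIR sketch: susceptibles cut their contact rate as prevalence rises,
      # a crude stand-in for the utility-maximizing behavior described in the abstract.
      dt, days = 0.1, 300
      beta0, gamma, k = 0.4, 0.1, 20.0     # baseline transmission, recovery rate, risk aversion
      S, I, R = 0.999, 0.001, 0.0

      for _ in range(int(days / dt)):
          c = 1.0 / (1.0 + k * I)          # adaptive contact multiplier (assumed form)
          new_inf = beta0 * c * S * I
          recov = gamma * I
          S += dt * (-new_inf)
          I += dt * (new_inf - recov)
          R += dt * recov

      print(f"final epidemic size with adaptive behavior: {R:.3f}")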

  15. Adaptive human behavior in epidemiological models

    PubMed Central

    Fenichel, Eli P.; Castillo-Chavez, Carlos; Ceddia, M. G.; Chowell, Gerardo; Parra, Paula A. Gonzalez; Hickling, Graham J.; Holloway, Garth; Horan, Richard; Morin, Benjamin; Perrings, Charles; Springborn, Michael; Velazquez, Leticia; Villalobos, Cristina

    2011-01-01

    The science and management of infectious disease are entering a new stage. Increasingly public policy to manage epidemics focuses on motivating people, through social distancing policies, to alter their behavior to reduce contacts and reduce public disease risk. Person-to-person contacts drive human disease dynamics. People value such contacts and are willing to accept some disease risk to gain contact-related benefits. The cost–benefit trade-offs that shape contact behavior, and hence the course of epidemics, are often only implicitly incorporated in epidemiological models. This approach creates difficulty in parsing out the effects of adaptive behavior. We use an epidemiological–economic model of disease dynamics to explicitly model the trade-offs that drive person-to-person contact decisions. Results indicate that including adaptive human behavior significantly changes the predicted course of epidemics and that this inclusion has implications for parameter estimation and interpretation and for the development of social distancing policies. Acknowledging adaptive behavior requires a shift in thinking about epidemiological processes and parameters. PMID:21444809

  16. Modeling and adaptive control of acoustic noise

    NASA Astrophysics Data System (ADS)

    Venugopal, Ravinder

    Active noise control is a problem that receives significant attention in many areas including aerospace and manufacturing. The advent of inexpensive high performance processors has made it possible to implement real-time control algorithms to effect active noise control. Both fixed-gain and adaptive methods may be used to design controllers for this problem. For fixed-gain methods, it is necessary to obtain a mathematical model of the system to design controllers. In addition, models help us gain phenomenological insights into the dynamics of the system. Models are also necessary to perform numerical simulations. However, models are often inadequate for the purpose of controller design because they involve parameters that are difficult to determine and also because there are always unmodeled effects. This fact motivates the use of adaptive algorithms for control since adaptive methods usually require significantly less model information than fixed-gain methods. The first part of this dissertation deals with derivation of a state space model of a one-dimensional acoustic duct. Two types of actuation, namely, a side-mounted speaker (interior control) and an end-mounted speaker (boundary control), are considered. The techniques used to derive the model of the acoustic duct are extended to the problem of fluid surface wave control. A state space model of small-amplitude surface waves of a fluid in a rectangular container is derived, and two types of control methods, namely, surface pressure control and map actuator based control, are proposed and analyzed. The second part of this dissertation deals with the development of an adaptive disturbance rejection algorithm that is applied to the problem of active noise control. ARMARKOV models, which have the same structure as predictor models, are used for system representation. The algorithm requires knowledge of only one path of the system, from control to performance, and does not require a measurement of the disturbance nor

  17. Distinguishing between inflationary models from cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Tsujikawa, Shinji

    2014-06-01

    In this paper, inflationary cosmology is reviewed, paying particular attention to its observational signatures associated with large-scale density perturbations generated from quantum fluctuations. In the most general scalar-tensor theories with second-order equations of motion, we derive the scalar spectral index n_s, the tensor-to-scalar ratio r, and the nonlinear estimator f_{NL} of primordial non-Gaussianities to confront models with observations of cosmic microwave background (CMB) temperature anisotropies. Our analysis includes models such as potential-driven slow-roll inflation, k-inflation, Starobinsky inflation, and Higgs inflation with non-minimal/derivative/Galileon couplings. We constrain a host of inflationary models by using the Planck data combined with other measurements to find models most favored observationally in the current literature. We also study anisotropic inflation based on a scalar coupling with a vector (or two-form) field and discuss its observational signatures appearing in the two-point and three-point correlation functions of scalar and tensor perturbations.
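
    For the special case of potential-driven slow-roll inflation (one of the model classes surveyed above), the observables reduce to textbook slow-roll expressions; the LaTeX block below records only those standard relations, not the general scalar-tensor formulas derived in the review.

      % Standard single-field slow-roll relations (potential-driven inflation only).
      \epsilon_V \equiv \frac{M_{\rm pl}^2}{2}\left(\frac{V'}{V}\right)^2 , \qquad
      \eta_V \equiv M_{\rm pl}^2 \, \frac{V''}{V} ,
      \qquad
      n_s \simeq 1 - 6\epsilon_V + 2\eta_V , \qquad
      r \simeq 16\epsilon_V ,
      % with f_{NL} suppressed by the slow-roll parameters for canonical single-field models.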

  18. Characterization and modeling of a low background HPGe detector

    NASA Astrophysics Data System (ADS)

    Dokania, N.; Singh, V.; Mathimalar, S.; Nanal, V.; Pal, S.; Pillay, R. G.

    2014-05-01

    A high efficiency, low background counting setup has been made at TIFR consisting of a special HPGe detector (~ 70 %) surrounded by a low activity copper+lead shield. Detailed measurements are performed with point and extended geometry sources to obtain a complete response of the detector. An effective model of the detector has been made with GEANT4 based Monte Carlo simulations which agrees with experimental data within 5%. This setup will be used for qualification and selection of radio-pure materials to be used in a cryogenic bolometer for the study of Neutrinoless Double Beta Decay in 124Sn as well as for other rare event studies. Using this setup, radio-impurities in the rock sample from India-based Neutrino Observatory (INO) site have been estimated.

  19. An adaptive contextual quantum language model

    NASA Astrophysics Data System (ADS)

    Li, Jingfei; Zhang, Peng; Song, Dawei; Hou, Yuexian

    2016-08-01

    User interactions in a search system represent a rich source of implicit knowledge about the user's cognitive state and information need that continuously evolves over time. Despite massive efforts to exploit and incorporate this implicit knowledge in information retrieval, it is still a challenge to effectively capture the term dependencies and the user's dynamic information need (reflected by query modifications) in the context of user interaction. To tackle these issues, motivated by the recent Quantum Language Model (QLM), we develop a QLM-based retrieval model for session search, which naturally incorporates the complex term dependencies occurring in the user's historical queries and clicked documents with density matrices. In order to capture the dynamic information within a user's search session, we propose a density matrix transformation framework and further develop an adaptive QLM ranking model. Extensive comparative experiments show the effectiveness of our session quantum language models.
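
    The density-matrix representation at the heart of a quantum language model can be sketched in a few lines; the toy vocabulary, the event weights, and the trace-based scoring below are illustrative assumptions and do not reproduce the paper's estimation or transformation framework.

      import numpy as np

      # Toy QLM sketch: terms and term dependencies become projectors, the session
      # context is their convex mixture (a density matrix), and relevance is a trace.
      vocab = ["solar", "wind", "model", "adaptive"]          # assumed toy vocabulary
      dim = len(vocab)

      def projector(indices, weights):
          """Rank-1 projector for a (possibly compound) term over the vocabulary."""
          v = np.zeros(dim)
          for i, w in zip(indices, weights):
              v[i] = w
          v /= np.linalg.norm(v)
          return np.outer(v, v)

      events = [(projector([0], [1.0]), 0.4),                 # query term "solar"
                (projector([1], [1.0]), 0.3),                 # query term "wind"
                (projector([0, 1], [1.0, 1.0]), 0.3)]         # dependency "solar wind"

      rho = sum(w * P for P, w in events)                     # session density matrix
      rho /= np.trace(rho)

      doc = projector([0, 1], [1.0, 0.5])                     # candidate document projector
      score = np.trace(rho @ doc)                             # fidelity-like relevance score
      print(f"relevance score: {score:.3f}")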

  20. Synaptic dynamics: linear model and adaptation algorithm.

    PubMed

    Yousefi, Ali; Dibazar, Alireza A; Berger, Theodore W

    2014-08-01

    In this research, temporal processing in brain neural circuitries is addressed by a dynamic model of synaptic connections in which the synapse model accounts for both pre- and post-synaptic processes determining its temporal dynamics and strength. Neurons, which are excited by the post-synaptic potentials of hundreds of synapses, build the computational engine capable of processing dynamic neural stimuli. Temporal dynamics in neural models with dynamic synapses will be analyzed, and learning algorithms for synaptic adaptation of neural networks with hundreds of synaptic connections are proposed. The paper starts by introducing a linear approximate model for the temporal dynamics of synaptic transmission. The proposed linear model substantially simplifies the analysis and training of spiking neural networks. Furthermore, it is capable of replicating the synaptic response of the non-linear facilitation-depression model with an accuracy better than 92.5%. In the second part of the paper, a supervised spike-in-spike-out learning rule for synaptic adaptation in dynamic synapse neural networks (DSNN) is proposed. The proposed learning rule is a biologically plausible process, and it is capable of simultaneously adjusting both pre- and post-synaptic components of individual synapses. The last section of the paper starts with a rigorous analysis of the learning algorithm in a system identification task with hundreds of synaptic connections, which confirms the learning algorithm's accuracy, repeatability and scalability. The DSNN is utilized to predict the spiking activity of cortical neurons and in pattern recognition tasks. The DSNN model is demonstrated to be a generative model capable of producing different cortical neuron spiking patterns and CA1 pyramidal neuron recordings. A single-layer DSNN classifier on a benchmark pattern recognition task outperforms a 2-Layer Neural Network and GMM classifiers while having fewer free parameters and
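
    For orientation, the nonlinear facilitation-depression dynamics that the proposed linear model approximates can be written as a short simulation; the Tsodyks-Markram-style update and the parameter values below are illustrative assumptions, not the paper's linear model or its learning rule.

      import numpy as np

      # Illustrative facilitation-depression synapse (not the paper's linear model).
      tau_f, tau_d, U = 0.6, 0.2, 0.2       # facilitation/depression time constants, baseline release
      dt, T = 1e-3, 2.0
      spikes = np.zeros(int(T / dt))
      spikes[::100] = 1.0                   # presynaptic spikes at 10 Hz

      u, x = U, 1.0                         # release probability and available resources
      psp = []
      for s in spikes:
          u += dt * (U - u) / tau_f         # recovery of u toward its baseline
          x += dt * (1.0 - x) / tau_d       # recovery of resources
          if s:
              u += U * (1.0 - u)            # facilitation jump on a spike
              psp.append(u * x)             # postsynaptic efficacy of this spike
              x -= u * x                    # depression: resources consumed
      print("per-spike efficacies:", np.round(psp[:5], 3))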

  1. Effect of a care plan based on Roy adaptation model biological dimension on stroke patients’ physiologic adaptation level

    PubMed Central

    Alimohammadi, Nasrollah; Maleki, Bibi; Shahriari, Mohsen; Chitsaz, Ahmad

    2015-01-01

    Background: Stroke is a stressful event with several functional, physical, psychological, social, and economic problems that affect individuals’ different living balances. With coping strategies, patients try to control these problems and return to their natural life. The aim of this study is to investigate the effect of a care plan based on Roy adaptation model biological dimension on stroke patients’ physiologic adaptation level. Materials and Methods: This study is a clinical trial in which 50 patients, affected by brain stroke and being admitted in the neurology ward of Kashani and Alzahra hospitals, were randomly assigned to control and study groups in Isfahan in 2013. Roy adaptation model care plan was administered in biological dimension in the form of four sessions and phone call follow-ups for 1 month. The forms related to Roy adaptation model were completed before and after intervention in the two groups. Chi-square test and t-test were used to analyze the data through SPSS 18. Results: There was a significant difference in mean score of adaptation in physiological dimension in the study group after intervention (P < 0.001) compared to before intervention. Comparison of the mean scores of changes of adaptation in the patients affected by brain stroke in the study and control groups showed a significant increase in physiological dimension in the study group by 47.30 after intervention (P < 0.001). Conclusions: The results of study showed that Roy adaptation model biological dimension care plan can result in an increase in adaptation in patients with stroke in physiological dimension. Nurses can use this model for increasing patients’ adaptation. PMID:25878708

  2. Adaptive neuro-fuzzy inference system for classification of background EEG signals from ESES patients and controls.

    PubMed

    Yang, Zhixian; Wang, Yinghua; Ouyang, Gaoxiang

    2014-01-01

    Background electroencephalography (EEG), recorded with scalp electrodes, in children with electrical status epilepticus during slow-wave sleep (ESES) syndrome and control subjects has been analyzed. We considered 10 ESES patients, all right-handed and aged 3-9 years. The 10 control individuals had the same characteristics of the ESES ones but presented a normal EEG. Recordings were undertaken in the awake and relaxed states with their eyes open. The complexity of background EEG was evaluated using the permutation entropy (PE) and sample entropy (SampEn) in combination with the ANOVA test. It can be seen that the entropy measures of EEG are significantly different between the ESES patients and normal control subjects. Then, a classification framework based on entropy measures and adaptive neuro-fuzzy inference system (ANFIS) classifier is proposed to distinguish ESES and normal EEG signals. The results are promising and a classification accuracy of about 89% is achieved. PMID:24790547
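
    Permutation entropy, one of the two complexity measures used above, has a compact definition that can be sketched directly; the helper below and the toy signals are illustrative, with the embedding order and delay left as free parameters.

      import numpy as np
      from itertools import permutations
      from math import factorial, log

      def permutation_entropy(x, order=3, delay=1):
          """Normalized permutation entropy (Bandt-Pompe) of a 1-D signal."""
          x = np.asarray(x)
          n = len(x) - (order - 1) * delay
          counts = {p: 0 for p in permutations(range(order))}
          for i in range(n):
              window = x[i:i + order * delay:delay]
              counts[tuple(np.argsort(window))] += 1
          probs = np.array([c for c in counts.values() if c > 0], dtype=float) / n
          return -np.sum(probs * np.log(probs)) / log(factorial(order))

      rng = np.random.default_rng(0)
      print(permutation_entropy(rng.standard_normal(2000)))   # noise: close to 1
      print(permutation_entropy(np.arange(2000.0)))           # monotone ramp: 0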

  3. Adaptive Neuro-Fuzzy Inference System for Classification of Background EEG Signals from ESES Patients and Controls

    PubMed Central

    Yang, Zhixian; Wang, Yinghua; Ouyang, Gaoxiang

    2014-01-01

    Background electroencephalography (EEG), recorded with scalp electrodes, in children with electrical status epilepticus during slow-wave sleep (ESES) syndrome and control subjects has been analyzed. We considered 10 ESES patients, all right-handed and aged 3–9 years. The 10 control individuals had the same characteristics of the ESES ones but presented a normal EEG. Recordings were undertaken in the awake and relaxed states with their eyes open. The complexity of background EEG was evaluated using the permutation entropy (PE) and sample entropy (SampEn) in combination with the ANOVA test. It can be seen that the entropy measures of EEG are significantly different between the ESES patients and normal control subjects. Then, a classification framework based on entropy measures and adaptive neuro-fuzzy inference system (ANFIS) classifier is proposed to distinguish ESES and normal EEG signals. The results are promising and a classification accuracy of about 89% is achieved. PMID:24790547

  4. Model reference adaptive control of robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo

    1991-01-01

    This project presents the results of controlling two types of robots using new Command Generator Tracker (CGT) based Direct Model Reference Adaptive Control (MRAC) algorithms. Two mathematical models were used to represent a single-link, flexible joint arm and a Unimation PUMA 560 arm; and these were then controlled in simulation using different MRAC algorithms. Special attention was given to the performance of the algorithms in the presence of sudden changes in the robot load. Previously used CGT based MRAC algorithms had several problems. The original algorithm that was developed guaranteed asymptotic stability only for almost strictly positive real (ASPR) plants. This condition is very restrictive, since most systems do not satisfy this assumption. Further developments to the algorithm led to an expansion of the number of plants that could be controlled; however, a steady-state error was introduced in the response. These problems led to the introduction of some modifications to the algorithms so that they would be able to control a wider class of plants and at the same time would asymptotically track the reference model. This project presents the development of two algorithms that achieve the desired results and simulates the control of the two robots mentioned before. The results of the simulations are satisfactory and show that the problems stated above have been corrected in the new algorithms. In addition, the responses obtained show that the adaptively controlled processes are resistant to sudden changes in the load.

  5. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo

    2014-04-15

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  6. Adaptive model training system and method

    DOEpatents

    Bickford, Randall L; Palnitkar, Rahul M

    2014-11-18

    An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.

  7. Adaptive dynamics for physiologically structured population models.

    PubMed

    Durinx, Michel; Metz, J A J Hans; Meszéna, Géza

    2008-05-01

    We develop a systematic toolbox for analyzing the adaptive dynamics of multidimensional traits in physiologically structured population models with point equilibria (sensu Dieckmann et al. in Theor. Popul. Biol. 63:309-338, 2003). Firstly, we show how the canonical equation of adaptive dynamics (Dieckmann and Law in J. Math. Biol. 34:579-612, 1996), an approximation for the rate of evolutionary change in characters under directional selection, can be extended so as to apply to general physiologically structured population models with multiple birth states. Secondly, we show that the invasion fitness function (up to and including second order terms, in the distances of the trait vectors to the singularity) for a community of N coexisting types near an evolutionarily singular point has a rational form, which is model-independent in the following sense: the form depends on the strategies of the residents and the invader, and on the second order partial derivatives of the one-resident fitness function at the singular point. This normal form holds for Lotka-Volterra models as well as for physiologically structured population models with multiple birth states, in discrete as well as continuous time and can thus be considered universal for the evolutionary dynamics in the neighbourhood of singular points. Only in the case of one-dimensional trait spaces or when N = 1 can the normal form be reduced to a Taylor polynomial. Lastly we show, in the form of a stylized recipe, how these results can be combined into a systematic approach for the analysis of the (large) class of evolutionary models that satisfy the above restrictions. PMID:17943289
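
    For reference, the canonical equation being generalized here has the following standard (unstructured, one-dimensional trait) form; the notation follows Dieckmann and Law (1996), and this block does not reproduce the multiple-birth-state extension derived in the paper.

      % Canonical equation of adaptive dynamics (unstructured resident population):
      \frac{dx}{dt} \;=\; \tfrac{1}{2}\,\mu(x)\,\sigma^2(x)\,\hat{n}(x)\,
      \left.\frac{\partial s_x(y)}{\partial y}\right|_{y=x} ,
      % where \mu is the mutation probability per birth, \sigma^2 the mutational variance,
      % \hat{n}(x) the resident equilibrium population size, and s_x(y) the invasion
      % fitness of a rare mutant with trait y in the environment set by resident x.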

  8. Gravitational wave background from Standard Model physics: qualitative features

    SciTech Connect

    Ghiglieri, J.; Laine, M.

    2015-07-16

    Because of physical processes ranging from microscopic particle collisions to macroscopic hydrodynamic fluctuations, any plasma in thermal equilibrium emits gravitational waves. For the largest wavelengths the emission rate is proportional to the shear viscosity of the plasma. In the Standard Model at T>160 GeV, the shear viscosity is dominated by the most weakly interacting particles, right-handed leptons, and is relatively large. We estimate the order of magnitude of the corresponding spectrum of gravitational waves. Even though at small frequencies (corresponding to the sub-Hz range relevant for planned observatories such as eLISA) this background is tiny compared with that from non-equilibrium sources, the total energy carried by the high-frequency part of the spectrum is non-negligible if the production continues for a long time. We suggest that this may constrain (weakly) the highest temperature of the radiation epoch. Observing the high-frequency part directly sets a very ambitious goal for future generations of GHz-range detectors.

  9. Latest inflation model constraints from cosmic microwave background measurements: Addendum

    SciTech Connect

    Kinney, William H.; Kolb, Edward W.; Melchiorri, Alessandro; Riotto, Antonio

    2008-10-15

    In this addendum to Phys. Rev. D 74, 023502 (2006), we present an update of cosmological constraints on single-field inflation in light of the Wilkinson Microwave Anisotropy Probe satellite mission five-year results (WMAP5). We find that the cosmic microwave background data are quite consistent with a Harrison-Zel'dovich primordial spectrum with no running and zero tensor amplitude. We find that the three main conclusions of our analysis of the WMAP three-year data (WMAP3) are consistent with the WMAP5 data: (1) the Harrison-Zel'dovich model is within the 95% confidence level contours; (2) there is no evidence for running of the spectral index of scalar perturbations; (3) from the WMAP5 data alone, potentials of the form V ∝ φ^p are consistent with the data for p=2 and are ruled out for p=4. Furthermore, consistent with our WMAP3 analysis, we find no evidence for primordial tensor perturbations, this time with a 95% confidence upper limit of r<0.4 for the WMAP5 data alone, and r<0.35 for the WMAP5 data taken in combination with the Arcminute Cosmology Bolometer Array (ACBAR)

  10. Complex interplay between neutral and adaptive evolution shaped differential genomic background and disease susceptibility along the Italian peninsula

    PubMed Central

    Sazzini, Marco; Gnecchi Ruscone, Guido Alberto; Giuliani, Cristina; Sarno, Stefania; Quagliariello, Andrea; De Fanti, Sara; Boattini, Alessio; Gentilini, Davide; Fiorito, Giovanni; Catanoso, Mariagrazia; Boiardi, Luigi; Croci, Stefania; Macchioni, Pierluigi; Mantovani, Vilma; Di Blasio, Anna Maria; Matullo, Giuseppe; Salvarani, Carlo; Franceschi, Claudio; Pettener, Davide; Garagnani, Paolo; Luiselli, Donata

    2016-01-01

    The Italian peninsula has long represented a natural hub for human migrations across the Mediterranean area, being involved in several prehistoric and historical population movements. Coupled with a patchy environmental landscape entailing different ecological/cultural selective pressures, this might have produced peculiar patterns of population structure and local adaptations responsible for the heterogeneous genomic background of present-day Italians. To disentangle this complex scenario, genome-wide data from 780 Italian individuals were generated and set into the context of European/Mediterranean genomic diversity by comparison with genotypes from 50 populations. To maximize the possibility of pinpointing functional genomic regions that have played adaptive roles during Italian natural history, our survey also included ~250,000 exomic markers and ~20,000 coding/regulatory variants with well-established clinical relevance. This enabled fine-grained dissection of Italian population structure through the identification of clusters of genetically homogeneous provinces and of genomic regions underlying their local adaptations. Description of such patterns disclosed crucial implications for understanding differential susceptibility to some inflammatory/autoimmune disorders, coronary artery disease and type 2 diabetes of diverse Italian subpopulations, suggesting the evolutionary causes that made some of them particularly exposed to the metabolic and immune challenges imposed by dietary and lifestyle shifts that involved western societies in the last centuries. PMID:27582244

  11. Complex interplay between neutral and adaptive evolution shaped differential genomic background and disease susceptibility along the Italian peninsula.

    PubMed

    Sazzini, Marco; Gnecchi Ruscone, Guido Alberto; Giuliani, Cristina; Sarno, Stefania; Quagliariello, Andrea; De Fanti, Sara; Boattini, Alessio; Gentilini, Davide; Fiorito, Giovanni; Catanoso, Mariagrazia; Boiardi, Luigi; Croci, Stefania; Macchioni, Pierluigi; Mantovani, Vilma; Di Blasio, Anna Maria; Matullo, Giuseppe; Salvarani, Carlo; Franceschi, Claudio; Pettener, Davide; Garagnani, Paolo; Luiselli, Donata

    2016-01-01

    The Italian peninsula has long represented a natural hub for human migrations across the Mediterranean area, being involved in several prehistoric and historical population movements. Coupled with a patchy environmental landscape entailing different ecological/cultural selective pressures, this might have produced peculiar patterns of population structure and local adaptations responsible for the heterogeneous genomic background of present-day Italians. To disentangle this complex scenario, genome-wide data from 780 Italian individuals were generated and set into the context of European/Mediterranean genomic diversity by comparison with genotypes from 50 populations. To maximize the possibility of pinpointing functional genomic regions that have played adaptive roles during Italian natural history, our survey also included ~250,000 exomic markers and ~20,000 coding/regulatory variants with well-established clinical relevance. This enabled fine-grained dissection of Italian population structure through the identification of clusters of genetically homogeneous provinces and of genomic regions underlying their local adaptations. Description of such patterns disclosed crucial implications for understanding differential susceptibility to some inflammatory/autoimmune disorders, coronary artery disease and type 2 diabetes of diverse Italian subpopulations, suggesting the evolutionary causes that made some of them particularly exposed to the metabolic and immune challenges imposed by dietary and lifestyle shifts that involved western societies in the last centuries. PMID:27582244

  12. Plant adaptive behaviour in hydrological models (Invited)

    NASA Astrophysics Data System (ADS)

    van der Ploeg, M. J.; Teuling, R.

    2013-12-01

    Models that will be able to cope with future precipitation and evaporation regimes need a solid base that describes the essence of the processes involved [1]. Micro-behaviour in the soil-vegetation-atmosphere system may have a large impact on patterns emerging at larger scales. A complicating factor in the micro-behaviour is the constant interaction between vegetation and geology in which water plays a key role. The resilience of the coupled vegetation-soil system critically depends on its sensitivity to environmental changes. As a result of environmental changes vegetation may wither and die, but such environmental changes may also trigger gene adaptation. Constant exposure to environmental stresses, biotic or abiotic, influences plant physiology, gene adaptations, and flexibility in gene adaptation [2-6]. Gene expression as a result of different environmental conditions may profoundly impact drought responses across the same plant species. Differences in response to an environmental stress have consequences for the way species are currently being treated in models (single plant to global scale). In particular, model parameters that control root water uptake and plant transpiration are generally assumed to be a property of the plant functional type. Assigning plant functional types does not allow for local plant adaptation to be reflected in the model parameters, nor does it allow for correlations that might exist between root parameters and soil type. Models potentially provide a means to link root water uptake and transport to large scale processes (e.g. Rosnay and Polcher 1998, Feddes et al. 2001, Jung 2010), especially when powered with an integrated hydrological, ecological and physiological base. We explore the experimental evidence from natural vegetation to formulate possible alternative modeling concepts. [1] Seibert, J. 2000. Multi-criteria calibration of a conceptual runoff model using a genetic algorithm. Hydrology and Earth System Sciences 4(2): 215

  13. The Adaptive Calibration Model of stress responsivity

    PubMed Central

    Ellis, Bruce J.; Shirtcliff, Elizabeth A.

    2010-01-01

    This paper presents the Adaptive Calibration Model (ACM), an evolutionary-developmental theory of individual differences in the functioning of the stress response system. The stress response system has three main biological functions: (1) to coordinate the organism’s allostatic response to physical and psychosocial challenges; (2) to encode and filter information about the organism’s social and physical environment, mediating the organism’s openness to environmental inputs; and (3) to regulate the organism’s physiology and behavior in a broad range of fitness-relevant areas including defensive behaviors, competitive risk-taking, learning, attachment, affiliation and reproductive functioning. The information encoded by the system during development feeds back on the long-term calibration of the system itself, resulting in adaptive patterns of responsivity and individual differences in behavior. Drawing on evolutionary life history theory, we build a model of the development of stress responsivity across life stages, describe four prototypical responsivity patterns, and discuss the emergence and meaning of sex differences. The ACM extends the theory of biological sensitivity to context (BSC) and provides an integrative framework for future research in the field. PMID:21145350

  14. Adaptive Decision Modeling in Wisconsin River Islands

    NASA Astrophysics Data System (ADS)

    Gyawali, R.; Greb, S. R.; Watkins, D. W., Jr.; Block, P.

    2014-12-01

    River islands in Wisconsin are of high ecological significance. Understanding of climate change impacts and appropriate management alternatives in these islands is of great interest to all stakeholders, including the State of Wisconsin and the Bureau of Land Management (BLM), who have jurisdiction of these islands in WI. We use historical aerial imagery to describe island dynamics and river morphometry, such as changes in island shape and size. Relationships of these changes are explored with concurrent changes in river flow regimes. In an effort to integrate climate change uncertainties into decision making, we demonstrate an application of a multistage adaptive decision making framework to Wisconsin River islands, with a particular emphasis on flood management and planning. The framework comprises hydro-climatic ensemble projections generated from CMIP5 climate model outputs and multiple hydrologic models, including statistical and physically based approaches.

  15. Modeling Adaptable Business Service for Enterprise Collaboration

    NASA Astrophysics Data System (ADS)

    Boukadi, Khouloud; Vincent, Lucien; Burlat, Patrick

    Nowadays, a Service Oriented Architecture (SOA) seems to be one of the most promising paradigms for leveraging enterprise information systems. SOA creates opportunities for enterprises to provide value-added services tailored for on-demand enterprise collaboration. With the emergence and rapid development of Web services technologies, SOA is being paid increasing attention and has become widespread. In spite of the popularity of SOA, a standardized framework for modeling and implementing business services is still in progress. For the purpose of supporting these service-oriented solutions, we adopt a model-driven development approach. This paper outlines the Contextual Service Oriented Modeling and Analysis (CSOMA) methodology and presents UML profiles for the PIM level service-oriented architectural modeling, as well as its corresponding meta-models. The proposed PIM (Platform Independent Model) describes the business SOA at a high level of abstraction, regardless of the techniques involved in the application deployment. In addition, all essential service-specific concerns required for delivering quality and context-aware services are covered. Some of the advantages of this approach are that it is generic, and thus not closely tied to Web service technology, and that it specifically treats service adaptability during the design stage.

  16. Adaptive model reduction for nonsmooth discrete element simulation

    NASA Astrophysics Data System (ADS)

    Servin, Martin; Wang, Da

    2016-03-01

    A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also supports particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background, and using split sensors. The computational performance can be increased by 5-50 times for model reduction levels between 70 and 95%.

  17. DANA: distributed numerical and adaptive modelling framework.

    PubMed

    Rougier, Nicolas P; Fix, Jérémy

    2012-01-01

    DANA is a Python framework ( http://dana.loria.fr ) whose computational paradigm is grounded on the notion of a unit that is essentially a set of time-dependent values varying under the influence of other units via adaptive weighted connections. The evolution of a unit's values is defined by a set of differential equations expressed in standard mathematical notation, which greatly eases their definition. The units are organized into groups that form a model. Each unit can be connected to any other unit (including itself) using a weighted connection. The DANA framework offers a set of core objects needed to design and run such models. The modeler only has to define the equations of a unit as well as the equations governing the training of the connections. The simulation is completely transparent to the modeler and is handled by DANA. This allows DANA to be used for a wide range of numerical and distributed models as long as they fit the proposed framework (e.g. cellular automata, reaction-diffusion systems, decentralized neural networks, recurrent neural networks, kernel-based image processing, etc.). PMID:22994650

  18. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan

    2008-01-01

    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.

  19. Multiple model adaptive tracking of airborne targets

    NASA Astrophysics Data System (ADS)

    Norton, John E.

    1988-12-01

    Over the past ten years considerable work has been accomplished at the Air Force Institute of Technology (AFIT) towards improving the ability to track airborne targets. Motivated by the performance advantages in using established models of tracking environment variables within a Kalman filter, an advanced tracking algorithm has been developed based on adaptive estimation filter structures. A multiple-model bank of filters designed for various target dynamics, each accounting for atmospheric disturbance of the Forward Looking Infrared (FLIR) sensor data and mechanical vibrations of the sensor platform, outperforms a correlator tracker. The bank of filters provides the estimation capability to guide the pointing mechanisms of a shared aperture laser/sensor system. The data is provided to the tracking algorithm via an (8 x 8)-pixel tracking Field of View (FOV) from the FLIR image plane. Data at each sample period is compared by an enhanced correlator to a target template. These offsets serve as measurements for a bank of linear Kalman filters, which provide estimates of the target's location in azimuth and elevation coordinates based on a Gauss-Markov acceleration model and a reduced form of the atmospheric jitter model for the disturbance in the IR wavefront carrying future measurements.
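
    The bank-of-filters idea can be sketched as a minimal multiple-model adaptive estimator; the scalar random-walk target, the candidate process-noise levels, and the likelihood weighting below are illustrative assumptions and do not model the FLIR geometry, atmospheric jitter, or acceleration models described above.

      import numpy as np

      # Minimal MMAE sketch: scalar Kalman filters with different assumed maneuver
      # intensities, blended by their measurement likelihoods (illustrative values).
      rng = np.random.default_rng(1)
      dt, steps = 1.0, 100
      q_true, r_var = 0.5, 1.0
      q_models = [0.01, 0.5, 5.0]                   # candidate process-noise intensities

      x_true, zs = 0.0, []
      for _ in range(steps):                        # target position as a random walk
          x_true += rng.normal(0, np.sqrt(q_true * dt))
          zs.append(x_true + rng.normal(0, np.sqrt(r_var)))

      x_hat = np.zeros(len(q_models))
      P = np.ones(len(q_models))
      w = np.ones(len(q_models)) / len(q_models)    # model probabilities
      for z in zs:
          for i, q in enumerate(q_models):
              P[i] += q * dt                        # predict
              S = P[i] + r_var                      # innovation variance
              K = P[i] / S
              resid = z - x_hat[i]
              x_hat[i] += K * resid                 # update
              P[i] *= (1.0 - K)
              w[i] *= np.exp(-0.5 * resid**2 / S) / np.sqrt(S)
          w /= w.sum()                              # renormalize model probabilities

      print("model probabilities:", np.round(w, 3))
      print("blended position estimate:", round(float(np.dot(w, x_hat)), 2))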

  20. Adaptable Multivariate Calibration Models for Spectral Applications

    SciTech Connect

    THOMAS,EDWARD V.

    1999-12-20

    Multivariate calibration techniques have been used in a wide variety of spectroscopic situations. In many of these situations spectral variation can be partitioned into meaningful classes. For example, suppose that multiple spectra are obtained from each of a number of different objects wherein the level of the analyte of interest varies within each object over time. In such situations the total spectral variation observed across all measurements has two distinct general sources of variation: intra-object and inter-object. One might want to develop a global multivariate calibration model that predicts the analyte of interest accurately both within and across objects, including new objects not involved in developing the calibration model. However, this goal might be hard to realize if the inter-object spectral variation is complex and difficult to model. If the intra-object spectral variation is consistent across objects, an effective alternative approach might be to develop a generic intra-object model that can be adapted to each object separately. This paper contains recommendations for experimental protocols and data analysis in such situations. The approach is illustrated with an example involving the noninvasive measurement of glucose using near-infrared reflectance spectroscopy. Extensions to calibration maintenance and calibration transfer are discussed.

  1. Patterns of coral bleaching: Modeling the adaptive bleaching hypothesis

    USGS Publications Warehouse

    Ware, J.R.; Fautin, D.G.; Buddemeier, R.W.

    1996-01-01

    Bleaching - the loss of symbiotic dinoflagellates (zooxanthellae) from animals normally possessing them - can be induced by a variety of stresses, of which temperature has received the most attention. Bleaching is generally considered detrimental, but Buddemeier and Fautin have proposed that bleaching is also adaptive, providing an opportunity for recombining hosts with alternative algal types to form symbioses that might be better adapted to altered circumstances. Our mathematical model of this "adaptive bleaching hypothesis" provides insight into how animal-algae symbioses might react under various circumstances. It emulates many aspects of the coral bleaching phenomenon including: corals bleaching in response to a temperature only slightly greater than their average local maximum temperature; background bleaching; bleaching events being followed by bleaching of lesser magnitude in the subsequent one to several years; higher thermal tolerance of corals subject to environmental variability compared with those living under more constant conditions; patchiness in bleaching; and bleaching at temperatures that had not previously resulted in bleaching. © 1996 Elsevier Science B.V. All rights reserved.

  2. Stringy restrictions on the backgrounds in the heterotic sigma model

    SciTech Connect

    Sengupta, S. ); Majumdar, P. )

    1992-03-21

    This paper shows that for the heterotic string theory in the presence of arbitrary background gauge, gravitational and antisymmetric tensor fields, truncated by a general coordinate-dependent compactification a la Scherk-Schwarz, the requirement of 2D conformal invariance is so restrictive as to inhibit supersymmetry breaking with vanishing cosmological constant.

  3. Roy’s Adaptation Model-Guided Education and Promoting the Adaptation of Veterans With Lower Extremities Amputation

    PubMed Central

    Azarmi, Somayeh; Farsi, Zahra

    2015-01-01

    Background: Any defect in extremities of the body can affect different life aspects. Objectives: The purpose of this study was to investigate the effect of Roy’s adaptation model-guided education on promoting the adaptation of veterans with lower extremities amputation. Patients and Methods: In a randomized clinical trial, 60 veterans with lower extremities amputation referring to Kowsar Orthotics and Prosthetics Center of veterans clinic in Tehran, Iran, were recruited with convenience method and were randomly assigned to intervention and control groups during 2013 - 2014. For data collection, Roy’s adaptation model questionnaire was used. After completing the questionnaires in both groups, maladaptive behaviors were determined in the intervention group and an education program based on Roy’s adaptation model was implemented. After two months, both groups completed the questionnaires again. Data was analyzed with SPSS software. Results: Independent t-test showed statistically significant differences between the two groups in the post-test stage in terms of the total score of adaptation (P = 0.001) as well as physiologic (P = 0.0001) and role function modes (P = 0.004). The total score of adaptation (139.43 ± 5.45 to 127.54 ± 14.55, P = 0.006) as well as the scores of physiologic (60.26 ± 5.45 to 53.73 ± 7.79, P = 0.001) and role function (20.30 ± 2.42 to 18.13 ± 3.18, P = 0.01) modes in the intervention group significantly increased, whereas the scores of self-concept (42.10 ± 4.71 to 39.40 ± 5.67, P = 0.21) and interdependence (16.76 ± 2.22 to 16.30 ± 2.57, P = 0.44) modes in the two stages did not have a significant difference. Conclusions: Findings of this research indicated that the Roy’s adaptation model-guided education promoted the adaptation level of physiologic and role function modes in veterans with lower extremities amputation. However, this intervention could not promote adaptation in self-concept and interdependence modes. More

  4. A Roy model study of adapting to being HIV positive.

    PubMed

    Perrett, Stephanie E; Biley, Francis C

    2013-10-01

    Roy's adaptation model outlines a generic process of adaptation useful to nurses in any situation where a patient is facing change. To advance nursing practice, nursing theories and frameworks must be constantly tested and developed through research. This article describes how the results of a qualitative grounded theory study have been used to test components of the Roy adaptation model. A framework for "negotiating uncertainty" was the result of a grounded theory study exploring adaptation to HIV. This framework has been compared to the Roy adaptation model, strengthening concepts such as focal and contextual stimuli, Roy's definition of adaptation and her description of adaptive modes, while suggesting areas for further development including the role of perception. The comparison described in this article demonstrates the usefulness of qualitative research in developing nursing models, specifically highlighting opportunities to continue refining Roy's work. PMID:24085671

  5. A novel approach to model EPIC variable background

    NASA Astrophysics Data System (ADS)

    Marelli, M.; De Luca, A.; Salvetti, D.; Belfiore, A.; Pizzocaro, D.

    2016-06-01

    In the past years XMM-Newton has revolutionized our way of looking at the X-ray sky. With more than 200 Ms of exposure, it has allowed for numerous discoveries in every field of astronomy. Unfortunately, about 35% of the observing time is badly affected by soft proton flares, with the background increasing by orders of magnitude and hampering any classical analysis of field sources. One of the main aims of the EXTraS ("Exploring the X-ray Transient and variable Sky") project is to characterise the variability of XMM-Newton sources within each single observation, including periods of high background. This posed severe challenges. I will describe a novel approach that we implemented within the EXTraS project to produce background-subtracted light curves, which allows us to treat the case of very faint sources and very large proton flares. EXTraS light curves will soon be released to the community, together with new tools that will allow the user to reproduce EXTraS results, as well as to extend a similar analysis to future data. Results of this work (including an unprecedented characterisation of the soft proton phenomenon and instrument response) will also serve as a reference for future missions and will be particularly relevant for the Athena observatory.

  6. Adapting the ALP Model for Student and Institutional Needs

    ERIC Educational Resources Information Center

    Sides, Meredith

    2016-01-01

    With the increasing adoption of accelerated models of learning comes the necessary step of adapting these models to fit the unique needs of the student population at each individual institution. One such college adapted the ALP (Accelerated Learning Program) model and made specific changes to the target population, structure and scheduling, and…

  7. A Sharing Item Response Theory Model for Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Segall, Daniel O.

    2004-01-01

    A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains in instances where sharing occurs. The adaptive item selection…

  8. Image Discrimination Models for Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.

    2000-01-01

    This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendents and extensions. The application of image discrimination models to target detection will be described and results reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates and methods for elucidating those templates.

  9. TEAM MODEL FOR EVALUATING ALTERNATIVE ADAPTATION STRATEGIES

    EPA Science Inventory

    Advances in the scientific literature have focused attention on the need to develop adaptation strategies to reduce the risks, and take advantage of the opportunities, posed by climate change and climate variability. Adaptation needs to be considered as part of any response plan....

  10. Adaptive h-refinement for reduced-order models

    DOE PAGES Beta

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
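
    The splitting step can be sketched in a few lines: a basis vector is divided into children with disjoint support defined by k-means clustering of the state variables; the random snapshot matrix, the use of scikit-learn's KMeans, and the two-way split below are illustrative assumptions, not the paper's tree construction or its dual-weighted-residual selection.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      n_dof, n_snap = 200, 30
      snapshots = rng.standard_normal((n_dof, n_snap))               # stand-in snapshot matrix
      phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, 0]   # one reduced-basis vector

      # cluster the state variables (rows of the snapshot matrix) offline
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(snapshots)

      # "split" phi into children with disjoint support, one per cluster
      children = []
      for c in range(2):
          child = np.where(labels == c, phi, 0.0)
          children.append(child / np.linalg.norm(child))

      print("children have disjoint support:",
            np.count_nonzero(children[0] * children[1]) == 0)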

  11. Computational modeling of multispectral remote sensing systems: Background investigations

    NASA Technical Reports Server (NTRS)

    Aherron, R. M.

    1982-01-01

    A computational model of the deterministic and stochastic process of remote sensing has been developed based upon the results of the investigations presented. The model is used in studying concepts for improving worldwide environment and resource monitoring. A review of various atmospheric radiative transfer models is presented as well as details of the selected model. Functional forms for spectral diffuse reflectance with variability introduced are also presented. A cloud detection algorithm and the stochastic nature of remote sensing data with its implications are considered.

  12. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  13. Transitional Jobs: Background, Program Models, and Evaluation Evidence

    ERIC Educational Resources Information Center

    Bloom, Dan

    2010-01-01

    The budget for the U.S. Department of Labor for Fiscal Year 2010 includes a total of $45 million to support and study transitional jobs. This paper describes the origins of the transitional jobs models that are operating today, reviews the evidence on the effectiveness of this approach and other subsidized employment models, and offers some…

  14. The curvature adaptive optics system modeling

    NASA Astrophysics Data System (ADS)

    Yang, Qiang

    A curvature adaptive optics (AO) simulation system has been built. The simulation is based on the Hokupa'a-36 AO system for the NASA IRTF 3m telescope and the Hokupa'a-85 AO system for the Gemini Near Infrared Coronagraphic Imager. Several sub-models are built separately for the AO simulation system, and they are: (1) generation and propagation of atmospheric phase screens, (2) the bimorph deformable mirror (DM), (3) the curvature wave-front sensor (CWFS), (4) generation of response functions, interaction matrices and calculation of command matrices, (5) Fresnel propagation from the DM pupil to the lenslet pupil, (6) AO servo loop, and (7) post processing. The AO simulation system is then applied to study the effects of DM hysteresis and to optimize the DM actuator patterns for the Hokupa'a-85 and Hokupa'a-36 AO systems. In the first application, an enhanced Coleman-Hodgdon model is introduced to approximate the hysteresis curves, and the Lambert W function is then introduced to calculate the inverse of the Coleman-Hodgdon equation. Step responses, transfer functions and Strehl ratios from the AO system are compared for the cases with and without DM hysteresis. The servo-loop results show that the bandwidth of an AO system is improved greatly after the DM hysteresis is corrected. In the second application, several aspects of the bimorph mirror are considered in optimizing the DM patterns, including the type and length of the edge benders, the gap size of the electrodes, the DM size, and the DM curvature limit.
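
    The Lambert W inversion mentioned above can be illustrated generically; the forward relation y = a*x*exp(b*x) below is only a stand-in with the algebraic structure that makes the Lambert W function applicable, not the actual Coleman-Hodgdon hysteresis curve.

      import numpy as np
      from scipy.special import lambertw

      # If a forward response has the form y = a*x*exp(b*x), the drive x can be
      # recovered as x = W(b*y/a)/b. Illustration of the inversion step only.
      a, b = 2.0, 0.7
      x = np.linspace(0.1, 3.0, 6)
      y = a * x * np.exp(b * x)                    # forward relation
      x_rec = np.real(lambertw(b * y / a)) / b     # inversion via the Lambert W function
      print(np.allclose(x, x_rec))                 # expect True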

  15. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
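
    As an illustration of the FAST idea of building a surrogate ensemble from a moving window along a single trajectory, the following minimal sketch (Python/NumPy, with a hypothetical array layout; not the GMAO implementation) estimates a background-error covariance from windowed states and uses one cross-covariance entry to update an unobserved variable from an observed one.

        import numpy as np

        def fast_covariance(trajectory, window):
            """Background-error covariance from a single model trajectory (FAST-style sketch).

            trajectory : (n_times, n_state) array of states saved along one integration.
            window     : number of consecutive states used as the surrogate ensemble.
            """
            ensemble = trajectory[-window:]                  # moving window along the trajectory
            anomalies = ensemble - ensemble.mean(axis=0)     # deviations from the window mean
            return anomalies.T @ anomalies / (window - 1)    # (n_state, n_state) sample covariance

        # Toy example: three coupled variables; update the unobserved third variable
        # from an innovation on the first, using the windowed cross-covariance.
        rng = np.random.default_rng(0)
        traj = np.cumsum(rng.normal(size=(200, 3)), axis=0)
        B = fast_covariance(traj, window=30)
        obs_error_var = 0.5                                  # assumed observation-error variance
        gain = B[2, 0] / (B[0, 0] + obs_error_var)
        increment_unobserved = gain * 1.0                    # 1.0 = observed-minus-background residual
        print(B.shape, round(increment_unobserved, 3))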

  16. Multiple Adaptations and Content-Adaptive FEC Using Parameterized RD Model for Embedded Wavelet Video

    NASA Astrophysics Data System (ADS)

    Yu, Ya-Huei; Ho, Chien-Peng; Tsai, Chun-Jen

    2007-12-01

    Scalable video coding (SVC) has been an active research topic for the past decade. In the past, most SVC technologies were based on a coarse-granularity scalable model which puts many scalability constraints on the encoded bitstreams. As a result, the application scenario of adapting a pre-encoded bitstream multiple times along the distribution chain has not been seriously investigated before. In this paper, a model-based multiple-adaptation framework based on a wavelet video codec, MC-EZBC, is proposed. The proposed technology allows multiple adaptations on both the video data and the content-adaptive FEC protection codes. For multiple adaptations of video data, rate-distortion information must be embedded within the video bitstream in order to allow rate-distortion optimized operations for each adaptation. Experimental results show that the proposed method reduces the amount of side information by more than 50% on average when compared to the existing technique. It also reduces the number of iterations required to perform the tier-2 entropy coding by more than 64% on average. In addition, due to the nondiscrete nature of the rate-distortion model, the proposed framework also enables multiple adaptations of the content-adaptive FEC protection scheme for more flexible error-resilient transmission of bitstreams.

  17. Modeling Adaptation as a Flow and Stock Decision with Mitigation

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-liv...

  18. Modeling Adaptation as a Flow and Stock Decision with Mitigation

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-live...

  19. Modeling Two Types of Adaptation to Climate Change

    EPA Science Inventory

    Mitigation and adaptation are the two key responses available to policymakers to reduce the risks of climate change. We model these two policies together in a new DICE-based integrated assessment model that characterizes adaptation as either short-lived flow spending or long-live...

  20. A Comparison between High-Energy Radiation Background Models and SPENVIS Trapped-Particle Radiation Models

    NASA Technical Reports Server (NTRS)

    Krizmanic, John F.

    2013-01-01

    We have been assessing the effects of background radiation in low-Earth orbit for the next generation of X-ray and Cosmic-ray experiments, in particular for International Space Station orbit. Outside the areas of high fluxes of trapped radiation, we have been using parameterizations developed by the Fermi team to quantify the high-energy induced background. For the low-energy background, we have been using the AE8 and AP8 SPENVIS models to determine the orbit fractions where the fluxes of trapped particles are too high to allow for useful operation of the experiment. One area we are investigating is how the fluxes of SPENVIS predictions at higher energies match the fluxes at the low-energy end of our parameterizations. I will summarize our methodology for background determination from the various sources of cosmogenic and terrestrial radiation and how these compare to SPENVIS predictions in overlapping energy ranges.

  1. Adapting of the Background-Oriented Schlieren (BOS) Technique in the Characterization of the Flow Regimes in Thermal Spraying Processes

    NASA Astrophysics Data System (ADS)

    Tillmann, W.; Abdulgader, M.; Rademacher, H. G.; Anjami, N.; Hagen, L.

    2014-01-01

    In thermal spraying, the changes in the in-flight particle velocities are considered to be solely a function of the drag forces caused by the dominating flow regimes in the spray jet. A correct understanding of the aerodynamic phenomena occurring at the nozzle outlet and at the substrate interface is therefore an important step toward targeted improvements in nozzle and air-cap design as well as in the spraying process as a whole. The presented work deals with adapting an innovative flow-characterization technique called background-oriented Schlieren. The flow regimes in twin wire arc spraying (TWAS) and high velocity oxygen fuel (HVOF) spraying were analyzed with this technique. In the TWAS process, the interference of the atomization gas flow with the intersecting wires deforms the jet shape and also leads to areas with different aerodynamic forces. The configuration of the outlet air-caps in TWAS predominantly affects the outlet flow characteristics. The ratio between fuel and oxygen determines the dominating flow regimes in the HVOF spraying jet. Enhanced understanding of the aerodynamics at the outlet and at the substrate interface could lead to targeted improvements in thermal spraying processes.

  2. Radiation Background and Attenuation Model Validation and Development

    SciTech Connect

    Peplow, Douglas E.; Santiago, Claudio P.

    2015-08-05

    This report describes the initial results of a study being conducted as part of the Urban Search Planning Tool project. The study is comparing the Urban Scene Simulator (USS), a one-dimensional (1D) radiation transport model developed at LLNL, with the three-dimensional (3D) radiation transport model from ORNL using the MCNP, SCALE/ORIGEN and SCALE/MAVRIC simulation codes. In this study, we have analyzed the differences between the two approaches at every step, from source term representation, to estimating flux and detector count rates at a fixed distance from a simple surface (slab), and at points throughout more complex 3D scenes.

  3. Scale-adaptive surface modeling of vascular structures

    PubMed Central

    2010-01-01

    Background The effective geometric modeling of vascular structures is crucial for diagnosis, therapy planning and medical education. These applications require good balance with respect to surface smoothness, surface accuracy, triangle quality and surface size. Methods Our method first extracts the vascular boundary voxels from the segmentation result, and utilizes these voxels to build a three-dimensional (3D) point cloud whose normal vectors are estimated via covariance analysis. Then a 3D implicit indicator function is computed from the oriented 3D point cloud by solving a Poisson equation. Finally the vessel surface is generated by a proposed adaptive polygonization algorithm for explicit 3D visualization. Results Experiments carried out on several typical vascular structures demonstrate that the presented method yields both a smooth morphologically correct and a topologically preserved two-manifold surface, which is scale-adaptive to the local curvature of the surface. Furthermore, the presented method produces fewer and better-shaped triangles with satisfactory surface quality and accuracy. Conclusions Compared to other state-of-the-art approaches, our method reaches good balance in terms of smoothness, accuracy, triangle quality and surface size. The vessel surfaces produced by our method are suitable for applications such as computational fluid dynamics simulations and real-time virtual interventional surgery. PMID:21087525
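
    The covariance-analysis step for estimating point-cloud normals admits a compact sketch. The snippet below (Python/NumPy, a generic PCA-style normal estimate rather than the authors' exact implementation) takes the eigenvector of the local covariance matrix with the smallest eigenvalue as the normal direction; consistent orientation across the cloud would still be needed before solving the Poisson equation.

        import numpy as np

        def estimate_normal(neighbourhood):
            """Normal of a local point-cloud neighbourhood via covariance (PCA) analysis.

            neighbourhood : (k, 3) array of boundary-voxel coordinates around one point.
            Returns the eigenvector of the covariance matrix with the smallest eigenvalue,
            i.e. the direction of least variance.
            """
            centered = neighbourhood - neighbourhood.mean(axis=0)
            cov = centered.T @ centered / len(neighbourhood)
            eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
            return eigvecs[:, 0]

        # Toy neighbourhood lying roughly in the z = 0 plane: the normal should be close to (0, 0, 1).
        rng = np.random.default_rng(1)
        pts = np.column_stack([rng.normal(size=50), rng.normal(size=50), 0.01 * rng.normal(size=50)])
        print(estimate_normal(pts))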

  4. Modeling Background Radiation in our Environment Using Geochemical Data

    SciTech Connect

    Malchow, Russell L.; Marsac, Kara; Burnley, Pamela; Hausrath, Elisabeth; Haber, Daniel; Adcock, Christopher

    2015-02-01

    Radiation occurs naturally in bedrock and soil. Gamma rays are released from the decay of the radioactive isotopes K, U, and Th. Gamma rays observed at the surface come from the first 30 cm of rock and soil. The energy of gamma rays is specific to each isotope, allowing identification. For this research, data were collected from national databases, private companies, scientific literature, and field work. Data points were then evaluated for self-consistency. A model was created by converting concentrations of U, K, and Th for each rock and soil unit into a ground exposure rate using the following equation: D = 1.32 K + 0.548 U + 0.272 Th. The first objective of this research was to compare the original Aerial Measurement System gamma ray survey to results produced by the model. The second objective was to improve the method and learn the constraints of the model. Future work will include sample data analysis from field work with a goal of improving the geochemical model.
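
    The conversion quoted in the abstract is simple enough to state as a one-line function. The sketch below (Python; the output units and the example concentrations are assumptions, not values from the study) evaluates D = 1.32 K + 0.548 U + 0.272 Th for a single rock or soil unit.

        def exposure_rate(k_percent, u_ppm, th_ppm):
            """Ground exposure rate from K (%), U (ppm), and Th (ppm) concentrations,
            using the coefficients quoted in the abstract; the units of the result
            follow the convention of the source data and are assumed here."""
            return 1.32 * k_percent + 0.548 * u_ppm + 0.272 * th_ppm

        # Hypothetical granite-like unit: 3% K, 3 ppm U, 12 ppm Th.
        print(exposure_rate(3.0, 3.0, 12.0))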

  5. Gradient-based adaptation of continuous dynamic model structures

    NASA Astrophysics Data System (ADS)

    La Cava, William G.; Danai, Kourosh

    2016-01-01

    A gradient-based method of symbolic adaptation is introduced for a class of continuous dynamic models. The proposed model structure adaptation method starts with the first-principles model of the system and adapts its structure after adjusting its individual components in symbolic form. A key contribution of this work is its introduction of the model's parameter sensitivity as the measure of symbolic changes to the model. This measure, which is essential to defining the structural sensitivity of the model, not only accommodates algebraic evaluation of candidate models in lieu of more computationally expensive simulation-based evaluation, but also makes possible the implementation of gradient-based optimisation in symbolic adaptation. The proposed method is applied to models of several virtual and real-world systems that demonstrate its potential utility.

  6. Image Watermarking Based on Adaptive Models of Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Khawne, Amnach; Hamamoto, Kazuhiko; Chitsobhuk, Orachat

    This paper proposes a digital image watermarking scheme based on adaptive models of human visual perception. The algorithm exploits the local activities estimated from wavelet coefficients of each subband to adaptively control the luminance masking. The adaptive luminance is thus delicately combined with the contrast masking and edge detection and adopted as a visibility threshold. With the proposed combination of adaptive visual sensitivity parameters, the perceptual model can better accommodate the different characteristics of various images. The weighting function is chosen such that fidelity, imperceptibility, and robustness can be preserved without making any perceptual difference to the image quality.

  7. Consensus time and conformity in the adaptive voter model

    NASA Astrophysics Data System (ADS)

    Rogers, Tim; Gross, Thilo

    2013-09-01

    The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.
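
    For readers unfamiliar with the model class, the sketch below (Python) implements one update step of a basic adaptive voter model, in which a randomly chosen discordant link is resolved either by rewiring or by opinion adoption; the paper's extension, in which conflicts are resolved by consulting a further opinion, is not reproduced here, and all names are illustrative.

        import random

        def adaptive_voter_step(adjacency, opinion, rewire_prob):
            """One update of a basic adaptive voter model (generic sketch).

            adjacency   : dict mapping node -> set of neighbour nodes
            opinion     : dict mapping node -> 0 or 1
            rewire_prob : probability of resolving a discordant link by rewiring
                          instead of opinion adoption.
            Returns False once no discordant links remain (consensus or a frozen state).
            """
            discordant = [(i, j) for i in adjacency for j in adjacency[i]
                          if i < j and opinion[i] != opinion[j]]
            if not discordant:
                return False
            i, j = random.choice(discordant)
            if random.random() < rewire_prob:
                # i drops the conflicting link and reconnects to a like-minded node
                candidates = [k for k in adjacency if k not in (i, j) and opinion[k] == opinion[i]]
                if candidates:
                    k = random.choice(candidates)
                    adjacency[i].discard(j); adjacency[j].discard(i)
                    adjacency[i].add(k); adjacency[k].add(i)
            else:
                opinion[i] = opinion[j]    # i adopts j's opinion
            return True

        adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
        opinion = {0: 0, 1: 0, 2: 1, 3: 1}
        while adaptive_voter_step(adjacency, opinion, rewire_prob=0.3):
            pass
        print(opinion)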

  8. Background model systematics for the Fermi GeV excess

    NASA Astrophysics Data System (ADS)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power law with break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of bb̄ final states a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.
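
    As a small illustration of the spectral form discussed above, the sketch below (Python/NumPy) evaluates a broken power law with the quoted break energy of 2.1 GeV; the two spectral indices and the normalization are placeholders, since the abstract does not quote them.

        import numpy as np

        def broken_power_law(energy_gev, norm, index_low, index_high, e_break=2.1):
            """Broken power-law spectrum dN/dE with a break at e_break (GeV).

            index_low / index_high are the spectral indices below / above the break;
            the values used below are illustrative assumptions, not fitted results.
            """
            energy_gev = np.asarray(energy_gev, dtype=float)
            return np.where(energy_gev < e_break,
                            norm * (energy_gev / e_break) ** (-index_low),
                            norm * (energy_gev / e_break) ** (-index_high))

        # Evaluate on a log-spaced grid spanning the 0.3-500 GeV analysis range.
        grid = np.logspace(np.log10(0.3), np.log10(500.0), 8)
        print(broken_power_law(grid, norm=1.0, index_low=1.4, index_high=2.6))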

  9. GRACE follow-on sensor noise with realistic background models

    NASA Astrophysics Data System (ADS)

    Ellmer, Matthias; Mayer-Gürr, Torsten

    2015-04-01

    We performed multiple simulation studies of a GRACE-like satellite mission based on the current K-Band ranging instrument (KBR). We also simulated a laser-ranging instrument (LRI) configuration as a drop-in replacement for GRACE low-low satellite-to-satellite tracking; the remaining parameters of the simulation are shared between the two scenarios. Our simulated data are based on real GRACE observations for April 2006, which allows us to compare our results to published gravity field models for this particular month. The variational equation approach was employed to generate independent reduced-dynamic orbits for both GRACE satellites. These orbits were then fitted to the actual GRACE kinematic orbits. The resulting orbit was then used to synthesize artificial satellite ranging, star camera, accelerometer, and kinematic orbit data. We synchronized all simulated instruments with real instrument data for the simulated month, which guarantees realistic data gaps. Appropriate noise was added to all observables. In the recovery step, the AOD1B de-aliasing product -- previously used in the generation of the fundamental reduced-dynamic orbit data -- was degraded with partial constituents of the updated ESA earth system model dataset. Specifically, the atmosphere, ocean, and hydrology components were used. This has the effect that the computed gravity field possesses the characteristic structure associated with a residual time-variable gravity field signal. An overview of the achieved results is given in the presentation.

  10. Background model systematics for the Fermi GeV excess

    SciTech Connect

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power law with break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of bb̄ final states a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.

  11. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505
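
    The background side of the tracker is described as a Gaussian mixture model that is initialized offline and updated online. The sketch below (Python/NumPy) shows a generic per-sample online update in the spirit of classic GMM background modeling; it is a simplified one-dimensional illustration with assumed learning-rate and matching parameters, not the authors' exact update rule.

        import numpy as np

        def update_gmm(x, means, variances, weights, lr=0.05, match_sigma=2.5):
            """One online update of a 1-D Gaussian mixture background model (sketch).

            The component closest to the sample x (within match_sigma standard
            deviations) is moved toward x; otherwise the weakest component is
            replaced. Weights are renormalized after every update.
            """
            d = np.abs(x - means) / np.sqrt(variances)
            k = int(np.argmin(d))
            if d[k] < match_sigma:
                weights *= (1.0 - lr)
                weights[k] += lr
                means[k] += lr * (x - means[k])
                variances[k] += lr * ((x - means[k]) ** 2 - variances[k])
            else:
                j = int(np.argmin(weights))
                means[j], variances[j], weights[j] = x, 10.0, lr
            weights /= weights.sum()
            return means, variances, weights

        means = np.array([0.0, 5.0]); variances = np.array([1.0, 1.0]); weights = np.array([0.5, 0.5])
        for sample in [0.2, 0.1, 4.8, 0.3]:
            means, variances, weights = update_gmm(sample, means, variances, weights)
        print(means.round(2), weights.round(2))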

  12. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  13. Modeling Family Adaptation to Fragile X Syndrome

    ERIC Educational Resources Information Center

    Raspa, Melissa; Bailey, Donald, Jr.; Bann, Carla; Bishop, Ellen

    2014-01-01

    Using data from a survey of 1,099 families who have a child with Fragile X syndrome, we examined adaptation across 7 dimensions of family life: parenting knowledge, social support, social life, financial impact, well-being, quality of life, and overall impact. Results illustrate that although families report a high quality of life, they struggle…

  14. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2007-01-01

    This report presents a pilot study of an integration of particle swarm algorithm, social knowledge adaptation and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation for the dynamic environment and to provide insight and understanding of social group knowledge discovering and strategic searching. A new adaptive environment model, which dynamically reacts to the group collective searching behaviors, is proposed in this research. The simulations in the research indicate that effective communication between groups is not the necessary requirement for whole self-organized groups to achieve the efficient collective searching behavior in the adaptive environment.

  15. Particle Swarm Based Collective Searching Model for Adaptive Environment

    SciTech Connect

    Cui, Xiaohui; Patton, Robert M; Potok, Thomas E; Treadwell, Jim N

    2008-01-01

    This report presents a pilot study of an integration of particle swarm algorithm, social knowledge adaptation and multi-agent approaches for modeling the collective search behavior of self-organized groups in an adaptive environment. The objective of this research is to apply the particle swarm metaphor as a model of social group adaptation for the dynamic environment and to provide insight and understanding of social group knowledge discovering and strategic searching. A new adaptive environment model, which dynamically reacts to the group collective searching behaviors, is proposed in this research. The simulations in the research indicate that effective communication between groups is not the necessary requirement for whole self-organized groups to achieve the efficient collective searching behavior in the adaptive environment.

  16. Adapted Lethality: What We Can Learn from Guinea Pig-Adapted Ebola Virus Infection Model

    PubMed Central

    Cheresiz, S. V.; Semenova, E. A.; Chepurnov, A. A.

    2016-01-01

    Establishment of small animal models of Ebola virus (EBOV) infection is important both for the study of genetic determinants involved in the complex pathology of EBOV disease and for the preliminary screening of antivirals, production of therapeutic heterologic immunoglobulins, and experimental vaccine development. Since the wild-type EBOV is avirulent in rodents, the adaptation series of passages in these animals are required for the virulence/lethality to emerge in these models. Here, we provide an overview of our several adaptation series in guinea pigs, which resulted in the establishment of guinea pig-adapted EBOV (GPA-EBOV) variants different in their characteristics, while uniformly lethal for the infected animals, and compare the virologic, genetic, pathomorphologic, and immunologic findings with those obtained in the adaptation experiments of the other research groups. PMID:26989413

  17. Adapted Lethality: What We Can Learn from Guinea Pig-Adapted Ebola Virus Infection Model.

    PubMed

    Cheresiz, S V; Semenova, E A; Chepurnov, A A

    2016-01-01

    Establishment of small animal models of Ebola virus (EBOV) infection is important both for the study of genetic determinants involved in the complex pathology of EBOV disease and for the preliminary screening of antivirals, production of therapeutic heterologic immunoglobulins, and experimental vaccine development. Since the wild-type EBOV is avirulent in rodents, the adaptation series of passages in these animals are required for the virulence/lethality to emerge in these models. Here, we provide an overview of our several adaptation series in guinea pigs, which resulted in the establishment of guinea pig-adapted EBOV (GPA-EBOV) variants different in their characteristics, while uniformly lethal for the infected animals, and compare the virologic, genetic, pathomorphologic, and immunologic findings with those obtained in the adaptation experiments of the other research groups. PMID:26989413

  18. Improving nonlinear modeling capabilities of functional link adaptive filters.

    PubMed

    Comminiello, Danilo; Scarpiniti, Michele; Scardapane, Simone; Parisi, Raffaele; Uncini, Aurelio

    2015-09-01

    The functional link adaptive filter (FLAF) represents an effective solution for online nonlinear modeling problems. In this paper, we take into account a FLAF-based architecture, which separates the adaptation of linear and nonlinear elements, and we focus on the nonlinear branch to improve the modeling performance. In particular, we propose a new model that involves an adaptive combination of filters downstream of the nonlinear expansion. Such combination leads to a cooperative behavior of the whole architecture, thus yielding a performance improvement, particularly in the presence of strong nonlinearities. An advanced architecture is also proposed involving the adaptive combination of multiple filters on the nonlinear branch. The proposed models are assessed in different nonlinear modeling problems, in which their effectiveness and capabilities are shown. PMID:26057613
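
    To make the FLAF structure concrete, the sketch below (Python/NumPy) shows a generic split architecture with a linear branch and a trigonometric functional-link branch, both updated by LMS. The adaptive combination of multiple filters on the nonlinear branch that the paper proposes is not reproduced, and all names, buffer sizes, and step sizes are illustrative.

        import numpy as np

        def trig_expansion(x, order=2):
            """Trigonometric functional-link expansion of an input buffer x."""
            feats = [x]
            for p in range(1, order + 1):
                feats += [np.sin(np.pi * p * x), np.cos(np.pi * p * x)]
            return np.concatenate(feats)

        def flaf_step(x_buf, d, w_lin, w_nl, mu=0.01):
            """One LMS update of the linear and functional-link branches (sketch)."""
            z = trig_expansion(x_buf)
            y = w_lin @ x_buf + w_nl @ z      # linear branch + expanded nonlinear branch
            e = d - y                         # error against the desired sample d
            w_lin += mu * e * x_buf
            w_nl += mu * e * z
            return y, e, w_lin, w_nl

        rng = np.random.default_rng(3)
        x_buf = rng.normal(size=8)
        w_lin = np.zeros(8)
        w_nl = np.zeros(8 * 5)                # order=2 expansion -> 5 feature groups of length 8
        y, e, w_lin, w_nl = flaf_step(x_buf, d=0.5, w_lin=w_lin, w_nl=w_nl)
        print(round(y, 3), round(e, 3))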

  19. Context aware adaptive security service model

    NASA Astrophysics Data System (ADS)

    Tunia, Marcin A.

    2015-09-01

    Present systems and devices are usually protected against different threats concerning digital data processing. The protection mechanisms consume resources, which are either highly limited or intensively utilized by many entities. Optimizing the usage of these resources is advantageous: resources saved through optimization may be utilized by other mechanisms or may last for a longer time. It is usually assumed that protection has to provide a specific quality and attack resistance. By interpreting the context situation of business services - both the users and the services themselves - it is possible to adapt security service parameters to counter threats associated with the current situation. This approach optimizes the resources used while maintaining a sufficient security level. This paper presents the architecture of an adaptive security service that is context-aware and takes the quality of context data into account.

  20. Synthesized performance model of thermal imaging systems based on natural background

    NASA Astrophysics Data System (ADS)

    Chen, Song-lin; Wang, Ji-hui; Wang, Xiao-wei; Jin, Wei-qi

    2013-09-01

    The impact of the natural environment on the synthesized performance of thermal imaging systems was researched in comparison with the targeting task performance (TTP) model. A natural background noise factor was presented and introduced into the minimum resolvable temperature difference channel width (MRTD-CW) model, and the method for determining this factor was given. An information quantity model based on the MRTD-CW model was proposed to evaluate the impact of the natural environment on the synthesized performance of thermal imaging systems, and a normalized parameter was introduced into the information quantity model. Experiments with different backgrounds were performed, and the results were analyzed and compared with those of the TTP model.

  1. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    SciTech Connect

    Archer, Daniel E.; Hornback, Donald Eric; Johnson, Jeffrey O.; Nicholson, Andrew D.; Patton, Bruce W.; Peplow, Douglas E.; Miller, Thomas Martin; Ayaz-Maierhafer, Birsen

    2015-01-01

    This report summarizes the findings of a two year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year effort focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year has focused on the assessment of detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  2. A model for culturally adapting a learning system.

    PubMed

    Del Rosario, M L

    1975-12-01

    The Cross-Cultural Adaption Model (XCAM) is designed to help identify cultural values contained in the text, narration, or visual components of a learning instrument and enables the adapter to evaluate his adapted model so that he can modify or revise it, and allows him to assess the modified version by actually measuring the amount of cultural conflict still present in it. Such a model would permit world-wide adaption of learning materials in population regulation. A random sample of the target group is selected. The adapter develops a measuring instrument, the cross-cultural adaption scale (XCA), a number of statements about the cultural affinity of the object evaluated. The pretest portion of the sample tests the clarity and understandability of the rating scale to be used for evaluating the instructional materials; the pilot group analyzes the original version of the instructional materials, determines the criteria for change, and analyzes the adapted version in terms of these criteria; the control group is administered the original version of the learning materials; and the experimental group is administered the adapted version. Finally, the responses obtained from the XCA rating scale and discussions of both the experimental and control groups are studied and group differences are evaluated according to cultural conflicts met with each version. With this data, the preferred combination of elements is constructed.

  3. The Stratified Ocean Model with Adaptive Refinement (SOMAR)

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Scotti, Alberto

    2015-06-01

    A computational framework for the evolution of non-hydrostatic, baroclinic flows encountered in regional and coastal ocean simulations is presented, which combines the flexibility of Adaptive Mesh Refinement (AMR) with a suite of numerical tools specifically developed to deal with the high degree of anisotropy of oceanic flows and their attendant numerical challenges. This framework introduces a semi-implicit update of the terms that give rise to buoyancy oscillations, which permits a stable integration of the Navier-Stokes equations when a background density stratification is present. The lepticity of each grid in the AMR hierarchy, which serves as a useful metric for anisotropy, is used to select one of several different efficient Poisson-solving techniques. In this way, we compute the pressure over the entire set of AMR grids without resorting to the hydrostatic approximation, which can degrade the structure of internal waves whose dynamics may have large-scale significance. We apply the modeling framework to three test cases, for which numerical or analytical solutions are known that can be used to benchmark the results. In all the cases considered, the model achieves an excellent degree of congruence with the benchmark, while at the same time achieving a substantial reduction of the computational resources needed.

  4. Fantastic animals as an experimental model to teach animal adaptation

    PubMed Central

    Guidetti, Roberto; Baraldi, Laura; Calzolai, Caterina; Pini, Lorenza; Veronesi, Paola; Pederzoli, Aurora

    2007-01-01

    Background Science curricula and teachers should emphasize evolution in a manner commensurate with its importance as a unifying concept in science. The concept of adaptation represents a first step to understand the results of natural selection. We set up an experimental project of alternative didactics to improve knowledge of organism adaptation. Students were involved and stimulated in learning processes by creative activities. To set adaptation in a historical frame, fossil records as evidence of past life and evolution were considered. Results The experimental project is schematized in nine phases: review of previous knowledge; lesson on fossils; lesson on fantastic animals; planning an imaginary world; creation of an imaginary animal; revision of the imaginary animals; adaptations of real animals; adaptations of fossil animals; and public exposition. A rubric to evaluate the student's performances is reported. The project involved professors and students of the University of Modena and Reggio Emilia and of the "G. Marconi" Secondary School of First Degree (Modena, Italy). Conclusion The educational objectives of the project are in line with the National Indications of the Italian Ministry of Public Instruction: knowledge of the characteristics of living beings, the meanings of the term "adaptation", the meaning of fossils, the definition of ecosystem, and the particularity of the different biomes. At the end of the project, students will be able to grasp particular adaptations of real organisms and to deduce information about the environment in which the organism evolved. This project allows students to review previous knowledge and to form their personalities. PMID:17767729

  5. Microcomputer pollution model for civilian airports and Air Force Bases. Model application and background

    SciTech Connect

    Segal, H.M.

    1988-08-01

    This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). All reports use the same main title--A MICROCOMPUTER MODEL FOR CIVILIAN AIRPORTS AND AIR FORCE BASES--but different subtitles. The subtitles are: (1) USER'S GUIDE - ISSUE 2 (FAA-EE-88-3/ESL-TR-88-54); (2) MODEL DESCRIPTION (FAA-EE-88-4/ESL-TR-88-53); (3) MODEL APPLICATION AND BACKGROUND (FAA-EE-88-5/ESL-TR-88-55). The first and second reports above describe the EDMS model and provide instructions for its use. This is the third report. It consists of an accumulation of five key documents describing the development and use of the EDMS model. This report is prepared in accordance with discussions with the EPA and requirements outlined in the March 27, 1980 Federal Register for submitting air-quality models to the EPA. Contents: Model Development and Use - Its Chronology and Reports; Monitoring Concorde Emissions; The Influence of Aircraft Operations on Air Quality at Airports; Simplex A - A Simplified Atmospheric Dispersion Model for Airport Use - (User's Guide); Microcomputer Graphics in Atmospheric Dispersion Modeling; Pollution from Motor Vehicles and Aircraft at Stapleton International Airport (Abbreviated Report).

  6. Dynamics modeling and adaptive control of flexible manipulators

    NASA Technical Reports Server (NTRS)

    Sasiadek, J. Z.

    1991-01-01

    An application of Model Reference Adaptive Control (MRAC) to the position and force control of flexible manipulators and robots is presented. A single-link flexible manipulator is analyzed. The problem was to develop an accurate mathematical model of a flexible robot. The objective is to show that the adaptive control works better than 'conventional' systems and is suitable for flexible structure control.

  7. Conceptual development: an adaptive resonance theory model of polysemy

    NASA Astrophysics Data System (ADS)

    Dunbar, George L.

    1997-04-01

    Adaptive Resonance Theory provides a model of pattern classification that addresses the plasticity--stability dilemma and allows a neural network to detect when to construct a new category without the assistance of a supervisor. We show that Adaptive Resonance Theory can be applied to the study of natural concept development. Specifically, a model is presented which is able to categorize different usages of a common noun and group the polysemous senses appropriately.

  8. Post-Revolution Egypt: The Roy Adaptation Model in Community.

    PubMed

    Buckner, Britton S; Buckner, Ellen B

    2015-10-01

    The 2011 Arab Spring swept across the Middle East creating profound instability in Egypt, a country already challenged with poverty and internal pressures. To respond to this crisis, Catholic Relief Services led a community-based program called "Egypt Works" that included community improvement projects and psychosocial support. Following implementation, program outcomes were analyzed using the middle-range theory of adaptation to situational life events, based on the Roy adaptation model. The comprehensive, community-based approach facilitated adaptation, serving as a model for applying theory in post-crisis environments. PMID:26396214

  9. Adaptive Input Reconstruction with Application to Model Refinement, State Estimation, and Adaptive Control

    NASA Astrophysics Data System (ADS)

    D'Amato, Anthony M.

    Input reconstruction is the process of using the output of a system to estimate its input. In some cases, input reconstruction can be accomplished by determining the output of the inverse of a model of the system whose input is the output of the original system. Inversion, however, requires an exact and fully known analytical model, and is limited by instabilities arising from nonminimum-phase zeros. The main contribution of this work is a novel technique for input reconstruction that does not require model inversion. This technique is based on a retrospective cost, which requires a limited number of Markov parameters. Retrospective cost input reconstruction (RCIR) does not require knowledge of nonminimum-phase zero locations or an analytical model of the system. RCIR provides a technique that can be used for model refinement, state estimation, and adaptive control. In the model refinement application, data are used to refine or improve a model of a system. It is assumed that the difference between the model output and the data is due to an unmodeled subsystem whose interconnection with the modeled system is inaccessible, that is, the interconnection signals cannot be measured and thus standard system identification techniques cannot be used. Using input reconstruction, these inaccessible signals can be estimated, and the inaccessible subsystem can be fitted. We demonstrate input reconstruction in a model refinement framework by identifying unknown physics in a space weather model and by estimating an unknown film growth in a lithium ion battery. The same technique can be used to obtain estimates of states that cannot be directly measured. Adaptive control can be formulated as a model-refinement problem, where the unknown subsystem is the idealized controller that minimizes a measured performance variable. Minimal modeling input reconstruction for adaptive control is useful for applications where modeling information may be difficult to obtain. We demonstrate

  10. Learning Speech Variability in Discriminative Acoustic Model Adaptation

    NASA Astrophysics Data System (ADS)

    Sato, Shoei; Oku, Takahiro; Homma, Shinichi; Kobayashi, Akio; Imai, Toru

    We present a new discriminative method of acoustic model adaptation that deals with a task-dependent speech variability. We have focused on differences of expressions or speaking styles between tasks and set the objective of this method as improving the recognition accuracy of indistinctly pronounced phrases dependent on a speaking style. The adaptation appends subword models for frequently observable variants of subwords in the task. To find the task-dependent variants, low-confidence words are statistically selected from words with higher frequency in the task's adaptation data by using their word lattices. HMM parameters of subword models dependent on the words are discriminatively trained by using linear transforms with a minimum phoneme error (MPE) criterion. For the MPE training, subword accuracy discriminating between the variants and the originals is also investigated. In speech recognition experiments, the proposed adaptation with the subword variants reduced the word error rate by 12.0% relative in a Japanese conversational broadcast task.

  11. Adaptive Finite Element Methods for Continuum Damage Modeling

    NASA Technical Reports Server (NTRS)

    Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.

    1995-01-01

    The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time-step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.

  12. Internal Models in Sensorimotor Integration: Perspectives from Adaptive Control Theory

    PubMed Central

    Tin, Chung; Poon, Chi-Sang

    2007-01-01

    Internal model and adaptive control are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally for robotic manipulator applications. Modular internal models architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning are reviewed and their possible relevance to motor control is discussed. Possible applicability of Luenberger observer and extended Kalman filter to state estimation problems such as sensorimotor prediction or the resolution of vestibular sensory ambiguity is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal model in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in future. PMID:16135881

  13. Modeling-Error-Driven Performance-Seeking Direct Adaptive Control

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V.; Kaneshige, John; Krishnakumar, Kalmanje; Burken, John

    2008-01-01

    This paper presents a stable discrete-time adaptive law that targets modeling errors in a direct adaptive control framework. The update law was developed in our previous work for the adaptive disturbance rejection application. The approach is based on the philosophy that without modeling errors, the original control design has been tuned to achieve the desired performance. The adaptive control should, therefore, work towards getting this performance even in the face of modeling uncertainties/errors. In this work, the baseline controller uses dynamic inversion with proportional-integral augmentation. Dynamic inversion is carried out using the assumed system model. On-line adaptation of this control law is achieved by providing a parameterized augmentation signal to the dynamic inversion block. The parameters of this augmentation signal are updated to achieve the nominal desired error dynamics. Contrary to the typical Lyapunov-based adaptive approaches that guarantee only stability, the current approach investigates conditions for stability as well as performance. A high-fidelity F-15 model is used to illustrate the overall approach.

  14. Modeling Students' Memory for Application in Adaptive Educational Systems

    ERIC Educational Resources Information Center

    Pelánek, Radek

    2015-01-01

    Human memory has been thoroughly studied and modeled in psychology, but mainly in laboratory setting under simplified conditions. For application in practical adaptive educational systems we need simple and robust models which can cope with aspects like varied prior knowledge or multiple-choice questions. We discuss and evaluate several models of…

  15. Adaptive predictive multiplicative autoregressive model for medical image compression.

    PubMed

    Chen, Z D; Chang, R F; Kuo, W J

    1999-02-01

    In this paper, an adaptive predictive multiplicative autoregressive (APMAR) method is proposed for lossless medical image coding. The adaptive predictor is used for improving the prediction accuracy of encoded image blocks in our proposed method. Each block is first adaptively predicted by one of the seven predictors of the JPEG lossless mode and a local mean predictor. It is clear that the prediction accuracy of an adaptive predictor is better than that of a fixed predictor. Then the residual values are processed by the MAR model with Huffman coding. Comparisons with other methods [MAR, SMAR, adaptive JPEG (AJPEG)] on a series of test images show that our method is suitable for reversible medical image compression. PMID:10232675
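
    The adaptive-predictor selection described above can be sketched compactly. The snippet below (Python/NumPy) chooses, for one image block, whichever of the seven JPEG lossless-mode predictors or a local-mean predictor gives the smallest total absolute residual; the exact form of the local-mean predictor and the block handling in APMAR are assumptions, and the MAR residual modeling and Huffman coding stages are omitted.

        import numpy as np

        def jpeg_predictors(a, b, c):
            """The seven JPEG lossless-mode predictors, given the left (a), above (b),
            and upper-left (c) causal neighbours of a pixel."""
            return [a, b, c, a + b - c, a + (b - c) / 2.0, b + (a - c) / 2.0, (a + b) / 2.0]

        def best_predictor(block):
            """Index of the predictor with the smallest total absolute residual over a block.
            Index 7 stands for a simple local-mean predictor (assumed form)."""
            residuals = np.zeros(8)
            h, w = block.shape
            for y in range(1, h):
                for x in range(1, w):
                    a, b, c = block[y, x - 1], block[y - 1, x], block[y - 1, x - 1]
                    preds = jpeg_predictors(a, b, c) + [(a + b + c) / 3.0]
                    residuals += np.abs(block[y, x] - np.array(preds))
            return int(np.argmin(residuals))

        block = np.arange(64, dtype=float).reshape(8, 8)   # smooth ramp: the a+b-c predictor is exact
        print(best_predictor(block))                        # prints 3, i.e. the a+b-c predictor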

  16. Background Error Covariance Estimation using Information from a Single Model Trajectory with Application to Ocean Data Assimilation into the GEOS-5 Coupled Model

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele M.; Kovach, Robin M.; Vernieres, Guillaume; Koster, Randal D. (Editor)

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory. SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.

  17. The reduced order model problem in distributed parameter systems adaptive identification and control. [adaptive control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.

    1981-01-01

    The reduced order model problem in distributed parameter systems adaptive identification and control is investigated. A comprehensive examination of real-time centralized adaptive control options for flexible spacecraft is provided.

  18. Location- and lesion-dependent estimation of background tissue complexity for anthropomorphic model observer

    NASA Astrophysics Data System (ADS)

    Avanaki, Ali R. N.; Espig, Kathryn; Knippel, Eddie; Kimpe, Tom R. L.; Xthona, Albert; Maidment, Andrew D. A.

    2016-03-01

    In this paper, we specify a notion of background tissue complexity (BTC) as perceived by a human observer that is suited for use with model observers. This notion of BTC is a function of image location and lesion shape and size. We propose four unsupervised BTC estimators based on: (i) perceived pre- and post-lesion similarity of images, (ii) lesion border analysis (LBA; a conspicuous lesion should be brighter than its surround), (iii) tissue anomaly detection, and (iv) mammogram density measurement. The latter two are existing methods we adapt for location- and lesion-dependent BTC estimation. To validate the BTC estimators, we ask human observers to measure BTC as the visibility threshold amplitude of an inserted lesion at specified locations in a mammogram. Both human-measured and computationally estimated BTC varied with lesion shape (from circular to oval), size (from small circular to larger circular), and location (different points across a mammogram). BTCs measured by different human observers are correlated (ρ=0.67). The proposed BTC estimators are highly correlated to each other (ρ > 0.84) and are intended for use with an anthropomorphic model observer, with applications such as optimization of contrast-enhanced medical imaging systems and creation of a diversified image dataset with characteristics of a desired population.
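
    Of the four estimators, the lesion border analysis is the easiest to sketch. The toy example below (Python/NumPy) compares the mean intensity just inside a circular lesion footprint with the mean in a thin surrounding ring at a candidate location; the sign convention and the exact LBA formulation used in the paper may differ, so this only illustrates the idea that a conspicuous lesion must remain brighter than its surround.

        import numpy as np

        def lba_score(image, center, radius, ring=3):
            """Surround-minus-inside intensity margin at one candidate lesion location.

            A larger value means the local background just outside the footprint is
            already brighter than inside, so a higher lesion amplitude would be needed
            for the border to stand out (i.e. higher background tissue complexity).
            """
            yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
            dist = np.hypot(yy - center[0], xx - center[1])
            inside = image[dist <= radius].mean()
            surround = image[(dist > radius) & (dist <= radius + ring)].mean()
            return surround - inside

        rng = np.random.default_rng(2)
        mammogram_like = rng.normal(100.0, 20.0, size=(128, 128))   # stand-in for a real mammogram
        print(round(lba_score(mammogram_like, center=(64, 64), radius=10), 2))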

  19. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  20. Adaptive network models of collective decision making in swarming systems.

    PubMed

    Chen, Li; Huepe, Cristián; Gross, Thilo

    2016-08-01

    We consider a class of adaptive network models where links can only be created or deleted between nodes in different states. These models provide an approximate description of a set of systems where nodes represent agents moving in physical or abstract space, the state of each node represents the agent's heading direction, and links indicate mutual awareness. We show analytically that the adaptive network description captures a phase transition to collective motion in some swarming systems, such as the Vicsek model, and that the properties of this transition are determined by the number of states (discrete heading directions) that can be accessed by each agent. PMID:27627342

  1. The Nominal Response Model in Computerized Adaptive Testing.

    ERIC Educational Resources Information Center

    De Ayala, R. J.

    One important and promising application of item response theory (IRT) is computerized adaptive testing (CAT). The implementation of a nominal response model-based CAT (NRCAT) was studied. Item pool characteristics for the NRCAT as well as the comparative performance of the NRCAT and a CAT based on the three-parameter logistic (3PL) model were…

  2. Error magnitude estimation in model-reference adaptive systems

    NASA Technical Reports Server (NTRS)

    Colburn, B. K.; Boland, J. S., III

    1975-01-01

    A second order approximation is derived from a linearized error characteristic equation for Lyapunov designed model-reference adaptive systems and is used to estimate the maximum error between the model and plant states, and the time to reach this peak following a plant perturbation. The results are applicable in the analysis of plants containing magnitude-dependent nonlinearities.

  3. Statistical Models of Adaptive Immune populations

    NASA Astrophysics Data System (ADS)

    Sethna, Zachary; Callan, Curtis; Walczak, Aleksandra; Mora, Thierry

    The availability of large (10^4-10^6 sequences) datasets of B or T cell populations from a single individual allows reliable fitting of complex statistical models for naïve generation, somatic selection, and hypermutation. It is crucial to utilize a probabilistic/informational approach when modeling these populations. The inferred probability distributions allow for population characterization, calculation of probability distributions of various hidden variables (e.g. number of insertions), as well as statistical properties of the distribution itself (e.g. entropy). In particular, the differences between the T cell populations of embryonic and mature mice will be examined as a case study. Comparing these populations, as well as proposed mixed populations, provides a concrete exercise in model creation, comparison, choice, and validation.

  4. Modeling Developmental Transitions in Adaptive Resonance Theory

    ERIC Educational Resources Information Center

    Raijmakers, Maartje E. J.; Molenaar, Peter C. M.

    2004-01-01

    Neural networks are applied to a theoretical subject in developmental psychology: modeling developmental transitions. Two issues that are involved will be discussed: discontinuities and acquiring qualitatively new knowledge. We will argue that by the appearance of a bifurcation, a neural network can show discontinuities and may acquire…

  5. A Model of Adaptive Language Learning

    ERIC Educational Resources Information Center

    Woodrow, Lindy J.

    2006-01-01

    This study applies theorizing from educational psychology and language learning to hypothesize a model of language learning that takes into account affect, motivation, and language learning strategies. The study employed a questionnaire to assess variables of motivation, self-efficacy, anxiety, and language learning strategies. The sample…

  6. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  7. The Importance of Formalizing Computational Models of Face Adaptation Aftereffects

    PubMed Central

    Ross, David A.; Palmeri, Thomas J.

    2016-01-01

    Face adaptation is widely used as a means to probe the neural representations that support face recognition. While the theories that relate face adaptation to behavioral aftereffects may seem conceptually simple, our work has shown that testing computational instantiations of these theories can lead to unexpected results. Instantiating a model of face adaptation not only requires specifying how faces are represented and how adaptation shapes those representations but also specifying how decisions are made, translating hidden representational states into observed responses. Considering the high-dimensionality of face representations, the parallel activation of multiple representations, and the non-linearity of activation functions and decision mechanisms, intuitions alone are unlikely to succeed. If the goal is to understand mechanism, not simply to examine the boundaries of a behavioral phenomenon or correlate behavior with brain activity, then formal computational modeling must be a component of theory testing. To illustrate, we highlight our recent computational modeling of face adaptation aftereffects and discuss how models can be used to understand the mechanisms by which faces are recognized. PMID:27378960

  8. Model-adaptive hybrid dynamic control for robotic assembly tasks

    SciTech Connect

    Austin, D.J.; McCarragher, B.J.

    1999-10-01

    A new task-level adaptive controller is presented for the hybrid dynamic control of robotic assembly tasks. Using a hybrid dynamic model of the assembly task, velocity constraints are derived from which satisfactory velocity commands are obtained. Due to modeling errors and parametric uncertainties, the velocity commands may be erroneous and may result in suboptimal performance. Task-level adaptive control schemes, based on the occurrence of discrete events, are used to change the model parameters from which the velocity commands are determined. Two adaptive schemes are presented: the first is based on intuitive reasoning about the vector spaces involved whereas the second uses a search region that is reduced with each iteration. For the first adaptation law, asymptotic convergence to the correct model parameters is proven except for one case. This weakness motivated the development of the second adaptation law, for which asymptotic convergence is proven in all cases. Automated control of a peg-in-hole assembly task is given as an example, and simulations and experiments for this task are presented. These results demonstrate the success of the method and also indicate properties for rapid convergence.

  9. Multithreaded Model for Dynamic Load Balancing Parallel Adaptive PDE Computations

    NASA Technical Reports Server (NTRS)

    Chrisochoides, Nikos

    1995-01-01

    We present a multithreaded model for the dynamic load-balancing of numerical, adaptive computations required for the solution of Partial Differential Equations (PDE's) on multiprocessors. Multithreading is used as a means of exploring concurrency at the processor level in order to tolerate synchronization costs inherent to traditional (non-threaded) parallel adaptive PDE solvers. Our preliminary analysis for parallel, adaptive PDE solvers indicates that multithreading can be used as a mechanism to mask overheads required for the dynamic balancing of processor workloads with computations required for the actual numerical solution of the PDE's. Also, multithreading can simplify the implementation of dynamic load-balancing algorithms, a task that is very difficult for traditional data parallel adaptive PDE computations. Unfortunately, multithreading does not always simplify program complexity, often makes code reuse difficult, and increases software complexity.

  10. Recent models for adaptive personality differences: a review

    PubMed Central

    Dingemanse, Niels J.; Wolf, Max

    2010-01-01

    In this paper we review recent models that provide adaptive explanations for animal personalities: individual differences in behaviour (or suites of correlated behaviours) that are consistent over time or contexts. We start by briefly discussing patterns of variation in behaviour that have been documented in natural populations. In the main part of the paper we discuss models for personality differences that (i) explain animal personalities as adaptive behavioural responses to differences in state, (ii) investigate how feedbacks between state and behaviour can stabilize initial differences among individuals and (iii) provide adaptive explanations for animal personalities that are not based on state differences. Throughout, we focus on two basic questions. First, what is the basic conceptual idea underlying the model? Second, what are the key assumptions and predictions of the model? We conclude by discussing empirical features of personalities that have not yet been addressed by formal modelling. While this paper is primarily intended to guide empiricists through current adaptive theory, thereby stimulating empirical tests of these models, we hope it also inspires theoreticians to address aspects of personalities that have received little attention up to now. PMID:21078647

  11. Adaptive Shape Functions and Internal Mesh Adaptation for Modelling Progressive Failure in Adhesively Bonded Joints

    NASA Technical Reports Server (NTRS)

    Stapleton, Scott; Gries, Thomas; Waas, Anthony M.; Pineda, Evan J.

    2014-01-01

    Enhanced finite elements are elements with an embedded analytical solution that can capture detailed local fields, enabling more efficient, mesh independent finite element analysis. The shape functions are determined based on the analytical model rather than prescribed. This method was applied to adhesively bonded joints to model joint behavior with one element through the thickness. This study demonstrates two methods of maintaining the fidelity of such elements during adhesive non-linearity and cracking without increasing the mesh needed for an accurate solution. The first method uses adaptive shape functions, where the shape functions are recalculated at each load step based on the softening of the adhesive. The second method is internal mesh adaption, where cracking of the adhesive within an element is captured by further discretizing the element internally to represent the partially cracked geometry. By keeping mesh adaptations within an element, a finer mesh can be used during the analysis without affecting the global finite element model mesh. Examples are shown which highlight when each method is most effective in reducing the number of elements needed to capture adhesive nonlinearity and cracking. These methods are validated against analogous finite element models utilizing cohesive zone elements.

  12. Determination of background concentrations for air quality models using spectral analysis and filtering of monitoring data

    NASA Astrophysics Data System (ADS)

    Tchepel, O.; Costa, A. M.; Martins, H.; Ferreira, J.; Monteiro, A.; Miranda, A. I.; Borrego, C.

    2010-01-01

    The use of background concentrations in air pollution modelling is usually a critical issue and a source of errors. The current work proposes an approach for the estimation of background concentrations using measured air quality data decomposed into baseline and short-term components. For this purpose, the spectral density was obtained for air quality monitoring data based on Fourier series analysis. Afterwards, short-term fluctuations associated with the influence of local emissions and dispersion conditions were extracted from the original measurements using an iterative moving-average filter and taking into account the contribution of higher frequencies determined from the spectral analysis. The deterministic component obtained by the filtering is characterised by wider spatial and temporal representativeness than the original monitoring data and is assumed to be appropriate for establishing the background values. This methodology was applied to define background concentrations of particulate matter (PM10) used as input data for a local scale CFD model, and compared with an alternative approach using background concentrations provided by a mesoscale air quality modelling system. The study is focused on a selected domain within the Lisbon urban area (Portugal). The results show better performance for the microscale model when initialised by decomposed time series and demonstrate the importance of the proposed methodology in reducing the uncertainty of the model predictions. The decomposition of air quality measurements and the removal of short-term fluctuations discussed in the work is a valuable technique to determine representative background concentrations.
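
    A minimal sketch of the decomposition idea on synthetic data (the series, window choice, and iteration count are assumptions; the paper's actual spectral-analysis and filtering details are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic hourly "monitoring data": slowly varying background + local short-term signal
    t = np.arange(24 * 90)                                       # 90 days of hourly values
    baseline = 25 + 8 * np.sin(2 * np.pi * t / t.size)           # slow background component
    local = 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)  # diurnal + noise
    obs = baseline + local

    # spectral analysis: find the dominant short-term (sub-weekly) frequency
    freqs = np.fft.rfftfreq(t.size, d=1.0)                       # cycles per hour
    power = np.abs(np.fft.rfft(obs - obs.mean())) ** 2
    short = freqs > 1.0 / (24 * 7)                               # ignore periods longer than a week
    f_short = freqs[short][np.argmax(power[short])]
    window = int(round(1.0 / f_short))                           # filter width ~ one short-term period

    # iterative moving-average filter to strip the short-term fluctuations
    smooth = obs.copy()
    for _ in range(3):
        smooth = np.convolve(smooth, np.ones(window) / window, mode="same")

    # compare the recovered baseline with the true one (ignoring filter edge effects)
    rmse = np.sqrt(np.mean((smooth[72:-72] - baseline[72:-72]) ** 2))
    print(f"dominant short-term period ≈ {window} h, baseline recovery RMSE ≈ {rmse:.2f}")
    ```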

  13. Object detection in natural backgrounds predicted by discrimination performance and models

    NASA Technical Reports Server (NTRS)

    Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
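
    A toy single-channel version of the summation stage described above (the function names, the global contrast normalisation, and all numbers are assumptions; the paper's models also include Cortex-transform channels and CSF filtering, which are omitted here):

    ```python
    import numpy as np

    def minkowski_pool(diff, beta):
        """Minkowski summation (generalised vector magnitude) of absolute differences."""
        d = np.abs(diff).ravel()
        if np.isinf(beta):
            return d.max()                        # beta -> infinity: peak difference
        return (d ** beta).sum() ** (1.0 / beta)

    def detectability(background, target_plus_bg, beta=4.0, contrast_gain=True):
        """Toy discrimination metric: pooled difference between the two images."""
        bg = background.astype(float)
        diff = target_plus_bg.astype(float) - bg
        if contrast_gain:
            diff = diff / (bg.mean() + 1e-6)      # crude stand-in for a contrast gain factor
        return minkowski_pool(diff, beta)

    # example: a faint square "object" embedded in a noisy background
    rng = np.random.default_rng(2)
    bg = 50 + 10 * rng.standard_normal((64, 64))
    obj = bg.copy()
    obj[20:30, 20:30] += 5.0
    for beta in (2, 4, np.inf):
        print(f"beta={beta}: pooled difference = {detectability(bg, obj, beta):.3f}")
    ```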

  14. Hardware performance versus video quality trade-off for Gaussian mixture model based background identification systems

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore; Petra, Nicola

    2014-04-01

    Background identification is a fundamental task in many video processing systems. The Gaussian Mixture Model is a background identification algorithm that models the pixel luminance with a mixture of K Gaussian distributions. The number of Gaussian distributions determines the accuracy of the background model and the computational complexity of the algorithm. This paper compares two hardware implementations of the Gaussian Mixture Model that use three and five Gaussians per pixel. A trade-off analysis is carried out by evaluating the quality of the processed video sequences and the hardware performance. The circuits are implemented on FPGA by exploiting a state-of-the-art, hardware-oriented formulation of the Gaussian Mixture Model equations and by using truncated binary multipliers. The results suggest that the circuit that uses three Gaussian distributions provides video with good accuracy while requiring significantly fewer resources than the option that uses five Gaussian distributions per pixel.
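
    A software sketch of the per-pixel mixture update that such circuits implement in hardware (a simplified Stauffer-Grimson-style update with assumed parameter values; the fixed-point and truncated-multiplier details of the paper are not reproduced):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    class PixelGMM:
        """Per-pixel Gaussian mixture background model (simplified sketch, K Gaussians)."""
        def __init__(self, k=3, alpha=0.1, var0=36.0, match_sigma=2.5, bg_frac=0.7):
            self.alpha, self.match_sigma, self.bg_frac = alpha, match_sigma, bg_frac
            self.mean = np.zeros(k)
            self.var = np.full(k, var0)
            self.weight = np.full(k, 1.0 / k)

        def update(self, x):
            """Feed one luminance sample; return True if it is classified as background."""
            d = np.abs(x - self.mean)
            matched = d < self.match_sigma * np.sqrt(self.var)
            if matched.any():
                i = int(np.argmin(np.where(matched, d, np.inf)))   # closest matching Gaussian
                self.mean[i] += self.alpha * (x - self.mean[i])
                self.var[i] += self.alpha * ((x - self.mean[i]) ** 2 - self.var[i])
                self.weight *= (1.0 - self.alpha)
                self.weight[i] += self.alpha
            else:
                i = int(np.argmin(self.weight))                    # replace the weakest Gaussian
                self.mean[i], self.var[i], self.weight[i] = x, 36.0, 0.05
            self.weight /= self.weight.sum()
            # background = most reliable components covering bg_frac of the total weight
            order = np.argsort(-self.weight / np.sqrt(self.var))
            n_bg = int(np.searchsorted(np.cumsum(self.weight[order]), self.bg_frac)) + 1
            return matched.any() and i in order[:n_bg]

    gmm = PixelGMM()
    for _ in range(50):                                            # learn the background level
        gmm.update(100 + rng.normal(0, 2))
    print([gmm.update(v) for v in (101, 99, 180, 100, 182)])       # foreground samples at ~180
    ```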

  15. Subjective quality assessment of an adaptive video streaming model

    NASA Astrophysics Data System (ADS)

    Tavakoli, Samira; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Shahid, Muhammad; Garcia, Narciso

    2014-01-01

    With the recent increased popularity and high usage of HTTP Adaptive Streaming (HAS) techniques, various studies have been carried out in this area which generally focused on the technical enhancement of HAS technology and applications. However, the lack of a common HAS standard led to multiple proprietary approaches which have been developed by major Internet companies. In the emerging MPEG-DASH standard the packaging of the video content and the HTTP syntax have been standardized; but all the details of the adaptation behavior are left to the client implementation. Nevertheless, to design an adaptation algorithm which optimizes the viewing experience of the end user, the multimedia service providers need to know about the Quality of Experience (QoE) of different adaptation schemes. Taking this into account, the objective of this experiment was to study the QoE of a HAS-based video broadcast model. The experiment has been carried out through a subjective study of the end user response to various possible client behaviors for changing the video quality, taking different QoE-influencing factors into account. The experimental conclusions provide good insight into the QoE of different adaptation schemes which can be exploited by HAS clients for designing the adaptation algorithms.

  16. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    SciTech Connect

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
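
    A much-reduced sketch of the ensemble assimilation idea (a stochastic ensemble Kalman update on a toy one-dimensional "flux map" with invented sizes and noise levels; ADAPT itself uses the localized ensemble transform variant described in the record, which is not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n_state, n_ens, n_obs = 50, 100, 15           # state size, ensemble size, observed points
    truth = np.sin(np.linspace(0, 2 * np.pi, n_state))             # "true" flux profile
    first_guess = truth + 0.4 * rng.standard_normal(n_state)       # biased prior map
    ens = first_guess[None, :] + 0.4 * rng.standard_normal((n_ens, n_state))
    obs_idx = rng.choice(n_state, n_obs, replace=False)            # only part of the map is observed
    obs_err = 0.05
    y = truth[obs_idx] + rng.normal(0, obs_err, n_obs)

    prior_rmse = np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2))

    X = ens - ens.mean(axis=0)                                     # ensemble anomalies
    HX = X[:, obs_idx]
    P_hh = HX.T @ HX / (n_ens - 1) + obs_err**2 * np.eye(n_obs)    # innovation covariance
    P_xh = X.T @ HX / (n_ens - 1)                                  # state-observation covariance
    K = P_xh @ np.linalg.inv(P_hh)                                 # Kalman gain

    for m in range(n_ens):                                         # perturbed-observation update
        y_pert = y + rng.normal(0, obs_err, n_obs)
        ens[m] += K @ (y_pert - ens[m, obs_idx])

    post_rmse = np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2))
    print(f"ensemble-mean RMSE: prior {prior_rmse:.3f} -> posterior {post_rmse:.3f}")
    ```

    The update nudges the whole ensemble toward the sparse observations while leaving unobserved regions governed by the ensemble covariance, which is the role the record assigns to the filter in ADAPT.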

  17. Achieving runtime adaptability through automated model evolution and variant selection

    NASA Astrophysics Data System (ADS)

    Mosincat, Adina; Binder, Walter; Jazayeri, Mehdi

    2014-01-01

    Dynamically adaptive systems propose adaptation by means of variants that are specified in the system model at design time and allow for a fixed set of different runtime configurations. However, in a dynamic environment, unanticipated changes may result in the inability of the system to meet its quality requirements. To allow the system to react to these changes, this article proposes a solution for automatically evolving the system model by integrating new variants and periodically validating the existing ones based on updated quality parameters. To illustrate this approach, the article presents a BPEL-based framework using a service composition model to represent the functional requirements of the system. The framework estimates quality of service (QoS) values based on information provided by a monitoring mechanism, ensuring that changes in QoS are reflected in the system model. The article shows how the evolved model can be used at runtime to increase the system's autonomic capabilities and delivered QoS.

  18. The ADaptation and Anticipation Model (ADAM) of sensorimotor synchronization

    PubMed Central

    van der Steen, M. C. (Marieke); Keller, Peter E.

    2013-01-01

    A constantly changing environment requires precise yet flexible timing of movements. Sensorimotor synchronization (SMS)—the temporal coordination of an action with events in a predictable external rhythm—is a fundamental human skill that contributes to optimal sensory-motor control in daily life. A large body of research related to SMS has focused on adaptive error correction mechanisms that support the synchronization of periodic movements (e.g., finger taps) with events in regular pacing sequences. The results of recent studies additionally highlight the importance of anticipatory mechanisms that support temporal prediction in the context of SMS with sequences that contain tempo changes. To investigate the role of adaptation and anticipatory mechanisms in SMS we introduce ADAM: an ADaptation and Anticipation Model. ADAM combines reactive error correction processes (adaptation) with predictive temporal extrapolation processes (anticipation) inspired by the computational neuroscience concept of internal models. The combination of simulations and experimental manipulations based on ADAM creates a novel and promising approach for exploring adaptation and anticipation in SMS. The current paper describes the conceptual basis and architecture of ADAM. PMID:23772211
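
    A toy tapping model combining reactive phase correction (adaptation) with linear tempo extrapolation (anticipation); the gain value and extrapolation rule are assumptions for illustration, not the published ADAM equations:

    ```python
    import numpy as np

    def simulate_tapper(onsets, alpha=0.5, anticipate=True):
        """Toy sensorimotor synchronization: tap to a pacing sequence with tempo change."""
        intervals = np.diff(onsets)
        taps = [onsets[0]]                                   # assume the first tap is on time
        for k in range(1, len(onsets)):
            # intervals heard so far end at onset k-1, i.e. intervals[:k-1]
            if anticipate and k >= 3:
                iti = 2 * intervals[k - 2] - intervals[k - 3]   # anticipation: extrapolate tempo
            elif k >= 2:
                iti = intervals[k - 2]                          # track the last heard interval
            else:
                iti = intervals[0]                              # known starting tempo
            asyn = taps[-1] - onsets[k - 1]                     # previous asynchrony
            taps.append(taps[-1] + iti - alpha * asyn)          # adaptation: phase correction
        return np.array(taps)

    # pacing sequence with a gradual tempo change (intervals shrink from 600 ms to 480 ms)
    intervals = np.linspace(600, 480, 30)
    onsets = np.concatenate([[0.0], np.cumsum(intervals)])
    for flag in (False, True):
        taps = simulate_tapper(onsets, anticipate=flag)
        asyn = taps - onsets
        print(f"anticipation={flag}: mean |asynchrony| = {np.abs(asyn[5:]).mean():.1f} ms")
    ```

    With adaptation alone the tapper lags behind the accelerating sequence by a steady residual asynchrony; adding the anticipatory extrapolation drives the asynchrony toward zero, which is the qualitative point the record makes about combining the two mechanisms.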

  19. Modeling neural adaptation in the frog auditory system

    NASA Astrophysics Data System (ADS)

    Wotton, Janine; McArthur, Kimberly; Bohara, Amit; Ferragamo, Michael; Megela Simmons, Andrea

    2005-09-01

    Extracellular recordings from the auditory midbrain, Torus semicircularis, of the leopard frog reveal a wide diversity of tuning patterns. Some cells seem to be well suited for time-based coding of signal envelope, and others for rate-based coding of signal frequency. Adaptation to ongoing stimuli plays a significant role in shaping the frequency-dependent response rate at different levels of the frog auditory system. Anuran auditory-nerve fibers are unusual in that they reveal frequency-dependent adaptation [A. L. Megela, J. Acoust. Soc. Am. 75, 1155-1162 (1984)], and therefore provide rate-based input. In order to examine the influence of these peripheral inputs on central responses, three layers of auditory neurons were modeled to examine short-term neural adaptation to pure tones and complex signals. The response of each neuron was simulated with a leaky integrate-and-fire model, and adaptation was implemented by means of an increasing threshold. Auditory-nerve fibers, dorsal medullary nucleus neurons, and toral cells were simulated and connected in three ascending layers. Modifying the adaptation properties of the peripheral fibers dramatically alters the response at the midbrain. [Work supported by NOHR to M.J.F.; Gustavus Presidential Scholarship to K.McA.; NIH DC05257 to A.M.S.]
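
    A single-neuron sketch of the mechanism named in the record, a leaky integrate-and-fire unit whose threshold increases after each spike (all time constants and the input level are arbitrary illustrative values, not the model's parameters):

    ```python
    import numpy as np

    def lif_adaptive(stim, dt=1e-3, tau_m=0.02, tau_th=0.2, th0=1.0, dth=0.5):
        """Leaky integrate-and-fire neuron with a spike-triggered increasing threshold.

        Each spike raises the threshold by dth; the threshold then decays back to th0
        with time constant tau_th, producing short-term adaptation of the firing rate.
        """
        v, th = 0.0, th0
        spikes = []
        for i, s in enumerate(stim):
            v += dt * (-v / tau_m + s)                 # leaky integration of the input
            th += dt * (th0 - th) / tau_th             # threshold relaxes toward baseline
            if v >= th:
                spikes.append(i * dt)
                v = 0.0                                # reset the membrane
                th += dth                              # adaptation: raise the threshold
        return np.array(spikes)

    # constant "tone" input: the firing rate should decline over time (adaptation)
    stim = np.full(1000, 80.0)                         # 1 s of constant drive, 1 ms steps
    spk = lif_adaptive(stim)
    print(f"spikes in first 500 ms: {(spk < 0.5).sum()}, in last 500 ms: {(spk >= 0.5).sum()}")
    ```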

  20. Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies.

    PubMed

    Yu, Chao; Tan, Guozhen; Lv, Hongtao; Wang, Zhen; Meng, Jun; Hao, Jianye; Ren, Fenghui

    2016-01-01

    Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed, in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, which is considered to be the most successful opinion in the neighbourhood, can be generated based on the principle of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people's adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and some critical factors such as size of opinion space and network topology can have significant influences on opinion dynamics. PMID:27282089
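
    A toy sketch loosely inspired by this setup (ring topology, coordination-game payoffs, and the imitation rule are all assumptions, not the paper's algorithm): agents play with a random neighbour and then imitate the most successful, "guiding" opinion in their neighbourhood.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    n, q, steps = 100, 4, 400
    opinion = rng.integers(q, size=n)

    def neighbours(i):
        return [(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n]

    for _ in range(steps):
        payoff = np.zeros(n)
        for i in range(n):                                    # trial-and-error interactions
            j = rng.choice(neighbours(i))
            payoff[i] = 1.0 if opinion[i] == opinion[j] else 0.0
        new_opinion = opinion.copy()
        for i in range(n):
            nbrs = neighbours(i) + [i]
            guide = max(nbrs, key=lambda k: payoff[k])        # guiding (most successful) agent
            if payoff[i] < payoff[guide]:                     # own opinion underperforms the norm
                new_opinion[i] = opinion[guide]
        opinion = new_opinion

    share = np.bincount(opinion, minlength=q).max()
    print(f"most common opinion held by {share}/{n} agents")
    ```

    Over repeated rounds the agents tend to coalesce around a shared opinion, mirroring the consensus-formation behaviour the record reports.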

  1. Communicating to Farmers about Skin Cancer: The Behavior Adaptation Model.

    ERIC Educational Resources Information Center

    Parrott, Roxanne; Monahan, Jennifer; Ainsworth, Stuart; Steiner, Carol

    1998-01-01

    States health campaign messages designed to encourage behavior adaptation have greater likelihood of success than campaigns promoting avoidance of at-risk behaviors that cannot be avoided. Tests a model of health risk behavior using four different behaviors in a communication campaign aimed at reducing farmers' risk for skin cancer--questions…

  2. Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies

    PubMed Central

    Yu, Chao; Tan, Guozhen; Lv, Hongtao; Wang, Zhen; Meng, Jun; Hao, Jianye; Ren, Fenghui

    2016-01-01

    Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed, in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, which is considered to be the most successful opinion in the neighbourhood, can be generated based on the principle of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people’s adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and some critical factors such as size of opinion space and network topology can have significant influences on opinion dynamics. PMID:27282089

  3. Water-energy modelling: Adaptation to water scarcity

    NASA Astrophysics Data System (ADS)

    Pereira-Cardenal, Silvio J.

    2016-02-01

    Combined water and power models are important to predict how changes in one resource will impact the other. A new global assessment of hydropower and thermoelectric power plants predicts future vulnerabilities arising from climate-change-induced water constraints and tests possible adaptation options.

  4. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  5. Modelling Adaptive Learning Behaviours for Consensus Formation in Human Societies

    NASA Astrophysics Data System (ADS)

    Yu, Chao; Tan, Guozhen; Lv, Hongtao; Wang, Zhen; Meng, Jun; Hao, Jianye; Ren, Fenghui

    2016-06-01

    Learning is an important capability of humans and plays a vital role in human society for forming beliefs and opinions. In this paper, we investigate how learning affects the dynamics of opinion formation in social networks. A novel learning model is proposed, in which agents can dynamically adapt their learning behaviours in order to facilitate the formation of consensus among them, and thus establish a consistent social norm in the whole population more efficiently. In the model, agents adapt their opinions through trial-and-error interactions with others. By exploiting historical interaction experience, a guiding opinion, which is considered to be the most successful opinion in the neighbourhood, can be generated based on the principle of evolutionary game theory. Then, depending on the consistency between its own opinion and the guiding opinion, a focal agent can realize whether its opinion complies with the social norm (i.e., the majority opinion that has been adopted) in the population, and adapt its behaviours accordingly. The highlight of the model is that it captures the essential features of people’s adaptive learning behaviours during the evolution and formation of opinions. Experimental results show that the proposed model can facilitate the formation of consensus among agents, and some critical factors such as size of opinion space and network topology can have significant influences on opinion dynamics.

  6. A model of the gamma-ray background on the BATSE experiment.

    NASA Astrophysics Data System (ADS)

    Rubin, B. C.; Lei, F.; Fishman, G. J.; Finger, M. H.; Harmon, B. A.; Kouveliotou, C.; Paciesas, W. S.; Pendleton, G. N.; Wilson, R. B.; Zhang, S. N.

    1996-12-01

    The BATSE experiment on the Compton Gamma-Ray Observatory is a nearly uninterrupted all-sky monitor in the hard X-ray/gamma-ray energy range. Count rate data continuously transmitted to the ground from Low Earth Orbit (altitude ~450 km) is dominated, in the 20-300 keV energy range, by diffuse cosmic background modulated by blocking effects of the Earth. Other background sources include atmospheric gamma-rays and the decay of radionuclides created in cosmic ray and radiation belt trapped particle interactions with the detector. Numerous discrete cosmic sources are also present in these data. In this paper we describe a semi-empirical background model which has been used to reduce the effect of dominant background sources. The use of this model can increase the sensitivity of the experiment to sources observed with the Earth occultation technique; to long period pulsed sources; to analysis of flickering noise; and to transient events.

  7. An adaptive multi-feature segmentation model for infrared image

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa

    2016-04-01

    Active contour models (ACM) have been extensively applied to image segmentation, but conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional and drive the contour evolution. Considering the limitations of original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF, built from local statistical features, with a signed pressure function carrying global information. Experimental results demonstrate that the proposed method compensates for the shortcomings of the original methods and achieves desirable results in segmenting infrared images.

  8. On fractional order composite model reference adaptive control

    NASA Astrophysics Data System (ADS)

    Wei, Yiheng; Sun, Zhenyuan; Hu, Yangsheng; Wang, Yong

    2016-08-01

    This paper presents a novel composite model reference adaptive control approach for a class of fractional order linear systems with unknown constant parameters. The method extends standard model reference adaptive control. The parameter estimation error of our method depends on both the tracking error and the prediction error, whereas the existing method depends only on the tracking error; this gives our method better transient performance in the sense of generating smooth system output. With the aid of the continuous frequency distributed model, stability of the proposed approach is established in the Lyapunov sense. Furthermore, the convergence of the model parameter estimates is presented, on the premise that the closed-loop control system is stable. Finally, numerical simulation examples are given to demonstrate the effectiveness of the proposed schemes.

  9. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into 13 dimensional space constructed by geographical coordinates and geo-features calculated from digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out using a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
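
    A minimal sketch of the underlying estimator and of the feature-selection use of anisotropic bandwidths (toy data and bandwidth values are assumptions; the paper's actual optimisation over all feature subsets is not reproduced):

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigmas):
        """GRNN = Nadaraya-Watson kernel regression with one bandwidth per dimension.

        A very large bandwidth effectively switches the corresponding feature off,
        which is how anisotropic (adaptive) GRNN can be used for feature selection.
        """
        d = (X_query[:, None, :] - X_train[None, :, :]) / sigmas      # scaled differences
        w = np.exp(-0.5 * np.sum(d ** 2, axis=-1))                    # Gaussian kernel weights
        return (w @ y_train) / (w.sum(axis=1) + 1e-12)

    def loo_rmse(X, y, sigmas):
        """Leave-one-out cross-validation error used to rank bandwidths / feature sets."""
        errs = []
        for i in range(len(y)):
            mask = np.arange(len(y)) != i
            pred = grnn_predict(X[mask], y[mask], X[i:i + 1], sigmas)
            errs.append((pred[0] - y[i]) ** 2)
        return float(np.sqrt(np.mean(errs)))

    # toy data: the target depends on feature 0 only; feature 1 is irrelevant
    rng = np.random.default_rng(6)
    X = rng.uniform(-1, 1, (80, 2))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(80)

    print("both features  :", loo_rmse(X, y, sigmas=np.array([0.2, 0.2])))
    print("feature 0 only :", loo_rmse(X, y, sigmas=np.array([0.2, 1e6])))
    ```

    Comparing the leave-one-out errors of the two bandwidth settings shows how the cross-validation criterion can identify which features are relevant.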

  10. Comparison of background ozone estimates over the western United States based on two separate model methodologies

    NASA Astrophysics Data System (ADS)

    Dolwick, Pat; Akhtar, Farhan; Baker, Kirk R.; Possiel, Norm; Simon, Heather; Tonnesen, Gail

    2015-05-01

    Two separate air quality model methodologies for estimating background ozone levels over the western U.S. are compared in this analysis. The first approach is a direct sensitivity modeling approach that considers the ozone levels that would remain after certain emissions are entirely removed (i.e., zero-out modeling). The second approach is based on an instrumented air quality model which tracks the formation of ozone within the simulation and assigns the source of that ozone to pre-identified categories (i.e., source apportionment modeling). This analysis focuses on a definition of background referred to as U.S. background (USB) which is designed to represent the influence of all sources other than U.S. anthropogenic emissions. Two separate modeling simulations were completed for an April-October 2007 period, both focused on isolating the influence of sources other than domestic manmade emissions. The zero-out modeling was conducted with the Community Multiscale Air Quality (CMAQ) model and the source apportionment modeling was completed with the Comprehensive Air Quality Model with Extensions (CAMx). Our analysis shows that the zero-out and source apportionment techniques provide relatively similar estimates of the magnitude of seasonal mean daily 8-h maximum U.S. background ozone at locations in the western U.S. when base case model ozone biases are considered. The largest differences between the two sets of USB estimates occur in urban areas where interactions with local NOx emissions can be important, especially when ozone levels are relatively low. Both methodologies conclude that seasonal mean daily 8-h maximum U.S. background ozone levels can be as high as 40-45 ppb over rural portions of the western U.S. Background fractions tend to decrease as modeled total ozone concentrations increase, with typical fractions of 75-100 percent on the lowest ozone days (<25 ppb) and typical fractions between 30 and 50% on days with ozone above 75 ppb. The finding that

  11. Evaluating mallard adaptive management models with time series

    USGS Publications Warehouse

    Conn, P.B.; Kendall, W.L.

    2004-01-01

    Wildlife practitioners concerned with midcontinent mallard (Anas platyrhynchos) management in the United States have instituted a system of adaptive harvest management (AHM) as an objective format for setting harvest regulations. Under the AHM paradigm, predictions from a set of models that reflect key uncertainties about processes underlying population dynamics are used in coordination with optimization software to determine an optimal set of harvest decisions. Managers use comparisons of the predictive abilities of these models to gauge the relative truth of different hypotheses about density-dependent recruitment and survival, with better-predicting models given more weight in the determination of harvest regulations. We tested the effectiveness of this strategy by examining convergence rates of 'predictor' models when the true model for population dynamics was known a priori. We generated time series for cases when the a priori model was 1 of the predictor models as well as for several cases when the a priori model was not in the model set. We further examined the addition of different levels of uncertainty into the variance structure of predictor models, reflecting different levels of confidence about estimated parameters. We showed that in certain situations, the model-selection process favors a predictor model that incorporates the hypotheses of additive harvest mortality and weakly density-dependent recruitment, even when the model is not used to generate data. Higher levels of predictor model variance led to decreased rates of convergence to the model that generated the data, but model weight trajectories were in general more stable. We suggest that predictive models should incorporate all sources of uncertainty about estimated parameters, that the variance structure should be similar for all predictor models, and that models with different functional forms for population dynamics should be considered for inclusion in predictor model sets. All of these
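
    A small sketch of the weight-updating mechanism being evaluated (both candidate "models", their dynamics, and all numbers are invented stand-ins, not the actual mallard AHM model set):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Two illustrative population models (stand-ins for, e.g., additive vs. compensatory
    # harvest-mortality hypotheses).
    def additive(n, h):      return n + 0.3 * n * (1 - n / 10.0) - h
    def compensatory(n, h):  return n + 0.3 * n * (1 - n / 10.0) - 0.4 * h

    models = {"additive": additive, "compensatory": compensatory}
    weights = {k: 0.5 for k in models}          # equal initial credibility
    sigma = 0.4                                 # assumed prediction standard deviation

    n_pop = 6.0                                 # data are generated by the "additive" model
    for year in range(25):
        harvest = rng.uniform(0.2, 0.6)
        n_next = additive(n_pop, harvest) + rng.normal(0, 0.2)    # observed population
        # Bayesian weight update from each model's one-step-ahead prediction
        like = {k: np.exp(-0.5 * ((n_next - f(n_pop, harvest)) / sigma) ** 2)
                for k, f in models.items()}
        z = sum(weights[k] * like[k] for k in models)
        weights = {k: weights[k] * like[k] / z for k in models}
        n_pop = n_next

    print({k: round(w, 3) for k, w in weights.items()})           # weight shifts toward "additive"
    ```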

  12. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for the prediction of internal one-dimensional, two-dimensional, and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with the Lower-Upper factorization scheme in the development of the ALUNS computer code. The calculations of one-dimensional shock tube wave propagation and two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the developed scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network by a grid speed technique. This generic adaptive grid generator is also applied in the PARC and FDNS codes, and the computational results for solid rocket nozzle flowfield and crystal growth modeling by those codes will also be presented at the conference. This research work is being supported by NASA/MSFC.

  13. OMEGA: The operational multiscale environment model with grid adaptivity

    SciTech Connect

    Bacon, D.P.

    1995-07-01

    This review talk describes the OMEGA code, used for weather simulation and the modeling of aerosol transport through the atmosphere. OMEGA employs a 3D mesh of wedge-shaped elements (triangles when viewed from above) that adapt with time. Because wedges are laid out in layers of triangular elements, the scheme can utilize structured storage and differencing techniques along the elevation coordinate, and is thus a hybrid of structured and unstructured methods. The utility of adaptive gridding in this model, near geographic features such as coastlines, where material properties change discontinuously, is illustrated. Temporal adaptivity was used additionally to track moving internal fronts, such as clouds of aerosol contaminants. The author also discusses limitations specific to this problem, including manipulation of huge data bases and fixed turn-around times. In practice, the latter requires a carefully tuned optimization between accuracy and computation speed.

  14. Learning Adaptive Forecasting Models from Irregularly Sampled Multivariate Clinical Data

    PubMed Central

    Liu, Zitao; Hauskrecht, Milos

    2016-01-01

    Building accurate predictive models of clinical multivariate time series is crucial for understanding the patient's condition, the dynamics of a disease, and clinical decision making. A challenging aspect of this process is that the model should be flexible and adaptive enough to reflect patient-specific temporal behaviors well, even when the available patient-specific data are sparse and cover only a short span. To address this problem we propose and develop an adaptive two-stage forecasting approach for modeling multivariate, irregularly sampled clinical time series of varying lengths. The proposed model (1) learns the population trend from a collection of time series for past patients; (2) captures individual-specific short-term multivariate variability; and (3) adapts by automatically adjusting its predictions based on new observations. The proposed forecasting model is evaluated on a real-world clinical time series dataset. The results demonstrate the benefits of our approach on the prediction tasks for multivariate, irregularly sampled clinical time series, and show that it can outperform both the population-based and patient-specific time series prediction models in terms of prediction accuracy. PMID:27525189

  15. Missile guidance law design using adaptive cerebellar model articulation controller.

    PubMed

    Lin, Chih-Min; Peng, Ya-Fu

    2005-05-01

    An adaptive cerebellar model articulation controller (CMAC) is proposed for command to line-of-sight (CLOS) missile guidance law design. In this design, the three-dimensional (3-D) CLOS guidance problem is formulated as a tracking problem of a time-varying nonlinear system. The adaptive CMAC control system is comprised of a CMAC and a compensation controller. The CMAC control is used to imitate a feedback linearization control law and the compensation controller is utilized to compensate the difference between the feedback linearization control law and the CMAC control. The online adaptive law is derived based on the Lyapunov stability theorem to learn the weights of receptive-field basis functions in CMAC control. In addition, in order to relax the requirement of approximation error bound, an estimation law is derived to estimate the error bound. Then the adaptive CMAC control system is designed to achieve satisfactory tracking performance. Simulation results for different engagement scenarios illustrate the validity of the proposed adaptive CMAC-based guidance law. PMID:15940993

  16. A model of excitation and adaptation in bacterial chemotaxis.

    PubMed Central

    Hauri, D C; Ross, J

    1995-01-01

    We present a model of the chemotactic mechanism of Escherichia coli that exhibits both initial excitation and eventual complete adaptation to any and all levels of stimulus ("exact" adaptation). In setting up the reaction network, we use only known interactions and experimentally determined cytosolic concentrations. Whenever possible, rate coefficients are first assigned experimentally measured values; second, we permit some variation in these rate coefficients by using a multiple-well optimization technique and incremental adjustment to obtain values that are sufficient to engender initial response to stimuli (excitation) and an eventual return of behavior to baseline (adaptation). The predictions of the model are similar to the observed behavior of wild-type bacteria in regard to the time scale of excitation in the presence of both attractant and repellent. The model predicts a weaker response to attractant than that observed experimentally, and the time scale of adaptation does not depend as strongly upon stimulant concentration as does that for wild-type bacteria. The mechanism responsible for long-term adaptation is local rather than global: on addition of a repellent or attractant, the receptor types not sensitive to that attractant or repellent do not change their average methylation level in the long term, although transient changes do occur. By carrying out a phenomenological simulation of bacterial chemotaxis, we find that the model is insufficiently sensitive to effect taxis in a gradient of attractant. However, by arbitrarily increasing the sensitivity of the motor to the tumble effector (phosphorylated CheY), we can obtain chemotactic behavior. PMID:7696522

  17. Numerical modeling of plasma plume evolution against ambient background gas in laser blow off experiments

    SciTech Connect

    Patel, Bhavesh G.; Das, Amita; Kaw, Predhiman; Singh, Rajesh; Kumar, Ajai

    2012-07-15

    Two dimensional numerical modelling based on simplified hydrodynamic evolution for an expanding plasma plume (created by laser blow off) against an ambient background gas has been carried out. A comparison with experimental observations shows that these simulations capture most features of the plasma plume expansion. The plume location and other gross features are reproduced as per the experimental observation in quantitative detail. The plume shape evolution and its dependence on the ambient background gas are in good qualitative agreement with the experiment. This suggests that a simplified hydrodynamic expansion model is adequate for the description of plasma plume expansion.

  18. Statistical models for LWIR hyperspectral backgrounds and their applications in chemical agent detection

    NASA Astrophysics Data System (ADS)

    Manolakis, D.; Jairam, L. G.; Zhang, D.; Rossacci, M.

    2007-04-01

    Remote detection of chemical vapors in the atmosphere has a wide range of civilian and military applications. In the past few years there has been significant interest in the detection of effluent plumes using hyperspectral imaging spectroscopy in the 8-13 μm atmospheric window. A major obstacle in the full exploitation of this technology is the fact that everything in the infrared is a source of radiation. As a result, the emission from the gases of interest is always mixed with emission by the more abundant atmospheric constituents and by other objects in the sensor field of view. The radiance fluctuations in this background emission constitute an additional source of interference which is much stronger than the detector noise. In this paper we develop and evaluate parametric models for the statistical characterization of LWIR hyperspectral backgrounds. We consider models based on the theory of elliptically contoured distributions. Both models can handle heavy tails, which is a key statistical feature of hyperspectral imaging backgrounds. The paper provides a concise description of the underlying models, the algorithms used to estimate their parameters from the background spectral measurements, and the use of the developed models in the design and evaluation of chemical warfare agent detection algorithms.
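
    A small, entirely synthetic illustration of why the heavy-tail property matters (this is not the paper's estimation algorithm): if the background really follows an elliptically contoured multivariate t distribution, a Gaussian model badly underpredicts the false-alarm rate of a Mahalanobis-distance detector.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)

    # Toy LWIR-like background: d-band spectra drawn from a heavy-tailed elliptically
    # contoured distribution (multivariate t with nu degrees of freedom).
    d, n, nu = 10, 20000, 4.0
    A = 0.3 * rng.standard_normal((d, d))
    cov = A @ A.T + np.eye(d)
    L = np.linalg.cholesky(cov)
    g = rng.standard_normal((n, d)) @ L.T
    chi = rng.chisquare(nu, size=n) / nu
    x = g / np.sqrt(chi)[:, None]                        # multivariate-t background samples

    # Mahalanobis distances under a fitted Gaussian model
    mu, S = x.mean(axis=0), np.cov(x, rowvar=False)
    r2 = np.einsum('ij,jk,ik->i', x - mu, np.linalg.inv(S), x - mu)

    # threshold set for a 0.1% false-alarm rate under the Gaussian assumption
    thr = stats.chi2.ppf(0.999, df=d)
    print(f"Gaussian model predicts 0.1% exceedances, observed {(r2 > thr).mean():.2%}")
    ```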

  19. Audibility of time-varying signals in time-varying backgrounds: Model and data

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.; Glasberg, Brian R.

    2001-05-01

    We have described a model for calculating the partial loudness of a steady signal in the presence of a steady background sound [Moore et al., J. Audio Eng. Soc. 45, 224-240 (1997)]. We have also described a model for calculating the loudness of time-varying signals [B. R. Glasberg and B. C. J. Moore, J. Audio Eng. Soc. 50, 331-342 (2002)]. These two models have been combined to allow calculation of the partial loudness of a time-varying signal in the presence of a time-varying background. To evaluate the model, psychometric functions for the detection of a variety of time-varying signals (e.g., telephone ring tones) have been measured in a variety of background sounds sampled from everyday listening situations, using a two-alternative forced-choice task. The different signals and backgrounds were interleaved, to create stimulus uncertainty, as would occur in everyday life. The data are used to relate the detectability index, d', to the calculated partial loudness. In this way, the model can be used to predict the detectability of any signal, based on its calculated partial loudness. [Work supported by MRC (UK) and by Nokia.]

  20. A new adaptive hybrid electromagnetic damper: modelling, optimization, and experiment

    NASA Astrophysics Data System (ADS)

    Asadi, Ehsan; Ribeiro, Roberto; Behrad Khamesee, Mir; Khajepour, Amir

    2015-07-01

    This paper presents the development of a new electromagnetic hybrid damper which provides regenerative adaptive damping force for various applications. Recently, the introduction of electromagnetic technologies to damping systems has provided researchers with new opportunities for the realization of adaptive semi-active damping systems with the added benefit of energy recovery. In this research, a hybrid electromagnetic damper is proposed. The hybrid damper is configured to operate with viscous and electromagnetic subsystems. The viscous medium provides a bias and fail-safe damping force while the electromagnetic component adds adaptability and the capacity for regeneration to the hybrid design. The electromagnetic component is modeled and analyzed using analytical (lumped equivalent magnetic circuit) and electromagnetic finite element method (FEM) (COMSOL® software package) approaches. By implementing both modeling approaches, an optimization for the geometric aspects of the electromagnetic subsystem is obtained. Based on the proposed electromagnetic hybrid damping concept and the preliminary optimization solution, a prototype is designed and fabricated. A good agreement is observed between the experimental and FEM results for the magnetic field distribution and electromagnetic damping forces. These results validate the accuracy of the modeling approach and the preliminary optimization solution. An analytical model is also presented for the viscous damping force, and is compared with experimental results. The results show that the damper is able to produce damping coefficients of 1300 and 0-238 N s m^-1 through the viscous and electromagnetic components, respectively.

  1. Adaptation of frequency-domain readout for Transition Edge Sensor bolometers for the POLARBEAR-2 Cosmic Microwave Background experiment

    NASA Astrophysics Data System (ADS)

    Hattori, Kaori; Arnold, Kam; Barron, Darcy; Dobbs, Matt; de Haan, Tijmen; Harrington, Nicholas; Hasegawa, Masaya; Hazumi, Masashi; Holzapfel, William L.; Keating, Brian; Lee, Adrian T.; Morii, Hideki; Myers, Michael J.; Smecher, Graeme; Suzuki, Aritoki; Tomaru, Takayuki

    2013-12-01

    The POLARBEAR-2 Cosmic Microwave Background (CMB) experiment aims to observe B-mode polarization with high sensitivity to explore gravitational lensing of CMB and inflationary gravitational waves. POLARBEAR-2 is an upgraded experiment based on POLARBEAR-1, which had first light in January 2012. For POLARBEAR-2, we will build a receiver that has 7588 Transition Edge Sensor (TES) bolometers coupled to two-band (95 and 150 GHz) polarization-sensitive antennas. For the large array's readout, we employ digital frequency-domain multiplexing and multiplex 32 bolometers through a single superconducting quantum interference device (SQUID). An 8-bolometer frequency-domain multiplexing readout has been deployed with the POLARBEAR-1 experiment. Extending that architecture to 32 bolometers requires an increase in the bandwidth of the SQUID electronics to 3 MHz. To achieve this increase in bandwidth, we use Digital Active Nulling (DAN) on the digital frequency multiplexing platform. In this paper, we present requirements and improvements on parasitic inductance and resistance of cryogenic wiring and capacitors used for modulating bolometers. These components are problematic above 1 MHz. We also show that our system is able to bias a bolometer in its superconducting transition at 3 MHz.

  2. Adaptive multiscale model reduction with Generalized Multiscale Finite Element Methods

    NASA Astrophysics Data System (ADS)

    Chung, Eric; Efendiev, Yalchin; Hou, Thomas Y.

    2016-09-01

    In this paper, we discuss a general multiscale model reduction framework based on multiscale finite element methods. We give a brief overview of related multiscale methods. Due to page limitations, the overview focuses on a few related methods and is not intended to be comprehensive. We present a general adaptive multiscale model reduction framework, the Generalized Multiscale Finite Element Method. Besides the method's basic outline, we discuss some important ingredients needed for the method's success. We also discuss several applications. The proposed method allows performing local model reduction in the presence of high contrast and no scale separation.

  3. Cosmic microwave background and supernova constraints on quintessence: Concordance regions and target models

    NASA Astrophysics Data System (ADS)

    Caldwell, Robert R.; Doran, Michael

    2004-05-01

    We perform a detailed comparison of the Wilkinson Microwave Anisotropy Probe measurements of the cosmic microwave background (CMB) temperature and polarization anisotropy with the predictions of quintessence cosmological models of dark energy. We consider a wide range of quintessence models, including a constant equation of state, a simply parametrized, time-evolving equation of state, a class of models of early quintessence, and scalar fields with an inverse-power law potential. We also provide a joint fit to the Cosmic Background Imager (CBI) and Arcminute Cosmology Bolometer Array Receiver (ACBAR) CMB data, and the type Ia supernovae. Using these select constraints we identify viable, target models which should prove useful for numerical studies of large scale structure formation, and to rapidly estimate the impact on the concordance region when new or improved observations become available.

  4. Modeled summer background concentration nutrients and suspended sediment in the mid-continent (USA) great rivers

    EPA Science Inventory

    We used regression models to predict background concentrations of four water quality indicators: total nitrogen (N), total phosphorus (P), chloride, and total suspended solids (TSS), in the mid-continent (USA) great rivers, the Upper Mississippi, the Lower Missouri, and the Ohio. F...

  5. Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.

  6. Nonresonant Background in Isobaric Models of Photoproduction of η-Mesons on Nucleons

    NASA Astrophysics Data System (ADS)

    Tryasuchev, V. A.; Alekseev, B. A.; Yakovleva, V. S.; Kondratyeva, A. G.

    2016-07-01

    Within the framework of isobaric models of pseudoscalar meson photoproduction, the nonresonant background of photoproduction of η-mesons on nucleons is investigated as a function of energy. A bound on the magnitude of the pseudoscalar coupling constant of the η-meson with a nucleon is obtained: g_{ηNN}^2/4π ≤ 0.01, and a bound on vector meson exchange models is also obtained.

  7. Nonresonant Background in Isobaric Models of Photoproduction of η-Mesons on Nucleons

    NASA Astrophysics Data System (ADS)

    Tryasuchev, V. A.; Alekseev, B. A.; Yakovleva, V. S.; Kondratyeva, A. G.

    2016-07-01

    Within the framework of isobaric models of pseudoscalar meson photoproduction, the nonresonant background of photoproduction of η-mesons on nucleons is investigated as a function of energy. A bound on the magnitude of the pseudoscalar coupling constant of the η-meson with a nucleon is obtained: g_{ηNN}^2/4π ≤ 0.01, and a bound on vector meson exchange models is also obtained.

  8. On a combined adaptive tetrahedral tracing and edge diffraction model

    NASA Astrophysics Data System (ADS)

    Hart, Carl R.

    A major challenge in architectural acoustics is the unification of diffraction models and geometric acoustics. For example, geometric acoustics is insufficient to quantify the scattering characteristics of acoustic diffusors. Typically the time-independent boundary element method (BEM) is the method of choice. In contrast, time-domain computations are of interest for characterizing both the spatial and temporal scattering characteristics of acoustic diffusors. Hence, a method is sought that predicts acoustic scattering in the time-domain. A prediction method, which combines an advanced image source method and an edge diffraction model, is investigated for the prediction of time-domain scattering. Adaptive tetrahedral tracing is an advanced image source method that generates image sources through an adaptive process. Propagating tetrahedral beams adapt to ensonified geometry mapping the geometric sound field in space and along boundaries. The edge diffraction model interfaces with the adaptive tetrahedral tracing process by the transfer of edge geometry and visibility information. Scattering is quantified as the contribution of secondary sources along a single or multiple interacting edges. Accounting for a finite number of diffraction permutations approximates the scattered sound field. Superposition of the geometric and scattered sound fields results in a synthesized impulse response between a source and a receiver. Evaluation of the prediction technique involves numerical verification and numerical validation. Numerical verification is based upon a comparison with analytic and numerical (BEM) solutions for scattering geometries. Good agreement is shown for the selected scattering geometries. Numerical validation is based upon experimentally determined scattered impulse responses of acoustic diffusors. Experimental data suggests that the predictive model is appropriate for high-frequency predictions. For the experimental determination of the scattered impulse

  9. Algebraic turbulence modeling for unstructured and adaptive meshes

    NASA Technical Reports Server (NTRS)

    Mavriplis, Dimitri J.

    1990-01-01

    An algebraic turbulence model based on the Baldwin-Lomax model has been implemented for use on unstructured grids. The implementation is based on the use of local background structured turbulence meshes. At each time-step, flow variables are interpolated from the unstructured mesh onto the background structured meshes, the turbulence model is executed on these meshes, and the resulting eddy viscosity values are interpolated back to the unstructured mesh. Modifications to the algebraic model were required to enable the treatment of more complicated flows, such as confluent boundary layers and wakes. The model is used in conjunction with an efficient unstructured multigrid finite-element Navier-Stokes solver in order to compute compressible turbulent flows on fully unstructured meshes. Solutions about single and multiple element airfoils are obtained and compared with experimental data.

  10. Language Model Combination and Adaptation Using Weighted Finite State Transducers

    NASA Technical Reports Server (NTRS)

    Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.

    2010-01-01

    In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.

  11. Network and adaptive system of systems modeling and analysis.

    SciTech Connect

    Lawton, Craig R.; Campbell, James E. Dr.; Anderson, Dennis James; Eddy, John P.

    2007-05-01

    This report documents the results of an LDRD program entitled ''Network and Adaptive System of Systems Modeling and Analysis'' that was conducted during FY 2005 and FY 2006. The purpose of this study was to determine and implement ways to incorporate network communications modeling into existing System of Systems (SoS) modeling capabilities. Current SoS modeling, particularly for the Future Combat Systems (FCS) program, is conducted under the assumption that communication between the various systems is always possible and occurs instantaneously. A more realistic representation of these communications allows for better, more accurate simulation results. The current approach to meeting this objective has been to use existing capabilities to model network hardware reliability and adding capabilities to use that information to model the impact on the sustainment supply chain and operational availability.

  12. An adaptive distance measure for use with nonparametric models

    SciTech Connect

    Garvey, D. R.; Hines, J. W.

    2006-07-01

    Distance measures perform a critical task in nonparametric, locally weighted regression. Locally weighted regression (LWR) models are a form of 'lazy learning' which construct a local model 'on the fly' by comparing a query vector to historical, exemplar vectors according to a three-step process. First, the distance of the query vector to each of the exemplar vectors is calculated. Next, these distances are passed to a kernel function, which converts the distances to similarities or weights. Finally, the model output or response is calculated by performing locally weighted polynomial regression. To date, traditional distance measures, such as the Euclidean, weighted Euclidean, and L1-norm have been used as the first step in the prediction process. Since these measures do not take into consideration sensor failures and drift, they are inherently ill-suited for application to 'real world' systems. This paper describes one such LWR model, namely auto-associative kernel regression (AAKR), and describes a new, Adaptive Euclidean distance measure that can be used to dynamically compensate for faulty sensor inputs. In this new distance measure, the query observations that lie outside of the training range (i.e. outside the minimum and maximum input exemplars) are dropped from the distance calculation. This allows the distance calculation to be robust to sensor drifts and failures, in addition to providing a method for managing inputs that exceed the training range. In this paper, AAKR models using the standard and Adaptive Euclidean distance are developed and compared for the pressure system of an operating nuclear power plant. It is shown that when the standard Euclidean distance is used for data with failed inputs, significant errors in the AAKR predictions can result. By using the Adaptive Euclidean distance it is shown that high fidelity predictions are possible, in spite of the input failure. In fact, it is shown that with the Adaptive Euclidean distance prediction
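
    As an illustration of the adaptive distance idea described in this record, the sketch below drops query dimensions that fall outside the training range before computing the Euclidean distance, then forms a kernel-weighted (zeroth-order locally weighted) estimate from the exemplars. It is not the authors' implementation; the kernel, bandwidth, and toy data are assumptions.

```python
import numpy as np

def aakr_predict(X_train, Y_train, query, h=1.0):
    """Auto-associative kernel regression with an 'adaptive Euclidean' distance:
    query inputs outside the training range are excluded from the distance
    calculation (a sketch of the idea, not the original code)."""
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    valid = (query >= lo) & (query <= hi)           # drop failed/drifted sensors
    if not valid.any():
        raise ValueError("no query inputs fall inside the training range")
    d = np.sqrt(((X_train[:, valid] - query[valid]) ** 2).sum(axis=1))
    w = np.exp(-0.5 * (d / h) ** 2)                 # Gaussian kernel weights
    w /= w.sum()
    return w @ Y_train                              # locally weighted estimate

# toy example: three correlated "sensor" channels, auto-associative reconstruction
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
Y = X                                # the model reconstructs its own inputs
q = np.array([0.2, -0.1, 25.0])      # third sensor has failed high
print(aakr_predict(X, Y, q))
```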

  13. Modeling of beam loss in Tevatron and backgrounds in the BTeV detector

    SciTech Connect

    Alexandr I. Drozhdin; Nikolai V. Mokhov

    2004-07-07

    Detailed STRUCT simulations are performed on beam loss rates in the vicinity of the BTeV detector in the Tevatron CO interaction region due to beam-gas nuclear elastic interactions and out-scattering from the collimation system. Corresponding showers induced in the machine components and background rates in BTeV are modeled with the MARS14 code. It is shown that the combination of a steel collimator and concrete shielding wall located in front of the detector can reduce the accelerator-related background rates in the detector by an order of magnitude.

  14. ForCent Model Development and Testing using the Enriched Background Isotope Study (EBIS) Experiment

    SciTech Connect

    Parton, William; Hanson, Paul J; Swanston, Chris; Torn, Margaret S.; Trumbore, Susan E.; Riley, William J.; Kelly, Robin

    2010-01-01

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  15. ForCent model development and testing using the Enriched Background Isotope Study experiment

    SciTech Connect

    Parton, W.J.; Hanson, P. J.; Swanston, C.; Torn, M.; Trumbore, S. E.; Riley, W.; Kelly, R.

    2010-10-01

    The ForCent forest ecosystem model was developed by making major revisions to the DayCent model including: (1) adding a humus organic pool, (2) incorporating a detailed root growth model, and (3) including plant phenological growth patterns. Observed plant production and soil respiration data from 1993 to 2000 were used to demonstrate that the ForCent model could accurately simulate ecosystem carbon dynamics for the Oak Ridge National Laboratory deciduous forest. A comparison of ForCent versus observed soil pool 14C signature (Δ14C) data from the Enriched Background Isotope Study 14C experiment (1999-2006) shows that the model correctly simulates the temporal dynamics of the 14C label as it moved from the surface litter and roots into the mineral soil organic matter pools. ForCent model validation was performed by comparing the observed Enriched Background Isotope Study experimental data with simulated live and dead root biomass Δ14C data, and with soil respiration Δ14C (mineral soil, humus layer, leaf litter layer, and total soil respiration) data. Results show that the model correctly simulates the impact of the Enriched Background Isotope Study 14C experimental treatments on soil respiration Δ14C values for the different soil organic matter pools. Model results suggest that a two-pool root growth model correctly represents root carbon dynamics and inputs to the soil. The model fitting process and sensitivity analysis exposed uncertainty in our estimates of the fraction of mineral soil in the slow and passive pools, dissolved organic carbon flux out of the litter layer into the mineral soil, and mixing of the humus layer into the mineral soil layer.

  16. Model Adaptation for Prognostics in a Particle Filtering Framework

    NASA Technical Reports Server (NTRS)

    Saha, Bhaskar; Goebel, Kai Frank

    2011-01-01

    One of the key motivating factors for using particle filters for prognostics is the ability to include model parameters as part of the state vector to be estimated. This performs model adaptation in conjunction with state tracking, and thus produces a tuned model that can be used for long-term predictions. This feature of particle filters works in large part because they are not subject to the "curse of dimensionality", i.e. the exponential growth of computational complexity with state dimension. However, in practice, this property holds for "well-designed" particle filters only as dimensionality increases. This paper explores the notion of wellness of design in the context of predicting remaining useful life for individual discharge cycles of Li-ion batteries. Prognostic metrics are used to analyze the tradeoff between different model designs and prediction performance. Results demonstrate how sensitivity analysis may be used to arrive at a well-designed prognostic model that can take advantage of the model adaptation properties of a particle filter.
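
    A minimal sketch of the parameter-augmentation idea is given below: an unknown decay parameter of a toy discharge model is appended to each particle's state and adapted jointly with the state by a bootstrap particle filter. The model, noise levels, and resampling scheme are illustrative assumptions, not the paper's battery model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-parameter "discharge" model: v_{k+1} = (1 - k_d) * v_k + noise.
# The decay rate k_d is unknown, so it is appended to the particle state and
# adapted jointly with the voltage (the model-adaptation idea).
true_kd, v0, steps = 0.02, 4.0, 100
truth = [v0]
for _ in range(steps):
    truth.append(truth[-1] * (1 - true_kd))
obs = np.array(truth) + rng.normal(0, 0.02, steps + 1)

N = 500
particles = np.column_stack([
    np.full(N, v0),                       # state: voltage
    rng.uniform(0.001, 0.05, N),          # parameter: decay rate (augmented)
])
for z in obs[1:]:
    particles[:, 0] *= (1 - particles[:, 1])          # propagate state
    particles[:, 0] += rng.normal(0, 0.01, N)
    particles[:, 1] += rng.normal(0, 1e-4, N)          # slow parameter random walk
    w = np.exp(-0.5 * ((z - particles[:, 0]) / 0.02) ** 2)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)           # multinomial resampling (simplest choice)
    particles = particles[idx]

print("estimated decay parameter:", particles[:, 1].mean(), "(true:", true_kd, ")")
```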

  17. Integrated modeling of the GMT laser tomography adaptive optics system

    NASA Astrophysics Data System (ADS)

    Piatrou, Piotr

    2014-08-01

    Laser Tomography Adaptive Optics (LTAO) is one of the adaptive optics systems planned for the Giant Magellan Telescope (GMT). End-to-end simulation tools that are able to cope with the complexity and computational burden of the AO systems to be installed on the extremely large telescopes such as GMT prove to be an integral part of the GMT LTAO system development endeavors. SL95, the Fortran 95 Simulation Library, is one of the software tools successfully used for the LTAO system end-to-end simulations. The goal of the SL95 project is to provide a complete set of generic, richly parameterized mathematical models for key elements of the segmented telescope wavefront control systems including both active and adaptive optics as well as the models for atmospheric turbulence, extended light sources like Laser Guide Stars (LGS), light propagation engines and closed-loop controllers. The library is implemented as a hierarchical collection of classes capable of mutual interaction, which allows one to assemble complex wavefront control system configurations with multiple interacting control channels. In this paper we demonstrate the SL95 capabilities by building an integrated end-to-end model of the GMT LTAO system with 7 control channels: LGS tomography with Adaptive Secondary and on-instrument deformable mirrors, tip-tilt and vibration control, LGS stabilization, LGS focus control, truth sensor-based dynamic noncommon path aberration rejection, pupil position control, SLODAR-like embedded turbulence profiler. The rich parameterization of the SL95 classes allows one to build detailed error budgets by propagating through the system multiple errors and perturbations such as turbulence-, telescope-, telescope misalignment-, segment phasing error-, non-common path-induced aberrations, sensor noises, deformable mirror-to-sensor mis-registration, vibration, temporal errors, etc. We will present a short description of the SL95 architecture, as well as the sample GMT LTAO system simulation

  18. Supergravity background of λ-deformed model for AdS2 × S2 supercoset

    NASA Astrophysics Data System (ADS)

    Borsato, R.; Tseytlin, A. A.; Wulff, L.

    2016-04-01

    Starting with the F̂/G supercoset model corresponding to the AdSn × Sn superstring, one can define the λ-model of arXiv:1409.1538 either as a deformation of the F̂/F̂ gauged WZW model or as an integrable one-parameter generalisation of the non-abelian T-dual of the AdSn × Sn superstring sigma model with respect to the whole supergroup F̂. Here we consider the case of n = 2 and find the explicit form of the 4d target space background for the λ-model for the PSU(1,1|2)/SO(1,1) × SO(2) supercoset. We show that this background represents a solution of type IIB 10d supergravity compactified on a 6-torus with only the metric, dilaton Φ and the RR 5-form (represented by a 2-form F in 4d) being non-trivial. This implies that the λ-model is Weyl invariant at the quantum level and thus defines a consistent superstring sigma model. The supergravity solution we find is different from the one in arXiv:1410.1886, which should correspond to a version of the λ-model where only the bosonic subgroup of F̂ is gauged. Still, the two solutions have the equivalent scaling limit of arXiv:1504.07213, leading to the isometric background for the metric and e^Φ F which is related to the η-deformed AdS2 × S2 sigma model of arXiv:1309.5850. Similar results are expected in the AdS3 × S3 and AdS5 × S5 cases.

  19. Stochastic background of relic gravitons in a bouncing quantum cosmological model

    SciTech Connect

    Bessada, Dennis; Pinto-Neto, Nelson; Siffert, Beatriz B.; Miranda, Oswaldo D. E-mail: beatriz@if.ufrj.br E-mail: oswaldo@das.inpe.br

    2012-11-01

    The spectrum and amplitude of the stochastic background of relic gravitons produced in a bouncing universe is calculated. The matter content of the model consists of dust and radiation fluids, and the bounce occurs due to quantum cosmological effects when the universe approaches the classical singularity in the contracting phase. The resulting amplitude is very small and it cannot be observed by any present and near future gravitational wave detector. Hence, as in the ekpyrotic model, any observation of these relic gravitons will rule out this type of quantum cosmological bouncing model.

  20. Data Assimilation in the ADAPT Photospheric Flux Transport Model

    DOE PAGESBeta

    Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick

    2015-03-17

    Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
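
    For readers unfamiliar with ensemble assimilation, the sketch below shows a plain perturbed-observation EnKF analysis step (a simpler, global relative of the LETKF used in ADAPT); the state and observation sizes are arbitrary illustrations, not the ADAPT configuration.

```python
import numpy as np

def enkf_update(E, y, H, R, rng):
    """Perturbed-observation EnKF analysis step (global, no localization).
    E: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
    n_ens = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)            # ensemble anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_ens - 1) + R               # innovation covariance
    P_xy = A @ HA.T / (n_ens - 1)                    # state-observation cross covariance
    K = np.linalg.solve(P_yy.T, P_xy.T).T            # Kalman gain K = P_xy P_yy^{-1}
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return E + K @ (Y - H @ E)                       # analysis ensemble

# illustrative sizes only
rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 10, 20
E = rng.normal(size=(n_state, n_ens))                # forecast ensemble
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 5)] = 1.0  # observe every 5th state element
R = 0.1 * np.eye(n_obs)
y = rng.normal(size=n_obs)                           # synthetic observations
print(enkf_update(E, y, H, R, rng).shape)            # (50, 20) analysis ensemble
```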

  1. Numerical modeling of seismic waves using frequency-adaptive meshes

    NASA Astrophysics Data System (ADS)

    Hu, Jinyin; Jia, Xiaofeng

    2016-08-01

    An improved modeling algorithm using frequency-adaptive meshes is applied to meet the computational requirements of all seismic frequency components. It automatically adopts coarse meshes for low-frequency computations and fine meshes for high-frequency computations. The grid intervals are calculated adaptively from a smooth function that makes the grid size inversely proportional to frequency. In regular grid-based methods, a uniform or non-uniform mesh is used for frequency-domain wave propagators and it is fixed for all frequencies. A too coarse mesh results in inaccurate high-frequency wavefields and unacceptable numerical dispersion; on the other hand, an overly fine mesh may cause storage and computational overburdens as well as invalid propagation angles of low-frequency wavefields. Experiments on the Padé generalized screen propagator indicate that the adaptive mesh effectively overcomes these drawbacks of regular fixed-mesh methods, thus accurately computing the wavefield and its propagation angle in a wide frequency band. Several synthetic examples also demonstrate its feasibility for seismic modeling and migration.
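
    The central ingredient, a grid interval that varies smoothly and inversely with frequency, can be illustrated in a few lines; the velocity, points-per-wavelength, and clipping bounds below are assumptions rather than the paper's actual function.

```python
import numpy as np

def adaptive_spacing(freq_hz, v_min=1500.0, points_per_wavelength=8.0,
                     dx_min=5.0, dx_max=50.0):
    """Grid interval chosen inversely proportional to frequency
    (dx ~ shortest wavelength / points-per-wavelength), clipped to a working
    range. All constants here are illustrative assumptions."""
    wavelength = v_min / np.asarray(freq_hz, dtype=float)
    return np.clip(wavelength / points_per_wavelength, dx_min, dx_max)

freqs = np.array([2.0, 5.0, 10.0, 20.0, 40.0])     # Hz
print(dict(zip(freqs, adaptive_spacing(freqs))))   # coarser grids at low frequency
```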

  2. The Pattern of Neutral Molecular Variation under the Background Selection Model

    PubMed Central

    Charlesworth, D.; Charlesworth, B.; Morgan, M. T.

    1995-01-01

    Stochastic simulations of the infinite sites model were used to study the behavior of genetic diversity at a neutral locus in a genomic region without recombination, but subject to selection against deleterious alleles maintained by recurrent mutation (background selection). In large populations, the effect of background selection on the number of segregating sites approaches the effect on nucleotide site diversity, i.e., the reduction in genetic variability caused by background selection resembles that caused by a simple reduction in effective population size. We examined, by coalescence-based methods, the power of several tests for the departure from neutral expectation of the frequency spectra of alleles in samples from randomly mating populations (Tajima's, Fu and Li's, and Watterson's tests). All of the tests have low power unless the selection against mutant alleles is extremely weak. In Drosophila, significant Tajima's tests are usually not obtained with empirical data sets from loci in genomic regions with restricted recombination frequencies and that exhibit low genetic diversity. This is consistent with the operation of background selection as opposed to selective sweeps. It remains to be decided whether background selection is sufficient to explain the observed extent of reduction in diversity in regions of restricted recombination. PMID:8601499
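
    Tajima's test, mentioned above, compares the mean pairwise diversity with the diversity implied by the number of segregating sites. A compact version of the standard statistic (not the authors' simulation code) is sketched below; the example counts are invented.

```python
import numpy as np

def tajimas_d(n, S, pi):
    """Tajima's D from sample size n, number of segregating sites S,
    and mean pairwise nucleotide diversity pi (standard formulation)."""
    i = np.arange(1, n)
    a1, a2 = (1.0 / i).sum(), (1.0 / i**2).sum()
    b1 = (n + 1) / (3.0 * (n - 1))
    b2 = 2.0 * (n**2 + n + 3) / (9.0 * n * (n - 1))
    c1 = b1 - 1.0 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1**2
    e1, e2 = c1 / a1, c2 / (a1**2 + a2)
    return (pi - S / a1) / np.sqrt(e1 * S + e2 * S * (S - 1))

# illustrative numbers only: 20 sequences, 16 segregating sites, pi = 3.1
print(tajimas_d(20, 16, 3.1))
```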

  3. CMAQ (Community Multi-Scale Air Quality) atmospheric distribution model adaptation to region of Hungary

    NASA Astrophysics Data System (ADS)

    Lázár, Dóra; Weidinger, Tamás

    2016-04-01

    For our days, it has become important to measure and predict the concentration of harmful atmospheric pollutants such as dust, aerosol particles of different size ranges, nitrogen compounds, and ozone. The Department of Meteorology at Eötvös Loránd University has been applying the WRF (Weather Research and Forecasting) model several years ago, which is suitable for weather forecasting tasks and provides input data for various environmental models (e.g. DNDC). By adapting the CMAQ (Community Multi-scale Air Quality) model we have designed a combined ambient air-meteorological model (WRF-CMAQ). In this research it is important to apply different emission databases and a background model describing the initial distribution of the pollutant. We used SMOKE (Sparse Matrix Operator Kernel Emissions) model for construction emission dataset from EMEP (European Monitoring and Evaluation Programme) inventories and GEOS-Chem model for initial and boundary conditions. Our model settings were CMAQ CB05 (Carbon Bond 2005) chemical mechanism with 108 x 108 km, 36 x 36 km and 12 x 12 km grids for regions of Europe, the Carpathian Basin and Hungary respectively. i) The structure of the model system, ii) a case study for Carpathian Basin (an anticyclonic weather situation at 21th September 2012) are presented. iii) Verification of ozone forecast has been provided based on the measurements of background air pollution stations. iv) Effects of model attributes (f.e. transition time, emission dataset, parameterizations) for the ozone forecast in Hungary are also investigated.

  4. Comparison of wavefront sensor models for simulation of adaptive optics.

    PubMed

    Wu, Zhiwen; Enmark, Anita; Owner-Petersen, Mette; Andersen, Torben

    2009-10-26

    The new generation of extremely large telescopes will have adaptive optics. Due to the complexity and cost of such systems, it is important to simulate their performance before construction. Most systems planned will have Shack-Hartmann wavefront sensors. Different mathematical models are available for simulation of such wavefront sensors. The choice of wavefront sensor model strongly influences computation time and simulation accuracy. We have studied the influence of three wavefront sensor models on performance calculations for a generic, adaptive optics (AO) system designed for K-band operation of a 42 m telescope. The performance of this AO system has been investigated both for reduced wavelengths and for reduced r(0) in the K band. The telescope AO system was designed for K-band operation, that is both the subaperture size and the actuator pitch were matched to a fixed value of r(0) in the K-band. We find that under certain conditions, such as investigating limiting guide star magnitude for large Strehl-ratios, a full model based on Fraunhofer propagation to the subimages is significantly more accurate. It does however require long computation times. The shortcomings of simpler models based on either direct use of average wavefront tilt over the subapertures for actuator control, or use of the average tilt to move a precalculated point spread function in the subimages are most pronounced for studies of system limitations to operating parameter variations. In the long run, efficient parallelization techniques may be developed to overcome the problem. PMID:19997286

  5. Adaptive optics sky coverage modeling for extremely large telescopes

    NASA Astrophysics Data System (ADS)

    Clare, Richard M.; Ellerbroek, Brent L.; Herriot, Glen; Véran, Jean-Pierre

    2006-12-01

    A Monte Carlo sky coverage model for laser guide star adaptive optics systems was proposed by Clare and Ellerbroek [J. Opt. Soc. Am. A 23, 418 (2006)]. We refine the model to include (i) natural guide star (NGS) statistics using published star count models, (ii) noise on the NGS measurements, (iii) the effect of telescope wind shake, (iv) a model for how the Strehl and hence NGS wavefront sensor measurement noise varies across the field, (v) the focus error due to imperfectly tracking the range to the sodium layer, (vi) the mechanical bandwidths of the tip-tilt (TT) stage and deformable mirror actuators, and (vii) temporal filtering of the NGS measurements to balance errors due to noise and servo lag. From this model, we are able to generate a TT error budget for the Thirty Meter Telescope facility narrow-field infrared adaptive optics system (NFIRAOS) and perform several design trade studies. With the current NFIRAOS design, the median TT error at the galactic pole with median seeing is calculated to be 65 nm or 1.8 mas rms.

  6. Adaptive optics sky coverage modeling for extremely large telescopes.

    PubMed

    Clare, Richard M; Ellerbroek, Brent L; Herriot, Glen; Véran, Jean-Pierre

    2006-12-10

    A Monte Carlo sky coverage model for laser guide star adaptive optics systems was proposed by Clare and Ellerbroek [J. Opt. Soc. Am. A 23, 418 (2006)]. We refine the model to include (i) natural guide star (NGS) statistics using published star count models, (ii) noise on the NGS measurements, (iii) the effect of telescope wind shake, (iv) a model for how the Strehl and hence NGS wavefront sensor measurement noise varies across the field, (v) the focus error due to imperfectly tracking the range to the sodium layer, (vi) the mechanical bandwidths of the tip-tilt (TT) stage and deformable mirror actuators, and (vii) temporal filtering of the NGS measurements to balance errors due to noise and servo lag. From this model, we are able to generate a TT error budget for the Thirty Meter Telescope facility narrow-field infrared adaptive optics system (NFIRAOS) and perform several design trade studies. With the current NFIRAOS design, the median TT error at the galactic pole with median seeing is calculated to be 65 nm or 1.8 mas rms. PMID:17119597

  7. Modeling scramjet combustor flowfields with a grid adaptation scheme

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Singh, D. J.

    1994-01-01

    The accurate description of flow features associated with the normal injection of fuel into supersonic primary flows is essential in the design of efficient engines for hypervelocity aerospace vehicles. The flow features in such injections are complex, with multiple interactions between shocks and between shocks and boundary layers. Numerical studies of perpendicular sonic N2 injection and mixing in a Mach 3.8 scramjet combustor environment are discussed. A dynamic grid adaptation procedure based on the equilibration of a spring-mass system is employed to enhance the description of the complicated flow features. Numerical results are compared with experimental measurements and indicate that the adaptation procedure enhances the capability of the modeling procedure to describe the flow features associated with scramjet combustor components.
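
    The spring-analogy adaptation can be illustrated in one dimension: each grid interval acts as a spring whose stiffness follows a solution-based weight, and interior nodes are relaxed to equilibrium so that points cluster where the weight is large. The weight function and iteration count below are arbitrary choices, not the study's actual scheme.

```python
import numpy as np

def spring_adapt(x, weight, n_iter=200):
    """Relax interior nodes of a 1D grid toward the equilibrium of a spring
    system whose stiffnesses are given by `weight` evaluated at interval
    midpoints. Endpoints stay fixed."""
    x = x.copy()
    for _ in range(n_iter):
        mid = 0.5 * (x[:-1] + x[1:])
        k = weight(mid)                      # spring stiffness per interval
        # equilibrium of node i between springs (i-1,i) and (i,i+1)
        x[1:-1] = (k[:-1] * x[:-2] + k[1:] * x[2:]) / (k[:-1] + k[1:])
    return x

# cluster points near x = 0.5, mimicking a shock or shear layer
weight = lambda x: 1.0 + 20.0 * np.exp(-((x - 0.5) / 0.05) ** 2)
x0 = np.linspace(0.0, 1.0, 41)
print(np.diff(spring_adapt(x0, weight)).min())   # smallest interval ends up near 0.5
```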

  8. Particle systems for adaptive, isotropic meshing of CAD models

    PubMed Central

    Levine, Joshua A.; Whitaker, Ross T.

    2012-01-01

    We present a particle-based approach for generating adaptive triangular surface and tetrahedral volume meshes from computer-aided design models. Input shapes are treated as a collection of smooth, parametric surface patches that can meet non-smoothly on boundaries. Our approach uses a hierarchical sampling scheme that places particles on features in order of increasing dimensionality. These particles reach a good distribution by minimizing an energy computed in 3D world space, with movements occurring in the parametric space of each surface patch. Rather than using a pre-computed measure of feature size, our system automatically adapts to both curvature as well as a notion of topological separation. It also enforces a measure of smoothness on these constraints to construct a sizing field that acts as a proxy to piecewise-smooth feature size. We evaluate our technique with comparisons against other popular triangular meshing techniques for this domain. PMID:23162181

  9. Regional and global modeling estimates of policy relevant background ozone over the United States

    NASA Astrophysics Data System (ADS)

    Emery, Christopher; Jung, Jaegun; Downey, Nicole; Johnson, Jeremiah; Jimenez, Michele; Yarwood, Greg; Morris, Ralph

    2012-02-01

    Policy Relevant Background (PRB) ozone, as defined by the US Environmental Protection Agency (EPA), refers to ozone concentrations that would occur in the absence of all North American anthropogenic emissions. PRB enters into the calculation of health risk benefits, and as the US ozone standard approaches background levels, PRB is increasingly important in determining the feasibility and cost of compliance. As PRB is a hypothetical construct, modeling is a necessary tool. Since 2006 EPA has relied on global modeling to establish PRB for their regulatory analyses. Recent assessments with higher resolution global models exhibit improved agreement with remote observations and modest upward shifts in PRB estimates. This paper shifts the paradigm to a regional model (CAMx) run at 12 km resolution, for which North American boundary conditions were provided by a low-resolution version of the GEOS-Chem global model. We conducted a comprehensive model inter-comparison, from which we elucidate differences in predictive performance against ozone observations and differences in temporal and spatial background variability over the US. In general, CAMx performed better in replicating observations at remote monitoring sites, and performance remained better at higher concentrations. While spring and summer mean PRB predicted by GEOS-Chem ranged 20-45 ppb, CAMx predicted PRB ranged 25-50 ppb and reached well over 60 ppb in the west due to event-oriented phenomena such as stratospheric intrusion and wildfires. CAMx showed a higher correlation between modeled PRB and total observed ozone, which is significant for health risk assessments. A case study during April 2006 suggests that stratospheric exchange of ozone is underestimated in both models on an event basis. We conclude that wildfires, lightning NOx and stratospheric intrusions contribute a significant level of uncertainty in estimating PRB, and that PRB will require careful consideration in the ozone standard setting process.

  10. Framework for dynamic background modeling and shadow suppression for moving object segmentation in complex wavelet domain

    NASA Astrophysics Data System (ADS)

    Kushwaha, Alok Kumar Singh; Srivastava, Rajeev

    2015-09-01

    Moving object segmentation using change detection in the wavelet domain under continuous variations of lighting conditions is a challenging problem in video surveillance systems. Several methods have been proposed in the literature for wavelet-domain change detection for moving object segmentation with static backgrounds, but the problem has not been addressed effectively for dynamic background changes. The methods proposed in the literature suffer from various problems, such as ghostlike appearance, object shadows, and noise. To deal with these issues, a framework for dynamic background modeling and shadow suppression under rapidly changing illumination conditions for moving object segmentation in the complex wavelet domain is proposed. The proposed method consists of eight steps applied to the given video frames, which include wavelet decomposition of each frame using the complex wavelet transform; change detection on the detail coefficients (LH, HL, and HH); improved Gaussian mixture-based dynamic background modeling on the approximation coefficients (LL subband); cast shadow suppression; soft thresholding for noise removal; strong edge detection; inverse wavelet transformation for reconstruction; and finally a closing morphology operator. A comparative analysis of the proposed method is presented both qualitatively and quantitatively with other standard methods available in the literature for six datasets in terms of various performance measures. Experimental results demonstrate the efficacy of the proposed method.
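
    A heavily simplified sketch of per-subband background modeling is given below: the LL band is approximated by 2x2 block averaging (a stand-in for the complex wavelet transform) and a single running Gaussian per coefficient replaces the improved Gaussian mixture model of the paper; the thresholds, learning rate, and synthetic frames are assumptions.

```python
import numpy as np

def haar_ll(frame):
    """Approximation (LL) subband via 2x2 block averaging -- a crude stand-in
    for the complex wavelet transform used in the paper."""
    h, w = frame.shape
    f = frame[:h - h % 2, :w - w % 2].astype(float)
    return 0.25 * (f[0::2, 0::2] + f[1::2, 0::2] + f[0::2, 1::2] + f[1::2, 1::2])

class RunningGaussianBackground:
    """Single running Gaussian per LL coefficient (a much-simplified stand-in
    for the improved Gaussian mixture model)."""
    def __init__(self, first_ll, lr=0.05, k=2.5):
        self.mu = first_ll.copy()
        self.var = np.full_like(first_ll, 25.0)
        self.lr, self.k = lr, k

    def apply(self, ll):
        d = ll - self.mu
        fg = d**2 > (self.k**2) * self.var          # foreground mask in LL domain
        upd = ~fg                                    # update stats only for background
        self.mu[upd] += self.lr * d[upd]
        self.var[upd] += self.lr * (d[upd]**2 - self.var[upd])
        return fg

rng = np.random.default_rng(0)
frames = rng.normal(100, 3, size=(10, 64, 64))
frames[5:, 20:30, 20:30] += 60                       # synthetic moving object
bg = RunningGaussianBackground(haar_ll(frames[0]))
masks = [bg.apply(haar_ll(f)) for f in frames[1:]]
print(masks[-1].sum(), "foreground LL coefficients detected")
```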

  11. Prediction of conductivity by adaptive neuro-fuzzy model.

    PubMed

    Akbarzadeh, S; Arof, A K; Ramesh, S; Khanmirzaei, M H; Nor, R M

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference with basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The validation of the model was tested by four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that modeling with an adaptive neuro-fuzzy approach is powerful enough for the prediction of conductivity. PMID:24658582

  12. Prediction of Conductivity by Adaptive Neuro-Fuzzy Model

    PubMed Central

    Akbarzadeh, S.; Arof, A. K.; Ramesh, S.; Khanmirzaei, M. H.; Nor, R. M.

    2014-01-01

    Electrochemical impedance spectroscopy (EIS) is a key method for characterizing the ionic and electronic conductivity of materials. One of the requirements of this technique is a model to forecast conductivity in preliminary experiments. The aim of this paper is to examine the prediction of conductivity by neuro-fuzzy inference with basic experimental factors such as temperature, frequency, thickness of the film and weight percentage of salt. In order to provide the optimal sets of fuzzy logic rule bases, the grid partition fuzzy inference method was applied. The validation of the model was tested by four random data sets. To evaluate the validity of the model, eleven statistical features were examined. Statistical analysis of the results clearly shows that modeling with an adaptive neuro-fuzzy approach is powerful enough for the prediction of conductivity. PMID:24658582

  13. A Comparison of Three Programming Models for Adaptive Applications

    NASA Technical Reports Server (NTRS)

    Shan, Hong-Zhang; Singh, Jaswinder Pal; Oliker, Leonid; Biswa, Rupak; Kwak, Dochan (Technical Monitor)

    2000-01-01

    We study the performance and programming effort for two major classes of adaptive applications under three leading parallel programming models. We find that all three models can achieve scalable performance on the state-of-the-art multiprocessor machines. The basic parallel algorithms needed for different programming models to deliver their best performance are similar, but the implementations differ greatly, far beyond the fact of using explicit messages versus implicit loads/stores. Compared with MPI and SHMEM, CC-SAS (cache-coherent shared address space) provides substantial ease of programming at the conceptual and program orchestration level, which often leads to the performance gain. However it may also suffer from the poor spatial locality of physically distributed shared data on large number of processors. Our CC-SAS implementation of the PARMETIS partitioner itself runs faster than in the other two programming models, and generates more balanced result for our application.

  14. The method of narrow-band audio classification based on universal noise background model

    NASA Astrophysics Data System (ADS)

    Rui, Rui; Bao, Chang-chun

    2013-03-01

    Audio classification is the basis of content-based audio analysis and retrieval. Conventional classification methods mainly depend on feature extraction over the whole audio clip, which increases the time required for classification. An approach for classifying a narrow-band audio stream based on frame-level feature extraction is presented in this paper. The audio signals are divided into speech, instrumental music, song with accompaniment and noise using Gaussian mixture models (GMMs). To cope with changing acoustic environments, a universal noise background model (UNBM) for white noise, street noise, factory noise and car interior noise is built. In addition, three feature schemes are considered to optimize feature selection. The experimental results show that the proposed algorithm achieves high accuracy for audio classification, especially under each of the noise backgrounds used, and keeps the classification time below one second.
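
    The classification step can be sketched as follows: each class is modeled by a GMM, a universal noise background GMM is trained on pooled noise types, and a clip is labeled by the class whose background-normalised frame log-likelihood is highest. The synthetic features below merely stand in for the frame-level features of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# synthetic stand-ins for frame-level features (e.g. MFCC-like vectors) per class
train = {
    "speech": rng.normal(0.0, 1.0, (500, 12)),
    "music":  rng.normal(3.0, 1.0, (500, 12)),
    "song":   rng.normal(-3.0, 1.0, (500, 12)),
}
# universal noise background model trained on pooled noise types
noise = np.vstack([rng.normal(m, 2.0, (250, 12)) for m in (6.0, -6.0, 8.0, -8.0)])

models = {name: GaussianMixture(n_components=4, random_state=0).fit(X)
          for name, X in train.items()}
unbm = GaussianMixture(n_components=8, random_state=0).fit(noise)

def classify(frames):
    """Label a clip by the class whose average frame log-likelihood,
    normalised by the universal noise background model, is highest."""
    bg = unbm.score(frames)                      # mean log-likelihood under the UNBM
    scores = {name: m.score(frames) - bg for name, m in models.items()}
    return max(scores, key=scores.get)

test_clip = rng.normal(3.0, 1.0, (200, 12))      # should look like "music"
print(classify(test_clip))
```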

  15. An input adaptive, pursuit tracking model of the human operator

    NASA Technical Reports Server (NTRS)

    Ware, J. R.

    1972-01-01

    Developed and evaluated is a simple model of the input adaptive behavior of the human operator (HO) in a pursuit tracking task in which the plant controlled consists of a pure gain. If it is assumed that the HO is approximately an optimal predictor using only position and velocity information, then there is a simple method of computing the values of the model parameters in terms of the autocorrelation function of the input signal. Experimental evidence indicates that the ability of the HO to use velocity information decreases with increasing signal velocity indicating that a biased estimator of the velocity weighting should be used. A suitable approximation is derived which has rapid convergence and low variance. The model thus derived is compared to actual subject transfer functions and is found to be in close agreement. In addition to tracking random processes the model can adapt to and track deterministic signals, such as sine waves, up to approximately the frequency at which human operators begin to track precognitively.

  16. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network scale-free non-hierarchical networks with well defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902

  17. Prequential Analysis of Complex Data with Adaptive Model Reselection†

    PubMed Central

    Clarke, Jennifer; Clarke, Bertrand

    2010-01-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance tradeoff in statistical modeling. PMID:20617104

  18. Model Minority Stereotyping, Perceived Discrimination, and Adjustment Among Adolescents from Asian American Backgrounds.

    PubMed

    Kiang, Lisa; Witkow, Melissa R; Thompson, Taylor L

    2016-07-01

    The model minority image is a common and pervasive stereotype that Asian American adolescents must navigate. Using multiwave data from 159 adolescents from Asian American backgrounds (mean age at initial recruitment = 15.03, SD = .92; 60 % female; 74 % US-born), the current study targeted unexplored aspects of the model minority experience in conjunction with more traditionally measured experiences of negative discrimination. When examining normative changes, perceptions of model minority stereotyping increased over the high school years while perceptions of discrimination decreased. Both experiences were not associated with each other, suggesting independent forms of social interactions. Model minority stereotyping generally promoted academic and socioemotional adjustment, whereas discrimination hindered outcomes. Moreover, in terms of academic adjustment, the model minority stereotype appears to protect against the detrimental effect of discrimination. Implications of the complex duality of adolescents' social interactions are discussed. PMID:26251100

  19. A multigrid integral equation method for large-scale models with inhomogeneous backgrounds

    NASA Astrophysics Data System (ADS)

    Endo, Masashi; Čuma, Martin; Zhdanov, Michael S.

    2008-12-01

    We present a multigrid integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) field computations in large-scale models with inhomogeneous background conductivity (IBC). This method combines the advantages of the iterative IBC IE method and the multigrid quasi-linear (MGQL) approximation. The new EM modelling method solves the corresponding systems of linear equations within the domains of anomalous conductivity, Da, and inhomogeneous background conductivity, Db, separately on coarse grids. The observed EM fields in the receivers are computed using grids with fine discretization. The developed MGQL IBC IE method can also be applied iteratively by taking into account the return effect of the anomalous field inside the domain of the background inhomogeneity Db, and vice versa. The iterative process described above is continued until we reach the required accuracy of the EM field calculations in both domains, Da and Db. The method was tested for modelling the marine controlled-source electromagnetic field for complex geoelectrical structures with hydrocarbon petroleum reservoirs and a rough sea-bottom bathymetry.

  20. The Role of Scale and Model Bias in ADAPT's Photospheric Estimation

    SciTech Connect

    Godinez Vazquez, Humberto C.; Hickmann, Kyle Scott; Arge, Charles Nicholas; Henney, Carl

    2015-05-20

    The Air Force Data Assimilative Photospheric flux Transport (ADAPT) model is a magnetic flux propagation model based on the Worden-Harvey (WH) model. ADAPT is used to provide global maps of the solar photospheric magnetic flux. A data assimilation method based on the Ensemble Kalman Filter (EnKF), a Monte Carlo approximation to Kalman filtering, is used in calculating the ADAPT models.

  1. A new technique for measuring aerosols with moonlight observations and a sky background model

    NASA Astrophysics Data System (ADS)

    Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie

    2014-05-01

    There have been ample studies of aerosols in urban, daylight conditions, but few of remote, nocturnal aerosols. We have developed a new technique for investigating such aerosols using our sky background model and astronomical observations. With a dedicated observing proposal we have successfully tested this technique for nocturnal, remote aerosol studies. This technique relies on three requirements: (a) a sky background model, (b) observations taken with scattered moonlight, and (c) spectrophotometric standard star observations for flux calibrations. The sky background model was developed for the European Southern Observatory and is optimized for the Very Large Telescope at Cerro Paranal in the Atacama desert in Chile. This is a remote location with almost no urban aerosols. It is well suited for studying remote background aerosols that are normally difficult to detect. Our sky background model has an uncertainty of around 20 percent and the scattered moonlight portion is even more accurate. The last two requirements are having astronomical observations with moonlight and of standard stars at different airmasses, all during the same night. We had a dedicated observing proposal at Cerro Paranal with the instrument X-Shooter to use as a case study for this method. X-Shooter is a medium resolution, echelle spectrograph which covers the wavelengths from 0.3 to 2.5 micrometers. We observed plain sky at six different angular distances (7, 13, 20, 45, 90, and 110 degrees) from the Moon for three different Moon phases (between full and half). Also, direct observations of spectrophotometric standard stars were taken at two different airmasses each night to measure the extinction curve via the Langley method. This is an ideal data set for testing this technique. The underlying assumption is that all components, other than the atmospheric conditions (specifically aerosols and airglow), can be calculated with the model for the given observing parameters. The scattered
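
    The Langley step underlying requirement (c) is simple to sketch: the logarithm of the measured standard-star flux falls linearly with airmass, so a straight-line fit yields the extinction optical depth and the top-of-atmosphere flux. The data below are synthetic; this is the textbook method, not the authors' pipeline.

```python
import numpy as np

def langley_fit(airmass, flux):
    """Fit ln(flux) = ln(flux_0) - tau * airmass; returns (tau, flux_0)."""
    slope, intercept = np.polyfit(airmass, np.log(flux), 1)
    return -slope, np.exp(intercept)

# synthetic standard-star observations at a few airmasses
true_tau, true_f0 = 0.12, 1.0e4
X = np.array([1.05, 1.4, 1.9, 2.3])
rng = np.random.default_rng(2)
F = true_f0 * np.exp(-true_tau * X) * rng.normal(1.0, 0.01, X.size)
tau, f0 = langley_fit(X, F)
print(f"extinction optical depth ~ {tau:.3f}, top-of-atmosphere flux ~ {f0:.0f}")
```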

  2. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    SciTech Connect

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade-off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  3. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE PAGESBeta

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition systems (SR) utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of the identified systems, the posterior probabilities and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls’ Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off reduction in computation with a corresponding degradation of equal error rate (EER). As an example, we also reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  4. Constraints on universe models with cosmological constant from cosmic microwave background anisotropy

    SciTech Connect

    Sugiyama, Naoshi; Gouda, Naoteru; Sasaki, Misao Kyoto Univ., Uji )

    1990-12-01

    Thorough numerical calculations of the fluctuations in the cosmic microwave background radiation using the gauge-invariant formalism are carried out for various cosmological models with the cosmological constant. It is shown that a spatially flat cold dark matter-dominated universe of Omega(0) = 0.1 to about 0.4 and H(0) = 50 to about 100 km/s per Mpc with adiabatic perturbations has the possibility of giving the final answer to cosmological puzzles. It is also found that the introduction of the cosmological constant may revive pure baryonic universe models. 33 refs.

  5. Recent Progress in Modelling the RXTE Proportional Counter Array Instrumental Background

    NASA Astrophysics Data System (ADS)

    Jahoda, K.; Strohmayer, T. E.; Smith, D. A.; Stark, M. J.

    1999-04-01

    We present recent progress in the modelling of the instrumental background for the RXTE Proportional Counter Array. Unmodelled systematic errors for faint sources are now <= 0.2 ct/sec/3 PCU in the 2-10 keV band for data selected from the front layer. We present the status of our search for additional correlations. We also present extensions of the times and conditions under which the L7 model is applicable: to early mission times (prior to April 1996) and to sources as bright as ~ 3000 count/sec/detector (comparable to the Crab).

  6. Modelling static 3-D spatial background error covariances - the effect of vertical and horizontal transform order

    NASA Astrophysics Data System (ADS)

    Wlasak, M. A.; Cullen, M. J. P.

    2014-06-01

    A major difference in the formulation of the univariate part of static background error covariance models for use in global operational 4DVAR arises from the order in which the horizontal and vertical transforms are applied. This is because the atmosphere is non-separable, with large horizontal scales generally tied to large vertical scales and small horizontal scales tied to small vertical scales. Also, horizontal length scales increase dramatically as one enters the stratosphere. A study is presented which evaluates the strengths and weaknesses of each approach with the Met Office Unified Model. It is shown that if the vertical transform is applied as a function of horizontal wavenumber then the horizontal globally-averaged variance and the homogeneous, isotropic length scale on each model level for each control variable of the training data are preserved by the covariance model. In addition the wind variance and associated length scales are preserved as the scheme preserves the variances and length scales of horizontal derivatives. If the vertical transform is applied in physical space, it is possible to make it a function of latitude at the cost of not preserving the variances and length scales of the horizontal derivatives. Summer and winter global 4DVAR trials have been run with both background error covariance models. A clear benefit is seen in the fit to observations when the vertical transform is in spectral space and is a function of total horizontal wavenumber.

  7. Model of adaptive temporal development of structured finite systems

    NASA Astrophysics Data System (ADS)

    Patera, Jiri; Shaw, Gordon L.; Slansky, Richard; Leng, Xiaodan

    1989-07-01

    The weight systems of level-zero representations of affine Kac-Moody algebras provide an appropriate kinematical framework for studying structured finite systems with adaptive temporal development. Much of the structure is determined by Lie algebra theory, so it is possible to restrict greatly the connection space and analytic results are possible. The time development of these systems often evolves to cyclic temporal-spatial patterns, depending on the definition of the dynamics. The purpose of this paper is to set up the mathematical formalism for this ``memory in Lie algebras'' class of models. An illustration is used to show the kinds of complex behavior that occur in simple cases.

  8. Model reference adaptive attitude control of spacecraft using reaction wheels

    NASA Technical Reports Server (NTRS)

    Singh, Sahjendra N.

    1986-01-01

    A nonlinear model reference adaptive control law for large angle rotational maneuvers of spacecraft using reaction wheels in the presence of uncertainty is presented. The derivation of the control law does not require any information on the values of the system parameters and the disturbance torques acting on the spacecraft. The controller includes a dynamic system in the feedback path. The control law is a nonlinear function of the attitude error, the rate of the attitude error, and the compensator state. Simulation results are presented to show that large angle rotational maneuvers can be performed in spite of the uncertainty in the system.
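
    As an illustration of the model-reference adaptive idea (not the nonlinear reaction-wheel law of this abstract), the sketch below applies the classic MIT-rule gain adaptation to a scalar first-order plant; the plant parameters, reference model and adaptation gain are all hypothetical choices.

      # Minimal MIT-rule model-reference adaptive control for a scalar first-order
      # plant; an illustration of the MRAC idea only, not the spacecraft control
      # law of the abstract. All numerical values are hypothetical.
      dt, T = 0.01, 60.0
      a, b = 1.0, 0.5        # "unknown" plant: dy/dt = -a*y + b*u
      bm = 2.0               # reference model: dym/dt = -a*ym + bm*r (same pole)
      gamma = 0.5            # adaptation gain

      theta, y, ym = 0.0, 0.0, 0.0        # adjustable gain in u = theta * r
      for k in range(int(T / dt)):
          t = k * dt
          r = 1.0 if (t % 20.0) < 10.0 else -1.0    # square-wave reference command
          u = theta * r
          e = y - ym                                # model-following error
          theta += -gamma * e * ym * dt             # MIT-rule update
          y += (-a * y + b * u) * dt                # plant step (forward Euler)
          ym += (-a * ym + bm * r) * dt             # reference-model step

      print("adapted gain:", round(theta, 2), " ideal value bm/b =", bm / b)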

  9. Model-free adaptive control of advanced power plants

    SciTech Connect

    Cheng, George Shu-Xing; Mulkey, Steven L.; Wang, Qiang

    2015-08-18

    A novel 3-Input-3-Output (3×3) Model-Free Adaptive (MFA) controller with a set of artificial neural networks as part of the controller is introduced. A 3×3 MFA control system using the inventive 3×3 MFA controller is described to control key process variables including Power, Steam Throttle Pressure, and Steam Temperature of boiler-turbine-generator (BTG) units in conventional and advanced power plants. Those advanced power plants may comprise Once-Through Supercritical (OTSC) Boilers, Circulating Fluidized-Bed (CFB) Boilers, and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  10. Adaptive modeling of compression hearing aids: Convergence and tracking issues

    NASA Astrophysics Data System (ADS)

    Parsa, Vijay; Jamieson, Donald

    2003-10-01

    Typical measurements of electroacoustic performance of hearing aids include frequency response, compression ratio, threshold and time constants, equivalent input noise, and total harmonic distortion. These measurements employ artificial test signals and do not relate well to perceptual indices of hearing aid performance. Speech-based electroacoustic measures provide means to quantify the real world performance of hearing aids and have been shown to correlate better with perceptual data. This paper investigates the application of the system identification paradigm for deriving the speech-based measures, where the hearing aid is modeled as a linear time-varying system and its response to speech stimuli is predicted using a linear adaptive filter. The performance of three adaptive filtering algorithms, viz. the Least Mean Square (LMS), Normalized LMS, and the Affine Projection Algorithm (APA), was investigated using simulated and real digital hearing aids. In particular, the convergence and tracking behavior of these algorithms in modeling compression hearing aids was thoroughly investigated for a range of compression ratio and threshold parameters, and attack and release time constants. Our results show that the NLMS and APA algorithms are capable of modeling digital hearing aids under a variety of compression conditions, and are suitable for deriving speech-based metrics of hearing aid performance.
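
    A minimal sketch of the NLMS variant mentioned above, identifying a toy "hearing aid" (here just a short FIR system) from its input and output; the filter length, step size and synthetic signals are illustrative assumptions, not the study's test conditions.

      # NLMS identification of a toy "hearing aid" (a short FIR system) from its
      # input/output, the kind of adaptive modelling compared in the abstract.
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.standard_normal(20000)               # stand-in for the speech input
      h_true = np.array([0.6, -0.3, 0.15, 0.05])   # toy device impulse response
      d = np.convolve(x, h_true)[: len(x)]         # observed device output

      L, mu, eps = 8, 0.5, 1e-6                    # taps, step size, regulariser
      w = np.zeros(L)
      xbuf = np.zeros(L)
      for n in range(len(x)):
          xbuf = np.roll(xbuf, 1)
          xbuf[0] = x[n]
          e = d[n] - w @ xbuf                      # prediction error
          w += mu * e * xbuf / (eps + xbuf @ xbuf) # NLMS update
      print("misalignment (dB):",
            10 * np.log10(np.sum((w[:4] - h_true) ** 2) / np.sum(h_true ** 2)))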

  11. Adaptive Modeling, Engineering Analysis and Design of Advanced Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek; Hsu, Su-Yuen; Mason, Brian H.; Hicks, Mike D.; Jones, William T.; Sleight, David W.; Chun, Julio; Spangler, Jan L.; Kamhawi, Hilmi; Dahl, Jorgen L.

    2006-01-01

    This paper describes initial progress towards the development and enhancement of a set of software tools for rapid adaptive modeling, and conceptual design of advanced aerospace vehicle concepts. With demanding structural and aerodynamic performance requirements, these high fidelity geometry based modeling tools are essential for rapid and accurate engineering analysis at the early concept development stage. This adaptive modeling tool was used for generating vehicle parametric geometry, outer mold line and detailed internal structural layout of wing, fuselage, skin, spars, ribs, control surfaces, frames, bulkheads, floors, etc., that facilitated rapid finite element analysis, sizing study and weight optimization. The high quality outer mold line enabled rapid aerodynamic analysis in order to provide reliable design data at critical flight conditions. Example applications to the structural design of a conventional aircraft and a high-altitude long-endurance vehicle configuration are presented. This work was performed under the Conceptual Design Shop sub-project within the Efficient Aerodynamic Shape and Integration project, under the former Vehicle Systems Program. The project objective was to design and assess unconventional atmospheric vehicle concepts efficiently and confidently. The implementation may also dramatically facilitate physics-based systems analysis for the NASA Fundamental Aeronautics Mission. In addition to providing technology for design and development of unconventional aircraft, the techniques for generation of accurate geometry and internal sub-structure and the automated interface with the high fidelity analysis codes could also be applied towards the design of vehicles for the NASA Exploration and Space Science Mission projects.

  12. Direct model reference adaptive control of a flexible robotic manipulator

    NASA Technical Reports Server (NTRS)

    Meldrum, D. R.

    1985-01-01

    Quick, precise control of a flexible manipulator in a space environment is essential for future Space Station repair and satellite servicing. Numerous control algorithms have proven successful in controlling rigid manipulators with colocated sensors and actuators; however, few have been tested on a flexible manipulator with noncolocated sensors and actuators. In this thesis, a model reference adaptive control (MRAC) scheme based on command generator tracker theory is designed for a flexible manipulator. Quicker, more precise tracking results are expected over nonadaptive control laws for this MRAC approach. Equations of motion in modal coordinates are derived for a single-link, flexible manipulator with an actuator at the pinned-end and a sensor at the free end. An MRAC is designed with the objective of controlling the torquing actuator so that the tip position follows a trajectory that is prescribed by the reference model. An appealing feature of this direct MRAC law is that it allows the reference model to have fewer states than the plant itself. Direct adaptive control also adjusts the controller parameters directly with knowledge of only the plant output and input signals.

  13. An adaptive multigrid model for hurricane track prediction

    NASA Technical Reports Server (NTRS)

    Fulton, Scott R.

    1993-01-01

    This paper describes a simple numerical model for hurricane track prediction which uses a multigrid method to adapt the model resolution as the vortex moves. The model is based on the modified barotropic vorticity equation, discretized in space by conservative finite differences and in time by a Runge-Kutta scheme. A multigrid method is used to solve an elliptic problem for the streamfunction at each time step. Nonuniform resolution is obtained by superimposing uniform grids of different spatial extent; these grids move with the vortex as it moves. Preliminary numerical results indicate that the local mesh refinement allows accurate prediction of the hurricane track with substantially less computer time than required on a single uniform grid.
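
    The elliptic streamfunction solve is the multigrid part of the scheme; as a hedged stand-in for the 2-D solver (not the paper's code), the sketch below runs a V-cycle with weighted-Jacobi smoothing on a 1-D Poisson problem.

      # One-dimensional Poisson V-cycle (-u'' = f, Dirichlet boundaries): weighted
      # Jacobi smoothing, full-weighting restriction, linear prolongation. A
      # stand-in for the 2-D streamfunction solve, not the hurricane model itself.
      import numpy as np

      def smooth(u, f, h, iters=3, omega=2.0 / 3.0):
          for _ in range(iters):
              u[1:-1] += omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1] - 2.0 * u[1:-1])
          return u

      def residual(u, f, h):
          r = np.zeros_like(u)
          r[1:-1] = f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / (h * h)
          return r

      def v_cycle(u, f, h):
          u = smooth(u, f, h)
          if len(u) <= 3:                      # coarsest level
              return u
          r = residual(u, f, h)
          r2 = r[::2].copy()                   # full-weighting restriction
          r2[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
          e2 = v_cycle(np.zeros_like(r2), r2, 2.0 * h)
          u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), e2)  # prolong
          return smooth(u, f, h)

      n = 129
      x = np.linspace(0.0, 1.0, n)
      f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)
      u = np.zeros(n)
      for _ in range(10):
          u = v_cycle(u, f, 1.0 / (n - 1))
      print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x))))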

  14. Extending the radial diffusion model of Falthammar to non-dipole background field

    SciTech Connect

    Cunningham, Gregory Scott

    2015-05-26

    A model for radial diffusion caused by electromagnetic disturbances was published by Falthammar (1965) using a two-parameter model of the disturbance perturbing a background dipole magnetic field. Schulz and Lanzerotti (1974) extended this model by recognizing the two-parameter perturbation as the leading (non-dipole) terms of the Mead-Williams magnetic field model. They emphasized that the magnetic perturbation in such a model induces an electric field that can be calculated from the motion of field lines on which the particles are ‘frozen’. Roederer and Zhang (2014) describe how the field lines on which the particles are frozen can be calculated by tracing the unperturbed field lines from the minimum-B location to the ionospheric footpoint, and then tracing the perturbed field (which shares the same ionospheric footpoint due to the frozen-in condition) from the ionospheric footpoint back to a perturbed minimum-B location. The instantaneous change in Roederer L*, dL*/dt, can then be computed as the product (dL*/dphi)*(dphi/dt). dL*/dphi is linearly dependent on the perturbation parameters (to first order) and is obtained by computing the drift across L*-labeled perturbed field lines, while dphi/dt is related to the bounce-averaged gradient-curvature drift velocity. The advantage of assuming a dipole background magnetic field, as in these previous studies, is that the instantaneous dL*/dt can be computed analytically (with some approximations), as can the DLL that results from integrating dL*/dt over time and computing the expected value of (dL*)^2. The approach can also be applied to complex background magnetic field models like T89 or TS04, on top of which the small perturbations are added, but an analytical solution is not possible and so a numerical solution must be implemented. In this talk, I discuss our progress in implementing a numerical solution to the calculation of DL*L* using arbitrary background field models with simple electromagnetic
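
    For readability, the relations quoted above can be restated in standard notation (a hedged paraphrase; the explicit first-order expressions for dL*/dphi in terms of the perturbation parameters are given in the cited papers and are not reproduced here):

      \frac{dL^{*}}{dt} \;=\; \frac{\partial L^{*}}{\partial \varphi}\,\frac{d\varphi}{dt},
      \qquad
      D_{L^{*}L^{*}} \;=\; \frac{\bigl\langle (\Delta L^{*})^{2} \bigr\rangle}{2\,\tau},

    where d\varphi/dt is the bounce-averaged gradient-curvature drift rate and \Delta L^{*} is the change accumulated over a time \tau long compared with a drift period; the factor of 2 follows the usual one-dimensional diffusion convention and may differ from the normalization used in the talk.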

  15. Landau-Lifshitz magnetodynamics as a Hamilton model: Magnons in an instanton background

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Igor V.; Wang, Kang L.

    2010-07-01

    To take full advantage of the well-developed field-theoretic methods, Magnonics needs a yet-existing Lagrangian formulation. Here, we show that Landau-Lifshitz magnetodynamics is a member of the covariant-Schrödinger-equation family of Hamilton models and apply the covariant background method arriving at the Ginzburg-Landau Lagrangian formalism for magnons in an instanton background. Magnons appear to be nonrelativistic spinless bosons, which feel instantons as a gauge field and as a Bose condensate. Among the examples of the usefulness of the proposition is the recognition of the instanton-induced phase shifts in magnons as the Berry phase and the interpretation of the spin-transfer-torque generation as a ferromagnetic counterpart of the Josephson supercurrent.

  16. Fermionic backgrounds and condensation of supergravity fields in the type IIB matrix model

    SciTech Connect

    Iso, Satoshi; Terachi, Hidenori; Sugino, Fumihiko; Umetsu, Hiroshi

    2005-09-15

    In a previous paper [1] we constructed wave functions of a D-instanton and vertex operators in type IIB matrix model by expanding supersymmetric Wilson line operators. They describe couplings of a D-instanton and type IIB matrix model to the massless closed string fields, respectively, and form a multiplet of D=10 N=2 supersymmetries. In this paper we consider fermionic backgrounds and condensation of supergravity fields in IIB matrix model by using these wave functions. We start from the type IIB matrix model in a flat background whose matrix size is (N+1)x(N+1), or equivalently the effective action for (N+1) D-instantons. We then calculate an effective action for N D-instantons by integrating out 1 D-instanton (which we call a mean-field D-instanton) with an appropriate wave function and show that various terms can be induced corresponding to the choice of the wave functions. In particular, a Chern-Simons-like term is induced when the mean-field D-instanton has a wave function of the antisymmetric tensor field. A fuzzy sphere becomes a classical solution to the equation of motion for the effective action. We also give an interpretation of the above wave functions in the superstring theory side as overlaps of the D-instanton boundary state with the closed string massless states in the Green-Schwarz formalism.

  17. Oxidative DNA damage background estimated by a system model of base excision repair

    SciTech Connect

    Sokhansanj, B A; Wilson, III, D M

    2004-05-13

    Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.

  18. Adaptive inferential sensors based on evolving fuzzy models.

    PubMed

    Angelov, Plamen; Kordon, Arthur

    2010-04-01

    A new technique for the design and use of inferential sensors in the process industry is proposed in this paper, which is based on the recently introduced concept of evolving fuzzy models (EFMs). They address the challenge that the modern process industry faces today, namely, to develop such adaptive and self-calibrating online inferential sensors that reduce the maintenance costs while keeping the high precision and interpretability/transparency. The proposed new methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort for their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with evolving and self-developing structure from the data streams; (2) the new methodology for online automatic selection of input variables that are most relevant for the prediction; (3) the technique to detect automatically a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely, eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that well-interpretable inferential sensors with a simple structure can automatically be designed from the data stream in real time, which predict various process variables of interest. The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the

  19. Application of Open Loop H-Adaptation to an Unstructured Grid Tidal Flat Model

    NASA Astrophysics Data System (ADS)

    Cowles, G. W.

    2008-12-01

    The complex topology of tidal flats presents a challenge to coastal ocean models. Recently, several models have been developed employing unstructured grids, which can provide the flexibility in mesh resolution required to resolve the complex bathymetry and coastline. However, the distribution of element size in the initial mesh can be somewhat arbitrary, and is in general the product of the operator tailoring the resolution to the underlying bathymetry and regions of interest. In this work, the flow solution from an idealized tidal flat application is used to drive an open loop h-adaptation of the mesh. The model used for this work is the Finite Volume Coastal Ocean Model (FVCOM), an open source, terrain following model. A background length scale distribution derived from model output is used to generate a new initial mesh for the model run, thus defining an iteration of the procedure. Several metrics for computing the background length scale will be examined. These include direct estimation of spatial discretization error using Richardson's extrapolation from a sequence of meshes as well as heuristics derived from gradients in the primitive variables. Examination of grid independence, computational efficiency, and performance of the scheme for idealized tidal flats with inclusion of morphodynamics will be discussed.
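
    One of the length-scale metrics mentioned is direct estimation of spatial discretization error by Richardson extrapolation from a mesh sequence; a minimal sketch follows, assuming a known formal order (p = 2 here, a hypothetical choice) and coarse/fine solutions already sampled at common points.

      # Richardson-extrapolation error estimate from two mesh levels, assuming the
      # two solutions are sampled at common points and the scheme's formal order p
      # is known (p = 2 is a hypothetical choice).
      import numpy as np

      def richardson_error(u_coarse, u_fine, ratio=2.0, p=2):
          """Estimate the discretization error of the fine-grid solution."""
          err_fine = (u_coarse - u_fine) / (ratio ** p - 1.0)
          return err_fine, u_fine - err_fine      # error estimate, extrapolant

      # Toy check: exact field plus errors that shrink by the expected factor of 4.
      x = np.linspace(0.0, np.pi, 33)
      u_exact = np.sin(x)
      u_h = u_exact + 0.020 * np.cos(3 * x)       # pretend coarse-grid solution
      u_h2 = u_exact + 0.005 * np.cos(3 * x)      # pretend fine-grid solution
      err, _ = richardson_error(u_h, u_h2)
      print("max estimated error:", np.abs(err).max(),
            " max true fine-grid error:", np.abs(u_h2 - u_exact).max())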

  20. Adaptive self-organization in a realistic neural network model

    NASA Astrophysics Data System (ADS)

    Meisel, Christian; Gross, Thilo

    2009-12-01

    Information processing in complex systems is often found to be maximally efficient close to critical states associated with phase transitions. It is therefore conceivable that also neural information processing operates close to criticality. This is further supported by the observation of power-law distributions, which are a hallmark of phase transitions. An important open question is how neural networks could remain close to a critical point while undergoing a continual change in the course of development, adaptation, learning, and more. An influential contribution was made by Bornholdt and Rohlf, introducing a generic mechanism of robust self-organized criticality in adaptive networks. Here, we address the question whether this mechanism is relevant for real neural networks. We show in a realistic model that spike-time-dependent synaptic plasticity can self-organize neural networks robustly toward criticality. Our model reproduces several empirical observations and makes testable predictions on the distribution of synaptic strength, relating them to the critical state of the network. These results suggest that the interplay between dynamics and topology may be essential for neural information processing.
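
    The plasticity mechanism referred to above is spike-timing-dependent plasticity; a minimal sketch follows (a single synapse with a pair-based exponential rule, not the paper's full network model), with illustrative amplitudes and time constants.

      # Pair-based STDP with exponential windows for a single synapse; amplitudes
      # and time constants are illustrative.
      import numpy as np

      A_plus, A_minus = 0.01, 0.012      # potentiation / depression amplitudes
      tau_plus, tau_minus = 20.0, 20.0   # window time constants (ms)

      def stdp_dw(pre_times, post_times):
          """Total weight change summed over all pre/post spike pairings."""
          dw = 0.0
          for t_pre in pre_times:
              for t_post in post_times:
                  dt = t_post - t_pre
                  if dt > 0:      # pre before post: potentiation
                      dw += A_plus * np.exp(-dt / tau_plus)
                  elif dt < 0:    # post before pre: depression
                      dw -= A_minus * np.exp(dt / tau_minus)
          return dw

      print("net weight change:", stdp_dw([10.0, 50.0, 90.0], [12.0, 70.0]))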

  1. A Model of the Soft X-ray Background as a Blast Wave Viewed from Inside

    NASA Technical Reports Server (NTRS)

    Edgar, R. J.; Cox, D. P.

    1984-01-01

    The suggestion that the soft X-ray background arises in part because the Sun is inside a large supernova blastwave was examined using models of spherical blastwaves. The models can produce quantitative fits to both surface brightnesses and energy band ratios when t = 10^5, E_o = 5 x 10^50 ergs, and n ≈ 0.004 cm^-3. The models are generalized by varying the relative importance of factors such as thermal conduction, Coulomb heating of electrons, and external pressure; by allowing the explosions to occur in preexisting cavities with steep density gradients; and by examining the effects of large obstructions or other anisotropies in the ambient medium.

  2. A model for the distribution of material generating the soft X-ray background

    SciTech Connect

    Snowden, S.L.; Cox, D.P.; Mccammon, D.; Sanders, W.T.

    1990-05-01

    The observational evidence relating to the soft X-ray diffuse background is discussed, and a simple model for its source and spatial structure is presented. In this simple model with one free parameter, the observed 1/4 keV X-ray intensity originates as thermal emission from a uniform hot plasma filling a cavity in the neutral material of the Galactic disk which contains the sun. Variations in the observed X-ray intensity are due to variations in the extent of the emission volume and therefore the emission measure of the plasma. The model reproduces the observed negative correlation between X-ray intensity and H I column density and predicts reasonable values for interstellar medium parameters. 64 refs.

  3. A model for the distribution of material generating the soft X-ray background

    NASA Technical Reports Server (NTRS)

    Snowden, S. L.; Cox, D. P.; Mccammon, D.; Sanders, W. T.

    1990-01-01

    The observational evidence relating to the soft X-ray diffuse background is discussed, and a simple model for its source and spatial structure is presented. In this simple model with one free parameter, the observed 1/4 keV X-ray intensity originates as thermal emission from a uniform hot plasma filling a cavity in the neutral material of the Galactic disk which contains the sun. Variations in the observed X-ray intensity are due to variations in the extent of the emission volume and therefore the emission measure of the plasma. The model reproduces the observed negative correlation between X-ray intensity and H I column density and predicts reasonable values for interstellar medium parameters.

  4. Systematic spectral analysis of GX 339-4: Influence of Galactic background and reflection models

    NASA Astrophysics Data System (ADS)

    Clavel, M.; Rodriguez, J.; Corbel, S.; Coriat, M.

    2016-05-01

    Black hole X-ray binaries display large outbursts, during which their properties are strongly variable. We develop a systematic spectral analysis of the 3-40 keV RXTE/PCA data in order to study the evolution of these systems and apply it to GX 339-4. Using the low count rate observations, we provide a precise model of the Galactic background at GX 339-4's location and discuss its possible impact on the source spectral parameters. At higher fluxes, the use of a Gaussian line to model the reflection component can lead to the detection of a high-temperature disk, in particular in the high-hard state. We demonstrate that this component is an artifact arising from an incomplete modeling of the reflection spectrum.

  5. Adaptive finite difference for seismic wavefield modelling in acoustic media.

    PubMed

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-01-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we introduce a novel scheme for implementing finite difference by introducing a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is capable of maximising the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang's optimised finite difference scheme. PMID:27491333
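
    As a hedged stand-in for the coefficient optimisation (the time-to-space wavelet mapping itself is not reproduced), the sketch below fits second-derivative stencil coefficients by least squares over the wavenumber band implied by an assumed minimum velocity and source bandwidth.

      # Band-limited least-squares coefficients for a centred second-derivative
      # stencil: choose a_n so that (2/h^2) * sum_n a_n (cos(n k h) - 1) matches
      # -k^2 over the wavenumbers carried by the source. A generic stand-in, not
      # the paper's wavelet-mapping procedure; h, v_min and f_max are assumed.
      import numpy as np

      def fd2_coeffs(half_width, h, k_max, n_samples=200):
          k = np.linspace(1e-6, k_max, n_samples)
          n = np.arange(1, half_width + 1)
          A = (2.0 / h ** 2) * (np.cos(np.outer(k, n) * h) - 1.0)  # stencil symbol
          b = -k ** 2                                              # exact symbol
          a, *_ = np.linalg.lstsq(A, b, rcond=None)
          return a                                                 # a_1 ... a_N

      h = 10.0                       # grid spacing (m)
      v_min, f_max = 1500.0, 30.0    # slowest velocity (m/s) and bandwidth (Hz)
      k_max = 2.0 * np.pi * f_max / v_min
      print("optimised coefficients:", fd2_coeffs(4, h, k_max))
      print("conventional 8th-order:", [8 / 5, -1 / 5, 8 / 315, -1 / 560])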

  6. Modelling interactions between mitigation, adaptation and sustainable development

    NASA Astrophysics Data System (ADS)

    Reusser, D. E.; Siabatto, F. A. P.; Garcia Cantu Ros, A.; Pape, C.; Lissner, T.; Kropp, J. P.

    2012-04-01

    Managing the interdependence of climate mitigation, adaptation and sustainable development requires a good understanding of the dominant socioecological processes that have determined the pathways in the past. Key variables include water and food availability which depend on climate and overall ecosystem services, as well as energy supply and social, political and economic conditions. We present our initial steps to build a system dynamics model of nations that represents a minimal set of relevant variables of the socio-ecological development. The ultimate goal of the modelling exercise is to derive possible future scenarios and test those for their compatibility with sustainability boundaries. Where dynamics go beyond sustainability boundaries, intervention points in the dynamics can be sought.

  7. Adaptive finite difference for seismic wavefield modelling in acoustic media

    NASA Astrophysics Data System (ADS)

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-08-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we introduce a novel scheme for implementing finite difference by introducing a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is capable of maximising the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang’s optimised finite difference scheme.

  8. Adaptive, Model-Based Monitoring and Threat Detection

    NASA Astrophysics Data System (ADS)

    Valdes, Alfonso; Skinner, Keith

    2002-09-01

    We explore the suitability of model-based probabilistic techniques, such as Bayes networks, to the field of intrusion detection and alert report correlation. We describe a network intrusion detection system (IDS) using Bayes inference, wherein the knowledge base is encoded not as rules but as conditional probability relations between observables and hypotheses of normal and malicious usage. The same high-performance Bayes inference library was employed in a component of the Mission-Based Correlation effort, using an initial knowledge base that adaptively learns the security administrator's preference for alert priority and rank. Another major effort demonstrated probabilistic techniques in heterogeneous sensor correlation. We provide results for simulated attack data, live traffic, and the CyberPanel Grand Challenge Problem. Our results establish that model-based probabilistic techniques are an important complementary capability to signature-based methods in detection and correlation.
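
    A minimal sketch of the kind of inference described (hypotheses of normal versus malicious usage conditioned on observables); the naive-Bayes structure and all probability values are made-up illustrations, not the knowledge base used by the paper's IDS.

      # Posterior over hypotheses (normal vs. malicious usage) given binary
      # observables; a naive-Bayes toy with made-up probabilities, not the paper's
      # Bayes-network knowledge base.
      priors = {"normal": 0.95, "malicious": 0.05}
      cpt = {  # P(observable is True | hypothesis), illustrative numbers only
          "many_failed_logins": {"normal": 0.02, "malicious": 0.60},
          "unusual_port_scan":  {"normal": 0.01, "malicious": 0.40},
          "high_outbound_rate": {"normal": 0.10, "malicious": 0.50},
      }

      def posterior(observed):
          """observed: dict mapping observable name to a boolean value."""
          scores = {}
          for hyp, prior in priors.items():
              like = prior
              for name, value in observed.items():
                  p_true = cpt[name][hyp]
                  like *= p_true if value else (1.0 - p_true)
              scores[hyp] = like
          total = sum(scores.values())
          return {hyp: s / total for hyp, s in scores.items()}

      print(posterior({"many_failed_logins": True,
                       "unusual_port_scan": True,
                       "high_outbound_rate": False}))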

  9. Direct model reference adaptive control of robotic arms

    NASA Technical Reports Server (NTRS)

    Kaufman, Howard; Swift, David C.; Cummings, Steven T.; Shankey, Jeffrey R.

    1993-01-01

    The results of controlling a PUMA 560 Robotic Manipulator and the NASA shuttle Remote Manipulator System (RMS) using a Command Generator Tracker (CGT) based Model Reference Adaptive Controller (DMRAC) are presented. Initially, the DMRAC algorithm was run in simulation using a detailed dynamic model of the PUMA 560. The algorithm was tuned on the simulation and then used to control the manipulator using minimum jerk trajectories as the desired reference inputs. The ability to track a trajectory in the presence of load changes was also investigated in the simulation. Satisfactory performance was achieved both in simulation and on the actual robot. The obtained responses showed that the algorithm was robust in the presence of sudden load changes. Because these results indicate that the DMRAC algorithm can indeed be successfully applied to the control of robotic manipulators, additional testing was performed to validate the applicability of DMRAC to simulated dynamics of the shuttle RMS.

  10. Carving and adaptive drainage enforcement of grid digital elevation models

    NASA Astrophysics Data System (ADS)

    Soille, Pierre; Vogt, Jürgen; Colombo, Roberto

    2003-12-01

    An effective and widely used method for removing spurious pits in digital elevation models consists of filling them until they overflow. However, this method sometimes creates large flat regions which in turn pose a problem for the determination of accurate flow directions. In this study, we propose to suppress each pit by creating a descending path from it to the nearest point having a lower elevation value. This is achieved by carving, i.e., lowering, the terrain elevations along the detected path. Carving paths are identified through a flooding simulation starting from the river outlets. The proposed approach allows for adaptive drainage enforcement whereby river networks coming from other data sources are imposed to the digital elevation model only in places where the automatic river network extraction deviates substantially from the known networks. An improvement to methods for routing flow over flat regions is also introduced. Detailed results are presented over test areas of the Danube basin.
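
    A minimal carving sketch on a toy grid: breadth-first search from a pit to the nearest strictly lower cell, then lowering of the cells along the connecting path so that it descends. The paper identifies carving paths through a flooding simulation from the river outlets; that step is not reproduced here.

      # Carve a pit: breadth-first search to the nearest strictly lower cell, then
      # lower the cells along the path so it descends. A simplified stand-in for
      # the flooding-based path identification described in the abstract.
      from collections import deque
      import numpy as np

      def carve_pit(dem, pit, eps=1e-3):
          rows, cols = dem.shape
          prev = {pit: None}
          queue = deque([pit])
          target = None
          while queue:
              cell = queue.popleft()
              if dem[cell] < dem[pit]:                 # reached lower ground
                  target = cell
                  break
              r, c = cell
              for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in prev:
                      prev[nxt] = cell
                      queue.append(nxt)
          if target is None:                           # no lower ground anywhere
              return dem
          path = []
          while target is not None:                    # backtrack target -> pit
              path.append(target)
              target = prev[target]
          path.reverse()                               # pit -> lower ground
          for here, there in zip(path[:-1], path[1:]):
              dem[there] = min(dem[there], dem[here] - eps)   # enforce descent
          return dem

      dem = np.array([[5., 5., 5., 5.],
                      [5., 2., 4., 3.],
                      [5., 5., 5., 1.],
                      [5., 5., 5., 5.]])
      print(carve_pit(dem, (1, 1)))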

  11. Adaptive finite difference for seismic wavefield modelling in acoustic media

    PubMed Central

    Yao, Gang; Wu, Di; Debens, Henry Alexander

    2016-01-01

    Efficient numerical seismic wavefield modelling is a key component of modern seismic imaging techniques, such as reverse-time migration and full-waveform inversion. Finite difference methods are perhaps the most widely used numerical approach for forward modelling, and here we introduce a novel scheme for implementing finite difference by introducing a time-to-space wavelet mapping. Finite difference coefficients are then computed by minimising the difference between the spatial derivatives of the mapped wavelet and the finite difference operator over all propagation angles. Since the coefficients vary adaptively with different velocities and source wavelet bandwidths, the method is capable of maximising the accuracy of the finite difference operator. Numerical examples demonstrate that this method is superior to standard finite difference methods, while comparable to Zhang’s optimised finite difference scheme. PMID:27491333

  12. Dynamic modeling and adaptive control for space stations

    NASA Technical Reports Server (NTRS)

    Ih, C. H. C.; Wang, S. J.

    1985-01-01

    Of all large space structural systems, space stations present a unique challenge and requirement to advanced control technology. Their operations require control system stability over an extremely broad range of parameter changes and high levels of disturbances. During shuttle docking the system mass may suddenly increase by more than 100% and during station assembly the mass may vary even more drastically. These, coupled with the inherent dynamic model uncertainties associated with large space structural systems, require highly sophisticated control systems that can grow as the stations evolve and cope with the uncertainties and time-varying elements to maintain the stability and pointing of the space stations. The aspects of space station operational properties are first examined, including configurations, dynamic models, shuttle docking contact dynamics, solar panel interaction, and load reduction to yield a set of system models and conditions. A model reference adaptive control algorithm along with the inner-loop plant augmentation design for controlling the space stations under severe operational conditions of shuttle docking, excessive model parameter errors, and model truncation are then investigated. The instability problem caused by the zero-frequency rigid body modes and a proposed solution using plant augmentation are addressed. Two sets of sufficient conditions which guarantee the global asymptotic stability of the space station systems are obtained.

  13. The reduced order model problem in distributed parameter systems adaptive identification and control

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.

    1980-01-01

    The research concerning the reduced order model problem in distributed parameter systems is reported. The adaptive control strategy was chosen for investigation in the annular momentum control device. It is noted that, if there is no observation spillover and no model errors, an indirect adaptive control strategy can be globally stable. Recent publications concerning adaptive control are included.

  14. Adaptation in tunably rugged fitness landscapes: the rough Mount Fuji model.

    PubMed

    Neidhart, Johannes; Szendro, Ivan G; Krug, Joachim

    2014-10-01

    Much of the current theory of adaptation is based on Gillespie's mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507
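
    A minimal sketch of the RMF construction itself (fitness equals an additive slope toward a reference genotype plus an i.i.d. random house-of-cards term), enumerating a small binary landscape and counting local maxima; the sequence length, slope and random seed are illustrative.

      # Rough Mount Fuji landscape over binary sequences: additive slope toward a
      # reference genotype plus i.i.d. random noise; parameters are illustrative.
      import itertools
      import numpy as np

      L, c = 8, 0.3
      rng = np.random.default_rng(1)

      genotypes = list(itertools.product((0, 1), repeat=L))
      # Hamming distance to the all-zero reference genotype is simply sum(g).
      fitness = {g: -c * sum(g) + rng.standard_normal() for g in genotypes}

      def neighbours(g):
          """All genotypes one mutational step away."""
          for i in range(L):
              yield g[:i] + (1 - g[i],) + g[i + 1:]

      n_maxima = sum(all(fitness[g] > fitness[h] for h in neighbours(g))
                     for g in genotypes)
      print("local fitness maxima:", n_maxima, "of", len(genotypes), "genotypes")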

  15. Adaptation in Tunably Rugged Fitness Landscapes: The Rough Mount Fuji Model

    PubMed Central

    Neidhart, Johannes; Szendro, Ivan G.; Krug, Joachim

    2014-01-01

    Much of the current theory of adaptation is based on Gillespie’s mutational landscape model (MLM), which assumes that the fitness values of genotypes linked by single mutational steps are independent random variables. On the other hand, a growing body of empirical evidence shows that real fitness landscapes, while possessing a considerable amount of ruggedness, are smoother than predicted by the MLM. In the present article we propose and analyze a simple fitness landscape model with tunable ruggedness based on the rough Mount Fuji (RMF) model originally introduced by Aita et al. in the context of protein evolution. We provide a comprehensive collection of results pertaining to the topographical structure of RMF landscapes, including explicit formulas for the expected number of local fitness maxima, the location of the global peak, and the fitness correlation function. The statistics of single and multiple adaptive steps on the RMF landscape are explored mainly through simulations, and the results are compared to the known behavior in the MLM model. Finally, we show that the RMF model can explain the large number of second-step mutations observed on a highly fit first-step background in a recent evolution experiment with a microvirid bacteriophage. PMID:25123507

  16. Face detection in complex background based on Adaboost algorithm and YCbCr skin color model

    NASA Astrophysics Data System (ADS)

    Ge, Wei; Han, Chunling; Quan, Wei

    2015-12-01

    Face detection is a fundamental and important research theme in pattern recognition and computer vision, and remarkable results have been achieved. Among existing methods, statistics-based methods hold a dominant position. In this paper, the Adaboost algorithm based on Haar-like features is used to detect faces in complex backgrounds. A method combining YCbCr skin model detection and Adaboost is investigated, in which the skin detection is used to validate the detection results obtained by the Adaboost algorithm; this overcomes the false detection problem of Adaboost alone. Experimental results show that nearly all non-face areas are removed and the detection rate is improved.
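
    A minimal sketch of the skin-colour validation step: convert a candidate face patch to YCbCr (BT.601 full-range formulas) and accept it only if enough pixels fall inside a typical skin range. The Cb/Cr thresholds and acceptance fraction below are common literature values assumed for illustration, not necessarily those of the paper.

      # Validate an Adaboost face candidate by the fraction of skin-coloured
      # pixels in YCbCr space; thresholds are typical literature values, assumed
      # here for illustration.
      import numpy as np

      def skin_fraction(rgb_patch):
          """rgb_patch: H x W x 3 array with R, G, B in 0..255."""
          rgb = rgb_patch.astype(np.float64)
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b     # BT.601 full range
          cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
          skin = (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
          return skin.mean()

      def validate_candidate(rgb_patch, min_skin_fraction=0.4):
          """Keep a face candidate only if its skin coverage is high enough."""
          return skin_fraction(rgb_patch) >= min_skin_fraction

      patch = np.full((24, 24, 3), (210, 160, 140), dtype=np.uint8)  # skin-like
      print(validate_candidate(patch))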

  17. Knowledge fusion: Time series modeling followed by pattern recognition applied to unusual sections of background data

    SciTech Connect

    Burr, T.; Doak, J.; Howell, J.A.; Martinez, D.; Strittmatter, R.

    1996-03-01

    This report describes work performed during FY 95 for the Knowledge Fusion Project, which was sponsored by the Department of Energy, Office of Nonproliferation and National Security. The project team selected satellite sensor data as the one main example to which its analysis algorithms would be applied. The specific sensor-fusion problem has many generic features that make it a worthwhile problem to attempt to solve in a general way. The generic problem is to recognize events of interest from multiple time series in a possibly noisy background. By implementing a suite of time series modeling and forecasting methods and using well-chosen alarm criteria, we reduce the number of false alarms. We then further reduce the number of false alarms by analyzing all suspicious sections of data, as judged by the alarm criteria, with pattern recognition methods. This report describes the implementation and application of this two-step process for separating events from unusual background. As a fortunate by-product of this activity, it is possible to gain a better understanding of the natural background.
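
    A minimal sketch of the first of the two steps (forecast each series, then alarm on unusually large residuals); the AR(1) forecaster, the injected event and the 4-sigma threshold are illustrative choices, not the project's actual models or criteria.

      # Step one of the two-step process: forecast each series, then flag sections
      # with unusually large forecast residuals. AR(1) model and 4-sigma threshold
      # are illustrative choices.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 2000
      x = np.zeros(n)
      for t in range(1, n):                  # background series: an AR(1) process
          x[t] = 0.8 * x[t - 1] + rng.standard_normal()
      x[1500:1510] += 6.0                    # injected "event of interest"

      phi = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])   # fitted AR(1) coefficient
      resid = x[1:] - phi * x[:-1]                           # one-step-ahead residuals
      z = (resid - resid.mean()) / resid.std()
      alarms = np.where(np.abs(z) > 4.0)[0] + 1              # flagged time indices
      print("estimated phi:", round(phi, 3), " alarms near:", alarms[:10])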

  18. Localized dynamic kinetic-energy-based models for stochastic coherent adaptive large eddy simulation

    NASA Astrophysics Data System (ADS)

    De Stefano, Giuliano; Vasilyev, Oleg V.; Goldstein, Daniel E.

    2008-04-01

    Stochastic coherent adaptive large eddy simulation (SCALES) is an extension of the large eddy simulation approach in which a wavelet filter-based dynamic grid adaptation strategy is employed to solve for the most "energetic" coherent structures in a turbulent field while modeling the effect of the less energetic background flow. In order to take full advantage of the ability of the method in simulating complex flows, the use of localized subgrid-scale models is required. In this paper, new local dynamic one-equation subgrid-scale models based on both eddy-viscosity and non-eddy-viscosity assumptions are proposed for SCALES. The models involve the definition of an additional field variable that represents the kinetic energy associated with the unresolved motions. This way, the energy transfer between resolved and residual flow structures is explicitly taken into account by the modeling procedure without an equilibrium assumption, as in the classical Smagorinsky approach. The wavelet-filtered incompressible Navier-Stokes equations for the velocity field, along with the additional evolution equation for the subgrid-scale kinetic energy variable, are numerically solved by means of the dynamically adaptive wavelet collocation solver. The proposed models are tested for freely decaying homogeneous turbulence at Reλ=72. It is shown that the SCALES results, obtained with less than 0.5% of the total nonadaptive computational nodes, closely match reference data from direct numerical simulation. In contrast to classical large eddy simulation, where the energetic small scales are poorly simulated, the agreement holds not only in terms of global statistical quantities but also in terms of spectral distribution of energy and, more importantly, enstrophy all the way down to the dissipative scales.

  19. A new adaptive data transfer library for model coupling

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Liu, Li; Yang, Guangwen; Li, Ruizhe; Wang, Bin

    2016-06-01

    Data transfer means transferring data fields from a sender to a receiver. It is a fundamental and frequently used operation of a coupler. Most versions of state-of-the-art couplers currently use an implementation based on the point-to-point (P2P) communication of the message passing interface (MPI) (referred to as "P2P implementation" hereafter). In this paper, we reveal the drawbacks of the P2P implementation when the parallel decompositions of the sender and the receiver are different, including low communication bandwidth due to small message size, variable and high number of MPI messages, as well as network contention. To overcome these drawbacks, we propose a butterfly implementation for data transfer. Although the butterfly implementation outperforms the P2P implementation in many cases, it degrades the performance when the sender and the receiver have similar parallel decompositions or when the number of processes used for running models is small. To ensure data transfer with optimal performance, we design and implement an adaptive data transfer library that combines the advantages of both butterfly implementation and P2P implementation. As the adaptive data transfer library automatically uses the best implementation for data transfer, it outperforms the P2P implementation in many cases while it does not decrease the performance in any cases. Now, the adaptive data transfer library is open to the public and has been imported into the C-Coupler1 coupler for performance improvement of data transfer. We believe that other couplers can also benefit from this.

  20. Model observer design for detecting multiple abnormalities in anatomical background images

    NASA Astrophysics Data System (ADS)

    Wen, Gezheng; Markey, Mia K.; Park, Subok

    2016-03-01

    As psychophysical studies are resource-intensive to conduct, model observers are commonly used to assess and optimize medical imaging quality. Existing model observers were typically designed to detect at most one signal. However, in clinical practice, there may be multiple abnormalities in a single image set (e.g., multifocal and multicentric breast cancers (MMBC)), which can impact treatment planning. Prevalence of signals can be different across anatomical regions, and human observers do not know the number or location of signals a priori. As new imaging techniques have the potential to improve multiple-signal detection (e.g., digital breast tomosynthesis may be more effective for diagnosis of MMBC than planar mammography), image quality assessment approaches addressing such tasks are needed. In this study, we present a model-observer mechanism to detect multiple signals in the same image dataset. To handle the high dimensionality of images, a novel implementation of partial least squares (PLS) was developed to estimate different sets of efficient channels directly from the images. Without any prior knowledge of the background or the signals, the PLS channels capture interactions between signals and the background which provide discriminant image information. Corresponding linear decision templates are employed to generate both image-level and location-specific scores on the presence of signals. Our preliminary results show that the model observer using PLS channels, compared to our first attempts with Laguerre-Gauss channels, can achieve high performance with a reasonably small number of channels, and the optimal design of the model observer may vary as the tasks of clinical interest change.
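
    A hedged sketch of the channelized-observer idea: estimate channels with partial least squares directly from (here simulated, Gaussian-blob) backgrounds and score held-out images with the fitted linear mapping. The scikit-learn PLS implementation and all data parameters are stand-ins for the paper's implementation.

      # Channelized model observer via partial least squares on simulated
      # Gaussian-blob backgrounds; scikit-learn PLS and all data parameters are
      # stand-ins, not the paper's implementation.
      import numpy as np
      from scipy.ndimage import gaussian_filter
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(3)
      n, side = 400, 32
      signal = np.zeros((side, side))
      signal[14:18, 14:18] = 1.0                    # small square "lesion"

      X, y = [], []
      for i in range(n):
          bg = gaussian_filter(rng.standard_normal((side, side)), sigma=3) * 5.0
          present = i % 2                           # alternate absent/present
          X.append((bg + present * signal).ravel())
          y.append(present)
      X, y = np.array(X), np.array(y, dtype=float)

      pls = PLSRegression(n_components=5)           # 5 channels (illustrative)
      pls.fit(X[:300], y[:300])
      channels = pls.x_weights_.T.reshape(5, side, side)
      scores = pls.predict(X[300:]).ravel()         # decision variable, held out
      auc = (scores[y[300:] == 1][:, None] > scores[y[300:] == 0][None, :]).mean()
      print("channel image shape:", channels.shape, " two-sample AUC:", round(auc, 3))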

  1. Adapting a weather forecast model for greenhouse gas simulation

    NASA Astrophysics Data System (ADS)

    Polavarapu, S. M.; Neish, M.; Tanguay, M.; Girard, C.; de Grandpré, J.; Gravel, S.; Semeniuk, K.; Chan, D.

    2015-12-01

    The ability to simulate greenhouse gases on the global domain is useful for providing boundary conditions for regional flux inversions, as well as for providing reference data for bias correction of satellite measurements. Given the existence of operational weather and environmental prediction models and assimilation systems at Environment Canada, it makes sense to use these tools for greenhouse gas simulations. In this work, we describe the adaptations needed to reasonably simulate CO2 with a weather forecast model. The main challenges were the implementation of a mass conserving advection scheme, and the careful implementation of a mixing ratio defined with respect to dry air. The transport of tracers through convection was also added, and the vertical mixing through the boundary layer was slightly modified. With all these changes, the model conserves CO2 mass well on the annual time scale, and the high resolution (0.9 degree grid spacing) permits a good description of synoptic scale transport. The use of a coupled meteorological/tracer transport model also permits an assessment of approximations needed in offline transport model approaches, such as the neglect of water vapour mass when computing a tracer mixing ratio with respect to dry air.
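
    A small worked example of the dry-air bookkeeping mentioned above: converting a CO2 mass mixing ratio defined with respect to moist air into one defined with respect to dry air, and then into a dry-air mole fraction in ppm; the numerical values are illustrative and the molar masses are standard constants.

      # Convert a CO2 mass mixing ratio w.r.t. moist air to one w.r.t. dry air and
      # then to a dry-air mole fraction in ppm; molar masses are standard values,
      # the mixing ratio and specific humidity below are illustrative.
      M_DRY_AIR = 28.9647e-3    # kg/mol
      M_CO2 = 44.0095e-3        # kg/mol

      def co2_wrt_dry_air(q_co2_moist, q_v):
          """q_co2_moist: kg CO2 per kg moist air; q_v: specific humidity (kg/kg)."""
          return q_co2_moist / (1.0 - q_v)          # kg CO2 per kg dry air

      def mass_to_ppm_dry(q_co2_dry):
          """Dry-air mass mixing ratio -> dry-air mole fraction in ppm."""
          return q_co2_dry * (M_DRY_AIR / M_CO2) * 1e6

      q_dry = co2_wrt_dry_air(q_co2_moist=6.0e-4, q_v=0.010)
      print("CO2 w.r.t. dry air: %.4e kg/kg  (%.1f ppm)" % (q_dry, mass_to_ppm_dry(q_dry)))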

  2. Adapting bump model for ventral photoreceptors of Limulus

    PubMed Central

    1982-01-01

    Light-evoked current fluctuations have been recorded from ventral photoreceptors of Limulus for light intensity from threshold up to 10^5 times threshold. These data are analyzed in terms of the adapting bump noise model, which postulates that (a) the response to light is a summation of bumps; and (b) the average size of bump decreases with light intensity, and this is the major mechanism of light adaptation. It is shown here that this model can account for the data well. Furthermore, the model provides a convenient framework to characterize, in terms of bump parameters, the effects of calcium ions, which are known to affect photoreceptor functions. From responses to very dim light, it is found that the average impulse response (average of a large number of responses to dim flashes) can be predicted from knowledge of both the noise characteristics under steady light and the dispersion of latencies of individual bumps. Over the range of light intensities studied, it is shown that (a) the bump rate increases in strict proportionality to light intensity, up to approximately 10^5 bumps per second; and (b) the bump height decreases approximately as the -0.7 power of light intensity; at rates greater than 10^5 bumps per second, the conductance change associated with the single bump seems to reach a minimum value of approximately 10^-11 reciprocal ohms; (c) from the lowest to the highest light intensity, the bump duration decreases approximately by a factor of 2, and the time scale of the dispersion of latencies of individual bumps decreases approximately by a factor of 3; (d) removal of calcium ions from the bath lengthens the latency process and causes an increase in bump height but appears to have no effect on either the bump rate or the bump duration. PMID:7108487

  3. Robust, Adaptive Functional Regression in Functional Mixed Model Framework

    PubMed Central

    Zhu, Hongxiao; Brown, Philip J.; Morris, Jeffrey S.

    2012-01-01

    Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effects, random effect and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient

  4. Component-based handprint segmentation using adaptive writing style model

    NASA Astrophysics Data System (ADS)

    Garris, Michael D.

    1997-04-01

    Building upon the utility of connected components, NIST has designed a new character segmentor based on statistically modeling the style of a person's handwriting. Simple spatial features capture the characteristics of a particular writer's style of handprint, enabling the new method to maintain a traditional character-level segmentation philosophy without the integration of recognition or the use of oversegmentation and linguistic postprocessing. Estimates for stroke width and character height are used to compute aspect ratio and standard stroke count features that adapt to the writer's style at the field level. The new method has been developed with a predetermined set of fuzzy rules making the segmentor much less fragile and much more adaptive, and the new method successfully reconstructs fragmented characters as well as splits touching characters. The new segmentor was integrated into the NIST public domain form-based handprint recognition systems and then tested on a set of 490 handwriting sample forms found in NIST special database 19. When compared to a simple component-based segmentor, the new adaptable method improved the overall recognition of handprinted digits by 3.4 percent and field level recognition by 6.9 percent, while effectively reducing deletion errors by 82 percent. The same program code and set of parameters successfully segments sequences of uppercase and lowercase characters without any context-based tuning. While not as dramatic as digits, the recognition of uppercase and lowercase characters improved by 1.7 percent and 1.3 percent respectively. The segmentor maintains a relatively straight-forward and logical process flow avoiding convolutions of encoded exceptions as is common in expert systems. As a result, the new segmentor operates very efficiently, and throughput as high as 362 characters per second can be achieved. Letters and numbers are constructed from a predetermined configuration of a relatively small number of strokes. Results

  5. An adaptive radiation model for the origin of new genefunctions

    SciTech Connect

    Francino, M. Pilar

    2004-10-18

    The evolution of new gene functions is one of the keys to evolutionary innovation. Most novel functions result from gene duplication followed by divergence. However, the models hitherto proposed to account for this process are not fully satisfactory. The classic model of neofunctionalization holds that the two paralogous gene copies resulting from a duplication are functionally redundant, such that one of them can evolve under no functional constraints and occasionally acquire a new function. This model lacks a convincing mechanism for the new gene copies to increase in frequency in the population and survive the mutational load expected to accumulate under neutrality, before the acquisition of the rare beneficial mutations that would confer new functionality. The subfunctionalization model has been proposed as an alternative way to generate genes with altered functions. This model also assumes that new paralogous gene copies are functionally redundant and therefore neutral, but it predicts that relaxed selection will affect both gene copies such that some of the capabilities of the parent gene will disappear in one of the copies and be retained in the other. Thus, the functions originally present in a single gene will be partitioned between the two descendant copies. However, although this model can explain increases in gene number, it does not really address the main evolutionary question, which is the development of new biochemical capabilities. Recently, a new concept has been introduced into the gene evolution literature which is most likely to help solve this dilemma. The key point is to allow for a period of natural selection for the duplication per se, before new function evolves, rather than considering gene duplication to be neutral as in the previous models. Here, I suggest a new model that draws on the advantage of postulating selection for gene duplication, and proposes that bursts of adaptive gene amplification in response to specific selection

  6. CBSD Version II component models of the IR celestial background. Technical report

    SciTech Connect

    Kennealy, J.P.; Glaudell, G.A.

    1990-12-07

    CBSD Version II addresses the development of algorithms and software which implement realistic models of all the primary celestial background phenomenologies, including solar system, galactic, and extra-galactic features. During 1990, the CBSD program developed and refined IR scene generation models for the zodiacal emission, thermal emission from asteroids and planets, and the galactic point source background. Chapters in this report are devoted to each of those areas. Ongoing extensions to the point source module for extended source descriptions of nebulae and HII regions are briefly discussed. Treatment of small galaxies will also be a natural extension of the current CBSD point source module. Although no CBSD module yet exists for interstellar IR cirrus, MRC has been working closely with the Royal Aerospace Establishment in England to achieve a data-base understanding of cirrus fractal characteristics. The CBSD modules discussed in Chapters 2, 3, and 4 are all now operational and have been employed to generate a significant variety of scenes. CBSD scene generation capability has been well accepted by both the IR astronomy community and the DOD user community and directly supports the SDIO SSGM program.

  7. FPGA implementation for real-time background subtraction based on Horprasert model.

    PubMed

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture which is able to extract the background in resource-limited environments and offers low degradation (caused by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resource utilization of an XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, and an estimated power consumption of 5.76 W. PMID:22368487
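
    A software sketch of the per-pixel classification that the FPGA pipeline implements: brightness distortion (alpha) and chromaticity distortion (CD) against a per-pixel background mean and standard deviation, followed by background/shadow/highlight/foreground labelling. Fixed thresholds are used here for simplicity; the original method derives them from the background statistics.

      # Per-pixel Horprasert-style classification: brightness distortion (alpha)
      # and chromaticity distortion (CD) against a per-pixel background mean/std,
      # then background / shadow / highlight / foreground labels. Fixed thresholds
      # are used here for simplicity; the original method selects them from the
      # background statistics.
      import numpy as np

      def classify(frame, mu, sigma, tau_cd=3.0, alpha_lo=0.6, alpha_hi=1.3):
          """frame, mu, sigma: H x W x 3 float arrays (RGB)."""
          sigma = np.maximum(sigma, 1e-3)
          alpha = (np.sum(frame * mu / sigma ** 2, axis=-1)
                   / np.sum((mu / sigma) ** 2, axis=-1))
          cd = np.sqrt(np.sum(((frame - alpha[..., None] * mu) / sigma) ** 2, axis=-1))
          labels = np.full(frame.shape[:2], "background", dtype=object)
          labels[cd > tau_cd] = "foreground"
          labels[(cd <= tau_cd) & (alpha < alpha_lo)] = "shadow"
          labels[(cd <= tau_cd) & (alpha > alpha_hi)] = "highlight"
          return labels

      mu = np.full((2, 2, 3), 100.0)             # toy background model (mean)
      sigma = np.full((2, 2, 3), 5.0)            # toy background model (std)
      frame = mu.copy()
      frame[0, 0] = [55.0, 55.0, 55.0]           # darker, same colour -> shadow
      frame[1, 1] = [40.0, 180.0, 60.0]          # different colour -> foreground
      print(classify(frame, mu, sigma))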

  8. FPGA Implementation for Real-Time Background Subtraction Based on Horprasert Model

    PubMed Central

    Rodriguez-Gomez, Rafael; Fernandez-Sanchez, Enrique J.; Diaz, Javier; Ros, Eduardo

    2012-01-01

    Background subtraction is considered the first processing stage in video surveillance systems, and consists of determining objects in movement in a scene captured by a static camera. It is an intensive task with a high computational cost. This work proposes a novel embedded FPGA architecture which is able to extract the background in resource-limited environments and offers low degradation (caused by the hardware-friendly model modification). In addition, the original model is extended in order to detect shadows and improve the quality of the segmentation of the moving objects. We have analyzed the resource consumption and performance in Spartan3 Xilinx FPGAs and compared them to other works available in the literature, showing that the current architecture is a good trade-off in terms of accuracy, performance and resource utilization. With less than 65% of the resource utilization of an XC3SD3400 Spartan-3A low-cost family FPGA, the system achieves a frequency of 66.5 MHz reaching 32.8 fps at a resolution of 1,024 × 1,024 pixels, and an estimated power consumption of 5.76 W. PMID:22368487

  9. Constraints on open universe models from quadrupole anisotropy of the cosmic microwave background

    SciTech Connect

    Gouda, Naoteru; Sugiyama, Naoshi; Sasaki, Misao (Kyoto University, Uji)

    1991-05-01

    The quadrupole anisotropy of the cosmic microwave background expected in a variety of open universe models is evaluated by taking full account of contributions from both the generalized Sachs-Wolfe effect and intrinsic photon fluctuations at decoupling. Comparing the results with the observed upper limit of the quadrupole anisotropy, constraints on open universe models are derived. It is concluded that, even if the most conservative attitude is adopted, both hot and cold dark matter models with h = 0.5, Omega(0) not greater than 0.2 and all pure baryonic models with Omega(0) not greater than 0.2 are excluded if the initial density spectrum has the power-law index n not greater than 1, while both hot and cold dark matter models with h = 1.0, Omega(0) not greater than 0.2, and n not greater than 1 are marginally consistent with the observed upper limit of the quadrupole if one allows possible ambiguities in the normalization of the perturbation amplitudes. 29 refs.

  10. Computerized Adaptive Testing: A Comparison of the Nominal Response Model and the Three Parameter Logistic Model.

    ERIC Educational Resources Information Center

    DeAyala, R. J.; Koch, William R.

    A nominal response model-based computerized adaptive testing procedure (nominal CAT) was implemented using simulated data. Ability estimates from the nominal CAT were compared to those from a CAT based upon the three-parameter logistic model (3PL CAT). Furthermore, estimates from both CAT procedures were compared with the known true abilities used…

  11. Turnaround Management Strategies: The Adaptive Model and the Constructive Model. ASHE 1983 Annual Meeting Paper.

    ERIC Educational Resources Information Center

    Chaffee, Ellen E.

    The use of two management strategies by 14 liberal arts and comprehensive colleges attempting to recover from serious financial decline during 1973-1976 were studied. The adaptive model of strategy, based on resource dependence, involves managing demands in order to satisfy critical-resource providers. The constructive model of strategy, based on…

  12. Preliminary Exploration of Adaptive State Predictor Based Human Operator Modeling

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Gregory, Irene M.

    2012-01-01

    Control-theoretic modeling of the human operator dynamic behavior in manual control tasks has a long and rich history. In the last two decades, there has been a renewed interest in modeling the human operator. There has also been significant work on techniques used to identify the pilot model of a given structure. The purpose of this research is to attempt to go beyond pilot identification based on collected experimental data and to develop a predictor of pilot behavior. An experiment was conducted to quantify the effects of changing aircraft dynamics on an operator's ability to track a signal in order to eventually model a pilot adapting to changing aircraft dynamics. A gradient descent estimator and a least squares estimator with exponential forgetting used these data to predict pilot stick input. The results indicate that individual pilot characteristics and vehicle dynamics did not affect the accuracy of either estimator method to estimate pilot stick input. These methods were also able to predict pilot stick input during changing aircraft dynamics and they may have the capability to detect a change in a subject due to workload, engagement, etc., or the effects of changes in vehicle dynamics on the pilot.
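
    The abstract mentions a least squares estimator with exponential forgetting for predicting pilot stick input. The sketch below shows a generic recursive least squares update with a forgetting factor; the regressor and the gain values are assumptions for illustration, not the study's actual formulation.

```python
# Recursive least squares with exponential forgetting; hypothetical regressor.
import numpy as np

class ForgettingRLS:
    def __init__(self, n_params, lam=0.98):
        self.theta = np.zeros(n_params)     # estimated pilot-model parameters
        self.P = np.eye(n_params) * 1e3     # parameter covariance
        self.lam = lam                      # forgetting factor (0 < lam <= 1)

    def update(self, x, y):
        """x: regressor vector, y: measured pilot stick deflection."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)        # gain vector
        pred = x @ self.theta               # one-step prediction of stick input
        self.theta += k * (y - pred)
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return pred

# Example: fit hypothetical "pilot" parameters from noisy stick measurements.
rng = np.random.default_rng(0)
rls = ForgettingRLS(n_params=3)
true_theta = np.array([0.8, -0.3, 0.1])
for _ in range(200):
    x = rng.normal(size=3)
    y = true_theta @ x + rng.normal(scale=0.05)
    rls.update(x, y)
print(rls.theta)                            # approaches true_theta
```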

  13. The National Astronomy Consortium - An Adaptable Model for OAD?

    NASA Astrophysics Data System (ADS)

    Sheth, Kartik

    2015-08-01

    The National Astronomy Consortium (NAC) is a program led by the National Radio Astronomy Observatory (NRAO) and Associated Universities Inc. (AUI) in partnership with the National Society of Black Physicists (NSBP), and a number of minority and majority universities to increase the numbers of students from underrepresented groups and those otherwise overlooked by the traditional academic pipeline into STEM or STEM-related careers. The seed for the NAC was a partnership between NRAO and Howard University which began with an exchange of a few summer students five years ago. Since then the NAC has grown tremendously. Today the NAC aims to host 4 to 5 cohorts nationally in an innovative model in which the students are mentored throughout the year with multiple mentors and peer mentoring, continued engagement in research, and professional development and career training throughout the academic year and throughout their careers. The NAC model has already shown success and is a very promising and innovative model for increasing participation of young people in STEM and STEM-related careers. I will discuss how this model could be adapted in various countries at all levels of education.

  14. Adaptive elastic networks as models of supercooled liquids

    NASA Astrophysics Data System (ADS)

    Yan, Le; Wyart, Matthieu

    2015-08-01

    The thermodynamics and dynamics of supercooled liquids correlate with their elasticity. In particular for covalent networks, the jump of specific heat is small and the liquid is strong near the threshold valence where the network acquires rigidity. By contrast, the jump of specific heat and the fragility are large away from this threshold valence. In a previous work [Proc. Natl. Acad. Sci. USA 110, 6307 (2013), 10.1073/pnas.1300534110], we could explain these behaviors by introducing a model of supercooled liquids in which local rearrangements interact via elasticity. However, in that model the disorder characterizing elasticity was frozen, whereas it is itself a dynamic variable in supercooled liquids. Here we study numerically and theoretically adaptive elastic network models where polydisperse springs can move on a lattice, thus allowing for the geometry of the elastic network to fluctuate and evolve with temperature. We show numerically that our previous results on the relationship between structure and thermodynamics hold in these models. We introduce an approximation where redundant constraints (highly coordinated regions where the frustration is large) are treated as an ideal gas, leading to analytical predictions that are accurate in the range of parameters relevant for real materials. Overall, these results lead to a description of supercooled liquids, in which the distance to the rigidity transition controls the number of directions in phase space that cost energy and the specific heat.

  15. Tsunami modelling with adaptively refined finite volume methods

    USGS Publications Warehouse

    LeVeque, R.J.; George, D.L.; Berger, M.J.

    2011-01-01

    Numerical modelling of transoceanic tsunami propagation, together with the detailed modelling of inundation of small-scale coastal regions, poses a number of algorithmic challenges. The depth-averaged shallow water equations can be used to reduce this to a time-dependent problem in two space dimensions, but even so it is crucial to use adaptive mesh refinement in order to efficiently handle the vast differences in spatial scales. This must be done in a 'well-balanced' manner that accurately captures very small perturbations to the steady state of the ocean at rest. Inundation can be modelled by allowing cells to dynamically change from dry to wet, but this must also be done carefully near refinement boundaries. We discuss these issues in the context of Riemann-solver-based finite volume methods for tsunami modelling. Several examples are presented using the GeoClaw software, and sample codes are available to accompany the paper. The techniques discussed also apply to a variety of other geophysical flows. © 2011 Cambridge University Press.
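
    To make the refinement idea concrete, the sketch below flags cells whose sea-surface elevation departs from the ocean-at-rest steady state by more than a tolerance, which is the kind of criterion a well-balanced adaptive tsunami code needs. Variable names, the tolerance, and the dry-cell handling are assumptions; GeoClaw's actual refinement criteria are richer.

```python
# Refinement flagging on departures from the ocean-at-rest steady state.
import numpy as np

def flag_for_refinement(h, b, sea_level=0.0, tol=1e-3, dry_tol=1e-3):
    """h: water depth per cell, b: bathymetry/topography per cell."""
    wet = h > dry_tol
    eta = np.where(wet, h + b, sea_level)   # surface elevation in wet cells only
    return np.abs(eta - sea_level) > tol    # True where a finer mesh is wanted

h = np.array([4000.0, 4000.002, 0.0])       # two deep-ocean cells and a dry cell
b = np.array([-4000.0, -4000.0, 5.0])
print(flag_for_refinement(h, b))            # [False  True False]
```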

  16. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  17. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  18. Industry Cluster's Adaptive Co-competition Behavior Modeling Inspired by Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Xiang, Wei; Ye, Feifan

    Adaptation helps the individual enterprise adjust its behavior to uncertainties in the environment and hence supports healthy growth of both the individual enterprises and the industry cluster as a whole. This paper focuses on the co-competition adaptation behavior of industry clusters, inspired by swarm intelligence mechanisms. Drawing on ant cooperative transportation and ant foraging behavior and their related swarm intelligence approaches, cooperative adaptation and competitive adaptation behaviors are studied and relevant models are proposed. These adaptive co-competition behavior models can be integrated into the multi-agent system of the industry cluster to make the industry cluster model more realistic.

  19. Adaptive Weibull Multiplicative Model and Multilayer Perceptron Neural Networks for Dark-Spot Detection from SAR Imagery

    PubMed Central

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach avoids the manual setting of the non-adaptive WMM filter parameters by developing an adaptive WMM model, which is a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376
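
    The adaptive WMM filter itself is specific to the paper, but the two-stage idea can be illustrated with a simplified stand-in: fit a Weibull distribution to the SAR backscatter, threshold its lower tail to obtain dark-spot candidates, and classify candidate regions with a small MLP. The 5% tail, the region features, and the tiny training set below are assumptions for demonstration only.

```python
# Stage 1: Weibull lower-tail threshold for dark candidates.
# Stage 2: MLP classification of candidate-region features.
import numpy as np
from scipy.stats import weibull_min
from sklearn.neural_network import MLPClassifier

def dark_candidates(backscatter, tail=0.05):
    c, loc, scale = weibull_min.fit(backscatter.ravel(), floc=0.0)
    threshold = weibull_min.ppf(tail, c, loc=loc, scale=scale)
    return backscatter < threshold               # boolean dark-pixel mask

rng = np.random.default_rng(1)
scene = rng.weibull(2.0, size=(64, 64)) * 0.1    # synthetic backscatter image
print("dark candidate pixels:", int(dark_candidates(scene).sum()))

# Hypothetical region features: [mean backscatter, border gradient, area (km^2)].
X_train = np.array([[0.02, 0.30, 0.9], [0.05, 0.05, 0.04], [0.03, 0.25, 1.2]])
y_train = np.array([1, 0, 1])                    # 1 = oil-spill-like dark spot
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[0.025, 0.28, 0.8]]))
```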

  20. Adaptive Weibull Multiplicative Model and Multilayer Perceptron neural networks for dark-spot detection from SAR imagery.

    PubMed

    Taravat, Alireza; Oppelt, Natascha

    2014-01-01

    Oil spills represent a major threat to ocean ecosystems and their environmental status. Previous studies have shown that Synthetic Aperture Radar (SAR), as its recording is independent of clouds and weather, can be effectively used for the detection and classification of oil spills. Dark formation detection is the first and critical stage in oil-spill detection procedures. In this paper, a novel approach for automated dark-spot detection in SAR imagery is presented. A new approach combining an adaptive Weibull Multiplicative Model (WMM) and MultiLayer Perceptron (MLP) neural networks is proposed to differentiate between dark spots and the background. The results have been compared with the results of a model combining non-adaptive WMM and pulse coupled neural networks. The presented approach avoids the manual setting of the non-adaptive WMM filter parameters by developing an adaptive WMM model, which is a step towards fully automatic dark-spot detection. The proposed approach was tested on 60 ENVISAT and ERS2 images which contained dark spots. For the overall dataset, an average accuracy of 94.65% was obtained. Our experimental results demonstrate that the proposed approach is very robust and effective where the non-adaptive WMM & pulse coupled neural network (PCNN) model generates poor accuracies. PMID:25474376

  1. Modelling MEMS deformable mirrors for astronomical adaptive optics

    NASA Astrophysics Data System (ADS)

    Blain, Celia

    As of July 2012, 777 exoplanets have been discovered utilizing mainly indirect detection techniques. The direct imaging of exoplanets is the next goal for astronomers, because it will reveal the diversity of planets and planetary systems, and will give access to the exoplanet's chemical composition via spectroscopy. With this spectroscopic knowledge, astronomers will be able to know if a planet is terrestrial and, possibly, even find evidence of life. With so much potential, this branch of astronomy has also captivated the general public's attention. The direct imaging of exoplanets remains a challenging task, due to (i) the extremely high contrast between the parent star and the orbiting exoplanet and (ii) their small angular separation. For ground-based observatories, this task is made even more difficult, due to the presence of atmospheric turbulence. High Contrast Imaging (HCI) instruments have been designed to meet this challenge. HCI instruments are usually composed of a coronagraph coupled with the full on-axis corrective capability of an Extreme Adaptive Optics (ExAO) system. An efficient coronagraph separates the faint planet's light from the much brighter starlight, but the dynamic boiling speckles, created by the stellar image, make exoplanet detection impossible without the help of a wavefront correction device. The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system is a high performance HCI instrument developed at Subaru Telescope. The wavefront control system of SCExAO consists of three wavefront sensors (WFS) coupled with a 1024-actuator Micro-Electro-Mechanical-System (MEMS) deformable mirror (DM). MEMS DMs offer a large actuator density, allowing high count DMs to be deployed in small size beams. Therefore, MEMS DMs are an attractive technology for Adaptive Optics (AO) systems and are particularly well suited for HCI instruments employing ExAO technologies. SCExAO uses coherent light modulation in the focal plane introduced by the DM, for

  2. Modeling the behavioral substrates of associative learning and memory - Adaptive neural models

    NASA Technical Reports Server (NTRS)

    Lee, Chuen-Chien

    1991-01-01

    Three adaptive single-neuron models based on neural analogies of behavior modification episodes are proposed, which attempt to bridge the gap between psychology and neurophysiology. The proposed models capture the predictive nature of Pavlovian conditioning, which is essential to the theory of adaptive/learning systems. The models learn to anticipate the occurrence of a conditioned response before the presence of a reinforcing stimulus when training is complete. Furthermore, each model can find the most nonredundant and earliest predictor of reinforcement. The behavior of the models accounts for several aspects of basic animal learning phenomena in Pavlovian conditioning beyond previous related models. Computer simulations show how well the models fit empirical data from various animal learning paradigms.
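
    The three models are not specified in the abstract, but the classic single-neuron analogue of Pavlovian conditioning is a delta-rule (Rescorla-Wagner-style) update, sketched below; it illustrates how an associative weight grows until the unit anticipates the reinforcement. The stimulus schedule and learning rate are assumptions, and this is a related illustration rather than the paper's exact models.

```python
# Delta-rule (Rescorla-Wagner-style) single-neuron conditioning sketch.
import numpy as np

def rescorla_wagner(cs, us, alpha=0.1):
    """cs: (T, n_stimuli) binary conditioned stimuli, us: (T,) reinforcement."""
    w = np.zeros(cs.shape[1])          # associative strengths
    for x, r in zip(cs, us):
        v = w @ x                      # predicted reinforcement for this trial
        w += alpha * (r - v) * x       # update only the stimuli that were present
    return w

# A tone (stimulus 0) is always paired with reinforcement; a light (stimulus 1)
# never appears.  The tone's weight grows toward 1, i.e. the unit comes to
# anticipate the reinforcement whenever the tone is presented.
cs = np.tile([1, 0], (50, 1))
us = np.ones(50)
print(rescorla_wagner(cs, us))
```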

  3. A region-appearance-based adaptive variational model for 3D liver segmentation

    SciTech Connect

    Peng, Jialin; Dong, Fangfang; Chen, Yunmei; Kong, Dexing

    2014-04-15

    Purpose: Liver segmentation from computed tomography images is a challenging task owing to pixel intensity overlapping, ambiguous edges, and complex backgrounds. The authors address this problem with a novel active surface scheme, which minimizes an energy functional combining both edge- and region-based information. Methods: In this semiautomatic method, the evolving surface is principally attracted to strong edges but is facilitated by the region-based information where edge information is missing. As avoiding oversegmentation is the primary challenge, the authors take into account multiple features and appearance context information. Discriminative cues, such as multilayer consecutiveness and local organ deformation are also implicitly incorporated. Case-specific intensity and appearance constraints are included to cope with the typically large appearance variations over multiple images. Spatially adaptive balancing weights are employed to handle the nonuniformity of image features. Results: Comparisons and validations on difficult cases showed that the authors’ model can effectively discriminate the liver from adhering background tissues. Boundaries weak in gradient or with no local evidence (e.g., small edge gaps or parts with similar intensity to the background) were delineated without additional user constraint. With an average surface distance of 0.9 mm and an average volume overlap of 93.9% on the MICCAI data set, the authors’ model outperformed most state-of-the-art methods. Validations on eight volumes with different initial conditions had segmentation score variances mostly less than unity. Conclusions: The proposed model can efficiently delineate ambiguous liver edges from complex tissue backgrounds with reproducibility. Quantitative validations and comparative results demonstrate the accuracy and efficacy of the model.

  4. Matrix model and holographic baryons in the D0-D4 background

    NASA Astrophysics Data System (ADS)

    Li, Si-wen; Jia, Tuo

    2015-08-01

    We study the spectrum and short-distance two-body force of holographic baryons by the matrix model, which is derived from the Sakai-Sugimoto model in the D0-D4 background (D0-D4/D8 system). The matrix model is derived by using the standard technique in string theory, and it can describe multibaryon systems. We rederive the action of the matrix model from open string theory on the baryon vertex, which is embedded in the D0-D4/D8 system. The matrix model offers a more systematic approach to the dynamics of the baryons at short distances. In our system, we find that the matrix model describes stable baryonic states only if ζ = U_Q0^3/U_KK^3 < 2, where U_Q0^3 is related to the number density of smeared D0-branes. This result in our paper is exactly the same as some previous results studied in this system, presented in [W. Cai, C. Wu, and Z. Xiao, Phys. Rev. D 90, 106001 (2014)]. We also compute the baryon spectrum (k = 1 case) and short-distance two-body force of baryons (k = 2 case). The baryon spectrum is modified and could fit the experimental data if we choose a suitable value for ζ. And the short-distance two-body force of baryons is also modified by the appearance of smeared D0-branes from the original Sakai-Sugimoto model. If ζ > 2, we find that the baryon spectrum will be totally complex and an attractive force will appear in the short-distance interaction of baryons, which may consistently correspond to the existence of unstable baryonic states.

  5. A photoviscoplastic model for photoactivated covalent adaptive networks

    NASA Astrophysics Data System (ADS)

    Ma, Jing; Mu, Xiaoming; Bowman, Christopher N.; Sun, Youyi; Dunn, Martin L.; Qi, H. Jerry; Fang, Daining

    2014-10-01

    Light activated polymers (LAPs) are a class of contemporary materials that when irradiated with light respond with mechanical deformation. Among the different molecular mechanisms of photoactuation, here we study radical induced bond exchange reactions (BERs) that alter macromolecular chains through an addition-fragmentation process where a free chain whose active end group attaches then breaks a network chain. Thus the BER yields a polymer with a covalently adaptable network. When a LAP sample is loaded, the macroscopic consequence of BERs is stress relaxation and plastic deformation. Furthermore, if light penetration through the sample is nonuniform, resulting in nonuniform stress relaxation, the sample will deform after unloading in order to achieve equilibrium. In the past, this light activation mechanism was modeled as a phase evolution process where chain addition-fragmentation process was considered as a phase transformation between stressed phases and newly-born phases that are undeformed and stress free at birth. Such a modeling scheme describes the underlying physics with reasonable fidelity but is computationally expensive. In this paper, we propose a new approach where the BER induced macromolecular network alteration is modeled as a viscoplastic deformation process, based on the observation that stress relaxation due to light irradiation is a time-dependent process similar to that in viscoelastic solids with an irrecoverable deformation after light irradiation. This modeling concept is further translated into a finite deformation photomechanical constitutive model. The rheological representation of this model is a photoviscoplastic element placed in series with a standard linear solid model in viscoelasticity. A two-step iterative implicit scheme is developed for time integration of the two time-dependent elements. We carry out a series of experiments to determine material parameters in our model as well as to validate the performance of the model in

  6. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles.

    PubMed

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs, capturing surrounding information using in-vehicle and smartphone sensors and later publishing it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems. PMID:26389905

  7. Workload Model Based Dynamic Adaptation of Social Internet of Vehicles

    PubMed Central

    Alam, Kazi Masudul; Saini, Mukesh; El Saddik, Abdulmotaleb

    2015-01-01

    Social Internet of Things (SIoT) has gained much interest among different research groups in recent times. As a key member of a smart city, the vehicular domain of SIoT (SIoV) is also undergoing steep development. In the SIoV, vehicles work as sensor hubs, capturing surrounding information using in-vehicle and smartphone sensors and later publishing it for consumers. A cloud-centric cyber-physical system better describes the SIoV model, where the physical sensing-actuation process affects cloud-based service sharing or computation in a feedback loop, or vice versa. The cyber-based social relationship abstraction enables distributed, easily navigable and scalable peer-to-peer communication among the SIoV subsystems. These cyber-physical interactions involve a huge amount of data and it is difficult to form a real instance of the system to test the feasibility of SIoV applications. In this paper, we propose an analytical model to measure the workloads of various subsystems involved in the SIoV process. We present the basic model which is further extended to incorporate complex scenarios. We provide extensive simulation results for different parameter settings of the SIoV system. The findings of the analyses are further used to design example adaptation strategies for the SIoV subsystems which would foster deployment of intelligent transport systems. PMID:26389905

  8. Modeling high-resolution broadband discourse in complex adaptive systems.

    PubMed

    Dooley, Kevin J; Corman, Steven R; McPhee, Robert D; Kuhn, Timothy

    2003-01-01

    Numerous researchers and practitioners have turned to complexity science to better understand human systems. Simulation can be used to observe how the microlevel actions of many human agents create emergent structures and novel behavior in complex adaptive systems. In such simulations, communication between human agents is often modeled simply as message passing, where a message or text may transfer data, trigger action, or inform context. Human communication involves more than the transmission of texts and messages, however. Such a perspective is likely to limit the effectiveness and insight that we can gain from simulations, and complexity science itself. In this paper, we propose a model of how close analysis of discursive processes between individuals (high-resolution), which occur simultaneously across a human system (broadband), dynamically evolve. We propose six different processes that describe how evolutionary variation can occur in texts: recontextualization, pruning, chunking, merging, appropriation, and mutation. These process models can facilitate the simulation of high-resolution, broadband discourse processes, and can aid in the analysis of data from such processes. Examples are used to illustrate each process. We make the tentative suggestion that discourse may evolve to the "edge of chaos." We conclude with a discussion concerning how high-resolution, broadband discourse data could actually be collected. PMID:12876447

  9. A Nonlinear Dynamic Inversion Predictor-Based Model Reference Adaptive Controller for a Generic Transport Model

    NASA Technical Reports Server (NTRS)

    Campbell, Stefan F.; Kaneshige, John T.

    2010-01-01

    Presented here is a Predictor-Based Model Reference Adaptive Control (PMRAC) architecture for a generic transport aircraft. At its core, this architecture features a three-axis, non-linear, dynamic-inversion controller. Command inputs for this baseline controller are provided by pilot roll-rate, pitch-rate, and sideslip commands. This paper will first thoroughly present the baseline controller, followed by a description of the PMRAC adaptive augmentation to this control system. Results are presented via a full-scale, nonlinear simulation of NASA's Generic Transport Model (GTM).

  10. Thermal-chemical Mantle Convection Models With Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Leng, W.; Zhong, S.

    2008-12-01

    In numerical modeling of mantle convection, resolution is often crucial for resolving small-scale features. New adaptive mesh refinement (AMR) techniques allow local mesh refinement wherever high resolution is needed, while leaving other regions with relatively low resolution. Both computational efficiency for large-scale simulation and accuracy for small-scale features can thus be achieved with AMR. Based on the octree data structure [Tu et al. 2005], we implement AMR techniques in 2-D mantle convection models. For pure thermal convection models, benchmark tests show that our code can achieve high accuracy with a relatively small number of elements both for isoviscous cases (i.e. 7492 AMR elements vs. 65536 uniform elements) and for temperature-dependent viscosity cases (i.e. 14620 AMR elements vs. 65536 uniform elements). We further implement a tracer method in the models for simulating thermal-chemical convection. By appropriately adding and removing tracers according to the refinement of the meshes, our code successfully reproduces the benchmark results in van Keken et al. [1997] with far fewer elements and tracers than uniform-mesh models (i.e. 7552 AMR elements vs. 16384 uniform elements, and ~83000 tracers vs. ~410000 tracers). The boundaries of the chemical piles in our AMR code can be easily refined to the scales of a few kilometers for the Earth's mantle and the tracers are concentrated near the chemical boundaries to precisely trace the evolution of the boundaries. Our AMR code is thus well suited to studying thermal-chemical convection problems that need high resolution to resolve the evolution of chemical boundaries, such as the entrainment problems [Sleep, 1988].

  11. A regional adaptive and assimilative three-dimensional ionospheric model

    NASA Astrophysics Data System (ADS)

    Sabbagh, Dario; Scotto, Carlo; Sgrigna, Vittorio

    2016-03-01

    A regional adaptive and assimilative three-dimensional (3D) ionospheric model is proposed. It is able to ingest real-time data from different ionosondes, providing the ionospheric bottomside plasma frequency fp over the Italian area. The model is constructed on the basis of empirical values for a set of ionospheric parameters Pi[base] over the considered region, some of which have an assigned variation ΔPi. The values for the ionospheric parameters actually observed at a given time at a given site will thus be Pi = Pi[base] + ΔPi. These Pi values are used as input for an electron density N(h) profiler. The latter is derived from the Advanced Ionospheric Profiler (AIP), which is software used by Autoscala as part of the process of automatic inversion of ionogram traces. The 3D model ingests ionosonde data by minimizing the root-mean-square deviation between the observed and modeled values of fp(h) profiles obtained from the associated N(h) values at the points where observations are available. The ΔPi values are obtained from this minimization procedure. The 3D model is tested using data collected at the ionospheric stations of Rome (41.8N, 12.5E) and Gibilmanna (37.9N, 14.0E), and then comparing the results against data from the ionospheric station of San Vito dei Normanni (40.6N, 18.0E). The software developed is able to produce maps of the critical frequencies foF2 and foF1, and of fp at a fixed altitude, with transverse and longitudinal cross-sections of the bottomside ionosphere in a color scale. fp(h) and associated simulated ordinary ionogram traces can easily be produced for any geographic location within the Italian region. fp values within the volume in question can also be provided.
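
    The assimilation step described above amounts to a small least-squares problem: find the offsets ΔPi that minimize the misfit between modeled and observed fp(h) profiles. The sketch below shows that step with a toy Chapman-like profiler standing in for the AIP profiler; the parameterization, station values, and height grid are assumptions.

```python
# Least-squares adjustment of parameter offsets dP against ionosonde data.
import numpy as np
from scipy.optimize import least_squares

def fp_profile(h, params):
    """Toy Chapman-like bottomside profile; a stand-in for the AIP profiler."""
    foF2, hmF2, scale = params
    z = (h - hmF2) / scale
    return foF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

def assimilate(h_obs, fp_obs, p_base):
    residual = lambda dp: fp_profile(h_obs, p_base + dp) - fp_obs
    return least_squares(residual, x0=np.zeros(len(p_base))).x   # fitted dP

h_obs = np.linspace(150.0, 300.0, 30)              # km, bottomside heights
p_true = np.array([7.2, 260.0, 45.0])              # hypothetical "observed" state
fp_obs = fp_profile(h_obs, p_true)
p_base = np.array([6.5, 250.0, 50.0])              # empirical background values
print(assimilate(h_obs, fp_obs, p_base))           # recovers p_true - p_base
```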

  12. An Adaptive Mixing Depth Model for an Industrialized Shoreline Area.

    NASA Astrophysics Data System (ADS)

    Dunk, Richard H.

    1993-01-01

    Internal boundary layer characteristics are often overlooked in atmospheric diffusion modeling applications but are essential for accurate air quality assessment. This study focuses on a unique air pollution problem that is partially resolved by representative internal boundary layer description and prediction. Emissions from a secondary non-ferrous smelter located adjacent to a large waterway, which is situated near a major coastal zone, became suspect in causing adverse air quality. In an effort to prove or disprove this allegation, "accepted" air quality modeling was performed. Predicted downwind concentrations indicated that the smelter plume was not responsible for causing regulatory standards to be exceeded. However, chronic community complaints continued to be directed toward the smelter facility. Further investigation into the problem revealed that complaint occurrences coincided with onshore southeasterly flows. Internal boundary layer development during onshore flow was assumed to produce a mixing depth conducive to plume trapping or fumigation. The preceding premise led to the utilization of estimated internal boundary layer depths for dispersion model input in an attempt to improve prediction accuracy. Monitored downwind ambient air concentrations showed that model predictions were still substantially lower than actual values. After analyzing the monitored values and comparing them with actual plume observations conducted during several onshore flow occurrences, the author hypothesized that the waterway could cause a damping effect on internal boundary layer development. This effective decrease in mixing depths would explain the abnormally high ambient air concentrations experienced during onshore flows. Therefore, a full-scale field study was designed and implemented to study the waterway's influence on mixing depth characteristics. The resultant data were compiled and formulated into an area-specific mixing depth model that can be adapted to

  13. Extended adiabatic blast waves and a model of the soft X-ray background. [interstellar matter]

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Anderson, P. R.

    1981-01-01

    An analytical approximation is generated which follows the development of an adiabatic spherical blast wave in a homogeneous ambient medium of finite pressure. An analytical approximation is also presented for the electron temperature distribution resulting from Coulomb collisional heating. The dynamical, thermal, ionization, and spectral structures are calculated for blast waves of energy E_0 = 5 × 10^50 ergs in a hot low-density interstellar environment. A formula is presented for estimating the luminosity evolution of such explosions. The B and C bands of the soft X-ray background, it is shown, are reproduced by such a model explosion if the ambient density is about .000004 cm^-3, the blast radius is roughly 100 pc, and the solar system is located inside the shocked region. Evolution in a pre-existing cavity with a strong density gradient may, it is suggested, remove both the M band and OVI discrepancies.
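
    For orientation, the sketch below evaluates the standard adiabatic (Sedov-Taylor) blast-wave scaling R ≈ ξ(E t²/ρ)^(1/5) for an energy of 5 × 10^50 erg, the value quoted in the abstract; the ambient density, the constant ξ ≈ 1.15 (γ = 5/3), and the ages are illustrative assumptions, and the paper's analytical approximation additionally accounts for finite ambient pressure.

```python
# Adiabatic blast-wave radius R(t) = xi * (E t^2 / rho)^(1/5), illustrative only.
import numpy as np

M_H = 1.67e-24      # g, hydrogen mass
PC = 3.086e18       # cm per parsec
YR = 3.156e7        # s per year

def sedov_radius_pc(E_erg, n_cm3, t_yr, xi=1.15):
    rho = n_cm3 * M_H                                  # ambient mass density
    return xi * (E_erg * (t_yr * YR) ** 2 / rho) ** 0.2 / PC

# How long a 5e50 erg blast in a hypothetical hot, tenuous medium (here a
# placeholder density of 4e-3 cm^-3) takes to approach a ~100 pc radius:
for t in (3e4, 1e5, 3e5):                              # years
    print(f"t = {t:.0e} yr  ->  R = {sedov_radius_pc(5e50, 4e-3, t):.0f} pc")
```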

  14. Design optimization and background modeling of the HEX experiment on Chandrayaan-I

    NASA Astrophysics Data System (ADS)

    Sudhakar, Manju; Sreekumar, P.

    2012-11-01

    Spacecraft and their subsystem components are subject to a very hazardous radiation environment in both near-Earth and deep space orbits. Knowledge of the effects of this high energy particle and electromagnetic radiation is essential in designing sensors, electronic circuits and living habitats for humans in near Earth orbit, en route to and on the Moon and Mars. This paper discusses the use of Monte Carlo simulations to optimize system design, radiation source modeling, and determination of background in sensors due to galactic cosmic rays and radiation from the Moon. The results demonstrate the use of Monte Carlo particle transport toolkits to predict secondary production, determine dose rates in space and design required shielding geometry.

  15. Durability-Based Design Guide for an Automotive Structural Composite: Part 2. Background Data and Models

    SciTech Connect

    Corum, J.M.; Battiste, R.L.; Brinkman, C.R.; Ren, W.; Ruggles, M.B.; Weitsman, Y.J.; Yahr, G.T.

    1998-02-01

    This background report is a companion to the document entitled ''Durability-Based Design Criteria for an Automotive Structural Composite: Part 1. Design Rules'' (ORNL-6930). The rules and the supporting material characterization and modeling efforts described here are the result of a U.S. Department of Energy Advanced Automotive Materials project entitled ''Durability of Lightweight Composite Structures.'' The overall goal of the project is to develop experimentally based, durability-driven design guidelines for automotive structural composites. The project is closely coordinated with the Automotive Composites Consortium (ACC). The initial reference material addressed by the rules and this background report was chosen and supplied by ACC. The material is a structural reaction injection-molded isocyanurate (urethane), reinforced with continuous-strand, swirl-mat, E-glass fibers. This report consists of 16 position papers, each summarizing the observations and results of a key area of investigation carried out to provide the basis for the durability-based design guide. The durability issues addressed include the effects of cyclic and sustained loadings, temperature, automotive fluids, vibrations, and low-energy impacts (e.g., tool drops and roadway kickups) on deformation, strength, and stiffness. The position papers cover these durability issues. Topics include (1) tensile, compressive, shear, and flexural properties; (2) creep and creep rupture; (3) cyclic fatigue; (4) the effects of temperature, environment, and prior loadings; (5) a multiaxial strength criterion; (6) impact damage and damage tolerance design; (7) stress concentrations; (8) a damage-based predictive model for time-dependent deformations; (9) confirmatory subscale component tests; and (10) damage development and growth observations.

  16. Rao-Blackwellization for Adaptive Gaussian Sum Nonlinear Model Propagation

    NASA Technical Reports Server (NTRS)

    Semper, Sean R.; Crassidis, John L.; George, Jemin; Mukherjee, Siddharth; Singla, Puneet

    2015-01-01

    When dealing with imperfect data and general models of dynamic systems, the best estimate is always sought in the presence of uncertainty or unknown parameters. In many cases, as the first attempt, the Extended Kalman filter (EKF) provides sufficient solutions to handling issues arising from nonlinear and non-Gaussian estimation problems. But these issues may lead to unacceptable performance and even divergence. In order to accurately capture the nonlinearities of most real-world dynamic systems, advanced filtering methods have been created to reduce filter divergence while enhancing performance. Approaches such as Gaussian sum filtering, grid-based Bayesian methods, and particle filters are well-known examples of advanced methods used to represent and recursively reproduce an approximation to the state probability density function (pdf). Some of these filtering methods were conceptually developed years before their widespread uses were realized. Advanced nonlinear filtering methods currently benefit from the computing advancements in computational speeds, memory, and parallel processing. Grid based methods, multiple-model approaches and Gaussian sum filtering are numerical solutions that take advantage of different state coordinates or multiple-model methods to reduce the number of approximations used. Choosing an efficient grid is very difficult for multi-dimensional state spaces, and oftentimes expensive computations must be done at each point. For the original Gaussian sum filter, a weighted sum of Gaussian density functions approximates the pdf but suffers at the update step for the individual component weight selections. In order to improve upon the original Gaussian sum filter, Ref. [2] introduces a weight update approach at the filter propagation stage instead of the measurement update stage. This weight update is performed by minimizing the integral square difference between the true forecast pdf and its Gaussian sum approximation. By adaptively updating
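
    The weight-update idea can be illustrated with a small numerical stand-in: choose Gaussian sum weights that minimize the squared difference to a known forecast pdf on a grid, with nonnegativity and normalization enforced. The reference pdf, component placement, and grid evaluation below are assumptions; the cited approach minimizes the integral square difference analytically during propagation.

```python
# Fit Gaussian sum weights to a forecast pdf on a grid (nonnegative LS + renorm).
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

x = np.linspace(-6, 6, 400)
forecast = 0.7 * norm.pdf(x, -1.0, 0.8) + 0.3 * norm.pdf(x, 2.0, 1.2)

means = np.linspace(-4, 4, 9)                       # fixed component means
A = np.stack([norm.pdf(x, m, 1.0) for m in means], axis=1)

w, _ = nnls(A, forecast)                            # nonnegative least squares
w /= w.sum()                                        # keep a valid (unit-mass) pdf
print("weights:", np.round(w, 3))
print("max abs error:", np.abs(A @ w - forecast).max())
```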

  17. Modeling the distribution of Mg II absorbers around galaxies using background galaxies and quasars

    SciTech Connect

    Bordoloi, R.; Lilly, S. J.; Kacprzak, G. G.; Churchill, C. W.

    2014-04-01

    We present joint constraints on the distribution of Mg II absorption around high redshift galaxies obtained by combining two orthogonal probes, the integrated Mg II absorption seen in stacked background galaxy spectra and the distribution of parent galaxies of individual strong Mg II systems as seen in the spectra of background quasars. We present a suite of models that can be used to predict, for different two- and three-dimensional distributions, how the projected Mg II absorption will depend on a galaxy's apparent inclination, the impact parameter b and the azimuthal angle between the projected vector to the line of sight and the projected minor axis. In general, we find that variations in the absorption strength with azimuthal angles provide much stronger constraints on the intrinsic geometry of the Mg II absorption than the dependence on the inclination of the galaxies. In addition to the clear azimuthal dependence in the integrated Mg II absorption that we reported earlier in Bordoloi et al., we show that strong equivalent width Mg II absorbers (W_r(2796) ≥ 0.3 Å) are also asymmetrically distributed in azimuth around their host galaxies: 72% of the absorbers in Kacprzak et al., and 100% of the close-in absorbers within 35 kpc of the center of their host galaxies, are located within 50° of the host galaxy's projected semi minor axis. It is shown that either composite models consisting of a simple bipolar component plus a spherical or disk component, or a single highly softened bipolar distribution, can well represent the azimuthal dependencies observed in both the stacked spectrum and quasar absorption-line data sets within 40 kpc. Simultaneously fitting both data sets, we find that in the composite model the bipolar cone has an opening angle of ∼100° (i.e., confined to within 50° of the disk axis) and contains about two-thirds of the total Mg II absorption in the system. The single softened cone model has an exponential fall off with azimuthal

  18. Adaptive Flight Control Design with Optimal Control Modification on an F-18 Aircraft Model

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Nguyen, Nhan T.; Griffin, Brian J.

    2010-01-01

    In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation is referred to as the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly; however, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring high-frequency oscillations as with the standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in a stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient robustness. A damping term (ν) is added in the modification to increase damping as needed. Simulations were conducted on a damaged F-18 aircraft (McDonnell Douglas, now The Boeing Company, Chicago, Illinois) with both the standard baseline dynamic inversion controller and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model.
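
    The sketch below is a generic scalar model-reference adaptive control loop with a simple leakage (sigma-modification style) damping term, shown only to illustrate how a damping term enters an adaptive law alongside the adaptive gain; it is not the optimal control modification described above, whose extra term is derived from the L2 tracking-error optimality condition. Plant and gain values are arbitrary assumptions.

```python
# Scalar MRAC with a leakage-type damping term (not the paper's modification).
import numpy as np

a, b = 1.0, 2.0               # plant x' = a*x + b*u (unknown to the controller)
am, bm = -4.0, 4.0            # stable reference model xm' = am*xm + bm*r
gamma, sigma = 2.0, 0.05      # adaptive gain and damping (leakage) coefficient
dt, T = 1e-3, 20.0

x = xm = 0.0
kx = kr = 0.0                 # adaptive feedback and feedforward gains
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0          # square-wave reference
    u = kx * x + kr * r
    e = x - xm                                    # tracking error
    # Lyapunov-style updates with leakage damping; sign(b) assumed known.
    kx += dt * (-gamma * (x * e * np.sign(b) + sigma * kx))
    kr += dt * (-gamma * (r * e * np.sign(b) + sigma * kr))
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)

print("gains:", kx, kr, "final tracking error:", e)
```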

  19. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.

  20. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Technical Reports Server (NTRS)

    Lam, Quang; Ray, Surendra N.

    1995-01-01

    Attitude determination has long been a topic of active research and perhaps a lasting interest for spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent need and requirements back in the 70's. One reason for the poor performance associated with the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements, and no attempt was made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on recent developments in the interactive multiple model design framework to handle the unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics of each submodel (per operating condition) as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance of the proposed advanced system identifier, it is further reinforced by a learning system implemented (in the outer loop) using neural networks to identify other unknown
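
    A minimal version of the bank-of-Kalman-filters idea is sketched below: several filters run in parallel, each assuming a different process-noise level, and their outputs are blended by the likelihood each filter assigns to the current measurement. The scalar random-walk model and candidate noise levels are assumptions chosen for brevity, not the spacecraft attitude formulation.

```python
# Bank of Kalman filters blended by measurement likelihood (scalar illustration).
import numpy as np

class ScalarKF:
    def __init__(self, q, r):
        self.x, self.P, self.q, self.r = 0.0, 1.0, q, r
    def step(self, z):
        self.P += self.q                       # predict (random-walk state)
        s = self.P + self.r                    # innovation variance
        innov = z - self.x
        gain = self.P / s
        self.x += gain * innov                 # measurement update
        self.P *= (1.0 - gain)
        return np.exp(-0.5 * innov**2 / s) / np.sqrt(2 * np.pi * s)  # likelihood

bank = [ScalarKF(q, r=0.5) for q in (1e-4, 1e-2, 1.0)]  # candidate noise levels
w = np.ones(len(bank)) / len(bank)                       # model probabilities

rng = np.random.default_rng(0)
truth = 0.0
for _ in range(200):
    truth += rng.normal(0.0, 0.1)             # true process noise matches q=1e-2
    z = truth + rng.normal(0.0, np.sqrt(0.5))
    w *= np.array([f.step(z) for f in bank])
    w /= w.sum()                              # Bayes update of model weights

fused = sum(wi * f.x for wi, f in zip(w, bank))
print("model probabilities:", np.round(w, 3), "fused estimate:", fused)
```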

  1. Attitude determination using an adaptive multiple model filtering Scheme

    NASA Astrophysics Data System (ADS)

    Lam, Quang; Ray, Surendra N.

    1995-05-01

    Attitude determination has long been a topic of active research and perhaps a lasting interest for spacecraft system designers. Its role is to provide a reference for controls such as pointing the directional antennas or solar panels, stabilizing the spacecraft, or maneuvering the spacecraft to a new orbit. The Least Squares Estimation (LSE) technique was utilized to provide attitude determination for Nimbus 6 and Nimbus G. Despite its poor performance (in terms of estimation accuracy), LSE was considered an effective and practical approach to meet the urgent need and requirements back in the 70's. One reason for the poor performance associated with the LSE scheme is the lack of dynamic filtering or 'compensation'. In other words, the scheme is based totally on the measurements, and no attempt was made to model the dynamic equations of motion of the spacecraft. We propose an adaptive filtering approach which employs a bank of Kalman filters to perform robust attitude estimation. The proposed approach, whose architecture is depicted, is essentially based on recent developments in the interactive multiple model design framework to handle the unknown system noise characteristics or statistics. The concept fundamentally employs a bank of Kalman filters or submodels; instead of using fixed values for the system noise statistics of each submodel (per operating condition) as the traditional multiple model approach does, we use an on-line dynamic system noise identifier to 'identify' the system noise level (statistics) and update the filter noise statistics using 'live' information from the sensor model. The advanced noise identifier, whose architecture is also shown, is implemented using an advanced system identifier. To ensure robust performance of the proposed advanced system identifier, it is further reinforced by a learning system implemented (in the outer loop) using neural networks to identify other unknown

  2. MRSA model of learning and adaptation: a qualitative study among the general public

    PubMed Central

    2012-01-01

    Background: More people in the US now die from Methicillin Resistant Staphylococcus aureus (MRSA) infections than from HIV/AIDS. Often acquired in healthcare facilities or during healthcare procedures, the extremely high incidence of MRSA infections and the dangerously low levels of literacy regarding antibiotic resistance in the general public are on a collision course. Traditional medical approaches to infection control and the conventional attitude healthcare practitioners adopt toward public education are no longer adequate to avoid this collision. This study helps us understand how people acquire and process new information and then adapt behaviours based on learning. Methods: Using constructivist theory, semi-structured face-to-face and phone interviews were conducted to gather pertinent data. This allowed participants to tell their stories so their experiences could deepen our understanding of this crucial health issue. Interview transcripts were analysed using grounded theory and sensitizing concepts. Results: Our findings were classified into two main categories, each of which in turn included three subthemes. First, in the category of Learning, we identified how individuals used their Experiences with MRSA, to answer the questions: What was learned? and, How did learning occur? The second category, Adaptation, gave us insights into Self-reliance, Reliance on others, and Reflections on the MRSA journey. Conclusions: This study underscores the critical importance of educational programs for patients, and improved continuing education for healthcare providers. Five specific results of this study can reduce the vacuum that currently exists between the knowledge and information available to healthcare professionals, and how that information is conveyed to the public. These points include: 1) a common model of MRSA learning and adaptation; 2) the self-directed nature of adult learning; 3) the focus on general MRSA information, care and prevention, and antibiotic

  3. Pharmacokinetic Modeling of Manganese III. Physiological Approaches Accounting for Background and Tracer Kinetics

    SciTech Connect

    Teeguarden, Justin G.; Gearhart, Jeffrey; Clewell, III, H. J.; Covington, Tammie R.; Nong, Andy; Anderson, Melvin E.

    2007-01-01

    Manganese (Mn) is an essential nutrient. Mn deficiency is associated with altered lipid (Kawano et al. 1987) and carbohydrate metabolism (Baly et al. 1984; Baly et al. 1985), abnormal skeletal cartilage development (Keen et al. 2000), decreased reproductive capacity, and brain dysfunction. Occupational and accidental inhalation exposures to aerosols containing high concentrations of Mn produce neurological symptoms with Parkinson-like characteristics in workers. At present, there is also concern about use of the manganese-containing compound, methylcyclopentadienyl manganese tricarbonyl (MMT), in unleaded gasoline as an octane enhancer. Combustion of MMT produces aerosols containing a mixture of manganese salts (Lynam et al. 1999). These Mn particulates may be inhaled at low concentrations by the general public in areas using MMT. Risk assessments for essential elements need to acknowledge that risks occur with either excesses or deficiencies and the presence of significant amounts of these nutrients in the body even in the absence of any exogenous exposures. With Mn there is an added complication, i.e., the primary risk is associated with inhalation while Mn is an essential dietary nutrient. Exposure standards for inhaled Mn will need to consider the substantial background uptake from normal ingestion. Andersen et al. (1999) suggested a generic approach for essential nutrient risk assessment. An acceptable exposure limit could be based on some ‘tolerable’ change in tissue concentration in normal and exposed individuals, i.e., a change somewhere from 10 to 25 % of the individual variation in tissue concentration seen in a large human population. A reliable multi-route, multi-species pharmacokinetic model would be necessary for the implementation of this type of dosimetry-based risk assessment approach for Mn. Physiologically-based pharmacokinetic (PBPK) models for various xenobiotics have proven valuable in contributing to a variety of chemical specific risk

  4. Use of Time Information in Models behind Adaptive System for Building Fluency in Mathematics

    ERIC Educational Resources Information Center

    Rihák, Jirí

    2015-01-01

    In this work we introduce a system for adaptive practice of the foundations of mathematics. Adaptivity of the system is primarily provided by the selection of suitable tasks, which uses information from a domain model and a student model. The domain model does not use prerequisites but instead splits skills into more concrete sub-skills. The student…
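
    The abstract does not spell out how time information enters the models, so the sketch below is only one plausible illustration: an Elo-style skill/difficulty update in which the answer score is discounted for slow correct answers. The scoring function, target time, and update constant are assumptions.

```python
# Elo-style skill/difficulty update with a response-time-discounted answer score.
import math

def time_discounted_score(correct, response_time, target_time=15.0):
    """1 for fast correct answers, decaying toward 0.5 for slow ones; 0 if wrong."""
    if not correct:
        return 0.0
    return 0.5 + 0.5 * min(1.0, target_time / response_time)

def elo_update(skill, difficulty, correct, response_time, k=0.4):
    expected = 1.0 / (1.0 + math.exp(-(skill - difficulty)))
    score = time_discounted_score(correct, response_time)
    skill += k * (score - expected)
    difficulty -= k * (score - expected)
    return skill, difficulty

# A correct but slow answer moves the skill estimate up less than a fast one.
print(elo_update(0.0, 0.0, correct=True, response_time=30.0))
print(elo_update(0.0, 0.0, correct=True, response_time=5.0))
```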

  5. Adaptive invasive species distribution models: A framework for modeling incipient invasions

    USGS Publications Warehouse

    Uden, Daniel R.; Allen, Craig R.; Angeler, David G.; Corral, Lucia; Fricke, Kent A.

    2015-01-01

    The utilization of species distribution model(s) (SDM) for approximating, explaining, and predicting changes in species’ geographic locations is increasingly promoted for proactive ecological management. Although frameworks for modeling non-invasive species distributions are relatively well developed, their counterparts for invasive species—which may not be at equilibrium within recipient environments and often exhibit rapid transformations—are lacking. Additionally, adaptive ecological management strategies address the causes and effects of biological invasions and other complex issues in social-ecological systems. We conducted a review of biological invasions, species distribution models, and adaptive practices in ecological management, and developed a framework for adaptive, niche-based, invasive species distribution model (iSDM) development and utilization. This iterative, 10-step framework promotes consistency and transparency in iSDM development, allows for changes in invasive drivers and filters, integrates mechanistic and correlative modeling techniques, balances the avoidance of type 1 and type 2 errors in predictions, encourages the linking of monitoring and management actions, and facilitates incremental improvements in models and management across space, time, and institutional boundaries. These improvements are useful for advancing coordinated invasive species modeling, management and monitoring from local scales to the regional, continental and global scales at which biological invasions occur and harm native ecosystems and economies, as well as for anticipating and responding to biological invasions under continuing global change.

  6. Modeling estimates of the effect of acid rain on background radiation dose.

    PubMed Central

    Sheppard, S C; Sheppard, M I

    1988-01-01

    Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the "background" radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable. PMID:3203639

  7. Towards a High Temporal Frequency Grass Canopy Thermal IR Model for Background Signatures

    NASA Technical Reports Server (NTRS)

    Ballard, Jerrell R., Jr.; Smith, James A.; Koenig, George G.

    2004-01-01

    In this paper, we present our first results towards understanding high temporal frequency thermal infrared response from a dense plant canopy and compare the application of our model, driven both by slowly varying, time-averaged meteorological conditions and by high frequency measurements of local and within canopy profiles of relative humidity and wind speed, to high frequency thermal infrared observations. Previously, we have employed three-dimensional ray tracing to compute the intercepted and scattered radiation fluxes and for final scene rendering. For the turbulent fluxes, we employed simple resistance models for latent and sensible heat with one-dimensional profiles of relative humidity and wind speed. Our modeling approach has proven successful in capturing the directional and diurnal variation in background thermal infrared signatures. We hypothesize that at these scales, where the model is typically driven by time-averaged, local meteorological conditions, the primary source of thermal variance arises from the spatial distribution of sunlit and shaded foliage elements within the canopy and the associated radiative interactions. In recent experiments, we have begun to focus on the high temporal frequency response of plant canopies in the thermal infrared at 1 second to 5 minute intervals. At these scales, we hypothesize turbulent mixing plays a more dominant role. Our results indicate that in the high frequency domain, the vertical profile of temperature change is tightly coupled to the within-canopy wind speed. In the results reported here, the canopy cools from the top down with increased wind velocities and heats from the bottom up at low wind velocities.

  8. Modeling estimates of the effect of acid rain on background radiation dose.

    PubMed

    Sheppard, S C; Sheppard, M I

    1988-06-01

    Acid rain causes accelerated mobilization of many materials in soils. Natural and anthropogenic radionuclides, especially 226Ra and 137Cs, are among these materials. Okamoto is apparently the only researcher to date who has attempted to quantify the effect of acid rain on the "background" radiation dose to man. He estimated an increase in dose by a factor of 1.3 following a decrease in soil pH of 1 unit. We reviewed literature that described the effects of changes in pH on mobility and plant uptake of Ra and Cs. Generally, a decrease in soil pH by 1 unit will increase mobility and plant uptake by factors of 2 to 7. Thus, Okamoto's dose estimate may be too low. We applied several simulation models to confirm Okamoto's ideas, with most emphasis on an atmospherically driven soil model that predicts water and nuclide flow through a soil profile. We modeled a typical, acid-rain sensitive soil using meteorological data from Geraldton, Ontario. The results, within the range of effects on the soil expected from acidification, showed essentially direct proportionality between the mobility of the nuclides and dose. This supports some of the assumptions invoked by Okamoto. We conclude that a decrease in pH of 1 unit may increase the mobility of Ra and Cs by a factor of 2 or more. Our models predict that this will lead to similar increases in plant uptake and radiological dose to man. Although health effects following such a small increase in dose have not been statistically demonstrated, any increase in dose is probably undesirable. PMID:3203639

  9. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation to low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), using TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large
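
A minimal sketch of the covariance-matching idea follows, assuming the residual covariance can be written as a linear combination of known model-error and measurement-error "signature" matrices; the matrices and data here are synthetic placeholders, not the setup used in the thesis.

```python
# Toy covariance-matching step: fit S_hat ~ a*A + b*B by least squares, where
# A and B are assumed error-signature matrices and S_hat is the empirical
# covariance of the model-data residuals.
import numpy as np

def covariance_matching(residuals, A, B):
    """residuals: (n_times, n_obs); A, B: (n_obs, n_obs) signature matrices."""
    S_hat = np.cov(residuals, rowvar=False)            # empirical residual covariance
    design = np.column_stack([A.ravel(), B.ravel()])   # flatten the matrix equation
    coeffs, *_ = np.linalg.lstsq(design, S_hat.ravel(), rcond=None)
    return coeffs                                      # estimated (a, b) scalings

# Synthetic check: recover the scalings used to generate the residuals.
rng = np.random.default_rng(0)
n_obs = 5
A = np.eye(n_obs)                                      # assumed model-error signature
B = 0.5 * np.eye(n_obs)                                # assumed measurement-error signature
resid = rng.multivariate_normal(np.zeros(n_obs), 2.0 * A + 1.0 * B, size=2000)
print(covariance_matching(resid, A, B))                # roughly [2.0, 1.0]
```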

  10. Adapting the transtheoretical model of change to the bereavement process.

    PubMed

    Calderwood, Kimberly A

    2011-04-01

    Theorists currently believe that bereaved people undergo some transformation of self rather than returning to their original state. To advance our understanding of this process, this article presents an adaptation of Prochaska and DiClemente's transtheoretical model of change as it could be applied to the journey that bereaved individuals experience. This theory is unique because it addresses attitudes, intentions, and behavioral processes at each stage; it allows for a focus on a broader range of emotions than just anger and depression; it allows for the recognition of two periods of regression during the bereavement process; and it adds a maintenance stage, which other theories lack. This theory can benefit bereaved individuals both directly and through increased awareness among counselors, family, friends, employers, and society at large. This theory may also be used as a tool for bereavement programs to consider whether they are meeting clients' needs throughout the transformative change process of bereavement rather than focusing only on the initial stages characterized by intense emotion. PMID:21553574

  11. Adaptable Information Models in the Global Change Information System

    NASA Astrophysics Data System (ADS)

    Duggan, B.; Buddenberg, A.; Aulenbach, S.; Wolfe, R.; Goldstein, J.

    2014-12-01

    The US Global Change Research Program has sponsored the creation of the Global Change Information System (GCIS) to provide a web based source of accessible, usable, and timely information about climate and global change for use by scientists, decision makers, and the public. The GCIS played multiple roles during the assembly and release of the Third National Climate Assessment. It provided human and programmable interfaces, relational and semantic representations of information, and discrete identifiers for various types of resources, which could then be manipulated by a distributed team with a wide range of specialties. The GCIS also served as a scalable backend for the web based version of the report. In this talk, we discuss the infrastructure decisions made during the design and deployment of the GCIS, as well as ongoing work to adapt to new types of information. Both a constrained relational database and an open ended triple store are used to ensure data integrity while maintaining fluidity. Using natural primary keys allows identifiers to propagate through both models. Changing identifiers are accommodated through fine grained auditing and explicit mappings to external lexicons. A practical RESTful API is used whose endpoints are also URIs in an ontology. Both the relational schema and the ontology are malleable, and stability is ensured through test driven development and continuous integration testing using modern open source techniques. Content is also validated through continuous testing techniques. A high degree of scalability is achieved through caching.

  12. Comparing and evaluating model estimates of background ozone in surface air over North America

    NASA Astrophysics Data System (ADS)

    Oberman, J.; Fiore, A. M.; Lin, M.; Zhang, L.; Jacob, D. J.; Naik, V.; Horowitz, L. W.

    2011-12-01

    Tropospheric ozone adversely affects human health and vegetation, and is thus a criteria pollutant regulated by the U.S. Environmental Protection Agency (EPA) under the National Ambient Air Quality Standard (NAAQS). Ozone is produced in the atmosphere via photo-oxidation of volatile organic compounds (VOCs) and carbon monoxide (CO) in the presence of nitrogen oxides (NOx). The present EPA approach considers health risks associated with exposure to ozone enhancement above the policy-relevant background (PRB), which is currently defined as the surface concentration of ozone that would exist without North American anthropogenic emissions. PRB thus includes production by natural precursors, production by precursors emitted on foreign continents, and transport of stratospheric ozone into surface air. As PRB is not an observable quantity, it must be estimated using numerical models. We compare PRB estimates for the year 2006 from the GFDL Atmospheric Model 3 (AM3) chemistry-climate model (CCM) and the GEOS-Chem (GC) chemical transport model (CTM). We evaluate the skill of the models in reproducing total surface ozone observed at the U.S. Clean Air Status and Trends Network (CASTNet), dividing the stations into low-elevation (< 1.5 km in altitude, primarily eastern) and high-elevation (> 1.5 km in altitude, all western) subgroups. At the low-elevation sites AM3 estimates of PRB (38±9 ppbv in spring, 27±9 ppbv in summer) are higher than GC (27±7 ppbv in spring, 21±8 ppbv in summer) in both seasons. Analysis at these sites is complicated by a positive bias in AM3 total ozone with respect to the observed total ozone, the source of which is yet unclear. At high-elevation sites, AM3 PRB is higher in the spring (47±8 ppbv) than in the summer (33±8 ppbv). In contrast, GC simulates little seasonal variation at high elevation sites (39±5 ppbv in spring vs. 38±7 ppbv in summer). Seasonal average total ozone at these sites was within 4 ppbv of the observations for both

  13. Agenda Setting for Health Promotion: Exploring an Adapted Model for the Social Media Era

    PubMed Central

    2015-01-01

    Background The foundation of best practice in health promotion is a robust theoretical base that informs design, implementation, and evaluation of interventions that promote the public’s health. This study provides a novel contribution to health promotion through the adaptation of the agenda-setting approach in response to the contribution of social media. This exploration and proposed adaptation is derived from a study that examined the effectiveness of Twitter in influencing agenda setting among users in relation to road traffic accidents in Saudi Arabia. Objective The proposed adaptations to the agenda-setting model to be explored reflect two levels of engagement: agenda setting within the social media sphere and the position of social media within classic agenda setting. This exploratory research aims to assess the veracity of the proposed adaptations on the basis of the hypotheses developed to test these two levels of engagement. Methods To validate the hypotheses, we collected and analyzed data from two primary sources: Twitter activities and Saudi national newspapers. Keyword mentions served as indicators of agenda promotion; for Twitter, interactions were used to measure the process of agenda setting within the platform. The Twitter final dataset comprised 59,046 tweets and 38,066 users who contributed by tweeting, replying, or retweeting. Variables were collected for each tweet and user. In addition, 518 keyword mentions were recorded from six popular Saudi national newspapers. Results The results showed significant ratification of the study hypotheses at both levels of engagement that framed the proposed adaptions. The results indicate that social media facilitates the contribution of individuals in influencing agendas (individual users accounted for 76.29%, 67.79%, and 96.16% of retweet impressions, total impressions, and amplification multipliers, respectively), a component missing from traditional constructions of agenda-setting models. The influence

  14. Improved Discovery of Molecular Interactions in Genome-Scale Data with Adaptive Model-Based Normalization

    PubMed Central

    Brown, Patrick O.

    2013-01-01

    Background High throughput molecular-interaction studies using immunoprecipitations (IP) or affinity purifications are powerful and widely used in biology research. One of many important applications of this method is to identify the set of RNAs that interact with a particular RNA-binding protein (RBP). Here, the unique statistical challenge presented is to delineate a specific set of RNAs that are enriched in one sample relative to another, typically a specific IP compared to a non-specific control to model background. The choice of normalization procedure critically impacts the number of RNAs that will be identified as interacting with an RBP at a given significance threshold – yet existing normalization methods make assumptions that are often fundamentally inaccurate when applied to IP enrichment data. Methods In this paper, we present a new normalization methodology that is specifically designed for identifying enriched RNA or DNA sequences in an IP. The normalization (called adaptive or AD normalization) uses a basic model of the IP experiment and is not a variant of mean, quantile, or other methodology previously proposed. The approach is evaluated statistically and tested with simulated and empirical data. Results and Conclusions The adaptive (AD) normalization method results in a greatly increased range in the number of enriched RNAs identified, fewer false positives, and overall better concordance with independent biological evidence, for the RBPs we analyzed, compared to median normalization. The approach is also applicable to the study of pairwise RNA, DNA and protein interactions such as the analysis of transcription factors via chromatin immunoprecipitation (ChIP) or any other experiments where samples from two conditions, one of which contains an enriched subset of the other, are studied. PMID:23349766
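
The outline below sketches one way a model-based scale factor could be estimated under the assumption that most species are not enriched; it is a schematic stand-in rather than the published AD normalization algorithm, and the quantile threshold and pseudocounts are assumptions.

```python
# Schematic normalization sketch: estimate the IP/control background scale from
# the presumed non-enriched bulk instead of the overall median, which is biased
# when a large fraction of species is genuinely enriched.
import numpy as np

def adaptive_scale(ip_counts, ctrl_counts, background_quantile=0.25):
    """Estimate the IP/control background scale factor from the low-ratio bulk."""
    ratios = (ip_counts + 1.0) / (ctrl_counts + 1.0)   # pseudocounts for stability
    cutoff = np.quantile(ratios, background_quantile)
    bulk = ratios <= cutoff                            # presumed non-enriched species
    return np.median(ratios[bulk])

def log2_enrichment(ip_counts, ctrl_counts):
    scale = adaptive_scale(ip_counts, ctrl_counts)
    return np.log2((ip_counts + 1.0) / (scale * (ctrl_counts + 1.0)))
```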

  15. Space weather circulation model of plasma clouds as background radiation medium of space environment.

    NASA Astrophysics Data System (ADS)

    Kalu, A. E.

    A model for Space Weather (SW) Circulation with Plasma Clouds as background radiation medium of Space Environment has been proposed and discussed. Major characteristics of the model are outlined and the model assumes a baroclinic Space Environment in view of observed pronounced horizontal electron temperature gradient with prevailing weak vertical temperature gradient. The primary objective of the study is to be able to monitor and realistically predict on real- or near real-time SW and Space Storms (SWS) affecting human economic systems on Earth as well as the safety and Physiologic comfort of human payload in Space Environment in relation to planned increase in human space flights especially with reference to the ISS Space Shuttle Taxi (ISST) Programme and other prolonged deep Space Missions. Although considerable discussions are now available in the literature on SW issues, routine Meteorological operational applications of SW forecast data and information for Space Environment are still yet to receive adequate attention. The paper attempts to fill this gap in the literature of SW. The paper examines the sensitivity and variability in 3-D continuum of Plasmas in response to solar radiation inputs into the magnetosphere under disturbed Sun condition. Specifically, the presence of plasma clouds in the form of Coronal Mass Ejections (CMEs) is stressed as a major source of danger to Space crews, spacecraft instrumentation and architecture charging problems as well as impacts on numerous radiation - sensitive human economic systems on Earth. Finally, the paper considers the application of model results in the form of effective monitoring of each of the two major phases of manned Spaceflights - take-off and re-entry phases where all-time assessment of spacecraft transient ambient micro-incabin and outside Space Environment is vital for all manned Spaceflights as recently evidenced by the loss of vital information during take-off of the February 1, 2003 US Columbia

  16. Decentralized Adaptive Control of Systems with Uncertain Interconnections, Plant-Model Mismatch and Actuator Failures

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    Decentralized adaptive control is considered for systems consisting of multiple interconnected subsystems. It is assumed that each subsystem's parameters are uncertain and the interconnection parameters are not known. In addition, mismatch can exist between each subsystem and its reference model. A strictly decentralized adaptive control scheme is developed, wherein each subsystem has access only to its own state but has knowledge of all reference model states. The mismatch is estimated online for each subsystem and the mismatch estimates are used to adaptively modify the corresponding reference models. The adaptive control scheme is extended to the case with actuator failures in addition to mismatch.

  17. A Hydrodynamic Approach to Cosmology: Nonlinear Effects on Cosmic Backgrounds in the Cold Dark Matter Model

    NASA Astrophysics Data System (ADS)

    Scaramella, Roberto; Cen, Renyue; Ostriker, Jeremiah P.

    1993-10-01

    Using the CDM model as a testbed, we produce and analyze sky maps of fluctuations in the cosmic background radiation field due to Sunyaev-Zel'dovich effect, as well as those seen in X-ray background at 1 keV and at 2 keV. These effects are due to the shock heating of baryons in the nonlinear phases of cosmic collapses. Comparing observations with computations provides a powerful tool to constrain cosmological models. We use a highly developed Eulerian mesh code with 128³ cells and 2 × 10⁶ particles. Most of our information comes from simulations with box size 64 h⁻¹ Mpc, but other calculations were made with L = 16 h⁻¹ and L = 4 h⁻¹ Mpc. A standard CDM input spectrum was used with amplitude defined by the requirement (ΔM/M)_rms = 1/1.5 on 8 h⁻¹ Mpc scales (lower than the COBE normalization by a factor of 1.6±0.4), with H₀ = 50 km s⁻¹ Mpc⁻¹ and Ω_b = 0.05. For statistical validity a large number of independent simulations must be run. In all, over 60 simulations were run from z = 20 to z = 0. We produce maps of 50′ × 50′ with 1′ effective resolution by randomly stacking along the past light cone for 0.02 ≤ z ≤ 10 appropriate combinations of computational boxes of different comoving lengths, which are picked from among different realizations of initial conditions. We also compute time evolution, present intensity pixel distributions, and the autocorrelation function of sky fluctuations as a function of angular scale. Our most reliable results are obtained after deletion of bright sources having 1 keV intensity greater than 0.1 keV cm⁻² sr⁻¹ s⁻¹ keV⁻¹. Then for the Sunyaev-Zel'dovich parameter y the mean and dispersion are [ȳ, σ(y)] = (4, 3) × 10⁻⁷ with a lognormal distribution providing a good fit for values of y greater than average. The angular correlation function (less secure) is roughly exponential with scale length ≈2.5′. For the X-ray intensity fluctuations, in units of keV s⁻¹ sr⁻¹ cm⁻² keV⁻¹ we find Ī(X1, X2) = (0.02, 0.006) and σ(X1, X2) = (0

  18. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines.

    PubMed

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines. PMID:23408775

  19. Neural control and adaptive neural forward models for insect-like, energy-efficient, and adaptable locomotion of walking machines

    PubMed Central

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    Living creatures, like walking animals, have found fascinating solutions for the problem of locomotion control. Their movements show the impression of elegance including versatile, energy-efficient, and adaptable locomotion. During the last few decades, roboticists have tried to imitate such natural properties with artificial legged locomotion systems by using different approaches including machine learning algorithms, classical engineering control techniques, and biologically-inspired control mechanisms. However, their levels of performance are still far from the natural ones. By contrast, animal locomotion mechanisms seem to largely depend not only on central mechanisms (central pattern generators, CPGs) and sensory feedback (afferent-based control) but also on internal forward models (efference copies). They are used to a different degree in different animals. Generally, CPGs organize basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models allow the machine to autonomously adapt its locomotion to deal with a change of terrain, losing of ground contact during stance phase, stepping on or hitting an obstacle during swing phase, leg damage, and even to promote cockroach-like climbing behavior. Thus, the results presented here show that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines. PMID:23408775

  20. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it
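
A minimal sketch of the preprocessing stage described above might look as follows, with assumed time constants and sampling interval; the published model's filter constants and the downstream linear-nonlinear stage are not reproduced here.

```python
# Per-channel high-pass filtering of a spectrogram by subtracting a slowly
# tracked running mean level (frequency-dependent time constants), followed by
# half-wave rectification, as described in the abstract above.
import numpy as np

def ic_adaptation(spectrogram, taus, dt=0.005):
    """spectrogram: (n_freq, n_time) array; taus: per-channel time constants in s."""
    taus = np.asarray(taus, dtype=float)
    alpha = dt / taus                                      # per-channel smoothing factors
    out = np.zeros_like(spectrogram, dtype=float)
    mean_level = spectrogram[:, 0].astype(float).copy()    # running mean-level estimate
    for t in range(spectrogram.shape[1]):
        mean_level += alpha * (spectrogram[:, t] - mean_level)       # slow tracking
        out[:, t] = np.maximum(spectrogram[:, t] - mean_level, 0.0)  # half-wave rectify
    return out   # this output would then feed a standard linear-nonlinear (LN) model
```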

  1. Constraints on Dark Matter Interactions with Standard Model Particles from Cosmic Microwave Background Spectral Distortions.

    PubMed

    Ali-Haïmoud, Yacine; Chluba, Jens; Kamionkowski, Marc

    2015-08-14

    We propose a new method to constrain elastic scattering between dark matter (DM) and standard model particles in the early Universe. Direct or indirect thermal coupling of nonrelativistic DM with photons leads to a heat sink for the latter. This results in spectral distortions of the cosmic microwave background (CMB), the amplitude of which can be as large as a few times the DM-to-photon-number ratio. We compute CMB spectral distortions due to DM-proton, DM-electron, and DM-photon scattering for generic energy-dependent cross sections and DM mass m_{χ}≳1 keV. Using Far-Infrared Absolute Spectrophotometer measurements, we set constraints on the cross sections for m_{χ}≲0.1 MeV. In particular, for energy-independent scattering we obtain σ_{DM-proton}≲10^{-24} cm^{2} (keV/m_{χ})^{1/2}, σ_{DM-electron}≲10^{-27} cm^{2} (keV/m_{χ})^{1/2}, and σ_{DM-photon}≲10^{-39} cm^{2} (m_{χ}/keV). An experiment with the characteristics of the Primordial Inflation Explorer would extend the regime of sensitivity up to masses m_{χ}~1 GeV. PMID:26317709
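
For convenience, the quoted energy-independent bounds can be evaluated at a given dark-matter mass by direct substitution; the snippet below simply restates the scalings above (valid, per the abstract, for m_χ ≲ 0.1 MeV).

```python
# Straight substitution into the quoted upper bounds (cm^2) as a function of the
# dark-matter mass in keV; no new physics, just the stated scalings.
def sigma_bounds_cm2(m_chi_keV):
    return {
        "DM-proton":   1e-24 * (1.0 / m_chi_keV) ** 0.5,
        "DM-electron": 1e-27 * (1.0 / m_chi_keV) ** 0.5,
        "DM-photon":   1e-39 * m_chi_keV,
    }

print(sigma_bounds_cm2(1.0))     # at m_chi = 1 keV
print(sigma_bounds_cm2(100.0))   # at m_chi = 100 keV (~0.1 MeV, edge of quoted validity)
```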

  2. Detection of Bird Nests during Mechanical Weeding by Incremental Background Modeling and Visual Saliency

    PubMed Central

    Steen, Kim Arild; Therkildsen, Ole Roland; Green, Ole; Karstoft, Henrik

    2015-01-01

    Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower the use of pesticides and an increased number of pesticide-resistant weeds. Ground nesting birds are highly susceptible to farming operations, like mechanical weeding, which may destroy the nests and reduce the survival of chicks and incubating females. This problem has received limited attention within agricultural engineering. However, as the number of machines increases, destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology in order to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, as it detects 28 of 30 nests at an average distance of 3.8 m, with a true positive rate of 0.75. PMID:25738766
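
A hypothetical OpenCV sketch of the general fuse-and-threshold idea is given below. It substitutes a stock MOG2 background subtractor and spectral-residual saliency (the latter requires opencv-contrib-python) for the paper's incremental background model for moving cameras, and the fusion weights and thresholds are assumptions.

```python
# Illustrative fusion of background subtraction and visual saliency; not the
# published pipeline, just the general idea of agreeing foreground + saliency.
import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()

def detect_candidates(frame, fuse_thresh=0.5, min_area=100):
    """Return bounding boxes where foreground and saliency agree."""
    fg_mask = bg_model.apply(frame).astype(np.float32) / 255.0
    ok, sal_map = saliency.computeSaliency(frame)
    sal_map = sal_map.astype(np.float32) if ok else np.zeros_like(fg_mask)
    fused = 0.5 * fg_mask + 0.5 * sal_map                # simple average fusion (assumed)
    mask = (fused > fuse_thresh).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
```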

  3. Cosmic string parameter constraints and model analysis using small scale Cosmic Microwave Background data

    SciTech Connect

    Urrestilla, Jon; Bevis, Neil; Hindmarsh, Mark; Kunz, Martin

    2011-12-01

    We present a significant update of the constraints on the Abelian Higgs cosmic string tension by cosmic microwave background (CMB) data, enabled both by the use of new high-resolution CMB data from suborbital experiments as well as the latest results of the WMAP satellite, and by improved predictions for the impact of Abelian Higgs cosmic strings on the CMB power spectra. The new cosmic string spectra [1] were improved especially for small angular scales, through the use of larger Abelian Higgs string simulations and careful extrapolation. If Abelian Higgs strings are present then we find improved bounds on their contribution to the CMB anisotropies, f_d^AH < 0.095, and on their tension, Gμ_AH < 0.57 × 10⁻⁶, both at 95% confidence level using WMAP7 data; and f_d^AH < 0.048 and Gμ_AH < 0.42 × 10⁻⁶ using all the CMB data. We also find that using all the CMB data, a scale invariant initial perturbation spectrum, n_s = 1, is now disfavoured at 2.4σ even if strings are present. A Bayesian model selection analysis no longer indicates a preference for strings.

  4. A Systematic Ecological Model for Adapting Physical Activities: Theoretical Foundations and Practical Examples

    ERIC Educational Resources Information Center

    Hutzler, Yeshayahu

    2007-01-01

    This article proposes a theory- and practice-based model for adapting physical activities. The ecological frame of reference includes Dynamic and Action System Theory, World Health Organization International Classification of Function and Disability, and Adaptation Theory. A systematic model is presented addressing (a) the task objective, (b) task…

  5. Command generator tracker based direct model reference adaptive control of a PUMA 560 manipulator. Thesis

    NASA Technical Reports Server (NTRS)

    Swift, David C.

    1992-01-01

    This project dealt with the application of a Direct Model Reference Adaptive Control algorithm to the control of a PUMA 560 Robotic Manipulator. This chapter will present some motivation for using Direct Model Reference Adaptive Control, followed by a brief historical review, the project goals, and a summary of the subsequent chapters.

  6. Data-driven estimations of Standard Model backgrounds to SUSY searches in ATLAS

    SciTech Connect

    Legger, F.

    2008-11-23

    At the Large Hadron Collider (LHC), the strategy for the observation of supersymmetry in the early days is mainly based on inclusive searches. Major backgrounds are constituted by mismeasured multi-jet events and W, Z and t quark production in association with jets. We describe recent work performed in the ATLAS Collaboration to derive these backgrounds from the first ATLAS data.

  7. The Genetic Basis of Phenotypic Adaptation II: The Distribution of Adaptive Substitutions in the Moving Optimum Model

    PubMed Central

    Kopp, Michael; Hermisson, Joachim

    2009-01-01

    … γ = υ̃/(σ̃ Θ ω³), which combines the ecological and genetic parameters; (ii) depending on γ, we can distinguish two distinct adaptive regimes: for large γ the adaptive process is mutation limited and dominated by genetic constraints, whereas for small γ it is environmentally limited and dominated by the external ecological dynamics; (iii) deviations from the adaptive-walk approximation occur for large mutation rates, when different mutant alleles interact via linkage or epistasis; and (iv) in contrast to predictions from previous models assuming constant selection, the distribution of adaptive substitutions is generally not exponential. PMID:19805820
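
The composite parameter and the two regimes can be stated compactly; in the sketch below the variable names and the regime threshold are assumptions for illustration, not values taken from the paper.

```python
# Composite parameter gamma and a rough classification of the two regimes
# described in the abstract above (threshold is an assumed illustration).
def gamma(v_tilde, sigma_tilde, theta, omega):
    """gamma = v~ / (sigma~ * Theta * omega**3)."""
    return v_tilde / (sigma_tilde * theta * omega ** 3)

def regime(g, threshold=1.0):
    if g > threshold:
        return "mutation-limited (genetic constraints dominate)"
    return "environmentally limited (ecological dynamics dominate)"
```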

  8. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but only make testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
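
As a toy illustration of the "adaptive augmentation of a fixed baseline controller" idea, the following sketch adapts two gains for a scalar plant using standard Lyapunov-based update laws; it is not one of the flight controllers described above, and every constant in it is an assumption.

```python
# Scalar plant xdot = a*x + b*u tracking the reference model xmdot = am*xm + bm*r.
# The control is a (zero) baseline command plus the adaptive augmentation
# u = k_x*x + k_r*r, with Lyapunov-based gain updates assuming sign(b) > 0.
def simulate(a=1.0, b=2.0, am=-4.0, bm=4.0, gain=5.0, dt=0.001, T=10.0):
    x = xm = 0.0
    k_x = k_r = 0.0                              # adaptive augmentation gains
    for step in range(int(T / dt)):
        t = step * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0     # square-wave reference command
        u = 0.0 + k_x * x + k_r * r              # baseline command (0) + augmentation
        e = x - xm                               # model-following error
        k_x -= gain * e * x * dt                 # Lyapunov-based update laws
        k_r -= gain * e * r * dt
        x  += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt
    return k_x, k_r

print(simulate())   # ideal gains for these numbers are ((am - a)/b, bm/b) = (-2.5, 2.0)
```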

  9. A Joint Model of the X-Ray and Infrared Extragalactic Backgrounds. I. Model Construction and First Results

    NASA Astrophysics Data System (ADS)

    Shi, Yong; Helou, George; Armus, Lee; Stierwalt, Sabrina; Dale, Daniel

    2013-02-01

    We present an extragalactic population model of the cosmic background light to interpret the rich high-quality survey data in the X-ray and IR bands. The model incorporates star formation and supermassive black hole (SMBH) accretion in a co-evolution scenario to fit simultaneously 617 data points of number counts, redshift distributions, and local luminosity functions (LFs) with 19 free parameters. The model has four main components: the total IR LF, the SMBH accretion energy fraction in the IR band, the star formation spectral energy distribution (SED), and the unobscured SMBH SED extinguished with a H I column density distribution. As a result of the observational uncertainties about the star formation and SMBH SEDs, we present several variants of the model. The best-fit reduced χ² reaches as small as 2.7-2.9 of which a significant amount (>0.8) is contributed by cosmic variances or caveats associated with data. Compared to previous models, the unique result of this model is to constrain the SMBH energy fraction in the IR band that is found to increase with the IR luminosity but decrease with redshift up to z ~ 1.5; this result is separately verified using aromatic feature equivalent-width data. The joint modeling of X-ray and mid-IR data allows for improved constraints on the obscured active galactic nucleus (AGN), especially the Compton-thick AGN population. All variants of the model require that Compton-thick AGN fractions decrease with the SMBH luminosity but increase with redshift while the type 1 AGN fraction has the reverse trend.

  10. A JOINT MODEL OF THE X-RAY AND INFRARED EXTRAGALACTIC BACKGROUNDS. I. MODEL CONSTRUCTION AND FIRST RESULTS

    SciTech Connect

    Shi, Yong; Helou, George; Armus, Lee; Stierwalt, Sabrina; Dale, Daniel

    2013-02-10

    We present an extragalactic population model of the cosmic background light to interpret the rich high-quality survey data in the X-ray and IR bands. The model incorporates star formation and supermassive black hole (SMBH) accretion in a co-evolution scenario to fit simultaneously 617 data points of number counts, redshift distributions, and local luminosity functions (LFs) with 19 free parameters. The model has four main components: the total IR LF, the SMBH accretion energy fraction in the IR band, the star formation spectral energy distribution (SED), and the unobscured SMBH SED extinguished with a H I column density distribution. As a result of the observational uncertainties about the star formation and SMBH SEDs, we present several variants of the model. The best-fit reduced χ² reaches as small as 2.7-2.9 of which a significant amount (>0.8) is contributed by cosmic variances or caveats associated with data. Compared to previous models, the unique result of this model is to constrain the SMBH energy fraction in the IR band that is found to increase with the IR luminosity but decrease with redshift up to z ≈ 1.5; this result is separately verified using aromatic feature equivalent-width data. The joint modeling of X-ray and mid-IR data allows for improved constraints on the obscured active galactic nucleus (AGN), especially the Compton-thick AGN population. All variants of the model require that Compton-thick AGN fractions decrease with the SMBH luminosity but increase with redshift while the type 1 AGN fraction has the reverse trend.

  11. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.
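
A bare-bones sketch of the selection-mutation loop is given below for a simple tail-probability problem; the process, the reaction coordinate (the running maximum of the path), and all parameters are illustrative assumptions, not the systems studied in the paper.

```python
# Minimal adaptive multilevel splitting (kill-one variant) for estimating
# P(max_t X_t >= level) of a discretized Ornstein-Uhlenbeck path started at 0.
import numpy as np

rng = np.random.default_rng(1)

def ou_path(x0, n_steps=500, dt=0.01, theta=1.0, sigma=0.5):
    x = np.empty(n_steps + 1); x[0] = x0
    for i in range(n_steps):
        x[i+1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def ams_tail_probability(level=1.5, n_particles=50, max_iter=5000):
    paths = [ou_path(0.0) for _ in range(n_particles)]
    scores = np.array([p.max() for p in paths])        # reaction coordinate: running max
    prob = 1.0
    for _ in range(max_iter):
        worst = scores.argmin()
        if scores[worst] >= level:                     # all particles reached the level
            return prob
        prob *= (n_particles - 1) / n_particles        # selection: kill the worst particle
        donor = rng.choice([i for i in range(n_particles) if i != worst])
        # Mutation: restart from the first point where the donor exceeded the killed score.
        j = int(np.argmax(paths[donor] >= scores[worst]))
        new_path = np.concatenate([paths[donor][:j+1], ou_path(paths[donor][j])[1:]])
        paths[worst], scores[worst] = new_path, new_path.max()
    return prob

print(ams_tail_probability())
```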

  12. Dynamics of dual prism adaptation: relating novel experimental results to a minimalistic neural model.

    PubMed

    Arévalo, Orlando; Bornschlegl, Mona A; Eberhardt, Sven; Ernst, Udo; Pawelzik, Klaus; Fahle, Manfred

    2013-01-01

    In everyday life, humans interact with a dynamic environment often requiring rapid adaptation of visual perception and motor control. In particular, new visuo-motor mappings must be learned while old skills have to be kept, such that after adaptation, subjects may be able to quickly change between two different modes of generating movements ('dual-adaptation'). A fundamental question is how the adaptation schedule determines the acquisition speed of new skills. Given a fixed number of movements in two different environments, will dual-adaptation be faster if switches ('phase changes') between the environments occur more frequently? We investigated the dynamics of dual-adaptation under different training schedules in a virtual pointing experiment. Surprisingly, we found that acquisition speed of dual visuo-motor mappings in a pointing task is largely independent of the number of phase changes. Next, we studied the neuronal mechanisms underlying this result and other key phenomena of dual-adaptation by relating model simulations to experimental data. We propose a simple and yet biologically plausible neural model consisting of a spatial mapping from an input layer to a pointing angle which is subjected to a global gain modulation. Adaptation is performed by reinforcement learning on the model parameters. Despite its simplicity, the model provides a unifying account for a broad range of experimental data: It quantitatively reproduced the learning rates in dual-adaptation experiments for both direct effect, i.e. adaptation to prisms, and aftereffect, i.e. behavior after removal of prisms, and their independence on the number of phase changes. Several other phenomena, e.g. initial pointing errors that are far smaller than the induced optical shift, were also captured. Moreover, the underlying mechanisms, a local adaptation of a spatial mapping and a global adaptation of a gain factor, explained asymmetric spatial transfer and generalization of prism adaptation, as

  13. Tensor Product Model Transformation Based Adaptive Integral-Sliding Mode Controller: Equivalent Control Method

    PubMed Central

    Zhao, Guoliang; Li, Hongxing

    2013-01-01

    This paper proposes new methodologies for the design of adaptive integral-sliding mode control. A tensor product model transformation based adaptive integral-sliding mode control law with respect to uncertainties and perturbations is studied, where upper bounds on the perturbations and uncertainties are assumed to be unknown. The advantage of the proposed controllers is a dynamic adaptive control gain that establishes a sliding mode right at the beginning of the process. The gain dynamics ensure a reasonable adaptive gain with respect to the uncertainties. Finally, the efficacy of the proposed controller is verified by simulations on an uncertain nonlinear system model. PMID:24453897

  14. Unscented fuzzy-controlled current statistic model and adaptive filtering for tracking maneuvering targets

    NASA Astrophysics Data System (ADS)

    Hu, Hongtao; Jing, Zhongliang; Hu, Shiqiang

    2006-12-01

    A novel adaptive algorithm for tracking maneuvering targets is proposed. The algorithm is implemented with fuzzy-controlled current statistic model adaptive filtering and the unscented transformation. A fuzzy system allows the filter to tune the magnitude of maximum accelerations to adapt to different target maneuvers, and the unscented transformation can effectively handle nonlinear systems. Simulation results for a bearing-only tracking scenario show that the proposed algorithm is robust over a wide range of maneuvers and overcomes the shortcomings of the traditional current statistic model and adaptive filtering algorithm.
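
For reference, a minimal unscented-transform helper is sketched below using standard Julier-Uhlmann sigma points; the κ value is an assumption, and the surrounding fuzzy current-statistic filter is not reproduced.

```python
# Minimal unscented transform: propagate (mean, cov) through a nonlinear
# function f using 2n+1 sigma points and recover the transformed mean/covariance.
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    n = mean.size
    S = np.linalg.cholesky((n + kappa) * cov)          # matrix square root of scaled cov
    sigma = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 sigma points
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(p) for p in sigma])                # nonlinear propagation
    y_mean = w @ y
    y_cov = (w[:, None] * (y - y_mean)).T @ (y - y_mean)
    return y_mean, y_cov

# Example: push a 2-D state through a range measurement function.
m, P = unscented_transform(np.array([1.0, 2.0]), np.eye(2),
                           lambda x: np.array([np.hypot(x[0], x[1])]))
print(m, P)
```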

  15. Comparison of Measured Galactic Background Radiation at L-Band with Model

    NASA Technical Reports Server (NTRS)

    LeVine, David M.; Abraham, Saji; Kerr, Yann H.; Wilson, William J.; Skou, Niels; Sobjaerg, Sten

    2004-01-01

    Radiation from the celestial sky in the spectral window at 1.413 GHz is strong and an accurate accounting of this background radiation is needed for calibration and retrieval algorithms. Modern radio astronomy measurements in this window have been converted into a brightness temperature map of the celestial sky at L-band suitable for such applications. This paper presents a comparison of the background predicted by this map with the measurements of several modern L-band remote sensing radiometers.

  16. Geodynamic background of the 2008 Wenchuan earthquake based on 3D visco-elastic numerical modelling

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhu, Bojing; Yang, Xiaolin; Shi, Yaolin

    2016-03-01

    The 2008 Wenchuan earthquake (Mw7.9) occurred in the Longmen Shan fault zone. The stress change and crustal deformation during the accumulation period is computed using 3D finite element modelling assuming visco-elastic rheology. Our results support that the eastward movement of the Tibetan Plateau resulting from the India-Eurasia collision is obstructed at the Longmen Shan fault zone by the strong Yangtze craton. In response, the Tibetan ductile crust thickens and accumulates at the contact between the Tibetan Plateau and the Sichuan Basin. This process implies a strong uplift with the rate of about 1.8 mm/a of the upper crust and induces a stress concentration nearly at the bottom of the Longmen Shan fault zone. We believe that the stress concentration in the Longmen Shan fault zone provides a very important geodynamic background of the 2008 Wenchuan earthquake. Using numerical experiments we find that the key factor controlling this stress concentration process is the large viscosity contrast in the middle and lower crusts between the Tibetan Plateau and the Sichuan Basin. The results show that large viscosity contrast in the middle and lower crusts accelerates the stress concentration in the Longmen Shan fault zone. Fast moving lower crustal flow accelerates this stress accumulation process. During the inter-seismic period, spatially the maximum stress accumulation rate of the eastern margin of the Tibetan Plateau is located nearly at the bottom of the brittle upper crust of the Longmen Shan fault zone. The spatial distribution of the stress accumulation along the strike of the Longmen Shan fault zone is as follows: the normal stress decreases while the shear stress increases from southwest to northeast along the Longmen Shan fault zone. This stress distribution explains the thrust motion in the SW and strike-slip motion in the NE during the 2008 Wenchuan earthquake.

  17. A Method for Estimating Urban Background Concentrations in Support of Hybrid Air Pollution Modeling for Environmental Health Studies

    PubMed Central

    Arunachalam, Saravanan; Valencia, Alejandro; Akita, Yasuyuki; Serre, Marc L.; Omary, Mohammad; Garcia, Valerie; Isakov, Vlad

    2014-01-01

    Exposure studies rely on detailed characterization of air quality, either from sparsely located routine ambient monitors or from central monitoring sites that may lack spatial representativeness. Alternatively, some studies use models of various complexities to characterize local-scale air quality, but often with poor representation of background concentrations. A hybrid approach that addresses this drawback combines a regional-scale model to provide background concentrations and a local-scale model to assess impacts of local sources. However, this approach may double-count sources in the study regions. To address these limitations, we carefully define the background concentration as the concentration that would be measured if local sources were not present, and to estimate these background concentrations we developed a novel technique that combines space-time ordinary kriging (STOK) of observations with outputs from a detailed chemistry-transport model with local sources zeroed out. We applied this technique to support an exposure study in Detroit, Michigan, for several pollutants (including NOx and PM2.5), and evaluated the estimated hybrid concentrations (calculated by combining the background estimates that addresses this issue of double counting with local-scale dispersion model estimates) using observations. Our results demonstrate the strength of this approach specifically by eliminating the problem of double-counting reported in previous hybrid modeling approaches leading to improved estimates of background concentrations, and further highlight the relative importance of NOx vs. PM2.5 in their relative contributions to total concentrations. While a key limitation of this approach is the requirement for another detailed model simulation to avoid double-counting, STOK improves the overall characterization of background concentrations at very fine spatial scales. PMID:25321872
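
The combination step can be sketched as follows. Note that plain inverse-distance interpolation stands in for space-time ordinary kriging, every array argument is a hypothetical placeholder, and the published method fuses the observations with the zeroed-out regional model through STOK rather than through this simple residual correction.

```python
# Conceptual sketch of a hybrid concentration estimate: a kriging-like background
# field (here approximated by inverse-distance interpolation of monitor residuals
# against a local-source-free regional model) plus a local-scale dispersion term.
import numpy as np

def idw(xy_monitors, values, xy_targets, power=2.0):
    """Inverse-distance interpolation of monitor-based values to target points."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_monitors[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def hybrid_concentration(xy_monitors, obs, regional_no_local_at_monitors,
                         xy_targets, regional_no_local_at_targets, local_dispersion):
    residual = obs - regional_no_local_at_monitors      # what the no-local model misses
    background = regional_no_local_at_targets + idw(xy_monitors, residual, xy_targets)
    return background + local_dispersion                # local sources added only once
```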

  18. A method for estimating urban background concentrations in support of hybrid air pollution modeling for environmental health studies.

    PubMed

    Arunachalam, Saravanan; Valencia, Alejandro; Akita, Yasuyuki; Serre, Marc L; Omary, Mohammad; Garcia, Valerie; Isakov, Vlad

    2014-01-01

    Exposure studies rely on detailed characterization of air quality, either from sparsely located routine ambient monitors or from central monitoring sites that may lack spatial representativeness. Alternatively, some studies use models of various complexities to characterize local-scale air quality, but often with poor representation of background concentrations. A hybrid approach that addresses this drawback combines a regional-scale model to provide background concentrations and a local-scale model to assess impacts of local sources. However, this approach may double-count sources in the study regions. To address these limitations, we carefully define the background concentration as the concentration that would be measured if local sources were not present, and to estimate these background concentrations we developed a novel technique that combines space-time ordinary kriging (STOK) of observations with outputs from a detailed chemistry-transport model with local sources zeroed out. We applied this technique to support an exposure study in Detroit, Michigan, for several pollutants (including NOx and PM2.5), and evaluated the estimated hybrid concentrations (calculated by combining the background estimates that addresses this issue of double counting with local-scale dispersion model estimates) using observations. Our results demonstrate the strength of this approach specifically by eliminating the problem of double-counting reported in previous hybrid modeling approaches leading to improved estimates of background concentrations, and further highlight the relative importance of NOx vs. PM2.5 in their relative contributions to total concentrations. While a key limitation of this approach is the requirement for another detailed model simulation to avoid double-counting, STOK improves the overall characterization of background concentrations at very fine spatial scales. PMID:25321872

  19. Modeling Lost-Particle Backgrounds in PEP-II Using LPTURTLE

    SciTech Connect

    Fieguth, T.; Barlow, R.; Kozanecki, W.; /DAPNIA, Saclay

    2005-05-17

    Background studies during the design, construction, commissioning, operation and improvement of BaBar and PEP-II have been greatly influenced by results from a program referred to as LPTURTLE (Lost Particle TURTLE) which was originally conceived for the purpose of studying gas background for SLC. This venerable program is still in use today. We describe its use, capabilities and improvements and refer to current results now being applied to BaBar.

  20. Adapting a strategic management model to hospital operating strategies. A model development and justification.

    PubMed

    Swinehart, K; Zimmerer, T W; Oswald, S

    1995-01-01

    Industrial organizations have employed the process of strategic management in their attempts to cope effectively with global competitive pressures, while attempting to build and maintain competitive advantage. With health-care organizations presently trying to cope with an increasingly turbulent environment created by the uncertainty as to pending legislation and anticipated reform, the need for such organizational strategic planning is apparent. Presents and discusses a methodology for adapting a business-oriented model of strategic planning to health care. PMID:10166203

  1. An Adaptive Code for Radial Stellar Model Pulsations

    NASA Astrophysics Data System (ADS)

    Buchler, J. Robert; Kolláth, Zoltán; Marom, Ariel

    1997-09-01

    We describe an implicit 1-D adaptive mesh hydrodynamics code that is specially tailored for radial stellar pulsations. In the Lagrangian limit the code reduces to the well-tested Fraley scheme. The code has the useful feature that unwanted, long-lasting transients can be avoided by smoothly switching on the adaptive mesh features starting from the Lagrangian code. Thus, a limit cycle pulsation that can readily be computed with the relaxation method of Stellingwerf will converge in a few tens of pulsation cycles when put into the adaptive mesh code. The code has been checked with two shock problems, viz. Noh and Sedov, for which analytical solutions are known, and it has been found to be both accurate and stable. Superior results were obtained through the solution of the total energy (gravitational + kinetic + internal) equation rather than that of the internal energy only.

  2. Modeling irrigation-based climate change adaptation in agriculture: Model development and evaluation in Northeast China

    NASA Astrophysics Data System (ADS)

    Okada, Masashi; Iizumi, Toshichika; Sakurai, Gen; Hanasaki, Naota; Sakai, Toru; Okamoto, Katsuo; Yokozawa, Masayuki

    2015-09-01

    Replacing a rainfed cropping system with an irrigated one is widely assumed to be an effective measure for climate change adaptation. However, many agricultural impact studies have not necessarily accounted for the space-time variations in water availability under changing climate and land use. Moreover, many hydrologic and agricultural assessments of climate change impacts are not fully integrated. To overcome this shortcoming, a tool that can simultaneously simulate the dynamic interactions between crop production and water resources in a watershed is essential. Here we propose the regional production and circulation coupled model (CROVER) by embedding the PRYSBI-2 (Process-based Regional Yield Simulator with Bayesian Inference version 2) large-area crop model into the global water resources model (called H08), and apply this model to the Songhua River watershed in Northeast China. The evaluation reveals that the model's performance in capturing the major characteristics of historical change in surface soil moisture, river discharge, actual crop evapotranspiration, and soybean yield relative to the reference data during the interval 1979-2010 is satisfactorily accurate. The simulation experiments using the model demonstrated that subregional irrigation management, such as designating the area to which irrigation is primarily applied, has a measurable influence on regional crop production in a drought year. This finding suggests that reassessing climate change risk in agriculture with this type of modeling is crucial in order not to overestimate the potential of irrigation-based adaptation.

  3. Adaptive Failure Compensation for Aircraft Tracking Control Using Engine Differential Based Model

    NASA Technical Reports Server (NTRS)

    Liu, Yu; Tang, Xidong; Tao, Gang; Joshi, Suresh M.

    2006-01-01

    An aircraft model that incorporates independently adjustable engine throttles and ailerons is employed to develop an adaptive control scheme in the presence of actuator failures. This model captures the key features of aircraft flight dynamics when in the engine differential mode. Based on this model an adaptive feedback control scheme for asymptotic state tracking is developed and applied to a transport aircraft model in the presence of two types of failures during operation, rudder failure and aileron failure. Simulation results are presented to demonstrate the adaptive failure compensation scheme.

  4. A Model of Family Background, Family Process, Youth Self-Control, and Delinquent Behavior in Two-Parent Families

    ERIC Educational Resources Information Center

    Jeong, So-Hee; Eamon, Mary Keegan

    2009-01-01

    Using data from a national sample of two-parent families with 11- and 12-year-old youths (N = 591), we tested a structural model of family background, family process (marital conflict and parenting), youth self-control, and delinquency four years later. Consistent with the conceptual model, marital conflict and youth self-control are directly…

  5. Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees

    SciTech Connect

    Bremer, P; Pascucci, V; Hamann, B

    2008-12-08

    We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported; in particular, it is guaranteed to be of logarithmic height.

  6. Dynamics of Dual Prism Adaptation: Relating Novel Experimental Results to a Minimalistic Neural Model

    PubMed Central

    Arévalo, Orlando; Bornschlegl, Mona A.; Eberhardt, Sven; Ernst, Udo; Pawelzik, Klaus; Fahle, Manfred

    2013-01-01

    In everyday life, humans interact with a dynamic environment often requiring rapid adaptation of visual perception and motor control. In particular, new visuo–motor mappings must be learned while old skills have to be kept, such that after adaptation, subjects may be able to quickly change between two different modes of generating movements (‘dual–adaptation’). A fundamental question is how the adaptation schedule determines the acquisition speed of new skills. Given a fixed number of movements in two different environments, will dual–adaptation be faster if switches (‘phase changes’) between the environments occur more frequently? We investigated the dynamics of dual–adaptation under different training schedules in a virtual pointing experiment. Surprisingly, we found that acquisition speed of dual visuo–motor mappings in a pointing task is largely independent of the number of phase changes. Next, we studied the neuronal mechanisms underlying this result and other key phenomena of dual–adaptation by relating model simulations to experimental data. We propose a simple and yet biologically plausible neural model consisting of a spatial mapping from an input layer to a pointing angle which is subjected to a global gain modulation. Adaptation is performed by reinforcement learning on the model parameters. Despite its simplicity, the model provides a unifying account for a broad range of experimental data: It quantitatively reproduced the learning rates in dual–adaptation experiments for both direct effect, i.e. adaptation to prisms, and aftereffect, i.e. behavior after removal of prisms, and their independence on the number of phase changes. Several other phenomena, e.g. initial pointing errors that are far smaller than the induced optical shift, were also captured. Moreover, the underlying mechanisms, a local adaptation of a spatial mapping and a global adaptation of a gain factor, explained asymmetric spatial transfer and generalization of

  7. Characterization of background air pollution exposure in urban environments using a metric based on Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Gómez-Losada, Álvaro; Pires, José Carlos M.; Pino-Mejías, Rafael

    2016-02-01

    Urban area air pollution results from local air pollutants (from different sources) and horizontal transport (background pollution). Understanding urban air pollution background (lowest) concentration profiles is key in population exposure assessment and epidemiological studies. To this end, air pollution registered at background monitoring sites is studied, but background pollution levels are given as the average of the air pollutant concentrations measured at these sites over long periods of time. This short communication shows how a metric based on Hidden Markov Models (HMMs) can characterise the air pollutant background concentration profiles. HMMs were applied to daily average concentrations of CO, NO2, PM10 and SO2 at thirteen urban monitoring sites from three cities from 2010 to 2013. Using the proposed metric, the mean values of background and ambient air pollution registered at these sites for these primary pollutants were estimated and the ratio of ambient to background air pollution and the difference between them were studied. The ratio indicator for the studied air pollutants during the four-year study sets the background air pollution at 48%-69% of the ambient air pollution, while the difference between these values ranges from 101 to 193 μg/m3, 7-12 μg/m3, 11-13 μg/m3 and 2-3 μg/m3 for CO, NO2, PM10 and SO2, respectively.
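
    For readers who want to experiment with the idea, the sketch below fits a Gaussian hidden Markov model to a single pollutant's daily series and reads the lowest-mean hidden state as the background level, then reports the ratio and difference indicators discussed above. It is a minimal illustration, assuming the hmmlearn package and synthetic data; the paper does not specify its software, and the function and parameter names here are hypothetical.

        # Sketch: characterising background pollution with a Gaussian HMM,
        # assuming the hmmlearn package (not specified by the paper).
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        def background_metric(daily_conc, n_states=4, seed=0):
            """daily_conc: 1-D array of daily mean concentrations for one pollutant."""
            X = np.asarray(daily_conc, dtype=float).reshape(-1, 1)
            hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                              n_iter=200, random_state=seed)
            hmm.fit(X)
            background = hmm.means_.ravel().min()   # lowest-mean hidden state ~ background
            ambient = X.mean()                      # overall ambient level
            return {"background": background,
                    "ambient": ambient,
                    "ratio": background / ambient,
                    "difference": ambient - background}

        # Example with synthetic NO2-like data (hypothetical values, in ug/m3)
        rng = np.random.default_rng(1)
        series = np.concatenate([rng.normal(12, 2, 400), rng.normal(30, 6, 300)])
        print(background_metric(series))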

  8. Energetic Metabolism and Biochemical Adaptation: A Bird Flight Muscle Model

    ERIC Educational Resources Information Center

    Rioux, Pierre; Blier, Pierre U.

    2006-01-01

    The main objective of this class experiment is to measure the activity of two metabolic enzymes in crude extract from bird pectoral muscle and to relate the differences to their mode of locomotion and ecology. The laboratory is adapted to stimulate the interest of wildlife management students in biochemistry. The enzymatic activities of cytochrome…

  9. Application of the Bifactor Model to Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Seo, Dong Gi

    2011-01-01

    Most computerized adaptive tests (CAT) have been studied under the framework of unidimensional item response theory. However, many psychological variables are multidimensional and might benefit from using a multidimensional approach to CAT. In addition, a number of psychological variables (e.g., quality of life, depression) can be conceptualized…

  10. A model for homeopathic remedy effects: low dose nanoparticles, allostatic cross-adaptation, and time-dependent sensitization in a complex adaptive system

    PubMed Central

    2012-01-01

    Background This paper proposes a novel model for homeopathic remedy action on living systems. Research indicates that homeopathic remedies (a) contain measurable source and silica nanoparticles heterogeneously dispersed in colloidal solution; (b) act by modulating biological function of the allostatic stress response network (c) evoke biphasic actions on living systems via organism-dependent adaptive and endogenously amplified effects; (d) improve systemic resilience. Discussion The proposed active components of homeopathic remedies are nanoparticles of source substance in water-based colloidal solution, not bulk-form drugs. Nanoparticles have unique biological and physico-chemical properties, including increased catalytic reactivity, protein and DNA adsorption, bioavailability, dose-sparing, electromagnetic, and quantum effects different from bulk-form materials. Trituration and/or liquid succussions during classical remedy preparation create “top-down” nanostructures. Plants can biosynthesize remedy-templated silica nanostructures. Nanoparticles stimulate hormesis, a beneficial low-dose adaptive response. Homeopathic remedies prescribed in low doses spaced intermittently over time act as biological signals that stimulate the organism’s allostatic biological stress response network, evoking nonlinear modulatory, self-organizing change. Potential mechanisms include time-dependent sensitization (TDS), a type of adaptive plasticity/metaplasticity involving progressive amplification of host responses, which reverse direction and oscillate at physiological limits. To mobilize hormesis and TDS, the remedy must be appraised as a salient, but low level, novel threat, stressor, or homeostatic disruption for the whole organism. Silica nanoparticles adsorb remedy source and amplify effects. Properly-timed remedy dosing elicits disease-primed compensatory reversal in direction of maladaptive dynamics of the allostatic network, thus promoting resilience and recovery from

  11. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, who designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multiresolution approach are: 1) computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multiresolution presentation of heterogeneity as well as all other input and output variables, 3) an accurate, adaptive and efficient strategy and 4) semi-analytical properties which increase our understanding of usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is separately analyzed, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but it also describes subsurface processes closely related to their understood physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding the classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we will show recent improvements within the proposed methodology. Since the "state of the art" multiresolution approach usually uses the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in lines where solution changes are intensive. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  12. An adapted Coffey model for studying susceptibility losses in interacting magnetic nanoparticles

    PubMed Central

    Osaci, Mihaela

    2015-01-01

    Summary Background: Nanoparticles can be used in biomedical applications, such as contrast agents for magnetic resonance imaging, in tumor therapy or against cardiovascular diseases. Single-domain nanoparticles dissipate heat through susceptibility losses in two modes: Néel relaxation and Brownian relaxation. Results: Since a consistent theory for the Néel relaxation time that is applicable to systems of interacting nanoparticles has not yet been developed, we adapted the Coffey theoretical model for the Néel relaxation time in external magnetic fields in order to consider local dipolar magnetic fields. Then, we obtained the effective relaxation time. The effective relaxation time is further used for obtaining values of specific loss power (SLP) through linear response theory (LRT). A comparative analysis between our model and the discrete orientation model, more often used in the literature, and a comparison with experimental data from the literature have been carried out, in order to choose the optimal magnetic parameters of a nanoparticle system. Conclusion: In this way, we can study the effects of the nanoparticle concentration on SLP in an acceptable range of frequencies and amplitudes of external magnetic fields for biomedical applications, especially for tumor therapy by magnetic hyperthermia. PMID:26665090
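
    The quantities named in this abstract can be illustrated with the standard non-interacting single-particle formulas: the Néel and Brownian relaxation times, their parallel combination as an effective relaxation time, and a Rosensweig-style linear-response estimate of the loss power. The sketch below uses only those textbook expressions; the paper's dipolar-field correction to the Coffey Néel time is not reproduced, and all parameter values are illustrative.

        # Standard single-particle relaxation times and LRT loss power (Rosensweig-type).
        # The paper's dipolar-field correction to the Coffey Neel time is NOT included;
        # all numbers below are illustrative only.
        import numpy as np

        kB  = 1.380649e-23     # J/K
        mu0 = 4e-7 * np.pi     # T*m/A

        def relaxation_and_slp(d_core=16e-9, d_hydro=20e-9, K=1.1e4, Md=4.5e5,
                               eta=1e-3, T=300.0, phi=0.01, rho=5180.0,
                               H0=10e3, f=300e3, tau0=1e-9):
            Vm = np.pi * d_core**3 / 6            # magnetic core volume, m^3
            Vh = np.pi * d_hydro**3 / 6           # hydrodynamic volume, m^3
            tau_N = tau0 * np.exp(K * Vm / (kB * T))       # Neel relaxation time
            tau_B = 3 * eta * Vh / (kB * T)                # Brownian relaxation time
            tau_eff = tau_N * tau_B / (tau_N + tau_B)      # parallel combination
            chi0 = mu0 * phi * Md**2 * Vm / (3 * kB * T)   # low-field susceptibility
            wt = 2 * np.pi * f * tau_eff
            P = np.pi * mu0 * chi0 * H0**2 * f * wt / (1 + wt**2)  # W per m^3 of fluid
            slp = P / (phi * rho)                 # W per kg of magnetic material
            return tau_N, tau_B, tau_eff, slp

        print(relaxation_and_slp())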

  13. Fully nonlinear and exact perturbations of the Friedmann world model: non-flat background

    SciTech Connect

    Noh, Hyerim

    2014-07-01

    We extend the fully nonlinear and exact cosmological perturbation equations in a Friedmann background universe to include the background curvature. The perturbation equations are presented in a gauge-ready form, so any temporal gauge condition can be adopted freely depending on the problem to be solved. We consider the scalar and vector perturbations without anisotropic stress. As an application, we analyze the equations in the special case of irrotational zero-pressure fluid in the comoving gauge condition. We also present the fully nonlinear formulation for a minimally coupled scalar field.

  14. High-speed train control based on multiple-model adaptive control with second-level adaptation

    NASA Astrophysics Data System (ADS)

    Zhou, Yonghua; Zhang, Zhenlin

    2014-05-01

    Speed uplift has become the leading trend for the development of current railway traffic. Ideally, under the high-speed transportation infrastructure, trains run at specified positions with designated speeds at appointed times. In view of the faster adaptation ability of multiple-model adaptive control with second-level adaptation (MMAC-SLA), we propose one type of MMAC-SLA for a class of nonlinear systems such as cascaded vehicles. By using an input decomposition technique, the corresponding stability proof is obtained for the proposed MMAC-SLA, which synthesises the control signals from the weighted multiple models. The control strategy is utilised to challenge the position and speed tracking of high-speed trains with uncertain parameters. The simulation results demonstrate that the proposed MMAC-SLA can achieve small tracking errors with moderate in-train forces, using flattened (smooth) control inputs that are practical to implement. This study also provides a new idea for the control of in-train forces by tracking the positions and speeds of cars while considering power constraints.
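
    As a rough illustration of the second-level idea, a convex combination of fixed identification models whose weights are adapted from identification errors, the toy sketch below controls a scalar discrete-time plant. It is not the paper's train model, input decomposition, or stability proof; all names and values are hypothetical.

        # Toy sketch of multiple-model adaptive control with second-level
        # (convex-weight) adaptation over a bank of fixed identification models.
        import numpy as np

        def mmac_sla(a_true=0.8, models=(-0.5, 0.0, 0.5, 1.0, 1.5),
                     n_steps=200, mu=0.2, seed=0):
            rng = np.random.default_rng(seed)
            a_bank = np.array(models, dtype=float)
            w = np.full(len(a_bank), 1.0 / len(a_bank))   # convex combination weights
            y, log = 0.0, []
            for k in range(n_steps):
                ref = 1.0 if k % 50 < 25 else -1.0        # reference position profile
                a_hat = w @ a_bank                        # second-level parameter estimate
                u = ref - a_hat * y                       # certainty-equivalence control
                y_next = a_true * y + u + 0.01 * rng.standard_normal()
                e_i = y_next - (a_bank * y + u)           # identification errors per model
                e_c = w @ e_i                             # combined identification error
                w -= mu * e_c * e_i                       # gradient step on e_c**2
                w = np.clip(w, 0.0, None); w /= w.sum()   # keep weights on the simplex
                log.append(abs(y_next - ref))
                y = y_next
            return np.mean(log[:50]), np.mean(log[-50:])

        print("mean |tracking error|, early vs late:", mmac_sla())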

  15. Multi-objective parameter optimization of common land model using adaptive surrogate modelling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2014-06-01

    Parameter specification usually has significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task due to the following reasons: (1) LSMs usually have too many adjustable parameters (20-100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a huge number of model runs (typically 10^5-10^6). This makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet the aforementioned challenges: (1) use parameter screening to reduce the number of adjustable parameters; (2) use surrogate models to emulate the response of dynamic models to the variation of adjustable parameters; (3) use an adaptive strategy to promote the efficiency of surrogate-modeling-based optimization; (4) use a weighting function to transform multi-objective optimization into single-objective optimization. In this study, we demonstrate the uncertainty quantification framework on a single-column case study of a land surface model - Common Land Model (CoLM) and evaluate the effectiveness and efficiency of the proposed framework. The results indicated that this framework can achieve an optimal parameter set using only 411 model runs in total, and is worth extending to other large, complex dynamic models, such as regional land surface models, atmospheric models and climate models.
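
    Steps (2)-(4) of the framework can be illustrated with a generic adaptive surrogate loop: scalarise the objectives with a weighting function, emulate the expensive model with a surrogate, and refine the surrogate where it predicts good performance. The sketch below is a minimal stand-in using scikit-learn's Gaussian process regressor and a cheap analytic run_model; it is not the authors' implementation, and all names and values are hypothetical.

        # Minimal surrogate-assisted calibration sketch with adaptive refinement;
        # `run_model` stands in for an expensive land-surface-model evaluation.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def run_model(theta):                        # hypothetical cheap stand-in
            return np.array([np.sum((theta - 0.3)**2), np.sum((theta - 0.7)**2)])

        def weighted_objective(objs, w=(0.5, 0.5)):  # scalarise the objectives
            return float(np.dot(w, objs))

        def asmo(dim=3, n_init=20, n_iter=15, n_cand=2000, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(0, 1, size=(n_init, dim))            # initial design
            y = np.array([weighted_objective(run_model(x)) for x in X])
            for _ in range(n_iter):                              # adaptive loop
                gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
                gp.fit(X, y)                                     # fit the surrogate
                cand = rng.uniform(0, 1, size=(n_cand, dim))
                x_new = cand[np.argmin(gp.predict(cand))]        # refine where surrogate
                y_new = weighted_objective(run_model(x_new))     # looks most promising
                X = np.vstack([X, x_new]); y = np.append(y, y_new)
            best = np.argmin(y)
            return X[best], y[best]

        print(asmo())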

  16. Investigation of the Multiple Model Adaptive Control (MMAC) method for flight control systems

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The application was investigated of control theoretic ideas to the design of flight control systems for the F-8 aircraft. The design of an adaptive control system based upon the so-called multiple model adaptive control (MMAC) method is considered. Progress is reported.

  17. Illness behavior, social adaptation, and the management of illness. A comparison of educational and medical models.

    PubMed

    Mechanic, D

    1977-08-01

    Motivational needs and coping are important aspects of illness response. Clinicians must help guide illness response by suggesting constructive adaptive opportunities and by avoiding reinforcement of maladaptive patterns. This paper examines how the patient's search for meaning, social attributions, and social comparisons shape adaptation to illness and subsequent disability. It proposes a coping-adaptation model involving the following five resources relevant to rehabilitation: economic assets, abilities and skills, defensive techniques, social supports, and motivational impetus. It is maintained that confusion between illness and illness behavior obfuscates the alternatives available to guide patients through smoother adaptations and resumption of social roles. PMID:328824

  18. Dynamic modeling, property investigation, and adaptive controller design of serial robotic manipulators modeled with structural compliance

    NASA Technical Reports Server (NTRS)

    Tesar, Delbert; Tosunoglu, Sabri; Lin, Shyng-Her

    1990-01-01

    Research results on general serial robotic manipulators modeled with structural compliances are presented. Two compliant manipulator modeling approaches, distributed and lumped parameter models, are used in this study. System dynamic equations for both compliant models are derived by using the first and second order influence coefficients. Also, the properties of compliant manipulator system dynamics are investigated. One of the properties, which is defined as inaccessibility of vibratory modes, is shown to display a distinct character associated with compliant manipulators. This property indicates the impact of robot geometry on the control of structural oscillations. Example studies are provided to illustrate the physical interpretation of inaccessibility of vibratory modes. Two types of controllers are designed for compliant manipulators modeled by either lumped or distributed parameter techniques. In order to maintain the generality of the results, no linearization is introduced. Example simulations are given to demonstrate the controller performance. The second type of controller is also built for general serial robot arms and is adaptive in nature; it can estimate uncertain payload parameters on-line while simultaneously maintaining trajectory tracking. The relation between manipulator motion tracking capability and convergence of parameter estimation properties is discussed through example case studies. The effect of control input update delays on adaptive controller performance is also studied.

  19. Adaptive immunity does not strongly suppress spontaneous tumors in a Sleeping Beauty model of cancer

    PubMed Central

    Rogers, Laura M.; Olivier, Alicia K.; Meyerholz, David K.; Dupuy, Adam J.

    2013-01-01

    The tumor immunosurveillance hypothesis describes a process by which the immune system recognizes and suppresses the growth of transformed cancer cells. A variety of epidemiological and experimental evidence supports this hypothesis. Nevertheless, there are a number of conflicting reports regarding the degree of immune protection conferred, the immune cell types responsible for protection, and the potential contributions of immunosuppressive therapies to tumor induction. The purpose of this study was to determine whether the adaptive immune system actively suppresses tumorigenesis in a Sleeping Beauty (SB) mouse model of cancer. SB transposon mutagenesis was performed in either a wild-type or immunocompromised (Rag2-null) background. Tumor latency and multiplicity were remarkably similar in both immune cohorts, suggesting that the adaptive immune system is not efficiently suppressing tumor formation in our model. Exceptions included skin tumors, which displayed increased multiplicity in wild-type animals, and leukemias, which developed with shorter latency in immune-deficient mice. Overall tumor distribution was also altered such that tumors affecting the gastrointestinal tract were more frequent and hemangiosarcomas were less frequent in immune-deficient mice compared to wild-type mice. Finally, genetic profiling of transposon-induced mutations identified significant differences in mutation prevalence for a number of genes, including Uba1. Taken together, these results indicate that B- and T-cells function to shape the genetic profile of tumors in various tumor types, despite being ineffective at clearing SB-induced tumors. This study represents the first forward genetic screen designed to examine tumor immunosurveillance mechanisms. PMID:23475219

  20. Sensorimotor synchronization with tempo-changing auditory sequences: Modeling temporal adaptation and anticipation.

    PubMed

    van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E

    2015-11-11

    The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of the model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal
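
    A stripped-down way to see how error correction (adaptation) and tempo extrapolation (anticipation) interact is sketched below: taps are generated by phase and period correction, optionally fed by a linear extrapolation of the pacing tempo. The parameters and coupling are illustrative and are not the published ADAM equations.

        # Simplified sketch of combining error correction (adaptation) with linear
        # extrapolation of the pacing tempo (anticipation); illustrative only.
        import numpy as np

        def simulate_taps(onsets, alpha=0.5, beta=0.1, anticipate=True):
            """onsets: pacing-sequence onset times (s). Returns simulated tap times."""
            taps = [onsets[0]]
            period = onsets[1] - onsets[0]          # initial inter-onset interval
            for k in range(1, len(onsets) - 1):
                async_ = taps[-1] - onsets[k - 1]   # tap-to-tone asynchrony
                ioi = onsets[k] - onsets[k - 1]
                if anticipate and k >= 2:
                    prev_ioi = onsets[k - 1] - onsets[k - 2]
                    expected = ioi + (ioi - prev_ioi)     # extrapolate the tempo change
                else:
                    expected = ioi
                period += beta * (expected - period)      # period correction
                taps.append(taps[-1] + period - alpha * async_)   # phase correction
            return np.array(taps)

        # Pacing sequence with a linear tempo change from 600 ms to 450 ms IOI
        iois = np.linspace(0.6, 0.45, 40)
        onsets = np.concatenate([[0.0], np.cumsum(iois)])
        taps = simulate_taps(onsets)
        print(np.mean(np.abs(taps[1:] - onsets[1:len(taps)])))   # mean absolute asynchrony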

  1. Analytical approach to an integrate-and-fire model with spike-triggered adaptation

    NASA Astrophysics Data System (ADS)

    Schwalger, Tilo; Lindner, Benjamin

    2015-12-01

    The calculation of the steady-state probability density for multidimensional stochastic systems that do not obey detailed balance is a difficult problem. Here we present the analytical derivation of the stationary joint and various marginal probability densities for a stochastic neuron model with adaptation current. Our approach assumes weak noise but is valid for arbitrary adaptation strength and time scale. The theory predicts several effects of adaptation on the statistics of the membrane potential of a tonically firing neuron: (i) a membrane potential distribution with a convex shape, (ii) a strongly increased probability of hyperpolarized membrane potentials induced by strong and fast adaptation, and (iii) a maximized variability associated with the adaptation current at a finite adaptation time scale.
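
    The model class analysed here can also be simulated directly; the sketch below integrates a leaky integrate-and-fire neuron with a spike-triggered adaptation current using the Euler-Maruyama method, from which membrane-potential statistics such as the variance can be estimated numerically. Parameter values are generic, not those of the paper.

        # Euler-Maruyama sketch of a leaky integrate-and-fire neuron with a
        # spike-triggered adaptation current (generic, illustrative parameters).
        import numpy as np

        def simulate_adaptive_lif(T=50.0, dt=1e-4, mu=1.5, D=0.01,
                                  tau_m=0.02, tau_a=0.5, delta_a=0.3,
                                  v_th=1.0, v_reset=0.0, seed=0):
            rng = np.random.default_rng(seed)
            n = int(T / dt)
            v, a = 0.0, 0.0
            v_trace, spikes = np.empty(n), []
            for i in range(n):
                noise = np.sqrt(2 * D * dt) * rng.standard_normal()
                v += dt * (mu - v - a) / tau_m + noise
                a += dt * (-a / tau_a)
                if v >= v_th:                     # threshold crossing -> spike
                    spikes.append(i * dt)
                    v = v_reset
                    a += delta_a                  # spike-triggered adaptation increment
                v_trace[i] = v
            return v_trace, np.array(spikes)

        v_trace, spikes = simulate_adaptive_lif()
        print(len(spikes) / 50.0, "spikes/s; membrane-potential variance:", v_trace.var())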

  2. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  3. Bayesian Analysis for Exponential Random Graph Models Using the Adaptive Exchange Sampler*

    PubMed Central

    Jin, Ick Hoon; Yuan, Ying; Liang, Faming

    2014-01-01

    Exponential random graph models have been widely used in social network analysis. However, these models are extremely difficult to handle from a statistical viewpoint, because of the intractable normalizing constant and model degeneracy. In this paper, we consider a fully Bayesian analysis for exponential random graph models using the adaptive exchange sampler, which solves the intractable normalizing constant and model degeneracy issues encountered in Markov chain Monte Carlo (MCMC) simulations. The adaptive exchange sampler can be viewed as a MCMC extension of the exchange algorithm, and it generates auxiliary networks via an importance sampling procedure from an auxiliary Markov chain running in parallel. The convergence of this algorithm is established under mild conditions. The adaptive exchange sampler is illustrated using a few social networks, including the Florentine business network, molecule synthetic network, and dolphins network. The results indicate that the adaptive exchange algorithm can produce more accurate estimates than approximate exchange algorithms, while maintaining the same computational efficiency. PMID:24653788
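
    The basic exchange move that underlies the sampler (propose new parameters, simulate an auxiliary network from them, and accept with a ratio in which the intractable normalising constants cancel) can be sketched for an edge-count-only ERGM, whose edges are independent Bernoulli variables and therefore easy to simulate exactly. The adaptive importance-sampling machinery of the adaptive exchange sampler is not included; the data and names below are hypothetical.

        # Basic exchange-algorithm sketch for a doubly-intractable posterior,
        # using an edge-count-only ERGM so exact auxiliary simulation is trivial.
        import numpy as np

        def simulate_network(theta, n_nodes, rng):
            """Draw a network exactly: each edge is Bernoulli(sigmoid(theta))."""
            p = 1.0 / (1.0 + np.exp(-theta))
            n_pairs = n_nodes * (n_nodes - 1) // 2
            return rng.random(n_pairs) < p          # vector of edge indicators

        def exchange_sampler(edge_count, n_nodes, n_iter=5000, step=0.5, seed=0):
            rng = np.random.default_rng(seed)
            theta = 0.0
            samples = np.empty(n_iter)
            for i in range(n_iter):
                theta_prop = theta + step * rng.standard_normal()
                w = simulate_network(theta_prop, n_nodes, rng)   # auxiliary network
                # Normalising constants cancel in the exchange acceptance ratio:
                log_alpha = (theta_prop - theta) * (edge_count - w.sum())
                log_alpha += -0.5 * (theta_prop**2 - theta**2) / 10.0   # N(0,10) prior
                if np.log(rng.random()) < log_alpha:
                    theta = theta_prop
                samples[i] = theta
            return samples

        # Observed network: 30 nodes, 100 edges (hypothetical data)
        draws = exchange_sampler(edge_count=100, n_nodes=30)
        print(draws[1000:].mean(), draws[1000:].std())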

  4. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object-oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language.

  5. A Comprehensive and Systematic Model of User Evaluation of Web Search Engines: I. Theory and Background.

    ERIC Educational Resources Information Center

    Su, Louise T.

    2003-01-01

    Reports on a project that proposes and tests a comprehensive and systematic model of user evaluation of Web search engines. This article describes the model, including a set of criteria and measures and a method for implementation. A literature review portrays settings for developing the model and places applications of the model in contemporary…

  6. Ensuring Congruency in Multiscale Modeling: Towards Linking Agent Based and Continuum Biomechanical Models of Arterial Adaptation

    PubMed Central

    Hayenga, Heather N.; Thorne, Bryan C.; Peirce, Shayn M.; Humphrey, Jay D.

    2011-01-01

    There is a need to develop multiscale models of vascular adaptations to understand tissue level manifestations of cellular level mechanisms. Continuum based biomechanical models are well suited for relating blood pressures and flows to stress-mediated changes in geometry and properties, but less so for describing underlying mechanobiological processes. Discrete stochastic agent based models are well suited for representing biological processes at a cellular level, but not for describing tissue level mechanical changes. We present here a conceptually new approach to facilitate the coupling of continuum and agent based models. Because of ubiquitous limitations in both the tissue- and cell-level data from which one derives constitutive relations for continuum models and rule-sets for agent based models, we suggest that model verification should enforce congruency across scales. That is, multiscale model parameters initially determined from data sets representing different scales should be refined, when possible, to ensure that common outputs are consistent. Potential advantages of this approach are illustrated by comparing simulated aortic responses to a sustained increase in blood pressure predicted by continuum and agent based models both before and after instituting a genetic algorithm to refine 16 objectively bounded model parameters. We show that congruency-based parameter refinement not only yielded increased consistency across scales, it also yielded predictions that are closer to in vivo observations. PMID:21809144

  7. Modeling of neutron induced backgrounds in x-ray framing cameras.

    PubMed

    Hagmann, C; Izumi, N; Bell, P; Bradley, D; Conder, A; Eckart, M; Khater, H; Koch, J; Moody, J; Stone, G

    2010-10-01

    Fast neutrons from inertial confinement fusion implosions pose a severe background to conventional microchannel plate (MCP)-based x-ray framing cameras for deuterium-tritium yields >10^13. Nuclear reactions of neutrons in photosensitive elements (charge coupled device or film) cause some of the image noise. In addition, inelastic neutron collisions in the detector and nearby components create a large gamma pulse. The background from the resulting secondary charged particles is twofold: (1) production of light through the Cherenkov effect in optical components and by excitation of the MCP phosphor and (2) direct excitation of the photosensitive elements. We give theoretical estimates of the various contributions to the overall noise and present mitigation strategies for operating in high-yield environments. PMID:21034042

  8. A Direct Adaptive Control Approach in the Presence of Model Mismatch

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.; Tao, Gang; Khong, Thuan

    2009-01-01

    This paper considers the problem of direct model reference adaptive control when the plant-model matching conditions are violated due to abnormal changes in the plant or incorrect knowledge of the plant's mathematical structure. The approach consists of direct adaptation of state feedback gains for state tracking, and simultaneous estimation of the plant-model mismatch. Because of the mismatch, the plant can no longer track the state of the original reference model, but may be able to track a new reference model that still provides satisfactory performance. The reference model is updated if the estimated plant-model mismatch exceeds a bound that is determined via robust stability and/or performance criteria. The resulting controller is a hybrid direct-indirect adaptive controller that offers asymptotic state tracking in the presence of plant-model mismatch as well as parameter deviations.

  9. Genetic Background Modulates Impaired Excitability of Inhibitory Neurons in a Mouse Model of Dravet Syndrome

    PubMed Central

    Rubinstein, Moran; Westenbroek, Ruth E.; Yu, Frank H.; Jones, Christina J.; Scheuer, Todd; Catterall, William A.

    2014-01-01

    Dominant loss-of-function mutations in voltage-gated sodium channel NaV1.1 cause Dravet Syndrome, an intractable childhood-onset epilepsy. NaV1.1+/− Dravet Syndrome mice in C57BL/6 genetic background exhibit severe seizures, cognitive and social impairments, and premature death. Here we show that Dravet Syndrome mice in pure 129/SvJ genetic background have many fewer seizures and much less premature death than in pure C57BL/6 background. These mice also have a higher threshold for thermally induced seizures, fewer myoclonic seizures, and no cognitive impairment, similar to patients with Genetic Epilepsy with Febrile Seizures Plus. Consistent with this mild phenotype, mutation of NaV1.1 channels has much less physiological effect on neuronal excitability in 129/SvJ mice. In hippocampal slices, the excitability of CA1 Stratum Oriens interneurons is selectively impaired, while the excitability of CA1 pyramidal cells is unaffected. NaV1.1 haploinsufficiency results in increased rheobase and threshold for action potential firing and impaired ability to sustain high-frequency firing. Moreover, deletion of NaV1.1 markedly reduces the amplification and integration of synaptic events, further contributing to reduced excitability of interneurons. Excitability is less impaired in inhibitory neurons of Dravet Syndrome mice in 129/SvJ genetic background. Because specific deletion of NaV1.1 in forebrain GABAergic interneurons is sufficient to cause the symptoms of Dravet Syndrome in mice, our results support the conclusion that the milder phenotype in 129/SvJ mice is caused by lesser impairment of sodium channel function and electrical excitability in their forebrain interneurons. This mild impairment of excitability of interneurons leads to a milder disease phenotype in 129/SvJ mice, similar to Genetic Epilepsy with Febrile Seizures Plus in humans. PMID:25281316

  10. Integrated optimal allocation model for complex adaptive system of water resources management (I): Methodologies

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Ye, Yushi

    2015-12-01

    Due to the adaptive, dynamic and multi-objective characteristics of complex water resources systems, it is a considerable challenge to manage water resources in an efficient, equitable and sustainable way. An integrated optimal allocation model is proposed for complex adaptive system of water resources management. The model consists of three modules: (1) an agent-based module for revealing the evolution mechanism of the complex adaptive system using agent-based, system dynamic and non-dominated sorting genetic algorithm II methods, (2) an optimal module for deriving the decision set of water resources allocation using a multi-objective genetic algorithm, and (3) a multi-objective evaluation module for evaluating the efficiency of the optimal module and selecting the optimal water resources allocation scheme using the projection pursuit method. This study has provided a theoretical framework for adaptive allocation, dynamic allocation and multi-objective optimization for a complex adaptive system of water resources management.

  11. Parent Management Training-Oregon Model (PMTO™) in Mexico City: Integrating Cultural Adaptation Activities in an Implementation Model

    PubMed Central

    Baumann, Ana A.; Domenech Rodríguez, Melanie M.; Amador, Nancy G.; Forgatch, Marion S.; Parra-Cardona, J. Rubén

    2015-01-01

    This article describes the process of cultural adaptation at the start of the implementation of the Parent Management Training intervention-Oregon model (PMTO) in Mexico City. The implementation process was guided by the model, and the cultural adaptation of PMTO was theoretically guided by the cultural adaptation process (CAP) model. During the process of the adaptation, we uncovered the potential for the CAP to be embedded in the implementation process, taking into account broader training and economic challenges and opportunities. We discuss how cultural adaptation and implementation processes are inextricably linked and iterative and how maintaining a collaborative relationship with the treatment developer has guided our work and has helped expand our research efforts, and how building human capital to implement PMTO in Mexico supported the implementation efforts of PMTO in other places in the United States. PMID:26052184

  12. On the role of model-based monitoring for adaptive planning under uncertainty

    NASA Astrophysics Data System (ADS)

    Raso, Luciano; Kwakkel, Jan; Timmermans, Jos; Haasnoot, Mariolijn

    2016-04-01

    , triggered by the challenge of uncertainty in operational control, may offer solutions from which monitoring for adaptive planning can benefit. Specifically: (i) in control, observations are incorporated into the model through data assimilation, updating the present state, boundary conditions, and parameters based on new observations, diminishing the shadow of the past; (ii) adaptive control is a way to modify the characteristics of the internal model, incorporating new knowledge on the system, countervailing the inhibition of learning; and (iii) in closed-loop control, a continuous system update equips the controller with "inherent robustness", i.e., the capacity to adapt to new conditions even when these were not initially considered. We aim to explore how inherent robustness addresses the challenge of surprise. Innovations in model-based control might help to improve and adapt the models used to support adaptive delta management to new information (reducing uncertainty). Moreover, this would offer a starting point for using these models not only in the design of adaptive plans, but also as part of the monitoring. The proposed research requires multidisciplinary cooperation between control theory, the policy sciences, and integrated assessment modeling.

  13. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter.

    PubMed

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and are connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349

  14. Modeling of Rate-Dependent Hysteresis Using a GPO-Based Adaptive Filter

    PubMed Central

    Zhang, Zhen; Ma, Yaopeng

    2016-01-01

    A novel generalized play operator-based (GPO-based) nonlinear adaptive filter is proposed to model rate-dependent hysteresis nonlinearity for smart actuators. In the proposed filter, the input signal vector consists of the output of a tapped delay line. GPOs with various thresholds are used to construct a nonlinear network and are connected with the input signals. The output signal of the filter is composed of a linear combination of signals from the output of GPOs. The least-mean-square (LMS) algorithm is used to adjust the weights of the nonlinear filter. The modeling results of four adaptive filter methods are compared: GPO-based adaptive filter, Volterra filter, backlash filter and linear adaptive filter. Moreover, a phenomenological operator-based model, the rate-dependent generalized Prandtl-Ishlinskii (RDGPI) model, is compared to the proposed adaptive filter. The various rate-dependent modeling methods are applied to model the rate-dependent hysteresis of a giant magnetostrictive actuator (GMA). It is shown from the modeling results that the GPO-based adaptive filter can describe the rate-dependent hysteresis nonlinearity of the GMA more accurately and effectively. PMID:26861349
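
    The filter structure described above (a tapped delay line, a bank of play operators with different thresholds, and LMS-trained output weights) can be sketched as follows. The classical play operator is used as a stand-in for the generalized play operator, whose envelope functions are not specified here, and the identification signal is synthetic.

        # Sketch of a play-operator network trained with LMS; the classical play
        # operator stands in for the paper's generalized play operator.
        import numpy as np

        class PlayOperatorFilter:
            def __init__(self, n_taps=3, thresholds=(0.0, 0.2, 0.4, 0.6), mu=0.05):
                self.r = np.array(thresholds)
                self.mu = mu
                self.w = np.zeros(n_taps * len(thresholds))       # LMS weights
                self.state = np.zeros((n_taps, len(thresholds)))  # play-operator memory
                self.delay = np.zeros(n_taps)                     # tapped delay line

            def step(self, x, d):
                """One sample: input x, desired output d. Returns the prediction."""
                self.delay = np.roll(self.delay, 1); self.delay[0] = x
                for i, xi in enumerate(self.delay):               # update each operator
                    self.state[i] = np.minimum(xi + self.r,
                                               np.maximum(xi - self.r, self.state[i]))
                phi = self.state.ravel()
                y = self.w @ phi                                  # filter output
                self.w += self.mu * (d - y) * phi                 # LMS weight update
                return y

        # Toy identification of a rate-dependent hysteretic response (synthetic data)
        t = np.linspace(0, 4 * np.pi, 2000)
        x = np.sin(t) * (1 + 0.3 * np.sin(0.1 * t))
        d = np.sin(t - 0.3 * np.abs(np.cos(t)))                   # hypothetical target
        f = PlayOperatorFilter()
        err = [d[k] - f.step(x[k], d[k]) for k in range(len(t))]
        print("final RMS error:", np.sqrt(np.mean(np.square(err[-500:]))))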

  15. Construction and solution of an adaptive image-restoration model for removing blur and mixed noise

    NASA Astrophysics Data System (ADS)

    Wang, Youquan; Cui, Lihong; Cen, Yigang; Sun, Jianjun

    2016-03-01

    We establish a practical regularized least-squares model with adaptive regularization for dealing with blur and mixed noise in images. This model has some advantages, such as good adaptability for edge restoration and noise suppression due to the application of a priori spatial information obtained from a polluted image. We further focus on finding an important feature of image restoration using an adaptive restoration model with different regularization parameters in polluted images. A more important observation is that the gradient of an image varies regularly from one regularization parameter to another under certain conditions. Then, a modified graduated nonconvexity approach combined with a median filter version of a spatial information indicator is proposed to seek the solution of our adaptive image-restoration model by applying variable splitting and weighted penalty techniques. Numerical experiments show that the method is robust and effective for dealing with various blur and mixed noise levels in images.

  16. Multivariate adaptive regression splines models for the prediction of energy expenditure in children and adolescents

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Advanced mathematical models have the potential to capture the complex metabolic and physiological processes that result in heat production, or energy expenditure (EE). Multivariate adaptive regression splines (MARS), is a nonparametric method that estimates complex nonlinear relationships by a seri...

  17. A spatially explicit model simulating western corn rootworm (Coleoptera: Chrysomelidae) adaptation to insect-resistant maize.

    PubMed

    Storer, Nicholas P

    2003-10-01

    A stochastic spatially explicit computer model is described that simulates the adaptation by western corn rootworm, Diabrotica virgifera virgifera LeConte, to rootworm-resistance traits in maize. The model reflects the ecology of the rootworm in much of the corn belt of the United States. It includes functions for crop development, egg and larval mortality, adult emergence, mating, egg laying, mortality and dispersal, and alternative methods of rootworm control, to simulate the population dynamics of the rootworm. Adaptation to the resistance trait is assumed to be controlled by a monogenic diallelic locus, whereby the allele for adaptation varies from incompletely recessive to incompletely dominant, depending on the efficacy of the resistance trait. The model was used to compare the rate at which the adaptation allele spread through the population under different nonresistant maize refuge deployment scenarios, and under different levels of crop resistance. For a given refuge size, the model indicated that placing the nonresistant refuge in a block within a rootworm-resistant field would be likely to delay rootworm adaptation rather longer than planting the refuge in separate fields in varying locations. If a portion of the refuge were to be planted in the same fields or in-field blocks each year, rootworm adaptation would be delayed substantially. Rootworm adaptation rates are also predicted to be greatly affected by the level of crop resistance, because functional dominance is expected to depend on dose. If the dose of the insecticidal protein in the maize is sufficiently high to kill >90% of heterozygotes and approximately 100% of susceptible homozygotes, the trait is predicted to be much more durable than if the dose is lower. A partial sensitivity analysis showed that parameters relating to adult dispersal affected the rate of pest adaptation. Partial validation of the model was achieved by comparing output of the model with field data on
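
    The qualitative dose/dominance argument can be reproduced with a much simpler deterministic single-locus recursion under random mating and random oviposition between refuge and Bt maize, as sketched below. This is only an illustration of the selection logic, not the stochastic, spatially explicit model described above; all parameter values are hypothetical.

        # Deterministic single-locus sketch of resistance-allele spread under a
        # refuge strategy (random mating, random oviposition); illustrative only.
        def generations_to_resistance(refuge=0.2, surv_ss=0.02, dominance=0.1,
                                      p0=1e-3, threshold=0.5, max_gen=500):
            """Generations until the resistance-allele frequency exceeds `threshold`.

            surv_ss   : survival of susceptible homozygotes on the resistant crop
            dominance : functional dominance of resistance (0 = recessive, 1 = dominant)
            """
            surv_rs = surv_ss + dominance * (1.0 - surv_ss)
            # Genotype fitness averaged over refuge (no selection) and Bt maize
            w_rr = refuge + (1 - refuge) * 1.0
            w_rs = refuge + (1 - refuge) * surv_rs
            w_ss = refuge + (1 - refuge) * surv_ss
            p = p0
            for gen in range(1, max_gen + 1):
                q = 1.0 - p
                w_bar = p*p*w_rr + 2*p*q*w_rs + q*q*w_ss
                p = (p*p*w_rr + p*q*w_rs) / w_bar   # allele frequency after selection
                if p >= threshold:
                    return gen
            return max_gen

        for dom in (0.02, 0.1, 0.5):                # high dose -> low dominance
            print(f"dominance={dom}: {generations_to_resistance(dominance=dom)} generations")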

  18. Multiobjective adaptive surrogate modeling-based optimization for parameter estimation of large, complex geophysical models

    NASA Astrophysics Data System (ADS)

    Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu

    2016-03-01

    Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.

  19. The Basic Immune Simulator: An agent-based model to study the interactions between innate and adaptive immunity

    PubMed Central

    Folcik, Virginia A; An, Gary C; Orosz, Charles G

    2007-01-01

    Background We introduce the Basic Immune Simulator (BIS), an agent-based model created to study the interactions between the cells of the innate and adaptive immune system. Innate immunity, the initial host response to a pathogen, generally precedes adaptive immunity, which generates immune memory for an antigen. The BIS simulates basic cell types, mediators and antibodies, and consists of three virtual spaces representing parenchymal tissue, secondary lymphoid tissue and the lymphatic/humoral circulation. The BIS includes a Graphical User Interface (GUI) to facilitate its use as an educational and research tool. Results The BIS was used to qualitatively examine the innate and adaptive interactions of the immune response to a viral infection. Calibration was accomplished via a parameter sweep of initial agent population size, and comparison of simulation patterns to those reported in the basic science literature. The BIS demonstrated that the degree of the initial innate response was a crucial determinant for an appropriate adaptive response. Deficiency or excess in innate immunity resulted in excessive proliferation of adaptive immune cells. Deficiency in any of the immune system components increased the probability of failure to clear the simulated viral infection. Conclusion The behavior of the BIS matches both normal and pathological behavior patterns in a generic viral infection scenario. Thus, the BIS effectively translates mechanistic cellular and molecular knowledge regarding the innate and adaptive immune response and reproduces the immune system's complex behavioral patterns. The BIS can be used both as an educational tool to demonstrate the emergence of these patterns and as a research tool to systematically identify potential targets for more effective treatment strategies for diseases processes including hypersensitivity reactions (allergies, asthma), autoimmunity and cancer. We believe that the BIS can be a useful addition to the growing suite of in

  20. Estimating North American background ozone in U.S. surface air with two independent global models: Variability, uncertainties, and recommendations

    EPA Science Inventory

    Accurate estimates for North American background (NAB) ozone (O3) in surface air over the United States are needed for setting and implementing an attainable national O3 standard. These estimates rely on simulations with atmospheric chemistry-transport models that set North Amer...

  1. Generalized Galileons: All scalar models whose curved background extensions maintain second-order field equations and stress tensors

    SciTech Connect

    Deffayet, C.; Deser, S.; Esposito-Farese, G.

    2009-09-15

    We extend to curved backgrounds all flat-space scalar field models that obey purely second-order equations, while maintaining their second-order dependence on both field and metric. This extension simultaneously restores the originally higher-derivative stress tensors to second order as well. The process is transparent and uniform for all dimensions.

  2. Solid modelling for the manipulative robot arm (power) and adaptive vision control for space station missions

    NASA Technical Reports Server (NTRS)

    Harrand, V.; Choudry, A.

    1987-01-01

    The structure of a flexible arm derived from concatenation of the Stewart-Table-based links was studied. Solid modeling not only provides a realistic simulation, but is also essential for studying vision algorithms. These algorithms could be used for the adaptive control of the arm, drawing on well-known techniques such as shape from shading, edge detection, orientation, etc. Details of solid modeling and its relation to vision-based adaptive control are discussed.

  3. Adapting existing models of highly contagious diseases to countries other than their country of origin.

    PubMed

    Dubé, C; Sanchez, J; Reeves, A

    2011-08-01

    Many countries do not have the resources to develop epidemiological models of animal diseases. As a result, it is tempting to use models developed in other countries. However, an existing model may need to be adapted in order for it to be appropriately applied in a country, region, or situation other than that for which it was originally developed. The process of adapting a model has a number of benefits for both model builders and model users. For model builders, it provides insight into the applicability of their model and potentially the opportunity to obtain data for operational validation of components of their model. For users, it is a chance to think about the infection transmission process in detail, to review the data available for modelling, and to learn the principles of epidemiological modelling. Various issues must be addressed when considering adapting a model. Most critically, the assumptions and purpose behind the model must be thoroughly understood, so that new users can determine its suitability for their situation. The process of adapting a model might simply involve changing existing model parameter values (for example, to better represent livestock demographics in a country or region), or might require more substantial (and more labour-intensive) changes to the model code and conceptual model. Adapting a model is easier if the model has a user-friendly interface and easy-to-read user documentation. In addition, models built as frameworks within which disease processes and livestock demographics and contacts are flexible are good candidates for technology transfer projects, which lead to long-term collaborations. PMID:21961228

  4. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
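
    For orientation, the sketch below implements a generic first-order model-reference adaptive controller with a Lyapunov-rule adaptation law of the kind alluded to above (stability argued via Liapunov's direct method). It is illustrative only and is not the report's specific time-domain or frequency-domain design; all values are hypothetical.

        # Generic first-order MRAC sketch with a Lyapunov-rule adaptation law.
        import numpy as np

        def mrac_first_order(a=1.0, b=2.0, am=4.0, bm=4.0, gamma=5.0,
                             dt=1e-3, T=20.0):
            """Plant: dy/dt = -a*y + b*u (a, b unknown to the controller, b > 0).
            Reference model: dym/dt = -am*ym + bm*r."""
            n = int(T / dt)
            y = ym = 0.0
            th1 = th2 = 0.0                        # adjustable feedforward/feedback gains
            errs = np.empty(n)
            for i in range(n):
                r = 1.0 if (i * dt) % 10 < 5 else -1.0      # square-wave reference
                u = th1 * r - th2 * y                       # control law
                e = y - ym                                  # tracking error
                th1 += dt * (-gamma * e * r)                # Lyapunov adaptation laws
                th2 += dt * ( gamma * e * y)                # (sign(b) = +1 assumed)
                y  += dt * (-a * y + b * u)
                ym += dt * (-am * ym + bm * r)
                errs[i] = e
            return errs

        errs = mrac_first_order()
        print("RMS tracking error, first vs last quarter:",
              np.sqrt(np.mean(errs[:5000]**2)), np.sqrt(np.mean(errs[-5000:]**2)))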

  5. Cold dark matter confronts the cosmic microwave background - Large-angular-scale anisotropies in Omega_0 + lambda = 1 models

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola

    1992-01-01

    A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Omega_0 of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.

  6. Adaptation of model parameters of a rail model at measured rail compliances

    NASA Astrophysics Data System (ADS)

    Ripke, B.

    1992-01-01

    A method for calculation of unknown parameters of a rail model is presented. Measurements were carried out on a test rail of a locomotive factory by means of pulse excitation. Accelerations of the rail head in the vertical and lateral directions were measured with an accelerometer and dilatations were measured as a function of rail flexion with a piezo film. Input and transfer compliances were measured. The obtained data were controlled by means of a fast Fourier transform analyzer and recorded on magnetic tapes. A model was developed with the finite element method by considering the rail as a Timoshenko beam. Stiffness and damping of bulkhead and tiebar were obtained. A variable threshold mass was introduced for model adaptation to the experimental results in the low-frequency range.

  7. Direct Adaptive Control Methodologies for Flexible-Joint Space Manipulators with Uncertainties and Modeling Errors

    NASA Astrophysics Data System (ADS)

    Ulrich, Steve

    This work addresses the direct adaptive trajectory tracking control problem associated with lightweight space robotic manipulators that exhibit elastic vibrations in their joints, and which are subject to parametric uncertainties and modeling errors. Unlike existing adaptive control methodologies, the proposed flexible-joint control techniques do not require identification of unknown parameters, or mathematical models of the system to be controlled. The direct adaptive controllers developed in this work are based on the model reference adaptive control approach, and manage modeling errors and parametric uncertainties by time-varying the controller gains using new adaptation mechanisms, thereby reducing the errors between an ideal model and the actual robot system. More specifically, new decentralized adaptation mechanisms derived from the simple adaptive control technique and fuzzy logic control theory are considered in this work. Numerical simulations compare the performance of the adaptive controllers with a nonadaptive and a conventional model-based controller, in the context of 12.6 m × 12.6 m square trajectory tracking. To validate the robustness of the controllers to modeling errors, a new dynamics formulation that includes several nonlinear effects usually neglected in flexible-joint dynamics models is proposed. Results obtained with the adaptive methodologies demonstrate an increased robustness to both uncertainties in joint stiffness coefficients and dynamics modeling errors, as well as highly improved tracking performance compared with the nonadaptive and model-based strategies. Finally, this work considers the partial state feedback problem related to flexible-joint space robotic manipulators equipped only with sensors that provide noisy measurements of motor positions and velocities. An extended Kalman filter-based estimation strategy is developed to estimate all state variables in real-time. The state estimation filter is combined with an adaptive

  8. Response normalization and blur adaptation: Data and multi-scale model

    PubMed Central

    Elliott, Sarah L.; Georgeson, Mark A.; Webster, Michael A.

    2011-01-01

    Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log–log) slopes from −2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels. PMID:21307174
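
    The adapting and test stimuli in this study are characterized by the (log-log) slope of their amplitude spectra. As a small, hypothetical sketch (the normalization and parameters are assumptions, not taken from the paper), the function below generates 2-D Gaussian noise with a prescribed spectral slope, which is one way stimuli of this kind can be produced: slope near -2 is strongly blurred, near -1 approximates a focused 1/f image, and near 0 is strongly sharpened.

    ```python
    import numpy as np

    def noise_with_spectral_slope(n=256, slope=-1.0, rng=None):
        """2-D Gaussian noise whose amplitude spectrum falls off as f**slope
        (slope = -1 approximates a 'focused' 1/f image, -2 is strongly blurred,
        0 is strongly sharpened/white). Illustrative only."""
        rng = np.random.default_rng() if rng is None else rng
        white = rng.standard_normal((n, n))
        fy = np.fft.fftfreq(n)[:, None]
        fx = np.fft.fftfreq(n)[None, :]
        f = np.hypot(fx, fy)
        f[0, 0] = 1.0                      # avoid division by zero at DC
        shaped = np.fft.ifft2(np.fft.fft2(white) * f**slope).real
        return (shaped - shaped.mean()) / shaped.std()   # zero mean, unit variance

    blurred   = noise_with_spectral_slope(slope=-2.0)    # adaptor: steep spectrum
    focused   = noise_with_spectral_slope(slope=-1.0)    # test: natural 1/f slope
    sharpened = noise_with_spectral_slope(slope=0.0)     # adaptor: flat spectrum
    ```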

  9. HMM-Based Style Control for Expressive Speech Synthesis with Arbitrary Speaker's Voice Using Model Adaptation

    NASA Astrophysics Data System (ADS)

    Nose, Takashi; Tachibana, Makoto; Kobayashi, Takao

    This paper presents methods for controlling the intensity of emotional expressions and speaking styles of an arbitrary speaker's synthetic speech by using a small amount of his/her speech data in HMM-based speech synthesis. Model adaptation approaches are introduced into the style control technique based on the multiple-regression hidden semi-Markov model (MRHSMM). Two different approaches are proposed for training a target speaker's MRHSMMs. The first one is MRHSMM-based model adaptation in which the pretrained MRHSMM is adapted to the target speaker's model. For this purpose, we formulate the MLLR adaptation algorithm for the MRHSMM. The second method utilizes simultaneous adaptation of speaker and style from an average voice model to obtain the target speaker's style-dependent HSMMs which are used for the initialization of the MRHSMM. From the result of subjective evaluation using adaptation data of 50 sentences of each style, we show that the proposed methods outperform the conventional speaker-dependent model training when using the same size of speech data of the target speaker.

  10. Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation

    SciTech Connect

    Cui, Xiaohui; Potok, Thomas E

    2009-12-01

    To better understand insurgent activities and asymmetric warfare, a social adaptive model for modeling multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study using particle swarm modeling, a widely used non-linear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary requirements for insurgents to efficiently attain their objective.
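
    The particle swarm metaphor referred to above rests on a simple velocity/position update in which each agent is pulled toward its own best-known position and toward the swarm's best-known position. A generic sketch of that canonical update (run on a made-up objective, not the insurgency simulation itself) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def objective(x):                       # stand-in "target attractiveness" landscape
        return np.sum((x - 3.0) ** 2, axis=-1)

    n_agents, dim, iters = 30, 2, 200
    w, c1, c2 = 0.7, 1.5, 1.5               # inertia, cognitive and social weights
    x = rng.uniform(-10, 10, (n_agents, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), objective(x)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_agents, dim))
        # canonical update: inertia + pull toward personal best + pull toward swarm best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        val = objective(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("swarm best:", gbest)             # should approach (3, 3)
    ```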

  11. Real-Time Robust Adaptive Modeling and Scheduling for an Electronic Commerce Server

    NASA Astrophysics Data System (ADS)

    Du, Bing; Ruan, Chun

    With the increasing importance and pervasiveness of Internet services, providing performance guarantees under extreme overload is becoming a challenge for proliferating electronic commerce services. This paper describes a real-time optimization modeling and scheduling approach for performance guarantees in electronic commerce servers. We show that an electronic commerce server may be simulated as a multi-tank system. The robust adaptive server model is subject to unknown additive load disturbances and uncertain model matching. Overload control techniques are based on adaptive admission control to achieve timing guarantees. We evaluate the performance of the model using a complex simulation subjected to varying model parameters and massive overload.

  12. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  13. AN OVERVIEW OF THE LAKE MICHIGAN MASS BALANCE MODELING PROJECT: BACKGROUND, ACCOMPLISHMENTS, AND FUTURE WORK

    EPA Science Inventory

    Modeling associated with the Lake Michigan Mass Balance Project (LMMBP) is being conducted using WASP-type water quality models to gain a better understanding of the ecosystem transport and fate of polychlorinated biphenyls (PCBs), atrazine, mercury, and trans-nonachlor in Lake M...

  14. Data for Environmental Modeling (D4EM): Background and Applications of Data Automation

    EPA Science Inventory

    The Data for Environmental Modeling (D4EM) project demonstrates the development of a comprehensive set of open source software tools that overcome obstacles to accessing data needed by automating the process of populating model input data sets with environmental data available fr...

  15. THE HYDROCARBON SPILL SCREENING MODEL (HSSM), VOLUME 2: THEORETICAL BACKGROUND AND SOURCE CODES

    EPA Science Inventory

    A screening model for subsurface release of a nonaqueous phase liquid which is less dense than water (LNAPL) is presented. The model conceptualizes the release as consisting of 1) vertical transport from near the surface to the capillary fringe, 2) radial spreading of an LNAPL l...

  16. Design of a Model Reference Adaptive Controller for an Unmanned Air Vehicle

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Matsutani, Megumi; Annaswamy, Anuradha M.

    2010-01-01

    This paper presents the "Adaptive Control Technology for Safe Flight (ACTS)" architecture, which consists of a non-adaptive controller that provides satisfactory performance under nominal flying conditions, and an adaptive controller that provides robustness under off-nominal ones. The design and implementation procedures of both controllers are presented. The aim of these procedures, which encompass both theoretical and practical considerations, is to develop a controller suitable for flight. The ACTS architecture is applied to the Generic Transport Model developed by NASA-Langley Research Center. The GTM is a dynamically scaled test model of a transport aircraft for which a flight-test article and a high-fidelity simulation are available. The nominal controller at the core of the ACTS architecture has a multivariable LQR-PI structure while the adaptive one has a direct, model reference structure. The main control surfaces as well as the throttles are used as control inputs. The inclusion of the latter alleviates the pilot's workload by eliminating the need for cancelling the pitch coupling generated by changes in thrust. Furthermore, the independent usage of the throttles by the adaptive controller enables their use for attitude control. Advantages and potential drawbacks of adaptation are demonstrated by performing high fidelity simulations of a flight-validated controller and of its adaptive augmentation.

  17. Simulation of the dispersion of nuclear contamination using an adaptive Eulerian grid model.

    PubMed

    Lagzi, I; Kármán, D; Turányi, T; Tomlin, A S; Haszpra, L

    2004-01-01

    Application of an Eulerian model using layered adaptive unstructured grids coupled to a meso-scale meteorological model is presented for modelling the dispersion of nuclear contamination following the accidental release from a single but strong source to the atmosphere. The model automatically places a finer resolution grid, adaptively in time, in regions where high spatial numerical error is expected. The high-resolution grid region follows the movement of the contaminated air over time. Using this method, grid resolutions of the order of 6 km can be achieved in a computationally effective way. The concept is illustrated by the simulation of hypothetical nuclear accidents at the Paks NPP, in Central Hungary. The paper demonstrates that the adaptive model can achieve accuracy comparable to that of a high-resolution Eulerian model using significantly fewer grid points and less computer simulation time. PMID:15149762

  18. Myosin filament polymerization and depolymerization in a model of partial length adaptation in airway smooth muscle.

    PubMed

    Ijpma, Gijs; Al-Jumaily, Ahmed M; Cairns, Simeon P; Sieck, Gary C

    2011-09-01

    Length adaptation in airway smooth muscle (ASM) is attributed to reorganization of the cytoskeleton, and in particular the contractile elements. However, a constantly changing lung volume with tidal breathing (hence changing ASM length) is likely to restrict full adaptation of ASM for force generation. There is likely to be continuous length adaptation of ASM between states of incomplete or partial length adaptation. We propose a new model that assimilates findings on myosin filament polymerization/depolymerization, partial length adaptation, isometric force, and shortening velocity to describe this continuous length adaptation process. In this model, the ASM adapts to an optimal force-generating capacity in a repeating cycle of events. Initially the myosin filament, shortened by prior length changes, associates with two longer actin filaments. The actin filaments are located adjacent to the myosin filaments, such that all myosin heads overlap with actin to permit maximal cross-bridge cycling. Since in this model the actin filaments are usually longer than myosin filaments, the excess length of the actin filament is located randomly with respect to the myosin filament. Once activated, the myosin filament elongates by polymerization along the actin filaments, with the growth limited by the overlap of the actin filaments. During relaxation, the myosin filaments dissociate from the actin filaments, and then the cycle repeats. This process causes a gradual adaptation of force and instantaneous adaptation of shortening velocity. Good agreement is found between model simulations and the experimental data depicting the relationship between force development, myosin filament density, or shortening velocity and length. PMID:21659490

  19. REVIEW: Internal models in sensorimotor integration: perspectives from adaptive control theory

    NASA Astrophysics Data System (ADS)

    Tin, Chung; Poon, Chi-Sang

    2005-09-01

    Internal models and adaptive controls are empirical and mathematical paradigms that have evolved separately to describe learning control processes in brain systems and engineering systems, respectively. This paper presents a comprehensive appraisal of the correlation between these paradigms with a view to forging a unified theoretical framework that may benefit both disciplines. It is suggested that the classic equilibrium-point theory of impedance control of arm movement is analogous to continuous gain-scheduling or high-gain adaptive control within or across movement trials, respectively, and that the recently proposed inverse internal model is akin to adaptive sliding control originally for robotic manipulator applications. Modular internal models' architecture for multiple motor tasks is a form of multi-model adaptive control. Stochastic methods, such as generalized predictive control, reinforcement learning, Bayesian learning and Hebbian feedback covariance learning, are reviewed and their possible relevance to motor control is discussed. Possible applicability of a Luenberger observer and an extended Kalman filter to state estimation problems—such as sensorimotor prediction or the resolution of vestibular sensory ambiguity—is also discussed. The important role played by vestibular system identification in postural control suggests an indirect adaptive control scheme whereby system states or parameters are explicitly estimated prior to the implementation of control. This interdisciplinary framework should facilitate the experimental elucidation of the mechanisms of internal models in sensorimotor systems and the reverse engineering of such neural mechanisms into novel brain-inspired adaptive control paradigms in future.

  20. Cosmic microwave background anisotropies in cold dark matter models with cosmological constant: The intermediate versus large angular scales

    NASA Technical Reports Server (NTRS)

    Stompor, Radoslaw; Gorski, Krzysztof M.

    1994-01-01

    We obtain predictions for cosmic microwave background anisotropies at angular scales near 1 deg in the context of cold dark matter models with a nonzero cosmological constant, normalized to the Cosmic Background Explorer (COBE) Differential Microwave Radiometer (DMR) detection. The results are compared to those computed in the matter-dominated models. We show that the coherence length of the Cosmic Microwave Background (CMB) anisotropy is almost insensitive to cosmological parameters, and the rms amplitude of the anisotropy increases moderately with decreasing total matter density, while being most sensitive to the baryon abundance. We apply these results in the statistical analysis of the published data from the UCSB South Pole (SP) experiment (Gaier et al. 1992; Schuster et al. 1993). We reject most of the Cold Dark Matter (CDM)-Lambda models at the 95% confidence level when both SP scans are simulated together (although the combined data set renders less stringent limits than the Gaier et al. data alone). However, the Schuster et al. data considered alone as well as the results of some other recent experiments (MAX, MSAM, Saskatoon), suggest that typical temperature fluctuations on degree scales may be larger than is indicated by the Gaier et al. scan. If so, CDM-Lambda models may indeed provide, from a point of view of CMB anisotropies, an acceptable alternative to flat CDM models.

  1. The Targowski and Bowman Model of Communication: Problems and Proposals for Adaptation.

    ERIC Educational Resources Information Center

    van Hoorde, Johan

    1990-01-01

    Outlines and analyzes the Targowski/Bowman model of communication. Suggests adaptations for the model, noting that these changes increase the model's explanatory power and its capacity to predict the communicative outcome of a message given in a business situation. (MM)

  2. Adaptation of a general circulation model to ocean dynamics

    NASA Technical Reports Server (NTRS)

    Turner, R. E.; Rees, T. H.; Woodbury, G. E.

    1976-01-01

    A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global ocean and limited region situations. Strengths and weaknesses of the model are also presented.

  3. Adaptive Ambient Illumination Based on Color Harmony Model

    NASA Astrophysics Data System (ADS)

    Kikuchi, Ayano; Hirai, Keita; Nakaguchi, Toshiya; Tsumura, Norimichi; Miyake, Yoichi

    We investigated the relationship between ambient illumination and psychological effect by applying a modified color harmony model. We verified the proposed model by analyzing correlation between psychological value and modified color harmony score. Experimental results showed the possibility to obtain the best color for illumination using this model.

  4. Adapting the Sport Education Model for Children with Disabilities

    ERIC Educational Resources Information Center

    Presse, Cindy; Block, Martin E.; Horton, Mel; Harvey, William J.

    2011-01-01

    The sport education model (SEM) has been widely used as a curriculum and instructional model to provide children with authentic and active sport experiences in physical education. In this model, students are assigned various roles to gain a deeper understanding of the sport or activity. This article provides a brief overview of the SEM and…

  5. Crop plants as models for understanding plant adaptation and diversification

    PubMed Central

    Olsen, Kenneth M.; Wendel, Jonathan F.

    2013-01-01

    Since the time of Darwin, biologists have understood the promise of crop plants and their wild relatives for providing insight into the mechanisms of phenotypic evolution. The intense selection imposed by our ancestors during plant domestication and subsequent crop improvement has generated remarkable transformations of plant phenotypes. Unlike evolution in natural settings, descendent and antecedent conditions for crop plants are often both extant, providing opportunities for direct comparisons through crossing and other experimental approaches. Moreover, since domestication has repeatedly generated a suite of “domestication syndrome” traits that are shared among crops, opportunities exist for gaining insight into the genetic and developmental mechanisms that underlie parallel adaptive evolution. Advances in our understanding of the genetic architecture of domestication-related traits have emerged from combining powerful molecular technologies with advanced experimental designs, including nested association mapping, genome-wide association studies, population genetic screens for signatures of selection, and candidate gene approaches. These studies may be combined with high-throughput evaluations of the various “omics” involved in trait transformation, revealing a diversity of underlying causative mutations affecting phenotypes and their downstream propagation through biological networks. We summarize the state of our knowledge of the mutational spectrum that generates phenotypic novelty in domesticated plant species, and our current understanding of how domestication can reshape gene expression networks and emergent phenotypes. An exploration of traits that have been subject to similar selective pressures across crops (e.g., flowering time) suggests that a diversity of targeted genes and causative mutational changes can underlie parallel adaptation in the context of crop evolution. PMID:23914199

  6. Crop plants as models for understanding plant adaptation and diversification.

    PubMed

    Olsen, Kenneth M; Wendel, Jonathan F

    2013-01-01

    Since the time of Darwin, biologists have understood the promise of crop plants and their wild relatives for providing insight into the mechanisms of phenotypic evolution. The intense selection imposed by our ancestors during plant domestication and subsequent crop improvement has generated remarkable transformations of plant phenotypes. Unlike evolution in natural settings, descendent and antecedent conditions for crop plants are often both extant, providing opportunities for direct comparisons through crossing and other experimental approaches. Moreover, since domestication has repeatedly generated a suite of "domestication syndrome" traits that are shared among crops, opportunities exist for gaining insight into the genetic and developmental mechanisms that underlie parallel adaptive evolution. Advances in our understanding of the genetic architecture of domestication-related traits have emerged from combining powerful molecular technologies with advanced experimental designs, including nested association mapping, genome-wide association studies, population genetic screens for signatures of selection, and candidate gene approaches. These studies may be combined with high-throughput evaluations of the various "omics" involved in trait transformation, revealing a diversity of underlying causative mutations affecting phenotypes and their downstream propagation through biological networks. We summarize the state of our knowledge of the mutational spectrum that generates phenotypic novelty in domesticated plant species, and our current understanding of how domestication can reshape gene expression networks and emergent phenotypes. An exploration of traits that have been subject to similar selective pressures across crops (e.g., flowering time) suggests that a diversity of targeted genes and causative mutational changes can underlie parallel adaptation in the context of crop evolution. PMID:23914199

  7. Demand modelling of passenger air travel: An analysis and extension. Volume 1: Background and summary

    NASA Technical Reports Server (NTRS)

    Jacobson, I. D.

    1978-01-01

    The framework for a model of travel demand which will be useful in predicting the total market for air travel between two cities is discussed. Variables to be used in determining the need for air transportation where none currently exists and the effect of changes in system characteristics on attracting latent demand are identified. Existing models are examined in order to provide insight into their strong points and shortcomings. Much of the existing behavioral research in travel demand is incorporated to allow the inclusion of non-economic factors, such as convenience. The model developed is characterized as a market segmentation model. This is a consequence of the strengths of disaggregation and its natural evolution to a usable aggregate formulation. The need for this approach both pedagogically and mathematically is discussed.

  8. Modeling the performance of direct-detection Doppler lidar systems including cloud and solar background variability.

    PubMed

    McGill, M J; Hart, W D; McKay, J A; Spinhirne, J D

    1999-10-20

    Previous modeling of the performance of spaceborne direct-detection Doppler lidar systems assumed extremely idealized atmospheric models. Here we develop a technique for modeling the performance of these systems in a more realistic atmosphere, based on actual airborne lidar observations. The resulting atmospheric model contains cloud and aerosol variability that is absent in other simulations of spaceborne Doppler lidar instruments. To produce a realistic simulation of daytime performance, we include solar radiance values that are based on actual measurements and are allowed to vary as the viewing scene changes. Simulations are performed for two types of direct-detection Doppler lidar system: the double-edge and the multichannel techniques. Both systems were optimized to measure winds from Rayleigh backscatter at 355 nm. Simulations show that the measurement uncertainty during daytime is degraded by only approximately 10-20% compared with nighttime performance, provided that a proper solar filter is included in the instrument design. PMID:18324169

  9. Modeling cognitive effects on visual search for targets in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Ruda, Harald; Hoffman, James

    1998-07-01

    To understand how a human operator performs visual search in complex scenes, it is necessary to take into account top-down cognitive biases in addition to bottom-up visual saliency effects. We constructed a model to elucidate the relationship between saliency and cognitive effects in the domain of visual search for distant targets in photo-realistic images of cluttered scenes. In this domain, detecting targets is difficult and requires high visual acuity. Sufficient acuity is only available near the fixation point, i.e. in the fovea. Hence, the choice of fixation points is the most important determinant of whether targets get detected. We developed a model that predicts the 2D distribution of fixation probabilities directly from an image. Fixation probabilities were computed as a function of local contrast (saliency effect) and proximity to the horizon (cognitive effect: distant targets are more likely to be found close to the horizon). For validation, the model's predictions were compared to ensemble statistics of subjects' actual fixation locations, collected with an eye-tracker. The model's predictions correlated well with the observed data. Disabling the horizon-proximity functionality of the model significantly degraded prediction accuracy, demonstrating that cognitive effects must be accounted for when modeling visual search.

  10. Left ventricle motion modeling and analysis by adaptive-size physically based models

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Chen; Goldgof, Dmitry B.

    1992-06-01

    This paper presents a new physically based modeling method which employs adaptive-size meshes to model left ventricle (LV) shape and track its motion during the cardiac cycle. The mesh size increases or decreases dynamically during the surface reconstruction process to locate nodes near surface areas of interest and to minimize the fitting error. Further, presented with multiple 3-D data frames, the mesh size varies as the LV undergoes nonrigid motion. Simulation results illustrate the performance and accuracy of the proposed algorithm. Then, the algorithm is applied to the volumetric temporal cardiac data. The LV data was acquired by a 3-D computed tomography scanner. It was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 by 128 by 118) images taken through the heart cycle.

  11. A low-dimensional, time-resolved and adapting model neuron.

    PubMed

    Cartling, B

    1996-07-01

    A low-dimensional, time-resolved and adapting model neuron is formulated and evaluated. The model is an extension of the integrate-and-fire type of model with respect to adaptation and of a recent adapting firing-rate model with respect to time-resolution. It is obtained from detailed conductance-based models by a separation of fast and slow ionic processes of action potential generation. The model explicitly includes firing-rate regulation via the slow afterhyperpolarization phase of action potentials, which is controlled by calcium-sensitive potassium channels. It is demonstrated that the model closely reproduces the firing pattern and excitability behaviour of a detailed multicompartment conductance-based model of a neocortical pyramidal cell. The inclusion of adaptation in a model neuron is important for its capability to generate complex dynamics of networks of interconnected neurons. The time-resolution is required for studies of systems in which the temporal aspects of neural coding are important. The simplicity of the model facilitates analytical studies, insight into neurocomputational mechanisms and simulations of large-scale systems. The capability to generate complex network computations may also make the model useful in practical applications of artificial neural networks. PMID:8891839
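
    As a loose illustration of the kind of model described above, the sketch below implements a leaky integrate-and-fire neuron extended with a slow, spike-triggered afterhyperpolarization conductance standing in for the calcium-activated potassium current; all parameter values are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    # Minimal leaky integrate-and-fire neuron with a spike-triggered afterhyperpolarization
    # (AHP) conductance producing firing-rate adaptation; parameters are made up.
    dt, T = 0.1, 1000.0                    # ms
    tau_m, E_L, R = 20.0, -70.0, 10.0      # membrane time constant (ms), rest (mV), resistance (MOhm)
    V_th, V_reset = -54.0, -65.0           # spike threshold and reset (mV)
    E_K, tau_ahp, dg = -90.0, 200.0, 0.002 # AHP reversal (mV), decay (ms), increment per spike

    V, g_ahp, I = E_L, 0.0, 2.2            # constant input current (nA)
    spikes = []
    for k in range(int(T / dt)):
        dV = (-(V - E_L) - R * g_ahp * (V - E_K) + R * I) / tau_m
        V += dV * dt
        g_ahp += -g_ahp / tau_ahp * dt     # slow decay of the adaptation conductance
        if V >= V_th:
            spikes.append(k * dt)
            V = V_reset
            g_ahp += dg                    # each spike builds up the AHP current

    isi = np.diff(spikes)
    print(f"first ISI {isi[0]:.1f} ms, last ISI {isi[-1]:.1f} ms (lengthening = adaptation)")
    ```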

  12. Neuro- and sensoriphysiological Adaptations to Microgravity using Fish as Model System

    NASA Astrophysics Data System (ADS)

    Anken, R.

    The phylogenetic development of all organisms took place under constant gravity conditions, against which they achieved specific countermeasures for compensation and adaptation. On this background, it is still an open question to what extent altered gravity such as hyper- or microgravity (centrifuge/spaceflight) affects the normal individual development, either on the systemic level of the whole organism or on the level of individual organs or even single cells. The present review provides information on this topic, focusing on the effects of altered gravity on developing fish as model systems even for higher vertebrates including humans, with special emphasis on the effect of altered gravity on behaviour and particularly on the developing brain and vestibular system. Overall, the results speak in favour of the following concept: Short-term altered gravity (˜ 1 day) can induce transient sensorimotor disorders (kinetoses) due to malfunctions of the inner ear, originating from asymmetric otoliths. The regain of normal postural control is likely due to a reweighting of sensory inputs. During long-term altered gravity (several days and more), complex adaptations on the level of the central and peripheral vestibular system occur. This work was financially supported by the German Aerospace Center (DLR) e.V. (FKZ: 50 WB 9997).

  13. Numerical Modeling of Long Bone Adaptation due to Mechanical Loading: Correlation with Experiments

    PubMed Central

    Kumar, Natarajan Chennimalai; Dantzig, Jonathan A.; Jasiuk, Iwona M.; Robling, Alex G.; Turner, Charles H.

    2011-01-01

    The process of external bone adaptation in cortical bone is modeled mathematically using finite element (FE) stress analysis coupled with an evolution model, in which adaptation response is triggered by mechanical stimulus represented by strain energy density. The model is applied to experiments in which a rat ulna is subjected to cyclic loading, and the results demonstrate the ability of the model to predict the bone adaptation response. The FE mesh is generated from micro-computed tomography (μCT) images of the rat ulna, and the stress analysis is carried out using boundary and loading conditions on the rat ulna obtained from the experiments [Robling, A. G., F. M. Hinant, D. B. Burr, and C. H. Turner. J. Bone Miner. Res. 17:1545–1554, 2002]. The external adaptation process is implemented in the model by moving the surface nodes of the FE mesh based on an evolution law characterized by two parameters: one that captures the rate of the adaptation process (referred to as gain); and the other characterizing the threshold value of the mechanical stimulus required for adaptation (referred to as threshold-sensitivity). A parametric study is carried out to evaluate the effect of these two parameters on the adaptation response. We show, following comparison of results from the simulations to the experimental observations of Robling et al. (J. Bone Miner. Res. 17:1545–1554, 2002), that splitting the loading cycles into different number of bouts affects the threshold-sensitivity but not the rate of adaptation. We also show that the threshold-sensitivity parameter can quantify the mechanosensitivity of the osteocytes. PMID:20013156
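
    The surface evolution law described above has two parameters, a gain and a threshold-sensitivity. A minimal, hypothetical sketch of such a law is given below, with the FE strain-energy-density computation replaced by a made-up per-node value and all numbers purely illustrative.

    ```python
    import numpy as np

    def adapt_surface(nodes, normals, sed, gain=0.02, threshold=1.0, dt=1.0):
        """Move surface nodes outward along their normals where the mechanical
        stimulus (strain energy density, 'sed') exceeds a threshold.
        'gain' sets the rate of adaptation, 'threshold' its sensitivity;
        both values here are illustrative, not taken from the paper."""
        stimulus = np.clip(sed - threshold, 0.0, None)    # dead zone below the threshold
        return nodes + gain * stimulus[:, None] * normals * dt

    # toy example: a ring of surface nodes with a made-up SED distribution
    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    nodes = np.column_stack([np.cos(theta), np.sin(theta)])
    normals = nodes.copy()                                # outward normals of a unit circle
    sed = 1.0 + 0.8 * np.cos(theta) ** 2                  # stand-in for the FE stress analysis output
    for step in range(50):
        # in the real model the FE analysis would be re-run each iteration to update 'sed'
        nodes = adapt_surface(nodes, normals, sed)

    r = np.hypot(nodes[:, 0], nodes[:, 1])
    print(f"adapted radius range: {r.min():.2f} to {r.max():.2f}")
    ```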

  14. From epidemics to information propagation: Striking differences in structurally similar adaptive network models

    NASA Astrophysics Data System (ADS)

    Trajanovski, Stojan; Guo, Dongchao; Van Mieghem, Piet

    2015-09-01

    The continuous-time adaptive susceptible-infected-susceptible (ASIS) epidemic model and the adaptive information diffusion (AID) model are two adaptive spreading processes on networks, in which a link in the network changes depending on the infectious state of its end nodes, but in opposite ways: (i) In the ASIS model a link is removed between two nodes if exactly one of the nodes is infected, to suppress the epidemic, while a link is created in the AID model to speed up the information diffusion; (ii) a link is created between two susceptible nodes in the ASIS model to strengthen the healthy part of the network, while a link is broken in the AID model due to the lack of interest in informationless nodes. The ASIS and AID models may be considered as first-order models for cascades in real-world networks. While the ASIS model has been exploited in the literature, we show that the AID model is realistic by obtaining a good fit with Facebook data. Contrary to the common belief and intuition for such similar models, we show that the ASIS and AID models exhibit different but not opposite properties. Most remarkably, a unique metastable state always exists in the ASIS model, while there is an hourglass-shaped region of instability in the AID model. Moreover, the epidemic threshold is a linear function of the effective link-breaking rate in the ASIS model, while it is almost constant but noisy in the AID model.
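
    As a rough, discrete-time sketch of the ASIS-type dynamics described above (not the continuous-time formulation of the paper; the rates, network size and the toggling of links on an underlying contact graph are all assumptions for illustration), one might write:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, beta, delta = 100, 0.06, 0.1        # nodes, per-link infection prob, recovery prob (per step)
    zeta, xi = 0.05, 0.05                  # link-breaking / link-restoring probabilities (per step)

    A0 = (rng.random((N, N)) < 0.05).astype(int)   # underlying Erdos-Renyi contact graph
    A0 = np.triu(A0, 1); A0 = A0 + A0.T
    A = A0.copy()                                   # currently active links
    infected = np.zeros(N, dtype=bool)
    infected[rng.choice(N, 5, replace=False)] = True

    for t in range(500):
        # SIS spreading over the currently active links
        pressure = A @ infected
        p_inf = 1 - (1 - beta) ** pressure
        new_inf = ~infected & (rng.random(N) < p_inf)
        recover = infected & (rng.random(N) < delta)
        infected = (infected | new_inf) & ~recover
        # ASIS adaptation: break active links with exactly one infected endpoint,
        # restore links of the contact graph whose endpoints are both susceptible
        s = infected.astype(int)
        discordant = (s[:, None] + s[None, :]) == 1
        both_susceptible = (s[:, None] + s[None, :]) == 0
        U = np.triu(rng.random((N, N)), 1); U = U + U.T       # symmetric coin flips
        A = np.where(discordant & (U < zeta), 0, A)
        A = np.where(both_susceptible & (A0 == 1) & (A == 0) & (U < xi), 1, A)
        # (the AID model swaps these rules: create discordant links, break idle ones)

    print(f"prevalence after 500 steps: {infected.mean():.2f}, active links: {A.sum() // 2}")
    ```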

  15. Transferred and Adapted Models of Secondary Education in Ghana: What Implications for National Development?

    NASA Astrophysics Data System (ADS)

    Quist, Hubert O.

    2003-09-01

    The secondary-education models implemented in Ghana since colonial times constitute a classic case of "educational transfer and adaptation". Transferred from England, and in recent years the United States of America and Japan, these models have had a significant impact on Ghana's development in diverse ways. Yet educational research on Ghana has under-recognized this important issue of "educational transfer and adaptation", especially the relationship between these transferred models and national development. This study addresses such neglect by first focusing on those institutions that served as prototypes. Second, it appraises the models, pointing out their implications for national development. It is contended that the foreign models that were adapted (indigenised) have been significant instruments for the human-resource and socio-political development of Ghana. However, their emphasis on the academic type of education ultimately has tended to create a situation of dependency, particularly with respect to techno-scientific and economic development.

  16. Competition and fixation of cohorts of adaptive mutations under Fisher geometrical model.

    PubMed

    Moura de Sousa, Jorge A; Alpedrinha, João; Campos, Paulo R A; Gordo, Isabel

    2016-01-01

    One of the simplest models of adaptation to a new environment is Fisher's Geometric Model (FGM), in which populations move on a multidimensional landscape defined by the traits under selection. The predictions of this model have been found to be consistent with current observations of patterns of fitness increase in experimentally evolved populations. Recent studies investigated the dynamics of allele frequency change along adaptation of microbes to simple laboratory conditions and unveiled a dramatic pattern of competition between cohorts of mutations, i.e., multiple mutations simultaneously segregating and ultimately reaching fixation. Here, using simulations, we study the dynamics of phenotypic and genetic change as asexual populations under clonal interference climb a Fisherian landscape, and ask about the conditions under which FGM can display the simultaneous increase and fixation of multiple mutations (mutation cohorts) along the adaptive walk. We find that FGM under clonal interference, and with varying levels of pleiotropy, can reproduce the experimentally observed competition between different cohorts of mutations, some of which have a high probability of fixation along the adaptive walk. Overall, our results show that the surprising dynamics of mutation cohorts recently observed during experimental adaptation of microbial populations can be expected under one of the oldest and simplest theoretical models of adaptation, the FGM. PMID:27547562

  17. Public Knowledge of Oral Cancer and Modelling of Demographic Background Factors Affecting this Knowledge in Khartoum State, Sudan

    PubMed Central

    Al-Hakimi, Hamdi A.; Othman, Abdulqaher E.; Mohamed, Omima G.; Saied, Abdulaal M.; Ahmed, Waled A.

    2016-01-01

    Objectives: Knowledge of oral cancer affects early detection and diagnosis of this disease. This study aimed to assess the current level of public knowledge of oral cancer in Khartoum State, Sudan, and examine how demographic background factors affect this knowledge. Methods: This cross-sectional study involved 501 participants recruited by systematic random sampling from the outpatient records of three major hospitals in Khartoum State between November 2012 and February 2013. A pretested structured questionnaire was designed to measure knowledge levels. A logistic regression model was utilised with demographic background variables as independent variables and knowledge of oral cancer as the dependent variable. A path analysis was conducted to build a structural model. Results: Of the 501 participants, 42.5% had no knowledge of oral cancer, while 5.4%, 39.9% and 12.2% had low, moderate and high knowledge levels, respectively. Logistic regression modelling showed that age, place of residence and education levels were significantly associated with knowledge levels (P = 0.009, 0.017 and <0.001, respectively). According to the structural model, age and place of residence had a prominent direct effect on knowledge, while age and residence also had a prominent indirect effect mediated through education levels. Conclusion: Education levels had the most prominent positive effect on knowledge of oral cancer among outpatients at major hospitals in Khartoum State. Moreover, education levels were found to mediate the effect of other background variables. PMID:27606114

  18. Sensitivities of eyewall replacement cycle to model physics, vortex structure, and background winds in numerical simulations of tropical cyclones

    NASA Astrophysics Data System (ADS)

    Zhu, Zhenduo; Zhu, Ping

    2015-01-01

    A series of sensitivity experiments with the Weather Research and Forecasting (WRF) model is used to investigate the impact of model physics, vortex axisymmetric radial structure, and background wind on secondary eyewall formation (SEF) and eyewall replacement cycle (ERC) in three-dimensional full physics numerical simulations. It is found that the vertical turbulent mixing parameterization can substantially affect the concentric ring structure of tangential wind associated with SEF through a complicated interaction among eyewall and outer rainband heating, radial inflow in the boundary layer, surface layer processes, and shallow convection in the moat. Large snow terminal velocity can substantially change the vertical distribution of eyewall diabatic heating to result in a strong radial inflow in the boundary layer, and thus, favors the development of shallow convection in the moat allowing the outer rainband convection to move closer to the inner eyewall, which may leave little room both temporally and spatially for a full development of a secondary maximum of tangential wind. Small radius of maximum wind (RMW) of a vortex and small potential vorticity (PV) skirt outside the RMW tend to generate double-eyewall replacement and may lead to an ERC without a clean secondary concentric maximum of tangential wind. A sufficiently large background wind can smooth out an ERC that would otherwise occur without background wind for a vortex with a small or moderate PV skirt. However, background wind does not appear to have an impact on an ERC if the vortex has a sufficiently large PV skirt.

  19. Stochastic stage-structured modeling of the adaptive immune system

    SciTech Connect

    Chao, D. L.; Davenport, M. P.; Forrest, S.; Perelson, Alan S.,

    2003-01-01

    We have constructed a computer model of the cytotoxic T lymphocyte (CTL) response to antigen and the maintenance of immunological memory. Because immune responses often begin with small numbers of cells and there is great variation among individual immune systems, we have chosen to implement a stochastic model that captures the life cycle of T cells more faithfully than deterministic models. Past models of the immune response have been differential equation based, which do not capture stochastic effects, or agent-based, which are computationally expensive. We use a stochastic stage-structured approach that has many of the advantages of agent-based modeling but is more efficient. Our model can provide insights into the effect infections have on the CTL repertoire and the response to subsequent infections.

  20. Extended adiabatic blast waves and a model of the soft X-ray background

    NASA Technical Reports Server (NTRS)

    Cox, D. P.; Anderson, P. R.

    1982-01-01

    The suggestion has been made that much of the soft X-ray background observed in X-ray astronomy might arise from the solar system being inside a very large supernova blast wave propagating in the hot, low-density component of the interstellar medium (ISM). An investigation is conducted to study this possibility. An analytic approximation is presented for the nonsimilar time evolution of the dynamic structure of an adiabatic blast wave generated by a point explosion in a homogeneous ambient medium. A scheme is provided for evaluating the electron-temperature distribution for the evolving structure, and a procedure is presented for following the state of a given fluid element through the evolving dynamical and thermal structures. The results of the investigation show that, if the solar system were located within a blast wave, the Wisconsin soft X-ray rocket payload would measure the B and C band count rates that it does measure, provided conditions correspond to the values calculated in the investigation.

  1. Modeling bee swarming behavior through diffusion adaptation with asymmetric information sharing

    NASA Astrophysics Data System (ADS)

    Li, Jinchao; Sayed, Ali H.

    2012-12-01

    Honeybees swarm when they move to a new site for their hive. During the process of swarming, their behavior can be analyzed by classifying them as informed bees or uninformed bees, where the informed bees have some information about the destination while the uninformed bees follow the informed bees. The swarm's movement can be viewed as a network of mobile nodes with asymmetric information exchange about their destination. In these networks, adaptive and mobile agents share information on the fly and adapt their estimates in response to local measurements and data shared with neighbors. Diffusion adaptation is used to model the adaptation process in the presence of asymmetric nodes and noisy data. The simulations indicate that the models are able to emulate the swarming behavior of bees under varied conditions such as a small number of informed bees, sharing of target location, sharing of target direction, and noisy measurements.
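
    The diffusion adaptation mentioned above can be sketched, very loosely, as an adapt-then-combine scheme in which only informed agents take noisy measurements of the destination and every agent then averages the estimates of its neighbourhood. The topology, step size and noise level below are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N, n_informed, dim, mu = 50, 5, 2, 0.2
    target = np.array([8.0, -3.0])                     # destination known (noisily) only to informed agents
    informed = np.zeros(N, dtype=bool)
    informed[rng.choice(N, n_informed, replace=False)] = True

    # fixed stand-in neighbourhoods: each agent combines with four random others and itself
    neighbours = [np.unique(np.append(rng.choice(N, 4, replace=False), i)) for i in range(N)]
    w = rng.uniform(-10.0, 10.0, (N, dim))             # each agent's estimate of the destination

    for t in range(1000):
        # adapt: informed agents take a noisy measurement and perform an LMS-type update
        psi = w.copy()
        measurement = target + rng.standard_normal((N, dim))
        psi[informed] += mu * (measurement[informed] - w[informed])
        # combine: every agent averages the intermediate estimates of its neighbourhood
        w = np.array([psi[nb].mean(axis=0) for nb in neighbours])

    print("mean estimate:", w.mean(axis=0).round(2), "target:", target)
    ```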

  2. Multi-objective parameter optimization of common land model using adaptive surrogate modeling

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Li, J.; Wang, C.; Di, Z.; Dai, Y.; Ye, A.; Miao, C.

    2015-05-01

    Parameter specification usually has a significant influence on the performance of land surface models (LSMs). However, estimating the parameters properly is a challenging task for the following reasons: (1) LSMs usually have many adjustable parameters (20 to 100 or even more), leading to the curse of dimensionality in the parameter input space; (2) LSMs usually have many output variables involving water/energy/carbon cycles, so that calibrating LSMs is actually a multi-objective optimization problem; (3) regional LSMs are expensive to run, while conventional multi-objective optimization methods need a large number of model runs (typically ~10^5 to 10^6), which makes parameter optimization computationally prohibitive. An uncertainty quantification framework was developed to meet these challenges, consisting of the following steps: (1) using parameter screening to reduce the number of adjustable parameters; (2) using surrogate models to emulate the responses of the dynamic model to variations in the adjustable parameters; (3) using an adaptive strategy to improve the efficiency of surrogate-modeling-based optimization; (4) using a weighting function to transform the multi-objective optimization into a single-objective one. In this study, we demonstrate the uncertainty quantification framework on a single-column application of an LSM, the Common Land Model (CoLM), and evaluate the effectiveness and efficiency of the proposed framework. The results indicate that this framework can achieve optimal parameters efficiently and effectively. Moreover, this result implies the possibility of calibrating other large complex dynamic models, such as regional-scale LSMs, atmospheric models and climate models.
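
    A very small, generic sketch of steps (2) to (4) above (surrogate-assisted, weighted single-objective optimization) is given below; the radial-basis-function surrogate, the toy two-objective "model" and all settings are illustrative assumptions, not the CoLM setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def expensive_model(p):
        """Stand-in for a costly model run returning two objectives
        (e.g. errors in two output variables); the real LSM run is far slower."""
        return np.array([np.sum((p - 0.3) ** 2), np.sum((p - 0.7) ** 2)])

    weights = np.array([0.5, 0.5])          # step (4): collapse objectives into a single score
    dim, n_init, n_iter = 3, 12, 20

    X = rng.random((n_init, dim))           # initial space-filling sample of the (screened) parameters
    Y = np.array([weights @ expensive_model(x) for x in X])

    def rbf_fit(X, Y, eps=1.0):             # step (2): cheap Gaussian-RBF emulator
        K = np.exp(-eps * np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
        return np.linalg.solve(K + 1e-8 * np.eye(len(X)), Y)

    def rbf_predict(Xq, X, coef, eps=1.0):
        K = np.exp(-eps * np.sum((Xq[:, None] - X[None, :]) ** 2, axis=-1))
        return K @ coef

    for it in range(n_iter):                # step (3): adaptively refine the surrogate
        coef = rbf_fit(X, Y)
        cand = rng.random((2000, dim))      # search the surrogate with cheap candidate points
        best = cand[np.argmin(rbf_predict(cand, X, coef))]
        X = np.vstack([X, best])            # run the expensive model only at the promising point
        Y = np.append(Y, weights @ expensive_model(best))

    print("best parameters found:", X[np.argmin(Y)].round(3), "score:", Y.min().round(4))
    ```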

  3. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model (Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes

  4. Socioeconomic Background and Occupational Achievement: Extensions of a Basic Model. Final Report.

    ERIC Educational Resources Information Center

    Duncan, Otis Dudley; And Others

    To synthesize knowledge concerning factors which affect occupational achievement through a set of explicit models based upon the concept of the socioeconomic life cycle, six major bodies of data from various sources were collected and subjected to secondary analysis. A number of items of lesser scope were gleaned from additional sources for use in…

  5. DATA FOR ENVIRONMENTAL MODELING (D4EM): BACKGROUND AND EXAMPLE APPLICATIONS OF DATA AUTOMATION

    EPA Science Inventory

    Data is a basic requirement for most modeling applications. Collecting data is expensive and time consuming. High speed internet connections and growing databases of online environmental data go a long way to overcoming issues of data scarcity. Among the obstacles still remaining...

  6. Human search for a target on a textured background is consistent with a stochastic model.

    PubMed

    Clarke, Alasdair D F; Green, Patrick; Chantler, Mike J; Hunt, Amelia R

    2016-05-01

    Previous work has demonstrated that search for a target in noise is consistent with the predictions of the optimal search strategy, both in the spatial distribution of fixation locations and in the number of fixations observers require to find the target. In this study we describe a challenging visual-search task and compare the number of fixations required by human observers to find the target to predictions made by a stochastic search model. This model relies on a target-visibility map based on human performance in a separate detection task. If the model does not detect the target, then it selects the next saccade by randomly sampling from the distribution of saccades that human observers made. We find that a memoryless stochastic model matches human performance in this task. Furthermore, we find that the similarity in the distribution of fixation locations between human observers and the ideal observer does not replicate: Rather than making the signature doughnut-shaped distribution predicted by the ideal search strategy, the fixations made by observers are best described by a central bias. We conclude that, when searching for a target in noise, humans use an essentially random strategy, which achieves near optimal behavior due to biases in the distributions of saccades we have a tendency to make. The findings reconcile the existence of highly efficient human search performance with recent studies demonstrating clear failures of optimality in single and multiple saccade tasks. PMID:27145531
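
    A toy version of the stochastic model described above might look like the following; the eccentricity-dependent visibility function and the saccade amplitude distribution are made-up stand-ins for the detection-task visibility map and the empirical human saccades used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def p_detect(fix, target):
        """Probability of detecting the target on one fixation, falling off with
        eccentricity; a made-up Gaussian stand-in for the measured visibility map."""
        ecc = np.linalg.norm(fix - target)
        return 0.9 * np.exp(-(ecc / 3.0) ** 2)

    def stochastic_search(target, field=20.0, max_fix=200):
        fix = np.array([field / 2, field / 2])        # start at the display centre
        for n in range(1, max_fix + 1):
            if rng.random() < p_detect(fix, target):
                return n                              # fixations needed to find the target
            # next saccade drawn from an assumed amplitude/direction distribution
            amp = rng.gamma(shape=2.0, scale=2.0)     # stand-in for the empirical saccade set
            ang = rng.uniform(0, 2 * np.pi)
            fix = np.clip(fix + amp * np.array([np.cos(ang), np.sin(ang)]), 0, field)
        return max_fix

    counts = [stochastic_search(rng.uniform(0, 20, 2)) for _ in range(2000)]
    print(f"median fixations to find the target: {np.median(counts):.0f}")
    ```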

  7. Forecasting Library Futures: Participative Decisionmaking with a Microcomputer Model. Background Paper. Workshop 3.

    ERIC Educational Resources Information Center

    Mason, Thomas R.; Newton, Evan

    This paper describes the use of a microcomputer model program to predict library collection growth at Cornell University, particularly in Olin Library, which is Cornell's central research facility. The possible effects of increased online information retrieval and microform or videodisc usage on library storage needs are also briefly discussed. A…

  8. Modeling and validating Bayesian accrual models on clinical data and simulations using adaptive priors.

    PubMed

    Jiang, Yu; Simon, Steve; Mayo, Matthew S; Gajewski, Byron J

    2015-02-20

    Slow recruitment in clinical trials leads to increased costs and resource utilization, which includes both the clinic staff and patient volunteers. Careful planning and monitoring of the accrual process can prevent the unnecessary loss of these resources. We propose two hierarchical extensions to the existing Bayesian constant accrual model: the accelerated prior and the hedging prior. The new proposed priors are able to adaptively utilize the researcher's previous experience and current accrual data to produce the estimation of trial completion time. The performance of these models, including prediction precision, coverage probability, and correct decision-making ability, is evaluated using actual studies from our cancer center and simulation. The results showed that a constant accrual model with strongly informative priors is very accurate when accrual is on target or slightly off, producing smaller mean squared error, high percentage of coverage, and a high number of correct decisions as to whether or not continue the trial, but it is strongly biased when off target. Flat or weakly informative priors provide protection against an off target prior but are less efficient when the accrual is on target. The accelerated prior performs similar to a strong prior. The hedging prior performs much like the weak priors when the accrual is extremely off target but closer to the strong priors when the accrual is on target or only slightly off target. We suggest improvements in these models and propose new models for future research. PMID:25376910
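
    For orientation, the constant-accrual idea can be sketched with a conjugate gamma-Poisson model as below; this is a simplified stand-in that does not implement the accelerated or hedging priors proposed in the paper, and all numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Prior belief: about 10 subjects/month, worth 6 months of "prior data".
    prior_rate, prior_months = 10.0, 6.0
    a0, b0 = prior_rate * prior_months, prior_months     # Gamma prior on the accrual rate

    n_target = 300                                       # planned enrolment
    observed, elapsed = 55, 8.0                          # interim data: 55 subjects in 8 months (slow)

    a_post, b_post = a0 + observed, b0 + elapsed         # conjugate update
    rates = rng.gamma(a_post, 1.0 / b_post, size=20000)  # posterior draws of the accrual rate
    remaining_time = (n_target - observed) / rates       # time still needed (ignoring Poisson noise)

    lo, med, hi = np.percentile(remaining_time, [2.5, 50, 97.5])
    print(f"posterior median time to completion: {elapsed + med:.1f} months "
          f"(95% interval {elapsed + lo:.1f} to {elapsed + hi:.1f})")
    ```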

  9. A structural model of the adaptive human pilot

    NASA Technical Reports Server (NTRS)

    Hess, R. A.

    1979-01-01

    A compensatory tracking model of the human pilot is offered which attempts to provide a more realistic representation of the human's signal processing structure than that which is exhibited by pilot models currently in use. Two features of the model distinguish it from other representations of the human pilot. First, proprioceptive information from the control stick or manipulator constitutes one of the major feedback paths in the model, providing feedback of vehicle output rate due to control activity. Implicit in this feedback loop is a model of the vehicle dynamics which is valid in and beyond the region of crossover. Second, error-rate information is continuously derived and independently but intermittently controlled. An output injected remnant model is offered and qualitatively justified on the basis of providing a measure of the effect of inaccuracies such as time variations in the pilot's internal model of the controlled element dynamics. The data from experimental tracking tasks involving five different controlled element dynamics and one nonideal viewing condition were matched with model generated describing functions and remnant power spectral densities.

  10. FEMHD: An adaptive finite element method for MHD and edge modelling

    SciTech Connect

    Strauss, H.R.

    1995-07-01

    This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different manners to model MHD behavior and edge plasma phenomena on a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge shaped mesh elements in 3D. The code has been adapted to look at neutral and charged particle dynamics in the plasma scrape off region, and into a full MHD-particle code.

  11. Adaptive Work Strategy for Evaluating a Conceptual Site Model

    NASA Astrophysics Data System (ADS)

    Dietrich, P.; Utom, A. U.; Werban, U.

    2015-12-01

    A comprehensive, diagnostic, procedural and adaptive scheme involving a combination of geophysical and direct push methods was developed and applied at the Wurmlingen study site situated within the region of Baden-Württemberg (southwest Germany). The goal of the study was to test the applicability of the electrical resistivity method in imaging resistivity contrasts and in mapping the depth to and lateral extent of field-scale subsurface structures and the existence of flow paths that may control concentration gradients of groundwater solution contents. Based on a relatively fast and cost-effective areal mapping with the vertical electrical sounding technique, a northwest-southeast trending stream-channel-like depression (a low apparent resistivity feature) through a Pleistocene aquifer was detected. For a more detailed characterization, we implemented the electrical resistivity tomography method followed by direct push (DP) technologies. Besides the use of DP for verification of structures identified by geophysical tools, we used it for multi-level groundwater sampling. Results from groundwater chemistry indicate zones of steep nitrate concentration gradients associated with the feature.

  12. Model adaptation in a central controller for a sewer system

    NASA Astrophysics Data System (ADS)

    van Nooijen, Ronald; Kolechkina, Alla; Mol, Bart

    2013-04-01

    For small sewer systems that combine foul water and storm water sewer functions in flat terrain, central control of the sewer system may run into problems during dry weather. These systems are a combination of local gravity-flow networks connected by pumps. Under dry-weather conditions the level in the wet well (the local storage at the pumping station) should be kept below the entrance pipe but above the top of the intake of the pump. The pumps are dimensioned to cope with the combined flow of foul water and precipitation runoff, so their capacity is relatively large compared with the volume available in the wet well. Under local control this is not a major problem because the effective controller time step is very short. For central control the control time step can become a problem, especially when there is uncertainty about the relation between level and volume in the wet well. In this paper we describe a way to dynamically adapt the level-to-volume relation based on dry weather behaviour. This is important because a better estimate of this volume reduces the number of on/off cycles for the pumps. It also allows detection of, and correction for, changes in pump performance due to aging.
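
    The paper's adaptation algorithm is not spelled out in the abstract. As a minimal sketch, the class below treats the wet well as having an unknown effective cross-sectional area, so that pumped volume is roughly area times level change, and refines that area recursively from dry-weather pump runs with a forgetting-factor recursive least squares update; the class name, initial area and cycle data are all made up.

```python
class LevelVolumeEstimator:
    """Recursively refine a linear level-to-volume relation V = area * dh
    for a wet well, using dry-weather pump cycles (hypothetical interface)."""

    def __init__(self, area_init=4.0, forgetting=0.98):
        self.area = area_init      # m^2, initial guess of wet-well cross-section
        self.p = 1.0               # scalar covariance of the estimate
        self.lam = forgetting      # forgetting factor, to track slow changes (aging)

    def update(self, level_drop_m, pumped_volume_m3):
        # One dry-weather pump run: pumped volume ~ area * level drop (inflow acts as noise).
        x = level_drop_m
        err = pumped_volume_m3 - self.area * x
        gain = self.p * x / (self.lam + x * self.p * x)
        self.area += gain * err
        self.p = (self.p - gain * x * self.p) / self.lam
        return self.area

est = LevelVolumeEstimator()
for drop, vol in [(0.30, 1.5), (0.28, 1.4), (0.31, 1.6)]:   # made-up cycle data
    print(f"estimated area: {est.update(drop, vol):.2f} m^2")
```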

  13. MODELING EXTRAGALACTIC FOREGROUNDS AND SECONDARIES FOR UNBIASED ESTIMATION OF COSMOLOGICAL PARAMETERS FROM PRIMARY COSMIC MICROWAVE BACKGROUND ANISOTROPY

    SciTech Connect

    Millea, M.; Knox, L.; Dore, O.; Dudley, J.; Holder, G.; Shaw, L.; Song, Y.-S.; Zahn, O.

    2012-02-10

    Using the latest physical modeling and constrained by the most recent data, we develop a phenomenological parameterized model of the contributions to intensity and polarization maps at millimeter wavelengths from external galaxies and Sunyaev-Zeldovich effects. We find such modeling to be necessary for estimation of cosmological parameters from Planck data. For example, ignoring the clustering of the infrared background would result in a bias in n_s of 7σ in the context of an eight-parameter cosmological model. We show that the simultaneous marginalization over a full foreground model can eliminate such biases, while increasing the statistical uncertainty in cosmological parameters by less than 20%. The small increases in uncertainty can be significantly reduced with the inclusion of higher-resolution ground-based data. The multi-frequency analysis we employ involves modeling 46 total power spectra and marginalization over 17 foreground parameters. We show that we can also reduce the data to a best estimate of the cosmic microwave background power spectra, with just two principal components (with constrained amplitudes) describing residual foreground contamination.

  14. Analysis and modeling of fixation point selection for visual search in cluttered backgrounds

    NASA Astrophysics Data System (ADS)

    Snorrason, Magnus; Hoffman, James; Ruda, Harald

    2000-07-01

    Hard-to-see targets are generally only detected by human observers once they have been fixated. Hence, understanding how the human visual system allocates fixation locations is necessary for predicting target detectability. Visual search experiments were conducted in which observers searched for military vehicles in cluttered terrain. Instantaneous eye position measurements were collected using an eye tracker. The resulting data were partitioned into fixations and saccades, and analyzed for correlation with various image properties. The fixation data were used to validate our model for predicting fixation locations. This model generates a saliency map from bottom-up image features, such as local contrast. To account for top-down scene understanding effects, a separate cognitive bias map is generated. The combination of these two maps provides a fixation probability map, from which sequences of fixation points were generated.
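
    As a rough sketch of the two-map combination described above (not the authors' implementation), the code below builds a bottom-up saliency map from local contrast, multiplies it by a hypothetical top-down bias map, and normalizes the result into a fixation probability map.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fixation_probability_map(image, bias_map, patch=15):
    """Combine a local-contrast saliency map with a top-down bias map into a
    fixation probability map (illustrative sketch)."""
    img = image.astype(float)
    local_mean = uniform_filter(img, patch)
    local_sq = uniform_filter(img ** 2, patch)
    contrast = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))  # local std
    saliency = contrast / (contrast.max() + 1e-12)
    combined = saliency * bias_map          # assumed multiplicative combination
    return combined / combined.sum()        # normalize to a probability map

rng = np.random.default_rng(1)
image = rng.random((64, 64))
bias = np.ones((64, 64))
bias[20:40, 20:40] = 3.0                    # hypothetical scene-context prior
p_map = fixation_probability_map(image, bias)
print(p_map.sum())                          # ~1.0
```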

  15. Using box models to quantify zonal distributions and emissions of halocarbons in the background atmosphere.

    NASA Astrophysics Data System (ADS)

    Elkins, J. W.; Nance, J. D.; Dutton, G. S.; Montzka, S. A.; Hall, B. D.; Miller, B.; Butler, J. H.; Mondeel, D. J.; Siso, C.; Moore, F. L.; Hintsa, E. J.; Wofsy, S. C.; Rigby, M. L.

    2015-12-01

    The Halocarbons and other Atmospheric Trace Species (HATS) group of NOAA's Global Monitoring Division started measurements of the major chlorofluorocarbons and nitrous oxide in 1977 from flask samples collected at five remote sites around the world. Our program has since expanded to over 40 compounds at twelve sites, which include six in situ instruments and twelve flask sites. The Montreal Protocol on Substances that Deplete the Ozone Layer and its subsequent amendments have helped to decrease the concentrations of many ozone-depleting compounds in the atmosphere. In this presentation, our goal is to provide zonal emission estimates for these trace gases, derived from multi-box models and the gases' estimated atmospheric lifetimes, and to make the emission values available on our web site. We plan to use our airborne measurements to calibrate the exchange times between the boxes for 5-box and 12-box models using sulfur hexafluoride, whose emissions are better understood.
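
    The box-model equations are not included in the abstract. The sketch below is a generic two-box (hemispheric) mass balance with an interhemispheric exchange time and a fixed atmospheric lifetime, plus the corresponding inversion of global emissions from the growth rate; all parameter values are invented for illustration and are not HATS calibrations.

```python
# Two-box (NH/SH) model sketch for a long-lived halocarbon.
tau_exchange = 1.3        # yr, interhemispheric exchange time (assumed)
lifetime = 52.0           # yr, atmospheric lifetime (assumed)
kg_per_ppt_box = 1.1e7    # kg of gas per ppt in one hemispheric box (assumed)

def derivatives(c_nh, c_sh, e_nh, e_sh):
    """Rate of change of NH/SH mole fractions (ppt/yr) for emissions in kg/yr."""
    exchange = (c_nh - c_sh) / tau_exchange
    dc_nh = e_nh / kg_per_ppt_box - c_nh / lifetime - exchange
    dc_sh = e_sh / kg_per_ppt_box - c_sh / lifetime + exchange
    return dc_nh, dc_sh

# Forward integration with made-up emissions, mostly in the Northern Hemisphere.
c_nh, c_sh, dt = 100.0, 95.0, 0.05
for _ in range(int(10 / dt)):                              # 10 years
    dc_nh, dc_sh = derivatives(c_nh, c_sh, e_nh=2.0e8, e_sh=2.0e7)
    c_nh, c_sh = c_nh + dc_nh * dt, c_sh + dc_sh * dt

# Inverting the same balance gives a global emission estimate from the
# growth rate and the assumed lifetime (the exchange terms cancel in the sum).
growth = sum(derivatives(c_nh, c_sh, 2.0e8, 2.0e7))        # ppt/yr, both boxes
e_global = kg_per_ppt_box * (growth + (c_nh + c_sh) / lifetime)
print(f"NH {c_nh:.1f} ppt, SH {c_sh:.1f} ppt, inferred emissions {e_global:.2e} kg/yr")
```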

  16. An object-oriented, technology-adaptive information model

    NASA Technical Reports Server (NTRS)

    Anyiwo, Joshua C.

    1995-01-01

    The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), World Wide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro; and work has started on a full implementation of the SERVER engine. The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG

  17. MGGPOD: a Monte Carlo Suite for Modeling Instrumental Line and Continuum Backgrounds in Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Weidenspointner, G.; Harris, M. J.; Sturner, S.; Teegarden, B. J.; Ferguson, C.

    2004-01-01

    Intense and complex instrumental backgrounds, against which the much smaller signals from celestial sources have to be discerned, are a notorious problem for low and intermediate energy gamma-ray astronomy (approximately 50 keV - 10 MeV). Therefore a detailed qualitative and quantitative understanding of instrumental line and continuum backgrounds is crucial for most stages of gamma-ray astronomy missions, ranging from the design and development of new instrumentation through performance prediction to data reduction. We have developed MGGPOD, a user-friendly suite of Monte Carlo codes built around the widely used GEANT (Version 3.21) package, to simulate ab initio the physical processes relevant for the production of instrumental backgrounds. These include the build-up and delayed decay of radioactive isotopes as well as the prompt de-excitation of excited nuclei, both of which give rise to a plethora of instrumental gamma-ray background lines in addition to continuum backgrounds. The MGGPOD package and documentation are publicly available for download. We demonstrate the capabilities of the MGGPOD suite by modeling high resolution gamma-ray spectra recorded by the Transient Gamma-Ray Spectrometer (TGRS) on board Wind during 1995. The TGRS is a Ge spectrometer operating in the 40 keV to 8 MeV range. Due to its fine energy resolution, these spectra reveal the complex instrumental background in formidable detail, particularly the many prompt and delayed gamma-ray lines. We evaluate the successes and failures of the MGGPOD package in reproducing TGRS data, and provide identifications for the numerous instrumental lines.

  18. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge about contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and for incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
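
    The abstract describes the relevance network only at a high level. The toy class below captures the general idea under stated assumptions: feedback is stored per (query, profile) pair and generalized to new queries by a simple Jaccard similarity over query terms. The class, scoring rule and example data are all hypothetical.

```python
from collections import defaultdict

class RelevanceNetwork:
    """Toy sketch of relevance feedback keyed by (query terms, profile) and
    generalized to similar queries; the original model's details differ."""

    def __init__(self):
        # (frozenset of query terms, profile) -> reference -> relevance score
        self.scores = defaultdict(lambda: defaultdict(float))

    def feedback(self, query_terms, profile, reference, relevant):
        key = (frozenset(query_terms), profile)
        self.scores[key][reference] += 1.0 if relevant else -1.0

    def rank(self, query_terms, profile, top=5):
        q = frozenset(query_terms)
        merged = defaultdict(float)
        for (terms, prof), refs in self.scores.items():
            if prof != profile:
                continue
            sim = len(q & terms) / len(q | terms) if q | terms else 0.0  # Jaccard
            for ref, score in refs.items():
                merged[ref] += sim * score      # generalize by query similarity
        return sorted(merged.items(), key=lambda kv: -kv[1])[:top]

net = RelevanceNetwork()
net.feedback({"thermal", "tile"}, "engineer", "doc-42", relevant=True)
print(net.rank({"thermal", "protection"}, "engineer"))
```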

  19. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  20. Identification-free adaptive optimal control based on switching predictive models

    NASA Astrophysics Data System (ADS)

    Luo, Wenguang; Pan, Shenghui; Ma, Zhaomin; Lan, Hongli

    2008-10-01

    An identification-free adaptive optimal control scheme based on switching predictive models is proposed for systems with large inertia, long time delays and multiple models. Multiple predictive models are set up within the identification-free adaptive predictive controller and are switched according to the optimal switching instants, determined by the switching law together with the system's operating conditions in real time. The switching law is designed based on the most important characteristic parameter of the systems, and the optimal switching instants are computed using optimal control theory for switched systems. The simulation results show that the proposed method is well suited to such systems, for example the superheated steam temperature systems of electric power plants: it provides excellent control performance, improves disturbance rejection and self-adaptability, and places lower demands on predictive model precision.

  1. Global solution for a kinetic chemotaxis model with internal dynamics and its fast adaptation limit

    NASA Astrophysics Data System (ADS)

    Liao, Jie

    2015-12-01

    A nonlinear kinetic chemotaxis model with internal dynamics incorporating signal transduction and adaptation is considered. This paper is concerned with: (i) the global solution of this model, and (ii) its fast adaptation limit to an Othmer-Dunbar-Alt type model. This limit gives some insight into the molecular origin of chemotaxis behaviour. First, by using the Schauder fixed point theorem, the global existence of a weak solution is proved based on detailed a priori estimates, under quite general assumptions. However, the Schauder theorem does not provide uniqueness, so additional analysis is developed to establish uniqueness. Next, the fast adaptation limit of this model is derived by extracting a weakly convergent subsequence in measure space. For this limit, the first difficulty is to show the concentration effect on the internal state. Another difficulty is the strong compactness argument for the chemical potential, which is essential for passing the nonlinear kinetic equation to the weak limit.

  2. Modeling the fluctuations of the cosmic infrared background: what did we learn from Planck?

    NASA Astrophysics Data System (ADS)

    Bethermin, Matthieu

    2015-08-01

    The CIB is the relic emission of dust heated by young stars across cosmic time. It is a powerful probe of the star formation history of the Universe. The distribution of star-forming galaxies in the large-scale structures is imprinted in the anisotropies of the CIB. They are thus one of the keys to understanding how large-scale structures shaped the evolution of galaxies. Planck measured these anisotropies with an unprecedented accuracy. However, the CIB is an integrated emission and a model is necessary to disentangle the contributions of the different redshifts. Large-scale anisotropies can be interpreted using a linear model. This simple approach relies on a minimal number of hypotheses. We found a star formation history consistent with the extrapolation of the Herschel luminosity function. This rules out any major contribution from faint IR galaxies. We also constrained the mean mass of the dark matter halos hosting the galaxies that emit the CIB. This mass is almost constant from z=4 to z=0, while dark matter halos grew very quickly during this interval of time. The structures hosting star formation are thus not the same at low and high redshifts. This also suggests the existence of a halo mass for which star formation is most efficient. Halo occupation models can describe in detail how dark matter halos are populated by infrared galaxies. We coupled a phenomenological model of galaxy evolution calibrated on Herschel data with a halo model, using the technique of abundance matching. This approach naturally reproduces the CIB anisotropies. We found that the efficiency of halos at converting accreted baryons into stars varies strongly with halo mass, but not with time. This highlights the role played by host halos as regulators of star formation in galaxies. I will finally explain how we could have access to 3D information with future instruments and isolate the highest redshifts more efficiently using intensity mapping of bright sub-millimeter lines. I will

  3. "Your Model Is Predictive-- but Is It Useful?" Theoretical and Empirical Considerations of a New Paradigm for Adaptive Tutoring Evaluation

    ERIC Educational Resources Information Center

    González-Brenes, José P.; Huang, Yun

    2015-01-01

    Classification evaluation metrics are often used to evaluate adaptive tutoring systems-- programs that teach and adapt to humans. Unfortunately, it is not clear how intuitive these metrics are for practitioners with little machine learning background. Moreover, our experiments suggest that existing convention for evaluating tutoring systems may…

  4. Do common mechanisms of adaptation mediate color discrimination and appearance? Contrast adaptation

    NASA Astrophysics Data System (ADS)

    Hillis, James M.; Brainard, David H.

    2007-08-01

    Are effects of background contrast on color appearance and sensitivity controlled by the same mechanism of adaptation? We examined the effects of background color contrast on color appearance and on color-difference sensitivity under well-matched conditions. We linked the data using Fechner's hypothesis that the rate of apparent stimulus change is proportional to sensitivity and examined a family of parametric models of adaptation. Our results show that both appearance and discrimination are consistent with the same mechanism of adaptation.
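
    Fechner's linking hypothesis, as paraphrased in the abstract, can be written compactly. In the notation below (chosen here, not taken from the paper), ψ is the apparent stimulus value, I the stimulus level relative to the background, and s(I) the discrimination sensitivity:

$$\frac{d\psi}{dI} = k\,s(I) \qquad\Longrightarrow\qquad \psi(I) = \psi(I_0) + k\int_{I_0}^{I} s(I')\,dI'$$

    Linking appearance data to discrimination data then amounts to testing whether a single sensitivity function s(I), and hence a single adaptation mechanism, accounts for both data sets.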

  5. Coastal Adaptation Planning for Sea Level Rise and Extremes: A Global Model for Adaptation Decision-making at the Local Level Given Uncertain Climate Projections

    NASA Astrophysics Data System (ADS)

    Turner, D.

    2014-12-01

    Understanding the potential economic and physical impacts of climate change on coastal resources involves evaluating a number of distinct adaptive responses. This paper presents a tool for such analysis, a spatially-disaggregated optimization model for adaptation to sea level rise (SLR) and storm surge, the Coastal Impact and Adaptation Model (CIAM). This decision-making framework fills a gap between very detailed studies of specific locations and overly aggregate global analyses. While CIAM is global in scope, the optimal adaptation strategy is determined at the local level, evaluating over 12,000 coastal segments as described in the DIVA database (Vafeidis et al. 2006). The decision to pursue a given adaptation measure depends on local socioeconomic factors like income, population, and land values and how they develop over time, relative to the magnitude of potential coastal impacts, based on geophysical attributes like inundation zones and storm surge. For example, the model's decision to protect or retreat considers the costs of constructing and maintaining coastal defenses versus those of relocating people and capital to minimize damages from land inundation and coastal storms. Uncertain storm surge events are modeled with a generalized extreme value distribution calibrated to data on local surge extremes. Adaptation is optimized for the near-term outlook, in an "act then learn then act" framework that is repeated over the model time horizon. This framework allows the adaptation strategy to be flexibly updated, reflecting the process of iterative risk management. CIAM provides new estimates of the economic costs of SLR; moreover, these detailed results can be compactly represented in a set of adaptation and damage functions for use in integrated assessment models. Alongside the optimal result, CIAM evaluates suboptimal cases and finds that global costs could increase by an order of magnitude, illustrating the importance of adaptive capacity and coastal policy.
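
    CIAM's actual cost functions are not reproduced in the abstract. As a stylized illustration of the protect-versus-retreat comparison under uncertain surge, the sketch below samples storm surge from a GEV distribution, adds a sea level rise increment, and compares the cost of building and maintaining a defense against relocating most of the exposed value. Every number (GEV parameters, costs, damage slope, planning horizon) is hypothetical.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

# Illustrative segment-level comparison (all numbers hypothetical).
surge = genextreme(c=-0.1, loc=1.2, scale=0.4)   # GEV fitted to local surge extremes (m)
slr = 0.3                                        # sea level rise over the planning period (m)
exposed_value = 5.0e8                            # capital exposed in the inundation zone ($)
damage_per_m = 0.15                              # fraction of value lost per metre of overtopping

def expected_annual_damage(defense_height_m, n=100_000):
    s = surge.rvs(n, random_state=rng) + slr
    overtop = np.maximum(s - defense_height_m, 0.0)
    return np.mean(np.minimum(damage_per_m * overtop, 1.0)) * exposed_value

# Protect: build a 2.5 m defense and carry the residual risk for 30 years.
protect_cost = 2.0e7 + 30 * expected_annual_damage(2.5)
# Retreat: relocate 90% of the exposed value and carry damages on the rest.
retreat_cost = 1.5e8 + 30 * expected_annual_damage(0.0) * 0.1
print(f"protect: {protect_cost:.3e}  retreat: {retreat_cost:.3e}")
print("decision:", "protect" if protect_cost < retreat_cost else "retreat")
```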

  6. Nonlinear model identification and adaptive model predictive control using neural networks.

    PubMed

    Akpan, Vincent A; Hassapis, George D

    2011-04-01

    This paper presents two new adaptive model predictive control algorithms, both consisting of an on-line process identification part and a predictive control part. Both parts are executed at each sampling instant. The predictive control part of the first algorithm is the Nonlinear Model Predictive Control strategy and the control part of the second algorithm is the Generalized Predictive Control strategy. In the identification parts of both algorithms the process model is approximated by a series-parallel neural network structure which is trained by a recursive least squares (ARLS) method. The two control algorithms have been applied to: 1) the temperature control of a fluidized bed furnace reactor (FBFR) of a pilot plant and 2) the auto-pilot control of an F-16 aircraft. The training and validation data of the neural network are obtained from the open-loop simulation of the FBFR and the nonlinear F-16 aircraft models. The identification and control simulation results show that the first algorithm outperforms the second one at the expense of extra computation time. PMID:21281932
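
    The identification step can be illustrated in miniature. The sketch below uses a linear-in-parameters one-step predictor as a stand-in for the paper's series-parallel neural network, trained online with a forgetting-factor recursive least squares update on a toy first-order plant; the plant, signals and hyperparameters are invented.

```python
import numpy as np

class SeriesParallelRLS:
    """One-step-ahead series-parallel predictor y_hat(k) = w . phi(k), where phi
    stacks measured past outputs and inputs; trained online by recursive least
    squares. A linear stand-in for the paper's neural network identifier."""

    def __init__(self, n_params, forgetting=0.99):
        self.w = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3
        self.lam = forgetting

    def update(self, phi, y):
        y_hat = self.w @ phi
        denom = self.lam + phi @ self.P @ phi
        k = self.P @ phi / denom
        self.w += k * (y - y_hat)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return y_hat

# Identify a toy first-order plant y(k) = 0.9 y(k-1) + 0.2 u(k-1) from data.
rng = np.random.default_rng(3)
model, y_prev = SeriesParallelRLS(n_params=2), 0.0
for k in range(200):
    u = rng.uniform(-1, 1)
    y = 0.9 * y_prev + 0.2 * u + 0.01 * rng.standard_normal()
    model.update(np.array([y_prev, u]), y)   # series-parallel: uses the measured y_prev
    y_prev = y
print("estimated [a, b]:", model.w.round(3))
```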

  7. Cosmic microwave background and large-scale structure constraints on a simple quintessential inflation model

    SciTech Connect

    Rosenfeld, Rogerio; Frieman, Joshua A.; /Fermilab /Chicago U., Astron. Astrophys. Ctr.

    2006-11-01

    We derive constraints on a simple quintessential inflation model, based on a spontaneously broken {Phi}{sup 4} theory, imposed by the Wilkinson Microwave Anisotropy Probe three-year data (WMAP3) and by galaxy clustering results from the Sloan Digital Sky Survey (SDSS). We find that the scale of symmetry breaking must be larger than about 3 Planck masses in order for inflation to generate acceptable values of the scalar spectral index and of the tensor-to-scalar ratio. We also show that the resulting quintessence equation-of-state can evolve rapidly at recent times and hence can potentially be distinguished from a simple cosmological constant in this parameter regime.

  8. Semantic Description of Educational Adaptive Hypermedia Based on a Conceptual Model

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Retalis, Symeon; Papaspyrou, Nikolaos

    2004-01-01

    The role of conceptual modeling in Educational Adaptive Hypermedia Applications (EAHA) is especially important. A conceptual model of an educational application depicts the instructional solution that is implemented, containing information about concepts that must be acquired by learners, tasks in which learners must be involved and resources…

  9. Enhancing Mentors' Effectiveness: The Promise of the "Adaptive Mentorship[C]" Model

    ERIC Educational Resources Information Center

    Ralph, Edwin George; Walker, Keith D.

    2010-01-01

    The "Adaptive Mentorship[C]" ("AM") model is described and implications are raised for its wider implementation. The researchers derived the AM model from earlier contingency leadership approaches; and during the last two decades, they have further refined AM through application and research. They suggest the benefits and transferability of AM to…

  10. A Standard-Based Model for Adaptive E-Learning Platform for Mauritian Academic Institutions

    ERIC Educational Resources Information Center

    Kanaksabee, P.; Odit, M. P.; Ramdoyal, A.

    2011-01-01

    The key aim of this paper is to introduce a standard-based model for an adaptive e-learning platform for Mauritian academic institutions and to investigate the conditions and tools required to implement this model. The main strengths of the system are that it allows collaborative learning and communication among users, and reduces considerable paperwork.…

  11. Application of positive-real functions in hyperstable discrete model-reference adaptive system design.

    NASA Technical Reports Server (NTRS)

    Karmarkar, J. S.

    1972-01-01

    Proposal of an algorithmic procedure, based on mathematical programming methods, to design compensators for hyperstable discrete model-reference adaptive systems (MRAS). The objective of the compensator is to render the MRAS insensitive to initial parameter estimates within a maximized hypercube in the model parameter space.

  12. Coping with Relationship Stressors: The Impact of Different Working Models of Attachment and Links to Adaptation

    ERIC Educational Resources Information Center

    Seiffge-Krenke, Inge

    2006-01-01

    The study explores the role of working models of attachment in the process of coping with relationship stressors with a focus on long-term adaptation. In a 7-year longitudinal study of 112 participants, stress and coping were assessed during adolescence and emerging adulthood. In addition, working models of attachment were assessed by employing…

  13. Evaluation of the Stress Adjustment and Adaptation Model among Families Reporting Economic Pressure

    ERIC Educational Resources Information Center

    Vandsburger, Etty; Biggerstaff, Marilyn A.

    2004-01-01

    This research evaluates the Stress Adjustment and Adaptation Model (double ABCX model), examining the effects of resiliency resources on family functioning when families experience economic pressure. Families (N = 128) with incomes at or below the poverty line from a rural area of a southern state completed measures of perceived economic pressure,…

  14. Modeling Speed-Accuracy Tradeoff in Adaptive System for Practicing Estimation

    ERIC Educational Resources Information Center

    Nižnan, Juraj

    2015-01-01

    Estimation is useful in situations where an exact answer is not as important as a quick answer that is good enough. A web-based adaptive system for practicing estimates is currently being developed. We propose a simple model for estimating a student's latent skill of estimation. This model combines a continuous measure of correctness and response…

  15. Computerized Adaptive Testing Using a Class of High-Order Item Response Theory Models

    ERIC Educational Resources Information Center

    Huang, Hung-Yu; Chen, Po-Hsi; Wang, Wen-Chung

    2012-01-01

    In the human sciences, a common assumption is that latent traits have a hierarchical structure. Higher order item response theory models have been developed to account for this hierarchy. In this study, computerized adaptive testing (CAT) algorithms based on these kinds of models were implemented, and their performance under a variety of…

  16. Stable indirect adaptive switching control for fuzzy dynamical systems based on T-S multiple models

    NASA Astrophysics Data System (ADS)

    Sofianos, Nikolaos A.; Boutalis, Yiannis S.

    2013-08-01

    A new indirect adaptive switching fuzzy control method for fuzzy dynamical systems, based on Takagi-Sugeno (T-S) multiple models is proposed in this article. Motivated by the fact that indirect adaptive control techniques suffer from poor transient response, especially when the initialisation of the estimation model is highly inaccurate and the region of uncertainty for the plant parameters is very large, we present a fuzzy control method that utilises the advantages of multiple models strategy. The dynamical system is expressed using the T-S method in order to cope with the nonlinearities. T-S adaptive multiple models of the system to be controlled are constructed using different initial estimations for the parameters while one feedback linearisation controller corresponds to each model according to a specified reference model. The controller to be applied is determined at every time instant by the model which best approximates the plant using a switching rule with a suitable performance index. Lyapunov stability theory is used in order to obtain the adaptive law for the multiple models parameters, ensuring the asymptotic stability of the system while a modification in this law keeps the control input away from singularities. Also, by introducing the next best controller logic, we avoid possible infeasibilities in the control signal. Simulation results are presented, indicating the effectiveness and the advantages of the proposed method.
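
    The abstract does not state the performance index used by the switching rule. The sketch below uses a common multiple-model form, a weighted sum of the current and exponentially discounted past identification errors, and selects the controller attached to the model with the smallest index; the weights and error histories are made up.

```python
import numpy as np

def switching_index(errors, alpha=1.0, beta=0.5, lam=0.9):
    """Performance index J_i = alpha*e_i(t)^2 + beta*sum_k lam^(t-k)*e_i(k)^2,
    a common multiple-model switching criterion (illustrative weights)."""
    e = np.asarray(errors, dtype=float)
    weights = lam ** np.arange(len(e) - 1, -1, -1)   # most recent error weighted 1
    return alpha * e[-1] ** 2 + beta * float(weights @ e ** 2)

# Identification errors of three candidate models over the last few steps.
history = {
    "model_1": [0.8, 0.7, 0.9, 0.8],
    "model_2": [0.5, 0.3, 0.2, 0.1],   # currently the best fit to the plant
    "model_3": [0.4, 0.5, 0.6, 0.7],
}
indices = {name: switching_index(e) for name, e in history.items()}
active = min(indices, key=indices.get)
print(indices, "-> apply the controller of", active)
```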

  17. Adapting STePS, an Adult Team Problem Solving Model, for Use with Sixth Grade Students.

    ERIC Educational Resources Information Center

    Sheive, L. T.; And Others

    Structured Team Problem Solving (STePS) is a problem solving model for shared decision making. This project uses the model to discover if children can learn using this method, and what adaptations would be necessary for child use. Sixth grade students in their social studies class worked together in teams (6-8) to identify what they already think…

  18. Towards Increased Relevance: Context-Adapted Models of the Learning Organization

    ERIC Educational Resources Information Center

    Örtenblad, Anders

    2015-01-01

    Purpose: The purposes of this paper are to take a closer look at the relevance of the idea of the learning organization for organizations in different generalized organizational contexts; to open up for the existence of multiple, context-adapted models of the learning organization; and to suggest a number of such models.…

  19. Columbia River Statistical Update Model, Version 4. 0 (COLSTAT4): Background documentation and user's guide

    SciTech Connect

    Whelan, G.; Damschen, D.W.; Brockhaus, R.D.

    1987-08-01

    Daily-averaged temperature and flow information on the Columbia River just downstream of Priest Rapids Dam and upstream of river mile 380 were collected and stored in a data base. The flow information corresponds to discharges that were collected daily from October 1, 1959, through July 28, 1986. The temperature information corresponds to values that were collected daily from January 1, 1965, through May 27, 1986. The computer model, COLSTAT4 (Columbia River Statistical Update - Version 4.0 model), uses the temperature-discharge data base to statistically analyze temperature and flow conditions by computing the frequency of occurrence and duration of selected temperatures and flow rates for the Columbia River. The COLSTAT4 code analyzes the flow and temperature information in a sequential time frame (i.e., a continuous analysis over a given time period); it also analyzes this information in a seasonal time frame (i.e., a periodic analysis over a specific season from year to year). A provision is included to enable the user to edit and/or extend the data base of temperature and flow information. This report describes the COLSTAT4 code and the information contained in its data base.
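
    A simplified analogue of the frequency-of-occurrence and duration analysis can be sketched as follows: given a daily series and a threshold, compute the fraction of days above the threshold and the lengths of consecutive exceedance spells. The synthetic temperature series and the 16 °C threshold below are invented for illustration.

```python
import numpy as np

def exceedance_stats(daily_values, threshold):
    """Frequency of occurrence and run durations above a threshold
    (a simplified analogue of a frequency/duration analysis)."""
    above = np.asarray(daily_values) > threshold
    freq = above.mean()
    runs, count = [], 0            # lengths of consecutive exceedance spells
    for flag in above:
        if flag:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return freq, runs

rng = np.random.default_rng(4)
temps = 12 + 6 * np.sin(np.linspace(0, 4 * np.pi, 730)) + rng.normal(0, 1, 730)
freq, runs = exceedance_stats(temps, threshold=16.0)
print(f"fraction of days above 16 C: {freq:.2f}, longest spell: {max(runs)} days")
```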

  20. Modelling the background aerosol climatologies (1989-2010) for the Mediterranean basin

    NASA Astrophysics Data System (ADS)

    Jimenez-Guerrero, Pedro; Jerez, Sonia

    2014-05-01

    Aerosol levels and composition are influenced by multiple atmospheric physico-chemical processes that can affect them from their release point (as primary aerosol) or via gas-to-particle conversion processes that give rise to secondary aerosols. The contribution of the various aerosol sources, the role of long-range transport and the contribution of primary and secondary particulate matter to the ambient aerosol concentrations over Europe are not well known (Kulmala et al., 2009). Focusing on the Mediterranean, Querol et al. (2009) point out that there is a lack of studies on the variability of particulate matter (PM) along the Mediterranean basin, which are necessary for understanding the special features that differentiate aerosol processes between the western, eastern and central Mediterranean basins. In this perspective, modelling systems based on state-of-science chemistry transport models (CTMs) are fundamental tools for investigating the transport and chemistry of pollutants at different scales and for assessing the impact of emissions on aerosol levels and composition. Therefore, this study aims to summarise the results on the levels and chemical composition of aerosols along the Mediterranean basin, highlighting the marked gradient between the western, central and eastern coasts. Special attention is paid to the analysis of the seasonality of PM composition and levels. For this purpose, the regional modelling system WRF-CHIMERE-EMEP has been implemented to conduct a full transient simulation for the ERA-Interim period (1989-2010) using year-to-year changing EMEP emissions. The domain of study covers Europe with a horizontal resolution of 25 km and a vertical resolution of 23 layers in the troposphere; however, the analysis focuses on the Mediterranean area. The PM levels and composition are compared to the measured values reported by the EMEP network, showing good agreement with observations for both the western and eastern Mediterranean. The modelling results for