NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time differences, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To reduce the high updating frequency of the model, a confidence value is introduced, which is updated adaptively according to the results of model performance assessment; the model is updated only when the confidence value changes. The proposed method has been used to predict the 4-carboxy-benzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively, improves prediction accuracy by making use of process information, and reflects the process characteristics accurately.
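A minimal sketch of the scheme as described, assuming first-order time differences and a simple error-driven confidence rule; the class, threshold, and assessment logic are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: time-difference PLS soft sensor with confidence-triggered
# updating. The confidence decay rule and threshold below are assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def time_difference(a):
    """First-order time differences of consecutive samples."""
    return np.diff(a, axis=0)

class TDPLSSoftSensor:
    def __init__(self, n_components=3, threshold=0.5):
        self.pls = PLSRegression(n_components=n_components)
        self.confidence = 1.0          # adaptively updated on assessment
        self.threshold = threshold     # rebuild model when confidence falls below

    def fit(self, X_window, y_window):
        # train on time differences of a moving window of process data
        self.pls.fit(time_difference(X_window), time_difference(y_window))

    def needs_update(self, X_recent, y_recent):
        # assess recent performance and shrink confidence on large errors
        err = np.mean((self.pls.predict(time_difference(X_recent)).ravel()
                       - time_difference(y_recent).ravel()) ** 2)
        self.confidence *= float(np.exp(-err))
        return self.confidence < self.threshold
```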
Finite element modelling and updating of a lively footbridge: The complete process
NASA Astrophysics Data System (ADS)
Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul
2007-03-01
The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The process of updating identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although this linking is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process, consisting of the four phases, is outlined and brief theory is presented as appropriate. Then, the procedure is implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed; this interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical model represents the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. Conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, in which geometry shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for further analysis, as it represents the manufactured part more accurately. Simulations performed with this updated model and its accurate geometry will therefore yield more reliable results.
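As background, a small sketch of the Modal Assurance Criterion matrix whose diagonal terms the GRSM optimization maximizes; mode shapes are assumed to be stored column-wise, and the FE solver call in the closing comment is a hypothetical stand-in:

```python
# MAC(i, j) = |phi_t_i^H phi_a_j|^2 / ((phi_t_i^H phi_t_i)(phi_a_j^H phi_a_j))
import numpy as np

def mac_matrix(phi_test, phi_fe):
    """MAC between experimental (phi_test) and FE (phi_fe) mode shape sets."""
    num = np.abs(phi_test.conj().T @ phi_fe) ** 2
    den = np.outer(np.sum(np.abs(phi_test) ** 2, axis=0),
                   np.sum(np.abs(phi_fe) ** 2, axis=0))
    return num / den

# An updating objective can then maximize the diagonal, e.g.:
# objective(params) = -np.trace(mac_matrix(phi_test, solve_modes(params)))
```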
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
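A toy contrast of the two rules for a well-mixed population, assuming positive fitness values; the inverse-fitness death weighting is one common way to bias death selection toward unfit individuals:

```python
import random

def birth_death_step(strategies, fitness):
    # Moran birth-death: birth proportional to fitness, death uniformly random
    parent = random.choices(range(len(strategies)), weights=fitness)[0]
    victim = random.randrange(len(strategies))
    strategies[victim] = strategies[parent]

def death_birth_step(strategies, fitness):
    # biased death-birth: death biased toward the unfit, birth uniformly random
    inverse = [1.0 / f for f in fitness]
    victim = random.choices(range(len(strategies)), weights=inverse)[0]
    parent = random.randrange(len(strategies))
    strategies[victim] = strategies[parent]
```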
Electronic Education System Model-2
ERIC Educational Resources Information Center
Güllü, Fatih; Kuusik, Rein; Laanpere, Mart
2015-01-01
In this study we present the new EES Model-2, extended from the EES model, for more productive implementation in e-learning process design and modelling in higher education. Most of the updates relate to the uppermost instructional layer. We updated the learning-process objects of that layer to adapt the educational process for young and old people,…
Timing Interactions in Social Simulations: The Voter Model
NASA Astrophysics Data System (ADS)
Fernández-Gracia, Juan; Eguíluz, Víctor M.; Miguel, Maxi San
The recent availability of huge, high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents, the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well-defined characteristic interevent time. Taking the voter model as a paradigmatic opinion model, we review the standard update rules and propose two new update rules that are able to account for heterogeneous activity patterns. Under the new update rules, each node is updated with a probability that depends on the time since the node's last event, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. Furthermore, under the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite-size limit, while under the endogenous update rule a coarsening process drives the system toward consensus configurations.
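A minimal sketch (not the authors' code) of an exogenous update rule of this kind; the 1/age activation probability is one choice that yields power-law interevent times, and the constant b is illustrative:

```python
import random

def exogenous_voter_step(states, neighbors, last_event, t, b=0.1):
    """One asynchronous update attempt. Activation probability decays with
    the node's age (time since its last event). For the endogenous variant,
    last_event[i] would be reset only when states[i] actually changes."""
    i = random.randrange(len(states))
    age = t - last_event[i]
    if random.random() < min(1.0, b / max(age, 1)):
        last_event[i] = t                                # attempt counts as event
        states[i] = states[random.choice(neighbors[i])]  # copy a random neighbor
```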
Recent Updates of A Multi-Phase Transport (AMPT) Model
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
2008-10-01
We will present recent updates to the AMPT model, a Monte Carlo transport model for high-energy heavy ion collisions, since its first public release in 2004 and the corresponding detailed descriptions in Phys. Rev. C 72, 064901 (2005). The updates often result from user requests. Some of these updates expand the physics processes or descriptions in the model, while others improve its usability, such as providing the initial parton distributions or helping avoid crashes on some operating systems. We will also explain how the AMPT model is being maintained and updated.
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these phenomena introduce errors into flood forecasting models that lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, owing to the lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are considered the main source of uncertainty in the forecasting model. Hence, to address these errors, several methods have been proposed to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error common in hydrological modelling: timing, shape and volume. The new lumped ERM model was selected for this study to evaluate whether its parameters can be updated to cope with these errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.
Build-up Approach to Updating the Mock Quiet Spike (TM) Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
A crucial part of aircraft design is ensuring that the required flutter margin is satisfied. A trustworthy flutter analysis, which begins with an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine-tuning specific stiffness parameters until the analytical results matched test data, a time-consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process more efficiently. Recently, this code was applied to the F-15B/Quiet Spike (TM) (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between the aircraft and the test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike (TM) project to within 1 percent error in frequency, with modal assurance criterion values ranging from 88.51 to 99.42 percent.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Ho-Ling; Davis, Stacy Cagle
2009-12-01
This report documents the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) of the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first was conducted during 2002-2003) since they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to the removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components of the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating the estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitations, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated in the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update. Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent possible, to the current FHWA estimates on the overall totals. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of the fuel use estimates directly affects the data quality of many tables published in Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model, some challenges remain due to aging model equations and the discontinuation of data sources.
Disruption of the Right Temporoparietal Junction Impairs Probabilistic Belief Updating.
Mengotti, Paola; Dombert, Pascasie L; Fink, Gereon R; Vossel, Simone
2017-05-31
Generating and updating probabilistic models of the environment is a fundamental modus operandi of the human brain. Although crucial for various cognitive functions, the neural mechanisms of these inference processes remain to be elucidated. Here, we show the causal involvement of the right temporoparietal junction (rTPJ) in updating probabilistic beliefs, and we provide new insights into the chronometry of the process by combining online transcranial magnetic stimulation (TMS) with computational modeling of behavioral responses. Female and male participants performed a modified location-cueing paradigm, where false information about the percentage of cue validity (%CV) was provided in half of the experimental blocks to prompt updating of prior expectations. Online double-pulse TMS over rTPJ 300 ms (but not 50 ms) after target appearance selectively decreased participants' updating of false prior beliefs concerning %CV, reflected in a decreased learning rate of a Rescorla-Wagner model. Online TMS over rTPJ also affected participants' explicit beliefs, causing them to overestimate %CV. These results confirm the involvement of rTPJ in the updating of probabilistic beliefs, thereby advancing our understanding of this area's function during cognitive processing. SIGNIFICANCE STATEMENT Contemporary views propose that the brain maintains probabilistic models of the world to minimize surprise about sensory inputs. Here, we provide evidence that the right temporoparietal junction (rTPJ) is causally involved in this process. Because neuroimaging has suggested that rTPJ is implicated in divergent cognitive domains, the demonstration of an involvement in updating internal models provides a novel unifying explanation for these findings. We used computational modeling to characterize how participants change their beliefs after new observations. By interfering with rTPJ activity through online transcranial magnetic stimulation, we showed that participants were less able to update prior beliefs with TMS delivered at 300 ms after target onset.
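A minimal sketch of the Rescorla-Wagner update through which the learning rate was estimated; the variable names and numbers are illustrative:

```python
def rescorla_wagner(belief, outcome, alpha):
    """belief <- belief + alpha * (outcome - belief); alpha is the learning rate."""
    return belief + alpha * (outcome - belief)

# e.g., revising an estimate of cue validity (%CV) from trial outcomes,
# where 1 = valid cue and 0 = invalid cue:
cv_estimate = 0.9
for outcome in [1, 0, 0, 1, 0]:
    cv_estimate = rescorla_wagner(cv_estimate, outcome, alpha=0.1)
```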
Build-Up Approach to Updating the Mock Quiet Spike Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
When a new aircraft is designed or a modification is made to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flightworthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEM) that have been validated by or correlated to ground vibration test (GVT) data. Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of the structure. In the past, the method used to update a model to test data was trial and error. This is an inefficient method: running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (as defined by the analyst's updating requirements), the updating process is complete. If the match does not meet the updating requirements, the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEMs and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process. DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from GVT to create the target model. It also reads the current analytical model, as well as the design variables and their upper and lower limits. It performs a modal analysis on this model and modifies it to create an updated model that has mode shapes and frequencies similar to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criterion (MAC) values that allow for quantified comparison of the updated model against the target model. A recent application of this code is the F-15B supersonic flight testing platform. NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated, and a GVT is performed on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test article FEM is correlated to this final GVT.
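A conceptual sketch of the kind of optimization such a code automates; this is not DFRC's Mode Matching Code, and the 3-DOF spring-mass chain stands in for the FEM solely so the example runs:

```python
import numpy as np
from scipy.optimize import minimize

def modal_frequencies(k):
    """Toy stand-in for the FEM: a grounded 3-DOF spring-mass chain with unit
    masses whose stiffnesses k are the design variables."""
    K = np.array([[k[0] + k[1], -k[1],        0.0],
                  [-k[1],       k[1] + k[2], -k[2]],
                  [0.0,        -k[2],         k[2]]])
    return np.sqrt(np.abs(np.linalg.eigvalsh(K))) / (2 * np.pi)

f_target = modal_frequencies(np.array([1.2, 0.8, 1.1]))  # pretend GVT data

def objective(k):
    # relative frequency error between the analytical model and the GVT target
    return np.sum(((modal_frequencies(k) - f_target) / f_target) ** 2)

# design variables constrained to analyst-chosen upper and lower limits
result = minimize(objective, x0=np.ones(3), bounds=[(0.1, 5.0)] * 3)
```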
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Construction and Updating of Event Models in Auditory Event Processing
ERIC Educational Resources Information Center
Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank
2018-01-01
Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose that changes in the sensory information trigger updating processes of the current event model. Increased encoding effort finally leads to a memory benefit at event…
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update and general system update, are described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment, and it can support the decision-making processes of operators and managers in nuclear power plants.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace and related engineering fields. The basic concept behind this technique is to update the numerical model so that it closely matches experimental data obtained from a real or prototype test structure. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen that correlates the numerical model with the experimental results. The variables for the updating can be material properties, geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that the numerical models can be brought into close agreement with the experimental models.
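A compact firefly-algorithm sketch under common defaults; beta0, gamma, and alpha below are illustrative choices, not values from the paper:

```python
import numpy as np

def firefly_minimize(cost, dim, n=20, iters=100, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n, dim))             # firefly positions
    for _ in range(iters):
        f = np.array([cost(xi) for xi in x])         # lower cost = brighter
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # move i toward brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * rng.normal(size=dim)
        alpha *= 0.97                                # shrink the random walk
    return x[np.argmin([cost(xi) for xi in x])]

# e.g., updating one stiffness-like parameter so a toy model's tip deflection
# (1/k) matches a measured value of 2.5 (illustrative numbers):
best_k = firefly_minimize(lambda p: (1.0 / max(p[0], 1e-6) - 2.5) ** 2, dim=1)
```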
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a case mix different from the original validation population may result in inaccurate risk predictions. The aim was to demonstrate how clinical information can direct the updating of a prediction model and the development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of (1) changing the definition of an existing predictor, (2) reestimating the regression coefficient of a predictor, and (3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle missing values in real time, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than that of the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy for dealing with missing predictor values at the time of risk calculation. Extensive knowledge of local clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
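A hedged sketch of this kind of revision on a logistic prediction model, assuming the original model's linear predictor is available; the function and variable names are illustrative, and statsmodels is used for the refit:

```python
import numpy as np
import statsmodels.api as sm

def update_logistic_model(lp_original, new_predictor, y):
    """Refit outcome y on the original linear predictor plus one new
    predictor: the slope on lp_original recalibrates the old model, while the
    extra column adds a coefficient (akin to update steps 2 and 3 above)."""
    X = sm.add_constant(np.column_stack([lp_original, new_predictor]))
    return sm.Logit(y, X).fit(disp=0)
```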
Construction and updating of event models in auditory event processing.
Huff, Markus; Maurer, Annika E; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank
2018-02-01
Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose that changes in the sensory information trigger updating processes of the current event model. Increased encoding effort finally leads to a memory benefit at event boundaries. Evidence from reading time studies (increased reading times with increasing amount of change) suggests that updating of event models is incremental. We present results from 5 experiments that studied event processing (including memory formation processes and reading times) using an audio drama as well as a transcript thereof as stimulus material. Experiments 1a and 1b replicated the event boundary advantage effect for memory. In contrast to recent evidence from studies using visual stimulus material, Experiments 2a and 2b found no support for incremental updating of recognition memory with normally sighted and blind participants. In Experiment 3, we replicated Experiment 2a using a written transcript of the audio drama as stimulus material, allowing us to disentangle encoding and retrieval processes. Our results indicate incremental updating processes at encoding (as measured with reading times). At the same time, we again found recognition performance to be unaffected by the amount of change. We discuss these findings in light of current event cognition theories.
Dissociable effects of surprise and model update in parietal and anterior cingulate cortex
O’Reilly, Jill X.; Schüffelgen, Urs; Cuell, Steven F.; Behrens, Timothy E. J.; Mars, Rogier B.; Rushworth, Matthew F. S.
2013-01-01
Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (I_S) and Kullback–Leibler divergence (D_KL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/I_S and updating/D_KL. Although surprise/I_S was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (D_KL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating, with an early positive effect of surprise and a late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically. The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena, such as extinction, Balint syndrome, and neglect. PMID:23986499
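A minimal numeric sketch of the formal distinction drawn here, over a discrete outcome space with illustrative probabilities:

```python
import numpy as np

def shannon_surprise(p_obs):
    """I_S = -log p(observation under the current predictive model)."""
    return -np.log(p_obs)

def kl_update(p_new, p_old):
    """D_KL(new || old): how far the predictive model itself moved."""
    p_new, p_old = np.asarray(p_new, float), np.asarray(p_old, float)
    return float(np.sum(p_new * np.log(p_new / p_old)))

prior = [0.8, 0.2]                  # e.g., two saccade target probabilities
posterior = [0.6, 0.4]              # beliefs after an observation
print(shannon_surprise(prior[1]))   # large for a low-probability target
print(kl_update(posterior, prior))  # nonzero only if the model is revised
```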
Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.
ERIC Educational Resources Information Center
Gray, Susan; And Others
An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Alawi, D. A. J.
2017-09-01
Resistance spot welding (RSW) is an efficient joining method commonly used for sheet metal and is one of the oldest spot welding processes used in industry, especially in the automotive sector. RSW involves the application of heat and pressure over a controlled time to join two or more metal sheets at a localized area, and it is claimed to be the most efficient welding process in metal fabrication. The purpose of this project is to perform model updating of an RSW plate structure joining mild steel 1010 and stainless steel 304. In order to do the updating, normal-mode finite element analysis (FEA) and experimental modal analysis (EMA) have been carried out. Results show that the discrepancies in natural frequency between FEA and EMA are below 10%. A sensitivity-based model updating is evaluated in order to determine which parameters are influential in this structural dynamic modification. The Young's modulus and density of both materials are found to be significant parameters for the model updating. In conclusion, after performing model updating, the total average error of the dissimilar RSW plate model is significantly improved.
NASA Astrophysics Data System (ADS)
Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.
2018-02-01
This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.
Proposed reporting model update creates dialogue between FASB and not-for-profits.
Mosrie, Norman C
2016-04-01
Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.
Rapid Automated Aircraft Simulation Model Updating from Flight Data
NASA Technical Reports Server (NTRS)
Brian, Geoff; Morelli, Eugene A.
2011-01-01
Techniques to identify aircraft aerodynamic characteristics from flight measurements and compute corrections to an existing simulation model of a research aircraft were investigated. The purpose of the research was to develop a process enabling rapid automated updating of aircraft simulation models using flight data and apply this capability to all flight regimes, including flight envelope extremes. The process presented has the potential to improve the efficiency of envelope expansion flight testing, revision of control system properties, and the development of high-fidelity simulators for pilot training.
Remaking Memories: Reconsolidation Updates Positively Motivated Spatial Memory in Rats
ERIC Educational Resources Information Center
Jones, Bethany; Bukoski, Elizabeth; Nadel, Lynn; Fellous, Jean-Marc
2012-01-01
There is strong evidence that reactivation of a memory returns it to a labile state, initiating a restabilization process termed reconsolidation, which allows for updating of the memory. In this study we investigated reactivation-dependent updating using a new positively motivated spatial task in rodents that was designed specifically to model a…
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
Adapting to change: The role of the right hemisphere in mental model building and updating.
Filipowicz, Alex; Anderson, Britt; Danckert, James
2016-09-01
We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating.
Annual Report: Carbon Capture Simulation Initiative (CCSI) (30 September 2013)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, David C.; Syamlal, Madhava; Cottrell, Roger
2013-09-30
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that is developing and deploying state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models, with uncertainty quantification (UQ), optimization, risk analysis and decision making capabilities. The CCSI Toolset incorporates commercial and open-source software currently in use by industry and is also developing new software tools as necessary to fill technology gaps identified during execution of the project. Ultimately, the CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. CCSI is led by the National Energy Technology Laboratory (NETL) and leverages the Department of Energy (DOE) national laboratories' core strengths in modeling and simulation, bringing together the best capabilities at NETL, Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Lawrence Livermore National Laboratory (LLNL), and Pacific Northwest National Laboratory (PNNL). The CCSI's industrial partners provide representation from the power generation industry, equipment manufacturers, technology providers and engineering and construction firms. The CCSI's academic participants (Carnegie Mellon University, Princeton University, West Virginia University, Boston University and the University of Texas at Austin) bring unparalleled expertise in multiphase flow reactors, combustion, process synthesis and optimization, planning and scheduling, and process control techniques for energy processes. During Fiscal Year (FY) 13, CCSI announced the initial release of its first set of computational tools and models during the October 2012 meeting of its Industry Advisory Board. This initial release led to five companies licensing the CCSI Toolset under a Test and Evaluation Agreement this year. By the end of FY13, the CCSI Technical Team had completed development of an updated suite of computational tools and models. The list below summarizes the new and enhanced toolset components that were released following comprehensive testing during October 2013.
1. FOQUS, the Framework for Optimization and Quantification of Uncertainty and Sensitivity. The package includes the FOQUS graphical user interface (GUI), a simulation-based optimization engine, the Turbine Client, and heat integration capabilities. There is also an updated simulation interface and new configuration GUI for connecting Aspen Plus or Aspen Custom Modeler (ACM) simulations to FOQUS and the Turbine Science Gateway.
2. A new MFIX-based computational fluid dynamics (CFD) model to predict particle attrition.
3. A new dynamic reduced model (RM) builder, which generates computationally efficient RMs of the behavior of a dynamic system.
4. A completely re-written version of the algebraic surrogate model builder for optimization (ALAMO). The new version is several orders of magnitude faster than the initial release and eliminates the MATLAB dependency.
5. A new suite of high resolution filtered models for the hydrodynamics associated with horizontal cylindrical objects in a flow path.
6. The new Turbine Science Gateway (Cluster), which supports FOQUS for running multiple simulations for optimization or UQ using a local computer or cluster.
7. A new statistical tool (BSS-ANOVA-UQ) for calibration and validation of CFD models.
8. A new basic data submodel in Aspen Plus format for a representative high-viscosity capture solvent, the 2-MPZ system.
9. An updated RM tool for CFD (REVEAL) that can create an RM from MFIX. A new lightweight, stand-alone version will be available in late 2013.
10. An updated RM integration tool to convert the RM from REVEAL into a CAPE-OPEN or ACM model for use in a process simulator.
11. An updated suite of unified steady-state and dynamic process models for solid sorbent carbon capture, including bubbling fluidized bed and moving bed reactors.
12. An updated and unified set of compressor models, including a steady-state design point model and a dynamic model with surge detection.
13. A new framework for the synthesis and optimization of coal oxycombustion power plants using advanced optimization algorithms. This release focuses on modeling and optimization of a cryogenic air separation unit (ASU).
14. A new technical risk model in spreadsheet format.
15. An updated version of the sorbent kinetic/equilibrium model for parameter estimation for the 1st generation sorbent model.
16. An updated process synthesis superstructure model to determine optimal process configurations utilizing surrogate models from ALAMO for adsorption and regeneration in a solid sorbent process.
17. Validation models for the NETL Carbon Capture Unit utilizing sorbent AX. Additional validation models will be available for sorbent 32D in 2014.
18. An updated hollow fiber membrane model and system example for carbon capture.
19. An updated reference power plant model in Thermoflex that includes additional steam extraction and reinjection points to enable the heat integration module.
20. An updated financial risk model in spreadsheet format.
Data update in a land information network
NASA Astrophysics Data System (ADS)
Mullin, Robin C.
1988-01-01
The on-going update of data exchanged in a land information network is examined. In the past, major developments have been undertaken to enable the exchange of data between land information systems. A model of a land information network and the data update process have been developed. Based on these, a functional description of the database and software to perform data updating is presented. A prototype of the data update process was implemented using the ARC/INFO geographic information system. This was used to test four approaches to data updating, i.e., bulk, block, incremental, and alert updates. A bulk update is performed by replacing a complete file with an updated file. A block update requires that the data set be partitioned into blocks. When an update occurs, only the blocks which are affected need to be transferred. An incremental update approach records each feature which is added or deleted and transmits only the features needed to update the copy of the file. An alert is a marker indicating that an update has occurred. It can be placed in a file to warn a user that if he is active in an area containing markers, updated data is available. The four approaches have been tested using a cadastral data set.
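A small sketch of the incremental approach, in which only logged feature additions and deletions are transmitted and applied to the local copy; the record structure is an assumption:

```python
def apply_incremental_update(local_copy, change_log):
    """change_log: iterable of (action, feature_id, feature) tuples,
    where action is 'add' or 'delete'."""
    for action, fid, feature in change_log:
        if action == 'add':
            local_copy[fid] = feature
        elif action == 'delete':
            local_copy.pop(fid, None)
    return local_copy

parcels = {101: 'polygon-A', 102: 'polygon-B'}
log = [('delete', 102, None), ('add', 103, 'polygon-C')]
parcels = apply_incremental_update(parcels, log)   # {101: ..., 103: ...}
```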
Egocentric-updating during navigation facilitates episodic memory retrieval.
Gomez, Alice; Rousset, Stéphane; Baciu, Monica
2009-11-01
Influential models suggest that spatial processing is essential for episodic memory [O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. London: Oxford University Press]. However, although several types of spatial relations exist, such as allocentric (i.e. object-to-object relations), egocentric (i.e. static object-to-self relations) or egocentric updated on navigation information (i.e. self-to-environment relations in a dynamic way), usually only allocentric representations are described as potentially subserving episodic memory [Nadel, L., & Moscovitch, M. (1998). Hippocampal contributions to cortical plasticity. Neuropharmacology, 37(4-5), 431-439]. This study proposes to confront the allocentric representation hypothesis with an egocentric updated with self-motion representation hypothesis. In the present study, we explored retrieval performance in relation to these two types of spatial processing levels during learning. Episodic remembering has been assessed through Remember responses in a recall and in a recognition task, combined with a "Remember-Know-Guess" paradigm [Gardiner, J. M. (2001). Episodic memory and autonoetic consciousness: A first-person approach. Philosophical Transactions of the Royal Society B: Biological Sciences, 356(1413), 1351-1361] to assess the autonoetic level of responses. Our results show that retrieval performance was significantly higher when encoding was performed in the egocentric-updated condition. Although egocentric updated with self-motion and allocentric representations are not mutually exclusive, these results suggest that egocentric updating processing facilitates remember responses more than allocentric processing. The results are discussed according to Burgess and colleagues' model of episodic memory [Burgess, N., Becker, S., King, J. A., & O'Keefe, J. (2001). Memory for events and their spatial context: models and experiments. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 356(1413), 1493-1503].
Preliminary Model of Porphyry Copper Deposits
Berger, Byron R.; Ayuso, Robert A.; Wynn, Jeffrey C.; Seal, Robert R.
2008-01-01
The U.S. Geological Survey (USGS) Mineral Resources Program develops mineral-deposit models for application in USGS mineral-resource assessments and other mineral resource-related activities within the USGS as well as for nongovernmental applications. Periodic updates of models are published in order to incorporate new concepts and findings on the occurrence, nature, and origin of specific mineral deposit types. This update is a preliminary model of porphyry copper deposits that begins an update process of porphyry copper models published in USGS Bulletin 1693 in 1986. This update includes a greater variety of deposit attributes than were included in the 1986 model as well as more information about each attribute. It also includes an expanded discussion of geophysical and remote sensing attributes and tools useful in resource evaluations, a summary of current theoretical concepts of porphyry copper deposit genesis, and a summary of the environmental attributes of unmined and mined deposits.
iTree-Hydro: Snow hydrology update for the urban forest hydrology model
Yang Yang; Theodore A. Endreny; David J. Nowak
2011-01-01
This article presents snow hydrology updates made to iTree-Hydro, previously called the Urban Forest Effects-Hydrology model. iTree-Hydro Version 1 was a warm climate model developed by the USDA Forest Service to provide a process-based planning tool with robust water quantity and quality predictions given data limitations common to most urban areas. Cold climate...
Information dissemination model for social media with constant updates
NASA Astrophysics Data System (ADS)
Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui
2018-07-01
With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and the updated information, and we then propose the priority of related information. To evaluate the proposed model more effectively, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.
The components of working memory updating: an experimental decomposition and individual differences.
Ecker, Ullrich K H; Lewandowsky, Stephan; Oberauer, Klaus; Chee, Abby E H
2010-01-01
Working memory updating (WMU) has been identified as a cognitive function of prime importance for everyday tasks and has also been found to be a significant predictor of higher mental abilities. Yet, little is known about the constituent processes of WMU. We suggest that the operations required in a typical WMU task can be decomposed into 3 major component processes: retrieval, transformation, and substitution. We report a large-scale experiment that instantiated all possible combinations of those 3 component processes. Results show that the 3 components make independent contributions to updating performance. We additionally present structural equation models that link WMU task performance and working memory capacity (WMC) measures. These feature the methodological advancement of estimating interindividual covariation and experimental effects on mean updating measures simultaneously. The modeling results imply that WMC is a strong predictor of WMU skills in general, although some component processes, in particular substitution skills, were independent of WMC. Hence, the reported predictive power of WMU measures may rely largely on common WM functions also measured in typical WMC tasks, although substitution skills may make an independent contribution to predicting higher mental abilities.
76 FR 296 - Periodic Reporting
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-04
... part would update the mail processing portion of the Parcel Select/Parcel Return Service cost models...) processing cost model that was filed as Proposal Seven on September 8, 2010. Proposal Thirteen at 1. These... develop the Standard Mail/non-flat machinable (NFM) mail processing cost model. It also proposes to use...
NASA Astrophysics Data System (ADS)
Li, Ning; McLaughlin, Dennis; Kinzelbach, Wolfgang; Li, WenPeng; Dong, XinGuang
2015-10-01
Model uncertainty needs to be quantified to provide objective assessments of the reliability of model predictions and of the risk associated with management decisions that rely on these predictions. This is particularly true in water resource studies that depend on model-based assessments of alternative management strategies. In recent decades, Bayesian data assimilation methods have been widely used in hydrology to assess uncertain model parameters and predictions. In this case study, a particular data assimilation algorithm, the Ensemble Smoother with Multiple Data Assimilation (ES-MDA) (Emerick and Reynolds, 2012), is used to derive posterior samples of uncertain model parameters and forecasts for a distributed hydrological model of Yanqi basin, China. This model is constructed using MIKE SHE/MIKE 11 software, which provides for coupling between surface and subsurface processes (DHI, 2011a-d). The random samples in the posterior parameter ensemble are obtained by using measurements to update 50 prior parameter samples generated with a Latin Hypercube Sampling (LHS) procedure. The posterior forecast samples are obtained from model runs that use the corresponding posterior parameter samples. Two iterative sample update methods are considered: one based on a perturbed-observation Kalman filter update and one based on a square-root Kalman filter update. These alternatives give nearly the same results and converge in only two iterations. The uncertain parameters considered include hydraulic conductivities, drainage and river leakage factors, van Genuchten soil property parameters, and dispersion coefficients. The results show that the uncertainty in many of the parameters is reduced during the smoother updating process, reflecting information obtained from the observations. Some of the parameters are insensitive and do not benefit from measurement information. The correlation coefficients among certain parameters increase in each iteration, although they generally stay below 0.50.
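A compact sketch of one ES-MDA update step in the spirit of Emerick and Reynolds (2012); the matrix shapes and the inflation schedule (e.g., alpha = n_a for each of n_a steps, so that the inverses sum to one) are assumptions:

```python
import numpy as np

def esmda_update(M, D, d_obs, C_e, alpha, rng):
    """M: (n_param, n_ens) parameter ensemble; D: (n_obs, n_ens) predicted
    data; C_e: (n_obs, n_obs) measurement-error covariance."""
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)                  # parameter-data covariance
    C_dd = dD @ dD.T / (n_ens - 1)                  # data-data covariance
    noise = rng.multivariate_normal(np.zeros(len(d_obs)),
                                    alpha * C_e, n_ens).T
    K = C_md @ np.linalg.inv(C_dd + alpha * C_e)    # damped Kalman gain
    return M + K @ (d_obs[:, None] + noise - D)     # updated parameter ensemble
```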
Lu, Xiaoman; Zheng, Guang; Miller, Colton; Alvarado, Ernesto
2017-09-08
Monitoring and understanding the spatio-temporal variations of forest aboveground biomass (AGB) is a key basis to quantitatively assess the carbon sequestration capacity of a forest ecosystem. To map and update forest AGB in the Greater Khingan Mountains (GKM) of China, this work proposes a physical-based approach. Based on the baseline forest AGB from Landsat Enhanced Thematic Mapper Plus (ETM+) images in 2008, we dynamically updated the annual forest AGB from 2009 to 2012 by adding the annual AGB increment (ABI) obtained from the simulated daily and annual net primary productivity (NPP) using the Boreal Ecosystem Productivity Simulator (BEPS) model. The 2012 result was validated by both field- and aerial laser scanning (ALS)-based AGBs. The predicted forest AGB for 2012 estimated from the process-based model can explain 31% ( n = 35, p < 0.05, RMSE = 2.20 kg/m²) and 85% ( n = 100, p < 0.01, RMSE = 1.71 kg/m²) of variation in field- and ALS-based forest AGBs, respectively. However, due to the saturation of optical remote sensing-based spectral signals and contribution of understory vegetation, the BEPS-based AGB tended to underestimate/overestimate the AGB for dense/sparse forests. Generally, our results showed that the remotely sensed forest AGB estimates could serve as the initial carbon pool to parameterize the process-based model for NPP simulation, and the combination of the baseline forest AGB and BEPS model could effectively update the spatiotemporal distribution of forest AGB.
NASA Technical Reports Server (NTRS)
Mahanama, Sarith P.; Koster, Randal D.; Walker, Gregory K.; Takacs, Lawrence L.; Reichle, Rolf H.; De Lannoy, Gabrielle; Liu, Qing; Zhao, Bin; Suarez, Max J.
2015-01-01
The Earth's land surface boundary conditions in the Goddard Earth Observing System version 5 (GEOS-5) modeling system were updated using recent high spatial and temporal resolution global data products. The updates include: (i) construction of a global 10-arcsec land-ocean-lakes-ice mask; (ii) incorporation of a 10-arcsec Globcover 2009 land cover dataset; (iii) implementation of Level 12 Pfafstetter hydrologic catchments; (iv) use of hybridized SRTM global topography data; (v) construction of the HWSDv1.21-STATSGO2 merged global 30-arcsec soil mineral and carbon data in conjunction with a highly refined soil classification system; (vi) production of diffuse visible and near-infrared 8-day MODIS albedo climatologies at 30-arcsec from the period 2001-2011; and (vii) production of the GEOLAND2 and MODIS merged 8-day LAI climatology at 30-arcsec for GEOS-5. The global data sets were preprocessed and used to construct global raster data files for the software (mkCatchParam) that computes parameters on catchment-tiles for various atmospheric grids. The updates also include a few bug fixes in mkCatchParam, as well as changes (improvements in algorithms, etc.) to mkCatchParam that allow it to produce tile-space parameters efficiently for high-resolution AGCM grids. The update process also includes the construction of data files describing the vegetation type fractions, soil background albedo, nitrogen deposition and mean annual 2m air temperature to be used with the future Catchment CN model and the global stream channel network to be used with the future global runoff routing model. This report provides detailed descriptions of the data production process and data file format of each updated data set.
Updating National Topographic Data Base Using Change Detection Methods
NASA Astrophysics Data System (ADS)
Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.
2016-06-01
The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCA), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major areas of interest while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process involved comparing images from different periods. The success rates in identifying the objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, the development of mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with a NIR band and Very High Resolution satellites, allow the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step for updating the NTDB in the Survey of Israel.
The Use of Multiple Data Sources in the Process of Topographic Maps Updating
NASA Astrophysics Data System (ADS)
Cantemir, A.; Visan, A.; Parvulescu, N.; Dogaru, M.
2016-06-01
The methods used in the process of updating maps have evolved and become more complex, especially with the development of digital technology. At the same time, the development of technology has led to an abundance of available data that can be used in the updating process. The data sources come in a great variety of forms and formats, from different acquisition sensors. Satellite images provided by certain satellite missions are now available on space agencies' portals. Images stored in the archives of satellite missions such as Sentinel, Landsat and others can be downloaded free of charge. The main advantages are the large coverage area and rather good spatial resolution, which enable the use of these images for map updating at an appropriate scale. In our study we focused our research of these images on the 1:50,000 scale map. Globally available DEMs could represent an appropriate input for watershed delineation and stream network generation, which can be used as support for updating the hydrography thematic layer. If, in addition to remote sensing, aerial photogrammetry and LiDAR data are used, the accuracy of the data sources is enhanced. Orthophotoimages and Digital Terrain Models are the main products that can be used for feature extraction and updating. On the other hand, the use of georeferenced analogue basemaps represents a significant addition to the process. Concerning thematic maps, the classic representation of the terrain by contour lines derived from a DTM remains the best method of representing the Earth's surface on a map; nevertheless, correlation with other layers, such as hydrography, is mandatory. In the context of the current national coverage of the Digital Terrain Model, one of the main concerns of the National Center of Cartography, through the Cartography and Photogrammetry Department, is the exploitation of the available data in order to update the layers of the Topographic Reference Map 1:5000, known as TOPRO5, and at the same time, through generalization and additional data sources, of the Romanian 1:50,000 scale map. This paper also investigates the general perspective of the automatic use of DTM-derived products in the process of updating topographic maps.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. In practice, however, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases an interval model updating procedure shows its superiority in terms of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of the interval inverse problem is greatly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and against a set of experimentally tested steel plates.
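A tiny illustration of the interval overestimation (dependency) problem that interval response surface models are meant to avoid: naive interval arithmetic treats repeated occurrences of the same parameter as independent, inflating the response bounds.

    def isub(a, b):
        # Interval subtraction [a] - [b] under naive interval arithmetic
        return (a[0] - b[1], a[1] - b[0])

    x = (1.0, 2.0)
    print(isub(x, x))   # (-1.0, 1.0), although f(x) = x - x is exactly 0

An interval response surface expresses the response as an explicit monotonic function of each parameter, so the bounds are attained at interval endpoints without this inflation.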
NASA Astrophysics Data System (ADS)
Li, Y. P.; Elbern, H.; Lu, K. D.; Friese, E.; Kiendler-Scharr, A.; Mentel, Th. F.; Wang, X. S.; Wahner, A.; Zhang, Y. H.
2013-03-01
The formation of secondary organic aerosol (SOA) was simulated with the Secondary ORGanic Aerosol Model (SORGAM) by a classical gas-particle partitioning concept, using the two-product model approach, which is widely used in chemical transport models. In this study, we extensively updated SORGAM with three major modifications: firstly, we derived temperature dependence functions of the SOA yields for aromatics and biogenic VOCs, based on recent chamber studies within a sophisticated mathematical optimization framework; secondly, we implemented the SOA formation pathways from photo-oxidation (OH-initiated) of isoprene; thirdly, we implemented the SOA formation channel from NO3-initiated oxidation of reactive biogenic hydrocarbons (isoprene and monoterpenes). The temperature dependence functions of the SOA yields were validated against available chamber experiments. Moreover, the whole updated SORGAM module was validated against ambient SOA observations represented by the summed oxygenated organic aerosol (OOA) concentrations extracted from Aerosol Mass Spectrometer (AMS) measurements at a rural site near Rotterdam, the Netherlands, performed during the IMPACT campaign in May 2008. In this case, we embedded both the original and the updated SORGAM modules into the EURopean Air pollution and Dispersion-Inverse Model (EURAD-IM), which showed generally good agreement with the observed meteorological parameters and several secondary products such as O3, sulfate and nitrate. With the updated SORGAM module, the EURAD-IM model also captured the observed SOA concentrations reasonably well, especially those during nighttime. In contrast, the EURAD-IM model before the update underestimated the observations by a factor of up to 5. The large improvement in the modeled SOA concentrations with the updated SORGAM is attributed to the three modifications mentioned above. Embedding the temperature dependence functions of the SOA yields, including the new pathways from isoprene photo-oxidation, and switching on the SOA formation from NO3-initiated oxidation of biogenic VOCs contributed to this enhancement by 10%, 22% and 47%, respectively. However, the EURAD-IM model with the updated SORGAM still clearly underestimated the afternoon SOA observations, by up to a factor of two. More work, such as improving the simulated OH concentrations under high-VOC and low-NOx conditions, further including SOA formation from semi-volatile organic compounds, correctly representing aerosol aging and oligomerization, and accounting for the influence of anthropogenic SOA on biogenic SOA, is still required to close the gap.
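A sketch of the classical two-product partitioning underlying SORGAM-type modules, with a standard Clausius-Clapeyron temperature correction of the partitioning coefficients; the yields, coefficients and vaporization enthalpy below are illustrative placeholders, not the values derived in the study:

    import numpy as np

    R = 8.314  # J mol-1 K-1

    def k_at_T(k_ref, T, T_ref=298.0, dHvap=42e3):
        # Clausius-Clapeyron-type temperature correction of a partitioning
        # coefficient; dHvap is a placeholder effective vaporization enthalpy.
        return k_ref * (T / T_ref) * np.exp(dHvap / R * (1.0 / T - 1.0 / T_ref))

    def soa_yield(M_o, alphas, ks):
        # Odum two-product SOA yield at organic aerosol mass M_o (ug m-3)
        return M_o * sum(a * k / (1.0 + k * M_o) for a, k in zip(alphas, ks))

    alphas, ks_298 = (0.07, 0.30), (0.05, 0.002)     # placeholder parameters
    ks_278 = [k_at_T(k, T=278.0) for k in ks_298]
    print(soa_yield(10.0, alphas, ks_298), soa_yield(10.0, alphas, ks_278))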
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients using not only the admittance but also the impedance data, is finally discussed.
Guillermo A. Mendoza; Roger J. Meimban; Philip A. Araman; William G. Luppold
1991-01-01
A log inventory model and a real-time hardwood process simulation model were developed and combined into an integrated production planning and control system for hardwood sawmills. The log inventory model was designed to monitor and periodically update the status of the logs in the log yard. The process simulation model was designed to estimate various sawmill...
Toward a Neural Basis of Music Perception – A Review and Updated Model
Koelsch, Stefan
2011-01-01
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located. PMID:21713060
PRMS-IV, the precipitation-runoff modeling system, version 4
Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.
2015-01-01
Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.
Multiplicative Forests for Continuous-Time Processes
Weiss, Jeremy C.; Natarajan, Sriraam; Page, David
2013-01-01
Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability. PMID:25284967
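A minimal sketch of the multiplicative assumption: the conditional intensity is the product of one factor per tree, so an update to a single leaf changes the likelihood through a single factor. The toy trees below stand in for learned regression trees:

    import math

    # Each "tree" maps the current variable/parent state to a nonnegative factor.
    tree_1 = lambda s: 2.0 if s["parent_on"] else 0.5
    tree_2 = lambda s: 1.5 if s["load_high"] else 1.0
    forest = [tree_1, tree_2]

    def intensity(forest, state):
        # Conditional intensity as a product of per-tree factors; adding a split
        # to one tree changes the likelihood through one factor only.
        return math.prod(t(state) for t in forest)

    print(intensity(forest, {"parent_on": True, "load_high": False}))  # 2.0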
Aircraft engine sensor fault diagnostics using an on-line OBEM update method.
Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye
2017-01-01
This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system, in which a Hybrid Kalman Filter (HKF) was incorporated. Generated from a rapid in-flight engine degradation, a large health condition mismatch between the engine and the OBEM can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when a rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and update are running simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on the turbojet engine Linear-Parameter Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault.
Online Updating of Statistical Inference in the Big Data Setting.
Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui
2016-01-01
We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
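A hedged sketch of the online-updating idea for the linear model setting: only the sufficient statistics X'X and X'y are accumulated as blocks arrive, so the historical data never need to be stored or revisited; the streamed blocks here are simulated:

    import numpy as np

    rng = np.random.default_rng(1)
    p = 3
    XtX, Xty = np.zeros((p, p)), np.zeros(p)

    for _ in range(100):                 # data arrive in streams/blocks
        X = rng.normal(size=(50, p))
        y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=50)
        XtX += X.T @ X                   # accumulate sufficient statistics only
        Xty += X.T @ y                   # (rank deficiency in a single block is
                                         # harmless: nothing is inverted per block)

    beta = np.linalg.solve(XtX, Xty)     # current full-data least-squares estimate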
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
Joint Inversion of 3d Mt/gravity/magnetic at Pisagua Fault.
NASA Astrophysics Data System (ADS)
Bascur, J.; Saez, P.; Tapia, R.; Humpire, M.
2017-12-01
This work shows the results of a joint inversion at the Pisagua Fault using 3D magnetotelluric (MT), gravity and regional magnetic data. The MT survey has poor coverage of the study area, with only 21 stations; however, it allows the detection of a low-resistivity zone aligned with the Pisagua Fault trace that is interpreted as a damage zone. The integration of gravity and magnetic data, which have denser sampling and coverage, adds detail and resolution to the detected low-resistivity structure and helps to improve the structural interpretation using the resulting models (density, magnetic susceptibility and electrical resistivity). The joint inversion process minimizes a composite objective function which includes the data misfit, model roughness and coupling norms (cross-gradient and direct relations) for all the geophysical methods considered (MT, gravity and magnetic). This process is solved iteratively using the Gauss-Newton method, which updates the model of each geophysical method, improving its individual data misfit, model roughness and the coupling with the other geophysical models. Dedicated 3D inversion software codes, which include the coupling norms with additional geophysical parameters, were developed to solve the model updates for the magnetic and gravity methods. The model update for the 3D MT is calculated using an iterative method which sequentially filters the prior model and the output model of a single 3D MT inversion process to obtain a resistivity model coupled with the gravity and magnetic methods.
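In schematic form, the kind of composite objective minimized at each Gauss-Newton iteration can be written as follows (a generic sketch; the exact weights, regularization operators and coupling terms used in the study may differ):

    \Phi(m_1, m_2, m_3) = \sum_{k} \| W_k (d_k - f_k(m_k)) \|^2
                        + \sum_{k} \lambda_k \| R\, m_k \|^2
                        + \sum_{k<l} \mu_{kl} \| \nabla m_k \times \nabla m_l \|^2

where the m_k are the resistivity, density and magnetic-susceptibility models, the f_k are the corresponding forward operators, and the cross-gradient terms vanish when the models are structurally aligned.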
Separate encoding of model-based and model-free valuations in the human brain.
Beierholm, Ulrik R; Anen, Cedric; Quartz, Steven; Bossaerts, Peter
2011-10-01
Behavioral studies have long shown that humans solve problems in two ways, one intuitive and fast (System 1, model-free), and the other reflective and slow (System 2, model-based). The neurobiological basis of dual process problem solving remains unknown due to challenges of separating activation in concurrent systems. We present a novel neuroeconomic task that predicts distinct subjective valuation and updating signals corresponding to these two systems. We found two concurrent value signals in human prefrontal cortex: a System 1 model-free reinforcement signal and a System 2 model-based Bayesian signal. We also found a System 1 updating signal in striatal areas and a System 2 updating signal in lateral prefrontal cortex. Further, signals in prefrontal cortex preceded choices that are optimal according to either updating principle, while signals in anterior cingulate cortex and globus pallidus preceded deviations from optimal choice for reinforcement learning. These deviations tended to occur when uncertainty regarding optimal values was highest, suggesting that disagreement between dual systems is mediated by uncertainty rather than conflict, confirming recent theoretical proposals. Copyright © 2011 Elsevier Inc. All rights reserved.
Universal Darwinism As a Process of Bayesian Inference.
Campbell, John O
2016-01-01
Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
The Blessing and the Curse of the Multiplicative Updates
NASA Astrophysics Data System (ADS)
Warmuth, Manfred K.
Multiplicative updates multiply the parameters by nonnegative factors. These updates are motivated by a Maximum Entropy Principle and they are prevalent in evolutionary processes where the parameters are, for example, concentrations of species and the factors are survival rates. The simplest such update is Bayes rule, and we give an in vitro selection algorithm for RNA strands that implements this rule in the test tube, where each RNA strand represents a different model. In one liter of the RNA "soup" there are approximately 10^20 different strands, and therefore this is a rather high-dimensional implementation of Bayes rule.
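A toy numerical version of the simplest multiplicative update mentioned above, Bayes rule: each competing model's weight is multiplied by a nonnegative factor (a likelihood, or a survival rate) and renormalized; the four "strands" and their factors are invented for illustration:

    import numpy as np

    def multiplicative_update(weights, factors):
        # Multiply parameters by nonnegative factors and renormalize: Bayes rule
        # when the factors are likelihoods, selection when they are survival rates.
        w = weights * factors
        return w / w.sum()

    prior = np.full(4, 0.25)                   # four competing models ("strands")
    survival = np.array([0.9, 0.5, 1.2, 0.1])  # invented per-round factors
    posterior = multiplicative_update(prior, survival)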
Updates to the CMAQ Post Processing and Evaluation Tools for 2016
In the spring of 2016, the evaluation tools distributed with the CMAQ model code were updated and new tools were added to the existing set of tools. Observation data files, compatible with the AMET software, were also made available on the CMAS website for the first time with the...
Narrative event boundaries, reading times, and expectation.
Pettijohn, Kyle A; Radvansky, Gabriel A
2016-10-01
During text comprehension, readers create mental representations of the described events, called situation models. When new information is encountered, these models must be updated or new ones created. Consistent with the event indexing model, previous studies have shown that when readers encounter an event shift, reading times often increase. However, such increases are not consistently observed. This paper addresses this inconsistency by examining the extent to which reading-time differences observed at event shifts reflect an unexpectedness in the narrative rather than processes involved in model updating. In two reassessments of prior work, event shifts known to increase reading time were rated as less expected, and expectedness ratings significantly predicted reading time. In three new experiments, participants read stories in which an event shift was or was not foreshadowed, thereby influencing expectedness of the shift. Experiment 1 revealed that readers do not expect event shifts, but foreshadowing eliminates this. Experiment 2 showed that foreshadowing does not affect identification of event shifts. Finally, Experiment 3 found that, although reading times increased when an event shift was not foreshadowed, they were not different from controls when it was. Moreover, responses to memory probes were slower following an event shift regardless of foreshadowing, suggesting that situation model updating had taken place. Overall, the results support the idea that previously observed reading time increases at event shifts reflect, at least in part, a reader's unexpected encounter with a shift rather than an increase in processing effort required to update a situation model.
Valence-Dependent Belief Updating: Computational Validation
Kuzmanovic, Bojana; Rigoux, Lionel
2017-01-01
People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
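A minimal sketch of the reinforcement-learning account described above, with separate learning rates for good and bad news; the rate values here are hypothetical, whereas the study estimated them from the 80-trial data:

    def update_self_risk(estimate, base_rate, lr_good=0.6, lr_bad=0.3):
        # Valence-dependent update of a self-risk estimate toward the base rate;
        # error < 0 means the base rate is better (lower) than expected.
        error = base_rate - estimate
        lr = lr_good if error < 0 else lr_bad
        return estimate + lr * error

    print(update_self_risk(0.40, 0.20))  # good news: larger update, to 0.28
    print(update_self_risk(0.40, 0.60))  # bad news: smaller update, to 0.46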
A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations
Qin, Fangjun; Jiang, Sai; Zha, Feng
2018-01-01
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
The Cancer Family Caregiving Experience: An Updated and Expanded Conceptual Model
Fletcher, Barbara Swore; Miaskowski, Christine; Given, Barbara; Schumacher, Karen
2011-01-01
Objective The decade from 2000–2010 was an era of tremendous growth in family caregiving research specific to the cancer population. This research has implications for how cancer family caregiving is conceptualized, yet the most recent comprehensive model of cancer family caregiving was published ten years ago. Our objective was to develop an updated and expanded comprehensive model of the cancer family caregiving experience, derived from concepts and variables used in research during past ten years. Methods A conceptual model was developed based on cancer family caregiving research published from 2000–2010. Results Our updated and expanded model has three main elements: 1) the stress process, 2) contextual factors, and 3) the cancer trajectory. Emerging ways of conceptualizing the relationships between and within model elements are addressed, as well as an emerging focus on caregiver-patient dyads as the unit of analysis. Conclusions Cancer family caregiving research has grown dramatically since 2000 resulting in a greatly expanded conceptual landscape. This updated and expanded model of the cancer family caregiving experience synthesizes the conceptual implications of an international body of work and demonstrates tremendous progress in how cancer family caregiving research is conceptualized. PMID:22000812
Dynamic updating atlas for heart segmentation with a nonlinear field-based model.
Cai, Ken; Yang, Rongqian; Yue, Hongwei; Li, Lihua; Ou, Shanxing; Liu, Feng
2017-09-01
Segmentation of cardiac computed tomography (CT) images is an effective method for assessing the dynamic function of the heart and lungs. In the atlas-based heart segmentation approach, the quality of segmentation usually relies upon the atlas images, and the selection of those reference images is a key step. The optimal goal in this selection process is to have the reference images as close to the target image as possible. This study proposes an atlas dynamic update algorithm using a nonlinear deformation field scheme. The proposed method is based on the features among dual-source CT (DSCT) slices. The extraction of these features forms a base from which to construct an average model, and the created reference atlas image is updated during the registration process. A nonlinear field-based model was used to effectively implement a 4D cardiac segmentation. The proposed segmentation framework was validated with 14 4D cardiac CT sequences. The algorithm achieved an acceptable accuracy (1.0-2.8 mm). Our proposed method, which combines a nonlinear field-based model and dynamic atlas updating strategies, can provide an effective and accurate way to perform whole heart segmentation. The success of the proposed method largely relies on the effective use of the prior knowledge of the atlas and the similarity explored among the to-be-segmented DSCT sequences. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.
2011-07-01
This paper presents the results from collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50-year-old reinforced concrete footbridge has been converted to a full-scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading, and an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large nonlinear, amplitude-dependent characteristics. This creates a difficult updating process, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
ERIC Educational Resources Information Center
Banta, Trudy W., Ed.
2014-01-01
This issue of "Assessment Update" presents the following articles: (1) Effective Leadership Assessment: A 360-Degree Process; (2) Editor's Notes: Accentuating the Positive in Our Work; (3) The Broadcast Education Association's Model Rubrics Project: Building Consensus One Rubric at a Time; (4) Building a Better…
Single-Trial Event-Related Potential Correlates of Belief Updating
Murawski, Carsten; Bode, Stefan
2015-01-01
Belief updating—the process by which an agent alters an internal model of its environment—is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework. PMID:26473170
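A grid-based sketch of the two model-derived quantities: the belief update size (the shift of the posterior mean, related to P3 amplitude) and the belief uncertainty (the posterior spread, related to the SPN); the Gaussian likelihood is an illustrative stand-in for the task's contrast-reward mapping:

    import numpy as np

    grid = np.linspace(0.0, 1.0, 101)          # candidate target contrasts
    prior = np.ones_like(grid) / grid.size     # flat initial belief

    def bayes_step(belief, observed, noise=0.1):
        # Gaussian likelihood as a stand-in for the contrast-reward mapping
        like = np.exp(-0.5 * ((observed - grid) / noise) ** 2)
        post = belief * like
        return post / post.sum()

    post = bayes_step(prior, observed=0.62)
    update_size = abs(post @ grid - prior @ grid)               # tracks P3
    uncertainty = np.sqrt(post @ grid**2 - (post @ grid) ** 2)  # tracks SPN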
Jones, Joseph L.; Haluska, Tana L.; Kresch, David L.
2001-01-01
A method of updating flood inundation maps at a fraction of the expense of using traditional methods was piloted in Washington State as part of the U.S. Geological Survey Urban Geologic and Hydrologic Hazards Initiative. Large savings in expense may be achieved by building upon previous Flood Insurance Studies and automating the process of flood delineation with a Geographic Information System (GIS); increases in accuracy and detail result from the use of very-high-accuracy elevation data and automated delineation; and the resulting digital data sets contain valuable ancillary information such as flood depth, as well as greatly facilitating map storage and utility. The method consists of creating stage-discharge relations from the archived output of the existing hydraulic model, using these relations to create updated flood stages for recalculated flood discharges, and using a GIS to automate the map generation process. Many of the effective flood maps were created in the late 1970s and early 1980s, and suffer from a number of well-recognized deficiencies such as out-of-date or inaccurate estimates of discharges for selected recurrence intervals, changes in basin characteristics, and relatively low-quality elevation data used for flood delineation. FEMA estimates that 45 percent of effective maps are over 10 years old (FEMA, 1997). Consequently, Congress has mandated the updating and periodic review of existing maps, which have cost the Nation almost 3 billion (1997) dollars. The need to update maps and the cost of doing so were the primary motivations for piloting a more cost-effective and efficient updating method. New technologies such as Geographic Information Systems and LIDAR (Light Detection and Ranging) elevation mapping are key to improving the efficiency of flood map updating, but they also improve the accuracy, detail, and usefulness of the resulting digital flood maps. GISs produce digital maps without manual estimation of inundated areas between cross sections, and can generate working maps across a broad range of scales, for any selected area, overlaid with easily updated cultural features. Local governments are aggressively collecting very-high-accuracy elevation data for numerous reasons; this not only lowers the cost and increases the accuracy of flood maps, but also inherently boosts the level of community involvement in the mapping process. These elevation data are also ideal for hydraulic modeling, should an existing model be judged inadequate.
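A sketch of the core updating step: the archived stage-discharge pairs from the original hydraulic model serve as a rating curve from which updated flood stages are interpolated for recalculated discharges; the numbers below are invented:

    import numpy as np

    # Archived model output at one cross section: discharge (cfs) vs. stage (ft)
    q_archived = np.array([1000.0, 2000.0, 4000.0, 8000.0])
    stage_archived = np.array([12.1, 14.0, 16.8, 20.5])

    # Recalculated 100-year discharge from an updated flood-frequency analysis
    q100_new = 5200.0
    stage100_new = np.interp(q100_new, q_archived, stage_archived)
    # The GIS then intersects this stage with the high-accuracy elevation data
    # to delineate the updated inundation extent automatically.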
TRMM Microwave Imager (TMI) Updates for Final Data Version Release
NASA Technical Reports Server (NTRS)
Kroodsma, Rachael A; Bilanow, Stephen; Ji, Yimin; McKague, Darren
2017-01-01
The Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) dataset released by the Precipitation Processing System (PPS) will be updated to a final version within the next year. These updates are based on increased knowledge in recent years of radiometer calibration and sensor performance issues. In particular, the Global Precipitation Measurement (GPM) Microwave Imager (GMI) is used as a model for many of the TMI version updates. This paper discusses four aspects of the TMI data product that will be improved: spacecraft attitude, calibration and quality control, along-scan bias corrections, and sensor pointing accuracy. These updates will be incorporated into the final TMI data version, improving the quality of the data product and ensuring accurate geophysical parameters can be derived from TMI.
Environmental Control Systems for Exploration Missions One and Two
NASA Technical Reports Server (NTRS)
Falcone, Mark A.
2017-01-01
In preparing for Exploration Missions One and Two (EM-1 & EM-2), the Ground Systems Development and Operations Program has significant updates to make to nearly all facilities. This is all being done to accommodate the Space Launch System, which upon completion will be the largest rocket ever built. Facilitating the launch of such a rocket requires an updated Vehicle Assembly Building, an upgraded launch pad, a payload processing facility, and more. In this project, Environmental Control Systems across several facilities were involved, with a focus on the Mobile Launcher and the launch pad. Parts were ordered, analysis models were updated, design drawings were updated, and more.
Updates to the NASA Space Telecommunications Radio System (STRS) Architecture
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.
2008-01-01
This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.
Chen, C P; Wan, J Z
1999-01-01
A fast learning algorithm is proposed to find the optimal weights of flat neural networks (especially the functional-link network). Although flat networks are used for nonlinear function approximation, they can be formulated as linear systems. Thus, the weights of the networks can be solved easily using a linear least-squares method. This formulation makes it easy to update the weights instantly for both a newly added pattern and a newly added enhancement node. A dynamic stepwise updating algorithm is proposed to update the weights of the system on the fly. The model is tested on several time-series data sets, including an infrared laser data set, a chaotic time series, a monthly flour price data set, and a nonlinear system identification problem. The simulation results are compared to existing models in which more complex architectures and more costly training are needed. The results indicate that the proposed model is very attractive for real-time processes.
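A compact sketch of the flat-network idea: random enhancement nodes make the mapping nonlinear in the input but linear in the weights, so a regularized least-squares solve trains the network, and a rank-one (Sherman-Morrison) step absorbs a newly added pattern without full retraining; the dimensions and the tanh nonlinearity are illustrative:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 4))                  # training patterns
    y = np.sin(X).sum(axis=1)                      # toy target

    We = rng.normal(size=(4, 20))                  # random enhancement weights
    H = np.hstack([X, np.tanh(X @ We)])            # functional-link features
    A = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
    w = A @ H.T @ y                                # linear least-squares weights

    # Stepwise update for one new pattern (Sherman-Morrison rank-one update)
    x_new = rng.normal(size=4)
    h = np.concatenate([x_new, np.tanh(x_new @ We)])
    Ah = A @ h
    A -= np.outer(Ah, Ah) / (1.0 + h @ Ah)
    w += A @ h * (np.sin(x_new).sum() - h @ w)     # absorb the new pattern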
Finite grade pheromone ant colony optimization for image segmentation
NASA Astrophysics Data System (ADS)
Yuanjing, F.; Li, Y.; Liangjun, K.
2008-06-01
By combining the decision process of ant colony optimization (ACO) with the multistage decision process of image segmentation based on the active contour model (ACM), an algorithm called finite grade ACO (FACO) for image segmentation is proposed. This algorithm classifies pheromone into finite grades; updating of the pheromone is achieved by changing the grades, and the updated quantity of pheromone is independent of the objective function. The algorithm, which provides a new approach to obtaining precise contours, is proved to converge to the global optimal solutions linearly by means of finite Markov chains. Segmentation experiments with ultrasound heart images show the effectiveness of the algorithm. Comparison of the results for segmentation of left-ventricle images shows that the ACO approach to image segmentation is more effective than the GA approach, and that the new pheromone updating strategy shows good time performance in the optimization process.
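A toy version of the finite-grade mechanism as described: pheromone takes one of a few fixed levels indexed by a grade, updates move the grade up or down, and the deposited quantity is therefore independent of the objective value; the grade count and level values are invented:

    import numpy as np

    TAU = np.array([0.1, 0.3, 0.6, 1.0, 1.5])   # pheromone level per grade

    def update_grades(grades, on_best_path):
        # Raise the grade on the best path, lower it elsewhere (evaporation)
        up = np.minimum(grades + 1, len(TAU) - 1)
        down = np.maximum(grades - 1, 0)
        return np.where(on_best_path, up, down)

    grades = np.zeros(8, dtype=int)             # one grade per candidate edge
    best = np.array([1, 0, 1, 1, 0, 0, 0, 1], dtype=bool)
    grades = update_grades(grades, best)
    pheromone = TAU[grades]                     # amount decoupled from objective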
Ren, Xuezhu; Schweizer, Karl; Wang, Tengfei; Chu, Pei; Gong, Qin
2017-10-01
The aim of the current study is to provide new insights into the relationship between executive functions and intelligence measures by considering the item-position effect observed in intelligence items. Raven's Advanced Progressive Matrices (APM) and Horn's LPS reasoning test were used to assess fluid intelligence, which served as the criterion in investigating the relationship between intelligence and executive functions. A battery of six experimental tasks measured the updating, shifting, and inhibition processes of executive functions. Data were collected from 205 university students. Fluid intelligence showed substantial correlations with the updating and inhibition processes and no correlation with the shifting process when the item-position effect was not considered. Next, the fixed-link model was applied to the APM and LPS data separately to decompose them into an ability component and an item-position component. The results of relating the components to executive functions showed that the updating and shifting processes mainly contributed to the item-position component, whereas the inhibition process was mainly associated with the ability component of each fluid intelligence test. These findings suggest that improvements in the efficiency of updating and shifting processes are likely to occur during the course of completing intelligence measures, and that inhibition is important for intelligence in general. Copyright © 2017 Elsevier B.V. All rights reserved.
Fish tracking by combining motion based segmentation and particle filtering
NASA Astrophysics Data System (ADS)
Bichot, E.; Mascarilla, L.; Courtellemont, P.
2006-01-01
In this paper, we suggest a new importance sampling scheme to improve a particle-filtering-based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate particle filtering hypotheses toward blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the target model updating strategy improve the performance of particle filtering in complex occlusion situations compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
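A schematic of the proposed importance sampling: a fraction of the particles is drawn around the centers of motion blobs resembling the target, while the rest follow the usual dynamics; blob detection is assumed given, and in a full filter the importance weights would be corrected for this mixture proposal:

    import numpy as np

    rng = np.random.default_rng(3)

    def propose(particles, blob_centers, frac_blob=0.3, dyn_std=5.0, blob_std=2.0):
        # Mixture proposal: most particles diffuse under the dynamics,
        # some jump to candidate blobs detected by motion segmentation.
        n = len(particles)
        n_blob = int(frac_blob * n) if len(blob_centers) else 0
        moved = particles + rng.normal(0.0, dyn_std, particles.shape)
        if n_blob:
            centers = blob_centers[rng.integers(0, len(blob_centers), n_blob)]
            moved[:n_blob] = centers + rng.normal(0.0, blob_std, (n_blob, 2))
        return moved

    particles = rng.uniform(0, 100, size=(50, 2))      # (x, y) hypotheses
    blobs = np.array([[40.0, 55.0], [70.0, 20.0]])     # motion-segmented blobs
    particles = propose(particles, blobs)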
Advances in land modeling of KIAPS based on the Noah Land Surface Model
NASA Astrophysics Data System (ADS)
Koo, Myung-Seo; Baek, Sunghye; Seol, Kyung-Hee; Cho, Kyoungmi
2017-08-01
As of 2013, the Noah Land Surface Model (LSM) version 2.7.1 was implemented in a new global model being developed at the Korea Institute of Atmospheric Prediction Systems (KIAPS). This land surface scheme is further refined in two aspects, by adding new physical processes and by updating surface input parameters. Thus, the treatment of glacier land, sea ice, and snow cover are addressed more realistically. Inconsistencies in the amount of absorbed solar flux at ground level by the land surface and radiative processes are rectified. In addition, new parameters are available by using 1-km land cover data, which had usually not been possible at a global scale. Land surface albedo/emissivity climatology is newly created using Moderate-Resolution Imaging Spectroradiometer (MODIS) satellitebased data and adjusted parameterization. These updates have been applied to the KIAPS-developed model and generally provide a positive impact on near-surface weather forecasting.
Walking through doorways causes forgetting: Event structure or updating disruption?
Pettijohn, Kyle A; Radvansky, Gabriel A
2016-11-01
According to event cognition theory, people segment experience into separate event models. One consequence of this segmentation is that when people transport objects from one location to another, memory is worse than when they move the same distance within a single large location. In two experiments, participants navigated through a virtual environment, and recognition memory was tested in either the presence or the absence of a location shift for objects that were recently interacted with (i.e., just picked up or set down). Of particular concern here is whether this location updating effect is due to (a) differences in retention intervals as a result of the navigation process, (b) a temporary disruption in cognitive processing that may occur as a result of the updating processes, or (c) a need to manage multiple event models, as has been suggested in prior research. Experiment 1 explored whether retention interval drives the effect by recording travel times between the acquisition of an object and the memory probe. The results revealed that travel times were similar, thereby ruling out a retention interval explanation. Experiment 2 explored whether a temporary disruption in processing produces the effect by introducing a 3-second delay prior to the presentation of a memory probe. The pattern of results was not affected by adding a delay, thereby ruling out a temporary disruption account. These results are interpreted in the context of the event horizon model, which suggests that when multiple event models contain common elements there is interference at retrieval, which compromises performance.
Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A
2013-01-01
A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut-specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. Our objectives were to validate this model, to update it by adding allergic rhinitis, atopic dermatitis, and sIgE to the peanut components Ara h 1, 2, 3, and 8 as candidate predictors, and to develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with the area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the updated models was analyzed in the same way. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and the additional candidate predictors did not significantly increase discrimination, which was 94%, leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, which contained 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
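As a sketch of the two validation metrics used here, discrimination can be computed as the area under the ROC curve and calibration with a Hosmer-Lemeshow-style decile test. This is an illustrative implementation under standard formulations, not the authors' code.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def validate(y_true, y_prob, n_groups=10):
    """Discrimination (ROC AUC) and Hosmer-Lemeshow calibration for a
    binary prediction model; y_true and y_prob are NumPy arrays."""
    auc = roc_auc_score(y_true, y_prob)                     # discrimination
    groups = np.array_split(np.argsort(y_prob), n_groups)   # risk deciles
    hl = 0.0
    for g in groups:
        n, obs, exp = len(g), y_true[g].sum(), y_prob[g].sum()
        hl += (obs - exp) ** 2 / (exp * (1.0 - exp / n) + 1e-12)
    return auc, chi2.sf(hl, df=n_groups - 2)   # small p => poor calibration
```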
NASA Astrophysics Data System (ADS)
Tanaka, T.; Tachikawa, Y.; Ichikawa, Y.; Yorozu, K.
2017-12-01
Floods are among the most hazardous disasters and cause serious damage to people and property around the world. To prevent and mitigate flood damage through early warning systems and river management planning, numerical modelling of flood-inundation processes is essential. In the literature, flood-inundation models have been extensively developed and improved to simulate flood flows over complex topography at high resolution. With increasing demands on flood-inundation modelling, its computational burden is now one of the key issues. Improvements to the computational efficiency of the full shallow water equations have been made from various perspectives, such as approximations of the momentum equations, parallelization techniques, and coarsening approaches. To complement these techniques and further improve the computational efficiency of flood-inundation simulations, this study proposes an Automatic Domain Updating (ADU) method for 2-D flood-inundation simulation. The ADU method traces the wet and dry interface and automatically updates the simulation domain in response to the progress and recession of flood propagation. The updating algorithm is as follows: first, register the simulation cells potentially flooded at the initial stage (such as floodplains near river channels); then, whenever a registered cell is flooded, register its surrounding cells. The time for this additional process is kept small by checking only cells at the wet and dry interface, and the overall computation time is reduced by skipping the non-flooded area. This algorithm is easily applied to any type of 2-D flood-inundation model. The proposed ADU method is implemented with 2-D local inertial equations for the Yodo River basin, Japan. Case studies for two flood events show that the simulation finishes two to ten times faster while producing the same results as simulations without the ADU method.
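A minimal sketch of the registration step, assuming a structured grid stored as NumPy arrays; `depth` is the water depth field and `active` the boolean mask of registered cells (the initial floodplain registration and the solver loop are omitted).

```python
import numpy as np

def update_active_domain(depth, active, eps=1e-6):
    """Register neighbours of flooded cells; solver loops then skip
    every cell not marked in `active`. For brevity this sketch scans all
    registered wet cells; the paper checks only the wet-dry interface."""
    ny, nx = depth.shape
    for i, j in np.argwhere((depth > eps) & active):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                active[ni, nj] = True
    return active
```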
Knips, Guido; Zibner, Stephan K. U.; Reimann, Hendrik; Schöner, Gregor
2017-01-01
Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. At any time during movement preparation and execution, human reaching movements are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population-level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp. PMID:28303100
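For readers unfamiliar with Dynamic Field Theory, a toy sketch of the kind of dynamics involved: a 1-D activation field, integrated with the Euler method, forms a self-stabilized peak over a stimulated location. The parameters and the local-excitation/global-inhibition kernel are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def simulate_field(stimulus, steps=500, dt=1.0, tau=10.0, h=-5.0,
                   c_exc=15.0, sigma=4.0, c_inh=0.05):
    """Euler integration of a 1-D Amari field: a localized stimulus
    drives a self-stabilized activation peak (a selection decision)."""
    n = len(stimulus)
    x = np.arange(n)
    kernel = c_exc * np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)  # local excitation
    u = np.full(n, h)                                            # resting level
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))                             # sigmoid output
        interaction = np.convolve(f, kernel, mode="same") - c_inh * f.sum()
        u += (dt / tau) * (-u + h + stimulus + interaction)      # field dynamics
    return u
```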
Kim, Na Young; Wittenberg, Ellen; Nam, Chang S
2017-01-01
This study investigated the interaction between two executive function processes, inhibition and updating, through analyses of behavioral, neurophysiological, and effective connectivity metrics. Although many studies have focused on the behavioral effects of executive function processes individually, few studies have examined the dynamic causal interactions between these two functions. A total of twenty participants from a local university performed a dual task combining flanker and n-back experimental paradigms, and completed the Operation Span Task designed to measure working memory capacity. We found that both behavioral (accuracy and reaction time) and neurophysiological (P300 amplitude and alpha band power) metrics on the inhibition task (i.e., the flanker task) were influenced by the updating load (n-back level) and modulated by working memory capacity. Using independent component analysis, source localization (DIPFIT), and Granger causality analysis of the EEG time-series data, the present study demonstrated that manipulation of cognitive demand in a dual executive function task influenced the causal neural network. We compared connectivity across three updating loads (n-back levels) and found that experimental manipulation of working memory load enhanced the causal connectivity of a large-scale neurocognitive network. This network contains the prefrontal and parietal cortices, which are associated with the inhibition and updating executive function processes. This study has potential applications in human performance modeling and the assessment of mental workload, such as the design of training materials and interfaces for those performing complex multitasking under stress.
Towards a more transparent HTA process in Poland: new Polish HTA methodological guidelines
Lach, Krzysztof; Dziwisz, Michal; Rémuzat, Cécile; Toumi, Mondher
2017-01-01
Introduction: Health technology assessment (HTA) in Poland supports reimbursement decisions via the Polish HTA Agency (AOTMiT), whose guidelines were updated in 2016. Methods: We identified key changes introduced by the update and, before guideline publication, analysed discrepancies between AOTMiT assessments and the submitting marketing authorisation holders (MAHs) to elucidate the context of the update. We compared the clarity and detail of the new guidelines versus those of the UK’s National Institute for Health and Care Excellence (NICE). Results: The update specified more precise requirements for items such as indirect comparison or input data for economic modelling. Agency–MAH discrepancies relating to the subjects of the HTA update were found in 14.6% of published documents. The new Polish HTA guidelines were as clear and detailed as NICE’s on topics such as assessing quality of evidence and economic modelling, but were less informative when describing (for example) pairwise meta-analysis. Conclusions: The Polish HTA guidelines update demonstrates lessons learned from internal and external experiences. The new guidelines adhere more closely to UK HTA standards, being clearer and more informative. While the update is expected to reduce Agency–MAH discrepancies, there remain areas for development, such as providing templates to aid HTA submissions. PMID:28804603
2005 v4.3 Technical Support Document
This technical support document describes how the updated 2005 National Emissions Inventory (NEI), version 2, emissions were processed for air quality modeling in support of the final Mercury and Air Toxics Standards (MATS).
Conceptual Development of a National Volcanic Hazard Model for New Zealand
NASA Astrophysics Data System (ADS)
Stirling, Mark; Bebbington, Mark; Brenna, Marco; Cronin, Shane; Christophersen, Annemarie; Deligne, Natalia; Hurst, Tony; Jolly, Art; Jolly, Gill; Kennedy, Ben; Kereszturi, Gabor; Lindsay, Jan; Neall, Vince; Procter, Jonathan; Rhoades, David; Scott, Brad; Shane, Phil; Smith, Ian; Smith, Richard; Wang, Ting; White, James D. L.; Wilson, Colin J. N.; Wilson, Tom
2017-06-01
We provide a synthesis of a workshop held in February 2016 to define the goals, challenges and next steps for developing a national probabilistic volcanic hazard model for New Zealand. The workshop involved volcanologists, statisticians, and hazards scientists from GNS Science, Massey University, University of Otago, Victoria University of Wellington, University of Auckland, and University of Canterbury. We also outline key activities that will develop the model components, define procedures for periodic update of the model, and effectively articulate the model to end-users and stakeholders. The development of a National Volcanic Hazard Model is a formidable task that will require long-term stability in terms of team effort, collaboration and resources. Development of the model in stages or editions that are modular will make the process a manageable one that progressively incorporates additional volcanic hazards over time, and additional functionalities (e.g. short-term forecasting). The first edition is likely to be limited to updating and incorporating existing ashfall hazard models, with the other hazards associated with lahar, pyroclastic density currents, lava flow, ballistics, debris avalanche, and gases/aerosols being considered in subsequent updates.
A generic biogeochemical module for earth system models
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.-Y.; Leung, L. R.
2013-06-01
Physical and biogeochemical processes regulate soil carbon dynamics and the CO2 flux to and from the atmosphere, influencing global climate change. Integration of these processes into earth system models (e.g., community land models, CLM), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) the computational cost of simulating biogeochemical processes in land models is prohibitive due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module with a generic algorithm and a reaction database, so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into the CLM model. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems.
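To illustrate the reaction-database idea, here is a minimal sketch: each entry pairs a stoichiometry with a rate law, and the right-hand side of the ODE system is assembled generically, so adding a process is a data change rather than new solver code. The two-reaction carbon example and all rate constants are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

pools = ["litter_C", "soil_C", "CO2"]
# Each database entry pairs a stoichiometry with a rate law; adding a
# process means adding an entry, not rewriting the solver.
reactions = [
    ({"litter_C": -1.0, "soil_C": 0.6, "CO2": 0.4},
     lambda y: 0.05 * y["litter_C"]),              # litter decomposition
    ({"soil_C": -1.0, "CO2": 1.0},
     lambda y: 0.002 * y["soil_C"]),               # soil respiration
]

def rhs(t, y_vec):
    """Assemble dy/dt generically from the reaction database."""
    y = dict(zip(pools, y_vec))
    dydt = np.zeros(len(pools))
    for stoich, rate in reactions:
        r = rate(y)
        for pool, coeff in stoich.items():
            dydt[pools.index(pool)] += coeff * r
    return dydt

solution = solve_ivp(rhs, (0.0, 365.0), [100.0, 500.0, 0.0])  # one year
```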
NASA Astrophysics Data System (ADS)
Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.
2014-04-01
Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, i.e. their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at the 1:50,000 scale is a massive and difficult task for large countries covering more than several million square kilometers. This paper presents the research and technological development supporting national map updating at the 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The use of many data sources, and the integration of these data to form a high-accuracy, quality-checked product, was required. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different geographic places in the country.
Real-time model learning using Incremental Sparse Spectrum Gaussian Process Regression.
Gijsberts, Arjan; Metta, Giorgio
2013-05-01
Novel applications in unstructured and non-stationary human environments require robots that learn from experience and adapt autonomously to changing conditions. Predictive models therefore not only need to be accurate, but should also be updated incrementally in real time and require minimal human intervention. Incremental Sparse Spectrum Gaussian Process Regression is an algorithm that is targeted specifically for use in this context. Rather than developing a novel algorithm from the ground up, the method is based on the thoroughly studied Gaussian Process Regression algorithm, ensuring a solid theoretical foundation. Non-linearity and a bounded update complexity are achieved simultaneously by means of a finite-dimensional random feature mapping that approximates a kernel function. As a result, the computational cost of each update remains constant over time. Finally, algorithmic simplicity and support for automated hyperparameter optimization ensure convenience when the method is employed in practice. Empirical validation on a number of synthetic and real-life learning problems confirms that the performance of Incremental Sparse Spectrum Gaussian Process Regression is superior to that of the popular Locally Weighted Projection Regression, while its computational requirements are found to be significantly lower. The method is therefore particularly suited for learning under real-time constraints or when computational resources are limited. Copyright © 2012 Elsevier Ltd. All rights reserved.
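A compact sketch of the core mechanism under simplifying assumptions (mean prediction only, fixed hyperparameters, a plain normal-equation solve instead of the paper's incremental Cholesky bookkeeping): random Fourier features approximate an RBF kernel, so each new sample costs O(D²) regardless of how much data has been seen.

```python
import numpy as np

class ISSGP:
    """Sketch of Incremental Sparse Spectrum GP regression: a random
    Fourier feature map approximates an RBF kernel, and each update is a
    constant-cost rank-one refresh of the regularized normal equations."""

    def __init__(self, dim, n_features=100, lengthscale=1.0, noise=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # Spectral points of an RBF kernel: W ~ N(0, 1/lengthscale^2).
        self.W = rng.normal(0.0, 1.0 / lengthscale, (n_features, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.scale = np.sqrt(2.0 / n_features)
        self.A = noise ** 2 * np.eye(n_features)   # regularized Gram matrix
        self.yphi = np.zeros(n_features)

    def _phi(self, x):
        return self.scale * np.cos(self.W @ x + self.b)

    def update(self, x, y):
        phi = self._phi(x)                 # O(D^2): constant cost per sample
        self.A += np.outer(phi, phi)
        self.yphi += y * phi

    def predict(self, x):
        return self._phi(x) @ np.linalg.solve(self.A, self.yphi)
```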
Online coupled camera pose estimation and dense reconstruction from video
Medioni, Gerard; Kang, Zhuoliang
2016-11-01
A product may receive each image in a stream of video image of a scene, and before processing the next image, generate information indicative of the position and orientation of an image capture device that captured the image at the time of capturing the image. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three dimensional (3D) model of at least a portion of the scene that appears likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image base on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update to the model.
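Finding a pose consistent with a subset of 2-D/3-D feature correspondences is the classic perspective-n-point problem with outlier rejection. A hedged sketch using OpenCV's solvePnPRansac as a stand-in; the patent does not specify this routine, and the threshold value is illustrative.

```python
import numpy as np
import cv2

def estimate_pose(model_pts, image_pts, K):
    """Recover the camera pose from 2-D/3-D feature correspondences.
    RANSAC selects the subset of matches that agree on one projection
    transformation, mirroring the consistency search in the abstract."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_pts.astype(np.float32),   # 3-D model feature points (N x 3)
        image_pts.astype(np.float32),   # matched 2-D image points (N x 2)
        K,                              # camera intrinsic matrix
        None,                           # assume no lens distortion
        reprojectionError=3.0)
    if not ok:
        raise RuntimeError("no consistent projection transformation found")
    return rvec, tvec, inliers          # pose plus the consistent subset
```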
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta
2009-07-01
Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary-facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
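For orientation, a bare-bones stochastic EnKF analysis step (perturbed observations, linear observation operator); the paper's localization and GMM clustering refinements are deliberately omitted from this sketch.

```python
import numpy as np

def enkf_update(ensemble, H, obs, obs_cov, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_params); H: (n_obs, n_params) linear operator."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # parameter anomalies
    Y = X @ H.T                                   # observation-space anomalies
    P_yy = Y.T @ Y / (n - 1) + obs_cov
    P_xy = X.T @ Y / (n - 1)
    K = P_xy @ np.linalg.inv(P_yy)                # Kalman gain
    perturbed = obs + rng.multivariate_normal(np.zeros(len(obs)), obs_cov, n)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T
```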
Secure SCADA communication by using a modified key management scheme.
Rezai, Abdalhossein; Keshavarzi, Parviz; Moravej, Zahra
2013-07-01
This paper presents and evaluates a new cryptographic key management scheme which increases the efficiency and security of Supervisory Control And Data Acquisition (SCADA) communication. In the proposed key management scheme, two key update phases are used: session key update and master key update. In the session key update phase, session keys are generated in the master station. In the master key update phase, the Elliptic Curve Diffie-Hellman (ECDH) protocol is used. The Poisson process is also used to model the Security Index (SI) and Quality of Service (QoS). Our analysis shows that the proposed key management scheme not only supports the required speed in the MODBUS implementation but also has several advantages over other key management schemes for secure communication in SCADA networks. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
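As a sketch of the master key update phase, an ECDH exchange followed by key derivation can be written with recent versions of Python's cryptography package; the curve, KDF, and info label are illustrative choices, not those of the paper.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_master_key(own_private, peer_public):
    """ECDH shared secret, whitened through HKDF into a 256-bit key."""
    shared = own_private.exchange(ec.ECDH(), peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"scada-master-key").derive(shared)

# Master station and remote unit each generate an ephemeral key pair,
# exchange public keys, and derive the same master key.
master = ec.generate_private_key(ec.SECP256R1())
remote = ec.generate_private_key(ec.SECP256R1())
assert derive_master_key(master, remote.public_key()) == \
       derive_master_key(remote, master.public_key())
```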
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels per image frame. This work has applications in docking, assembly, retrieval of floating objects, and a host of other space-related tasks.
Mining moving object trajectories in location-based services for spatio-temporal database update
NASA Astrophysics Data System (ADS)
Guo, Danhuai; Cui, Weihong
2008-10-01
Advances in wireless transmission and mobile technology applied to LBS (Location-Based Services) flood us with vast amounts of moving-object data. The data gathered from the position sensors of mobile phones, PDAs, or vehicles hide interesting and valuable knowledge describing the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geo-features has been ignored, and the value of spatio-temporal trajectory data has not been fully exploited. Urban expansion and frequent town-plan changes produce a large amount of outdated or imprecise data in the spatial databases of LBS, which cannot be updated timely and efficiently by manual processing. In this paper we introduce a data mining approach to movement pattern extraction of moving objects, build a model to describe the relationship between the movement patterns of LBS mobile objects and their environment, and propose a spatio-temporal database update strategy for LBS databases based on spatio-temporal trajectory mining. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of the interaction between trajectories and their environment, the design of a spatio-temporal database update strategy based on moving-object data mining, and the experimental application of spatio-temporal database updating by mining moving-object trajectories.
Alqahtani, Saeed; Bukhari, Ishfaq; Albassam, Ahmed; Alenazi, Maha
2018-05-28
The intestinal absorption process is a combination of several events that are governed by various factors. Several transport mechanisms are involved in drug absorption through enterocytes via active and/or passive processes. The transported molecules then undergo intestinal metabolism, which together with intestinal transport may affect the systemic availability of drugs. Many studies have provided clear evidence on the significant role of intestinal first-pass metabolism on drug bioavailability and degree of drug-drug interactions (DDIs). Areas covered: This review provides an update on the role of intestinal first-pass metabolism in the oral bioavailability of drugs and prediction of drug-drug interactions. It also provides a comprehensive overview and summary of the latest update in the role of PBPK modeling in prediction of intestinal metabolism and DDIs in humans. Expert opinion: The contribution of intestinal first-pass metabolism in the oral bioavailability of drugs and prediction of DDIs has become more evident over the last few years. Several in vitro, in situ, and in vivo models have been developed to evaluate the role of first-pass metabolism and to predict DDIs. Currently, physiologically based pharmacokinetic modeling is considered the most valuable tool for the prediction of intestinal first-pass metabolism and DDIs.
Funamizu, Akihiro; Ito, Makoto; Doya, Kenji; Kanzaki, Ryohei; Takahashi, Hirokazu
2015-01-01
Because humans and animals encounter various situations, the ability to adaptively decide upon responses to any situation is essential. To date, however, decision processes and the underlying neural substrates have been investigated under specific conditions; thus, little is known about how various conditions influence one another in these processes. In this study, we designed a binary choice task with variable- and fixed-reward conditions and investigated neural activities of the prelimbic cortex and dorsomedial striatum in rats. Variable- and fixed-reward conditions induced flexible and inflexible behaviors, respectively; one of the two conditions was randomly assigned in each trial to test the possibility of condition interference. Rats were successfully conditioned such that they could find the better reward holes of variable-reward-condition and fixed-reward-condition trials. A learning interference model, which updated the expected rewards (i.e., values) used in variable-reward-condition trials on the basis of combined experiences of both conditions, fit choice behaviors better than conventional models that updated values in each condition independently. Thus, although rats distinguished the trial condition, they updated values in a condition-interfering manner. Our electrophysiological study suggests that this interfering value updating is mediated by the prelimbic cortex and dorsomedial striatum. First, some prelimbic cortical and striatal neurons represented the action-reward associations irrespective of trial conditions. Second, the striatal neurons kept tracking the values of the variable-reward condition even in fixed-reward-condition trials, such that values were possibly updated in an interfering manner even in the fixed-reward condition.
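A toy sketch of the contrast between the interference model and an independent-update model, using a standard delta-rule value update; the learning rate, softmax temperature, and two-hole task structure are assumptions for illustration, not the paper's fitted model.

```python
import numpy as np

def softmax(q, beta=3.0):
    """Choice probabilities over the two reward holes given values q."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

def delta_update(q, choice, reward, alpha=0.2):
    """Standard delta-rule (Rescorla-Wagner) value update."""
    q[choice] += alpha * (reward - q[choice])
    return q

def update_trial(q_var, q_fix, condition, choice, reward):
    """Interference model: the values used on variable-reward trials are
    driven by outcomes of BOTH conditions, whereas an independent model
    would update only the matching condition's values."""
    q_var = delta_update(q_var, choice, reward)        # always updated
    if condition == "fixed":
        q_fix = delta_update(q_fix, choice, reward)    # own condition only
    return q_var, q_fix
```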
A Cognitive Architecture for Human Performance Process Model Research
1992-11-01
individually defined, updatable world representation which is a description of the world as the operator knows it. It contains rules for decisions, an...operate it), and rules of engagement (knowledge about the operator’s expected behavior). The HPP model works in the following way. Information enters...based models depict the problem-solving processes of experts. The experts’ knowledge is represented in symbol structures, along with rules for
Saha, Dibakar; Alluri, Priyanka; Gan, Albert
2017-01-01
The Highway Safety Manual (HSM) presents statistical models to quantitatively estimate an agency's safety performance. The models were developed using data from only a few U.S. states. To account for the effects of local attributes and temporal factors on crash occurrence, agencies are required to calibrate the HSM-default models for crash predictions. The manual suggests updating calibration factors every two to three years, or preferably on an annual basis. Given that the calibration process involves substantial time, effort, and resources, a comprehensive analysis of the required calibration factor update frequency is valuable to the agencies. Accordingly, the objective of this study is to evaluate the HSM's recommendation and determine the required frequency of calibration factor updates. A robust Bayesian estimation procedure is used to assess the variation between calibration factors computed annually, biennially, and triennially using data collected from over 2400 miles of segments and over 700 intersections on urban and suburban facilities in Florida. The Bayesian model yields a posterior distribution of the model parameters that gives credible information to infer whether the difference between calibration factors computed at specified intervals is credibly different from the null value, which represents unaltered calibration factors between the comparison years, i.e., zero difference. The concept of the null value is extended to include the range of values that are practically equivalent to zero. Bayesian inference shows that calibration factors based on total crash frequency need to be updated every two years in cases where the variations between calibration factors are no greater than 0.01. When the variations are between 0.01 and 0.05, calibration factors based on total crash frequency could be updated every three years. Copyright © 2016 Elsevier Ltd. All rights reserved.
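The "practically equivalent to zero" idea is a region of practical equivalence (ROPE) check on the posterior of the difference. A minimal sketch, assuming posterior samples are already available (e.g., from MCMC) and using the study's 0.01 threshold as an illustrative ROPE half-width.

```python
import numpy as np

def rope_decision(posterior_diff, half_width=0.01):
    """Compare the 95% credible interval of the posterior difference
    between two calibration factors against a region of practical
    equivalence (ROPE) around zero."""
    lo, hi = np.percentile(posterior_diff, [2.5, 97.5])
    if hi < -half_width or lo > half_width:
        return "credibly different: update the calibration factor"
    if -half_width <= lo and hi <= half_width:
        return "practically equivalent: keep the current factor"
    return "undecided: collect more data"
```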
Business process modeling in healthcare.
Ruiz, Francisco; Garcia, Felix; Calahorra, Luis; Llorente, César; Gonçalves, Luis; Daniel, Christel; Blobel, Bernd
2012-01-01
The importance of the process point of view is not restricted to a specific enterprise sector. In the field of health, as a result of the nature of the service offered, health institutions' processes are also the basis for decision making, which is focused on achieving their objective of providing quality medical assistance. In this chapter the application of business process modelling, using the Business Process Modelling Notation (BPMN) standard, is described. The main challenges of business process modelling in healthcare are the definition of healthcare processes, the multi-disciplinary nature of healthcare, the flexibility and variability of the activities involved in healthcare processes, the need for interoperability between multiple information systems, and the continuous updating of scientific knowledge in healthcare.
Humor Facilitates Text Comprehension: Evidence from Eye Movements
ERIC Educational Resources Information Center
Ferstl, Evelyn C.; Israel, Laura; Putzar, Lisa
2017-01-01
One crucial property of verbal jokes is that the punchline usually contains an incongruency that has to be resolved by updating the situation model representation. In the standard pragmatic model, these processes are considered to require cognitive effort. However, only few studies compared jokes to texts requiring a situation model revision…
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
Electron-cloud updated simulation results for the PSR, and recent results for the SNS
NASA Astrophysics Data System (ADS)
Pivi, M.; Furman, M. A.
2002-05-01
Recent simulation results for the main features of the electron cloud in the storage ring of the Spallation Neutron Source (SNS) at Oak Ridge, and updated results for the Proton Storage Ring (PSR) at Los Alamos, are presented in this paper. A refined model for the secondary emission process, including the so-called true-secondary, rediffused, and backscattered electrons, has recently been included in the electron-cloud code.
Spatio-Semantic Comparison of Large 3d City Models in Citygml Using a Graph Database
NASA Astrophysics Data System (ADS)
Nguyen, S. H.; Yao, Z.; Kolbe, T. H.
2017-10-01
A city may have multiple CityGML documents recorded at different times or surveyed by different users. To analyse the city's evolution over a given period of time, as well as to update or edit the city model without negating modifications made by other users, it is of utmost importance to first compare, detect and locate spatio-semantic changes between CityGML datasets. This is difficult, however, because CityGML elements belong to a complex hierarchical structure containing multi-level deep associations, which can essentially be considered a graph. Moreover, CityGML allows multiple syntactic ways to define an object, leading to syntactic ambiguities in the exchange format. Furthermore, CityGML is capable of including not only 3D urban objects' graphical appearances but also their semantic properties. Since, to date, no known algorithm is capable of detecting spatio-semantic changes in CityGML documents, a frequent approach is to replace the older models completely with the newer ones, which not only costs computational resources, but also loses track of collaborative and chronological changes. Thus, this research proposes an approach capable of comparing two arbitrarily large CityGML documents on both the semantic and the geometric level. Detected deviations are then attached to their respective sources and can easily be retrieved on demand. As a result, updating a 3D city model using this approach is much more efficient, as only real changes are committed. To achieve this, the research employs a graph database as the main data structure for storing and processing CityGML datasets in three major steps: mapping, matching and updating. The mapping process transforms input CityGML documents into respective graph representations. The matching process compares these graphs and attaches edit operations on the fly. Found changes can then be executed using the Web Feature Service (WFS), the standard interface for updating geographical features across the web.
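A toy sketch of the matching step's output, flattening the two models to dictionaries keyed by a stable identifier (e.g., gml:id) purely for illustration; the paper's actual implementation operates on a graph database and handles deep associations.

```python
def diff_city_models(old, new):
    """Match city objects by a stable key and emit edit operations;
    `old` and `new` map object keys to attribute dictionaries."""
    ops = []
    for key, o in old.items():
        n = new.get(key)
        if n is None:
            ops.append(("delete", key))         # object removed
            continue
        changed = {a: (o[a], n.get(a)) for a in o if n.get(a) != o[a]}
        if changed:
            ops.append(("update", key, changed))  # attribute-level change
    ops += [("insert", k) for k in new.keys() - old.keys()]
    return ops
```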
Patch Transporter: Incentivized, Decentralized Software Patch System for WSN and IoT Environments
Lee, JongHyup
2018-01-01
In the complicated settings of WSN (Wireless Sensor Networks) and IoT (Internet of Things) environments, keeping a number of heterogeneous devices updated is a challenging job, especially with respect to effectively discovering target devices and rapidly delivering the software updates. In this paper, we convert the traditional software update process to a distributed service. We set an incentive system for faithfully transporting the patches to the recipient devices. The incentive system motivates independent, self-interested transporters for helping the devices to be updated. To ensure the system correctly operates, we employ the blockchain system that enforces the commitment in a decentralized manner. We also present a detailed specification for the proposed protocol and validate it by model checking and simulations for correctness. PMID:29438337
PSI Model Curriculum for Office Careers.
ERIC Educational Resources Information Center
Professional Secretaries International, Kansas City, MO.
The PSI [Professional Secretaries International] Model Curriculum for Office Careers provides a framework for the curriculum revision process, making it easier for schools to update, change, expand, or revise their office programs. Through a series of suggested courses, this curriculum develops the knowledge, skills, and attitudes office…
Guideline on terminology and definitions of updating clinical guidelines: The Updating Glossary.
Martínez García, Laura; Pardo-Hernández, Hector; Sanabria, Andrea Juliana; Alonso-Coello, Pablo; Penman, Katrina; McFarlane, Emma
2018-03-01
The Guidelines International Network (G-I-N) Updating Guidelines Working Group launched an initiative to develop a glossary (the Updating Glossary) with domains, terms, definitions, and synonyms related to the updating of clinical guidelines (CGs). The steering committee developed an initial list of domains, terms, definitions, and synonyms through brainstorming and discussion. The panel members participated in three rounds of feedback to discuss, refine, and clarify the proposed terms, definitions, and synonyms. Finally, the panel members were surveyed to assess their level of agreement regarding the glossary. Eighteen terms were identified and defined: (1) continuous updating, (2) decision to update, (3) fixed updating, (4) full updating, (5) impact of the new evidence, (6) partial updating, (7) prioritization process, (8) reporting process, (9) signal for an update, (10) surveillance process, (11) time of validity, (12) timeframe, (13) tools and resources, (14) up to date, (15) update cycle, (16) update unit, (17) updated version, and (18) updating strategy. Consensus was reached for all terms, definitions, and synonyms (median agreement scores ≥ 6), except for one term. The G-I-N Updating Guidelines Working Group assembled the Updating Glossary to facilitate and improve knowledge exchange among CG developers, researchers, and users. Copyright © 2017 Elsevier Inc. All rights reserved.
Carvalho, Fabiana M.; Chaim, Khallil T.; Sanchez, Tiago A.; de Araujo, Draulio B.
2016-01-01
The updating of prospective internal models is necessary to accurately predict future observations. Uncertainty-driven internal model updating has been studied using a variety of perceptual paradigms, which have revealed engagement of frontal and parietal areas. In a distinct literature, studies on temporal expectations have characterized a time-perception network, which relies on the temporal orienting of attention. However, the updating of prospective internal models is highly dependent on temporal attention, since temporal attention must be reoriented according to current environmental demands. In this study, we used functional magnetic resonance imaging (fMRI) to evaluate to what extent the continuous manipulation of temporal prediction would recruit update-related areas and the areas of the time-perception network. We developed an exogenous temporal task that combines rhythm cueing and time-to-contact principles to generate implicit temporal expectation. Two patterns of motion were created: periodic (simple harmonic oscillation) and non-periodic (harmonic oscillation with variable acceleration). We found that non-periodic motion engaged the exogenous temporal orienting network, which includes the ventral premotor and inferior parietal cortices and the cerebellum, as well as the presupplementary motor area, which has previously been implicated in internal model updating, and the motion-sensitive area MT+. Interestingly, we found a right-hemisphere preponderance suggesting the engagement of explicit timing mechanisms. We also show that the periodic motion condition, when compared to the non-periodic motion, activated a particular subset of the default-mode network (DMN) midline areas, including the left dorsomedial prefrontal cortex (DMPFC), anterior cingulate cortex (ACC), and bilateral posterior cingulate cortex/precuneus (PCC/PC). This suggests that the DMN plays a role in processing contextually expected information and supports recent evidence that the DMN may reflect the validation of prospective internal models and predictive control. Taken together, our findings suggest that continuous manipulation of temporal predictions engages representations of temporal prediction as well as task-independent updating of internal models. PMID:27313526
A data collection and processing procedure for evaluating a research program
Giuseppe Rensi; H. Dean Claxton
1972-01-01
A set of computer programs compiled for the information-processing requirements of a model for evaluating research proposals is described. The programs serve to assemble and store information, periodically update it, and convert it to a form usable for decision-making. Guides for collecting and coding data are explained. The data-processing options available and...
Yeast 5 – an expanded reconstruction of the Saccharomyces cerevisiae metabolic network
2012-01-01
Background Efforts to improve the computational reconstruction of the Saccharomyces cerevisiae biochemical reaction network and to refine the stoichiometrically constrained metabolic models that can be derived from such a reconstruction have continued since the first stoichiometrically constrained yeast genome scale metabolic model was published in 2003. Continuing this ongoing process, we have constructed an update to the Yeast Consensus Reconstruction, Yeast 5. The Yeast Consensus Reconstruction is a product of efforts to forge a community-based reconstruction emphasizing standards compliance and biochemical accuracy via evidence-based selection of reactions. It draws upon models published by a variety of independent research groups as well as information obtained from biochemical databases and primary literature. Results Yeast 5 refines the biochemical reactions included in the reconstruction, particularly reactions involved in sphingolipid metabolism; updates gene-reaction annotations; and emphasizes the distinction between reconstruction and stoichiometrically constrained model. Although it was not a primary goal, this update also improves the accuracy of model prediction of viability and auxotrophy phenotypes and increases the number of epistatic interactions. This update maintains an emphasis on standards compliance, unambiguous metabolite naming, and computer-readable annotations available through a structured document format. Additionally, we have developed MATLAB scripts to evaluate the model’s predictive accuracy and to demonstrate basic model applications such as simulating aerobic and anaerobic growth. These scripts, which provide an independent tool for evaluating the performance of various stoichiometrically constrained yeast metabolic models using flux balance analysis, are included as Additional files 1, 2 and 3. Conclusions Yeast 5 expands and refines the computational reconstruction of yeast metabolism and improves the predictive accuracy of a stoichiometrically constrained yeast metabolic model. It differs from previous reconstructions and models by emphasizing the distinction between the yeast metabolic reconstruction and the stoichiometrically constrained model, and makes both available as Additional file 4 and Additional file 5 and at http://yeast.sf.net/ as separate systems biology markup language (SBML) files. Through this separation, we intend to make the modeling process more accessible, explicit, transparent, and reproducible. PMID:22663945
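The paper ships MATLAB evaluation scripts; as a hedged alternative sketch in Python, the SBML file can be loaded and the aerobic/anaerobic growth simulation reproduced with the COBRApy package. The oxygen-exchange reaction identifier below is an assumption for illustration; look it up in the downloaded SBML file.

```python
import cobra

# Load the Yeast 5 reconstruction from its SBML file (see http://yeast.sf.net/).
model = cobra.io.read_sbml_model("yeast_5.xml")

# Aerobic growth: allow oxygen uptake through its exchange reaction.
# NOTE: "r_1992" is an assumed identifier -- check the SBML file for the
# actual oxygen exchange reaction ID in the version you downloaded.
o2 = model.reactions.get_by_id("r_1992")
o2.lower_bound = -1000.0
aerobic_growth = model.optimize().objective_value

# Anaerobic growth: close oxygen uptake and re-optimize.
o2.lower_bound = 0.0
anaerobic_growth = model.optimize().objective_value
print(aerobic_growth, anaerobic_growth)
```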
Lendínez, Cristina; Pelegrina, Santiago; Lechuga, M Teresa
2014-01-01
The present study investigates the process of updating representations in working memory (WM) and how similarity between the information involved influences this process. In WM updating tasks, the similarity in terms of numerical distance between the number to be substituted and the new one facilitates the updating process. We aimed to disentangle the possible effect of two dimensions of similarity that may contribute to this numerical effect: numerical distance itself and common digits shared between the numbers involved. Three experiments were conducted in which different ranges of distances and the coincidence between the digits of the two numbers involved in updating were manipulated. Results showed that the two dimensions of similarity had an effect on updating times. The greater the similarity between the information maintained in memory and the new information that substituted it, the faster the updating. This is consistent both with the idea of distributed representations based on features, and with a selective updating process based on a feature overwriting mechanism. Thus, updating in WM can be understood as a selective substitution process influenced by similarity in which only certain parts of the representation stored in memory are changed.
Yin, Shasha; Zheng, Junyu; Lu, Qing; Yuan, Zibing; Huang, Zhijiong; Zhong, Liuju; Lin, Hui
2015-05-01
Accurate and gridded VOC emission inventories are important for improving regional air quality model performance. In this study, a four-level VOC emission source categorization system was proposed. A 2010-based gridded Pearl River Delta (PRD) regional VOC emission inventory was developed with more comprehensive source coverage, the latest emission factors, and updated activity data. The total anthropogenic VOC emission was estimated to be about 117.4 × 10⁴ t, of which on-road mobile sources made the largest contribution, followed by industrial solvent use and industrial process sources. Within the industrial solvent use source, furniture manufacturing and shoemaking were the major VOC emission contributors. The spatial surrogates of VOC emissions were updated for major VOC sources such as industrial sectors and gas stations. Subsector-based temporal characteristics were investigated and their temporal variations were characterized. The impacts of the updated VOC emission estimates and spatial surrogates were evaluated by modeling O₃ concentrations in the PRD region in July and October 2010. The results indicated that both the updated emission estimates and spatial allocations can effectively reduce model bias in O₃ simulation. Further efforts should be made on the refinement of source classification, comprehensive collection of activity data, and spatial-temporal surrogates in order to reduce uncertainty in the emission inventory and improve model performance. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Fang, Y.; Huang, M.; Liu, C.; Li, H.; Leung, L. R.
2013-11-01
Physical and biogeochemical processes regulate soil carbon dynamics and CO2 flux to and from the atmosphere, influencing global climate changes. Integration of these processes into Earth system models (e.g., community land models (CLMs)), however, currently faces three major challenges: (1) extensive efforts are required to modify modeling structures and to rewrite computer programs to incorporate new or updated processes as new knowledge is being generated, (2) computational cost is prohibitively expensive to simulate biogeochemical processes in land models due to large variations in the rates of biogeochemical processes, and (3) various mathematical representations of biogeochemical processes exist to incorporate different aspects of fundamental mechanisms, but systematic evaluation of the different mathematical representations is difficult, if not impossible. To address these challenges, we propose a new computational framework to easily incorporate physical and biogeochemical processes into land models. The new framework consists of a new biogeochemical module, Next Generation BioGeoChemical Module (NGBGC), version 1.0, with a generic algorithm and reaction database so that new and updated processes can be incorporated into land models without the need to manually set up the ordinary differential equations to be solved numerically. The reaction database consists of processes of nutrient flow through the terrestrial ecosystems in plants, litter, and soil. This framework facilitates effective comparison studies of biogeochemical cycles in an ecosystem using different conceptual models under the same land modeling framework. The approach was first implemented in CLM and benchmarked against simulations from the original CLM-CN code. A case study was then provided to demonstrate the advantages of using the new approach to incorporate a phosphorus cycle into CLM. To our knowledge, the phosphorus-incorporated CLM is a new model that can be used to simulate phosphorus limitation on the productivity of terrestrial ecosystems. The method presented here could in theory be applied to simulate biogeochemical cycles in other Earth system models.
A recursive Bayesian updating model of haptic stiffness perception.
Wu, Bing; Klatzky, Roberta L
2018-06-01
Stiffness of many materials follows Hooke's Law, but the mechanism underlying the haptic perception of stiffness is not as simple as it seems in the physical definition. The present experiments support a model by which stiffness perception is adaptively updated during dynamic interaction. Participants actively explored virtual springs and estimated their stiffness relative to a reference. The stimuli were simulations of linear springs or nonlinear springs created by modulating a linear counterpart with low-amplitude, half-cycle (Experiment 1) or full-cycle (Experiment 2) sinusoidal force. Experiment 1 showed that subjective stiffness increased (decreased) as a linear spring was positively (negatively) modulated by a half-sinewave force. In Experiment 2, an opposite pattern was observed for full-sinewave modulations. Modeling showed that the results were best described by an adaptive process that sequentially and recursively updated an estimate of stiffness using the force and displacement information sampled over trajectory and time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
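The recursive Bayesian updating can be made concrete with a conjugate Gaussian (scalar Kalman) update of a stiffness estimate from sequential force-displacement samples; the spring parameters and noise levels below are invented for illustration, not the experiment's values.

```python
import numpy as np

def update_stiffness(mean_k, var_k, x, f, noise_var=0.25):
    """One conjugate Gaussian update of stiffness k from a sample,
    assuming F = k*x + Gaussian noise (a scalar Kalman filter)."""
    gain = var_k * x / (x * x * var_k + noise_var)
    mean_k += gain * (f - mean_k * x)
    var_k *= (1.0 - gain * x)
    return mean_k, var_k

# Exploring a k=3 spring with positive half-sine force modulation.
rng = np.random.default_rng(1)
mean_k, var_k = 1.0, 10.0                       # vague prior on stiffness
for x in np.linspace(0.01, 2.0, 100):           # indentation trajectory
    f = 3.0 * x + 0.3 * np.sin(np.pi * x / 2.0) + rng.normal(0.0, 0.5)
    mean_k, var_k = update_stiffness(mean_k, var_k, x, f)
print(mean_k)   # settles above 3: positive modulation inflates the estimate
```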
NASA Technical Reports Server (NTRS)
Malroy, Eric T.
2007-01-01
The programs, arrays, and logic structure were developed to enable the dynamic update of conductors in Thermal Desktop. The MATLAB program FMHTPRE.m processes the Thermal Desktop conductors and sets up the arrays. The user needs to manually copy portions of the output to different input regions in Thermal Desktop. Fortran subroutines are also provided that perform the actual updates to the conductors. The subroutines are set up for helium gas, but the equations can be modified for other gases. The maximum number of free molecular conductors allowed is 10,000 for a given radiation task; additional radiation tasks for FMHT can be generated to account for more conductors. Modifications to the Fortran subroutines may be warranted when the heat transfer is in the mixed or continuum regime. The FMHT Thermal Desktop model should be activated by using the "Case Set Manager" once the model is set up. Careful setup of the model is needed to avoid excessive solve times.
Adaptive classifier for steel strip surface defects
NASA Astrophysics Data System (ADS)
Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li
2017-01-01
Surface defect detection systems have been receiving increased attention for their precision, speed, and low cost. One of the greatest challenges is the deterioration of accuracy over time as equipment ages and processes change. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with small samples, making it adaptive to accuracy deterioration. Firstly, abundant features were introduced to cover a large amount of information about the defects. Secondly, we constructed a series of SVMs on random subspaces of the features. Then, a Bayes classifier was trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we proposed a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
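A hedged sketch of the BYEC idea follows (scikit-learn, synthetic data; not the authors' implementation): SVMs trained on random feature subspaces are fused by a Bayes classifier, and only that lightweight fuser is re-trained on a small new sample to track drift.

```python
# Random-subspace SVM bank fused by a naive Bayes classifier; drift adaptation
# retrains only the fuser on a small fresh sample, leaving the SVMs untouched.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=600, n_features=40, random_state=1)
X_tr, y_tr, X_new, y_new = X[:500], y[:500], X[500:], y[500:]

subspaces = [rng.choice(40, size=10, replace=False) for _ in range(15)]
svms = [SVC().fit(X_tr[:, s], y_tr) for s in subspaces]

def base_outputs(X):
    # Stack each subspace SVM's decision value as a meta-feature.
    return np.column_stack([m.decision_function(X[:, s])
                            for m, s in zip(svms, subspaces)])

fuser = GaussianNB().fit(base_outputs(X_tr), y_tr)
# Small-sample update of the Bayes fuser only.
fuser.partial_fit(base_outputs(X_new), y_new)
print("accuracy on new sample:", fuser.score(base_outputs(X_new), y_new))
```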
Theory of agent-based market models with controlled levels of greed and anxiety
NASA Astrophysics Data System (ADS)
Papadopoulos, P.; Coolen, A. C. C.
2010-01-01
We use generating functional analysis to study minority-game-type market models with generalized strategy valuation updates that control the psychology of agents' actions. The agents' choice between trend-following and contrarian trading, and their vigor in each, depends on the overall state of the market. Even in 'fake history' models, the theory now involves an effective overall bid process (coupled to the effective agent process) which can exhibit profound remanence effects and new phase transitions. For some models the bid process can be solved directly, others require Maxwell-construction-type approximations.
Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Lung, Shun-Fat; Pak, Chan-Gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool to optimize the objective function and constraints, the mass properties, the natural frequencies, and the mode shapes can be matched to the target data while retaining mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
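For readers unfamiliar with the matching targets mentioned above, the toy sketch below (not the ATW model; matrices are stand-ins) computes the modal assurance criterion (MAC) between an analytical and a "measured" mode shape and verifies mass-matrix orthogonality of the mass-normalized modes, i.e. that Phi^T M Phi is close to the identity.

```python
# MAC and mass-orthogonality checks on a toy 3-DOF spring-mass system.
import numpy as np
from scipy.linalg import eigh

def mac(phi_a, phi_e):
    """Modal assurance criterion between two mode shapes (1 = perfect match)."""
    return (phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

M = np.diag([2.0, 1.5, 1.0])                     # toy mass matrix
K = np.array([[400.0, -200.0, 0.0],
              [-200.0, 400.0, -200.0],
              [0.0, -200.0, 200.0]])             # toy stiffness matrix

w2, Phi = eigh(K, M)     # generalized eigenproblem; Phi is mass-normalized
print("natural frequencies (Hz):", np.round(np.sqrt(w2) / (2 * np.pi), 3))
print("Phi^T M Phi:\n", np.round(Phi.T @ M @ Phi, 6))   # ~ identity

phi_meas = Phi[:, 0] + 0.05 * np.random.default_rng(0).normal(size=3)
print("MAC(mode 1 vs measured):", round(mac(Phi[:, 0], phi_meas), 3))
```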
2005 v4.2 Technical Support Document
The Technical Support Document for the Final Transport Rule describes how updated 2005 NEI version 2 emissions data were processed for air quality modeling in support of the Cross-State Air Pollution Rule (CSAPR).
Ismail, Mahmoud; Philbin, James
2015-04-01
The digital imaging and communications in medicine (DICOM) information model combines pixel data and its metadata in a single object. There are user scenarios that only need metadata manipulation, such as deidentification and study migration. Most picture archiving and communication systems use a database to store and update the metadata rather than updating the raw DICOM files themselves. The multiseries DICOM (MSD) format separates metadata from pixel data and eliminates duplicate attributes. This work promotes storing DICOM studies in MSD format to reduce the metadata processing time. A set of experiments is performed that updates the metadata of a set of DICOM studies for deidentification and migration. The studies are stored in both the traditional single frame DICOM (SFD) format and the MSD format. The results show that it is faster to update studies' metadata in MSD format than in SFD format because the bulk data is separated in MSD and is not retrieved from the storage system. In addition, it is space efficient to store deidentified studies in MSD format, as they share the same bulk data object with the original study. In summary, separation of metadata from pixel data using the MSD format provides fast metadata access and speeds up applications that process only the metadata.
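As an illustration of the metadata-only workload being benchmarked, here is a hedged sketch using pydicom (not the authors' MSD implementation; the directory layout and attribute list are invented): in an MSD-style store the loop touches only small metadata objects, whereas with single-frame files the bulk pixel data would be read and rewritten alongside them.

```python
# Metadata-only deidentification pass over a hypothetical study layout.
from pathlib import Path
import pydicom

def deidentify(ds):
    """Blank a few direct identifiers (illustrative only; a real
    deidentification profile covers many more attributes)."""
    for tag in ("PatientName", "PatientID", "PatientBirthDate"):
        if tag in ds:
            setattr(ds, tag, "")
    return ds

for path in Path("study_metadata").glob("*.dcm"):   # hypothetical MSD store
    ds = pydicom.dcmread(path)    # small metadata object, no bulk pixel data
    ds = deidentify(ds)
    ds.save_as(path)
```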
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aden, A.; Ruth, M.; Ibsen, K.
This report is an update of NREL's ongoing process design and economic analyses of processes related to developing ethanol from lignocellulosic feedstocks. The U.S. Department of Energy (DOE) is promoting the development of ethanol from lignocellulosic feedstocks as an alternative to conventional petroleum-based transportation fuels. DOE funds both fundamental and applied research in this area and needs a method for predicting cost benefits of many research proposals. To that end, the National Renewable Energy Laboratory (NREL) has modeled many potential process designs and estimated the economics of each process during the last 20 years. We envision updating this process design report at regular intervals to ensure that the process design incorporates all new data from NREL research, DOE-funded research, and other sources, and that the equipment costs are reasonable and consistent with good engineering practice for plants of this type. For the non-research areas this means using equipment and process approaches as they are currently used in industrial applications. For the last report, published in 1999, NREL performed a complete review and update of the process design and economic model for the biomass-to-ethanol process utilizing co-current dilute acid prehydrolysis with simultaneous saccharification (enzymatic) and co-fermentation. The process design included the core technologies being researched by the DOE: prehydrolysis, simultaneous saccharification and co-fermentation, and cellulase enzyme production. In addition, all ancillary areas--feed handling, product recovery and purification, wastewater treatment (WWT), lignin combustor and boiler-turbogenerator, and utilities--were included. NREL engaged Delta-T Corporation (Delta-T) to assist in the process design evaluation, the process equipment costing, and overall plant integration. The process design and costing for the lignin combustor and boiler turbogenerator were reviewed by Reaction Engineering Inc. (REI), and Merrick & Company reviewed the wastewater treatment. Since then, NREL has engaged Harris Group (Harris) to perform vendor testing, process design, and costing of critical equipment identified during the earlier work. This included solid/liquid separation and pretreatment reactor design and costing. Corn stover handling was also investigated to support DOE's decision to focus on corn stover as a feedstock for lignocellulosic ethanol. Working with Harris, process design and costing for these areas were improved through vendor designs, costing, and, in some cases, vendor testing. In addition to this work, enzyme costs were adjusted to reflect collaborative work between NREL and enzyme manufacturers (Genencor International and Novozymes Biotech) to provide a delivered enzyme for lignocellulosic feedstocks. This report is the culmination of our work and represents an updated process design and cost basis for the process using a corn stover feedstock. The process design and economic model are useful for predicting the cost benefits of proposed research. Proposed research results can be translated into modifications of the process design, and the economic impact can be assessed. This allows DOE, NREL, and other researchers to set priorities on future research with an understanding of potential reductions to the ethanol production cost. To be economically viable, ethanol production costs must be below market values for ethanol.
DOE has chosen a target ethanol selling price of $1.07 per gallon as a goal for 2010. The conceptual design and costs presented here are based on a 2010 plant start-up date. The key research targets required to achieve this design and the $1.07 value are discussed in the report.
Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms
Bailey, Heather R.; Zacks, Jeffrey M.
2015-01-01
Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248
The LANDFIRE Refresh strategy: updating the national dataset
Nelson, Kurtis J.; Connot, Joel A.; Peterson, Birgit E.; Martin, Charley
2013-01-01
The LANDFIRE Program provides comprehensive vegetation and fuel datasets for the entire United States. As with many large-scale ecological datasets, vegetation and landscape conditions must be updated periodically to account for disturbances, growth, and natural succession. The LANDFIRE Refresh effort was the first attempt to consistently update these products nationwide. It incorporated a combination of specific systematic improvements to the original LANDFIRE National data, remote sensing based disturbance detection methods, field collected disturbance information, vegetation growth and succession modeling, and vegetation transition processes. This resulted in the creation of two complete datasets for all 50 states: LANDFIRE Refresh 2001, which includes the systematic improvements, and LANDFIRE Refresh 2008, which includes the disturbance and succession updates to the vegetation and fuel data. The new datasets are comparable for studying landscape changes in vegetation type and structure over a decadal period, and provide the most recent characterization of fuel conditions across the country. The applicability of the new layers is discussed and the effects of using the new fuel datasets are demonstrated through a fire behavior modeling exercise using the 2011 Wallow Fire in eastern Arizona as an example.
Mattson, Marifran; Basu, Ambar
2010-07-01
The Centers for Disease Control and Prevention's (CDC) Diethylstilbestrol (DES) Update, a campaign to educate people who may have been exposed to the drug DES, is framed on the premises of the social marketing model, namely formative research, audience segmentation, product, price, placement, promotion, and campaign evaluation. Beyond that, the campaign takes a critical step in extending the social marketing paradigm by highlighting the need to situate the messaging process at the heart of any health communication campaign. This article uses CDC's DES Update as a case study to illustrate an application of a message development tool within social marketing. This tool promotes the operationalization of messaging within health campaigns. Ultimately, the goal of this project is to extend the social marketing model and provide useful theoretical guidance to health campaign practitioners on how to accomplish stellar communication within a social marketing campaign.
Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model
Jensen, Greg; Muñoz, Fabian; Alkan, Yelda; Ferrera, Vincent P.; Terrace, Herbert S.
2015-01-01
Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort's success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggest that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models. PMID:26407227
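The sketch below is a deliberately simplified illustration, not the published betasort algorithm, of the three ingredients the abstract names: beta-distributed positions on the unit span, asymmetric handling of positive and negative feedback, and a nudge to every stimulus on every trial; all constants are invented.

```python
# Simplified betasort-style learner on 5 stimuli with true order A > ... > E.
import numpy as np

rng = np.random.default_rng(0)
n = 5
U = np.ones(n)   # "upper" evidence counts (beta distribution parameters)
L = np.ones(n)   # "lower" evidence counts

def position(i):
    return U[i] / (U[i] + L[i])   # beta-mean position on the unit span

for _ in range(400):
    i = rng.integers(0, n - 1); j = i + 1        # adjacent training pair
    choice = i if position(i) >= position(j) else j
    correct = (choice == i)                       # lower index = higher rank
    U *= 0.95; L *= 0.95                          # relax ALL stimuli
    if correct:                                   # mild consolidation
        U[i] += 1.0; L[j] += 1.0
    else:                                         # stronger correction
        U[j] += 2.0; L[i] += 2.0
    # implicit update: pull every stimulus toward its current rank
    ranks = np.argsort(np.argsort([-position(k) for k in range(n)]))
    U += 0.1 * (1.0 - ranks / (n - 1)); L += 0.1 * (ranks / (n - 1))

print("inferred positions:", np.round([position(k) for k in range(n)], 2))
print("transitive choice B over D:", position(1) > position(3))
```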
A BEME systematic review of the effects of interprofessional education: BEME Guide No. 39.
Reeves, Scott; Fletcher, Simon; Barr, Hugh; Birch, Ivan; Boet, Sylvain; Davies, Nigel; McFadyen, Angus; Rivera, Josette; Kitto, Simon
2016-07-01
Interprofessional education (IPE) aims to bring together different professionals to learn with, from, and about one another in order to collaborate more effectively in the delivery of safe, high-quality care for patients/clients. Given its potential for improving collaboration and care delivery, there have been repeated calls for the wider-scale implementation of IPE across education and clinical settings. Increasingly, a range of IPE initiatives are being implemented and evaluated which are adding to the growth of evidence for this form of education. The overall aim of this review is to update a previous BEME review published in 2007. In doing so, this update sought to synthesize the evolving nature of the IPE evidence. Medline, CINAHL, BEI, and ASSIA were searched from May 2005 to June 2014. Also, journal hand searches were undertaken. All potential abstracts and papers were screened by pairs of reviewers to determine inclusion. All included papers were assessed for methodological quality and those deemed as "high quality" were included. The presage-process-product (3P) model and a modified Kirkpatrick model were employed to analyze and synthesize the included studies. Twenty-five new IPE studies were included in this update. These studies were added to the 21 studies from the previous review to form a complete data set of 46 high-quality IPE studies. In relation to the 3P model, overall the updated review found that most of the presage and process factors identified from the previous review were further supported in the newer studies. In regard to the products (outcomes) reported, the results from this review continue to show far more positive than neutral or mixed outcomes reported in the included studies. Based on the modified Kirkpatrick model, the included studies suggest that learners respond well to IPE, their attitudes and perceptions of one another improve, and they report increases in collaborative knowledge and skills. There is more limited, but growing, evidence related to changes in behavior, organizational practice, and benefits to patients/clients. This updated review found that key context (presage) and process factors reported in the previous review continue to have resonance on the delivery of IPE. In addition, the newer studies have provided further evidence for the effects on IPE related to a number of different outcomes. Based on these conclusions, a series of key implications for the development of IPE are offered.
The Voronoi spatio-temporal data structure
NASA Astrophysics Data System (ADS)
Mioc, Darka
2002-04-01
Current GIS models cannot easily integrate the temporal dimension of spatial data. Indeed, current GISs do not support incremental (local) addition and deletion of spatial objects, and they cannot support the temporal evolution of spatial data. Spatio-temporal facilities would be very useful in many GIS applications: harvesting and forest planning, cadastre, urban and regional planning, and emergency planning. The spatio-temporal model that can overcome these problems is based on a topological model---the Voronoi data structure. Voronoi diagrams are irregular tessellations of space that adapt to spatial objects, and therefore they are a synthesis of raster and vector spatial data models. The main advantage of the Voronoi data structure is its local and sequential map updates, which allow us to automatically record each event and the performed map updates within the system. These map updates are executed through map construction commands that are composed of atomic actions (geometric algorithms for addition, deletion, and motion of spatial objects) on the dynamic Voronoi data structure. The formalization of map commands led to the development of a spatial language comprising a set of atomic operations or constructs on spatial primitives (points and lines), powerful enough to define complex operations. This resulted in a new formal model for spatio-temporal change representation, where each update is uniquely characterized by the numbers of newly created and inactivated Voronoi regions. This is used for the extension of the model towards the hierarchical Voronoi data structure. In this model, spatio-temporal changes induced by map updates are preserved in a hierarchical data structure that combines events and corresponding changes in topology. This hierarchical Voronoi data structure has an implicit time ordering of events visible through changes in topology, and it is equivalent to an event structure that can support temporal data without precise temporal information. This formal model of spatio-temporal change representation is currently applied to retroactive map updates and visualization of map evolution. It offers new possibilities in the domains of temporal GIS, transaction processing, spatio-temporal queries, spatio-temporal analysis, map animation and map visualization.
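A small sketch of the incremental-update idea, using SciPy's incremental Delaunay triangulation (the dual of the Voronoi diagram) as a stand-in: the change in the simplex set across a local insertion is logged per event, loosely analogous to the paper's counts of created and inactivated Voronoi regions.

```python
# Incremental Delaunay insertions with a per-event change log.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
tri = Delaunay(rng.random((20, 2)), incremental=True)

event_log = []                        # (event id, simplices before, after)
for event, pt in enumerate(rng.random((5, 2))):
    before = len(tri.simplices)
    tri.add_points(pt[None, :])       # local, sequential map update
    event_log.append((event, before, len(tri.simplices)))

for event, before, after in event_log:
    print(f"event {event}: simplices {before} -> {after}")
```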
Title V in South Carolina -- An Update.
ERIC Educational Resources Information Center
Jacob, Nelson L.
Since South Carolina's Title V Community and Resource Development (CRD) project is limited to one small rural county (Williamsburg) affording careful documentation, this paper explicates South Carolina's CRD process via a social action model. This project, then, is described in terms of the following model components: (1) community initiative…
B.G. Marcot; J.D. Steventon; G.D. Sutherland; R.K. McCann
2006-01-01
We provide practical guidelines for developing, testing, and revising Bayesian belief networks (BBNs). Primary steps in this process include creating influence diagrams of the hypothesized "causal web" of key factors affecting a species or ecological outcome of interest; developing a first, alpha-level BBN model from the influence diagram; revising the model...
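As a toy illustration of the alpha-level step (invented variables and probabilities, not the authors' guidelines), the sketch below encodes a two-parent influence diagram as conditional probability tables and queries the outcome by enumeration.

```python
# Two-parent Bayesian belief network queried by enumeration.
P_habitat = {"good": 0.6, "poor": 0.4}
P_disturb = {"low": 0.7, "high": 0.3}
# P(species persists | habitat, disturbance) -- hypothetical expert values
P_persist = {("good", "low"): 0.9, ("good", "high"): 0.6,
             ("poor", "low"): 0.4, ("poor", "high"): 0.1}

# Marginal probability of persistence, summing over parent states.
p = sum(P_habitat[h] * P_disturb[d] * P_persist[(h, d)]
        for h in P_habitat for d in P_disturb)
print(f"P(persist) = {p:.3f}")
```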
Persistence of opinion in the Sznajd consensus model: computer simulation
NASA Astrophysics Data System (ADS)
Stauffer, D.; de Oliveira, P. M. C.
2002-12-01
The density of never-changed opinions during the Sznajd consensus-finding process decays with time t as 1/t^θ. We find θ ≈ 3/8 for a chain, compatible with the exact Ising result of Derrida et al. In higher dimensions, however, the exponent differs from the Ising θ. With simultaneous updating of sublattices instead of the usual random sequential updating, the number of persistent opinions decays roughly exponentially. Some of the simulations used multi-spin coding.
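A compact sketch of the measurement described above: a one-dimensional Sznajd chain with random sequential updating, logging the fraction of sites whose opinion has never changed, which should decay roughly as t^(-3/8).

```python
# 1-D Sznajd model: an agreeing pair convinces its outer neighbours.
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                  # chain length
s = rng.choice([-1, 1], size=N)             # initial random opinions
changed = np.zeros(N, dtype=bool)           # has this site ever flipped?

for t in range(100 * N):                    # 100 sweeps, random sequential
    i = rng.integers(0, N - 1)
    if s[i] == s[i + 1]:                    # agreeing pair found
        for j in (i - 1, i + 2):            # convince the outer neighbours
            if 0 <= j < N and s[j] != s[i]:
                s[j] = s[i]
                changed[j] = True
    if (t + 1) % (10 * N) == 0:
        print(f"sweep {(t + 1) // N:3d}: persistent fraction = "
              f"{1.0 - changed.mean():.4f}")
```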
Improvements of the Radiation Code "MstrnX" in AORI/NIES/JAMSTEC Models
NASA Astrophysics Data System (ADS)
Sekiguchi, M.; Suzuki, K.; Takemura, T.; Watanabe, M.; Ogura, T.
2015-12-01
There is a large demand for an accurate yet rapid radiative transfer scheme for general climate models. The broadband radiative transfer code "mstrnX" was developed by the Atmosphere and Ocean Research Institute (AORI) and has been implemented in several global and regional climate models developed cooperatively in the Japanese research community, for example, MIROC (the Model for Interdisciplinary Research on Climate) [Watanabe et al., 2010], NICAM (Non-hydrostatic Icosahedral Atmospheric Model) [Satoh et al., 2008], and CReSS (Cloud Resolving Storm Simulator) [Tsuboki and Sakakibara, 2002]. In this study, we improve the gas absorption process and the scattering process of ice particles. For the update of the gas absorption process, the absorption line database is replaced by the latest version from the Harvard-Smithsonian Center for Astrophysics, HITRAN2012. An optimization method is adopted in mstrnX to decrease the number of integration points for the wavenumber integration using the correlated k-distribution method and to increase the computational efficiency in each band. The integration points and weights of the correlated k-distribution are optimized for accurate calculation of the heating rate up to an altitude of 70 km. For this purpose we adopted a new non-linear optimization method for the correlated k-distribution and studied an optimal initial condition and cost function for the non-linear optimization. It is known that mstrnX has a considerable bias in the case of quadrupled carbon dioxide concentrations [Pincus et al., 2015]; this bias is decreased by the improvement. For the update of the scattering process of ice particles, we adopt a solid column as the ice crystal habit [Yang et al., 2013]. The single scattering properties are calculated and tabulated in advance. The size parameter of this table ranged from 0.1 to 1000 in mstrnX; we expand the maximum to 50000 in order to handle large particles such as fog and rain drops. These updates will be introduced into MIROC and adopted for the CMIP6 experiment.
MOVES2014: Evaporative Emissions Report
Vehicle evaporative emissions are now modeled in EPA’s MOVES according to the physical processes involved: permeation, tank vapor venting, liquid leaks, and refueling emissions. With this update, the following improvements are being incorporated into the MOVES evaporative emissions methodology, a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rutqvist, Jonny; Blanco Martin, Laura; Mukhopadhyay, Sumit
In this report, we present FY2014 progress by Lawrence Berkeley National Laboratory (LBNL) related to modeling of coupled thermal-hydrological-mechanical-chemical (THMC) processes in salt and their effect on brine migration at high temperatures. LBNL's work on the modeling of coupled THMC processes in salt was initiated in FY2012, focusing on exploring and demonstrating the capabilities of an existing LBNL modeling tool (TOUGH-FLAC) for simulating temperature-driven coupled flow and geomechanical processes in salt. This work includes development related to, and implementation of, essential capabilities, as well as testing the model against relevant information and published experimental data related to the fate and transport of water. Here we provide more details on the FY2014 work, first presenting updated tools and improvements made to the TOUGH-FLAC simulator, and the use of this updated tool in a new model simulation of long-term THM behavior within a generic repository in a salt formation. This is followed by the description of current benchmarking and validation efforts, including the TSDE experiment. We then present the current status in the development of constitutive relationships and the dual-continuum model for brine migration. We conclude with an outlook for FY2015, which will be much focused on model validation against field experiments and on the use of the model for design studies related to a proposed heater experiment.
NASA Technical Reports Server (NTRS)
Lung, Shun-fat; Pak, Chan-gi
2008-01-01
Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data while the mass matrix is kept orthogonal.
Toward Near Real-Time Tomography of the Upper Mantle
NASA Astrophysics Data System (ADS)
Debayle, E.; Dubuffet, F.
2014-12-01
We added a layer of automation to Debayle and Ricard (2012)'s waveform modeling scheme for fundamental and higher mode surface waves in the period range 50-160 s. We processed all the Rayleigh waveforms recorded on the LHZ channel by the virtual networks GSN_broadband, FDSN_all, and US_backbone between January 1996 and December 2013. Six million waveforms were obtained from IRIS DMC. We check that all the necessary information (instrument response, global CMT determination) is available and that each record includes a velocity window which encompasses the surface wave. Selected data must also have a signal-to-noise ratio greater than 3 in a range covering at least the periods between 50 and 100 s. About 3 million waveforms are selected (92% of the rejections are due to the signal-to-noise ratio criterion) and processed using Debayle and Ricard (2012)'s scheme, which allows the successful modeling of about 1.5 million waveforms. We complete this database with 60,000 waveforms recorded between 1976 and 1996, or after 1996 during various temporary experiments, and with 161,730 Rayleigh waveforms analyzed at longer periods, between 120 and 360 s. The whole data set is inverted using Debayle and Sambridge (2004)'s scheme to produce a 3D shear velocity model. A simple shell command, "update_tomo", can then update our seismic model in an entirely automated way. Currently, this command checks from the CMT catalog what potential data are available from the GSN_broadband, FDSN_all, and US_backbone virtual networks, uses web services to request these data from IRIS DMC, and applies the processing chain described above to update our seismic model. We plan to update our seismic model on a regular basis in the near future, and to make it available on the web. Our most recent seismic model includes azimuthal anisotropy, achieves a lateral resolution of a few hundred kilometers and a vertical resolution of a few tens of kilometers. The correlation with surface tectonics is very strong in the uppermost 200 km. Regions deeper than 400 km show no velocity contrasts larger than 1%, except for high velocity slabs which produce broad high velocity regions within the transition zone. The use of higher modes and long period surface waves allows us to extract the shear velocity structure down to about 1000 km depth.
ERIC Educational Resources Information Center
Elyria City Board of Education, OH.
Total Quality Management (TQM) is a process and strategy designed to improve an organization's effectiveness and efficiency. The Elyria Schools, named as Ohio's model urban school district in 1991, uses TQM to implement updated strategic goals through a process emphasizing teamwork, best knowledge, prevention, and commitment to continuous…
Pre-Service Teachers' Material Development Process Based on the ADDIE Model: E-Book Design
ERIC Educational Resources Information Center
Usta, Necla Dönmez; Güntepe, Ebru Turan
2017-01-01
With the developments in information and communication technologies, books which are fundamental information sources for students throughout their education and training process are being transformed into electronic book (e-book) formats. E-books provide interactive environments, and they are also updateable materials, which shows that, in time,…
Context updating during sentence comprehension: the effect of aboutness topic.
Burmester, Juliane; Spalek, Katharina; Wartenburger, Isabell
2014-10-01
To communicate efficiently, speakers typically link their utterances to the discourse environment and adapt their utterances to the listener's discourse representation. Information structure describes how linguistic information is packaged within a discourse to optimize information transfer. The present study investigates the nature and time course of context integration (i.e., aboutness topic vs. neutral context) on the comprehension of German declarative sentences with either subject-before-object (SO) or object-before-subject (OS) word order using offline comprehensibility judgments and online event-related potentials (ERPs). Comprehensibility judgments revealed that the topic context selectively facilitated comprehension of stories containing OS (i.e., non-canonical) sentences. In the ERPs, the topic context effect was reflected in a less pronounced late positivity at the sentence-initial object. In line with the Syntax-Discourse Model, we argue that these context-induced effects are attributable to reduced processing costs for updating the current discourse model. The results support recent approaches of neurocognitive models of discourse processing.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
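To make the per-axis bookkeeping concrete, the sketch below assumes the commonly cited high-NA anamorphic demagnifications of 4x in X and 8x in Y (and an invented mask-rule minimum); it is an illustration of the geometry handling, not any vendor's OPC or MRC code.

```python
# Anamorphic wafer-to-mask scaling and a one-rule MRC check.
def wafer_to_mask(x_nm, y_nm, mag_x=4.0, mag_y=8.0):
    """Scale wafer-plane coordinates (nm) to mask-plane coordinates (nm)."""
    return x_nm * mag_x, y_nm * mag_y

def passes_mrc(width_x_nm, width_y_nm, min_mask_feature_nm=60.0):
    """Check a rectangle against a single hypothetical mask-rule minimum."""
    mx, my = wafer_to_mask(width_x_nm, width_y_nm)
    return mx >= min_mask_feature_nm and my >= min_mask_feature_nm

# A 16 nm x 16 nm wafer feature becomes 64 nm x 128 nm on the mask: the Y
# direction gains extra headroom, consistent with the observation that the
# larger mask Y-scale tolerates mask errors better.
print(wafer_to_mask(16.0, 16.0), passes_mrc(16.0, 16.0))
```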
Neural substrates of updating the prediction through prediction error during decision making.
Wang, Ying; Ma, Ning; He, Xiaosong; Li, Nan; Wei, Zhengde; Yang, Lizhuang; Zha, Rujing; Han, Long; Li, Xiaoming; Zhang, Daren; Liu, Ying; Zhang, Xiaochu
2017-08-15
Learning of prediction error (PE), including reward PE and risk PE, is crucial for updating the prediction in reinforcement learning (RL). Neurobiological and computational models of RL have reported extensive brain activations related to PE. However, the occurrence of PE does not necessarily predict updating the prediction, e.g., in a probability-known event. Therefore, the brain regions specifically engaged in updating the prediction remain unknown. Here, we conducted two functional magnetic resonance imaging (fMRI) experiments, the probability-unknown Iowa Gambling Task (IGT) and the probability-known risk decision task (RDT). Behavioral analyses confirmed that PEs occurred in both tasks but were only used for updating the prediction in the IGT. By comparing PE-related brain activations between the two tasks, we found that the rostral anterior cingulate cortex/ventral medial prefrontal cortex (rACC/vmPFC) and the posterior cingulate cortex (PCC) activated only during the IGT and were related to both reward and risk PE. Moreover, the responses in the rACC/vmPFC and the PCC were modulated by uncertainty and were associated with reward prediction-related brain regions. Electric brain stimulation over these regions lowered the performance in the IGT but not in the RDT. Our findings of a distributed neural circuit of PE processing suggest that the rACC/vmPFC and the PCC play a key role in updating the prediction through PE processing during decision making.
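The distinction the study turns on, a prediction error that occurs but is not used, can be stated in a few lines. In the hedged sketch below (a generic value update, not the authors' computational model), the PE is always computed, but the value estimate is updated only when outcome probabilities are unknown, IGT-style.

```python
# PE always occurs; updating the prediction is conditional on the task.
def value_step(v, reward, alpha=0.1, probabilities_known=False):
    pe = reward - v                 # reward prediction error (always computed)
    if not probabilities_known:     # ...but only used to update in IGT-like tasks
        v += alpha * pe
    return v, pe

v = 0.0
for r in [1.0, 0.0, 1.0, 1.0]:
    v, pe = value_step(v, r, probabilities_known=False)
    print(f"PE = {pe:+.2f}, updated value = {v:.2f}")
```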
NASA Astrophysics Data System (ADS)
Vassena, G.; Clerici, A.
2018-05-01
The state of the art in 3D surveying technologies, if correctly applied, allows one to obtain 3D coloured models of large open pit mines using different technologies such as terrestrial laser scanning (TLS) with images, combined with UAV-based digital photogrammetry. GNSS and/or total stations are also currently used to georeference the model. The University of Brescia has carried out a project to map in 3D an open pit mine located in Botticino, a well-known marble extraction site close to Brescia in northern Italy. Terrestrial laser scanner 3D point clouds, combined with RGB images and digital photogrammetry from UAV, have been used to map a large part of the quarry. Following rigorous and well-known procedures, a 3D point cloud and mesh model have been obtained. After the description of the combined mapping process, the paper describes the innovative process proposed for the daily/weekly update of the model itself. To realize this task, a SLAM-based approach is described, using an instrument capable of running an automatic localization process and real-time, on-the-field change detection analysis.
Statistical and perceptual updating: correlated impairments in right brain injury.
Stöttinger, Elisabeth; Filipowicz, Alex; Marandi, Elahe; Quehl, Nadine; Danckert, James; Anderson, Britt
2014-06-01
It has been hypothesized that many of the cognitive impairments commonly seen after right brain damage (RBD) can be characterized as a failure to build or update mental models. We (Danckert et al. in Neglect as a disorder of representational updating. NOVA Open Access, New York, 2012a; Cereb Cortex 22:2745-2760, 2012b) were the first to directly assess the association between RBD and updating and found that RBD patients were unable to exploit a strongly biased play strategy in their opponent in the children's game rock, paper, scissors. Given that this game required many other cognitive capacities (i.e., working memory, sustained attention, reward processing), RBD patients could have failed this task for various reasons other than a failure to update. To assess the generality of updating deficits after RBD, we had RBD, left brain-damaged (LBD) patients and healthy controls (HCs) describe line drawings that evolved gradually from one figure (e.g., rabbit) to another (e.g., duck) in addition to the RPS updating task. RBD patients took significantly longer to alter their perceptual report from the initial object to the final object than did LBD patients and HCs. Although both patient groups performed poorly on the RPS task, only the RBD patients showed a significant correlation between the two, very different, updating tasks. We suggest these data indicate a general deficiency in the ability to update mental representations following RBD.
ENGINEERED BARRIER SYSTEM: PHYSICAL AND CHEMICAL ENVIRONMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. Jarek
2005-08-29
The purpose of this model report is to describe the evolution of the physical and chemical environmental conditions within the waste emplacement drifts of the repository, including the drip shield and waste package surfaces. The resulting seepage evaporation and gas abstraction models are used in the total system performance assessment for the license application (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. This report develops and documents a set of abstraction-level models that describe the engineered barrier system physical and chemical environment. Where possible, these models use information directly from other reports as input, which promotes integration among process models used for TSPA-LA. Specific tasks and activities of modeling the physical and chemical environment are included in ''Technical Work Plan for: Near-Field Environment and Transport In-Drift Geochemistry Model Report Integration'' (BSC 2005 [DIRS 173782], Section 1.2.2). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system reports. To be consistent with other project documents that address features, events, and processes (FEPs), Table 6.14-1 of the current report includes updates to FEP numbers and FEP subjects for two FEPs identified in the technical work plan (TWP) governing this report (BSC 2005 [DIRS 173782]). FEP 2.1.09.06.0A (Reduction-oxidation potential in EBS), as listed in Table 2 of the TWP (BSC 2005 [DIRS 173782]), has been updated in the current report to FEP 2.1.09.06.0B (Reduction-oxidation potential in Drifts; see Table 6.14-1). FEP 2.1.09.07.0A (Reaction kinetics in EBS), as listed in Table 2 of the TWP (BSC 2005 [DIRS 173782]), has been updated in the current report to FEP 2.1.09.07.0B (Reaction kinetics in Drifts; see Table 6.14-1). These deviations from the TWP are justified because they improve integration with FEPs documents. The updates have no impact on the model developed in this report.
NASA Astrophysics Data System (ADS)
Chen, G. W.; Omenzetter, P.
2016-04-01
This paper presents the implementation of an updating procedure for the finite element model (FEM) of a prestressed concrete continuous box-girder highway off-ramp bridge. Ambient vibration testing was conducted to excite the bridge, assisted by linear chirp sweeps induced by two small electrodynamic shakers deployed to enhance the excitation levels, since the bridge was closed to traffic. The data-driven stochastic subspace identification method was executed to recover the modal properties from the measurement data. An initial FEM was developed, and the correlation between the experimental modal results and their analytical counterparts was studied. Modelling of the pier and abutment bearings was carefully adjusted to reflect the real operational conditions of the bridge. The subproblem approximation method was subsequently utilized to automatically update the FEM. For this purpose, the influences of the bearing stiffness, and of the mass density and Young's modulus of the materials, were examined as uncertain parameters using sensitivity analysis. The updating objective function was defined as a sum of squared relative errors of natural frequencies between the FEM and experiment. All the identified modes were used as target responses with the purpose of putting more constraints on the optimization process and decreasing the number of potentially feasible combinations of parameter changes. The updated FEM of the bridge was able to produce sufficient improvements in natural frequencies for most modes of interest, and can serve for more precise dynamic response prediction or future investigation of the bridge's health.
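The objective just described reduces to a few lines of code. The sketch below (with a stand-in for the FE solve and placeholder frequencies, not the bridge model) minimizes the sum of squared relative natural-frequency errors over two generic parameter scales.

```python
# Minimize the summed squared relative frequency error over model parameters.
import numpy as np
from scipy.optimize import minimize

f_measured = np.array([2.1, 3.4, 5.8])        # identified frequencies (Hz)

def fe_frequencies(theta):
    """Stand-in for the FE solve: frequencies scale as sqrt(stiffness/mass)."""
    k_scale, m_scale = theta
    return np.array([2.0, 3.2, 6.1]) * np.sqrt(k_scale / m_scale)

def objective(theta):
    f_model = fe_frequencies(theta)
    return np.sum(((f_model - f_measured) / f_measured) ** 2)

res = minimize(objective, x0=[1.0, 1.0],
               bounds=[(0.5, 2.0), (0.5, 2.0)])   # keep parameters physical
print("updated parameters:", res.x.round(3), "objective:", round(res.fun, 5))
```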
Updating Sea Spray Aerosol Emissions in the Community Multiscale Air Quality Model
NASA Astrophysics Data System (ADS)
Gantt, B.; Bash, J. O.; Kelly, J.
2014-12-01
Sea spray aerosols (SSA) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. In this study, the Community Multiscale Air Quality (CMAQ) model is updated to enhance fine mode SSA emissions, include a sea surface temperature (SST) dependency, and revise surf zone emissions. Based on evaluation with several regional and national observational datasets in the continental U.S., the updated emissions generally improve surface concentration predictions of primary aerosols composed of sea salt and of secondary aerosols affected by sea-salt chemistry at coastal and near-coastal sites. Specifically, the updated emissions lead to better predictions of the magnitude and coastal-to-inland gradient of sodium, chloride, and nitrate concentrations at Bay Regional Atmospheric Chemistry Experiment (BRACE) sites near Tampa, FL. Including the SST dependency in the SSA emission parameterization leads to increased sodium concentrations in the southeast U.S. and decreased concentrations along the Pacific coast and northeastern U.S., bringing predictions into closer agreement with observations at most Interagency Monitoring of Protected Visual Environments (IMPROVE) and Chemical Speciation Network (CSN) sites. Model comparison with California Research at the Nexus of Air Quality and Climate Change (CalNex) observations will also be discussed, with particular focus on the South Coast Air Basin, where clean marine air mixes with anthropogenic pollution in a complex environment. These SSA emission updates enable more realistic simulation of chemical processes in coastal environments, both in clean marine air masses and in mixtures of clean marine and polluted conditions.
NASA Astrophysics Data System (ADS)
Wu, Jie; Yan, Quan-sheng; Li, Jian; Hu, Min-yi
2016-04-01
In bridge construction, geometry control is critical to ensure that the final constructed bridge has a shape consistent with the design. A common method is to predict the deflections of the bridge during each construction phase through the associated finite element models, so that the cambers of the bridge during different construction phases can be determined beforehand. These finite element models are mostly based on the design drawings and nominal material properties. However, the error of these bridge models can be large due to significant uncertainties in the actual properties of the materials used in construction. Therefore, the predicted cambers may not be accurate enough to ensure agreement of the bridge geometry with the design, especially for long-span bridges. In this paper, an improved geometry control method is described, which incorporates finite element (FE) model updating during the construction process based on measured bridge deflections. A method based on the Kriging model and Latin hypercube sampling is proposed to perform the FE model updating due to its simplicity and efficiency. The proposed method has been applied to a long-span continuous girder concrete bridge during its construction. Results show that the method is effective in reducing construction error and ensuring the accuracy of the geometry of the final constructed bridge.
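A hedged sketch of the surrogate loop described above follows (toy deflection function, invented parameter ranges; not the authors' bridge model): Latin hypercube samples feed the "FE model", a Gaussian process (Kriging) surrogate is fitted to the predicted deflections, and the surrogate is inverted against a measured deflection to estimate the parameters.

```python
# Kriging surrogate over Latin hypercube samples, inverted against a measurement.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def fe_deflection(theta):
    """Stand-in for the FE run: midspan deflection vs. E-scale and density."""
    e_scale, rho_scale = theta
    return 0.05 * rho_scale / e_scale          # metres (toy relationship)

sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(30), [0.8, 0.8], [1.2, 1.2])  # parameter ranges
y = np.array([fe_deflection(t) for t in X])

gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)  # Kriging surrogate

measured = 0.052                                           # field measurement
res = minimize(lambda t: (gp.predict(t[None, :])[0] - measured) ** 2,
               x0=np.array([1.0, 1.0]), bounds=[(0.8, 1.2)] * 2)
print("estimated parameter scales:", res.x.round(3))
```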
1988/1989 household travel survey
DOT National Transportation Integrated Search
1989-07-01
The primary objectives of this study were to provide the data: (1) : to update the trip generation rates used in the Maricopa Association of Governments (MAG) travel demand forecasting process, and; (2) to validate the MAG trip distribution model. Th...
Emsoft User's Guide and Modeling Software (2002 Update)
Chemicals that readily vaporize at relatively low temperatures can migrate from contaminated soils into the atmosphere via a process called volatilization. Volatilization represents a potentially significant exposure pathway because humans can come in contact with volatilized com...
DORIS Starec ground antenna characterization and impact on positioning
NASA Astrophysics Data System (ADS)
Tourain, C.; Moreaux, G.; Auriol, A.; Saunier, J.
2016-12-01
In a geodetic radio frequency observing system, the phase center offsets and phase center variations of ground antennae are a fundamental component of mathematical models of the system observables. In this paper we describe work aimed at improving the DORIS Starec ground antenna phase center definition model. Seven antennas were analyzed in the Compact Antenna Test Range (CATR), a dedicated CNES facility. With respect to the manufacturer-specified phase center offset, the measured antennae varied between -6 mm and +4 mm due to manufacturing variations. To solve this problem, discussions were held with the manufacturer, leading to an improvement of the manufacturing process. This work resulted in a reduction of the scatter to ±1 mm. The phase center position has been kept unchanged and the associated phase law has been updated and provided to users of the International DORIS Service (IDS). This phase law is applicable to all Starec antennas (before and after the manufacturing process consolidation) and is azimuth independent. An error budget taking into account these updated characteristics has been established for the antenna alone: ±2 mm in the horizontal plane and ±3 mm on the up component, the maximum error values for type C antennas (Saunier et al., 2016) produced with the consolidated manufacturing process. Finally, the impact of this updated characterization on positioning results has been analyzed and shows only a scale offset, of the order of +12 mm, for the Terrestrial Reference Frame.
Predictive codes of familiarity and context during the perceptual learning of facial identities
NASA Astrophysics Data System (ADS)
Apps, Matthew A. J.; Tsakiris, Manos
2013-11-01
Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
Real Time Updating Genetic Network Programming for Adapting to the Change of Stock Prices
NASA Astrophysics Data System (ADS)
Chen, Yan; Mabu, Shingo; Shimada, Kaoru; Hirasawa, Kotaro
The key to a stock trading model is to take the right actions for trading at the right time, primarily based on an accurate forecast of future stock trends. Since effective trading with the given stock price information needs an intelligent strategy for decision making, we applied Genetic Network Programming (GNP) to creating a stock trading model. In this paper, we propose a new method called Real Time Updating Genetic Network Programming (RTU-GNP) for adapting to changes in stock prices. There are three important points in this paper: First, the RTU-GNP method makes stock trading decisions considering both the recommendable information of technical indices and the candlestick charts according to real time stock prices. Second, we combine RTU-GNP with a Sarsa learning algorithm to create the programs efficiently. Also, sub-nodes are introduced in each judgment and processing node to determine appropriate actions (buying/selling) and to select appropriate stock price information depending on the situation. Third, a real time updating system is introduced for the first time in this paper, considering changes in the trend of stock prices. The experimental results on the Japanese stock market show that the trading model with the proposed RTU-GNP method outperforms other models without real time updating. We also compared the experimental results of the proposed method with the Buy&Hold method to confirm its effectiveness, and the results show that the proposed trading model can obtain much higher profits than the Buy&Hold method.
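For reference, the Sarsa update the method builds on is shown below in a generic tabular form (toy states, rewards, and actions; GNP's node structure is not reproduced here).

```python
# Tabular Sarsa: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a)).
import random

actions = ["buy", "sell", "hold"]
Q = {(s, a): 0.0 for s in range(3) for a in actions}   # 3 toy market states
alpha, gamma = 0.1, 0.95

def sarsa_step(s, a, r, s_next, a_next):
    """On-policy TD update using the action actually taken next."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s_next, a_next)] - Q[(s, a)])

random.seed(0)
s, a = 0, "hold"
for _ in range(1000):
    r = random.gauss(0.01 if a == "buy" else 0.0, 0.05)  # toy trading reward
    s_next = random.randrange(3)
    a_next = max(actions, key=lambda x: Q[(s_next, x)])  # greedy policy
    sarsa_step(s, a, r, s_next, a_next)
    s, a = s_next, a_next

print({a: round(Q[(0, a)], 3) for a in actions})
```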
Modeling stroke rehabilitation processes using the Unified Modeling Language (UML).
Ferrante, Simona; Bonacina, Stefano; Pinciroli, Francesco
2013-10-01
In organising and providing rehabilitation procedures for stroke patients, the usual need for many refinements makes rigid standardisation inappropriate, but greater detail is required concerning the workflow. The aim of this study was to build a model of the post-stroke rehabilitation process. The model, implemented in the Unified Modeling Language, was grounded on international guidelines and refined following the clinical pathway adopted at the local level by a specialized rehabilitation centre. The model describes the organisation of rehabilitation delivery and facilitates the monitoring of recovery during the process. Indeed, a software system was developed and tested to support clinicians in the digital administration of clinical scales. The model's flexibility assures easy updating as the process evolves.
Integrated Main Propulsion System Performance Reconstruction Process/Models
NASA Technical Reports Server (NTRS)
Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael
2013-01-01
The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for post-flight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion system performance, including propellant tank, feed system, rocket engine, and pressurization system performance throughout ascent, based on flight pressure and temperature data. The latest revision incorporates new methods, based on main engine power balance model updates, to model higher mixture ratio operation at lower engine power levels.
NASA Astrophysics Data System (ADS)
Callahan, P. S.; Wilson, B. D.; Xing, Z.; Raskin, R. G.
2010-12-01
We have developed a web-based system to allow updating and subsetting of TOPEX data. The Altimeter Service will be operated by PODAAC alongside its other oceanographic data offerings. The Service could easily be expanded to other mission data. An Altimeter Service is crucial to the improvement and expanded use of altimeter data. A service is necessary for altimetry because the result of most interest - sea surface height anomaly (SSHA) - is composed of several components that are updated individually and irregularly by specialized experts. This makes it difficult for projects to provide the most up-to-date products. Some components are the subject of ongoing research, so the ability for investigators to make products for comparison or sharing is important. The service will allow investigators/producers to get their component models or processing into widespread use much more quickly. For coastal altimetry, the ability to subset the data to the area of interest and insert specialized models (e.g., tides) or data processing results is crucial. A key part of the Altimeter Service is having data producers provide updated or local models and data. In order for this to succeed, producers need to register their products with the Altimeter Service and to provide the product in a form consistent with the service update methods. We will describe the capabilities of the web service and the methods for providing new components. Currently the Service provides TOPEX GDRs with Retracking (RGDRs) in a netCDF format that has been coordinated with Jason data. Users can add new orbits, tide models, gridded geophysical fields such as mean sea surface, and along-track corrections as they become available and are installed by PODAAC. The updated fields are inserted into the netCDF files while the previous values are retained for comparison. The Service will also generate SSH and SSHA. In addition, the Service showcases a feature that plots any variable from netCDF files. The research described here was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
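The insert-and-retain update pattern described above might look roughly like the following netCDF4 sketch; the granule and variable names are invented, and the new values would come from the contributed model.

```python
# Add an updated correction as a new variable, keeping the old one intact.
from netCDF4 import Dataset

# Hypothetical granule name; in the real service the files are installed
# by PODAAC.
with Dataset("topex_rgdr_cycle001.nc", "a") as nc:
    dims = nc.variables["tide_ocean"].dimensions   # assumed variable name
    if "tide_ocean_v2" not in nc.variables:
        v = nc.createVariable("tide_ocean_v2", "f8", dims)
        v.long_name = "updated ocean tide correction (contributed model)"
    # The original tide_ocean stays untouched for comparison; the contributed
    # model's values would be written here (old values copied as placeholder).
    nc.variables["tide_ocean_v2"][:] = nc.variables["tide_ocean"][:]
    # SSHA can then be recomputed, e.g. ssh - mean_sea_surface - tide_ocean_v2 - ...
```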
Family Background, Self-Confidence and Economic Outcomes
ERIC Educational Resources Information Center
Filippin, Antonio; Paccagnella, Marco
2012-01-01
In this paper we analyze the role played by self-confidence, modeled as beliefs about one's ability, in shaping task choices. We propose a model in which fully rational agents exploit all the available information to update their beliefs using Bayes' rule, eventually learning their true type. We show that when the learning process does not…
Walking through doorways causes forgetting: environmental integration.
Radvansky, Gabriel A; Tamplin, Andrea K; Krawietz, Sabine A
2010-12-01
Memory for objects declines when people move from one location to another (the location updating effect). However, it is unclear whether this is attributable to event model updating or to task demands. The focus here was on the degree of integration for probed-for information with the experienced environment. In prior research, the probes were verbal labels of visual objects. Experiment 1 assessed whether this was a consequence of an item-probe mismatch, as with transfer-appropriate processing. Visual probes were used to better coordinate what was seen with the nature of the memory probe. In Experiment 2, people received additional word pairs to remember, which were less well integrated with the environment, to assess whether the probed-for information needed to be well integrated. The results showed location updating effects in both cases. These data are consistent with an event cognition view that mental updating of a dynamic event disrupts memory.
Rydell, Robert J; Van Loo, Katie J; Boucher, Kathryn L
2014-03-01
Stereotype threat research shows that women's math performance can be reduced by activating gender-based math stereotypes. Models of stereotype threat assert that threat reduces cognitive functioning, thereby accounting for its negative effects. This work provides a more detailed understanding of the cognitive processes through which stereotype threat leads women to underperform at math and to take risks, by examining which basic executive functions (inhibition, shifting, and updating) account for these outcomes. In Experiments 1 and 2, women under threat showed reduced inhibition, reduced updating, and reduced math performance compared with women in a control condition (or men); however, only updating accounted for women's poor math performance under threat. In Experiment 3, only updating accounted for stereotype threat's effect on women's math performance, whereas only inhibition accounted for the effect of threat on risk-taking, suggesting that distinct executive functions can account for different stereotype threat-related outcomes.
Evidence for selective executive function deficits in ecstasy/polydrug users.
Fisk, J E; Montgomery, C
2009-01-01
Previous research has suggested that the separate aspects of executive functioning are differentially affected by ecstasy use. Although the inhibition process appears to be unaffected by ecstasy use, it is unclear whether this is true of heavy users under conditions of high demand. Tasks loading on the updating process have been shown to be adversely affected by ecstasy use. However, it remains unclear whether the deficits observed reflect the executive aspects of the tasks or whether they are domain general in nature affecting both verbal and visuo-spatial updating. Fourteen heavy ecstasy users (mean total lifetime use 1000 tablets), 39 light ecstasy users (mean total lifetime use 150 tablets) and 28 non-users were tested on tasks loading on the inhibition executive process (random letter generation) and the updating component process (letter updating, visuo-spatial updating and computation span). Heavy users were not impaired in random letter generation even under conditions designed to be more demanding. Ecstasy-related deficits were observed on all updating measures and were statistically significant for two of the three measures. Following controls for various aspects of cannabis use, statistically significant ecstasy-related deficits were obtained on all three updating measures. It was concluded that the inhibition process is unaffected by ecstasy use even among heavy users. By way of contrast, the updating process appears to be impaired in ecstasy users with the deficit apparently domain general in nature.
Dawn Orbit Determination Team: Trajectory Modeling and Reconstruction Processes at Vesta
NASA Technical Reports Server (NTRS)
Abrahamson, Matthew J.; Ardito, Alessandro; Han, Dongsuk; Haw, Robert; Kennedy, Brian; Mastrodemos, Nick; Nandi, Sumita; Park, Ryan; Rush, Brian; Vaughan, Andrew
2013-01-01
The Dawn spacecraft spent over a year in orbit around Vesta from July 2011 through August 2012. In order to maintain the designated science reference orbits and enable the transfers between those orbits, precise and timely orbit determination was required. Challenges included low-thrust ion propulsion modeling, estimation of relatively unknown Vesta gravity and rotation models, tracking data limitations, incorporation of real-time telemetry into dynamics model updates, and rapid maneuver design cycles during transfers. This paper discusses the dynamics models, filter configuration, and data processing implemented to deliver a rapid orbit determination capability to the Dawn project.
Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.
Vossel, Simone; Bauer, Markus; Mathys, Christoph; Adams, Rick A; Dolan, Raymond J; Stephan, Klaas E; Friston, Karl J
2014-11-19
The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision. Copyright © 2014 the authors 0270-6474/14/3415735-08$15.00/0.
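A minimal sketch of trial-wise belief updating about cue validity, assuming a beta-Bernoulli learner with exponential forgetting rather than the hierarchical Bayesian filter actually fitted to the saccadic data (all names and parameter values here are illustrative):

```python
import numpy as np

def update_validity_belief(trials, decay=0.9, a0=1.0, b0=1.0):
    """Trial-wise belief about cue validity (probability the cue is valid).

    trials: sequence of 1 (validly cued) / 0 (invalidly cued) outcomes.
    decay < 1 discounts old evidence so the belief can track the
    blockwise changes in % cue validity used in the task.
    Returns the expected validity held *before* each trial.
    """
    a, b = a0, b0                          # beta-distribution pseudo-counts
    expectations = []
    for valid in trials:
        expectations.append(a / (a + b))   # belief guiding this trial
        a = decay * a + valid              # posterior update ...
        b = decay * b + (1 - valid)        # ... with forgetting
    return np.array(expectations)

# Cue validity switches from 90% to 50% halfway through the session.
rng = np.random.default_rng(0)
trials = np.concatenate([rng.random(60) < 0.9, rng.random(60) < 0.5]).astype(int)
print(update_validity_belief(trials)[[0, 59, 119]])
```

In this toy scheme, a larger effective learning rate (a smaller decay) plays the role that the paper attributes to galantamine: a faster rate of belief updating about cue validity.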
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions
Bouwense, Matthew D.
2017-09-01
A visual tracking method based on deep learning without online model updating
NASA Astrophysics Data System (ADS)
Tang, Cong; Wang, Yicheng; Feng, Yunsong; Zheng, Chao; Jin, Wei
2018-02-01
The paper proposes a visual tracking method based on deep learning without online model updating. In consideration of the advantages of deep learning in feature representation, the deep detection model SSD (Single Shot Multibox Detector) is used as the object extractor in the tracking model. Simultaneously, the color histogram feature and the HOG (Histogram of Oriented Gradient) feature are combined to select the tracking object. During tracking, a multi-scale object searching map is built to improve the detection performance of the deep model and the tracking efficiency. In experiments on eight tracking video sequences from the baseline dataset, the proposed method is more robust than six state-of-the-art methods to challenging factors such as deformation, scale variation, rotation, illumination variation, and background clutter, and its overall performance is also better.
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate the existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time, and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
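As a sketch of the simplest of the three categories, simple coefficient updating, the following recalibrates an existing model's intercept and slope on its linear predictor using new data (function and variable names are illustrative; `penalty=None` requires scikit-learn >= 1.2):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def recalibrate(linear_predictor, y_new):
    """Simple coefficient updating: refit an intercept and a single slope
    on the old model's linear predictor (logistic recalibration).
    `linear_predictor`: logit-scale predictions of the existing model on
    the new population; `y_new`: observed binary outcomes."""
    lp = np.asarray(linear_predictor).reshape(-1, 1)
    refit = LogisticRegression(penalty=None)   # plain MLE refit
    refit.fit(lp, y_new)
    a = refit.intercept_[0]    # calibration-in-the-large correction
    b = refit.coef_[0, 0]      # calibration slope (1.0 means no change)
    return a, b

# Updated risk for a patient with old-model logit lp0:
# p = 1 / (1 + exp(-(a + b * lp0)))
```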
A Comparison of Updating Processes in Children Good or Poor in Arithmetic Word Problem-Solving
ERIC Educational Resources Information Center
Passolunghi, Maria Chiara; Pazzaglia, Francesca
2005-01-01
This study examines the updating ability of poor or good problem solvers. Seventy-eight fourth-graders, 43 good and 35 poor arithmetic word problem-solvers, performed the Updating Test used in Palladino et al. [Palladino, P., Cornoldi, C., De Beni, R., and Pazzaglia F. (2002). Working memory and updating processes in reading comprehension. Memory…
NASA Astrophysics Data System (ADS)
Griffin, J.; Clark, D.; Allen, T.; Ghasemi, H.; Leonard, M.
2017-12-01
Standard probabilistic seismic hazard assessment (PSHA) simulates earthquake occurrence as a time-independent process. However paleoseismic studies in slowly deforming regions such as Australia show compelling evidence that large earthquakes on individual faults cluster within active periods, followed by long periods of quiescence. Therefore the instrumental earthquake catalog, which forms the basis of PSHA earthquake recurrence calculations, may only capture the state of the system over the period of the catalog. Together this means that data informing our PSHA may not be truly time-independent. This poses challenges in developing PSHAs for typical design probabilities (such as 10% in 50 years probability of exceedance): Is the present state observed through the instrumental catalog useful for estimating the next 50 years of earthquake hazard? Can paleo-earthquake data, that shows variations in earthquake frequency over time-scales of 10,000s of years or more, be robustly included in such PSHA models? Can a single PSHA logic tree be useful over a range of different probabilities of exceedance? In developing an updated PSHA for Australia, decadal-scale data based on instrumental earthquake catalogs (i.e. alternative area based source models and smoothed seismicity models) is integrated with paleo-earthquake data through inclusion of a fault source model. Use of time-dependent non-homogeneous Poisson models allows earthquake clustering to be modeled on fault sources with sufficient paleo-earthquake data. This study assesses the performance of alternative models by extracting decade-long segments of the instrumental catalog, developing earthquake probability models based on the remaining catalog, and testing performance against the extracted component of the catalog. Although this provides insights into model performance over the short-term, for longer timescales it is recognised that model choice is subject to considerable epistemic uncertainty. Therefore a formal expert elicitation process has been used to assign weights to alternative models for the 2018 update to Australia's national PSHA.
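For the time-dependent case, the hazard arithmetic is standard: under a non-homogeneous Poisson model with intensity lambda(t), the probability of at least one event in a window is 1 - exp(-integral of lambda(t) dt). A minimal sketch with made-up numbers (not values from the Australian assessment):

```python
import numpy as np

def exceedance_probability(rate_fn, horizon=50.0, n=1001):
    """P(at least one hazard-level exceedance within `horizon` years)
    for a non-homogeneous Poisson process with intensity rate_fn(t)."""
    t = np.linspace(0.0, horizon, n)
    lam = rate_fn(t)
    integrated = np.sum(0.5 * (lam[1:] + lam[:-1]) * np.diff(t))  # trapezoid rule
    return 1.0 - np.exp(-integrated)

# Hypothetical clustered fault: rate decays since the last active period.
rate = lambda t: 0.002 * np.exp(-(t + 500.0) / 5000.0)
print(exceedance_probability(rate))  # roughly 9% in 50 yr for these toy numbers
```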
Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.
Fernández-Gracia, J; Eguíluz, V M; San Miguel, M
2011-07-01
We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and the mean time to reach the absorbing state may not be well defined. The IET distributions resulting from both update schemes show power-law tails.
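A minimal simulation sketch of the endogenous scheme on a fully connected network (simplified from the paper's setup: activation probability here is taken inversely proportional to the time since the agent's last change of state, and all sizes are toy values):

```python
import numpy as np

rng = np.random.default_rng(1)

def voter_endogenous(n=200, steps=100_000):
    """Voter model on a fully connected graph with an endogenous update
    rule: agent i is activated with probability proportional to
    1 / (time since i last changed state)."""
    state = rng.integers(0, 2, n)
    last_change = np.zeros(n)
    for t in range(1, steps + 1):
        weights = 1.0 / (t - last_change)     # persistence-dependent rates
        i = rng.choice(n, p=weights / weights.sum())
        j = rng.integers(n)                   # random partner (self-picks rare)
        if state[j] != state[i]:
            state[i] = state[j]               # imitate: an endogenous update
            last_change[i] = t                # clock resets only on change
        if state.sum() in (0, n):             # absorbing consensus state
            return t
    return None                               # no consensus within `steps`

print(voter_endogenous())
```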
Weinstein, Nathan; Ortiz-Gutiérrez, Elizabeth; Muñoz, Stalin; Rosenblueth, David A; Álvarez-Buylla, Elena R; Mendoza, Luis
2015-03-13
There are recent experimental reports on the cross-regulation between molecules involved in the control of the cell cycle and the differentiation of the vulval precursor cells (VPCs) of Caenorhabditis elegans. Such discoveries provide novel clues on how the molecular mechanisms involved in the cell cycle and cell differentiation processes are coordinated during vulval development. Dynamic computational models are helpful to understand the integrated regulatory mechanisms affecting these cellular processes. Here we propose a simplified model of the regulatory network that includes sufficient molecules involved in the control of both the cell cycle and cell differentiation in the C. elegans vulva to recover their dynamic behavior. We first infer both the topology and the update rules of the cell cycle module from an expected time series. Next, we use a symbolic algorithmic approach to find which interactions must be included in the regulatory network. Finally, we use a continuous-time version of the update rules for the cell cycle module to validate the cyclic behavior of the network, as well as to rule out the presence of potential artifacts due to the synchronous updating of the discrete model. We analyze the dynamical behavior of the model for the wild type and several mutants, finding that most of the results are consistent with published experimental results. Our model shows that the regulation of Notch signaling by the cell cycle preserves the potential of the VPCs and the three vulval fates to differentiate and de-differentiate, allowing them to remain completely responsive to the concentration of LIN-3 and lateral signal in the extracellular microenvironment.
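For readers unfamiliar with the formalism, a toy synchronous Boolean network update looks like the following; the three nodes and their rules are hypothetical stand-ins, not the actual C. elegans network:

```python
def sync_step(state, rules):
    """One synchronous update of a Boolean network: every node applies
    its rule to the *current* state simultaneously."""
    return {node: rule(state) for node, rule in rules.items()}

# Hypothetical 3-node toy network (purely illustrative cross-regulation):
rules = {
    "CycE":  lambda s: not s["p21"],   # cyclin active unless inhibited
    "p21":   lambda s: s["Notch"],     # inhibitor induced by Notch
    "Notch": lambda s: s["CycE"],      # toy feedback closing the loop
}
state = {"CycE": True, "p21": False, "Notch": False}
for _ in range(7):
    state = sync_step(state, rules)
    print(state)                       # the toy network settles into a cycle
```

Synchronous updating can produce such cyclic attractors as artifacts, which is why the paper cross-checks the cycle with a continuous-time version of the update rules.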
Through-process modelling of texture and anisotropy in AA5182
NASA Astrophysics Data System (ADS)
Crumbach, M.; Neumann, L.; Goerdeler, M.; Aretz, H.; Gottstein, G.; Kopp, R.
2006-07-01
A through-process texture and anisotropy prediction for AA5182 sheet production from hot rolling through cold rolling and annealing is reported. Thermo-mechanical process data predicted by the finite element method (FEM) package T-Pack based on the software LARSTRAN were fed into a combination of physics based microstructure models for deformation texture (GIA), work hardening (3IVM), nucleation texture (ReNuc), and recrystallization texture (StaRT). The final simulated sheet texture was fed into a FEM simulation of cup drawing employing a new concept of interactively updated texture based yield locus predictions. The modelling results of texture development and anisotropy were compared to experimental data. The applicability to other alloys and processes is discussed.
Stanley, W.D.; Blakely, R.J.
1995-01-01
The Geysers-Clear Lake geothermal area encompasses a large dry-steam production area in The Geysers field and a documented high-temperature, high-pressure, water-dominated system in the area largely south of Clear Lake, which has not been developed. An updated view is presented of the geological/geophysical complexities of the crust in this region in order to address key unanswered questions about the heat source and tectonics. Forward modeling, multidimensional inversions, and ideal body analysis of the gravity data, new electromagnetic sounding models, and arguments made from other geophysical data sets suggest that many of the geophysical anomalies have significant contributions from rock property and physical state variations in the upper 7 km and not from "magma' at greater depths. Regional tectonic and magmatic processes are analyzed to develop an updated scenario for pluton emplacement that differs substantially from earlier interpretations. In addition, a rationale is outlined for future exploration for geothermal resources in The Geysers-Clear Lake area. -from Authors
McKenna, Róisín; Rushe, T.; Woodcock, Kate A.
2017-01-01
The structure of executive function (EF) has been the focus of much debate for decades. What is more, the complexity and diversity provided by the developmental period only adds to this contention. The development of executive function plays an integral part in the expression of children's behavioral, cognitive, social, and emotional capabilities. Understanding how these processes are constructed during development allows for effective measurement of EF in this population. This meta-analysis aims to contribute to a better understanding of the structure of executive function in children. A coordinate-based meta-analysis was conducted (using BrainMap GingerALE 2.3), which incorporated studies administering functional magnetic resonance imaging (fMRI) during inhibition, switching, and working memory updating tasks in typical children (aged 6–18 years). The neural activation common across all executive tasks was compared to that shared by tasks pertaining only to inhibition, switching or updating, which are commonly considered to be fundamental executive processes. Results support the existence of partially separable but partially overlapping inhibition, switching, and updating executive processes at a neural level, in children over 6 years. Further, the shared neural activation across all tasks (associated with a proposed “unitary” component of executive function) overlapped to different degrees with the activation associated with each individual executive process. These findings provide evidence to support the suggestion that one of the most influential structural models of executive functioning in adults can also be applied to children of this age. However, the findings also call for careful consideration and measurement of both specific executive processes, and unitary executive function in this population. Furthermore, a need is highlighted for a new systematic developmental model, which captures the integrative nature of executive function in children. PMID:28439231
Vernooij, Robin W. M.; Alonso-Coello, Pablo; Brouwers, Melissa
2017-01-01
Background Scientific knowledge is in constant development. Consequently, regular review to assure the trustworthiness of clinical guidelines is required. However, there is still a lack of preferred reporting items of the updating process in updated clinical guidelines. The present article describes the development process of the Checklist for the Reporting of Updated Guidelines (CheckUp). Methods and Findings We developed an initial list of items based on an overview of research evidence on clinical guideline updating, the Appraisal of Guidelines for Research and Evaluation (AGREE) II Instrument, and the advice of the CheckUp panel (n = 33 professionals). A multistep process was used to refine this list, including an assessment of ten existing updated clinical guidelines, interviews with key informants (response rate: 54.2%; 13/24), a three-round Delphi consensus survey with the CheckUp panel (33 participants), and an external review with clinical guideline methodologists (response rate: 90%; 53/59) and users (response rate: 55.6%; 10/18). CheckUp includes 16 items that address (1) the presentation of an updated guideline, (2) editorial independence, and (3) the methodology of the updating process. In this article, we present the methodology to develop CheckUp and include as a supplementary file an explanation and elaboration document. Conclusions CheckUp can be used to evaluate the completeness of reporting in updated guidelines and as a tool to inform guideline developers about reporting requirements. Editors may request its completion from guideline authors when submitting updated guidelines for publication. Adherence to CheckUp will likely enhance the comprehensiveness and transparency of clinical guideline updating for the benefit of patients and the public, health care professionals, and other relevant stakeholders. PMID:28072838
Changes in Sea Levels around the British Isles Revisited (Invited)
NASA Astrophysics Data System (ADS)
Teferle, F. N.; Hansen, D. N.; Bingley, R. M.; Williams, S. D.; Woodworth, P. L.; Gehrels, W. R.; Bradley, S. L.; Stocchi, P.
2009-12-01
Recently, a number of new and/or updated sources for estimates of vertical land movements for the British Isles have become available, allowing the relative and average changes in sea levels for this region to be revisited. The geodetic data set stems from a combination of re-processed continuous Global Positioning System (GPS) measurements from stations in the British Isles and from a global reference frame network, and absolute gravity (AG) measurements from two stations in the British Isles. The geologic data set of late Holocene sea level indicators has recently been updated, now applying corrections for the 20th century sea level rise, syphoning effect and late Holocene global ice melt, and expanded to Northern Ireland and Ireland. Several new model predictions of the glacial isostatic adjustment (GIA) process active in this region form the modelling data set of vertical land movements for the British Isles. Correcting the updated revised local reference (RLR) trends from the Permanent Service for Mean Sea Level (PSMSL) with these vertical land movement data sets, regional and averaged changes in sea levels around the British Isles have been investigated. Special focus is also given to coastal areas recently identified within the UK Climate Projections 2009.
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem, and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
Input-output model for MACCS nuclear accident impacts estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N
Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
NASA Technical Reports Server (NTRS)
Nobbs, Steven G.
1995-01-01
An overview of the performance seeking control (PSC) algorithm and details of the important components of the algorithm are given. The onboard propulsion system models, the linear programming optimization, and engine control interface are described. The PSC algorithm receives input from various computers on the aircraft including the digital flight computer, digital engine control, and electronic inlet control. The PSC algorithm contains compact models of the propulsion system including the inlet, engine, and nozzle. The models compute propulsion system parameters, such as inlet drag and fan stall margin, which are not directly measurable in flight. The compact models also compute sensitivities of the propulsion system parameters to change in control variables. The engine model consists of a linear steady state variable model (SSVM) and a nonlinear model. The SSVM is updated with efficiency factors calculated in the engine model update logic, or Kalman filter. The efficiency factors are used to adjust the SSVM to match the actual engine. The propulsion system models are mathematically integrated to form an overall propulsion system model. The propulsion system model is then optimized using a linear programming optimization scheme. The goal of the optimization is determined from the selected PSC mode of operation. The resulting trims are used to compute a new operating point about which the optimization process is repeated. This process is continued until an overall (global) optimum is reached before applying the trims to the controllers.
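The abstract does not give the optimization formulation, but a generic linear-programming trim search of the kind described can be sketched as follows (all sensitivities and limits are hypothetical placeholders, not PSC values):

```python
from scipy.optimize import linprog

# Maximize the predicted thrust change c . dx subject to linearized
# constraints from the compact propulsion model (all numbers are
# hypothetical). linprog minimizes, so the objective is negated.
c = [-120.0, -45.0]            # thrust sensitivity to two control trims
A_ub = [[0.8, 0.3],            # stall-margin usage per unit trim ...
        [-0.8, -0.3]]          # ... bounded in both directions
b_ub = [2.0, 2.0]              # stall-margin budget
bounds = [(-1.0, 1.0)] * 2     # actuator trim limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)   # trim commands; apply, re-linearize, and repeat to a global optimum
```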
43 CFR 2884.12 - What is the processing fee for a grant or TUP application?
Code of Federal Regulations, 2011 CFR
2011-10-01
... changes in the IPD-GDP. See paragraph (c) of this section for update information (1) Applications for new... part). (c) BLM will revise paragraph (b) of this section to update the processing fees for Categories 1... update Category 5 processing fees as specified in the Master Agreement. You also may obtain a copy of the...
43 CFR 2884.12 - What is the processing fee for a grant or TUP application?
Code of Federal Regulations, 2014 CFR
2014-10-01
... changes in the IPD-GDP. See paragraph (c) of this section for update information (1) Applications for new... part). (c) BLM will revise paragraph (b) of this section to update the processing fees for Categories 1... update Category 5 processing fees as specified in the Master Agreement. You also may obtain a copy of the...
43 CFR 2884.12 - What is the processing fee for a grant or TUP application?
Code of Federal Regulations, 2012 CFR
2012-10-01
... changes in the IPD-GDP. See paragraph (c) of this section for update information (1) Applications for new... part). (c) BLM will revise paragraph (b) of this section to update the processing fees for Categories 1... update Category 5 processing fees as specified in the Master Agreement. You also may obtain a copy of the...
43 CFR 2884.12 - What is the processing fee for a grant or TUP application?
Code of Federal Regulations, 2013 CFR
2013-10-01
... changes in the IPD-GDP. See paragraph (c) of this section for update information (1) Applications for new... part). (c) BLM will revise paragraph (b) of this section to update the processing fees for Categories 1... update Category 5 processing fees as specified in the Master Agreement. You also may obtain a copy of the...
Stratospheric processes: Observations and interpretation
NASA Technical Reports Server (NTRS)
Brune, William H.; Cox, R. Anthony; Turco, Richard; Brasseur, Guy P.; Matthews, W. Andrew; Zhou, Xiuji; Douglass, Anne; Zander, Rudi J.; Prendez, Margarita; Rodriguez, Jose M.
1991-01-01
Explaining the observed ozone trends discussed in an earlier update and predicting future trends requires an understanding of the stratospheric processes that affect ozone. Stratospheric processes occur on both large and small spatial scales and over both long and short periods of time. Because these diverse processes interact with each other, only in rare cases can individual processes be studied by direct observation. Generally the cause and effect relationships for ozone changes were established by comparisons between observations and model simulations. Increasingly, these comparisons rely on the developing, observed relationships among trace gases and dynamical quantities to initialize and constrain the simulations. The goal of this discussion of stratospheric processes is to describe the causes for the observed ozone trends as they are currently understood. At present, we understand with considerable confidence the stratospheric processes responsible for the Antarctic ozone hole but are only beginning to understand the causes of the ozone trends at middle latitudes. Even though the causes of the ozone trends at middle latitudes were not clearly determined, it is likely that they, just as those over Antarctica, involved chlorine and bromine chemistry that was enhanced by heterogeneous processes. This discussion generally presents only an update of the observations that have occurred for stratospheric processes since the last assessment (World Meteorological Organization (WMO), 1990), and is not a complete review of all the new information about stratospheric processes. It begins with an update of the previous assessment of polar stratospheres (WMO, 1990), followed by a discussion on the possible causes for the ozone trends at middle latitudes and on the effects of bromine and of volcanoes.
Application and Evaluation of MODIS LAI, FPAR, and Albedo ...
MODIS vegetation and albedo products provide a more realistic representation of surface conditions for input to the WRF/CMAQ modeling system. However, the initial evaluation of ingesting MODIS data into the system showed mixed results, with increased bias and error for 2-m temperature and reduced bias and error for 2-m mixing ratio. Recently, the WRF/CMAQ land surface and boundary layer processes have been updated. In this study, MODIS vegetation and albedo data are input to the updated WRF/CMAQ meteorology and air quality simulations for 2006 over a North American (NA) 12-km domain. The evaluation of the simulation results shows that the updated WRF/CMAQ system improves 2-m temperature estimates over the pre-update base modeling system estimates. The MODIS vegetation input produces a realistic spring green-up that progresses through time from south to north. Overall, MODIS input reduces 2-m mixing ratio bias during the growing season. The NA west shows larger positive O3 bias during the growing season because of reduced gas-phase deposition resulting from lower O3 deposition velocities driven by reduced vegetation cover. The O3 bias increase associated with the realistic vegetation representation indicates that further improvement may be needed in the WRF/CMAQ system. The National Exposure Research Laboratory’s Atmospheric Modeling Division (AMAD) conducts research in support of EPA’s mission to protect human health and the environment.
Theory of quantized systems: formal basis for DEVS/HLA distributed simulation environment
NASA Astrophysics Data System (ADS)
Zeigler, Bernard P.; Lee, J. S.
1998-08-01
In the context of a DARPA ASTT project, we are developing an HLA-compliant distributed simulation environment based on the DEVS formalism. This environment will provide a user- friendly, high-level tool-set for developing interoperable discrete and continuous simulation models. One application is the study of contract-based predictive filtering. This paper presents a new approach to predictive filtering based on a process called 'quantization' to reduce state update transmission. Quantization, which generates state updates only at quantum level crossings, abstracts a sender model into a DEVS representation. This affords an alternative, efficient approach to embedding continuous models within distributed discrete event simulations. Applications of quantization to message traffic reduction are discussed. The theory has been validated by DEVSJAVA simulations of test cases. It will be subject to further test in actual distributed simulations using the DEVS/HLA modeling and simulation environment.
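A minimal sketch of the quantization idea: transmit a state update only when the continuous trajectory crosses a quantum boundary, rather than at every integration step (the quantum size and signal here are arbitrary):

```python
import math

def quantized_updates(times, values, quantum=0.5):
    """Emit (time, value) updates only when the trajectory crosses a
    quantum boundary, instead of at every integration step."""
    updates, last_level = [], None
    for t, v in zip(times, values):
        level = round(v / quantum)            # nearest quantum level
        if level != last_level:
            updates.append((t, level * quantum))
            last_level = level
    return updates

ts = [i * 0.01 for i in range(1000)]
xs = [math.sin(t) for t in ts]                # stand-in continuous state
print(len(ts), "steps ->", len(quantized_updates(ts, xs)), "updates sent")
```

The reduction from thousands of integration steps to a handful of level crossings is what cuts state-update message traffic in the distributed simulation.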
Escudero, Miguel; Hooper, Dan; Witte, Samuel J.
2017-02-20
Utilizing an exhaustive set of simplified models, we revisit dark matter scenarios potentially capable of generating the observed Galactic Center gamma-ray excess, updating constraints from the LUX and PandaX-II experiments, as well as from the LHC and other colliders. We identify a variety of pseudoscalar mediated models that remain consistent with all constraints. In contrast, dark matter candidates which annihilate through a spin-1 mediator are ruled out by direct detection constraints unless the mass of the mediator is near an annihilation resonance, or the mediator has a purely vector coupling to the dark matter and a purely axial coupling to Standard Model fermions. Furthermore, all scenarios in which the dark matter annihilates through t-channel processes are now ruled out by a combination of the constraints from LUX/PandaX-II and the LHC.
Voter dynamics on an adaptive network with finite average connectivity
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Abhishek; Schmittmann, Beate
2009-03-01
We study a simple model for voter dynamics in a two-party system. The opinion formation process is implemented in a random network of agents in which interactions are not restricted by geographical distance. In addition, we incorporate the rapidly changing nature of interpersonal relations in the model. At each time step, agents can update their relationships, so that there is no history dependence in the model. This update is determined by their own opinion, and by their preference to make connections with individuals sharing the same opinion and with opponents. Using simulations and analytic arguments, we determine the final steady states and the relaxation into these states for different system sizes. In contrast to earlier studies, the average connectivity ("degree") of each agent is constant here, independent of the system size. This has significant consequences for the long-time behavior of the model.
Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P
2018-01-01
Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risks estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
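A sketch of the daily-to-annual risk arithmetic underlying such benchmarks, assuming an exponential dose-response model and treating daily exposures as independent (all parameter values are illustrative, not the paper's fitted inputs):

```python
import numpy as np

def annual_infection_risk(raw_dose, r=0.5, log10_reduction=14.0):
    """Exponential dose-response P(inf) = 1 - exp(-r * dose), with the
    dose attenuated by the treatment train's cumulative log reduction.
    All parameter values here are illustrative, not the paper's."""
    dose = raw_dose * 10.0 ** (-log10_reduction)
    p_daily = 1.0 - np.exp(-r * dose)
    return 1.0 - (1.0 - p_daily) ** 365      # independent daily exposures

# Hypothetical pre-treatment ingested dose of 1e8 organisms/day:
print(annual_infection_risk(1e8))            # ~2e-4 for these toy inputs
```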
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether model uncertainties quantified using dynamic measurements at the building's reference (calibration) state can be used to improve the model prediction accuracies at a different structural state, e.g., a damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values are studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process instead of the commonly used zero-mean error functions can significantly reduce the prediction uncertainties.
NASA Astrophysics Data System (ADS)
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode for enhancing model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate over the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness. The updated dynamic FE model of the beam structure, which can correctly predict both the natural frequencies and the local dynamic strains, is then reliable for subsequent dynamic analysis and dynamic strength evaluation.
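A sketch of the kind of objective described, combining normalized frequency residuals with modal-assurance-criterion residuals; the standard (vector-level) MAC is shown rather than the paper's coordinate-wise variant, and the weights and residual forms are assumptions rather than the paper's exact definitions:

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal assurance criterion between an analytical and an
    experimental (strain) mode shape vector; 1.0 = perfect correlation."""
    return np.abs(phi_a @ phi_e) ** 2 / ((phi_a @ phi_a) * (phi_e @ phi_e))

def updating_objective(f_a, f_e, modes_a, modes_e, w_f=1.0, w_m=1.0):
    """Weighted sum of relative natural-frequency residuals and
    (1 - MAC) strain-mode residuals, to be minimized over the uncertain
    FE parameters (here by a genetic/pattern-search optimizer)."""
    f_a, f_e = np.asarray(f_a), np.asarray(f_e)
    f_res = np.sum(((f_a - f_e) / f_e) ** 2)
    m_res = sum(1.0 - mac(a, e) for a, e in zip(modes_a, modes_e))
    return w_f * f_res + w_m * m_res
```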
Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2006-01-01
The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainties quantification are presented.
Advanced Technology System Scheduling Governance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ang, Jim; Carnes, Brian; Hoang, Thuc
In the fall of 2005, the Advanced Simulation and Computing (ASC) Program appointed a team to formulate a governance model for allocating resources and scheduling the stockpile stewardship workload on ASC capability systems. This update to the original document takes into account the new technical challenges and roles for advanced technology (AT) systems and the new ASC Program workload categories that must be supported. The goal of this updated model is to effectively allocate and schedule AT computing resources among all three National Nuclear Security Administration (NNSA) laboratories for weapons deliverables that merit priority on this class of resource. The process outlined below describes how proposed work can be evaluated and approved for resource allocations while preserving high effective utilization of the systems. This approach will provide the broadest possible benefit to the Stockpile Stewardship Program (SSP).
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed from the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
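The redesign step can be sketched as a discrete-time Riccati iteration on the identified model (a generic LQR gain computation, not the paper's specific implementation; the matrices below are illustrative):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """State-feedback gain for the redesign step: iterate the
    discrete-time Riccati equation to (approximate) convergence."""
    P = Q.copy()
    for _ in range(iters):
        BT_P = B.T @ P
        K = np.linalg.solve(R + BT_P @ B, BT_P @ A)   # gain at this sweep
        P = Q + A.T @ P @ (A - B @ K)                 # Riccati recursion
    return K

# Identified open-loop model from the previous cycle (toy double integrator):
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = dlqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))
print(K)   # u = -K x, combined with the identified Kalman gain to form the LQG
```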
Guidance for updating clinical practice guidelines: a systematic review of methodological handbooks.
Vernooij, Robin W M; Sanabria, Andrea Juliana; Solà, Ivan; Alonso-Coello, Pablo; Martínez García, Laura
2014-01-02
Updating clinical practice guidelines (CPGs) is a crucial process for maintaining the validity of recommendations. Methodological handbooks should provide guidance on both developing and updating CPGs. However, little is known about the updating guidance provided by these handbooks. We conducted a systematic review to identify and describe the updating guidance provided by CPG methodological handbooks and included handbooks that provide updating guidance for CPGs. We searched in the Guidelines International Network library, US National Guidelines Clearinghouse and MEDLINE (PubMed) from 1966 to September 2013. Two authors independently selected the handbooks and extracted the data. We used descriptive statistics to analyze the extracted data and conducted a narrative synthesis. We included 35 handbooks. Most handbooks (97.1%) focus mainly on developing CPGs, including variable degrees of information about updating. Guidance on identifying new evidence and the methodology of assessing the need for an update is described in 11 (31.4%) and eight handbooks (22.8%), respectively. The period of time between two updates is described in 25 handbooks (71.4%), two to three years being the most frequent (40.0%). The majority of handbooks do not provide guidance for the literature search, evidence selection, assessment, synthesis, and external review of the updating process. Guidance for updating CPGs is poorly described in methodological handbooks. This guidance should be more rigorous and explicit. This could lead to a more optimal updating process, and, ultimately to valid trustworthy guidelines.
Some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models.
NASA Astrophysics Data System (ADS)
Knudsen, Thomas; Aasbjerg Nielsen, Allan
2013-04-01
The Danish national elevation model, DK-DEM, was introduced in 2009 and is based on LiDAR data collected in the time frame 2005-2007. Hence, DK-DEM is aging, and it is time to consider how to integrate new data with the current model in a way that improves the representation of new landscape features, while still preserving the overall (very high) quality of the model. In LiDAR terms, 2005 is equivalent to some time between the palaeolithic and the neolithic. So evidently, when (and if) an update project is launched, we may expect some notable improvements due to the technical and scientific developments from the last half decade. To estimate the magnitude of these potential improvements, and to devise efficient and effective ways of integrating the new and old data, we currently carry out a number of case studies based on comparisons between the current terrain model (with a ground sample distance, GSD, of 1.6 m), and a number of new high resolution point clouds (10-70 points/m2). Not knowing anything about the terms of a potential update project, we consider multiple scenarios ranging from business as usual: A new model with the same GSD, but improved precision, to aggressive upscaling: A new model with 4 times better GSD, i.e. a 16-fold increase in the amount of data. Especially in the latter case speeding up the gridding process is important. Luckily recent results from one of our case studies reveal that for very high resolution data in smooth terrain (which is the common case in Denmark), using local mean (LM) as grid value estimator is only negligibly worse than using the theoretically "best" estimator, i.e. ordinary kriging (OK) with rigorous modelling of the semivariogram. The bias in a leave one out cross validation differs on the micrometer level, while the RMSE differs on the 0.1 mm level. This is fortunate, since a LM estimator can be implemented in plain stream mode, letting the points from the unstructured point cloud (i.e. no TIN generation) stream through the processor, individually contributing to the nearest grid posts in a memory mapped grid file. Algorithmically this is very efficient, but it would be even more efficient if we did not have to handle so much data. Another of our recent case studies focuses on this. The basic idea is to ignore data that does not tell us anything new. We do this by looking at anomalies between the current height model and the new point cloud, then computing a correction grid for the current model. Points with insignificant anomalies are simply removed from the point cloud, and the correction grid is computed using the remaining point anomalies only. Hence, we only compute updates in areas of significant change, speeding up the process, and giving us new insight of the precision of the current model which in turn results in improved metadata for both the current and the new model. Currently we focus on simple approaches for creating a smooth update process for integration of heterogeneous data sets. On the other hand, as years go by and multiple generations of data become available, more advanced approaches will probably become necessary (e.g. a multi campaign bundle adjustment, improving the oldest data using cross-over adjustment with newer campaigns). But to prepare for such approaches, it is important already now to organize and evaluate the ancillary (GPS, INS) and engineering level data for the current data sets. 
This is essential if future generations of DEM users should be able to benefit from future conceptions of "some safe and sensible shortcuts for efficiently upscaled updates of existing elevation models".
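A minimal sketch of the plain stream-mode local-mean gridding described above, simplified to contribute each point to its single nearest grid post in an in-memory array rather than a memory-mapped file (names and tile sizes are hypothetical):

```python
import numpy as np

def stream_local_mean(points, x0, y0, gsd, nx, ny):
    """Stream-mode local-mean gridding: each point contributes to its
    nearest grid post; no TIN is built and points are never stored.
    `points` is any iterable of (x, y, z) tuples."""
    zsum = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    for x, y, z in points:
        col = int(round((x - x0) / gsd))
        row = int(round((y - y0) / gsd))
        if 0 <= col < nx and 0 <= row < ny:
            zsum[row, col] += z
            count[row, col] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        return zsum / count          # NaN where no points fell

# Grid posts every 0.4 m over a hypothetical 100 m x 100 m tile:
# dem = stream_local_mean(lidar_stream, 0.0, 0.0, 0.4, 250, 250)
```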
Learning during processing: Word learning doesn’t wait for word recognition to finish
Apfelbaum, Keith S.; McMurray, Bob
2017-01-01
Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082
NASA Astrophysics Data System (ADS)
Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin
2016-11-01
This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to the strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by Earth Mover's Distance (EMD), than spatially heterogeneous error modeling by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produce smaller mean absolute difference as well as higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square error of streamflow analysis and prediction. Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
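A sketch of a weakly-constrained variational cost of the kind described, with an additive runoff-component error q penalized alongside the usual background and observation terms (forms, weights, and names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def wc_cost(x0, q, background, obs, model_run, sigma_b, sigma_o, sigma_q):
    """Weakly-constrained variational cost: background term on the control
    variables, observation misfit on simulated outlet flow, plus a penalty
    on the additive runoff-component error q (the model-inadequacy term).
    `model_run(x0, q)` is assumed to return simulated discharge."""
    sim = model_run(x0, q)
    jb = np.sum(((x0 - background) / sigma_b) ** 2)   # stay near a priori states
    jo = np.sum(((sim - obs) / sigma_o) ** 2)         # fit observed discharge
    jq = np.sum((q / sigma_q) ** 2)                   # small structural correction
    return 0.5 * (jb + jo + jq)

# Minimize over (x0, q), e.g. with scipy.optimize.minimize; the strongly
# constrained variant fixes q = 0 and adjusts only the control variables x0.
```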
Options as information: rational reversals of evaluation and preference.
Sher, Shlomi; McKenzie, Craig R M
2014-06-01
This article develops a rational analysis of an important class of apparent preference reversals-joint-separate reversals traditionally explained by the evaluability hypothesis. The "options-as-information" model considers a hypothetical rational actor with limited knowledge about the market distribution of a stimulus attribute. The actor's evaluations are formed via a 2-stage process-an inferential stage in which beliefs are updated on the basis of the sample of options received, followed by an assessment stage in which options are evaluated in light of these updated beliefs. This process generates joint-separate reversals in standard experimental designs. The normative model explains why the evaluability hypothesis works when it does, identifies boundary conditions for the hypothesis, and clarifies some common misconceptions about these effects. In particular, it implies that joint-separate reversals are not irrational; in fact, they are not preference reversals. However, in expanded designs where more than 2 options are jointly evaluated, the model predicts that genuine (and rational) preference reversals will sometimes emerge. Results of 3 experiments suggest an excellent fit between the rational actor model and the judgments of human actors in joint-separate experiments. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Orbital frontal cortex updates state-induced value change for decision-making.
Baltz, Emily T; Yalcinbas, Ege A; Renteria, Rafael; Gremel, Christina M
2018-06-13
Recent hypotheses have posited that orbital frontal cortex (OFC) is important for using inferred consequences to guide behavior. Less clear is OFC's contribution to goal-directed or model-based behavior, where the decision to act is controlled by previous experience with the consequence or outcome. Investigating OFC's role in learning about changed outcomes separate from decision-making is not trivial and often the two are confounded. Here we adapted an incentive learning task to mice, where we investigated processes controlling experience-based outcome updating independent from inferred action control. We found chemogenetic OFC attenuation did not alter the ability to perceive motivational state-induced changes in outcome value but did prevent the experience-based updating of this change. Optogenetic inhibition of OFC excitatory neuron activity selectively when experiencing an outcome change disrupted the ability to update, leaving mice unable to infer the appropriate behavior. Our findings support a role for OFC in learning that controls decision-making. © 2018, Baltz et al.
Neural and computational processes underlying dynamic changes in self-esteem.
Will, Geert-Jan; Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J
2017-10-24
Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an 'interpersonal vulnerability' dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability.
Volunteer Development. Practice Application Brief.
ERIC Educational Resources Information Center
Kerka, Sandra
Certain practices in volunteer development have proved successful to help organizations make the best use of their volunteers. Development should be a comprehensive, continuous process through which individuals can extend, update, and adapt their knowledge, skills, and abilities to enhance their performance and potential. A model for volunteer…
The development of executive functions and early mathematics: a dynamic relationship.
Van der Ven, Sanne H G; Kroesbergen, Evelyn H; Boom, Jan; Leseman, Paul P M
2012-03-01
The relationship between executive functions and mathematical skills has been studied extensively, but results are inconclusive, and how this relationship evolves longitudinally is largely unknown. The aim was to investigate the factor structure of executive functions in inhibition, shifting, and updating; the longitudinal development of executive functions and mathematics; and the relation between them. Participants were 211 children (7-8 years old) from 10 schools in the Netherlands, followed through grades 1 and 2 of primary education. Executive functions and mathematics were measured four times. The test battery contained multiple tasks for each executive function: Animal Stroop, local global, and Simon task for inhibition; Animal Shifting, Trail Making Test in Colours, and Sorting Task for shifting; and Digit Span Backwards, Odd One Out, and Keep Track for updating. The factor structure of executive functions was assessed and relations with mathematics were investigated using growth modelling. Confirmatory factor analysis (CFA) showed that inhibition and shifting could not be distinguished from each other. Updating was a separate factor, and its development was strongly related to mathematical development, while inhibition and shifting did not predict mathematics in the presence of the updating factor. The strong relationship between updating and mathematics suggests that updating skills play a key role in the maths learning process. This makes updating a promising target for future intervention studies. ©2011 The British Psychological Society.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical to the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. Different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
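A minimal sketch of the third strategy, treating the prediction-error variance as an uncertain parameter, is given below using a plain Metropolis sampler in place of Transitional MCMC; the one-parameter model, data, and priors are hypothetical stand-ins for the shear building example.

```python
# Metropolis sketch: jointly sample a model parameter theta and the
# prediction-error variance (via log sigma^2). Toy linear model; all
# priors and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma_true = 2.0, 0.2
grid = np.linspace(1, 6, 6)
data = theta_true * grid + sigma_true * rng.standard_normal(6)
predict = lambda theta: theta * grid

def log_post(theta, log_s2):
    r = data - predict(theta)
    n = len(r)
    loglike = -0.5 * n * (np.log(2 * np.pi) + log_s2) \
              - 0.5 * np.sum(r**2) / np.exp(log_s2)
    logprior = -0.5 * (theta / 10.0) ** 2 - 0.5 * (log_s2 / 10.0) ** 2
    return loglike + logprior

x = np.array([1.0, 0.0])                 # (theta, log sigma^2)
samples = []
for _ in range(20000):
    prop = x + 0.1 * rng.standard_normal(2)
    if np.log(rng.random()) < log_post(*prop) - log_post(*x):
        x = prop
    samples.append(x.copy())
samples = np.array(samples[5000:])       # discard burn-in
print("posterior mean theta:", samples[:, 0].mean(),
      "posterior mean sigma:", np.exp(samples[:, 1] / 2).mean())
```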
10 CFR 72.70 - Safety analysis report updating.
Code of Federal Regulations, 2011 CFR
2011-01-01
... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...
10 CFR 72.70 - Safety analysis report updating.
Code of Federal Regulations, 2014 CFR
2014-01-01
... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...
10 CFR 72.70 - Safety analysis report updating.
Code of Federal Regulations, 2013 CFR
2013-01-01
... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...
10 CFR 72.70 - Safety analysis report updating.
Code of Federal Regulations, 2012 CFR
2012-01-01
... original FSAR or, as appropriate, the last update to the FSAR under this section. The update shall include... for an ISFSI or MRS shall update periodically, as provided in paragraphs (b) and (c) of this section... applicant commitments developed during the license approval and/or hearing process. (b) Each update shall...
The effect of ageing on recollection: the role of the binding updating process.
Boujut, Arnaud; Clarys, David
2016-10-01
The aim of this study was to highlight the underlying process responsible for the age-related deficit in recollection. Through two experiments using the Remember-Know-Guess procedure (Gardiner, J. M., & Richardson-Klavehn, A. [2000]. Remembering and knowing. In The Oxford handbook of memory (pp. 229-244). New York, NY: Oxford University Press) in recognition, we manipulated the opportunity to update bindings between target items and their encoding context, in young and older adults. In the first experiment we impaired the binding updating process during the encoding of items, while in the second we supported this process. The results indicated that the "Remember" responses in the younger group were specifically reduced by the impairment of the binding updating process (Exp. 1), suggesting that this ability is useful for them to encode a specific episode. Conversely, only the "Remember" responses in the older group were improved in accuracy by supporting the binding updating process (Exp. 2), suggesting that their weakness in this ability is the source of their failure to improve the accuracy of their memories. The overall results support the hypothesis that the age-related decline in episodic memory is partly due to a greater vulnerability to interference on bindings, impairing the ability to update content-context bindings as and when events occur.
Sentry: An Automated Close Approach Monitoring System for Near-Earth Objects
NASA Astrophysics Data System (ADS)
Chamberlin, A. B.; Chesley, S. R.; Chodas, P. W.; Giorgini, J. D.; Keesey, M. S.; Wimberly, R. N.; Yeomans, D. K.
2001-11-01
In response to international concern about potential asteroid impacts on Earth, NASA's Near-Earth Object (NEO) Program Office has implemented a new system called ``Sentry'' to automatically update the orbits of all NEOs on a daily basis and compute Earth close approaches up to 100 years into the future. Results are published on our web site (http://neo.jpl.nasa.gov/) and updated orbits and ephemerides made available via the JPL Horizons ephemeris service (http://ssd.jpl.nasa.gov/horizons.html). Sentry collects new and revised astrometric observations from the Minor Planet Center (MPC) via their electronic circulars (MPECs) in near real time as well as radar and optical astrometry sent directly from observers. NEO discoveries and identifications are detected in MPECs and processed appropriately. In addition to these daily updates, Sentry synchronizes with each monthly batch of MPC astrometry and automatically updates all NEO observation files. Daily and monthly processing of NEO astrometry is managed using a queuing system which allows for manual intervention of selected NEOs without interfering with the automatic system. At the heart of Sentry is a fully automatic orbit determination program which handles outlier rejection and ensures convergence in the new solution. Updated orbital elements and their covariances are published via Horizons and our NEO web site, typically within 24 hours. A new version of Horizons, in development, will allow computation of ephemeris uncertainties using covariance data. The positions of NEOs with updated orbits are numerically integrated up to 100 years into the future and each close approach to any perturbing body in our dynamic model (all planets, Moon, Ceres, Pallas, Vesta) is recorded. Significant approaches are flagged for extended analysis including Monte Carlo studies. Results, such as minimum encounter distances and future Earth impact probabilities, are published on our NEO web site.
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one- to six-day lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF type, using recent rainfall and observed discharge data; (vi) the Parametric Linear Perturbation Model (PLPM), also of LTF type, using recent rainfall and observed discharge data; (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naïve form of the NARXM, using only the observed discharge data and excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All eight forms of updating models were found capable of producing relatively good lead-1 (1-day-ahead) forecasts, with R2 values of almost 90% or above. However, for longer lead times, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 R2 values of about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
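The simplest of these schemes, an AR model applied to the simulation residuals, can be sketched as follows; the synthetic flows and AR(1) fit are illustrative only, not the GFMFS implementation. The forecast correction decays as phi^lead, which is one reason pure error-updating loses skill at longer lead times.

```python
# AR(1) error-updating sketch: correct a biased simulation's lead-time
# forecast by propagating the last known residual (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
q_obs = 10 + np.cumsum(rng.standard_normal(200)) * 0.1   # "observed" flow
q_sim = q_obs + 0.8 * np.sin(np.arange(200) / 5.0)       # biased simulation
err = q_obs - q_sim

# Least-squares AR(1) fit to the residuals: e[t] ~= phi * e[t-1]
phi = np.dot(err[1:], err[:-1]) / np.dot(err[:-1], err[:-1])

def updated_forecast(t, lead):
    """Correct the simulated flow at t+lead with the propagated last error."""
    return q_sim[t + lead] + (phi ** lead) * err[t]

t = 150
for lead in (1, 3, 6):
    raw, upd = q_sim[t + lead], updated_forecast(t, lead)
    print(f"lead {lead}: |err raw|={abs(q_obs[t+lead]-raw):.3f} "
          f"|err updated|={abs(q_obs[t+lead]-upd):.3f}")
```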
Ren, Xuezhu; Altmeyer, Michael; Reiss, Siegbert; Schweizer, Karl
2013-02-01
Perceptual attention and executive attention represent two higher-order types of attention and are associated with distinctly different ways of information processing. It is hypothesized that these two types of attention implicate different cognitive processes, which are assumed to account for the differential effects of perceptual attention and executive attention on fluid intelligence. Specifically, an encoding process is assumed to be crucial in completing the tasks of perceptual attention, while two executive processes, updating and shifting, are stimulated in completing the tasks of executive attention. The proposed hypothesis was tested by means of an integrative approach combining experimental manipulations and psychometric modeling. In a sample of 210 participants, the encoding process proved indispensable in completing the tasks of perceptual attention, and this process accounted for a considerable part of fluid intelligence as assessed by two figural reasoning tests. In contrast, the two executive processes, updating and shifting, turned out to be necessary for performing the tasks of executive attention, and these processes accounted for a larger part of the variance in fluid intelligence than the processes underlying perceptual attention. Copyright © 2012 Elsevier B.V. All rights reserved.
Imitate or innovate: Competition of strategy updating attitudes in spatial social dilemma games
NASA Astrophysics Data System (ADS)
Danku, Zsuzsa; Wang, Zhen; Szolnoki, Attila
2018-01-01
Evolution is based on the assumption that competing players update their strategies to increase their individual payoffs. However, while the applied updating method can differ, most previous works proposed uniform models in which players use an identical way to revise their strategies. In this work we explore how an imitation-based (learning) attitude and an innovation-based (myopic best-response) attitude compete for space in a complex model where both attitudes are available. In the absence of an additional cost, the best-response trait practically dominates the whole snow-drift game parameter space, which is in agreement with the average payoff difference of the basic models. When an additional cost is involved, the imitation attitude can gradually invade the whole parameter space, but this transition happens in a highly nontrivial way. However, the roles of the competing attitudes are reversed in the stag-hunt parameter space, where imitation is more successful in general. Interestingly, a four-state solution can be observed for the latter game, which is a consequence of an emerging cyclic dominance between the possible states. These phenomena can be understood by analyzing the microscopic invasion processes, which reveals the unequal propagation velocities of strategies and attitudes.
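A rough sketch of such a competing-attitudes simulation is given below: each lattice site carries a strategy and an update attitude, imitators copy a neighbor's strategy (and, as one possible rule, its attitude) via a Fermi function, and best responders choose the myopically optimal strategy. The payoffs, noise level, and update counts are invented for illustration.

```python
# Toy lattice snowdrift game with two competing update attitudes
# (imitation vs. myopic best response); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
L, r = 50, 0.4                            # lattice size, cost-to-benefit ratio
# Snowdrift payoffs with 1 = cooperate, 0 = defect:
pay = {(1, 1): 1.0, (1, 0): 1 - r, (0, 1): 1 + r, (0, 0): 0.0}
strat = rng.integers(0, 2, (L, L))
imitator = rng.random((L, L)) < 0.5       # True: imitate; False: best response

def neighbors(i, j):
    return [((i + 1) % L, j), ((i - 1) % L, j),
            (i, (j + 1) % L), (i, (j - 1) % L)]

def payoff(i, j, s):
    return sum(pay[(s, strat[n])] for n in neighbors(i, j))

for _ in range(100 * L * L):              # asynchronous random-site updates
    i, j = rng.integers(0, L, 2)
    if imitator[i, j]:
        ni, nj = neighbors(i, j)[rng.integers(0, 4)]
        dp = payoff(ni, nj, strat[ni, nj]) - payoff(i, j, strat[i, j])
        if rng.random() < 1.0 / (1.0 + np.exp(-dp / 0.1)):   # Fermi rule
            strat[i, j] = strat[ni, nj]
            imitator[i, j] = imitator[ni, nj]  # attitude propagates with strategy
    else:                                  # myopic best response
        strat[i, j] = max((0, 1), key=lambda s: payoff(i, j, s))

print("cooperator fraction:", strat.mean(),
      "imitator fraction:", imitator.mean())
```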
ISS Ambient Air Quality: Updated Inventory of Known Aerosol Sources
NASA Technical Reports Server (NTRS)
Meyer, Marit
2014-01-01
Spacecraft cabin air quality is of fundamental importance to crew health, with concerns encompassing both gaseous contaminants and particulate matter. Little opportunity exists for direct measurement of aerosol concentrations on the International Space Station (ISS); however, an aerosol source model was developed for the purpose of filtration and ventilation systems design. This model has been applied successfully; however, since the initial effort, an increase in the number of crewmembers from three to six and new processes on board the ISS necessitate an updated aerosol inventory to accurately reflect the current ambient aerosol conditions. Results from recent analyses of dust samples from ISS, combined with a literature review, provide new predicted aerosol emission rates in terms of size-segregated mass and number concentration. Some new aerosol sources have been considered and added to the existing array of materials. The goal of this work is to provide updated filtration model inputs which can verify that the current ISS filtration system is adequate and filter lifetime targets are met. This inventory of aerosol sources is applicable to other spacecraft, and becomes more important as NASA considers future long-term exploration missions, which will preclude the opportunity for resupply of filtration products.
Yang, Xiaohuan; Huang, Yaohuan; Dong, Pinliang; Jiang, Dong; Liu, Honghui
2009-01-01
The spatial distribution of population is closely related to land use and land cover (LULC) patterns on both regional and global scales. Population can be redistributed onto geo-referenced square grids according to this relation. In the past decades, various approaches to monitoring LULC using remote sensing and Geographic Information Systems (GIS) have been developed, which makes it possible for efficient updating of geo-referenced population data. A Spatial Population Updating System (SPUS) is developed for updating the gridded population database of China based on remote sensing, GIS and spatial database technologies, with a spatial resolution of 1 km by 1 km. The SPUS can process standard Moderate Resolution Imaging Spectroradiometer (MODIS L1B) data integrated with a Pattern Decomposition Method (PDM) and an LULC-Conversion Model to obtain patterns of land use and land cover, and provide input parameters for a Population Spatialization Model (PSM). The PSM embedded in SPUS is used for generating 1 km by 1 km gridded population data in each population distribution region based on natural and socio-economic variables. Validation results from finer township-level census data of Yishui County suggest that the gridded population database produced by the SPUS is reliable.
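The dasymetric idea behind the PSM step, redistributing a census total over grid cells according to land-cover weights, can be sketched as follows; the weight values and single-variable weighting are invented for illustration, since the actual PSM uses multiple natural and socio-economic variables.

```python
# Dasymetric population gridding sketch: allocate a census total onto a
# grid using per-class land-cover weights (class weights are hypothetical).
import numpy as np

rng = np.random.default_rng(4)
lulc = rng.integers(0, 4, (5, 5))         # 0=water, 1=forest, 2=crop, 3=urban
weight_by_class = np.array([0.0, 0.05, 0.3, 1.0])

w = weight_by_class[lulc]                 # per-cell weight from land cover
county_population = 120_000
pop_grid = county_population * w / w.sum()  # assumes at least one populated cell
print(pop_grid.round(0))
print("total preserved:", pop_grid.sum())
```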
Faunt, Claudia C.; Hanson, Randall T.; Martin, Peter; Schmid, Wolfgang
2011-01-01
California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence.
NASA Technical Reports Server (NTRS)
Howard, Joseph M.; Ha, Kong Q.; Shiri, Ron; Smith, J. Scott; Mosier, Gary; Muheim, Danniella
2008-01-01
This paper is part five of a series on the ongoing optical modeling activities for the James Webb Space Telescope (JWST). The first two papers discussed modeling JWST on-orbit performance using wavefront sensitivities to predict line of sight motion induced blur, and stability during thermal transients. The third paper investigates the aberrations resulting from alignment and figure compensation of the controllable degrees of freedom (primary and secondary mirrors), which may be encountered during ground alignment and on-orbit commissioning of the observatory, and the fourth introduced the software toolkits used to perform much of the optical analysis for JWST. The work here models observatory operations by simulating line-of-sight image motion and alignment drifts over a two-week period. Alignment updates are then simulated using wavefront sensing and control processes to calculate and perform the corrections. A single model environment in Matlab is used for evaluating the predicted performance of the observatory during these operations.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
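For reference, a minimal stochastic ensemble Kalman filter update for a linear observation operator looks like the following; this is a generic textbook sketch, not code from the presentation.

```python
# Stochastic EnKF analysis step for y = Hx + noise (synthetic ensemble).
import numpy as np

rng = np.random.default_rng(5)
n_ens, n_state = 100, 3
X = rng.multivariate_normal([1.0, 0.0, -1.0], np.eye(n_state), n_ens).T
H = np.array([[1.0, 0.0, 0.0]])           # observe the first state component
R = np.array([[0.04]])                    # observation error covariance
y_obs = np.array([1.3])

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                 # ensemble anomalies
P = A @ A.T / (n_ens - 1)                  # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Perturbed observations keep the analysis ensemble spread consistent
Y = y_obs[:, None] + np.sqrt(R[0, 0]) * rng.standard_normal((1, n_ens))
Xa = X + K @ (Y - H @ X)
print("prior mean:", Xm.ravel(), "analysis mean:", Xa.mean(axis=1))
```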
Post-Launch Calibration and Testing of Space Weather Instruments on GOES-R Satellite
NASA Technical Reports Server (NTRS)
Tadikonda, Sivakumara S. K.; Merrow, Cynthia S.; Kronenwetter, Jeffrey A.; Comeyne, Gustave J.; Flanagan, Daniel G.; Todirita, Monica
2016-01-01
The Geostationary Operational Environmental Satellite - R (GOES-R) is the first of a series of satellites to be launched, with the first launch scheduled for October 2016. The three instruments, the Solar Ultraviolet Imager (SUVI), the Extreme ultraviolet and X-ray Irradiance Sensor (EXIS), and the Space Environment In-Situ Suite (SEISS), provide the data needed as inputs for the product updates the National Oceanic and Atmospheric Administration (NOAA) provides to the public. SUVI is a full-disk extreme ultraviolet imager enabling Active Region characterization, filament eruption, and flare detection. EXIS provides inputs to solar backgrounds/events impacting climate models. SEISS provides particle measurements over a wide energy-and-flux range that varies by several orders of magnitude, and these data enable updates to spacecraft charge models for electrostatic discharge. EXIS and SEISS have been tested and calibrated end-to-end in ground test facilities around the United States. Due to the complexity of the SUVI design, data from component tests were used in a model to predict on-orbit performance. The ground tests and model updates provided inputs for designing the on-orbit calibration tests. A series of such tests has been planned for the Post-Launch Testing (PLT) of each of these instruments, and specific parameters have been identified that will be updated in the Ground Processing Algorithms, the on-orbit parameter tables, or both. Some of the SUVI and EXIS calibrations require slewing them off the Sun, while no such maneuvers are needed for SEISS. After a six-month PLT period, GOES-R is expected to be operational. The calibration details are presented in this paper.
Technical Update for Vocational Agriculture Teachers in Secondary Schools. Final Report.
ERIC Educational Resources Information Center
Iowa State Univ. of Science and Technology, Ames. Dept. of Agricultural Education.
A project provided ongoing opportunities for teachers in Iowa to upgrade their expertise in agribusiness management using new technology; production, processing, and marketing agricultural products; biotechnology in agriculture; and conservation of natural resources. The project also modeled effective teaching methods and strategies. Project…
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-06-01
This module presents the primary aspects of SACM compared to the traditional Superfund response process. In addition, this module discusses presumptive remedies by covering what they are, and providing an overview of the guidance EPA has developed.
Face Adaptation and Attractiveness Aftereffects in 8-Year-Olds and Adults
ERIC Educational Resources Information Center
Anzures, Gizelle; Mondloch, Catherine J.; Lackner, Christine
2009-01-01
A novel method was used to investigate developmental changes in face processing: attractiveness aftereffects. Consistent with the norm-based coding model, viewing consistently distorted faces shifts adults' attractiveness preferences toward the adapting stimuli. Thus, adults' attractiveness judgments are influenced by a continuously updated face…
NASA Astrophysics Data System (ADS)
Ren, Qianci
2018-04-01
Full waveform inversion (FWI) of ground penetrating radar (GPR) is a promising technique for quantitatively evaluating the permittivity and conductivity of the near subsurface. However, these two parameters are inverted simultaneously in GPR FWI, which increases the difficulty of obtaining accurate inversion results for both. In this study, I present a structurally constrained GPR FWI procedure to jointly invert the two parameters, aiming to enforce a structural relationship between permittivity and conductivity in the process of model reconstruction. The structural constraint is enforced by a cross-gradient function. In this procedure, the permittivity and conductivity models are inverted alternately at each iteration and updated with hierarchical frequency components in the frequency domain. The joint inverse problem is solved by the truncated Newton method, which considers the effect of the Hessian operator and uses the approximate solution of the Newton equation as the perturbation model in the updating process. The joint inversion procedure is tested on three synthetic examples. The results show that jointly inverting permittivity and conductivity in GPR FWI effectively increases the structural similarity between the two parameters, corrects the structures of the parameter models, and significantly improves the accuracy of the conductivity model, resulting in a better inversion result than individual inversion.
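The cross-gradient constraint can be illustrated directly: the quantity t = ∇m1 × ∇m2 vanishes wherever the spatial structures of the two models align. The synthetic fields below are invented for illustration.

```python
# Cross-gradient structural measure between two 2-D parameter fields:
# near zero for structurally aligned models, large for conflicting ones.
import numpy as np

x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
perm = np.tanh(10 * (y - 0.5))            # permittivity-like layered model
cond_aligned = 2.0 * perm + 1.0           # same structure, different values
cond_rotated = np.tanh(10 * (x - 0.5))    # conflicting structure

def cross_gradient_norm(m1, m2):
    g1y, g1x = np.gradient(m1)
    g2y, g2x = np.gradient(m2)
    t = g1x * g2y - g1y * g2x             # z-component of the cross product in 2-D
    return np.sum(t**2)

print("aligned :", cross_gradient_norm(perm, cond_aligned))
print("rotated :", cross_gradient_norm(perm, cond_rotated))
```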
Danielson, Jeffrey J.; Poppenga, Sandra K.; Brock, John C.; Evans, Gayla A.; Tyler, Dean; Gesch, Dean B.; Thatcher, Cindy A.; Barras, John
2016-01-01
During the coming decades, coastlines will respond to widely predicted sea-level rise, storm surge, and coastal inundation flooding from disastrous events. Because physical processes in coastal environments are controlled by the geomorphology of over-the-land topography and underwater bathymetry, many applications of geospatial data in coastal environments require detailed knowledge of the near-shore topography and bathymetry. In this paper, an updated methodology used by the U.S. Geological Survey Coastal National Elevation Database (CoNED) Applications Project is presented for developing coastal topobathymetric elevation models (TBDEMs) from multiple topographic data sources with adjacent intertidal topobathymetric and offshore bathymetric sources to generate seamlessly integrated TBDEMs. This repeatable, updatable, and logically consistent methodology assimilates topographic data (land elevation) and bathymetry (water depth) into a seamless coastal elevation model. Within the overarching framework, vertical datum transformations are standardized in a workflow that interweaves spatially consistent interpolation (gridding) techniques with a land/water boundary mask delineation approach. Output gridded raster TBDEMs are stacked into a file storage system of mosaic datasets within an Esri ArcGIS geodatabase for efficient updating while maintaining current and updated spatially referenced metadata. Topobathymetric data provide a required seamless elevation product for several science application studies, such as shoreline delineation, coastal inundation mapping, sediment transport, sea-level rise, storm surge models, and tsunami impact assessment. These detailed coastal elevation data are critical to depict regions prone to climate change impacts and are essential to planners and managers responsible for mitigating the associated risks and costs to both human communities and ecosystems. The CoNED methodology approach has been used to construct integrated TBDEM models in Mobile Bay, the northern Gulf of Mexico, San Francisco Bay, the Hurricane Sandy region, and southern California.
The updating of clinical practice guidelines: insights from an international survey
2011-01-01
Background Clinical practice guidelines (CPGs) have become increasingly popular, and the methodology to develop guidelines has evolved enormously. However, little attention has been given to the updating process, in contrast to the appraisal of the available literature. We conducted an international survey to identify current practices in CPG updating and explored the need to standardize and improve the methods. Methods We developed a questionnaire (28 items) based on a review of the existing literature about guideline updating and expert comments. We carried out the survey between March and July 2009, and it was sent by email to 106 institutions: 69 members of the Guidelines International Network who declared that they developed CPGs; 30 institutions included in the U.S. National Guideline Clearinghouse database that published more than 20 CPGs; and 7 institutions selected by an expert committee. Results Forty-four institutions answered the questionnaire (42% response rate). In the final analysis, 39 completed questionnaires were included. Thirty-six institutions (92%) reported that they update their guidelines. Thirty-one institutions (86%) have a formal procedure for updating their guidelines, and 19 (53%) have a formal procedure for deciding when a guideline becomes out of date. Institutions describe the process as moderately rigorous (36%) or acknowledge that it could certainly be more rigorous (36%). Twenty-two institutions (61%) alert guideline users on their website when a guideline is older than three to five years or when there is a risk of being outdated. Twenty-five institutions (64%) support the concept of "living guidelines," which are continuously monitored and updated. Eighteen institutions (46%) have plans to design a protocol to improve their guideline-updating process, and 21 (54%) are willing to share resources with other organizations. Conclusions Our study is the first to describe the process of updating CPGs among prominent guideline institutions across the world, providing a comprehensive picture of guideline updating. There is an urgent need to develop rigorous international standards for this process and to minimize duplication of effort internationally. PMID:21914177
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
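A minimal sketch of the KL parameterization is shown below: a 1-D random field with a squared-exponential covariance (an assumed kernel, not necessarily the paper's) is expanded into a few dominant modes, so the distributed parameter is described by a handful of coefficients that a sensitivity-based updating scheme can then estimate.

```python
# Truncated Karhunen-Loève expansion of a 1-D random field; the field's
# mean, variance, and correlation length are illustrative assumptions.
import numpy as np

x = np.linspace(0.0, 1.0, 200)            # points along the beam
ell, sigma = 0.2, 1.0
C = sigma**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))

eigval, eigvec = np.linalg.eigh(C)        # ascending eigenvalues
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]

n_modes = 5
rng = np.random.default_rng(6)
xi = rng.standard_normal(n_modes)         # the parameters one would update
field = 1.0 + eigvec[:, :n_modes] @ (np.sqrt(eigval[:n_modes]) * xi)
print("variance captured by 5 modes:", eigval[:n_modes].sum() / eigval.sum())
```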
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
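The residual-monitoring idea can be sketched in a few lines; the trivially linear "model", noise levels, seeded fault, and 4-sigma threshold below are invented for illustration and are far simpler than the piecewise linear engine model described above.

```python
# Residual-based anomaly detection sketch: flag sensed outputs that drift
# outside a nominal band around the model prediction (synthetic data).
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(500)
model_pred = 500 + 0.2 * t                # model-predicted sensor output
sensed = model_pred + 2.0 * rng.standard_normal(500)
sensed[300:] += 12.0                      # seeded fault: step bias at t=300

resid = sensed - model_pred
sigma = resid[:200].std()                 # calibrate band on nominal data
alarm = np.abs(resid) > 4 * sigma
print("first alarm index:", np.argmax(alarm),
      "alarm fraction:", alarm.mean())
```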
Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Michael; Elgowainy, Amgad; Han, Jeongwoo
This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes an updated version of the GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing development scenarios up to 2100. This newest version includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide (Final Report) describes the development of ICLUS v2 and the updates that were made to the original data sets and to the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
Mathematical model of bone drilling for virtual surgery system
NASA Astrophysics Data System (ADS)
Alaytsev, Innokentiy K.; Danilova, Tatyana V.; Manturov, Alexey O.; Mareev, Gleb O.; Mareev, Oleg V.
2018-04-01
Bone drilling is an essential part of surgeries in ENT and dentistry. Proper training of drilling machine handling skills is impossible without proper modelling of the drilling process. Utilization of high-precision methods like FEM is limited due to the requirement of a 1000 Hz update rate for haptic feedback. The study presents a mathematical model of the drilling process that accounts for the properties of the materials, the geometry, and the rotation rate of a burr to compute the removed material volume. The simplicity of the model allows it to be integrated into the high-frequency haptic thread. The precision of the model is sufficient for a virtual surgery system targeted at the training of basic surgery skills.
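A back-of-envelope version of such a per-tick removal law, cheap enough for a 1 kHz haptic loop, might look like the following; the rate constant and hardness values are placeholders, not the paper's calibrated model.

```python
# Per-tick material-removal sketch for a 1 kHz haptic loop; k and the
# hardness values are illustrative assumptions.
def removed_volume(overlap_mm3, rpm, hardness, k=1e-4, dt=0.001):
    """Volume removed in one haptic tick: proportional to burr/bone overlap
    and rotation rate, inversely proportional to material hardness."""
    return k * overlap_mm3 * rpm * dt / hardness

# One millisecond of drilling: cortical (harder) vs. trabecular (softer) bone
print(removed_volume(0.5, 30000, hardness=40.0))
print(removed_volume(0.5, 30000, hardness=10.0))
```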
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
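A toy scalar version of the preemptive-constraining idea is sketched below: the EKF prediction is clipped to physical bounds before the measurement correction. The system matrices, bounds, and noise levels are illustrative, not those of an actual IGCC plant.

```python
# Scalar Kalman step with preemptive state constraining (toy system).
import numpy as np

def ekf_step(x, P, u, z, lo, hi, A=0.95, B=0.1, H=1.0, Q=0.01, R=0.04):
    x_pred = A * x + B * u                 # predict with the dynamic model
    P_pred = A * P * A + Q
    x_pred = np.clip(x_pred, lo, hi)       # preemptive constraint on the state
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)  # measurement correction
    P_new = (1.0 - K * H) * P_pred
    return np.clip(x_new, lo, hi), P_new   # keep the corrected state feasible

x, P = 0.5, 1.0
for z in [0.9, 1.1, 1.4]:                  # sensed plant outputs
    x, P = ekf_step(x, P, u=1.0, z=z, lo=0.0, hi=1.2)
    print(f"state estimate {x:.3f}, variance {P:.3f}")
```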
Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.
2004-01-01
This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, material, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.
Improved meteorology from an updated WRF/CMAQ modeling ...
Realistic vegetation characteristics and phenology from the Moderate Resolution Imaging Spectroradiometer (MODIS) products improve the simulation for the meteorology and air quality modeling system WRF/CMAQ (Weather Research and Forecasting model and Community Multiscale Air Quality model) that employs the Pleim-Xiu land surface model (PX LSM). Recently, PX LSM WRF/CMAQ has been updated in vegetation, soil, and boundary layer processes resulting in improved 2 m temperature (T) and mixing ratio (Q), 10 m wind speed, and surface ozone simulations across the domain compared to the previous version for a period around August 2006. Yearlong meteorology simulations with the updated system demonstrate that MODIS input helps reduce bias of the 2 m Q estimation during the growing season from April to September. Improvements follow the green-up in the southeast from April and move toward the west and north through August. From October to March, MODIS input does not have much influence on the system because vegetation is not as active. The greatest effects of MODIS input include more accurate phenology, better representation of leaf area index (LAI) for various forest ecosystems and agricultural areas, and realistically sparse vegetation coverage in the western drylands. Despite the improved meteorology, MODIS input causes higher bias for the surface O3 simulation in April, August, and October in areas where MODIS LAI is much less than the base LAI. Thus, improvement
Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun
2014-01-01
Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage. PMID:24574888
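Combining two observation groups in one updating objective can be illustrated with a toy weighted least-squares problem that stacks natural-frequency residuals and FRF residuals; the model, data, and weights below are synthetic, not the paper's procedure for determining the weightings.

```python
# Weighted multi-group updating objective: stack frequency and FRF
# residuals with group weights (all quantities synthetic).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(8)
freqs_obs = np.array([1.2, 3.5, 6.1])                # "measured" frequencies
frf_obs = np.sin(np.linspace(0, 3, 30)) + 0.01 * rng.standard_normal(30)

def residuals(theta, w_freq=10.0, w_frf=1.0):
    k = theta[0]                                      # single updating parameter
    freqs_model = k * np.array([1.0, 3.0, 5.0])       # toy parametric model
    frf_model = np.sin(k * np.linspace(0, 3, 30))
    return np.concatenate([w_freq * (freqs_obs - freqs_model),
                           w_frf * (frf_obs - frf_model)])

sol = least_squares(residuals, x0=[0.8])
print("updated parameter:", sol.x)
```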
Modeled and Observed Altitude Distributions of the Micrometeoroid Influx in Radar Detection
NASA Astrophysics Data System (ADS)
Swarnalingam, N.; Janches, D.; Plane, J. M. C.; Carrillo-Sánchez, J. D.; Sternovsky, Z.; Pokorny, P.; Nesvorny, D.
2017-12-01
The altitude distributions of micrometeoroids are a representation of the radar response function of the incoming flux and thus can be utilized to calibrate radar measurements. These, in turn, can be used to determine the rate of ablation and ionization of the meteoroids and ultimately the input flux. During the ablation process, electrons are created, and these electrons subsequently produce backscatter signals when they encounter the signals transmitted by the radar. In this work, we investigate the altitude distribution by exploring different sizes as well as the aspect sensitivity of the meteor head echoes. We apply an updated version of the Chemical Ablation Model (CABMOD), which includes results from laboratory simulation of meteor ablation for different metallic constituents. In particular, the updated version simulates the ablation of Na. In the updated version, electrons are produced over a wider altitude range, with peak production occurring at lower altitudes compared to the previous version. The results are compared to head echo meteor observations utilizing the Arecibo 430 MHz radar.
Integrating Remote Sensing and Disease Surveillance to Forecast Malaria Epidemics
NASA Astrophysics Data System (ADS)
Wimberly, M. C.; Beyane, B.; DeVos, M.; Liu, Y.; Merkord, C. L.; Mihretie, A.
2015-12-01
Advance information about the timing and locations of malaria epidemics can facilitate the targeting of resources for prevention and emergency response. Early detection methods can detect incipient outbreaks by identifying deviations from expected seasonal patterns, whereas early warning approaches typically forecast future malaria risk based on lagged responses to meteorological factors. A critical limiting factor for implementing either of these approaches is the need for timely and consistent acquisition, processing, and analysis of both environmental and epidemiological data. To address this need, we have developed EPIDEMIA, an integrated system for surveillance and forecasting of malaria epidemics. The EPIDEMIA system includes a public health interface for uploading and querying weekly surveillance reports as well as algorithms for automatically validating incoming data and updating the epidemiological surveillance database. The newly released EASTWeb 2.0 software application automatically downloads, processes, and summarizes remotely sensed environmental data from multiple earth science data archives. EASTWeb was implemented as a component of the EPIDEMIA system, which combines the environmental monitoring data and epidemiological surveillance data into a unified database that supports both early detection and early warning models. Dynamic linear models implemented with Kalman filtering were used to carry out forecasting and model updating. Preliminary forecasts have been disseminated to public health partners in the Amhara Region of Ethiopia and will be validated and refined as the EPIDEMIA system ingests new data. In addition to continued model development and testing, future work will involve updating the public health interface to provide a broader suite of outbreak alerts and data visualization tools that are useful to our public health partners. The EPIDEMIA system demonstrates a feasible approach to synthesizing information from epidemiological surveillance systems and remotely sensed environmental monitoring systems to improve malaria epidemic detection and forecasting.
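A minimal local-level dynamic linear model with a Kalman filter, the kind of forecast-then-update cycle described above, can be sketched as follows; the synthetic weekly counts and variance settings are illustrative, not the project's configuration.

```python
# Local-level DLM with a Kalman filter over synthetic weekly case counts.
import numpy as np

rng = np.random.default_rng(9)
cases = 100 + np.cumsum(rng.standard_normal(52) * 3)  # weekly surveillance

level, P = cases[0], 10.0
W, V = 4.0, 9.0                           # state and observation variances
for y in cases[1:]:
    P_pred = P + W                         # forecast step (level is a random walk)
    K = P_pred / (P_pred + V)
    level = level + K * (y - level)        # update step with the new weekly report
    P = (1.0 - K) * P_pred

print(f"one-week-ahead forecast: {level:.1f} (variance {P + W + V:.1f})")
```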
Updates on Modeling the Water Cycle with the NASA Ames Mars Global Climate Model
NASA Technical Reports Server (NTRS)
Kahre, M. A.; Haberle, R. M.; Hollingsworth, J. L.; Montmessin, F.; Brecht, A. S.; Urata, R.; Klassen, D. R.; Wolff, M. J.
2017-01-01
Global Circulation Models (GCMs) have made steady progress in simulating the current Mars water cycle. It is now widely recognized that clouds are a critical component that can significantly affect the nature of the simulated water cycle. Two processes in particular are key to implementing clouds in a GCM: the microphysical processes of formation and dissipation, and their radiative effects on heating/ cooling rates. Together, these processes alter the thermal structure, change the dynamics, and regulate inter-hemispheric transport. We have made considerable progress representing these processes in the NASA Ames GCM, particularly in the presence of radiatively active water ice clouds. We present the current state of our group's water cycle modeling efforts, show results from selected simulations, highlight some of the issues, and discuss avenues for further investigation.
Gyrofluid Modeling of Turbulent, Kinetic Physics
NASA Astrophysics Data System (ADS)
Despain, Kate Marie
2011-12-01
Gyrofluid models to describe plasma turbulence combine the advantages of fluid models, such as lower dimensionality and well-developed intuition, with those of gyrokinetic models, such as finite Larmor radius (FLR) effects. This allows gyrofluid models to be more tractable computationally while still capturing much of the physics related to the FLR of the particles. We present a gyrofluid model derived to capture the behavior of slow solar wind turbulence and describe the computer code developed to implement the model. In addition, we describe the modifications we made to a gyrofluid model and code that simulate plasma turbulence in tokamak geometries. Specifically, we describe a nonlinear phase-mixing phenomenon, part of the E x B term, that was previously missing from the model. An inherently FLR effect, it plays an important role in predicting turbulent heat flux and diffusivity levels for the plasma. We demonstrate this importance by comparing results from the updated code to studies done previously with gyrofluid and gyrokinetic codes. We further explain what would be necessary to couple the updated gyrofluid code, gryffin, to a turbulent transport code, thus allowing gryffin to play a role in predicting profiles for fusion devices such as ITER and in exploring novel fusion configurations. Such a coupling would require the use of Graphical Processing Units (GPUs) to make the modeling process fast enough to be viable. Consequently, we also describe our experience with GPU computing and demonstrate that we are poised to complete a gryffin port to this innovative architecture.
Persuasion and the Mass Communication Process.
ERIC Educational Resources Information Center
Sternthal, Brian
The author addresses his dissertation to two audiences: the mass communications practitioner, to help update his knowledge about the phenomena, and the researcher, to provide a starting point for a systematic pursuit of knowledge about media. In the first part, the author presents a model for persuasive mass communications, specifying the critical…
The National Health Educator Job Analysis 2010: Process and Outcomes
ERIC Educational Resources Information Center
Doyle, Eva I.; Caro, Carla M.; Lysoby, Linda; Auld, M. Elaine; Smith, Becky J.; Muenzen, Patricia M.
2012-01-01
The National Health Educator Job Analysis 2010 was conducted to update the competencies model for entry- and advanced-level health educators. Qualitative and quantitative methods were used. Structured interviews, focus groups, and a modified Delphi technique were implemented to engage 59 health educators from diverse work settings and experience…
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
Updating Procedures Can Reorganize the Neural Circuit Supporting a Fear Memory.
Kwapis, Janine L; Jarome, Timothy J; Ferrara, Nicole C; Helmstetter, Fred J
2017-07-01
Established memories undergo a period of vulnerability following retrieval, a process termed 'reconsolidation.' Recent work has shown that the hypothetical process of reconsolidation is only triggered when new information is presented during retrieval, suggesting that this process may allow existing memories to be modified. Reconsolidation has received increasing attention as a possible therapeutic target for treating disorders that stem from traumatic memories, yet little is known about how this process changes the original memory. In particular, it is unknown whether reconsolidation can reorganize the neural circuit supporting an existing memory after that memory is modified with new information. Here, we show that trace fear memory undergoes a protein synthesis-dependent reconsolidation process following exposure to a single updating trial of delay conditioning. Further, this reconsolidation-dependent updating process appears to reorganize the neural circuit supporting the trace-trained memory, so that it better reflects the circuit supporting delay fear. Specifically, after a trace-to-delay update session, the amygdala is now required for extinction of the updated memory but the retrosplenial cortex is no longer required for retrieval. These results suggest that updating procedures could be used to force a complex, poorly defined memory circuit to rely on a better-defined neural circuit that may be more amenable to behavioral or pharmacological manipulation. This is the first evidence that exposure to new information can fundamentally reorganize the neural circuit supporting an existing memory.
NASA Astrophysics Data System (ADS)
Grubbs, Guy; Michell, Robert; Samara, Marilia; Hampton, Donald; Hecht, James; Solomon, Stanley; Jahn, Jorg-Micha
2018-01-01
It is important to routinely examine and update models used to predict auroral emissions resulting from precipitating electrons in Earth's magnetotail. These models are commonly used to invert spectral auroral ground-based images to infer characteristics about incident electron populations when in situ measurements are unavailable. In this work, we examine and compare auroral emission intensities predicted by three commonly used electron transport models using varying electron population characteristics. We then compare model predictions to same-volume in situ electron measurements and ground-based imaging to qualitatively examine modeling prediction error. Initial comparisons showed differences in predictions by the GLobal airglOW (GLOW) model and the other transport models examined. Chemical reaction rates and radiative rates in GLOW were updated using recent publications, and predictions showed better agreement with the other models and the same-volume data, stressing that these rates are important to consider when modeling auroral processes. Predictions by each model exhibit similar behavior for varying atmospheric constants, energies, and energy fluxes. Same-volume electron data and images are highly correlated with predictions by each model, showing that these models can be used to accurately derive electron characteristics and ionospheric parameters based solely on multispectral optical imaging data.
NASA Technical Reports Server (NTRS)
Carnahan, Richard S., Jr.; Corey, Stephen M.; Snow, John B.
1989-01-01
Applications of rapid prototyping and Artificial Intelligence techniques to problems associated with Space Station-era information management systems are described. In particular, the work is centered on issues related to: (1) intelligent man-machine interfaces applied to scientific data user support, and (2) the requirement that intelligent information management systems (IIMS) be able to efficiently process metadata updates concerning the types of data handled. The advanced IIMS represents functional capabilities driven almost entirely by the needs of potential users. The volume of scientific data projected to be generated in the Space Station era is likely to be significantly greater than that currently processed and analyzed. Information about scientific data must be presented clearly, concisely, and with support features that allow users at all levels of expertise efficient and cost-effective data access. Additionally, mechanisms for allowing more efficient IIMS metadata update processes must be addressed. The work reported covers the following IIMS design aspects: IIMS data and metadata modeling, including the automatic updating of IIMS-contained metadata; IIMS user-system interface considerations, including significant problems associated with remote access, user profiles, and on-line tutorial capabilities; and development of an IIMS query and browse facility, including the capability to deal with spatial information. A working prototype has been developed and is being enhanced.
A trust region approach with multivariate Padé model for optimal circuit design
NASA Astrophysics Data System (ADS)
Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.
2017-11-01
Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using O(2n+1) data points, where n is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of O(n+1) points and has features of quadratic models, which need O((n+1)(n+2)/2) interpolation data points. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. A minimax solution leads to a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.
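As a rough illustration of the trust-region mechanics described above, the sketch below substitutes a simple least-squares quadratic surrogate for the authors' multivariate Padé model (which is the paper's actual contribution); the accept/shrink/expand logic is the generic derivative-free scheme, and the toy objective is an assumption.

```python
# Generic surrogate-based trust-region loop (illustrative stand-in only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def objective(x):                            # toy stand-in for a circuit response
    return (x[0] - 1.0)**2 + 2.0*(x[1] + 0.5)**2

def make_surrogate(X, y):                    # least-squares quadratic fit
    n = X.shape[1]
    def features(x):
        x = np.atleast_2d(x)
        cols = [np.ones(len(x))] + [x[:, i] for i in range(n)]
        cols += [x[:, i]*x[:, j] for i in range(n) for j in range(i, n)]
        return np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
    return lambda x: float(features(x) @ coef)

x, radius = np.zeros(2), 1.0
for _ in range(25):
    pts = x + radius*(rng.random((12, 2)) - 0.5)          # sample the trust region
    model = make_surrogate(pts, np.array([objective(p) for p in pts]))
    res = minimize(model, x, bounds=[(xi - radius, xi + radius) for xi in x])
    rho = (objective(x) - objective(res.x)) / (model(x) - model(res.x) + 1e-12)
    if rho > 0.1:                                         # accept the step
        x = res.x
    radius = 2*radius if rho > 0.75 else 0.5*radius if rho < 0.25 else radius
print(x)                                                  # approaches (1.0, -0.5)
```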
NASA Technical Reports Server (NTRS)
Gaston, S.; Wertheim, M.; Orourke, J. A.
1973-01-01
Summary, consolidation and analysis of specifications, manufacturing process and test controls, and performance results for OAO-2 and OAO-3 lot 20 Amp-Hr sealed nickel cadmium cells and batteries are reported. Correlation of improvements in control requirements with performance is a key feature. Updates for a cell/battery computer model to improve performance prediction capability are included. Applicability of regression analysis computer techniques to relate process controls to performance is checked.
A Comparison of the SOCIT and DebriSat Experiments
NASA Technical Reports Server (NTRS)
Ausay, E.; Cornejo, A.; Horn, A.; Palma, K.; Sato, T.; Blake, B.; Pistella, F.; Boyle, C.; Todd, N.; Zimmerman, J.;
2017-01-01
This paper explores the differences between, and shares the lessons learned from, two hypervelocity impact experiments critical to the update of Department of Defense (DOD) and National Aeronautics and Space Administration (NASA) satellite breakup models. The procedures as well as the processes of the fourth Satellite Orbital Debris Characterization Impact Test (SOCIT4) were analyzed and related to the ongoing DebriSat experiment. SOCIT4 accounted for about 90% of the entire satellite mass, but only analyzed approximately 59%, with a total of approximately 4,700 fragments. DebriSat aims to recover and analyze 90% of the initial mass; to do so, fragments with a longest dimension of at least 2 mm are collected and processed. DebriSat's use of modern materials, especially carbon fiber, significantly increases the fragment count, and to date over 126,000 fragments have been collected. Challenges, such as procedures and human inputs, encountered throughout the DebriSat experiment are also shared. While SOCIT4 laid the foundation for the majority of DebriSat processes, the technological advancements since SOCIT4 allow for more accurate, rigorous, and in-depth procedures that will aid the update of satellite breakup models.
Utilizing Flight Data to Update Aeroelastic Stability Estimates
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
Stability analysis of high performance aircraft must account for errors in the system model. A method for computing flutter margins that incorporates flight data has been developed using robust stability theory. This paper considers applying this method to update flutter margins during a post-flight or on-line analysis. Areas of modeling uncertainty that arise when using flight data with this method are investigated. The amount of conservatism in the resulting flutter margins depends on the flight data sets used to update the model. Post-flight updates of flutter margins for an F/A-18 are presented along with a simulation of on-line updates during a flight test.
EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing developmen...
Bayesian Approaches for Model and Multi-mission Satellites Data Fusion
NASA Astrophysics Data System (ADS)
Khaki, M., , Dr; Forootan, E.; Awange, J.; Kuhn, M.
2017-12-01
Traditionally, data assimilation is formulated as a Bayesian approach that allows one to update model simulations using new incoming observations. This integration is necessary due to the uncertainty in model outputs, which mainly results from several drawbacks, e.g., limitations in accounting for the complexity of real-world processes, uncertainties in (unknown) empirical model parameters, and the absence of high-resolution (both spatially and temporally) data. Data assimilation, however, requires knowledge of the physical process of a model, which may be either poorly described or entirely unavailable. Therefore, an alternative method is required to avoid this dependency. In this study we present a novel approach that can be used in hydrological applications. A non-parametric framework based on the Kalman filtering technique is proposed to improve hydrological model estimates without using model dynamics. In particular, we assess Kalman-Takens formulations that take advantage of the delay coordinate method to reconstruct nonlinear dynamics in the absence of the physical process. This empirical relationship is then used instead of model equations to integrate satellite products with model outputs. We use water storage variables from World-Wide Water Resources Assessment (W3RA) simulations and update them using terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE), as well as surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E), over Australia for the period 2003 to 2011. The performance of the proposed integration method is compared with that of the more traditional assimilation scheme based on the Ensemble Square-Root Filter (EnSRF) (Khaki et al., 2017), and both are evaluated against ground-based soil moisture and groundwater observations within the Murray-Darling Basin.
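A minimal sketch of the Kalman-Takens idea may help: the model step is replaced by a nearest-neighbour analog forecast in delay-coordinate space, followed by a standard Kalman update. Everything here (scalar state, synthetic signal, fixed gain with no covariance propagation) is an illustrative assumption, not the study's W3RA/GRACE implementation.

```python
import numpy as np

def delay_embed(series, dim):
    # rows are delay vectors [x_t, x_{t-1}, ..., x_{t-dim+1}] (latest first)
    return np.column_stack([series[dim-1-k: len(series)-k] for k in range(dim)])

def analog_forecast(history, current, dim, k=5):
    emb = delay_embed(history[:-1], dim)       # candidate delay vectors
    nxt = history[dim:]                        # their one-step successors
    idx = np.argsort(np.linalg.norm(emb - current, axis=1))[:k]
    return nxt[idx].mean()                     # mean successor of the k analogs

rng = np.random.default_rng(0)
truth = np.sin(0.3*np.arange(400)) + 0.05*rng.standard_normal(400)
obs = truth + 0.2*rng.standard_normal(400)     # noisy "satellite" observations

dim, R, P = 4, 0.2**2, 0.1                     # embedding dim, obs var, forecast var
hist = list(obs[:50])                          # training record, no model equations
for z in obs[50:]:
    cur = np.array(hist[-1:-dim-1:-1])         # latest delay vector
    xf = analog_forecast(np.array(hist), cur, dim)   # nonparametric forecast
    K = P/(P + R)                              # (static) Kalman gain
    hist.append(xf + K*(z - xf))               # update with the observation
```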
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacFarlane, Joseph J.; Golovkin, I. E.; Woodruff, P. R.
2009-08-07
This Final Report summarizes work performed under DOE STTR Phase II Grant No. DE-FG02-05ER86258 during the project period from August 2006 to August 2009. The project, “Development of Spectral and Atomic Models for Diagnosing Energetic Particle Characteristics in Fast Ignition Experiments,” was led by Prism Computational Sciences (Madison, WI), and involved collaboration with subcontractors University of Nevada-Reno and Voss Scientific (Albuquerque, NM). In this project, we have: Developed and implemented a multi-dimensional, multi-frequency radiation transport model in the LSP hybrid fluid-PIC (particle-in-cell) code [1,2]. Updated the LSP code to support the use of accurate equation-of-state (EOS) tables generated by Prism’s PROPACEOS [3] code to compute more accurate temperatures in high energy density physics (HEDP) plasmas. Updated LSP to support the use of Prism’s multi-frequency opacity tables. Generated equation of state and opacity data for LSP simulations for several materials being used in plasma jet experimental studies. Developed and implemented parallel processing techniques for the radiation physics algorithms in LSP. Benchmarked the new radiation transport and radiation physics algorithms in LSP and compared simulation results with analytic solutions and results from numerical radiation-hydrodynamics calculations. Performed simulations using Prism radiation physics codes to address issues related to radiative cooling and ionization dynamics in plasma jet experiments. Performed simulations to study the effects of radiation transport and radiation losses due to electrode contaminants in plasma jet experiments. Updated the LSP code to generate output using NetCDF to provide a better, more flexible interface to SPECT3D [4] in order to post-process LSP output. Updated the SPECT3D code to better support the post-processing of large-scale 2-D and 3-D datasets generated by simulation codes such as LSP. Updated atomic physics modeling to provide for more comprehensive and accurate atomic databases that feed into the radiation physics modeling (spectral simulations and opacity tables). Developed polarization spectroscopy modeling techniques suitable for diagnosing energetic particle characteristics in HEDP experiments. A description of these items is provided in this report. The above efforts lay the groundwork for utilizing the LSP and SPECT3D codes in providing simulation support for DOE-sponsored HEDP experiments, such as plasma jet and fast ignition physics experiments. We believe that taken together, the LSP and SPECT3D codes have unique capabilities for advancing our understanding of the physics of these HEDP plasmas. Based on conversations early in this project with our DOE program manager, Dr. Francis Thio, our efforts emphasized developing radiation physics and atomic modeling capabilities that can be utilized in the LSP PIC code, and performing radiation physics studies for plasma jets. A relatively minor component focused on the development of methods to diagnose energetic particle characteristics in short-pulse laser experiments related to fast ignition physics. The period of performance for the grant was extended by one year to August 2009 with a one-year no-cost extension, at the request of subcontractor University of Nevada-Reno.
Calibration, Projection, and Final Image Products of MESSENGER's Mercury Dual Imaging System
NASA Astrophysics Data System (ADS)
Denevi, Brett W.; Chabot, Nancy L.; Murchie, Scott L.; Becker, Kris J.; Blewett, David T.; Domingue, Deborah L.; Ernst, Carolyn M.; Hash, Christopher D.; Hawkins, S. Edward; Keller, Mary R.; Laslo, Nori R.; Nair, Hari; Robinson, Mark S.; Seelos, Frank P.; Stephens, Grant K.; Turner, F. Scott; Solomon, Sean C.
2018-02-01
We present an overview of the operations, calibration, geodetic control, photometric standardization, and processing of images from the Mercury Dual Imaging System (MDIS) acquired during the orbital phase of the MESSENGER spacecraft's mission at Mercury (18 March 2011-30 April 2015). We also provide a summary of all of the MDIS products that are available in NASA's Planetary Data System (PDS). Updates to the radiometric calibration included slight modification of the frame-transfer smear correction, updates to the flat fields of some wide-angle camera (WAC) filters, a new model for the temperature dependence of narrow-angle camera (NAC) and WAC sensitivity, and an empirical correction for temporal changes in WAC responsivity. Further, efforts to characterize scattered light in the WAC system are described, along with a mosaic-dependent correction for scattered light that was derived for two regional mosaics. Updates to the geometric calibration focused on the focal lengths and distortions of the NAC and all WAC filters, NAC-WAC alignment, and calibration of the MDIS pivot angle and base. Additionally, two control networks were derived so that the majority of MDIS images can be co-registered with sub-pixel accuracy; the larger of the two control networks was also used to create a global digital elevation model. Finally, we describe the image processing and photometric standardization parameters used in the creation of the MDIS advanced products in the PDS, which include seven large-scale mosaics, numerous targeted local mosaics, and a set of digital elevation models ranging in scale from local to global.
Application of Artificial Intelligence for Bridge Deterioration Model.
Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun
2015-01-01
The deterministic bridge deterioration model updating problem is well established in bridge management, while the traditional methods and approaches for this problem require manual intervention. An artificial-intelligence-based approach was presented in this paper to self-update the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution was constructed according to Bayes' theorem to describe the integrated result of the historical information and the newly gained information, and this posterior was used to update the model parameters. This AI-based approach was applied to the case of updating the parameters of the bridge deterioration model, using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results showed that it is an accurate, effective, and satisfactory approach to the problem of parameter updating without manual intervention.
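The Bayesian updating step admits a compact illustration. Assuming, purely for the sketch, a Normal prior on the annual condition-loss rate and Normally distributed inspection measurements, the posterior is available in closed form and becomes the prior for the next inspection cycle:

```python
import numpy as np

def update_rate(prior_mu, prior_var, data, obs_var):
    # conjugate Normal update: posterior of the deterioration rate
    n, xbar = len(data), float(np.mean(data))
    post_var = 1.0/(1.0/prior_var + n/obs_var)
    post_mu = post_var*(prior_mu/prior_var + n*xbar/obs_var)
    return post_mu, post_var

mu, var = 2.0, 1.0                      # prior: ~2 condition points lost per year
for yearly_drops in ([1.8, 2.4, 2.1], [2.6, 2.9], [2.2, 2.5, 2.8, 2.7]):
    mu, var = update_rate(mu, var, yearly_drops, obs_var=0.5**2)
    print(f"updated rate: {mu:.2f} +/- {np.sqrt(var):.2f} per year")
```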
Control of interference during working memory updating.
Szmalec, Arnaud; Verbruggen, Frederick; Vandierendonck, André; Kemps, Eva
2011-02-01
The current study examined the nature of the processes underlying working memory updating. In 4 experiments using the n-back paradigm, the authors demonstrate that continuous updating of items in working memory prevents strong binding of those items to their contexts in working memory, and hence leads to an increased susceptibility to proactive interference. Results of Experiments 1 and 2 show that this interference reflects a competition between a process that reveals the degree of familiarity of an item and a context-sensitive recollection process that depends on the strength of bindings in working memory. Experiment 3 further clarifies the origins of interference during updating by demonstrating that even items that are semantically related to the updated working memory contents but that have not been maintained in working memory before cause proactive interference. Finally, the results of Experiment 4 indicate that the occurrence of interference leads to top-down behavioral adjustments that prioritize recollection over familiarity assessment. The implications of these findings for the construct validity of the n-back task, for the control processes involved in working memory updating, and for the concept of executive control more generally are discussed.
2017-04-13
Applications ported to OmpSs included a basic algorithm from image processing, a mini application representative of an ocean modelling code, a parallel benchmark, and a communication-avoiding version of the QR algorithm. Further, several improvements to the OmpSs model were made, including a port of the dynamic load balancing library to OmpSs. Finally, several updates to the tools infrastructure were accomplished.
A study on rational function model generation for TerraSAR-X imagery.
Eftekhari, Akram; Saadatseresht, Mohammad; Motagh, Mahdi
2013-09-09
The Rational Function Model (RFM) has been widely used as an alternative to rigorous sensor models of high-resolution optical imagery in photogrammetry and remote sensing geometric processing. However, not much work has been done to evaluate the applicability of the RF model for Synthetic Aperture Radar (SAR) image processing. This paper investigates how to generate a Rational Polynomial Coefficient (RPC) for high-resolution TerraSAR-X imagery using an independent approach. The experimental results demonstrate that the RFM obtained using the independent approach fits the Range-Doppler physical sensor model with an accuracy better than 10^-3 pixel. Because independent RPCs indicate absolute errors in geolocation, two methods can be used to improve the geometric accuracy of the RFM. In the first method, Ground Control Points (GCPs) are used to update SAR sensor orientation parameters, and the RPCs are calculated using the updated parameters. Our experiment demonstrates that by using three control points in the corners of the image, an accuracy of 0.69 pixels in range and 0.88 pixels in the azimuth direction is achieved. For the second method, we tested the use of an affine model for refining RPCs. In this case, by applying four GCPs in the corners of the image, the accuracy reached 0.75 pixels in range and 0.82 pixels in the azimuth direction.
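The second method can be sketched as an affine bias compensation in image space, fitted by least squares to the GCP residuals. The coefficient layout (row' = a0 + a1*row + a2*col, and likewise for columns) and all numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_affine(rpc_rowcol, gcp_rowcol):
    # solve row' = a0 + a1*row + a2*col and col' = b0 + b1*row + b2*col
    A = np.column_stack([np.ones(len(rpc_rowcol)), rpc_rowcol])
    a, *_ = np.linalg.lstsq(A, gcp_rowcol[:, 0], rcond=None)
    b, *_ = np.linalg.lstsq(A, gcp_rowcol[:, 1], rcond=None)
    return a, b

def apply_affine(a, b, rowcol):
    A = np.column_stack([np.ones(len(rowcol)), rowcol])
    return np.column_stack([A @ a, A @ b])

# image coordinates predicted by the uncorrected RPCs vs. measured at 4 corner GCPs
rpc  = np.array([[10.2, 11.1], [10.4, 990.8], [980.9, 12.3], [981.2, 991.5]])
meas = np.array([[ 9.5, 10.3], [ 9.7, 990.1], [980.1, 11.4], [980.4, 990.7]])
a, b = fit_affine(rpc, meas)
print(apply_affine(a, b, rpc) - meas)    # residuals after the refinement
```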
Overview and Evaluation of the Community Multiscale Air ...
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In late 2016 or early 2017, CMAQ version 5.2 will be released. This new version of CMAQ will contain important updates from the current CMAQv5.1 modeling system, along with several instrumented versions of the model (e.g. decoupled direct method and sulfur tracking). Some specific model updates include the implementation of a new wind-blown dust treatment in CMAQv5.2, a significant improvement over the treatment in v5.1, which can severely overestimate wind-blown dust under certain conditions. Several other major updates to the modeling system include an update to the calculation of aerosols; implementation of full halogen chemistry (CMAQv5.1 contains a partial implementation of halogen chemistry); the new carbon bond 6 (CB6) chemical mechanism; updates to the cloud model in CMAQ; and a new lightning assimilation scheme for the WRF model, which significantly improves the placement and timing of convective precipitation in the WRF precipitation fields. Numerous other updates to the modeling system will also be available in v5.2.
U-10Mo Baseline Fuel Fabrication Process Description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hubbard, Lance R.; Arendt, Christina L.; Dye, Daniel F.
This document provides a description of the U.S. High Power Research Reactor (USHPRR) low-enriched uranium (LEU) fuel fabrication process. This document is intended to be used in conjunction with the baseline process flow diagram (PFD) presented in Appendix A. The baseline PFD is used to document the fabrication process, communicate gaps in technology or manufacturing capabilities, convey alternatives under consideration, and as the basis for a dynamic simulation model of the fabrication process. The simulation model allows for the assessment of production rates, costs, and manufacturing requirements (manpower, fabrication space, numbers and types of equipment, etc.) throughout the lifecycle of the USHPRR program. This document, along with the accompanying PFD, is updated regularly.
An on-line modified least-mean-square algorithm for training neurofuzzy controllers.
Tan, Woei Wan
2007-04-01
The problem hindering the use of data-driven modelling methods for training controllers on-line is the lack of control over the amount by which the plant is excited. As the operating schedule determines the information available on-line, the knowledge of the process may degrade if the setpoint remains constant for an extended period. This paper proposes an identification algorithm that alleviates "learning interference" by incorporating fuzzy theory into the normalized least-mean-square update rule. The ability of the proposed methodology to achieve faster learning is examined by employing the algorithm to train a neurofuzzy feedforward controller for controlling a liquid level process. Since the proposed identification strategy has similarities with the normalized least-mean-square update rule and the recursive least-square estimator, the on-line learning rates of these algorithms are also compared.
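One hedged reading of "incorporating fuzzy theory into the normalized least-mean-square update rule" is a linear-in-parameters fuzzy model whose local weights are adapted by a membership-weighted NLMS step, so that a constant setpoint excites only nearby rules. This is an interpretation for illustration, not the paper's exact algorithm; the plant map and rates are assumptions.

```python
import numpy as np

def memberships(x, centers, width=1.5):
    m = np.exp(-((centers - x)/width)**2)    # Gaussian fuzzy memberships
    return m/m.sum()

centers = np.linspace(-3, 3, 7)              # fuzzy rule centres over the input range
theta = np.zeros(7)                          # one local output per rule
mu = 0.8                                     # base learning rate
rng = np.random.default_rng(0)
for t in range(2000):
    x = 3*np.sin(0.01*t)                     # slowly varying operating point
    target = np.tanh(x) + 0.01*rng.standard_normal()   # unknown plant output
    m = memberships(x, centers)
    e = target - m @ theta                   # fuzzy model prediction error
    theta += mu*e*m/(m @ m + 1e-8)           # membership-weighted normalized update
```

Because the memberships vanish away from the current operating point, prolonged operation at one setpoint leaves distant rules untouched, which is one way to limit the "learning interference" discussed above.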
The Lagrangian particle dispersion model FLEXPART version 10
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Sollum, Espen; Grythe, Henrik; Kristiansen, Nina; Cassiani, Massimo; Eckhardt, Sabine; Thompson, Rona; Groot Zwaaftnik, Christine; Evangeliou, Nikolaos; Hamburger, Thomas; Sodemann, Harald; Haimberger, Leopold; Henne, Stephan; Brunner, Dominik; Burkhart, John; Fouilloux, Anne; Fang, Xuekun; Phillip, Anne; Seibert, Petra; Stohl, Andreas
2017-04-01
The Lagrangian particle dispersion model FLEXPART was designed in its first release in 1998 for calculating the long-range and mesoscale dispersion of air pollutants from point sources, such as after an accident in a nuclear power plant. The model has since evolved into a comprehensive tool for atmospheric transport modelling and analysis. Its application fields have been extended to a range of atmospheric transport processes for both atmospheric gases and aerosols, e.g. greenhouse gases, short-lived climate forcers like black carbon, volcanic ash and gases, as well as studies of the water cycle. We present the newest release, FLEXPART version 10. Since the last publication fully describing FLEXPART (version 6.2), the model code has been parallelised to allow for faster computation. A new, more detailed gravitational settling parametrisation for aerosols was implemented, and the wet deposition scheme for aerosols has been heavily modified and updated to provide a more accurate representation of this physical process. In addition, an optional new turbulence scheme for the convective boundary layer is available that considers the skewness in the vertical velocity distribution. Also, temporal variation and temperature dependence of the OH reaction are included. Finally, user input files have been updated to a more convenient and user-friendly namelist format, and the option to produce the output files in netCDF format instead of binary format has been implemented. We present these new developments and show recent model applications. Moreover, we also introduce some tools for the preparation of the meteorological input data, as well as for the processing of FLEXPART output data.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed.
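Step (2) can be made concrete on a toy system: tune the uncertain stiffnesses of a two-degree-of-freedom spring-mass "FE" model so its natural frequencies match the identified ones (all values below are assumptions).

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([2.0, 1.0])                      # known mass matrix (kg)
f_meas = np.array([1.1, 3.4])                # frequencies identified from ambient vibration (Hz)

def freqs(k):                                # natural frequencies of a 2-DOF chain
    K = np.array([[k[0] + k[1], -k[1]], [-k[1], k[1]]])
    lam = eigh(K, M, eigvals_only=True)      # generalized eigenvalue problem
    return np.sqrt(lam)/(2*np.pi)

cost = lambda k: np.sum(((freqs(k) - f_meas)/f_meas)**2)
res = minimize(cost, x0=[500.0, 300.0], method="Nelder-Mead")
print(res.x, freqs(res.x))                   # updated stiffnesses and matched frequencies
```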
76 FR 42750 - National Science Board: Sunshine Act Meetings; Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-19
...) Update NSB Information Item: Network for Earthquake Engineering Simulation (NEES) Update NSB Information... Teleconference Discussion on the Timeline, Process and Procedures for Evaluating Nominees Update on Committee...
24 CFR 902.51 - Updating of public housing unit address information.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Service and Satisfaction § 902.51 Updating of public housing unit address information. (a) Electronic updating. The survey process for the Resident Service and Satisfaction Indicator is dependent upon... any points for the PHAS Resident Service and Satisfaction Indicator. (c) Electronic updating of the...
Normal response function method for mass and stiffness matrix updating using complex FRFs
NASA Astrophysics Data System (ADS)
Pradhan, S.; Modak, S. V.
2012-10-01
Quite often a structural dynamic finite element model needs to be updated so as to accurately predict dynamic characteristics like the natural frequencies and mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of the mass and stiffness matrices. However, the problem with FRF-based methods for updating the mass and stiffness matrices is that they are based on the use of complex FRFs. Using complex FRFs to update the mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where updating of only the mass and stiffness matrices using FRFs is required, a formulation based on complex FRFs is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF-based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method, which is based on the complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and of noise on the robustness of the method are investigated. The results of updating obtained by the improved method are compared with those of the existing response function method. The performance of the two approaches is compared for cases of lightly, moderately and heavily damped structures. It is found that the proposed improved method is effective in updating the mass and stiffness matrices in all cases of complete and incomplete data and at all levels and types of damping.
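The distinction the paper draws can be shown numerically: the normal FRF is built from the mass and stiffness matrices alone, whereas the measured complex FRF also contains the damping matrix. A small sketch with assumed matrices:

```python
import numpy as np

M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])
C = 0.002*K                                   # proportional damping (assumed)

def normal_frf(w):                            # real-valued, depends on M and K only
    return np.linalg.inv(K - w**2*M)

def complex_frf(w):                           # what a shaker test actually measures
    return np.linalg.inv(K + 1j*w*C - w**2*M)

for w in (10.0, 25.0, 40.0):                  # rad/s
    print(w, normal_frf(w)[0, 0], complex_frf(w)[0, 0])
```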
Dynamic reduction of dimensions of a document vector in a document search and retrieval system
Jiao, Yu; Potok, Thomas E.
2011-05-03
The method and system of the invention involve processing each new document (20) coming into the system into a document vector (16), and creating a document vector with reduced dimensionality (17) for comparison with the data model (15) without recomputing the data model (15). These operations are carried out by a first computer (11) while a second computer (12) updates the data model (18), which can be comprised of an initial large group of documents (19) and is premised on computing an initial data model (13, 14, 15) to provide a reference point for determining document vectors from documents processed from the data stream (20).
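A hedged sketch of the underlying idea, using a truncated SVD as the "data model": the reduction basis is computed once from the initial corpus, and each new document vector is folded into that fixed basis without recomputing the decomposition. The patent's actual reduction method may differ; sizes are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.random((200, 2000))              # term-frequency vectors (toy corpus)
U, s, Vt = np.linalg.svd(corpus, full_matrices=False)
basis = Vt[:50]                               # fixed 50-dimensional reduction basis

def reduce_new_document(doc_vec):
    # fold-in: project onto the existing basis, no model recomputation
    return basis @ doc_vec

print(reduce_new_document(rng.random(2000)).shape)   # -> (50,)
```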
NEMS Freight Transportation Module Improvement Study
2015-01-01
The U.S. Energy Information Administration (EIA) contracted with IHS Global, Inc. (IHS) to analyze the relationship between the value of industrial output, physical output, and freight movement in the United States for use in updating analytic assumptions and modeling structure within the National Energy Modeling System (NEMS) freight transportation module, including forecasting methodologies and processes to identify possible alternative approaches that would improve multi-modal freight flow and fuel consumption estimation.
Quality Analysis on 3D Building Models Reconstructed from UAV Imagery
NASA Astrophysics Data System (ADS)
Jarzabek-Rychard, M.; Karpina, M.
2016-06-01
Recent developments in UAV technology and structure-from-motion techniques have led to UAVs becoming standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce the labour cost of fast updates to already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted in three steps: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy better than 18 cm for the planimetric position and about 15 cm for the height component.
WebbPSF: Updated PSF Models Based on JWST Ground Testing Results
NASA Astrophysics Data System (ADS)
Osborne, Shannon; Perrin, Marshall D.; Melendez Hernandez, Marcio
2018-06-01
WebbPSF is a widely-used package that allows astronomers to create simulated point spread functions (PSFs) for the James Webb Space Telescope (JWST). WebbPSF provides the user with the flexibility to produce PSFs for direct imaging and coronagraphic modes, for a range of filters and masks, and across all the JWST instruments. These PSFs can then be analyzed with built-in evaluation tools or can be output for use with users’ own tools. In the most recent round of updates, the accuracy of the PSFs has been improved with updated analyses of the instrument test data from NASA Goddard and with the new data from the testing of the combined Optical Telescope Element and Integrated Science Instrument Module (OTIS) at NASA Johnson. A post-processing function applying detector effects and pupil distortions to input PSFs has also been added to the WebbPSF package.
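Typical usage looks like the following, assuming the webbpsf package and its reference data files are installed (API as documented for the package):

```python
import webbpsf

nrc = webbpsf.NIRCam()                # instrument model with the updated data
nrc.filter = "F200W"
psf = nrc.calc_psf(fov_arcsec=4.0, oversample=4)   # returns a FITS HDUList
psf.writeto("nircam_f200w_psf.fits", overwrite=True)
```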
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as the experimental specimen, with operational modal analysis techniques utilised to identify the modal properties of the system. By modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions, enabling the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and information on parameter uncertainty via fuzzy updating.
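The sub-level (alpha-cut) propagation can be illustrated with a one-parameter toy model: a triangular fuzzy measured frequency is mapped, cut by cut, into an interval of updated stiffness values. Because the map is monotonic, the interval endpoints suffice; with a non-monotonic model each cut would require the min/max searches that the firefly and virus optimisation algorithms perform. All numbers are assumptions.

```python
import numpy as np

def tri_cut(lo, peak, hi, a):
    # interval of a triangular fuzzy number at membership level a
    return lo + a*(peak - lo), hi - a*(hi - peak)

freq_of_k = lambda k: np.sqrt(k/2.0)/(2*np.pi)   # SDOF model, assumed mass 2 kg
k_from_f = lambda f: 2.0*(2*np.pi*f)**2          # its (monotonic) inverse

for a in (0.0, 0.5, 1.0):
    f_lo, f_hi = tri_cut(9.5, 10.0, 10.8, a)     # fuzzy measured frequency (Hz)
    k_lo, k_hi = k_from_f(f_lo), k_from_f(f_hi)  # fuzzy updated stiffness (N/m)
    print(f"alpha={a}: k in [{k_lo:.0f}, {k_hi:.0f}]")
```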
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
NASA Astrophysics Data System (ADS)
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually problematic. For shallow subsurface applications, the characterization can be very complicated given the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is therefore warranted to reduce the dimension of the parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to obtain the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance for geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties; we then consider the memory function as a new prior and generate samples from it for further updating when more geophysical data are available. We applied this approach for deep oil reservoir characterization and for shallow subsurface flow monitoring. The model reduction approach reliably helps reduce the joint seismic/EM/radar inversion computational time to reasonable levels. Continuous inversion images are obtained using time-lapse data with the “memory function” applied in the Bayesian inversion.
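The "memory function" idea can be sketched as a grid-based sequential Bayesian update in which each posterior is stored and reused as the prior when the next survey arrives; the porosity grid and Gaussian likelihood below are illustrative assumptions.

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 201)           # grid over a soil property (e.g. porosity)
memory = np.ones_like(theta)/len(theta)      # flat initial prior

def update(prior, datum, noise=0.05):
    like = np.exp(-0.5*((datum - theta)/noise)**2)
    post = prior*like
    return post/post.sum()

for survey in ([0.32, 0.35], [0.30], [0.33, 0.31, 0.34]):   # arriving data batches
    for d in survey:
        memory = update(memory, d)           # posterior becomes the new prior
print(theta[np.argmax(memory)])              # current best estimate
```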
NASA Technical Reports Server (NTRS)
Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian
2008-01-01
The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study of the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 version of the CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.
Artificial Boundary Conditions for Finite Element Model Update and Damage Detection
2017-03-01
Master's thesis by Emmanouil Damanakis, March 2017; thesis advisor: Joshua H. Gordis. Approved for public release; distribution is unlimited. In structural engineering, a finite element model is often…
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are, unavoidably, corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
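For reference, the classical estimators can be formed from Welch spectral estimates; in the standard analysis H1 is biased low by noise on the input and H2 is biased high by noise on the output. A self-contained sketch with an assumed system and noise levels:

```python
import numpy as np
from scipy.signal import csd, welch, lfilter

rng = np.random.default_rng(0)
fs, n = 1024, 2**16
x = rng.standard_normal(n)                       # broadband excitation
y = lfilter([0.05], [1.0, -1.6, 0.7], x)         # simple resonant system
xm = x + 0.1*rng.standard_normal(n)              # measured input (noisy)
ym = y + 0.1*rng.standard_normal(n)              # measured response (noisy)

f, Sxx = welch(xm, fs=fs, nperseg=2048)
_, Syy = welch(ym, fs=fs, nperseg=2048)
_, Sxy = csd(xm, ym, fs=fs, nperseg=2048)
H1 = Sxy/Sxx                                     # biased low by input noise
H2 = Syy/np.conj(Sxy)                            # biased high by output noise
```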
Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.
We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.
In-situ biogas upgrading process: Modeling and simulations aspects.
Lovato, Giovanna; Alvarado-Morales, Merlin; Kovalovszki, Adam; Peprah, Maria; Kougias, Panagiotis G; Rodrigues, José Alberto Domingues; Angelidaki, Irini
2017-12-01
Biogas upgrading processes by in-situ hydrogen (H2) injection are still challenging and could benefit from a mathematical model to predict system performance. Therefore, a previous model of anaerobic digestion was updated and expanded to include the effect of H2 injection into the liquid phase of a fermenter, with the aim of modeling and simulating these processes. This was done by including hydrogenotrophic methanogen kinetics for H2 consumption and an inhibition effect on the acetogenic steps. Special attention was paid to gas-to-liquid transfer of H2. The final model was successfully validated on a set of case studies. Biogas composition and H2 utilization were correctly predicted, with overall deviation below 10% compared to experimental measurements. Parameter sensitivity analysis revealed that the model is highly sensitive to the H2 injection rate and the mass transfer coefficient. The model developed is an effective tool for predicting process performance in scenarios with biogas upgrading.
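The gas-to-liquid transfer term that dominates the sensitivity analysis can be caricatured with a single ODE for dissolved H2: a kLa-driven injection term balanced by lumped first-order uptake by the hydrogenotrophic methanogens. All rate constants are assumptions, not the paper's calibrated values.

```python
from scipy.integrate import solve_ivp

kla = 120.0      # 1/d, gas-liquid mass transfer coefficient (assumed)
h2_sat = 1.4e-3  # kg/m3, H2 saturation concentration at reactor conditions (assumed)
k_cons = 40.0    # 1/d, lumped first-order hydrogenotrophic uptake rate (assumed)

def rhs(t, y):
    h2 = y[0]
    return [kla*(h2_sat - h2) - k_cons*h2]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0])
print(sol.y[0, -1])   # approaches the balance point kla*h2_sat/(kla + k_cons)
```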
NASA Astrophysics Data System (ADS)
Frey, M. P.; Stamm, C.; Schneider, M. K.; Reichert, P.
2011-12-01
A distributed hydrological model was used to simulate the distribution of fast runoff formation as a proxy for critical source areas for herbicide pollution in a small agricultural catchment in Switzerland. We tested to what degree predictions based on prior knowledge without local measurements could be improved by relying on observed discharge. This learning process consisted of five steps: for the prior prediction (step 1), knowledge of the model parameters was coarse and predictions were fairly uncertain. In the second step, discharge data were used to update the prior parameter distribution. Effects of uncertainty in input data and model structure were accounted for by an autoregressive error model. This step decreased the width of the marginal distributions of parameters describing the lower boundary (percolation rates) but hardly affected the soil hydraulic parameters. Residual analysis (step 3) revealed model structure deficits. We modified the model, and in the subsequent Bayesian updating (step 4) the widths of the posterior marginal distributions were reduced for most parameters compared to those of the prior. This incremental procedure led to a strong reduction in the uncertainty of the spatial prediction. Thus, although only spatially integrated data (discharge) were used, the spatially distributed effect of the improved model structure can be expected to improve the spatially distributed predictions as well. The fifth step consisted of a test with independent spatial data on herbicide losses and revealed ambiguous results. The comparison depended critically on the ratio of event to pre-event water that was discharged. This ratio cannot be estimated from hydrological data alone. The results demonstrate that the value of local data strongly depends on a correct model structure. An iterative procedure of Bayesian updating, model testing, and model modification is suggested.
NASA Astrophysics Data System (ADS)
McAllister, M.; Gochis, D.; Dugger, A. L.; Karsten, L. R.; McCreight, J. L.; Pan, L.; Rafieeinasab, A.; Read, L. K.; Sampson, K. M.; Yu, W.
2017-12-01
The community WRF-Hydro modeling system is publicly available and provides researchers and operational forecasters a flexible and extensible capability for multi-scale, multi-physics hydrologic modeling that can be run independently of, or fully interactively with, the WRF atmospheric model. The core WRF-Hydro physics model contains very high-resolution descriptions of terrestrial hydrologic process representations such as land-atmosphere exchanges of energy and moisture, snowpack evolution, infiltration, terrain routing, channel routing, basic reservoir representation and hydrologic data assimilation. Complementing the core physics components of WRF-Hydro is an ecosystem of pre- and post-processing tools that facilitate the preparation of terrain and meteorological input data, an open-source hydrologic model evaluation toolset (Rwrfhydro), hydrologic data assimilation capabilities with DART and advanced model visualization capabilities. The National Center for Atmospheric Research (NCAR), through collaborative support from the National Science Foundation and other funding partners, provides community support for the entire WRF-Hydro system through a variety of mechanisms. This presentation summarizes the enhanced user support capabilities being developed for the community WRF-Hydro modeling system. These products and services include a new website, open-source code repositories, documentation and user guides, test cases, online training materials, live hands-on training sessions, an email listserv, and individual user support via a new help desk ticketing system. The WRF-Hydro modeling system and supporting tools, which now include re-gridding scripts and model calibration utilities, have recently been updated to Version 4 and are converging toward the capabilities of the National Water Model.
A review and update of the Virginia Department of Transportation cash flow forecasting model.
DOT National Transportation Integrated Search
1996-01-01
This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...
A revised Self- and Family Management Framework.
Grey, Margaret; Schulman-Green, Dena; Knafl, Kathleen; Reynolds, Nancy R
2015-01-01
Research on self- and family management of chronic conditions has advanced over the past 6 years, but the use of simple frameworks has hampered the understanding of the complexities involved. We sought to update our previously published model with new empirical, synthetic, and theoretical work. We used synthesis of previous studies to update the framework. We propose a revised framework that clarifies facilitators and barriers, processes, proximal outcomes, and distal outcomes of self- and family management and their relationships. We offer the revised framework as a model that can be used in studies aimed at advancing self- and family management science. The use of the framework to guide studies would allow for the design of studies that can address more clearly how self-management interventions work and under what conditions.
Numerical Modeling of Electroacoustic Logging Including Joule Heating
NASA Astrophysics Data System (ADS)
Plyushchenkov, Boris D.; Nikitin, Anatoly A.; Turchaninov, Victor I.
It is well known that an electromagnetic field excites acoustic waves in a porous elastic medium saturated with a fluid electrolyte due to the electrokinetic conversion effect. Pride's equations describing this process are written in the isothermal approximation. An update of these equations that takes the influence of Joule heating on acoustic wave propagation into account is proposed here. This update includes terms describing the initiation of additional acoustic waves excited by thermoelastic stresses, and the heat conduction equation with a right-hand side defined by Joule heating. Results of numerical modeling of several problems of acoustic wave propagation excited by an electric field source, in statements with and without the Joule heating effect, are presented. From these results, it follows that the influence of Joule heating should be taken into account in the numerical simulation of electroacoustic logging and in the interpretation of its log data.
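The added heat-conduction equation with its Joule source can be caricatured in one dimension with an explicit finite-difference step for dT/dt = alpha*d2T/dx2 + q/(rho*c), q = sigma*E^2; material constants and field strength are assumed, and the acoustic coupling is omitted.

```python
import numpy as np

nx, dx, dt = 101, 0.01, 1e-4              # grid and time step (stable: alpha*dt/dx^2 << 0.5)
alpha, rho_c, sigma = 1e-4, 2.0e6, 5.0    # diffusivity, volumetric heat capacity, conductivity
E = np.zeros(nx); E[40:60] = 1000.0       # electric field near the source (V/m)
q = sigma*E**2                            # Joule heating rate (W/m3)
T = np.zeros(nx)                          # temperature rise, ends held fixed
for _ in range(2000):
    T[1:-1] += dt*(alpha*(T[2:] - 2*T[1:-1] + T[:-2])/dx**2 + q[1:-1]/rho_c)
print(T.max())
```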
Closing the loop on improvement: Packaging experience in the Software Engineering Laboratory
NASA Technical Reports Server (NTRS)
Waligora, Sharon R.; Landis, Linda C.; Doland, Jerry T.
1994-01-01
As part of its award-winning software process improvement program, the Software Engineering Laboratory (SEL) has developed an effective method for packaging organizational best practices based on real project experience into useful handbooks and training courses. This paper shares the SEL's experience over the past 12 years creating and updating software process handbooks and training courses. It provides cost models and guidelines for successful experience packaging derived from SEL experience.
The Effect of Improved Sub-Daily Earth Rotation Models on Global GPS Data Processing
NASA Astrophysics Data System (ADS)
Yoon, S.; Choi, K. K.
2017-12-01
Throughout the various International GNSS Service (IGS) products, strong periodic signals have been observed around a 14-day period. This signal is clearly visible in all IGS time series, such as those related to orbit ephemerides, Earth rotation parameters (ERP) and ground station coordinates. Recent studies show that errors in the sub-daily Earth rotation models are the main factors inducing such noise. Current IGS orbit processing standards adopted the IERS 2010 convention and its sub-daily Earth rotation model. Since the IERS convention was published, advances in VLBI analysis have contributed to updating the sub-daily Earth rotation models. We have compared several proposed sub-daily Earth rotation models and show the effect of using those models on the orbit ephemerides, Earth rotation parameters and ground station coordinates generated by the NGS global GPS data processing strategy.
Processes of Discourse Integration: Evidence from Event-Related Brain Potentials
ERIC Educational Resources Information Center
Ferretti, Todd R.; Singer, Murray; Harwood, Jenna
2013-01-01
We used ERP methodology to investigate how readers validate discourse concepts and update situation models when those concepts followed factive (e.g., "knew") and nonfactive (e.g., "guessed") verbs, and also when they were true, false, or indeterminate with reference to previous discourse. Following factive verbs, early (P2) and later brain…
ERIC Educational Resources Information Center
Botvinick, Matthew; Plaut, David C.
2004-01-01
In everyday tasks, selecting actions in the proper sequence requires a continuously updated representation of temporal context. Previous models have addressed this problem by positing a hierarchy of processing units, mirroring the roughly hierarchical structure of naturalistic tasks themselves. The present study considers an alternative framework,…
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagwell, L.; Bennett, P.; Flach, G.
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
Density profiles of the exclusive queuing process
NASA Astrophysics Data System (ADS)
Arita, Chikashi; Schadschneider, Andreas
2012-12-01
The exclusive queuing process (EQP) incorporates the exclusion principle into classic queuing models. It is characterized by, in addition to the entrance probability α and exit probability β, a third parameter: the hopping probability p. The EQP can be interpreted as an exclusion process of variable system length. Its phase diagram in the parameter space (α,β) is divided into a convergent phase and a divergent phase by a critical line which consists of a curved part and a straight part. Here we extend previous studies of this phase diagram. We identify subphases in the divergent phase, which can be distinguished by means of the shape of the density profile, and determine the velocity of the system length growth. This is done for EQPs with different update rules (parallel, backward sequential and continuous time). We also investigate the dynamics of the system length and the number of customers on the critical line. They are diffusive or subdiffusive with non-universal exponents that also depend on the update rules.
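The EQP dynamics described above are simple enough to sketch directly. Below is a minimal simulation of one parallel-update sweep, assuming a convention in which site 0 is the service end and a new customer joins immediately behind the last occupied site; these boundary details are our reading of the model, not the paper's exact formulation.

```python
import random

def eqp_parallel_step(sites, alpha, beta, p):
    """One parallel-update sweep of the exclusive queuing process (EQP).

    `sites` is a list of 0/1 occupancies; index 0 is the service (exit) end.
    All moves are decided from the old configuration and applied at once.
    """
    old = list(sites)
    new = [0] * len(old)
    for i, occupied in enumerate(old):
        if not occupied:
            continue
        if i == 0:
            if random.random() >= beta:   # customer in service stays with prob 1 - beta
                new[0] = 1
        elif old[i - 1] == 0 and random.random() < p:
            new[i - 1] = 1                # hop one site toward the front
        else:
            new[i] = 1                    # blocked, or chose not to hop
    if random.random() < alpha:           # a new customer joins at the rear
        rear = max((i for i, occ in enumerate(new) if occ), default=-1)
        if rear + 1 == len(new):
            new.append(1)
        else:
            new[rear + 1] = 1
    while new and new[-1] == 0:           # system length ends at the last customer
        new.pop()
    return new
```

Tracking len(sites) over many sweeps distinguishes the convergent phase (bounded length) from the divergent phase (linear growth at the velocity studied in the paper).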
A Mismatch-Based Model for Memory Reconsolidation and Extinction in Attractor Networks
Amaral, Olavo B.
2011-01-01
The processes of memory reconsolidation and extinction have received increasing attention in recent experimental research, as their potential clinical applications begin to be uncovered. A number of studies suggest that amnestic drugs injected after reexposure to a learning context can disrupt either of the two processes, depending on the behavioral protocol employed. Hypothesizing that reconsolidation represents updating of a memory trace in the hippocampus, while extinction represents formation of a new trace, we have built a neural network model in which either simple retrieval, reconsolidation or extinction of a stored attractor can occur upon contextual reexposure, depending on the similarity between the representations of the original learning and reexposure sessions. This is achieved by assuming that independent mechanisms mediate Hebbian-like synaptic strengthening and mismatch-driven labilization of synaptic changes, with protein synthesis inhibition preferentially affecting the former. Our framework provides a unified mechanistic explanation for experimental data showing (a) the effect of reexposure duration on the occurrence of reconsolidation or extinction and (b) the requirement of memory updating during reexposure to drive reconsolidation. PMID:21826231
Support vector machine incremental learning triggered by wrongly predicted samples
NASA Astrophysics Data System (ADS)
Tang, Ting-long; Guan, Qiu; Wu, Yi-rong
2018-05-01
According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and may migrate old samples between the SV set and the non-support-vector (NSV) set, at which point the learning model should be updated based on the SVs. However, it is not clear in advance which of the old samples will migrate between the SV and NSV sets. Additionally, the learning model may be updated unnecessarily, which does little to increase its accuracy but decreases the training speed. Therefore, how the new SVs are chosen from the old sets during the incremental stages, and when the incremental steps are processed, greatly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance with high efficiency, high speed and good accuracy.
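As a rough illustration of this triggering idea (not the authors' exact algorithm), the sketch below retrains a scikit-learn SVC only when a new sample is mispredicted, using the current support vectors plus the new sample as the candidate training set; the class name and the candidate-reduction rule are our own simplifications.

```python
import numpy as np
from sklearn.svm import SVC

class ErrorTriggeredSVM:
    """Incremental SVM sketch: mispredicted samples trigger retraining."""

    def __init__(self, **svc_kwargs):
        self.model = SVC(kernel="rbf", **svc_kwargs)

    def fit_initial(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)
        self.model.fit(self.X, self.y)

    def observe(self, x, y_true):
        """Run an incremental step only when the model mispredicts x."""
        x = np.asarray(x, float).reshape(1, -1)
        if self.model.predict(x)[0] == y_true:
            return False                           # prediction correct: no update
        sv = self.model.support_                   # candidate SVs from the old model
        self.X = np.vstack([self.X[sv], x])
        self.y = np.append(self.y[sv], y_true)
        self.model.fit(self.X, self.y)             # retrain on the reduced set
        return True
```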
Development of landsat-5 thematic mapper internal calibrator gain and offset table
Barsi, J.A.; Chander, G.; Micijevic, E.; Markham, B.L.; Haque, Md. O.
2008-01-01
The National Landsat Archive Production System (NLAPS) has been the primary processing system for Landsat data since the U.S. Geological Survey (USGS) Earth Resources Observation and Science Center (EROS) started archiving Landsat data. NLAPS converts raw satellite data into radiometrically and geometrically calibrated products. NLAPS has historically used the Internal Calibrator (IC) to calibrate the reflective bands of the Landsat-5 Thematic Mapper (TM), even though the lamps in the IC were less stable than the TM detectors, as evidenced by vicarious calibration results. In 2003, a major effort was made to model the actual TM gain change and to update NLAPS to use this model rather than the unstable IC data for radiometric calibration. The model coefficients were revised in 2007 to reflect greater understanding of the changes in the TM responsivity. While the calibration updates are important to users with recently processed data, the processing system no longer calculates the original IC gain or offset. For specific applications, it is useful to have a record of the gain and offset actually applied to the older data. Thus, the NLAPS calibration database was used to generate estimated daily values for the radiometric gain and offset that might have been applied to TM data. This paper discusses the need for and generation of the NLAPS IC gain and offset tables. A companion paper covers the application of and errors associated with using these tables.
Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.
2018-04-01
In this paper, a finite element (FE) joint modelling technique for the prediction of dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. I cross-section dissimilar flat plates of two series of aluminium alloy, AA7075 and AA6061, joined by TIG welding are used. In order to find the most optimum representation of the TIG-welded dissimilar plate, FE models with three types of joint modelling were engaged in this study: bar element (CBAR), beam element and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the TIG joints because it predicted the mode shapes accurately and, unlike the other joint models, contains an updating parameter for weld modelling. Model updating was performed to improve the correlation between EMA and FEA; before proceeding to updating, a sensitivity analysis was carried out to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model improved significantly.
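The sensitivity-screening step mentioned above can be pictured with a simple finite-difference ranking: perturb each candidate updating parameter, re-solve for the natural frequencies, and keep the parameters that move them most. The call `solve_frequencies` is a hypothetical stand-in for the FE modal solve, and the scoring rule is illustrative.

```python
import numpy as np

def rank_updating_parameters(solve_frequencies, theta, rel_step=0.01):
    """Rank parameters by normalized sensitivity of the natural frequencies.

    `theta` maps parameter names to values; `solve_frequencies(theta)` is an
    assumed FE solver returning the model's natural frequencies (Hz).
    """
    f0 = np.asarray(solve_frequencies(theta))
    scores = {}
    for name, value in theta.items():
        perturbed = dict(theta, **{name: value * (1.0 + rel_step)})
        f1 = np.asarray(solve_frequencies(perturbed))
        # Percent frequency change per percent parameter change.
        scores[name] = np.linalg.norm((f1 - f0) / f0) / rel_step
    return sorted(scores.items(), key=lambda item: -item[1])
```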
Test and analysis procedures for updating math models of Space Shuttle payloads
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1991-01-01
Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed for obtaining data for use in verifying payload math models and for carrying out the updating of the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool furthered land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C
2014-01-01
Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285
Robust In-Flight Sensor Fault Diagnostics for Aircraft Engine Based on Sliding Mode Observers
Chang, Xiaodong; Huang, Jinquan; Lu, Feng
2017-01-01
For a sensor fault diagnostic system of aircraft engines, the health performance degradation is an inevitable interference that cannot be neglected. To address this issue, this paper investigates an integrated on-line sensor fault diagnostic scheme for a commercial aircraft engine based on a sliding mode observer (SMO). In this approach, one sliding mode observer is designed for engine health performance tracking, and another for sensor fault reconstruction. Both observers are employed in in-flight applications. The results of the former SMO are analyzed for post-flight updating the baseline model of the latter. This idea is practical and feasible since the updating process does not require the algorithm to be regulated or redesigned, so that ground-based intervention is avoided, and the update process is implemented in an economical and efficient way. With this setup, the robustness of the proposed scheme to the health degradation is much enhanced and the latter SMO is able to fulfill sensor fault reconstruction over the course of the engine life. The proposed sensor fault diagnostic system is applied to a nonlinear simulation of a commercial aircraft engine, and its effectiveness is evaluated in several fault scenarios. PMID:28398255
Streaming parallel GPU acceleration of large-scale filter-based spiking neural networks.
Slażyński, Leszek; Bohte, Sander
2012-01-01
The arrival of graphics processing unit (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of magnitude, the challenge is to develop fine-grained parallel algorithms that fully exploit the particulars of GPUs. Computation in a neural network is inherently parallel and thus a natural match for GPU architectures: given inputs, the internal state of each neuron can be updated in parallel. We show that for filter-based spiking neurons, like the Spike Response Model, the additive nature of membrane potential dynamics enables additional update parallelism. This also reduces the accumulation of numerical errors when using single-precision computation, the native precision of GPUs. We further show that optimizing simulation algorithms and data structures for the GPU's architecture has a large pay-off: for example, matching iterative neural updating to the memory architecture of the GPU speeds up this simulation step by a factor of three to five. With such optimizations, we can simulate plausible spiking neural networks of up to 50,000 neurons in better than real time, processing over 35 million spiking events per second.
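The additive membrane dynamics the authors exploit can be illustrated with a fully vectorized update step; NumPy serves here as a CPU stand-in for a GPU kernel, and the decay constant and threshold are illustrative values, not the paper's.

```python
import numpy as np

def filter_neuron_step(v, input_spikes, W, decay=0.95, v_thresh=1.0):
    """One parallel update of filter-based spiking neurons.

    Because each membrane potential is an additive sum of decayed filter
    states, every neuron and every synaptic contribution can be updated
    independently -- the parallelism a GPU kernel would exploit.
    """
    v = decay * v + W @ input_spikes        # decay old state, add weighted input
    fired = v >= v_thresh                   # threshold crossings emit spikes
    v = np.where(fired, 0.0, v)             # reset neurons that fired
    return v, fired.astype(float)
```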
Chemical transport model simulations of organic aerosol in ...
Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data
Updating Human Factors Engineering Guidelines for Conducting Safety Reviews of Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Hara, J.M.; Higgins, J.; Stephen Fleger - NRC
The U.S. Nuclear Regulatory Commission (NRC) reviews the human factors engineering (HFE) programs of applicants for nuclear power plant construction permits, operating licenses, standard design certifications, and combined operating licenses. The purpose of these safety reviews is to help ensure that personnel performance and reliability are appropriately supported. Detailed design review procedures and guidance for the evaluations are provided in three key documents: the Standard Review Plan (NUREG-0800), the HFE Program Review Model (NUREG-0711), and the Human-System Interface Design Review Guidelines (NUREG-0700). These documents were last revised in 2007, 2004 and 2002, respectively. The NRC is committed to the periodic update and improvement of the guidance to ensure that it remains a state-of-the-art design evaluation tool. To this end, the NRC is updating its guidance to stay current with recent research on human performance, advances in HFE methods and tools, and new technology being employed in plant and control room design. This paper describes the role of HFE guidelines in the safety review process and the content of the key HFE guidelines used. Then we will present the methodology used to develop HFE guidance and update these documents, and describe the current status of the update program.
NASA Astrophysics Data System (ADS)
Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.
2018-06-01
Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
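The mechanism behind such multivariate updates is the ensemble cross-covariance: observing one variable shifts any unobserved variable that covaries with it across the ensemble. The sketch below uses a plain stochastic EnKF update (the paper uses the deterministic EnKF, which avoids perturbed observations) purely to show where the cross-covariance enters.

```python
import numpy as np

def enkf_update(E, H, y, r):
    """Multivariate EnKF analysis step.

    E: (n_state, n_ens) ensemble; H: (n_obs, n_state) observation operator;
    y: observation vector; r: observation-error variance.
    """
    y = np.asarray(y, dtype=float)
    n_ens = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)            # state anomalies
    HA = H @ A                                       # observed-space anomalies
    P_yy = HA @ HA.T / (n_ens - 1) + r * np.eye(len(y))
    P_xy = A @ HA.T / (n_ens - 1)                    # cross-covariances couple
    K = P_xy @ np.linalg.inv(P_yy)                   # unobserved variables to obs
    Y = y[:, None] + np.sqrt(r) * np.random.randn(len(y), n_ens)
    return E + K @ (Y - H @ E)                       # shift every state variable
```

With a state vector stacking temperature and nitrate profiles, assimilating temperature alone updates nitrate wherever the two covary in the ensemble, which is exactly the thermocline-nutricline coupling the experiments exploit.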
Model updating in flexible-link multibody systems
NASA Astrophysics Data System (ADS)
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, which describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Indeed, several peculiarities of FLMS models should be carefully tackled. On the one hand, nonlinear models of an FLMS should be linearized about static equilibrium configurations. On the other, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating is finally cast as an optimization problem in the presence of bounds on the feasible values, also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
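The final step described above, updating cast as a bounded optimization, can be sketched as a bounded least-squares fit between measured and predicted modal data; `predict_modal_data` is a hypothetical stand-in for the linearized FLMS model.

```python
import numpy as np
from scipy.optimize import least_squares

def update_flms_parameters(predict_modal_data, measured, theta0, lower, upper):
    """Fit inertial/elastic parameters within physical bounds.

    `predict_modal_data(theta)` is assumed to return the linearized model's
    modal quantities (e.g., natural frequencies and sampled mode shapes)
    as a flat array comparable with `measured`.
    """
    def residual(theta):
        return (predict_modal_data(theta) - measured) / measured  # relative error

    result = least_squares(residual, theta0, bounds=(lower, upper))
    return result.x
```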
The fate of memory: Reconsolidation and the case of Prediction Error.
Fernández, Rodrigo S; Boccia, Mariano M; Pedreira, María E
2016-09-01
The ability to make predictions based on stored information is a general coding strategy. A Prediction Error (PE) is a mismatch between expected and current events, and has been proposed as the process by which memories are acquired. But our memories, like ourselves, are subject to change. Thus, an acquired memory can become active and update its content or strength through a labilization-reconsolidation process. Within the reconsolidation framework, PE drives the updating of consolidated memories. Moreover, memory features, such as strength and age, are crucial boundary conditions that limit the initiation of the reconsolidation process. In order to disentangle these boundary conditions, we review the role of surprise, classical models of conditioning, and their neural correlates. Several forms of PE were found to be capable of inducing memory labilization-reconsolidation. Notably, many of the PE findings mirror those of memory reconsolidation, suggesting a strong link between these signals and memory processes. Altogether, the aim of the present work is to integrate a psychological and neuroscientific analysis of PE into a general framework for memory reconsolidation.
Kozar, Mark D.; Kahle, Sue C.
2013-01-01
This report documents the standard procedures, policies, and field methods used by the U.S. Geological Survey’s (USGS) Washington Water Science Center staff for activities related to the collection, processing, analysis, storage, and publication of groundwater data. This groundwater quality-assurance plan changes through time to accommodate new methods and requirements developed by the Washington Water Science Center and the USGS Office of Groundwater. The plan is based largely on requirements and guidelines provided by the USGS Office of Groundwater, or the USGS Water Mission Area. Regular updates to this plan represent an integral part of the quality-assurance process. Because numerous policy memoranda have been issued by the Office of Groundwater since the previous groundwater quality assurance plan was written, this report is a substantial revision of the previous report, supplants it, and contains significant additional policies not covered in the previous report. This updated plan includes information related to the organization and responsibilities of USGS Washington Water Science Center staff, training, safety, project proposal development, project review procedures, data collection activities, data processing activities, report review procedures, and archiving of field data and interpretative information pertaining to groundwater flow models, borehole aquifer tests, and aquifer tests. Important updates from the previous groundwater quality assurance plan include: (1) procedures for documenting and archiving of groundwater flow models; (2) revisions to procedures and policies for the creation of sites in the Groundwater Site Inventory database; (3) adoption of new water-level forms to be used within the USGS Washington Water Science Center; (4) procedures for future creation of borehole geophysics, surface geophysics, and aquifer-test archives; and (5) use of the USGS Multi Optional Network Key Entry System software for entry of routine water-level data collected as part of long-term water-level monitoring networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mobrand, Lars Erik; Lestelle, Lawrence C.
In the spring of 1994 a technical planning support project was initiated by the Grande Ronde Model Watershed Board of Directors (Board) with funding from the Bonneville Power Administration. The project was motivated by a need for a science based method for prioritizing restoration actions in the basin that would promote effectiveness and accountability. In this section the authors recall the premises for the project. The authors also present a set of recommendations for implementing a watershed planning process that incorporates a science-based framework to help guide decision making. This process is intended to assist the Grande Ronde Model Watershedmore » Board in its effort to plan and implement watershed improvement measures. The process would also assist the Board in coordinating its efforts with other entities in the region. The planning process is based on an approach for developing an ecosystem management strategy referred to as the Ecosystem Diagnosis and Treatment (EDT) method (Lichatowich et al. 1995, Lestelle et al. 1996). The process consists of an on-going planning cycle. Included in this cycle is an assessment of the ability of the watershed to support and sustain natural resources and other economic and societal values. This step in the process, which the authors refer to as the diagnosis, helps guide the development of actions (also referred to as treatments) aimed at improving the conditions of the watershed to achieve long-term objectives. The planning cycle calls for routinely reviewing and updating, as necessary, the basis for the diagnosis and other analyses used by the Board in adopting actions for implementation. The recommendations offered here address this critical need to habitually update the information used in setting priorities for action.« less
The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...
H2-based star formation laws in hierarchical models of galaxy formation
NASA Astrophysics Data System (ADS)
Xie, Lizhi; De Lucia, Gabriella; Hirschmann, Michaela; Fontanot, Fabio; Zoldan, Anna
2017-07-01
We update our recently published model for GAlaxy Evolution and Assembly (GAEA) to include a self-consistent treatment of the partition of cold gas into atomic and molecular hydrogen. Our model provides significant improvements with respect to previous ones used for similar studies. In particular, GAEA (i) includes a sophisticated chemical enrichment scheme accounting for the non-instantaneous recycling of gas, metals and energy; (ii) reproduces the measured evolution of the galaxy stellar mass function; and (iii) reasonably reproduces the observed correlation between galaxy stellar mass and gas metallicity at different redshifts. These are important prerequisites for models considering a metallicity-dependent efficiency of molecular gas formation. We also update our model for disc sizes and show that model predictions are in good agreement with observational estimates for the gas, stellar and star-forming discs at different cosmic epochs. We analyse the influence of different star formation laws, including empirical relations based on the hydrostatic pressure of the disc, analytic models and prescriptions derived from detailed hydrodynamical simulations. We find that modifying the star formation law does not significantly affect the global properties of model galaxies, nor their distributions. The only quantity showing significant deviations between models is the cosmic molecular-to-atomic hydrogen ratio, particularly at high redshift. Unfortunately, however, this quantity also depends strongly on the modelling adopted for additional physical processes. Useful constraints on the physical processes regulating star formation can be obtained by focusing on low-mass galaxies and/or higher redshifts, where self-regulation has not yet washed out differences imprinted at early times.
Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model
NASA Technical Reports Server (NTRS)
Boone, Spencer
2017-01-01
This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
Attentional focus affects how events are segmented and updated in narrative reading.
Bailey, Heather R; Kurby, Christopher A; Sargent, Jesse Q; Zacks, Jeffrey M
2017-08-01
Readers generate situation models representing described events, but the nature of these representations may differ depending on the reading goals. We assessed whether instructions to pay attention to different situational dimensions affect how individuals structure their situation models (Exp. 1) and how they update these models when situations change (Exp. 2). In Experiment 1, participants read and segmented narrative texts into events. Some readers were oriented to pay specific attention to characters or space. Sentences containing character or spatial-location changes were perceived as event boundaries, particularly if the reader was oriented to characters or space, respectively. In Experiment 2, participants read narratives and responded to recognition probes throughout the texts. Readers who were oriented to the spatial dimension were more likely to update their situation models at spatial changes; all readers tracked the character dimension. The results from both experiments indicated that attention to individual situational dimensions influences how readers segment and update their situation models. More broadly, the results provide evidence for a global situation model updating mechanism that serves to set up new models at important narrative changes.
Environmental modeling and recognition for an autonomous land vehicle
NASA Technical Reports Server (NTRS)
Lawton, D. T.; Levitt, T. S.; Mcconnell, C. C.; Nelson, P. C.
1987-01-01
An architecture for object modeling and recognition for an autonomous land vehicle is presented. Examples of objects of interest include terrain features, fields, roads, horizon features, trees, etc. The architecture is organized around a set of data bases for generic object models and perceptual structures, temporary memory for the instantiation of object and relational hypotheses, and a long term memory for storing stable hypotheses that are affixed to the terrain representation. Multiple inference processes operate over these databases. Researchers describe these particular components: the perceptual structure database, the grouping processes that operate over this, schemas, and the long term terrain database. A processing example that matches predictions from the long term terrain model to imagery, extracts significant perceptual structures for consideration as potential landmarks, and extracts a relational structure to update the long term terrain database is given.
Sizing the science data processing requirements for EOS
NASA Technical Reports Server (NTRS)
Wharton, Stephen W.; Chang, Hyo D.; Krupp, Brian; Lu, Yun-Chi
1991-01-01
The methodology used in the compilation and synthesis of baseline science requirements associated with the 30 + EOS (Earth Observing System) instruments and over 2,400 EOS data products (both output and required input) proposed by EOS investigators is discussed. A brief background on EOS and the EOS Data and Information System (EOSDIS) is presented, and the approach is outlined in terms of a multilayer model. The methodology used to compile, synthesize, and tabulate requirements within the model is described. The principal benefit of this approach is the reduction of effort needed to update the analysis and maintain the accuracy of the science data processing requirements in response to changes in EOS platforms, instruments, data products, processing center allocations, or other model input parameters. The spreadsheets used in the model provide a compact representation, thereby facilitating review and presentation of the information content.
Hedden, Trey; Yoon, Carolyn
2006-09-01
Recent theories have suggested that resistance to interference is a unifying principle of executive function and that individual differences in interference may be explained by executive function (M. J. Kane & R. W. Engle, 2002). Measures of executive function, memory, and perceptual speed were obtained from 121 older adults (ages 63-82). We used structural equation modeling to investigate the relationships of these constructs with interference in a working memory task. Executive function was best described as two related subcomponent processes: shifting and updating goal-relevant representations and inhibition of proactive interference. These subcomponents were distinct from verbal and visual memory and speed. Individual differences in interference susceptibility and recollection were best predicted by shifting and updating and by resistance to proactive interference, and variability in familiarity was predicted by resistance to proactive interference and speed.
Simanowski, Stefanie; Krajewski, Kristin
2017-08-10
This study assessed the extent to which executive functions (EF), according to their factor structure in 5-year-olds (N = 244), influenced early quantity-number competencies, arithmetic fluency, and mathematics school achievement throughout first and second grades. A confirmatory factor analysis resulted in updating as a first, and inhibition and shifting as a combined second factor. In the structural equation model, updating significantly affected knowledge of the number word sequence, suggesting a facilitatory effect on basic encoding processes in numerical materials that can be learnt purely by rote. Shifting and inhibition significantly influenced quantity to number word linkages, indicating that these processes promote developing a profound understanding of numbers. These results show the supportive role of specific EF for specific aspects of a numerical foundation.
Unthank, Michael D.
2013-01-01
The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides feedback between persistent model-data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF both with a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency possibly related to vapor pressure effects on canopy conductance.
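The adaptive noise estimator can be pictured as a feedback on the innovation statistics: when recent model-data deviations are persistently larger than the filter's predicted innovation variance, the process noise fed to the ensemble grows, and it shrinks in the opposite case. A minimal sketch, with the gain constant as our own illustrative choice rather than the paper's:

```python
import numpy as np

def adapt_process_noise(q, innovations, expected_var, gain=0.1):
    """Nudge process-noise variance q toward the observed innovation size.

    innovations: recent (measurement - prediction) values;
    expected_var: innovation variance the filter currently predicts.
    """
    ratio = np.mean(np.square(innovations)) / expected_var
    return q * (1.0 + gain * (ratio - 1.0))   # >1 inflates, <1 deflates
```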
NASA Astrophysics Data System (ADS)
Das, B.; Wilson, M.; Divakarla, M. G.; Chen, W.; Barnet, C.; Wolf, W.
2013-05-01
Algorithm Development Library (ADL) is a framework that mimics the operational system IDPS (Interface Data Processing Segment) currently being used to process data from instruments aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite. The satellite was launched successfully in October 2011. The Cross-track Infrared and Microwave Sounder Suite (CrIMSS) consists of the Advanced Technology Microwave Sounder (ATMS) and Cross-track Infrared Sounder (CrIS) instruments that are on board S-NPP. These instruments will also be on board JPSS (Joint Polar Satellite System), which will be launched in early 2017. The primary products of the CrIMSS Environmental Data Record (EDR) include global atmospheric vertical temperature, moisture, and pressure profiles (AVTP, AVMP and AVPP) and the Ozone IP (Intermediate Product from CrIS radiances). Several algorithm updates have recently been proposed by CrIMSS scientists, including fixes to the handling of forward modeling errors, a more conservative identification of clear scenes, indexing corrections for daytime products, and relaxed constraints between surface temperature and air temperature for daytime land scenes. We have integrated these improvements into the ADL framework. This work compares the results from the ADL emulation of the future IDPS system, incorporating all the suggested algorithm updates, with the current official processing results through qualitative and quantitative evaluations. The results show that these algorithm updates improve science product quality.
Zaĭtseva, N V; Trusov, P V; Kir'ianov, D A
2012-01-01
The mathematical concept model presented describes the accumulation of functional disorders associated with environmental factors, plays a predictive role, and is designed to assess possible effects caused by heterogeneous factors with variable exposures. Considering exposure changes together with the self-restoration process opens prospects for using the model to evaluate, analyse and manage occupational risks. To develop current theoretical approaches, the authors suggested a model considering age-related body peculiarities, systemic interactions of organs, including neuro-humoral regulation, accumulation of functional disorders due to external factors, and rehabilitation of functions during treatment. The general problem statement involves defining over a hundred unknown coefficients that characterize the speed of various processes within the body. To solve this problem, the authors used an iterative approach of successive identification, which starts from an initial approximation of the model parameters and performs subsequent updating on the basis of new theoretical and empirical knowledge.
Unifying Model-Based and Reactive Programming within a Model-Based Executive
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)
1999-01-01
Real-time, model-based deduction has recently emerged as a vital component in AI's toolbox for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.
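The belief-state update such an executive performs is, at bottom, the discrete Bayes filter over hidden modes. A minimal sketch with illustrative transition and observation tables (this shows the filtering step only, not RMPL's semantics):

```python
import numpy as np

def belief_update(belief, T, O, obs):
    """One belief-state update over hidden modes.

    belief: current P(mode); T[i, j] = P(next mode j | mode i);
    O[j, k] = P(observation k | mode j); obs: index of the observation.
    """
    predicted = belief @ T                # project belief through transitions
    posterior = predicted * O[:, obs]     # weight by observation likelihood
    return posterior / posterior.sum()    # renormalize to a distribution
```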
Dissociating Working Memory Updating and Automatic Updating: The Reference-Back Paradigm
ERIC Educational Resources Information Center
Rac-Lubashevsky, Rachel; Kessler, Yoav
2016-01-01
Working memory (WM) updating is a controlled process through which relevant information in the environment is selected to enter the gate to WM and substitute its contents. We suggest that there is also an automatic form of updating, which influences performance in many tasks and is primarily manifested in reaction time sequential effects. The goal…
Large fluctuations in anti-coordination games on scale-free graphs
NASA Astrophysics Data System (ADS)
Sabsovich, Daniel; Mobilia, Mauro; Assaf, Michael
2017-05-01
We study the influence of the complex topology of scale-free graphs on the dynamics of anti-coordination games (e.g. snowdrift games). These reference models are characterized by the coexistence (evolutionary stable mixed strategy) of two competing species, say ‘cooperators’ and ‘defectors’, and, in finite systems, by metastability and large-fluctuation-driven fixation. In this work, we use extensive computer simulations and an effective diffusion approximation (in the weak selection limit) to determine under which circumstances, depending on the individual-based update rules, the topology drastically affects the long-time behavior of anti-coordination games. In particular, we compute the variance of the number of cooperators in the metastable state and the mean fixation time when the dynamics is implemented according to the voter model (death-first/birth-second process) and the link dynamics (birth/death or death/birth at random). For the voter update rule, we show that the scale-free topology effectively renormalizes the population size and as a result the statistics of observables depend on the network’s degree distribution. In contrast, such a renormalization does not occur with the link dynamics update rule and we recover the same behavior as on complete graphs.
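To make the update rules concrete, here is a sketch of the voter (death-first/birth-second) rule on a Barabási-Albert scale-free graph, in the neutral weak-selection limit where payoffs are ignored; tracking the number of cooperators over such runs gives the metastable-state statistics discussed above.

```python
import random
import networkx as nx

def voter_step(G, strategy):
    """Death-first/birth-second: a random node copies a random neighbour."""
    node = random.choice(list(G.nodes))
    neighbour = random.choice(list(G.neighbors(node)))
    strategy[node] = strategy[neighbour]

G = nx.barabasi_albert_graph(1000, 2)                 # scale-free topology
strategy = {v: random.choice("CD") for v in G.nodes}  # cooperators/defectors
for _ in range(100_000):
    voter_step(G, strategy)
print(sum(s == "C" for s in strategy.values()), "cooperators remain")
```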
Accelerating deep neural network training with inconsistent stochastic gradient descent.
Wang, Linnan; Yang, Yi; Min, Renqiang; Chakradhar, Srimat
2017-09-01
Stochastic Gradient Descent (SGD) updates a Convolutional Neural Network (CNN) with a noisy gradient computed from a random batch, and each batch evenly updates the network once in an epoch. This scheme applies the same training effort to each batch, but it overlooks the fact that the gradient variance, induced by sampling bias and intrinsic image difference, renders different training dynamics on different batches. In this paper, we develop a new training strategy for SGD, referred to as Inconsistent Stochastic Gradient Descent (ISGD), to address this problem. The core concept of ISGD is inconsistent training, which dynamically adjusts the training effort with respect to the loss. ISGD models the training as a stochastic process that gradually reduces the mean of the batch loss, and it utilizes a dynamic upper control limit to identify a large-loss batch on the fly. ISGD stays on the identified batch to accelerate the training with additional gradient updates, and it also has a constraint to penalize drastic parameter changes. ISGD is straightforward, computationally efficient and does not require auxiliary memory. A series of empirical evaluations on real-world datasets and networks demonstrate the promising performance of inconsistent training.
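The control-limit mechanism can be condensed to a few lines: maintain a running window of batch losses, flag any batch whose loss exceeds the upper control limit, and spend extra gradient steps on it. The generic `loss` and `grad` callables and the constants are assumptions for illustration; the paper's exact limit and regularization differ in detail.

```python
import numpy as np

def isgd_epoch(w, batches, loss, grad, lr=0.01, window=50, k=3.0, extra=3):
    """One epoch of inconsistent training over (X, y) batches."""
    history = []
    for X, y in batches:
        w = w - lr * grad(w, X, y)                       # the usual SGD step
        l = loss(w, X, y)
        if len(history) >= window:
            recent = history[-window:]
            limit = np.mean(recent) + k * np.std(recent)  # dynamic control limit
            if l > limit:                                 # large-loss batch found
                for _ in range(extra):                    # stay on it a bit longer
                    w = w - lr * grad(w, X, y)
        history.append(l)
    return w
```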
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software pack- age developed at the Department of Engineering Hydrology, of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and con- ceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall- runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Mois- ture Accounting and Routing (SMAR) Model. Comprised of the above suite of mod- els, the system enables the user to calibrate each model individually, initially without updating, and it is capable also of producing combined (i.e. consensus) forecasts us- ing the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of complexity of structure, with corresponding degrees of complication in objective func- tion evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McPherson, Brian J.; Pan, Feng
2014-09-24
This report summarizes development of a coupled-process reservoir model for simulating enhanced geothermal systems (EGS) that utilize supercritical carbon dioxide as a working fluid. Specifically, the project team developed an advanced chemical kinetic model for evaluating important processes in EGS reservoirs, such as mineral precipitation and dissolution at elevated temperature and pressure, and for evaluating potential impacts on EGS surface facilities by related chemical processes. We assembled a new database for better-calibrated simulation of water/brine/rock/CO2 interactions in EGS reservoirs. This database utilizes existing kinetic and other chemical data, and we updated those data to reflect corrections for the elevated temperature and pressure conditions of EGS reservoirs.
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl
2008-01-01
The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database, version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.
Electron Collisions in our Atmosphere — How the Microscopic Drives the Macroscopic
NASA Astrophysics Data System (ADS)
Buckman, S. J.; Brunger, M. J.; Campbell, L.; Jelisavcic, M.; Petrovic, Z. Lj.
2005-05-01
Recent measurements of low-energy, absolute electron scattering cross sections for vibrational excitation of NO have been used to update the cross-section set used for modeling atmospheric auroral processes. These new cross sections, which highlight the role that intermediate negative ions (resonances) play at energies below 5 eV in mediating vibrational excitation, also indicate that electron-driven processes play an important role in the infrared (~5 μm) auroral emissions from the NO molecule.
Revised models of interstellar nitrogen isotopic fractionation
NASA Astrophysics Data System (ADS)
Wirström, E. S.; Charnley, S. B.
2018-03-01
Nitrogen-bearing molecules in cold molecular clouds exhibit a range of isotopic fractionation ratios and these molecules may be the precursors of 15N enrichments found in comets and meteorites. Chemical model calculations indicate that atom-molecular ion and ion-molecule reactions could account for most of the fractionation patterns observed. However, recent quantum-chemical computations demonstrate that several of the key processes are unlikely to occur in dense clouds. Related model calculations of dense cloud chemistry show that the revised 15N enrichments fail to match observed values. We have investigated the effects of these reaction rate modifications on the chemical model of Wirström et al. (2012) for which there are significant physical and chemical differences with respect to other models. We have included 15N fractionation of CN in neutral-neutral reactions and also updated rate coefficients for key reactions in the nitrogen chemistry. We find that the revised fractionation rates have the effect of suppressing 15N enrichment in ammonia at all times, while the depletion is even more pronounced, reaching 14N/15N ratios of >2000. Taking the updated nitrogen chemistry into account, no significant enrichment occurs in HCN or HNC, contrary to observational evidence in dark clouds and comets, although the 14N/15N ratio can still be below 100 in CN itself. However, such low CN abundances are predicted that the updated model falls short of explaining the bulk 15N enhancements observed in primitive materials. It is clear that alternative fractionating reactions are necessary to reproduce observations, so further laboratory and theoretical studies are urgently needed.
How can we deal with ANN in flood forecasting? As a simulation model or updating kernel!
NASA Astrophysics Data System (ADS)
Hassan Saddagh, Mohammad; Javad Abedini, Mohammad
2010-05-01
Flood forecasting and early warning, as a non-structural measure for flood control, is often considered the most effective and suitable alternative to mitigate the damage and human loss caused by floods. Forecast results, which are the output of hydrologic, hydraulic and/or black-box models, should secure the accuracy of flood values and timing, especially for long lead times. The application of the artificial neural network (ANN) in flood forecasting has received extensive attention in recent years due to its capability to capture the dynamics inherent in complex processes, including floods. However, results obtained from executing a plain ANN as a simulation model demonstrate a dramatic reduction in performance indices as lead time increases. This paper is intended to monitor the performance indices, as they relate to flood forecasting and early warning, using two different methodologies. While the first method employs a multilayer neural network trained using a back-propagation scheme to forecast the output hydrograph of a hypothetical river for various forecast lead times up to 6.0 hr, the second method uses the 1D hydrodynamic MIKE11 model as the forecasting model and a multilayer neural network as an updating kernel, to monitor and assess the performance indices compared to the ANN alone as lead time increases. Results presented in both graphical and tabular format indicate the superiority of MIKE11 coupled with an ANN as an updating kernel over the ANN as a simulation model alone. While the plain ANN produces more accurate results for short lead times, its errors grow rapidly for longer lead times. The second methodology provides more accurate and reliable results for longer forecast lead times.
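As a rough illustration of the two methodologies compared above, the following sketch (not the authors' code; the synthetic data, lag structure, and network sizes are invented assumptions) trains one MLP to forecast flows directly and a second MLP to predict and correct the errors of a stand-in hydrodynamic forecast:

```python
# Minimal sketch: ANN as simulation model vs. ANN as updating kernel.
import numpy as np
from sklearn.neural_network import MLPRegressor

def lagged(series, n_lags, lead):
    """Rows hold the last n_lags values; target is the value lead steps ahead."""
    X = np.column_stack([series[i:len(series) - lead - n_lags + 1 + i]
                         for i in range(n_lags)])
    y = series[n_lags - 1 + lead:]
    return X, y

rng = np.random.default_rng(0)
t = np.arange(0.0, 200.0, 0.25)                      # time in hours
observed = 50 + 40 * np.exp(-((t - 80) / 20) ** 2) + rng.normal(0, 0.5, t.size)
hydro = observed + 5 * np.sin(t / 7)                 # stand-in hydrodynamic forecast

lead, split = 24, 500                                # 6.0 h lead at 15-min steps

# (a) ANN as simulation model: forecast flow directly from lagged flows
X, y = lagged(observed, 4, lead)
ann_sim = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann_sim.fit(X[:split], y[:split])
rmse_sim = np.sqrt(np.mean((ann_sim.predict(X[split:]) - y[split:]) ** 2))

# (b) ANN as updating kernel: forecast the hydrodynamic model's *error*,
# then add the predicted error back onto the raw model forecast
Xe, ye = lagged(observed - hydro, 4, lead)
ann_upd = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann_upd.fit(Xe[:split], ye[:split])
corrected = hydro[4 - 1 + lead:][split:] + ann_upd.predict(Xe[split:])
rmse_upd = np.sqrt(np.mean((corrected - y[split:]) ** 2))

print(f"RMSE, ANN alone: {rmse_sim:.2f}; model + ANN kernel: {rmse_upd:.2f}")
```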
An ocean data assimilation system and reanalysis of the World Ocean hydrophysical fields
NASA Astrophysics Data System (ADS)
Zelenko, A. A.; Vil'fand, R. M.; Resnyanskii, Yu. D.; Strukov, B. S.; Tsyrulnikov, M. D.; Svirenko, P. I.
2016-07-01
A new version of the ocean data assimilation system (ODAS) developed at the Hydrometcentre of Russia is presented. The assimilation is performed following the sequential scheme analysis-forecast-analysis. The main components of the ODAS are procedures for operational observation data processing, a variational analysis scheme, and an ocean general circulation model used to estimate the first guess fields involved in the analysis. In situ observations of temperature and salinity in the upper 1400-m ocean layer obtained from various observational platforms are used as input data. In the new ODAS version, the horizontal resolution of the assimilating model and of the output products is increased, the previous 2D-Var analysis scheme is replaced by a more general 3D-Var scheme, and a more flexible incremental analysis updating procedure is introduced to correct the model calculations. A reanalysis of the main World Ocean hydrophysical fields over the 2005-2015 period has been performed using the updated ODAS. The reanalysis results are compared with data from independent sources.
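The incremental analysis updating mentioned above can be made concrete with a toy sketch; the scalar "model" and all numbers below are illustrative assumptions, not part of the Hydrometcentre system:

```python
# Minimal sketch of incremental analysis updating (IAU): instead of
# inserting the full analysis increment at once, spread it as a constant
# forcing over a window of model steps to avoid shocking the model.
import numpy as np

def model_step(x, dt=1.0):
    # toy dynamics: relaxation toward a constant forcing
    return x + dt * (-0.05 * x + 0.5)

def iau_window(x, increment, n_steps):
    """Apply the analysis increment gradually over n_steps model steps."""
    forcing = increment / n_steps
    for _ in range(n_steps):
        x = model_step(x) + forcing
    return x

x_background = 4.0
x_analysis = 6.5                        # as produced by a 3D-Var analysis
x = iau_window(x_background, x_analysis - x_background, n_steps=10)
print(x)
```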
Space Vehicle Terrestrial Environment Design Requirements Guidelines
NASA Technical Reports Server (NTRS)
Johnson, Dale L.; Keller, Vernon W.; Vaughan, William W.
2006-01-01
The terrestrial environment is an important driver of space vehicle structural, control, and thermal system design. NASA is currently in the process of producing an update to an earlier Terrestrial Environment Guidelines for Aerospace Vehicle Design and Development Handbook. This paper addresses the contents of this updated handbook, with special emphasis on new material being included in the areas of atmospheric thermodynamic models, wind dynamics, atmospheric composition, atmospheric electricity, cloud phenomena, atmospheric extremes, and sea state. In addition, the respective engineering design elements are discussed relative to terrestrial environment inputs that require consideration. Specific lessons learned that have contributed to the advancements made in the application and awareness of terrestrial environment inputs for aerospace engineering applications are presented.
Goyet, Sophie; Barennes, Hubert; Libourel, Therese; van Griensven, Johan; Frutos, Roger; Tarantola, Arnaud
2014-06-26
The process and effectiveness of knowledge translation (KT) interventions targeting policymakers are rarely reported. In Cambodia, a low-income country (LIC), an intervention aiming to provide evidence-based knowledge on pneumonia to health authorities was developed to help update the pediatric and adult national clinical guidelines. Through a case study, we assessed the effectiveness of this KT intervention, with the goal of identifying the barriers to KT and suggesting strategies to facilitate KT in similar settings. An extensive search for all relevant sources of data documenting the processes of updating the adult and pediatric pneumonia guidelines was done. Documents included, among others, reports, meeting minutes, and email correspondence. The study was conducted in successive phases: an appraisal of the content of both adult and pediatric pneumonia guidelines; an appraisal of the quality of the guidelines by independent experts, using the AGREE-II instrument; a description and modeling of the KT process within the guidelines-updating system, using Unified Modeling Language (UML) 2.2 tools; and the listing of the barriers and facilitators to KT identified during the study. The first appraisal showed that the integration of the KT key messages varied between the pediatric and adult guidelines, with better uptake in the pediatric guidelines. The overall AGREE-II quality assessments scored 37% and 44% for the adult and pediatric guidelines, respectively. Scores were lowest for the domains of 'rigor of development' and 'editorial independence.' The UML analysis highlighted that the time frames and constraints of the involved stakeholders greatly differed and that there were several missed opportunities to translate evidence into the adult pneumonia guideline. Seventeen facilitating factors and 18 potential barriers to KT were identified. The main barriers were related to the absence of a clear mandate from the Ministry of Health for the researchers and to a lack of synchronization between knowledge production and policy-making. Study findings suggest that stakeholders, both researchers and policymakers, planning to update clinical guidelines in LICs may need methodological support to overcome the expected barriers.
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak
1985-01-01
GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
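A schematic of how such a per-cycle update loop might look follows; the growth, mortality, and crown-class rules below are invented placeholders, not the published GR02 equations:

```python
# Illustrative sketch of an individual-tree, distance-independent growth
# cycle covering the five functions listed above.
import random

class Tree:
    def __init__(self, dbh, height):
        self.dbh = dbh          # diameter at breast height, cm
        self.height = height    # total height, m
        self.crown_class = "codominant"

def grow_stand(trees, years=5):
    survivors = []
    for t in trees:
        t.dbh += 0.4 * years                        # (1) update dbh
        t.height += 0.3 * years                     # (2) update total height
        p_die = 0.02 * years / max(t.dbh / 20, 1)   # (3) size-dependent mortality
        if random.random() < p_die:
            continue
        t.crown_class = ("dominant" if t.dbh > 35 else
                         "codominant" if t.dbh > 20 else "intermediate")  # (5)
        survivors.append(t)
    survivors += [Tree(dbh=2.0, height=1.5) for _ in range(2)]  # (4) regeneration
    return survivors

stand = [Tree(20.0, 15.0), Tree(30.0, 20.0)]
stand = grow_stand(stand)
print(len(stand), [round(t.dbh, 1) for t in stand])
```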
Puviani, Luca; Rama, Sidita
2016-01-01
Despite growing scientific interest in the placebo effect and increasing understanding of neurobiological mechanisms, theoretical modeling of the placebo response remains poorly developed. The most extensively accepted theories are expectation and conditioning, involving both conscious and unconscious information processing. However, it is not completely understood how these mechanisms can shape the placebo response. We focus here on neural processes which can account for key properties of the response to substance intake. It is shown that the placebo response can be conceptualized as a reaction of a distributed neural system within the central nervous system. Such a reaction represents an integrated component of the response to open substance administration (or to substance intake) and is updated through “unconditioned stimulus (UCS) revaluation learning”. The analysis leads to a theorem, which proves the existence of two distinct quantities coded within the brain: the expected (or predicted) outcome and the reactive response. We show that the reactive response is updated automatically by implicit revaluation learning, while the expected outcome can also be modulated through conscious information processing. Conceptualizing the response to substance intake in terms of UCS revaluation learning leads to the theoretical formulation of a potential neuropharmacological treatment for increasing the effectiveness of a given drug without limit. PMID:27436417
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-10
... DEPARTMENT OF ENERGY Update on Reimbursement for Costs of Remedial Action at Active Uranium and Thorium Processing Sites AGENCY: Department of Energy. ACTION: Notice of the Title X claims during fiscal... at active uranium and thorium processing sites to remediate byproduct material generated as an...
Daucourt, Mia C; Schatschneider, Christopher; Connor, Carol M; Al Otaiba, Stephanie; Hart, Sara A
2018-01-01
Recent achievement research suggests that executive function (EF), a set of regulatory processes that control both thought and action necessary for goal-directed behavior, is related to typical and atypical reading performance. This project examines the relation of EF, as measured by its components, Inhibition, Updating Working Memory, and Shifting, with a hybrid model of reading disability (RD). Our sample included 420 children who participated in a broader intervention project when they were in KG-third grade (age M = 6.63 years, SD = 1.04 years, range = 4.79-10.40 years). At the time their EF was assessed, using a parent-report Behavior Rating Inventory of Executive Function (BRIEF), they had a mean age of 13.21 years (SD = 1.54 years; range = 10.47-16.63 years). The hybrid model of RD was operationalized as a composite consisting of four symptoms, and set so that any child could have any one, any two, any three, any four, or none of the symptoms included in the hybrid model. The four symptoms include low word reading achievement, unexpected low word reading achievement, poorer reading comprehension compared to listening comprehension, and dual-discrepancy response-to-intervention, requiring both low achievement and low growth in word reading. The results of our multilevel ordinal logistic regression analyses showed a significant relation between all three components of EF (Inhibition, Updating Working Memory, and Shifting) and the hybrid model of RD, and that the strength of EF's predictive power for RD classification was the highest when RD was modeled as having at least one or more symptoms. Importantly, the chances of being classified as having RD increased as EF performance worsened and decreased as EF performance improved. The question of whether any one EF component would emerge as a superior predictor was also examined and results showed that Inhibition, Updating Working Memory, and Shifting were equally valuable as predictors of the hybrid model of RD. In total, all EF components were significant and equally effective predictors of RD when RD was operationalized using the hybrid model.
User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.
MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.
Further Automate Planned Cluster Maintenance to Minimize System Downtime during Maintenance Windows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Springmeyer, R.
This report documents the integration and testing of the automated update process of compute clusters in LC to minimize impact to user productivity. Description: A set of scripts will be written and deployed to further standardize cluster maintenance activities and minimize downtime during planned maintenance windows. Completion Criteria: When the scripts have been deployed and used during planned maintenance windows and a timing comparison is completed between the existing process and the new, more automated process, this milestone is complete. This milestone was completed on Aug 23, 2016 on the new CTS1 cluster called Jade, when a request to upgrade the version of TOSS 3 was initiated while SWL jobs and normal user jobs were running. Jobs that were running when the update to the system began continued to run to completion. New jobs on the cluster started on the new release of TOSS 3. No system administrator action was required. Current update procedures in TOSS 2 begin by killing all user jobs. Then all diskfull nodes are updated, which can take a few hours. Only after the updates are applied are all nodes rebooted and finally put back into service. A system administrator is required for all steps. In terms of human time spent during a cluster OS update, the TOSS 3 automated procedure on Jade took 0 FTE hours. Doing the same update without the TOSS Update Tool would have required 4 FTE hours.
NASA Astrophysics Data System (ADS)
Foster, A.; Armstrong, A. H.; Shuman, J. K.; Ranson, K.; Shugart, H. H., Jr.; Rogers, B. M.; Goetz, S. J.
2016-12-01
Global temperatures have increased about 0.2°C per decade since 1979, and the high latitudes are warming faster than the rest of the globe. Climate change within Alaska is likely to bring about increased drought and longer fire seasons, as well as increases in the severity and frequency of fires. These changes in disturbance regimes and their associated effects on ecosystem C stocks, including permafrost, may lead to a positive feedback to further climate warming. As of now, it is uncertain how vegetation will respond to ongoing climate change, and the addition of disturbance effects leads to even more complicated and varied scenarios. Through ecological modeling, we have the capacity to examine forest processes at multiple temporal and spatial scales, allowing for the testing of complex interactions between vegetation, climate, and disturbances. The University of Virginia Forest Model Enhanced (UVAFME) is an individual tree-based forest model that has been updated for use in interior boreal Alaska, with a new permafrost model and updated fire simulation. These updated submodels allow for feedback between soils, vegetation, and fire severity through fuels tracking and impact of depth of burn on permafrost dynamics. We present these updated submodels as well as calibration and validation of UVAFME to the Yukon River Basin in Alaska, with comparisons to inventory data. We also present initial findings from simulations of potential future forest biomass, structure, and species composition across the Yukon River Basin under expected changes in precipitation, temperature, and disturbances. We predict changing climate and the associated impacts on wildfire and permafrost dynamics will result in shifts in biomass and species composition across the region, with potential for further feedback to the climate-vegetation-disturbance system. These simulations advance our understanding of the possible futures for the Alaskan boreal forest, which is a valuable part of the global carbon budget.
Basis for the ICRP’s updated biokinetic model for carbon inhaled as CO2
Leggett, Richard W.
2017-03-02
Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate (HCO3-). The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form (CO2-HCO3-) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.
Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.
2014-01-01
Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376
Combining Static Analysis and Model Checking for Software Analysis
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)
2003-01-01
We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information, which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.
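A schematic sketch of this iterative scheme, with stub functions standing in for the real static analyzer and model checker (the aliasing facts and independence labels are invented for illustration):

```python
# Iterate static analysis and model checking to a fixed point.
def static_analysis(aliasing):
    """Compute partial-order (independence) info; optimistic when aliasing
    is unknown, refined as aliasing facts accumulate."""
    return frozenset(("independent", a) for a in sorted(aliasing))

def model_check(por_info):
    """Explore the POR-reduced state space; return aliasing facts found."""
    return {"p aliases q"} | ({"r aliases s"} if not por_info else set())

aliasing = set()
while True:
    por = static_analysis(aliasing)      # optimistic partial-order info
    found = model_check(por)             # exploration refines aliasing
    if found <= aliasing:                # fixed point: reduction now safe
        break
    aliasing |= found

print("fixed point reached; aliasing facts:", aliasing)
```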
Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.
Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai
2016-03-01
Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiway (tensor) datasets. In dealing with such datasets, the standard practice is to use subspace clustering based on vectorizing the multiway data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
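To give a flavor of the alternating scheme, here is a heavily simplified sketch, not the paper's algorithm: the membership mode is kept row-stochastic by a plain softmax-parameterized gradient step instead of the Riemannian trust-region method on the multinomial manifold, and all sizes and data are invented:

```python
# Alternating updates on a 3-way tensor whose last mode indexes samples:
# closed-form (SVD) updates for modes 1-2, gradient step for memberships.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 9, 30))        # 30 samples to cluster
r1, r2, C = 3, 3, 2                    # ranks and number of clusters
U1 = np.linalg.qr(rng.normal(size=(8, r1)))[0]
U2 = np.linalg.qr(rng.normal(size=(9, r2)))[0]
logits = rng.normal(size=(30, C))

for _ in range(50):
    M = np.exp(logits)
    M /= M.sum(axis=1, keepdims=True)                  # rows on the simplex
    # closed-form updates for modes 1 and 2 (leading singular vectors)
    Y1 = np.einsum('ijk,jb,kc->ibc', X, U2, M)
    U1 = np.linalg.svd(unfold(Y1, 0), full_matrices=False)[0][:, :r1]
    Y2 = np.einsum('ijk,ia,kc->ajc', X, U1, M)
    U2 = np.linalg.svd(unfold(Y2, 1), full_matrices=False)[0][:, :r2]
    # gradient step on the membership logits for the last mode
    G = np.einsum('ijk,ia,jb->kab', X, U1, U2).reshape(30, -1)
    core = np.linalg.pinv(M) @ G                       # least-squares core
    g = -(G - M @ core) @ core.T                       # dLoss/dM
    logits -= 0.1 * M * (g - (g * M).sum(axis=1, keepdims=True))

M = np.exp(logits)
M /= M.sum(axis=1, keepdims=True)
print("cluster labels:", M.argmax(axis=1))
```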
Resource Tracking Model Updates and Trade Studies
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Moore, Michael
2016-01-01
The Resource Tracking Model (RTM) has been updated to capture system manager and project manager inputs. Both the Trick/GUNNS RTM simulator and the RTM mass balance spreadsheet have been revised to address inputs from system managers and to refine the way mass balance is illustrated. The revisions to the RTM included the addition of a Plasma Pyrolysis Assembly (PPA) to recover hydrogen from Sabatier reactor methane, which was vented in the prior version of the RTM. The effect of the PPA on the overall balance of resources in an exploration vehicle is illustrated in the increased recycling of vehicle oxygen. Additionally, simulation of EVAs conducted from the exploration module was added. Since the focus of the exploration module is to provide a habitat during deep space operations, the EVA simulation is based on ISS EVA protocol and processes. Case studies have been run to show the relative effect of performance changes on vehicle resources.
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated with monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations have been corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) have been tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for a good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter-inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. For this groundwater system, with constant or slowly changing parameter values, asynchronous updating of parameter values once every five years (with data of the past five years) combined with synchronous updating of the groundwater levels is better suited than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
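For readers unfamiliar with the filter mechanics, a minimal ensemble Kalman analysis step with two of the safeguards mentioned above (covariance inflation and a damping factor) might look as follows; localization, bias correction, and the Modflow coupling are omitted, and all numbers are invented:

```python
# Toy EnKF analysis step for a head field observed at a few wells.
import numpy as np

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 5, 20

# observation operator: sample the head field at 5 well locations
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0

ens = rng.normal(5.0, 1.0, size=(n_state, n_ens))    # forecast ensemble (heads)
y_obs = rng.normal(5.5, 0.1, size=n_obs)             # observed groundwater levels
R = 0.1 ** 2 * np.eye(n_obs)                         # observation error covariance

inflation, damping = 1.05, 0.5
mean = ens.mean(axis=1, keepdims=True)
A = inflation * (ens - mean)                         # inflated anomalies
P = A @ A.T / (n_ens - 1)                            # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain

y_pert = y_obs[:, None] + rng.normal(0, 0.1, size=(n_obs, n_ens))  # perturbed obs
ens = (mean + A) + damping * K @ (y_pert - H @ (mean + A))         # damped update
print("analysis mean at wells:", (H @ ens.mean(axis=1)).round(2))
```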
The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication: Gantt, B., J. Kelly, and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, Germany, 8: 3733-3746, (2015).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rachid B. Slimane; Francis S. Lau; Javad Abbasian
2000-10-01
The objective of this program is to develop an economical process for hydrogen production, with no additional carbon dioxide emission, through the thermal decomposition of hydrogen sulfide (H2S) in H2S-rich waste streams to high-purity hydrogen and elemental sulfur. The novel feature of the process being developed is the superadiabatic combustion (SAC) of part of the H2S in the waste stream to provide the thermal energy required for the decomposition reaction, such that no additional energy is required. The program is divided into two phases. In Phase 1, detailed thermochemical and kinetic modeling of the SAC reactor with H2S-rich fuel gas and air/enriched-air feeds is undertaken to evaluate the effects of operating conditions on exit gas products and conversion efficiency, and to identify key process parameters. Preliminary modeling results are used as a basis to conduct a thorough evaluation of SAC process design options, including reactor configuration, operating conditions, and product separation schemes, with respect to potential product yields, thermal efficiency, capital and operating costs, and reliability, ultimately leading to the preparation of a design package and cost estimate for a bench-scale reactor testing system to be assembled and tested in Phase 2 of the program. A detailed parametric testing plan was also developed for process design optimization and model verification in Phase 2. During Phase 2 of this program, IGT, UIC, and industry advisors UOP and BP Amoco will validate the SAC concept through construction of the bench-scale unit and parametric testing. The computer model developed in Phase 1 will be updated with the experimental data and used in future scale-up efforts. The process design will be refined and the cost estimate updated. Market survey and assessment will continue so that a commercial demonstration project can be identified.
The Precedence of Global Features in the Perception of Map Symbols
1988-06-01
...be continually updated. The present study evaluated the feasibility of a serial model of visual processing. By comparing performance between a symbol... symbols, is based on a "filtering" procedure, consisting of a series of passive-to-active or global-to-local stages. Navon (1977, 1981a) has proposed a... packages or segments. This advances the earlier, static feature aggregation approaches to comprise a "figure." According to the global precedence model
NASA Technical Reports Server (NTRS)
1998-01-01
Under an SBIR (Small Business Innovation Research) contract with Johnson Space Center, Knowledge Based Systems Inc. (KBSI) developed an intelligent software environment for modeling and analyzing mission planning activities, simulating behavior, and, using a unique constraint propagation mechanism, updating plans with each change in mission planning activities. KBSI developed this technology into a commercial product, PROJECTLINK, a two-way bridge between PROSIm, KBSI's process modeling and simulation software, and leading project management software like Microsoft Project and Primavera's SureTrak Project Manager.
2009-02-01
...range of modal analysis and the high-frequency region of statistical energy analysis, is referred to as the mid-frequency range. ...predictions. The averaging process is consistent with the averaging done in statistical energy analysis for stochastic systems. The FEM will always
NASA Astrophysics Data System (ADS)
Chen, Y.
2017-12-01
Urbanization has been the global development trend for the past century, and developing countries have experienced much more rapid urbanization in recent decades. Urbanization brings many benefits to human beings but also causes negative impacts, such as increased flood risk. The impact of urbanization on flood response has long been observed, but studying this effect quantitatively still faces great challenges. For example, setting up an appropriate hydrological model representing the changed flood responses and determining accurate model parameters are very difficult in an urbanized or urbanizing watershed. The Pearl River Delta area has seen some of the most rapid urbanization in China over the past decades, and dozens of highly urbanized watersheds have appeared. In this study, a physically based distributed watershed hydrological model, the Liuxihe model, is employed and revised to simulate the hydrological processes of highly urbanized watershed floods in the Pearl River Delta area. A virtual soil type is defined in the terrain properties dataset, and its runoff production and routing algorithms are added to the Liuxihe model. Based on a parameter sensitivity analysis, the key hydrological processes of a highly urbanized watershed are identified, which provides insight into the hydrological processes and supports parameter optimization. On this basis, the model is set up in the Songmushan watershed, where hydrological observations are available. A model parameter optimization and updating strategy is proposed based on remotely sensed land use/cover (LUC) types, which optimizes model parameters with a PSO algorithm and updates them as the LUC types change. The model parameters in the Songmushan watershed are regionalized to the other Pearl River Delta watersheds based on their LUC types. A dozen watersheds in the highly urbanized area of Dongguan City in the Pearl River Delta were studied for flood response changes due to urbanization, and the results show that urbanization has a large impact on watershed flood responses. Peak flows increased severalfold after urbanization, which is much higher than previously reported.
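A bare-bones particle swarm optimization (PSO) loop of the kind used for such parameter calibration is sketched below; the objective is a stand-in for the model's flood-simulation error, not the Liuxihe model itself, and all swarm settings are illustrative:

```python
# Minimal PSO for calibrating model parameters scaled to [0, 1].
import numpy as np

def objective(p):                 # placeholder for 1 - Nash-Sutcliffe, etc.
    return np.sum((p - 0.3) ** 2)

rng = np.random.default_rng(0)
n_particles, n_dims, iters = 20, 5, 100
x = rng.uniform(0, 1, (n_particles, n_dims))   # candidate parameter sets
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[pbest_f.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social
for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, n_dims))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, 1)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best parameters:", np.round(gbest, 3))
```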
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avramova, Maria N.; Salko, Robert K.
Coolant-Boiling in Rod Arrays|Two Fluids (COBRA-TF) is a thermal/hydraulic (T/H) simulation code designed for light water reactor (LWR) vessel analysis. It uses a two-fluid, three-field (i.e., fluid film, fluid drops, and vapor) modeling approach. Both sub-channel and 3D Cartesian forms of the nine conservation equations are available for LWR modeling. The code was originally developed by Pacific Northwest Laboratory in 1980 and has been used and modified by several institutions over the last few decades. COBRA-TF also found use at the Pennsylvania State University (PSU) by the Reactor Dynamics and Fuel Management Group (RDFMG) and has been improved, updated, and subsequently re-branded as CTF. As part of the improvement process, it was necessary to generate sufficient documentation for the open-source code, which had lacked such material upon being adopted by RDFMG. This document serves mainly as a theory manual for CTF, detailing the many two-phase heat transfer, drag, and important accident-scenario models contained in the code, as well as the numerical solution process utilized. Coding of the models is also discussed, all with consideration for updates that have been made when transitioning from COBRA-TF to CTF. Further documentation, which focuses on code input deck generation and source code global variable and module listings, is also available from RDFMG.
Zarriello, Phillip J.; Olson, Scott A.; Flynn, Robert H.; Strauch, Kellan R.; Murphy, Elizabeth A.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term streamgages in Rhode Island. In response to this event, hydraulic models were updated for selected reaches covering about 56 river miles in the Pawtuxet River Basin to simulate water-surface elevations (WSEs) at specified flows and boundary conditions. Reaches modeled included the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Dry Brook, Meshanticut Brook, Furnace Hill Brook, Flat River, Quidneck Brook, and two unnamed tributaries referred to as South Branch Pawtuxet River Tributary A1 and Tributary A2. All the hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) version 4.1.0 using steady-state simulations. Updates to the models included incorporation of new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were assessed using high-water marks (HWMs) obtained in a related study following the March–April 2010 flood and the simulated water levels at the 0.2-percent annual exceedance probability (AEP), which is the estimated AEP of the 2010 flood in the basin. HWMs were obtained at 110 sites along the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Furnace Hill Brook, Flat River, and Quidneck Brook. Differences between the 2010 HWM elevations and the simulated 0.2-percent AEP WSEs from flood insurance studies (FISs) and the updated models developed in this study varied, with most differences attributed to the magnitude of the 0.2-percent AEP flows. WSEs from the updated models generally are in closer agreement with the observed 2010 HWMs than the FIS WSEs are. The improved agreement of the updated simulated water elevations with observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models represent flooding at other AEPs better than the existing FIS models.
A land cover change detection and classification protocol for updating Alaska NLCD 2001 to 2011
Jin, Suming; Yang, Limin; Zhu, Zhe; Homer, Collin G.
2017-01-01
Monitoring and mapping land cover changes are important ways to support evaluation of the status and transition of ecosystems. The Alaska National Land Cover Database (NLCD) 2001 was the first 30-m resolution baseline land cover product of the entire state derived from circa 2001 Landsat imagery and geospatial ancillary data. We developed a comprehensive approach named AKUP11 to update Alaska NLCD from 2001 to 2011 and provide a 10-year cyclical update of the state's land cover and land cover changes. Our method is designed to characterize the main land cover changes associated with different drivers, including the conversion of forests to shrub and grassland primarily as a result of wildland fire and forest harvest, the vegetation successional processes after disturbance, and changes of surface water extent and glacier ice/snow associated with weather and climate changes. For natural vegetated areas, a component named AKUP11-VEG was developed for updating the land cover that involves four major steps: 1) identify the disturbed and successional areas using Landsat images and ancillary datasets; 2) update the land cover status for these areas using a SKILL model (System of Knowledge-based Integrated-trajectory Land cover Labeling); 3) perform decision tree classification; and 4) develop a final land cover and land cover change product through the postprocessing modeling. For water and ice/snow areas, another component named AKUP11-WIS was developed for initial land cover change detection, removal of the terrain shadow effects, and exclusion of ephemeral snow changes using a 3-year MODIS snow extent dataset from 2010 to 2012. The overall approach was tested in three pilot study areas in Alaska, with each area consisting of four Landsat image footprints. The results from the pilot study show that the overall accuracy in detecting change and no-change is 90% and the overall accuracy of the updated land cover label for 2011 is 86%. The method provided a robust, consistent, and efficient means for capturing major disturbance events and updating land cover for Alaska. The method has subsequently been applied to generate the land cover and land cover change products for the entire state of Alaska.
Opinion dynamics on an adaptive random network
NASA Astrophysics Data System (ADS)
Benczik, I. J.; Benczik, S. Z.; Schmittmann, B.; Zia, R. K. P.
2009-04-01
We revisit the classical model for voter dynamics in a two-party system with two basic modifications. In contrast to the original voter model studied on regular lattices, we implement the opinion formation process in a random network of agents in which interactions are no longer restricted by geographical distance. In addition, we incorporate the rapidly changing nature of interpersonal relations in the model. At each time step, agents can update their relationships. This update is determined by their own opinion and by their preference to make connections with individuals sharing the same opinion, or rather with opponents. In this way, the network is built in an adaptive manner, in the sense that its structure is correlated with and evolves with the dynamics of the agents. The simplicity of the model allows us to examine several issues analytically. We establish criteria to determine whether consensus or polarization will be the outcome of the dynamics and on what time scales these states will be reached. In finite systems consensus is typical, while in infinite systems a disordered metastable state can emerge and persist for an infinitely long time before consensus is reached.
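A compact simulation sketch of such adaptive voter dynamics follows; the parameter names, update probabilities, and rewiring rule are illustrative assumptions, not the paper's exact prescription:

```python
# Adaptive voter model: agents either copy a neighbor's opinion or rewire
# a link according to their preference (phi) for like-minded partners.
import random

N, M, steps = 100, 300, 20000
phi = 0.7                                    # preference for same opinion
opinion = [random.choice([0, 1]) for _ in range(N)]
edges = set()
while len(edges) < M:
    i, j = random.sample(range(N), 2)
    edges.add((min(i, j), max(i, j)))
edges = list(edges)

for _ in range(steps):
    k = random.randrange(M)
    i, j = edges[k]
    if random.random() < 0.5:                # opinion update from a neighbor
        opinion[i] = opinion[j]
    else:                                    # link update (rewiring)
        want_same = random.random() < phi
        candidates = [n for n in range(N)
                      if n != i and (opinion[n] == opinion[i]) == want_same]
        if candidates:
            edges[k] = tuple(sorted((i, random.choice(candidates))))
    if len(set(opinion)) == 1:               # consensus reached
        break

print("fraction holding opinion 1:", sum(opinion) / N)
```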
Characterization of Orbital Debris via Hyper-Velocity Laboratory-Based Tests
NASA Technical Reports Server (NTRS)
Cowardin, Heather; Liou, J.-C.; Anz-Meador, Phillip; Sorge, Marlon; Opiela, John; Fitz-Coy, Norman; Huynh, Tom; Krisko, Paula
2017-01-01
Existing DOD and NASA satellite breakup models are based on a key laboratory test, Satellite Orbital debris Characterization Impact Test (SOCIT), which has supported many applications and matched on-orbit events involving older satellite designs reasonably well over the years. In order to update and improve these models, the NASA Orbital Debris Program Office, in collaboration with the Air Force Space and Missile Systems Center, The Aerospace Corporation, and the University of Florida, replicated a hypervelocity impact using a mock-up satellite, DebriSat, in controlled laboratory conditions. DebriSat is representative of present-day LEO satellites, built with modern spacecraft materials and construction techniques. Fragments down to 2 mm in size will be characterized by their physical and derived properties. A subset of fragments will be further analyzed in laboratory radar and optical facilities to update the existing radar-based NASA Size Estimation Model (SEM) and develop a comparable optical-based SEM. A historical overview of the project, status of the characterization process, and plans for integrating the data into various models will be discussed herein.
Imputation and Model-Based Updating Techniques for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
Bashari, Hossein; Naghipour, Ali Asghar; Khajeddin, Seyed Jamaleddin; Sangoony, Hamed; Tahmasebi, Pejman
2016-09-01
Identifying areas that have a high risk of burning is a main component of fire management planning. Although the available tools can predict fire risks, they are poor at accommodating uncertainties in their predictions. In this study, we accommodated uncertainty in wildfire prediction using Bayesian belief networks (BBNs). An influence diagram was developed to identify the factors influencing wildfire in arid and semi-arid areas of Iran, and it was populated with probabilities to produce a BBN model. The behavior of the model was tested using scenario and sensitivity analysis. Land cover/use, mean annual rainfall, mean annual temperature, elevation, and livestock density were recognized as the main variables determining wildfire occurrence. The produced model had good accuracy, as its ROC area under the curve was 0.986. The model can be applied in both predictive and diagnostic analyses, answering "what if" and "how" questions. The probabilistic relationships within the model can be updated over time using observation and monitoring data. The wildfire BBN model may be updated as new knowledge emerges; hence, it can be used to support the process of adaptive management.
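The kind of probabilistic updating a BBN performs can be illustrated with a single parent-child pair; the variables echo the study's theme, but the probabilities below are invented, not those of the published model:

```python
# Toy Bayesian update: land cover (parent) -> wildfire occurrence (child).
priors = {"shrubland": 0.40, "grassland": 0.35, "woodland": 0.25}
p_fire = {"shrubland": 0.30, "grassland": 0.20, "woodland": 0.10}  # P(fire | cover)

# predictive query ("what if"): overall probability of fire
p_total = sum(priors[c] * p_fire[c] for c in priors)

# diagnostic query ("how"): given a fire occurred, which cover was likely?
posterior = {c: priors[c] * p_fire[c] / p_total for c in priors}
print(round(p_total, 3), {c: round(p, 3) for c, p in posterior.items()})
```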
Integration of Models of Building Interiors with Cadastral Data
NASA Astrophysics Data System (ADS)
Gotlib, Dariusz; Karabin, Marcin
2017-12-01
The demand for applications which use models of building interiors is growing and highly diversified. Those models are applied at the stage of designing and constructing a building, in applications which support real estate management, in navigation and marketing systems and, finally, in crisis management and security systems. They are created on the basis of different data: architectural and construction plans, both in analogue form and as CAD files, BIM data files, laser scanning (TLS), and conventional surveys. In this context, the issue of searching for solutions which would integrate the existing models and eliminate data redundancy is becoming more important. The authors analysed the possible input of cadastral data (the legal extent of premises) at the stage of creating and updating different models of building interiors. The paper focuses on one issue: the way of describing the geometry of premises based on the most popular source data, i.e., architectural and construction plans. However, the described rules may be considered universal and may also be applied in practice during the process of creating and updating indoor models based on BIM datasets or laser scanning point clouds.
NASA Astrophysics Data System (ADS)
Li, Jun; Qin, Qiming; Xie, Chao; Zhao, Yue
2012-10-01
The update frequency of digital road maps influences the quality of road-dependent services. However, digital road maps surveyed by probe vehicles or extracted from remotely sensed images still have a long updating cycle, and their cost remains high. As GPS and wireless communication technologies mature and their costs decrease, floating car technology has been used in traffic monitoring and management, and the dynamic positioning data from floating cars have become a new data source for updating road maps. In this paper, we aim to update digital road maps using the floating car data from China's National Commercial Vehicle Monitoring Platform, and present an incremental road network extraction method suitable for the platform's GPS data, whose sampling frequency is low and which cover a large area. Based on both the spatial and semantic relationships between a trajectory point and its associated road segment, the method classifies each trajectory point and then merges every trajectory point into the candidate road network through an adding or modifying process according to its type. The road network is gradually updated until all trajectories have been processed. Finally, this method is applied to the updating of major roads in North China, and the experimental results show that it can accurately derive the geometric information of roads under various scenes. This paper provides a highly efficient, low-cost approach to updating digital road maps.
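The geometric core of such incremental updating can be sketched as follows; the matching threshold and the adding/modifying rules are simplified assumptions, not the paper's full method:

```python
# Match each GPS point to the nearest candidate road segment; points far
# from every segment spawn a new segment (adding process), matched points
# would refine the matched segment (modifying process).
import math

roads = [((0.0, 0.0), (1.0, 0.0))]            # candidate road segments

def dist_point_segment(p, a, b):
    ax, ay = a; bx, by = b; px, py = p
    t = max(0.0, min(1.0, ((px - ax) * (bx - ax) + (py - ay) * (by - ay))
                          / ((bx - ax) ** 2 + (by - ay) ** 2)))
    cx, cy = ax + t * (bx - ax), ay + t * (by - ay)
    return math.hypot(px - cx, py - cy)

MATCH_RADIUS = 0.05                            # ~50 m in these toy units
prev = None
for p in [(0.1, 0.01), (0.5, -0.02), (0.8, 0.3), (0.9, 0.5)]:
    d = min(dist_point_segment(p, a, b) for a, b in roads)
    if d > MATCH_RADIUS and prev is not None:
        roads.append((prev, p))                # adding process: new segment
    # else: modifying process would update the matched segment's geometry
    prev = p

print(len(roads), "segments")
```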
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster, 2002). At the station level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
MODEST: A Tool for Geodesy and Astronomy
NASA Technical Reports Server (NTRS)
Sovers, Ojars J.; Jacobs, Christopher S.; Lanyi, Gabor E.
2004-01-01
Features of the JPL VLBI modeling and estimation software "MODEST" are reviewed. Its main advantages include thoroughly documented model physics, portability, and detailed error modeling. Two unique models are included: modeling of source structure and modeling of both spatial and temporal correlations in tropospheric delay noise. History of the code parallels the development of the astrometric and geodetic VLBI technique and the software retains many of the models implemented during its advancement. The code has been traceably maintained since the early 1980s, and will continue to be updated with recent IERS standards. Scripts are being developed to facilitate user-friendly data processing in the era of e-VLBI.
Neural correlates of informational cascades: brain mechanisms of social influence on belief updating
Klucharev, Vasily; Rieskamp, Jörg
2015-01-01
Informational cascades can occur when rationally acting individuals decide independently of their private information and follow the decisions of preceding decision-makers. In the process of updating beliefs, differences in the weighting of private and publicly available social information may modulate the probability that a cascade starts in a decisive way. By using functional magnetic resonance imaging, we examined neural activity while participants updated their beliefs based on the decisions of two fictitious stock market traders and their own private information, which led to a final decision of buying one of two stocks. Computational modeling of the behavioral data showed that a majority of participants overweighted private information. Overweighting was negatively correlated with the probability of starting an informational cascade in trials especially prone to conformity. Belief updating by private information was related to activity in the inferior frontal gyrus/anterior insula, the dorsolateral prefrontal cortex and the parietal cortex; the more a participant overweighted private information, the higher the activity in the inferior frontal gyrus/anterior insula and the lower in the parietal-temporal cortex. This study explores the neural correlates of overweighting of private information, which underlies the tendency to start an informational cascade. PMID:24974396
Noble, Lorraine M; Scott-Smith, Wesley; O'Neill, Bernadette; Salisbury, Helen
2018-04-22
Clinical communication is a core component of undergraduate medical training. A consensus statement on the essential elements of the communication curriculum was co-produced in 2008 by the communication leads of UK medical schools. This paper discusses the relational, contextual and technological changes which have affected clinical communication since then and presents an updated curriculum for communication in undergraduate medicine. The consensus was developed through an iterative consultation process with the communication leads who represent their medical schools on the UK Council of Clinical Communication in Undergraduate Medical Education. The updated curriculum defines the underpinning values, core components and skills required within the context of contemporary medical care. It incorporates the evolving relational issues associated with the more prominent role of the patient in the consultation, reflected through legal precedent and changing societal expectations. The impact on clinical communication of the increased focus on patient safety, the professional duty of candour and digital medicine are discussed. Changes in the way medicine is practised should lead rapidly to adjustments to the content of curricula. The updated curriculum provides a model of best practice to help medical schools develop their teaching and argue for resources. Copyright © 2018 Elsevier B.V. All rights reserved.
Finite element model updating using the shadow hybrid Monte Carlo technique
NASA Astrophysics Data System (ADS)
Boulkaibet, I.; Mthembu, L.; Marwala, T.; Friswell, M. I.; Adhikari, S.
2015-02-01
Recent research in the field of finite element model (FEM) updating advocates the adoption of Bayesian analysis techniques to deal with the uncertainties associated with these models. However, Bayesian formulations require the evaluation of the posterior distribution function, which may not be available in analytical form, as is the case in FEM updating. In such cases, sampling methods can provide good approximations of the posterior distribution when implemented in the Bayesian context. Markov Chain Monte Carlo (MCMC) algorithms are the most popular sampling tools used to sample probability distributions. However, the efficiency of these algorithms is affected by the complexity of the systems (the size of the parameter space). The Hybrid Monte Carlo (HMC) method offers an important MCMC approach to dealing with higher-dimensional complex problems. HMC uses molecular dynamics (MD) steps as the global Monte Carlo (MC) moves to reach areas of high probability, where the gradient of the log-density of the posterior acts as a guide during the search process. However, the acceptance rate of HMC is sensitive to the system size as well as to the time step used to evaluate the MD trajectory. To overcome this limitation we propose the use of the Shadow Hybrid Monte Carlo (SHMC) algorithm, a modified version of HMC designed to improve sampling for large system sizes and time steps. This is done by sampling from a modified Hamiltonian function instead of the normal Hamiltonian function. In this paper, the efficiency and accuracy of the SHMC method are tested on the updating of two real structures, an unsymmetrical H-shaped beam structure and a GARTEUR SM-AG19 structure, and are compared to the application of the HMC algorithm on the same structures.
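To make the sampling machinery concrete, here is a minimal plain-HMC transition in Python; the log-posterior and its gradient are stand-ins, and the shadow variant (SHMC) would replace the Hamiltonian in the accept test with a modified (shadow) Hamiltonian rather than change the leapfrog integration itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):                 # stand-in posterior: standard normal
    return -0.5 * np.dot(theta, theta)

def grad_log_post(theta):
    return -theta

def hmc_step(theta, eps=0.1, n_leap=20):
    """One HMC transition: momentum refresh, leapfrog MD move, accept test."""
    p = rng.standard_normal(theta.size)
    H0 = -log_post(theta) + 0.5 * np.dot(p, p)
    th = theta.copy()
    p = p + 0.5 * eps * grad_log_post(th)          # half-step in momentum
    for _ in range(n_leap - 1):
        th = th + eps * p
        p = p + eps * grad_log_post(th)
    th = th + eps * p
    p = p + 0.5 * eps * grad_log_post(th)          # final half-step
    H1 = -log_post(th) + 0.5 * np.dot(p, p)
    return th if rng.random() < np.exp(H0 - H1) else theta  # Metropolis test

theta = np.zeros(3)
for _ in range(1000):
    theta = hmc_step(theta)
print(theta)    # a draw from the (toy) posterior
```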
Machine learning in updating predictive models of planning and scheduling transportation projects
DOT National Transportation Integrated Search
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
Benefits of Model Updating: A Case Study Using the Micro-Precision Interferometer Testbed
NASA Technical Reports Server (NTRS)
Neat, Gregory W.; Kissil, Andrew; Joshi, Sanjay S.
1997-01-01
This paper presents a case study on the benefits of model updating using the Micro-Precision Interferometer (MPI) testbed, a full-scale model of a future spaceborne optical interferometer located at JPL.
Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability in which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than based on single-frame updating mechanisms. PMID:26490289
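A compact sketch of the reliability-weighted combination described by the optimal integration model above, with made-up variances for the eye- and body-centered stores: each frame's estimate is weighted by its inverse variance, and the fused estimate is more precise than either frame alone.

```python
# Inverse-variance (reliability-weighted) fusion of two frame-specific
# estimates of the same remembered target location; numbers are illustrative.

def combine(x_eye, var_eye, x_body, var_body):
    w_eye = (1 / var_eye) / (1 / var_eye + 1 / var_body)
    x_hat = w_eye * x_eye + (1 - w_eye) * x_body
    var_hat = 1 / (1 / var_eye + 1 / var_body)   # never worse than either frame
    return x_hat, var_hat

print(combine(x_eye=2.0, var_eye=4.0, x_body=3.0, var_body=1.0))  # ~(2.8, 0.8)
```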
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segev, A.; Fang, W.
In currency-based updates, processing a query to a materialized view has to satisfy a currency constraint which specifies the maximum time lag of the view data with respect to a transaction database. Currency-based update policies are more general than periodical, deferred, and immediate updates; they provide additional opportunities for optimization and allow updating a materialized view from other materialized views. In this paper, we present algorithms to determine the source and timing of view updates and validate the resulting cost savings through simulation results. 20 refs.
NASA Astrophysics Data System (ADS)
Fuller, C. W.; Unruh, J.; Lindvall, S.; Lettis, W.
2009-05-01
An integral component of the safety analysis for proposed nuclear power plants within the US is a probabilistic seismic hazard assessment (PSHA). Most applications currently under NRC review followed guidance provided within NRC Regulatory Guide 1.208 (RG 1.208) for developing seismic source characterizations (SSC) for their PSHA. Three key components of the RG 1.208 guidance are that applicants should: (1) use existing PSHA models and SSCs accepted by the NRC as a starting point for their SSCs; (2) evaluate new information and data developed since acceptance of the starting model to determine if the model should be updated; and (3) follow guidelines set forth by the Senior Seismic Hazard Analysis Committee (SSHAC) (NUREG/CR-6372) in developing significant updates (i.e., updates should capture SSC uncertainty by representing the "center, body, and range of technical interpretations" of the informed technical community). Major motivations for following this guidance are to ensure accurate representations of hazard and regulatory stability in hazard estimates for nuclear power plants. All current applications with the NRC have used the EPRI-SOG source characterizations developed in the 1980s as their starting point model, and all applicants have followed RG 1.208 guidance in updating the EPRI-SOG model. However, there has been considerable variability in how applicants have interpreted the guidance, and thus there has been considerable variability in the methodology used in updating the SSCs. Much of the variability can be attributed to how different applicants have interpreted the implications of new data, new interpretations of new and/or old data, and new "opinions" of members of the informed technical community. For example, many applicants and the NRC have wrestled with the challenge of whether or not to update SSCs in light of new opinions or interpretations of older data put forth by one member of the technical community. This challenge has been further complicated by: (1) a given applicant's uncertainty in how to revise the EPRI-SOG model, which was developed using a process similar to that dictated by SSHAC for a level 3 or 4 study, without conducting a resource-intensive SSHAC level 3 or higher study for their respective application; and (2) a lack of guidance from the NRC on acceptable methods of demonstrating that new data, interpretations, and opinions are adequately represented within the EPRI-SOG model. Partly because of these issues, initiative was taken by the nuclear industry, NRC and DOE to develop a new base PSHA model for the central and eastern US. However, this new SSC model will not be completed for several years and does not resolve many of the fundamental regulatory and philosophical issues that have been raised during the current round of applications. To ensure regulatory stability and to provide accurate estimates of hazard for nuclear power plants, a dialog must be started between regulators and industry to resolve these issues. Two key issues that must be discussed are: (1) should new data and new interpretations or opinions of old data be treated differently in updated SSCs, and if so, how?; and (2) how can new data or interpretations developed by a small subset of the technical community be weighed against and potentially combined with an SSC model that was originally developed to capture the "center, body and range" of the technical community?
1988-02-29
Donchin (in press) argued that it is a manifestation of a process related to the updating of models of the environment or context in working memory. Such an account suggests the process may involve working memory, but that it does not hold any privileged relation to working memory.
NASA Astrophysics Data System (ADS)
Martin-Bragado, I.; Castrillo, P.; Jaraiz, M.; Pinacho, R.; Rubio, J. E.; Barbolla, J.; Moroz, V.
2005-09-01
Atomistic process simulation is expected to play an important role for the development of next generations of integrated circuits. This work describes an approach for modeling electric charge effects in a three-dimensional atomistic kinetic Monte Carlo process simulator. The proposed model has been applied to the diffusion of electrically active boron and arsenic atoms in silicon. Several key aspects of the underlying physical mechanisms are discussed: (i) the use of the local Debye length to smooth out the atomistic point-charge distribution, (ii) algorithms to correctly update the charge state in a physically accurate and computationally efficient way, and (iii) an efficient implementation of the drift of charged particles in an electric field. High-concentration effects such as band-gap narrowing and degenerate statistics are also taken into account. The efficiency, accuracy, and relevance of the model are discussed.
[Purity Detection Model Update of Maize Seeds Based on Active Learning].
Tang, Jin-ya; Huang, Min; Zhu, Qi-bing
2015-08-01
Seed purity reflects the degree to which seed varieties show typical, consistent characteristics, so improving the reliability and accuracy of seed purity detection is of great importance for guaranteeing seed quality. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the product. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of a model weaken when the test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, so as to enable rapid updating of the model. Random selection (RS) and the Kennard-Stone algorithm (KS) were used for comparison with the active learning algorithm. The experimental results indicated that, for sample-set splits of 1:1, 3:1 and 4:1, the 2010 purity detection model for maize seeds, updated with 40 samples from 2011 selected by the active learning algorithm, increased its prediction accuracy on new 2011 samples from 47%, 33.75% and 49% to 98.89%, 98.33% and 98.33%, respectively. After adding 56 new samples from 2010, the prediction accuracy of the updated 2011 model on new 2010 samples increased from 50.83%, 54.58% and 53.75% to 94.57%, 94.02% and 94.57%. The updating effect of the active learning algorithm was better than that of RS and KS, demonstrating that active learning is a feasible way to update the purity detection model for maize seeds.
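As an illustration of the update loop, here is a generic uncertainty-sampling sketch in Python; the abstract does not state the selection criterion the authors used, so least-confident selection under a logistic model is an assumption, and the "spectra" are synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_old = rng.normal(0, 1, (100, 20)); y_old = (X_old[:, 0] > 0).astype(int)
X_new = rng.normal(0.5, 1, (200, 20)); y_new = (X_new[:, 0] > 0.5).astype(int)

X_train, y_train = X_old.copy(), y_old.copy()
pool_idx = list(range(len(X_new)))
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for _ in range(40):                            # add 40 new-season samples
    proba = model.predict_proba(X_new[pool_idx])
    margin = np.abs(proba[:, 1] - 0.5)         # small margin = least confident
    pick = pool_idx[int(np.argmin(margin))]
    pool_idx.remove(pick)
    X_train = np.vstack([X_train, X_new[pick]])
    y_train = np.append(y_train, y_new[pick])
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("updated accuracy on new season:", model.score(X_new, y_new))
```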
Predictive spatial modeling of narcotic crop growth patterns
Waltz, Frederick A.; Moore, D.G.
1986-01-01
Spatial models for predicting the geographic distribution of marijuana crops have been developed and are being evaluated for use in law enforcement programs. The models are based on growing condition preferences and on psychological inferences regarding grower behavior. Experiences of local law officials were used to derive the initial model, which was updated and improved as data from crop finds were archived and statistically analyzed. The predictive models are changed as crop locations are moved in response to the pressures of law enforcement. The models use spatial data in a raster geographic information system. The spatial data are derived from the U.S. Geological Survey's US GeoData, standard 7.5-minute topographic quadrangle maps, interpretations of aerial photographs, and thematic maps. Updating of cultural patterns, canopy closure, and other dynamic features is conducted through interpretation of aerial photographs registered to the 7.5-minute quadrangle base. The model is used to numerically weight various data layers that have been processed using spread functions, edge definition, and categorization. The building of the spatial data base, model development, model application, product generation, and use are collectively referred to as the Area Reduction Program (ARP). The goal of ARP is to provide law enforcement officials with tactical maps that show the most likely locations for narcotic crops.
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, J. L.
1989-01-01
Problems associated with the sequential approach to multidisciplinary design are discussed. A blackboard model is suggested as a potential tool for implementing the multilevel decomposition approach to overcome these problems. The blackboard model serves as a global database for the solution with each discipline acting as a knowledge source for updating the solution. With this approach, it is possible for engineers to improve the coordination, communication, and cooperation in the conceptual design process, allowing them to achieve a more optimal design from an interdisciplinary standpoint.
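As a schematic of the pattern described, a minimal blackboard in Python: a shared solution store plus discipline "knowledge sources" that each update it when they can contribute. The two disciplines and their rules are invented placeholders, not anything from the paper:

```python
# Minimal blackboard sketch: a shared design state that discipline-specific
# knowledge sources inspect and update in turn until no one has changes left.

blackboard = {"span": 10.0, "thickness": None, "mass": None}

def structures(bb):
    if bb["thickness"] is None:
        bb["thickness"] = bb["span"] / 25.0        # a made-up sizing rule
        return True
    return False

def weights(bb):
    if bb["thickness"] is not None and bb["mass"] is None:
        bb["mass"] = 120.0 * bb["thickness"]       # a made-up mass estimate
        return True
    return False

knowledge_sources = [structures, weights]
changed = True
while changed:                 # control loop: fire any source that can act
    changed = any(ks(blackboard) for ks in knowledge_sources)
print(blackboard)
```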
1981-10-29
are implemented, respectively, in the files "W-Update," "W-Combine," and "RW-Copy," listed in the appendix. The appendix begins with a typescript of an experiment in Rosie; the copying process (steps 45 and 46) is shown as human actions in the typescript, but can be performed easily by a "master" process.
78 FR 54863 - Bureau of the Census Geographically Updated Population Certification Program (GUPCP)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-06
... Geographically Updated Population Certification Program (GUPCP) AGENCY: Bureau of the Census, Department of... Census (Census Bureau) will resume processing applications for certified decennial census population and... Updated Population Certification Program (GUPCP), was suspended on January 1, 2008, to accommodate the...
A prediction model for colon cancer surveillance data.
Good, Norm M; Suresh, Krithika; Young, Graeme P; Lockett, Trevor J; Macrae, Finlay A; Taylor, Jeremy M G
2015-08-15
Dynamic prediction models make use of patient-specific longitudinal data to update individualized survival probability predictions based on current and past information. Colonoscopy (COL) and fecal occult blood test (FOBT) results were collected from two Australian surveillance studies on individuals characterized as high-risk based on a personal or family history of colorectal cancer. Motivated by a Poisson process, this paper proposes a generalized nonlinear model with a complementary log-log link as a dynamic prediction tool that produces individualized probabilities for the risk of developing advanced adenoma or colorectal cancer (AAC). This model allows predicted risk to depend on a patient's baseline characteristics and time-dependent covariates. Information on the dates and results of COLs and FOBTs was incorporated using time-dependent covariates that contributed to patient risk of AAC for a specified period following the test result. These covariates serve to update a person's risk as additional COL and FOBT test information becomes available. Model selection was conducted systematically through comparison of Akaike information criterion values. Goodness-of-fit was assessed using calibration plots comparing the predicted probability of event occurrence with the proportion of events observed. Abnormal COL results were found to significantly increase the risk of AAC for 1 year following the test. Positive FOBTs were found to significantly increase the risk of AAC for 3 months following the result. The covariates that incorporated the updated test results were of greater significance and had a larger effect on risk than the baseline variables. Copyright © 2015 John Wiley & Sons, Ltd.
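A toy version of the risk computation, with invented coefficients; only the complementary log-log inverse link and the 1-year/3-month covariate windows come from the abstract:

```python
import math

def aac_risk(age, months_since_abnormal_col, months_since_pos_fobt,
             b0=-5.0, b_age=0.03, b_col=1.2, b_fobt=0.8):
    """Predicted AAC risk with time-dependent test-result covariates."""
    col_on = months_since_abnormal_col is not None and months_since_abnormal_col <= 12
    fobt_on = months_since_pos_fobt is not None and months_since_pos_fobt <= 3
    eta = b0 + b_age * age + b_col * col_on + b_fobt * fobt_on
    return 1 - math.exp(-math.exp(eta))       # cloglog inverse link

# Risk is elevated while a recent abnormal colonoscopy is "switched on":
print(aac_risk(60, months_since_abnormal_col=6, months_since_pos_fobt=None))
print(aac_risk(60, months_since_abnormal_col=None, months_since_pos_fobt=None))
```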
NASA Astrophysics Data System (ADS)
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as a nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component signals are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local models using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high-voltage switch structure. The accuracy of the proposed method is quantified in both numerical and experimental applications using the defined error indices. Both the numerical and experimental results show that the proposed method can effectively update the nonlinear joint model.
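Schematically, the updating step reduces to minimizing the instantaneous-residual objective over the bilinear stiffness parameters with a simulated annealing search. The sketch below uses SciPy's dual_annealing with a toy stand-in for the AMD-decomposed model response; the "measured" values and the surrogate are invented:

```python
import numpy as np
from scipy.optimize import dual_annealing

f_meas = np.array([2.0, 1.6]); a_meas = np.array([1.0, 1.4])   # measured (made up)

def model_instantaneous(params):
    """Toy surrogate for 'simulate response, AMD-decompose, demodulate'."""
    k1, k2 = params
    return (np.array([np.sqrt(k1), np.sqrt(k2)]),
            np.array([1 / np.sqrt(k1), 1 / np.sqrt(k2)]))

def objective(params):
    # Residuals of instantaneous frequencies and amplitudes, as in the paper
    f, a = model_instantaneous(params)
    return np.sum((f - f_meas) ** 2) + np.sum((a - a_meas) ** 2)

res = dual_annealing(objective, bounds=[(0.1, 10.0), (0.1, 10.0)], seed=0)
print(res.x)   # updated bilinear stiffness values
```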
NASA Astrophysics Data System (ADS)
Li, N.; Kinzelbach, W.; Li, H.; Li, W.; Chen, F.; Wang, L.
2017-12-01
Data assimilation techniques are widely used in hydrology to improve the reliability of hydrological models and to reduce model predictive uncertainties, which provides critical information for decision makers in water resources management. This study aims to evaluate a data assimilation system for the Guantao groundwater flow model coupled with a one-dimensional soil column simulation (Hydrus 1D), using an Unbiased Ensemble Square Root Filter (UnEnSRF) originating from the Ensemble Kalman Filter (EnKF) to update parameters and states, separately or simultaneously. To simplify the coupling between the unsaturated and saturated zones, a linear relationship obtained from analyzing inputs to and outputs from Hydrus 1D is applied in the data assimilation process. Unlike the EnKF, the UnEnSRF updates the parameter ensemble mean and the ensemble perturbations separately. To keep the ensemble filter working well during the data assimilation, two factors are introduced in the study. One, the damping factor, dampens the update amplitude of the posterior ensemble mean to avoid nonrealistic values. The other, the inflation factor, relaxes the posterior ensemble perturbations toward the prior to avoid filter inbreeding problems. The sensitivities of the two factors are studied and their favorable values for the Guantao model are determined. The appropriate observation error and ensemble size were also determined to facilitate the further analysis. This study demonstrated that the data assimilation of both model parameters and states gives a smaller model prediction error but with larger uncertainty, while the data assimilation of only model states provides a smaller predictive uncertainty but with a larger model prediction error. Data assimilation in a groundwater flow model improves model prediction and at the same time makes the model converge to the true parameters, which provides a successful basis for applications in real-time modelling or real-time control strategies in groundwater resources management.
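The two factors can be written down compactly. In the sketch below (ensemble members as columns), alpha is the damping factor applied to the mean increment and rho the inflation factor pulling the analysis perturbations back toward the prior ones; the increment and analysis perturbations are assumed to come from an UnEnSRF analysis step not shown:

```python
import numpy as np

def apply_factors(ens_prior, mean_increment, anal_perturb, alpha=0.5, rho=0.9):
    """Damped mean update plus perturbations relaxed toward the prior."""
    mean_prior = ens_prior.mean(axis=1, keepdims=True)
    prior_perturb = ens_prior - mean_prior
    mean_post = mean_prior + alpha * mean_increment             # damping factor
    pert_post = rho * prior_perturb + (1 - rho) * anal_perturb  # inflation factor
    return mean_post + pert_post                                # rebuilt ensemble

rng = np.random.default_rng(0)
ens = rng.normal(0.0, 1.0, (3, 20))                  # 3 parameters, 20 members
inc = np.array([[0.4], [-0.2], [0.1]])               # assumed mean increment
anal = 0.5 * (ens - ens.mean(axis=1, keepdims=True)) # assumed analysis perturbations
print(apply_factors(ens, inc, anal).shape)           # (3, 20)
```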
Nonlinear and Digital Man-machine Control Systems Modeling
NASA Technical Reports Server (NTRS)
Mekel, R.
1972-01-01
An adaptive modeling technique is examined by which controllers can be synthesized to provide corrective dynamics to a human operator's mathematical model in closed loop control systems. The technique utilizes a class of Liapunov functions formulated for this purpose, Liapunov's stability criterion, and a model-reference system configuration. The Liapunov function is formulated to possess variable characteristics to take into consideration the identification dynamics. The time derivative of the Liapunov function generates the identification and control laws for the mathematical model system. These laws permit the realization of a controller which updates the human operator's mathematical model parameters so that model and human operator produce the same response when subjected to the same stimulus. A very useful feature is the development of a digital computer program which is easily implemented and modified concurrently with experimentation. The program permits the modeling process to interact with the experimentation process in a mutually beneficial way.
Simulated and observed 2010 floodwater elevations in the Pawcatuck and Wood Rivers, Rhode Island
Zarriello, Phillip J.; Straub, David E.; Smith, Thor E.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term U.S. Geological Survey streamgages in Rhode Island. In response to this flood, hydraulic models of the Pawcatuck River (26.9 miles) and Wood River (11.6 miles) were updated from the most recent approved U.S. Department of Homeland Security-Federal Emergency Management Agency flood insurance study (FIS) to simulate water-surface elevations (WSEs) for specified flows and boundary conditions. The hydraulic models were updated to the Hydrologic Engineering Center-River Analysis System (HEC-RAS) using steady-state simulations and incorporate new field-survey data at structures, high-resolution land-surface elevation data, and updated flood flows from a related study. The models were used to simulate the 0.2-percent annual exceedance probability (AEP) flood, which is the AEP determined for the 2010 flood in the Pawcatuck and Wood Rivers. The simulated WSEs were compared to high-water mark (HWM) elevation data obtained in a related study following the March-April 2010 flood, which included 39 HWMs along the Pawcatuck River and 11 HWMs along the Wood River. The 2010 peak flow generally was larger than the 0.2-percent AEP flow, which, in part, resulted in the FIS and updated-model WSEs being lower than the 2010 HWMs. The 2010 HWMs for the Pawcatuck River averaged about 1.6 feet (ft) higher than the 0.2-percent AEP WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The 2010 HWMs for the Wood River averaged about 1.3 ft higher than the WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The improved agreement of the updated simulated water elevations with the observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.
Mindfulness meditation practice and executive functioning: Breaking down the benefit.
Gallant, Sara N
2016-02-01
This paper focuses on evidence for mindfulness meditation-related benefits to executive functioning, processes important for much of human volitional behaviour. Miyake et al. (2000) have shown that executive functions can be fractionated into three distinct domains including inhibition, working memory updating, and mental set shifting. Considering these separable domains, it is important to determine whether the effects of mindfulness can generalize to all three sub-functions or are specific to certain domains. To address this, the current review applied Miyake et al.'s (2000) fractionated model of executive functioning to the mindfulness literature. Empirical studies assessing the benefits of mindfulness to measures tapping the inhibition, updating, and shifting components of executive functioning were examined. Results suggest a relatively specific as opposed to general benefit resulting from mindfulness, with consistent inhibitory improvement, but more variable advantages to the updating and shifting domains. Recommendations surrounding application of mindfulness practice and future research are discussed. Copyright © 2016 Elsevier Inc. All rights reserved.
Donada, Marc; Della Mea, Vincenzo; Cumerlato, Megan; Rankin, Nicole; Madden, Richard
2018-01-01
The International Classification of Health Interventions (ICHI) is a member of the WHO Family of International Classifications, being developed to provide a common tool for reporting and analysing health interventions for statistical purposes. A web-based platform for classification development and update has been specifically developed to support the initial development step and then, after final approval, the continuous revision and update of the classification. The platform provides features for classification editing, versioning, comment management and URI identifiers. During the last 12 months it has been used for developing the ICHI Beta version, replacing the previous process based on the exchange of Excel files. As of November 2017, 90 users have provided input to the development of the classification, which has resulted in 2913 comments and 2971 changes in the classification since June 2017. Further work includes the development of a URI API for machine-to-machine communication, following the model established for ICD-11.
Niebauer, Christopher Lee
2004-12-01
Previous research found that mixed handers (i.e., those who are more ambidextrous) were more likely than strong handers to update their beliefs (Niebauer, Aselage, & Schutte, 2002). It was assumed that this was due to greater degrees of communication between the two cerebral hemispheres in mixed handers. Niebauer and Garvey (2004) made connections between this model of updating beliefs and metacognitive processing. The current work proposes that variations in interhemispheric interaction (as measured by degree of handedness) contribute to differences in consciousness, specifically when consciousness is used in rumination versus the metacognitive task of self-reflection. Using the Rumination-Reflection Questionnaire (Trapnell & Campbell, 1999), predictions were supported such that strong handedness was associated with self-rumination, whereas mixed handedness was associated with increased self-reflection (p values < .01, N = 255). James's (1890) concept of the "fringe of consciousness" is used to make connections between metacognition, updating beliefs, and self-reflection. Several studies are reviewed suggesting that mixed handers experience fringe consciousness to a greater degree than strong handers.
Davis, Brett; Van Wagtendonk, Jan W.; Beck, Jen; van Wagtendonk, Kent A.
2009-01-01
Surface fuels data are of critical importance for supporting fire incident management, risk assessment, and fuel management planning, but the development of surface fuels data can be expensive and time consuming. The data development process is extensive, generally beginning with acquisition of remotely sensed spatial data such as aerial photography or satellite imagery (Keane and others 2001). The spatial vegetation data are then crosswalked to a set of fire behavior fuel models that describe the available fuels (the burnable portions of the vegetation) (Anderson 1982, Scott and Burgan 2005). Finally, spatial fuels data are used as input to tools such as FARSITE and FlamMap to model current and potential fire spread and behavior (Finney 1998, Finney 2006). The capture date of the remotely sensed data defines the period for which the vegetation, and, therefore, fuels, data are most accurate. The more time that passes after the capture date, the less accurate the data become due to vegetation growth and processes such as fire. Subsequently, the results of any fire simulation based on these data become less accurate as the data age. Because of the amount of labor and expense required to develop these data, keeping them updated may prove to be a challenge. In this article, we describe the Sierra Nevada Fuel Succession Model, a modeling tool that can quickly and easily update surface fuel models with a minimum of additional input data. Although it was developed for use by Yosemite, Sequoia, and Kings Canyon National Parks, it is applicable to much of the central and southern Sierra Nevada. Furthermore, the methods used to develop the model have national applicability.
On-line identification of fermentation processes for ethanol production.
Câmara, M M; Soares, R M; Feital, T; Naomi, P; Oki, S; Thevelein, J M; Amaral, M; Pinto, J C
2017-07-01
A strategy for monitoring fermentation processes, specifically simultaneous saccharification and fermentation (SSF) of corn mash, was developed. The strategy covered the development and use of a first-principles, semimechanistic, unstructured process model based on the major kinetic phenomena, along with mass and energy balances. The model was then used as a reference model within an identification procedure capable of running on-line. The on-line identification procedure consists of updating the reference model through the estimation of corrective parameters for certain reaction rates using the most recent process measurements. The strategy makes use of standard laboratory measurements for sugar quantification and in situ temperature and liquid-level data. The model, along with the on-line identification procedure, has been tested against real industrial data and has been able to accurately predict the main variables of operational interest, i.e., the state variables and their dynamics, and key process indicators. The results demonstrate that the strategy is capable of monitoring, in real time, this complex industrial biomass fermentation. This new tool provides great support for decision-making and opens a new range of opportunities for industrial optimization.
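A minimal sketch of the corrective-parameter idea: a multiplicative factor on one reference reaction rate is re-estimated by least squares from the latest window of lab measurements, and the reference model then runs with the corrected rate. The Monod-type rate and all numbers are invented for illustration:

```python
import numpy as np

def reference_rate(sugar):
    """Nominal sugar-consumption rate from the reference model (stand-in)."""
    return 0.5 * sugar / (1.0 + sugar)

def corrective_factor(sugar_meas, rate_meas):
    """Fit rate_meas ~ k * reference_rate(sugar_meas) by least squares."""
    r_nom = reference_rate(np.asarray(sugar_meas))
    return float(np.dot(r_nom, rate_meas) / np.dot(r_nom, r_nom))

recent_sugar = [8.0, 5.0, 3.0, 1.5]        # latest lab measurements
recent_rate = [0.38, 0.34, 0.28, 0.20]     # observed consumption rates
k = corrective_factor(recent_sugar, recent_rate)
print(f"corrected rate at S=4 g/L: {k * reference_rate(4.0):.3f}")
```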
Cohn, Neil; Kutas, Marta
2017-01-01
Visual narratives sometimes depict successive images with different characters in the same physical space; corpus analysis has revealed that this occurs more often in Japanese manga than American comics. We used event-related brain potentials to determine whether comprehension of "visual narrative conjunctions" invokes not only incremental mental updating as traditionally assumed, but also, as we propose, "grammatical" combinatoric processing. We thus crossed (non)/conjunction sequences with character (in)/congruity. Conjunctions elicited a larger anterior negativity (300-500 ms) than nonconjunctions, regardless of congruity, implicating "grammatical" processes. Conjunction and incongruity both elicited larger P600s (500-700 ms), indexing updating. Both conjunction effects were modulated by participants' frequency of reading manga while growing up. Greater anterior negativity in frequent manga readers suggests more reliance on combinatoric processing; larger P600 effects in infrequent manga readers suggest more resources devoted to mental updating. As in language comprehension, it seems that processing conjunctions in visual narratives is not just mental updating but also partly grammatical, conditioned by comic readers' experience with specific visual narrative structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Juxiu Tong; Bill X. Hu; Hai Huang
2014-03-01
With the growing importance of water resources worldwide, remediation of anthropogenic contamination caused by reactive solute transport becomes ever more important. A good understanding of reactive rate parameters, such as kinetic parameters, is the key to accurately predicting reactive solute transport processes and designing corresponding remediation schemes. For modeling reactive solute transport, it is very difficult to estimate chemical reaction rate parameters due to the complexity of chemical reaction processes and the limited available data. To obtain the reactive rate parameters for modeling urea hydrolysis transport and to improve predictions of the chemical concentrations, we developed a data assimilation method based on the ensemble Kalman filter (EnKF) to calibrate reactive rate parameters for modeling urea hydrolysis transport in a synthetic one-dimensional column at laboratory scale and to update the model prediction. We applied a constrained EnKF method to impose constraints, based on their physical meanings, on the updated reactive rate parameters and the predicted solute concentrations after the data assimilation calibration. From the study results we concluded that the data assimilation method via the EnKF could efficiently improve the chemical reactive rate parameters and, at the same time, improve the solute concentration prediction. The more data were assimilated, the more accurate the reactive rate parameters and the concentration prediction became. The filter divergence problem was also solved in this study.
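The flavor of the constrained update can be shown with a one-parameter toy problem: a standard perturbed-observation EnKF analysis step followed by projection of the rate constant onto its physical range. The decay model, bounds, and noise levels are all assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
k_ens = rng.uniform(0.05, 0.5, N)              # prior rate-constant ensemble

def forward(k, c0=1.0, t=1.0):
    """Toy first-order decay standing in for the transport model."""
    return c0 * np.exp(-k * t)

obs, obs_sd = 0.70, 0.02                       # measured concentration and noise
y_ens = forward(k_ens)                         # predicted observations
obs_pert = obs + rng.normal(0, obs_sd, N)      # perturbed observations

cov_ky = np.cov(k_ens, y_ens)[0, 1]
gain = cov_ky / (np.var(y_ens, ddof=1) + obs_sd ** 2)   # scalar Kalman gain
k_ens = k_ens + gain * (obs_pert - y_ens)      # EnKF analysis step
k_ens = np.clip(k_ens, 1e-4, 1.0)              # constraint: physical range
print("posterior mean rate:", k_ens.mean())    # moves toward ln(1/0.70) ~ 0.357
```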
Application of a computational decision model to examine acute drug effects on human risk taking.
Lane, Scott D; Yechiam, Eldad; Busemeyer, Jerome R
2006-05-01
In 3 previous experiments, high doses of alcohol, marijuana, and alprazolam acutely increased risky decision making by adult humans in a 2-choice (risky vs. nonrisky) laboratory task. In this study, a computational modeling analysis known as the expectancy valence model (J. R. Busemeyer & J. C. Stout, 2002) was applied to individual-participant data from these studies, for the highest administered dose of all 3 drugs and corresponding placebo doses, to determine changes in decision-making processes that may be uniquely engendered by each drug. The model includes 3 parameters: responsiveness to rewards and losses (valence or motivation); the rate of updating expectancies about the value of risky alternatives (learning/memory); and the consistency with which trial-by-trial choices match expected outcomes (sensitivity). Parameter estimates revealed 3 key outcomes: Alcohol increased responsiveness to risky rewards and decreased responsiveness to risky losses (motivation) but did not alter expectancy updating (learning/memory); both marijuana and alprazolam produced increases in risk taking that were related to learning/memory but not motivation; and alcohol and marijuana (but not alprazolam) produced more random response patterns that were less consistently related to expected outcomes on the 2 choices. No significant main effects of gender or dose by gender interactions were obtained, but 2 dose by gender interactions approached significance. These outcomes underscore the utility of using a computational modeling approach to deconstruct decision-making processes and thus better understand drug effects on risky decision making in humans.
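For reference, a compact simulation of the three-parameter structure the abstract describes (valence weight, expectancy learning rate, choice consistency). The payoff scheme and the exact valence formula vary across papers, so the ones below are illustrative rather than the authors':

```python
import numpy as np

def simulate(trials=200, w=0.4, phi=0.2, c=1.0, seed=3):
    """Expectancy-valence-style agent on a 2-choice (safe vs. risky) task."""
    rng = np.random.default_rng(seed)
    E = np.zeros(2)                              # expectancies for the 2 options
    for t in range(trials):
        theta = (t / 10.0) ** c                  # trial-dependent sensitivity
        p_risky = 1 / (1 + np.exp(-theta * (E[1] - E[0])))   # softmax, 2 options
        choice = int(rng.random() < p_risky)     # 1 = risky option
        gain = rng.choice([2.0, 0.0]) if choice else 1.0
        loss = rng.choice([0.0, 3.0]) if choice else 0.0
        v = (1 - w) * gain - w * loss            # valence of the outcome
        E[choice] += phi * (v - E[choice])       # delta-rule expectancy update
    return E

print(simulate())
```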
Yi, Wei; Sheng-de, Wu; Lian-Ju, Shen; Tao, Lin; Da-Wei, He; Guang-Hui, Wei
2018-05-24
To investigate whether management of undescended testis (UDT) may be improved with educational updates and a new transferring model among referring providers (RPs). The ages at orchidopexy performed in the Children's Hospital of Chongqing Medical University were reviewed. We then proposed educational updates and a new transferring model among RPs. The ages at orchidopexy performed after our intervention were collected. Data were represented graphically, and the chi-square test for trend was used for statistical analysis. A total of 1543 orchidopexies were performed. The median age at orchidopexy did not match the target age of 6-12 months in any subsequent year. A survey of the RPs showed that 48.85% of them recommended an age below 12 months. However, only 25.50% of them would directly make a surgical referral to pediatric surgery at this point. After we proposed the educational updates, tracking the age at orchidopexy revealed a statistically significant downward trend. The management of undescended testis may be improved with educational updates and a new transferring model among primary healthcare practitioners.
Rothwell, Joseph A; Perez-Jimenez, Jara; Neveu, Vanessa; Medina-Remón, Alexander; M'hiri, Nouha; García-Lobato, Paula; Manach, Claudine; Knox, Craig; Eisner, Roman; Wishart, David S; Scalbert, Augustin
2013-01-01
Polyphenols are a major class of bioactive phytochemicals whose consumption may play a role in the prevention of a number of chronic diseases such as cardiovascular diseases, type II diabetes and cancers. Phenol-Explorer, launched in 2009, is the only freely available web-based database on the content of polyphenols in food and their in vivo metabolism and pharmacokinetics. Here we report the third release of the database (Phenol-Explorer 3.0), which adds data on the effects of food processing on polyphenol contents in foods. Data on >100 foods, covering 161 polyphenols or groups of polyphenols before and after processing, were collected from 129 peer-reviewed publications and entered into new tables linked to the existing relational design. The effect of processing on polyphenol content is expressed in the form of retention factor coefficients, or the proportion of a given polyphenol retained after processing, adjusted for change in water content. The result is the first database on the effects of food processing on polyphenol content and, following the model initially defined for Phenol-Explorer, all data may be traced back to original sources. The new update will allow polyphenol scientists to more accurately estimate polyphenol exposure from dietary surveys.
NASA Technical Reports Server (NTRS)
Newman, C. M.
1977-01-01
The updated consumables flight planning worksheet (CFPWS) is documented. The update includes: (1) additional consumables: ECLSS ammonia, APU propellant, HYD water; (2) additional on orbit activity for development flight instrumentation (DFI); (3) updated use factors for all consumables; and (4) sources and derivations of the use factors.
Artificial neural networks and approximate reasoning for intelligent control in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
A method is introduced for learning to refine the control rules of approximate reasoning-based controllers. A reinforcement-learning technique is used in conjunction with a multi-layer neural network model of an approximate reasoning-based controller. The model learns by updating its prediction of the physical system's behavior. The model can use the control knowledge of an experienced operator and fine-tune it through the process of learning. Some of the space domains suitable for applications of the model such as rendezvous and docking, camera tracking, and tethered systems control are discussed.
Multi-Target Tracking via Mixed Integer Optimization
2016-05-13
solving these two problems separately; however, few algorithms attempt to solve them simultaneously and even fewer utilize optimization. In this paper we introduce a new mixed integer optimization (MIO) model which solves the data association and trajectory estimation problems simultaneously by minimizing ... Kalman filter [5], which updates the trajectory estimates before the algorithm progresses forward to the next scan. This process repeats sequentially.
OSATE Overview & Community Updates
2015-02-15
Delange, Julien
Topics: main language capabilities; modeling patterns and model samples for beginners; Error-Model examples; EMV2 model constructs; demonstration of tools; case ...
Description and evaluation of the Community Multiscale Air ...
The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
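The two-step feature extraction can be sketched as follows; a tiny stand-in network in PyTorch, where the stride-1 convolution keeps the spatial resolution so window coordinates map directly onto the shared feature map:

```python
import torch
import torch.nn as nn

# Step 1 runs the convolutional stage once over the whole search region;
# step 2 evaluates only the cheap fully-connected stage per candidate
# window by cropping the shared feature map. Sizes are illustrative.

conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
fc = nn.Sequential(nn.Flatten(), nn.Linear(8 * 16 * 16, 32))

search_region = torch.randn(1, 3, 64, 64)
fmap = conv(search_region)                       # step 1: convolutions, once

candidates = [(0, 0), (8, 16), (32, 32)]         # top-left corners, 16x16 crops
feats = []
for (r, c) in candidates:                        # step 2: FC stage per window
    crop = fmap[:, :, r:r + 16, c:c + 16]
    feats.append(fc(crop))
print(torch.cat(feats).shape)                    # (3, 32)
```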
Metareasoning and Social Evaluations in Cognitive Agents
NASA Astrophysics Data System (ADS)
Pinyol, Isaac; Sabater-Mir, Jordi
Reputation mechanisms have been recognized as one of the key technologies when designing multi-agent systems. They are especially relevant in complex open environments, as a non-centralized mechanism to control interactions among agents. Cognitive agents tackling such complex societies must use reputation information not only for selecting partners to interact with, but also in metareasoning processes that change their reasoning rules. This is the focus of this paper. We argue for the necessity of allowing, as cognitive systems designers, a certain degree of freedom in the reasoning rules of the agents, and we describe cognitive approaches to agency that support this idea. Furthermore, taking as a base the computational reputation model Repage and its integration in a BDI architecture, we use these ideas to specify metarules and processes that modify the reasoning paths of the agent at run-time. In concrete terms, we propose a metarule to update the link between Repage and the belief base, and a metarule and a process to update an axiom incorporated in the belief logic of the agent. Regarding this last issue, we also provide empirical results that show the evolution of agents that use it.
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation applied to the lateral deflection of an elastic beam, from on-board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection, is contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance has to be updated in online implementation. This is in contrast to the computational approach of EKF methods, which arises from local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal nonlinear interpolation between observations are also highlighted. In these methods, the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D
2013-07-01
To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
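A schematic of the respiratory-surrogate trigger: monitor a statistic of the surrogate signals against control limits established during model building and refit only when it escapes them. The statistic, window length, and 3-sigma limit below are assumptions, not the paper's definitions:

```python
import numpy as np

def needs_update(window, baseline_mean, baseline_sd, n_sigma=3.0):
    """Flag a model update when the windowed statistic leaves control limits."""
    stat = np.mean(window)
    return abs(stat - baseline_mean) > n_sigma * baseline_sd

rng = np.random.default_rng(4)
baseline = rng.normal(0.0, 1.0, 300)            # surrogate signal during model fit
mu, sd = baseline.mean(), baseline.std(ddof=1)

drifted = rng.normal(2.5, 1.0, 30)              # breathing-pattern shift
print(needs_update(drifted, mu, sd / np.sqrt(30)))   # True -> refit the model
```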
SynTrack: DNA Assembly Workflow Management (SynTrack) v2.0.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
MENG, XIANWEI; SIMIRENKO, LISA
2016-12-01
SynTrack is a dynamic, workflow-driven data management system that tracks the DNA build process: management of the hierarchical relationships of the DNA fragments; monitoring of process tasks for the assembly of multiple DNA fragments into final constructs; creation of vendor order forms with selectable building blocks; organization of plate layout barcodes for vendor/PCR/fusion/chewback/bioassay/glycerol/master plate maps (default/condensed); creation or updating of Pre-Assembly/Assembly process workflows with selected building blocks; generation of Echo pooling instructions based on plate maps; tracking of building block orders received and final assemblies for delivery; bulk updating of colony or PCR amplification information, fusion PCR and chewback results; updating of QA/QC outcomes with .csv and .xlsx template files; re-work assembly workflows enabled before and after sequencing validation; and tracking of plate/well data changes and status updates, with reporting of master plate status and QC outcomes.
Updating the Behavior Engineering Model.
ERIC Educational Resources Information Center
Chevalier, Roger
2003-01-01
Considers Thomas Gilbert's Behavior Engineering Model as a tool for systematically identifying barriers to individual and organizational performance. Includes a detailed case study and a performance aid that incorporates gap analysis, cause analysis, and force field analysis to update the original model. (Author/LRW)
Optimization Control of the Color-Coating Production Process for Model Uncertainty
He, Dakuo; Wang, Zhengsong; Yang, Le; Mao, Zhizhong
2016-01-01
Optimized control of the color-coating production process (CCPP) aims at reducing production costs and improving economic efficiency while meeting quality requirements. However, because optimization control of the CCPP is hampered by model uncertainty, a strategy that considers model uncertainty is proposed. Previous work has introduced a mechanistic model of CCPP based on process analysis to simulate the actual production process and generate process data. The partial least squares method is then applied to develop predictive models of film thickness and economic efficiency. To manage the model uncertainty, the robust optimization approach is introduced to improve the feasibility of the optimized solution. Iterative learning control is then utilized to further refine the model uncertainty. The constrained film thickness is transformed into one of the tracked targets to overcome the drawback that traditional iterative learning control cannot address constraints. The goal setting of economic efficiency is updated continuously according to the film thickness setting until this reaches its desired value. Finally, fuzzy parameter adjustment is adopted to ensure that the economic efficiency and film thickness converge rapidly to their optimized values under the constraint conditions. The effectiveness of the proposed optimization control strategy is validated by simulation results. PMID:27247563
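The iterative learning control refinement mentioned above follows the classic batch-to-batch law u_{k+1} = u_k + L·e_k; here is a toy sketch with an invented static plant and gain, not the paper's process model:

```python
import numpy as np

def plant(u):
    """Unknown-to-the-controller static process (invented)."""
    return 0.8 * u + 0.5

target = np.full(10, 2.0)                  # desired film-thickness profile
u = np.zeros(10)
for k in range(25):                        # one iteration per production run
    y = plant(u)
    e = target - y                         # tracking error of this run
    u = u + 0.6 * e                        # ILC update with gain L = 0.6
print(np.round(plant(u), 4))               # converges to the target profile
```

Convergence here follows because the error contracts by a factor |1 - 0.8 * 0.6| = 0.52 per run.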
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
Li, Yan; Wang, Dejun; Zhang, Shaoyi
2014-01-01
Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure the successful implementing of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed as a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters. PMID:24634612
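A minimal sketch of the second-level idea: a Gaussian-kernel, Kriging-style interpolator is trained on a handful of runs of an expensive model and then queried cheaply. The one-dimensional test function and correlation parameter are illustrative assumptions, not the condensed bridge FEM:

```python
import numpy as np

# Kriging-style (Gaussian RBF) surrogate sketch. The "expensive" model here
# is a toy function standing in for a condensed FE analysis.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x          # placeholder for an FE solve

X = np.linspace(0.0, 2.0, 8)                # design sites (training runs)
y = expensive_model(X)

theta = 4.0                                 # assumed correlation parameter
def corr(a, b):
    return np.exp(-theta * (a[:, None] - b[None, :])**2)

R = corr(X, X) + 1e-10 * np.eye(len(X))     # correlation matrix (regularized)
w = np.linalg.solve(R, y)                   # weights of the predictor

def surrogate(x):
    return corr(np.atleast_1d(x), X) @ w    # cheap prediction

print(surrogate(1.3), expensive_model(1.3)) # surrogate vs true model
```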
A Study of Upgraded Phenolic Curing for RSRM Nozzle Rings
NASA Technical Reports Server (NTRS)
Smartt, Ziba
2000-01-01
A thermochemical cure model for predicting temperature and degree of cure profiles in curing phenolic parts was developed, validated and refined over several years. The model supports optimization of cure cycles and allows input of properties based upon the types of material and the process by which these materials are used to make nozzle components. The model has been refined to use sophisticated computer graphics to demonstrate the changes in temperature and degree of cure during the curing process. The effort discussed in the paper will be the conversion from an outdated solid modeling input program and SINDA analysis code to an integrated solid modeling and analysis package (I-DEAS solid model and TMG). Also discussed will be the incorporation of updated material properties obtained during full scale curing tests into the cure models and the results for all the Reusable Solid Rocket Motor (RSRM) nozzle rings.
Simultaneous processing of photographic and accelerometer array data from sled impact experiments
NASA Astrophysics Data System (ADS)
Ash, M. E.
1982-12-01
A Quaternion-Kalman filter model is derived to simultaneously analyze accelerometer array and photographic data from sled impact experiments. Formulas are given for the quaternion representation of rotations, the propagation of dynamical states and their partial derivatives, the observables and their partial derivatives, and the Kalman filter update of the state given the observables. The observables are accelerometer and tachometer velocity data of the sled relative to the track, linear accelerometer array and photographic data of the subject relative to the sled, and ideal angular accelerometer data. The quaternion constraints enter through perfect constraint observations and normalization after a state update. Lateral and fore-aft impact tests are analyzed with FORTRAN IV software written using the formulas of this report.
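Two of the operations the report derives formulas for, rotating a vector by a unit quaternion and re-normalizing the quaternion after a filter update, can be sketched as follows (the numerical values are illustrative, not from the report):

```python
import numpy as np

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q = [w, x, y, z] via q * v * q^-1,
    # expressed as the equivalent rotation matrix.
    w, x, y, z = q
    r = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return r @ v

# After a Kalman state update the quaternion drifts off unit length,
# so it is re-normalized, as the report prescribes.
q = np.array([0.99, 0.02, 0.01, 0.03])   # post-update quaternion (not unit)
q = q / np.linalg.norm(q)
print(quat_rotate(q, np.array([1.0, 0.0, 0.0])))
```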
Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia
2018-04-15
Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. Contact: sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
A New Biogeochemical Computational Framework Integrated within the Community Land Model
NASA Astrophysics Data System (ADS)
Fang, Y.; Li, H.; Liu, C.; Huang, M.; Leung, L.
2012-12-01
Terrestrial biogeochemical processes, particularly carbon cycle dynamics, have been shown to significantly influence regional and global climate changes. Modeling terrestrial biogeochemical processes within the land component of Earth System Models such as the Community Land Model (CLM), however, faces three major challenges: 1) extensive efforts in modifying modeling structures and rewriting computer programs to incorporate biogeochemical processes with increasing complexity, 2) expensive computational cost to solve the governing equations due to numerical stiffness inherited from large variations in the rates of biogeochemical processes, and 3) lack of an efficient framework to systematically evaluate various mathematical representations of biogeochemical processes. To address these challenges, we introduce a new computational framework to incorporate biogeochemical processes into CLM, which consists of a new biogeochemical module with a generic algorithm and reaction database. New and updated biogeochemical processes can be incorporated into CLM without significant code modification. To address the stiffness issue, algorithms and criteria will be developed to identify fast processes, which will be replaced with algebraic equations and decoupled from slow processes. This framework can serve as a generic and user-friendly platform to test different mechanistic process representations and datasets and gain new insight into the behavior of terrestrial ecosystems in response to climate change in a systematic way.
Seismic hazard in the eastern United States
Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison
2015-01-01
The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.
Object-Oriented MDAO Tool with Aeroservoelastic Model Tuning Capability
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Li, Wesley; Lung, Shun-fat
2008-01-01
An object-oriented multi-disciplinary analysis and optimization (MDAO) tool has been developed at the NASA Dryden Flight Research Center to automate the design and analysis process and leverage existing commercial as well as in-house codes to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic and hypersonic aircraft. Once the structural analysis discipline is finalized and integrated completely into the MDAO process, other disciplines such as aerodynamics and flight controls will be integrated as well. Simple and efficient model tuning capabilities based on an optimization problem formulation are successfully integrated with the MDAO tool. Better synchronization of all phases of experimental testing (ground and flight), analytical model updating, high-fidelity simulation for model validation, and integrated design may reduce uncertainties in the aeroservoelastic model and increase flight safety.
FORCARB2: An updated version of the U.S. Forest Carbon Budget Model
Linda S. Heath; Michael C. Nichols; James E. Smith; John R. Mills
2010-01-01
FORCARB2, an updated version of the U.S. FORest CARBon Budget Model (FORCARB), produces estimates of carbon stocks and stock changes for forest ecosystems and forest products at 5-year intervals. FORCARB2 includes a new methodology for carbon in harvested wood products, updated initial inventory data, a revised algorithm for dead wood, and now includes public forest...
US EPA's Pathogen Equivalency Committee (PEC) has updated the evaluation criteria it uses to make recommendations of equivalency (to processes acceptable under 40CFR503) on innovative or alternative sludge pathogen reduction processes. These criteria will be presented along with ...
75 FR 66114 - National Institute of Mental Health; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-27
... Strategic Plan Updating Process of the Interagency Autism Coordinating Committee (IACC). The purpose of the Subcommittee meeting is to plan the process for updating the IACC Strategic Plan for Autism Spectrum Disorder.... Name of Committee: Interagency Autism Coordinating Committee (IACC). Type of meeting: Subcommittee for...
75 FR 59731 - National Institute of Mental Health; Notice of Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-28
... Strategic Plan Updating Process of the Interagency Autism Coordinating Committee (IACC). The purpose of the Subcommittee meeting is to plan the process for updating the IACC Strategic Plan for Autism Spectrum Disorder.... Name of Committee: Interagency Autism Coordinating Committee (IACC). Type of meeting: Subcommittee for...
Updating of the Curricula for Office Administration and Secretarial Science. Final Report.
ERIC Educational Resources Information Center
Devin, Carl
Changes are proposed for updating the curricula for office administration and secretarial science, the first being a name change for the department to Information/Word Processing or Administrative Sciences. Curricula (required courses and electives) are suggested for information/word processing--keyboarding specialist (one year certificate),…
Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.
Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y
2007-01-01
Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data of an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reducing bias and the length of credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied. It shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood, after a given treatment, if data are collected on the treated neonate.
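A minimal sketch of a sequential particle-style update as concentration measurements arrive; the one-compartment clearance model, prior, noise level, and measurement values are illustrative placeholders, not the published pharmacokinetic model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-compartment model: concentration after time t for clearance k.
def predict_conc(k, t, c0=10.0):
    return c0 * np.exp(-k * t)

n = 5000
k_particles = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n)  # prior
weights = np.full(n, 1.0 / n)

# Sequentially assimilate measurements (time in h, concentration in mg/L).
for t_obs, c_obs in [(2.0, 8.1), (6.0, 5.4), (12.0, 3.0)]:
    lik = np.exp(-0.5 * ((c_obs - predict_conc(k_particles, t_obs)) / 0.5)**2)
    weights *= lik
    weights /= weights.sum()
    # Resample to avoid weight degeneracy (systematic resampling would be
    # preferred in practice; multinomial keeps the sketch short).
    idx = rng.choice(n, size=n, p=weights)
    k_particles, weights = k_particles[idx], np.full(n, 1.0 / n)

print(k_particles.mean())   # posterior mean clearance after three updates
```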
Mechanistic modeling of reactive soil nitrogen emissions across agricultural management practices
NASA Astrophysics Data System (ADS)
Rasool, Q. Z.; Miller, D. J.; Bash, J. O.; Venterea, R. T.; Cooter, E. J.; Hastings, M. G.; Cohan, D. S.
2017-12-01
The global reactive nitrogen (N) budget has increased by a factor of 2-3 from pre-industrial levels. This increase is especially pronounced in highly N-fertilized agricultural regions in summer. The reactive N emissions from soil to atmosphere can be in reduced (NH3) or oxidized (NO, HONO, N2O) forms, depending on complex biogeochemical transformations of soil N reservoirs. Air quality models like CMAQ typically neglect soil emissions of HONO and N2O. Previously, soil NO emissions estimated by models like CMAQ remained parametric and inconsistent with soil NH3 emissions. Thus, there is a need to represent more mechanistically and consistently the soil N processes that lead to reactive N emissions to the atmosphere. Our updated approach estimates soil NO, HONO and N2O emissions by incorporating detailed agricultural fertilizer inputs from EPIC, and CMAQ-modeled N deposition, into the soil N pool. EPIC addresses the nitrification, denitrification and volatilization rates along with soil N pools for agricultural soils. Suitable updates will also be made to account for factors, such as nitrite (NO2-) accumulation, that are not addressed in EPIC. The NO and N2O emissions from nitrification and denitrification are computed mechanistically using the N sub-model of DAYCENT. These mechanistic definitions use soil water content, temperature, NH4+ and NO3- concentrations, gas diffusivity and labile C availability as dependent parameters at various soil layers. Soil HONO emissions, found to be most probable under high NO2- availability, will be based on observed ratios of HONO to NO emissions under different soil moistures, pH values and soil types. The updated scheme will utilize field-specific soil properties and N inputs across differing manure management practices such as tillage. Comparison of the modeled soil NO emission rates from the new mechanistic and existing schemes against field measurements will be discussed. Our updated framework will help to predict the diurnal and daily variability of different reactive N emissions (NO, HONO, N2O) with soil temperature, moisture and N inputs.
Highly efficient model updating for structural condition assessment of large-scale bridges.
DOT National Transportation Integrated Search
2015-02-01
For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
UPDATE ON EPA'S URBAN WATERSHED MANAGEMENT BRANCH MODELING ACTIVITIES
This paper provides the Stormwater Management Model (SWMM) user community with a description of the Environmental Protection Agency (EPA's) Office of Research and Development (ORD) approach to urban watershed modeling research and provides an update on current ORD SWMM-related pr...
Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics
NASA Astrophysics Data System (ADS)
Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong
2018-02-01
Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Usually, different update schemes make the models behave in different ways, which should be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, random scheme, ordered-sequential scheme and shuffled scheme, on pedestrian dynamics. The multi-velocity floor field cellular automaton (FFCA), which considers the changes of pedestrians' moving properties along walking paths and the heterogeneity of pedestrians' walking abilities, was used. For the parallel scheme only, collision detection and resolution must be considered, resulting in a great difference from the other update schemes. For pedestrian evacuation, the evacuation time is enlarged, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. In the face of a bottleneck, for example an exit, using the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies among the four update schemes when we simulate pedestrian flow with high desired velocity. Update schemes may have no influence on simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with environments.
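The flavor of the four schemes can be sketched on a toy one-dimensional lattice; the movement rule is deliberately simplified, and the parallel scheme's collision resolution is reduced to first-claim-wins, so this is an illustration of update ordering only, not the FFCA:

```python
import random

def step(positions, scheme, width=30):
    # positions: list of distinct cells; everyone tries to move one cell right.
    order = list(range(len(positions)))
    if scheme == "random":
        # Random-sequential: agents drawn one at a time, with replacement.
        order = [random.randrange(len(positions)) for _ in positions]
    elif scheme == "shuffled":
        random.shuffle(order)   # each agent once, in random order
    # "ordered" keeps the fixed sequence defined above.
    if scheme == "parallel":
        # All targets computed from the same old state; conflicts resolved
        # by letting the first claimant move (crude collision resolution).
        taken, new = set(positions), list(positions)
        for i, p in enumerate(positions):
            t = (p + 1) % width
            if t not in taken:
                new[i] = t
                taken.add(t)
        return new
    occupied = set(positions)
    for i in order:
        t = (positions[i] + 1) % width
        if t not in occupied:        # sequential schemes see the updated state
            occupied.discard(positions[i])
            occupied.add(t)
            positions[i] = t
    return positions

for s in ("parallel", "ordered", "random", "shuffled"):
    print(s, step([0, 1, 2, 10], s))
```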
NASA Astrophysics Data System (ADS)
Cristallo, S.; Piersanti, L.; Straniero, O.; Gallino, R.; Domínguez, I.; Abia, C.; Di Rico, G.; Quintini, M.; Bisterzo, S.
2011-12-01
Using updated models of low-mass stars, we systematically investigate the nucleosynthesis processes occurring in asymptotic giant branch (AGB) stars. In this paper, we present a database dedicated to the nucleosynthesis of AGB stars: the FRANEC Repository of Updated Isotopic Tables & Yields (FRUITY). An interactive Web-based interface allows users to freely download the full (from H to Bi) isotopic composition, as it changes after each third dredge-up (TDU) episode, and the stellar yields the models produce. A first set of AGB models, having masses in the range 1.5 ≤ M/M⊙ ≤ 3.0 and metallicities 1 × 10⁻³ ≤ Z ≤ 2 × 10⁻², is discussed. For each model, a detailed description of the physical and the chemical evolution is provided. In particular, we illustrate the details of the s-process and we evaluate the theoretical uncertainties due to the parameterization adopted to model convection and mass loss. The resulting nucleosynthesis scenario is checked by comparing the theoretical [hs/ls] and [Pb/hs] ratios to those obtained from the available abundance analysis of s-enhanced stars. On average, the variation with metallicity of these spectroscopic indexes is well reproduced by the theoretical models, although the predicted spread at a given metallicity is substantially smaller than the observed one. Possible explanations for such a difference are briefly discussed. An independent check of the TDU efficiency is provided by the C-star luminosity function. Consequently, theoretical C-star luminosity functions for the Galactic disk and the Magellanic Clouds have been derived. We generally find good agreement with observations.
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Cassells, Benny; Sin, Gürkan; Gernaey, Krist V
2017-07-01
A novel model-based control strategy has been developed for filamentous fungal fed-batch fermentation processes. The system of interest is a pilot scale (550 L) filamentous fungus process operating at Novozymes A/S. In such processes, it is desirable to maximize the total product achieved in a batch in a defined process time. In order to achieve this goal, it is important to maximize both the product concentration and the total final mass in the fed-batch system. To this end, we describe the development of a control strategy which aims to achieve maximum tank fill, while avoiding oxygen limited conditions. This requires a two stage approach: (i) calculation of the tank start fill; and (ii) on-line control in order to maximize fill subject to oxygen transfer limitations. First, a mechanistic model was applied off-line in order to determine the appropriate start fill for processes with four different sets of process operating conditions for the stirrer speed, headspace pressure, and aeration rate. The start fills were tested with eight pilot scale experiments using a reference process operation. An on-line control strategy was then developed, utilizing the mechanistic model which is recursively updated using on-line measurements. The model was applied in order to predict the current system states, including the biomass concentration, and to simulate the expected future trajectory of the system until a specified end time. In this way, the desired feed rate is updated along the progress of the batch taking into account the oxygen mass transfer conditions and the expected future trajectory of the mass. The final results show that the target fill was achieved to within 5% under the maximum fill when tested using eight pilot scale batches, and over filling was avoided. The results were reproducible, unlike the reference experiments which show over 10% variation in the final tank fill, and this also includes over filling. The variance of the final tank fill is reduced by over 74%, meaning that it is possible to target the final maximum fill reproducibly. The product concentration achieved at a given set of process conditions was unaffected by the control strategy. Biotechnol. Bioeng. 2017;114: 1459-1468. © 2017 Wiley Periodicals, Inc.
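The simulate-ahead logic can be sketched with a drastically simplified one-state model: at each control interval the model projects the final mass under the current feed rate, and the feed is rescaled so the projected fill approaches the target. The model, rates, and targets below are illustrative placeholders, not the Novozymes process model:

```python
# Toy receding-horizon feed adjustment: project final tank mass with the
# current feed rate, then rescale the feed so the projected fill meets the
# target. Model, targets, and rates are hypothetical stand-ins.
def project_final_mass(mass_now, feed, t_now, t_end, evap=0.02):
    # Trivial mass balance: feed in, constant evaporative loss out.
    return mass_now + (feed - evap) * (t_end - t_now)

target_fill, t_end = 550.0, 100.0     # kg and h (assumed)
mass, feed = 300.0, 2.0

for t in range(0, 100, 10):          # control decision every 10 h
    proj = project_final_mass(mass, feed, t, t_end)
    feed *= target_fill / proj        # simple proportional correction
    mass = project_final_mass(mass, feed, t, t + 10)  # advance 10 h

print(mass)   # ends close to (and below) the 550 kg target fill
```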
Selective updating of working memory content modulates meso-cortico-striatal activity.
Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S
2011-08-01
Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.
Identification of cracks in thick beams with a cracked beam element model
NASA Astrophysics Data System (ADS)
Hou, Chuanchuan; Lu, Yong
2016-12-01
The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated models formulated on the basis of the additional flexibility due to a crack. However, in damage identification or finite element model updating applications, it is still common practice to employ a simple stiffness reduction factor to represent a crack in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics; it takes into consideration shear deformation and the coupling between translational and longitudinal vibrations, and thus is particularly suitable for thick beams. The identification procedure employs a global searching technique using Genetic Algorithms, and there is no restriction on the location, severity and number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
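A minimal sketch of the global search step: a small genetic-style algorithm (selection and mutation only) minimizing the misfit between measured and model-predicted frequencies over candidate crack parameters. The two-parameter forward model is a placeholder, not the cracked-beam flexibility formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder forward model: maps (crack location, severity) to two
# "natural frequencies". Stands in for the cracked-beam FE model.
def model_freqs(p):
    loc, sev = p
    return np.array([10.0 - 4.0 * sev * np.sin(np.pi * loc),
                     25.0 - 6.0 * sev * loc])

measured = model_freqs(np.array([0.3, 0.5]))   # synthetic "measurement"

def fitness(p):
    return -np.sum((model_freqs(p) - measured)**2)

pop = rng.uniform([0, 0], [1, 1], size=(40, 2))      # location, severity
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]           # truncation selection
    children = parents[rng.integers(0, 20, 40)]       # clone parents
    children += rng.normal(0, 0.05, children.shape)   # mutation
    pop = np.clip(children, 0, 1)

best = max(pop, key=fitness)
print(best)   # should land near the true parameters (0.3, 0.5)
```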
NASA Astrophysics Data System (ADS)
Vilone, Daniele; Ramasco, José J.; Sánchez, Angel; Miguel, Maxi San
2014-08-01
The interplay of social and strategic motivations in human interactions is a largely unexplored topic in collective social phenomena. Whether individuals' decisions are taken on a purely strategic basis or due to social pressure without a rational background crucially influences the model outcome. Here we study a networked Prisoner's Dilemma in which decisions are made either based on the replication of the most successful neighbor's strategy (unconditional imitation) or by pure social imitation following an update rule inspired by the voter model. The main effects of the voter dynamics are an enhancement of the final consensus, i.e., asymptotic states are generally uniform, and a promotion of cooperation in certain regions of the parameter space as compared to the outcome of purely strategic updates. Thus, voter dynamics acts as an interface noise and has a similar effect as a pure random noise; furthermore, its influence is mostly independent of the network heterogeneity. When strategic decisions are made following other update rules such as the replicator or Moran processes, the dynamic mixed state found under unconditional imitation for some parameters disappears, but an increase of cooperation in certain parameter regions is still observed. Comparing our results with recent experiments on the Prisoner's Dilemma, we conclude that such a mixed dynamics may explain moody conditional cooperation among the agents.
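The two update rules can be sketched on a small ring network; the payoff values, ring topology, and mixing probability below are illustrative, not the paper's exact parameterization:

```python
import random

random.seed(0)
N = 50
nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # ring network
strat = {i: random.choice("CD") for i in range(N)}          # C/D strategies
T, R, P, S = 1.4, 1.0, 0.0, -0.2   # illustrative Prisoner's Dilemma payoffs

def payoff(i):
    table = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    return sum(table[(strat[i], strat[j])] for j in nbrs[i])

def update(i, rule):
    if rule == "voter":
        # Pure social imitation: copy a random neighbor, payoffs ignored.
        strat[i] = strat[random.choice(nbrs[i])]
    else:
        # Unconditional imitation: copy the best-scoring player in the
        # neighborhood (including oneself).
        best = max(nbrs[i] + [i], key=payoff)
        strat[i] = strat[best]

for step in range(2000):
    update(random.randrange(N), "voter" if random.random() < 0.5 else "imit")

print("".join(strat[i] for i in range(N)))  # voter noise pushes toward uniformity
```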
Petersen, M.D.; Mueller, C.S.
2011-01-01
The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.
MENA 1.1 - An Updated Geophysical Regionalization of the Middle East and North Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walters, B.; Pasyanos, M.E.; Bhattacharyya, J.
2000-03-01
This short report provides an update to the earlier LLNL paper entitled "Preliminary Definition of Geophysical Regions for the Middle East and North Africa" (Sweeney and Walter, 1998). This report is designed to be used in combination with that earlier paper. The reader is referred to Sweeney and Walter (1998) for all details, including definitions, references, uses, shortcomings, etc., of the regionalization process. In this report we will discuss only those regions in which we have changed the boundaries or velocity structure from that given by the original paper. The paper by Sweeney and Walter (1998) drew on a variety of sources to estimate a preliminary, first-order regionalization of the Middle East and North Africa (MENA), providing regional boundaries and velocity models within each region. The model attempts to properly account for major structural discontinuities and significant crustal thickness and velocity variations on a gross scale. The model can be used to extrapolate sparse calibration data within a distinct geophysical region. This model can also serve as a background model in the process of forming station calibration maps using intelligent interpolation techniques such as kriging, extending the calibration into aseismic areas. Such station maps can greatly improve the ability to locate and identify seismic events, which in turn improves the ability to seismically monitor for underground nuclear testing. The original model from Sweeney and Walter (1998) was digitized to a 1° resolution; for simplicity we will hereafter refer to this model as MENA 1.0. The new model described here has also been digitized to a 1° resolution and will be referred to as MENA 1.1 throughout this report.
Aphasia and the Diagram Makers Revisited: an Update of Information Processing Models
2006-01-01
Aphasic syndromes from diseases such as stroke and degenerative disorders are still common and disabling neurobehavioral disorders. Diagnosis, management and treatment of these communication disorders are often dependent upon understanding the neuropsychological mechanisms that underlie these disorders. Since the work of Broca it has been recognized that the human brain is organized in a modular fashion. Wernicke realized that the types of signs and symptoms displayed by aphasic patients reflect the degradation or disconnection of the modules that comprise this speech-language network. Thus, he was the first to propose a diagrammatic or information processing model of this modular language-speech network. Since he first published this model many new aphasic syndromes have been discovered and this has led to modifications of this model. This paper reviews some of the early (nineteenth century) models and then attempts to develop a more up-to-date and complete model. PMID:20396501
Stratiform chromite deposit model
Schulte, Ruth F.; Taylor, Ryan D.; Piatak, Nadine M.; Seal, Robert R.
2010-01-01
Stratiform chromite deposits are of great economic importance, yet their origin and evolution remain highly debated. Layered igneous intrusions such as the Bushveld, Great Dyke, Kemi, and Stillwater Complexes, provide opportunities for studying magmatic differentiation processes and assimilation within the crust, as well as related ore-deposit formation. Chromite-rich seams within layered intrusions host the majority of the world's chromium reserves and may contain significant platinum-group-element (PGE) mineralization. This model of stratiform chromite deposits is part of an effort by the U.S. Geological Survey's Mineral Resources Program to update existing models and develop new descriptive mineral deposit models to supplement previously published models for use in mineral-resource and mineral-environmental assessments. The model focuses on features that may be common to all stratiform chromite deposits as a way to gain insight into the processes that gave rise to their emplacement and to the significant economic resources contained in them.
NASA Astrophysics Data System (ADS)
Lamparter, Gabriele; Kovacs, Kristof; Nobrega, Rodolfo; Gerold, Gerhard
2015-04-01
Changes in the hydrological balance, and the ensuing degradation of water ecosystem services, due to large-scale land use change are reported from agricultural frontiers all over the world. Traditionally, hydrological models that include vegetation and land use as part of the hydrological cycle use a fixed distribution of land use for the calibration period. We believe that a meaningful calibration - especially when investigating the effects of land use change on hydrology - demands the inclusion of land use change during the calibration period into the calibration procedure. The SWAT (Soil and Water Assessment Tool) model is a process-based, semi-distributed model calculating the different components of the water balance. The model is based on the definition of hydrological response units (HRUs), which are derived from soil, vegetation and slope distributions. It specifically emphasises the role of land use and land management on the water balance. The Central-Western region of Brazil is one of the leading agricultural frontiers, which experienced rapid and radical deforestation and agricultural intensification in the last 40 years (from natural Cerrado savannah to cattle grazing to intensive corn and soya cropland). The land use history of the upper Rio das Mortes catchment (17500 km²) has been reasonably well documented since the 1970s. At the same time, almost continuous climate and runoff data are available for the period between 1988 and 2011. Therefore, the work presented here shows the calibration and validation of the SWAT model with the land use update function for three different periods (1988 to 1998, 1998 to 2007 and 2007 to 2011), in comparison with the same calibration periods using a steady-state land use distribution. The use of the land use update function allows a clearer identification of which changes in the discharge are due to climatic variability and which are due to changes in the vegetation cover. With land use update included in the calibration procedure, the impact of land use change on overall modelled runoff was more pronounced. For example, the agreement of modelled peak discharge improved for the period from 1988 to 1998 (with a decrease of primary Cerrado from 60 to 30%) with the use of the land use update function compared to the steady-state calibration. The effects for the following two periods, 1998 to 2007 and 2007 to 2011 (with a decrease of primary Cerrado from 30 to 24% and 24 to 19%, respectively), show only a small improvement of the model fit.
SysML model of exoplanet archive functionality and activities
NASA Astrophysics Data System (ADS)
Ramirez, Solange
2016-08-01
The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will be presenting a SysML model that helps us perform these update activities on a weekly basis.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-10-01
This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
Capital update factor: a new era approaches.
Grimaldi, P L
1993-02-01
The Health Care Financing Administration (HCFA) has constructed a preliminary model of a new capital update method which is consistent with the framework being developed to refine the update method for PPS operating costs. HCFA's eventual goal is to develop a single update framework for operating and capital costs. Initial results suggest that adopting the new capital update method would reduce capital payments substantially, which might intensify creditors' concerns about extending loans to hospitals.
Control of Interference during Working Memory Updating
ERIC Educational Resources Information Center
Szmalec, Arnaud; Verbruggen, Frederick; Vandierendonck, Andre; Kemps, Eva
2011-01-01
The current study examined the nature of the processes underlying working memory updating. In 4 experiments using the n-back paradigm, the authors demonstrate that continuous updating of items in working memory prevents strong binding of those items to their contexts in working memory, and hence leads to an increased susceptibility to proactive…
Doyle, Caoilainn; Smeaton, Alan F.; Roche, Richard A. P.; Boran, Lorraine
2018-01-01
To elucidate the core executive function profile (strengths and weaknesses in inhibition, updating, and switching) associated with dyslexia, this study explored executive function in 27 children with dyslexia and 29 age matched controls using sensitive z-mean measures of each ability and controlled for individual differences in processing speed. This study found that developmental dyslexia is associated with inhibition and updating, but not switching impairments, at the error z-mean composite level, whilst controlling for processing speed. Inhibition and updating (but not switching) error composites predicted both dyslexia likelihood and reading ability across the full range of variation from typical to atypical. The predictive relationships were such that those with poorer performance on inhibition and updating measures were significantly more likely to have a diagnosis of developmental dyslexia and also demonstrate poorer reading ability. These findings suggest that inhibition and updating abilities are associated with developmental dyslexia and predict reading ability. Future studies should explore executive function training as an intervention for children with dyslexia as core executive functions appear to be modifiable with training and may transfer to improved reading ability. PMID:29892245
NODA for EPA's Updated Ozone Transport Modeling
Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS), along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.
A computational approach to climate science education with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2017-12-01
CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely-used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux; interfacing with xarray for i/o and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.
2013-01-01
Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
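The surrogate-triggered timing can be sketched as a confidence-band test on the surrogate signal alone: the motion model is refit only when a running statistic leaves its band. The drifting synthetic signal and the 3-sigma band below are illustrative assumptions, not the paper's metrics:

```python
import numpy as np

# Surrogate marker trace whose relationship to tumor position drifts midway.
t = np.arange(600)
surrogate = np.sin(2 * np.pi * t / 40) + 0.02 * (t > 300) * (t - 300)

window, updates = 60, 0
base_mean = surrogate[:window].mean()
base_std = surrogate[:window].std()

for k in range(window, len(surrogate), 20):        # check every 20 samples
    m = surrogate[k - window:k].mean()
    if abs(m - base_mean) > 3 * base_std / np.sqrt(window):
        updates += 1                               # refit the motion model here
        base_mean = m                              # re-baseline after the update
print(updates)   # refits triggered by the surrogate signal alone
```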
Boundary condition identification for a grid model by experimental and numerical dynamic analysis
NASA Astrophysics Data System (ADS)
Mao, Qiang; Devitis, John; Mazzotti, Matteo; Bartoli, Ivan; Moon, Franklin; Sjoblom, Kurt; Aktan, Emin
2015-04-01
There is a growing need to characterize unknown foundations and assess substructures in existing bridges. This is becoming an important issue for the serviceability and safety of bridges, as well as for the possibility of partial reuse of existing infrastructure. Within this broader context, this paper investigates the possibility of identifying, locating and quantifying changes of boundary conditions, by leveraging a simply supported grid structure with a composite deck. Multi-reference impact tests are performed on the grid model, and one supporting bearing is modified by replacing a steel cylindrical roller with a roller of compliant material. Impact-based modal analysis provides global modal parameters such as damped natural frequencies, mode shapes and the flexibility matrix, which are used as indicators of boundary condition changes. An updating process combining a hybrid optimization algorithm and the finite element software suite ABAQUS is presented in this paper. The updated ABAQUS model of the grid, which simulates the supporting bearing with springs, is used to detect and quantify the change of the boundary conditions.
Update on Integrated Optical Design Analyzer
NASA Technical Reports Server (NTRS)
Moore, James D., Jr.; Troy, Ed
2003-01-01
Updated information on the Integrated Optical Design Analyzer (IODA) computer program has become available. IODA was described in Software for Multidisciplinary Concurrent Optical Design (MFS-31452), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 8a. To recapitulate: IODA facilitates multidisciplinary concurrent engineering of highly precise optical instruments. The architecture of IODA was developed by reviewing design processes and software in an effort to automate design procedures. IODA significantly reduces design iteration cycle time and eliminates many potential sources of error. IODA integrates the modeling efforts of a team of experts in different disciplines (e.g., optics, structural analysis, and heat transfer) working at different locations and provides seamless fusion of data among thermal, structural, and optical models used to design an instrument. IODA is compatible with data files generated by the NASTRAN structural-analysis program and the Code V (Registered Trademark) optical-analysis program, and can be used to couple analyses performed by these two programs. IODA supports multiple-load-case analysis for quickly accomplishing trade studies. IODA can also model the transient response of an instrument under the influence of dynamic loads and disturbances.
Tensor Dictionary Learning for Positive Definite Matrices.
Sivalingam, Ravishankar; Boley, Daniel; Morellas, Vassilios; Papanikolopoulos, Nikolaos
2015-11-01
Sparse models have proven to be extremely successful in image processing and computer vision. However, a majority of the effort has been focused on sparse representation of vectors and low-rank models for general matrices. The success of sparse modeling, along with popularity of region covariances, has inspired the development of sparse coding approaches for these positive definite descriptors. While in earlier work, the dictionary was formed from all, or a random subset of, the training signals, it is clearly advantageous to learn a concise dictionary from the entire training set. In this paper, we propose a novel approach for dictionary learning over positive definite matrices. The dictionary is learned by alternating minimization between sparse coding and dictionary update stages, and different atom update methods are described. A discriminative version of the dictionary learning approach is also proposed, which simultaneously learns dictionaries for different classes in classification or clustering. Experimental results demonstrate the advantage of learning dictionaries from data both from reconstruction and classification viewpoints. Finally, a software library is presented comprising C++ binaries for all the positive definite sparse coding and dictionary learning approaches presented here.
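The alternating minimization can be sketched in the simpler vector setting (the paper's setting is positive definite matrices): a crude top-k sparse coding stage alternates with a least-squares dictionary update. The data sizes and the thresholding rule are illustrative simplifications:

```python
import numpy as np

rng = np.random.default_rng(3)

# Alternating minimization for dictionary learning, vector case for brevity
# (the paper works with positive definite matrix descriptors instead).
X = rng.normal(size=(20, 200))                   # 200 training signals
D = rng.normal(size=(20, 30))
D /= np.linalg.norm(D, axis=0)                   # 30 unit-norm atoms

for it in range(15):
    # Sparse coding stage: keep the 3 largest correlations per signal
    # (a crude stand-in for a proper sparse solver).
    C = D.T @ X
    thresh = np.sort(np.abs(C), axis=0)[-3]
    C[np.abs(C) < thresh] = 0.0
    # Dictionary update stage: least-squares fit to the fixed codes.
    D = X @ np.linalg.pinv(C)
    D /= np.linalg.norm(D, axis=0) + 1e-12

print(np.linalg.norm(X - D @ C) / np.linalg.norm(X))  # reconstruction error
```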
NASA Astrophysics Data System (ADS)
Niu, X.; Yang, K.; Tang, W.; Qin, J.
2015-12-01
Neither surface measurements nor existing remote sensing products of surface solar radiation (SSR) can meet the application requirements of hydrological and land process modeling in the Tibetan Plateau (TP). High-resolution (hourly; 0.1°) SSR estimates have been derived recently from observations of a geostationary satellite, the Multi-functional Transport Satellite (MTSAT). This SSR estimation is based on updating an existing physical model, the UMD-SRB (University of Maryland Surface Radiation Budget) model, which is the basis of the well-known GEWEX-SRB model. The updated framework introduces the high-resolution Global Land Surface Broadband Albedo Product (GLASS) with spatial continuity. The developed SSR estimates are demonstrated at different temporal resolutions over the TP and are evaluated against ground observations and other satellite products from: (1) China Meteorological Administration (CMA) radiation stations in the TP; (2) three TP radiation stations contributed by the Institute of Tibetan Plateau Research; and (3) the widely used satellite products (i.e. ISCCP-FD, GEWEX-SRB) at relatively low spatial (0.5°-2.5°) and temporal (3-hourly, daily, or monthly) resolution.
USDA-ARS?s Scientific Manuscript database
Conventional culture methods and Taqman real time PCR (RTi PCR) were used for isolation of Listeria monocytogenes (Lm) from knee and hip joints of processing-age turkeys in a transport stress model. Male turkeys were exposed to an Escherichia coli and Lm Scott A co-challenge using coarse spray and f...
ERIC Educational Resources Information Center
Rodríguez-Villagra, Odir Antonio; Göthe, Katrin; Oberauer, Klaus; Kliegl, Reinhold
2013-01-01
We tested the limits of working-memory capacity (WMC) of young adults, old adults, and children with a memory-updating task. The task consisted of mentally shifting spatial positions within a grid according to arrows, their color signaling either only go (control) or go/no-go conditions. The interference model (IM) of Oberauer and Kliegl (2006)…
OSPREY Model Development Status Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veronica J Rutledge
2014-04-01
During the processing of used nuclear fuel, volatile radionuclides will be discharged to the atmosphere if no recovery processes are in place to limit their release. The volatile radionuclides of concern are 3H, 14C, 85Kr, and 129I. Methods are being developed, via adsorption and absorption unit operations, to capture these radionuclides. It is necessary to model these unit operations to aid in the evaluation of technologies and in the future development of an advanced used nuclear fuel processing plant. A collaboration between Fuel Cycle Research and Development Offgas Sigma Team member INL and a NEUP grant including ORNL, Syracuse University, and Georgia Institute of Technology has been formed to develop off-gas models and support off-gas research. Georgia Institute of Technology is developing a fundamental-level model to describe the equilibrium and kinetics of the adsorption process, which is to be integrated with OSPREY. This report discusses the progress made on expanding OSPREY to handle multiple components and on the integration of macroscale and microscale level models. Also included in this report is a brief OSPREY user guide.
REVISING/UPDATING EPA 625/1-79-011, PROCESS DESIGN MANUAL FOR SLUDGE TREATMENT AND DISPOSAL
The US Environmental Protection Agency (EPA) wishes to revise/update its very large and comprehensive 1979 Process Design Manual for Sludge Treatment and Disposal, EPA 625/1-79-011. As you might imagine the task is not trivial, as already in 1979 the original manual cost more tha...
Direct Loan Update, 2002-2003. EDExpress Training. Participant Guide.
ERIC Educational Resources Information Center
Office of Student Financial Assistance (ED), Washington, DC.
This participant guide is an update to basic training in the Direct Loan (DL) portion of the EDExpress system designed for financial aid professionals who have already participated in the basic training. The first session considers new aspects of DL processing, focusing on DL process changes and EDExpress DL changes. Session 2 contains three…
Update to the USDA-ARS fixed-wing spray nozzle models
USDA-ARS?s Scientific Manuscript database
The current USDA-ARS Aerial Spray Nozzle Models were updated to reflect new standardized measurement methods and systems, as well as to increase the operational spray pressure, aircraft airspeed, and nozzle orientation angle limits. The new models were developed using both Central Composite Design...
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected "hubs" such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present "Lazy Updating," an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method, and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use cases, with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
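The lazy-hub idea can be grafted onto a toy direct-method SSA in a few lines: propensities that depend on the hub species are refreshed only after the hub count drifts past a threshold, rather than at every firing. The two-reaction network, rates, and threshold are illustrative, not the paper's implementation:

```python
import random, math

random.seed(4)
# Toy network: R1 consumes one ATP (the hub); R2 regenerates one ATP.
# a1 depends on ATP, so under lazy updating it is refreshed only when
# ATP has drifted more than `threshold` since the last refresh.
state = {"ATP": 10000, "X": 500}
threshold, atp_at_refresh = 50, None
t, a1, a2 = 0.0, 0.0, 0.0

def refresh():
    global a1, a2, atp_at_refresh
    a1 = 1e-4 * state["ATP"] * state["X"]   # hub-dependent propensity
    a2 = 0.2 * state["X"]
    atp_at_refresh = state["ATP"]

refresh()
for step in range(100000):
    a0 = a1 + a2
    t += -math.log(random.random()) / a0            # exponential waiting time
    if random.random() * a0 < a1:
        state["ATP"] -= 1                           # R1 fires
    else:
        state["ATP"] += 1                           # R2 fires
    if abs(state["ATP"] - atp_at_refresh) > threshold:
        refresh()                                   # lazy update of a1, a2

print(t, state["ATP"])   # settles near the balance point of R1 and R2
```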
MODFLOW-OWHM v2: The next generation of fully integrated hydrologic simulation software
NASA Astrophysics Data System (ADS)
Boyce, S. E.; Hanson, R. T.; Ferguson, I. M.; Reimann, T.; Henson, W.; Mehl, S.; Leake, S.; Maddock, T.
2016-12-01
The One-Water Hydrologic Flow Model (One-Water) is a MODFLOW-based integrated hydrologic flow model designed for the analysis of a broad range of conjunctive-use and climate-related issues. One-Water fully links the movement and use of groundwater, surface water, and imported water for consumption by agriculture and natural vegetation on the landscape, and for potable and other uses within a supply-and-demand framework. One-Water includes linkages for deformation-, flow-, and head-dependent flows; additional observation and parameter options for higher-order calibrations; and redesigned code for facilitation of self-updating models and faster simulation run times. The next version of One-Water, currently under development, will include a new surface-water operations module that simulates dynamic reservoir operations, a new sustainability analysis package that facilitates the estimation and simulation of reduced storage depletion and captured discharge, a conduit-flow process for karst aquifers and leaky pipe networks, a soil zone process that adds an enhanced infiltration process, interflow, deep percolation and soil moisture, and a new subsidence and aquifer compaction package. It will also include enhancements to local grid refinement, and additional features to facilitate easier model updates, faster execution, better error messages, and more integration/cross communication between the traditional MODFLOW packages. By retaining and tracking the water within the hydrosphere, One-Water accounts for "all of the water everywhere and all of the time." This philosophy provides more confidence in the water accounting by the scientific community and provides the public a foundation needed to address wider classes of problems. Ultimately, more complex questions are being asked about water resources, and they require more complete answers about conjunctive-use and climate-related issues.
Determination of replicate composite bone material properties using modal analysis.
Leuridan, Steven; Goossens, Quentin; Pastrav, Leonard; Roosen, Jorg; Mulier, Michiel; Denis, Kathleen; Desmet, Wim; Sloten, Jos Vander
2017-02-01
Replicate composite bones are used extensively for in vitro testing of new orthopedic devices. Contrary to tests with cadaveric bone material, which inherently exhibits large variability, they offer a standardized alternative with limited variability. Accurate knowledge of the composite's material properties is important when interpreting in vitro test results and when using the properties in FE models of biomechanical constructs. The cortical bone analogue material properties of three different fourth-generation composite bone models were determined by updating FE bone models using the results of experimental and numerical modal analyses. The influence of the cortical bone analogue material model (isotropic or transversely isotropic) and the inter- and intra-specimen variability were assessed. Isotropic cortical bone analogue material models failed to represent the experimental behavior in a satisfactory way, even after updating the elastic material constants. When transversely isotropic material models were used, the updating procedure reduced the longitudinal Young's modulus from 16.00 GPa before updating to an average of 13.96 GPa after updating. The shear modulus increased from 3.30 GPa to an average value of 3.92 GPa. The transverse Young's modulus was lowered from an initial value of 10.00 GPa to 9.89 GPa. Low inter- and intra-specimen variability was found. Copyright © 2016 Elsevier Ltd. All rights reserved.
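The updating step itself can be illustrated with a much simpler stand-in for the FE model: below, the Young's modulus of an analytical free-free Euler-Bernoulli beam is tuned by least squares until its natural frequencies match "measured" ones. The beam geometry, density, and synthetic measurements are illustrative assumptions, not the paper's model or data.

```python
# Minimal sketch of frequency-based model updating: tune Young's modulus of a
# free-free Euler-Bernoulli beam (a stand-in for the FE bone model) so its
# natural frequencies match synthetic "experimental" ones. All values are
# illustrative, not the paper's.
import numpy as np
from scipy.optimize import least_squares

BETA_L = np.array([4.730, 7.853, 10.996])        # free-free mode constants
L, RHO, AREA, INERTIA = 0.4, 1640.0, 4e-4, 2e-8  # geometry/density (illus.)

def frequencies(E):
    """Analytical free-free beam natural frequencies (Hz) for modulus E."""
    return (BETA_L / L) ** 2 * np.sqrt(E * INERTIA / (RHO * AREA)) / (2 * np.pi)

# synthetic "measurements" near a 13.96 GPa modulus, with 1% perturbations
f_exp = frequencies(13.96e9) * (1 + 0.01 * np.array([1, -1, 0.5]))

fit = least_squares(lambda p: frequencies(p[0] * 1e9) - f_exp, x0=[16.0])
print(f"updated longitudinal modulus: {fit.x[0]:.2f} GPa")  # starts at 16.00
```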
On the predictability of event boundaries in discourse: An ERP investigation.
Delogu, Francesca; Drenhaus, Heiner; Crocker, Matthew W
2018-02-01
When reading a text describing an everyday activity, comprehenders build a model of the situation described that includes prior knowledge of the entities, locations, and sequences of actions that typically occur within the event. Previous work has demonstrated that such knowledge guides the processing of incoming information by making event boundaries more or less expected. In the present ERP study, we investigated whether comprehenders' expectations about event boundaries are influenced by how elaborately common events are described in the context. Participants read short stories in which a common activity (e.g., washing the dishes) was described either in brief or in an elaborate manner. The final sentence contained a target word referring to a more predictable action marking a fine event boundary (e.g., drying) or a less predictable action marking a coarse event boundary (e.g., jogging). The results revealed a larger N400 effect for coarse event boundaries compared to fine event boundaries, but no interaction with description length. Between 600 and 1000 ms, however, elaborate contexts elicited a larger frontal positivity compared to brief contexts. This effect was largely driven by less predictable targets, marking coarse event boundaries. We interpret the P600 effect as indexing the updating of the situation model at event boundaries, consistent with Event Segmentation Theory (EST). The updating process is more demanding with coarse event boundaries, which presumably require the construction of a new situation model.
An architecture for designing fuzzy logic controllers using neural networks
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1991-01-01
Described here is an architecture for designing fuzzy controllers through a hierarchical process of control rule acquisition and by using special classes of neural network learning techniques. A new method for learning to refine a fuzzy logic controller is introduced. A reinforcement learning technique is used in conjunction with a multi-layer neural network model of a fuzzy controller. The model learns by updating its prediction of the plant's behavior and is related to Sutton's Temporal Difference (TD) method. The method proposed here has the advantage of using the control knowledge of an experienced operator and fine-tuning it through the process of learning. The approach is applied to a cart-pole balancing system.
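The prediction-updating scheme referred to, Sutton's TD learning, can be sketched in a few lines. Below is a standard TD(0) value update on a toy random-walk task; the environment and constants are illustrative and unrelated to the cart-pole controller itself.

```python
# Minimal TD(0) sketch: the value estimate is nudged toward the one-step
# bootstrapped target, the same prediction-updating idea referenced above.
# The toy random-walk environment is illustrative only.
import random

random.seed(0)
ALPHA, GAMMA = 0.1, 1.0
V = [0.0] * 7  # states 0..6; 0 and 6 are terminal (reward 1 at state 6)

for _ in range(2000):
    s = 3
    while s not in (0, 6):
        s_next = s + random.choice((-1, 1))
        reward = 1.0 if s_next == 6 else 0.0
        target = reward + GAMMA * (0.0 if s_next in (0, 6) else V[s_next])
        V[s] += ALPHA * (target - V[s])  # TD(0) prediction update
        s = s_next

print([round(v, 2) for v in V[1:6]])  # approaches 1/6 .. 5/6
```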
NASA Astrophysics Data System (ADS)
Zhou, Y.; Zhang, X.; Xiao, W.
2018-04-01
As the geomagnetic sensor is susceptible to interference, a pre-processing total least squares iteration method is proposed for calibration compensation. First, the error model of the geomagnetic sensor is analyzed and a correction model is proposed; the characteristics of the model are then analyzed and reduced to nine parameters. The geomagnetic data are processed by the Hilbert-Huang transform (HHT) to improve the signal-to-noise ratio, and the nine parameters are calculated using a combination of the Newton iteration method and least squares estimation. A sifting algorithm is used to filter the initial value of the iteration so that the initial error is as small as possible. The experimental results show that this method needs no additional equipment or devices, can continuously update the calibration parameters, and compensates the geomagnetic sensor error better than the two-step estimation method.
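One common way to realize a nine-parameter correction of this kind (our assumption about the model form, not necessarily the authors') is m_cal = C(m_raw - b), with a three-component hard-iron bias b and six entries of a lower-triangular soft-iron matrix C, fitted iteratively so that the calibrated field magnitude stays constant. A sketch with synthetic data:

```python
# Sketch of a nine-parameter magnetometer correction m_cal = C (m_raw - b):
# bias b (3 params) plus lower-triangular C (6 params), fitted by iterative
# least squares so |m_cal| matches the local field magnitude. The model form
# and all values are illustrative assumptions; synthetic data stand in for
# real measurements.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
H = 50.0  # local geomagnetic field magnitude, microtesla (illustrative)

# synthetic "true" distortion: soft-iron matrix and hard-iron bias
C_true = np.array([[1.1, 0.0, 0.0], [0.05, 0.9, 0.0], [-0.02, 0.03, 1.05]])
b_true = np.array([3.0, -2.0, 1.5])
u = rng.normal(size=(500, 3))
u *= H / np.linalg.norm(u, axis=1, keepdims=True)     # points on the sphere
m_raw = u @ np.linalg.inv(C_true).T + b_true + 0.1 * rng.normal(size=u.shape)

def residuals(p):
    b = p[:3]
    C = np.zeros((3, 3))
    C[np.tril_indices(3)] = p[3:]              # 6 lower-triangular entries
    m_cal = (m_raw - b) @ C.T
    return np.linalg.norm(m_cal, axis=1) - H   # calibrated norm should be H

p0 = np.concatenate([np.zeros(3), [1, 0, 1, 0, 0, 1]])  # b = 0, C = identity
fit = least_squares(residuals, p0)
print("estimated bias:", np.round(fit.x[:3], 2))        # near (3.0, -2.0, 1.5)
```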
The AFIS tree growth model for updating annual forest inventories in Minnesota
Margaret R. Holdaway
2000-01-01
As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
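The segment-wise idea can be sketched with a toy one-parameter model: the observation domain is split into segments, each segment is screened with a simple validation check at a nominal parameter value, and only segments that pass contribute to a grid-based Bayesian update. The model, the reliability metric, and the threshold below are illustrative assumptions, not the paper's adaptive algorithm.

```python
# Toy sketch of segment-wise validation + Bayesian updating: segments of the
# observation domain where a simple validation metric fails are excluded from
# the posterior update of the model parameter theta. All choices illustrative.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, x):
    # imperfect model: misses a bump present in the true physics for x > 0.7
    return theta * x

x_obs = rng.uniform(0, 1, 200)
y_obs = 2.0 * x_obs + 0.8 * (x_obs > 0.7) + rng.normal(0, 0.1, 200)

edges = np.linspace(0, 1, 6)              # 5 segments of the x-domain
theta_grid = np.linspace(0, 4, 401)
log_post = np.zeros_like(theta_grid)      # flat prior on the grid

for lo, hi in zip(edges[:-1], edges[1:]):
    seg = (x_obs >= lo) & (x_obs < hi)
    resid = y_obs[seg] - model(2.0, x_obs[seg])   # validate at nominal theta
    if np.mean(np.abs(resid)) > 0.3:              # reliability threshold
        print(f"segment [{lo:.1f},{hi:.1f}) rejected for updating")
        continue
    for i, th in enumerate(theta_grid):           # grid-based Bayesian update
        r = y_obs[seg] - model(th, x_obs[seg])
        log_post[i] += -0.5 * np.sum((r / 0.1) ** 2)

print("posterior MAP estimate:", round(theta_grid[np.argmax(log_post)], 3))
```

The effect mirrors the paper's observation: keeping the unreliable segments (here, x > 0.7) in the update would bias theta away from its true value.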
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giaddui, T; Chen, W; Yu, J
2014-06-15
Purpose: To review IGRT credentialing experience and unexpected technical issues encountered in connection with advanced radiotherapy technologies as implemented in RTOG clinical trials. To update IGRT credentialing procedures with the aim of improving the quality of the process, and to increase the proportion of IGRT credentialing compliance. To develop a living disease site-specific IGRT encyclopedia. Methods: Numerous technical issues were encountered during the IGRT credentialing process. The criteria used for credentialing review were based on image quality, the anatomy included in fused data sets, and shift results. Credentialing requirements have been updated according to the AAPM task group reports for IGRT to ensure that all required technical items are included in the quality review process. Implementation instructions have been updated and expanded for recent protocols. Results: Technical issues observed during the credentialing review process include, but are not limited to: poor quality images; inadequate image acquisition region; poor data quality; shifts larger than acceptable; no soft tissue surrogate. The updated IGRT credentialing process will address these issues and will also include the technical items required by the AAPM TG-104, TG-142, and TG-179 reports. An instruction manual has been developed describing a remote credentialing method for reviewers. Submission requirements have been updated, including images/documents as well as the facility questionnaire. The review report now includes a summary of the review process and the parameters that reviewers check. We have reached consensus on the minimum IGRT technical requirements for a number of disease sites. RTOG 1311 (NRG-BR002, a Phase 1 study of stereotactic body radiotherapy (SBRT) for the treatment of multiple metastases) is an example; here, the protocol specifies the minimum requirement for each anatomical site (with/without fiducials). Conclusion: Technical issues are identified and reported. IGRT guidelines are updated, with the corresponding credentialing requirements. An IGRT encyclopedia describing site-specific implementation issues is currently in development.
Dynamical analysis of surface-insulated planar wire array Z-pinches
NASA Astrophysics Data System (ADS)
Li, Yang; Sheng, Liang; Hei, Dongwei; Li, Xingwen; Zhang, Jinhai; Li, Mo; Qiu, Aici
2018-05-01
The ablation and implosion dynamics of planar wire array Z-pinches with and without surface insulation are compared and discussed in this paper. The paper first presents a phenomenological model named the ablation and cascade snowplow implosion (ACSI) model, which accounts for the ablation and implosion phases of a planar wire array Z-pinch in a single simulation. The comparison between experimental data and simulation results shows that the ACSI model gives a fairly good description of the dynamical characteristics of planar wire array Z-pinches. Surface insulation introduces notable differences in the ablation phase of planar wire array Z-pinches. The ablation phase is divided into two stages: insulation layer ablation and tungsten wire ablation. The two-stage ablation process of insulated wires is simulated in the ACSI model by updating the formulas describing the ablation process.
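The snowplow part of such a model is essentially a momentum balance with mass accretion, d(mv)/dt = F_magnetic, with m growing as ablated material is swept up. The sketch below integrates a generic version of this balance; the cylindrical-style force estimate, drive current, and mass profile are illustrative placeholders and are not the authors' ACSI formulas.

```python
# Generic snowplow sketch (not the ACSI model's formulas): a current-driven
# sheath accelerates toward the axis while sweeping up ablated mass. Drive
# current, force expression, and the swept-mass profile are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

MU0 = 4e-7 * np.pi
I_PEAK, T_RISE = 1.0e6, 100e-9     # 1 MA drive, 100 ns linear rise (illus.)
X0 = 10e-3                          # initial wire-to-axis distance, m
M0 = 2e-5                           # initial sheath mass per length, kg/m
LAMBDA = 2e-3                       # swept-up areal mass density, kg/m^2

def rhs(t, y):
    x, p = y                                       # position, momentum/length
    current = I_PEAK * min(t / T_RISE, 1.0)        # ramp, then flat top
    force = -MU0 * current**2 / (4.0 * np.pi * max(x, 1e-4))  # toward axis
    m = M0 + LAMBDA * (X0 - x)                     # snowplow mass accretion
    return [p / m, force]

def hit_axis(t, y):
    return y[0]
hit_axis.terminal = True                           # stop at the axis

sol = solve_ivp(rhs, (0.0, 400e-9), [X0, 0.0], events=hit_axis, max_step=1e-9)
if sol.t_events[0].size:
    print(f"implosion time: {sol.t_events[0][0] * 1e9:.0f} ns")
```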
NASA Technical Reports Server (NTRS)
Cox, T. H.; Gilyard, G. B.
1986-01-01
The Drones for Aerodynamic and Structural Testing (DAST) project was designed to control flutter actively at high subsonic speeds. Accurate knowledge of the structural model was critical for the successful design of the control system. A ground vibration test was conducted on the DAST vehicle to determine the structural model characteristics. This report presents and discusses the vibration and test equipment, the test setup and procedures, and the antisymmetric and symmetric mode shape results. The modal characteristics were subsequently used to update the structural model employed in the control law design process.
Qiao, Hong; Li, Yinlin; Li, Fengfu; Xi, Xuanyang; Wu, Wei
2016-10-01
Recently, many biologically inspired visual computational models have been proposed. The design of these models follows the related biological mechanisms and structures, and they provide new solutions for visual recognition tasks. In this paper, based on recent biological evidence, we propose a framework to mimic the active and dynamic learning and recognition process of the primate visual cortex. From the standpoint of principles, the main contribution is that the framework can achieve unsupervised learning of episodic features (including key components and their spatial relations) and semantic features (semantic descriptions of the key components), which support higher-level cognition of an object. From the standpoint of performance, the advantages of the framework are as follows: 1) learning episodic features without supervision: for a class of objects without prior knowledge, the key components, their spatial relations, and their cover regions can be learned automatically through a deep neural network (DNN); 2) learning semantic features based on episodic features: within the cover regions of the key components, the semantic geometrical values of these components can be computed based on contour detection; 3) forming general knowledge of a class of objects: the general knowledge of a class, mainly including the key components, their spatial relations, and average semantic values, can be formed as a concise description of the class; and 4) achieving higher-level cognition and dynamic updating: for a test image, the model can achieve classification and subclass semantic descriptions, and test samples with high confidence are selected to dynamically update the whole model. Experiments are conducted on face images, and a good performance is achieved in each layer of the DNN and in the semantic description learning process. Furthermore, the model can be generalized to recognition tasks of other objects with learning ability.
NRL/VOA Modifications to IONCAP as of 12 July 1988
1989-08-02
The surviving front-matter excerpts indicate that the modifications make IONCAP suitable for wide-area coverage studies, incorporate an updated noise model, improve the accuracy of some calculations, and correct a few errors; the report also includes listings of four IONCAP subroutines supporting the updated noise model.
Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7
This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...
Interactive Management and Updating of Spatial Data Bases
NASA Technical Reports Server (NTRS)
French, P.; Taylor, M.
1982-01-01
The decision making process, whether for power plant siting, load forecasting or energy resource planning, invariably involves a blend of analytical methods and judgement. Management decisions can be improved by the implementation of techniques which permit an increased comprehension of results from analytical models. Even where analytical procedures are not required, decisions can be aided by improving the methods used to examine spatially and temporally variant data. How the use of computer aided planning (CAP) programs and the selection of a predominant data structure can improve the decision making process is discussed.
Illuminating the landscape of host–pathogen interactions with the bacterium Listeria monocytogenes
Cossart, Pascale
2011-01-01
Listeria monocytogenes has, in 25 years, become a model in infection biology. Through the analysis of both its saprophytic life and its infectious process, new concepts in microbiology, cell biology, and pathogenesis have been discovered. This review will update our knowledge of this intracellular pathogen and highlight the most recent breakthroughs. Promising areas of investigation, such as the increasingly recognized relevance of RNA-mediated regulation in the bacterium to the infectious process, and the role of bacterially controlled posttranslational and epigenetic modifications in the host, will also be discussed. PMID:22114192
Numerical orbit generators of artificial earth satellites
NASA Astrophysics Data System (ADS)
Kugar, H. K.; Dasilva, W. C. C.
1984-04-01
A numerical orbit integrator is presented that contains updates and improvements relative to the previous versions used by the Departamento de Mecanica Espacial e Controle (DMC) of INPE, and that incorporates newer models reflecting the experience acquired over time. Flexibility and modularity were taken into account in order to allow future extensions and modifications. Numerical accuracy, processing speed, memory savings, and usability were also considered. A user's handbook, a complete program listing, and a qualitative analysis of accuracy, processing time, and orbit perturbation effects are included.
HFE Process Guidance and Standards for potential application to updating NRC guidance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; J. J. Persensky
2012-07-01
The U.S. Nuclear Regulatory Commission (NRC) reviews and evaluates the human factors engineering (HFE) programs of applicants for nuclear power plant construction permits, operating licenses, standard design certifications, and combined operating licenses. The purpose of these safety reviews is to help ensure that personnel performance and reliability are appropriately supported. Detailed design review procedures and guidance for the evaluations are provided in three key documents: the Standard Review Plan (NUREG-0800), the HFE Program Review Model (NUREG-0711), and the Human-System Interface Design Review Guidelines (NUREG-0700). These documents were last revised in 2007, 2004 and 2002, respectively. The NRC is committed to the periodic update and improvement of these guidance documents to ensure that they remain state-of-the-art design evaluation tools. Thus, the NRC has initiated a project with BNL to update the NRC guidance to remain current with recent research on human performance, advances in HFE methods and tools, and new technology. INL supported Brookhaven National Lab (BNL) to update the detailed HFE review criteria contained in NUREG-0711 and NUREG-0700 based on (1) feedback obtained from end users, (2) the results of NRC research and development efforts supporting the NRC staff’s HFE safety reviews, and (3) other material the project staff identify as applicable to the update effort. INL submitted comments on development plans and sections of NUREGs 0800, 0711, and 0700. The contractor prepared the report attached here as the deliverable for this work.
Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.
Palma, Eduardo; Salinas, Lilian; Aracena, Julio
2016-03-01
Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Due to the sensitivity of the dynamical behavior to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus to obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs that yield exactly the same dynamical behavior of the network were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. Thus, the interaction digraphs so labeled (update digraphs) encode the non-equivalent schedules. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that it is an intractable problem in general. To solve it, we construct an algorithm that determines the set of update digraphs of a BN, using a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated schedule. The algorithm also works in the case where there is only partial knowledge about the relative order of the updating of the states of the nodes. We exhibit some examples of how the algorithm works on GRNs published in the literature. An executable file of the UpdateLabel algorithm, made in Java, and the files with the outputs of the algorithms used with the GRNs are available at www.inf.udec.cl/~lilian/UDE/. Contact: lilisalinas@udec.cl. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
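The label function can be sketched directly. As we read Aracena et al., an arc (u, v) is labeled "-" when u is updated strictly before v (so v sees u's new value) and "+" otherwise, and two deterministic schedules are equivalent exactly when they induce the same labeling; the tiny three-node GRN below is illustrative.

```python
# Sketch of the update-digraph labeling as we read it from Aracena et al.:
# arc (u, v) gets "-" when u updates strictly before v, "+" otherwise; two
# schedules are equivalent iff they induce the same labeling. Toy network.
arcs = [("a", "b"), ("b", "c"), ("c", "a")]  # interaction digraph

def labeling(schedule):
    """Label each arc from a schedule mapping node -> update block index."""
    return {(u, v): "-" if schedule[u] < schedule[v] else "+"
            for u, v in arcs}

def equivalent(s1, s2):
    return labeling(s1) == labeling(s2)

sync = {"a": 0, "b": 0, "c": 0}   # parallel (synchronous) schedule
seq1 = {"a": 0, "b": 1, "c": 2}   # sequential: a, then b, then c
seq2 = {"a": 0, "b": 2, "c": 1}   # sequential: a, then c, then b

print(labeling(seq1))                          # {-, -, +} pattern
print("seq1 ~ sync:", equivalent(seq1, sync))  # False: labelings differ
print("seq1 ~ seq2:", equivalent(seq1, seq2))  # False
```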
The BioGRID interaction database: 2017 update
Chatr-aryamontri, Andrew; Oughtred, Rose; Boucher, Lorrie; Rust, Jennifer; Chang, Christie; Kolas, Nadine K.; O'Donnell, Lara; Oster, Sara; Theesfeld, Chandra; Sellam, Adnane; Stark, Chris; Breitkreutz, Bobby-Joe; Dolinski, Kara; Tyers, Mike
2017-01-01
The Biological General Repository for Interaction Datasets (BioGRID: https://thebiogrid.org) is an open access database dedicated to the annotation and archival of protein, genetic and chemical interactions for all major model organism species and humans. As of September 2016 (build 3.4.140), the BioGRID contains 1 072 173 genetic and protein interactions, and 38 559 post-translational modifications, as manually annotated from 48 114 publications. This dataset represents interaction records for 66 model organisms and represents a 30% increase compared to the previous 2015 BioGRID update. BioGRID curates the biomedical literature for major model organism species, including humans, with a recent emphasis on central biological processes and specific human diseases. To facilitate network-based approaches to drug discovery, BioGRID now incorporates 27 501 chemical–protein interactions for human drug targets, as drawn from the DrugBank database. A new dynamic interaction network viewer allows the easy navigation and filtering of all genetic and protein interaction data, as well as for bioactive compounds and their established targets. BioGRID data are directly downloadable without restriction in a variety of standardized formats and are freely distributed through partner model organism databases and meta-databases. PMID:27980099
2013-01-01
Liver fibrosis is defined as excessive extracellular matrix deposition and is based on complex interactions between matrix-producing hepatic stellate cells and an abundance of liver-resident and infiltrating cells. Investigation of these processes requires in vitro and in vivo experimental work in animals. However, the use of animals in translational research will be increasingly challenged, at least in countries of the European Union, because of the adoption of new animal welfare rules in 2013. These rules will create an urgent need for optimized standard operating procedures regarding animal experimentation and improved international communication in the liver fibrosis community. This review gives an update on current animal models, techniques and underlying pathomechanisms with the aim of fostering a critical discussion of the limitations and potential of up-to-date animal experimentation. We discuss potential complications in experimental liver fibrosis and provide examples of how the findings of studies in which these models are used can be translated to human disease and therapy. In this review, we want to motivate the international community to design more standardized animal models which might help to address the legally requested replacement, refinement and reduction of animals in fibrosis research. PMID:24274743
Dashboard systems: implementing pharmacometrics from bench to bedside.
Mould, Diane R; Upton, Richard N; Wojciechowski, Jessica
2014-09-01
In recent years, there has been increasing interest in the development of medical decision-support tools, including dashboard systems. Dashboard systems are software packages that integrate information and calculations about therapeutics from multiple components into a single interface for use in the clinical environment. Given the high cost of medical care, and the increasing need to demonstrate positive clinical outcomes for reimbursement, dashboard systems may become an important tool for improving patient outcomes, improving clinical efficiency, and containing healthcare costs. Similarly, the costs associated with drug development are also rising. The use of model-based drug development (MBDD) has been proposed as a tool to streamline this process, facilitating the selection of appropriate doses and making informed go/no-go decisions. However, complete implementation of MBDD has not always been successful owing to a variety of factors, including the resources required to provide timely modeling and simulation updates. The application of dashboard systems in drug development reduces the resource requirement and may expedite the updating of models as new data are collected, allowing modeling results to be available in a timely fashion. In this paper, we present some background information on dashboard systems and propose the use of these systems both in the clinic and during drug development.
Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2
A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
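The core state-estimation loop can be sketched with a single (not dual) scalar Kalman filter: the remembered target position shifts opposite to each eye displacement in the predict step and is corrected by occasional noisy visual feedback in the update step. The full model is a dual EKF over an RBF network; the values below are illustrative only.

```python
# Schematic single (not dual) Kalman-filter sketch of gaze-centered updating:
# predict shifts the remembered position opposite to the eye movement; update
# corrects it with noisy visual feedback. Noise values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
x_hat, P = 10.0, 1.0          # remembered target (deg, gaze-centered)
Q, R = 0.05, 0.5              # process / measurement noise variances
target_world, eye = 10.0, 0.0

for step in range(20):
    d_eye = 1.0               # smooth pursuit: 1 deg of eye movement per step
    eye += d_eye
    x_hat -= d_eye            # predict: memory shifts opposite to the eye
    P += Q
    if step % 5 == 0:         # update: occasional noisy visual feedback
        z = (target_world - eye) + rng.normal(0, np.sqrt(R))
        K = P / (P + R)       # Kalman gain
        x_hat += K * (z - x_hat)
        P *= 1 - K

print(f"final estimate {x_hat:.2f} deg vs truth {target_world - eye:.2f} deg")
```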
Application of firefly algorithm to the dynamic model updating problem
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention in the previous decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time the FA has been applied to model updating. The working of the FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model using the FA. The algorithm aimed at minimizing the differences between the measured and FE-predicted natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
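The firefly mechanics translate directly into code: each firefly is a candidate parameter vector, brightness is the (negative) misfit between measured and predicted frequencies, and dimmer fireflies move toward brighter ones with distance-attenuated attraction plus a random step. The stand-in objective and all constants below are illustrative, not the bridge model.

```python
# Minimal firefly-algorithm sketch on a model-updating-style objective.
# The two-parameter stand-in "FE model" and all constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)
f_measured = np.array([1.8, 5.6, 11.2])  # "measured" frequencies, Hz

def misfit(p):
    # stand-in model: predicted frequencies scale with sqrt(stiffness/mass)
    f_pred = np.array([1.0, 3.0, 6.0]) * np.sqrt(p[0] / p[1])
    return np.sum((f_pred - f_measured) ** 2)

N, DIM, BETA0, GAMMA, ALPHA = 20, 2, 1.0, 1.0, 0.05
pop = rng.uniform(0.5, 5.0, (N, DIM))        # (stiffness, mass) candidates

for it in range(200):
    cost = np.array([misfit(p) for p in pop])
    for i in range(N):
        for j in range(N):
            if cost[j] < cost[i]:            # j is brighter: i moves toward j
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = BETA0 * np.exp(-GAMMA * r2)   # attraction fades with r
                step = beta * (pop[j] - pop[i]) + ALPHA * rng.normal(size=DIM)
                pop[i] = np.clip(pop[i] + step, 0.1, 10.0)
                cost[i] = misfit(pop[i])

best = pop[np.argmin([misfit(p) for p in pop])]
print("updated (stiffness, mass):", np.round(best, 3))
```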
NASA Astrophysics Data System (ADS)
Gantt, B.; Kelly, J. T.; Bash, J. O.
2015-11-01
Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency to the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentration of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.
Wisconsin's forest statistics, 1987: an inventory update.
W. Brad Smith; Jerold T. Hahn
1989-01-01
The Wisconsin 1987 inventory update, derived by using tree growth models, reports 14.7 million acres of timberland, a decline of less than 1% since 1983. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.
Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Gong, Xinwei; Socolar, Joshua
2009-03-01
Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs such as feed-forward loops and auto-repressive links that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando et al., Nature (London) 453(7197):944-947, 2008.
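The synchronous baseline that the abstract contrasts with autonomous updating is easy to sketch: every node fires simultaneously from the same state snapshot, and attractors are found by iterating until a state repeats. The three-node repressilator-like rules below are illustrative.

```python
# Sketch of a synchronously updated Boolean network, the clocked baseline
# scheme discussed above. The three repression rules are illustrative.
from itertools import product

rules = {
    "a": lambda s: not s["c"],   # c represses a
    "b": lambda s: not s["a"],   # a represses b
    "c": lambda s: not s["b"],   # b represses c
}

def step(state):
    """Synchronous update: every node fires from the same snapshot."""
    return {n: f(state) for n, f in rules.items()}

# enumerate the attractor reached from each of the 8 initial states
for bits in product([False, True], repeat=3):
    s = dict(zip("abc", bits))
    seen = []
    while s not in seen:
        seen.append(s)
        s = step(s)
    cycle_len = len(seen) - seen.index(s)
    print(bits, "-> cycle of length", cycle_len)
```

Under autonomous updating, each node would instead switch after its own continuous delay, which is exactly where small timing perturbations enter and where the stabilizing motifs matter.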
Hybrid Kalman Filter: A New Approach for Aircraft Engine In-Flight Diagnostics
NASA Technical Reports Server (NTRS)
Kobayashi, Takahisa; Simon, Donald L.
2006-01-01
In this paper, a uniquely structured Kalman filter is developed for its application to in-flight diagnostics of aircraft gas turbine engines. The Kalman filter is a hybrid of a nonlinear on-board engine model (OBEM) and piecewise linear models. The utilization of the nonlinear OBEM allows the reference health baseline of the in-flight diagnostic system to be updated to the degraded health condition of the engines through a relatively simple process. Through this health baseline update, the effectiveness of the in-flight diagnostic algorithm can be maintained as the health of the engine degrades over time. Another significant aspect of the hybrid Kalman filter methodology is its capability to take advantage of conventional linear and nonlinear Kalman filter approaches. Based on the hybrid Kalman filter, an in-flight fault detection system is developed, and its diagnostic capability is evaluated in a simulation environment. Through the evaluation, the suitability of the hybrid Kalman filter technique for aircraft engine in-flight diagnostics is demonstrated.
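The health-baseline-update idea can be sketched in isolation from the full hybrid structure: a simple scalar Kalman filter tracks a slowly degrading health parameter, the on-board reference baseline is periodically re-anchored to the estimate, and a fault is flagged only on an abrupt departure from the updated baseline. Everything below (dynamics, noise levels, the fault) is an illustrative assumption, not the paper's engine model.

```python
# Schematic sketch of the baseline-update idea: slow degradation is absorbed
# by re-anchoring the reference baseline to the filtered health estimate, so
# only abrupt faults trip the detector. All values are illustrative.
import numpy as np

rng = np.random.default_rng(4)
health = 1.0                    # true (unknown) health parameter
estimate, P = 1.0, 0.01         # filter state and variance
Q, R = 1e-5, 1e-4               # process / measurement noise (illustrative)
baseline, THRESH = 1.0, 0.06    # on-board reference and fault threshold

for t in range(400):
    health -= 2.5e-4            # gradual degradation
    if t == 300:
        health -= 0.10          # abrupt fault
    z = health + rng.normal(0.0, np.sqrt(R))
    P += Q                      # Kalman predict (random-walk health model)
    K = P / (P + R)             # Kalman gain and update
    estimate += K * (z - estimate)
    P *= 1.0 - K
    if t % 50 == 49:
        baseline = estimate     # re-anchor the reference health baseline
    if abs(z - baseline) > THRESH:
        print(f"t={t}: fault flagged (residual {z - baseline:+.3f})")
        break
```

Without the periodic re-anchoring, the gradual degradation alone would eventually exceed the threshold and the detector would lose its ability to separate normal aging from faults.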
Space-Based Sensorweb Monitoring of Wildfires in Thailand
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Mclaren, David; Davies, Ashley; Tran, Daniel; Tanpipat, Veerachai; Akaakara, Siri; Ratanasuwan, Anuchit; Mandl, Daniel
2011-01-01
We describe efforts to apply sensorweb technologies to the monitoring of forest fires in Thailand. In this approach, satellite data and ground reports are assimilated to assess the current state of the forest system in terms of forest fire risk, active fires, and the likely progression of fires and smoke plumes. This current and projected assessment can then be used to actively direct sensors and assets to best acquire further information. This process operates continually, with new data updating models of fire activity and leading to further sensing and updating of models. As fire activity is tracked, products such as active fire maps, burn scar severity maps, and alerts are automatically delivered to relevant parties. We describe the current state of the Thailand Fire Sensorweb, which utilizes the MODIS-based FIRMS system to track active fires and trigger Earth Observing One / Advanced Land Imager to acquire imagery and produce active fire maps, burn scar severity maps, and alerts. We describe ongoing work to integrate additional sensor sources and generate additional products.
Modifications of Hinge Mechanisms for the Mobile Launcher
NASA Technical Reports Server (NTRS)
Ganzak, Jacob D.
2018-01-01
The further development and modifications made towards the integration of the upper and lower hinge assemblies for the Exploration Upper Stage umbilical are presented. Investigative work is included to show the process of applying updated NASA Standards within component and assembly drawings for selected manufacturers. Component modifications with the addition of drawings are created to precisely display part geometries and geometric tolerances, along with proper methods of fabrication. Comparison of newly updated components with original Apollo era components is essential to correctly model the part characteristics and parameters, i.e., mass properties, material selection, weldments, and tolerances. 3-Dimensional modeling software is used to demonstrate the necessary improvements. In order to share and corroborate these changes, a document management system is used to store the various components and associated drawings. These efforts will contribute towards the Mobile Launcher for Exploration Mission 2 to provide proper rotation of the Exploration Upper Stage umbilical, necessary for providing cryogenic fill and drain capabilities.
NASA Astrophysics Data System (ADS)
King, Sean W.; Simka, Harsono; Herr, Dan; Akinaga, Hiro; Garner, Mike
2013-10-01
Recent discussions concerning the continuation of Moore's law have focused on announcements by several major corporations to transition from traditional 2D planar to new 3D multi-gate field effect transistor devices. However, the growth and progression of the semiconductor microelectronics industry over the previous 4 decades has been largely driven by combined advances in new materials, lithography, and materials related process technologies. Looking forward, it is therefore anticipated that new materials and materials technologies will continue to play a significant role in both the pursuit of Moore's law and the evolution of the industry. In this research update, we discuss and illustrate some of the required and anticipated materials innovations that could potentially lead to the continuation of Moore's law for another decade (or more). We focus primarily on the innovations needed to achieve single digit nanometer technologies and illustrate how at these dimensions not only new materials but new metrologies and computational modeling will be needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Qiang; Kelly, Jarod C.; Burnham, Andrew
This report serves as an update for the life-cycle analysis (LCA) of aluminum production based on the most recent data representing the state of the art of the industry in North America. The 2013 Aluminum Association (AA) LCA report on the environmental footprint of semifinished aluminum products in North America provides the basis for the update (The Aluminum Association, 2013). The scope of this study covers primary aluminum production, secondary aluminum production, as well as aluminum semi-fabrication processes including hot rolling, cold rolling, extrusion, and shape casting. This report focuses on energy consumption, material inputs, and criteria air pollutant emissions for each process from the cradle to the gate of aluminum, which starts from bauxite extraction and ends with the manufacturing of semi-fabricated aluminum products. The life-cycle inventory (LCI) tables compiled are to be incorporated into the vehicle cycle model of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET) Model for the release of its 2015 version.
Auto Draw from Excel Input Files
NASA Technical Reports Server (NTRS)
Strauss, Karl F.; Goullioud, Renaud; Cox, Brian; Grimes, James M.
2011-01-01
The design process often involves the use of Excel files during project development. To facilitate communication of the information in the Excel files, drawings are often generated. During the design process, the Excel files are updated often to reflect new input. The problem is that the drawings often lag behind the updates, leading to confusion about the current state of the design. The use of this program allows visualization of complex data in a format that is more easily understandable than pages of numbers. Because the graphical output can be updated automatically, the manual labor of diagram drawing can be eliminated. More frequent updates of system diagrams can reduce confusion and errors, and are likely to uncover systemic problems earlier in the design cycle, thus reducing rework and redesign.
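The pattern being described, regenerating a drawing from spreadsheet data whenever the spreadsheet changes, can be sketched generically. The sketch below reads a component table with pandas and redraws a simple block diagram with matplotlib; the file name and the "name", "x", "y" column names are hypothetical, and this is an illustration of the idea, not the NASA program.

```python
# Generic sketch of auto-drawing from a spreadsheet: one labeled box per row,
# regenerated on demand so the diagram never lags the data. File and column
# names ("design.xlsx", "name", "x", "y") are hypothetical. Reading .xlsx
# with pandas requires an Excel engine such as openpyxl.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("design.xlsx")        # hypothetical input file

fig, ax = plt.subplots()
for _, row in df.iterrows():
    # draw one labeled box per spreadsheet row at its (x, y) position
    ax.add_patch(plt.Rectangle((row["x"], row["y"]), 1.0, 0.5, fill=False))
    ax.text(row["x"] + 0.5, row["y"] + 0.25, row["name"],
            ha="center", va="center")
ax.autoscale_view()
ax.set_aspect("equal")
fig.savefig("design_diagram.png")        # rerun after each spreadsheet update
```

Wiring a script like this into a file watcher or a build step is what turns the manual diagram-drawing loop into the automatic update the abstract describes.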