Medendorp, W. P.
2015-01-01
It is known that the brain uses multiple reference frames to code spatial information, including eye-centered and body-centered frames. When we move our body in space, these internal representations are no longer in register with external space, unless they are actively updated. Whether the brain updates multiple spatial representations in parallel, or whether it restricts its updating mechanisms to a single reference frame from which other representations are constructed, remains an open question. We developed an optimal integration model to simulate the updating of visual space across body motion in multiple or single reference frames. To test this model, we designed an experiment in which participants had to remember the location of a briefly presented target while being translated sideways. The behavioral responses were in agreement with a model that uses a combination of eye- and body-centered representations, weighted according to the reliability with which the target location is stored and updated in each reference frame. Our findings suggest that the brain simultaneously updates multiple spatial representations across body motion. Because both representations are kept in sync, they can be optimally combined to provide a more precise estimate of visual locations in space than single-frame updating mechanisms can. PMID:26490289
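The abstract gives no equations, but the reliability weighting it describes is standard inverse-variance (optimal) cue combination. A minimal sketch, with hypothetical function and variable names:

```python
def combine_estimates(x_eye, var_eye, x_body, var_body):
    """Reliability-weighted (inverse-variance) fusion of an eye-centered
    and a body-centered estimate of the same target location."""
    w_eye, w_body = 1.0 / var_eye, 1.0 / var_body
    x_hat = (w_eye * x_eye + w_body * x_body) / (w_eye + w_body)
    var_hat = 1.0 / (w_eye + w_body)  # never exceeds either input variance
    return x_hat, var_hat

# If the eye-centered estimate degrades during the sideways translation
# (larger variance), the fused estimate leans toward the body-centered one.
x_hat, var_hat = combine_estimates(x_eye=10.0, var_eye=4.0,
                                   x_body=8.0, var_body=1.0)
```

The fused variance is always smaller than either single-frame variance, which is the quantitative sense in which keeping both representations in sync beats single-frame updating.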
Benefits of Model Updating: A Case Study Using the Micro-Precision Interferometer Testbed
NASA Technical Reports Server (NTRS)
Neat, Gregory W.; Kissil, Andrew; Joshi, Sanjay S.
1997-01-01
This paper presents a case study on the benefits of model updating using the Micro-Precision Interferometer (MPI) testbed, a full-scale model of a future spaceborne optical interferometer located at JPL.
Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon
2014-01-01
One of the key technologies to support the mobility of mobile stations (MSs) in mobile communication systems is location management, which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering the bursty data traffic characteristics of packet-switched (PS) services. An analytical model for the location update and paging signaling loads of the proposed scheme is developed, and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of movement thresholds.
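The abstract leaves the scheme's mechanics to the paper, but a conventional movement-based scheme can be sketched as a cell-crossing counter, with a hypothetical two-threshold variant that switches the threshold with the traffic state (the state logic below is an illustrative assumption, not the authors' exact scheme):

```python
def location_updates(cell_crossings, threshold):
    """Conventional movement-based registration: the MS performs a
    location update after every `threshold` cell-boundary crossings."""
    updates, moves = 0, 0
    for _ in range(cell_crossings):
        moves += 1
        if moves >= threshold:
            updates += 1
            moves = 0  # the crossing counter resets at each update
    return updates

def location_updates_two_thresholds(states, d_active, d_idle):
    """Two-threshold variant: one entry in `states` per cell crossing;
    the movement threshold depends on the current traffic state."""
    updates, moves = 0, 0
    for state in states:
        moves += 1
        if moves >= (d_active if state == "active" else d_idle):
            updates += 1
            moves = 0
    return updates
```

With bursty PS traffic, a larger threshold during idle periods trades paging cost against fewer location updates, which is the knob a total-signaling-load optimization turns.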
Jones, Joseph L.; Johnson, Kenneth H.; Frans, Lonna M.
2016-08-18
Information about groundwater-flow paths and locations where groundwater discharges at and near Puget Sound Naval Shipyard is necessary for understanding the potential migration of subsurface contaminants by groundwater at the shipyard. The design of some remediation alternatives would be aided by knowledge of whether groundwater flowing at specific locations beneath the shipyard will eventually discharge directly to Sinclair Inlet of Puget Sound, or if it will discharge to the drainage system of one of the six dry docks located in the shipyard. A 1997 numerical (finite difference) groundwater-flow model of the shipyard and surrounding area was constructed to help evaluate the potential for groundwater discharge to Puget Sound. That steady-state, multilayer numerical model with homogeneous hydraulic characteristics indicated that groundwater flowing beneath nearly all of the shipyard discharges to the dry-dock drainage systems, and only shallow groundwater flowing beneath the western end of the shipyard discharges directly to Sinclair Inlet. Updated information from a 2016 regional groundwater-flow model constructed for the greater Kitsap Peninsula was used to update the 1997 groundwater model of the Puget Sound Naval Shipyard. That information included a new interpretation of the hydrogeologic units underlying the area, as well as improved recharge estimates. Other updates to the 1997 model included finer discretization of the finite-difference model grid into more layers, rows, and columns, all with reduced dimensions. This updated Puget Sound Naval Shipyard model was calibrated to 2001–2005 measured water levels, and hydraulic characteristics of the model layers representing different hydrogeologic units were estimated with the aid of state-of-the-art parameter optimization techniques. The flow directions and discharge locations predicted by this updated model generally match the 1997 model despite refinements and other changes.
In the updated model, most groundwater discharge recharged within the boundaries of the shipyard is to the dry docks; only at the western end of the shipyard does groundwater discharge directly to Puget Sound. Particle tracking for the existing long-term monitoring well network suggests that only a few wells intercept groundwater that originates as recharge within the shipyard boundary.
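The particle tracking described above would normally be done with standard USGS tools on the model grid; the idea reduces to advecting points through the simulated velocity field. A toy forward tracker (all names and values hypothetical):

```python
def track_particle(start, velocity, step, n_steps, discharge_x):
    """Toy forward particle track: advect a particle through a steady
    velocity field and report whether it crosses a discharge boundary."""
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        vx, vy = velocity(x, y)
        x, y = x + step * vx, y + step * vy
        path.append((x, y))
        if x >= discharge_x:  # particle reaches the discharge face
            return path, True
    return path, False

# Uniform eastward flow: a particle released at x=0 reaches the
# discharge boundary at x=1 after eight steps.
path, discharged = track_particle((0.0, 0.0), lambda x, y: (1.0, 0.0),
                                  step=0.125, n_steps=20, discharge_x=1.0)
```

A real tracker interpolates cell-by-cell velocities from the calibrated groundwater solution; the structure of the loop is the same.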
Situation Model Updating in Young and Older Adults: Global versus Incremental Mechanisms
Bailey, Heather R.; Zacks, Jeffrey M.
2015-01-01
Readers construct mental models of situations described by text. Activity in narrative text is dynamic, so readers must frequently update their situation models when dimensions of the situation change. Updating can be incremental, such that a change leads to updating just the dimension that changed, or global, such that the entire model is updated. Here, we asked whether older and young adults make differential use of incremental and global updating. Participants read narratives containing changes in characters and spatial location and responded to recognition probes throughout the texts. Responses were slower when probes followed a change, suggesting that situation models were updated at changes. When either dimension changed, responses to probes for both dimensions were slowed; this provides evidence for global updating. Moreover, older adults showed stronger evidence of global updating than did young adults. One possibility is that older adults perform more global updating to offset reduced ability to manipulate information in working memory. PMID:25938248
Chung, Yun Won
2012-11-22
Location management, which consists of location registration and paging, is essential to provide mobile communication services to mobile stations (MSs). Since MSs riding on a public transportation system (TS) generate significant location registration signaling loads simultaneously when a TS with riding MSs moves between location areas (LAs), group location management was proposed. Under group location management, an MS performs group registration when it gets on a TS and performs group deregistration when it gets off a TS. Then, only the TS updates its current location when it changes LA, on behalf of all riding MSs. In this paper, movement-based group location management using radio frequency identification (RFID) is proposed, where the MS's getting-on and getting-off behaviors are detected using RFID and a location update of the TS is carried out, on behalf of all riding MSs, only if the number of crossed cells from the last updated cell exceeds a predefined movement threshold. We then develop an analytical model for the performance analysis of movement-based group location management and analyze the effects of various parameters on the performance. The results show that movement-based group location management has a reduced signaling cost compared with movement-based individual location management, and that optimal performance can be achieved by choosing appropriate movement threshold values.
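The signaling saving claimed above can be caricatured with a back-of-the-envelope count (unit costs and function names are hypothetical simplifications of the paper's analytical model):

```python
def individual_cost(n_ms, n_update_points, c_update=1.0):
    """Individual scheme: every riding MS registers on its own at each
    of the `n_update_points` threshold crossings along the route."""
    return n_ms * n_update_points * c_update

def group_cost(n_ms, n_update_points, c_update=1.0, c_group=1.0):
    """Group scheme: one group (de)registration per MS when boarding and
    alighting, then a single TS update at each threshold crossing."""
    return 2 * n_ms * c_group + n_update_points * c_update
```

For 50 riding MSs and 10 update points, the group scheme replaces 500 individual updates with 100 boarding/alighting registrations plus 10 TS updates, which is where the reduced signaling cost comes from.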
Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.
2014-01-01
Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376
A Probabilistic Approach to Model Update
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.; Voracek, David F.
2001-01-01
Finite element models are often developed for load validation, structural certification, response predictions, and to study alternate design concepts. On rare occasions, models developed with a nominal set of parameters agree with experimental data without the need to update parameter values. Today, model updating is generally heuristic and often performed by a skilled analyst with in-depth understanding of the model assumptions. Parameter uncertainties play a key role in understanding the model update problem and therefore probabilistic analysis tools, developed for reliability and risk analysis, may be used to incorporate uncertainty in the analysis. In this work, probability analysis (PA) tools are used to aid the parameter update task using experimental data and some basic knowledge of potential error sources. Discussed here is the first application of PA tools to update parameters of a finite element model for a composite wing structure. Static deflection data at six locations are used to update five parameters. It is shown that while prediction of individual response values may not be matched identically, the system response is significantly improved with moderate changes in parameter values.
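The abstract does not name the PA algorithm; one common probabilistic route to the same goal is to weight prior parameter samples by the likelihood of the measured responses. A one-parameter toy sketch (the stiffness model and all names are assumptions, not the paper's wing model):

```python
import math
import random

def update_parameter(measured, predict, prior_samples, noise_sd):
    """Importance-weighting sketch of a probabilistic parameter update:
    weight each prior sample by a Gaussian likelihood of its residuals
    and return the posterior-mean parameter value."""
    weights = []
    for theta in prior_samples:
        sse = sum((m - p) ** 2 for m, p in zip(measured, predict(theta)))
        weights.append(math.exp(-0.5 * sse / noise_sd ** 2))
    total = sum(weights)
    return sum(w * t for w, t in zip(weights, prior_samples)) / total

# Toy model: deflection = load / stiffness, true stiffness k = 2.0.
loads = [1.0, 2.0, 3.0]
measured = [p / 2.0 for p in loads]            # noise-free for the sketch
random.seed(0)
prior = [random.uniform(1.0, 3.0) for _ in range(2000)]
k_hat = update_parameter(measured, lambda k: [p / k for p in loads],
                         prior, noise_sd=0.05)
```

The posterior mean lands near the true stiffness; the spread of the weights is what carries the parameter uncertainty the abstract emphasizes.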
Microseismic Image-domain Velocity Inversion: Case Study From The Marcellus Shale
NASA Astrophysics Data System (ADS)
Shragge, J.; Witten, B.
2017-12-01
Seismic monitoring at injection wells relies on generating accurate location estimates of detected (micro-)seismicity. Event location estimates assist in optimizing well and stage spacings, assessing potential hazards, and establishing causation of larger events. The largest impediment to generating accurate location estimates is the lack of an accurate velocity model. For surface-based monitoring the model should capture 3D velocity variation, yet the laterally heterogeneous nature of the velocity field is rarely captured. Another complication for surface monitoring is that the data often suffer from low signal-to-noise levels, making velocity updating with established techniques difficult due to uncertainties in the arrival picks. We use surface-monitored field data to demonstrate that a new method requiring no arrival picking can improve microseismic locations by jointly locating events and updating 3D P- and S-wave velocity models through image-domain adjoint-state tomography. This approach creates a complementary set of images for each chosen event through wave-equation propagation and correlating combinations of P- and S-wavefield energy. The method updates the velocity models to optimize the focal consistency of the images through adjoint-state inversions. We demonstrate the functionality of the method using a surface array of 192 three-component geophones over a hydraulic stimulation in the Marcellus Shale. Applying the proposed joint location and velocity-inversion approach significantly improves the estimated locations. To assess event location accuracy, we propose a new measure of inconsistency derived from the complementary images. By this measure the location inconsistency decreases by 75%. The method has implications for improving the reliability of microseismic interpretation with low signal-to-noise data, which may increase hydrocarbon extraction efficiency and improve risk assessment from injection-related seismicity.
Walking through doorways causes forgetting: environmental integration.
Radvansky, Gabriel A; Tamplin, Andrea K; Krawietz, Sabine A
2010-12-01
Memory for objects declines when people move from one location to another (the location updating effect). However, it is unclear whether this is attributable to event model updating or to task demands. The focus here was on the degree of integration of probed-for information with the experienced environment. In prior research, the probes were verbal labels of visual objects. Experiment 1 assessed whether the effect was a consequence of an item-probe mismatch, as with transfer-appropriate processing. Visual probes were used to better coordinate what was seen with the nature of the memory probe. In Experiment 2, people received additional word pairs to remember, which were less well integrated with the environment, to assess whether the probed-for information needed to be well integrated. The results showed location updating effects in both cases. These data are consistent with an event cognition view that mental updating of a dynamic event disrupts memory.
NASA Astrophysics Data System (ADS)
Barton, E.; Middleton, C.; Koo, K.; Crocker, L.; Brownjohn, J.
2011-07-01
This paper presents the results of a collaboration between the National Physical Laboratory (NPL) and the University of Sheffield on an ongoing research project at NPL. A 50-year-old reinforced concrete footbridge has been converted to a full-scale structural health monitoring (SHM) demonstrator. The structure is monitored using a variety of techniques; however, interrelating results and converting data to knowledge are not possible without a reliable numerical model. During the first stage of the project, the work concentrated on static loading: an FE model of the undamaged bridge was created and updated under specified static loading and temperature conditions. This model was found to accurately represent the response under static loading, and it was used to identify locations for sensor installation. The next stage involves the evaluation of repair/strengthening patches under both static and dynamic loading. Therefore, before deliberately introducing significant damage, the first set of dynamic tests was conducted and modal properties were estimated. The measured modal properties did not match the modal analysis from the statically updated FE model; it was clear that the existing model required updating. This paper introduces the results of the dynamic testing and model updating. It is shown that the structure exhibits large non-linear, amplitude-dependent characteristics. This makes the updating process difficult, but we attempt to produce the best linear representation of the structure. A sensitivity analysis is performed to determine the most sensitive locations for the planned damage/repair scenarios and is used to decide whether additional sensors will be necessary.
Faunt, C.C.; Hanson, R.T.; Martin, P.; Schmid, W.
2011-01-01
California's Central Valley has been one of the most productive agricultural regions in the world for more than 50 years. To better understand the groundwater availability in the valley, the U.S. Geological Survey (USGS) developed the Central Valley hydrologic model (CVHM). Because of recent water-level declines and renewed subsidence, the CVHM is being updated to better simulate the geohydrologic system. The CVHM updates and refinements can be grouped into two general categories: (1) model code changes and (2) data updates. The CVHM updates and refinements will require that the model be recalibrated. The updated CVHM will provide a detailed transient analysis of changes in groundwater availability and flow paths in relation to climatic variability, urbanization, stream flow, and changes in irrigated agricultural practices and crops. The updated CVHM is particularly focused on more accurately simulating the locations and magnitudes of land subsidence. The intent of the updated CVHM is to help scientists better understand the availability and sustainability of water resources and the interaction of groundwater levels with land subsidence. © 2011 ASCE.
Mining moving object trajectories in location-based services for spatio-temporal database update
NASA Astrophysics Data System (ADS)
Guo, Danhuai; Cui, Weihong
2008-10-01
Advances in wireless transmission and mobile technology applied to LBS (Location-Based Services) flood us with vast amounts of moving-object data. The data gathered from position sensors of mobile phones, PDAs, and vehicles hide interesting and valuable knowledge and describe the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geo-features has been ignored, and the value of spatio-temporal trajectory data has not been fully exploited either. Urban expansion and frequent changes to town plans leave large amounts of outdated or imprecise data in the spatial databases of LBS, and these cannot be updated in a timely and efficient way by manual processing. In this paper we introduce a data mining approach to extracting the movement patterns of moving objects, build a model describing the relationship between the movement patterns of LBS mobile objects and their environment, and propose a spatio-temporal database update strategy for LBS databases based on spatio-temporal mining of trajectories. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of the interaction between a trajectory and its environment, the design of a spatio-temporal database update strategy based on moving-object data mining, and the experimental application of spatio-temporal database updating by mining moving-object trajectories.
Attentional focus affects how events are segmented and updated in narrative reading.
Bailey, Heather R; Kurby, Christopher A; Sargent, Jesse Q; Zacks, Jeffrey M
2017-08-01
Readers generate situation models representing described events, but the nature of these representations may differ depending on the reading goals. We assessed whether instructions to pay attention to different situational dimensions affect how individuals structure their situation models (Exp. 1) and how they update these models when situations change (Exp. 2). In Experiment 1, participants read and segmented narrative texts into events. Some readers were oriented to pay specific attention to characters or space. Sentences containing character or spatial-location changes were perceived as event boundaries-particularly if the reader was oriented to characters or space, respectively. In Experiment 2, participants read narratives and responded to recognition probes throughout the texts. Readers who were oriented to the spatial dimension were more likely to update their situation models at spatial changes; all readers tracked the character dimension. The results from both experiments indicated that attention to individual situational dimensions influences how readers segment and update their situation models. More broadly, the results provide evidence for a global situation model updating mechanism that serves to set up new models at important narrative changes.
Walking through doorways causes forgetting: Event structure or updating disruption?
Pettijohn, Kyle A; Radvansky, Gabriel A
2016-11-01
According to event cognition theory, people segment experience into separate event models. One consequence of this segmentation is that when people transport objects from one location to another, memory is worse than if people move across a large location. In two experiments participants navigated through a virtual environment, and recognition memory was tested in either the presence or the absence of a location shift for objects that were recently interacted with (i.e., just picked up or set down). Of particular concern here is whether this location updating effect is due to (a) differences in retention intervals as a result of the navigation process, (b) a temporary disruption in cognitive processing that may occur as a result of the updating processes, or (c) a need to manage multiple event models, as has been suggested in prior research. Experiment 1 explored whether retention interval is driving this effect by recording travel times from the acquisition of an object and the probe time. The results revealed that travel times were similar, thereby rejecting a retention interval explanation. Experiment 2 explored whether a temporary disruption in processing is producing the effect by introducing a 3-second delay prior to the presentation of a memory probe. The pattern of results was not affected by adding a delay, thereby rejecting a temporary disruption account. These results are interpreted in the context of the event horizon model, which suggests that when there are multiple event models that contain common elements there is interference at retrieval, which compromises performance.
Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model
Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua
2015-01-01
We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
NASA Astrophysics Data System (ADS)
Downey, N.; Begnaud, M. L.; Hipp, J. R.; Ballard, S.; Young, C. S.; Encarnacao, A. V.
2017-12-01
The SALSA3D global 3D velocity model of the Earth was developed to improve the accuracy and precision of seismic travel time predictions for a wide suite of regional and teleseismic phases. Recently, the global SALSA3D model was updated to include additional body wave phases, including mantle phases, core phases, reflections off the core-mantle boundary, and underside reflections off the surface of the Earth. We show that this update improves travel time predictions and leads directly to significant improvements in the accuracy and precision of seismic event locations as compared to locations computed using standard 1D velocity models like ak135, or 2½D models like RSTT. A key feature of our inversions is that the path-specific model uncertainty of travel time predictions is calculated using the full 3D model covariance matrix computed during tomography, which results in more realistic uncertainty ellipses that directly reflect tomographic data coverage. This method can also be applied at a regional scale: we present a velocity model with uncertainty obtained using data from the University of Utah Seismograph Stations. These results show a reduction in travel-time residuals for relocated events compared with those obtained using previously published models.
Updating finite element dynamic models using an element-by-element sensitivity methodology
NASA Technical Reports Server (NTRS)
Farhat, Charbel; Hemez, Francois M.
1993-01-01
A sensitivity-based methodology for improving the finite element model of a given structure using test modal data and a few sensors is presented. The proposed method searches for both the location and the sources of the mass and stiffness errors and does not interfere with the theory behind the finite element model while correcting these errors. The updating algorithm is derived from the unconstrained minimization of the squared L2 norms of the modal dynamic residuals via an iterative two-step staggered procedure. At each iteration, the measured mode shapes are first expanded assuming that the model is error free; then the model parameters are corrected assuming that the expanded mode shapes are exact. The numerical algorithm is implemented in an element-by-element fashion and is capable of 'zooming' on the detected error locations. Several simulation examples which demonstrate the potential of the proposed methodology are discussed.
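As a one-parameter caricature of driving modal dynamic residuals to zero (not the authors' element-by-element algorithm): for a single spring-mass mode, omega^2 = k/m, and a Newton step on the residual recovers the stiffness that matches a measured frequency.

```python
def update_stiffness(k0, m, omega_measured, n_iter=20):
    """Minimize the modal residual r(k) = k/m - omega^2 by Newton
    iteration, using the analytic sensitivity dr/dk = 1/m."""
    k = k0
    for _ in range(n_iter):
        r = k / m - omega_measured ** 2  # modal dynamic residual
        k -= r * m                       # Newton step: r / (dr/dk)
    return k
```

Because the residual is linear in k here, a single step already lands on k = m * omega^2; the element-by-element method applies the same sensitivity logic with many parameters, one per suspect element.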
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2018-03-01
Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as the experimental specimen, with operational modal analysis techniques utilised to obtain the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, and a numerical model of the blade was created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, this time in the solution of fuzzy objective functions, enabling the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity were simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude non-structural masses at predefined points, and updated to provide a deterministic damage prediction and information on the parameters' uncertainty via fuzzy updating.
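The sub-level technique can be sketched as alpha-cut interval propagation of triangular fuzzy numbers; the brute-force grid search below stands in for the firefly/virus optimisers, and all names and values are hypothetical:

```python
import math

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number (a, b, c) at level alpha."""
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def propagate(tri, func, alphas=(0.0, 0.5, 1.0), n_grid=101):
    """Sub-level (alpha-cut) propagation: evaluate func over each input
    interval and take the min/max as the output interval per level."""
    cuts = {}
    for alpha in alphas:
        lo, hi = alpha_cut(tri, alpha)
        vals = [func(lo + i * (hi - lo) / (n_grid - 1))
                for i in range(n_grid)]
        cuts[alpha] = (min(vals), max(vals))
    return cuts

# Triangular fuzzy Young's modulus (in GPa) pushed through a monotone
# response function: each alpha-cut maps to an output interval.
cuts = propagate((180.0, 200.0, 220.0), math.sqrt)
```

For the monotone response used here the output intervals are simply the images of the input interval endpoints; the authors' optimisers handle the general non-monotone case, where the extrema can lie inside the interval.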
A Geo-referenced 3D model of the Juan de Fuca Slab and associated seismicity
Blair, J.L.; McCrory, P.A.; Oppenheimer, D.H.; Waldhauser, F.
2011-01-01
We present a Geographic Information System (GIS) of a new 3-dimensional (3D) model of the subducted Juan de Fuca Plate beneath western North America and associated seismicity of the Cascadia subduction system. The geo-referenced 3D model was constructed from weighted control points that integrate depth information from hypocenter locations and regional seismic velocity studies. We used the 3D model to differentiate earthquakes that occur above the Juan de Fuca Plate surface from earthquakes that occur below the plate surface. This GIS project of the Cascadia subduction system supersedes the one previously published by McCrory and others (2006). Our new slab model updates the model with new constraints. The most significant updates to the model include: (1) weighted control points to incorporate spatial uncertainty, (2) an additional gridded slab surface based on the Generic Mapping Tools (GMT) Surface program which constructs surfaces based on splines in tension (see expanded description below), (3) double-differenced hypocenter locations in northern California to better constrain slab location there, and (4) revised slab shape based on new hypocenter profiles that incorporate routine depth uncertainties as well as data from new seismic-reflection and seismic-refraction studies. We also provide a 3D fly-through animation of the model for use as a visualization tool.
Mentally walking through doorways causes forgetting: The location updating effect and imagination.
Lawrence, Zachary; Peterson, Daniel
2016-01-01
Researchers have documented an intriguing phenomenon whereby simply walking through a doorway causes forgetting (the location updating effect). The Event Horizon Model is the most commonly cited theory to explain these data. Importantly, this model explains the effect without invoking the importance or reliance upon perceptual information (i.e., seeing oneself pass through the doorway). This generates the intriguing hypothesis that the effect may be demonstrated in participants who simply imagine walking through a doorway. Across two experiments, we explicitly test this hypothesis. Participants familiarised themselves with both real (Experiment 1) and virtual (Experiment 2) environments which served as the setting for their mental walk. They were then provided with an image to remember and were instructed to imagine themselves walking through the previously presented space. In both experiments, when the mental walk required participants to pass through a doorway, more forgetting occurred, consistent with the predictions laid out in the Event Horizon Model.
Updating of working memory: lingering bindings.
Oberauer, Klaus; Vockenberg, Kerstin
2009-05-01
Three experiments investigated proactive interference and proactive facilitation in a memory-updating paradigm. Participants remembered several letters or spatial patterns, distinguished by their spatial positions, and updated them by new stimuli up to 20 times per trial. Self-paced updating times were shorter when an item previously remembered and then replaced reappeared in the same location than when it reappeared in a different location. This effect demonstrates residual memory for no-longer-relevant bindings of items to locations. The effect increased with the number of items to be remembered. With one exception, updating times did not increase, and recall of final values did not decrease, over successive updating steps, thus providing little evidence for proactive interference building up cumulatively.
Early Limits on the Verbal Updating of an Object's Location
ERIC Educational Resources Information Center
Ganea, Patricia A.; Harris, Paul L.
2013-01-01
Recent research has shown that by 30 months of age, children can successfully update their representation of an absent object's location on the basis of new verbal information, whereas 23-month-olds often return to the object's prior location. The current results show that this updating failure persisted even when (a) toddlers received visual and…
Future-year ozone prediction for the United States using updated models and inputs.
Collet, Susan; Kidokoro, Toru; Karamchandani, Prakash; Shah, Tejas; Jung, Jaegun
2017-08-01
The relationship between emission reductions and changes in ozone can be studied using photochemical grid models. These models are updated with new information as it becomes available. The primary objective of this study was to update the previous Collet et al. studies by using the most up-to-date (at the time the study was done) emissions modeling tools, inventories, and meteorology available to conduct ozone source attribution and sensitivity studies. Results show that future-year (2030) design values for 8-hr ozone concentrations were lower than base-year (2011) values. The ozone source attribution results for selected cities showed that boundary conditions were the dominant contributors to ozone concentrations at the western U.S. locations and were important for many eastern U.S. locations. Point sources were generally more important in the eastern United States than in the western United States. The contributions of on-road mobile emissions were less than 5 ppb at a majority of the cities selected for analysis. The higher-order decoupled direct method (HDDM) results showed that, in most of the locations selected for analysis, NOx emission reductions were more effective than VOC emission reductions in reducing ozone levels. The source attribution results from this study provide useful information on the important source categories and provide some initial guidance on future emission reduction strategies.
Feature-binding errors after eye movements and shifts of attention.
Golomb, Julie D; L'heureux, Zara E; Kanwisher, Nancy
2014-05-01
When people move their eyes, the eye-centered (retinotopic) locations of objects must be updated to maintain world-centered (spatiotopic) stability. Here, we demonstrated that the attentional-updating process temporarily distorts the fundamental ability to bind object locations with their features. Subjects were simultaneously presented with four colors after a saccade, one of them in a precued spatiotopic target location, and were instructed to report the target's color using a color wheel. Subjects' reports were systematically shifted in color space toward the color of the distractor in the retinotopic location of the cue. Probabilistic modeling exposed both crude swapping errors and subtler feature mixing (as if the retinotopic color had blended into the spatiotopic percept). Additional experiments conducted without saccades revealed that the two types of errors stemmed from different attentional mechanisms (attention shifting vs. splitting). Feature mixing not only reflects a new perceptual phenomenon, but also provides novel insight into how attention is remapped across saccades.
Revised coordinates of the Mars Orbiter Laser Altimeter (MOLA) footprints
NASA Astrophysics Data System (ADS)
Annibali, S.; Stark, A.; Gwinner, K.; Hussmann, H.; Oberst, J.
2017-09-01
We revised the Mars Orbiter Laser Altimeter (MOLA) footprint locations (i.e. areocentric body-fixed latitude and longitude), using updated trajectory models for the Mars Global Surveyor and updated rotation parameters of Mars, including precession, nutation and length-of-day variation. We assess the impact of these updates on the gridded MOLA maps. A first comparison reveals that even slight corrections to the rotational state of Mars can lead to height differences up to 100 m (in particular in regions with high slopes, where large interpolation effects are expected). Ultimately, we aim at independent measurements of the rotation parameters of Mars. We co-register MOLA profiles to digital terrain models from stereo images (stereo DTMs) and measure offsets of the two data sets.
Coates, Peter S.; Casazza, Michael L.; Brussee, Brianne E.; Ricca, Mark A.; Gustafson, K. Benjamin; Sanchez-Chopitea, Erika; Mauch, Kimberly; Niell, Lara; Gardner, Scott; Espinosa, Shawn; Delehanty, David J.
2016-05-20
Successful adaptive management hinges largely upon integrating new and improved sources of information as they become available. As a timely example of this tenet, we updated a management decision support tool that was previously developed for greater sage-grouse (Centrocercus urophasianus, hereinafter referred to as “sage-grouse”) populations in Nevada and California. Specifically, recently developed spatially explicit habitat maps derived from empirical data played a key role in the conservation of this species facing listing under the Endangered Species Act. This report provides an updated process for mapping relative habitat suitability and management categories for sage-grouse in Nevada and northeastern California (Coates and others, 2014, 2016). These updates include: (1) adding radio and GPS telemetry locations from sage-grouse monitored at multiple sites during 2014 to the original location dataset beginning in 1998; (2) integrating output from high resolution maps (1–2 m2) of sagebrush and pinyon-juniper cover as covariates in resource selection models; (3) modifying the spatial extent of the analyses to match newly available vegetation layers; (4) explicit modeling of relative habitat suitability during three seasons (spring, summer, winter) that corresponded to critical life history periods for sage-grouse (breeding, brood-rearing, over-wintering); (5) accounting for differences in habitat availability between more mesic sagebrush steppe communities in the northern part of the study area and drier Great Basin sagebrush in more southerly regions by categorizing continuous region-wide surfaces of habitat suitability index (HSI) with independent locations falling within two hydrological zones; (6) integrating the three seasonal maps into a composite map of annual relative habitat suitability; (7) deriving updated land management categories based on previously determined cut-points for intersections of habitat suitability and an updated index of sage-grouse 
abundance and space-use (AUI); and (8) masking urban footprints and major roadways out of the final map products. Seasonal habitat maps were generated based on model-averaged resource selection functions (RSF) derived for 10 project areas (813 sage-grouse; 14,085 locations) during the spring season, 10 during the summer season (591 sage-grouse, 11,743 locations), and 7 during the winter season (288 sage-grouse, 4,862 locations). RSF surfaces were transformed to HSIs and averaged in a GIS framework for every pixel for each season. Validation analyses of categorized HSI surfaces using a suite of independent datasets resulted in an agreement of 93–97 percent for habitat versus non-habitat on an annual basis. Spring and summer maps validated similarly well at 94–97 percent, while winter maps validated slightly less accurately at 87–93 percent. We then provide an updated example of how space use models can be integrated with habitat models to help inform conservation planning. We used updated lek count data to calculate a composite abundance and space use index (AUI) that comprised the combination of probabilistic breeding density with a non-linear probability of occurrence relative to distance to nearest lek. The AUI was then classified into two categories of use (high and low-to-no) and intersected with the HSI categories to create potential management prioritization scenarios based on information about sage-grouse occupancy coupled with habitat suitability. Compared to Coates and others (2014, 2016), the amount of area classified as habitat across the region increased by 6.5 percent (approximately 1,700,000 acres).
For management categories, core increased by 7.2 percent (approximately 865,000 acres), priority increased by 9.6 percent (approximately 855,000 acres), and general increased by 9.2 percent (approximately 768,000 acres), while non-habitat decreased (that is, classified non-habitat occurring outside of areas of concentrated use) by 11.9 percent (approximately 2,500,000 acres). Importantly, seasonal and annual maps represent habitat for all age and sex classes of sage-grouse (that is, sample sizes of marked grouse were insufficient to only construct models for reproductive females). This revised sage-grouse habitat mapping product helps improve adaptive application of conservation planning tools based on intersections of spatially explicit habitat suitability, abundance, and space use indices.
Progress on Updating the 1961-1990 National Solar Radiation Database
NASA Technical Reports Server (NTRS)
Renne, D.; Wilcox, S.; Marion, B.; George, R.; Myers, D.
2003-01-01
The 1961-1990 National Solar Radiation Data Base (NSRDB) provides a 30-year climate summary and solar characterization of 239 locations throughout the United States. Over the past several years, the National Renewable Energy Laboratory (NREL) has received numerous inquiries from a range of constituents as to whether an update of the database to include the 1990s will be developed. However, there are formidable challenges to creating an update of the serially complete station-specific database for the 1971-2000 period. During the 1990s, the National Weather Service changed its observational procedures from a human-based to an automated system, resulting in the loss of important input variables to the model used to complete the 1961-1990 NSRDB. As a result, alternative techniques are required for an update that covers the 1990s. This paper examines several alternative approaches for creating this update and describes preliminary NREL plans for implementing the update.
Multi-level damage identification with response reconstruction
NASA Astrophysics Data System (ADS)
Zhang, Chao-Dong; Xu, You-Lin
2017-10-01
Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging, since the dimension of the potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global identifiability of the inverse problem can hinder the realization of model-updating-based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures and each substructure is further condensed into a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first at the macro-element level to locate the potentially damaged region, and the second over the suspicious substructures to further locate as well as quantify the damage severity. At each level, the damage searching space over which model updating is performed is notably narrowed down, not only reducing the computational burden but also increasing damage identifiability. Moreover, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction considerably improves the accuracy of damage localization and quantification.
Modelling Solar Energetic Particle Events Using the iPATH Model
NASA Astrophysics Data System (ADS)
Li, G.; Hu, J.; Ao, X.; Zank, G. P.; Verkhoglyadova, O. P.
2016-12-01
Solar Energetic Particles (SEPs) are the No. 1 space weather hazard. Understanding how particles are energized and propagate in these events is of practical concern to manned space missions. In particular, both the radial evolution and the longitudinal extent of a gradual solar energetic particle (SEP) event are central topics for space weather forecasting. In this talk, I discuss the improved Particle Acceleration and Transport in the Heliosphere (iPATH) model. The iPATH model consists of three parts: (1) an updated ZEUS3D V3.5 MHD module that models the background solar wind and the initiation of a CME in a 2D domain; (2) an updated shock acceleration module in which we investigate particle acceleration at different longitudinal locations along the surface of a CME-driven shock. Accelerated particle spectra are obtained at the shock under the diffusive shock acceleration mechanism, and shock parameters and particle distributions are recorded and used as inputs for the final part; and (3) an updated transport module in which we follow the transport of accelerated particles from the shock to any destination (e.g., Earth and/or Mars) using a Monte Carlo method. Both pitch-angle scattering due to MHD turbulence and perpendicular diffusion across the magnetic field are included; our iPATH model is therefore intrinsically 2D in nature. The model is capable of generating time-intensity profiles and instantaneous particle spectra at various locations and can greatly improve our current space weather forecasting capability.
Allocentrically implied target locations are updated in an eye-centred reference frame.
Thompson, Aidan A; Glover, Christopher V; Henriques, Denise Y P
2012-04-18
When reaching to remembered target locations following an intervening eye movement a systematic pattern of error is found indicating eye-centred updating of visuospatial memory. Here we investigated if implicit targets, defined only by allocentric visual cues, are also updated in an eye-centred reference frame as explicit targets are. Participants viewed vertical bars separated by varying distances, and horizontal lines of equivalently varying lengths, implying a "target" location at the midpoint of the stimulus. After determining the implied "target" location from only the allocentric stimuli provided, participants saccaded to an eccentric location, and reached to the remembered "target" location. Irrespective of the type of stimulus reaching errors to these implicit targets are gaze-dependent, and do not differ from those found when reaching to remembered explicit targets. Implicit target locations are coded and updated as a function of relative gaze direction with respect to those implied locations just as explicit targets are, even though no target is specifically represented. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Hasselmo, Michael E.
2008-01-01
The spiking activity of hippocampal neurons during REM sleep exhibits temporally structured replay of spiking occurring during previously experienced trajectories (Louie and Wilson, 2001). Here, temporally structured replay of place cell activity during REM sleep is modeled in a large-scale network simulation of grid cells, place cells and head direction cells. During simulated waking behavior, the movement of the simulated rat drives activity of a population of head direction cells that updates the activity of a population of entorhinal grid cells. The population of grid cells drives the activity of place cells coding individual locations. Associations between location and movement direction are encoded by modification of excitatory synaptic connections from place cells to speed modulated head direction cells. During simulated REM sleep, the population of place cells coding an experienced location activates the head direction cells coding the associated movement direction. Spiking of head direction cells then causes frequency shifts within the population of entorhinal grid cells to update a phase representation of location. Spiking grid cells then activate new place cells that drive new head direction activity. In contrast to models that perform temporally compressed sequence retrieval similar to sharp wave activity, this model can simulate data on temporally structured replay of hippocampal place cell activity during REM sleep at time scales similar to those observed during waking. These mechanisms could be important for episodic memory of trajectories. PMID:18973557
Wild Fire Emissions for the NOAA Operational HYSPLIT Smoke Model
NASA Astrophysics Data System (ADS)
Huang, H. C.; ONeill, S. M.; Ruminski, M.; Shafran, P.; McQueen, J.; DiMego, G.; Kondragunta, S.; Gorline, J.; Huang, J. P.; Stunder, B.; Stein, A. F.; Stajner, I.; Upadhayay, S.; Larkin, N. K.
2015-12-01
Particulate Matter (PM) generated by forest fires often leads to degraded visibility and unhealthy air quality in nearby and downstream areas. To provide near-real-time PM information to state and local agencies, the NOAA/National Weather Service (NWS) operational HYSPLIT (Hybrid Single Particle Lagrangian Integrated Trajectory Model) smoke modeling system (NWS/HYSPLIT smoke) provides forecasts of smoke concentrations resulting from fire emissions, driven by 12 km weather predictions from the NWS North American Model. NWS/HYSPLIT smoke incorporates the U.S. Forest Service BlueSky Smoke Modeling Framework (BlueSky) to provide fire smoke emissions, with input fire locations from the Hazard Mapping System fire and smoke detection system of the NOAA National Environmental Satellite, Data, and Information Service (NESDIS). Experienced analysts inspect satellite imagery from multiple sensors onboard geostationary and orbital satellites to identify the location, size, and duration of smoke emissions for the model. NWS/HYSPLIT smoke is being updated to use a newer version of USFS BlueSky. The updated BlueSky incorporates the Fuel Characteristic Classification System version 2 (FCCS2) over the continental U.S. and Alaska. FCCS2 includes a more detailed description of fuel loadings with additional plant type categories. The updated BlueSky also utilizes an improved fuel consumption model and fire emission production system. For the period between August 2014 and June 2015, NWS/HYSPLIT smoke simulations show that fire smoke emissions with the updated BlueSky are stronger than with the current operational BlueSky in the northwest U.S. For the same comparisons, weaker fire smoke emissions from the updated BlueSky were observed over the middle and eastern parts of the U.S. A statistical evaluation of NWS/HYSPLIT smoke predicted total column concentrations against NOAA NESDIS GOES-EAST Aerosol Smoke Product retrievals is underway.
Preliminary results show that using the newer version of BlueSky leads to improved performance of NWS/HYSPLIT smoke for June 2015. These results are partially due to the default fuel loading selected for Canadian fires, which leads to stronger fire emissions there. The use of more realistic Canadian fuel loading may further improve the NWS/HYSPLIT smoke forecast.
Human Visuospatial Updating After Passive Translations In Three-Dimensional Space
Klier, Eliana M.; Hess, Bernhard J. M.; Angelaki, Dora E.
2013-01-01
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63 and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward or backward, while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84±0.28 (mean±SD), as compared to 0.51±0.33 for downward and 1.05±0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12±0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and inter-subject variabilities were smallest for near targets. Thus, in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy. PMID:18256164
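The updating ratio used here is simply the compensatory eye movement actually produced divided by the movement the geometry requires. A sketch with hypothetical numbers (the movement actually required depends on target depth and the 53 cm fixation distance, which is not computed here):

```python
def updating_ratio(eye_movement_deg, required_deg):
    """Ratio of the compensatory eye movement made to the movement
    geometrically required for perfect updating:
    0 = no updating, 1 = perfect updating."""
    return eye_movement_deg / required_deg

# Hypothetical example: if the translation geometry demands a 10.7 deg
# version shift and the subject produces 9.0 deg, the ratio is ~0.84,
# matching the order of the lateral-translation averages reported.
ratio = updating_ratio(9.0, 10.7)
```

For forward/backward motion the same ratio is computed on the vergence component instead of version.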
Disruption of the Right Temporoparietal Junction Impairs Probabilistic Belief Updating.
Mengotti, Paola; Dombert, Pascasie L; Fink, Gereon R; Vossel, Simone
2017-05-31
Generating and updating probabilistic models of the environment is a fundamental modus operandi of the human brain. Although crucial for various cognitive functions, the neural mechanisms of these inference processes remain to be elucidated. Here, we show the causal involvement of the right temporoparietal junction (rTPJ) in updating probabilistic beliefs and we provide new insights into the chronometry of the process by combining online transcranial magnetic stimulation (TMS) with computational modeling of behavioral responses. Female and male participants performed a modified location-cueing paradigm, where false information about the percentage of cue validity (%CV) was provided in half of the experimental blocks to prompt updating of prior expectations. Online double-pulse TMS over rTPJ 300 ms (but not 50 ms) after target appearance selectively decreased participants' updating of false prior beliefs concerning %CV, reflected in a decreased learning rate of a Rescorla-Wagner model. Online TMS over rTPJ also impacted on participants' explicit beliefs, causing them to overestimate %CV. These results confirm the involvement of rTPJ in updating of probabilistic beliefs, thereby advancing our understanding of this area's function during cognitive processing. SIGNIFICANCE STATEMENT Contemporary views propose that the brain maintains probabilistic models of the world to minimize surprise about sensory inputs. Here, we provide evidence that the right temporoparietal junction (rTPJ) is causally involved in this process. Because neuroimaging has suggested that rTPJ is implicated in divergent cognitive domains, the demonstration of an involvement in updating internal models provides a novel unifying explanation for these findings. We used computational modeling to characterize how participants change their beliefs after new observations. 
By interfering with rTPJ activity through online transcranial magnetic stimulation, we showed that participants were less able to update prior beliefs with TMS delivered at 300 ms after target onset. Copyright © 2017 the authors.
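The Rescorla-Wagner model referenced above updates a belief by a fraction (the learning rate) of each trial's prediction error. A minimal sketch with hypothetical trial sequences and learning rates, illustrating how a reduced learning rate (as reported after TMS over rTPJ at 300 ms) slows belief updating:

```python
def rescorla_wagner(outcomes, alpha, v0=0.5):
    """Trial-by-trial belief update: v <- v + alpha * (outcome - v).
    `outcomes` are 1 (cue valid) or 0 (cue invalid); the running value v
    tracks the estimated cue validity. Values here are illustrative."""
    v = v0
    trace = [v]
    for o in outcomes:
        v = v + alpha * (o - v)
        trace.append(v)
    return trace

# The same evidence moves the belief less per trial at a lower alpha.
fast = rescorla_wagner([1, 1, 1, 0, 1, 1], alpha=0.5)
slow = rescorla_wagner([1, 1, 1, 0, 1, 1], alpha=0.1)
```

In the study, the learning rate alpha was a fitted parameter of the behavioral model; TMS over rTPJ selectively decreased it.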
Response Surface Model (RSM)-based Benefit Per Ton Estimates
The tables below are updated versions of the tables appearing in The influence of location, source, and emission type in estimates of the human health benefits of reducing a ton of air pollution (Fann, Fulcher and Hubbell 2009).
Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling
NASA Astrophysics Data System (ADS)
Li, L.; Cao, Z.; Zhou, H.
2017-12-01
Groundwater modeling calls for an effective and robust integration method to fill the gap between the model and the data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in disciplines such as petroleum engineering and hydrogeology. In this approach, groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed, which updates the models using all the data together and therefore incurs a much lower computational cost. To further improve the performance of the ES, an iterative ES was proposed that repeatedly updates the models while assimilating all measurements together. In this work, we compare the performance of the EnKF, the ES, and the iterative ES using a synthetic groundwater modeling example. Hydraulic head data modeled on the basis of a reference conductivity field are used to inversely estimate conductivities at unsampled locations. Results are evaluated in terms of the characterization of conductivity and of groundwater flow and solute transport predictions. It is concluded that (1) the iterative ES achieves results comparable to the EnKF at a lower computational cost, and (2) the iterative ES outperforms the plain ES thanks to its repeated updating. These findings suggest that the iterative ES deserves greater attention for data assimilation in groundwater modeling.
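The analysis step shared by these ensemble methods can be sketched for a scalar state observed directly; all numbers below are illustrative and not taken from the study:

```python
import random

def enkf_update(ensemble, obs, obs_err_sd, rng):
    """Stochastic EnKF analysis step for a scalar state observed directly
    (H = 1): x_a = x_f + K (d + eps - x_f), with Kalman gain
    K = P / (P + R) computed from the ensemble variance P."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_err_sd ** 2)
    # Each member assimilates a perturbed copy of the observation.
    return [x + gain * (obs + rng.gauss(0, obs_err_sd) - x) for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(5.0, 2.0) for _ in range(500)]
# EnKF-style sequential assimilation of two hypothetical head observations:
post = enkf_update(prior, 8.0, 1.0, rng)
post = enkf_update(post, 8.2, 1.0, rng)
```

An ES would instead assimilate both observations in a single batch update of the initial ensemble; the iterative ES repeats that batch update several times, which is what makes it competitive with the sequential EnKF at lower cost.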
2012-06-04
central Tibetan Plateau. Automated hypocenter locations in south-central Tibet were finalized. Refinements included an update of the model used for... central Tibet. A subset of ~7,900 events with 25+ arrivals is considered well-located based on kilometer-scale differences relative to manually located... propagation in the Nepal Himalaya and the south-central Tibetan Plateau. The 2002–2005 experiment consisted of 233 stations along a dense 800 km linear
Predictive spatial modeling of narcotic crop growth patterns
Waltz, Frederick A.; Moore, D.G.
1986-01-01
Spatial models for predicting the geographic distribution of marijuana crops have been developed and are being evaluated for use in law enforcement programs. The models are based on growing condition preferences and on psychological inferences regarding grower behavior. Experiences of local law officials were used to derive the initial model, which was updated and improved as data from crop finds were archived and statistically analyzed. The predictive models are changed as crop locations are moved in response to the pressures of law enforcement. The models use spatial data in a raster geographic information system. The spatial data are derived from the U.S. Geological Survey's US GeoData, standard 7.5-minute topographic quadrangle maps, interpretations of aerial photographs, and thematic maps. Updating of cultural patterns, canopy closure, and other dynamic features is conducted through interpretation of aerial photographs registered to the 7.5-minute quadrangle base. The model is used to numerically weight various data layers that have been processed using spread functions, edge definition, and categorization. The building of the spatial data base, model development, model application, product generation, and use are collectively referred to as the Area Reduction Program (ARP). The goal of ARP is to provide law enforcement officials with tactical maps that show the most likely locations for narcotic crops.
De Sá Teixeira, Nuno
2016-01-01
Visual memory for the spatial location where a moving target vanishes has been found to be systematically displaced downward in the direction of gravity. Moreover, it was recently reported that the magnitude of the downward error increases steadily with increasing retention intervals imposed after object’s offset and before observers are allowed to perform the spatial localization task, in a pattern where the remembered vanishing location drifts downward as if following a falling trajectory. This outcome was taken to reflect the dynamics of a representational model of earth’s gravity. The present study aims to establish the spatial and temporal features of this downward drift by taking into account the dynamics of the motor response. The obtained results show that the memory for the last location of the target drifts downward with time, thus replicating previous results. Moreover, the time taken for completion of the behavioural localization movements seems to add to the imposed retention intervals in determining the temporal frame during which the visual memory is updated. Overall, it is reported that the representation of spatial location drifts downward by about 3 pixels for each two-fold increase of time until response. The outcomes are discussed in relation to a predictive internal model of gravity which outputs an on-line spatial update of remembered objects’ location. PMID:26910260
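The reported rate, about 3 pixels of downward drift for each two-fold increase of time until response, corresponds to a logarithmic drift law. A sketch in which the zero-drift reference time t0 is an assumed free parameter (the abstract reports only the slope):

```python
import math

def remembered_drift_px(elapsed_s, t0_s=0.3, slope_px=3.0):
    """Downward drift of the remembered vanishing location, growing by
    `slope_px` pixels per doubling of time since target offset.
    t0_s, the time at which drift is taken as zero, is hypothetical."""
    return slope_px * math.log2(elapsed_s / t0_s)
```

Note that under this law the total retention interval includes the time taken to complete the localization movement, consistent with the finding that response duration adds to the imposed delay.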
Second COS FUV Lifetime Position: FUV Target Acquisition Parameter Update {FENA4}
NASA Astrophysics Data System (ADS)
Penton, Steven
2011-10-01
Verify the ability of the Cycle 20 COS FSW to place an isolated point source at the center of the PSA, using FUV dispersed light target acquisition {TA} from the object and all three FUV gratings at the Second Lifetime Position {SLP}. This program is modeled from the activity summary of FENA4. This program should be executed after the new HV, XD spectral positions, and focus are determined and updated. In addition, the LIFETIME=ALTERNATE TA FSW parameters should be updated prior to execution of this program. NUV imaging TAs have previously been used to determine the correct locations for FUV spectra. We follow the same procedure here.
Walters, D M; Stringer, S M
2010-07-01
A key question in understanding the neural basis of path integration is how individual, spatially responsive, neurons may self-organize into networks that can, through learning, integrate velocity signals to update a continuous representation of location within an environment. It is of vital importance that this internal representation of position is updated at the correct speed, and in real time, to accurately reflect the motion of the animal. In this article, we present a biologically plausible model of velocity path integration of head direction that can solve this problem using neuronal time constants to effect natural time delays, over which associations can be learned through associative Hebbian learning rules. The model comprises a linked continuous attractor network and competitive network. In simulation, we show that the same model is able to learn two different speeds of rotation when implemented with two different values for the time constant, and without the need to alter any other model parameters. The proposed model could be extended to path integration of place in the environment, and path integration of spatial view.
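The core mechanism described above, shifting an activity bump at a speed set by the velocity signal, can be sketched numerically. This is a hedged toy, not the authors' learned network: the ring size, bump width, shift rule, and population-vector decoder are illustrative choices.

```python
import numpy as np

# A ring of N rate neurons; the activity bump is shifted each step in
# proportion to the angular-velocity signal. In the paper the timing
# instead falls out of neuronal time constants and learned Hebbian
# associations; here the velocity gain is hard-coded for illustration.
N = 120
prefs = 2 * np.pi * np.arange(N) / N     # preferred head directions
spacing = 2 * np.pi / N                  # angle between neighbours

def decode(rates):
    """Population-vector estimate of the represented heading."""
    return np.angle(np.sum(rates * np.exp(1j * prefs)))

rates = np.exp(10.0 * (np.cos(prefs) - 1.0))   # bump centred on 0 rad

dt, T, omega = 0.001, 2.0, 1.5           # step (s), duration (s), rad/s
for _ in range(int(T / dt)):
    # move a fraction of the activity toward the next neuron, so the
    # bump advances at (approximately) omega radians per second
    a = omega * dt / spacing
    rates = (1.0 - a) * rates + a * np.roll(rates, 1)

print(round(decode(rates), 1))           # heading ~ omega * T = 3.0 rad
```

In the full model, the velocity-proportional gain is not hard-coded but is learned through associative Hebbian rules acting over the natural time delays introduced by the neuronal time constants.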
Marino, Alexandria C.; Chun, Marvin M.
2011-01-01
During natural vision, eye movements can drastically alter the retinotopic (eye-centered) coordinates of locations and objects, yet the spatiotopic (world-centered) percept remains stable. Maintaining visuospatial attention in spatiotopic coordinates requires updating of attentional representations following each eye movement. However, this updating is not instantaneous; attentional facilitation temporarily lingers at the previous retinotopic location after a saccade, a phenomenon known as the retinotopic attentional trace. At various times after a saccade, we probed attention at an intermediate location between the retinotopic and spatiotopic locations to determine whether a single locus of attentional facilitation slides progressively from the previous retinotopic location to the appropriate spatiotopic location, or whether retinotopic facilitation decays while a new, independent spatiotopic locus concurrently becomes active. Facilitation at the intermediate location was not significant at any time, suggesting that top-down attention can result in enhancement of discrete retinotopic and spatiotopic locations without passing through intermediate locations. PMID:21258903
Damage identification via asymmetric active magnetic bearing acceleration feedback control
NASA Astrophysics Data System (ADS)
Zhao, Jie; DeSmidt, Hans; Yao, Wei
2015-04-01
A Floquet-based damage detection methodology for cracked rotor systems is developed and demonstrated on a shaft-disk system. This approach uses measured changes in the system natural frequencies to estimate the severity and location of shaft structural cracks during operation. The damage detection algorithm obtains an initial guess by the least-squares method and then iteratively refines the damage parameter vector through eigenvector updating. An active magnetic bearing is introduced to break the symmetry of the rotor system, and the tuning range of suitable stiffness/virtual-mass gains is studied. The system model is built with an energy method, and the equations of motion are derived by applying the assumed-modes method and Lagrange's equations. In addition, the crack model is based on the Strain Energy Release Rate (SERR) concept from fracture mechanics. Finally, the method is synthesized via harmonic balance, and numerical examples for a shaft/disk system demonstrate its effectiveness in detecting both the location and severity of structural damage.
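The least-squares initial guess can be illustrated with a linearized sensitivity relation between measured frequency shifts and damage parameters. The sensitivity matrix below is made up for illustration; it is not derived from the paper's rotor model.

```python
import numpy as np

# Hypothetical sensitivity matrix S: entry (i, j) is d f_i / d p_j,
# i.e. how measured natural frequency i changes per unit of damage
# parameter j (severity at candidate crack location j).
S = np.array([[-2.0, -0.5],
              [-0.8, -1.6],
              [-0.3, -2.4]])
p_true = np.array([0.15, 0.30])        # severity at two crack locations
noise = np.array([0.01, -0.02, 0.01])  # measurement error (Hz)
df = S @ p_true + noise                # observed frequency shifts

# least-squares initial guess of the damage parameter vector, which
# the method then refines iteratively via eigenvector updating
p0, *_ = np.linalg.lstsq(S, df, rcond=None)
print(np.round(p0, 2))                 # near [0.15, 0.30]
```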
Moreo, Michael T.; Justet, Leigh
2008-01-01
Ground-water withdrawal estimates from 1913 through 2003 for the Death Valley regional ground-water flow system (DVRFS) are compiled in an electronic database to support a regional, three-dimensional, transient ground-water flow model. This database updates a previously published database that compiled estimates of ground-water withdrawals for 1913-1998. The same methodology is used to construct each database. The primary differences between the two databases are an additional 5 years of ground-water withdrawal data; well locations in the updated database restricted to the DVRFS model boundary; and application rates 0 to 1.5 feet per year lower than the original estimates. The lower application rates result from revised estimates of crop consumptive use, which are based on updated estimates of potential evapotranspiration. In 2003, about 55,700 acre-feet of ground water was pumped in the DVRFS, of which 69 percent was used for irrigation, 13 percent for domestic use, and 18 percent for public supply, commercial, and mining activities.
Spatial updating in area LIP is independent of saccade direction.
Heiser, Laura M; Colby, Carol L
2006-05-01
We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
NASA Astrophysics Data System (ADS)
Duxbury, T. C.; Christensen, P.; Smith, D. E.; Neumann, G. A.; Kirk, R. L.; Caplinger, M. A.; Albee, A. A.; Seregina, N. V.; Neukum, G.; Archinal, B. A.
2014-12-01
The small crater Airy-0 was selected from Mariner 9 images to be the reference for the Mars prime meridian. Initial analyses in the year 2000 tied Viking Orbiter and Mars Orbiter Camera images of Airy-0 to the evolving Mars Orbiter Laser Altimeter global digital terrain model to update the location of Airy-0. Based upon this tie and radiometric tracking of landers/rovers from Earth, new expressions for the Mars spin axis direction, spin rate, and prime meridian epoch value were produced to define the orientation of the Martian surface in inertial space over time. Since the Mars Global Surveyor mission and Mars Orbiter Laser Altimeter global digital terrain model were completed some time ago, a more exhaustive study has been performed to determine the accuracy of the Airy-0 location and orientation of Mars at the standard epoch. Thermal Emission Imaging System (THEMIS) IR image cubes of the Airy and Gale crater regions were tied to the global terrain grid using precision stereo photogrammetric image processing techniques. The Airy-0 location was determined to be about 0.001° east of its predicted location using the currently defined International Astronomical Union (IAU) prime meridian location. Information on this new location and how it was derived will be provided to the NASA Mars Exploration Program Geodesy and Cartography Working Group for their assessment. This NASA group will make a recommendation to the IAU Working Group on Cartographic Coordinates and Rotational Elements to update the expression for the Mars spin axis direction, spin rate, and prime meridian location.
NASA Astrophysics Data System (ADS)
Turnbull, Heather; Omenzetter, Piotr
2017-04-01
The recent shift towards the development of clean, sustainable energy sources has brought a new challenge in terms of structural safety and reliability: through aging, manufacturing defects, harsh environmental and operational conditions, and extreme events such as lightning strikes, wind turbines can become damaged, resulting in production losses and environmental degradation. To monitor the current structural state of a turbine, structural health monitoring (SHM) techniques are beneficial. Physics-based SHM, in the form of calibration of finite element models (FEMs) by inverse techniques, is adopted in this research. Fuzzy finite element model updating (FFEMU) techniques for damage severity assessment of a small-scale wind turbine blade are discussed and implemented. The main advantage of FFEMU is its ability to account, in a simple way, for uncertainty within the model updating problem. Uncertainty quantification techniques such as fuzzy sets enable a convenient mathematical representation of the various uncertainties. Experimental frequencies obtained from modal analysis of a small-scale wind turbine blade were described by fuzzy numbers to model measurement uncertainty. In this investigation, damage severity estimation was studied by adding small masses of varying magnitude to the trailing edge of the structure. This structural modification, intended to stand in for damage, enabled non-destructive experimental simulation of structural change. A numerical model was constructed with multiple variable additional masses simulated on the blade's trailing edge and used as updating parameters. Objective functions for updating were constructed and minimized using both a particle swarm optimization algorithm and a firefly algorithm. FFEMU was able to predict the baseline material properties of the blade while also successfully predicting, with sufficient accuracy, a larger magnitude of structural alteration and its location.
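The minimization step can be illustrated with a minimal particle swarm over a single updating parameter. The one-parameter "FE model" below (an added tip mass lowering two natural frequencies) is a stand-in, not the blade model from the study, and the swarm constants are conventional defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the FE model: two natural frequencies that drop
# as an added mass (the updating parameter) increases.
def model_freqs(mass):
    return np.array([20.0, 120.0]) / np.sqrt(1.0 + mass)

measured = model_freqs(0.3)          # synthetic "measurement", mass = 0.3

def misfit(mass):                    # objective function to minimise
    return np.sum((model_freqs(mass) - measured) ** 2)

# minimal particle swarm optimization over the scalar parameter
n, w, c1, c2 = 20, 0.7, 1.5, 1.5     # swarm size, inertia, pull strengths
pos = rng.uniform(0.0, 1.0, n)
vel = np.zeros(n)
pbest = pos.copy()
pbest_f = np.array([misfit(p) for p in pos])
g = pbest[np.argmin(pbest_f)]        # global best position
for _ in range(100):
    r1, r2 = rng.uniform(size=n), rng.uniform(size=n)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)]

print(round(g, 2))                   # recovers the simulated mass, ~0.3
```

In the fuzzy setting, such a minimization is repeated at several membership levels of the fuzzy measured frequencies, yielding intervals of updated parameters per level rather than a single crisp value.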
NASA Astrophysics Data System (ADS)
Longting, M.; Ye, S.; Wu, J.
2014-12-01
Identifying and removing DNAPL sources in an aquifer system is vital to successful remediation and to lowering remediation time and cost. Our work applies an optimal search strategy introduced by Dokou and Pinder [1], with some modifications, to a field site in Nanjing City, China, to define the strength and location of DNAPL sources using the fewest samples. The overall strategy uses Monte Carlo stochastic groundwater flow and transport modeling, incorporates existing sampling data into the search, and determines optimal sampling locations selected according to the reduction in overall uncertainty of the field and the proximity to the source locations. After a sample is taken, the plume estimate is updated using a Kalman filter. The updated plume is then compared, using a fuzzy set technique, to the concentration fields that emanate from each individual potential source. This comparison provides weights that reflect the degree of truth regarding the location of the source. The above steps are repeated until the optimal source characteristics are determined. For our site, some specific modifications have been made, as follows. Hydraulic conductivity (K) random fields are generated after fitting the measured K data to a variogram model. The locations of potential sources, which are given initial weights, are targeted based on the field survey, with multiple potential source locations around the workshops and a wastewater basin. Considering the short history (1999-2010) of manufacturing optical brightener PF at the site, and the existing sampling data, a preliminary source strength is then estimated, to be optimized later by the simplex method or a genetic algorithm. The whole algorithm will then guide optimal sampling and updating as the investigation proceeds, until the weights finally stabilize. Reference: [1] Dokou, Z., and G. F. Pinder, "Optimal search strategy for the definition of a DNAPL source," Journal of Hydrology 376(3) (2009): 542-556.
Acknowledgement: Funding support from the National Natural Science Foundation of China (Nos. 41030746 and 40872155) and the DuPont Company is gratefully appreciated.
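The Kalman-filter measurement update used to refine the plume after each sample follows the standard equations. The sketch below uses a hypothetical five-cell plume, uncorrelated prior covariance, and a single sampled cell; none of these numbers come from the site model.

```python
import numpy as np

# Standard Kalman measurement update on a hypothetical 5-cell plume.
x = np.array([2.0, 4.0, 8.0, 4.0, 2.0])   # prior mean concentrations
P = np.eye(5) * 4.0                       # prior covariance (uncorrelated)
H = np.zeros((1, 5)); H[0, 2] = 1.0       # the new sample observes cell 2
R = np.array([[0.25]])                    # measurement noise variance
z = np.array([6.5])                       # observed concentration

S = H @ P @ H.T + R                       # innovation covariance
K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
x_post = x + (K @ (z - H @ x)).ravel()    # updated plume mean
P_post = (np.eye(5) - K @ H) @ P          # updated uncertainty

print(round(x_post[2], 2))                # cell 2 pulled from 8.0 toward 6.5
```

In the full strategy, the reduced posterior covariance then feeds the choice of the next sampling location, balancing remaining uncertainty against proximity to candidate sources.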
On-line Bayesian model updating for structural health monitoring
NASA Astrophysics Data System (ADS)
Rocchetta, Roberto; Broggi, Matteo; Huchet, Quentin; Patelli, Edoardo
2018-03-01
Fatigue-induced cracking is a dangerous failure mechanism affecting mechanical components subject to alternating load cycles. System health monitoring should be adopted to identify cracks that can jeopardise the structure. Real-time damage detection may fail to identify cracks because of different sources of uncertainty that have been poorly assessed or even fully neglected. In this paper, a novel, efficient, and robust procedure is used for detecting the locations and lengths of cracks in mechanical components. A Bayesian model updating framework is employed, which allows the relevant sources of uncertainty to be accounted for. The idea underpinning the approach is to identify the most probable crack consistent with the experimental measurements. To tackle the computational cost of the Bayesian approach, an emulator is adopted to replace the computationally costly finite element model. To improve the overall robustness of the procedure, different numerical likelihoods, measurement noises, and imprecisions in the values of model parameters are analysed and their effects quantified. The accuracy of the stochastic updating and the efficiency of the numerical procedure are discussed. An experimental aluminium frame and a numerical model of a typical car suspension arm are used to demonstrate the applicability of the approach.
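The idea of identifying the most probable crack can be sketched with a one-parameter Bayesian update. Everything here is illustrative: the linear frequency-vs-crack-length "emulator", the noise level, and the flat prior are stand-ins for the paper's emulator, likelihood models, and FE model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "emulator" of the FE model: the first natural frequency drops
# linearly with crack length. Purely illustrative.
f0, c = 50.0, 0.8
def freq(crack_len):
    return f0 * (1.0 - c * crack_len)

true_len, sigma = 0.10, 0.2          # true crack (m), noise std (Hz)
z = freq(true_len) + rng.normal(0.0, sigma)   # one noisy measurement

def log_post(L):
    """Log-posterior: flat prior on [0, 0.5] m plus Gaussian likelihood."""
    if not 0.0 <= L <= 0.5:
        return -np.inf
    return -0.5 * ((z - freq(L)) / sigma) ** 2

# random-walk Metropolis-Hastings over the crack length
L, lp, samples = 0.25, log_post(0.25), []
for _ in range(20000):
    Lp = L + rng.normal(0.0, 0.02)
    lpp = log_post(Lp)
    if np.log(rng.uniform()) < lpp - lp:
        L, lp = Lp, lpp
    samples.append(L)

post = np.array(samples[5000:])      # discard burn-in
print(round(post.mean(), 2))         # posterior concentrates near 0.10 m
```

The emulator matters because each posterior sample would otherwise require one full FE solve; replacing it with a cheap surrogate makes the many-thousand-evaluation sampler affordable.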
NASA Astrophysics Data System (ADS)
Vassena, G.; Clerici, A.
2018-05-01
The state of the art in 3D surveying technologies, correctly applied, makes it possible to obtain coloured 3D models of large open-pit mines using different technologies, such as terrestrial laser scanning (TLS) with images combined with UAV-based digital photogrammetry. GNSS and/or total stations are also currently used to georeference the model. The University of Brescia has carried out a project to map in 3D an open-pit mine located in Botticino, a famous marble-extraction site close to Brescia in northern Italy. Terrestrial laser scanner 3D point clouds, combined with RGB images and digital photogrammetry from a UAV, were used to map a large part of the quarry. Following rigorous and well-known procedures, a 3D point cloud and mesh model were obtained with a straightforward and rigorous approach. After describing the combined mapping process, the paper describes the innovative process proposed for the daily/weekly update of the model itself. To realize this task, a SLAM-based approach is described, using an instrument capable of running an automatic localization process and real-time, in-the-field change detection analysis.
Smart Location Database - Service
The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block group in the United States. Future updates to the SLD will include additional attributes which summarize the relative location efficiency of a block group when compared to other block groups within the same metropolitan region. EPA also plans to periodically update attributes and add new attributes to reflect latest available data. A log of SLD updates is included in the SLD User Guide. See the user guide for a full description of data sources, data currency, and known limitations: https://edg.epa.gov/data/Public/OP/SLD/SLD_userguide.pdf
Smart Location Database - Download
The Smart Location Database (SLD) summarizes over 80 demographic, built environment, transit service, and destination accessibility attributes for every census block group in the United States. Future updates to the SLD will include additional attributes which summarize the relative location efficiency of a block group when compared to other block groups within the same metropolitan region. EPA also plans to periodically update attributes and add new attributes to reflect latest available data. A log of SLD updates is included in the SLD User Guide. See the user guide for a full description of data sources, data currency, and known limitations: https://edg.epa.gov/data/Public/OP/SLD/SLD_userguide.pdf
Real-time, interactive animation of deformable two- and three-dimensional objects
Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.
2003-06-03
A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
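The flavor of the update can be shown on the smallest possible case: one backward (implicit) Euler step for a single undamped mass-spring, solved in closed form. The patented method instead solves a modified implicit system over all mass points of the object and then applies the angular-momentum-preserving correction; this sketch only illustrates why the implicit step is attractive.

```python
# One implicit (backward) Euler step for a single mass on a spring:
#   v' = v + dt * (-k/m) * x'      (force evaluated at the NEW state)
#   x' = x + dt * v'
# Substituting the second equation into the first gives the closed
# form used below.
def implicit_euler_step(x, v, k, m, dt):
    """Return the updated (position, velocity) of the mass point."""
    a = dt * dt * k / m
    v_new = (v - dt * (k / m) * x) / (1.0 + a)
    x_new = x + dt * v_new
    return x_new, v_new

# a time step this large would make explicit Euler diverge; the
# implicit step stays stable, at the price of numerical damping
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = implicit_euler_step(x, v, k=100.0, m=1.0, dt=0.1)
print(abs(x) < 1e-3)   # the oscillation has been damped toward rest
```

For a full mass-spring mesh, the same substitution yields a sparse linear system in the new velocities, solved once per frame; that, plus the momentum correction, is what enables real-time rates.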
Toward a Neural Basis of Music Perception – A Review and Updated Model
Koelsch, Stefan
2011-01-01
Music perception involves acoustic analysis, auditory memory, auditory scene analysis, processing of interval relations, of musical syntax and semantics, and activation of (pre)motor representations of actions. Moreover, music perception potentially elicits emotions, thus giving rise to the modulation of emotional effector systems such as the subjective feeling system, the autonomic nervous system, the hormonal, and the immune system. Building on a previous article (Koelsch and Siebel, 2005), this review presents an updated model of music perception and its neural correlates. The article describes processes involved in music perception, and reports EEG and fMRI studies that inform about the time course of these processes, as well as about where in the brain these processes might be located. PMID:21713060
Program Updates - San Antonio River Basin
This page will house updates for this urban waters partnership location. As projects progress, status updates can be posted here to reflect the ongoing work by partners in San Antonio working on the San Antonio River Basin.
47 CFR 64.703 - Consumer information.
Code of Federal Regulations, 2010 CFR
2010-10-01
... telephone location; and (4) The name and address of the Consumer Information Bureau of the Commission...). (c) Updating of postings. The posting required by this section shall be updated as soon as... location, but no later than 30 days following such change. This requirement may be satisfied by applying to...
75 FR 55277 - Outer Continental Shelf Air Regulations; Consistency Update for California
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-10
... control air pollution from OCS sources located within 25 miles of States' seaward boundaries that are the... located within 25 miles of States' seaward boundaries must be updated periodically to remain consistent... FR 67845), EPA proposed to incorporate various South Coast AQMD air pollution control requirements...
Modeling Human Behavior at a Large Scale
2012-01-01
A user posts an update from his phone: he writes that he has a fever and feels awful. Since Joe has a public Twitter profile, we know who some of his friends are. Is he a vector of a dangerous disease, i.e., a "Typhoid Mary"? What is the interaction between friendship, location, and co-location in the spread of disease? Example posts include: "…some rest. I have nausea, headache, is tired, freezing & now have I got fever. Good Night! :-*" and "It hurts to breathe, swallow, cough or yawn. I must be…"
The rotational dynamics of Titan from Cassini RADAR images
NASA Astrophysics Data System (ADS)
Meriggiola, Rachele; Iess, Luciano; Stiles, Bryan W.; Lunine, Jonathan I.; Mitri, Giuseppe
2016-09-01
Between 2004 and 2009 the RADAR instrument of the Cassini mission provided 31 SAR images of Titan. We tracked the position of 160 surface landmarks as a function of time in order to monitor the rotational dynamics of Titan. We generated and processed RADAR observables using a least squares fit to determine the updated values of the rotational parameters. We provide a new rotational model of Titan, which includes updated values for spin pole location, spin rate, precession and nutation terms. The estimated pole location is compatible with the occupancy of a Cassini state 1. We found a synchronous value of the spin rate (22.57693 deg/day), compatible at a 3-σ level with IAU predictions. The estimated obliquity is equal to 0.31°, incompatible with the assumption of a rigid body with fully-damped pole and a moment of inertia factor of 0.34, as determined by gravity measurements.
Continuous Improvement of a Groundwater Model over a 20-Year Period: Lessons Learned.
Andersen, Peter F; Ross, James L; Fenske, Jon P
2018-04-17
Groundwater models developed for specific sites generally become obsolete within a few years due to changes in: (1) modeling technology; (2) site/project personnel; (3) project funding; and (4) modeling objectives. Consequently, new models are sometimes developed for the same sites using the latest technology and data, but without potential knowledge gained from the prior models. When it occurs, this practice is particularly problematic because, although technology, data, and observed conditions change, development of the new numerical model may not consider the conceptual model's underpinnings. As a contrary situation, we present the unique case of a numerical flow and trichloroethylene (TCE) transport model that was first developed in 1993 and since revised and updated annually by the same personnel. The updates are prompted by an increase in the amount of data, exposure to a wider range of hydrologic conditions over increasingly longer timeframes, technological advances, evolving modeling objectives, and revised modeling methodologies. The history of updates shows smooth, incremental changes in the conceptual model and modeled aquifer parameters that result from both increase and decrease in complexity. Myriad modeling objectives have included demonstrating the ineffectiveness of a groundwater extraction/injection system, evaluating potential TCE degradation, locating new monitoring points, and predicting likelihood of exceedance of groundwater standards. The application emphasizes an original tenet of successful groundwater modeling: iterative adjustment of the conceptual model based on observations of actual vs. model response. © 2018, National Ground Water Association.
Sakaki, Michiko; Niki, Kazuhisa; Mather, Mara
2011-01-01
In life, we must often learn new associations to people, places, or things we already know. The current functional magnetic resonance imaging study investigated the neural mechanisms underlying emotional memory updating. Nineteen participants first viewed negative and neutral pictures and learned associations between those pictures and other neutral stimuli, such as neutral objects and encoding tasks. This initial learning phase was followed by a memory updating phase, during which participants learned picture-location associations for old pictures (i.e., pictures previously associated with other neutral stimuli) and new pictures (i.e., pictures not seen in the first phase). There was greater frontopolar/orbitofrontal (OFC) activity when people learned picture-location associations for old negative pictures than for new negative pictures, but frontopolar OFC activity did not significantly differ during learning locations of old versus new neutral pictures. In addition, frontopolar activity was more negatively correlated with the amygdala when participants learned picture-location associations for old negative pictures than for new negative or old neutral pictures. Past studies revealed that the frontopolar OFC allows for updating the affective values of stimuli in reversal learning or extinction of conditioning (e.g., Izquierdo & Murray, 2005); our findings suggest that it plays a more general role in updating associations to emotional stimuli. PMID:21568639
75 FR 36294 - Correspondence With the United States Patent and Trademark Office
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-25
... the Solicitor). The Office is also updating the physical location address for the Public Search Room... Solicitor, by specifically adding the Office of the Solicitor as the addressee. Public Search Room: The physical address for the Public Search Room is being updated to reflect that it is located at the Office's...
Walking through doorways causes forgetting: Further explorations.
Radvansky, Gabriel A; Krawietz, Sabine A; Tamplin, Andrea K
2011-08-01
Previous research using virtual environments has revealed a location-updating effect in which there is a decline in memory when people move from one location to another. Here we assess whether this effect reflects the influence of the experienced context, in terms of the degree of immersion of a person in an environment, as suggested by some work in spatial cognition, or by a shift in context. In Experiment 1, the degree of immersion was reduced by using smaller displays. In comparison, in Experiment 2 an actual, rather than a virtual, environment was used, to maximize immersion. Location-updating effects were observed under both of these conditions. In Experiment 3, the original encoding context was reinstated by having a person return to the original room in which objects were first encoded. However, inconsistent with an encoding specificity account, memory did not improve by reinstating this context. Finally, we did a further analysis of the results of this and previous experiments to assess the differential influence of foregrounding and retrieval interference. Overall, these data are interpreted in terms of the event horizon model of event cognition and memory.
Role of Alpha-Band Oscillations in Spatial Updating across Whole Body Motion
Gutteling, Tjerk P.; Medendorp, W. P.
2016-01-01
When moving around in the world, we have to keep track of important locations in our surroundings. In this process, called spatial updating, we must estimate our body motion and correct representations of memorized spatial locations in accordance with this motion. While the behavioral characteristics of spatial updating across whole body motion have been studied in detail, its neural implementation lacks detailed study. Here we use electroencephalography (EEG) to distinguish various spectral components of this process. Subjects gazed at a central body-fixed point in otherwise complete darkness, while a target was briefly flashed, either left or right from this point. Subjects had to remember the location of this target as either moving along with the body or remaining fixed in the world while being translated sideways on a passive motion platform. After the motion, subjects had to indicate the remembered target location in the instructed reference frame using a mouse response. While the body motion, as detected by the vestibular system, should not affect the representation of body-fixed targets, it should interact with the representation of a world-centered target to update its location relative to the body. We show that the initial presentation of the visual target induced a reduction of alpha band power in contralateral parieto-occipital areas, which evolved to a sustained increase during the subsequent memory period. Motion of the body led to a reduction of alpha band power in central parietal areas extending to lateral parieto-temporal areas, irrespective of whether the targets had to be memorized relative to world or body. When a world-fixed target was updated, its internal representation shifted hemispheres only when subjects' behavioral responses suggested an update across the body midline.
Our results suggest that parietal cortex is involved in both self-motion estimation and the selective application of this motion information to maintaining target locations as fixed in the world or fixed to the body. PMID:27199882
McKerrow, Alexa; Davidson, A.; Earnhardt, Todd; Benson, Abigail L.; Toth, Charles; Holm, Thomas; Jutz, Boris
2014-01-01
Over the past decade, great progress has been made to develop national extent land cover mapping products to address natural resource issues. One of the core products of the GAP Program is range-wide species distribution models for nearly 2000 terrestrial vertebrate species in the U.S. We rely on deductive modeling of habitat affinities using these products to create models of habitat availability. That approach requires that we have a thematically rich and ecologically meaningful map legend to support the modeling effort. In this work, we tested the integration of the Multi-Resolution Land Characteristics Consortium's National Land Cover Database 2011 and LANDFIRE's Disturbance Products to update the 2001 National GAP Vegetation Dataset to reflect 2011 conditions. The revised product can then be used to update the species models. We tested the update approach in three geographic areas (Northeast, Southeast, and Interior Northwest). We used the NLCD product to identify areas where the cover type mapped in 2011 differed from that in the 2001 land cover map. We used Google Earth and ArcGIS base maps as reference imagery to label areas identified as "changed" with the appropriate class from our map legend. Areas mapped as urban or water in the 2011 NLCD map that were mapped differently in the 2001 GAP map were accepted without further validation and recoded to the corresponding GAP class. We used LANDFIRE's Disturbance Products to identify changes resulting from recent disturbance and to inform the reassignment of areas to their updated thematic label. We ran species habitat models for three species: Lewis's Woodpecker (Melanerpes lewis), the White-tailed Jackrabbit (Lepus townsendii), and the Brown-headed Nuthatch (Sitta pusilla). For each of the three vertebrate species we found important differences in the amount and location of suitable habitat between the 2001 and 2011 habitat maps.
Specifically, Brown-headed Nuthatch habitat in 2011 had declined by 14% relative to the 2001 modeled habitat, whereas Lewis's Woodpecker habitat increased by 4%. The White-tailed Jackrabbit had a net change of −1% (an 11% decline offset by a 10% gain). For that species, the locally important transitions were the opening of forest due to burning and the regeneration of shrubs following harvest. In the Southeast, updates related to timber management and urbanization are locally important.
ERIC Educational Resources Information Center
Crane, Laura; Benachour, Phillip
2013-01-01
The paper describes the analysis of user location and time stamp information automatically logged when students receive and interact with electronic updates from the University's virtual learning environment. The electronic updates are sent to students' mobile devices using RSS feeds. The mobile reception of such information can be received in…
Flight test derived heating math models for critical locations on the orbiter during reentry
NASA Technical Reports Server (NTRS)
Hertzler, E. K.; Phillips, P. W.
1983-01-01
An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.
Stone, Mandy L.; Graham, Jennifer L.; Gatotho, Jackline W.
2013-01-01
Cheney Reservoir, located in south-central Kansas, is one of the primary water supplies for the city of Wichita, Kansas. The U.S. Geological Survey has operated a continuous real-time water-quality monitoring station in Cheney Reservoir since 2001; continuously measured physicochemical properties include specific conductance, pH, water temperature, dissolved oxygen, turbidity, fluorescence (wavelength range 650 to 700 nanometers; estimate of total chlorophyll), and reservoir elevation. Discrete water-quality samples were collected during 2001 through 2009 and analyzed for sediment, nutrients, taste-and-odor compounds, cyanotoxins, phytoplankton community composition, actinomycetes bacteria, and other water-quality measures. Regression models were developed to establish relations between discretely sampled constituent concentrations and continuously measured physicochemical properties to compute concentrations of constituents that are not easily measured in real time. The water-quality information in this report is important to the city of Wichita because it allows quantification and characterization of potential constituents of concern in Cheney Reservoir. This report updates linear regression models published in 2006 that were based on data collected during 2001 through 2003. The update uses discrete and continuous data collected during May 2001 through December 2009. Updated models to compute dissolved solids, sodium, chloride, and suspended solids were similar to previously published models. However, several other updated models changed substantially from previously published models. In addition to updating relations that were previously developed, models also were developed for four new constituents, including magnesium, dissolved phosphorus, actinomycetes bacteria, and the cyanotoxin microcystin. 
In addition, a conversion factor of 0.74 was established to convert the Yellow Springs Instruments (YSI) model 6026 turbidity sensor measurements to the newer YSI model 6136 sensor at the Cheney Reservoir site. Because a high percentage of geosmin and microcystin data were below analytical detection thresholds (censored data), multiple logistic regression was used to develop models that best explained the probability of geosmin and microcystin concentrations exceeding relevant thresholds. The geosmin and microcystin models are particularly important because geosmin is a taste-and-odor compound and microcystin is a cyanotoxin.
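The multiple logistic regression approach described above, relating a continuously measured property to the probability that a censored constituent exceeds a threshold, can be sketched as follows. This is a hedged illustration: the predictor, coefficients, and all data values are synthetic, not the published Cheney Reservoir models.

```python
import numpy as np

# Synthetic sketch: probability that geosmin exceeds its detection/relevance
# threshold as a function of one continuously measured property (e.g.
# chlorophyll fluorescence). All numbers are invented for illustration.
rng = np.random.default_rng(0)
x = rng.uniform(0, 40, 200)                         # sensor measurement
p_true = 1 / (1 + np.exp(-(0.25 * x - 5)))          # true exceedance probability
y = (rng.uniform(size=200) < p_true).astype(float)  # 1 = above threshold

X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)
for _ in range(25):                                 # Newton-Raphson for the logistic MLE
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

def prob_exceed(measurement):
    """Estimated probability the constituent exceeds its threshold."""
    return 1 / (1 + np.exp(-(beta[0] + beta[1] * measurement)))
```

In practice such a model would be fit per constituent, with the predictor set chosen from the continuously monitored physicochemical properties.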
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades.
Boon, Paul J; Belopolsky, Artem V; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in the updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and saccade target revealed that target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together, the results suggest that we rely exclusively neither on extraretinal nor on retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends both on the time available to integrate these signals and on the distance between the saccade target and the remembered location.
Earthquake Hazard and Risk in New Zealand
NASA Astrophysics Data System (ADS)
Apel, E. V.; Nyst, M.; Fitzenz, D. D.; Molas, G.
2014-12-01
To quantify risk in New Zealand we examine the impact of updating the seismic hazard model. The previous RMS New Zealand hazard model is based on the 2002 probabilistic seismic hazard maps for New Zealand (Stirling et al., 2002). The 2015 RMS model, based on Stirling et al. (2012), will update several key source parameters. These updates include: implementation of a new set of crustal faults including multi-segment ruptures, updating of the subduction zone geometry and recurrence rate, and implementation of new background rates and a robust methodology for modeling background earthquake sources. The number of crustal faults has increased by over 200 from the 2002 model to the 2012 model, which now includes over 500 individual fault sources. This includes the addition of many offshore faults in northern, east-central, and southwest regions. We also use recent data to update the source geometry of the Hikurangi subduction zone (Wallace, 2009; Williams et al., 2013). We compare hazard changes in our updated model with those from the previous version. Changes between the two maps are discussed as well as the drivers for these changes. We examine the impact the hazard model changes have on New Zealand earthquake risk. Considered risk metrics include average annual loss, an annualized expected loss level used by insurers to determine the costs of earthquake insurance (and premium levels), and the loss exceedance probability curve used by insurers to address their solvency and manage their portfolio risk. We analyze risk profile changes in areas with large population density and for structures of economic and financial importance. New Zealand is interesting in that the city with the majority of the risk exposure in the country (Auckland) lies in the region of lowest hazard, where little information is available about the location of faults and distributed seismicity is modeled by averaged Mw-frequency relationships on area sources.
Thus, small changes to the background rates can have a large impact on the risk profile for the area. Wellington, another area of high exposure, is particularly sensitive to how the Hikurangi subduction zone and the Wellington fault are modeled. Minor changes to these sources have substantial impacts on the risk profile of the city and the country at large.
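The two risk metrics named above can be computed directly from a catalogue of simulated annual losses. The sketch below uses an invented loss distribution purely to show the definitions; it is not the RMS model or its data.

```python
import numpy as np

# Synthetic catalogue of simulated annual portfolio losses (values invented).
rng = np.random.default_rng(1)
annual_losses = rng.lognormal(mean=10, sigma=2, size=100_000)  # loss per simulated year

# Average annual loss (AAL): the expected loss per year over the catalogue.
aal = annual_losses.mean()

def exceedance_probability(threshold):
    """Point on the loss exceedance probability (EP) curve:
    fraction of simulated years whose loss exceeds the threshold."""
    return np.mean(annual_losses > threshold)
```

Insurers read premiums off the AAL and solvency levels off the EP curve (e.g., the loss exceeded with 0.4% annual probability, the "1-in-250-year" loss).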
Cortical Coupling Reflects Bayesian Belief Updating in the Deployment of Spatial Attention.
Vossel, Simone; Mathys, Christoph; Stephan, Klaas E; Friston, Karl J
2015-08-19
The deployment of visuospatial attention and the programming of saccades are governed by the inferred likelihood of events. In the present study, we combined computational modeling of psychophysical data with fMRI to characterize the computational and neural mechanisms underlying this flexible attentional control. Sixteen healthy human subjects performed a modified version of Posner's location-cueing paradigm in which the percentage of cue validity varied in time and the targets required saccadic responses. Trialwise estimates of the certainty (precision) of the prediction that the target would appear at the cued location were derived from a hierarchical Bayesian model fitted to individual trialwise saccadic response speeds. Trial-specific model parameters then entered analyses of fMRI data as parametric regressors. Moreover, dynamic causal modeling (DCM) was performed to identify the most likely functional architecture of the attentional reorienting network and its modulation by (Bayes-optimal) precision-dependent attention. While the frontal eye fields (FEFs), intraparietal sulcus, and temporoparietal junction (TPJ) of both hemispheres showed higher activity on invalid relative to valid trials, reorienting responses in right FEF, TPJ, and the putamen were significantly modulated by precision-dependent attention. Our DCM results suggested that the precision of predictability underlies the attentional modulation of the coupling of TPJ with FEF and the putamen. Our results shed new light on the computational architecture and neuronal network dynamics underlying the context-sensitive deployment of visuospatial attention. Spatial attention and its neural correlates in the human brain have been studied extensively with the help of fMRI and cueing paradigms in which the location of targets is pre-cued on a trial-by-trial basis. 
One aspect that has so far been neglected concerns the question of how the brain forms attentional expectancies when no a priori probability information is available but must be inferred from observations. This study elucidates the computational and neural mechanisms by which probabilistic inference governs attentional deployment. Our results show that Bayesian belief updating explains changes in cortical connectivity, in that directional influences from the temporoparietal junction on the frontal eye fields and the putamen were modulated by (Bayes-optimal) updates. Copyright © 2015 Vossel et al.
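The trialwise inference of cue validity can be illustrated with a much simpler stand-in than the hierarchical model used in the study: a flat Beta-Bernoulli learner whose precision (inverse variance of the belief) grows as evidence accumulates. The class below is a hypothetical sketch of that idea, not the authors' model.

```python
# Minimal Beta-Bernoulli sketch of belief updating about cue validity.
# The study's hierarchical Bayesian model additionally tracks volatility;
# this flat version only conveys the precision-weighting intuition.
class CueValidityBelief:
    def __init__(self):
        self.a = 1.0  # pseudo-count of valid trials (uniform prior)
        self.b = 1.0  # pseudo-count of invalid trials

    def update(self, cue_was_valid):
        if cue_was_valid:
            self.a += 1
        else:
            self.b += 1

    @property
    def prediction(self):
        """P(target appears at the cued location)."""
        return self.a / (self.a + self.b)

    @property
    def precision(self):
        """Inverse variance of the Beta belief; drives attentional weighting."""
        n = self.a + self.b
        var = self.a * self.b / (n ** 2 * (n + 1))
        return 1 / var
```

A precision-dependent attention signal, in this simplified view, is just the `precision` of the prediction at each trial.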
Lack of Set Size Effects in Spatial Updating: Evidence for Offline Updating
ERIC Educational Resources Information Center
Hodgson, Eric; Waller, David
2006-01-01
Four experiments required participants to keep track of the locations of (i.e., update) 1, 2, 3, 4, 6, 8, 10, or 15 target objects after rotating. Across all conditions, updating was unaffected by set size. Although some traditional set size effects (i.e., a linear increase of latency with memory load) were observed under some conditions, these…
ERIC Educational Resources Information Center
Easton, Randolph D.; Bentzen, Billie Louise
1999-01-01
A study, including research and practice notes by various authors, investigated whether extended training in an acoustically rich environment could enhance the spatial updating ability of 12 adults with congenital blindness. After training, the adults' distance perception from a home-base location and novel locations was superior to a sighted…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-11
... paragraph for part 730 of the EAR. Updating Address and Telephone Number Recently, BIS's Western Regional Office moved to a new location. This rule revises Sec. 730.8(c) of the EAR to include the address and telephone number of the new location. Consolidation of Information Collections Supplement No. 1 to part 730...
76 FR 55273 - Federal Travel Regulation; Per Diem, Miscellaneous Amendments
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... review of a location's per diem rate. 8. Section 301-11.29--Updating the Web address for state tax... Federal Travel Regulation (FTR) by changing, updating, and clarifying various provisions of Chapters 300... travel during Presidentially- Declared Disasters; and updating other miscellaneous provisions. DATES...
Poppenga, Sandra K.; Gesch, Dean B.; Worstell, Bruce B.
2013-01-01
The 1:24,000-scale high-resolution National Hydrography Dataset (NHD) mapped hydrography flow lines require regular updating because land surface conditions that affect surface channel drainage change over time. Historically, NHD flow lines were created by digitizing surface water information from aerial photography and paper maps. Using these same methods to update nationwide NHD flow lines is costly and inefficient; furthermore, these methods result in hydrography that lacks the horizontal and vertical accuracy needed for fully integrated datasets useful for mapping and scientific investigations. Effective methods for improving mapped hydrography employ change detection analysis of surface channels derived from light detection and ranging (LiDAR) digital elevation models (DEMs) and NHD flow lines. In this article, we describe the usefulness of surface channels derived from LiDAR DEMs for hydrography change detection to derive spatially accurate and time-relevant mapped hydrography. The methods employ analyses of horizontal and vertical differences between LiDAR-derived surface channels and NHD flow lines to define candidate locations of hydrography change. These methods alleviate the need to analyze and update the nationwide NHD for time relevant hydrography, and provide an avenue for updating the dataset where change has occurred.
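The horizontal-difference test described above can be sketched as a simple nearest-distance comparison between NHD flow-line vertices and a LiDAR-derived surface channel. The function, coordinates, and the 30 m tolerance below are illustrative assumptions, not the study's actual procedure or parameters.

```python
import math

# Hypothetical sketch: flag an NHD flow-line vertex as a candidate "change"
# location when it lies farther than a tolerance from every vertex of the
# LiDAR-derived surface channel. Real workflows compare full line geometries
# and also test vertical (elevation) differences.
def candidate_changes(nhd_vertices, lidar_vertices, tol_m=30.0):
    flagged = []
    for vx, vy in nhd_vertices:
        nearest = min(math.hypot(vx - cx, vy - cy) for cx, cy in lidar_vertices)
        if nearest > tol_m:
            flagged.append((vx, vy, nearest))
    return flagged
```

Flagged vertices would then be reviewed against reference imagery before any flow line is actually edited.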
2010-09-01
raytracing and travel-time calculation in 3D Earth models, such as the finite-difference eikonal method (e.g., Podvin and Lecomte, 1991), fast...by Reiter and Rodi (2009) in constructing JWM. Two teleseismic data sets were considered, both extracted from the EHB database (Engdahl et al...extracted from the updated EHB database distributed by the International Seismological Centre (http://www.isc.ac.uk/EHB/index.html). The new database
Model Analyst’s Toolkit User Guide, Version 7.1.0
2015-08-01
Help > About) Environment details ( operating system ) metronome.log file, located in your MAT 7.1.0 installation folder Any log file that...requirements to run the Model Analyst’s Toolkit: Windows XP operating system (or higher) with Service Pack 2 and all critical Windows updates installed...application icon on your desktop Create a Quick Launch icon – Creates a MAT application icon on the taskbar for operating systems released
Structural Health Monitoring of Large Structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Bartkowicz, Theodore J.; Smith, Suzanne Weaver; Zimmerman, David C.
1994-01-01
This paper describes a damage detection and health monitoring method that was developed for large space structures using on-orbit modal identification. After evaluating several existing model refinement and model reduction/expansion techniques, a new approach was developed to identify the location and extent of structural damage with a limited number of measurements. A general area of structural damage is first identified and, subsequently, a specific damaged structural component is located. This approach takes advantage of two different model refinement methods (optimal-update and design sensitivity) and two different model size matching methods (model reduction and eigenvector expansion). Performance of the proposed damage detection approach was demonstrated with test data from two different laboratory truss structures. This space technology can also be applied to structural inspection of aircraft, offshore platforms, oil tankers, bridges, and buildings. In addition, its applications to model refinement will improve the design of structural systems such as automobiles and electronic packaging.
NASA Technical Reports Server (NTRS)
Smith, Suzanne Weaver; Beattie, Christopher A.
1991-01-01
On-orbit testing of a large space structure will be required to complete the certification of any mathematical model for the structure's dynamic response. The process of establishing a mathematical model that matches measured structure response is referred to as model correlation. Most model correlation approaches rely on an identification technique to determine structural characteristics from the measurements of the structure response. This problem is approached with one particular class of identification techniques - matrix adjustment methods - which use measured data to produce an optimal update of the structure property matrix, often the stiffness matrix. New methods were developed for identification to handle problems of the size and complexity expected for large space structures. Further development and refinement of these secant-method identification algorithms were undertaken. Also, evaluation of these techniques as an approach for model correlation and damage location was initiated.
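The matrix-adjustment idea can be illustrated with one elementary building block: the minimum-Frobenius-norm symmetric correction dK that makes an analytical stiffness matrix reproduce a single measured mode. This is a generic textbook-style sketch on an invented 3-DOF system, not the secant-method algorithms developed in the paper.

```python
import numpy as np

# Minimum-norm symmetric stiffness update enforcing (K + dK) phi = lam * M phi
# for one identified mode (phi, lam). Toy matrices stand in for a large
# space-structure model.
def symmetric_stiffness_update(K, M, phi, lam):
    r = lam * (M @ phi) - K @ phi            # residual force of the measured mode
    d = phi @ phi
    dK = (np.outer(r, phi) + np.outer(phi, r)) / d \
         - (phi @ r) * np.outer(phi, phi) / d ** 2
    return dK                                # symmetric and satisfies dK @ phi = r
```

Production methods additionally preserve sparsity/connectivity and handle many modes at once, which is where the size and complexity issues noted above arise.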
Experimental Test of Spatial Updating Models for Monkey Eye-Head Gaze Shifts
Van Grootel, Tom J.; Van der Willigen, Robert F.; Van Opstal, A. John
2012-01-01
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements. PMID:23118883
Scale-adaptive compressive tracking with feature integration
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin
2016-05-01
Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve the accurate scale estimation, which can additionally give a prior location of the target. Furthermore, by the high efficiency of data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. At last, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
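The core machinery named above (a data-independent sparse random projection feeding a naive Bayes classifier with a conservative update) can be sketched as follows. Dimensions, the sparsity pattern, and the learning rate are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of the compressive-tracking appearance model: compress high-dimensional
# patch features with a sparse random matrix, score candidates with a Gaussian
# naive Bayes log-likelihood ratio, and update parameters conservatively.
rng = np.random.default_rng(2)
DIM, COMPRESSED = 1024, 50

# Sparse, data-independent measurement matrix with entries in {-1, 0, +1}.
R = rng.choice([-1.0, 0.0, 1.0], size=(COMPRESSED, DIM), p=[1 / 6, 2 / 3, 1 / 6])

def compress(patch_features):
    return R @ patch_features

def naive_bayes_score(v, mu_pos, sig_pos, mu_neg, sig_neg):
    """Log-likelihood ratio of target vs. background, features independent."""
    def log_gauss(v, mu, sig):
        return -0.5 * np.log(2 * np.pi * sig ** 2) - (v - mu) ** 2 / (2 * sig ** 2)
    return np.sum(log_gauss(v, mu_pos, sig_pos) - log_gauss(v, mu_neg, sig_neg))

def conservative_update(mu_old, sig_old, mu_new, sig_new, lr=0.85):
    """Slow running update so one bad frame cannot corrupt the classifier."""
    mu = lr * mu_old + (1 - lr) * mu_new
    sig = np.sqrt(lr * sig_old ** 2 + (1 - lr) * sig_new ** 2
                  + lr * (1 - lr) * (mu_old - mu_new) ** 2)
    return mu, sig
```

A scale-adaptive tracker would additionally rescale the sampled patches using the keypoint-based scale estimate before compression.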
Ground Vibration Test of the Aerostructure Test Wing 2
NASA Technical Reports Server (NTRS)
Herrera, Claudia; Moholt, Matthew
2009-01-01
The Aerostructures Test Wing (ATW) was developed to test unique concepts for flutter prediction and control synthesis. A follow-on to the successful ATW, denoted ATW2, was fabricated as a test bed to validate a variety of instrumentation in flight and to collect data for development of advanced signal processing algorithms for flutter prediction and aviation safety. As a means to estimate flutter speed, a ground vibration test (GVT) was performed. The results of a GVT are typically utilized to update structural dynamics finite element (FE) models used for flutter analysis. In this study, two GVT methodologies were explored to determine which nodes provide the best sensor locations: (i) effective independence and (ii) kinetic energy sorting algorithms. For measurement, ten and twenty sensors were used for three and ten target test modes. A total of six accelerometer configurations measured frequencies and mode shapes. This included locations used in the original ATW GVT. Moreover, an optical measurement system was used to acquire data without mass effects added by conventional sensors. A considerable frequency shift was observed in comparing the data from the accelerometers to the optical data. The optical data provided robust measurements for use in the ATW2 finite element model update.
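The effective-independence method mentioned above can be sketched as a backward-elimination loop: starting from all candidate locations, repeatedly delete the sensor that contributes least to the linear independence of the target mode shapes. The mode-shape matrix below is random, purely for illustration.

```python
import numpy as np

# Effective-independence (EfI) sensor placement sketch. Phi holds the target
# mode shapes sampled at all candidate sensor locations.
def effective_independence(Phi, n_sensors):
    """Phi: (n_candidates x n_modes) mode-shape matrix.
    Returns indices of the retained sensor locations."""
    keep = list(range(Phi.shape[0]))
    while len(keep) > n_sensors:
        P = Phi[keep]
        # EfI value of each candidate: its leverage in the Fisher information
        # matrix P.T @ P (diagonal of the projection matrix).
        E = np.einsum('ij,ij->i', P @ np.linalg.inv(P.T @ P), P)
        keep.pop(int(np.argmin(E)))          # drop the least informative sensor
    return keep
```

Kinetic-energy sorting, the second methodology, would instead rank candidates by mass-weighted modal displacement; both aim at well-conditioned mode-shape identification from few sensors.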
Hedge, Craig; Oberauer, Klaus; Leonards, Ute
2015-11-01
We examined the relationship between the attentional selection of perceptual information and of information in working memory (WM) through four experiments, using a spatial WM-updating task. Participants remembered the locations of two objects in a matrix and worked through a sequence of updating operations, each mentally shifting one dot to a new location according to an arrow cue. Repeatedly updating the same object in two successive steps is typically faster than switching to the other object; this object switch cost reflects the shifting of attention in WM. In Experiment 1, the arrows were presented in random peripheral locations, drawing perceptual attention away from the selected object in WM. This manipulation did not eliminate the object switch cost, indicating that the mechanisms of perceptual selection do not underlie selection in WM. Experiments 2a and 2b corroborated the independence of selection observed in Experiment 1, but showed a benefit to reaction times when the placement of the arrow cue was aligned with the locations of relevant objects in WM. Experiment 2c showed that the same benefit also occurs when participants are not able to mark an updating location through eye fixations. Together, these data can be accounted for by a framework in which perceptual selection and selection in WM are separate mechanisms that interact through a shared spatial priority map.
Perception of 3-D location based on vision, touch, and extended touch
Giudice, Nicholas A.; Klatzky, Roberta L.; Bennett, Christopher R.; Loomis, Jack M.
2012-01-01
Perception of the near environment gives rise to spatial images in working memory that continue to represent the spatial layout even after cessation of sensory input. As the observer moves, these spatial images are continuously updated. This research is concerned with (1) whether spatial images of targets are formed when they are sensed using extended touch (i.e., using a probe to extend the reach of the arm) and (2) the accuracy with which such targets are perceived. In Experiment 1, participants perceived the 3-D locations of individual targets from a fixed origin and were then tested with an updating task involving blindfolded walking followed by placement of the hand at the remembered target location. Twenty-four target locations, representing all combinations of two distances, two heights, and six azimuths, were perceived by vision or by blindfolded exploration with the bare hand, a 1-m probe, or a 2-m probe. Systematic errors in azimuth were observed for all targets, reflecting errors in representing the target locations and updating. Overall, updating after visual perception was best, but the quantitative differences between conditions were small. Experiment 2 demonstrated that auditory information signifying contact with the target was not a factor. Overall, the results indicate that 3-D spatial images can be formed of targets sensed by extended touch and that perception by extended touch, even out to 1.75 m, is surprisingly accurate. PMID:23070234
NASA Astrophysics Data System (ADS)
Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas
2018-07-01
This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the quantified model uncertainties using dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values are studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests.
Moreover, it is shown that including prediction error bias in the updating process instead of the commonly used zero-mean error function can significantly reduce the prediction uncertainties.
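The basic ingredients of the framework (calibrating a model parameter against identified modal data and estimating the spread of a Gaussian error function) can be conveyed with a deliberately tiny stand-in. The 2-DOF "FE model", measured frequencies, and grid search below are all invented; the actual framework is hierarchical and far richer.

```python
import numpy as np

# Toy sketch: calibrate a stiffness scaling factor theta of a 2-DOF model
# against identified natural frequencies, then estimate the standard
# deviation of the Gaussian error function (the prediction uncertainty).
def model_freqs(theta):
    K = theta * np.array([[2.0, -1.0], [-1.0, 2.0]])  # scaled stiffness
    M = np.eye(2)                                      # unit masses
    lam = np.linalg.eigvalsh(K)                        # eigenvalues = omega^2
    return np.sqrt(lam) / (2 * np.pi)                  # frequencies in Hz

identified = np.array([0.16, 0.28])        # "measured" frequencies (synthetic)
grid = np.linspace(0.5, 2.0, 301)
sse = [float((identified - model_freqs(t)) @ (identified - model_freqs(t)))
       for t in grid]
theta_hat = grid[int(np.argmin(sse))]                  # calibrated parameter
sigma_hat = np.sqrt(min(sse) / len(identified))        # error-function std
```

In the full framework, `sigma_hat` (and an error bias) would be estimated jointly with the structural parameters and then carried forward to quantify uncertainty on predictions at the damaged states.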
Kalman filter data assimilation: targeting observations and parameter estimation.
Bellsky, Thomas; Kostelich, Eric J; Mahalov, Alex
2014-06-01
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
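Targeting by largest ensemble variance can be sketched with a perturbed-observations ensemble Kalman update on a small static state. The 5-variable state, ensemble size, and noise levels below are illustrative; the paper uses a chaotic dynamical model and the LETKF.

```python
import numpy as np

# Hedged sketch: observe the state component with the largest ensemble
# variance, then apply a standard perturbed-observations EnKF update there.
rng = np.random.default_rng(3)
n, n_ens, obs_err = 5, 50, 0.1

truth = rng.normal(size=n)
ensemble = truth[None, :] + rng.normal(scale=[0.2, 0.2, 1.5, 0.2, 0.2],
                                       size=(n_ens, n))

target = int(np.argmax(ensemble.var(axis=0)))       # targeted observation site
y = truth[target] + rng.normal(scale=obs_err)       # noisy observation there

Hx = ensemble[:, target]                            # ensemble at the observed site
P_hh = Hx.var(ddof=1)
gain = np.cov(ensemble.T, ddof=1)[:, target] / (P_hh + obs_err ** 2)
perturbed = y + rng.normal(scale=obs_err, size=n_ens)
analysis = ensemble + np.outer(perturbed - Hx, gain)
```

Observing where the ensemble spread is largest collapses the most uncertainty per observation, which is the intuition behind the skill gain reported above.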
Update of potency factors for asbestos-related lung cancer and mesothelioma.
Berman, D Wayne; Crump, Kenny S
2008-01-01
The most recent update of the U.S. Environmental Protection Agency (EPA) health assessment document for asbestos (Nicholson, 1986, referred to as "the EPA 1986 update") is now 20 years old. That document contains estimates of "potency factors" for asbestos in causing lung cancer (K(L)'s) and mesothelioma (K(M)'s) derived by fitting mathematical models to data from studies of occupational cohorts. The present paper provides a parallel analysis that incorporates data from studies published since the EPA 1986 update. The EPA lung cancer model assumes that the relative risk varies linearly with cumulative exposure lagged 10 years. This implies that the relative risk remains constant after 10 years from last exposure. The EPA mesothelioma model predicts that the mortality rate from mesothelioma increases linearly with the intensity of exposure and, for a given intensity, increases indefinitely after exposure ceases, approximately as the square of time since first exposure lagged 10 years. These assumptions were evaluated using raw data from cohorts where exposures were principally to chrysotile (South Carolina textile workers, Hein et al., 2007; mesothelioma only data from Quebec miners and millers, Liddell et al., 1997) and crocidolite (Wittenoom Gorge, Australia miners and millers, Berry et al., 2004) and using published data from a cohort exposed to amosite (Paterson, NJ, insulation manufacturers, Seidman et al., 1986). Although the linear EPA model generally provided a good description of exposure response for lung cancer, in some cases it did so only by estimating a large background risk relative to the comparison population. Some of these relative risks seem too large to be due to differences in smoking rates and are probably due at least in part to errors in exposure estimates. 
There was some equivocal evidence that the relative risk decreased with increasing time since last exposure in the Wittenoom cohort, but none either in the South Carolina cohort up to 50 years from last exposure or in the New Jersey cohort up to 35 years from last exposure. The mesothelioma model provided good descriptions of the observed patterns of mortality after exposure ends, with no evidence that risk increases with long times since last exposure at rates that vary from that predicted by the model (i.e., with the square of time). In particular, the model adequately described the mortality rate in Quebec chrysotile miners and millers up through >50 years from last exposure. There was statistically significant evidence in both the Wittenoom and Quebec cohorts that the exposure intensity-response is supralinear(1) rather than linear. The best-fitting models predicted that the mortality rate varies as [intensity](0.47) for Wittenoom and as [intensity](0.19) for Quebec and, in both cases, the exponent was significantly less than 1 (p< .0001). Using the EPA models, K(L)'s and K(M)'s were estimated from the three sets of raw data and also from published data covering a broader range of environments than those originally addressed in the EPA 1986 update. Uncertainty in these estimates was quantified using "uncertainty bounds" that reflect both statistical and nonstatistical uncertainties. Lung cancer potency factors (K(L)'s) were developed from 20 studies from 18 locations, compared to 13 locations covered in the EPA 1986 update. Mesothelioma potency factors (K(M)'s) were developed for 12 locations compared to four locations in the EPA 1986 update. 
Although the 4 locations used to calculate K_M in the EPA 1986 update include one location with exposures to amosite and three with exposures to mixed fiber types, the 14 K_M's derived in the present analysis also include 6 locations in which exposures were predominantly to chrysotile and 1 where exposures were only to crocidolite. The K_M's showed evidence of a trend, with the lowest K_M's obtained from cohorts exposed predominantly to chrysotile and the highest K_M's from cohorts exposed only to amphibole asbestos, with K_M's from cohorts exposed to mixed fiber types intermediate between the chrysotile and amphibole values. Despite the considerable uncertainty in the K_M estimates, the K_M from the Quebec mines and mills was clearly smaller than those from several cohorts exposed to amphibole asbestos or a mixture of amphibole asbestos and chrysotile. For lung cancer, although there is some evidence of larger K_L's from amphibole asbestos exposure, there is a good deal of dispersion in the data, and one of the largest K_L's is from the South Carolina textile mill, where exposures were almost exclusively to chrysotile. This K_L is clearly inconsistent with the K_L obtained from the cohort of Quebec chrysotile miners and millers. The K_L's and K_M's derived herein are defined in terms of concentrations of airborne fibers measured by phase-contrast microscopy (PCM), which counts only structures longer than 5 μm, thicker than about 0.25 μm, and with an aspect ratio ≥3:1. Moreover, PCM does not distinguish between asbestos and nonasbestos particles. One possible reason for the discrepancies between the K_L's and K_M's from different studies is that the category of structures included in PCM counts does not correspond closely to biological activity.
In the accompanying article (Berman and Crump, 2008), the K_L's and K_M's and related uncertainty bounds obtained in this article are paired with fiber size distributions from the literature obtained using transmission electron microscopy (TEM). The resulting database is used to define K_L's and K_M's that depend on both the size (e.g., length and width) and mineralogical type (e.g., chrysotile or crocidolite) of an asbestos structure. An analysis is conducted to determine how well different K_L and K_M definitions are able to reconcile the discrepancies observed herein among values obtained from different environments.
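The two EPA dose-response forms described above can be sketched numerically. A minimal illustration, in which the potency values passed in (`k_l`, `k_m`) and the background term `alpha` are purely hypothetical placeholders, not the study-specific estimates derived in the paper:

```python
# Hedged sketch of the two EPA asbestos dose-response models described above.
# All parameter values used here are hypothetical, not estimated potencies.

def lung_cancer_rr(cum_exposure, k_l, alpha=1.0):
    """EPA lung cancer model: relative risk is linear in cumulative
    exposure (lagged 10 years); alpha is the background relative risk."""
    return alpha + k_l * cum_exposure

def mesothelioma_rate(t_since_first, intensity, k_m, lag=10.0):
    """EPA mesothelioma model (simplified): mortality rate is linear in
    exposure intensity and grows roughly as the square of time since
    first exposure, lagged 10 years."""
    t = max(t_since_first - lag, 0.0)
    return k_m * intensity * t ** 2

# The relative risk stays constant after 10 years from last exposure
# because it depends only on lagged cumulative exposure, which no
# longer grows; the mesothelioma rate keeps rising with time squared.
```

This makes the contrast in the abstract concrete: the lung cancer risk plateaus once exposure stops, while the modeled mesothelioma rate increases indefinitely.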
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Perera, Ricardo; De Roeck, Guido
2008-06-01
This paper develops a sensitivity-based updating method to identify the damage in a tested reinforced concrete (RC) frame, modeled with a two-dimensional planar finite element (FE) model, by minimizing the discrepancies in modal frequencies and mode shapes. To reduce the number of unknown variables, a bidimensional damage (element) function is proposed, resulting in a considerable improvement in optimization performance. For damage identification, a reference FE model of the undamaged frame, divided into a few damage functions, is first obtained; a rough identification is then carried out to detect possible damage locations, which are subsequently refined with new damage functions to identify the damage accurately. From a design point of view, it would be useful to evaluate, in a simplified way, the remaining bending stiffness of cracked beam sections or segments. Hence, an RC damage model based on a static mechanism is proposed to estimate the remnant stiffness of a cracked RC beam segment. The damage model is based on the assumption that the damage effect spreads over a region and that the stiffness within the segment changes linearly. Furthermore, the stiffness reduction evaluated using this damage model is compared with the FE updating result. It is shown that the proposed bidimensional damage function is useful in producing a well-conditioned optimization problem, and that the damage model can be used for an approximate stiffness estimation of a cracked beam segment.
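The sensitivity-based updating loop can be illustrated on a toy problem. A minimal sketch, assuming a single-degree-of-freedom oscillator whose stiffness is iteratively updated to match one measured frequency; the paper's method handles many damage-function parameters and mode shapes, but the Gauss-Newton structure is the same:

```python
import math

def update_stiffness(k0, m, f_measured, iters=20, dk=1e-6):
    """Sensitivity-based (Gauss-Newton) update of stiffness k so that the
    model frequency f = sqrt(k/m)/(2*pi) matches the measured frequency."""
    k = k0
    for _ in range(iters):
        f = math.sqrt(k / m) / (2 * math.pi)
        # finite-difference sensitivity df/dk (one column of the
        # sensitivity matrix in the multi-parameter case)
        sens = (math.sqrt((k + dk) / m) / (2 * math.pi) - f) / dk
        k += (f_measured - f) / sens   # Newton step on the residual
    return k

# Since f ~ sqrt(k), a 20% measured frequency drop implies a ~36%
# stiffness loss -- the kind of "remnant stiffness" estimate above.
```

In the full method the scalar sensitivity becomes a Jacobian over all damage-function parameters, and mode-shape residuals are stacked alongside the frequency residuals.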
10 CFR 50.71 - Maintenance of records, making of reports.
Code of Federal Regulations, 2012 CFR
2012-01-01
... submittal of the original FSAR, or as appropriate, the last update to the FSAR under this section. The..., shall update periodically, as provided in paragraphs (e) (3) and (4) of this section, the final safety... located within the update to the FSAR. 1 Effects of changes includes appropriate revisions of descriptions...
NASA Astrophysics Data System (ADS)
Demissie, Henok K.; Bacopoulos, Peter
2017-05-01
A rich dataset of time- and space-varying velocity measurements for a macrotidal estuary was used to develop a vector-based formulation of bottom roughness in the Advanced Circulation (ADCIRC) model. The updates to the parallel code of ADCIRC to include a directionally based drag coefficient are briefly discussed, followed by an application of data assimilation (nudging analysis) to the lower St. Johns River (northeastern Florida) for parameter estimation of an anisotropic Manning's n coefficient. The method produced converging estimates of Manning's n for ebb (0.0290) and flood (0.0219) when initialized with a uniform and isotropic setting of 0.0200. Modeled currents, water levels and flows were improved at observation locations where data were assimilated as well as at monitoring locations where data were not assimilated, so the method increases model skill both locally and non-locally with regard to the data locations. The methodology is readily transferable to other circulation/estuary models, given a pre-developed quality mesh/grid and adequate data for assimilation.
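The nudging-style parameter estimation can be caricatured in a few lines. A hypothetical sketch, assuming a scalar surrogate model in which a higher Manning's n means more friction and a slower simulated current; the actual ADCIRC implementation nudges full 2-D fields, and the surrogate and gain below are invented for illustration:

```python
def estimate_manning_n(n0, observed_speed, model_speed, gamma=0.04, iters=50):
    """Relax (nudge) Manning's n until the modeled current speed matches
    the observation; model_speed(n) is a callable surrogate for the model."""
    n = n0
    for _ in range(iters):
        err = model_speed(n) - observed_speed
        n += gamma * err   # model too fast -> too little friction -> raise n
    return n

# Toy surrogate: speed falls linearly with n (hypothetical relationship).
surrogate = lambda n: 1.0 - 20.0 * n

# Start from the isotropic initial value 0.0200 and recover the "ebb"
# value 0.0290 that generated the synthetic observation.
n_est = estimate_manning_n(0.0200, observed_speed=surrogate(0.0290),
                           model_speed=surrogate)
```

The fixed point of the relaxation is the n at which the modeled and observed speeds agree, mirroring the converging estimates reported above.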
News and Updates from Proctor Creek
This page contains news and updates from the Proctor Creek Urban Waters Partnership location. They span ongoing projects, programs, and initiatives that this Atlanta-based partnership is taking on in its work plan.
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades
Boon, Paul J.; Belopolsky, Artem V.; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in the updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and the saccade target revealed that the target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together, the results suggest that we rely exclusively on neither extraretinal nor retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends on both the time available to integrate these signals and the distance between the saccade target and the remembered location.
PMID:27631767
A systematic approach for the location of hand sanitizer dispensers in hospitals.
Cure, Laila; Van Enk, Richard; Tiong, Ewing
2014-09-01
Compliance with hand hygiene practices is directly affected by the accessibility and availability of cleaning agents. Nevertheless, the decision of where to locate these dispensers is often not explicitly or fully addressed in the literature. In this paper, we study the problem of selecting the locations at which to install alcohol-based hand sanitizer dispensers throughout a hospital unit as an indirect approach to maximizing compliance with hand hygiene practices. We investigate the relevant criteria for selecting dispenser locations that promote hand hygiene compliance, propose metrics for the evaluation of various location configurations, and formulate a dispenser location optimization model that systematically incorporates such criteria. A complete methodology to collect data and obtain the model parameters is described. We illustrate the proposed approach using data from a general care unit at a collaborating hospital. A cost analysis was performed to study the trade-offs between usability and cost. The proposed methodology can help in evaluating the current location configuration, determining the need for change, and establishing the best possible configuration. It can be adapted to incorporate alternative metrics, tailored to different institutions, and updated as needed with new internal policies or safety regulations.
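One common way to formalize such a dispenser-location model is as a coverage problem, solvable greedily. A hypothetical miniature, assuming candidate wall sites and the set of hand-hygiene "opportunity" points each site can serve; the paper's actual formulation and metrics may differ, and the site names and sets below are invented:

```python
def greedy_dispenser_sites(coverage, budget):
    """coverage: dict site -> set of hygiene-opportunity points it serves.
    Pick up to `budget` sites, greedily maximizing newly covered points."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(coverage, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break                      # nothing new left to cover
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical unit: three candidate sites, five opportunity points.
sites = {"door": {1, 2}, "bed1": {2, 3, 4}, "sink": {4, 5}}
picked, covered = greedy_dispenser_sites(sites, budget=2)
```

The greedy choice rule is the standard approximation for maximal coverage; a cost term per site could be added to reproduce the usability-versus-cost trade-off studied in the paper.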
Tactical Satellite (TacSat) Feasibility Study: A Scenario Driven Approach
2006-09-01
Mobile User Objective System NAFCOM NASA /Air Force Cost Model NAVNETWARCOM Naval Network Warfare Command NGA National Geospatial Intelligence...by providing frequent imagery updates as they search for disaster survivors and trek into regions where all terrain has been destroyed and altered to...Kwajalein Atoll; Wallops Island; NASA. Assets will be located adjacent to launch sites. 4) Launch schedule- Launch schedule will enable full
NASA Astrophysics Data System (ADS)
Sebera, Josef; Bezděk, Aleš; Kostelecký, Jan; Pešek, Ivan; Shum, C. K.
2016-01-01
The most important high-resolution geopotential models, such as EGM96 and EGM2008, have been released approximately once per decade. In light of the ability of modern satellite, airborne or terrestrial techniques to provide new data sets every year (e.g., in polar and ocean areas), these data can be readily included in existing models without waiting for a new release. In this article, we present a novel ellipsoidal approach for updating high-resolution models over the oceans with new gridded data. The problem is demonstrated using the EGM2008 model updated with the DTU10 geoid and gravity grids, which provide additional signal over the Arctic oceans. The results of the procedure are the ellipsoidal and spherical harmonic coefficients up to degree 4320 and 4400, respectively. These coefficients represent the input data set to within 0.08 mGal globally, with the largest differences located at the land-ocean boundaries; this is two orders of magnitude smaller than the actual accuracy of gravity data from satellite altimetry. Along with the harmonic coefficients, a detailed map of the second vertical derivative of the anomalous potential (the vertical gravitational gradient) on a 1 arc-min grid is anticipated to improve or complement the original DTU10 geoid model. Finally, an optimized set of Jekeli's functions is provided, as they allow oblate ellipsoidal harmonics to be computed up to a very high degree and order (>10,000) in the hypergeometric formulation.
Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets
ERIC Educational Resources Information Center
Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus
2012-01-01
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…
Microseismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-07-01
At the heart of microseismic event measurement is the task of estimating the locations of the microseismic source events, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. Conventional microseismic source location methods often require manual picking of traveltime arrivals, which not only demands effort and human interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image microseismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, FWI of microseismic events faces strong nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent FWI of microseismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modelled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet in the Z direction is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, including those based on the Marmousi model and the SEG/EAGE overthrust model.
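The convolution trick that removes the unknown source function can be demonstrated directly: if observed and modelled traces share the same Green's functions but were generated with different (unknown) wavelets, convolving each trace with the other dataset's reference trace makes the two sides equal, because convolution commutes. A self-contained sketch with discrete 1-D convolution and invented toy traces (the paper's objective operates on full wavefields):

```python
def conv(a, b):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def source_free_misfit(obs, syn, ref=0):
    """Sum of squared differences between obs_i * syn_ref and syn_i * obs_ref
    (* = convolution). The unknown wavelets cancel identically."""
    total = 0.0
    for o, s in zip(obs, syn):
        lhs, rhs = conv(o, syn[ref]), conv(s, obs[ref])
        total += sum((x - y) ** 2 for x, y in zip(lhs, rhs))
    return total

# Toy Green's functions shared by both datasets, but two different wavelets:
greens = [[1.0, 0.5, 0.0], [0.0, 1.0, -0.5]]
w_obs, w_syn = [1.0, -1.0], [0.5, 0.5, 0.25]
obs = [conv(w_obs, g) for g in greens]
syn = [conv(w_syn, g) for g in greens]
```

When the modelled Green's functions are wrong (i.e., the velocity model or source image is wrong), the misfit becomes nonzero, which is what drives the inversion.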
Adaptive particle filter for robust visual tracking
NASA Astrophysics Data System (ADS)
Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai
2009-10-01
Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In a standard particle filter, the state transition model used to predict the next location of the tracked object assumes that the object motion is constant, which cannot adequately approximate the varying dynamics of real motion. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
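A stripped-down version of the velocity-updating transition model can be written in a few dozen lines. A 1-D sketch, assuming a constant-velocity target, a Gaussian likelihood, and an invented recursive smoothing constant of 0.7/0.3; the paper's APF operates on image observations and adds its own adaptive state estimation:

```python
import random

random.seed(1)

def apf_track(observations, n=500):
    """1-D particle filter with a velocity-updating transition model:
    the velocity used in the prediction step is refreshed recursively
    from successive state estimates instead of being held fixed."""
    parts = [observations[0] + random.gauss(0, 1.0) for _ in range(n)]
    vel, prev_est, estimates = 0.0, None, []
    for z in observations:
        # predict with the current velocity estimate plus process noise
        parts = [x + vel + random.gauss(0, 0.3) for x in parts]
        # weight by a Gaussian likelihood of the observation (sigma = 0.5)
        w = [pow(2.718281828459045, -0.5 * ((x - z) / 0.5) ** 2) for x in parts]
        s = sum(w) or 1.0
        est = sum(wi * xi for wi, xi in zip(w, parts)) / s
        if prev_est is not None:
            vel = 0.7 * vel + 0.3 * (est - prev_est)   # recursive velocity update
        prev_est = est
        estimates.append(est)
        parts = random.choices(parts, weights=w, k=n)  # multinomial resampling
    return estimates

obs = [0.2 * t for t in range(40)]   # target drifting at +0.2 per step
track = apf_track(obs)
```

Because the transition model learns the drift, the prediction stays centered on the target instead of lagging behind it, which is the failure mode of the constant-motion assumption criticized above.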
NASA's Planetary Aeolian Laboratory: Status and Update
NASA Astrophysics Data System (ADS)
Williams, D. A.; Smith, J. K.
2017-05-01
This presentation provides a status update on the operational capabilities and funding plans by NASA for the Planetary Aeolian Laboratory located at NASA Ames Research Center, including details for those proposing future wind tunnel experiments.
NASA Technical Reports Server (NTRS)
Li, Bailing; Toll, David; Zhan, Xiwu; Cosgrove, Brian
2011-01-01
Model simulated soil moisture fields are often biased due to errors in input parameters and deficiencies in model physics. Satellite-derived soil moisture estimates, if retrieved appropriately, represent the spatial mean of soil moisture in a footprint area and can be used to reduce model bias (at locations near the surface) through data assimilation techniques. While assimilating the retrievals can reduce model bias, it can also destroy the mass balance enforced by the model governing equation, because water is removed from or added to the soil by the assimilation algorithm. In addition, studies have shown that assimilation of surface observations can adversely impact soil moisture estimates in the lower soil layers due to imperfect model physics, even though the bias near the surface is decreased. In this study, an ensemble Kalman filter (EnKF) with a mass conservation updating scheme was developed to assimilate the actual value of Advanced Microwave Scanning Radiometer (AMSR-E) soil moisture retrievals to improve the mean of soil moisture fields simulated by the Noah land surface model. Assimilation results using the conventional and the mass conservation updating schemes in the Little Washita watershed of Oklahoma showed that, while both updating schemes reduced the bias in the shallow root zone, the mass conservation scheme provided better estimates in the deeper profile. The mass conservation scheme also yielded physically consistent estimates of fluxes and maintained the water budget. Impacts of model physics on the assimilation results are discussed.
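The difference between the conventional and mass-conserving updates can be shown on a toy soil column. A hypothetical sketch, assuming the Kalman increment is applied to the surface layer and, in the conserving variant, compensated proportionally in the deeper layers so total column water is unchanged; this redistribution rule is illustrative, not the paper's exact EnKF formulation:

```python
def conventional_update(column, gain, innovation):
    """Add the Kalman increment to the surface layer only; the column
    water total changes by gain * innovation."""
    out = list(column)
    out[0] += gain * innovation
    return out

def mass_conserving_update(column, gain, innovation):
    """Apply the same surface increment, then remove it proportionally
    from the deeper layers so the column total is preserved.
    (Illustrative redistribution rule, not the paper's exact scheme.)"""
    out = list(column)
    inc = gain * innovation
    out[0] += inc
    deep = sum(out[1:])
    for i in range(1, len(out)):
        out[i] -= inc * out[i] / deep   # proportional compensation
    return out

col = [0.30, 0.25, 0.20, 0.15]          # volumetric soil moisture by layer
obs, gain = 0.26, 0.6                   # retrieval and (hypothetical) gain
upd = mass_conserving_update(col, gain, obs - col[0])
```

The conventional update changes the water budget by exactly the assimilation increment; the conserving variant moves the surface estimate by the same amount while leaving the column total intact.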
The same-location cost is unrelated to attentional settings: an object-updating account.
Carmel, Tomer; Lamy, Dominique
2014-08-01
Which mechanisms allow us to ignore salient yet irrelevant visual information has been a matter of intense debate. According to the contingent-capture hypothesis, such information is filtered out, whereas according to the salience-based account, it captures attention automatically. Several recent studies have reported a same-location cost that appears to fit neither of these accounts: they showed that responses may actually be slower when the target appears at the location just occupied by an irrelevant singleton distractor. Here, we investigated the mechanisms underlying this same-location cost. Our findings show that the same-location cost is unrelated to automatic attentional capture or to strategic setting of attentional priorities, and therefore invalidate the feature-based inhibition and fast attentional disengagement accounts of this effect. In addition, we show that the cost is wiped out when the cue and target are not perceived as parts of the same object. We interpret these findings as indicating that the same-location cost has previously been misinterpreted by both bottom-up and top-down theories of attentional capture. We propose that it is better understood as a consequence of object updating, namely, as the cost of updating the information stored about an object when that object changes across time.
NASA Astrophysics Data System (ADS)
Roberts, P. M.; House, L. S.; Greene, M.; Ten Cate, J. A.; Schultz-Fellenz, E. S.; Kelley, R.
2012-12-01
From the first data recorded in the fall of 1973 to now, the Los Alamos Seismograph Network (LASN) has operated for nearly 40 years. LASN data have been used to locate more than 2,500 earthquakes in north-central New Mexico. The network was installed for seismic verification research, as well as to monitor and locate earthquakes near Los Alamos National Laboratory (LANL). LASN stations are the only earthquake monitoring stations in New Mexico north of Albuquerque. In the late 1970s, LASN included 22 stations spread over a geographic area of 150 km (N-S) by 350 km (E-W) in northern New Mexico. In the early 1980s, the available funding limited the stations that could be operated to a set of 7, located within an area of about 15 km (N-S) by 15 km (E-W) centered on Los Alamos. Over the last 3 years, 6 additional stations have been installed, which have considerably expanded the spatial coverage of the network. These new stations take advantage of broadband state-of-the-art sensors as well as digital recording and telemetry technology. Currently, 7 stations have broadband, three-component seismometers with digital telemetry, and the remaining 6 have traditional 1 Hz short-period seismometers with analog telemetry. In addition, a vertical array of accelerometers was installed in a wellbore on LANL property; this borehole station has 3-component digital strong-motion sensors. Four forensic strong-motion accelerometers (SMA) are also operated at LANL facilities. With 3 of the new broadband stations in and around the nearby Valles Caldera, LASN is now able to monitor any very small volcano-seismic events that may be associated with the caldera. We will present a complete description of the current LASN station, instrumentation, and telemetry configurations, as well as the data acquisition and event-detection software structure used to record events in Earthworm.
More than 2,000 earthquakes were detected and located in north-central New Mexico during the first 11 years of LASN's operation (1973 to 1984). With the subsequent downsizing of the network, only 1-2 earthquakes per month were detected and located within about 150 km of Los Alamos. Over 850 of these nearby earthquakes have been located from 1973 to present. We recently updated the LASN earthquake catalog for north-central New Mexico up through 2011 and most of 2012. This involved re-assessing phase picks and ensuring that all locations are derived using updated station locations and the best available velocity model. We are also looking at subsets of the catalog that include earthquake swarms and clusters and applying relative location techniques to obtain high-precision re-locations for these events. Most events that were detected and located by LASN have magnitudes less than 1.5 and do not appear in the catalogs of any other network. We will present a newly updated map of north-central New Mexico seismicity based on these recent efforts.
Earth-Science Data Co-Locating Tool
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Block, Gary L.
2012-01-01
This software is used to locate Earth-science satellite data and climate-model analysis outputs in space and time, enabling the direct comparison of data sets with different spatial and temporal resolutions. It is written as three separate modules, clearly separated by functionality and by their interfaces with the other modules; this enables rapid development of support for any new data set. In this updated version of the tool, several new front ends have been developed for new products. The software finds co-locatable data pairs for given sets of data products and creates new data products that share the same spatial and temporal coordinates. This facilitates the direct comparison of two heterogeneous datasets and the comprehensive, synergistic use of the datasets.
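The core co-location operation pairs records from two datasets that fall within space-time tolerances. A minimal sketch, assuming each record is a `(time, lat, lon, value)` tuple and simple absolute-difference tolerances; real co-location tools additionally handle sensor footprints, projections, and resolution differences, and all the records below are invented:

```python
def colocate(a, b, dt=1.0, dspace=0.5):
    """Return pairs (ra, rb) whose times and coordinates agree within
    the given tolerances; each record is (time, lat, lon, value)."""
    pairs = []
    for ra in a:
        for rb in b:
            if (abs(ra[0] - rb[0]) <= dt and
                    abs(ra[1] - rb[1]) <= dspace and
                    abs(ra[2] - rb[2]) <= dspace):
                pairs.append((ra, rb))
    return pairs

# Hypothetical satellite retrievals vs. model analysis points.
sat = [(0.0, 35.0, -106.0, 1.2), (6.0, 35.2, -106.1, 1.4)]
model = [(0.5, 35.1, -106.2, 1.1), (6.2, 40.0, -100.0, 2.0)]
matched = colocate(sat, model)
```

The matched pairs form the "new data product" sharing coordinates; for large datasets the nested loop would be replaced by a spatial index, but the matching criterion is the same.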
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford; ...
2016-01-01
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
NASA Astrophysics Data System (ADS)
Roberts, P. M.; Ten Cate, J. A.; House, L. S.; Greene, M. K.; Morton, E.; Kelley, R. E.
2013-12-01
The Los Alamos Seismic Network (LASN) has operated for 41 years and provided the data to locate more than 2,500 earthquakes in north-central New Mexico. The network was installed for seismic verification research, as well as to monitor and locate earthquakes near Los Alamos National Laboratory (LANL). LASN stations are the only monitoring stations in New Mexico north of Albuquerque. The original network once included 22 stations in northern New Mexico. With limited funding in the early 1980s, the network was downsized to 7 stations within an area of about 15 km (N-S) by 15 km (E-W) centered on Los Alamos. Over the last four years, eight additional stations have been installed, which have considerably expanded the spatial coverage of the network. Currently, 7 stations have broadband, three-component seismometers with digital telemetry, and the remaining 8 have traditional 1 Hz short-period seismometers with either analog telemetry or on-site digital recording. A vertical array of accelerometers was also installed in a wellbore on LANL property; this borehole array has 3-component digital strong-motion sensors. Recently we began upgrading the local strong-motion accelerometer (SMA) network as well, with the addition of high-resolution digitizers and high-sensitivity force-balance accelerometers (FBA). We will present an updated description of the current LASN station, instrumentation and telemetry configurations, as well as the data acquisition and event-detection software structure used to record events in Earthworm. Although more than 2,000 earthquakes were detected and located in north-central New Mexico during the first 11 years of LASN's operation (1973 to 1984), currently only 1-2 earthquakes per month are detected and located within about 150 km of Los Alamos. Over 850 of these nearby earthquakes have been located from 1973 to present. We recently updated the LASN earthquake catalog for north-central New Mexico up through 2012 and most of 2013.
Locations for these earthquakes are based on new, consistently picked arrival times, updated station locations, and the best available velocity model. Most have magnitudes less than 1.5 and are not contained in the catalogs of any other network. With 3 of the new broadband stations in and around the nearby Valles Caldera, LASN is now able to monitor even very small volcano-seismic events that may be associated with the caldera. The expanded station coverage and instrument sensitivity have also allowed detection of smaller, more distant events and of new types of peculiar, non-earthquake signals we had not previously seen (e.g., train noise). These unusual signals have complicated our event discrimination efforts. We will show an updated map of north-central New Mexico seismicity based on these recent efforts, as well as examples of some of the new types of data LASN is now picking up. Although the network and data are generally not accessible to the public, requests for data can be granted on a case-by-case basis.
Study on Dissemination Patterns in Location-Aware Gossiping Networks
NASA Astrophysics Data System (ADS)
Kami, Nobuharu; Baba, Teruyuki; Yoshikawa, Takashi; Morikawa, Hiroyuki
We study the properties of information dissemination over location-aware gossiping networks for location-based real-time communication applications. Gossiping is a promising method for quickly disseminating messages in a large-scale system, but when applying it to information dissemination for location-aware applications, it is important to consider the network topology and the patterns of spatial dissemination over the network in order to deliver messages effectively to potentially interested users. To this end, we propose a continuous-space network model, extended from Kleinberg's small-world model, that is applicable to actual location-based applications. Analytical and simulation-based study shows that the proposed network achieves high dissemination efficiency, resulting from geographically neutral dissemination patterns as well as selective dissemination to proximate users. We have designed a highly scalable location management method capable of promptly updating the network topology in response to node movement, and have implemented a distributed simulator to perform dynamic target pursuit experiments as one example of the applications that are most sensitive to message forwarding delay. The experimental results show that the proposed network surpasses other types of networks in pursuit efficiency and achieves the desirable dissemination patterns.
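Gossip-based dissemination itself is easy to simulate. A small sketch, assuming a push protocol on an arbitrary neighbor graph; the paper's model additionally biases link choice by geographic distance following Kleinberg, whereas this toy picks neighbors uniformly and the 8-node ring with one shortcut is invented:

```python
import random

random.seed(7)

def push_gossip(neighbors, source, rounds):
    """Each round, every informed node forwards the message to one
    uniformly random neighbor; returns the informed set after each round."""
    informed = {source}
    history = []
    for _ in range(rounds):
        new = set()
        for node in informed:
            new.add(random.choice(neighbors[node]))
        informed |= new
        history.append(set(informed))
    return history

# Ring of 8 nodes with one long-range "small-world" shortcut (0 <-> 4).
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
ring[0].append(4)
ring[4].append(0)
spread = push_gossip(ring, source=0, rounds=12)
```

Replacing the uniform `random.choice` with a distance-weighted choice is what turns this into a location-aware gossip network and shapes the spatial dissemination pattern studied in the paper.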
Prospective testing of Coulomb short-term earthquake forecasts
NASA Astrophysics Data System (ADS)
Jackson, D. D.; Kagan, Y. Y.; Schorlemmer, D.; Zechar, J. D.; Wang, Q.; Wong, K.
2009-12-01
Earthquake-induced Coulomb stresses, whether static or dynamic, suddenly change the probability of future earthquakes. Models that estimate stress and the resulting seismicity changes could help to illuminate earthquake physics and guide appropriate precautionary response. But do these models have improved forecasting power compared to empirical statistical models? The best answer lies in prospective testing, in which a fully specified model, with no subsequent parameter adjustments, is evaluated against future earthquakes. The Collaboratory for the Study of Earthquake Predictability (CSEP) facilitates such prospective testing of earthquake forecasts, including several short-term forecasts. Formulating Coulomb stress models for formal testing involves several practical problems, mostly shared with other short-term models. First, earthquake probabilities must be calculated after each "perpetrator" earthquake but before the triggered earthquakes, or "victims". The time interval between a perpetrator and its victims may be very short, as characterized by the Omori law for aftershocks. CSEP evaluates short-term models daily, and allows daily updates of the models. However, a lot can happen in a day. An alternative is to test and update models on the occurrence of each earthquake over a certain magnitude. To make such updates rapidly enough, and to qualify as prospective, earthquake focal mechanisms, slip distributions, stress patterns, and earthquake probabilities would have to be computed without human intervention. This scheme would be more appropriate for evaluating scientific ideas, but it may be less useful for practical applications than daily updates. Second, triggered earthquakes are imperfectly recorded following larger events because their seismic waves are buried in the coda of the earlier event.
To solve this problem, testing methods need to allow for "censoring" of early aftershock data, and a quantitative model for the detection threshold as a function of distance, time, and magnitude is needed. Third, earthquake catalogs contain errors in location and magnitude that may be corrected in later editions. One solution is to test models in "pseudo-prospective" mode (after catalog revision but without model adjustment); again, this is appropriate for science but not for response. Hopefully, demonstrations of modeling success will stimulate improvements in earthquake detection.
How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking
Thomas, Laura E.; Seiffert, Adriane E.
2011-01-01
Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating the location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar to, but perhaps slightly easier than, updating the locations of objects. PMID:21991259
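One common capacity estimator for probe-one tracking designs is m = n(2p - 1), with n targets and probe accuracy p. Whether this paper used exactly this estimator is an assumption here, and the accuracies below are invented to reproduce the reported ~0.8-object cost:

```python
# Hedged sketch: a standard high-threshold capacity estimate for a design in
# which one probed item is judged target vs distractor. The 0.85/0.75
# accuracies are hypothetical, chosen only to illustrate a 0.8-object cost.

def tracking_capacity(n_targets, p_correct):
    """Effective number of tracked objects, m = n * (2p - 1)."""
    return n_targets * (2 * p_correct - 1)

stay = tracking_capacity(4, 0.85)   # illustrative "stay" accuracy
move = tracking_capacity(4, 0.75)   # illustrative "move" accuracy
print(f"capacity cost of self-motion: {stay - move:.1f} objects")
```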
Ulloa, Antonio; Bullock, Daniel
2003-10-01
We developed a neural network model to simulate temporal coordination of human reaching and grasping under variable initial grip apertures and perturbations of object size and object location/orientation. The proposed model computes reach-grasp trajectories by continuously updating vector positioning commands. The model hypotheses are (1) hand/wrist transport, grip aperture, and hand orientation control modules are coupled by a gating signal that fosters synchronous completion of the three sub-goals. (2) Coupling from transport and orientation velocities to aperture control causes maximum grip apertures that scale with these velocities and exceed object size. (3) Part of the aperture trajectory is attributable to an aperture-reducing passive biomechanical effect that is stronger for larger apertures. (4) Discrepancies between internal representations of targets partially inhibit the gating signal, leading to movement time increases that compensate for perturbations. Simulations of the model replicate key features of human reach-grasp kinematics observed under three experimental protocols. Our results indicate that no precomputation of component movement times is necessary for online temporal coordination of the components of reaching and grasping.
ERIC Educational Resources Information Center
Saunders, G. Thomas
1985-01-01
Provides information on updating older fume hoods. Areas addressed include: (1) adjustment of the hood's back baffle; (2) hood air leakage; (3) light level; (4) hood location in relation to room traffic and room air; and (5) establishing and maintaining hood performance. (JN)
Dynamic Simulation of 1D Cellular Automata in the Active aTAM.
Jonoska, Nataša; Karpenko, Daria; Seki, Shinnosuke
2015-07-01
The Active aTAM is a tile based model for self-assembly where tiles are able to transfer signals and change identities according to the signals received. We extend Active aTAM to include deactivation signals and thereby allow detachment of tiles. We show that the model allows a dynamic simulation of cellular automata with assemblies that do not record the entire computational history but only the current updates of the states, and thus provide a way for (a) algorithmic dynamical structural changes in the assembly and (b) reusable space in self-assembly. The simulation is such that at a given location the sequence of tiles that attach and detach corresponds precisely to the sequence of states the synchronous cellular automaton generates at that location.
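The space-reuse idea above, storing only the current row of CA states rather than the whole computational history, can be sketched with an ordinary synchronous 1D elementary cellular automaton. Rule 110 and the seed below are arbitrary illustrative choices, not part of the Active aTAM construction:

```python
# Minimal synchronous 1D elementary CA with periodic boundaries. Like the
# tile-based simulation described above, only the current states are kept;
# each update overwrites (reuses) the previous row.

def step(cells, rule=110):
    """One synchronous update; `cells` is a list of 0/1 states."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as 3-bit index
        out.append((rule >> idx) & 1)              # look up the rule bit
    return out

cells = [0] * 10 + [1]        # single seeded cell on an 11-cell ring
for _ in range(5):
    cells = step(cells)       # previous row is discarded: reusable "space"
print(cells)
```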
The Tenacious Nature of Memory Binding for Arousing Negative Items
Novak, Deanna L.; Mather, Mara
2009-01-01
In two experiments, we investigated whether people are better or worse at updating memory for the location of emotional pictures than neutral pictures. We measured participants' memories for the locations of both arousing negative pictures and neutral pictures while manipulating practice (encountering the same event repeatedly) and interference (encountering the same picture in a different location). Memory for the context of emotional items was less likely to be corrected when erroneous and less likely to be correctly updated when the context changed. These results suggest that initial item-context binding is more tenacious for emotional items than for neutral items, even when such binding is incorrect. PMID:19744934
Brady's Geothermal Field - March 2016 Vibroseis SEG-Y Files and UTM Locations
Kurt Feigl
2016-03-31
PoroTomo March 2016 (Task 6.4) Updated vibroseis source locations with UTM locations. Supersedes gdr.openei.org/submissions/824. Updated vibroseis source location data for Stages 1-4, PoroTomo March 2016. This revision includes source point locations in UTM format (meters) for all four Stages of active source acquisition. Vibroseis sweep data were collected on a Signature Recorder unit (mfr Seismic Source) mounted in the vibroseis cab during the March 2016 PoroTomo active seismic survey Stages 1 to 4. Each sweep generated a GPS timed SEG-Y file with 4 input channels and a 20 second record length. Ch1 = pilot sweep, Ch2 = accelerometer output from the vibe's mass, Ch3 = accelerometer output from the baseplate, and Ch4 = weighted sum of the accelerometer outputs. SEG-Y files are available via the links below.
Precise Hypocenter Determination around Palu Koro Fault: Preliminary Results
NASA Astrophysics Data System (ADS)
Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono
2017-04-01
Sulawesi is located in a complex tectonic setting. High seismicity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method. We investigate the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked P- and S-wave arrival times of the PKF events to determine initial hypocenter locations using the Hypoellipse method with an updated 1-D seismic velocity model. We then relocated the events using the double-difference method. Our preliminary results show that the relocated events cluster around the PKF and have smaller residual times than the initial locations. We will further refine the hypocenter locations by updating the arrival times with a waveform cross-correlation method as input for the double-difference relocation.
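The double-difference method referred to above minimizes, for pairs of nearby events recorded at a common station, the difference between observed and predicted travel-time differences. A minimal sketch of that residual, with invented travel times, is:

```python
# Sketch of the double-difference residual (Waldhauser-Ellsworth style):
# for events i and j at the same station, the datum is
# dr = (t_obs_i - t_obs_j) - (t_pred_i - t_pred_j),
# which cancels much of the shared path/velocity-model error.
# The times below are made-up illustrative numbers, not real picks.

def double_difference(t_obs_i, t_obs_j, t_pred_i, t_pred_j):
    """Double-difference residual (seconds) for one event pair and station."""
    return (t_obs_i - t_obs_j) - (t_pred_i - t_pred_j)

dd = double_difference(12.31, 12.07, 12.28, 12.10)
print(f"double-difference residual: {dd:.3f} s")
```

Relocation then adjusts the relative positions of the event pair to drive such residuals toward zero over all pairs and stations.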
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arrowsmith, Stephen John; Young, Christopher J.; Ballard, Sanford
The standard paradigm for seismic event monitoring breaks the event detection problem down into a series of processing stages that can be categorized at the highest level into station-level processing and network-level processing algorithms (e.g., Le Bras and Wuster (2002)). At the station-level, waveforms are typically processed to detect signals and identify phases, which may subsequently be updated based on network processing. At the network-level, phase picks are associated to form events, which are subsequently located. Furthermore, waveforms are typically directly exploited only at the station-level, while network-level operations rely on earth models to associate and locate the events that generated the phase picks.
Micro-seismic imaging using a source function independent full waveform inversion method
NASA Astrophysics Data System (ADS)
Wang, Hanchen; Alkhalifah, Tariq
2018-03-01
At the heart of micro-seismic event measurement is the task of estimating the locations of micro-seismic sources, as well as their ignition times. The accuracy of locating the sources depends strongly on the velocity model. Conventional micro-seismic source location methods, in many cases, require manual picking of traveltime arrivals, which not only demands human effort and interaction but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic, picking-free process that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces severe nonlinearity due to the unknown source locations (space) and source functions (time). We developed a source-function-independent full waveform inversion of micro-seismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradients for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Angle gathers are also calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those derived from the Marmousi model and the SEG/EAGE overthrust model.
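The convolution idea above can be illustrated numerically: since each trace is a Green's function convolved with the source wavelet, convolving a reference trace from one dataset with the traces of the other cancels the unknown wavelet. This is a toy sketch with random synthetic signals, not the authors' implementation:

```python
import numpy as np

# Toy demonstration of a source-independent data comparison. Each trace is
# d = g * s (Green's function convolved with a wavelet). Convolving a
# reference synthetic trace with the observed data, and vice versa, yields
# identical quantities regardless of the (deliberately wrong) modeling wavelet.

rng = np.random.default_rng(0)
greens = rng.standard_normal((3, 64))        # per-trace "Green's functions"
wavelet_true = rng.standard_normal(8)        # unknown true source wavelet
wavelet_model = rng.standard_normal(8)       # wrong wavelet used in modeling

obs = np.array([np.convolve(g, wavelet_true) for g in greens])
syn = np.array([np.convolve(g, wavelet_model) for g in greens])

ref_obs, ref_syn = obs[0], syn[0]            # trace 0 chosen as reference
lhs = np.array([np.convolve(ref_syn, d) for d in obs])
rhs = np.array([np.convolve(ref_obs, d) for d in syn])

# Convolution commutes, so the wavelets factor out and lhs == rhs to
# numerical precision; a misfit built on (lhs - rhs) is wavelet-independent.
print(np.max(np.abs(lhs - rhs)))
```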
NASA Astrophysics Data System (ADS)
Lee, Hak Su; Seo, Dong-Jun; Liu, Yuqiong; McKee, Paul; Corby, Robert
2010-05-01
State updating of distributed hydrologic models via assimilation of streamflow data is subject to "overfitting" because the large dimensionality of the model's state space may render the assimilation problem seriously underdetermined. To examine the issue in the context of operational hydrology, we carried out a set of real-world experiments in which we assimilated streamflow data at interior and/or outlet locations into the gridded SAC and kinematic-wave routing models of the U.S. National Weather Service (NWS) Research Distributed Hydrologic Model (RDHM). We used nine basins in the southern plains of the U.S. for the experiments. The experiments consist of selectively assimilating streamflow at different gauge locations, outlet and/or interior, and carrying out both dependent and independent validation. To assess the sensitivity of the quality of assimilation-aided streamflow simulation to the reduced dimensionality of the state space, we carried out data assimilation at spatially semi-distributed or lumped scales and adjusted biases in precipitation and potential evaporation at a 6-hourly or larger scale. In this talk, we present the results and findings.
The boundary vector cell model of place cell firing and spatial memory
Barry, Caswell; Lever, Colin; Hayman, Robin; Hartley, Tom; Burton, Stephen; O'Keefe, John; Jeffery, Kate; Burgess, Neil
2009-01-01
We review evidence for the boundary vector cell model of the environmental determinants of the firing of hippocampal place cells. Preliminary experimental results are presented concerning the effects of addition or removal of environmental boundaries on place cell firing and evidence that boundary vector cells may exist in the subiculum. We review and update computational simulations predicting the location of human search within a virtual environment of variable geometry, assuming that boundary vector cells provide one of the input representations of location used in mammalian spatial memory. Finally, we extend the model to include experience-dependent modification of connection strengths through a BCM-like learning rule, and compare the effects to experimental data on the firing of place cells under geometrical manipulations to their environment. The relationship between neurophysiological results in rats and spatial behaviour in humans is discussed. PMID:16703944
Granek, Joshua A.; Pisella, Laure; Blangero, Annabelle; Rossetti, Yves; Sergio, Lauren E.
2012-01-01
Patients with optic ataxia (OA), who are missing the caudal portion of their superior parietal lobule (SPL), have difficulty performing visually-guided reaches towards extra-foveal targets. Such gaze and hand decoupling also occurs in commonly performed non-standard visuomotor transformations such as the use of a computer mouse. In this study, we test two unilateral OA patients in conditions of 1) a change in the physical location of the visual stimulus relative to the plane of the limb movement, 2) a cue that signals a required limb movement 180° opposite to the cued visual target location, or 3) both of these situations combined. In these non-standard visuomotor transformations, the OA deficit is not observed as the well-documented field-dependent misreach. Instead, OA patients make additional eye movements to update hand and goal location during motor execution in order to complete these slow movements. Overall, the OA patients struggled when having to guide centrifugal movements in peripheral vision, even when they were instructed from visual stimuli that could be foveated. We propose that an intact caudal SPL is crucial for any visuomotor control that involves updating ongoing hand location in space without foveating it, i.e. from peripheral vision, proprioceptive or predictive information. PMID:23071599
Cholinergic stimulation enhances Bayesian belief updating in the deployment of spatial attention.
Vossel, Simone; Bauer, Markus; Mathys, Christoph; Adams, Rick A; Dolan, Raymond J; Stephan, Klaas E; Friston, Karl J
2014-11-19
The exact mechanisms whereby the cholinergic neurotransmitter system contributes to attentional processing remain poorly understood. Here, we applied computational modeling to psychophysical data (obtained from a spatial attention task) under a psychopharmacological challenge with the cholinesterase inhibitor galantamine (Reminyl). This allowed us to characterize the cholinergic modulation of selective attention formally, in terms of hierarchical Bayesian inference. In a placebo-controlled, within-subject, crossover design, 16 healthy human subjects performed a modified version of Posner's location-cueing task in which the proportion of validly and invalidly cued targets (percentage of cue validity, % CV) changed over time. Saccadic response speeds were used to estimate the parameters of a hierarchical Bayesian model to test whether cholinergic stimulation affected the trial-wise updating of probabilistic beliefs that underlie the allocation of attention or whether galantamine changed the mapping from those beliefs to subsequent eye movements. Behaviorally, galantamine led to a greater influence of probabilistic context (% CV) on response speed than placebo. Crucially, computational modeling suggested this effect was due to an increase in the rate of belief updating about cue validity (as opposed to the increased sensitivity of behavioral responses to those beliefs). We discuss these findings with respect to cholinergic effects on hierarchical cortical processing and in relation to the encoding of expected uncertainty or precision.
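As a simplified stand-in for the hierarchical Bayesian model above (the study used a more elaborate hierarchical scheme, not this one), trial-wise belief updating about cue validity can be sketched with a conjugate beta-Bernoulli update; the trial sequence below is invented:

```python
# Hedged sketch: Beta(alpha, beta) belief over cue validity, updated after
# each trial. A faster "rate of belief updating" would correspond here to
# weighting recent trials more heavily; this minimal version weights all
# trials equally. The trial outcomes are illustrative, not real data.

def update_belief(alpha, beta, valid):
    """Conjugate update after one valid/invalid cue outcome."""
    return (alpha + 1, beta) if valid else (alpha, beta + 1)

alpha, beta = 1.0, 1.0                      # flat prior on % cue validity
trials = [True, True, True, False, True]    # mostly valid cues (assumed)
for valid in trials:
    alpha, beta = update_belief(alpha, beta, valid)

print(f"posterior mean validity: {alpha / (alpha + beta):.2f}")
```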
DOT National Transportation Integrated Search
2016-05-01
Florida International University researchers examined the existing performance measures and the project prioritization method in the CMP and updated them to better reflect the current conditions and strategic goals of FDOT. They also developed visual...
Response phase mapping of nonlinear joint dynamics using continuous scanning LDV measurement method
NASA Astrophysics Data System (ADS)
Di Maio, D.; Bozzo, A.; Peyret, Nicolas
2016-06-01
This study presents a novel approach for locating discrete nonlinearities in mechanical assemblies. The long-term objective is to develop a new metric for detecting and locating nonlinearities using Scanning LDV (SLDV) systems. This new metric will help improve the model updating, or validation, of mechanical assemblies presenting discrete and sparse nonlinearities. It is well established that SLDV systems can scan vibrating structures with a high density of measurement points and produce highly defined Operational Deflection Shapes (ODSs). This paper presents insights on how to use response phase mapping for locating nonlinearities of a bolted flange. This type of structure presents two types of nonlinearities, geometrical and frictional joints. The interest is focused on the frictional joints, and therefore the ability to locate which joints are responsible for nonlinearity is highly valuable for model validation activities.
Modeling and query the uncertainty of network constrained moving objects based on RFID data
NASA Astrophysics Data System (ADS)
Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie
2007-06-01
The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information. They are cheaper, require fewer updates, and intrude less on privacy. They detect the id of the object and the time at which the moving object passes a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data thus becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied. The processing includes four steps: spatial filter, spatial refinement, temporal filter and probability calculation. Finally, experiments are conducted on simulated data. In the experiments, the performance of the index is studied, the precision and recall of the result set are defined, and how the query arguments affect the precision and recall is discussed.
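The within-edge uncertainty described above can be sketched with two simple assumed models: constant-speed interpolation between the entry and exit detections, and a uniform position distribution over the edge for probabilistic range queries. Both are illustrative assumptions, not the paper's actual model:

```python
# Hedged sketch of location uncertainty between two RFID readings. An object
# is detected entering an edge at t0 and leaving at t1; its position inside
# the edge is unobserved. Two toy models: constant-speed interpolation, and
# a uniform distribution over edge fractions for range-query probabilities.

def position_fraction(t, t0, t1):
    """Fraction of the edge traversed at time t, assuming constant speed."""
    if not t0 <= t <= t1:
        raise ValueError("time outside traversal interval")
    return (t - t0) / (t1 - t0)

def prob_in_segment(a, b):
    """P(object within edge fractions [a, b]) under a uniform position model."""
    return max(0.0, min(b, 1.0) - max(a, 0.0))

print(position_fraction(5.0, 0.0, 10.0))   # halfway along the edge
print(prob_in_segment(0.25, 0.75))         # middle half of the edge
```

A spatio-temporal range query would combine the temporal filter (is t in [t0, t1]?) with such a probability calculation, matching the four-step pipeline in the abstract.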
Recent Updates of A Multi-Phase Transport (AMPT) Model
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei
2008-10-01
We will present recent updates to the AMPT model, a Monte Carlo transport model for high energy heavy ion collisions, since its first public release in 2004 and the corresponding detailed descriptions in Phys. Rev. C 72, 064901 (2005). The updates often result from user requests. Some of these updates expand the physics processes or descriptions in the model, while some updates improve the usability of the model such as providing the initial parton distributions or help avoid crashes on some operating systems. We will also explain how the AMPT model is being maintained and updated.
Simulation of the shallow groundwater-flow system near Mole Lake, Forest County, Wisconsin
Fienen, Michael N.; Juckem, Paul F.; Hunt, Randall J.
2011-01-01
The shallow groundwater system near Mole Lake, Forest County, Wis. was simulated using a previously calibrated regional model. The previous model was updated using newly collected water-level measurements and refinements to surface-water features. The updated model was then used to calculate the area contributing recharge for one existing and two proposed pumping locations on lands of the Sokaogon Chippewa Community. Delineated 1-, 5-, and 10-year areas contributing recharge for existing and proposed wells extend from the areas of pumping to the northeast of the pumping locations. Steady-state pumping was simulated for two scenarios: a base pumping scenario using pumping rates that reflect what the Tribe expects to pump and a high pumping scenario, in which the rate was set to the maximum expected from wells installed in this area. In the base pumping scenario, pumping rates of 32 gallons per minute (gal/min; 46,000 gallons per day (gal/d)) from the existing well and 30 gal/min (43,000 gal/d) at each of the two proposed wells were simulated. The high pumping scenario simulated a rate of 70 gal/min (101,000 gal/d) from each of the three pumping wells to estimate the largest areas contributing recharge that might be expected given what is currently known about the shallow groundwater system. The areas contributing recharge for both the base and high pumping scenarios did not intersect any modeled surface-water bodies; however, the high pumping scenario had a larger areal extent than the base pumping scenario and intersected a septic separator.
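As a quick arithmetic check on the pumping rates quoted above, gallons per minute convert to gallons per day by a factor of 1,440 (minutes per day), rounded to the nearest thousand as in the report:

```python
# Unit-conversion check for the quoted pumping rates: gal/min -> gal/d.

def gpm_to_gpd(gpm):
    """Convert gallons per minute to gallons per day (1440 min/day)."""
    return gpm * 60 * 24

print(round(gpm_to_gpd(32), -3))   # ~46,000 gal/d (existing well)
print(round(gpm_to_gpd(30), -3))   # ~43,000 gal/d (each proposed well)
print(round(gpm_to_gpd(70), -3))   # ~101,000 gal/d (high pumping scenario)
```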
Locations and attributes of wind turbines in Colorado, 2011
Carr, Natasha B.; Diffendorfer, James E.; Fancher, Tammy; Hawkins, Sarah J.; Latysh, Natalie; Leib, Kenneth J.; Matherne, Anne Marie
2013-01-01
This dataset represents an update to U.S. Geological Survey Data Series 597. Locations and attributes of wind turbines in Colorado, 2009 (available at http://pubs.usgs.gov/ds/597/). This updated Colorado wind turbine Data Series provides geospatial data for all 1,204 wind turbines established within the State of Colorado as of September 2011, an increase of 297 wind turbines from 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, county, and development status of the wind turbine. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, and year the facility went online. The locations of turbines are derived from 1-meter true-color aerial photographs produced by the National Agriculture Imagery Program (NAIP); the photographs have a positional accuracy of about ±5 meters. Locations of turbines constructed during or prior to August 2009 are based on August 2009 NAIP imagery and turbine locations constructed after August 2009 were based on September 2011 NAIP imagery. The location of turbines under construction during September 2011 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas developed by the U.S. Geological Survey (http://my.usgs.gov/eerma/). The Energy Atlas synthesizes data on existing and potential energy development in Colorado and New Mexico and includes additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools also are included in the Energy Atlas. 
The format of the Energy Atlas is designed to facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
Locations and attributes of wind turbines in New Mexico, 2011
Carr, Natasha B.; Diffendorfer, James B.; Fancher, Tammy; Hawkins, Sarah J.; Latysh, Natalie; Leib, Kenneth J.; Matherne, Anne Marie
2013-01-01
This dataset represents an update to U.S. Geological Survey Data Series 596. Locations and attributes of wind turbines in New Mexico, 2009 (available at http://pubs.usgs.gov/ds/596/). This updated New Mexico wind turbine Data Series provides geospatial data for all 562 wind turbines established within the State of New Mexico as of June 2011, an increase of 155 wind turbines from 2009. Attributes specific to each turbine include: turbine location, manufacturer and model, rotor diameter, hub height, rotor height, potential megawatt output, land ownership, county, and development status of the wind turbine. Wind energy facility data for each turbine include: facility name, facility power capacity, number of turbines associated with each facility to date, facility developer, facility ownership, and year the facility went online. The locations of turbines are derived from 1-meter true-color aerial photographs produced by the National Agriculture Imagery Program (NAIP); the photographs have a positional accuracy of about ±5 meters. The locations of turbines constructed during or prior to August 2009 are based on August 2009 NAIP imagery and turbine locations constructed after August 2009 were based on June 2011 NAIP imagery. The location of turbines under construction during June 2011 likely will be less accurate than the location of existing turbines. This data series contributes to an Online Interactive Energy Atlas developed by the U.S. Geological Survey (http://my.usgs.gov/eerma/). The Energy Atlas synthesizes data on existing and potential energy development in Colorado and New Mexico and includes additional natural resource data layers. This information may be used by decisionmakers to evaluate and compare the potential benefits and tradeoffs associated with different energy development strategies or scenarios. Interactive maps, downloadable data layers, comprehensive metadata, and decision-support tools also are included in the Energy Atlas.
The format of the Energy Atlas is designed to facilitate the integration of information about energy with key terrestrial and aquatic resources for evaluating resource values and minimizing risks from energy development.
2010-01-01
Background Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and www visualization and so on because of the harmonized appearance of its results. For pathway drawing, location information is essential for its comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. Results We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, the mapping of nodes to grid points that minimizes a cost function is searched. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. Conclusions The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways; endothelial cell model, Fas-induced apoptosis model, and C. elegans cell fate simulation model. From the positional constraints, all the results of our algorithm satisfy location information, and hence, more comprehensible layouts are obtained as compared to the spring embedder algorithm. 
From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to the positional constraints. For a fair comparison, we also applied our proposed method without positional constraints; these results contain fewer crossings than those of the spring embedder algorithm. We also compared layouts of the proposed algorithm with and without compartment update and verified that the latter can reach better local optima. PMID:20565884
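The spring embedder cost referred to above is commonly implemented in the Fruchterman-Reingold form, with pairwise repulsion k^2/d and edge attraction d^2/k. This minimal sketch (a toy three-node graph with illustrative constants, not the paper's pathway data) shows one such displacement iteration applied repeatedly:

```python
import math

# Minimal Fruchterman-Reingold-style spring embedder: all node pairs repel
# with force k*k/d, edge endpoints attract with force d*d/k, and positions
# move a small step along the net displacement. Graph and constants are toys.

def spring_step(pos, edges, k=1.0, step=0.05):
    """One layout iteration; pos maps node -> (x, y)."""
    disp = {v: [0.0, 0.0] for v in pos}
    nodes = list(pos)
    for i, u in enumerate(nodes):            # pairwise repulsion
        for v in nodes[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d
            disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
            disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
    for u, v in edges:                       # attraction along edges
        dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k
        disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
        disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
    return {v: (pos[v][0] + step * disp[v][0], pos[v][1] + step * disp[v][1])
            for v in pos}

pos = {"a": (0.0, 0.0), "b": (3.0, 0.0), "c": (0.0, 3.0)}
for _ in range(50):
    pos = spring_step(pos, [("a", "b"), ("a", "c")])
print(pos)
```

A grid-layout variant, as in the paper, would instead search node-to-grid-point assignments that minimize an analogous cost, which is what makes positional constraints (e.g., membrane compartments) easy to impose.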
Boundary condition identification for a grid model by experimental and numerical dynamic analysis
NASA Astrophysics Data System (ADS)
Mao, Qiang; Devitis, John; Mazzotti, Matteo; Bartoli, Ivan; Moon, Franklin; Sjoblom, Kurt; Aktan, Emin
2015-04-01
There is a growing need to characterize unknown foundations and assess substructures in existing bridges. It is becoming an important issue for the serviceability and safety of bridges as well as for the possibility of partial reuse of existing infrastructure. Within this broader context, this paper investigates the possibility of identifying, locating and quantifying changes of boundary conditions by leveraging a simply supported grid structure with a composite deck. Multi-reference impact tests are performed on the grid model, and one supporting bearing is modified by replacing a steel cylindrical roller with a roller of compliant material. Impact-based modal analysis provides global modal parameters, such as damped natural frequencies, mode shapes and the flexibility matrix, that are used as indicators of boundary condition changes. An updating process combining a hybrid optimization algorithm and the finite element software suite ABAQUS is presented in this paper. The updated ABAQUS model of the grid, which simulates the supporting bearing with springs, is used to detect and quantify the change of the boundary conditions.
Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature
Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat
2014-01-01
Representing the target appearance model for moving object tracking in complex environments is a challenge. This study presents a novel method whose appearance model is described by double templates based on the timed motion history image with HSV color histogram features (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based candidate patch feature histograms, double templates matching (DTM) for object location, and template updating. First, we initialize the target object region and calculate its HSV color histogram feature as the offline and online templates. Second, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we use the DTM method to trace the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
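The candidate-patch matching step above compares HSV color histograms against a template. Histogram intersection is used here as one common similarity choice (the paper may use a different measure), and the small histograms are invented examples:

```python
# Hedged sketch of template-vs-patch comparison via normalized color
# histogram intersection: 1.0 means identical distributions, 0.0 disjoint.
# The 4-bin "HSV" histograms are toy values, not real image data.

def normalize(hist):
    """Scale a histogram so its bins sum to 1."""
    s = sum(hist)
    return [h / s for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity of two normalized histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

template = normalize([4, 10, 2, 0])
patch_good = normalize([3, 11, 2, 0])   # similar color distribution
patch_bad = normalize([0, 1, 8, 7])     # dissimilar distribution

print(intersection(template, patch_good))
print(intersection(template, patch_bad))
```

In a double-template scheme, a candidate would be scored against both the offline (initial) and online (recently updated) templates, making the tracker robust to both drift and abrupt appearance change.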
Update on Integrated Optical Design Analyzer
NASA Technical Reports Server (NTRS)
Moore, James D., Jr.; Troy, Ed
2003-01-01
Updated information on the Integrated Optical Design Analyzer (IODA) computer program has become available. IODA was described in Software for Multidisciplinary Concurrent Optical Design (MFS-31452), NASA Tech Briefs, Vol. 25, No. 10 (October 2001), page 8a. To recapitulate: IODA facilitates multidisciplinary concurrent engineering of highly precise optical instruments. The architecture of IODA was developed by reviewing design processes and software in an effort to automate design procedures. IODA significantly reduces design iteration cycle time and eliminates many potential sources of error. IODA integrates the modeling efforts of a team of experts in different disciplines (e.g., optics, structural analysis, and heat transfer) working at different locations and provides seamless fusion of data among thermal, structural, and optical models used to design an instrument. IODA is compatible with data files generated by the NASTRAN structural-analysis program and the Code V (Registered Trademark) optical-analysis program, and can be used to couple analyses performed by these two programs. IODA supports multiple-load-case analysis for quickly accomplishing trade studies. IODA can also model the transient response of an instrument under the influence of dynamic loads and disturbances.
NASA Astrophysics Data System (ADS)
Calvo, N.; Garcia, R. R.; Kinnison, D. E.
2017-04-01
The latest version of the Whole Atmosphere Community Climate Model (WACCM), which includes a new chemistry scheme and an updated parameterization of orographic gravity waves, produces temperature trends in the Antarctic lower stratosphere in excellent agreement with radiosonde observations for 1969-1998 as regards magnitude, location, timing, and persistence. The maximum trend, reached in November at 100 hPa, is -4.4 ± 2.8 K decade⁻¹, which is a third smaller than the largest trend in the previous version of WACCM. Comparison with a simulation without the updated orographic gravity wave parameterization, together with analysis of the model's thermodynamic budget, reveals that the reduced trend is due to the effects of a stronger Brewer-Dobson circulation in the new simulations, which warms the polar cap. The effects are both direct (a trend in adiabatic warming in late spring) and indirect (a smaller trend in ozone, hence a smaller reduction in shortwave heating, due to the warmer environment).
Identification of cracks in thick beams with a cracked beam element model
NASA Astrophysics Data System (ADS)
Hou, Chuanchuan; Lu, Yong
2016-12-01
The effect of a crack on the vibration of a beam is a classical problem, and various models have been proposed, ranging from the basic stiffness reduction method to more sophisticated formulations based on the additional flexibility due to a crack. However, in damage identification and finite element model updating applications, it is still common practice to represent a crack by a simple stiffness reduction factor in the identification process, whereas the use of a more realistic crack model is rather limited. In this paper, the issues with the simple stiffness reduction method, particularly concerning thick beams, are highlighted along with a review of several other crack models. A robust finite element model updating procedure is then presented for the detection of cracks in beams. The description of the crack parameters is based on the cracked beam flexibility formulated by means of fracture mechanics; it takes shear deformation and the coupling between translational and longitudinal vibrations into consideration, and is thus particularly suitable for thick beams. The identification procedure employs a global search technique using genetic algorithms, with no restriction on the location, severity, or number of cracks to be identified. The procedure is verified to yield satisfactory identification for practically any configuration of cracks in a beam.
Fallon, Sean J; Mattiesing, Rozemarijn M; Dolfen, Nina; Manohar, Sanjay G; Husain, Masud
2018-01-05
Ignoring distracting information and updating current contents are essential components of working memory (WM). Yet, although both require controlling irrelevant information, it is unclear whether they have the same effects on recall and produce the same level of misbinding errors (incorrectly joining the features of different memoranda). Moreover, the likelihood of misbinding may be affected by the feature similarity between the items already encoded into memory and the information that has to be filtered out (ignored) or updated into memory. Here, we investigate these questions. Participants were sequentially presented with two pairs of arrows. The first pair of arrows always had to be encoded into memory, but the second pair either had to be ignored (ignore condition) or allowed to displace the previously encoded items (update condition). To investigate the effect of similarity on recall, we also varied, in a factorial manner, whether the items that had to be ignored or updated were presented in the same or different colours and/or same or different spatial locations to the original memoranda. By applying a computational model, we were able to quantify the levels of misbinding. Ignoring, but not updating, increased overall recall error as well as misbinding rates, even when accounting for the retention period. This indicates that not all manipulations of attention in WM are equal in terms of their effects on recall and misbinding. Misbinding rates in the ignore condition were affected by the colour and spatial congruence of relevant and irrelevant information to a greater extent than in the update condition. This finding suggests that attentional templates are used to evaluate relevant and irrelevant information in different ways during ignoring and updating. Together, the results suggest that differences between the two functions might occur due to higher levels of attentional compartmentalisation - or protection - during updating compared to ignoring. 
Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
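The predict-correlate-update loop described above can be approximated with a constant-velocity predictor that refines its velocity estimate from each observed correction. This is a generic alpha-beta-style sketch under assumed gains, not the actual estimator used in the JPL system:

```python
import numpy as np

class ConstantVelocityPredictor:
    """Predict a feature's location one frame ahead, then refine position
    and velocity from the measured correction (simplified alpha-beta tracker)."""

    def __init__(self, pos, vel, alpha=0.85, beta=0.3):
        self.pos = np.asarray(pos, float)
        self.vel = np.asarray(vel, float)
        self.alpha, self.beta = alpha, beta

    def predict(self, dt=1.0):
        # Where the edge/feature should appear in the next frame.
        return self.pos + self.vel * dt

    def update(self, measured, dt=1.0):
        # Blend the prediction with the measured location; the residual
        # also corrects the velocity estimate for future frames.
        predicted = self.predict(dt)
        residual = np.asarray(measured, float) - predicted
        self.pos = predicted + self.alpha * residual
        self.vel = self.vel + self.beta * residual / dt
        return self.pos
```

With the gains above, the velocity estimate converges to the true constant velocity after a few frames, which is what lets the tracker predict where edges should appear before they are detected.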
An update of Quaternary faults of central and eastern Oregon
Weldon, Ray J.; Fletcher, D.K.; Weldon, E.M.; Scharer, K.M.; McCrory, P.A.
2002-01-01
This is the online version of a CD-ROM publication. We have updated the eastern portion of our previous active fault map of Oregon (Pezzopane, Nakata, and Weldon, 1992) as a contribution to the larger USGS effort to produce digital maps of active faults in the Pacific Northwest region. The 1992 fault map has seen wide distribution and has been reproduced in essentially all subsequent compilations of active faults of Oregon. The new map provides a substantial update of known active or suspected active faults east of the Cascades. Improvements in the new map include (1) many newly recognized active faults, (2) a linked ArcInfo map and reference database, (3) more precise locations for previously recognized faults on shaded relief quadrangles generated from USGS 30-m digital elevation models (DEMs), (4) more uniform coverage resulting in more consistent grouping of the ages of active faults, and (5) a new category of 'possibly' active faults that share characteristics with known active faults, but have not been studied adequately to assess their activity. The distribution of active faults has not changed substantially from the original Pezzopane, Nakata and Weldon map. Most faults occur in the south-central Basin and Range tectonic province that is located in the backarc portion of the Cascadia subduction margin. These faults occur in zones consisting of numerous short faults with similar rates, ages, and styles of movement. Many active faults strongly correlate with the most active volcanic centers of Oregon, including Newberry Craters and Crater Lake.
78 FR 36738 - Signal System Reporting Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-19
... by updating an outdated statutory citation. DATES: Written comments must be received by August 19... interested parties of the date, time, and location of any such hearing. ADDRESSES: You may submit comments.... Updating U.S. Code Citations in Part 233 Administrative amendments are sometimes necessary to address...
NASA Astrophysics Data System (ADS)
Theunissen, T.; Chevrot, S.; Sylvander, M.; Monteiller, V.; Calvet, M.; Villaseñor, A.; Benahmed, S.; Pauchet, H.; Grimaud, F.
2018-03-01
Local seismic networks are usually designed so that earthquakes are located inside them (primary azimuthal gap < 180°) and close to the seismic stations (0-100 km). With these local or near-regional networks (0°-5°), many seismological observatories still routinely locate earthquakes using 1-D velocity models. Moving towards 3-D location algorithms requires robust 3-D velocity models. This work takes advantage of seismic monitoring spanning more than 30 yr in the Pyrenean region. We investigate the influence on earthquake locations of a well-designed 3-D model with station corrections that includes basin structures and the geometry of the Mohorovicic discontinuity. In the most favourable cases (GAP < 180° and distance to the first station less than 15 km), results using 1-D velocity models are very similar to 3-D results. The horizontal accuracy in the 1-D case can be higher than in the 3-D case if lateral variations in the structure are not properly resolved. Depth is systematically better resolved in the 3-D model, even on the boundaries of the seismic network (GAP > 180° and distance to the first station greater than 15 km). Errors in the velocity models and the accuracy of absolute earthquake locations are assessed against a reference data set made of active seismic experiments, quarry blasts and passive temporary experiments. Solutions and uncertainties are estimated using the probabilistic approach of the NonLinLoc (NLLoc) software based on Equal Differential Time. Some updates have been added to NLLoc to better focus on the final solution (outlier exclusion, multiscale grid search, S-phase weighting). Errors in the probabilistic approach are defined to take into account errors in velocity models and in arrival times. The seismicity in the final 3-D catalogue is located with a horizontal uncertainty of about 2.0 ± 1.9 km and a vertical uncertainty of about 3.0 ± 2.0 km.
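The Equal Differential Time (EDT) idea behind NonLinLoc can be illustrated with a toy grid search: for every station pair, the misfit compares observed and predicted arrival-time differences, so the unknown origin time cancels. The constant-velocity medium, 2-D grid, and L1 misfit below are simplifying assumptions for illustration, not the observatory's actual 3-D procedure.

```python
import numpy as np
from itertools import combinations

def edt_locate(stations, arrivals, v, grid):
    """Grid-search a source location minimizing Equal Differential Time
    residuals in a homogeneous medium of velocity v.

    stations: (n, 2) array of station coordinates
    arrivals: (n,) observed arrival times (origin time unknown)
    grid:     iterable of candidate source coordinates
    """
    best, best_cost = None, np.inf
    for x in grid:
        tt = np.linalg.norm(stations - x, axis=1) / v   # predicted travel times
        # Origin time cancels in the pairwise differences.
        cost = sum(abs((arrivals[i] - arrivals[j]) - (tt[i] - tt[j]))
                   for i, j in combinations(range(len(stations)), 2))
        if cost < best_cost:
            best, best_cost = x, cost
    return best, best_cost
```

Because the misfit never uses the origin time, outlier picks corrupt only the pairs they appear in, which is one reason EDT is robust compared with a plain least-squares residual on absolute times.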
Ricketts Community Update No 3
In February 2017, the U.S. Environmental Protection Agency (EPA) collected air samples at 50 properties near the Ricketts Dry Cleaning facility at 2017 Doubleday Avenue in the village of Ballston Spa.
Object-location binding across a saccade: A retinotopic Spatial Congruency Bias
Shafer-Skelton, Anna; Kupitz, Colin N.; Golomb, Julie D.
2017-01-01
Despite frequent eye movements that rapidly shift the locations of objects on our retinas, our visual system creates a stable perception of the world. To do this, it must convert eye-centered (retinotopic) input to world-centered (spatiotopic) percepts. Moreover, for successful behavior we must also incorporate information about object features/identities during this updating – a fundamental challenge that remains to be understood. Here we adapted a recent behavioral paradigm, the “Spatial Congruency Bias”, to investigate object-location binding across an eye movement. In two initial baseline experiments, we showed that the Spatial Congruency Bias was present for both Gabor and face stimuli in addition to the object stimuli used in the original paradigm. Then, across three main experiments, we found the bias was preserved across an eye movement, but only in retinotopic coordinates: Subjects were more likely to perceive two stimuli as having the same features/identity when they were presented in the same retinotopic location. Strikingly, there was no evidence of location binding in the more ecologically relevant spatiotopic (world-centered) coordinates; the reference frame did not update to spatiotopic even at longer post-saccade delays, nor did it transition to spatiotopic with more complex stimuli (Gabors, shapes, and faces all showed a retinotopic Congruency Bias). Our results suggest that object-location binding may be tied to retinotopic coordinates, and that it may need to be re-established following each eye movement rather than being automatically updated to spatiotopic coordinates. PMID:28070793
Finite element modelling and updating of a lively footbridge: The complete process
NASA Astrophysics Data System (ADS)
Živanović, Stana; Pavic, Aleksandar; Reynolds, Paul
2007-03-01
The finite element (FE) model updating technology was originally developed in the aerospace and mechanical engineering disciplines to automatically update numerical models of structures to match their experimentally measured counterparts. The updating process identifies the drawbacks in the FE modelling, and the updated FE model can be used to produce more reliable results in further dynamic analysis. In the last decade, the updating technology has been introduced into civil structural engineering, where it can serve as an advanced tool for obtaining reliable modal properties of large structures. The updating process has four key phases: initial FE modelling, modal testing, manual model tuning and automatic updating (conducted using specialist software). However, the published literature does not connect these phases well, although their linkage is crucial when implementing the updating technology. This paper therefore aims to clarify the importance of this linking and to describe the complete model updating process as applicable in civil structural engineering. The complete process consisting of the four phases is outlined and brief theory is presented as appropriate. The procedure is then implemented on a lively steel box girder footbridge. It was found that even a very detailed initial FE model underestimated the natural frequencies of all seven experimentally identified modes of vibration, with the maximum error being almost 30%. Manual FE model tuning by trial and error found that flexible supports in the longitudinal direction should be introduced at the girder ends to improve correlation between the measured and FE-calculated modes. This significantly reduced the maximum frequency error to only 4%. It was demonstrated that only then could the FE model be automatically updated in a meaningful way. The automatic updating was successfully conducted by updating 22 uncertain structural parameters. Finally, a physical interpretation of all parameter changes is discussed.
This interpretation is often missing in the published literature. It was found that the composite slabs were less stiff than originally assumed and that the asphalt layer contributed considerably to the deck stiffness.
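The automatic updating phase can be illustrated in miniature: compute the model's natural frequencies from its mass and stiffness matrices, then adjust a parameter until they match the measured ones. The toy below updates a single global stiffness scale on a two-degree-of-freedom system (the footbridge study updated 22 parameters with specialist software; everything here, including the one-parameter closed-form update, is a simplified assumption).

```python
import numpy as np

def natural_frequencies(K, M):
    """Undamped natural frequencies (Hz) of the system M x'' + K x = 0."""
    lam = np.linalg.eigvals(np.linalg.solve(M, K))   # eigenvalues = omega^2
    return np.sort(np.sqrt(np.real(lam))) / (2 * np.pi)

def update_stiffness_scale(K, M, measured_hz):
    """One-parameter model update: scale K so the first model frequency
    matches the first measured frequency (frequency scales with the
    square root of stiffness, so the update is closed-form)."""
    f_model = natural_frequencies(K, M)[0]
    scale = (measured_hz[0] / f_model) ** 2
    return scale * K, scale
```

Real updating problems are over- or under-determined and need iterative sensitivity-based optimization; the closed-form step above only works because a single scale factor shifts all frequencies by the same ratio.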
78 FR 68501 - International Standards on the Transport of Dangerous Goods
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-14
... include: Review of Working papers Working Group updates Regulatory Cooperation Council (RCC) Update... sessions, can be found on the United Nations Economic Commission for Europe (UNECE) Transport Division Web... its decisions on Working Papers. The Working Papers for the 26th session of the UNSCEGHS are located...
Aerial Networking for the Implementation of Cooperative Control on Small Unmanned Aerial Systems
2013-03-01
the relay aircraft to an optimal location. Secondly, a mesh network was configured and tested. This configuration successfully relayed aircraft...functionality, such as updating navigation waypoints to each aircraft. The results suggest the system be updated with more capable modems in a mesh ...
Third COS FUV Lifetime Position: FUV Target Acquisition Parameter Update {LENA3}
NASA Astrophysics Data System (ADS)
Penton, Steven
2013-10-01
Verify the ability of the Cycle 22 COS FSW to place an isolated point source at the center of the PSA, using FUV dispersed-light target acquisition (TA) from the object and all three FUV gratings at the Third Lifetime Position (LP3). This program is modeled on the activity summary of LENA3. This program should be executed after the LP3 HV, XD spectral positions, aperture mechanism position, and focus are determined and updated. In addition, initial estimates of the LIFETIME=ALTERNATE TA FSW parameters and subarrays should be updated prior to execution of this program. After Visit 01, the subarrays will be updated. After Visit 2, the FUV WCA-to-PSA offsets will be updated. Prior to Visit 6, LV56 will be installed, which will include new values for the LP3 FUV plate scales. Visit 6 exposures use the default lifetime position (LP3). NUV imaging TAs have previously been used to determine the correct locations for FUV spectra; we follow the same procedure here. Note that the ETC runs here were made using ETC 22.2 and are therefore valid for March 2014. Some TDS drop will likely have occurred before these visits execute, but we have plenty of counts to do what we need to do in this program.
Numerical model updating technique for structures using firefly algorithm
NASA Astrophysics Data System (ADS)
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique for updating numerical models of structures in civil, mechanical, automotive, marine, aerospace, and related engineering fields. The basic concept behind this technique is to update the numerical model so that it closely matches experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be material properties, geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close agreement between the experimental and numerical models.
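The firefly algorithm itself is simple to sketch: each candidate solution ("firefly") moves toward every brighter (lower-cost) one, with an attraction that fades with squared distance plus a shrinking random step. The implementation below is a generic illustration minimizing an arbitrary cost function; the parameter values, the cost function in the test, and the cooling schedule are assumptions, not the paper's settings.

```python
import numpy as np

def firefly_minimize(f, bounds, n=15, iters=60, beta0=1.0, gamma=0.05,
                     alpha=0.2, seed=0):
    """Minimize f over a box using the firefly algorithm.

    bounds: list of (low, high) per dimension.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, (n, lo.size))
    cost = np.array([f(xi) for xi in x])
    for t in range(iters):
        a = alpha * (0.97 ** t)                # shrink the random step over time
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:          # j is brighter: attract i toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    x[i] += (beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                             + a * rng.normal(size=lo.size))
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])          # refresh brightness after the move
    best = np.argmin(cost)
    return x[best], cost[best]
```

In a model-updating setting, `f` would be the squared discrepancy between the numerical model's response (tip deflection or natural frequencies) and the experimental values, and the search variables would be the uncertain material or geometric properties.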
Pavelko, Michael T.
2010-01-01
The water-level database for the Death Valley regional groundwater flow system in Nevada and California was updated. The database includes more than 54,000 water levels collected from 1907 to 2007, from more than 1,800 wells. Water levels were assigned a primary flag and multiple secondary flags that describe hydrologic conditions and trends at the time of the measurement and identify pertinent information about the well or water-level measurement. The flags provide a subjective measure of the relative accuracy of the measurements and are used to identify which water levels are appropriate for calculating head observations in a regional transient groundwater flow model. Included in the report appendix are all water-level data and their flags, selected well data, and an interactive spreadsheet for viewing hydrographs and well locations.
Factors influencing infants’ ability to update object representations in memory
Moher, Mariko; Feigenson, Lisa
2013-01-01
Remembering persisting objects over occlusion is critical to representing a stable environment. Infants remember hidden objects at multiple locations and can update their representation of a hidden array when an object is added or subtracted. However, the factors influencing these updating abilities have received little systematic exploration. Here we examined the flexibility of infants’ ability to update object representations. We tested 11-month-olds in a looking-time task in which objects were added to or subtracted from two hidden arrays. Across five experiments, infants successfully updated their representations of hidden arrays when the updating occurred successively at one array before beginning at the other. But when updating required alternating between two arrays, infants failed. However, simply connecting the two arrays with a thin strip of foam-core led infants to succeed. Our results suggest that infants’ construal of an event strongly affects their ability to update memory representations of hidden objects. When construing an event as containing multiple updates to the same array, infants succeed, but when construing the event as requiring the revisiting and updating of previously attended arrays, infants fail. PMID:24049245
Spatial Updating and the Maintenance of Visual Constancy
Klier, Eliana M.; Angelaki, Dora E.
2008-01-01
Spatial updating is the means by which we keep track of the locations of objects in space even as we move. Four decades of research have shown that humans and non-human primates can take the amplitude and direction of intervening movements into account, including saccades (both head-fixed and head-free), pursuit, whole-body rotations and translations. At the neuronal level, spatial updating is thought to be maintained by receptive field locations that shift with changes in gaze, and evidence for such shifts has been shown in several cortical areas. These regions receive information about the intervening movement from several sources including motor efference copies when a voluntary movement is made and vestibular/somatosensory signals when the body is in motion. Many of these updating signals arise from brainstem regions that monitor our ongoing movements and subsequently transmit this information to the cortex via pathways that likely include the thalamus. Several issues of debate include (1) the relative contribution of extra-retinal sensory and efference copy signals to spatial updating, (2) the source of an updating signal for real life, three-dimensional motion that cannot arise from brain areas encoding only two-dimensional commands, and (3) the reference frames used by the brain to integrate updating signals from various sources. This review highlights the relevant spatial updating studies and provides a summary of the field today. We find that spatial constancy is maintained by a highly evolved neural mechanism that keeps track of our movements, transmits this information to relevant brain regions, and then uses this information to change the way in which single neurons respond. In this way, we are able to keep track of relevant objects in the outside world and interact with them in meaningful ways. PMID:18786618
Spatio-Semantic Comparison of Large 3d City Models in Citygml Using a Graph Database
NASA Astrophysics Data System (ADS)
Nguyen, S. H.; Yao, Z.; Kolbe, T. H.
2017-10-01
A city may have multiple CityGML documents recorded at different times or surveyed by different users. To analyse the city's evolution over a given period of time, as well as to update or edit the city model without negating modifications made by other users, it is of utmost importance to first compare, detect and locate spatio-semantic changes between CityGML datasets. This is however difficult due to the fact that CityGML elements belong to a complex hierarchical structure containing multi-level deep associations, which can basically be considered as a graph. Moreover, CityGML allows multiple syntactic ways to define an object leading to syntactic ambiguities in the exchange format. Furthermore, CityGML is capable of including not only 3D urban objects' graphical appearances but also their semantic properties. Since to date, no known algorithm is capable of detecting spatio-semantic changes in CityGML documents, a frequent approach is to replace the older models completely with the newer ones, which not only costs computational resources, but also loses track of collaborative and chronological changes. Thus, this research proposes an approach capable of comparing two arbitrarily large-sized CityGML documents on both semantic and geometric level. Detected deviations are then attached to their respective sources and can easily be retrieved on demand. As a result, updating a 3D city model using this approach is much more efficient as only real changes are committed. To achieve this, the research employs a graph database as the main data structure for storing and processing CityGML datasets in three major steps: mapping, matching and updating. The mapping process transforms input CityGML documents into respective graph representations. The matching process compares these graphs and attaches edit operations on the fly. Found changes can then be executed using the Web Feature Service (WFS), the standard interface for updating geographical features across the web.
Zimmermann, Kathrin; Eschen, Anne
2017-04-01
Object-location memory (OLM) enables us to keep track of the locations of objects in our environment. The neurocognitive model of OLM (Postma, A., Kessels, R. P. C., & Van Asselen, M. (2004). The neuropsychology of object-location memory. In G. L. Allen (Ed.), Human spatial memory: Remembering where (pp. 143-160). Mahwah, NJ: Lawrence Erlbaum, Postma, A., Kessels, R. P. C., & Van Asselen, M. (2008). How the brain remembers and forgets where things are: The neurocognition of object-location memory. Neuroscience & Biobehavioral Reviews, 32, 1339-1345. doi: 10.1016/j.neubiorev.2008.05.001 ) proposes that distinct brain regions are specialised for different subprocesses of OLM (object processing, location processing, and object-location binding; categorical and coordinate OLM; egocentric and allocentric OLM). It was based mainly on findings from lesion studies. However, recent episodic memory studies point to a contribution of additional or different brain regions to object and location processing within episodic OLM. To evaluate and update the neurocognitive model of OLM, we therefore conducted a systematic literature search for lesion as well as functional neuroimaging studies contrasting small-space episodic OLM with object memory or location memory. We identified 10 relevant lesion studies and 8 relevant functional neuroimaging studies. We could confirm some of the proposals of the neurocognitive model of OLM, but also differing hypotheses from episodic memory research, about which brain regions are involved in the different subprocesses of small-space episodic OLM. In addition, we were able to identify new brain regions as well as important research gaps.
Vocabulary Instruction for Secondary Students with Reading Disabilities: An Updated Research Review
ERIC Educational Resources Information Center
Kuder, S. Jay
2017-01-01
This article presents an update and extension of the research on instructional methods for vocabulary learning by secondary-age students with learning disabilities. Seven studies that have been published since the last comprehensive review of the research were located. Four instructional methods were found to be the most effective: mnemonic…
Elastic Velocity Updating through Image-Domain Tomographic Inversion of Passive Seismic Data
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2014-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from these data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that are often not available for earthquakes of the magnitudes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits images of the earthquake source using various imaging conditions based upon the P- and S-wavefield data. We generate image volumes by back propagating data through initial models and then applying a correlation-based imaging condition. We use the P-wavefield autocorrelation, S-wavefield autocorrelation, and P-S wavefield cross-correlation images. Inconsistencies in the images form the residuals, which are used to update the P- and S-wave velocity models through adjoint-state tomography. Because the image volumes are constructed from all trace data, the signal-to-noise in this space is increased compared to the individual traces. Moreover, the method eliminates the need for picking and does not require any estimation of the source location and timing. Initial tests show that with a reasonable source distribution and acquisition array, velocity anomalies can be recovered. Future tests will apply this methodology to other scales, from laboratory to global.
Using SCADA Data, Field Studies, and Real-Time Modeling to ...
EPA has been providing technical assistance to the City of Flint and the State of Michigan in response to the drinking water lead contamination incident. Responders quickly recognized the need for a water distribution system hydraulic model to provide insight on flow patterns and water quality as well as to evaluate changes being made to the system operation to enhance corrosion control and improve chlorine residuals. EPA partnered with the City of Flint and the Michigan Department of Environmental Quality to update and calibrate an existing hydraulic model. The City provided SCADA data, GIS data, customer billing data, valve status data, design diagrams, and information on operations. Team members visited all facilities and updated pump and valve types, sizes, settings, elevations, and pump discharge curves. Several technologies were used to support this work including the EPANET-RTX based Polaris real-time modeling software, WaterGEMS, ArcGIS, EPANET, and RTX:LINK. Field studies were conducted to collect pressure and flow data from more than 25 locations throughout the distribution system. An assessment of the model performance compared model predictions for flow, pressure, and tank levels to SCADA and field data, resulting in error measurements for each data stream over the time period analyzed. Now, the calibrated model can be used with a known confidence in its performance to evaluate hydraulic and water quality problems, and the model can be easily
Timing Interactions in Social Simulations: The Voter Model
NASA Astrophysics Data System (ADS)
Fernández-Gracia, Juan; Eguíluz, Víctor M.; Miguel, Maxi San
The recent availability of huge high-resolution datasets on human activities has revealed the heavy-tailed nature of interevent time distributions. In social simulations of interacting agents, the standard approach has been to use Poisson processes to update the state of the agents, which gives rise to very homogeneous activity patterns with a well-defined characteristic interevent time. Taking the voter model as a paradigmatic opinion model, we review the standard update rules and propose two new update rules that are able to account for heterogeneous activity patterns. Under the new rules, each node is updated with a probability that depends on the time since the node's last event, where an event can be an update attempt (exogenous update) or a change of state (endogenous update). We find that both update rules can give rise to power-law interevent time distributions, although the endogenous one does so more robustly. Moreover, under the exogenous update rule and the standard update rules the voter model does not reach consensus in the infinite-size limit, while under the endogenous rule there exists a coarsening process that drives the system toward consensus configurations.
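An endogenous-style update rule can be sketched in a few lines: a node's activation probability decays with the time since its own last change of state, so long-dormant nodes update rarely and interevent times become broadly distributed. The ring topology, the specific `b / tau` form, and all parameter values below are assumptions for illustration, not the paper's exact rules.

```python
import numpy as np

def voter_endogenous(n=100, steps=20000, b=0.5, seed=1):
    """Voter model on a ring where node i is activated with probability
    b / tau_i, with tau_i the time since node i last changed state
    (endogenous update). Returns final states and collected interevent times."""
    rng = np.random.default_rng(seed)
    state = rng.integers(0, 2, n)
    last_change = np.zeros(n)
    intervals = []
    for t in range(1, steps + 1):
        i = rng.integers(n)
        tau = t - last_change[i]
        if rng.random() < min(1.0, b / tau):       # aging: old nodes update rarely
            j = (i + rng.choice([-1, 1])) % n      # copy a random ring neighbour
            if state[j] != state[i]:
                state[i] = state[j]
                intervals.append(tau)
                last_change[i] = t
    return state, np.array(intervals)
```

Under a standard Poisson update, `tau` would be ignored and the activation probability would be constant, producing narrowly distributed interevent times; the `b / tau` aging term is what broadens the distribution.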
Derivation of low flow frequency distributions under human activities and its implications
NASA Astrophysics Data System (ADS)
Gao, Shida; Liu, Pan; Pan, Zhengke; Ming, Bo; Guo, Shenglian; Xiong, Lihua
2017-06-01
Low flow, the minimum streamflow in dry seasons, is crucial to water supply, agricultural irrigation, and navigation. Human activities, such as groundwater pumping, influence low flow severely. In order to derive low flow frequency distribution functions under human activities, this study incorporates groundwater pumping and return flow as variables in the recession process. The steps are as follows: (1) the original low flow without human activities is assumed to follow a Pearson type III distribution, (2) the probability distribution of climatic dry spell periods is derived based on a base flow recession model, (3) the base flow recession model is updated under human activities, and (4) the low flow distribution under human activities is obtained from the derived probability distribution of dry spell periods and the updated base flow recession model. Linear and nonlinear reservoir models are used to describe the base flow recession, respectively. The Wudinghe basin is chosen for the case study, with daily streamflow observations during 1958-2000. Results show that human activities change the location parameter of the low flow frequency curve for the linear reservoir model, while they alter the form of the frequency distribution function for the nonlinear one. This indicates that simply altering the parameters of the low flow frequency distribution is not always feasible for tackling the changing environment.
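For the linear reservoir case, the recession can be written as Q(t) = Q0·exp(-t/k), and treating net pumping as a constant subtraction simply shifts the recessed flow, which is consistent with the finding that only the location parameter of the low-flow distribution changes. The sketch below encodes that assumption (constant net pumping, no return-flow dynamics); it is an illustration, not the study's full derivation.

```python
import numpy as np

def linear_reservoir_low_flow(q0, k, t, pumping=0.0):
    """Linear-reservoir recession Q(t) = Q0 * exp(-t/k), with a constant
    net groundwater pumping rate subtracted from the recessed flow
    (flow is floored at zero)."""
    q = q0 * np.exp(-np.asarray(t, float) / k)
    return np.maximum(q - pumping, 0.0)
```

A nonlinear reservoir (e.g., Q proportional to a power of storage) would make the pumping effect flow-dependent, which is why the distribution's shape, not just its location, changes in that case.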
Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions
Bouwense, Matthew D.
2017-09-01
ERM model analysis for adaptation to hydrological model errors
NASA Astrophysics Data System (ADS)
Baymani-Nezhad, M.; Han, D.
2018-05-01
Hydrological conditions change continuously, and these changes generate errors in flood forecasting models, leading to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved, owing to lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecast. Hence, to control these errors, researchers have proposed several methods for updating rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with three types of error common in hydrological modelling: timing, shape, and volume errors. A new lumped model, the ERM model, was selected for this study to evaluate whether its parameters can be used in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.
A review of statistical updating methods for clinical prediction models.
Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew
2018-01-01
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, using a breadth of complementary statistical methods, rather than developing a new clinical prediction model from scratch.
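The simplest of the coefficient-updating strategies, intercept-only recalibration ("calibration-in-the-large"), can be sketched as follows; the function and data names are illustrative assumptions, not taken from the paper:

```python
import math

def recalibrate_intercept(linear_predictors, outcomes, tol=1e-10):
    """Refit only the intercept of an existing logistic prediction model
    so that the average predicted risk matches the observed event rate
    on new data. Newton iterations on the one-parameter log-likelihood."""
    a = 0.0
    for _ in range(100):
        p = [1.0 / (1.0 + math.exp(-(a + lp))) for lp in linear_predictors]
        grad = sum(y - pi for y, pi in zip(outcomes, p))   # score
        hess = -sum(pi * (1.0 - pi) for pi in p)           # curvature
        step = grad / hess
        a -= step
        if abs(step) < tol:
            break
    return a

# Existing model predicts 50% risk (lp = 0) for everyone, but 3 of 4
# new patients had the event, so the intercept shifts upward.
delta_intercept = recalibrate_intercept([0.0, 0.0, 0.0, 0.0], [1, 1, 1, 0])
```

More elaborate strategies in the review's taxonomy (full re-estimation, meta-models, dynamic updating) generalize this idea to more coefficients or to a time-evolving model.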
New highway accident location manual for Missouri.
DOT National Transportation Integrated Search
2013-12-01
The Missouri HAL manual is used to identify, analyze, and correct high crash locations, and had not been updated since 1999. This new edition brings the manual up to date while incorporating the methodology of the national Highway Safety Manual ...
Tuning Fractures With Dynamic Data
NASA Astrophysics Data System (ADS)
Yao, Mengbi; Chang, Haibin; Li, Xiang; Zhang, Dongxiao
2018-02-01
Flow in fractured porous media is crucial for the production of oil and gas reservoirs and the exploitation of geothermal energy. Flow behavior in such media is mainly dictated by the distribution of fractures. Measuring and inferring the distribution of fractures is subject to large uncertainty, which, in turn, leads to great uncertainty in the prediction of flow behavior. Inverse modeling with dynamic data can help constrain fracture distributions, thus reducing the uncertainty of flow prediction. However, inverse modeling for flow in fractured reservoirs is challenging, owing to the discrete and non-Gaussian distribution of fractures, as well as the strong nonlinearity in the relationship between flow responses and model parameters. In this work, building upon a series of recent advances, an inverse modeling approach is proposed to efficiently update the flow model to match the dynamic data while retaining geological realism in the distribution of fractures. In this approach, the Hough-transform method is employed to parameterize non-Gaussian fracture fields with continuous parameter fields, thus providing the desirable properties required by many inverse modeling methods. In addition, a recently developed forward simulation method, the embedded discrete fracture method (EDFM), is utilized to model the fractures. The EDFM maintains computational efficiency while preserving the ability to capture the geometrical details of fractures, because the matrix is discretized on a structured grid while the fractures, handled as planes, are inserted into the matrix grid. The combination of the Hough representation of fractures with the EDFM makes it possible to tune the fractures (by updating their existence, location, orientation, length, and other properties) without requiring either unstructured grids or regridding during updating.
Such a treatment is amenable to numerous inverse modeling approaches, such as the iterative inverse modeling method employed in this study, which is capable of dealing with strongly nonlinear problems. A series of numerical case studies with increasing complexity are set up to examine the performance of the proposed approach.
Cleanups In My Community (CIMC) - Incidents of National Significance, National Layer
This data layer provides access to Incidents of National Significance as part of the CIMC web service. Incidents of National Significance include all Presidentially-declared emergencies, major disasters, and catastrophes. Multiple federal departments and agencies, including EPA, coordinate actions to help prevent, prepare for, respond to, and recover from Incidents of National Significance. The Incidents of National Significance shown in this web service are derived from the epa.gov website and include links to the relevant web pages within the attribute table. Data about Incidents of National Significance are located on their own EPA web pages, and CIMC links to those pages. The CIMC web service was initially published in 2013, but the data are updated on the 18th of each month. The full schedule for data updates in CIMC is located here: https://iaspub.epa.gov/enviro/data_update_v2.
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time-difference values of the input and output variables are used as training samples to construct the model, which reduces the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To reduce the model's high updating frequency, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment; the model is updated only when the confidence value is updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information, and reflect the process characteristics accurately.
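The time-difference preprocessing step described above can be illustrated in a few lines; the variable names are assumptions for illustration:

```python
def time_difference(samples):
    """Replace each sample vector with its difference from the previous
    sample. Differencing suppresses slow drift, so the downstream
    (e.g., recursive PLS) model is trained on time-difference values of
    the input and output variables rather than raw measurements."""
    return [[cur_v - prev_v for prev_v, cur_v in zip(prev, cur)]
            for prev, cur in zip(samples, samples[1:])]

# Three consecutive two-variable process samples -> two difference vectors.
deltas = time_difference([[1.0, 2.0], [3.0, 5.0], [6.0, 9.0]])
```

A slowly drifting offset common to consecutive samples cancels in each difference, which is the nonstationarity-handling property the method relies on.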
Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B
2006-08-01
Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan, and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up, via a subjective-intuitive integration approach. A method that retains the wisdom of this general paradigm but formally borrows information from other scans would add objectivity and rigor. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluated the performance of our method using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.
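The core idea, shrinking one scan's statistic toward prior information from other scans in proportion to precision, can be sketched generically. This is a deliberate simplification under Gaussian assumptions; the paper's actual empirical Bayes update additionally handles differing marker maps and between-study heterogeneity:

```python
def precision_weighted_update(z_own, var_own, z_prior, var_prior):
    """Combine a study's linkage statistic with a prior derived from
    other genome scans, weighting each by its precision (1/variance).
    Returns the posterior estimate and its (smaller) variance."""
    w_own, w_prior = 1.0 / var_own, 1.0 / var_prior
    z_post = (w_own * z_own + w_prior * z_prior) / (w_own + w_prior)
    var_post = 1.0 / (w_own + w_prior)
    return z_post, var_post

# Equal precision: the posterior sits halfway between scan and prior,
# with half the variance (hence a narrower confidence interval).
z_post, var_post = precision_weighted_update(3.0, 1.0, 1.0, 1.0)
```

The narrower posterior variance is what yields the tighter QTL-location confidence intervals reported in the abstract.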
Update rules and interevent time distributions: slow ordering versus no ordering in the voter model.
Fernández-Gracia, J; Eguíluz, V M; San Miguel, M
2011-07-01
We introduce a general methodology of update rules accounting for arbitrary interevent time (IET) distributions in simulations of interacting agents. We consider in particular update rules that depend on the state of the agent, so that the update becomes part of the dynamical model. As an illustration we consider the voter model in fully connected, random, and scale-free networks with an activation probability inversely proportional to the time since the last action, where an action can be an update attempt (an exogenous update) or a change of state (an endogenous update). We find that in the thermodynamic limit, at variance with standard updates and the exogenous update, the system orders slowly for the endogenous update. The approach to the absorbing state is characterized by a power-law decay of the density of interfaces, and we observe that the mean time to reach the absorbing state might not be well defined. The IET distributions resulting from both update schemes show power-law tails.
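A toy implementation of the endogenous update rule on a fully connected network can make the mechanism concrete; the parameter choices and function name are illustrative, not from the paper:

```python
import random

def voter_endogenous(n=20, max_steps=200000, seed=3):
    """Voter model on a complete graph where a picked node activates
    with probability inversely proportional to the time since its last
    endogenous event (its last change of state). Returns the step at
    which consensus is reached, or None if max_steps is exhausted."""
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    last_change = [0] * n                 # time of each node's last state change
    for t in range(1, max_steps + 1):
        if len(set(state)) == 1:
            return t                      # absorbing (consensus) configuration
        i = rng.randrange(n)
        # activation probability ~ 1 / (time since last endogenous event)
        if rng.random() < 1.0 / (t - last_change[i] + 1):
            j = rng.randrange(n)          # random neighbour on a complete graph
            if state[j] != state[i]:
                state[i] = state[j]
                last_change[i] = t        # endogenous event: state changed
    return None

t_consensus = voter_endogenous(n=10, max_steps=10**5, seed=2)
```

Because a node that has not changed state for a long time becomes exponentially unlikely to activate, interevent times become broadly distributed, which is the heterogeneity this update rule is designed to reproduce.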
Modeling job sites in real time to improve safety during equipment operation
NASA Astrophysics Data System (ADS)
Caldas, Carlos H.; Haas, Carl T.; Liapi, Katherine A.; Teizer, Jochen
2006-03-01
Real-time three-dimensional (3D) modeling of work zones has received increasing interest as a way to perform equipment operation faster, more safely, and more precisely. In addition, hazardous job site environments such as those on construction sites call for new devices that can rapidly and actively model static and dynamic objects. Flash LADAR (Laser Detection and Ranging) cameras are one of the recent technology developments that allow rapid spatial data acquisition of scenes. Algorithms that can process and interpret the output of such enabling technologies into three-dimensional models have the potential to significantly improve work processes. One particularly important application is modeling the location and path of objects in the trajectory of heavy construction equipment. Detecting and mapping people, materials, and equipment into a three-dimensional computer model allows analysis of location and path, and can be used to limit or restrict access to hazardous areas. This paper presents experiments and results of a real-time three-dimensional modeling technique to detect static and moving objects within the field of view of a laser range scanning device with a high frame update rate. Applications related to heavy equipment operations on transportation and construction job sites are specified.
Central Facilities Area Sewage Lagoon Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giesbrecht, Alan
2015-03-01
The Central Facilities Area (CFA), located in Butte County, Idaho, at Idaho National Laboratory (INL), has an existing wastewater system to collect and treat sanitary wastewater and non-contact cooling water from the facility. The existing treatment facility consists of three cells: Cell 1 has a surface area of 1.7 acres, Cell 2 has a surface area of 10.3 acres, and Cell 3 has a surface area of 0.5 acres. If flows exceed the evaporative capacity of the cells, wastewater is discharged to a 73.5-acre land application site that utilizes a center pivot irrigation sprinkler system. The purpose of this current study is to update the analysis and conclusions of the December 2013 study. In this current study, the new seepage rate and influent flow rate data have been used to update the calculations, model, and analysis.
Groundwater availability of the Denver Basin aquifer system, Colorado
Paschke, Suzanne
2011-01-01
The Denver Basin aquifer system is a critical water resource for growing municipal, industrial, and domestic uses along the semiarid Front Range urban corridor of Colorado. The confined bedrock aquifer system is located along the eastern edge of the Rocky Mountain Front Range where the mountains meet the Great Plains physiographic province. Continued population growth and the resulting need for additional water supplies in the Denver Basin and throughout the western United States emphasize the need to continually monitor and reassess the availability of groundwater resources. In 2004, the U.S. Geological Survey initiated large-scale regional studies to provide updated groundwater-availability assessments of important principal aquifers across the United States, including the Denver Basin. This study of the Denver Basin aquifer system evaluates the hydrologic effects of continued pumping and documents an updated groundwater flow model useful for appraisal of hydrologic conditions.
A National Disturbance Modeling System to Support Ecological Carbon Sequestration Assessments
NASA Astrophysics Data System (ADS)
Hawbaker, T. J.; Rollins, M. G.; Volegmann, J. E.; Shi, H.; Sohl, T. L.
2009-12-01
The U.S. Geological Survey (USGS) is prototyping a methodology to fulfill requirements of Section 712 of the Energy Independence and Security Act (EISA) of 2007. At the core of the EISA requirements is the development of a methodology to complete a two-year assessment of current carbon stocks and other greenhouse gas (GHG) fluxes, and potential increases for ecological carbon sequestration under a range of future climate changes, land-use / land-cover configurations, and policy, economic and management scenarios. Disturbances, especially fire, affect vegetation dynamics and ecosystem processes, and can also introduce substantial uncertainty and risk to the efficacy of long-term carbon sequestration strategies. Thus, the potential impacts of disturbances need to be considered under different scenarios. As part of USGS efforts to meet EISA requirements, we developed the National Disturbance Modeling System (NDMS) using a series of statistical and process-based simulation models. NDMS produces spatially-explicit forecasts of future disturbance locations and severity, and the resulting effects on vegetation dynamics. NDMS is embedded within the Forecasting Scenarios of Future Land Cover (FORE-SCE) model and informs the General Ensemble Biogeochemical Modeling System (GEMS) for quantifying carbon stocks and GHG fluxes. For fires, NDMS relies on existing disturbance histories, such as the Landsat derived Monitoring Trends in Burn Severity (MTBS) and Vegetation Change Tracker (VCT) data being used to update LANDFIRE fuels data. The MTBS and VCT data are used to parameterize models predicting the number and size of fires in relation to climate, land-use/land-cover change, and socioeconomic variables. The locations of individual fire ignitions are determined by an ignition probability surface and then FARSITE is used to simulate fire spread in response to weather, fuels, and topography. 
Following the fire spread simulations, a burn severity model is used to determine annual changes in biomass pools. Vegetation succession among LANDFIRE vegetation types is initiated using burn perimeter and severity data at the end of each annual simulation. Results from NDMS are used to update land-use/land-cover layers used by FORE-SCE and also transferred to GEMS for quantifying and updating carbon stocks and greenhouse gas fluxes. In this presentation, we present: 1) an overview of NDMS and its role in USGS's national ecological carbon sequestration assessment; 2) validation of NDMS using historic data; and 3) initial forecasts of disturbances for the southeastern United States and their impacts on greenhouse gas emissions, and post-fire carbon stocks and fluxes.
Improving precipitation simulation from updated surface characteristics in South America
NASA Astrophysics Data System (ADS)
Pereira, Gabriel; Silva, Maria Elisa Siqueira; Moraes, Elisabete Caria; Chiquetto, Júlio Barboza; da Silva Cardozo, Francielle
2017-07-01
Land use and land cover maps and their physical-chemical and biological properties are important variables in the numerical modeling of Earth systems. In this context, the main objective of this study is to analyze the improvements resulting from the land use and land cover map update in numerical simulations performed using the Regional Climate Model system version 4 (RegCM4), as well as the seasonal variations of physical parameters used by the Biosphere Atmosphere Transfer Scheme (BATS). In general, the update of the South America 2007 land use and land cover map, used by the BATS, improved the simulation of precipitation by 10 %, increasing the mean temporal correlation coefficient, compared to observed data, from 0.84 to 0.92 (significant at p < 0.05, Student's t test). Correspondingly, the simulations performed with adjustments in maximum fractional vegetation cover, in visible and shortwave infrared reflectance, and in the leaf area index, showed a good agreement for maximum and minimum temperature, with values closer to observed data. The changes in physical parameters and land use updating in BATS/RegCM4 reduced overestimation of simulated precipitation from 19 to 7 % (significant at p < 0.05, Student's t test). Regarding evapotranspiration and precipitation, the most significant differences due to land use updating were located (1) in the Amazon deforestation arc; (2) around the Brazil-Bolivia border (in the Brazilian Pantanal wetlands); (3) in the Northeast region of Brazil; (4) in northwestern Paraguay; and (5) in the River Plate Basin, in Argentina. Moreover, the main precipitation differences between sensitivity and control experiments occurred during the rainy months in central-north South America (October to March). These were associated with a displacement in the South Atlantic convergence zone (SACZ) positioning, presenting a spatial pattern of alternated areas with higher and lower precipitation rates. 
These differences arise from the replacement of tropical rainforest with pasture and agriculture, and of agricultural areas with pasture, scrubland, and deciduous forest.
Addressing the Big-Earth-Data Variety Challenge with the Hierarchical Triangular Mesh
NASA Technical Reports Server (NTRS)
Rilee, Michael L.; Kuo, Kwo-Sen; Clune, Thomas; Oloso, Amidu; Brown, Paul G.; Yu, Honfeng
2016-01-01
We have implemented an updated Hierarchical Triangular Mesh (HTM) as the basis for a unified data model and an indexing scheme for geoscience data to address the variety challenge of Big Earth Data. We observe that, in the absence of variety, the volume challenge of Big Data is relatively easily addressed with parallel processing. The more important challenge in achieving optimal value with a Big Data solution for Earth Science (ES) data analysis, however, is achieving good scalability with variety. With HTM unifying at least the three popular data models used by current ES data products, i.e., Grid, Swath, and Point, data preparation time for integrative analysis of diverse datasets can be drastically reduced and better variety scaling can be achieved. In addition, since HTM is also an indexing scheme, when it is used to index all ES datasets, data placement alignment (or co-location) on the shared-nothing architecture, on which most Big Data systems are based, is guaranteed and better performance is ensured. Moreover, our updated HTM encoding turns most geospatial set operations into integer interval operations, gaining further performance advantages.
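The claim that geospatial set operations reduce to integer interval operations can be illustrated with a generic merge-style intersection of sorted index-interval lists; this is a sketch of the general technique, not the HTM implementation itself:

```python
def interval_intersection(a, b):
    """Intersect two sorted lists of half-open integer intervals
    [lo, hi). When regions are encoded as ranges of a space-filling
    index (as in an HTM-style scheme), spatial intersection of two
    regions becomes exactly this linear-time merge."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        lo = max(a[i][0], b[j][0])
        hi = min(a[i][1], b[j][1])
        if lo < hi:
            out.append((lo, hi))
        # advance whichever interval ends first
        if a[i][1] < b[j][1]:
            i += 1
        else:
            j += 1
    return out

overlap = interval_intersection([(0, 10), (20, 30)], [(5, 25)])
```

Because both inputs stay sorted, the merge touches each interval once, which is where the performance advantage over geometric polygon clipping comes from.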
The Role of Categories and Spatial Cuing in Global-Scale Location Estimates
ERIC Educational Resources Information Center
Friedman, Alinda
2009-01-01
Seven independent groups estimated the location of North American cities using both spatial and numeric response modes and a variety of perceptual and memory supports. These supports included having location markers for each city color coded by nation and identified by name, giving participants the opportunity to see and update all their estimates…
Indoor Spatial Updating with Reduced Visual Information
Legge, Gordon E.; Gage, Rachel; Baek, Yihwa; Bochsler, Tiana M.
2016-01-01
Purpose Spatial updating refers to the ability to keep track of position and orientation while moving through an environment. People with impaired vision may be less accurate in spatial updating with adverse consequences for indoor navigation. In this study, we asked how artificial restrictions on visual acuity and field size affect spatial updating, and also judgments of the size of rooms. Methods Normally sighted young adults were tested with artificial restriction of acuity in Mild Blur (Snellen 20/135) and Severe Blur (Snellen 20/900) conditions, and a Narrow Field (8°) condition. The subjects estimated the dimensions of seven rectangular rooms with and without these visual restrictions. They were also guided along three-segment paths in the rooms. At the end of each path, they were asked to estimate the distance and direction to the starting location. In Experiment 1, the subjects walked along the path. In Experiment 2, they were pushed in a wheelchair to determine if reduced proprioceptive input would result in poorer spatial updating. Results With unrestricted vision, mean Weber fractions for room-size estimates were near 20%. Severe Blur but not Mild Blur yielded larger errors in room-size judgments. The Narrow Field was associated with increased error, but less than with Severe Blur. There was no effect of visual restriction on estimates of distance back to the starting location, and only Severe Blur yielded larger errors in the direction estimates. Contrary to expectation, the wheelchair subjects did not exhibit poorer updating performance than the walking subjects, nor did they show greater dependence on visual condition. Discussion If our results generalize to people with low vision, severe deficits in acuity or field will adversely affect the ability to judge the size of indoor spaces, but updating of position and orientation may be less affected by visual impairment. PMID:26943674
Cho, Chulhee; Choi, Jae-Young; Jeong, Jongpil; Chung, Tai-Myoung
2017-01-01
Lately, the Internet of Things (IoT) has been introduced in medical services to provide global connection among patients, sensors, and all nearby things. The principal purpose of this global connection is to provide context awareness, bringing convenience to a patient's life and implementing clinical processes more effectively. In health care, monitoring of a patient's biosignals has to be performed continuously while the patient moves inside and outside the hospital. Also, to monitor the accurate location and biosignals of the patient, appropriate mobility management is necessary to maintain connection between the patient and the hospital network. In this paper, a binding update scheme on PMIPv6, which reduces signaling traffic during location updates through a Virtual LMA (VLMA) on top of the original Local Mobility Anchor (LMA) domain, is proposed to reduce the total cost. If a Mobile Node (MN) moves to a Mobile Access Gateway (MAG) located at the boundary of an adjacent LMA domain, the MN changes itself into a virtual mode, and this movement is treated as taking place within the VLMA domain. In the proposed scheme, MAGs eliminate global binding updates for MNs between LMA domains and significantly reduce packet loss and latency by eliminating the handoff between LMAs. The performance analysis results show that the proposed scheme significantly outperforms PMIPv6 and HMIPv6 in terms of the binding update rate per user and average handoff latency.
Updated Status and Performance at the Fourth HST COS FUV Lifetime Position
NASA Astrophysics Data System (ADS)
Taylor, Joanna M.; De Rosa, Gisella; Fix, Mees B.; Fox, Andrew; Indriolo, Nick; James, Bethan; Jedrzejewski, Robert I.; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Proffitt, Charles R.; Rafelski, Marc; Roman-Duval, Julia; Sahnow, David J.; Snyder, Elaine M.; Sonnentrucker, Paule; White, James
2017-06-01
To mitigate the adverse effects of gain sag on the spectral quality and accuracy of Hubble Space Telescope's Cosmic Origins Spectrograph FUV observations, COS FUV spectra will be moved from Lifetime Position 3 (LP3) to a new pristine location on the detectors at LP4 in July 2017. To achieve maximal spectral resolution while preserving detector area, the spectra will be shifted in the cross-dispersion (XD) direction by -2.5" (about -31 pixels) from LP3, or -5" (about -62 pixels) from the original LP1. At LP4, the wavelength calibration lamp spectrum can overlap with the previously gain-sagged LP2 PSA spectrum location. If lamp lines fall in the gain sag holes from LP2, line ratios can change and the wavelength calibration can fail. As a result, we have updated the Wavecal Parameters Reference Table and CalCOS to address this issue. Additionally, it was necessary to extend the current geometric correction to encompass the entire LP4 location. Here we present 2-D template profiles and 1-D spectral trace centroids derived at LP4, as well as LP4-related updates to the wavelength calibration and geometric correction.
78 FR 42700 - Radio Broadcasting Services; Various Locations
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-17
FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 73 [DA 13-1376] Radio Broadcasting Services; Various Locations AGENCY: Federal Communications Commission. ACTION: Final rule. SUMMARY: The Audio Division updates the FM Table of Allotments to reinstate five vacant FM allotments in various communities in Maryland...
76 FR 67375 - Radio Broadcasting Services; Various Locations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 73 [DA 11-1689] Radio Broadcasting Services; Various Locations AGENCY: Federal Communications Commission. ACTION: Final rule. SUMMARY: The Audio Division, on its own motion, updates the FM Table of Allotments to reinstate certain vacant FM allotments. Formerly...
USDA-ARS?s Scientific Manuscript database
Potato research at the Red River Valley Agricultural Research Center is conducted by the Sugarbeet & Potato Research Unit at two locations: the Northern Crop Science Laboratory in Fargo, ND and the Potato Research Worksite located in East Grand Forks, MN. Research in Fargo is laboratory oriented an...
Porphyry Copper Deposits of the World: Database and Grade and Tonnage Models, 2008
Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.
2008-01-01
This report is an update of earlier publications about porphyry copper deposits (Singer, Berger, and Moring, 2002, 2005). The update was necessary because of new information about substantial increases in resources in some deposits and because we revised locations of some deposits so that they are consistent with images in GoogleEarth. In this report we have added new porphyry copper deposits and removed a few incorrectly classed deposits. In addition, some errors have been corrected and a number of deposits have had some information, such as grades, tonnages, locations, or ages revised. Colleagues have helped identify places where improvements were needed. Mineral deposit models are important in exploration planning and quantitative resource assessments for a number of reasons including: (1) grades and tonnages among deposit types are significantly different, and (2) many types occur in different geologic settings that can be identified from geologic maps. Mineral deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables or for robust estimation of undiscovered deposits; thus we need mineral-deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral deposit models play the central role in transforming geoscience information to a form useful to policy makers.
The foundation of mineral deposit models is information about known deposits. The purpose of this publication is to make this kind of information available in digital form for porphyry copper deposits. The consistently defined deposits in this file provide the foundation for grade and tonnage models included here and for mineral deposit density models (Singer and others, 2005; Singer, 2008).
Bayesian updating in a fault tree model for shipwreck risk assessment.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M
2017-07-15
Shipwrecks containing oil and other hazardous substances have been deteriorating on the seabeds of the world for many years and are threatening to pollute the marine environment. The status of the wrecks and the potential volume of harmful substances present in the wrecks are affected by a multitude of uncertainties. Each shipwreck poses a unique threat, the nature of which is determined by the structural status of the wreck and possible damage resulting from hazardous activities that could potentially cause a discharge. Decision support is required to ensure the efficiency of the prioritisation process and the allocation of resources required to carry out risk mitigation measures. Whilst risk assessments can provide the requisite decision support, comprehensive methods that take into account key uncertainties related to shipwrecks are limited. The aim of this paper was to develop a method for estimating the probability of discharge of hazardous substances from shipwrecks. The method is based on Bayesian updating of generic information on the hazards posed by different activities in the surroundings of the wreck, with information on site-specific and wreck-specific conditions in a fault tree model. Bayesian updating is performed using Monte Carlo simulations for estimating the probability of a discharge of hazardous substances and formal handling of intrinsic uncertainties. An example application involving two wrecks located off the Swedish coast is presented. Results show the estimated probability of opening, discharge and volume of the discharge for the two wrecks and illustrate the capability of the model to provide decision support. Together with consequence estimations of a discharge of hazardous substances, the suggested model enables comprehensive and probabilistic risk assessments of shipwrecks to be made. Copyright © 2017 Elsevier B.V. All rights reserved.
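The Bayesian updating of generic hazard rates with wreck-specific evidence, propagated through a fault tree by Monte Carlo simulation, can be sketched as follows. The two basic events, their Beta parameters, and the inspection counts are illustrative assumptions, not values from the paper:

```python
import random

random.seed(1)

N = 20000

# Generic (prior) uncertainty on two basic events of the fault tree,
# e.g. hull breach given corrosion (E1) and given anchor strike (E2).
# Beta parameters are invented for illustration.
prior_e1 = (2, 18)   # mean 0.10
prior_e2 = (1, 19)   # mean 0.05

# Wreck-specific evidence for E1: say 3 openings observed in 10
# comparable inspections -> conjugate Beta update of the generic prior.
k, n = 3, 10
post_e1 = (prior_e1[0] + k, prior_e1[1] + n - k)

def top_event_prob(e1_params, e2_params):
    # Monte Carlo propagation through a two-branch OR gate:
    # P(discharge) = 1 - (1 - p1)(1 - p2)
    total = 0.0
    for _ in range(N):
        p1 = random.betavariate(*e1_params)
        p2 = random.betavariate(*e2_params)
        total += 1 - (1 - p1) * (1 - p2)
    return total / N

print("prior P(discharge):    ", round(top_event_prob(prior_e1, prior_e2), 3))
print("posterior P(discharge):", round(top_event_prob(post_e1, prior_e2), 3))
```

Because the Beta prior is conjugate to the inspection counts, the update is a closed form; only the propagation through the tree needs simulation.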
Integrating Remote Sensing and Disease Surveillance to Forecast Malaria Epidemics
NASA Astrophysics Data System (ADS)
Wimberly, M. C.; Beyane, B.; DeVos, M.; Liu, Y.; Merkord, C. L.; Mihretie, A.
2015-12-01
Advance information about the timing and locations of malaria epidemics can facilitate the targeting of resources for prevention and emergency response. Early detection methods can detect incipient outbreaks by identifying deviations from expected seasonal patterns, whereas early warning approaches typically forecast future malaria risk based on lagged responses to meteorological factors. A critical limiting factor for implementing either of these approaches is the need for timely and consistent acquisition, processing and analysis of both environmental and epidemiological data. To address this need, we have developed EPIDEMIA - an integrated system for surveillance and forecasting of malaria epidemics. The EPIDEMIA system includes a public health interface for uploading and querying weekly surveillance reports as well as algorithms for automatically validating incoming data and updating the epidemiological surveillance database. The newly released EASTWeb 2.0 software application automatically downloads, processes, and summarizes remotely-sensed environmental data from multiple earth science data archives. EASTWeb was implemented as a component of the EPIDEMIA system, which combines the environmental monitoring data and epidemiological surveillance data into a unified database that supports both early detection and early warning models. Dynamic linear models implemented with Kalman filtering were used to carry out forecasting and model updating. Preliminary forecasts have been disseminated to public health partners in the Amhara Region of Ethiopia and will be validated and refined as the EPIDEMIA system ingests new data. In addition to continued model development and testing, future work will involve updating the public health interface to provide a broader suite of outbreak alerts and data visualization tools that are useful to our public health partners.
The EPIDEMIA system demonstrates a feasible approach to synthesizing the information from epidemiological surveillance systems and remotely-sensed environmental monitoring systems to improve malaria epidemic detection and forecasting.
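The dynamic linear model with Kalman filtering mentioned above can be illustrated with a minimal local-level model for weekly case counts; an observation far above the one-step forecast interval is flagged for early detection. The variances V and W and the toy counts are assumptions for illustration, not EPIDEMIA's actual configuration:

```python
# Local-level dynamic linear model: case_t = level_t + obs noise (var V),
# level_t = level_{t-1} + drift noise (var W). The Kalman recursions give
# a one-step forecast mean f and variance Q; exceeding the upper band
# flags a possible outbreak week.
def kalman_step(m, C, y, V=4.0, W=1.0):
    a, R = m, C + W          # prior for the next level
    f, Q = a, R + V          # one-step forecast of the observation
    K = R / Q                # Kalman gain
    m_new = a + K * (y - f)  # update with the new observation y
    C_new = (1 - K) * R
    return m_new, C_new, f, Q

cases = [10, 11, 9, 10, 12, 11, 30]   # toy weekly counts
m, C = cases[0], 10.0
alerts = []
for t, y in enumerate(cases[1:], start=1):
    m, C, f, Q = kalman_step(m, C, y)
    if y > f + 2 * Q ** 0.5:          # ~95% upper band exceeded
        alerts.append(t)
print("alert weeks:", alerts)
```

Running this flags only the final week, where the count jumps well above the filtered level.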
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
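A minimal sketch of the ensemble-smoother update used to infer a property at one pilot point might look like this. The scalar datum, toy linear forward model, ensemble size, and noise levels are all illustrative assumptions; ES-MDA would simply repeat this step several times with inflated observation noise:

```python
import random

random.seed(0)

def ensemble_smoother_update(M, g, d_obs, sd_obs):
    # M: prior ensemble of parameter samples; g: forward model mapping
    # a parameter to predicted data. The update uses ensemble estimates
    # of the cross-covariance C_md and data covariance C_dd.
    D = [g(m) for m in M]
    mbar = sum(M) / len(M)
    dbar = sum(D) / len(D)
    c_md = sum((m - mbar) * (d - dbar) for m, d in zip(M, D)) / (len(M) - 1)
    c_dd = sum((d - dbar) ** 2 for d in D) / (len(D) - 1)
    K = c_md / (c_dd + sd_obs ** 2)          # scalar Kalman-type gain
    # perturb the observation for each member (stochastic ES)
    return [m + K * (d_obs + random.gauss(0, sd_obs) - d)
            for m, d in zip(M, D)]

# Prior ensemble of log-permeability at one pilot point
prior = [random.gauss(0.0, 1.0) for _ in range(200)]
forward = lambda m: 2.0 * m            # toy linear "production" response
posterior = ensemble_smoother_update(prior, forward, d_obs=3.0, sd_obs=0.5)

post_mean = sum(posterior) / len(posterior)
print("prior mean ~0.0, posterior mean:", round(post_mean, 2))
```

The posterior ensemble shifts toward the parameter value implied by the datum (here about 1.4) while retaining spread from the observation noise.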
NASA Astrophysics Data System (ADS)
Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping
2018-05-01
Aiming at providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in structure, is used instead of the displacement mode for enhancing model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate over the experimental and the analytical strain mode shapes. Moreover, the natural frequencies which provide the global information of the structure are used to guarantee the accuracy of modal properties of the global model. Then, the weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is used as the objective function in the proposed dynamic FE model updating procedure. The hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. Numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and the updated dynamic FE model of the beam structure, which can correctly predict both the natural frequencies and the local dynamic strains, is reliable for the following dynamic analysis and dynamic strength evaluation.
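A coordinate modal assurance criterion of the kind described evaluates, per coordinate, the correlation across the experimental and analytical (strain) mode shapes. A minimal sketch with made-up three-coordinate, two-mode shapes (this is the standard coordinate-MAC form, not necessarily the paper's exact weighting):

```python
def comac(modes_exp, modes_ana):
    # Coordinate modal assurance criterion per coordinate across modes.
    # Inputs: lists of mode shapes; each mode is a list over coordinates.
    n_coords = len(modes_exp[0])
    values = []
    for c in range(n_coords):
        num = sum(abs(e[c] * a[c]) for e, a in zip(modes_exp, modes_ana)) ** 2
        den = (sum(e[c] ** 2 for e in modes_exp) *
               sum(a[c] ** 2 for a in modes_ana))
        values.append(num / den)   # 1.0 = perfect correlation at coordinate c
    return values

exp_modes = [[0.2, 0.9, 0.4], [0.7, -0.1, 0.6]]
ana_modes = [[0.2, 0.9, 0.1], [0.7, -0.1, 0.9]]   # coordinate 2 disagrees
print([round(v, 2) for v in comac(exp_modes, ana_modes)])
```

The low value at the third coordinate localizes where the FE model needs correction, which is the role the criterion plays in the updating objective.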
Static and Dynamic Model Update of an Inflatable/Rigidizable Torus Structure
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2006-01-01
The present work addresses the development of an experimental and computational procedure for validating finite element models. A torus structure, part of an inflatable/rigidizable Hexapod, is used to demonstrate the approach. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with optimization is used to modify key model parameters. Static test results are used to update stiffness parameters and dynamic test results are used to update the mass distribution. Updated parameters are computed using gradient and non-gradient based optimization algorithms. Results show significant improvements in model predictions after parameters are updated. Lessons learned in the areas of test procedures, modeling approaches, and uncertainty quantification are presented.
NASA Astrophysics Data System (ADS)
Kordy, M.; Wannamaker, P.; Maris, V.; Cherkaev, E.; Hill, G.
2016-01-01
Following the creation described in Part I of a deformable edge finite-element simulator for 3-D magnetotelluric (MT) responses using direct solvers, in Part II we develop an algorithm named HexMT for 3-D regularized inversion of MT data including topography. Direct solvers parallelized on large-RAM, symmetric multiprocessor (SMP) workstations are used also for the Gauss-Newton model update. By exploiting the data-space approach, the computational cost of the model update becomes much less in both time and computer memory than the cost of the forward simulation. In order to regularize using the second norm of the gradient, we factor the matrix related to the regularization term and apply its inverse to the Jacobian, which is done using the MKL PARDISO library. For dense matrix multiplication and factorization related to the model update, we use the PLASMA library which shows very good scalability across processor cores. A synthetic test inversion using a simple hill model shows that including topography can be important; in this case depression of the electric field by the hill can cause false conductors at depth or mask the presence of resistive structure. With a simple model of two buried bricks, a uniform spatial weighting for the norm of model smoothing recovered more accurate locations for the tomographic images compared to weightings which were a function of parameter Jacobians. We implement joint inversion for static distortion matrices tested using the Dublin secret model 2, for which we are able to reduce nRMS to ~1.1 while avoiding oscillatory convergence. Finally we test the code on field data by inverting full impedance and tipper MT responses collected around Mount St Helens in the Cascade volcanic chain. Among several prominent structures, the north-south trending, eruption-controlling shear zone is clearly imaged in the inversion.
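The economy of the data-space approach can be seen in a one-datum Gauss-Newton step, where the matrix to invert has the dimension of the data rather than of the model. The toy linear forward model and numbers below are invented; HexMT's actual regularized update is far more elaborate:

```python
# Data-space form of a regularized Gauss-Newton step for ONE datum:
#   dm = C J^T (J C J^T + sigma^2)^(-1) * residual
# The quantity to invert, J C J^T + sigma^2, is n_data x n_data
# (here 1x1), not n_model x n_model, which is why the data-space
# approach is cheap when there are far fewer data than model cells.
def data_space_step(m, J, C_diag, d_obs, sigma, g):
    r = d_obs - g(m)                     # data residual
    JCJt = sum(J[i] * C_diag[i] * J[i] for i in range(len(m)))
    gain = [C_diag[i] * J[i] / (JCJt + sigma ** 2) for i in range(len(m))]
    return [m[i] + gain[i] * r for i in range(len(m))]

# Toy linear forward model: the datum is a weighted sum of 3 model cells.
J = [1.0, 2.0, 0.5]
g = lambda m: sum(j * mi for j, mi in zip(J, m))
m0 = [0.0, 0.0, 0.0]
m1 = data_space_step(m0, J, C_diag=[1.0, 1.0, 1.0], d_obs=5.0, sigma=0.1, g=g)
print("updated model:", [round(x, 3) for x in m1], "fit:", round(g(m1), 3))
```

One step nearly fits the datum while distributing the update over the model cells in proportion to their sensitivity and prior variance.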
An investigation of soil-structure interaction effects observed at the MIT Green Building
Taciroglu, Ertugrul; Çelebi, Mehmet; Ghahari, S. Farid; Abazarsa, Fariba
2016-01-01
The soil-foundation impedance function of the MIT Green Building is identified from its response signals recorded during an earthquake. Estimation of foundation impedance functions from seismic response signals is a challenging task, because: (1) the foundation input motions (FIMs) are not directly measurable, (2) the as-built properties of the super-structure are only approximately known, and (3) the soil-foundation impedance functions are inherently frequency-dependent. In the present study, the aforementioned difficulties are circumvented by using, in succession, a blind modal identification (BMID) method, a simplified Timoshenko beam model (TBM), and a parametric updating of transfer functions (TFs). First, the flexible-base modal properties of the building are identified from response signals using the BMID method. Then, a flexible-base TBM is updated using the identified modal data. Finally, the frequency-dependent soil-foundation impedance function is estimated by minimizing the discrepancy between TFs (of pairs of instrumented floors) that are (1) obtained experimentally from earthquake data and (2) obtained analytically from the updated TBM. Using the fully identified flexible-base TBM, the FIMs as well as building responses at locations without instruments can be predicted, as demonstrated in the present study.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-10
... Plan Update. c. Subsistence Uses of Horns, Antlers, Bones and Plants EA Update. 13. New Business. 14... guarantee that we will be able to do so. Wrangell-St. Elias National Park SRC Meeting Date and Location: The... if all business is completed. For Further Information on the Gates of the Arctic National Park SRC...
NASA Astrophysics Data System (ADS)
Huismann, Tyler D.
Due to the rapidly expanding role of electric propulsion (EP) devices, it is important to evaluate their integration with other spacecraft systems. Specifically, EP device plumes can play a major role in spacecraft integration, and as such, accurate characterization of plume structure bears on mission success. This dissertation addresses issues related to accurate prediction of plume structure in a particular type of EP device, a Hall thruster. This is done in two ways: first, by coupling current plume simulation models with current models that simulate a Hall thruster's internal plasma behavior; second, by improving plume simulation models and thereby increasing physical fidelity. These methods are assessed by comparing simulated results to experimental measurements. Assessment indicates the two methods improve plume modeling capabilities significantly: using far-field ion current density as a metric, these approaches used in conjunction improve agreement with measurements by a factor of 2.5, as compared to previous methods. Based on comparison to experimental measurements, recent computational work on discharge chamber modeling has been largely successful in predicting properties of internal thruster plasmas. This model can provide detailed information on plasma properties at a variety of locations. Frequently, experimental data are not available at many locations of interest to computational models. In the absence of experimental data, there are few alternatives for scientifically determining the plasma properties needed as inputs to plume simulations. Therefore, this dissertation focuses on coupling current models that simulate internal thruster plasma behavior with plume simulation models. Further, recent experimental work on atom-ion interactions has provided a better understanding of particle collisions within plasmas. This experimental work is used to update collision models in a current plume simulation code.
Previous versions of the code assume an unknown dependence between particles' pre-collision velocities and post-collision scattering angles. This dissertation focuses on updating several of these types of collisions by assuming a curve fit based on the measurements of atom-ion interactions, such that previously unknown angular dependences are well-characterized.
Klier, Eliana M; Angelaki, Dora E; Hess, Bernhard J M
2005-07-01
Primates are able to localize a briefly flashed target despite intervening movements of the eyes, head, or body. This ability, often referred to as updating, requires extraretinal signals related to the intervening movement. With active roll rotations of the head from an upright position it has been shown that the updating mechanism is 3-dimensional, robust, and geometrically sophisticated. Here we examine whether such a rotational updating mechanism operates during passive motion both with and without inertial cues about head/body position in space. Subjects were rotated from either an upright or supine position, about a nasal-occipital axis, briefly shown a world-fixed target, rotated back to their original position, and then asked to saccade to the remembered target location. Using this paradigm, we tested subjects' abilities to update from various tilt angles (0, +/-30, +/-45, +/-90 degrees), to 8 target directions and 2 target eccentricities. In the upright condition, subjects accurately updated the remembered locations from all tilt angles independent of target direction or eccentricity. Slopes of directional errors versus tilt angle ranged from -0.011 to 0.15, and were significantly different from a slope of 1 (no compensation for head-in-space roll) and a slope of 0.9 (no compensation for eye-in-space roll). Because the eyes, head, and body were fixed throughout these passive movements, subjects could not use efference copies or neck proprioceptive cues to assess the amount of tilt, suggesting that vestibular signals and/or body proprioceptive cues suffice for updating. In the supine condition, where gravitational signals could not contribute, slopes ranged from 0.60 to 0.82, indicating poor updating performance. Thus information specifying the body's orientation relative to gravity is critical for maintaining spatial constancy and for distinguishing body-fixed versus world-fixed reference frames.
Probabilistic Flood Mapping using Volunteered Geographical Information
NASA Astrophysics Data System (ADS)
Rivera, S. J.; Girons Lopez, M.; Seibert, J.; Minsker, B. S.
2016-12-01
Flood extent maps are widely used by decision makers and first responders to provide critical information that prevents economic impacts and the loss of human lives. These maps are usually obtained from sensory data and/or hydrologic models, which often have limited coverage in space and time. Recent developments in social media and communication technology have created a wealth of near-real-time, user-generated content during flood events in many urban areas, such as flooded locations, pictures of flooding extent and height, etc. These data could improve decision-making and response operations as events unfold. However, the integration of these data sources has been limited due to the need for methods that can extract and translate the data into useful information for decision-making. This study presents an approach that uses volunteer geographic information (VGI) and non-traditional data sources (i.e., Twitter, Flickr, YouTube, and 911 and 311 calls) to generate/update the flood extent maps in areas where no models and/or gauge data are operational. The approach combines Web-crawling and computer vision techniques to gather information about the location, extent, and water height of the flood from unstructured textual data, images, and videos. These estimates are then used to provide an updated flood extent map for areas surrounding the geo-coordinate of the VGI through the application of a Hydro Growing Region Algorithm (HGRA). HGRA combines hydrologic and image segmentation concepts to estimate a probabilistic flooding extent along the corresponding creeks. Results obtained for a case study in Austin, TX (i.e., 2015 Memorial Day flood) were comparable to those obtained by a calibrated hydrologic model and had good spatial correlation with flooding extents estimated by the Federal Emergency Management Agency (FEMA).
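The region-growing idea behind HGRA can be sketched on a toy DEM grid. This simplified version uses only an elevation threshold around the seed report; the actual HGRA additionally applies hydrologic reasoning along the creek network:

```python
from collections import deque

def grow_flood_region(dem, seed, water_level):
    # Breadth-first region growing: starting from a geo-located VGI
    # report (seed), add 4-connected cells whose ground elevation lies
    # below the reported water surface elevation.
    rows, cols = len(dem), len(dem[0])
    flooded, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in flooded
                    and dem[nr][nc] < water_level):
                flooded.add((nr, nc))
                queue.append((nr, nc))
    return flooded

dem = [[5, 5, 5, 5],
       [5, 1, 2, 5],
       [5, 1, 1, 5],
       [5, 5, 5, 5]]   # toy elevations; a low basin inside high ground
extent = grow_flood_region(dem, seed=(1, 1), water_level=3.0)
print(sorted(extent))
```

The flood stays confined to the low basin, mimicking how the estimated water height bounds the mapped extent around each report.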
Does nonstationarity in rainfall require nonstationary intensity-duration-frequency curves?
NASA Astrophysics Data System (ADS)
Ganguli, Poulomi; Coulibaly, Paulin
2017-12-01
In Canada, risk of flooding due to heavy rainfall has risen in recent decades; the most notable recent examples include the July 2013 storm in the Greater Toronto region and the May 2017 flood of the Toronto Islands. We investigate nonstationarity and trends in the short-duration precipitation extremes in selected urbanized locations in Southern Ontario, Canada, and evaluate the potential of nonstationary intensity-duration-frequency (IDF) curves, which form an input to civil infrastructural design. Despite apparent signals of nonstationarity in precipitation extremes in all locations, the stationary vs. nonstationary models do not exhibit any significant differences in the design storm intensity, especially for short recurrence intervals (up to 10 years). The signatures of nonstationarity in rainfall extremes do not necessarily imply the use of nonstationary IDFs for design considerations. When comparing the proposed IDFs with current design standards, for return periods (10 years or less) typical for urban drainage design, current design standards require an update of up to 7 %, whereas for longer recurrence intervals (50-100 years), ideal for critical civil infrastructural design, updates ranging between ~2 and 44 % are suggested. We further emphasize that the above findings need re-evaluation in the light of climate change projections since the intensity and frequency of extreme precipitation are expected to intensify due to global warming.
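The stationary-versus-nonstationary comparison can be illustrated with a Gumbel (EV1) return-level formula, a common building block of IDF curves. The parameters and the linear trend in the location parameter below are invented for illustration and are not the paper's fitted values:

```python
import math

def gumbel_return_level(mu, sigma, T):
    # T-year return level of a Gumbel (EV1) annual-maximum model,
    # i.e. the intensity exceeded with probability 1/T in a year.
    return mu - sigma * math.log(-math.log(1 - 1 / T))

# Stationary fit vs. a nonstationary fit whose location parameter
# drifts upward (illustrative numbers, mm/h for 1-h rainfall).
mu0, sigma = 30.0, 8.0
trend = 0.1                      # assumed mm/h per year drift in mu
for T in (10, 100):
    z_stat = gumbel_return_level(mu0, sigma, T)
    z_nonstat = gumbel_return_level(mu0 + trend * 30, sigma, T)  # 30 yr ahead
    print(f"T={T}: stationary {z_stat:.1f}, nonstationary {z_nonstat:.1f}, "
          f"update {100 * (z_nonstat / z_stat - 1):.1f}%")
```

Comparing the two return levels at each T is exactly the kind of design-storm comparison the study performs; whether the difference matters depends on the recurrence interval and the fitted trend.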
NASA Technical Reports Server (NTRS)
Rushley, Stephanie; Carter, Matthew; Chiou, Charles; Farmer, Richard; Haywood, Kevin; Pototzky, Anthony, Jr.; White, Adam; Winker, Daniel
2014-01-01
Colombia is a country with highly variable terrain, from the Andes Mountains to plains and coastal areas, many of these areas are prone to flooding disasters. To identify these risk areas NASA's Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) was used to construct a digital elevation model (DEM) for the study region. The preliminary risk assessment was applied to a pilot study area, the La Mosca River basin. Precipitation data from the National Aeronautics and Space Administration (NASA) Tropical Rainfall Measuring Mission (TRMM)'s near-real-time rainfall products as well as precipitation data from the Instituto de Hidrologia, Meteorologia y Estudios Ambientales (the Institute of Hydrology, Meteorology and Environmental Studies, IDEAM) and stations in the La Mosca River Basin were used to create rainfall distribution maps for the region. Using the precipitation data and the ASTER DEM, the web application, Mi Pronóstico, run by IDEAM, was updated to include an interactive map which currently allows users to search for a location and view the vulnerability and current weather and flooding conditions. The geospatial information was linked to an early warning system in Mi Pronóstico that can alert the public of flood warnings and identify locations of nearby shelters.
NASA Astrophysics Data System (ADS)
de Souza, V.; Apel, W. D.; Arteaga, J. C.; Badea, F.; Bekk, K.; Bertaina, M.; Blümer, J.; Bozdog, H.; Brancus, I. M.; Brüggemann, M.; Buchholz, P.; Cantoni, E.; Chiavassa, A.; Cossavella, F.; Daumiller, K.; di Pierro, F.; Doll, P.; Engel, R.; Engler, J.; Finger, M.; Fuhrmann, D.; Ghia, P. L.; Gils, H. J.; Glasstetter, R.; Grupen, C.; Haungs, A.; Heck, D.; Hörandel, J. R.; Huege, T.; Isar, P. G.; Kampert, K.-H.; Kang, D.; Kickelbick, D.; Klages, H. O.; Kolotaev, Y.; Łuczak, P.; Mathes, H. J.; Mayer, H. J.; Milke, J.; Mitrica, B.; Morello, C.; Navarra, G.; Nehls, S.; Oehlschläger, J.; Ostapchenko, S.; Over, S.; Petcu, M.; Pierog, T.; Rebel, H.; Roth, M.; Schieler, H.; Schröder, F.; Sima, O.; Stümpert, M.; Toma, G.; Trinchero, G. C.; Ulrich, H.; van Buren, J.; Walkowiak, W.; Weindl, A.; Wochele, J.; Wommer, M.; Zabierowski, J.
2009-04-01
KASCADE-Grande is a multi-component detector located at Karlsruhe, Germany. It was optimized to measure cosmic ray air showers with energies between 5×10¹⁶ and 10¹⁸ eV. Its capabilities are based on the use of several techniques to measure the electromagnetic and muon components of the shower in an independent way which allows a direct comparison to hadronic interaction models and a good estimation of the primary cosmic ray composition. In this paper, we present the status of the experiment, an update of the data analysis and the latest results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hwang, Ho-Ling; Davis, Stacy Cagle
2009-12-01
This report is designed to document the analysis process and estimation models currently used by the Federal Highway Administration (FHWA) to estimate the off-highway gasoline consumption and public sector fuel consumption. An overview of the entire FHWA attribution process is provided along with specifics related to the latest update (2008) on the Off-Highway Gasoline Use Model and the Public Use of Gasoline Model. The Off-Highway Gasoline Use Model is made up of five individual modules, one for each of the off-highway categories: agricultural, industrial and commercial, construction, aviation, and marine. This 2008 update of the off-highway models was the second major update (the first model update was conducted during 2002-2003) after they were originally developed in the mid-1990s. The agricultural model methodology, specifically, underwent a significant revision because of changes in data availability since 2003. Some revision to the model was necessary due to removal of certain data elements used in the original estimation method. The revised agricultural model also made use of some newly available information, published by the data source agency in recent years. The other model methodologies were not drastically changed, though many data elements were updated to improve the accuracy of these models. Note that components in the Public Use of Gasoline Model were not updated in 2008. A major challenge in updating estimation methods applied by the public-use model is that they would have to rely on significant new data collection efforts. In addition, due to resource limitation, several components of the models (both off-highway and public-use models) that utilized regression modeling approaches were not recalibrated under the 2008 study. An investigation of the Environmental Protection Agency's NONROAD2005 model was also carried out under the 2008 model update.
Results generated from the NONROAD2005 model were analyzed, examined, and compared, to the extent that is possible on the overall totals, to the current FHWA estimates. Because the NONROAD2005 model was designed for emission estimation purposes (i.e., not for measuring fuel consumption), it covers different equipment populations from those the FHWA models were based on. Thus, a direct comparison generally was not possible in most sectors. As a result, NONROAD2005 data were not used in the 2008 update of the FHWA off-highway models. The quality of fuel use estimates directly affects the data quality in many tables published in the Highway Statistics. Although updates have been made to the Off-Highway Gasoline Use Model and the Public Use Gasoline Model, some challenges remain due to aging model equations and discontinuation of data sources.
Prediction-error variance in Bayesian model updating: a comparative study
NASA Astrophysics Data System (ADS)
Asadollahi, Parisa; Li, Jian; Huang, Yong
2017-04-01
In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum-information-entropy probability model of the prediction errors plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. Therefore, it is critical for the robustness of the updating of the structural model, especially in the presence of modeling errors. To date, three ways of considering prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies to deal with the prediction error variances on the model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structure model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on the model updating performance is also examined in the study.
The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model class level produces more robust results, especially when the number of measurements is small.
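Treatment 2 above (choosing the variance from the goodness-of-fit) has a closed form under the Gaussian prediction-error model: the likelihood-maximizing variance is the mean squared residual. A minimal sketch with made-up residuals:

```python
import math

# Gaussian prediction-error model: for residuals r_i,
#   log p(D | theta, s2) = -(n/2) log(2 pi s2) - sum r_i^2 / (2 s2).
# Maximizing over s2 gives s2* = mean(r_i^2).
def log_likelihood(residuals, s2):
    n = len(residuals)
    return -0.5 * n * math.log(2 * math.pi * s2) \
           - sum(r * r for r in residuals) / (2 * s2)

residuals = [0.3, -0.5, 0.1, 0.4, -0.2]       # invented model-data misfits
s2_mle = sum(r * r for r in residuals) / len(residuals)

# The MLE variance beats nearby fixed choices (treatment 1):
for s2 in (0.5 * s2_mle, s2_mle, 2.0 * s2_mle):
    print(round(s2, 3), round(log_likelihood(residuals, s2), 3))
```

Treatment 3 would instead place a prior on s2 and sample it jointly with the stiffness parameters, which is what makes it more robust when data are scarce.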
Which coordinate system for modelling path integration?
Vickerstaff, Robert J; Cheung, Allen
2010-03-21
Path integration is a navigation strategy widely observed in nature where an animal maintains a running estimate, called the home vector, of its location during an excursion. Evidence suggests it is both ancient and ubiquitous in nature, and has been studied for over a century. In that time, canonical and neural network models have flourished, based on a wide range of assumptions, justifications and supporting data. Despite the importance of the phenomenon, consensus and unifying principles appear lacking. A fundamental issue is the neural representation of space needed for biological path integration. This paper presents a scheme to classify path integration systems on the basis of the way the home vector records and updates the spatial relationship between the animal and its home location. Four extended classes of coordinate systems are used to unify and review both canonical and neural network models of path integration, from the arthropod and mammalian literature. This scheme demonstrates analytical equivalence between models which may otherwise appear unrelated, and distinguishes between models which may superficially appear similar. A thorough analysis is carried out of the equational forms of important facets of path integration including updating, steering, searching and systematic errors, using each of the four coordinate systems. The type of available directional cue, namely allothetic or idiothetic, is also considered. It is shown that on balance, the class of home vectors which includes the geocentric Cartesian coordinate system, appears to be the most robust for biological systems. A key conclusion is that deducing computational structure from behavioural data alone will be difficult or impossible, at least in the absence of an analysis of random errors. Consequently it is likely that further theoretical insights into path integration will require an in-depth study of the effect of noise on the four classes of home vectors. Copyright 2009 Elsevier Ltd. 
All rights reserved.
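Of the four coordinate-system classes reviewed, the geocentric Cartesian home vector admits a particularly simple update rule; a minimal sketch (step lengths and headings invented):

```python
import math

# Geocentric Cartesian home vector: the animal keeps (x, y), its
# displacement from home in a world-fixed frame, plus heading theta.
# Each step of length ds in direction theta updates the vector as
#   x += ds * cos(theta),  y += ds * sin(theta);
# the homing direction is then atan2(-y, -x).
def update_home_vector(x, y, theta, ds):
    return x + ds * math.cos(theta), y + ds * math.sin(theta)

x = y = 0.0
path = [(0.0, 3.0), (math.pi / 2, 4.0)]   # east 3 m, then north 4 m
for theta, ds in path:
    x, y = update_home_vector(x, y, theta, ds)

home_dist = math.hypot(x, y)
home_dir = math.atan2(-y, -x)
print(f"distance home: {home_dist:.1f} m, "
      f"heading: {math.degrees(home_dir):.1f} deg")
```

Because the update is a plain vector sum in world coordinates, it is independent of the animal's current heading, which is one reason this class is argued to be robust to accumulated noise.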
Assessing the performance of eight real-time updating models and procedures for the Brosna River
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Bhattarai, K. P.; Shamseldin, A. Y.
2005-10-01
The flow forecasting performance of eight updating models, incorporated in the Galway River Flow Modelling and Forecasting System (GFMFS), was assessed using daily data (rainfall, evaporation and discharge) of the Irish Brosna catchment (1207 km2), considering their one to six days lead-time discharge forecasts. The Perfect Forecast of Input over the Forecast Lead-time scenario was adopted, where required, in place of actual rainfall forecasts. The eight updating models were: (i) the standard linear Auto-Regressive (AR) model, applied to the forecast errors (residuals) of a simulation (non-updating) rainfall-runoff model; (ii) the Neural Network Updating (NNU) model, also using such residuals as input; (iii) the Linear Transfer Function (LTF) model, applied to the simulated and the recently observed discharges; (iv) the Non-linear Auto-Regressive eXogenous-Input Model (NARXM), also a neural network-type structure, but having wide options of using recently observed values of one or more of the three data series, together with non-updated simulated outflows, as inputs; (v) the Parametric Simple Linear Model (PSLM), of LTF-type, using recent rainfall and observed discharge data; (vi) the Parametric Linear perturbation Model (PLPM), also of LTF-type, using recent rainfall and observed discharge data, (vii) n-AR, an AR model applied to the observed discharge series only, as a naïve updating model; and (viii) n-NARXM, a naive form of the NARXM, using only the observed discharge data, excluding exogenous inputs. The five GFMFS simulation (non-updating) models used were the non-parametric and parametric forms of the Simple Linear Model and of the Linear Perturbation Model, the Linearly-Varying Gain Factor Model, the Artificial Neural Network Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) model. 
As the SMAR model performance was found to be the best among these models, in terms of the Nash-Sutcliffe R2 value, both in calibration and in verification, the simulated outflows of this model only were selected for the subsequent exercise of producing updated discharge forecasts. All the eight forms of updating models for producing lead-time discharge forecasts were found to be capable of producing relatively good lead-1 (1-day ahead) forecasts, with R2 values almost 90% or above. However, for higher lead time forecasts, only three updating models, viz., NARXM, LTF, and NNU, were found to be suitable, with lead-6 values of R2 about 90% or higher. Graphical comparisons were made of the lead-time forecasts for the two largest floods, one in the calibration period and the other in the verification period.
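The updating idea behind model (i), propagating the most recent simulation error with an autoregressive model and adding the prediction to the simulated flow, can be sketched in a few lines, together with the Nash-Sutcliffe R² used throughout to score forecasts. This is an illustrative AR(1) sketch, not the GFMFS implementation; the coefficient `phi` and the series values are hypothetical.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / (variance of obs around its mean)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def ar1_updated_forecast(obs, sim, phi):
    """One-day-ahead updated forecast: add the AR(1)-propagated most recent
    forecast error to the simulated flow (phi is a hypothetical coefficient)."""
    updated = np.asarray(sim, float).copy()
    for t in range(1, len(updated)):
        updated[t] = sim[t] + phi * (obs[t - 1] - sim[t - 1])
    return updated
```

Because the error correction reuses yesterday's observed discharge, even a biased simulation model can yield a sharply improved lead-1 forecast, which matches the near-90% lead-1 R² values reported above.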
An Online Risk Monitor System (ORMS) to Increase Safety and Security Levels in Industry
NASA Astrophysics Data System (ADS)
Zubair, M.; Rahman, Khalil Ur; Hassan, Mehmood Ul
2013-12-01
The main idea of this research is to develop an Online Risk Monitor System (ORMS) based on Living Probabilistic Safety Assessment (LPSA). The article highlights the essential features and functions of ORMS. The basic models and modules, such as the Reliability Data Update Model (RDUM), running time update, redundant system unavailability update, Engineered Safety Features (ESF) unavailability update, and general system update, are described in this study. ORMS not only provides quantitative analysis but also highlights qualitative aspects of risk measures. ORMS is capable of automatically updating the online risk models and reliability parameters of equipment, and can support the decision-making process of operators and managers in Nuclear Power Plants.
Spacecraft camera image registration
NASA Technical Reports Server (NTRS)
Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)
1987-01-01
A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method after ensuring that the numerical model represents the realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method for deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, in which geometry shape variables are incorporated by morphing the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize the diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Simulations performed using this updated model, with its more accurate geometry, will therefore yield more reliable results.
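The objective driving the optimization above, maximizing the diagonal of the Modal Assurance Criterion (MAC) matrix between test and finite-element mode shapes, rests on the standard MAC formula. A minimal sketch for real-valued mode shapes (not the authors' GRSM code; the mode-shape matrices are hypothetical):

```python
import numpy as np

def mac_matrix(phi_test, phi_fe):
    """Modal Assurance Criterion between two sets of real mode shapes
    (one mode per column). MAC[i, j] lies in [0, 1]; 1 means the i-th
    test mode and j-th FE mode are perfectly correlated."""
    num = np.abs(phi_test.T @ phi_fe) ** 2
    den = (np.sum(phi_test ** 2, axis=0)[:, None] *
           np.sum(phi_fe ** 2, axis=0)[None, :])
    return num / den
```

Because MAC is insensitive to mode-shape scaling, an updating scheme can push the MAC diagonal toward one without having to match amplitudes, which is why it pairs naturally with geometry-morphing variables.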
Diurnal forcing of planetary atmospheres
NASA Technical Reports Server (NTRS)
Houben, Howard C.
1991-01-01
The utility of the Mars Planetary Boundary Layer Model (MPBL) for calculations in support of the Mars 94 balloon mission was substantially enhanced by the introduction of a balloon equation of motion into the model. Both vertical and horizontal excursions of the balloon are calculated along with its volume, temperature, and pressure. The simulations reproduce the expected 5-min vertical oscillations of a constant density balloon at altitude on Mars. The results of these calculations are presented for the nominal target location of the balloon. A nonlinear balanced model was developed for the Martian atmosphere. It was used to initialize a primitive equation model for the simulations of the Earth's atmosphere at the time of the El Chichon eruption in 1982. It is also used as an assimilation model to update the temperature and wind fields at frequent intervals.
The Role of Volatiles in Volcanism at Loki and other Hotspots on Io
NASA Astrophysics Data System (ADS)
Howell, Robert R.; Allen, D. R.; Landis, C. E.; Lopes, R. M. C.
2012-10-01
To determine the role of volatiles in volcanic processes on Io we are analyzing Voyager, Galileo, and New Horizons images to obtain colors and high resolution maps near hotspots, in particular Loki. We are also producing numerical transport models for volatiles such as sulfur. As a part of this effort we have also developed Python-based software tools for updating the Voyager and Galileo NAIF pointing kernels, and for analyzing the observations themselves. At Loki, despite their relatively low abundance, volatiles clearly play a significant role. Color photometry of the small bright spots colloquially known as "sulfur bergs", which we suspect are fumarole deposits, show their reflectance is consistent with sulfur but not sulfur dioxide. Mapping of their location shows they avoid the patera margins, and may show other spatial patterns. Preliminary transport models suggest their sizes are consistent with that expected for sulfur fumarole deposits over cooled lava crust. We are currently comparing the high resolution Voyager images with the best available Galileo and New Horizons images to measure changes in the volatile locations over time, and also measure changing locations of nearby silicate flows. We are also beginning stress modeling to understand the structural features seen in island patera such as Loki and are also beginning an analysis of other hotspots such as Tupan.
Eye movement sequence generation in humans: Motor or goal updating?
Quaia, Christian; Joiner, Wilsaan M.; FitzGibbon, Edmond J.; Optican, Lance M.; Smith, Maurice A.
2011-01-01
Saccadic eye movements are often grouped in pre-programmed sequences. The mechanism underlying the generation of each saccade in a sequence is currently poorly understood. Broadly speaking, two alternative schemes are possible. First, after each saccade the retinotopic location of the next target could be estimated, and an appropriate saccade could be generated; we call this the goal updating hypothesis. Alternatively, multiple motor plans could be pre-computed and then updated after each movement; we call this the motor updating hypothesis. We used McLaughlin’s intra-saccadic step paradigm to artificially create a condition under which these two hypotheses make discriminable predictions. We found that, when human subjects plan sequences of two saccades, the motor updating hypothesis predicts the landing position of the second saccade much better than the goal updating hypothesis. This finding suggests that the human saccadic system is capable of executing sequences of saccades to multiple targets by planning multiple motor commands, which are then updated by serial subtraction of ongoing motor output. PMID:21191134
NASA Astrophysics Data System (ADS)
Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.
2018-03-01
Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
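The core construction in the paper, expanding a spatially correlated random field in a Karhunen-Loève (KL) series, can be sketched by eigendecomposing a discretized covariance kernel: the eigenvectors are the KL modes and the eigenvalues their variances. The squared-exponential kernel below is an illustrative choice, not necessarily the covariance used by the authors.

```python
import numpy as np

def kl_expansion(x, corr_len, sigma, n_terms):
    """Truncated discrete KL decomposition of a 1-D random field with a
    squared-exponential covariance on grid x (illustrative kernel)."""
    C = sigma ** 2 * np.exp(-((x[:, None] - x[None, :]) / corr_len) ** 2)
    lam, vecs = np.linalg.eigh(C)
    idx = np.argsort(lam)[::-1][:n_terms]  # keep largest eigenvalues first
    return lam[idx], vecs[:, idx]

def sample_field(mean, lam, vecs, xi):
    """One realization: mean + sum_k sqrt(lam_k) * xi_k * phi_k(x),
    where xi are independent standard normal coefficients."""
    return mean + vecs @ (np.sqrt(lam) * xi)
```

In a sensitivity-based updating scheme, the deterministic material parameter is replaced by such an expansion and the KL coefficients become the quantities to be estimated from measured responses.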
Choi, Jae-Young; Jeong, Jongpil; Chung, Tai-Myoung
2017-01-01
Lately, the Internet of Things (IoT) has been introduced in medical services for global connection among patients, sensors, and all nearby things. The principal purpose of this global connection is to provide context awareness, bringing convenience to a patient’s life and implementing clinical processes more effectively. In health care, monitoring of a patient’s biosignals has to be performed continuously while the patient moves inside and outside the hospital. Also, to monitor the accurate location and biosignals of the patient, appropriate mobility management is necessary to maintain connection between the patient and the hospital network. In this paper, a binding update scheme on PMIPv6 is proposed that reduces signaling traffic during location updates, and thus the total cost, by placing a Virtual LMA (VLMA) on top of the original Local Mobility Anchor (LMA) domain. If a Mobile Node (MN) moves to a Mobile Access Gateway (MAG) located at the boundary of an adjacent LMA domain, the MN switches into virtual mode, and its movement is treated as part of the VLMA domain. In the proposed scheme, MAGs eliminate global binding updates for MNs between LMA domains and significantly reduce packet loss and latency by eliminating the handoff between LMAs. In conclusion, the performance analysis results show that the proposed scheme significantly outperforms PMIPv6 and HMIPv6 in terms of the binding update rate per user and average handoff latency. PMID:28129355
Parametric design and gridding through relational geometry
NASA Technical Reports Server (NTRS)
Letcher, John S., Jr.; Shook, D. Michael
1995-01-01
Relational Geometric Synthesis (RGS) is a new logical framework for building up precise definitions of complex geometric models from points, curves, surfaces and solids. RGS achieves unprecedented design flexibility by supporting a rich variety of useful curve and surface entities. During the design process, many qualitative and quantitative relationships between elementary objects may be captured and retained in a data structure equivalent to a directed graph, such that they can be utilized for automatically updating the complete model geometry following changes in the shape or location of an underlying object. Capture of relationships enables many new possibilities for parametric variations and optimization. Examples are given of panelization applications for submarines, sailing yachts, offshore structures, and propellers.
Summary of Expansions, Updates, and Results in GREET 2017 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Michael; Elgowainy, Amgad; Han, Jeongwoo
This report provides a technical summary of the expansions and updates to the 2017 release of Argonne National Laboratory’s Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation (GREET®) model, including references and links to key technical documents related to these expansions and updates. The GREET 2017 release includes updated versions of GREET1 (the fuel-cycle GREET model) and GREET2 (the vehicle-cycle GREET model), both in the Microsoft Excel platform and in the GREET.net modeling platform. Figure 1 shows the structure of the GREET Excel modeling platform. The .net platform integrates all GREET modules together seamlessly.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-06
...] Medical Device User Fee and Modernization Act; Notice to Public of Web Site Location of Fiscal Year 2014... and Drug Administration (FDA or the Agency) is announcing the Web site location where the Agency will... documents, FDA has committed to updating its Web site in a timely manner to reflect the Agency's review of...
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the final report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS) (Version 2). This update furthered land change modeling by providing nationwide housing development scenarios up to 2100. This newest version includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide (Final Report) describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
Takahama, Sachiko; Saiki, Jun
2014-01-01
Information on an object's features bound to its location is very important for maintaining object representations in visual working memory. Interactions with dynamic multi-dimensional objects in an external environment require complex cognitive control, including the selective maintenance of feature-location binding. Here, we used event-related functional magnetic resonance imaging to investigate brain activity and functional connectivity related to the maintenance of complex feature-location binding. Participants were required to detect task-relevant changes in feature-location binding between objects defined by color, orientation, and location. We compared a complex binding task requiring complex feature-location binding (color-orientation-location) with a simple binding task in which simple feature-location binding, such as color-location, was task-relevant and the other feature was task-irrelevant. Univariate analyses showed that the dorsolateral prefrontal cortex (DLPFC), hippocampus, and frontoparietal network were activated during the maintenance of complex feature-location binding. Functional connectivity analyses indicated cooperation between the inferior precentral sulcus (infPreCS), DLPFC, and hippocampus during the maintenance of complex feature-location binding. In contrast, the connectivity for the spatial updating of simple feature-location binding determined by reanalyzing the data from Takahama et al. (2010) demonstrated that the superior parietal lobule (SPL) cooperated with the DLPFC and hippocampus. These results suggest that the connectivity for complex feature-location binding does not simply reflect general memory load and that the DLPFC and hippocampus flexibly modulate the dorsal frontoparietal network, depending on the task requirements, with the infPreCS involved in the maintenance of complex feature-location binding and the SPL involved in the spatial updating of simple feature-location binding. PMID:24917833
Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification
NASA Technical Reports Server (NTRS)
Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.
2004-01-01
This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in updated values and/or model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.
Kim, Seung-Nam; Park, Taewon; Lee, Sang-Hyun
2014-01-01
Damage of a 5-story framed structure was identified from two types of measured data, which are frequency response functions (FRF) and natural frequencies, using a finite element (FE) model updating procedure. In this study, a procedure to determine the appropriate weightings for different groups of observations was proposed. In addition, a modified frame element which included rotational springs was used to construct the FE model for updating to represent concentrated damage at the member ends (a formulation for plastic hinges in framed structures subjected to strong earthquakes). The results of the model updating and subsequent damage detection when the rotational springs (RS model) were used were compared with those obtained using the conventional frame elements (FS model). Comparisons indicated that the RS model gave more accurate results than the FS model. That is, the errors in the natural frequencies of the updated models were smaller, and the identified damage showed clearer distinctions between damaged and undamaged members and was more consistent with observed damage. PMID:24574888
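FE model updating from natural frequencies, as used in the two studies above, can be illustrated on a toy two-degree-of-freedom spring-mass chain: iterate a Gauss-Newton step on the residual between measured and predicted frequencies. The structure, parameter values, and finite-difference Jacobian below are illustrative simplifications of the papers' weighted, multi-observation procedures.

```python
import numpy as np

def nat_freqs(k):
    """Natural frequencies (rad/s) of a 2-DOF spring-mass chain, unit masses,
    springs k = (k1, k2): ground--k1--m1--k2--m2."""
    k1, k2 = k
    K = np.array([[k1 + k2, -k2], [-k2, k2]])
    return np.sqrt(np.sort(np.linalg.eigvalsh(K)))

def update_model(k0, f_meas, n_iter=20, h=1e-6):
    """Sensitivity-based updating: Gauss-Newton on the frequency residual
    with a finite-difference Jacobian (a minimal sketch of the idea)."""
    k = np.array(k0, float)
    for _ in range(n_iter):
        r = f_meas - nat_freqs(k)
        J = np.column_stack([(nat_freqs(k + h * e) - nat_freqs(k)) / h
                             for e in np.eye(2)])
        k = k + np.linalg.lstsq(J, r, rcond=None)[0]
    return k
```

In the damage-detection setting, the reduction of an updated stiffness parameter relative to its undamaged value (here, a spring constant; in the paper, a rotational spring at a member end) is the damage indicator.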
Ground Source Heat Pump Sub-Slab Heat Exchange Loop Performance in a Cold Climate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittereder, N.; Poerschke, A.
2013-11-01
This report presents a cold-climate project that examines an alternative approach to ground source heat pump (GSHP) ground loop design. The innovative ground loop design is an attempt to reduce the installed cost of the ground loop heat exchange portion of the system by containing the entire ground loop within the excavated location beneath the basement slab. Prior to the installation and operation of the sub-slab heat exchanger, energy modeling using TRNSYS software and concurrent design efforts were performed to determine the size and orientation of the system. One key parameter in the design is the installation of the GSHP in a low-load home, which considerably reduces the needed capacity of the ground loop heat exchanger. This report analyzes data from two cooling seasons and one heating season. Upon completion of the monitoring phase, measurements revealed that the initial TRNSYS simulated horizontal sub-slab ground loop heat exchanger fluid temperatures and heat transfer rates differed from the measured values. To determine the cause of this discrepancy, an updated model was developed utilizing a new TRNSYS subroutine for simulating sub-slab heat exchangers. Measurements of fluid temperature, soil temperature, and heat transfer were used to validate the updated model.
Neural adaptive control for vibration suppression in composite fin-tip of aircraft.
Suresh, S; Kannan, N; Sundararajan, N; Saratchandran, P
2008-06-01
In this paper, we present a neural adaptive control scheme for active vibration suppression of a composite aircraft fin tip. The mathematical model of a composite aircraft fin tip is derived using the finite element approach. The finite element model is updated experimentally to reflect the natural frequencies and mode shapes very accurately. Piezoelectric actuators and sensors are placed at optimal locations such that the vibration suppression is a maximum. A model-reference direct adaptive neural network control scheme is proposed to force the vibration level within the minimum acceptable limit. In this scheme, a Gaussian neural network with linear filters is used to approximate the inverse dynamics of the system and the parameters of the neural controller are estimated using a Lyapunov-based update law. In order to reduce the computational burden, which is critical for real-time applications, the number of hidden neurons is also estimated in the proposed scheme. The global asymptotic stability of the overall system is ensured using the principles of the Lyapunov approach. Simulation studies are carried out using sinusoidal force functions of varying frequency. Experimental results show that the proposed neural adaptive control scheme is capable of providing significant vibration suppression in the multiple bending modes of interest. The performance of the proposed scheme is better than the H∞ control scheme.
Alpha power gates relevant information during working memory updating.
Manza, Peter; Hau, Chui Luen Vera; Leung, Hoi-Chung
2014-04-23
Human working memory (WM) is inherently limited, so we must filter out irrelevant information in our environment or our mind while retaining limited important relevant contents. Previous work suggests that neural oscillations in the alpha band (8-14 Hz) play an important role in inhibiting incoming distracting information during attention and selective encoding tasks. However, whether alpha power is involved in inhibiting no-longer-relevant content or in representing relevant WM content is still debated. To clarify this issue, we manipulated the amount of relevant/irrelevant information using a task requiring spatial WM updating while measuring neural oscillatory activity via EEG and localized current sources across the scalp using a surface Laplacian transform. An initial memory set of two, four, or six spatial locations was to be memorized over a delay until an updating cue was presented indicating that only one or three locations remained relevant for a subsequent recognition test. Alpha amplitude varied with memory maintenance and updating demands among a cluster of left frontocentral electrodes. Greater postcue alpha power was associated with the high relevant load conditions (six and four dots cued to reduce to three relevant) relative to the lower load conditions (four and two dots reduced to one). Across subjects, this difference in alpha power was correlated with condition differences in performance accuracy. In contrast, no significant effects of irrelevant load were observed. These findings demonstrate that, during WM updating, alpha power reflects maintenance of relevant memory contents rather than suppression of no-longer-relevant memory traces.
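Alpha-band (8-14 Hz) power of the kind analyzed here is, at its simplest, band-limited spectral power. A minimal periodogram-based sketch is given below; it is illustrative only (the study additionally applied a surface Laplacian transform and cluster-level statistics not shown here), and the signals are synthetic.

```python
import numpy as np

def alpha_power(signal, fs, f_lo=8.0, f_hi=14.0):
    """Mean power spectral density in the alpha band (8-14 Hz) from a
    plain periodogram of a demeaned signal sampled at fs Hz."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2 / (fs * n)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()
```

The condition contrast in the study then reduces to comparing this quantity across relevant-load conditions at the frontocentral electrode cluster during the post-cue delay.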
Mackrous, I; Simoneau, M
2014-02-28
To maintain perception of the world around us during body motion, the brain must update the spatial representation of visual stimuli, a process known as space updating. Previous studies have demonstrated that vestibular signals contribute to space updating. Nonetheless, when one is passively rotated in the dark, the ability to keep track of a memorized earth-fixed target (EFT) involves learning mechanisms. We tested whether such learning generalizes across different EFT eccentricities. Furthermore, we ascertained whether learning transfers to similar target eccentricities in the opposite rotation direction. Participants were trained to predict the position of an EFT (located at 45° to their left) while being rotated counterclockwise (i.e., they pressed a push button when they perceived that their body midline had crossed the position of the target). Overall, the results indicated that learning transferred to other target eccentricities (30° and 60°) for the same direction of body rotation. In contrast, vestibular learning only partly transferred to matching target locations when the body was rotated in the opposite direction. Generalization of learning implies that participants did not adopt cognitive strategies to improve their performance during training. We argue that the brain learned to use vestibular signals for space updating. Partial transfer of learning to rotations in the opposite direction implies that some part of the neural network involved in space updating is shared between trained and untrained directions. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
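The behavioral task above, pressing a button when the body midline crosses the remembered earth-fixed target, amounts to integrating perceived angular velocity until the target eccentricity is reached. A minimal sketch of that ideal-observer computation, with a hypothetical velocity profile:

```python
import numpy as np

def predicted_crossing_time(t, omega, target_ecc_deg):
    """Time at which the integrated rotation angle first reaches the target
    eccentricity, i.e., when the body midline crosses the remembered
    earth-fixed target (trapezoidal integration of angular velocity)."""
    increments = (omega[1:] + omega[:-1]) / 2.0 * np.diff(t)
    angle = np.concatenate(([0.0], np.cumsum(increments)))
    idx = np.argmax(angle >= target_ecc_deg)  # first sample at/after crossing
    return t[idx]
```

On this account, training improves the calibration of the perceived velocity signal feeding the integral, which is why learning can generalize across target eccentricities for the trained rotation direction.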
NASA Astrophysics Data System (ADS)
Wang, Xing; Hill, Thomas L.; Neild, Simon A.; Shaw, Alexander D.; Haddad Khodaparast, Hamed; Friswell, Michael I.
2018-02-01
This paper proposes a model updating strategy for localised nonlinear structures. It utilises an initial finite-element (FE) model of the structure and primary harmonic response data taken from low and high amplitude excitations. The underlying linear part of the FE model is first updated using low-amplitude test data with established techniques. Then, using this linear FE model, the nonlinear elements are localised, characterised, and quantified with primary harmonic response data measured under stepped-sine or swept-sine excitations. Finally, the resulting model is validated by comparing the analytical predictions with both the measured responses used in the updating and with additional test data. The proposed strategy is applied to a clamped beam with a nonlinear mechanism and good agreements between the analytical predictions and measured responses are achieved. Discussions on issues of damping estimation and dealing with data from amplitude-varying force input in the updating process are also provided.
Utilizing Flight Data to Update Aeroelastic Stability Estimates
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
1997-01-01
Stability analysis of high performance aircraft must account for errors in the system model. A method for computing flutter margins that incorporates flight data has been developed using robust stability theory. This paper considers applying this method to update flutter margins during a post-flight or on-line analysis. Areas of modeling uncertainty that arise when using flight data with this method are investigated. The amount of conservatism in the resulting flutter margins depends on the flight data sets used to update the model. Post-flight updates of flutter margins for an F/A-18 are presented along with a simulation of on-line updates during a flight test.
Accomplishments Under the Airport Improvement Program. Fiscal year 1987.
1987-01-01
[Tabular project listing: location and airport name, project number, federal funds, description of work. Recoverable entries include: Portland International Jetport (primary), project 17, $312,300, install runway lighting system; project 1B, $1,481,738, extend apron; Presque Isle, project 06, $99,000, reconstruct …; Pennsylvania: S4, $198,000, conduct state system plan update, Commonwealth of Pennsylvania (continuous) (system plan); R5, $200,000, conduct regional system plan update, Delaware Valley Region (continuous) (system plan).]
An updated whole stand growth and yield system for planted longleaf pine in southwest Georgia
John R. Brooks; Steven B. Jack
2016-01-01
An updated whole stand growth and yield system for planted longleaf pine (Pinus palustris) was developed from permanent plot data collected annually over a 13 to 16 year period. The data set consists of 15 intensively managed longleaf pine plantations that are located in Lee, Worth, Mitchell, and Baker counties in southwest Georgia. Stand survival, dominant height,...
Brooks, Lynette E.
2013-01-01
The U.S. Geological Survey (USGS), in cooperation with the Southern Utah Valley Municipal Water Association, updated an existing USGS model of southern Utah and Goshen Valleys for hydrologic and climatic conditions from 1991 to 2011 and used the model for projection and groundwater management simulations. All model files used in the transient model were updated to be compatible with MODFLOW-2005 and with the additional stress periods. The well and recharge files had the most extensive changes. Discharge to pumping wells in southern Utah and Goshen Valleys was estimated and simulated on an annual basis from 1991 to 2011. Recharge estimates for 1991 to 2011 were included in the updated model by using precipitation, streamflow, canal diversions, and irrigation groundwater withdrawals for each year. The model was evaluated to determine how well it simulates groundwater conditions during recent increased withdrawals and drought, and to determine if the model is adequate for use in future planning. In southern Utah Valley, the magnitude and direction of annual water-level fluctuation simulated by the updated model reasonably match measured water-level changes, but they do not simulate as much decline as was measured in some locations from 2000 to 2002. Both the rapid increase in groundwater withdrawals and the total groundwater withdrawals in southern Utah Valley during this period exceed the variations and magnitudes simulated during the 1949 to 1990 calibration period. It is possible that hydraulic properties may be locally incorrect or that changes, such as land use or irrigation diversions, occurred that are not simulated. In the northern part of Goshen Valley, simulated water-level changes reasonably match measured changes. Farther south, however, simulated declines are much less than measured declines. Land-use changes indicate that groundwater withdrawals in Goshen Valley are possibly greater than estimated and simulated. 
It is also possible that irrigation methods, amount of diversions, or other factors have changed that are not simulated or that aquifer properties are incorrectly simulated. The model can be used for projections about the effects of future groundwater withdrawals and managed aquifer recharge in southern Utah Valley, but rapid changes in withdrawals and dramatic increases in withdrawals may reduce the accuracy of the predicted water-level and groundwater-budget changes. The model should not be used for projections in Goshen Valley until additional withdrawal and discharge data are collected and the model is recalibrated if necessary. Model projections indicate large drawdowns of up to 400 feet and complete cessation of natural discharge in some areas with potential future increases in water use. Simulated managed aquifer recharge counteracts those effects. Groundwater management examples indicate that drawdown could be less, and discharge at selected springs could be greater, with optimized groundwater withdrawals and managed aquifer recharge than without optimization. Recalibration to more recent stresses and seasonal stress periods, and collection of new withdrawal, stream, land-use, and discharge data could improve the model fit to water-level changes and the accuracy of predictions.
Hippocampus-Dependent Goal Localization by Head-Fixed Mice in Virtual Reality.
Sato, Masaaki; Kawano, Masako; Mizuta, Kotaro; Islam, Tanvir; Lee, Min Goo; Hayashi, Yasunori
2017-01-01
The demonstration of the ability of rodents to navigate in virtual reality (VR) has made it an important behavioral paradigm for studying spatially modulated neuronal activity in these animals. However, their behavior in such simulated environments remains poorly understood. Here, we show that encoding and retrieval of goal location memory in mice head-fixed in VR depends on the postsynaptic scaffolding protein Shank2 and the dorsal hippocampus. In our newly developed virtual cued goal location task, a head-fixed mouse moves from one end of a virtual linear track to seek rewards given at a target location along the track. The mouse needs to visually recognize the target location and stay there for a short period of time to receive the reward. Transient pharmacological blockade of fast glutamatergic synaptic transmission in the dorsal hippocampus dramatically and reversibly impaired performance of this task. Encoding and updating of virtual cued goal location memory was impaired in mice deficient in the postsynaptic scaffolding protein Shank2, a mouse model of autism that exhibits impaired spatial learning in a real environment. These results highlight the crucial roles of the dorsal hippocampus and postsynaptic protein complexes in spatial learning and navigation in VR.
Application of Artificial Intelligence for Bridge Deterioration Model.
Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun
2015-01-01
The deterministic bridge deterioration model updating problem is well established in bridge management, but traditional methods and approaches for this problem require manual intervention. This paper presents an artificial-intelligence-based approach that self-updates the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information; this distribution is then used to update the model parameters. The AI-based approach is applied to updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory approach to the problem of parameter updating without manual intervention.
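The Bayesian update described in the abstract can be illustrated with a minimal conjugate normal-normal sketch; the deterioration-rate numbers and the assumption of a known observation variance are hypothetical, not taken from the paper.

```python
# Sketch: conjugate normal-normal Bayesian update of a deterioration-rate
# parameter. The prior summarizes historical inspections; new data refine it.
# All numbers are illustrative, not from the paper.

def bayes_update(prior_mean, prior_var, data, data_var):
    """Posterior mean/variance of a normal mean with known observation variance."""
    n = len(data)
    sample_mean = sum(data) / n
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + n * sample_mean / data_var)
    return post_mean, post_var

prior_mean, prior_var = 0.8, 0.04   # historical deterioration rate and uncertainty
new_obs = [0.70, 0.74, 0.72]        # rates inferred from newly collected data
post_mean, post_var = bayes_update(prior_mean, prior_var, new_obs, data_var=0.01)
print(post_mean, post_var)
```

The posterior shrinks toward the new data as more observations arrive, which is the "self-updating" behavior the paper automates.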
NASA Astrophysics Data System (ADS)
Joyce, C. J.; Schwadron, N. A.; Townsend, L. W.; deWet, W. C.; Wilson, J. K.; Spence, H. E.; Tobiska, W. K.; Shelton-Mur, K.; Yarborough, A.; Harvey, J.; Herbst, A.; Koske-Phillips, A.; Molina, F.; Omondi, S.; Reid, C.; Reid, D.; Shultz, J.; Stephenson, B.; McDevitt, M.; Phillips, T.
2016-09-01
We provide an analysis of the galactic cosmic ray radiation environment of Earth's atmosphere using measurements from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) aboard the Lunar Reconnaissance Orbiter (LRO) together with the Badhwar-O'Neil model and dose lookup tables generated by the Earth-Moon-Mars Radiation Environment Module (EMMREM). This study demonstrates an updated atmospheric radiation model that uses new dose tables to improve the accuracy of the modeled dose rates. Additionally, a method for computing geomagnetic cutoffs is incorporated into the model in order to account for location-dependent effects of the magnetosphere. Newly available measurements of atmospheric dose rates from instruments aboard commercial aircraft and high-altitude balloons enable us to evaluate the accuracy of the model in computing atmospheric dose rates. When compared to the available observations, the model seems to be reasonably accurate in modeling atmospheric radiation levels, overestimating airline dose rates by an average of 20%, which falls within the uncertainty limit recommended by the International Commission on Radiation Units and Measurements (ICRU). Additionally, measurements made aboard high-altitude balloons during simultaneous launches from New Hampshire and California provide an additional comparison to the model. We also find that the newly incorporated geomagnetic cutoff method enables the model to represent radiation variability as a function of location with sufficient accuracy.
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
NASA Astrophysics Data System (ADS)
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibrating it against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
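The ensemble smoother (ES) update step on which the method relies can be sketched as follows; the dimensions, the linear stand-in forward model, and the noise level are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of an ensemble smoother (ES) update, the inversion component used to
# condition model parameters on production data. The linear forward model H,
# the dimensions, and the noise level are illustrative placeholders.

rng = np.random.default_rng(0)
n_state, n_obs, n_ens = 50, 5, 100

X = rng.normal(size=(n_state, n_ens))            # prior ensemble (e.g., log-permeability)
H = rng.normal(size=(n_obs, n_state)) / n_state  # stand-in forward model
Y = H @ X                                        # predicted data for each member
d = rng.normal(size=n_obs)                       # "observed" production data
R = 0.1 * np.eye(n_obs)                          # observation-error covariance

Xm, Ym = X.mean(axis=1, keepdims=True), Y.mean(axis=1, keepdims=True)
Cxy = (X - Xm) @ (Y - Ym).T / (n_ens - 1)        # state-data covariance
Cyy = (Y - Ym) @ (Y - Ym).T / (n_ens - 1)        # data covariance

# Perturbed observations, one realization per ensemble member
D = d[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + Cxy @ np.linalg.solve(Cyy + R, D - Y)   # updated (conditioned) ensemble
print(Xa.shape)
```

In the paper's workflow, the updated continuous fields would then be thresholded or otherwise mapped back to discrete facies at the pilot point locations.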
A Multi-Index Integrated Change detection method for updating the National Land Cover Database
Jin, Suming; Yang, Limin; Xian, George Z.; Danielson, Patrick; Homer, Collin G.
2010-01-01
Land cover change is typically captured by comparing two or more dates of imagery and associating spectral change with true thematic change. A new change detection method, Multi-Index Integrated Change (MIIC), has been developed to capture a full range of land cover disturbance patterns for updating the National Land Cover Database (NLCD). Specific indices typically specialize in identifying only certain types of disturbances; for example, the Normalized Burn Ratio (NBR) has been widely used for monitoring fire disturbance. Recognizing the potential complementary nature of multiple indices, we integrated four indices into one model to more accurately detect true change between two NLCD time periods. The four indices are NBR, Normalized Difference Vegetation Index (NDVI), Change Vector (CV), and a newly developed index called the Relative Change Vector (RCV). The model is designed to provide both change location and change direction (e.g. biomass increase or biomass decrease). The integrated change model has been tested on five image pairs from different regions exhibiting a variety of disturbance types. Compared with a simple change vector method, MIIC can better capture the desired change without introducing additional commission errors. The model is particularly accurate at detecting forest disturbances, such as forest harvest, forest fire, and forest regeneration. Agreement between the initial change map areas derived from MIIC and the retained final land cover type change areas is showcased for the pilot test sites.
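The component indices MIIC integrates are standard band ratios; a toy sketch follows. The band values are made up, and the paper's RCV index is newly defined there and not reproduced here.

```python
import numpy as np

# Sketch of the component indices MIIC integrates (NDVI, NBR, and a change
# vector magnitude). Band values are toy numbers; the paper's RCV index is
# newly defined there and is not reproduced here.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

def change_vector(bands_t1, bands_t2):
    """Euclidean spectral change magnitude between two dates."""
    return np.sqrt(((bands_t2 - bands_t1) ** 2).sum(axis=0))

# toy 2x2 scene at two dates: vegetated, then disturbed
red_a, nir_a, swir_a = (np.full((2, 2), v) for v in (0.10, 0.40, 0.15))
red_b, nir_b, swir_b = (np.full((2, 2), v) for v in (0.20, 0.25, 0.30))

d_ndvi = ndvi(nir_b, red_b) - ndvi(nir_a, red_a)   # negative: biomass decrease
d_nbr = nbr(nir_b, swir_b) - nbr(nir_a, swir_a)    # negative: burn-like change
cv = change_vector(np.stack([red_a, nir_a, swir_a]),
                   np.stack([red_b, nir_b, swir_b]))
print(d_ndvi[0, 0], d_nbr[0, 0], cv[0, 0])
```

The sign of the index differences carries the change direction (biomass increase vs. decrease) that MIIC reports alongside change location.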
NASA Astrophysics Data System (ADS)
Di Stefano, R.; Chiaraluce, L.; Valoroso, L.; Waldhauser, F.; Latorre, D.; Piccinini, D.; Tinti, E.
2014-12-01
The Alto Tiberina Near Fault Observatory (TABOO) in the upper Tiber Valley (northern Appennines) is an INGV research infrastructure devoted to the study of preparatory processes and deformation characteristics of the Alto Tiberina Fault (ATF), a 60 km long, low-angle normal fault active since the Quaternary. The TABOO seismic network, covering an area of 120 × 120 km, consists of 60 permanent surface and 250 m deep borehole stations equipped with 3-component, 0.5 s to 120 s velocimeters, and strong motion sensors. Continuous seismic recordings are transmitted in real-time to the INGV, where we set up an automatic procedure that produces high-resolution earthquake catalogues (location, magnitudes, 1st motion polarities) in near-real-time. A sensitive event detection engine running on the continuous data stream is followed by advanced phase identification, arrival-time picking, and quality assessment algorithms (MPX). Pick weights are determined from a statistical analysis of a set of predictors designed to correctly apply an a-priori chosen weighting scheme. The MPX results are used to routinely update earthquake catalogues based on a variety of (1D and 3D) velocity models and location techniques. We are also applying the DD-RT procedure which uses cross-correlation and double-difference methods in real-time to relocate events with high precision relative to a high-resolution background catalog. P- and S-onset and location information are used to automatically compute focal mechanisms, VP/VS variations in space and time, and periodically update 3D VP and VP/VS tomographic models. We present results from four years of operation, during which this monitoring system analyzed over 1.2 million detections and recovered ~60,000 earthquakes at a detection threshold of ML 0.5. 
The high-resolution information is being used to study changes in seismicity patterns and fault and rock properties along the ATF in space and time, and to elaborate ground shaking scenarios adopting diverse slip distributions and rupture directivity models.
Do we have an internal model of the outside world?
Land, Michael F.
2014-01-01
Our phenomenal world remains stationary in spite of movements of the eyes, head and body. In addition, we can point or turn to objects in the surroundings whether or not they are in the field of view. In this review, I argue that these two features of experience and behaviour are related. The ability to interact with objects we cannot see implies an internal memory model of the surroundings, available to the motor system. And, because we maintain this ability when we move around, the model must be updated, so that the locations of object memories change continuously to provide accurate directional information. The model thus contains an internal representation of both the surroundings and the motions of the head and body: in other words, a stable representation of space. Recent functional MRI studies have provided strong evidence that this egocentric representation has a location in the precuneus, on the medial surface of the superior parietal cortex. This is a region previously identified with ‘self-centred mental imagery’, so it seems likely that the stable egocentric representation, required by the motor system, is also the source of our conscious percept of a stable world. PMID:24395972
Mobile Modelling for Crowdsourcing Building Interior Data
NASA Astrophysics Data System (ADS)
Rosser, J.; Morley, J.; Jackson, M.
2012-06-01
Indoor spatial data forms an important foundation to many ubiquitous computing applications. It gives context to users operating location-based applications, provides an important source of documentation of buildings and can be of value to computer systems where an understanding of environment is required. Unlike external geographic spaces, no centralised body or agency is charged with collecting or maintaining such information. Widespread deployment of mobile devices provides a potential tool that would allow rapid model capture and update by a building's users. Here we introduce some of the issues involved in volunteering building interior data and outline a simple mobile tool for capture of indoor models. The nature of indoor data is inherently private; however in-depth analysis of this issue and legal considerations are not discussed in detail here.
Kalman filter data assimilation: Targeting observations and parameter estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
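The targeting criterion used with the LETKF, observing where the ensemble variance is largest, can be sketched with a synthetic ensemble; the grid size, ensemble size, and the inflated location are illustrative assumptions.

```python
import numpy as np

# Sketch of the targeting criterion: place the observation at the state
# location with the largest ensemble variance. The synthetic ensemble and the
# inflated location are illustrative.

rng = np.random.default_rng(1)
n_grid, n_ens = 40, 20
ensemble = rng.normal(size=(n_grid, n_ens))   # forecast ensemble
ensemble[17] *= 5.0                           # one location with inflated spread

variance = ensemble.var(axis=1, ddof=1)       # ensemble variance per location
target = int(np.argmax(variance))             # observe where uncertainty is largest
print(target)
```

Observing the highest-variance location extracts the most information per observation, which is why the targeted LETKF outperforms randomly placed observations in the study.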
Overview and Evaluation of the Community Multiscale Air ...
The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In late 2016 or early 2017, CMAQ version 5.2 will be released. This new version of CMAQ will contain important updates from the current CMAQv5.1 modeling system, along with several instrumented versions of the model (e.g. decoupled direct method and sulfur tracking). Some specific model updates include the implementation of a new wind-blown dust treatment in CMAQv5.2, a significant improvement over the treatment in v5.1, which can severely overestimate wind-blown dust under certain conditions. Several other major updates to the modeling system include an update to the calculation of aerosols; implementation of full halogen chemistry (CMAQv5.1 contains a partial implementation of halogen chemistry); the new carbon bond 6 (CB6) chemical mechanism; updates to the cloud model in CMAQ; and a new lightning assimilation scheme for the WRF model, which significantly improves the placement and timing of convective precipitation in the WRF precipitation fields. Numerous other updates to the modeling system will also be available in v5.2.
Travel time seismic tomography on Reykjanes, SW Iceland
NASA Astrophysics Data System (ADS)
Jousset, Philippe; Ágústsson, Kristjan; Blanck, Hanna; Metz, Malte; Franke, Steven; Pàll Hersir, Gylfi; Bruhn, David; Flovenz, Ólafur; Friðleifsson, Guðmundur
2017-04-01
We present updated tomographic results obtained using seismic data recorded around geothermal reservoirs located both on-land Reykjanes, SW-Iceland and offshore along Reykjanes Ridge. We gathered records from a network of 234 seismic stations (including 24 Ocean Bottom Seismometers) deployed between April 2014 and August 2015. In order to determine the orientation of the OBS stations, we used Rayleigh wave planar particle motions from large magnitude earthquakes. This method proved suitable using the on-land stations: orientations determined using this method agreed with those measured using a gyro-compass. We focus on the 3D velocity images using local earthquakes to perform travel time tomography. The processing includes first arrival picking of P- and S- phases using an automatic detection and picking technique based on Akaike Information Criteria. We locate earthquakes by using a non-linear localization technique, as a priori information for deriving a 1D velocity model. We then computed a 3D velocity model by joint inversion of each earthquake's location and velocity lateral anomalies with respect to the 1D model. Our model confirms previous models obtained in the area, with enhanced detail. In a second step, we performed inversion of the Vp/Vs ratio. Results indicate a low Vp/Vs ratio anomaly at depth, suggesting the absence of a large magmatic body under Reykjanes, unlike results obtained at other geothermal fields, such as Krafla and Hengill. We discuss implications of those results in the light of recent IDDP drilling in Reykjanes.
Yañez-Arenas, Carlos; Rioja-Nieto, Rodolfo; Martín, Gerardo A; Dzul-Manzanilla, Felipe; Chiappa-Carrara, Xavier; Buenfil-Ávila, Aura; Manrique-Saide, Pablo; Correa-Morales, Fabián; Díaz-Quiñónez, José Alberto; Pérez-Rentería, Crescencio; Ordoñez-Álvarez, José; Vazquez-Prokopec, Gonzalo; Huerta, Herón
2018-01-10
The Asian tiger mosquito, Aedes albopictus (Skuse) (Diptera: Culicidae), is an invasive species and a vector of numerous human pathogens, including chikungunya, dengue, yellow fever, and Zika viruses. This mosquito had been reported from 36 geographic locations in Mexico by 2005, increasing to 101 locations by 2010 and 501 locations (spanning 16 states) by 2016. Here we modeled the occupied niche for Ae. albopictus in Mexico to characterize the environmental conditions related to its presence, and to generate updated environmental suitability maps. The predictors with the greatest contribution to characterizing the occupied niche for Ae. albopictus were NDVI and annual mean temperature. We also estimated the environmental suitability for Ae. albopictus in regions of the country where it has not been documented yet, by means of: 1) transferring its occupied niche model to these regions and 2) modeling its fundamental niche using global data. Our models will help vector control and public health institutions to identify areas where Ae. albopictus has not yet been recorded but where it may be present. We emphasize that most of Mexico has environmental conditions that potentially allow the survival of Ae. albopictus, which underscores the need for systematic mosquito monitoring in all states of the country.
NASA Astrophysics Data System (ADS)
Dannemann, F. K.; Park, J.; Marcillo, O. E.; Blom, P. S.; Stump, B. W.; Hayward, C.
2016-12-01
Data from five infrasound arrays in the western US jointly operated by University of Utah Seismograph Station and Southern Methodist University are used to test a database-centric processing pipeline, InfraPy, for automated event detection, association and location. Infrasonic array data from a one-year time period (January 1, 2012 to December 31, 2012) are used. This study focuses on the identification and location of 53 ground-truth verified events produced from near surface military explosions at the Utah Test and Training Range (UTTR). Signals are detected using an adaptive F-detector, which accounts for correlated and uncorrelated time-varying noise in order to reduce false detections due to the presence of coherent noise. Variations in detection azimuth and correlation are found to be consistent with seasonal changes in atmospheric winds. The Bayesian infrasonic source location (BISL) method is used to produce source location and time credibility contours based on posterior probability density functions. Updates to the previous BISL methodology include the application of celerity range and azimuth deviation distributions in order to accurately account for the spatial and temporal variability of infrasound propagation through the atmosphere. These priors are estimated by ray tracing through Ground-to-Space (G2S) atmospheric models as a function of season and time of day using historic atmospheric characterizations from 2007 to 2013. Out of the 53 events, 31 are successfully located using the InfraPy pipeline. Confidence contour areas for maximum a posteriori event locations produce error estimates which are reduced by a maximum of 98% and an average of 25% relative to location estimates utilizing a simple time-independent uniform atmosphere. 
We compare real-time ray tracing results with the statistical atmospheric priors used in this study to examine large time differences between known origin times and estimated origin times that might be due to the misidentification of infrasonic phases. This work provides an opportunity to improve atmospheric model predictions by understanding atmospheric variability at a station-level.
Scharlau, Ingrid
2002-11-01
Presenting a masked prime leading a target influences the perceived onset of the masking target (perceptual latency priming; Scharlau & Neumann, in press). This priming effect is explained by the asynchronous updating model (Neumann, 1982; Scharlau & Neumann, in press): The prime initiates attentional allocation toward its location, which renders a trailing target at the same place consciously available earlier. In three experiments, this perceptual latency priming by leading primes was examined jointly with the effects of trailing primes in order to compare the explanation of the asynchronous updating model with the onset-averaging and the P-center hypotheses. Experiment 1 showed that an attended, as well as an unattended, prime leads to perceptual latency priming. In addition, a large effect of trailing primes on the onset of a target was found. As Experiment 2 demonstrated, this effect is quite robust, although smaller than that of a leading prime. In Experiment 3, masked primes were used. Under these conditions, no influence of trailing primes could be found, whereas perceptual latency priming persisted. Thus, a nonattentional explanation for the effect of trailing primes seems likely.
SHM-Based Probabilistic Fatigue Life Prediction for Bridges Based on FE Model Updating
Lee, Young-Joo; Cho, Soojin
2016-01-01
Fatigue life prediction for a bridge should be based on the current condition of the bridge, and various sources of uncertainty, such as material properties, anticipated vehicle loads and environmental conditions, make the prediction very challenging. This paper presents a new approach for probabilistic fatigue life prediction for bridges using finite element (FE) model updating based on structural health monitoring (SHM) data. Recently, various types of SHM systems have been used to monitor and evaluate the long-term structural performance of bridges. For example, SHM data can be used to estimate the degradation of an in-service bridge, which makes it possible to update the initial FE model. The proposed method consists of three steps: (1) identifying the modal properties of a bridge, such as mode shapes and natural frequencies, based on the ambient vibration under passing vehicles; (2) updating the structural parameters of an initial FE model using the identified modal properties; and (3) predicting the probabilistic fatigue life using the updated FE model. The proposed method is demonstrated by application to a numerical model of a bridge, and the impact of FE model updating on the bridge fatigue life is discussed. PMID:26950125
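Step (2) of the proposed procedure, updating structural parameters so the FE model reproduces the identified modal properties, can be reduced to a single-DOF surrogate for illustration. The mass, stiffness, and measured frequency below are hypothetical; practical updating adjusts many parameters against several modes and mode shapes.

```python
import numpy as np

# Sketch of FE model updating against an identified modal property, reduced to
# a single-DOF surrogate: scale the stiffness so the model's natural frequency
# matches the identified one. All values are hypothetical.

m = 1000.0          # modal mass (kg)
k0 = 4.0e6          # initial modal stiffness (N/m)
f_measured = 9.5    # natural frequency identified from ambient vibration (Hz)

w_meas = 2.0 * np.pi * f_measured
alpha = w_meas**2 * m / k0        # closed-form stiffness scale for one DOF
k_updated = alpha * k0

f_model = np.sqrt(k_updated / m) / (2.0 * np.pi)  # updated model frequency
print(f_model)
```

With many parameters and modes, the closed-form scale is replaced by an iterative sensitivity-based least-squares fit, but the objective is the same: match the model's modal properties to the SHM-identified ones.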
Normal response function method for mass and stiffness matrix updating using complex FRFs
NASA Astrophysics Data System (ADS)
Pradhan, S.; Modak, S. V.
2012-10-01
Quite often a structural dynamic finite element model is required to be updated so as to accurately predict dynamic characteristics like the natural frequencies and the mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of mass and stiffness matrices. However, the problem with FRF based methods, for updating mass and stiffness matrices, is that these methods are based on the use of complex FRFs. Use of complex FRFs to update mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where updating of only mass and stiffness matrices using FRFs is required, a complex-FRF-based updating formulation is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method that is based on the complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and the robustness of the method in the presence of noise are investigated. The results of updating obtained by the improved method are compared with those of the existing response function method. The performance of the two approaches is compared for cases of lightly, moderately, and heavily damped structures. 
It is found that the proposed improved method is effective in updating of mass and stiffness matrices in all the cases of complete and incomplete data and with all levels and types of damping.
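The distinction the paper builds on, that the normal (undamped) FRF depends only on the mass and stiffness matrices, can be sketched for a 2-DOF system; the matrices and excitation frequency below are illustrative.

```python
import numpy as np

# Sketch: the normal (undamped) FRF matrix H(w) = inv(K - w^2 M) depends only
# on mass and stiffness, unlike the complex FRF of a damped system. The 2-DOF
# matrices and the excitation frequency are illustrative.

M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0],
              [-1000.0, 2000.0]])

def normal_frf(w):
    """Receptance matrix of the undamped system at circular frequency w."""
    return np.linalg.inv(K - w**2 * M)

w = 20.0                      # rad/s, away from the natural frequencies
H = normal_frf(w)             # purely real and symmetric
print(H)
```

Because H here is real, an updating scheme posed in terms of normal FRFs isolates the mass and stiffness matrices from damping effects, which is the motivation for the proposed method.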
Tangle-Free Finite Element Mesh Motion for Ablation Problems
NASA Technical Reports Server (NTRS)
Droba, Justin
2016-01-01
Mesh motion is the process by which a computational domain is updated in time to reflect physical changes in the material the domain represents. Such a technique is needed in the study of the thermal response of ablative materials, which erode when strong heating is applied to the boundary. Traditionally, the thermal solver is coupled with a linear elastic or biharmonic system whose sole purpose is to update mesh node locations in response to altering boundary heating. Simple mesh motion algorithms rely on boundary surface normals. In such schemes, evolution in time will eventually cause the mesh to intersect and "tangle" with itself, causing failure. Furthermore, such schemes are greatly limited in the problem geometries on which they will be successful. This paper presents a comprehensive and sophisticated scheme that tailors the directions of motion based on context. By choosing directions for each node smartly, the inevitable tangle can be completely avoided and mesh motion on complex geometries can be modeled accurately.
75 FR 3392 - Outer Continental Shelf Air Regulations Consistency Update for Alaska
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-21
... 40 CFR part 55,\\1\\ which established requirements to control air pollution from OCS sources in order... air pollution from OCS sources located within 25 miles of States' seaward boundaries that are the same... the Act requires that EPA establish requirements to control air pollution from OCS sources located...
Frequency Response Function Based Damage Identification for Aerospace Structures
NASA Astrophysics Data System (ADS)
Oliver, Joseph Acton
Structural health monitoring technologies continue to be pursued for aerospace structures in the interests of increased safety and, when combined with health prognosis, efficiency in life-cycle management. The current dissertation develops and validates damage identification technology as a critical component for structural health monitoring of aerospace structures and, in particular, composite unmanned aerial vehicles. The primary innovation is a statistical least-squares damage identification algorithm based in concepts of parameter estimation and model update. The algorithm uses frequency response function based residual force vectors derived from distributed vibration measurements to update a structural finite element model through statistically weighted least-squares minimization producing location and quantification of the damage, estimation uncertainty, and an updated model. Advantages compared to other approaches include robust applicability to systems which are heavily damped, large, and noisy, with a relatively low number of distributed measurement points compared to the number of analytical degrees-of-freedom of an associated analytical structural model (e.g., modal finite element model). Motivation, research objectives, and a dissertation summary are discussed in Chapter 1 followed by a literature review in Chapter 2. Chapter 3 gives background theory and the damage identification algorithm derivation followed by a study of fundamental algorithm behavior on a two degree-of-freedom mass-spring system with generalized damping. Chapter 4 investigates the impact of noise then successfully proves the algorithm against competing methods using an analytical eight degree-of-freedom mass-spring system with non-proportional structural damping. 
Chapter 5 extends use of the algorithm to finite element models, including solutions for numerical issues, approaches for modeling damping approximately in reduced coordinates, and analytical validation using a composite sandwich plate model. Chapter 6 presents the final extension to experimental systems-including methods for initial baseline correlation and data reduction-and validates the algorithm on an experimental composite plate with impact damage. The final chapter deviates from development and validation of the primary algorithm to discuss development of an experimental scaled-wing test bed as part of a collaborative effort for developing structural health monitoring and prognosis technology. The dissertation concludes with an overview of technical conclusions and recommendations for future work.
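The statistically weighted least-squares minimization at the core of such an algorithm can be sketched as a Gauss-Newton step on a residual vector. This is a minimal illustration, not the dissertation's actual implementation: the function names, the generic residual/Jacobian interface, and the optional regularization term are all assumptions; in the actual method the residuals would be the FRF-based residual force vectors described above.

```python
import numpy as np

def weighted_least_squares_update(residual, jacobian, theta, W, reg=0.0):
    """One Gauss-Newton step of statistically weighted least-squares:
    minimize r(theta)^T W r(theta) over model parameters theta.
    Returns the updated parameters and their covariance (the
    estimation uncertainty that such algorithms report)."""
    J = jacobian(theta)      # sensitivity of residuals to parameters
    r = residual(theta)
    A = J.T @ W @ J + reg * np.eye(len(theta))
    delta = np.linalg.solve(A, -(J.T @ W @ r))
    cov = np.linalg.inv(A)   # approximate parameter covariance
    return theta + delta, cov
```

For a residual that is linear in the parameters, a single step of this update reproduces the weighted normal-equations solution; nonlinear finite element sensitivities would require iterating it.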
Numerical modeling and model updating for smart laminated structures with viscoelastic damping
NASA Astrophysics Data System (ADS)
Lu, Jun; Zhan, Zhenfei; Liu, Xu; Wang, Pan
2018-07-01
This paper presents a numerical modeling method combined with model updating techniques for the analysis of smart laminated structures with viscoelastic damping. Starting with finite element formulation, the dynamics model with piezoelectric actuators is derived based on the constitutive law of the multilayer plate structure. The frequency-dependent characteristics of the viscoelastic core are represented utilizing the anelastic displacement fields (ADF) parametric model in the time domain. The analytical model is validated experimentally and used to analyze the influencing factors of kinetic parameters under parametric variations. Emphasis is placed upon model updating for smart laminated structures to improve the accuracy of the numerical model. Key design variables are selected through the smoothing spline ANOVA statistical technique to mitigate the computational cost. This updating strategy not only corrects the natural frequencies but also improves the accuracy of damping prediction. The effectiveness of the approach is examined through an application problem of a smart laminated plate. It is shown that a good consistency can be achieved between updated results and measurements. The proposed method is computationally efficient.
Adapting to change: The role of the right hemisphere in mental model building and updating.
Filipowicz, Alex; Anderson, Britt; Danckert, James
2016-09-01
We recently proposed that the right hemisphere plays a crucial role in the processes underlying mental model building and updating. Here, we review the evidence we and others have garnered to support this novel account of right hemisphere function. We begin by presenting evidence from patient work that suggests a critical role for the right hemisphere in the ability to learn from the statistics in the environment (model building) and adapt to environmental change (model updating). We then provide a review of neuroimaging research that highlights a network of brain regions involved in mental model updating. Next, we outline specific roles for particular regions within the network such that the anterior insula is purported to maintain the current model of the environment, the medial prefrontal cortex determines when to explore new or alternative models, and the inferior parietal lobule represents salient and surprising information with respect to the current model. We conclude by proposing some future directions that address some of the outstanding questions in the field of mental model building and updating. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Efficient Synthesis of Network Updates
2015-06-17
The model includes switches S_i, links L_j, and a single controller element C, and a network N is a tuple containing these. Each switch S_i is encoded as a... and the ports they should be forwarded to, respectively. Each link L_j is represented by a record consisting of two locations loc and loc' and a list... the union of multisets m1 and m2. We write [x] for a singleton list, and l1@l2 for the concatenation of l1 and l2. Each transition N -> N' is annotated
1980-07-01
Figure 8. Map of Aircraft Line Sources at JFK. Table 8. Summary of Aircraft Emissions for Hour 19 at JFK Airport, in 10^3 lbs by location: Runways (CO 0.08, THC 0.05, NOx 0.52), Taxiways (CO 3.94, THC 2.30, NOx 0.15), Queue (CO 1.21, THC 0.64, NOx 0.05), Terminal (CO 0.60, THC 0.28, NOx 0.04), Total on
NASA Technical Reports Server (NTRS)
Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian
2008-01-01
The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.
Multi-Hazard Assessment of Scour Damaged Bridges with UAS-Based Measurements
NASA Astrophysics Data System (ADS)
Özcan, O.; Ozcan, O.
2017-12-01
Flood- and stream-induced scour at bridge piers constructed on rivers is one of the most commonly observed causes of bridge failure. Assessing scour-induced failure risk and determining how bridge safety changes under seismic effects is therefore of critical importance. To determine bridge safety under scour effects, the scour depth beneath bridge piers should be estimated realistically and should be tracked and updated continuously. In this way, scour-induced failures of bridge foundation systems can be prevented and bridge substructure design can be conducted safely. In this study, unmanned aircraft system (UAS) based measurement methods were implemented to measure the amount of scour in the bridge load-bearing system (pile foundations and pile abutments) and to obtain very high definition three-dimensional models of the river flood plain for flood analysis. UAS-based measurement provides a new and practical approach, offering high precision and reliable solutions compared with conventional measurement systems. The reinforced concrete (RC) bridge located on the Antalya Boğaçayı River, Turkey, which failed in 2003 due to flood-induced scour, was selected as the case study. The amount of scour at the bridge piers and piles was determined realistically, and the behavior of the bridge piers under scour effects was investigated. Future flood effects and the resultant scour were determined with HEC-RAS software, using digital surface models of the riverbed obtained at regular intervals by UAS. In light of the measured scour and the scour expected after a probable flood event, the behavior of the scour-damaged RC bridge was investigated by pushover and time-history analyses under lateral and vertical seismic loadings. In the analyses, the load and displacement capacity of the bridge was observed to diminish significantly under the expected scour.
Thus, the deterioration in the multi-hazard performance of the bridge was monitored using the updated capacity of the bridge load-bearing system. Based on the case study, a UAS-based, continuously updated multi-hazard risk detection system was established that can be used for bridges located on riverbeds.
Artificial Boundary Conditions for Finite Element Model Update and Damage Detection
2017-03-01
ARTIFICIAL BOUNDARY CONDITIONS FOR FINITE ELEMENT MODEL UPDATE AND DAMAGE DETECTION, by Emmanouil Damanakis, March 2017. Thesis Advisor: Joshua H. Gordis. Master's thesis; approved for public release, distribution unlimited. Abstract (excerpt): In structural engineering, a finite element model is often
An Investigation Into the Effects of Frequency Response Function Estimators on Model Updating
NASA Astrophysics Data System (ADS)
Ratcliffe, M. J.; Lieven, N. A. J.
1999-03-01
Model updating is a very active research field, in which significant effort has been invested in recent years. Model updating methodologies are invariably successful when used on noise-free simulated data, but tend to be unpredictable when presented with real experimental data that are—unavoidably—corrupted with uncorrelated noise content. In the development and validation of model-updating strategies, a random zero-mean Gaussian variable is added to simulated test data to tax the updating routines more fully. This paper proposes a more sophisticated model for experimental measurement noise, and this is used in conjunction with several different frequency response function estimators, from the classical H1 and H2 to more refined estimators that purport to be unbiased. Finite-element model case studies, in conjunction with a genuine experimental test, suggest that the proposed noise model is a more realistic representation of experimental noise phenomena. The choice of estimator is shown to have a significant influence on the viability of the FRF sensitivity method. These test cases find that the use of the H2 estimator for model updating purposes is contraindicated, and that there is no advantage to be gained by using the sophisticated estimators over the classical H1 estimator.
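The classical H1 and H2 estimators compared above can be written down in a few lines. The sketch below is a textbook segment-averaged implementation, not the paper's code; the function name and the simple non-overlapping segmentation are assumptions (practical estimators typically add windowing and overlap). H1 = Sxy/Sxx is unbiased with respect to output noise, H2 = Syy/Syx with respect to input noise, and their ratio is the coherence.

```python
import numpy as np

def frf_estimates(x, y, nseg, nfft):
    """Classical H1 and H2 FRF estimators from input x and output y.
    Spectra are averaged over nseg non-overlapping segments of
    length nfft (no windowing, for brevity)."""
    X = np.fft.rfft(x[:nseg * nfft].reshape(nseg, nfft), axis=1)
    Y = np.fft.rfft(y[:nseg * nfft].reshape(nseg, nfft), axis=1)
    Sxx = np.mean(np.conj(X) * X, axis=0).real   # input auto-spectrum
    Syy = np.mean(np.conj(Y) * Y, axis=0).real   # output auto-spectrum
    Sxy = np.mean(np.conj(X) * Y, axis=0)        # cross-spectrum
    H1 = Sxy / Sxx                # biased down by output noise only
    H2 = Syy / np.conj(Sxy)       # biased up by input noise only
    coherence = np.abs(Sxy) ** 2 / (Sxx * Syy)   # H1/H2 agreement
    return H1, H2, coherence
```

With noise on both channels, H1 and H2 bracket the true FRF, which is one reason the choice of estimator matters for sensitivity-based updating.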
Adaptation of clinical prediction models for application in local settings.
Kappen, Teus H; Vergouwe, Yvonne; van Klei, Wilton A; van Wolfswinkel, Leo; Kalkman, Cor J; Moons, Karel G M
2012-01-01
When planning to use a validated prediction model in new patients, adequate performance is not guaranteed. For example, changes in clinical practice over time or a different case mix than the original validation population may result in inaccurate risk predictions. To demonstrate how clinical information can direct updating a prediction model and development of a strategy for handling missing predictor values in clinical practice. A previously derived and validated prediction model for postoperative nausea and vomiting was updated using a data set of 1847 patients. The update consisted of 1) changing the definition of an existing predictor, 2) reestimating the regression coefficient of a predictor, and 3) adding a new predictor to the model. The updated model was then validated in a new series of 3822 patients. Furthermore, several imputation models were considered to handle real-time missing values, so that possible missing predictor values could be anticipated during actual model use. Differences in clinical practice between our local population and the original derivation population guided the update strategy of the prediction model. The predictive accuracy of the updated model was better (c statistic, 0.68; calibration slope, 1.0) than the original model (c statistic, 0.62; calibration slope, 0.57). Inclusion of logistical variables in the imputation models, besides observed patient characteristics, contributed to a strategy to deal with missing predictor values at the time of risk calculation. Extensive knowledge of local, clinical processes provides crucial information to guide the process of adapting a prediction model to new clinical practices.
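One common ingredient of such an update, re-estimating coefficients against local data, can be illustrated with logistic recalibration: refitting an intercept and a calibration slope on the original model's linear predictor. This is a generic sketch under assumed interfaces, not the authors' actual update of the nausea-and-vomiting model; the function name and the Newton-Raphson fitting loop are illustrative choices.

```python
import numpy as np

def recalibrate(lp, y, iters=50):
    """Refit intercept a and calibration slope b of an existing model:
    P(y=1) = sigmoid(a + b * lp), where lp is the original model's
    linear predictor evaluated on the new (local) patients.
    A fitted slope near 1 indicates good calibration."""
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.array([0.0, 1.0])          # start at "no update needed"
    for _ in range(iters):               # Newton-Raphson for logistic MLE
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        w = p * (1.0 - p)
        grad = X.T @ (y - p)
        hess = X.T @ (X * w[:, None])
        beta = beta + np.linalg.solve(hess, grad)
    return beta                          # [updated intercept, slope]
```

A calibration slope well below 1 on local data (as with the original model's 0.57 above) is exactly the signal that an update of this kind, or a fuller revision, is warranted.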
Interface Message Processors for the ARPA Computer Network
1976-07-01
and then clear the location) as its primitive locking facility (i.e., as the necessary multiprocessor lock equivalent to Dijkstra semaphores) [37]. To... of the extra storage required for the redundant copies. There is the problem of maintaining synchronization of multiple-copy data bases in the presence... through any of the data base sites. Update synchronization: races between conflicting, "concurrent" update requests are resolved in a manner that
A review and update of the Virginia Department of Transportation cash flow forecasting model.
DOT National Transportation Integrated Search
1996-01-01
This report details the research done to review and update components of the VDOT cash flow forecasting model. Specifically, the study updated the monthly factors submodel used to predict payments on construction contracts. For the other submodel rev...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayes, T.; Smith, K.S.; Severino, F.
A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
Model Refinement and Simulation of Groundwater Flow in Clinton, Eaton, and Ingham Counties, Michigan
Luukkonen, Carol L.
2010-01-01
A groundwater-flow model that was constructed in 1996 of the Saginaw aquifer was refined to better represent the regional hydrologic system in the Tri-County region, which consists of Clinton, Eaton, and Ingham Counties, Michigan. With increasing demand for groundwater, the need to manage withdrawals from the Saginaw aquifer has become more important, and the 1996 model could not adequately address issues of water quality and quantity. An updated model was needed to better address potential effects of drought, locally high water demands, reduction of recharge by impervious surfaces, and issues affecting water quality, such as contaminant sources, on water resources and the selection of pumping rates and locations. The refinement of the groundwater-flow model allows simulations to address these issues of water quantity and quality and provides communities with a tool that will enable them to better plan for expansion and protection of their groundwater-supply systems. Model refinement included representation of the system under steady-state and transient conditions, adjustments to the estimated regional groundwater-recharge rates to account for both temporal and spatial differences, adjustments to the representation and hydraulic characteristics of the glacial deposits and Saginaw Formation, and updates to groundwater-withdrawal rates to reflect changes from the early 1900s to 2005. Simulations included steady-state conditions (in which stresses remained constant and changes in storage were not included) and transient conditions (in which stresses changed in annual and monthly time scales and changes in storage within the system were included). These simulations included investigation of the potential effects of reduced recharge due to impervious areas or to low-rainfall/drought conditions, delineation of contributing areas with recent pumping rates, and optimization of pumping subject to various quantity and quality constraints. 
Simulation results indicate potential declines in water levels in both the upper glacial aquifer and the upper sandstone bedrock aquifer under steady-state and transient conditions when recharge was reduced by 20 and 50 percent in urban areas. Transient simulations were done to investigate reduced recharge due to low rainfall and increased pumping to meet anticipated future demand with 24 months (2 years) of modified recharge or modified recharge and pumping rates. During these two simulation years, monthly recharge rates were reduced by about 30 percent, and monthly withdrawal rates for Lansing area production wells were increased by 15 percent. The reduction in the amount of water available to recharge the groundwater system affects the upper model layers representing the glacial aquifers more than the deeper bedrock layers. However, with a reduction in recharge and an increase in withdrawals from the bedrock aquifer, water levels in the bedrock layers are affected more than those in the glacial layers. Differences in water levels between simulations with reduced recharge and reduced recharge with increased pumping are greatest in the Lansing area and least away from pumping centers, as expected. Additionally, the increases in pumping rates had minimal effect on most simulated streamflows. Additional simulations included updating the estimated 10-year wellhead-contributing areas for selected Lansing-area wells under 2006-7 pumping conditions. Optimization of groundwater withdrawals with a water-resource management model was done to determine withdrawal rates while minimizing operational costs and to determine withdrawal locations to achieve additional capacity while meeting specified head constraints. 
In these optimization scenarios, the desired groundwater withdrawals are achieved by simulating managed wells (where pumping rates can be optimized) and unmanaged wells (where pumping rates are not optimized) and by using various combinations of existing and proposed well locations.
General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagwell, L.; Bennett, P.; Flach, G.
2017-02-21
This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).
Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W
2010-07-01
Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. 
When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but neither improved performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency, possibly related to vapor pressure effects on canopy conductance.
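The EnKF measurement update that drives this filtering, including its ability to update unobserved state variables through the ensemble covariance, can be sketched as follows. This is a generic stochastic (perturbed-observation) EnKF for a scalar measurement, not the authors' NEE-specific filter; the function signature and the linear observation operator h are assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, h, rng):
    """One EnKF measurement update for a scalar observation.
    ensemble: (n_members, n_state) forecast states; h: linear
    observation operator mapping state to the measured quantity.
    Covariance among state variables lets the update adjust states
    (e.g., leaf area) for which there is no direct measurement."""
    Hx = ensemble @ h                  # each member's predicted measurement
    x_mean = ensemble.mean(axis=0)
    y_mean = Hx.mean()
    # cross-covariance state/measurement and total measurement variance
    P_xy = ((ensemble - x_mean).T @ (Hx - y_mean)) / (len(Hx) - 1)
    P_yy = Hx.var(ddof=1) + obs_var
    K = P_xy / P_yy                    # Kalman gain (n_state,)
    # perturbed observations keep the posterior ensemble spread correct
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=len(Hx))
    return ensemble + np.outer(perturbed - Hx, K)
```

Relative weighting of measurement and model is carried entirely by obs_var and the ensemble spread, which is why adaptive noise estimation (inflating that spread when model-data deviations persist) changes filter behavior.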
Dynamic analysis of I cross beam section dissimilar plate joined by TIG welding
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Rani, M. N. Abdul; Yunus, M. A.
2018-04-01
In this paper, a finite element (FE) joint modelling technique for prediction of the dynamic properties of sheet metal joined by tungsten inert gas (TIG) welding is presented. An I cross-section dissimilar flat plate with two series of aluminium alloy, AA7075 and AA6061, joined by TIG welding is used. In order to find the most optimum set of TIG-welded dissimilar plate, three types of joint modelling were employed in this study: bar element (CBAR), beam element, and spot weld element connector (CWELD). Experimental modal analysis (EMA) was carried out by impact hammer excitation on the dissimilar plates welded by the TIG method. Modal properties of the FE models with joints were compared and validated against the modal testing. The CWELD element was chosen to represent the weld model for the TIG joints due to its accurate prediction of mode shapes and because it contains an updating parameter for weld modelling, in contrast to the other weld models. Model updating was performed to improve correlation between EMA and FEA; before proceeding to updating, sensitivity analysis was done to select the most sensitive updating parameters. After model updating, the average percentage error of the natural frequencies for the CWELD model improved significantly.
Test and analysis procedures for updating math models of Space Shuttle payloads
NASA Technical Reports Server (NTRS)
Craig, Roy R., Jr.
1991-01-01
Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed for obtaining data for use in verifying payload math models and for carrying out the updating of the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.
Build-up Approach to Updating the Mock Quiet Spike(TradeMark) Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine tuning specific stiffness parameters until the analytical results matched test data. This is a time consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike(TradeMark) (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike(TradeMark) project to within 1 percent error in frequency and the modal assurance criteria values ranged from 88.51-99.42 percent.
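The modal assurance criteria values quoted above come from a standard correlation measure between analytical and test mode shapes. The following is a generic textbook implementation of the modal assurance criterion (MAC), not the NASA Dryden mode matching code itself; the function name and matrix layout (mode shapes as columns) are assumptions.

```python
import numpy as np

def mac(phi_a, phi_t):
    """Modal assurance criterion between two sets of mode shapes.
    phi_a, phi_t: arrays whose columns are analytical and test mode
    shapes. Returns a matrix; entries near 1 identify well-correlated
    mode pairs (the 88.51-99.42 percent range quoted above)."""
    num = np.abs(phi_a.conj().T @ phi_t) ** 2
    den = np.outer(np.sum(np.abs(phi_a) ** 2, axis=0),
                   np.sum(np.abs(phi_t) ** 2, axis=0))
    return num / den
```

In an update loop of the kind described, the MAC matrix pairs analytical modes with test modes so that frequency errors are computed between the right pairs before stiffness parameters are adjusted.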
Updates to the Demographic and Spatial Allocation Models to ...
EPA announced the availability of the draft report, Updates to the Demographic and Spatial Allocation Models to Produce Integrated Climate and Land Use Scenarios (ICLUS), for a 30-day public comment period. The ICLUS version 2 (v2) modeling tool advanced land change modeling by providing nationwide housing development scenarios up to 2100. ICLUS v2 includes updated population and land use data sets and addresses limitations identified in ICLUS v1 in both the migration and spatial allocation models. The companion user guide describes the development of ICLUS v2 and the updates that were made to the original data sets and the demographic and spatial allocation models. [2017 UPDATE] Get the latest version of ICLUS and stay up-to-date by signing up to the ICLUS mailing list. The GIS tool enables users to run SERGoM with the population projections developed for the ICLUS project and allows users to modify the spatial allocation of housing density across the landscape.
Lee, A J; Cunningham, A P; Kuchenbaecker, K B; Mavaddat, N; Easton, D F; Antoniou, A C
2014-01-01
Background: The Breast and Ovarian Analysis of Disease Incidence and Carrier Estimation Algorithm (BOADICEA) is a risk prediction model that is used to compute probabilities of carrying mutations in the high-risk breast and ovarian cancer susceptibility genes BRCA1 and BRCA2, and to estimate the future risks of developing breast or ovarian cancer. In this paper, we describe updates to the BOADICEA model that extend its capabilities, make it easier to use in a clinical setting and yield more accurate predictions. Methods: We describe: (1) updates to the statistical model to include cancer incidences from multiple populations; (2) updates to the distributions of tumour pathology characteristics using new data on BRCA1 and BRCA2 mutation carriers and women with breast cancer from the general population; (3) improvements to the computational efficiency of the algorithm so that risk calculations now run substantially faster; and (4) updates to the model's web interface to accommodate these new features and to make it easier to use in a clinical setting. Results: We present results derived using the updated model, and demonstrate that the changes have a significant impact on risk predictions. Conclusion: All updates have been implemented in a new version of the BOADICEA web interface that is now available for general use: http://ccge.medschl.cam.ac.uk/boadicea/. PMID:24346285
Earthquakes in the Central United States, 1699-2010
Dart, Richard L.; Volpi, Christina M.
2010-01-01
This publication is an update of an earlier report, U.S. Geological Survey (USGS) Geologic Investigation I-2812 by Wheeler and others (2003), titled "Earthquakes in the Central United States-1699-2002." Like the original poster, the center of the updated poster is a map showing the pattern of earthquake locations in the most seismically active part of the central United States. Arrayed around the map are short explanatory texts and graphics, which describe the distribution of historical earthquakes and the effects of the most notable of them. The updated poster contains additional, post-2002, earthquake data. These are 38 earthquakes covering the time interval from January 2003 to June 2010, including the Mount Carmel, Illinois, earthquake of 2008. The USGS Preliminary Determination of Epicenters (PDE) was the source of these additional data. Like the I-2812 poster, this poster was prepared for a nontechnical audience and designed to inform the general public as to the widespread occurrence of felt and damaging earthquakes in the Central United States. Accordingly, the poster should not be used to assess earthquake hazard in small areas or at individual locations.
NASA Technical Reports Server (NTRS)
Connell, Andrea M.
2011-01-01
The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.
Geometric database maintenance using CCTV cameras and overlay graphics
NASA Astrophysics Data System (ADS)
Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin
1988-01-01
An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
Dissociable effects of surprise and model update in parietal and anterior cingulate cortex
O’Reilly, Jill X.; Schüffelgen, Urs; Cuell, Steven F.; Behrens, Timothy E. J.; Mars, Rogier B.; Rushworth, Matthew F. S.
2013-01-01
Brains use predictive models to facilitate the processing of expected stimuli or planned actions. Under a predictive model, surprising (low probability) stimuli or actions necessitate the immediate reallocation of processing resources, but they can also signal the need to update the underlying predictive model to reflect changes in the environment. Surprise and updating are often correlated in experimental paradigms but are, in fact, distinct constructs that can be formally defined as the Shannon information (I_S) and Kullback–Leibler divergence (D_KL) associated with an observation. In a saccadic planning task, we observed that distinct behaviors and brain regions are associated with surprise/I_S and updating/D_KL. Although surprise/I_S was associated with behavioral reprogramming as indexed by slower reaction times, as well as with activity in the posterior parietal cortex [human lateral intraparietal area (LIP)], the anterior cingulate cortex (ACC) was specifically activated during updating of the predictive model (D_KL). A second saccade-sensitive region in the inferior posterior parietal cortex (human 7a), which has connections to both LIP and ACC, was activated by surprise and modulated by updating. Pupillometry revealed a further dissociation between surprise and updating, with an early positive effect of surprise and a late negative effect of updating on pupil area. These results give a computational account of the roles of the ACC and two parietal saccade regions, LIP and 7a, by which their involvement in diverse tasks can be understood mechanistically. The dissociation of functional roles between regions within the reorienting/reprogramming network may also inform models of neurological phenomena such as extinction, Balint syndrome, and neglect. PMID:23986499
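The two constructs the study dissociates can be computed directly for a toy discrete belief. The sketch below (invented probabilities, not the study's saccade task) shows surprise as the Shannon information of an observed outcome and model update as the KL divergence between prior and posterior beliefs:

```python
import math

# Surprise: Shannon information of a single observed outcome.
def shannon_information(p_outcome):
    return -math.log2(p_outcome)

# Update: Kullback-Leibler divergence from prior to posterior belief.
def kl_divergence(posterior, prior):
    return sum(q * math.log2(q / p) for q, p in zip(posterior, prior) if q > 0)

prior = [0.8, 0.2]   # toy belief: target appears left vs. right

# A rightward target is surprising under this prior ...
surprise = shannon_information(prior[1])

# ... and, if it signals a changed environment, the belief is revised.
posterior = [0.4, 0.6]
update = kl_divergence(posterior, prior)

print(surprise, update)
```

A highly surprising observation need not produce a large update (e.g., if it is attributed to noise), which is exactly why the two quantities can dissociate behaviorally and neurally.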
Cleanups In My Community (CIMC) - Removals/Responses, National Layer
This data layer provides access to Removal/Response sites as part of the CIMC web service. Removals are hazardous substance releases that require immediate or short-term response actions. These are generally addressed under the Emergency Response program and are initially tracked centrally by the federal government's National Reporting Center. Cleanups in My Community maps and lists removals that are included in EPA's epaosc.org site, and provides direct links to information on these sites. CIMC obtains updated removal data through a web service from epaosc.org just before the 18th of each month. The CIMC web service was initially published in 2013, but the data are updated on the 18th of each month. The full schedule for data updates in CIMC is located here: http://iaspub.epa.gov/enviro/data_update_v2.
Chemical transport model simulations of organic aerosol in ...
Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., POA–SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data
Urbinello, Damiano; Röösli, Martin
2013-01-01
When moving around, mobile phones in stand-by mode periodically send data about their positions. The aim of this paper is to evaluate how personal radiofrequency electromagnetic field (RF-EMF) measurements are affected by such location updates. Exposure from a mobile phone handset (uplink) was measured during commuting by using a randomized cross-over study with three different scenarios: a disabled mobile phone (reference), an activated dual-band phone, and a quad-band phone. In the reference scenario, uplink exposure was highest during train rides (1.19 mW/m(2)) and lowest during car rides in rural areas (0.001 mW/m(2)). In public transport, the impact of one's own mobile phone on personal RF-EMF measurements was not observable because of high background uplink radiation from other people's mobile phones. In a car, uplink exposure with an activated phone was orders of magnitude higher compared with the reference scenario. This study demonstrates that personal RF-EMF exposure is affected by one's own mobile phone in stand-by mode because of its regular location updates. Further dosimetric studies should quantify the contribution of location updates to the total RF-EMF exposure in order to clarify whether the duration of mobile phone use, the most common exposure surrogate in epidemiological RF-EMF research, is actually an adequate exposure proxy.
UCERF3: A new earthquake forecast for California's complex fault system
Field, Edward H.; ,
2015-01-01
With innovations, fresh data, and lessons learned from recent earthquakes, scientists have developed a new earthquake forecast model for California, a region under constant threat from potentially damaging events. The new model, referred to as the third Uniform California Earthquake Rupture Forecast, or UCERF3 (http://www.WGCEP.org/UCERF3), provides authoritative estimates of the magnitude, location, and likelihood of earthquake fault rupture throughout the state. Overall, the results confirm previous findings, but with some significant changes because of model improvements. For example, compared to the previous forecast (Uniform California Earthquake Rupture Forecast 2), the likelihood of moderate-sized earthquakes (magnitude 6.5 to 7.5) is lower, whereas that of larger events is higher. This is because of the inclusion of multifault ruptures, where earthquakes are no longer confined to separate, individual faults, but can occasionally rupture multiple faults simultaneously. The public-safety implications of this and other model improvements depend on several factors, including site location and type of structure (for example, a family dwelling compared to a long-span bridge). Building codes, earthquake insurance products, emergency plans, and other risk-mitigation efforts will be updated accordingly. This model also serves as a reminder that damaging earthquakes are inevitable for California. Fortunately, there are many simple steps residents can take to protect lives and property.
NASA Astrophysics Data System (ADS)
Yu, Liuqian; Fennel, Katja; Bertino, Laurent; Gharamti, Mohamad El; Thompson, Keith R.
2018-06-01
Effective data assimilation methods for incorporating observations into marine biogeochemical models are required to improve hindcasts, nowcasts and forecasts of the ocean's biogeochemical state. Recent assimilation efforts have shown that updating model physics alone can degrade biogeochemical fields while only updating biogeochemical variables may not improve a model's predictive skill when the physical fields are inaccurate. Here we systematically investigate whether multivariate updates of physical and biogeochemical model states are superior to only updating either physical or biogeochemical variables. We conducted a series of twin experiments in an idealized ocean channel that experiences wind-driven upwelling. The forecast model was forced with biased wind stress and perturbed biogeochemical model parameters compared to the model run representing the "truth". Taking advantage of the multivariate nature of the deterministic Ensemble Kalman Filter (DEnKF), we assimilated different combinations of synthetic physical (sea surface height, sea surface temperature and temperature profiles) and biogeochemical (surface chlorophyll and nitrate profiles) observations. We show that when biogeochemical and physical properties are highly correlated (e.g., thermocline and nutricline), multivariate updates of both are essential for improving model skill and can be accomplished by assimilating either physical (e.g., temperature profiles) or biogeochemical (e.g., nutrient profiles) observations. In our idealized domain, the improvement is largely due to a better representation of nutrient upwelling, which results in a more accurate nutrient input into the euphotic zone. In contrast, assimilating surface chlorophyll improves the model state only slightly, because surface chlorophyll contains little information about the vertical density structure. 
We also show that a degradation of the correlation between observed subsurface temperature and nutrient fields, which has been an issue in several previous assimilation studies, can be reduced by multivariate updates of physical and biogeochemical fields.
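The core mechanism described above, an ensemble cross-covariance spreading a physical observation into an unobserved biogeochemical field, can be sketched generically. This is a minimal stochastic ensemble Kalman analysis step on a two-variable toy state (all numbers invented), not the paper's DEnKF implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ensemble: each member holds [temperature, nutrient]; the mixing
# matrix makes the two correlated (thermocline and nutricline co-vary).
N = 150
truth = np.array([10.0, 5.0])
mix = np.array([[1.0, 0.8],
                [0.0, 0.6]])
ens = truth + rng.normal(0.0, 1.0, (N, 2)) @ mix

# Observe temperature only; H selects the first state component.
H = np.array([[1.0, 0.0]])
obs = np.array([10.0])
R = np.array([[0.2]])                      # observation-error variance

# Analysis step: the ensemble covariance P carries the temperature-nutrient
# cross-covariance, so the Kalman gain updates BOTH components from a
# temperature observation (the multivariate update).
X = ens - ens.mean(axis=0)
P = X.T @ X / (N - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # 2x1 gain
perturbed_obs = obs + rng.normal(0.0, np.sqrt(R[0, 0]), (N, 1))
analysis = ens + (perturbed_obs - ens @ H.T) @ K.T

print(analysis.mean(axis=0))   # both components tightened around the truth
```

The unobserved nutrient field is corrected purely through the sampled cross-covariance, which is the property the twin experiments exploit when assimilating temperature profiles improves the nutrient state.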
Probabilistic In Situ Stress Estimation and Forecasting using Sequential Data Assimilation
NASA Astrophysics Data System (ADS)
Fichtner, A.; van Dinther, Y.; Kuensch, H. R.
2017-12-01
Our physical understanding and forecasting ability of earthquakes, and other solid Earth dynamic processes, is significantly hampered by limited indications on the evolving state of stress and strength on faults. Integrating observations and physics-based numerical modeling to quantitatively estimate this evolution of a fault's state is crucial. However, systematic attempts are limited and tenuous, especially in light of the scarcity and uncertainty of natural data and the difficulty of modelling the physics governing earthquakes. We adopt the statistical framework of sequential data assimilation - extensively developed for weather forecasting - to efficiently integrate observations and prior knowledge in a forward model, while acknowledging errors in both. To prove this concept we perform a perfect model test in a simplified subduction zone setup, where we assimilate synthetic noised data on velocities and stresses from a single location. Using an Ensemble Kalman Filter, these data and their errors are assimilated to update 150 ensemble members from a Partial Differential Equation-driven seismic cycle model. Probabilistic estimates of fault stress and dynamic strength evolution capture the truth exceptionally well. This is possible, because the sampled error covariance matrix contains prior information from the physics that relates velocities, stresses and pressure at the surface to those at the fault. During the analysis step, stress and strength distributions are thus reconstructed such that fault coupling can be updated to either inhibit or trigger events. In the subsequent forecast step the physical equations are solved to propagate the updated states forward in time and thus provide probabilistic information on the occurrence of the next event. At subsequent assimilation steps, the system's forecasting ability turns out to be significantly better than that of a periodic recurrence model (requiring an alarm 17% vs. 68% of the time). 
This thus provides distinct added value with respect to using observations or numerical models separately. Although several challenges for applications to a natural setting remain, these first results indicate the large potential of data assimilation techniques for probabilistic seismic hazard assessment and other challenges in dynamic solid earth systems.
Model updating in flexible-link multibody systems
NASA Astrophysics Data System (ADS)
Belotti, R.; Caneva, G.; Palomba, I.; Richiedei, D.; Trevisani, A.
2016-09-01
The dynamic response of flexible-link multibody systems (FLMSs) can be predicted through nonlinear models based on finite elements, which describe the coupling between rigid-body and elastic behaviour. Their accuracy should be as high as possible to synthesize controllers and observers. Model updating based on experimental measurements is hence necessary. By taking advantage of experimental modal analysis, this work proposes a model updating procedure for FLMSs and applies it experimentally to a planar robot. Several peculiarities of FLMS models must be carefully tackled. On the one hand, nonlinear models of an FLMS should be linearized about static equilibrium configurations. On the other hand, the experimental mode shapes should be corrected to be consistent with the elastic displacements represented in the model, which are defined with respect to a fictitious moving reference (the equivalent rigid link system). Then, since rotational degrees of freedom are also represented in the model, interpolation of the experimental data should be performed to match the model displacement vector. Model updating is finally cast as an optimization problem in the presence of bounds on the feasible values, also adopting methods to improve the numerical conditioning and to compute meaningful updated inertial and elastic parameters.
An Update on Phased Array Results Obtained on the GE Counter-Rotating Open Rotor Model
NASA Technical Reports Server (NTRS)
Podboy, Gary; Horvath, Csaba; Envia, Edmane
2013-01-01
Beamform maps have been generated from (1) simulated data produced by the LINPROP code and (2) experimental phased array data obtained on the GE counter-rotating open rotor model. The beamform maps show that many of the tones in the experimental data come from their corresponding Mach radius. If the phased array points to the Mach radius associated with a tone, then it is likely that the tone results from the loading and thickness noise on the blades. In this case, the phased array correctly points to where the noise is coming from and indicates the axial location of the loudest source in the image, but not necessarily the correct vertical location. If the phased array does not point to the Mach radius associated with a tone, then some mechanism other than loading and thickness noise may control the amplitude of the tone. In this case, the phased array may or may not point to the actual source: if the source is not rotating, the phased array likely points to it; if the source is rotating, the phased array likely indicates the axial location of the loudest source but not necessarily the correct vertical location. These results indicate that care is needed in interpreting phased array data obtained on an open rotor, since the maps may show tones coming from a location other than the true source location. With a subsonic-tip-speed open rotor, the tones can come from locations outboard of the blade tips. This has implications regarding noise shielding.
78 FR 20359 - Sunshine Act Meetings; National Science Board
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
... for May 2013 meeting. STATUS: Open. LOCATION: This meeting will be held by teleconference at the... visitors must report to the NSF visitor desk located in the lobby at the 9th and N. Stuart Streets entrance... information and updates (time, place, subject matter or status of meeting) may be found at http://www.nsf.gov...
The four-dimensional data assimilation (FDDA) technique in the Weather Research and Forecasting (WRF) meteorological model has recently undergone an important update from the original version. Previous evaluation results have demonstrated that the updated FDDA approach in WRF pr...
Both younger and older adults have difficulty updating emotional memories.
Nashiro, Kaoru; Sakaki, Michiko; Huffman, Derek; Mather, Mara
2013-03-01
The main purpose of the study was to examine whether emotion impairs associative memory for previously seen items in older adults, as previously observed in younger adults. Thirty-two younger adults and 32 older adults participated. The experiment consisted of 2 parts. In Part 1, participants learned picture-object associations for negative and neutral pictures. In Part 2, they learned picture-location associations for negative and neutral pictures; half of these pictures were seen in Part 1 whereas the other half were new. The dependent measure was how many locations of negative versus neutral items in the new versus old categories participants remembered in Part 2. Both groups had more difficulty learning the locations of old negative pictures than of new negative pictures. However, this pattern was not observed for neutral items. Despite the fact that older adults showed overall decline in associative memory, the impairing effect of emotion on updating associative memory was similar between younger and older adults.
Aqua/Aura Updated Inclination Adjust Maneuver Performance Prediction Model
NASA Technical Reports Server (NTRS)
Boone, Spencer
2017-01-01
This presentation will discuss the updated Inclination Adjust Maneuver (IAM) performance prediction model that was developed for Aqua and Aura following the 2017 IAM series. This updated model uses statistical regression methods to identify potential long-term trends in maneuver parameters, yielding improved predictions when re-planning past maneuvers. The presentation has been reviewed and approved by Eric Moyer, ESMO Deputy Project Manager.
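The regression idea can be illustrated with a least-squares trend fit over past maneuvers, extrapolated to the next one. All numbers and the linear form below are assumptions for illustration; the actual maneuver parameters and model are not given in the abstract:

```python
# Hypothetical maneuver history: index of each past maneuver and the value
# of some performance parameter that drifts over the series.
xs = [1, 2, 3, 4, 5]                  # maneuver index
ys = [10.2, 10.05, 9.9, 9.75, 9.6]   # invented performance metric

# Ordinary least-squares fit of a linear trend y = intercept + slope * x.
n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
intercept = y_mean - slope * x_mean

# Extrapolate the trend to predict the parameter for the next maneuver.
predicted_next = intercept + slope * 6
print(slope, predicted_next)
```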
Valence-Dependent Belief Updating: Computational Validation
Kuzmanovic, Bojana; Rigoux, Lionel
2017-01-01
People tend to update beliefs about their future outcomes in a valence-dependent way: they are likely to incorporate good news and to neglect bad news. However, belief formation is a complex process which depends not only on motivational factors such as the desire for favorable conclusions, but also on multiple cognitive variables such as prior beliefs, knowledge about personal vulnerabilities and resources, and the size of the probabilities and estimation errors. Thus, we applied computational modeling in order to test for valence-induced biases in updating while formally controlling for relevant cognitive factors. We compared biased and unbiased Bayesian models of belief updating, and specified alternative models based on reinforcement learning. The experiment consisted of 80 trials with 80 different adverse future life events. In each trial, participants estimated the base rate of one of these events and estimated their own risk of experiencing the event before and after being confronted with the actual base rate. Belief updates corresponded to the difference between the two self-risk estimates. Valence-dependent updating was assessed by comparing trials with good news (better-than-expected base rates) with trials with bad news (worse-than-expected base rates). After receiving bad relative to good news, participants' updates were smaller and deviated more strongly from rational Bayesian predictions, indicating a valence-induced bias. Model comparison revealed that the biased (i.e., optimistic) Bayesian model of belief updating better accounted for data than the unbiased (i.e., rational) Bayesian model, confirming that the valence of the new information influenced the amount of updating. Moreover, alternative computational modeling based on reinforcement learning demonstrated higher learning rates for good than for bad news, as well as a moderating role of personal knowledge. 
Finally, in this specific experimental context, the approach based on reinforcement learning was superior to the Bayesian approach. The computational validation of valence-dependent belief updating represents a novel support for a genuine optimism bias in human belief formation. Moreover, the precise control of relevant cognitive variables justifies the conclusion that the motivation to adopt the most favorable self-referential conclusions biases human judgments. PMID:28706499
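The reinforcement-learning account of the asymmetry can be sketched as a delta rule with separate learning rates for good and bad news. The rule form and the rate values below are illustrative assumptions, not the paper's fitted model:

```python
def update_risk_estimate(prior, base_rate, lr_good=0.6, lr_bad=0.3):
    """Return the post-feedback self-risk estimate.

    Good news = the actual base rate is below the prior estimate. The
    optimism bias is modeled as lr_good > lr_bad (values are illustrative).
    """
    error = base_rate - prior            # estimation error
    lr = lr_good if error < 0 else lr_bad
    return prior + lr * error

# Good news (base rate lower than feared) moves the estimate by more ...
good = update_risk_estimate(prior=0.40, base_rate=0.20)
# ... than equally sized bad news moves it in the other direction.
bad = update_risk_estimate(prior=0.40, base_rate=0.60)
print(good, bad)
```

With symmetric errors of 0.20, the good-news update (0.40 to 0.28) is twice the bad-news update (0.40 to 0.46), mirroring the higher learning rates for good news reported above.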
Unthank, Michael D.
2013-01-01
The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and the other input files are the same as in the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 feet for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. 
Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.
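The fit statistics used above to evaluate the updated model (mean difference, maximum difference, root-mean-square error) can be reproduced on synthetic water levels. The values below are invented for illustration, not the study's data:

```python
import math

# Synthetic measured and model-computed water levels (feet), standing in
# for the report's 48 observation wells.
measured = [412.1, 409.8, 415.3, 411.0]
simulated = [408.9, 407.5, 411.2, 409.6]

# Differences are measured minus simulated, so positive values mean the
# model underestimates the water level, as reported for most wells.
diffs = [m - s for m, s in zip(measured, simulated)]

mean_diff = sum(diffs) / len(diffs)            # average difference
max_diff = max(abs(d) for d in diffs)          # maximum difference
rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # RMS error

print(mean_diff, max_diff, rmse)
```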
Indoor localization using pedestrian dead reckoning updated with RFID-based fiducials.
House, Samuel; Connell, Sean; Milligan, Ian; Austin, Daniel; Hayes, Tamara L; Chiang, Patrick
2011-01-01
We describe a low-cost wearable system that tracks the location of individuals indoors using commonly available inertial navigation sensors fused with radio frequency identification (RFID) tags placed around the smart environment. While conventional pedestrian dead reckoning (PDR) calculated with an inertial measurement unit (IMU) is susceptible to sensor drift inaccuracies, the proposed wearable prototype fuses the drift-sensitive IMU with a RFID tag reader. Passive RFID tags placed throughout the smart-building then act as fiducial markers that update the physical locations of each user, thereby correcting positional errors and sensor inaccuracy. Experimental measurements taken for a 55 m × 20 m 2D floor space indicate an over 1200% improvement in average error rate of the proposed RFID-fused system over dead reckoning alone.
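The fusion idea, drift-prone dead reckoning corrected by fiducial resets, can be sketched in a few lines. The tag map, the step model, and the snap-to-tag policy below are simplifying assumptions for illustration, not the prototype's actual filter:

```python
import math

# Assumed map of passive RFID tag positions in the smart environment.
TAG_LOCATIONS = {"tag_A": (10.0, 0.0), "tag_B": (20.0, 5.0)}

def pdr_step(pos, step_length, heading_rad):
    """Advance the position estimate by one detected step."""
    x, y = pos
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

def fuse(events):
    """Process a stream of ('step', length, heading) or ('rfid', tag_id)."""
    pos = (0.0, 0.0)
    for ev in events:
        if ev[0] == "step":
            pos = pdr_step(pos, ev[1], ev[2])
        elif ev[0] == "rfid":
            # Fiducial read: snap to the known tag location, discarding
            # the drift accumulated by the inertial dead reckoning.
            pos = TAG_LOCATIONS[ev[1]]
    return pos

# Ten slightly mis-headed steps drift off the corridor axis; the tag read
# pulls the track back to ground truth before the final step.
events = [("step", 1.0, 0.05)] * 10 + [("rfid", "tag_A")] + [("step", 1.0, 0.0)]
print(fuse(events))
```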
Liu, Haorui; Yi, Fengyan; Yang, Heli
2016-01-01
The shuffled frog leaping algorithm (SFLA) easily falls into a local optimum when solving multi-optimum function optimization problems, which impacts its accuracy and convergence speed. This paper therefore presents a grouped SFLA for solving continuous optimization problems, combined with the cloud model's capacity for transforming between qualitative and quantitative representations. The algorithm divides the definition domain into several groups and gives each group a set of frogs. The frogs of each region search within their memeplex, and during the search the algorithm uses an "elite strategy" to update the location information of existing elite frogs through the cloud model algorithm. This method narrows the search space and effectively mitigates entrapment in local optima; convergence speed and accuracy can thus be significantly improved. The results of computer simulation confirm this conclusion. PMID:26819584
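The underlying frog-leaping step can be sketched minimally (without the cloud-model elite update or the paper's domain grouping): in each memeplex the worst frog leaps toward the best, and the leap is accepted only if it improves fitness. All parameters below are illustrative:

```python
import random

def sphere(x):
    """Toy fitness function to minimize."""
    return sum(v * v for v in x)

def memeplex_step(frogs, fitness, rng):
    """One basic SFLA step: the worst frog leaps toward the best."""
    frogs = sorted(frogs, key=fitness)
    best, worst = frogs[0], frogs[-1]
    candidate = [w + rng.random() * (b - w) for b, w in zip(best, worst)]
    if fitness(candidate) < fitness(worst):   # accept only improvements
        frogs[-1] = candidate
    return frogs

rng = random.Random(42)
frogs = [[rng.uniform(-5, 5) for _ in range(2)] for _ in range(6)]
initial_best = min(sphere(f) for f in frogs)

for _ in range(200):
    frogs = memeplex_step(frogs, sphere, rng)

final_best = min(sphere(f) for f in frogs)
print(initial_best, final_best)
```

Because only the worst frog moves and only improvements are accepted, the population best is non-increasing; the grouped cloud-model variant above adds the elite update precisely to escape the local optima this basic step tends to stall in.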
NASA Astrophysics Data System (ADS)
Allawi, Mohammed Falah; Jaafar, Othman; Mohamad Hamzah, Firdaus; Mohd, Nuruol Syuhadaa; Deo, Ravinesh C.; El-Shafie, Ahmed
2017-10-01
Existing forecast models applied for reservoir inflow forecasting encounter several drawbacks, due to the difficulty of the underlying mathematical procedures in coping with and mimicking the naturalization and stochasticity of the inflow data patterns. In this study, appropriate adjustments to the conventional coactive neuro-fuzzy inference system (CANFIS) method are proposed to improve the mathematical procedure, thus enabling a better detection of the highly nonlinear patterns found in the reservoir inflow training data. This modification includes updating the back propagation algorithm, leading to a consequent update of the membership rules, and the induction of a centre-weighted set rather than the global weighted set used in feature extraction. The modification also aids in constructing an integrated model that is able to detect not only the nonlinearity in the training data but also the wide range of features within the training data records used to simulate the forecasting model. To demonstrate the model's efficacy, the proposed CANFIS method has been applied to forecast monthly inflow data at Aswan High Dam (AHD), located in southern Egypt. Comparative analyses of the forecasting skill of the modified CANFIS and the conventional ANFIS model are carried out with statistical score indicators to assess the reliability of the developed method. The statistical metrics support the better performance of the developed CANFIS model, which significantly outperforms the ANFIS model, attaining a low relative error value (23%), mean absolute error (1.4 BCM month⁻¹), root mean square error (1.14 BCM month⁻¹), and a relatively large coefficient of determination (0.94). The present study ascertains the better utility of the modified CANFIS model with respect to the traditional ANFIS model applied in reservoir inflow forecasting for a semi-arid region.
NASA Astrophysics Data System (ADS)
Goswami, M.; O'Connor, K. M.; Shamseldin, A. Y.
The "Galway Real-Time River Flow Forecasting System" (GFFS) is a software package developed at the Department of Engineering Hydrology of the National University of Ireland, Galway, Ireland. It is based on a selection of lumped black-box and conceptual rainfall-runoff models, all developed in Galway, consisting primarily of both the non-parametric (NP) and parametric (P) forms of two black-box-type rainfall-runoff models, namely, the Simple Linear Model (SLM-NP and SLM-P) and the seasonally-based Linear Perturbation Model (LPM-NP and LPM-P), together with the non-parametric wetness-index-based Linearly Varying Gain Factor Model (LVGFM), the black-box Artificial Neural Network (ANN) Model, and the conceptual Soil Moisture Accounting and Routing (SMAR) Model. Comprising the above suite of models, the system enables the user to calibrate each model individually, initially without updating, and it is also capable of producing combined (i.e. consensus) forecasts using the Simple Average Method (SAM), the Weighted Average Method (WAM), or the Artificial Neural Network Method (NNM). The updating of each model output is achieved using one of four different techniques, namely, simple Auto-Regressive (AR) updating, Linear Transfer Function (LTF) updating, Artificial Neural Network updating (NNU), and updating by the Non-linear Auto-Regressive Exogenous-input method (NARXM). The models exhibit a considerable range of variation in degree of structural complexity, with corresponding degrees of complication in objective function evaluation. Operating in continuous river-flow simulation and updating modes, these models and techniques have been applied to two Irish catchments, namely, the Fergus and the Brosna. A number of performance evaluation criteria have been used to comparatively assess the model discharge forecast efficiency.
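Two of the simpler ingredients, consensus forecasting by the Simple and Weighted Average Methods and AR-style output updating, can be sketched with toy numbers. Nothing below reflects the GFFS package's actual API; the weights, flows, and AR coefficient are invented:

```python
# Three models' discharge forecasts for the same time step (invented).
model_forecasts = [102.0, 98.0, 110.0]

# Simple Average Method (SAM): equal-weight consensus.
sam = sum(model_forecasts) / len(model_forecasts)

# Weighted Average Method (WAM): weights would come from calibration skill.
weights = [0.5, 0.3, 0.2]
wam = sum(w * f for w, f in zip(weights, model_forecasts))

# Simple AR(1)-style output updating: nudge the forecast by a fraction of
# the most recent simulation error (observed minus simulated).
def ar1_update(forecast, last_error, phi=0.7):
    return forecast + phi * last_error

updated = ar1_update(wam, last_error=4.0)
print(sam, wam, updated)
```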
NASA Technical Reports Server (NTRS)
Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl
2008-01-01
The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database, version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and on the vehicle sensitivity analyses with the updated model are presented.
76 FR 11216 - Inland Waterways Users Board
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-01
...: April 1, 2011. Location: The Westin New Orleans Canal Place, 100 Rue Iberville, New Orleans, Louisiana... Marine Transportation System (IMTS) Investment Strategy report recommendations, as well as be updated on...
Public Participation Guide: Briefings
Briefings are generally short presentations provided directly to community groups at their existing meetings or locations – such as social and civic clubs – to provide an overview or update on a project.
Key algorithms used in GR02: A computer simulation model for predicting tree and stand growth
Garrett A. Hughes; Paul E. Sendak
1985-01-01
GR02 is an individual tree, distance-independent simulation model for predicting tree and stand growth over time. It performs five major functions during each run: (1) updates diameter at breast height, (2) updates total height, (3) estimates mortality, (4) determines regeneration, and (5) updates crown class.
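The five functions of a GR02 run can be sketched as a toy annual loop; the growth increments, mortality rate, regeneration count and crown-class rule below are invented for illustration and are not GR02's actual equations.

```python
import random

def grow_stand(trees, years, mortality_rate=0.02, seed=0):
    """One distance-independent, individual-tree simulation over whole years.
    trees: list of dicts with 'dbh' (cm), 'height' (m) and 'crown_class'."""
    rng = random.Random(seed)
    for _ in range(years):
        survivors = []
        for t in trees:
            if rng.random() < mortality_rate:        # (3) stochastic mortality
                continue
            t["dbh"] += 0.5                          # (1) update diameter at breast height
            t["height"] += 0.3                       # (2) update total height
            t["crown_class"] = ("dominant" if t["dbh"] > 30.0
                                else "intermediate")  # (5) update crown class
            survivors.append(t)
        # (4) regeneration: a fixed number of new seedlings each year
        survivors.extend({"dbh": 1.0, "height": 1.3, "crown_class": "suppressed"}
                         for _ in range(2))
        trees = survivors
    return trees
```

A real growth model would replace the fixed increments with species- and site-specific growth equations fitted to inventory data.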
Perrier, Pascal; Schwartz, Jean-Luc; Diard, Julien
2018-01-01
Shifts in perceptual boundaries resulting from speech motor learning induced by perturbations of the auditory feedback were taken as evidence for the involvement of motor functions in auditory speech perception. Beyond this general statement, the precise mechanisms underlying this involvement are not yet fully understood. In this paper we propose a quantitative evaluation of some hypotheses concerning the motor and auditory updates that could result from motor learning, in the context of various assumptions about the roles of the auditory and somatosensory pathways in speech perception. This analysis was made possible thanks to the use of a Bayesian model that implements these hypotheses by expressing the relationships between speech production and speech perception in a joint probability distribution. The evaluation focuses on how the hypotheses can (1) predict the location of perceptual boundary shifts once the perturbation has been removed, (2) account for the magnitude of the compensation in the presence of the perturbation, and (3) describe the correlation between these two behavioral characteristics. Experimental findings about changes in speech perception following adaptation to auditory feedback perturbations serve as a reference. Simulations suggest that they are compatible with a framework in which motor adaptation updates both the auditory-motor internal model and the auditory characterization of the perturbed phoneme, and where perception involves both auditory and somatosensory pathways. PMID:29357357
On the predictability of event boundaries in discourse: An ERP investigation.
Delogu, Francesca; Drenhaus, Heiner; Crocker, Matthew W
2018-02-01
When reading a text describing an everyday activity, comprehenders build a model of the situation described that includes prior knowledge of the entities, locations, and sequences of actions that typically occur within the event. Previous work has demonstrated that such knowledge guides the processing of incoming information by making event boundaries more or less expected. In the present ERP study, we investigated whether comprehenders' expectations about event boundaries are influenced by how elaborately common events are described in the context. Participants read short stories in which a common activity (e.g., washing the dishes) was described either in brief or in an elaborate manner. The final sentence contained a target word referring to a more predictable action marking a fine event boundary (e.g., drying) or a less predictable action, marking a coarse event boundary (e.g., jogging). The results revealed a larger N400 effect for coarse event boundaries compared to fine event boundaries, but no interaction with description length. Between 600 and 1000 ms, however, elaborate contexts elicited a larger frontal positivity compared to brief contexts. This effect was largely driven by less predictable targets, marking coarse event boundaries. We interpret the P600 effect as indexing the updating of the situation model at event boundaries, consistent with Event Segmentation Theory (EST). The updating process is more demanding with coarse event boundaries, which presumably require the construction of a new situation model.
User's guide to the MESOI diffusion model and to the utility programs UPDATE and LOGRVU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athey, G.F.; Allwine, K.J.; Ramsdell, J.V.
MESOI is an interactive, Lagrangian puff trajectory diffusion model. The model is documented separately (Ramsdell and Athey, 1981); this report is intended to provide MESOI users with the information needed to successfully conduct model simulations. The user is also provided with guidance in the use of the data file maintenance and review programs, UPDATE and LOGRVU. Complete examples are given for the operation of all three programs, and an appendix documents UPDATE and LOGRVU.
Basis for the ICRP's updated biokinetic model for carbon inhaled as CO2
Leggett, Richard W.
2017-03-02
Here, the International Commission on Radiological Protection (ICRP) is updating its biokinetic and dosimetric models for occupational intake of radionuclides (OIR) in a series of reports called the OIR series. This paper describes the basis for the ICRP's updated biokinetic model for inhalation of radiocarbon as carbon dioxide (CO2) gas. The updated model is based on biokinetic data for carbon isotopes inhaled as carbon dioxide or injected or ingested as bicarbonate (HCO3-). The data from these studies are expected to apply equally to internally deposited (or internally produced) carbon dioxide and bicarbonate, based on comparison of excretion rates for the two administered forms and the fact that carbon dioxide and bicarbonate are largely carried in a common form (CO2-HCO3-) in blood. Compared with dose estimates based on current ICRP biokinetic models for inhaled carbon dioxide or ingested carbon, the updated model will result in a somewhat higher dose estimate for 14C inhaled as CO2 and a much lower dose estimate for 14C ingested as bicarbonate.
'DIRTMAP2': Dust and Palaeoclimate.
NASA Astrophysics Data System (ADS)
Maher, B.
2008-12-01
The influence of dust on climate, through changes in the radiative properties of the atmosphere and/or the CO2 content of the oceans and atmosphere (through iron fertilisation of high nutrient, low chlorophyll, HNLC, regions of the world's oceans), remains a poorly quantified and actively changing element of the Earth's climate system. Dust-cycle models presently employ a relatively simple representation of dust properties; these simplifications may severely limit the realism of simulations of the impact of changes in dust loading on either or both radiative forcing and biogeochemical cycling. Further, whilst state-of-the-art models achieve reasonable estimates of dust deposition in the far-field (i.e. at ocean locations), they under-estimate, by an order of magnitude, levels of dust deposition over the continents, unless glacigenic dust production is explicitly and spatially represented. The 'DIRTMAP2' working group aims to address these problems directly, through a series of explicitly interacting contributions from the international modelling and palaeo-data communities. A key aim of the project is to produce an updated version of the DIRTMAP database ('DIRTMAP2'), incorporating (a) records and age models newly available since ~ 2001, (b) longer records, and especially high-resolution records, that will target time windows also focused on by other international research programs (e.g. DO8/9, MIS5), (c) metadata to allow quality-control issues to be dealt with objectively, (d) information on mineralogy and isotopes relevant to provenancing, radiative forcing and iron bioavailability, and (e) enhanced characterisation of the aeolian component of existing records. This update will be coordinated with work (led by Karen Kohfeld) to expand the DIRTMAP database to incorporate information on marine productivity and improved sedimentation rate estimation techniques. It will also build upon a recently-developed dust model evaluation tool for current climate (e.g. Miller et al. 2006) to enable application of this and other evaluative models to palaeoclimate simulations. We invite colleagues to contribute to this update; the DIRTMAP2 database will shortly be accessible from the University of Lancaster website.
A groundwater data assimilation application study in the Heihe mid-reach
NASA Astrophysics Data System (ADS)
Ragettli, S.; Marti, B. S.; Wolfgang, K.; Li, N.
2017-12-01
The present work focuses on modelling of the groundwater flow in the mid-reach of the endorheic river Heihe in the Zhangye oasis (Gansu province) in arid north-west China. In order to optimise water resources management in the oasis, reliable forecasts of groundwater level development under different management options and environmental boundary conditions have to be produced. To this end, groundwater flow is modelled with Modflow and coupled to an Ensemble Kalman Filter programmed in Matlab. The model is updated at monthly time steps, featuring perturbed boundary conditions to account for uncertainty in model forcing. Constant biases between model and observations were corrected prior to updating and compared to model runs without bias correction. Different options for data assimilation (states and/or parameters), updating frequency, and measures against filter inbreeding (damping factor, covariance inflation, spatial localization) were tested against each other. Results show a high dependency of the Ensemble Kalman Filter performance on the selection of observations for data assimilation. For the present regional model, bias correction is necessary for good filter performance. A combination of spatial localization and covariance inflation is further advisable to reduce filter inbreeding problems. Best performance is achieved if parameter updates are not large, an indication of good prior model calibration. Asynchronous updating of parameter values once every five years (with data from the past five years), combined with synchronous updating of the groundwater levels, is better suited to this groundwater system with static or slowly changing parameter values than synchronous updating of both groundwater levels and parameters at every time step with a damping factor. The filter is not able to correct time lags of signals.
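A minimal sketch of the damped EnKF analysis step described above, assuming a linear observation operator and toy-sized matrices; this is an illustration of the technique, not the Matlab implementation used in the study.

```python
import numpy as np

def enkf_update(X, y_obs, H, R, damping=0.5, rng=None):
    """Damped EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) state ensemble; y_obs: (n_obs,) observations;
    H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs) obs error cov."""
    rng = rng or np.random.default_rng(0)
    n_obs, n_ens = len(y_obs), X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = Xp @ Xp.T / (n_ens - 1)                    # sample state covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # perturbed observations, one realisation per ensemble member
    Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + damping * K @ (Y - H @ X)           # damped analysis update
```

A damping factor below 1 shrinks each update, one of the measures against filter inbreeding mentioned in the abstract; covariance inflation and localization would modify P before the gain is computed.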
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mollerach, R.; Leszczynski, F.; Fink, J.
2006-07-01
In 2005 the Argentine Government took the decision to complete the construction of the Atucha-II nuclear power plant, which had been progressing slowly during the last ten years. Atucha-II is a 745 MWe nuclear station moderated and cooled with heavy water, of German (Siemens) design, located in Argentina. It has a pressure-vessel design with 451 vertical coolant channels, and the fuel assemblies (FA) are clusters of 37 natural UO2 rods with an active length of 530 cm. For the reactor physics area, a revision and update of calculation methods and models (cell, supercell and reactor) was recently carried out, covering cell, supercell (control rod) and core calculations. As a validation of the new models, some benchmark comparisons were made with Monte Carlo calculations with MCNP5. This paper presents comparisons of cell and supercell benchmark problems, based on a slightly idealized model of the Atucha-I core, obtained with the WIMS-D5 and DRAGON codes against MCNP5 results. The Atucha-I core was selected because it is smaller, similar from a neutronic point of view, and more symmetric than Atucha-II. Cell parameters compared include cell k-infinity, relative power levels of the different rings of fuel rods, and some two-group macroscopic cross sections. Supercell comparisons include supercell k-infinity changes due to the control rods (tubes) of steel and hafnium. (authors)
ERIC Educational Resources Information Center
Blankenship, Glen; Tinkler, D. William
This packet contains five lessons related to the five themes of geography: location; place; human-environment interaction; movement; and region. The lessons are designed to support the teaching of courses in world geography, U.S. government/civics, and economics from a comparative U.S./German perspective. Lessons include: (1) "Location of…
The uploaded data consists of the BRACE Na aerosol observations paired with CMAQ model output, the updated model's parameterization of sea salt aerosol emission size distribution, and the model's parameterization of the sea salt emission factor as a function of sea surface temperature. This dataset is associated with the following publication: Gantt, B., J. Kelly, and J. Bash. Updating sea spray aerosol emissions in the Community Multiscale Air Quality (CMAQ) model version 5.0.2. Geoscientific Model Development. Copernicus Publications, Katlenburg-Lindau, Germany, 8: 3733-3746, (2015).
Changing viewer perspectives reveals constraints to implicit visual statistical learning.
Jiang, Yuhong V; Swallow, Khena M
2014-10-07
Statistical learning (learning environmental regularities to guide behavior) likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than in others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
Optimising seasonal streamflow forecast lead time for operational decision making in Australia
NASA Astrophysics Data System (ADS)
Schepen, Andrew; Zhao, Tongtiegang; Wang, Q. J.; Zhou, Senlin; Feikema, Paul
2016-10-01
Statistical seasonal forecasts of 3-month streamflow totals are released in Australia by the Bureau of Meteorology and updated on a monthly basis. The forecasts are often released in the second week of the forecast period, owing to the onerous forecast production process. The current service relies on models built using data for complete calendar months, meaning the forecast production process cannot begin until the first day of the forecast period. The bureau therefore needs to transition to a service that provides forecasts before the beginning of the forecast period; timelier forecast release will become critical as sub-seasonal (monthly) forecasts are developed. Increasing the forecast lead time to one month ahead is not considered a viable option for Australian catchments, which typically lack any predictability associated with snowmelt. The bureau's forecasts are built around Bayesian joint probability models that have antecedent streamflow, rainfall and climate indices as predictors. In this study, we adapt the modelling approach so that forecasts can have any number of days of lead time. Daily streamflow and sea surface temperatures are used to develop predictors based on 28-day sliding windows. Forecasts are produced for 23 forecast locations with 0-14- and 21-day lead times. The forecasts are assessed in terms of continuous ranked probability score (CRPS) skill score and reliability metrics. CRPS skill scores, on average, decrease monotonically with increasing lead time, although both positive and negative differences are observed. Considering only skilful forecast locations, CRPS skill scores at 7-day lead time are reduced on average by 4 percentage points, with differences largely contained within +5 to -15 percentage points.
A flexible forecasting system that allows for any number of days of lead time could benefit Australian seasonal streamflow forecast users by allowing more time for forecasts to be disseminated, comprehended and made use of prior to the commencement of a forecast season. The system would allow for forecasts to be updated if necessary.
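The CRPS and CRPS skill score used for the assessment above can be sketched for an ensemble forecast; the sample-based CRPS estimator shown is one standard form, and the ensemble values and reference score are illustrative assumptions.

```python
import numpy as np

def crps_ensemble(ens, obs):
    """Sample CRPS for an ensemble forecast: E|X - y| - 0.5 * E|X - X'|.
    Lower is better; 0 means a perfect, sharp forecast."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def crps_skill_score(crps_forecast, crps_reference):
    """1 is perfect, 0 matches the reference, negative is worse than it."""
    return 1.0 - crps_forecast / crps_reference

# Illustrative: a 5-member streamflow ensemble against one observation,
# scored against a hypothetical climatology CRPS of 15.0
score = crps_ensemble([80.0, 95.0, 100.0, 110.0, 120.0], 105.0)
skill = crps_skill_score(score, crps_reference=15.0)
```

In an operational verification the scores would be averaged over many forecast dates before the skill score is formed.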
Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.
Fu, Zhaohao; Hao, Lingxin
2018-01-15
We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census micro data are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate and these migratory acts change the social network by turning within-nonmigrant connections to between-migrant-nonmigrant connections, turning local connections to nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates migratory propensities of those well-connected nonmigrants who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes with the migration acceleration patterns. We conclude that network structural changes are essential for explaining migration acceleration observed in China during the 1995-2000 period.
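The k-core decomposition underlying the core-periphery method can be sketched with the standard min-degree peeling algorithm; the toy graph below is illustrative and unrelated to the census-based network in the study.

```python
def core_numbers(adj):
    """Core number of each node via the standard min-degree peeling algorithm.
    adj: dict mapping node -> set of neighbours (undirected graph)."""
    deg = {n: len(nbrs) for n, nbrs in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        n = min(remaining, key=deg.get)   # peel a minimum-degree node
        k = max(k, deg[n])                # core number never decreases
        core[n] = k
        remaining.remove(n)
        for m in adj[n]:
            if m in remaining:
                deg[m] -= 1
    return core

# Toy graph: a triangle (2-core) with a pendant chain (1-core)
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
       "d": {"c", "e"}, "e": {"d"}}
core_nodes = [n for n, k in core_numbers(adj).items() if k >= 2]
```

Nodes with high core numbers form the densely connected core; tracking how nodes move between core and periphery over simulated months is the kind of structural change the study quantifies.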
Avila, G A; Davidson, M; van Helden, M; Fagan, L
2018-04-18
Diuraphis noxia (Kurdjumov), the Russian wheat aphid, is one of the world's most invasive and economically important agricultural pests of wheat and barley. In May 2016, it was found for the first time in Australia, with further sampling confirming it was widespread throughout south-eastern regions. The Russian wheat aphid is not yet present in New Zealand. If this pest establishes in New Zealand, it could cause serious control problems in wheat- and barley-growing regions. To evaluate whether D. noxia could establish populations in New Zealand, we used the climate modelling software CLIMEX to identify where potentially viable populations might occur. We re-parameterised the existing CLIMEX model of Hughes and Maywald (1990), improving the model fit using currently known distribution records of D. noxia, and we also considered the role of irrigation in the potential spread of this invasive insect. The updated model now fits the current known distribution better than the previous Hughes and Maywald CLIMEX model, particularly in temperate and Mediterranean areas in Australia and Europe, and in more semi-arid areas in north-western China and Middle Eastern countries. Our model also highlights new climatically suitable areas for the establishment of D. noxia not previously reported, including parts of France, the UK and New Zealand. Our results suggest that, when suitable host plants are present, the Russian wheat aphid could establish in these regions. The new CLIMEX projections in the present study are useful tools to inform risk assessments and to target surveillance and monitoring efforts at areas susceptible to invasion by the Russian wheat aphid.
MENA 1.1 - An Updated Geophysical Regionalization of the Middle East and North Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walters, B.; Pasyanos, M.E.; Bhattacharyya, J.
2000-03-01
This short report provides an update to the earlier LLNL paper entitled "Preliminary Definition of Geophysical Regions for the Middle East and North Africa" (Sweeney and Walter, 1998). This report is designed to be used in combination with that earlier paper. The reader is referred to Sweeney and Walter (1998) for all details, including definitions, references, uses, shortcomings, etc., of the regionalization process. In this report we discuss only those regions in which we have changed the boundaries or velocity structure from that given by the original paper. The paper by Sweeney and Walter (1998) drew on a variety of sources to estimate a preliminary, first-order regionalization of the Middle East and North Africa (MENA), providing regional boundaries and velocity models within each region. The model attempts to properly account for major structural discontinuities and significant crustal thickness and velocity variations on a gross scale. The model can be used to extrapolate sparse calibration data within a distinct geophysical region. This model can also serve as a background model in the process of forming station calibration maps using intelligent interpolation techniques such as kriging, extending the calibration into aseismic areas. Such station maps can greatly improve the ability to locate and identify seismic events, which in turn improves the ability to seismically monitor for underground nuclear testing. The original model from Sweeney and Walter (1998) was digitized to a 1° resolution; for simplicity we will hereafter refer to this model as MENA 1.0. The new model described here has also been digitized to a 1° resolution and will be referred to as MENA 1.1 throughout this report.
NASA Astrophysics Data System (ADS)
Posselt, D.; L'Ecuyer, T.; Matsui, T.
2009-05-01
Cloud resolving models are typically used to examine the characteristics of clouds and precipitation and their relationship to radiation and the large-scale circulation. As such, they are not required to reproduce the exact location of each observed convective system, much less each individual cloud. Some of the most relevant information about clouds and precipitation is provided by instruments located on polar-orbiting satellite platforms, but these observations are intermittent "snapshots" in time, making assessment of model performance challenging. In contrast to direct comparison, model results can be evaluated statistically. This avoids the requirement for the model to reproduce the observed systems, while returning valuable information on the performance of the model in a climate-relevant sense. The focus of this talk is a model evaluation study, in which updates to the microphysics scheme used in a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model are evaluated using statistics of observed clouds, precipitation, and radiation. We present the results of multiday (non-equilibrium) simulations of organized deep convection using single- and double-moment versions of the model's cloud microphysical scheme. Statistics of TRMM multi-sensor derived clouds, precipitation, and radiative fluxes are used to evaluate the GCE results, as are simulated TRMM measurements obtained using a sophisticated instrument simulator suite. We present advantages and disadvantages of performing model comparisons in retrieval and measurement space and conclude by motivating the use of data assimilation techniques for analyzing and improving model parameterizations.
An update on airborne contact dermatitis.
Huygens, S; Goossens, A
2001-01-01
This review is an update of 2 previously published articles on airborne contact dermatoses. Because reports in the literature often omit the term 'airborne', 18 volumes of Contact Dermatitis (April 1991-June 2000), 8 volumes of the American Journal of Contact Dermatitis (1992-1999) and 4 volumes of La Lettre du Gerda (1996-1999) were screened, and the cases cited were classified as to history, lesion locations, sensitization sources, and other factors. Reports on airborne dermatitis are increasingly being published, sometimes in relation to specific occupational areas.
National Hydropower Plant Dataset, Version 1 (Update FY18Q2)
Samu, Nicole; Kao, Shih-Chieh; O'Connor, Patrick; Johnson, Megan; Uria-Martinez, Rocio; McManamay, Ryan
2016-09-30
The National Hydropower Plant Dataset, Version 1, Update FY18Q2, includes geospatial point-level locations and key characteristics of existing hydropower plants in the United States that are currently online. These data are a subset extracted from NHAAP’s Existing Hydropower Assets (EHA) dataset, which is a cornerstone of NHAAP’s EHA effort that has supported multiple U.S. hydropower R&D research initiatives related to market acceleration, environmental impact reduction, technology-to-market activities, and climate change impact assessment.
ERIC Educational Resources Information Center
Stevenson, Betty Satterwhite, Ed.
This update of a bibliography published in 1981 lists selected holdings of the Sarah K. Davidson Family-Patient Library, which is located in the Strong Memorial Hospital of the University of Rochester (New York); it is intended for use with the original bibliography. To aid in ordering books and seeking further information on various topics, this…
NASA Astrophysics Data System (ADS)
Bernardi, Gabriella; Vecchiato, Alberto; Bucciarelli, Beatrice
2014-07-01
This paper reviews and updates the accounts of a previous article discussing the possible astronomical significance of a peculiar, man-made circular stone structure, located close to the European Southern Observatory in La Silla, Chile, and attributed to the El Molle culture. Thanks to further, higher-accuracy measurements in situ, we can confirm some of the original hypotheses and dismiss others, upholding the main tenets of the original work.
Zarriello, Phillip J.; Olson, Scott A.; Flynn, Robert H.; Strauch, Kellan R.; Murphy, Elizabeth A.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term streamgages in Rhode Island. In response to this event, hydraulic models were updated for selected reaches covering about 56 river miles in the Pawtuxet River Basin to simulate water-surface elevations (WSEs) at specified flows and boundary conditions. Reaches modeled included the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Dry Brook, Meshanticut Brook, Furnace Hill Brook, Flat River, Quidneck Brook, and two unnamed tributaries referred to as South Branch Pawtuxet River Tributary A1 and Tributary A2. All the hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) version 4.1.0 using steady-state simulations. Updates to the models included incorporation of new field-survey data at structures, high resolution land-surface elevation data, and updated flood flows from a related study. The models were assessed using high-water marks (HWMs) obtained in a related study following the March-April 2010 flood and the simulated water levels at the 0.2-percent annual exceedance probability (AEP), which is the estimated AEP of the 2010 flood in the basin. HWMs were obtained at 110 sites along the main stem of the Pawtuxet River, the North and South Branches of the Pawtuxet River, Pocasset River, Simmons Brook, Furnace Hill Brook, Flat River, and Quidneck Brook. Differences between the 2010 HWM elevations and the simulated 0.2-percent AEP WSEs from flood insurance studies (FISs) and the updated models developed in this study varied, with most differences attributed to the magnitude of the 0.2-percent AEP flows. WSEs from the updated models generally are in closer agreement with the observed 2010 HWMs than with the FIS WSEs.
The improved agreement of the updated simulated water elevations to observed 2010 HWMs provides a measure of the hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.
Radar Altimetry for Hydrological Modeling and Monitoring in the Zambezi River Basin
NASA Astrophysics Data System (ADS)
Michailovsky, C. I.; Berry, P. A.; Smith, R. G.; Bauer-Gottwein, P.
2011-12-01
Hydrological model forecasts are subject to large uncertainties stemming from uncertain input data, model structure, parameterization and lack of sufficient calibration/validation data. For real-time or near-real-time applications, data assimilation techniques such as the Ensemble Kalman Filter (EnKF) can be used to reduce forecast uncertainty by updating model states as new data becomes available. The use of remote sensing data is attractive for such applications as it provides wide geographical coverage and continuous time-series without the typically long delays that exist in obtaining in-situ data. River discharge is one of the main hydrological variables of interest, and while it cannot currently be directly measured remotely, water levels in rivers can be obtained from satellite based radar altimetry and converted to discharge through rating curves. This study aims to give a realistic assessment of the improvements that can be derived from the use of satellite radar altimetry measurements from the Envisat mission for discharge monitoring and modeling on the basin scale for the Zambezi River. The altimetry data used is the Radar AlTimetry (RAT) product developed at the Earth and Planetary Remote Sensing Laboratory at De Montfort University. The first step in analyzing the data is the determination of potential altimetry targets, which are the locations at which the Envisat orbit and the river network cross, in order to select data points corresponding to surface water. The quality of the water level time-series is then analyzed for all targets and the exploitable targets identified. Rating curves are derived from in-situ or remotely-sensed data depending on data-availability at the various locations and discharge time-series are established. A Monte Carlo analysis is carried out to assess the uncertainties on the computed discharge.
It was found that having a single cross-section and associated discharge measurement at one point in time significantly reduces discharge uncertainty. To assess improvements in model predictions, a model of the Zambezi River basin based on remote sensing data is set up with the Soil and Water Assessment Tool and calibrated with available in-situ data. The discharge data from altimetry is then used in an EnKF framework to update discharge in the model as it runs. The method showed improvements in prediction uncertainties for short lead times.
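The EnKF analysis step used to assimilate the altimetry-derived discharge can be sketched as follows. This is a minimal, generic stochastic EnKF with perturbed observations on a toy two-state model; the state layout, ensemble size, and observation operator are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """One EnKF analysis step: nudge each ensemble member toward a
    perturbed observation, weighted by the ensemble Kalman gain."""
    n_ens = ensemble.shape[1]
    # Ensemble anomalies around the mean
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HA = H @ A                       # predicted-observation anomalies
    P_hh = HA @ HA.T / (n_ens - 1)   # obs-space sample covariance
    P_xh = A @ HA.T / (n_ens - 1)    # state-obs cross covariance
    K = P_xh @ np.linalg.inv(P_hh + obs_var * np.eye(P_hh.shape[0]))
    # Perturbed observations keep the analysis ensemble spread consistent
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=(H.shape[0], n_ens))
    return ensemble + K @ (perturbed - H @ ensemble)

rng = np.random.default_rng(0)
# Toy two-state model (e.g. storage and discharge), 50 members
ens = rng.normal([10.0, 5.0], 1.0, size=(50, 2)).T
H = np.array([[0.0, 1.0]])           # only discharge is observed
updated = enkf_update(ens, obs=6.0, obs_var=0.04, H=H, rng=rng)
```

With a low observation variance the analysis ensemble mean for the observed component is pulled strongly toward the measurement, while the unobserved state is corrected through the sample cross covariance.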
Imputation and Model-Based Updating Technique for Annual Forest Inventories
Ronald E. McRoberts
2001-01-01
The USDA Forest Service is developing an annual inventory system to establish the capability of producing annual estimates of timber volume and related variables. The inventory system features measurement of an annual sample of field plots with options for updating data for plots measured in previous years. One imputation and two model-based updating techniques are...
Spatial Case Information Management System (SCIMS)
SCIMS facilitates the update of the Land Administration System (LAS) Case File location.
Forwarding Pointers for Efficient Location Management in Distributed Mobile Environments
1994-09-01
signalling traffic on the SS7 signalling system (capacity of 56 Kbps) is expected to be 4-11 times greater for cellular networks than for ISDN and 3-4 times ... load. Thus location update will become a major bottleneck at the switches (such as SS7) and mechanisms to control the cost of location update are ... ACM, pp. 19-28, Oct. 1994. [3] Kathleen S. Meier-Hellstern, et al., "The Use of SS7 and GSM to support high density personal communications"
NASA Technical Reports Server (NTRS)
Minor, Robert
2002-01-01
Two ISS (International Space Station) experiment payloads will vent a volume of gas overboard via either the ISS Vacuum Exhaust System or the Vacuum Resource System. A system of ducts, valves and sensors, under design, will connect the experiments to the ISS systems. The following tasks are required: Create an analysis tool that will verify the rack vacuum system design with respect to design requirements, more specifically approximate pressure at given locations within the vacuum systems; Determine the vent duration required to achieve desired pressure within the experiment modules; Update the analysis as systems and operations definitions mature.
NASA Technical Reports Server (NTRS)
Schutt, J.; Fessler, B.; Cassidy, W. A.
1993-01-01
This technical report is an update to LPI Technical Report 89-02, which contained data and information that was current to May 1987. Since that time approximately 4000 new meteorites have been collected, mapped, and characterized, mainly from the numerous ice fields in the Allan Hills-David Glacier region, from the Pecora Escarpment and Moulton Escarpment in the Thiel Mountains-Patuxent region, the Wisconsin Range region, and from the Beardmore region. Meteorite location maps for ice fields from these regions have been produced and are available. This report includes explanatory texts for the maps of new areas and provides information on updates of maps of the areas covered in LPI Technical Report 89-02. Sketch maps and description of locales that have been searched and have yielded single or few meteorites are also included. The meteorite listings for all the ice fields have been updated to include any classification changes and new meteorites recovered from ice fields in the Allan Hills-David Glacier region since 1987. The text has been reorganized and minor errors in the original report have been corrected. Computing capabilities have improved immensely since the early days of this project. Current software and hardware allow easy access to data over computer networks. With various commercial software packages, the data can be used many different ways, including database creation, statistics, and mapping. The databases, explanatory texts, and the plotter files used to produce the meteorite location maps are available through a computer network. Information on how to access AMLAMP data, its formats, and ways it can be used are given in the User's Guide to AMLAMP Data section. Meteorite location maps and thematic maps may be ordered from the Lunar and Planetary Institute. Ordering information is given in Appendix A.
Machine learning in updating predictive models of planning and scheduling transportation projects
DOT National Transportation Integrated Search
1997-01-01
A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...
Hierarchical information fusion for global displacement estimation in microsensor motion capture.
Meng, Xiaoli; Zhang, Zhi-Qiang; Wu, Jian-Kang; Wong, Wai-Choong
2013-07-01
This paper presents a novel hierarchical information fusion algorithm to obtain human global displacement for different gait patterns, including walking, running, and hopping based on seven body-worn inertial and magnetic measurement units. In the first-level sensor fusion, the orientation for each segment is achieved by a complementary Kalman filter (CKF) which compensates for the orientation error of the inertial navigation system solution through its error state vector. For each foot segment, the displacement is also estimated by the CKF, and zero velocity update is included for the drift reduction in foot displacement estimation. Based on the segment orientations and left/right foot locations, two global displacement estimates can be acquired from left/right lower limb separately using a linked biomechanical model. In the second-level geometric fusion, another Kalman filter is deployed to compensate for the difference between the two estimates from the sensor fusion and get more accurate overall global displacement estimation. The updated global displacement will be transmitted to left/right foot based on the human lower biomechanical model to restrict the drifts in both feet displacements. The experimental results have shown that our proposed method can accurately estimate human locomotion for the three different gait patterns with regard to the optical motion tracker.
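The zero velocity update used for drift reduction in the foot displacement estimate can be illustrated with a deliberately simplified 1-D sketch: whenever the foot is detected as stationary, the velocity estimate is reset, which bounds the position drift caused by sensor bias. This is not the paper's complementary Kalman filter; the bias value, sample rate, and stance-detection pattern are invented for illustration.

```python
import numpy as np

def integrate_with_zupt(accel, stationary, dt=0.01):
    """Integrate 1-D acceleration into velocity and position, resetting
    velocity to zero whenever the foot is flagged as stationary (ZUPT)."""
    vel = np.zeros(len(accel))
    pos = np.zeros(len(accel))
    for k in range(1, len(accel)):
        vel[k] = 0.0 if stationary[k] else vel[k - 1] + accel[k] * dt
        pos[k] = pos[k - 1] + vel[k] * dt
    return vel, pos

# Biased accelerometer: true acceleration is zero, sensor reads +0.2 m/s^2
accel = np.full(1000, 0.2)
stationary = np.zeros(1000, dtype=bool)
stationary[::100] = True             # stance phase detected once per second
v_zupt, p_zupt = integrate_with_zupt(accel, stationary)
v_raw, p_raw = integrate_with_zupt(accel, np.zeros(1000, dtype=bool))
```

Without the resets the velocity error grows linearly and the position error quadratically; with periodic ZUPTs both stay bounded between stance phases.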
Measurement and Characterization of Space Shuttle Solid Rocket Motor Plume Acoustics
NASA Technical Reports Server (NTRS)
Kenny, Robert Jeremy
2009-01-01
NASA's models to predict lift-off acoustics for launch vehicles are currently being updated using several numerical and empirical inputs. One empirical input comes from free-field acoustic data measured at three Space Shuttle Reusable Solid Rocket Motor (RSRM) static firings. The measurements were collected through a joint collaboration between NASA Marshall Space Flight Center, Wyle Labs, and ATK Launch Systems. For the first time, NASA measured large-thrust solid rocket motor plume acoustics for evaluation of both noise sources and acoustic radiation properties. Over sixty acoustic free-field measurements were taken over the three static firings to support evaluation of acoustic radiation near the rocket plume, far-field acoustic radiation patterns, plume acoustic power efficiencies, and apparent noise source locations within the plume. At approximately 67 m off nozzle centerline and 70 m downstream of the nozzle exit plane, the measured overall sound pressure level of the RSRM was 155 dB. Peak overall levels in the far field were over 140 dB at 300 m and 50 deg off the RSRM thrust centerline. The successful collaboration has yielded valuable data that are being implemented into NASA's lift-off acoustic models, which will then be used to update predictions for Ares I and Ares V liftoff acoustic environments.
Preliminary Model of Porphyry Copper Deposits
Berger, Byron R.; Ayuso, Robert A.; Wynn, Jeffrey C.; Seal, Robert R.
2008-01-01
The U.S. Geological Survey (USGS) Mineral Resources Program develops mineral-deposit models for application in USGS mineral-resource assessments and other mineral resource-related activities within the USGS as well as for nongovernmental applications. Periodic updates of models are published in order to incorporate new concepts and findings on the occurrence, nature, and origin of specific mineral deposit types. This update is a preliminary model of porphyry copper deposits that begins an update process of porphyry copper models published in USGS Bulletin 1693 in 1986. This update includes a greater variety of deposit attributes than were included in the 1986 model as well as more information about each attribute. It also includes an expanded discussion of geophysical and remote sensing attributes and tools useful in resource evaluations, a summary of current theoretical concepts of porphyry copper deposit genesis, and a summary of the environmental attributes of unmined and mined deposits.
[Purity Detection Model Update of Maize Seeds Based on Active Learning].
Tang, Jin-ya; Huang, Min; Zhu, Qi-bing
2015-08-01
Seed purity reflects the degree to which seed varieties show typical, consistent characteristics, so it is of great importance to improve the reliability and accuracy of seed purity detection to guarantee the quality of seeds. Hyperspectral imaging can reflect the internal and external characteristics of seeds at the same time, and has been widely used in nondestructive detection of agricultural products. The essence of nondestructive detection of agricultural products using hyperspectral imaging is to establish a mathematical model between the spectral information and the quality of the product. Since the spectral information is easily affected by the sample growth environment, the stability and generalization of the model weaken when test samples are harvested from a different origin or year. An active learning algorithm was investigated to add representative samples that expand the sample space of the original model, so as to implement rapid updating of the model. Random selection (RS) and the Kennard-Stone algorithm (KS) were performed to compare their model-update effect with that of the active learning algorithm. The experimental results indicated that, for different divisions of the sample set (1:1, 3:1, 4:1), the updated purity detection model for maize seeds from 2010, after adding 40 samples from 2011 selected by the active learning algorithm, increased the prediction accuracy for new 2011 samples from 47%, 33.75%, and 49% to 98.89%, 98.33%, and 98.33%. For the updated purity detection model of 2011, prediction accuracy for new 2010 samples increased by 50.83%, 54.58%, and 53.75%, to 94.57%, 94.02%, and 94.57%, after adding 56 new samples from 2010. Meanwhile, the effect of the model updated by the active learning algorithm was better than that of RS and KS. Therefore, updating the purity detection model of maize seeds by active learning is feasible.
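The general active-learning update loop described above (select the new-season samples the current model is least certain about, add them to the training set, refit) can be sketched as below. The nearest-centroid classifier and margin-based selection rule are stand-ins chosen for brevity, not the spectral models used in the study, and the simulated "harvest-year" shift is illustrative.

```python
import numpy as np

class CentroidClassifier:
    """Minimal stand-in classifier: predicts the class of the nearest centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def decision_margin(self, X):
        # Gap between the two nearest centroid distances: small gap = uncertain
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        d.sort(axis=1)
        return d[:, 1] - d[:, 0]
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def active_update(model, X_old, y_old, X_new, y_new, n_add):
    """Add the n_add least-certain new-season samples and refit the model."""
    idx = np.argsort(model.decision_margin(X_new))[:n_add]
    X_aug = np.vstack([X_old, X_new[idx]])
    y_aug = np.concatenate([y_old, y_new[idx]])
    return model.fit(X_aug, y_aug)

rng = np.random.default_rng(1)
# Old-season data; new-season "spectra" are shifted (simulated year effect)
X_old = rng.normal(0, 1, (100, 5)); y_old = (X_old[:, 0] > 0).astype(int)
shift = np.full(5, 1.5)
X_new = rng.normal(0, 1, (100, 5)) + shift
y_new = (X_new[:, 0] > shift[0]).astype(int)
model = CentroidClassifier().fit(X_old, y_old)
acc_before = (model.predict(X_new) == y_new).mean()
model = active_update(model, X_old, y_old, X_new[:50], y_new[:50], n_add=20)
acc_after = (model.predict(X_new[50:]) == y_new[50:]).mean()
```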
NASA Astrophysics Data System (ADS)
Konkol, Jakub; Bałachowski, Lech
2017-03-01
In this paper, the whole process of pile construction and performance during loading is modelled via large deformation finite element methods, namely Coupled Eulerian Lagrangian (CEL) and Updated Lagrangian (UL). The numerical study consists of the installation process, the consolidation phase and the subsequent pile static load test (SLT). The Poznań site is chosen as the reference location for the numerical analysis, where a series of pile SLTs has been performed in highly overconsolidated clay (OCR ≈ 12). The results of the numerical analysis are compared with the corresponding field tests and with a so-called "wish-in-place" numerical model of the pile, where no installation effects are taken into account. The advantages of using large deformation numerical analysis are presented and its application to pile design is shown.
Build-Up Approach to Updating the Mock Quiet Spike Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
When a new aircraft is designed or a modification is made to an existing aircraft, the aeroelastic properties of the aircraft should be examined to ensure the aircraft is flight worthy. Evaluating the aeroelastic properties of a new or modified aircraft can include performing a variety of analyses, such as modal and flutter analyses. In order to produce accurate results from these analyses, it is imperative to work with finite element models (FEM) that have been validated by or correlated to ground vibration test (GVT) data. Updating an analytical model using measured data is a challenge in the area of structural dynamics. The analytical model update process encompasses a series of optimizations that match analytical frequencies and mode shapes to the measured modal characteristics of the structure. In the past, the method used to update a model to test data was trial and error. This is an inefficient method: running a modal analysis, comparing the analytical results to the GVT data, manually modifying one or more structural parameters (mass, CG, inertia, area, etc.), rerunning the analysis, and comparing the new analytical modal characteristics to the GVT modal data. If the match is close enough (as defined by the analyst's updating requirements), then the updating process is completed. If the match does not meet the updating requirements, then the parameters are changed again and the process is repeated. Clearly, this manual optimization process is highly inefficient for large FEMs and/or a large number of structural parameters. NASA Dryden Flight Research Center (DFRC) has developed, in-house, a Mode Matching Code that automates the above-mentioned optimization process. DFRC's in-house Mode Matching Code reads mode shapes and frequencies acquired from GVT to create the target model. It also reads the current analytical model, as well as the design variables and their upper and lower limits.
It performs a modal analysis on this model and modifies it to create an updated model that has similar mode shapes and frequencies to those of the target model. The Mode Matching Code outputs frequencies and modal assurance criteria (MAC) values that allow for a quantified comparison of the updated model versus the target model. A recent application of this code is the F-15B supersonic flight testing platform: NASA DFRC possesses a modified F-15B that is used as a test bed aircraft for supersonic flight experiments. Traditionally, the finite element model of the test article is generated. A GVT is done on the test article to validate and update its FEM. This FEM is then mated to the F-15B model, which was correlated to GVT data in fall of 2004. A GVT is conducted with the test article mated to the aircraft, and this mated F-15B/test-article FEM is correlated to this final GVT.
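The modal assurance criterion (MAC) reported by such mode-matching codes is a standard measure of mode-shape similarity, MAC(phi_a, phi_t) = |phi_a . phi_t|^2 / ((phi_a . phi_a)(phi_t . phi_t)). A sketch of its computation on toy 4-DOF mode shapes (the shapes themselves are invented for illustration):

```python
import numpy as np

def mac(phi_a, phi_t):
    """Modal Assurance Criterion between an analytical and a test mode shape.
    1.0 means identical up to scaling; near 0 means unrelated shapes."""
    return np.abs(phi_a @ phi_t) ** 2 / ((phi_a @ phi_a) * (phi_t @ phi_t))

def mac_matrix(Phi_a, Phi_t):
    """MAC between every pair of columns of two mode-shape matrices."""
    return np.array([[mac(a, t) for t in Phi_t.T] for a in Phi_a.T])

# Two GVT modes vs. two analytical modes (toy 4-DOF shapes)
Phi_test = np.array([[1.0, 1.0], [2.0, -1.0], [3.0, 0.5], [4.0, -2.0]])
Phi_fem = np.column_stack([2.0 * Phi_test[:, 0],       # same shape, rescaled
                           Phi_test[:, 1] + 0.05])     # slightly perturbed
M = mac_matrix(Phi_fem, Phi_test)
```

Because MAC is invariant to mode-shape scaling, the rescaled first mode scores exactly 1.0 against its test counterpart, while the perturbed second mode scores slightly below 1.0 and off-diagonal pairs score low.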
National Center on Sleep Disorders Research
... for Updates. The National Center on Sleep Disorders Research (NCSDR), located within the National Heart, Lung, and ... key functions: research, training, technology transfer, and coordination. Research: sleep disorders span many medical fields, requiring multidisciplinary ...
Equal Severity Curve (ESC) update.
DOT National Transportation Integrated Search
2009-07-01
Caltrans uses the Equal Severity Curve to determine appropriate locations for the placement of guardrail on embankments. The ESC assists designers in determining the relative severity of encroachments on embankments versus impacts with roadside b...
Atmospheric Science Data Center
2013-07-10
... channel due to uncertainty in the H2O spectroscopy in this spectral band. Updated our estimation of the SAGE II water vapor channel filter location drift, resulting in better agreement with more modern datasets ...
The 3D Recognition, Generation, Fusion, Update and Refinement (RG4) Concept
NASA Technical Reports Server (NTRS)
Maluf, David A.; Cheeseman, Peter; Smelyanskyi, Vadim N.; Kuehnel, Frank; Morris, Robin D.; Norvig, Peter (Technical Monitor)
2001-01-01
This paper describes an active (real time) recognition strategy whereby information is inferred iteratively across several viewpoints in descent imagery. We will show how we use inverse theory within the context of parametric model generation, namely height and spectral reflection functions, to generate model assertions. Using this strategy in an active context implies that, from every viewpoint, the proposed system must refine its hypotheses taking into account the image and the effect of uncertainties as well. The proposed system employs probabilistic solutions to the problem of iteratively merging information (images) from several viewpoints. This involves feeding the posterior distribution from all previous images as a prior for the next view. Novel approaches will be developed to accelerate the inversion search using new statistical implementations and by reducing the model complexity using foveated vision. Foveated vision refers to imagery where the resolution varies across the image. In this paper, we allow the model to be foveated, where the highest resolution region is called the foveation region. Typically, the images will have dynamic control of the location of the foveation region. For descent imagery in the Entry, Descent, and Landing (EDL) process, it is possible to have more than one foveation region. This research initiative is directed towards descent imagery in connection with NASA's EDL applications. Three-Dimensional Model Recognition, Generation, Fusion, Update, and Refinement (RGFUR or RG4) for height and the spectral reflection characteristics are in focus for various reasons, one of which is the prospect that their interpretation will provide for real time active vision for automated EDL.
Nighttime Foreground Pedestrian Detection Based on Three-Dimensional Voxel Surface Model.
Li, Jing; Zhang, Fangbing; Wei, Lisong; Yang, Tao; Lu, Zhaoyang
2017-10-16
Pedestrian detection is among the most frequently-used preprocessing tasks in many surveillance application fields, from low-level people counting to high-level scene understanding. Even though many approaches perform well in the daytime with sufficient illumination, pedestrian detection at night is still a critical and challenging problem for video surveillance systems. To respond to this need, in this paper, we provide an affordable solution with a near-infrared stereo network camera, as well as a novel three-dimensional foreground pedestrian detection model. Specifically, instead of using an expensive thermal camera, we build a near-infrared stereo vision system with two calibrated network cameras and near-infrared lamps. The core of the system is a novel voxel surface model, which is able to estimate the dynamic changes of three-dimensional geometric information of the surveillance scene and to segment and locate foreground pedestrians in real time. A free update policy for unknown points is designed for model updating, and the extracted shadow of the pedestrian is adopted to remove foreground false alarms. To evaluate the performance of the proposed model, the system is deployed in several nighttime surveillance scenes. Experimental results demonstrate that our method is capable of nighttime pedestrian segmentation and detection in real time under heavy occlusion. In addition, the qualitative and quantitative comparison results show that our work outperforms classical background subtraction approaches and a recent RGB-D method, as well as achieving comparable performance with the state-of-the-art deep learning pedestrian detection method even with a much lower hardware cost.
NASA Astrophysics Data System (ADS)
Arikan, Feza; Gulyaeva, Tamara; Sezen, Umut; Arikan, Orhan; Toker, Cenk; Hakan Tuna, MR.; Erdem, Esra
2016-07-01
International Reference Ionosphere (IRI) is the most widely acknowledged climatic model of the ionosphere; it provides the electron density profile and hourly, monthly median values of the critical layer parameters of the ionosphere for a desired location, date, and time, from 60 to 2,000 km altitude. IRI is also accepted as the International Standard Ionosphere model. Recently, the IRI model was extended to the Global Positioning System (GPS) satellite orbital range of 20,000 km. The new version is called IRI-Plas and it can be obtained from http://ftp.izmiran.ru/pub/izmiran/SPIM/. A user-friendly online version is also provided at www.ionolab.org as a space weather service. Total Electron Content (TEC), which is defined as the line integral of electron density on a given ray path, is an observable parameter that can be estimated from earth-based GPS receivers in a cost-effective manner as GPS-TEC. One of the most important advantages of IRI-Plas is the possible input of GPS-TEC to update the background deterministic ionospheric model to the current ionospheric state. This option is highly useful in regional and global tomography studies and HF link assessments. The IONOLAB group currently implements IRI-Plas as a background model and updates the ionospheric state using GPS-TEC in the IONOLAB-CIT and IONOLAB-RAY algorithms. The improved state of the ionosphere allows the most reliable 4-D imaging of electron density profiles and HF and satellite communication link simulations. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR 14/001.
NASA Astrophysics Data System (ADS)
Wang, Zuo-Cai; Xin, Yu; Ren, Wei-Xin
2016-08-01
This paper proposes a new nonlinear joint model updating method for shear type structures based on the instantaneous characteristics of the decomposed structural dynamic responses. To obtain an accurate representation of a nonlinear system's dynamics, the nonlinear joint model is described as the nonlinear spring element with bilinear stiffness. The instantaneous frequencies and amplitudes of the decomposed mono-component are first extracted by the analytical mode decomposition (AMD) method. Then, an objective function based on the residuals of the instantaneous frequencies and amplitudes between the experimental structure and the nonlinear model is created for the nonlinear joint model updating. The optimal values of the nonlinear joint model parameters are obtained by minimizing the objective function using the simulated annealing global optimization method. To validate the effectiveness of the proposed method, a single-story shear type structure subjected to earthquake and harmonic excitations is simulated as a numerical example. Then, a beam structure with multiple local nonlinear elements subjected to earthquake excitation is also simulated. The nonlinear beam structure is updated based on the global and local model using the proposed method. The results show that the proposed local nonlinear model updating method is more effective for structures with multiple local nonlinear elements. Finally, the proposed method is verified by the shake table test of a real high voltage switch structure. The accuracy of the proposed method is quantified both in numerical and experimental applications using the defined error indices. Both the numerical and experimental results have shown that the proposed method can effectively update the nonlinear joint model.
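The simulated annealing minimization used for the joint-parameter search can be sketched generically as below; the quadratic toy residual stands in for the paper's objective on instantaneous frequencies and amplitudes, and the cooling schedule, step size, and iteration count are arbitrary illustrative choices.

```python
import math
import random

def simulated_annealing(objective, x0, step, t0=1.0, cooling=0.995,
                        iters=2000, seed=0):
    """Generic simulated annealing: accept worse parameter sets with a
    probability that shrinks as the temperature cools, keep the best seen."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = objective(cand)
        # Always accept improvements; accept worse moves with Boltzmann prob.
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Toy residual: recover stiffness parameters k = (2.0, 0.5) from "measured"
# values (purely illustrative, not the paper's bilinear joint model)
target = (2.0, 0.5)
obj = lambda k: (k[0] - target[0]) ** 2 + (k[1] - target[1]) ** 2
best, fbest = simulated_annealing(obj, x0=[0.0, 0.0], step=0.3)
```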
Goverde, A; Spaander, M C W; Nieboer, D; van den Ouweland, A M W; Dinjens, W N M; Dubbink, H J; Tops, C J; Ten Broeke, S W; Bruno, M J; Hofstra, R M W; Steyerberg, E W; Wagner, A
2018-07-01
Until recently, no prediction models for Lynch syndrome (LS) had been validated for PMS2 mutation carriers. We aimed to evaluate MMRpredict and PREMM5 in a clinical cohort and for PMS2 mutation carriers specifically. In a retrospective, clinic-based cohort we calculated predictions for LS according to MMRpredict and PREMM5. The area under the receiver operating characteristic curve (AUC) was compared between MMRpredict and PREMM5 for LS patients in general and for different LS genes specifically. Of 734 index patients, 83 (11%) were diagnosed with LS: 23 MLH1, 17 MSH2, 31 MSH6 and 12 PMS2 mutation carriers. Both prediction models performed well for MLH1 and MSH2 (AUC 0.80 and 0.83 for PREMM5 and 0.79 for MMRpredict) and fair for MSH6 mutation carriers (0.69 for PREMM5 and 0.66 for MMRpredict). MMRpredict performed fair for PMS2 mutation carriers (AUC 0.72), while PREMM5 failed to discriminate PMS2 mutation carriers from non-mutation carriers (AUC 0.51). The only statistically significant difference between PMS2 mutation carriers and non-mutation carriers was proximal location of colorectal cancer (77 vs. 28%, p < 0.001). Adding location of colorectal cancer to PREMM5 considerably improved the model's performance for PMS2 mutation carriers (AUC 0.77) and overall (AUC 0.81 vs. 0.72). We validated these results in an external cohort of 376 colorectal cancer patients, including 158 LS patients. MMRpredict and PREMM5 cannot adequately identify PMS2 mutation carriers. Adding location of colorectal cancer to PREMM5 may improve the performance of this model, which should be validated in larger cohorts.
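The AUC values reported above have a simple rank interpretation: the AUC is the probability that a randomly chosen mutation carrier receives a higher model score than a randomly chosen non-carrier, with ties counted as half. A sketch of that computation (the risk scores are invented for illustration, not taken from the study):

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a randomly chosen case
    scores higher than a randomly chosen control (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Toy risk scores: mutation carriers vs. non-carriers (illustrative numbers)
carriers = [0.9, 0.7, 0.4]
non_carriers = [0.3, 0.5, 0.2]
value = auc(carriers, non_carriers)
```

An AUC of 0.5, as PREMM5 showed for PMS2 carriers, means the score does no better than chance at ranking carriers above non-carriers.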
Simulated and observed 2010 floodwater elevations in the Pawcatuck and Wood Rivers, Rhode Island
Zarriello, Phillip J.; Straub, David E.; Smith, Thor E.
2014-01-01
Heavy, persistent rains from late February through March 2010 caused severe flooding that set, or nearly set, peaks of record for streamflows and water levels at many long-term U.S. Geological Survey streamgages in Rhode Island. In response to this flood, hydraulic models of the Pawcatuck River (26.9 miles) and Wood River (11.6 miles) were updated from the most recent approved U.S. Department of Homeland Security-Federal Emergency Management Agency flood insurance study (FIS) to simulate water-surface elevations (WSEs) for specified flows and boundary conditions. The hydraulic models were updated to Hydrologic Engineering Center-River Analysis System (HEC-RAS) using steady-state simulations and incorporated new field-survey data at structures, high resolution land-surface elevation data, and updated flood flows from a related study. The models were used to simulate the 0.2-percent annual exceedance probability (AEP) flood, which is the AEP determined for the 2010 flood in the Pawcatuck and Wood Rivers. The simulated WSEs were compared to high-water mark (HWM) elevation data obtained in a related study following the March–April 2010 flood, which included 39 HWMs along the Pawcatuck River and 11 HWMs along the Wood River. The 2010 peak flow generally was larger than the 0.2-percent AEP flow, which, in part, resulted in the FIS and updated-model WSEs being lower than the 2010 HWMs. The 2010 HWMs for the Pawcatuck River averaged about 1.6 feet (ft) higher than the 0.2-percent AEP WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The 2010 HWMs for the Wood River averaged about 1.3 ft higher than the WSEs simulated in the updated model and 2.5 ft higher than the WSEs in the FIS. The improved agreement of the updated simulated water elevations with the observed 2010 HWMs provides a measure of hydraulic model performance, which indicates the updated models better represent flooding at other AEPs than the existing FIS models.
Yi, Wei; Sheng-de, Wu; Lian-Ju, Shen; Tao, Lin; Da-Wei, He; Guang-Hui, Wei
2018-05-24
To investigate whether the management of undescended testis (UDT) may be improved with educational updates and a new transferring model among referring providers (RPs). The ages at which orchidopexies were performed in the Children's Hospital of Chongqing Medical University were reviewed. We then proposed educational updates and a new transferring model among RPs. The ages at which orchidopexies were performed after our intervention were collected. Data were represented graphically, and the chi-square test for trend was used for statistical analysis. A total of 1543 orchidopexies were performed. The median age at orchidopexy did not match the target age of 6-12 months in any subsequent year. A survey of the RPs showed that 48.85% of them recommended an age below 12 months. However, only 25.50% of them would directly make a surgical referral to pediatric surgery at this point. After we proposed the educational updates, tracking the age at orchidopexy revealed a statistically significant downward trend. The management of undescended testis may be improved with educational updates and a new transferring model among primary healthcare practitioners.
Interrupted object-based updating of reach program leads to a negative compatibility effect.
Vainio, Lari
2009-07-01
The author investigated how the motor program elicited by an object's orientation is updated by object-based information while a participant reaches for the object. Participants selected the hand of response according to the thickness of the graspable object and then reached toward the location in which the object appeared. Reach initiation times decreased when the handle of the object was oriented toward the responding hand. This positive compatibility effect turned into a negative compatibility effect (NCE) during reach execution when the object was removed from the display 300 ms after object onset or replaced with a mask at movement onset. The results demonstrate that interrupted object-based updating of an ongoing reach movement triggers the NCE.
Status and Performance Updates for the Cosmic Origins Spectrograph
NASA Astrophysics Data System (ADS)
Snyder, Elaine M.; De Rosa, Gisella; Fischer, William J.; Fix, Mees; Fox, Andrew; Indriolo, Nick; James, Bethan; Oliveira, Cristina M.; Penton, Steven V.; Plesha, Rachel; Rafelski, Marc; Roman-Duval, Julia; Sahnow, David J.; Sankrit, Ravi; Taylor, Joanna M.; White, James
2018-01-01
The Hubble Space Telescope's Cosmic Origins Spectrograph (COS) moved the spectra on the FUV detector from Lifetime Position 3 (LP3) to a new pristine location, LP4, in October 2017. The spectra were shifted in the cross-dispersion direction by -2.5" (roughly -31 pixels) from LP3, or -5" (roughly -62 pixels) from the original LP1. This move mitigates the adverse effects of gain sag on the spectral quality and accuracy of COS FUV observations. Here, we present updates regarding the calibration of FUV data at LP4, including the flat fields, flux calibrations, and spectral resolution. We also present updates on the time-dependent sensitivities and dark rates of both the NUV and FUV detectors.
NASA Technical Reports Server (NTRS)
Newman, C. M.
1977-01-01
The updated consumables flight planning worksheet (CFPWS) is documented. The update includes: (1) additional consumables: ECLSS ammonia, APU propellant, HYD water; (2) additional on orbit activity for development flight instrumentation (DFI); (3) updated use factors for all consumables; and (4) sources and derivations of the use factors.
Nonequivalence of updating rules in evolutionary games under high mutation rates.
Kaiping, G A; Jacobs, G S; Cox, S J; Sluckin, T J
2014-10-01
Moran processes are often used to model selection in evolutionary simulations. The updating rule in Moran processes is a birth-death process, i.e., selection according to fitness of an individual to give birth, followed by the death of a random individual. For well-mixed populations with only two strategies this updating rule is known to be equivalent to selecting unfit individuals for death and then selecting randomly for procreation (biased death-birth process). It is, however, known that this equivalence does not hold when considering structured populations. Here we study whether changing the updating rule can also have an effect in well-mixed populations in the presence of more than two strategies and high mutation rates. We find, using three models from different areas of evolutionary simulation, that the choice of updating rule can change model results. We show, e.g., that going from the birth-death process to the death-birth process can change a public goods game with punishment from containing mostly defectors to having a majority of cooperative strategies. From the examples given we derive guidelines indicating when the choice of the updating rule can be expected to have an impact on the results of the model.
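The two updating rules contrasted in this abstract can be sketched in a few lines of simulation code. The following is a minimal, illustrative Moran process for a well-mixed population with three strategies and mutation; the fitness values, population size, and inverse-fitness death weighting are assumptions for demonstration, not the authors' exact models.

```python
import random

def moran_step(pop, fitness, rule="birth-death", mu=0.0, strategies=(0, 1, 2)):
    """One update of a well-mixed Moran process under either updating rule."""
    n = len(pop)
    if rule == "birth-death":
        # Pick a parent proportional to fitness, then a uniformly random death.
        parent = random.choices(pop, weights=[fitness[s] for s in pop])[0]
        dead = random.randrange(n)
    else:
        # Biased death-birth: unfit individuals are more likely to die
        # (inverse-fitness weighting is one possible choice of bias),
        # then a uniformly random individual reproduces.
        dead = random.choices(range(n), weights=[1.0 / fitness[s] for s in pop])[0]
        parent = pop[random.randrange(n)]
    offspring = parent
    if random.random() < mu:  # mutation: offspring adopts a random strategy
        offspring = random.choice(strategies)
    pop[dead] = offspring

random.seed(1)
fitness = {0: 1.0, 1: 1.2, 2: 0.9}   # illustrative strategy fitnesses
pop = [random.choice((0, 1, 2)) for _ in range(100)]
for _ in range(5000):
    moran_step(pop, fitness, rule="death-birth", mu=0.05)
```

Swapping `rule="death-birth"` for `rule="birth-death"` is the one-line change whose consequences the paper examines under high mutation rates.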
75 FR 13344 - Revised Meeting Time for Citizens Coinage Advisory Committee March 2010 Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-19
... DEPARTMENT OF THE TREASURY United States Mint Revised Meeting Time for Citizens Coinage Advisory...: March 23, 2010. Time: 2:30 p.m. to 5 p.m. Location: 8th Floor Boardroom, United States Mint, 801 9th... call 202-354-7502 for the latest update on meeting time and room location. In accordance with 31 U.S.C...
OSATE Overview & Community Updates
2015-02-15
Delange, Julien. Overview of OSATE's main language capabilities, modeling patterns and model samples for beginners, Error-Model examples, EMV2 model constructs, and a demonstration of tools via case studies.
Burles, Ford; Slone, Edward; Iaria, Giuseppe
2017-04-01
The retrosplenial complex is a region within the posterior cingulate cortex implicated in spatial navigation. Here, we investigated the functional specialization of this large and anatomically heterogeneous region using fMRI and resting-state functional connectivity combined with a spatial task with distinct phases of spatial 'updating' (i.e., integrating and maintaining object locations in memory during spatial displacement) and 'orienting' (i.e., recalling unseen locations from current position in space). Both spatial 'updating' and 'orienting' produced bilateral activity in the retrosplenial complex, among other areas. However, spatial 'updating' produced slightly greater activity in ventro-lateral portions of the retrosplenial complex, whereas spatial 'orienting' produced greater activity in a more dorsal and medial portion of it (both regions localized along the parieto-occipital fissure). At rest, both ventro-lateral and dorso-medial subregions of the retrosplenial complex were functionally connected to the hippocampus and parahippocampus, regions both involved in spatial orientation and navigation. However, the ventro-lateral subregion of the retrosplenial complex displayed more positive functional connectivity with ventral occipital and temporal object recognition regions, whereas the dorso-medial subregion's activity was more correlated with dorsal and frontal activity, as well as negatively correlated with more ventral parietal structures. These findings provide evidence for a dorso-medial to ventro-lateral functional specialization within the human retrosplenial complex that may shed more light on the complex neural mechanisms underlying spatial orientation and navigation in humans.
Utilizing semantic Wiki technology for intelligence analysis at the tactical edge
NASA Astrophysics Data System (ADS)
Little, Eric
2014-05-01
Challenges exist for intelligence analysts to efficiently and accurately process large amounts of data collected from a myriad of available data sources. These challenges are even more evident for analysts who must operate within small military units at the tactical edge. In such environments, decisions must be made quickly without guaranteed access to the kinds of large-scale data sources available to analysts working at intelligence agencies. Improved technologies must be provided to analysts at the tactical edge to make informed, reliable decisions, since this is often a critical collection point for important intelligence data. To aid tactical edge users, new types of intelligent, automated technology interfaces are required to allow them to rapidly explore information associated with the intersection of hard and soft data fusion, such as multi-INT signals, semantic models, social network data, and natural language processing of text. The ability to fuse these types of data is paramount to providing decision superiority. For these types of applications, we have developed BLADE. BLADE allows users to dynamically add, delete and link data via a semantic wiki, allowing for improved interaction between different users. Analysts can see information updates in near-real-time due to a common underlying set of semantic models operating within a triple store that allows for updates on related data points from independent users tracking different items (persons, events, locations, organizations, etc.). The wiki can capture pictures, videos and related information. New information added directly to pages is automatically updated in the triple store, and its provenance and pedigree are tracked over time, making that data more trustworthy and easily integrated with other users' pages.
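The core mechanism described here, a shared triple store whose assertions carry provenance so that updates from independent analysts can be tracked, can be sketched in plain Python. This is a minimal illustrative stand-in (BLADE itself is not public); the class name, entity identifiers, and analyst names are invented for the example.

```python
import time
from collections import defaultdict

class ProvenanceTripleStore:
    """Minimal (subject, predicate, object) store that records who asserted
    each triple and when, so pedigree can be traced as users update pages."""

    def __init__(self):
        self.triples = {}                      # (s, p, o) -> provenance dict
        self.by_subject = defaultdict(set)     # subject -> set of its triples

    def add(self, s, p, o, source):
        self.triples[(s, p, o)] = {"source": source, "time": time.time()}
        self.by_subject[s].add((s, p, o))

    def remove(self, s, p, o):
        self.triples.pop((s, p, o), None)
        self.by_subject[s].discard((s, p, o))

    def about(self, s):
        """All current assertions about an entity, with their provenance."""
        return {t: self.triples[t] for t in self.by_subject[s]}

# Two independent users contribute assertions about the same tracked item:
store = ProvenanceTripleStore()
store.add("event42", "locatedAt", "grid-7G", source="analyst_a")
store.add("event42", "involves", "org_x", source="analyst_b")
```

A production system would use a real RDF triple store with query support; the point here is only that each assertion, not each page, is the unit that carries provenance.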
Klemans, Rob J B; Otte, Dianne; Knol, Mirjam; Knol, Edward F; Meijer, Yolanda; Gmelig-Meyling, Frits H J; Bruijnzeel-Koomen, Carla A F M; Knulst, André C; Pasmans, Suzanne G M A
2013-01-01
A diagnostic prediction model for peanut allergy in children was recently published, using 6 predictors: sex, age, history, skin prick test, peanut specific immunoglobulin E (sIgE), and total IgE minus peanut sIgE. Our aims were to validate this model and update it by adding allergic rhinitis, atopic dermatitis, and sIgE to peanut components Ara h 1, 2, 3, and 8 as candidate predictors, and to develop a new model based only on sIgE to peanut components. Validation was performed by testing discrimination (diagnostic value) with an area under the receiver operating characteristic curve and calibration (agreement between predicted and observed frequencies of peanut allergy) with the Hosmer-Lemeshow test and a calibration plot. The performance of the (updated) models was similarly analyzed. Validation of the model in 100 patients showed good discrimination (88%) but poor calibration (P < .001). In the updating process, age, history, and the additional candidate predictors did not significantly increase discrimination, which was 94%, leaving only 4 predictors of the original model: sex, skin prick test, peanut sIgE, and total IgE minus sIgE. When building a model with sIgE to peanut components, Ara h 2 was the only predictor retained, with a discriminative ability of 90%. Cutoff values with 100% positive and negative predictive values could be calculated for both the updated model and sIgE to Ara h 2. In this way, the outcome of the food challenge could be predicted with 100% accuracy in 59% (updated model) and 50% (Ara h 2) of the patients. Discrimination of the validated model was good; however, calibration was poor. The discriminative ability of Ara h 2 was almost comparable to that of the updated model, which contained 4 predictors. With both models, the need for peanut challenges could be reduced by at least 50%. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.
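Discrimination here is the area under the receiver operating characteristic curve (AUC). It equals the probability that a randomly chosen allergic patient receives a higher predicted risk than a randomly chosen non-allergic one, and can be computed directly from that identity. The sketch below is generic and uses invented toy data, not the study's data.

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the probability
    that a random positive outscores a random negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 3 of the 4 positive/negative pairs are correctly ordered.
print(auc([1, 1, 0, 0], [0.9, 0.3, 0.4, 0.1]))  # 0.75
```

An AUC of 0.88 (the validated model) therefore means 88% of allergic/non-allergic pairs were ranked correctly; calibration, which the study found to be poor, is a separate property not captured by this quantity.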
Description and evaluation of the Community Multiscale Air ...
The Community Multiscale Air Quality (CMAQ) model is a comprehensive multipollutant air quality modeling system developed and maintained by the US Environmental Protection Agency's (EPA) Office of Research and Development (ORD). Recently, version 5.1 of the CMAQ model (v5.1) was released to the public, incorporating a large number of science updates and extended capabilities over the previous release version of the model (v5.0.2). These updates include the following: improvements in the meteorological calculations in both CMAQ and the Weather Research and Forecast (WRF) model used to provide meteorological fields to CMAQ, updates to the gas and aerosol chemistry, revisions to the calculations of clouds and photolysis, and improvements to the dry and wet deposition in the model. Sensitivity simulations isolating several of the major updates to the modeling system show that changes to the meteorological calculations result in enhanced afternoon and early evening mixing in the model, periods when the model historically underestimates mixing. This enhanced mixing results in higher ozone (O3) mixing ratios on average due to reduced NO titration, and lower fine particulate matter (PM2.5) concentrations due to greater dilution of primary pollutants (e.g., elemental and organic carbon). Updates to the clouds and photolysis calculations greatly improve consistency between the WRF and CMAQ models and result in generally higher O3 mixing ratios, primarily due to reduced
Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.
Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong
2016-04-15
Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
An interval model updating strategy using interval response surface models
NASA Astrophysics Data System (ADS)
Fang, Sheng-En; Zhang, Qiu-Hu; Ren, Wei-Xin
2015-08-01
Stochastic model updating provides an effective way of handling uncertainties existing in real-world structures. In general, probabilistic theories, fuzzy mathematics or interval analyses are involved in the solution of inverse problems. However in practice, probability distributions or membership functions of structural parameters are often unavailable due to insufficient information about a structure. In such cases, an interval model updating procedure shows its superiority in the aspect of problem simplification, since only the upper and lower bounds of parameters and responses are sought. To this end, this study develops a new concept of interval response surface models for the purpose of efficiently implementing the interval model updating procedure. The frequent interval overestimation due to the use of interval arithmetic can be largely avoided, leading to accurate estimation of parameter intervals. Meanwhile, the establishment of an interval inverse problem is highly simplified, accompanied by a saving of computational costs. By this means a relatively simple and cost-efficient interval updating process can be achieved. Lastly, the feasibility and reliability of the developed method have been verified against a numerical mass-spring system and also against a set of experimentally tested steel plates.
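The "interval overestimation due to the use of interval arithmetic" mentioned in this abstract is the classic dependency problem: naive interval arithmetic forgets when two operands are the same variable, so bounds inflate. A minimal sketch (the `Interval` class is illustrative, not the authors' implementation):

```python
class Interval:
    """Closed interval [lo, hi] with naive interval arithmetic."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def width(self):
        return self.hi - self.lo

x = Interval(1.0, 2.0)
# Dependency problem: x - x is exactly [0, 0] for any single value of x,
# but the naive rules treat the two operands as independent variables.
print((x - x).lo, (x - x).hi)  # -1.0 1.0 instead of 0.0 0.0
```

Response-surface formulations avoid such repeated-variable expressions, which is why the interval response surface models in the paper can keep the estimated parameter intervals tight.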
Malinowski, Kathleen; McAvoy, Thomas J; George, Rohini; Dieterich, Sonja; D'Souza, Warren D
2013-07-01
To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥ 3 mm), and always (approximately once per minute). Radial tumor displacement prediction errors (mean ± standard deviation) for the four schema described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization.
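The error-based update scheme can be illustrated with a toy tracker. The sketch below uses ordinary least squares as a stand-in for the study's partial-least-squares models and a 1-D surrogate signal for simplicity; the 3 mm threshold and six initial measurements come from the abstract, while everything else is assumed.

```python
import numpy as np

def fit(X, y):
    """Least-squares linear map from surrogate position to tumor position
    (a simple stand-in for the partial-least-squares models in the study)."""
    A = np.c_[X, np.ones(len(X))]            # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, x):
    return np.r_[x, 1.0] @ coef

THRESHOLD_MM = 3.0                           # error-based update criterion

def track(surrogates, tumors, n_init=6):
    """Predict tumor position; refit the model whenever error >= 3 mm."""
    X, y = list(surrogates[:n_init]), list(tumors[:n_init])
    coef = fit(np.array(X), np.array(y))
    errors, updates = [], 0
    for x, t in zip(surrogates[n_init:], tumors[n_init:]):
        err = abs(predict(coef, x) - t)      # localization error (mm)
        errors.append(err)
        X.append(x)
        y.append(t)
        if err >= THRESHOLD_MM:              # update the surrogate model
            coef = fit(np.array(X), np.array(y))
            updates += 1
    return errors, updates

# Perfectly linear toy motion: no updates should ever be triggered.
surr = [float(i) for i in range(12)]
tum = [2.0 * s for s in surr]
errors, updates = track(surr, tum)
```

The respiratory-surrogate method the paper favors would replace the `err >= THRESHOLD_MM` test with confidence limits on surrogate-only metrics, so no direct tumor localization is needed to decide when to refit.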
Updating the Behavior Engineering Model.
ERIC Educational Resources Information Center
Chevalier, Roger
2003-01-01
Considers Thomas Gilbert's Behavior Engineering Model as a tool for systematically identifying barriers to individual and organizational performance. Includes a detailed case study and a performance aid that incorporates gap analysis, cause analysis, and force field analysis to update the original model. (Author/LRW)
Support Routines for In Situ Image Processing
NASA Technical Reports Server (NTRS)
Deen, Robert G.; Pariser, Oleg; Yeates, Matthew C.; Lee, Hyun H.; Lorre, Jean
2013-01-01
This software consists of a set of application programs that support ground-based image processing for in situ missions. These programs represent a collection of utility routines that perform miscellaneous functions in the context of the ground data system. Each one fulfills some specific need as determined via operational experience. The most unique aspect of these programs is that they are integrated into the large, in situ image processing system via the PIG (Planetary Image Geometry) library. They work directly with in situ data, understanding the appropriate image metadata fields and updating them properly. The programs themselves are completely multimission; all mission dependencies are handled by PIG. This suite of programs consists of: (1) marscahv: generates a linearized, epipolar-aligned image given a stereo pair of images; these images are optimized for 1-D stereo correlations. (2) marscheckcm: compares the camera model in an image label with one derived via kinematics modeling on the ground. (3) marschkovl: checks the overlaps between a list of images in order to determine which might be stereo pairs; this is useful for non-traditional stereo images like long-baseline or those from an articulating arm camera. (4) marscoordtrans: translates mosaic coordinates from one form into another. (5) marsdispcompare: checks a Left-to-Right stereo disparity image against a Right-to-Left disparity image to ensure they are consistent with each other. (6) marsdispwarp: takes one image of a stereo pair and warps it through a disparity map to create a synthetic opposite-eye image; for example, a right-eye image could be transformed to look like it was taken from the left eye. (7) marsfidfinder: finds fiducial markers in an image by projecting their approximate location and then using correlation to locate the markers to subpixel accuracy; these fiducial markers are small targets attached to the spacecraft surface, and locating them helps verify, or improve, the pointing of in situ cameras. (8) marsinvrange: inverse of marsrange; given a range file, re-computes an XYZ file that closely matches the original. (9) marsproj: projects an XYZ coordinate through the camera model, and reports the line/sample coordinates of the point in the image. (10) marsprojfid: given the output of marsfidfinder, projects the XYZ locations and compares them to the found locations, creating a report showing the fiducial errors in each image. (11) marsrad: radiometrically corrects an image. (12) marsrelabel: updates coordinate system or camera model labels in an image. (13) marstiexyz: given a stereo pair, allows the user to interactively pick a point in each image and reports the XYZ value corresponding to that pair of locations. (14) marsunmosaic: extracts a single frame from a mosaic, created such that it could have been an input to the original mosaic; useful for creating simulated input frames using different camera models than the original mosaic used. (15) merinverter: uses an inverse lookup table to convert 8-bit telemetered data to its 12-bit original form; can be used in other missions despite the name.
Li, Yan; Wang, Dejun; Zhang, Shaoyi
2014-01-01
Updating the structural model of complex structures is time-consuming due to the large size of the finite element model (FEM). Using conventional methods for these cases is computationally expensive or even impossible. A two-level method, which combined the Kriging predictor and the component mode synthesis (CMS) technique, was proposed to ensure the successful implementing of FEM updating of large-scale structures. In the first level, the CMS was applied to build a reasonable condensed FEM of complex structures. In the second level, the Kriging predictor that was deemed as a surrogate FEM in structural dynamics was generated based on the condensed FEM. Some key issues of the application of the metamodel (surrogate FEM) to FEM updating were also discussed. Finally, the effectiveness of the proposed method was demonstrated by updating the FEM of a real arch bridge with the measured modal parameters. PMID:24634612
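A Kriging predictor of the kind used as the second-level surrogate can be written compactly. The following zero-mean, Gaussian-correlation version is a minimal sketch, not the paper's implementation; the correlation parameter `theta` and the nugget term are illustrative choices.

```python
import numpy as np

def kriging_fit(X, y, theta=1.0, nugget=1e-10):
    """Fit a simple Kriging model (zero-mean Gaussian process) with a
    Gaussian correlation function: an illustrative stand-in for the
    surrogate FEM built on top of the condensed model."""
    # Correlation matrix between training points, plus a tiny nugget
    # for numerical stability.
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + nugget * np.eye(len(X))
    alpha = np.linalg.solve(R, y)            # precomputed weights
    return X, alpha, theta

def kriging_predict(model, x):
    """Predict the response at a new point x."""
    X, alpha, theta = model
    r = np.exp(-theta * (x - X) ** 2)        # correlations with training points
    return r @ alpha

# Toy 1-D demonstration: the predictor interpolates the training data.
X = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(X)
model = kriging_fit(X, y)
```

In the two-level scheme, `y` would be modal quantities computed from the CMS-condensed finite element model, so each surrogate evaluation replaces a full reanalysis during the updating iterations.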
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... present an update on topics including emergency warning systems, 9-1-1 location accuracy, distributed denial-of-service (DDoS), and cybersecurity best practices. DATES: December 4, 2013. ADDRESSES: Federal...
Information dissemination model for social media with constant updates
NASA Astrophysics Data System (ADS)
Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui
2018-07-01
With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and updated information, and then propose the priority of related information. To more effectively evaluate the effectiveness of the proposed model, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.
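The competitive relationship between original false information and updated information can be caricatured with a toy two-population contagion model. The equations, rates, and time step below are assumptions for illustration only, not the DMCU model itself; the "priority of related information" appears here as a term that converts false-information holders on contact with the update.

```python
def competition_step(S, F, U, beta_f=0.3, beta_u=0.4, correct=0.1, dt=0.1):
    """One Euler step of a toy competition model: susceptible users S adopt
    false info F or updated info U on contact, and F-holders who meet the
    update switch to U. Fractions of the population, so S + F + U stays 1."""
    new_f = beta_f * S * F * dt       # susceptibles adopting the false info
    new_u = beta_u * S * U * dt       # susceptibles adopting the update
    switched = correct * F * U * dt   # false-info holders being corrected
    return S - new_f - new_u, F + new_f - switched, U + new_u + switched

# Start with 1% spreaders of each kind and simulate 20 time units.
S, F, U = 0.98, 0.01, 0.01
for _ in range(200):
    S, F, U = competition_step(S, F, U)
```

With the update spreading slightly faster and correcting on contact, the updated information overtakes the false information; the DMCU model formalizes this kind of race on real social media activity data.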
Seismic hazard in the eastern United States
Mueller, Charles; Boyd, Oliver; Petersen, Mark D.; Moschetti, Morgan P.; Rezaeian, Sanaz; Shumway, Allison
2015-01-01
The U.S. Geological Survey seismic hazard maps for the central and eastern United States were updated in 2014. We analyze results and changes for the eastern part of the region. Ratio maps are presented, along with tables of ground motions and deaggregations for selected cities. The Charleston fault model was revised, and a new fault source for Charlevoix was added. Background seismicity sources utilized an updated catalog, revised completeness and recurrence models, and a new adaptive smoothing procedure. Maximum-magnitude models and ground motion models were also updated. Broad, regional hazard reductions of 5%–20% are mostly attributed to new ground motion models with stronger near-source attenuation. The revised Charleston fault geometry redistributes local hazard, and the new Charlevoix source increases hazard in northern New England. Strong increases in mid- to high-frequency hazard at some locations—for example, southern New Hampshire, central Virginia, and eastern Tennessee—are attributed to updated catalogs and/or smoothing.
A Summary of the NASA Lightning Nitrogen Oxides Model (LNOM) and Recent Results
NASA Technical Reports Server (NTRS)
Koshak, William; Peterson, Harld
2011-01-01
The NASA Marshall Space Flight Center introduced the Lightning Nitrogen Oxides Model (LNOM) a couple of years ago to combine routine state-of-the-art measurements of lightning with empirical laboratory results of lightning NOx production. The routine measurements included VHF lightning source data [such as from the North Alabama Lightning Mapping Array (LMA)], and ground flash location, peak current, and stroke multiplicity data from the National Lightning Detection Network™ (NLDN). Following these initial runs of LNOM, the model was updated to include several non-return stroke lightning NOx production mechanisms, and provided the impact of lightning NOx on an August 2006 run of CMAQ. In this study, we review the evolution of the LNOM in greater detail and discuss the model's latest upgrades and applications. Whereas previous applications were limited to five summer months of data for North Alabama thunderstorms, the most recent LNOM analyses cover several years. The latest statistics of ground and cloud flash NOx production are provided.
Jungé, Justin A; Scholl, Brian J; Chun, Marvin M
2007-01-01
Over repeated exposure to particular visual search displays, subjects are able to implicitly extract regularities that then make search more efficient-a phenomenon known as contextual cueing. Here we explore how the learning involved in contextual cueing is formed, maintained, and updated over experience. During an initial training phase, a group of signal first subjects searched through a series of predictive displays (where distractor locations were perfectly correlated with the target location), followed with no overt break by a series of unpredictive displays (where repeated contexts were uncorrelated with target locations). A second noise first group of subjects encountered the unpredictive displays followed by the predictive displays. Despite the fact that both groups had the same overall exposure to signal and noise, only the signal first group demonstrated subsequent contextual cueing. This primacy effect indicates that initial experience can result in hypotheses about regularities in displays-or the lack thereof-which then become resistant to updating. The absence of regularities in early stages of training even blocked observers from learning predictive regularities later on.
Genomic selection in a commercial winter wheat population.
He, Sang; Schulthess, Albert Wilhelm; Mirdita, Vilson; Zhao, Yusheng; Korzun, Viktor; Bothe, Reiner; Ebmeyer, Erhard; Reif, Jochen C; Jiang, Yong
2016-03-01
Genomic selection models can be trained using historical data, and filtering genotypes based on phenotyping intensity and a reliability criterion can increase prediction ability. We implemented genomic selection based on a large commercial population incorporating 2325 European winter wheat lines. Our objectives were (1) to study whether modeling epistasis besides additive genetic effects results in enhancement of the prediction ability of genomic selection, (2) to assess prediction ability when the training population comprised historical or less-intensively phenotyped lines, and (3) to explore the prediction ability in subpopulations selected based on the reliability criterion. We found a 5 % increase in prediction ability when shifting from additive to additive plus epistatic effects models. In addition, only a marginal loss from 0.65 to 0.50 in accuracy was observed using the data collected from 1 year to predict genotypes of the following year, revealing that stable genomic selection models can be accurately calibrated to predict subsequent breeding stages. Moreover, prediction ability was maximized when the genotypes evaluated in a single location were excluded from the training set but subsequently decreased again when the phenotyping intensity was increased above two locations, suggesting that the update of the training population should be performed considering all the selected genotypes but excluding those evaluated in a single location. The genomic prediction ability was substantially higher in subpopulations selected based on the reliability criterion, indicating that phenotypic selection for highly reliable individuals could be directly replaced by applying genomic selection to them. We empirically conclude that there is a high potential to assist commercial wheat breeding programs employing genomic selection approaches.
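Shifting from an additive to an additive-plus-epistatic model amounts to augmenting the marker matrix with interaction terms. The sketch below uses ridge regression as a simple stand-in for the GBLUP-style models typically used in genomic selection; the synthetic genotypes, effect sizes, and penalty are all illustrative.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression: a simple stand-in for GBLUP-style marker models."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def epistatic_features(X):
    """Augment additive marker codes (0/1/2) with pairwise marker products,
    a crude way of letting the model capture epistatic interactions."""
    pairs = [X[:, i] * X[:, j]
             for i in range(X.shape[1]) for j in range(i + 1, X.shape[1])]
    return np.column_stack([X] + pairs)

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(60, 8)).astype(float)    # synthetic genotypes
y = X @ rng.normal(size=8) + 0.5 * X[:, 0] * X[:, 1]  # additive + one epistatic effect
b_add = ridge_fit(X, y)                       # additive-effects model
b_epi = ridge_fit(epistatic_features(X), y)   # additive + epistasis model
```

Real implementations work with thousands of markers, so the pairwise expansion is usually replaced by an equivalent kernel (e.g., the Hadamard product of additive relationship matrices), but the modeling decision being compared is the same.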
FORCARB2: An updated version of the U.S. Forest Carbon Budget Model
Linda S. Heath; Michael C. Nichols; James E. Smith; John R. Mills
2010-01-01
FORCARB2, an updated version of the U.S. FORest CARBon Budget Model (FORCARB), produces estimates of carbon stocks and stock changes for forest ecosystems and forest products at 5-year intervals. FORCARB2 includes a new methodology for carbon in harvested wood products, updated initial inventory data, a revised algorithm for dead wood, and now includes public forest...
Indoor Spatial Updating With Impaired Vision
Legge, Gordon E.; Granquist, Christina; Baek, Yihwa; Gage, Rachel
2016-01-01
Purpose: Spatial updating is the ability to keep track of position and orientation while moving through an environment. We asked how normally sighted and visually impaired subjects compare in spatial updating and in estimating room dimensions. Methods: Groups of 32 normally sighted, 16 low-vision, and 16 blind subjects estimated the dimensions of six rectangular rooms. Updating was assessed by guiding the subjects along three-segment paths in the rooms. At the end of each path, they estimated the distance and direction to the starting location, and to a designated target. Spatial updating was tested in five conditions ranging from free viewing to full auditory and visual deprivation. Results: The normally sighted and low-vision groups did not differ in their accuracy for judging room dimensions. Correlations between estimated size and physical size were high. Accuracy of low-vision performance was not correlated with acuity, contrast sensitivity, or field status. Accuracy was lower for the blind subjects. The three groups were very similar in spatial-updating performance, and exhibited only weak dependence on the nature of the viewing conditions. Conclusions: People with a wide range of low-vision conditions are able to judge room dimensions as accurately as people with normal vision. Blind subjects have difficulty in judging the dimensions of quiet rooms, but some information is available from echolocation. Vision status has little impact on performance in simple spatial updating; proprioceptive and vestibular cues are sufficient. PMID:27978556
Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.
Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y
2007-01-01
Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. Caffeine concentration in blood is an important indicator for checking the therapeutic efficacy of the treatment against apnoea. The present study was aimed at building a pharmacokinetic model as a basis for a medical decision support tool. In the proposed model, time dependence of physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of caffeine concentration in blood (reduced bias and narrower credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is also studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict caffeine concentration in blood after a given treatment, if data are collected on the treated neonate.
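The sequential particle-based updating described above can be sketched in miniature: parameter particles are reweighted by the likelihood of each incoming concentration measurement and resampled when degenerate. The one-compartment model, the prior, and all numbers below are illustrative assumptions, not the study's actual neonatal model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-compartment model: concentration at time t (hours) after a
# dose, with an uncertain elimination rate k. Dose and volume are made-up numbers.
def concentration(k, t, dose=10.0, volume=2.0):
    return dose / volume * np.exp(-k * t)

# Prior particles for k, representing population-level variability.
particles = rng.lognormal(mean=np.log(0.2), sigma=0.5, size=5000)
weights = np.full(particles.size, 1.0 / particles.size)

# Sequentially reweight the particles as concentration measurements arrive.
observations = [(1.0, 4.1), (6.0, 2.5), (12.0, 1.4)]  # (time h, conc mg/L)
sigma_obs = 0.3
for t, y in observations:
    lik = np.exp(-0.5 * ((y - concentration(particles, t)) / sigma_obs) ** 2)
    weights *= lik
    weights /= weights.sum()
    # Resample when the effective sample size collapses (particle degeneracy).
    if 1.0 / np.sum(weights**2) < particles.size / 2:
        idx = rng.choice(particles.size, size=particles.size, p=weights)
        particles = particles[idx]
        weights = np.full(particles.size, 1.0 / particles.size)

posterior_mean_k = float(np.sum(weights * particles))
```

Each measurement sharpens the posterior over the elimination rate, which is the sense in which concentration data are more informative than body mass data alone.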
Titan's surface and atmosphere from Cassini/VIMS data with updated methane opacity
NASA Astrophysics Data System (ADS)
Hirtzig, M.; Bézard, B.; Coustenis, A.; Lellouch, E.; Drossart, P.; de Bergh, C.; Campargue, A.; Boudon, V.; Tyuterev, V.; Rannou, P.; Cours, T.; Kassi, S.; Nikitin, A.; Wang, L.; Solomonidou, A.; Schmitt, B.; Rodriguez, S.
2012-04-01
In this paper we present an updated analysis of VIMS data in view of recent developments on the methane opacity in the 1.3-5.2 µm region, a very important parameter in simulating Titan's spectrum. We use a multi-stream radiative transfer model, benefiting from the latest methane absorption coefficients available [1], which allows us to determine more accurately the haze and surface contributions. This code is applied to Cassini/VIMS spectro-imaging data of various regions with very different spectral responses to extract information on the content of the lower atmosphere (0-200 km) as well as on the surface properties. In particular, we update the DISR aerosol model [2] for the Huygens landing site, which we then adjust to fit the data for other locations on Titan's disk. Fitting VIMS data taken from 2004 to 2010 (TA to T70), around Titan's mid-latitudes (40°S-40°N), we determine the latitudinal and temporal evolution of the aerosol population, monitoring the North-South Asymmetry (NSA). While [3] witnessed the collapse of the detached haze layer around the equinox, we measure a continuous depletion of the aerosols throughout the atmosphere, although the NSA remains, with a brighter northern hemisphere. Using this improved atmospheric model, we also retrieve surface albedos simultaneously for all seven windows in the whole VIMS range for these regions, also recovering the shape of the surface albedo within each window. Finally, we investigate the probable chemical composition of Titan's surface, using mixtures of dark and complex hydrocarbons like bitumens and tholins, as well as bright CH4, CO2, NH3 and H2O ices of various grain sizes [4]. [1] Campargue, A. et al., (2012) Icarus, submitted. [2] Tomasko, M. et al., (2008) Planetary and Space Science, 56, 669. [3] West, R.A. et al., (2011) Geophysical Research Letters, 38, L06204. [4] Hirtzig, M. et al., (2012) Planetary and Space Science, submitted.
NASA Astrophysics Data System (ADS)
Floriane, Provost; Jean-Philippe, Malet; Cécile, Doubre; Julien, Gance; Alessia, Maggi; Agnès, Helmstetter
2015-04-01
Characterizing the micro-seismic activity of landslides is an important parameter for a better understanding of the physical processes controlling landslide behaviour. However, the location of seismic sources on landslides is a challenging task, mostly because of (a) the recording system geometry, (b) the lack of clear P-wave arrivals and clear wave differentiation, and (c) the heterogeneous velocities of the ground. The objective of this work is therefore to test whether the integration of a 3D velocity model in probabilistic seismic source location codes improves the quality of the location determination, especially in depth. We studied the clay-rich landslide of Super-Sauze (French Alps). Most of the seismic events (rockfalls, slidequakes, tremors...) are generated in the upper part of the landslide near the main scarp. The seismic recording system is composed of two antennas, each with four vertical seismometers, located on the east and west sides of the seismically active part of the landslide. A refraction seismic campaign was conducted in August 2014 and a 3D P-wave model was estimated using the Quasi-Newton tomography inversion algorithm. The shots of the seismic campaign are used as calibration shots to test the performance of the different location methods and to further update the 3D velocity model. Natural seismic events are detected with a semi-automatic technique using a frequency threshold. The first arrivals are picked using a kurtosis-based method and compared to manual picking. Several location methods were finally tested. We compared a non-linear probabilistic method coupled with the 3D P-wave model and a beam-forming method inverted for an apparent velocity. We found that the Quasi-Newton tomography inversion algorithm provides results coherent with the original underlying topography. The velocity ranges from 500 m.s-1 at the surface to 3000 m.s-1 in the bedrock. 
For the majority of the calibration shots, the use of a 3D velocity model significantly improves the results of the location procedure using P-wave arrivals. All the shots were fired 50 centimeters below the surface, and hence the vertical error could not be determined from the seismic campaign. We further discriminate between the rockfalls and the slidequakes occurring on the landslide using the depths computed with the 3D velocity model. This could serve as an additional criterion for automatically classifying the events.
Highly efficient model updating for structural condition assessment of large-scale bridges.
DOT National Transportation Integrated Search
2015-02-01
For efficiently updating models of large-scale structures, the response surface (RS) method based on radial basis functions (RBFs) is proposed to model the input-output relationship of structures. The key issues for applying the proposed method a...
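A response-surface surrogate of the kind described can be sketched with Gaussian RBFs; the "expensive" response below is a stand-in function, not the bridge model from the study:

```python
import numpy as np

def rbf_surrogate(X, y, eps=1.5):
    """Fit a Gaussian-RBF interpolant mapping parameter sets X to responses y."""
    X = np.atleast_2d(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = np.linalg.solve(np.exp(-(eps * d) ** 2), y)  # interpolation weights

    def predict(Xq):
        dq = np.linalg.norm(np.atleast_2d(Xq)[:, None, :] - X[None, :, :], axis=-1)
        return np.exp(-(eps * dq) ** 2) @ w

    return predict

# Stand-in for an expensive structural response (e.g., a natural frequency as
# a function of two model parameters); purely illustrative.
def expensive_response(p):
    return np.sin(p[:, 0]) + 0.5 * p[:, 1] ** 2

g = np.linspace(0.0, 2.0, 6)                        # design-of-experiments grid
X_train = np.array([[a, b] for a in g for b in g])
surrogate = rbf_surrogate(X_train, expensive_response(X_train))
```

Once fitted, the cheap surrogate replaces repeated finite-element runs inside the parameter-updating optimization loop.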
UPDATE ON EPA'S URBAN WATERSHED MANAGEMENT BRANCH MODELING ACTIVITIES
This paper provides the Stormwater Management Model (SWMM) user community with a description of the Environmental Protection Agency (EPA's) Office of Research and Development (ORD) approach to urban watershed modeling research and provides an update on current ORD SWMM-related pr...
Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics
NASA Astrophysics Data System (ADS)
Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong
2018-02-01
Modeling pedestrian movement is an interesting problem both in statistical physics and in computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Different update schemes usually make the models behave in different ways, so a model should be carefully recalibrated when its scheme changes. In this paper, we therefore investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the ordered-sequential scheme and the shuffled scheme, on pedestrian dynamics. The multi-velocity floor field cellular automaton (FFCA), which accounts for the changes of pedestrians' moving properties along walking paths and for the heterogeneity of pedestrians' walking abilities, was used. Only the parallel scheme requires collision detection and resolution, which sets it clearly apart from the other update schemes. For pedestrian evacuation, the evacuation time is longer, and the difference in pedestrians' walking abilities is better reflected, under the parallel scheme. At a bottleneck, such as an exit, the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies across the four update schemes when simulating pedestrian flow with high desired velocity. Update schemes may have no influence on simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with their environment.
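The structural difference between the parallel and sequential schedules can be illustrated on a minimal 1-D ring lattice (an illustrative toy, not the multi-velocity FFCA of the paper); note that only the parallel step needs explicit collision resolution:

```python
import random

def step_sequential(cells, rng):
    """Sequential update: pedestrians move one at a time on a ring, so the
    occupancy seen by each mover is current and no conflicts can arise."""
    n = len(cells)
    peds = [i for i, c in enumerate(cells) if c]
    rng.shuffle(peds)                          # shuffled-scheme ordering
    for k, i in enumerate(peds):
        j = (i + rng.choice((-1, 1))) % n      # pick a random neighbor cell
        if not cells[j]:
            cells[i], cells[j] = 0, 1
            peds[k] = j
    return cells

def step_parallel(cells, rng):
    """Parallel update: all pedestrians choose targets simultaneously, so two
    of them may claim the same empty cell and the conflict must be resolved."""
    n = len(cells)
    claims, stay = {}, set()
    for i, c in enumerate(cells):
        if c:
            j = (i + rng.choice((-1, 1))) % n
            if cells[j]:
                stay.add(i)                    # target occupied: stand still
            else:
                claims.setdefault(j, []).append(i)
    new = [0] * n
    for i in stay:
        new[i] = 1
    for j, movers in claims.items():
        winner = rng.choice(movers)            # collision resolution: one wins
        new[j] = 1
        for m in movers:
            if m != winner:
                new[m] = 1                     # losers remain in place
    return new
```

Both steps conserve the number of pedestrians and never place two in one cell; only the parallel scheme needs the extra conflict-resolution pass.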
SEED Servers: High-Performance Access to the SEED Genomes, Annotations, and Metabolic Models
Aziz, Ramy K.; Devoid, Scott; Disz, Terrence; Edwards, Robert A.; Henry, Christopher S.; Olsen, Gary J.; Olson, Robert; Overbeek, Ross; Parrello, Bruce; Pusch, Gordon D.; Stevens, Rick L.; Vonstein, Veronika; Xia, Fangfang
2012-01-01
The remarkable advance in sequencing technology and the rising interest in medical and environmental microbiology, biotechnology, and synthetic biology resulted in a deluge of published microbial genomes. Yet, genome annotation, comparison, and modeling remain a major bottleneck to the translation of sequence information into biological knowledge, hence computational analysis tools are continuously being developed for rapid genome annotation and interpretation. Among the earliest, most comprehensive resources for prokaryotic genome analysis, the SEED project, initiated in 2003 as an integration of genomic data and analysis tools, now contains >5,000 complete genomes, a constantly updated set of curated annotations embodied in a large and growing collection of encoded subsystems, a derived set of protein families, and hundreds of genome-scale metabolic models. Until recently, however, maintaining current copies of the SEED code and data at remote locations has been a pressing issue. To allow high-performance remote access to the SEED database, we developed the SEED Servers (http://www.theseed.org/servers): four network-based servers intended to expose the data in the underlying relational database, support basic annotation services, offer programmatic access to the capabilities of the RAST annotation server, and provide access to a growing collection of metabolic models that support flux balance analysis. The SEED servers offer open access to regularly updated data, the ability to annotate prokaryotic genomes, the ability to create metabolic reconstructions and detailed models of metabolism, and access to hundreds of existing metabolic models. This work offers and supports a framework upon which other groups can build independent research efforts. 
Large integrations of genomic data represent one of the major intellectual resources driving research in biology, and programmatic access to the SEED data will provide significant utility to a broad collection of potential users. PMID:23110173
Selective updating of working memory content modulates meso-cortico-striatal activity.
Murty, Vishnu P; Sambataro, Fabio; Radulescu, Eugenia; Altamura, Mario; Iudicello, Jennifer; Zoltick, Bradley; Weinberger, Daniel R; Goldberg, Terry E; Mattay, Venkata S
2011-08-01
Accumulating evidence from non-human primates and computational modeling suggests that dopaminergic signals arising from the midbrain (substantia nigra/ventral tegmental area) mediate striatal gating of the prefrontal cortex during the selective updating of working memory. Using event-related functional magnetic resonance imaging, we explored the neural mechanisms underlying the selective updating of information stored in working memory. Participants were scanned during a novel working memory task that parses the neurophysiology underlying working memory maintenance, overwriting, and selective updating. Analyses revealed a functionally coupled network consisting of a midbrain region encompassing the substantia nigra/ventral tegmental area, caudate, and dorsolateral prefrontal cortex that was selectively engaged during working memory updating compared to the overwriting and maintenance of working memory content. Further analysis revealed differential midbrain-dorsolateral prefrontal interactions during selective updating between low-performing and high-performing individuals. These findings highlight the role of this meso-cortico-striatal circuitry during the selective updating of working memory in humans, which complements previous research in behavioral neuroscience and computational modeling. Published by Elsevier Inc.
High-resolution seismicity catalog of Italian peninsula in the period 1981-2015
NASA Astrophysics Data System (ADS)
Michele, M.; Latorre, D.; Castello, B.; Di Stefano, R.; Chiaraluce, L.
2017-12-01
In order to provide an updated reference catalog of Italian seismicity, the absolute locations of the last 35 years (1981-2015) of seismic activity were computed with a three-dimensional VP and VS velocity model covering the whole Italian territory. The NonLinLoc code (Lomax et al., 2000), which is based on a probabilistic approach, was used to provide a complete and robust description of the uncertainties associated with the locations, corresponding to the hypocentral solutions with the highest probability density. Moreover, the code, which uses a finite-difference approximation of the eikonal equation (Podvin and Lecomte, 1991), can handle strongly contrasted velocity models in the arrival-time computation. To optimize the earthquake locations, we included station corrections in the inverse problem. For each year, the number of available earthquakes depends on both the network detection capability and the occurrence of major seismic sequences. The starting earthquake catalog was based on 2.6 million P and 1.9 million S arrival-time picks for 278,607 selected earthquakes, each recorded by at least 3 stations of the Italian seismic network. Compared to previous catalogs, which consist of hypocentral locations retrieved with linearized location methods, the new catalog shows a marked improvement, as testified by the location parameters assessing the quality of the solutions (i.e., RMS, azimuthal gap, formal error on the horizontal and vertical components). In addition, we used the distance between the expected and the maximum-likelihood hypocenter locations to establish the unimodal (well-resolved location) or multimodal (poorly resolved location) character of the probability distribution. We used these parameters to classify the resulting locations into four classes (A, B, C and D), considering the goodness of the previous parameters simultaneously. 
The upper classes (A and B) include 65% of the relocated earthquakes, while the lowest class (D) includes only 7% of the seismicity. We present the new catalog, consisting of 272,847 events, with examples of earthquake locations related to the background seismicity as well as to small to large seismic sequences that occurred in Italy over the last 35 years.
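The probabilistic flavor of such a location procedure can be sketched as a likelihood grid search over travel-time residuals; the uniform velocity and four-station geometry here are toy assumptions (NonLinLoc itself uses 3-D velocity models and finite-difference eikonal travel times):

```python
import numpy as np

# Toy network: four surface stations (x, y, z in km) and a uniform P velocity.
stations = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0]], float)
vp = 5.0  # km/s

def travel_times(src):
    return np.linalg.norm(stations - src, axis=1) / vp

true_src = np.array([4.0, 6.0, 8.0])
obs = travel_times(true_src)  # noise-free picks for clarity

# Probabilistic grid search: Gaussian pick errors (sigma = 0.05 s), with the
# unknown origin time removed by demeaning the residuals.
best, best_ll = None, -np.inf
for x in np.arange(0.0, 10.5, 0.5):
    for y in np.arange(0.0, 10.5, 0.5):
        for z in np.arange(0.0, 15.5, 0.5):
            r = obs - travel_times(np.array([x, y, z]))
            r -= r.mean()
            ll = -0.5 * np.sum((r / 0.05) ** 2)
            if ll > best_ll:
                best, best_ll = np.array([x, y, z]), ll
# `best` now holds the maximum-likelihood hypocenter on the grid.
```

Evaluating the full likelihood over the grid, rather than iterating from a starting point as linearized methods do, is what yields the uncertainty description (unimodal versus multimodal) discussed above.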
Peterson, M.D.; Mueller, C.S.
2011-01-01
The USGS National Seismic Hazard Maps are updated about every six years by incorporating newly vetted science on earthquakes and ground motions. The 2008 hazard maps for the central and eastern United States region (CEUS) were updated by using revised New Madrid and Charleston source models, an updated seismicity catalog and an estimate of magnitude uncertainties, a distribution of maximum magnitudes, and several new ground-motion prediction equations. The new models resulted in significant ground-motion changes at 5 Hz and 1 Hz spectral acceleration with 5% damping compared to the 2002 version of the hazard maps. The 2008 maps have now been incorporated into the 2009 NEHRP Recommended Provisions, the 2010 ASCE-7 Standard, and the 2012 International Building Code. The USGS is now planning the next update of the seismic hazard maps, which will be provided to the code committees in December 2013. Science issues that will be considered for introduction into the CEUS maps include: 1) updated recurrence models for New Madrid sources, including new geodetic models and magnitude estimates; 2) new earthquake sources and techniques considered in the 2010 model developed by the nuclear industry; 3) new NGA-East ground-motion models (currently under development); and 4) updated earthquake catalogs. We will hold a regional workshop in late 2011 or early 2012 to discuss these and other issues that will affect the seismic hazard evaluation in the CEUS.
Los Alamos Climatology 2016 Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruggeman, David Alan
The Los Alamos National Laboratory (LANL or the Laboratory) operates a meteorology monitoring network to support LANL emergency response, engineering designs, environmental compliance, environmental assessments, safety evaluations, weather forecasting, environmental monitoring, research programs, and environmental restoration. Weather data has been collected in Los Alamos since 1910. Bowen (1990) provided climate statistics (temperature and precipitation) for the 1961– 1990 averaging period, and included other analyses (e.g., wind and relative humidity) based on the available station locations and time periods. This report provides an update to the 1990 publication Los Alamos Climatology (Bowen 1990).
78 FR 40553 - Privacy Act of 1974: Republication of Notice of Systems of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-05
...In accordance with 5 U.S.C. 552a(e)(4), the Tennessee Valley Authority (TVA) is republishing in full a notice of the existence and character of each TVA system of records. TVA is correcting minor typographical and stylistic errors in previously existing notices and has updated those notices to reflect current organizational structure. Also, updates are being made to show any changes to system locations; managers and addresses; categories of individuals and records; procedures and practices for storing, retrieving, accessing, retaining, and disposing of records.
SysML model of exoplanet archive functionality and activities
NASA Astrophysics Data System (ADS)
Ramirez, Solange
2016-08-01
The NASA Exoplanet Archive is an online service that serves data and information on exoplanets and their host stars to support astronomical research related to the search for and characterization of extra-solar planetary systems. In order to provide the most up-to-date data sets to users, the exoplanet archive performs weekly updates that include additions to the database and updates to the services as needed. These weekly updates are complex due to interfaces within the archive. I will be presenting a SysML model that helps us perform these update activities on a weekly basis.
Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
None, None
2016-10-01
This report documents the technical content of the expansions and updates in Argonne National Laboratory’s GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
The Identity-Location Binding Problem.
Howe, Piers D L; Ferguson, Adam
2015-09-01
The binding problem is fundamental to visual perception. It is the problem of associating an object's visual properties with itself and not with some other object. The problem is made particularly difficult because different properties of an object, such as its color, shape, size, and motion, are often processed independently, sometimes in different cortical areas. The results of these separate analyses have to be combined before the object can be seen as a single coherent entity rather than a collection of unconnected features. Visual bindings are typically initiated and updated in a serial fashion, one object at a time. Here, we show that one type of binding, location-identity bindings, can be updated in parallel. We do this by using two complementary techniques, the simultaneous-sequential paradigm and systems factorial technology. These techniques make different assumptions and rely on different behavioral measures, yet both lead to the same conclusion. Copyright © 2014 Cognitive Science Society, Inc.
NASA Astrophysics Data System (ADS)
Osterman, G. B.; Fisher, B.; Roehl, C. M.; Wunch, D.; Wennberg, P. O.; Eldering, A.; Naylor, B. J.; Crisp, D.; Pollock, H. R.; Gunson, M. R.
2014-12-01
The NASA Orbiting Carbon Observatory-2 (OCO-2) successfully launched from Vandenberg Air Force Base in California on July 2, 2014. The OCO-2 mission is designed to provide remotely sensed measurements of the column-averaged dry air mole fraction of carbon dioxide from space. OCO-2 is capable of making measurements in three observation modes: nadir, glint and target. The standard operational mode for OCO-2 alternates between nadir and glint mode every 16 days, but target mode observations are possible by commanding the spacecraft to point to a specific surface location. In this presentation we provide information on the preliminary observations and plans for OCO-2 in 2015. In particular, we will provide an update on the pointing capabilities and accuracy of OCO-2, as well as updates on OCO-2 target mode, including possible target mode locations. We will show calendars for the different viewing geometries and target mode opportunities.
LITHO1.0: An Updated Crust and Lithosphere Model of the Earth
NASA Astrophysics Data System (ADS)
Masters, G.; Ma, Z.; Laske, G.; Pasyanos, M. E.
2011-12-01
We are developing LITHO1.0: an updated crust and lithosphere model of the Earth. The overall plan is to take the popular CRUST2.0 model - a global model of crustal structure with a relatively poor representation of the uppermost mantle - and improve its nominal resolution to 1 degree and extend the model to include lithospheric structure. The new model, LITHO1.0, will be constrained by many different datasets including extremely large new datasets of relatively short period group velocity data. Other data sets include (but are not limited to) compilations of receiver function constraints and active source studies. To date, we have completed the compilation of extremely large global datasets of group velocity for Rayleigh and Love waves from 10mHz to 40mHz using a cluster analysis technique. We have also extended the method to measure phase velocity and are complementing the group velocity with global data sets of longer period phase data that help to constrain deep lithosphere properties. To model these data, we require a starting model for the crust at a nominal resolution of 1 degree. This has been developed by constructing a map of crustal thickness using data from receiver function and active source experiments where available, and by using CRUST2.0 where other constraints are not available. Particular care has been taken to make sure that the locations of sharp changes in crustal thickness are accurately represented. This map is then used as a template to extend CRUST2.0 to 1 degree nominal resolution and to develop starting maps of all crustal properties. We are currently modeling the data using two techniques. The first is a linearized inversion about the 3D crustal starting model. Note that it is important to use local eigenfunctions to compute Frechet derivatives due to the extreme variations in crustal structure. Another technique uses a targeted grid search method. A preliminary model for the crustal part of the model will be presented.
Electronic Education System Model-2
ERIC Educational Resources Information Center
Güllü, Fatih; Kuusik, Rein; Laanpere, Mart
2015-01-01
In this study we presented the new EES Model-2, extended from the EES model for more productive implementation in e-learning process design and modelling in higher education. Most of the updates were related to the uppermost instructional layer. We updated the learning-processes object of the layer to adapt the educational process for young and old people,…
Capital update factor: a new era approaches.
Grimaldi, P L
1993-02-01
The Health Care Financing Administration (HCFA) has constructed a preliminary model of a new capital update method which is consistent with the framework being developed to refine the update method for PPS operating costs. HCFA's eventual goal is to develop a single update framework for operating and capital costs. Initial results suggest that adopting the new capital update method would reduce capital payments substantially, which might intensify creditors' concerns about extending loans to hospitals.
Proposed reporting model update creates dialogue between FASB and not-for-profits.
Mosrie, Norman C
2016-04-01
Seeing a need to refresh the current guidelines, the Financial Accounting Standards Board (FASB) proposed an update to the financial accounting and reporting model for not-for-profit entities. In a response to solicited feedback, the board is now revisiting its proposed update and has set forth a plan to finalize its new guidelines. The FASB continues to solicit and respond to feedback as the process progresses.
Capital planning for operating theatres based on projecting future theatre requirements.
Sheehan, Jennifer A; Tyler, Peter; Jayasinha, Hirani; Meleady, Kathleen T; Jones, Neill
2011-05-01
During 2006, the NSW and ACT Health Departments jointly engaged KPMG to develop an Operating Theatre Requirements Projection Model and an accompanying planning guideline. A research scan was carried out to identify drivers of surgical demand, theatre capacity and theatre performance, as well as to locate existing approaches to modelling operating theatre requirements for planning purposes. The project delivered a Microsoft Excel-based model for projecting future operating theatre requirements, together with an accompanying guideline for use of the model and interpretation of its outputs. It provides a valuable addition to the suite of tools available to Health staff for service and capital planning. The model operates with several limitations, largely because it is data dependent and constrained by the state and completeness of available theatre activity data. However, the operational flexibility built into the model allows users to compensate for these limitations on a case-by-case basis when they have access to suitable local data. The design flexibility of the model means that updating it as improved data become available is not difficult, so revisions can be made quickly and disseminated to users rapidly.
NASA Astrophysics Data System (ADS)
Sani, M. S. M.; Nazri, N. A.; Alawi, D. A. J.
2017-09-01
Resistance spot welding (RSW) is a proficient joining method commonly used for sheet metal joining and is one of the oldest spot welding processes used in industry, especially in the automotive sector. RSW applies heat and pressure over a controlled time to join two or more metal sheets at a localized area, and is claimed to be the most efficient welding process in metal fabrication. The purpose of this project is to perform model updating of an RSW plate structure joining mild steel 1010 and stainless steel 304. For the updating, normal-mode finite element analysis (FEA) and experimental modal analysis (EMA) were carried out. Results show that the discrepancies in natural frequency between FEA and EMA are below 10%. Sensitivity-based model updating is evaluated to determine which parameters are influential in this structural dynamic modification. Young's modulus and density of both materials are found to be significant parameters for model updating. In conclusion, after model updating, the total average error of the dissimilar RSW plate model is significantly reduced.
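The FEA-EMA comparison and a one-parameter update can be sketched as follows; the frequencies are invented for illustration, and the single Young's-modulus scaling is a simplification of the sensitivity-based updating the study performs:

```python
import numpy as np

# Illustrative modal frequencies (Hz); not the measured values from the study.
f_fea = np.array([120.0, 310.0, 545.0])   # finite element predictions
f_ema = np.array([114.0, 298.0, 520.0])   # experimental modal analysis

error_pct = 100.0 * np.abs(f_fea - f_ema) / f_ema

# For a linear elastic structure, natural frequencies scale with sqrt(E/rho),
# so a single stiffness correction factor s (f_ema ~ s * f_fea) can be fitted
# by least squares; Young's modulus is then multiplied by s**2.
s = float(np.dot(f_fea, f_ema) / np.dot(f_fea, f_fea))
E_scale = s ** 2
updated_error_pct = 100.0 * np.abs(s * f_fea - f_ema) / f_ema
```

In a real sensitivity-based update, both Young's modulus and density of each material would be adjusted mode by mode rather than through one global factor.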
NODA for EPA's Updated Ozone Transport Modeling
Find EPA's NODA for the Updated Ozone Transport Modeling Data for the 2008 Ozone National Ambient Air Quality Standard (NAAQS), along with the Extension of the Public Comment Period on CSAPR for the 2008 NAAQS.
Malinowski, Kathleen; McAvoy, Thomas J.; George, Rohini; Dieterich, Sonja; D’Souza, Warren D.
2013-01-01
Purpose: To determine how best to time respiratory surrogate-based tumor motion model updates by comparing a novel technique based on external measurements alone to three direct measurement methods. Methods: Concurrently measured tumor and respiratory surrogate positions from 166 treatment fractions for lung or pancreas lesions were analyzed. Partial-least-squares regression models of tumor position from marker motion were created from the first six measurements in each dataset. Successive tumor localizations were obtained at a rate of once per minute on average. Model updates were timed according to four methods: never, respiratory surrogate-based (when metrics based on respiratory surrogate measurements exceeded confidence limits), error-based (when localization error ≥3 mm), and always (approximately once per minute). Results: Radial tumor displacement prediction errors (mean ± standard deviation) for the four schemes described above were 2.4 ± 1.2, 1.9 ± 0.9, 1.9 ± 0.8, and 1.7 ± 0.8 mm, respectively. The never-update error was significantly larger than the errors of the other methods. Mean update counts over 20 min were 0, 4, 9, and 24, respectively. Conclusions: The same improvement in tumor localization accuracy could be achieved through any of the three update methods, but significantly fewer updates were required when the respiratory surrogate method was utilized. This study establishes the feasibility of timing image acquisitions for updating respiratory surrogate models without direct tumor localization. PMID:23822413
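The error-based update rule can be sketched with an ordinary least-squares surrogate model (the study used partial-least-squares regression of tumor position on marker motion; the session data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic session: one external surrogate signal and a tumor position whose
# relationship to the surrogate drifts slowly over the fraction.
n = 120
surrogate = np.sin(np.linspace(0.0, 40.0, n))
drift = np.linspace(1.0, 1.6, n)
tumor = 8.0 * drift * surrogate + rng.normal(0.0, 0.4, n)  # position in mm

def fit(idx):
    """Least-squares line tumor ~ a*surrogate + b over the indexed samples."""
    X = np.c_[surrogate[idx], np.ones(len(idx))]
    coef, *_ = np.linalg.lstsq(X, tumor[idx], rcond=None)
    return coef

window = list(range(6))          # initial model from the first 6 measurements
a, b = fit(window)
errors, updates = [], 0
for k in range(6, n):
    err = abs(a * surrogate[k] + b - tumor[k])
    errors.append(float(err))
    if err >= 3.0:               # error-based rule: refit when error >= 3 mm
        window.append(k)
        a, b = fit(window)
        updates += 1
```

Because the model is only refit when its prediction drifts past the 3 mm threshold, far fewer direct localizations are needed than with once-per-minute updates, which is the trade-off the study quantifies.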
Reynolds, Richard J.; Calef, F.J.
2011-01-01
The hydrogeology of the stratified-drift aquifer in the Sprout Creek and Fishkill Creek valleys in southern Dutchess County, New York, previously investigated by the U.S. Geological Survey (USGS) in 1982, was updated through the use of new well data made available through the New York State Department of Environmental Conservation's Water Well Program. Additional well data related to U.S. Environmental Protection Agency (USEPA) remedial investigations of two groundwater contamination sites near the villages of Hopewell Junction and Shenandoah, New York, were also used in this study. The boundary of the stratified-drift aquifer described in a previous USGS report was extended slightly eastward and southward to include adjacent tributary valleys and the USEPA groundwater contamination site at Shenandoah, New York. The updated report consists of maps showing well locations, surficial geology, altitude of the water table, and saturated thickness of the aquifer. Geographic information system coverages of these four maps were created as part of the update process.
Validation of New Wind Resource Maps
NASA Astrophysics Data System (ADS)
Elliott, D.; Schwartz, M.
2002-05-01
The National Renewable Energy Laboratory (NREL) recently led a project to validate updated state wind resource maps for the northwestern United States produced by a private U.S. company, TrueWind Solutions (TWS). The independent validation project was a cooperative activity among NREL, TWS, and meteorological consultants. The independent validation concept originated at a May 2001 technical workshop held at NREL to discuss updating the Wind Energy Resource Atlas of the United States. Part of the workshop, which included more than 20 attendees from the wind resource mapping and consulting community, was dedicated to reviewing the latest techniques for wind resource assessment. It became clear that using a numerical modeling approach for wind resource mapping was rapidly gaining ground as a preferred technique and, if the trend continues, it will soon become the most widely used technique around the world. The numerical modeling approach is a relatively fast application compared to older mapping methods and, in theory, should be quite accurate because it directly estimates the magnitude of boundary-layer processes that affect the wind resource of a particular location. Numerical modeling output combined with high-resolution terrain data can produce useful wind resource information at a resolution of 1 km or finer. However, because the use of the numerical modeling approach is new (within the last 3-5 years) and relatively unproven, meteorological consultants question the accuracy of the approach. It was clear that new state or regional wind maps produced by this method would have to undergo independent validation before the results would be accepted by the wind energy community and developers.
NASA Astrophysics Data System (ADS)
Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.
2016-12-01
Convective weather is known to be highly nonlinear and chaotic, and its location and timing are hard to predict precisely. Our Big Data Assimilation (BDA) effort has been exploring the use of dense and frequent observations to avoid non-Gaussian probability density functions (PDFs) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of a storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significant non-Gaussian PDFs with every-30-second updates. With a 10240-member ensemble Kalman filter and a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.
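The ensemble Kalman filter analysis step at the heart of such systems can be illustrated in scalar form. This is a generic textbook sketch of the stochastic (perturbed-observation) EnKF, not the BDA system's implementation:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err):
    """One stochastic-EnKF analysis step for a scalar state (toy sketch).
    Each member is nudged toward its own perturbed copy of the observation,
    weighted by the Kalman gain computed from the ensemble variance."""
    rng = np.random.default_rng(42)
    x = np.asarray(ensemble, dtype=float)
    perturbed = obs + rng.normal(0.0, obs_err, x.size)  # perturbed observations
    var_x = np.var(x, ddof=1)                           # ensemble (background) variance
    gain = var_x / (var_x + obs_err**2)                 # scalar Kalman gain (H = I)
    return x + gain * (perturbed - x)
```

After the analysis, the ensemble mean moves toward the observation and the ensemble spread shrinks, which is exactly the behavior whose breakdown under non-Gaussian PDFs the abstract investigates.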
Update to the USDA-ARS fixed-wing spray nozzle models
USDA-ARS?s Scientific Manuscript database
The current USDA ARS Aerial Spray Nozzle Models were updated both to reflect new standardized measurement methods and systems and to increase the operational spray pressure, aircraft airspeed, and nozzle orientation angle limits. The new models were developed using both Central Composite Design...
Lazy Updating of hubs can enable more realistic models by speeding up stochastic simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ehlert, Kurt; Loewe, Laurence, E-mail: loewe@wisc.edu; Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, Wisconsin 53715
2014-11-28
To respect the nature of discrete parts in a system, stochastic simulation algorithms (SSAs) must update for each action (i) all part counts and (ii) each action's probability of occurring next and its timing. This makes it expensive to simulate biological networks with well-connected “hubs” such as ATP that affect many actions. Temperature and volume also affect many actions and may be changed significantly in small steps by the network itself during fever and cell growth, respectively. Such trends matter for evolutionary questions, as cell volume determines doubling times and fever may affect survival, both key traits for biological evolution. Yet simulations often ignore such trends and assume constant environments to avoid many costly probability updates. Such computational convenience precludes analyses of important aspects of evolution. Here we present “Lazy Updating,” an add-on for SSAs designed to reduce the cost of simulating hubs. When a hub changes, Lazy Updating postpones all probability updates for reactions depending on this hub, until a threshold is crossed. Speedup is substantial if most computing time is spent on such updates. We implemented Lazy Updating for the Sorting Direct Method and it is easily integrated into other SSAs such as Gillespie's Direct Method or the Next Reaction Method. Testing on several toy models and a cellular metabolism model showed >10× faster simulations for its use-cases—with a small loss of accuracy. Thus we see Lazy Updating as a valuable tool for some special but important simulation problems that are difficult to address efficiently otherwise.
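The lazy bookkeeping can be sketched with a toy Direct-Method-style loop. Everything here is an illustrative simplification, not the paper's implementation: two reactions each consume one unit of a shared hub species, their propensities are cached, and the cache is only recomputed once the hub count has drifted past a relative threshold.

```python
import random

def lazy_ssa(k=(1.0, 2.0), hub0=10000, threshold=0.05, n_events=2000, seed=1):
    """Toy Lazy-Updating sketch: propensities a_i = k_i * hub are cached at
    hub_ref and left stale until |hub - hub_ref| / hub_ref > threshold.
    (Time advancement is omitted to keep the sketch short.)"""
    rng = random.Random(seed)
    hub, hub_ref = hub0, hub0
    props = [ki * hub_ref for ki in k]     # cached (possibly stale) propensities
    recomputes = 0
    for _ in range(n_events):
        total = sum(props)
        r = rng.random() * total           # pick next reaction, Direct Method style
        i = 0 if r < props[0] else 1
        hub -= 1                           # each firing consumes one hub unit
        if abs(hub - hub_ref) / hub_ref > threshold:   # lazy-update trigger
            hub_ref = hub
            props = [ki * hub_ref for ki in k]
            recomputes += 1
    return hub, recomputes
```

With a 5% threshold, thousands of reaction events require only a handful of propensity recomputations, which is the source of the speedup the abstract reports.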
Determination of replicate composite bone material properties using modal analysis.
Leuridan, Steven; Goossens, Quentin; Pastrav, Leonard; Roosen, Jorg; Mulier, Michiel; Denis, Kathleen; Desmet, Wim; Sloten, Jos Vander
2017-02-01
Replicate composite bones are used extensively for in vitro testing of new orthopedic devices. Contrary to tests with cadaveric bone material, which inherently exhibits large variability, they offer a standardized alternative with limited variability. Accurate knowledge of the composite's material properties is important when interpreting in vitro test results and when using them in FE models of biomechanical constructs. The cortical bone analogue material properties of three different fourth-generation composite bone models were determined by updating FE bone models using experimental and numerical modal analysis results. The influence of the cortical bone analogue material model (isotropic or transversely isotropic) and the inter- and intra-specimen variability were assessed. Isotropic cortical bone analogue material models failed to represent the experimental behavior in a satisfactory way even after updating the elastic material constants. When transversely isotropic material models were used, the updating procedure resulted in a reduction of the longitudinal Young's modulus from 16.00 GPa before updating to an average of 13.96 GPa after updating. The shear modulus was increased from 3.30 GPa to an average value of 3.92 GPa. The transverse Young's modulus was lowered from an initial value of 10.00 GPa to 9.89 GPa. Low inter- and intra-specimen variability was found.
Business Model of CORS-TR TUSAGA-Aktif
NASA Astrophysics Data System (ADS)
Bakici, S.; Erkek, B.; İlbey, A.; Kulaksiz, E.
2017-11-01
CORS-TR (TUSAGA-Aktif (Turkish National Permanent GNSS Network - Active)) provides location information at cm-level accuracy within a few seconds in Turkey and TR Northern Cyprus, wherever adequate numbers of GNSS satellites are observed and communication links are available. No ground control points or benchmarks are necessary. There are 146 permanent GNSS stations within the CORS-TR System. Station data are transferred online to the main control center located in the Mapping Department of the General Directorate of Land Registry and Cadastre. The CORS-TR System was established in 2008 and has since been updated in the software, hardware, communication, and pricing areas, from both technical and administrative points of view, in order to improve the system and provide better service to users. Thus, the added value obtained from the CORS-TR System has been increased, contributing to the more efficient use of country resources. In this paper, we explain in detail how the technical, administrative, and financial aspects of operating the CORS-TR System are managed under a sustainable business model, including the studies carried out to solve problems encountered in operating the system, the cost/benefit analysis of the system, and the experience gained in managing web-based applications.
System and method for bullet tracking and shooter localization
Roberts, Randy S [Livermore, CA; Breitfeller, Eric F [Dublin, CA
2011-06-21
A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
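The predict/update cycle of the Kalman filter used for trajectory estimation can be sketched for a one-dimensional constant-velocity track. This is an illustrative textbook filter, not the patented image-processing pipeline:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.5):
    """One predict+update cycle of a constant-velocity Kalman filter.
    State x = [position, velocity]; z is a noisy position measurement."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity dynamics
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed a sequence of streak-derived position measurements, the filter's velocity estimate converges to the projectile's velocity and its covariance shrinks, which is what makes the subsequent least-squares source localization well posed.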
The AFIS tree growth model for updating annual forest inventories in Minnesota
Margaret R. Holdaway
2000-01-01
As the Forest Service moves towards annual inventories, states may use model predictions of growth to update unmeasured plots. A tree growth model (AFIS) based on the scaled Weibull function and using the average-adjusted model form is presented. Annual diameter growth for four species was modeled using undisturbed plots from Minnesota's Aspen-Birch and Northern...
Musser, Jonathan W.; Watson, Kara M.; Painter, Jaime A.; Gotvald, Anthony J.
2016-02-22
Heavy rainfall occurred across South Carolina during October 1–5, 2015, as a result of an upper atmospheric low-pressure system that funneled tropical moisture from Hurricane Joaquin into the State. The storm caused major flooding in the central and coastal parts of South Carolina. Almost 27 inches of rain fell near Mount Pleasant in Charleston County during this period. U.S. Geological Survey (USGS) streamgages recorded peaks of record at 17 locations, and 15 other locations had peaks that ranked in the top 5 for the period of record. During the October 2015 flood event, USGS personnel made about 140 streamflow measurements at 86 locations to verify, update, or extend existing rating curves (which are used to compute streamflow from monitored river stage). Immediately after the storm event, USGS personnel documented 602 high-water marks, noting the location and height of the water above land surface. Later in October, 50 additional high-water marks were documented near bridges for South Carolina Department of Transportation. Using a subset of these high-water marks, 20 flood-inundation maps of 12 communities were created. Digital datasets of the inundation area, modeling boundary, and water depth rasters are all available for download.
Soil erosion assessment - Mind the gap
NASA Astrophysics Data System (ADS)
Kim, Jongho; Ivanov, Valeriy Y.; Fatichi, Simone
2016-12-01
Accurate assessment of erosion rates remains an elusive problem because soil loss is strongly nonunique with respect to the main drivers. In addressing the mechanistic causes of erosion responses, we discriminate between macroscale effects of external factors - long studied and referred to as "geomorphic external variability", and microscale effects, introduced as "geomorphic internal variability." The latter source of erosion variations represents the knowledge gap, an overlooked but vital element of geomorphic response, significantly impacting the low predictability skill of deterministic models at field-catchment scales. This is corroborated with experiments using a comprehensive physical model that dynamically updates the soil mass and particle composition. As complete knowledge of microscale conditions for arbitrary location and time is infeasible, we propose that new predictive frameworks of soil erosion should embed stochastic components in deterministic assessments of external and internal types of geomorphic variability.
Agents, Bayes, and Climatic Risks - a modular modelling approach
NASA Astrophysics Data System (ADS)
Haas, A.; Jaeger, C.
2005-08-01
When insurance firms, energy companies, governments, NGOs, and other agents strive to manage climatic risks, it is by no means clear what the aggregate outcome should and will be. As a framework for investigating this subject, we present the LAGOM model family. It is based on modules depicting learning social agents. For managing climate risks, our agents use second-order probabilities and update them by means of a Bayesian mechanism while differing in priors and risk aversion. The interactions between these modules and the aggregate outcomes of their actions are implemented using further modules. The software system is implemented as a series of parallel processes using the CIAMn approach. It is possible to couple modules irrespective of the language they are written in, the operating system under which they are run, and the physical location of the machine.
Uncertainty aggregation and reduction in structure-material performance prediction
NASA Astrophysics Data System (ADS)
Hu, Zhen; Mahadevan, Sankaran; Ao, Dan
2018-02-01
An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
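The segment-wise idea of updating only where the model is validated can be sketched in scalar form. This is an illustrative stand-in, not the authors' method: a conjugate-normal Bayesian update is applied segment by segment, and a segment is skipped when a simple validation metric (here, distance of the segment mean from the current belief) flags the model as unreliable there.

```python
import numpy as np

def segmented_bayes_update(prior_mu, prior_var, segments, noise_var, validity_threshold):
    """Conjugate-normal Bayesian updating over observation segments; data from
    a segment are used only if the segment passes a crude validation check."""
    mu, var = prior_mu, prior_var
    n_used = 0
    for seg in segments:
        seg = np.asarray(seg, dtype=float)
        if abs(seg.mean() - mu) > validity_threshold:  # model judged invalid here: skip
            continue
        n_used += 1
        n = seg.size
        post_var = 1.0 / (1.0 / var + n / noise_var)   # conjugate normal update
        mu = post_var * (mu / var + seg.sum() / noise_var)
        var = post_var
    return mu, var, n_used
```

Segments whose data are far from anything the model can explain are excluded, so they cannot corrupt the posterior; the accepted segments still shrink the belief variance.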
NASA Astrophysics Data System (ADS)
Kilcommons, Liam M.; Redmon, Robert J.; Knipp, Delores J.
2017-08-01
We have developed a method for reprocessing the multidecadal, multispacecraft Defense Meteorological Satellite Program Special Sensor Magnetometer (DMSP SSM) data set and have applied it to 15 spacecraft years of data (DMSP Flight 16-18, 2010-2014). This Level-2 data set improves on other available SSM data sets with recalculated spacecraft locations and magnetic perturbations, artifact signal removal, representations of the observations in geomagnetic coordinates, and in situ auroral boundaries. Spacecraft locations have been recalculated using ground-tracking information. Magnetic perturbations (measured field minus modeled main field) are recomputed. The updated locations ensure the appropriate model field is used. We characterize and remove a slow-varying signal in the magnetic field measurements. This signal is a combination of ring current and measurement artifacts. A final artifact remains after processing: step discontinuities in the baseline caused by activation/deactivation of spacecraft electronics. Using coincident data from the DMSP precipitating electrons and ions instrument (SSJ4/5), we detect the in situ auroral boundaries with an improvement to the Redmon et al. (2010) algorithm. We embed the location of the aurora and an accompanying figure of merit in the Level-2 SSM data product. Finally, we demonstrate the potential of this new data set by estimating field-aligned current (FAC) density using the Minimum Variance Analysis technique. The FAC estimates are then expressed in dynamic auroral boundary coordinates using the SSJ-derived boundaries, demonstrating a dawn-dusk asymmetry in average FAC location relative to the equatorward edge of the aurora. The new SSM data set is now available in several public repositories.
Temporal parameters and time course of perceptual latency priming.
Scharlau, Ingrid; Neumann, Odmar
2003-06-01
Visual stimuli (primes) reduce the perceptual latency of a target appearing at the same location (perceptual latency priming, PLP). Three experiments assessed the time course of PLP by masked and, in Experiment 3, unmasked primes. Experiments 1 and 2 investigated the temporal parameters that determine the size of priming. Stimulus onset asynchrony was found to exert the main influence, accompanied by a small effect of prime duration. Experiment 3 used a large range of priming onset asynchronies. We suggest explaining PLP by the Asynchronous Updating Model, which relates it to the asynchrony of two central coding processes: preattentive coding of basic visual features, and attentional orienting as a prerequisite for perceptual judgments and conscious perception.
Forcing and variability of nonstationary rip currents
Long, Joseph W.; Özkan-Haller, H.T.
2016-01-01
Surface wave transformation and the resulting nearshore circulation along a section of coast with strong alongshore bathymetric gradients outside the surf zone are modeled for a consecutive 4 week time period. The modeled hydrodynamics are compared to in situ measurements of waves and currents collected during the Nearshore Canyon Experiment and indicate that for the entire range of observed conditions, the model performance is similar to other studies along this stretch of coast. Strong alongshore wave height gradients generate rip currents that are observed by remote sensing data and predicted qualitatively well by the numerical model. Previous studies at this site have used idealized scenarios to link the rip current locations to undulations in the offshore bathymetry but do not explain the dichotomy between permanent offshore bathymetric features and intermittent rip current development. Model results from the month-long simulation are used to track the formation and location of rip currents using hourly statistics, and results show that the direction of the incoming wave energy strongly controls whether rip currents form. In particular, most of the offshore wave spectra were bimodal, and we find that the ratio of energy contained in each mode dictates rip current development, while the alongshore rip current position is controlled by the incident wave period. Additionally, model simulations performed with and without updating the nearshore morphology yield no significant change in the accuracy of the predicted surf zone hydrodynamics, indicating that the large-scale offshore features (e.g., submarine canyon) predominantly control the nearshore wave-circulation system.
NRL/VOA Modifications to IONCAP as of 12 July 1988
1989-08-02
Fragmentary record (extraction residue from the report's front matter). The recoverable text indicates the report documents NRL/VOA modifications to IONCAP to make it suitable for wide-area coverage studies, to incorporate an updated noise model into IONCAP (with its interaction with IONANT and listings of four IONCAP subroutines supporting the updated noise model), to improve the accuracy of some calculations, and to correct a few...
Incremental Testing of the Community Multiscale Air Quality (CMAQ) Modeling System Version 4.7
This paper describes the scientific and structural updates to the latest release of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7 (v4.7) and points the reader to additional resources for further details. The model updates were evaluated relative to obse...
OUTER LOOP LANDFILL CASE STUDY
This presentation will describe the interim data resulting from a CRADA between USEPA and Waste Management, Inc. at the Outer Loop Landfill Bioreactor research project located in Louisville, KY. Recently updated data will be presented covering landfill solids, gas being collecte...
Screening methodology for needs of roadway lighting.
DOT National Transportation Integrated Search
2003-01-01
Screening methods of AASHTO and NCHRP that assess the local potential for fixed roadway lighting to decrease nighttime crashes have not been updated since the 1970s. The methods dilute the influence of important factors and are inadequate for locations...
77 FR 21621 - Meeting of Notification of Citizens Coinage Advisory Committee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-10
... coin legends, mottos, dates, symbols and devices. Interested Persons Should Call the CCAC HOTLINE at (202) 354-7502 for the Latest Update on Meeting Time and Room Location In accordance with 31 U.S.C...
78 FR 42111 - National Science Board; Sunshine Act Meetings; Notice
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-15
...'s badge. All visitors must report to the NSF visitor desk located in the lobby at the 9th and N... information. Meeting information and updates (time, place, subject matter or status of meeting) may be found...
Update on High-Resolution Geodetically Controlled LROC Polar Mosaics
NASA Astrophysics Data System (ADS)
Archinal, B.; Lee, E.; Weller, L.; Richie, J.; Edmundson, K.; Laura, J.; Robinson, M.; Speyerer, E.; Boyd, A.; Bowman-Cisneros, E.; Wagner, R.; Nefian, A.
2015-10-01
We describe progress on high-resolution (1 m/pixel) geodetically controlled LROC mosaics of the lunar poles, which can be used for locating illumination resources (for solar power or cold traps) or landing site and surface operations planning.
Enumeration and extension of non-equivalent deterministic update schedules in Boolean networks.
Palma, Eduardo; Salinas, Lilian; Aracena, Julio
2016-03-01
Boolean networks (BNs) are commonly used to model genetic regulatory networks (GRNs). Because the dynamical behavior is sensitive to changes in the updating scheme (the order in which the nodes of a network update their state values), it is increasingly common to use different updating rules in the modeling of GRNs to better capture an observed biological phenomenon and thus obtain more realistic models. In Aracena et al., equivalence classes of deterministic update schedules in BNs, which yield exactly the same dynamical behavior of the network, were defined according to a certain label function on the arcs of the interaction digraph defined for each scheme. Thus, the interaction digraphs so labeled (update digraphs) encode the non-equivalent schemes. We address the problem of enumerating all non-equivalent deterministic update schedules of a given BN. First, we show that it is an intractable problem in general. To solve it, we construct an algorithm that determines the set of update digraphs of a BN, using a divide-and-conquer methodology based on the structural characteristics of the interaction digraph. Next, for each update digraph we determine an associated scheme. The algorithm also works in the case where there is only partial knowledge about the relative order of the updating of the states of the nodes. We exhibit some examples of how the algorithm works on GRNs published in the literature. An executable file of the UpdateLabel algorithm, made in Java, and the files with the outputs of the algorithms used with the GRNs are available at: www.inf.udec.cl/∼lilian/UDE/ CONTACT: lilisalinas@udec.cl. Supplementary data are available at Bioinformatics online.
Ground Motion Prediction Model Using Artificial Neural Network
NASA Astrophysics Data System (ADS)
Dhanya, J.; Raghukanth, S. T. G.
2018-03-01
This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30), and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.
Integrated Hydrogeological Model of the General Separations Area, Vol. 2, Rev. 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
FLACH, GREGORYK.
1999-04-01
The 15 mi² General Separations Area (GSA) contains more than 35 RCRA and CERCLA waste units, and is the focus of numerous ongoing and anticipated contaminant migration and remedial alternatives studies. To meet the analysis needs of GSA remediation programs, a groundwater flow model of the area based on the FACT code was developed. The model is consistent with detailed characterization and monitoring data through 1996. Model preprocessing has been automated so that future updates and modifications can be performed quickly and efficiently. Most remedial action scenarios can be explicitly simulated, including vertical recirculation wells, vertical barriers, surface caps, pumping wells at arbitrary locations, specified drawdown within well casings (instead of flowrate), and wetland impacts of remedial actions. The model has a fine-scale vertical mesh and a heterogeneous conductivity field, and includes the vadose zone. Therefore, the model is well suited to support subsequent contaminant transport simulations. The model can provide a common framework for analyzing groundwater flow, contaminant migration, and remedial alternatives across Environmental Restoration programs within the GSA.
Single-Trial Event-Related Potential Correlates of Belief Updating
Murawski, Carsten; Bode, Stefan
2015-01-01
Belief updating—the process by which an agent alters an internal model of its environment—is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework. PMID:26473170
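How the two model-derived regressors might be computed per trial can be shown with a generic conjugate-Gaussian sketch (the study's actual Bayesian model is more elaborate; the names below are illustrative):

```python
import numpy as np

def trial_update(mu, var, obs, obs_var):
    """One trial of a Gaussian belief model. Returns the updated belief plus
    the two quantities used as single-trial regressors: belief-update size
    (related to P3 amplitude) and belief uncertainty (related to the SPN)."""
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)       # precision-weighted combination
    post_mu = post_var * (mu / var + obs / obs_var)
    update_size = abs(post_mu - mu)                     # regressor for the P3
    uncertainty = np.sqrt(post_var)                     # regressor for the SPN
    return post_mu, post_var, update_size, uncertainty
```

A surprising observation produces a larger belief update than an expected one, and each update shrinks the belief uncertainty, mirroring the trial-by-trial variables regressed against the ERP amplitudes.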
Overview and Evaluation of the Community Multiscale Air Quality (CMAQ) Modeling System Version 5.2
A new version of the Community Multiscale Air Quality (CMAQ) model, version 5.2 (CMAQv5.2), is currently being developed, with a planned release date in 2017. The new model includes numerous updates from the previous version of the model (CMAQv5.1). Specific updates include a new...
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement; saccades and smooth pursuit. Our proposed model is a non-linear SSM and implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
Application of firefly algorithm to the dynamic model updating problem
NASA Astrophysics Data System (ADS)
Shabbir, Faisal; Omenzetter, Piotr
2015-04-01
Model updating can be considered a branch of optimization problems in which calibration of the finite element (FE) model is undertaken by comparing the modal properties of the actual structure with those of the FE predictions. The attainment of a global solution in a multidimensional search space is a challenging problem. Nature-inspired algorithms have gained increasing attention over the past decade for solving such complex optimization problems. This study applies the novel Firefly Algorithm (FA), a global optimization search technique, to a dynamic model updating problem. To the authors' best knowledge, this is the first time FA has been applied to model updating. The working of FA is inspired by the flashing characteristics of fireflies. Each firefly represents a randomly generated solution which is assigned a brightness according to the value of the objective function. The physical structure under consideration is a full-scale cable-stayed pedestrian bridge with a composite bridge deck. Data from dynamic testing of the bridge were used to correlate and update the initial model by using FA. The algorithm aimed at minimizing the difference between the natural frequencies and mode shapes of the structure. The performance of the algorithm is analyzed in finding the optimal solution in a multidimensional search space. The paper concludes with an investigation of the efficacy of the algorithm in obtaining a reference finite element model which correctly represents the as-built original structure.
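A minimal generic Firefly Algorithm can be sketched as follows (standard FA with the usual beta0/gamma/alpha parameters, not the authors' implementation; in model updating the objective `f` would measure the misfit between measured and FE-predicted frequencies and mode shapes, here replaced by a toy function):

```python
import numpy as np

def firefly_minimize(f, bounds, n_fireflies=15, n_iters=100, beta0=1.0,
                     gamma=0.01, alpha=0.2, seed=0):
    """Minimize f over box bounds = (lo, hi). Each firefly is a candidate
    parameter vector; dimmer fireflies move toward brighter (lower-objective)
    ones with attractiveness decaying as exp(-gamma * r^2)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, (n_fireflies, lo.size))
    I = np.array([f(x) for x in X])          # "brightness" = objective value
    for it in range(n_iters):
        alpha_t = alpha * 0.97**it           # shrink the random step over time
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if I[j] < I[i]:              # j is brighter: i moves toward j
                    r2 = np.sum((X[i] - X[j])**2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] = X[i] + beta * (X[j] - X[i]) \
                           + alpha_t * rng.uniform(-0.5, 0.5, lo.size)
                    X[i] = np.clip(X[i], lo, hi)
                    I[i] = f(X[i])
    best = int(np.argmin(I))
    return X[best], I[best]
```

On a simple convex test objective the swarm contracts onto the minimum; for model updating, the returned vector would be the calibrated set of FE parameters.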
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi-PZT-ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predict the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients using not only admittance but also impedance data, is finally discussed.
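As a toy analogue of the measurement-driven updating described above, the sketch below calibrates a single constant of a one-degree-of-freedom surrogate so that its resonance matches a measured frequency by Newton iteration. The paper's FEM procedure updates many material coefficients against admittance curves at once; the model, masses, and frequencies here are illustrative assumptions.

```python
import math

def update_stiffness(f_measured, m=1.0, k0=1.0, iters=20):
    """Newton iteration on the residual r(k) = f_model(k) - f_measured for a
    1-DOF surrogate with resonance f_model = sqrt(k/m) / (2*pi), whose
    derivative is df/dk = 1 / (4*pi*sqrt(k*m))."""
    k = k0
    for _ in range(iters):
        r = math.sqrt(k / m) / (2.0 * math.pi) - f_measured
        if abs(r) < 1e-12:
            break
        k -= r * (4.0 * math.pi * math.sqrt(k * m))   # k -= r / (df/dk)
    return k

# Recover the stiffness that puts a 0.02 kg surrogate's resonance at 5 Hz.
k_updated = update_stiffness(f_measured=5.0, m=0.02)
```

The closed-form answer is k = (2*pi*f)^2 * m, which the iteration recovers; a full FEM update replaces this scalar residual with the misfit between measured and simulated admittance over a frequency band.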
NASA Astrophysics Data System (ADS)
Gantt, B.; Kelly, J. T.; Bash, J. O.
2015-11-01
Sea spray aerosols (SSAs) impact the particle mass concentration and gas-particle partitioning in coastal environments, with implications for human and ecosystem health. Model evaluations of SSA emissions have mainly focused on the global scale, but regional-scale evaluations are also important due to the localized impact of SSAs on atmospheric chemistry near the coast. In this study, SSA emissions in the Community Multiscale Air Quality (CMAQ) model were updated to enhance the fine-mode size distribution, include sea surface temperature (SST) dependency, and reduce surf-enhanced emissions. Predictions from the updated CMAQ model and those of the previous release version, CMAQv5.0.2, were evaluated using several coastal and national observational data sets in the continental US. The updated emissions generally reduced model underestimates of sodium, chloride, and nitrate surface concentrations for coastal sites in the Bay Regional Atmospheric Chemistry Experiment (BRACE) near Tampa, Florida. Including SST dependency in the SSA emission parameterization led to increased sodium concentrations in the southeastern US and decreased concentrations along parts of the Pacific coast and northeastern US. The influence of sodium on the gas-particle partitioning of nitrate resulted in higher nitrate particle concentrations in many coastal urban areas due to increased condensation of nitric acid in the updated simulations, potentially affecting the predicted nitrogen deposition in sensitive ecosystems. Application of the updated SSA emissions to the California Research at the Nexus of Air Quality and Climate Change (CalNex) study period resulted in a modest improvement in the predicted surface concentrations of sodium and nitrate at several central and southern California coastal sites. This update of SSA emissions enabled a more realistic simulation of the atmospheric chemistry in coastal environments where marine air mixes with urban pollution.
Wisconsin's forest statistics, 1987: an inventory update.
W. Brad Smith; Jerold T. Hahn
1989-01-01
The Wisconsin 1987 inventory update, derived by using tree growth models, reports 14.7 million acres of timberland, a decline of less than 1% since 1983. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.
Stabilizing Motifs in Autonomous Boolean Networks and the Yeast Cell Cycle Oscillator
NASA Astrophysics Data System (ADS)
Sevim, Volkan; Gong, Xinwei; Socolar, Joshua
2009-03-01
Synchronously updated Boolean networks are widely used to model gene regulation. Some properties of these model networks are known to be artifacts of the clocking in the update scheme. Autonomous updating is a less artificial scheme that allows one to introduce small timing perturbations and study stability of the attractors. We argue that the stabilization of a limit cycle in an autonomous Boolean network requires a combination of motifs, such as feed-forward loops and auto-repressive links, that can correct small fluctuations in the timing of switching events. A recently published model of the transcriptional cell-cycle oscillator in yeast contains the motifs necessary for stability under autonomous updating [1]. [1] D. A. Orlando, et al., Nature (London) 453(7197):944-947, 2008.
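The synchronous scheme whose clocking artifacts the abstract discusses can be sketched in a few lines: every node evaluates its rule on the current state, and all nodes switch at once on a common clock tick. The three-node negation ring below is an illustrative example, not the yeast cell-cycle network.

```python
def sync_step(state, rules):
    """One synchronous update: every rule reads the *current* state and
    all nodes switch simultaneously on the common clock tick."""
    return tuple(rule(state) for rule in rules)

# Illustrative 3-node ring in which each node negates its predecessor.
rules = [
    lambda s: not s[2],
    lambda s: not s[0],
    lambda s: not s[1],
]

state = (True, False, False)
trajectory = [state]
for _ in range(6):
    state = sync_step(state, rules)
    trajectory.append(state)
# The ring cycles through 6 distinct states and returns to the start:
# a period-6 limit cycle sustained by the synchronous clock.
```

Under autonomous updating each node would switch after its own (perturbable) delay rather than on the shared tick, which is exactly where such clock-dependent limit cycles can lose stability unless correcting motifs are present.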
Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, G. A.; Hiergesell, R. A.
2013-11-12
The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions, and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produced nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously, thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6).
Hence, it was concluded that the follow-on work using GoldSim to develop 1D equivalent models of the PORFLOW multi-dimensional models was justified. The comparison of GoldSim 1D equivalent models to PORFLOW multi-dimensional models was made at two locations in the model domains: at the unsaturated-saturated zone interface and at the 100 m point of compliance. PORFLOW model results from the 2008 PA were utilized to investigate the comparison. By making iterative adjustments to certain water flux terms in the GoldSim models it was possible to produce contaminant mass fluxes and water concentrations that were highly similar to the PORFLOW model results at the two locations where comparisons were made. Based on the ability of the GoldSim 1D trench models to produce mass flux and concentration curves that are sufficiently similar to the multi-dimensional PORFLOW models for all of the evaluated radionuclides and their progeny, it is concluded that the use of the GoldSim 1D equivalent Slit and Engineered Trench models for further probabilistic sensitivity and uncertainty analysis of ELLWF trench units is justified. A revision to the original report was undertaken to correct mislabeling on the y-axes of the compliance point concentration graphs, to modify the terminology used to define the "blended" source term case for the saturated zone to make it consistent with terminology used in the 2008 PA, and to make a more definitive statement regarding the justification of the use of the GoldSim 1D equivalent trench models for follow-on probabilistic sensitivity and uncertainty analysis.
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, as well as the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations of two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters.
The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
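Method (1) above, the simple change factor, can be sketched with the standard GEV return-level formula. The GEV parameter values below are purely illustrative, not fits to Pittsburgh stations or NA-CORDEX output.

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) annual-maximum fit:
    z_T = mu + (sigma / xi) * ((-log(1 - 1/T)) ** (-xi) - 1),  xi != 0."""
    y = -math.log(1.0 - 1.0 / T)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# Change-factor update of an observed 10-year return level: scale the
# station value by the ratio of future to historical RCM return levels.
z_obs      = gev_return_level(mu=30.0, sigma=8.0, xi=0.1, T=10)  # station fit
z_rcm_hist = gev_return_level(mu=25.0, sigma=7.0, xi=0.1, T=10)  # RCM, past
z_rcm_fut  = gev_return_level(mu=28.0, sigma=7.5, xi=0.1, T=10)  # RCM, future
z_updated  = z_obs * (z_rcm_fut / z_rcm_hist)
```

The change factor leaves the station fit untouched and borrows only the relative climate signal from the model pair, which is why it is the simplest of the four adjustment methods compared in the study.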
NASA Astrophysics Data System (ADS)
Rondeau-Genesse, G.; Trudel, M.; Leconte, R.
2014-12-01
Coupling C-Band synthetic aperture radar (SAR) data to a multilayer snow model is a step in better understanding the temporal evolution of the radar backscattering coefficient during snowmelt. The watershed used for this study is the Nechako River Basin, located in the Rocky Mountains of British Columbia (Canada). This basin has a snowpack of several meters in depth and part of its water is diverted to the Kemano hydropower system, managed by Rio Tinto Alcan. Eighteen RADARSAT-2 ScanSAR Wide archive images were acquired in VV/VH polarization for the winter of 2011-2012, under different snow conditions. They are interpreted along with CROCUS, a multilayer physically based snow model developed by Météo-France. This model discretizes the snowpack into 50 layers, which makes it possible to monitor various characteristics, such as liquid water content (LWC), throughout the season. CROCUS is used to model three specific locations of the Nechako River Basin. Results vary from one site to another, but in general there is good agreement between the modeled LWC of the first layer of the snowpack and the backscattering coefficient of the RADARSAT-2 images, with a coefficient of determination (R²) of 0.80 or more. The radar images themselves were processed using an updated version of Nagler's methodology, which consists of subtracting a reference image acquired in dry snow conditions from an image acquired in wet snow conditions, as wet snow can then be identified using a soft threshold centered around -3 dB. A second filter was used to differentiate dry snow from bare soil; it combines a VH/VV ratio threshold with an altitude criterion. The ensuing maps show good agreement with the MODIS snow-covered area, which is already obtained daily over the Nechako River Basin, but add information on the location of wet snow and are insensitive to cloud cover.
As a next step, the outputs of CROCUS will be used in Mätzler's Microwave Emission Model of Layered Snowpacks (MEMLS) to simulate the backscattering coefficient at different locations in the basin.
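The Nagler-style classification described above reduces, per pixel, to differencing two backscatter images in dB (equivalent to a ratio in linear units) and thresholding near -3 dB. The sketch below uses a hard threshold rather than the soft threshold mentioned in the abstract, and the tiny 2x2 images are made-up values.

```python
def classify_wet_snow(sigma_wet_db, sigma_dry_db, threshold_db=-3.0):
    """Pixelwise Nagler-style classification: difference a wet-season SAR
    backscatter image (dB) against a dry-snow reference image (dB) and
    flag pixels whose ratio drops below the threshold as wet snow."""
    return [[(w - d) < threshold_db for w, d in zip(rw, rd)]
            for rw, rd in zip(sigma_wet_db, sigma_dry_db)]

# Illustrative 2x2 backscatter images in dB (wet snow lowers backscatter).
dry = [[-8.0, -9.0], [-7.5, -8.5]]
wet = [[-13.0, -10.0], [-11.0, -12.5]]
mask = classify_wet_snow(wet, dry)
```

A production implementation would add the second filter from the abstract (VH/VV ratio plus altitude) to separate dry snow from bare soil before mapping.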
Missouri StreamStats—A water-resources web application
Ellis, Jarrett T.
2018-01-31
The U.S. Geological Survey (USGS) maintains and operates more than 8,200 continuous streamgages nationwide. Types of data that may be collected, computed, and stored for streamgages include streamgage height (water-surface elevation), streamflow, and water quality. The streamflow data allow scientists and engineers to calculate streamflow statistics at each streamgage location, such as the 1-percent annual exceedance probability flood (also known as the 100-year flood), the mean flow, and the 7-day, 10-year low flow, which are used by managers to make informed water-resource management decisions. Researchers, regulators, and managers also commonly need physical characteristics (basin characteristics) that describe the unique properties of a basin. Common uses for streamflow statistics and basin characteristics include hydraulic design, water-supply management, water-use appropriations, and flood-plain mapping for establishing flood-insurance rates and land-use zones. The USGS periodically publishes reports that update the values of basin characteristics and streamflow statistics at selected gaged locations (locations with streamgages), but these studies usually only update a subset of streamgages, making data retrieval difficult. Additionally, streamflow statistics and basin characteristics are most often needed at ungaged locations (locations without streamgages) for which published streamflow statistics and basin characteristics do not exist. Missouri StreamStats is a web-based geographic information system that was created by the USGS in cooperation with the Missouri Department of Natural Resources to provide users with access to an assortment of tools that are useful for water-resources planning and management.
StreamStats allows users to easily obtain the most recent published streamflow statistics and basin characteristics for streamgage locations and to automatically calculate selected basin characteristics and estimate streamflow statistics at ungaged locations.
NASA Astrophysics Data System (ADS)
Li, Y. P.; Elbern, H.; Lu, K. D.; Friese, E.; Kiendler-Scharr, A.; Mentel, Th. F.; Wang, X. S.; Wahner, A.; Zhang, Y. H.
2013-03-01
The formation of secondary organic aerosol (SOA) was simulated with the Secondary ORGanic Aerosol Model (SORGAM) by a classical gas-particle partitioning concept, using the two-product model approach, which is widely used in chemical transport models. In this study, we extensively updated SORGAM with three major modifications: firstly, we derived temperature dependence functions of the SOA yields for aromatics and biogenic VOCs, based on recent chamber studies, within a sophisticated mathematical optimization framework; secondly, we implemented the SOA formation pathways from photo-oxidation (OH-initiated) of isoprene; thirdly, we implemented the SOA formation channel from NO3-initiated oxidation of reactive biogenic hydrocarbons (isoprene and monoterpenes). The temperature dependence functions of the SOA yields were validated against available chamber experiments. Moreover, the whole updated SORGAM module was validated against ambient SOA observations, represented by the summed oxygenated organic aerosol (OOA) concentrations extracted from Aerosol Mass Spectrometer (AMS) measurements at a rural site near Rotterdam, the Netherlands, performed during the IMPACT campaign in May 2008. In this case, we embedded both the original and the updated SORGAM modules into the EURopean Air pollution and Dispersion-Inverse Model (EURAD-IM), which showed generally good agreement with the observed meteorological parameters and several secondary products such as O3, sulfate and nitrate. With the updated SORGAM module, the EURAD-IM model also captured the observed SOA concentrations reasonably well, especially during nighttime. In contrast, the EURAD-IM model before the update underestimated the observations by a factor of up to 5. The large improvement in the modeled SOA concentrations with updated SORGAM is attributed to the three modifications mentioned above.
Embedding the temperature dependence functions of the SOA yields, including the new pathways from isoprene photo-oxidation, and switching on the SOA formation from NO3-initiated biogenic VOC oxidation contributed to this enhancement by 10%, 22% and 47%, respectively. However, the EURAD-IM model with updated SORGAM still clearly underestimated the afternoon SOA observations, by up to a factor of two. More work is still required to fill the gap, such as improving the simulated OH concentrations under high-VOC and low-NOx conditions, including SOA formation from semi-volatile organic compounds, correctly representing aerosol aging and oligomerization, and accounting for the influence of anthropogenic SOA on biogenic SOA.
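The classical two-product gas-particle partitioning concept that SORGAM builds on can be written in a few lines (the Odum formulation). The yield parameters below are illustrative, not SORGAM's, and the temperature dependence added in the update is omitted here.

```python
def two_product_yield(M_o, alpha, K):
    """Odum two-product SOA yield:
        Y = M_o * sum(alpha_i * K_i / (1 + K_i * M_o))
    M_o  : absorbing organic aerosol mass (ug/m^3)
    alpha: mass-based stoichiometric yields of the two products
    K    : gas-particle partitioning coefficients (m^3/ug)."""
    return M_o * sum(a * k / (1.0 + k * M_o) for a, k in zip(alpha, K))

# Illustrative parameters for a single precursor at low and high aerosol load.
Y_low  = two_product_yield(M_o=1.0,  alpha=(0.07, 0.3), K=(0.05, 0.002))
Y_high = two_product_yield(M_o=50.0, alpha=(0.07, 0.3), K=(0.05, 0.002))
```

The yield grows with the absorbing mass M_o, which is the essential nonlinearity of the two-product approach; the update described above makes alpha and K functions of temperature as well.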
Jason C. Cross; Eric C. Turnblom; Gregory J. Ettl
2013-01-01
Biomass residue produced by timber harvest operations is estimated for the Olympic and Kitsap Peninsulas, Washington. Scattered residues were sampled in 53 harvest units and piled residues were completely enumerated in 55 harvest units. Production is based on 2008 and 2009 data and is stratified by forest location, ownership type, harvest intensity, and harvest method...
Crucial role of strategy updating for coexistence of strategies in interaction networks.
Zhang, Jianlei; Zhang, Chunyan; Cao, Ming; Weissing, Franz J
2015-04-01
Network models are useful tools for studying the dynamics of social interactions in a structured population. After a round of interactions with the players in their local neighborhood, players update their strategy based on the comparison of their own payoff with the payoff of one of their neighbors. Here we show that the assumptions made on strategy updating are of crucial importance for the strategy dynamics. In the first step, we demonstrate that seemingly small deviations from the standard assumptions on updating have major implications for the evolutionary outcome of two cooperation games: cooperation can more easily persist in a Prisoner's Dilemma game, while it can go more easily extinct in a Snowdrift game. To explain these outcomes, we develop a general model for the updating of states in a network that allows us to derive conditions for the steady-state coexistence of states (or strategies). The analysis reveals that coexistence crucially depends on the number of agents consulted for updating. We conclude that updating rules are as important for evolution on a network as network structure and the nature of the interaction.
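One standard updating rule of the kind examined above is pairwise comparison with a Fermi adoption probability, in which an agent consults a single random neighbor. The ring network, payoffs, and selection intensity below are illustrative; the paper's key finding is that outcomes change with the number of neighbors consulted.

```python
import math
import random

def imitation_update(payoffs, neighbors, i, rng, beta=1.0):
    """Agent i consults ONE random neighbor j and adopts j's strategy with
    the Fermi probability p = 1 / (1 + exp(-beta * (payoff_j - payoff_i)));
    returns the index whose strategy agent i ends up holding."""
    j = rng.choice(neighbors[i])
    p = 1.0 / (1.0 + math.exp(-beta * (payoffs[j] - payoffs[i])))
    return j if rng.random() < p else i

# A ring of four agents; agent 1 earns far more than its neighbors, so
# agent 0 should usually copy agent 1 whenever agent 1 is consulted.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
payoffs = [0.0, 5.0, 0.0, 0.0]
rng = random.Random(0)
adopted = [imitation_update(payoffs, neighbors, 0, rng) for _ in range(1000)]
frac_copy_rich = sum(a == 1 for a in adopted) / 1000
```

Because agent 0 consults its high-payoff neighbor only half the time, it copies that neighbor's strategy on roughly half of the updates; a rule consulting all neighbors at once would behave differently, which is the sensitivity the paper analyzes.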
NASA Astrophysics Data System (ADS)
Jathar, Shantanu H.; Woody, Matthew; Pye, Havala O. T.; Baker, Kirk R.; Robinson, Allen L.
2017-03-01
Gasoline- and diesel-fueled engines are ubiquitous sources of air pollution in urban environments. They emit both primary particulate matter and precursor gases that react to form secondary particulate matter in the atmosphere. In this work, we updated the organic aerosol module and organic emissions inventory of a three-dimensional chemical transport model, the Community Multiscale Air Quality Model (CMAQ), using recent, experimentally derived inputs and parameterizations for mobile sources. The updated model included a revised volatile organic compound (VOC) speciation for mobile sources and secondary organic aerosol (SOA) formation from unspeciated intermediate volatility organic compounds (IVOCs). The updated model was used to simulate air quality in southern California during May and June 2010, when the California Research at the Nexus of Air Quality and Climate Change (CalNex) study was conducted. Compared to the Traditional version of CMAQ, which is commonly used for regulatory applications, the updated model did not significantly alter the predicted organic aerosol (OA) mass concentrations but did substantially improve predictions of OA sources and composition (e.g., the POA-SOA split), as well as ambient IVOC concentrations. The updated model, despite substantial differences in emissions and chemistry, performed similarly to a recently released research version of CMAQ (Woody et al., 2016) that did not include the updated VOC and IVOC emissions and SOA data. Mobile sources were predicted to contribute 30-40 % of the OA in southern California (half of which was SOA), making mobile sources the single largest source contributor to OA in southern California. The remainder of the OA was attributed to non-mobile anthropogenic sources (e.g., cooking, biomass burning), with biogenic sources contributing less than 5 % of the total OA.
Gasoline sources were predicted to contribute about 13 times more OA than diesel sources; this difference was driven by differences in SOA production. Model predictions highlighted the need to better constrain multi-generational oxidation reactions in chemical transport models.
Help Me Please!: Designing and Developing Application for Emergencies
NASA Astrophysics Data System (ADS)
Hong, Ng Ken; Hafit, Hanayanti; Wahid, Norfaradilla; Kasim, Shahreen; Yusof, Munirah Mohd
2017-08-01
Help Me Please! is an Android emergency-button application designed to transmit emergency messages with real-time information to target receivers. The purpose of developing this application is to help people notify others of emergency circumstances via Short Message Service (SMS) on the Android platform. The application obtains the current location from the Global Positioning System (GPS) and the current time from the mobile device, and sends this information to the receivers when the user presses the emergency button. Simultaneously, the application keeps sending emergency alerts to the receivers and updating the database at a time interval set by the user, until the user stops the function. The Object-Oriented Software Development model was employed to guide the development of this application, using Java and Android Studio. In conclusion, this application plays an important role in the rescue process when emergency circumstances occur: rescue becomes more effective when others are notified of the emergency and the user's current location at the earliest opportunity.
Minnesota's forest statistics, 1987: an inventory update.
Jerold T. Hahn; W. Brad Smith
1987-01-01
The Minnesota 1987 inventory update, derived by using tree growth models, reports 13.5 million acres of timberland, a decline of less than 1% since 1977. This bulletin presents findings from the inventory update in tables detailing timberland area, volume, and biomass.
Expectancy Learning from Probabilistic Input by Infants
Romberg, Alexa R.; Saffran, Jenny R.
2013-01-01
Across the first few years of life, infants readily extract many kinds of regularities from their environment, and this ability is thought to be central to development in a number of domains. Numerous studies have documented infants' ability to recognize deterministic sequential patterns. However, little is known about the processes infants use to build and update representations of structure in time, and how infants represent patterns that are not completely predictable. The present study investigated how infants' expectations for a simple structure develop over time, and how infants update their representations with new information. We measured 12-month-old infants' anticipatory eye movements to targets that appeared in one of two possible locations. During the initial phase of the experiment, infants either saw targets that appeared consistently in the same location (Deterministic condition) or probabilistically in either location, with one side more frequent than the other (Probabilistic condition). After this initial divergent experience, both groups saw the same sequence of trials for the rest of the experiment. The results show that infants readily learn from both deterministic and probabilistic input, with infants in both conditions reliably predicting the most likely target location by the end of the experiment. Local context had a large influence on behavior: infants adjusted their predictions to reflect changes in the target location on the previous trial. This flexibility was particularly evident in infants with more variable prior experience (the Probabilistic condition). The results provide some of the first data showing how infants learn in real time. PMID:23439947
DOSE ASSESSMENT OF THE FINAL INVENTORIES IN CENTER SLIT TRENCHES ONE THROUGH FIVE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collard, L.; Hamm, L.; Smith, F.
2011-05-02
In response to a request from Solid Waste Management (SWM), this study evaluates the performance of waste disposed in Slit Trenches 1-5 by calculating exposure doses and concentrations. As of 8/19/2010, Slit Trenches 1-5 have been filled and are closed to future waste disposal in support of an ARRA-funded interim operational cover project. Slit Trenches 6 and 7 are currently in operation and are not addressed within this analysis. Their current inventory limits are based on the 2008 SA and are not being impacted by this study. This analysis considers the location and the timing of waste disposal in Slit Trenches 1-5 throughout their operational life. In addition, the following improvements to the modeling approach have been incorporated into this analysis: (1) Final waste inventories from WITS are used for the base case analysis, where variance in the reported final disposal inventories is addressed through a sensitivity analysis; (2) Updated K_d values are used; (3) Area percentages of non-crushable containers are used in the analysis to determine expected infiltration flows for cases that consider collapse of these containers; (4) An updated representation of ETF carbon column vessels disposed in SLIT3-Unit F is used. Preliminary analyses indicated a problem meeting the groundwater beta-gamma dose limit because of high H-3 and I-129 release from the ETF vessels. The updated model uses results from a recent structural analysis of the ETF vessels indicating that water does not penetrate the vessels for about 130 years and that the vessels remain structurally intact throughout the 1130-year period of assessment; and (5) Operational covers are included with revised installation dates and sets of Slit Trenches that have a common cover. With the exception of the modeling enhancements noted above, the analysis follows the same methodology used in the 2008 PA (WSRC, 2008) and the 2008 SA (Collard and Hamm, 2008).
Infiltration flows through the vadose zone are identical to the flows used in the 2008 PA, except for flows during the operational cover time period. The physical (i.e., non-geochemical) models of the vadose zone and aquifer are identical in most cases to the models used in the 2008 PA. However, the 2008 PA assumed a uniform distribution of waste within each Slit Trench (WITS Location) and assumed that the entire inventory of each trench was disposed of at the time the first Slit Trench was opened. The current analysis considers individual trench excavations (i.e., segments) and groups of segments (i.e., Inventory Groups, also known as WITS Units) within Slit Trenches. Waste disposal is assumed to be spatially uniform in each Inventory Group and is distributed in time increments of six months or less between the time the Inventory Group was opened and closed.
ERIC Educational Resources Information Center
College Store Journal, 1979
1979-01-01
Topics discussed by the NACS Store Planning/Renovation Committees in this updated version of the college store renovation manual include: short- and long-range planning, financial considerations, professional planning assistance, the store's image and business character, location considerations, building requirements, space requirements, fixtures,…
First year update on green infrastructure monitoring in Camden, NJ
The Camden County Municipal Utilities Authority (CCMUA) installed green infrastructure Stormwater Control Measures (SCMs) at multiple locations around the city of Camden, NJ. The SCMs include raised downspout planter boxes, rain gardens, and cisterns. The cisterns capture water ...
ERIC Educational Resources Information Center
Schiffbauer, Pam
2000-01-01
School buildings ideally would have few exterior access points, no isolated hallways, and sunlit classrooms. A safety checklist recommends locating offices near main doors, monitoring hallway traffic, enhancing communications, updating crisis-management plans, teaching coping skills, standardizing dismissal policies, and ensuring legal compliance…
Louisiana Airport System Plan.
DOT National Transportation Integrated Search
1992-10-01
This report is a non-technical summary of the update to the Louisiana Airport System Plan. The system plan identifies the location, service level, and role of the 81 airports included in the plan and the costs to develop individual airports and the a...
Particle Filtering Methods for Incorporating Intelligence Updates
2017-03-01
methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non
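The resampling step alluded to above can be sketched as multinomial resampling with replacement, which indeed can never revive a zero-weighted particle: an intelligence update that zeroes a particle's weight prunes that hypothesis permanently. The particles and weights below are made up for illustration.

```python
import random

def resample(particles, weights, rng):
    """Multinomial resampling with replacement via inverse-CDF draws; a
    particle with zero weight can never be selected, so zeroing a weight
    (e.g., after an intelligence update) removes that hypothesis for good."""
    total = sum(weights)
    out = []
    for _ in particles:
        u = rng.random() * total
        c, i = 0.0, 0
        # Walk the cumulative weights until they cover the draw u.
        while c + weights[i] < u:
            c += weights[i]
            i += 1
        out.append(particles[i])
    return out

# Four candidate target locations; an intel update rules out location 'D'.
particles = ['A', 'B', 'C', 'D']
weights = [0.5, 0.3, 0.2, 0.0]
new_particles = resample(particles, weights, random.Random(42))
```

This is the generic particle-filter step, not the report's specific tracking model; a full filter would alternate this resampling with a motion-model prediction and a likelihood-based reweighting.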
"Updates to Model Algorithms & Inputs for the Biogenic Emissions Inventory System (BEIS) Model"
We have developed new canopy emission algorithms and land use data for BEIS. Simulations with BEIS v3.4 and these updates in CMAQ v5.0.2 are compared to the Model of Emissions of Gases and Aerosols from Nature (MEGAN) and evaluated against observatio...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-05
... projections models, as well as changes to future vehicle mix assumptions, that influence the emission... methodology that may occur in the future such as updated socioeconomic data, new models, and other factors... updated mobile emissions model, the Motor Vehicle Emissions Simulator (also known as MOVES2010a), and to...
Changes to online control and eye-hand coordination with healthy ageing.
O'Rielly, Jessica L; Ma-Wyatt, Anna
2018-06-01
Goal directed movements are typically accompanied by a saccade to the target location. Online control plays an important part in correction of a reach, especially if the target or goal of the reach moves during the reach. While there are notable changes to visual processing and motor control with healthy ageing, there is limited evidence about how eye-hand coordination during online updating changes with healthy ageing. We sought to quantify differences between older and younger people for eye-hand coordination during online updating. Participants completed a double step reaching task implemented under time pressure. The target perturbation could occur 200, 400 and 600 ms into a reach. We measured eye position and hand position throughout the trials to investigate changes to saccade latency, movement latency, movement time, reach characteristics and eye-hand latency and accuracy. Both groups were able to update their reach in response to a target perturbation that occurred at 200 or 400 ms into the reach. All participants demonstrated incomplete online updating for the 600 ms perturbation time. Saccade latencies, measured from the first target presentation, were generally longer for older participants. Older participants had significantly increased movement times but there was no significant difference between groups for touch accuracy. We speculate that the longer movement times enable the use of new visual information about the target location for online updating towards the end of the movement. Interestingly, older participants also produced a greater proportion of secondary saccades within the target perturbation condition and had generally shorter eye-hand latencies. This is perhaps a compensatory mechanism as there was no significant group effect on final saccade accuracy. Overall, the pattern of results suggests that online control of movements may be qualitatively different in older participants. Crown Copyright © 2018. Published by Elsevier B.V. 
Seasonal and Surface Hydrologic Loading Signals at GPS Stations Processed by the GAGE Facility
NASA Astrophysics Data System (ADS)
Puskas, C. M.; Meertens, C. M.; Phillips, D.
2017-12-01
UNAVCO is now producing hydrologic displacement model time series at GPS station coordinates in the Geodesy Advancing Geosciences and EarthScope (GAGE) Facility, including the Plate Boundary Observatory (PBO). The surface loads are obtained from global and national land data assimilation systems (GLDAS and NLDAS, respectively) land surface models produced by the Goddard Earth Sciences Data and Information Services Center (GES DISC). The land surface models are available as monthly files of environmental parameters documenting water, pressure, temperature, and other measures of mass/energy transfer on a grid at the Earth's surface. Grids are 1° for the global GLDAS models and 0.125° for the NLDAS models in the conterminous US. UNAVCO extracts the soil moisture, snowpack, and water-stored-in-vegetation parameters and calculates displacements in an elastic half-space at selected points, i.e., GPS station locations. UNAVCO has recently upgraded its hydrologic data products from GLDAS version 1 to version 2 and added NLDAS-based models, and the new data products are now available from the UNAVCO ftp server (ftp://data-out.unavco.org/pub/products/hydro) and will soon be available through web services. The GLDAS v2 models supersede those based on v1, which will no longer be updated. UNAVCO updates its hydrologic products on a quarterly basis. Seasonal signals in the GAGE GPS position time series have amplitudes on the order of several millimeters, which vary across the PBO network depending on local climate and geology. The signals are thought to be a combination of elastic displacement from surface loading and poroelastic displacement from groundwater depletion and recharge. We present a description of the hydrologic displacement modeling and provide examples of loading and resulting displacement. The GLDAS and NLDAS models are compared with each other and with GPS position time series at selected stations in different geographic regions.
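The abstract states that displacements are computed in an elastic half-space at GPS station locations. As a rough illustration only (the production products sum Green's-function contributions over the whole load grid; the load and elastic constants below are made-up round numbers), the classical Boussinesq solution gives the vertical surface displacement at distance r from a point load F:

```python
import math

def boussinesq_uz(force_n, r_m, E=50e9, nu=0.25):
    """Vertical surface displacement (m) at distance r_m from a point
    load force_n on a homogeneous elastic half-space (Boussinesq)."""
    return force_n * (1.0 - nu ** 2) / (math.pi * E * r_m)

# ~10 cm of water over a 1 km^2 grid cell weighs ~1e8 kg (~9.81e8 N);
# displacement 50 km away, in metres:
uz = boussinesq_uz(9.81e8, 50e3)
```

A single loaded cell produces sub-micron displacement at this distance; the several-millimeter seasonal signals mentioned above arise from summing such contributions over an entire regional load grid.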
Separate encoding of model-based and model-free valuations in the human brain.
Beierholm, Ulrik R; Anen, Cedric; Quartz, Steven; Bossaerts, Peter
2011-10-01
Behavioral studies have long shown that humans solve problems in two ways, one intuitive and fast (System 1, model-free), and the other reflective and slow (System 2, model-based). The neurobiological basis of dual process problem solving remains unknown due to challenges of separating activation in concurrent systems. We present a novel neuroeconomic task that predicts distinct subjective valuation and updating signals corresponding to these two systems. We found two concurrent value signals in human prefrontal cortex: a System 1 model-free reinforcement signal and a System 2 model-based Bayesian signal. We also found a System 1 updating signal in striatal areas and a System 2 updating signal in lateral prefrontal cortex. Further, signals in prefrontal cortex preceded choices that are optimal according to either updating principle, while signals in anterior cingulate cortex and globus pallidus preceded deviations from optimal choice for reinforcement learning. These deviations tended to occur when uncertainty regarding optimal values was highest, suggesting that disagreement between dual systems is mediated by uncertainty rather than conflict, confirming recent theoretical proposals. Copyright © 2011 Elsevier Inc. All rights reserved.
Update to core reporting practices in structural equation modeling.
Schreiber, James B
This paper is a technical update to "Core Reporting Practices in Structural Equation Modeling." As such, the content covered in this paper includes sample size, missing data, specification and identification of models, estimation method choices, fit and residual concerns, nested, alternative, and equivalent models, and unique issues within the SEM family of techniques. Copyright © 2016 Elsevier Inc. All rights reserved.
Summary Analysis: Hanford Site Composite Analysis Update
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nichols, W. E.; Lehman, L. L.
2017-06-05
The Hanford Site’s currently maintained Composite Analysis, originally completed in 1998, requires an update. A previous update effort was undertaken by the U.S. Department of Energy (DOE) in 2001-2005, but was ended before completion to allow the Tank Closure & Waste Management Environmental Impact Statement (TC&WM EIS) (DOE/EIS-0391) to be prepared without potential for conflicting sitewide models. This EIS was issued in 2012, and the deferral was ended with guidance in memorandum “Modeling to Support Regulatory Decision Making at Hanford” (Williams, 2012) provided with the aim of ensuring subsequent modeling is consistent with the EIS.
Efficient Storage Scheme of Covariance Matrix during Inverse Modeling
NASA Astrophysics Data System (ADS)
Mao, D.; Yeh, T. J.
2013-12-01
During stochastic inverse modeling, the covariance matrix of geostatistics-based methods carries the information about the geologic structure. Its update during iterations reflects the decrease of uncertainty as observed data are incorporated. For large-scale problems, its storage and update cost too much memory and computation. In this study, we propose a new efficient scheme for storage and update. The Compressed Sparse Column (CSC) format is used to store the covariance matrix, and users can choose how much data to store based on correlation scales, since data beyond several correlation scales are usually not very informative for inverse modeling. After every iteration, only the diagonal terms of the covariance matrix are updated. The off-diagonal terms are calculated and updated based on shortened correlation scales with a pre-assigned exponential model. The correlation scales are shortened by a coefficient, e.g. 0.95, every iteration to reflect the decrease of uncertainty. There is no universal coefficient for all problems, and users are encouraged to try several values. This new scheme is first tested with 1D examples, and the estimated results and uncertainty are compared with those of the traditional full-storage method. Finally, a large-scale numerical model is used to validate the new scheme.
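The storage-and-update idea can be sketched on a 1D grid, assuming a unit-variance exponential model; the function names, taper cutoff, and shrink coefficient are illustrative, not the authors' code:

```python
import numpy as np
from scipy.sparse import csc_matrix

def build_cov(x, L, n_scales=3.0):
    """Unit-variance exponential covariance exp(-d/L) in CSC format,
    keeping only entries within n_scales correlation lengths."""
    rows, cols, vals = [], [], []
    n = len(x)
    for j in range(n):
        for i in range(n):
            d = abs(x[i] - x[j])
            if d <= n_scales * L:
                rows.append(i)
                cols.append(j)
                vals.append(np.exp(-d / L))
    return csc_matrix((vals, (rows, cols)), shape=(n, n))

def update_cov(x, diag_new, L, shrink=0.95):
    """Keep the updated diagonal (variances); rebuild the off-diagonals
    from the exponential model with the correlation scale shortened."""
    L_new = shrink * L
    taper = build_cov(x, L_new)                    # new sparsity pattern
    scale = np.sqrt(np.outer(diag_new, diag_new))  # rescale by new std devs
    return csc_matrix(taper.multiply(scale)), L_new

x = np.arange(20.0)                                # 1D grid of locations
C1, L1 = update_cov(x, diag_new=np.full(20, 0.5), L=2.0)
```

Only the diagonal carries data-driven information between iterations; the off-diagonals are reconstructed analytically, which is what makes the scheme cheap in memory.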
1999 update of the Arizona highway cost allocation study
DOT National Transportation Integrated Search
1999-08-01
The purpose of this report was to update the Arizona highway cost allocation study and to evaluate the alternative of using the new FHWA cost allocation model as a replacement. The update revealed that the repeal of Arizona's weight-distance tax has l...
Aircraft engine sensor fault diagnostics using an on-line OBEM update method.
Liu, Xiaofeng; Xue, Naiyu; Yuan, Ye
2017-01-01
This paper proposed a method to update the on-line health reference baseline of the On-Board Engine Model (OBEM) to maintain the effectiveness of an in-flight aircraft sensor Fault Detection and Isolation (FDI) system, in which a Hybrid Kalman Filter (HKF) was incorporated. Generated from a rapid in-flight engine degradation, a large health condition mismatch between the engine and the OBEM can corrupt the performance of the FDI. Therefore, it is necessary to update the OBEM online when a rapid degradation occurs, but the FDI system will lose estimation accuracy if the estimation and update are running simultaneously. To solve this problem, the health reference baseline for a nonlinear OBEM was updated using the proposed channel controller method. Simulations based on the turbojet engine Linear-Parameter Varying (LPV) model demonstrated the effectiveness of the proposed FDI system in the presence of substantial degradation, and the channel controller can ensure that the update process finishes without interference from a single sensor fault. PMID:28182692
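The paper's HKF and channel-controller machinery is specific, but the residual check common to Kalman-filter-based sensor FDI can be sketched generically: flag a sensor whose innovation exceeds a gate scaled by the predicted innovation covariance. This is a textbook sketch, not the authors' method:

```python
import numpy as np

def innovation_fault_test(x_pred, P_pred, z, H, R, gate=3.0):
    """Flag sensors whose measurement residual (innovation) exceeds
    `gate` standard deviations of the predicted innovation
    covariance S = H P H^T + R."""
    S = H @ P_pred @ H.T + R
    nu = z - H @ x_pred                       # innovation (residual)
    return np.abs(nu) > gate * np.sqrt(np.diag(S))

# toy example: two redundant sensors observing one scalar state
H = np.array([[1.0], [1.0]])
P = np.array([[0.01]])
R = np.diag([0.04, 0.04])
flags = innovation_fault_test(np.array([10.0]), P, np.array([10.1, 12.5]), H, R)
```

The second sensor's 2.5-unit residual far exceeds the 3-sigma gate and is flagged, while the first passes; the paper's contribution is keeping this test reliable while the OBEM baseline itself is being updated.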
NASA Astrophysics Data System (ADS)
Kuznetsova, M. M.; Liu, Y. H.; Rastaetter, L.; Pembroke, A. D.; Chen, L. J.; Hesse, M.; Glocer, A.; Komar, C. M.; Dorelli, J.; Roytershteyn, V.
2016-12-01
The presentation will provide an overview of new tools, services, and models implemented at the Community Coordinated Modeling Center (CCMC) to facilitate MMS dayside results analysis. We will provide updates on the implementation of Particle-in-Cell (PIC) simulations at the CCMC and opportunities for on-line visualization and analysis of results of PIC simulations of asymmetric magnetic reconnection for different guide fields and boundary conditions. Fields, plasma parameters, and particle distribution moments, as well as particle distribution functions calculated in selected regions of the vicinity of reconnection sites, can be analyzed through the web-based interactive visualization system. In addition, there are options to request distribution functions in user-selected regions of interest and to fly through simulated magnetic reconnection configurations and a map of distributions to facilitate comparisons with observations. A broad collection of global magnetosphere models hosted at the CCMC provides the opportunity to put MMS observations and local PIC simulations into global context. We recently implemented the RECON-X post-processing tool (Glocer et al., 2016), which allows users to determine the location of the separator surface around closed field lines and between open field lines and solar wind field lines. The tool also finds the separatrix line where the two surfaces touch, and the positions of magnetic nulls. The surfaces and the separatrix line can be visualized relative to satellite positions in the dayside magnetosphere using an interactive HTML-5 visualization for each time step processed. To validate global magnetosphere models' capability to simulate the locations of dayside magnetosphere boundaries, we will analyze the proximity of MMS to simulated separatrix locations for a set of MMS diffusion-region crossing events.
Wave-equation migration velocity inversion using passive seismic sources
NASA Astrophysics Data System (ADS)
Witten, B.; Shragge, J. C.
2015-12-01
Seismic monitoring at injection sites (e.g., CO2 sequestration, waste water disposal, hydraulic fracturing) has become an increasingly important tool for hazard identification and avoidance. The information obtained from this data is often limited to seismic event properties (e.g., location, approximate time, moment tensor), the accuracy of which greatly depends on the estimated elastic velocity models. However, creating accurate velocity models from passive array data remains a challenging problem. Common techniques rely on picking arrivals or matching waveforms, requiring high signal-to-noise data that is often not available for the small-magnitude earthquakes observed over injection sites. We present a new method for obtaining elastic velocity information from earthquakes through full-wavefield wave-equation imaging and adjoint-state tomography. The technique exploits the fact that the P- and S-wave arrivals originate at the same time and location in the subsurface. We generate image volumes by back-propagating P- and S-wave data through initial Earth models and then applying a correlation-based extended-imaging condition. Energy focusing away from zero lag in the extended image volume is used as a (penalized) residual in an adjoint-state tomography scheme to update the P- and S-wave velocity models. We use an acousto-elastic approximation to greatly reduce the computational cost. Because the method requires neither an initial source location or origin time estimate nor picking of arrivals, it is suitable for low signal-to-noise datasets, such as microseismic data. Synthetic results show that with a realistic distribution of microseismic sources, P- and S-velocity perturbations can be recovered. Although demonstrated at an oil and gas reservoir scale, the technique can be applied to problems of all scales from geologic core samples to global seismology.
Forecasting the (un)productivity of the 2014 M 6.0 South Napa aftershock sequence
Llenos, Andrea L.; Michael, Andrew J.
2017-01-01
The 24 August 2014 Mw 6.0 South Napa mainshock produced fewer aftershocks than expected for a California earthquake of its magnitude. In the first 4.5 days, only 59 M≥1.8 aftershocks occurred, the largest of which was an M 3.9 that happened a little over two days after the mainshock. We investigate the aftershock productivity of the South Napa sequence and compare it with other M≥5.5 California strike‐slip mainshock–aftershock sequences. While the productivity of the South Napa sequence is among the lowest, northern California mainshocks generally have fewer aftershocks than mainshocks further south, although the productivities vary widely in both regions. An epidemic‐type aftershock sequence (ETAS) model (Ogata, 1988) fit to Napa seismicity from 1980 to 23 August 2014 fits the sequence well and suggests that low‐productivity sequences are typical of this area. Utilizing regional variations in productivity could improve operational earthquake forecasting (OEF) by improving the model used immediately after the mainshock. We show this by comparing the daily rate of M≥2 aftershocks to forecasts made with the generic California model (Reasenberg and Jones, 1989; hereafter, RJ89), RJ89 models with productivity updated daily, a generic California ETAS model, an ETAS model based on premainshock seismicity, and ETAS models updated daily following the mainshock. RJ89 models for which only the productivity is updated provide better forecasts than the generic RJ89 California model, and the Napa‐specific ETAS models forecast the aftershock rates more accurately than either generic model. Therefore, forecasts that use localized initial parameters and that rapidly update the productivity may be better for OEF than using a generic model and/or updating all parameters.
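The RJ89 forecasts discussed above rest on the Reasenberg-Jones rate law, a Gutenberg-Richter magnitude term combined with Omori-Utsu temporal decay. A sketch using commonly quoted generic-California parameter values (treat them as illustrative; the paper's Napa-specific, daily-updated parameters differ):

```python
def rj89_rate(t_days, m_main, m_min, a=-1.67, b=0.91, p=1.08, c=0.05):
    """Reasenberg-Jones aftershock rate: expected number per day of
    aftershocks with magnitude >= m_min at t_days after a mainshock
    of magnitude m_main."""
    return 10.0 ** (a + b * (m_main - m_min)) / (t_days + c) ** p

# daily rate of M>=2 aftershocks one day after an M 6.0 mainshock
r = rj89_rate(1.0, 6.0, 2.0)
```

Updating only the productivity term (the 10^a prefactor) from the observed early aftershock counts is exactly the "productivity updated daily" variant the abstract compares against the fully generic model.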
Xu, Xiangtao; Medvigy, David; Powers, Jennifer S; Becknell, Justin M; Guan, Kaiyu
2016-10-01
We assessed whether diversity in plant hydraulic traits can explain the observed diversity in plant responses to water stress in seasonally dry tropical forests (SDTFs). The Ecosystem Demography model 2 (ED2) was updated with a trait-driven mechanistic plant hydraulic module, as well as novel drought-phenology and plant water stress schemes. Four plant functional types were parameterized on the basis of meta-analysis of plant hydraulic traits. Simulations from both the original and the updated ED2 were evaluated against 5 yr of field data from a Costa Rican SDTF site and remote-sensing data over Central America. The updated model generated realistic plant hydraulic dynamics, such as leaf water potential and stem sap flow. Compared with the original ED2, predictions from our novel trait-driven model matched better with observed growth, phenology and their variations among functional groups. Most notably, the original ED2 produced unrealistically small leaf area index (LAI) and underestimated cumulative leaf litter. Both of these biases were corrected by the updated model. The updated model was also better able to simulate spatial patterns of LAI dynamics in Central America. Plant hydraulic traits are intercorrelated in SDTFs. Mechanistic incorporation of plant hydraulic traits is necessary for the simulation of spatiotemporal patterns of vegetation dynamics in SDTFs in vegetation models. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
Fast model updating coupling Bayesian inference and PGD model reduction
NASA Astrophysics Data System (ADS)
Rubio, Paul-Baptiste; Louf, François; Chamoin, Ludovic
2018-04-01
The paper focuses on a coupled Bayesian-Proper Generalized Decomposition (PGD) approach for the real-time identification and updating of numerical models. The purpose is to use the most general case of Bayesian inference theory in order to address inverse problems and to deal with different sources of uncertainties (measurement and model errors, stochastic parameters). In order to do so with a reasonable CPU cost, the idea is to replace the direct model called for Monte-Carlo sampling by a PGD reduced model, and in some cases directly compute the probability density functions from the obtained analytical formulation. This procedure is first applied to a welding control example with the updating of a deterministic parameter. In the second application, the identification of a stochastic parameter is studied through a glued assembly example.
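The core trick, replacing the expensive direct solver with a cheap reduced model inside Bayesian inference, can be sketched with a toy scalar problem. The PGD decomposition itself is not reproduced here; the surrogate below is just a precomputed lookup standing in for the reduced model's analytical parameter dependence:

```python
import numpy as np

def direct_model(theta):
    """Stand-in for an expensive direct (e.g. finite element) solver."""
    return np.sin(theta) + 0.1 * theta

# Cheap surrogate: the model response precomputed once on a parameter
# grid, playing the role of the PGD reduced model
grid = np.linspace(0.0, 2.0, 2001)
surrogate = direct_model(grid)

def posterior(y_obs, sigma):
    """Normalized posterior weights on the grid, using the surrogate
    in place of the direct solver inside a Gaussian likelihood."""
    like = np.exp(-0.5 * ((y_obs - surrogate) / sigma) ** 2)
    return like / like.sum()              # flat prior on the grid

post = posterior(y_obs=direct_model(1.2), sigma=0.05)
theta_map = grid[np.argmax(post)]         # identified parameter value
```

Because every likelihood evaluation now costs a lookup instead of a solve, the posterior (here recovering the true parameter 1.2) can be computed in real time, which is the point of the coupling.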
NASA Astrophysics Data System (ADS)
Noh, Seong Jin; Tachikawa, Yasuto; Shiiba, Michiharu; Kim, Sunmin
Applications of data assimilation techniques have been widely used to improve the predictability of hydrologic modeling. Among various data assimilation techniques, sequential Monte Carlo (SMC) filters, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. This paper proposes a dual state-parameter updating scheme (DUS) based on SMC methods to estimate both state and parameter variables of a hydrologic model. We introduce a kernel smoothing method for the robust estimation of uncertain model parameters in the DUS. The applicability of the dual updating scheme is illustrated using the implementation of the storage function model on a mid-sized Japanese catchment. We also compare performance results of DUS combined with various SMC methods, such as SIR, ASIR and RPF.
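A dual state-parameter SIR filter with kernel smoothing of the parameter particles (in the style of Liu and West) can be sketched on a toy linear-reservoir model. This is a generic illustration under assumed dynamics and noise levels, not the paper's storage function model or exact DUS scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
N, a = 2000, 0.98                         # particles, kernel shrinkage factor
h = np.sqrt(1.0 - a ** 2)                 # matching kernel bandwidth
sigma_obs = 0.05

k_true, S_true = 0.30, 1.0                # "true" recession parameter, storage
S = rng.uniform(0.0, 2.0, N)              # state particles (storage)
k = rng.uniform(0.05, 0.8, N)             # parameter particles

for step in range(60):
    rain = 1.0 + np.sin(0.2 * step)
    # kernel-smoothed parameter evolution: shrink toward the mean, add jitter
    k = a * k + (1 - a) * k.mean() + h * k.std() * rng.standard_normal(N)
    k = np.clip(k, 1e-3, None)
    # propagate state particles (linear reservoir) with process noise
    S = np.maximum(S + rain - k * S + 0.02 * rng.standard_normal(N), 0.0)
    # synthetic truth and noisy discharge observation q = k * S
    S_true = S_true + rain - k_true * S_true
    q_obs = k_true * S_true + sigma_obs * rng.standard_normal()
    # weight by the observation likelihood, then SIR-resample jointly
    w = np.exp(-0.5 * ((q_obs - k * S) / sigma_obs) ** 2)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)
    S, k = S[idx], k[idx]

k_hat = k.mean()                          # posterior mean of the parameter
```

The kernel step keeps the parameter ensemble from collapsing under repeated resampling, which is the robustness issue the DUS kernel smoothing addresses.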
Podiatric medical resources on the internet: a fifth update.
Fikar, Charles R
2006-01-01
An updated selection of high-quality Internet resources of potential use to the podiatric medical practitioner, educator, resident, and student is presented. Internet search tools and general Internet reference sources are briefly covered, including methods of locating material residing on the "invisible" Web. General medical and podiatric medical resources are emphasized. These Web sites were judged on the basis of their potential to enhance the practice of podiatric medicine in addition to their contribution to education. Podiatric medical students, educators, residents, and practitioners who require a quick reference guide to the Internet may find this article useful.
India Solar Resource Data: Enhanced Data for Accelerated Deployment (Fact Sheet)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Identifying potential locations for solar photovoltaic (PV) and concentrating solar power (CSP) projects requires an understanding of the underlying solar resource. Under a bilateral partnership between the United States and India - the U.S.-India Energy Dialogue - the National Renewable Energy Laboratory has updated Indian solar data and maps using data provided by the Ministry of New and Renewable Energy (MNRE) and the National Institute for Solar Energy (NISE). This fact sheet overviews the updated maps and data, which help identify high-quality solar energy projects. This can help accelerate the deployment of solar energy in India.
Visualization assisted by parallel processing
NASA Astrophysics Data System (ADS)
Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.
2011-01-01
This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce real-time rendering for a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the entire room. We use the particle paradigm to interpolate the sensor data; particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function efficiently, we use a client-server paradigm: the server computes the data and the client displays it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods evaluated in order to determine the best solution for the proposed task. The benchmark uses the computational cost of our algorithm, which is based on locating particles relative to sensors and on updating particle values. The benchmark was run on a personal computer using CPU, multi-core, GPU, and hybrid GPU/CPU programming. GPU programming is a growing research field; it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) platform; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real time.
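Interpolating sparse sensor readings onto particle positions via a Delaunay triangulation can be sketched with SciPy, which triangulates the sensor sites and interpolates linearly inside each simplex. The coordinates and temperatures below are illustrative; this is not the paper's GPU implementation:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# sensor coordinates on three layers plus one interior sensor,
# with temperatures following an (assumed) linear field
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1],
                [0.5, 0.5, 0.5]], dtype=float)
temps = 20.0 + pts[:, 0] + 0.5 * pts[:, 1] + 2.0 * pts[:, 2]

# LinearNDInterpolator triangulates the sensors (Delaunay) and
# interpolates linearly within each tetrahedron
interp = LinearNDInterpolator(pts, temps)

# "particles" fill the space of the room between the sensors
particles = np.array([[0.3, 0.6, 0.2], [0.5, 0.5, 0.25]])
t = interp(particles)
```

Each particle query reduces to locating the enclosing tetrahedron and a barycentric weighting, which is the per-particle cost the paper's benchmark distributes across CPU, multi-core, and GPU backends.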
NASA Astrophysics Data System (ADS)
Love, C. A.; Skahill, B. E.; AghaKouchak, A.; Karlovits, G. S.; England, J. F.; Duren, A. M.
2017-12-01
We compare gridded extreme precipitation return levels obtained using spatial Bayesian hierarchical modeling (BHM) with their respective counterparts from a traditional regional frequency analysis (RFA) using the same set of extreme precipitation data. Our study area is the 11,478-square-mile Willamette River basin (WRB) located in northwestern Oregon, a major tributary of the Columbia River whose 187-mile-long main stem, the Willamette River, flows northward between the Coastal and Cascade Ranges. The WRB contains approximately two thirds of Oregon's population and 20 of the 25 most populous cities in the state. The U.S. Army Corps of Engineers (USACE) Portland District operates thirteen dams, and extreme precipitation estimates are required to support risk-informed hydrologic analyses as part of the USACE Dam Safety Program. Our intent is to profile for the USACE an alternate methodology to an RFA that was developed in 2008 due to the lack of an official NOAA Atlas 14 update for the state of Oregon. We analyze 24-hour annual precipitation maxima data for the WRB utilizing the spatial BHM R package "spatial.gev.bma", which has been shown to be efficient in developing coherent maps of extreme precipitation by return level. Our BHM modeling analysis involved application of leave-one-out cross validation (LOO-CV), which not only supported model selection but also a comprehensive assessment of location-specific model performance. The LOO-CV results will provide a basis for the BHM-RFA comparison.
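Once a GEV distribution is fitted to the annual maxima (whether by BHM or RFA), a return level is simply a high quantile of the fitted distribution. A sketch with illustrative parameter values, not values from the study; note that SciPy parameterizes the GEV shape as c = -xi:

```python
from scipy.stats import genextreme

# GEV location, scale, shape for 24-hour annual maxima (mm); illustrative
mu, sigma, xi = 60.0, 15.0, 0.10

def return_level(T_years, mu, sigma, xi):
    """Depth exceeded on average once every T years: the (1 - 1/T)
    quantile of the fitted GEV distribution."""
    return genextreme.ppf(1.0 - 1.0 / T_years, c=-xi, loc=mu, scale=sigma)

z100 = return_level(100.0, mu, sigma, xi)   # 100-year return level
```

Mapping this quantile over the basin grid, with parameters drawn from the BHM posterior at each cell, is what produces the coherent return-level maps the abstract describes.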
NASA Astrophysics Data System (ADS)
Astroza, Rodrigo; Ebrahimian, Hamed; Conte, Joel P.
2015-03-01
This paper describes a novel framework that combines advanced mechanics-based nonlinear (hysteretic) finite element (FE) models and stochastic filtering techniques to estimate unknown time-invariant parameters of nonlinear inelastic material models used in the FE model. Using input-output data recorded during earthquake events, the proposed framework updates the nonlinear FE model of the structure. The updated FE model can be directly used for damage identification and further used for damage prognosis. To update the unknown time-invariant parameters of the FE model, two alternative stochastic filtering methods are used: the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). A three-dimensional, 5-story, 2-by-1 bay reinforced concrete (RC) frame is used to verify the proposed framework. The RC frame is modeled using fiber-section displacement-based beam-column elements with distributed plasticity and is subjected to the ground motion recorded at the Sylmar station during the 1994 Northridge earthquake. The results indicate that the proposed framework accurately estimates the unknown material parameters of the nonlinear FE model. The UKF outperforms the EKF when the relative root-mean-square errors of the recorded responses are compared. In addition, the results suggest that the convergence of the estimates of the modeling parameters is smoother and faster when the UKF is utilized.
Updating Known Distribution Models for Forecasting Climate Change Impact on Endangered Species
Muñoz, Antonio-Román; Márquez, Ana Luz; Real, Raimundo
2013-01-01
To plan endangered species conservation and to design adequate management programmes, it is necessary to predict their distributional response to climate change, especially under the current situation of rapid change. However, these predictions are customarily done by relating de novo the distribution of the species with climatic conditions, with no regard for previously available knowledge about the factors affecting the species distribution. We propose to take advantage of known species distribution models, but proceeding to update them with the variables yielded by climatic models before projecting them to the future. To exemplify our proposal, the availability of suitable habitat across Spain for the endangered Bonelli's Eagle (Aquila fasciata) was modelled by updating a pre-existing model based on current climate and topography to a combination of different general circulation models and Special Report on Emissions Scenarios. Our results suggested that the main threat for this endangered species would not be climate change, since all forecasting models show that its distribution will be maintained and increased in mainland Spain throughout the twenty-first century. We remark on the importance of linking conservation biology with distribution modelling by updating existing models, frequently available for endangered species, considering all the known factors conditioning the species' distribution, instead of building new models that are based on climate change variables only. PMID:23840330
NASA Astrophysics Data System (ADS)
Alonso-Contes, C.; Gerber, S.; Bliznyuk, N.; Duerr, I.
2017-12-01
Wetlands contribute approximately 20 to 40% of global methane emissions. We build a methane model for tropical and subtropical forests that allows inundated conditions, following the approaches used in more complex global biogeochemical emission models (LPJWhyMe and CLM4Me). The model was designed to replace model formulations with field and remotely sensed data for two essential drivers: plant productivity and hydrology. This allows us to focus directly on the central processes of methane production, consumption and transport. One of our long-term goals is to make the model available to scientists interested in including methane modeling in their location of study. Sensitivity analysis results help focus field data collection efforts. Here, we present results from a pilot global sensitivity analysis of the model in order to determine which parameters and processes contribute most to the model's uncertainty in methane emissions. Results show that parameters related to water table behavior, carbon input (in the form of plant productivity) and rooting depth affect simulated methane emissions the most. Current efforts include repeating the sensitivity analysis on methane emission outputs from an updated model that incorporates a soil heat flux routine, to determine the extent to which the soil temperature parameters affect CH4 emissions. We are currently conducting field data collection during Summer 2017 for comparison among 3 different landscapes located in the Ordway-Swisher Biological Station in Melrose, FL. We are collecting soil moisture and CH4 emission data from 4 different wetland types. Having data from 4 wetland types allows for calibration of the model to diverse soil, water and vegetation characteristics.
Inter-firm Networks, Organizational Learning and Knowledge Updating: An Empirical Study
NASA Astrophysics Data System (ADS)
Zhang, Su-rong; Wang, Wen-ping
In the era of the knowledge-based economy, in which information technology develops rapidly, the rate of knowledge updating has become a critical factor for enterprises to gain competitive advantage. We build an interactional theoretical model among inter-firm networks, organizational learning and knowledge updating, and demonstrate it with an empirical study. The results show that inter-firm networks and organizational learning are the sources of knowledge updating.
Operations Security (OPSEC) Guide
2011-04-01
information list. Review periodically for currency and update as necessary. b. Incorporate OPSEC into organizational plans, exercises, and...is the phone located? (i.e., on your desk, in a common room, in another office) Where is the crypto-ignition key (CIK) kept? For CIKs kept in a
DOT National Transportation Integrated Search
2016-04-01
The Florida Department of Transportation (FDOT) District One developed the Congestion Management Process : (CMP) system to prioritize low-cost, near-term highway improvements on the Strategic Intermodal System (SIS). : The existing CMP system is desi...
Department of Defense Fire and Emergency Services Certification Program
1995-12-12
Support Agency 2. CDC Career Development Course 3. CIMP Certification Information Management Program 4. ECI Extension Course Institute 5. HAZMAT...10. Notify IFSAC of the date and location of performance evaluations. 11. Maintain and update the Certification Information Management Program ( CIMP
Food Service Trends--The Next Two Years and Beyond.
ERIC Educational Resources Information Center
Bowers, R. Steve
1987-01-01
Surveyed college food service trends in various geographical locations in the United States. Discusses the trends, addressing eating alternatives, program issues, flexibility in offerings, nutritional emphasis, management and training changes, concern with costs and profits, updating of physical facilities, marketing, technology, matching…
The Cancer Family Caregiving Experience: An Updated and Expanded Conceptual Model
Fletcher, Barbara Swore; Miaskowski, Christine; Given, Barbara; Schumacher, Karen
2011-01-01
Objective The decade from 2000–2010 was an era of tremendous growth in family caregiving research specific to the cancer population. This research has implications for how cancer family caregiving is conceptualized, yet the most recent comprehensive model of cancer family caregiving was published ten years ago. Our objective was to develop an updated and expanded comprehensive model of the cancer family caregiving experience, derived from concepts and variables used in research during the past ten years. Methods A conceptual model was developed based on cancer family caregiving research published from 2000–2010. Results Our updated and expanded model has three main elements: 1) the stress process, 2) contextual factors, and 3) the cancer trajectory. Emerging ways of conceptualizing the relationships between and within model elements are addressed, as well as an emerging focus on caregiver-patient dyads as the unit of analysis. Conclusions Cancer family caregiving research has grown dramatically since 2000, resulting in a greatly expanded conceptual landscape. This updated and expanded model of the cancer family caregiving experience synthesizes the conceptual implications of an international body of work and demonstrates tremendous progress in how cancer family caregiving research is conceptualized. PMID:22000812
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements on the precipitation input change. When rain gauge data are used as precipitation input, there is a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present in the system measurements, the data assimilation scheme might already have updated the model to include the impact of a particular rain cell by the time the rain data are forced upon the model, so the model run ends up including the same rain twice. This paper compares the forecast accuracy of updated models using time-displaced rain input with that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected by a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
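The comparison described in this abstract can be sketched with a toy time-area model (a minimal illustration with hypothetical rain values and weights, not the authors' actual model or data): runoff is the convolution of a rain series with time-area weights, so a time-displaced input shifts the hydrograph in time, while a constant bias scales its magnitude.

```python
# Toy time-area runoff sketch (illustrative only; the paper's model,
# catchment parameters, and rain series are not reproduced here).
def time_area_runoff(rain, unit_hydrograph):
    """Convolve a rain series with time-area weights to get runoff."""
    n = len(rain)
    runoff = [0.0] * n
    for t in range(n):
        for k, w in enumerate(unit_hydrograph):
            if t - k >= 0:
                runoff[t] += w * rain[t - k]
    return runoff

rain = [0, 2, 5, 3, 1, 0, 0, 0]   # hypothetical rain series (mm per time step)
uh = [0.2, 0.3, 0.3, 0.2]         # hypothetical time-area weights (sum to 1)

base = time_area_runoff(rain, uh)
displaced = time_area_runoff([0] + rain[:-1], uh)       # rain arrives one step late
biased = time_area_runoff([1.6 * r for r in rain], uh)  # +60% constant bias
```

Because convolution is linear and shift-invariant, the displaced run is simply the base hydrograph delayed by one step, while the biased run preserves timing but inflates volume; for a short forecast horizon, both mismatches degrade accuracy against observations, which is the trade-off the paper quantifies.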
Jílek, K; Slezáková, M; Fronka, A; Prokop, T; Neubauer, L
2017-11-01
During the years 2010-12 an automated, online and wireless outdoor measurement station for atmospheric radon, gamma dose rate and meteorological parameters was realised at the National Radiation Protection Institute (NRPI) in Prague. At the turn of 2013 an expansion of the existing station was completed. Under a project funded by the Czech Technological Agency, a new updated station was established, additionally equipped with modules for measurement of atmospheric radon/thoron short-lived decay products, radon in water and soil, and the radon exhalation rate from soil. After introducing the updated station's key detection parameters and benefits, its use for atmospheric modelling and monitoring is demonstrated. Results are summarised from the 3-year measurement period in the NRPI outdoor area in Prague and from a simultaneous annual measurement performed by another similar station located near the uranium mud fields at DIAMO, state enterprise, Stráž pod Ralskem. Observed seasonal and diurnal variations of atmospheric radon concentrations and the variability of the equilibrium factor, F, are illustrated and compared.
Long duration exposure facility post-flight thermal analysis, part 1
NASA Technical Reports Server (NTRS)
Berrios, William M.; Sampair, Thomas R.
1992-01-01
Results of the post-flight thermal analysis of the Long Duration Exposure Facility (LDEF) mission are presented. The LDEF mission thermal analysis was verified by comparing the thermal model results to flight data from the LDEF Thermal Measurements System (THERM). Post-flight calculated temperature uncertainties have been reduced to under +/- 18 F from the pre-flight uncertainties of +/- 40 F. The THERM consisted of eight temperature sensors, a shared tape recorder, a standard LDEF flight battery, and an electronics control box. The temperatures were measured at selected locations on the LDEF structure interior during the first 390 days of flight and recorded for post-flight analysis. After the LDEF retrieval from space on 12 Jan. 1990, the tape recorder was recovered from the spacecraft and the data reduced for comparison to the LDEF predicted temperatures. The LDEF mission temperatures were calculated prior to the LDEF deployment on 7 Apr. 1984, and updated after the LDEF retrieval with actual flight parameter data, including thermal fluxes, spacecraft attitudes, thermal coatings degradation, and contamination effects. All updated data used for the calculation of post-flight temperatures are also presented in this document.
Lin, Zhicheng; He, Sheng
2012-10-25
Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.
NASA Astrophysics Data System (ADS)
Pascoe, Stephen; Cinquini, Luca; Lawrence, Bryan
2010-05-01
The Coupled Model Intercomparison Project Phase 5 (CMIP5) will produce a petabyte-scale archive of climate data relevant to future international assessments of climate science (e.g., the IPCC's 5th Assessment Report scheduled for publication in 2013). The infrastructure for the CMIP5 archive must meet many challenges to support this ambitious international project. We describe here the distributed software architecture being deployed worldwide to meet these challenges. The CMIP5 architecture extends the Earth System Grid (ESG) distributed architecture of Datanodes, providing data access and visualisation services, and Gateways, providing the user interface including registration, search and browse services. Additional features developed for CMIP5 include a publication workflow incorporating quality control and metadata submission, data replication, version control, update notification and production of citable metadata records. Implementation of these features has been driven by the requirements of reliable global access to over 1 PB of data and consistent citability of data and metadata. Central to the implementation is the concept of Atomic Datasets that are identifiable through a Data Reference Syntax (DRS). Atomic Datasets are immutable, allowing them to be replicated and tracked while maintaining data consistency. However, since occasional errors in data production and processing are inevitable, new versions can be published and users notified of these updates. As deprecated datasets may be the target of existing citations, they can remain visible in the system. Replication of Atomic Datasets is designed to improve regional access and provide fault tolerance. Several datanodes in the system are designated replicating nodes and hold replicas of a portion of the archive expected to be of broad interest to the community.
Gateways provide a system-wide interface to users where they can track the version history and location of replicas to select the most appropriate location for download. In addition to meeting the immediate needs of CMIP5 this architecture provides a basis for the Earth System Modeling e-infrastructure being further developed within the EU FP7 IS-ENES project.
2016-04-30
Model Acquisition Activities Clifford Whitcomb, Systems Engineering Professor, NPS Corina White, Systems Engineering Research Associate, NPS...Engineering Acquisition Activities Karen Holness, Assistant Professor, NPS Update on the Department of the Navy Systems Engineering Career Competency Model ...Career Competency Model Clifford A. Whitcomb—is a Professor in the Systems Engineering Department at the Naval Postgraduate School, in Monterey, CA
An Updated AP2 Beamline TURTLE Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gormley, M.; O'Day, S.
1991-08-23
This note describes a TURTLE model of the AP2 beamline. The model was created by D. Johnson and improved by J. Hangst. The authors of this note have made additional improvements to reflect recent element and magnet setting changes. The magnet characteristics measurements and survey data compiled to update the model are presented. A printout of the actual TURTLE deck may be found in Appendix A.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-14
... update the air quality modeling in the San Joaquin Valley 8-Hour Ozone SIP by December 31, 2014. DATES... modeling in the San Joaquin Valley 8-Hour Ozone SIP to reflect emissions inventory improvements and any...) * * * (396) * * * (ii) * * * (A) * * * (2) * * * (ii) Commitment to update the air quality modeling in the...
PRMS-IV, the precipitation-runoff modeling system, version 4
Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.
2015-01-01
Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.
Agent Communication for Dynamic Belief Update
NASA Astrophysics Data System (ADS)
Kobayashi, Mikito; Tojo, Satoshi
Thus far, various formalizations of rational / logical agent models have been proposed. In this paper, we include the notions of communication channel and belief modality in update logic, and introduce Belief Update Logic (BUL). First, we discuss how the inform action of FIPA-ACL can be reformalized in terms of a communication channel, which represents a connection between agents. Thus, our agents can send a message only when they believe there is, and there actually is, a channel between them and a receiver. Then, we present a static belief logic (BL) and show its soundness and completeness. Next, we develop the logic into BUL, which can update a Kripke model by the inform action; we show that in the updated model the belief operator still satisfies K45. Thereafter, we show that every sentence in BUL can be translated into BL; thus, we can contend that BUL is also sound and complete. Furthermore, we discuss the features of BUL, including the case of inconsistent information, as well as channel transmission. Finally, we summarize our contribution and discuss some future issues.
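The model-update idea in this abstract can be illustrated with a toy possible-worlds sketch (a simplified illustration only, not the paper's BUL semantics with channels and modalities): an inform action updates the Kripke model by restricting the receiver's accessibility relation to worlds where the informed proposition holds.

```python
# Toy Kripke-model belief update (illustrative sketch; BUL's actual
# semantics, with communication channels and K45 beliefs, is richer).
worlds = {"w1": {"p"}, "w2": set()}  # valuation: p holds at w1, fails at w2

# Accessibility relation per agent: pairs (u, v) meaning that at world u
# the agent considers world v possible.
access = {"receiver": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}}

def believes(agent, prop, world):
    """Agent believes prop at world iff prop holds in all accessible worlds."""
    return all(prop in worlds[v] for (u, v) in access[agent] if u == world)

def inform(agent, prop):
    """Update step: drop accessibility edges into worlds where prop fails."""
    access[agent] = {(u, v) for (u, v) in access[agent] if prop in worlds[v]}

before = believes("receiver", "p", "w1")  # w2 (where p fails) is accessible
inform("receiver", "p")
after = believes("receiver", "p", "w1")   # only p-worlds remain accessible
```

Before the update the receiver does not believe p (an accessible world falsifies it); after the inform action all accessible worlds satisfy p, mirroring how the paper's update operator changes the Kripke model while preserving the belief operator's properties.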
Estimating urban flood risk - uncertainty in design criteria
NASA Astrophysics Data System (ADS)
Newby, M.; Franks, S. W.; White, C. J.
2015-06-01
The design of urban stormwater infrastructure is generally performed assuming that climate is static. For engineering practitioners, stormwater infrastructure is designed using a peak flow method, such as the Rational Method outlined in the Australian Rainfall and Runoff (AR&R) guidelines, together with estimates of design rainfall intensities. Changes to Australian rainfall intensity design criteria have been made through updated releases of the AR&R77, AR&R87 and the recent 2013 AR&R Intensity Frequency Distributions (IFDs). The primary focus of this study is to compare the three IFD sets from 51 locations Australia-wide. Since the release of the AR&R77 IFDs, the duration and number of locations of rainfall data have increased and techniques for data analysis have changed. Updated terminology coinciding with the 2013 IFD release has also resulted in a practical change to the design rainfall. For example, infrastructure designed for a 1:5 year ARI corresponds to an 18.13% AEP; however, for practical purposes, hydraulic guidelines have been updated with the more intuitive 20% AEP. The evaluation of design rainfall variation across Australia indicated that the changes depend upon location, recurrence interval and rainfall duration. The changes to design rainfall IFDs are due to the application of differing data analysis techniques, the length and number of data sets, and the change in terminology from ARI to AEP. Such changes mean that developed infrastructure has been designed to a range of different design criteria, indicating the likely inadequacy of earlier developments relative to current estimates of flood risk. In many cases, the under-design of infrastructure is greater than the expected impact of increased rainfall intensity under climate change scenarios.
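The ARI/AEP figures quoted in this abstract follow from the standard conversion AEP = 1 − exp(−1/ARI) (a minimal check assuming this is the conversion in use); it reproduces the 18.13% value for a 1:5 year ARI:

```python
import math

def ari_to_aep(ari_years):
    """Convert an average recurrence interval (years) to an annual
    exceedance probability via AEP = 1 - exp(-1/ARI)."""
    return 1.0 - math.exp(-1.0 / ari_years)

aep_5yr = ari_to_aep(5)  # ~0.1813, i.e. the 18.13% AEP cited for a 1:5 year ARI
```

This also shows why the rounded 20% AEP adopted in hydraulic guidelines is a genuine (if small) change in design criterion rather than a pure relabelling of the 1:5 year ARI.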
Role of early visual cortex in trans-saccadic memory of object features.
Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas
2015-08-01
Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.
Application of terrestrial laser scanning to the development and updating of the base map
NASA Astrophysics Data System (ADS)
Klapa, Przemysław; Mitka, Bartosz
2017-06-01
The base map provides basic information about land to individuals, companies, developers, design engineers, organizations, and government agencies. Its contents include spatial location data for control network points, buildings, land lots, infrastructure facilities, and topographic features. As the primary map of the country, it must be developed in accordance with specific laws and regulations and be continuously updated. The base map is a data source used for the development and updating of derivative maps and other large scale cartographic materials such as thematic or topographic maps. Thanks to the advancement of science and technology, the quality of land surveys carried out by means of terrestrial laser scanning (TLS) matches that of traditional surveying methods in many respects. This paper discusses the potential application of output data from laser scanners (point clouds) to the development and updating of cartographic materials, taking Poland's base map as an example. A few research sites were chosen to present the method and the process of conducting a TLS land survey: a fragment of a residential area, a street, the surroundings of buildings, and an undeveloped area. The entire map that was drawn as a result of the survey was checked by comparing it to a map obtained from PODGiK (pol. Powiatowy Ośrodek Dokumentacji Geodezyjnej i Kartograficznej - Regional Centre for Geodetic and Cartographic Records) and by conducting a field inspection. An accuracy and quality analysis of the conducted fieldwork and deskwork yielded very good results, which provide solid grounds for concluding that cartographic materials based on a TLS point cloud are a reliable source of information about land. The contents of the map created with the use of the obtained point cloud were located in space (x, y, z) very accurately. The conducted accuracy analysis and the inspection of the performed works showed that TLS surveys are of high quality.
The accuracy of determining the location of the various map contents has been estimated at 0.02-0.03 m. The map was developed in conformity with the applicable laws and regulations as well as with best practice requirements.