Sample records for observer model analysis

  1. Analysis of the hydrological response of a distributed physically-based model using post-assimilation (EnKF) diagnostics of streamflow and in situ soil moisture observations

    NASA Astrophysics Data System (ADS)

    Trudel, Mélanie; Leconte, Robert; Paniconi, Claudio

    2014-06-01

    Data assimilation techniques not only enhance model simulations and forecasts, they also provide the opportunity to obtain a diagnostic of both the model and observations used in the assimilation process. In this research, an ensemble Kalman filter was used to assimilate streamflow observations at a basin outlet and at interior locations, as well as soil moisture at two different depths (15 and 45 cm). The simulation model is the distributed physically-based hydrological model CATHY (CATchment HYdrology) and the study site is the Des Anglais watershed, a 690 km2 river basin located in southern Quebec, Canada. Use of Latin hypercube sampling instead of a conventional Monte Carlo method to generate the ensemble reduced the size of the ensemble, and therefore the calculation time. Different post-assimilation diagnostics, based on innovations (observation minus background), analysis residuals (observation minus analysis), and analysis increments (analysis minus background), were used to evaluate assimilation optimality. An important issue in data assimilation is the estimation of error covariance matrices. These diagnostics were also used in a calibration exercise to determine the standard deviation of model parameters, forcing data, and observations that led to optimal assimilations. The analysis of innovations showed a lag between the model forecast and the observation during rainfall events. Assimilation of streamflow observations corrected this discrepancy. Assimilation of outlet streamflow observations improved the Nash-Sutcliffe efficiencies (NSE) between the model forecast (one day) and the observation at both outlet and interior point locations, owing to the structure of the state vector used. However, assimilation of streamflow observations systematically increased the simulated soil moisture values.
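
The three post-assimilation diagnostic series named in this record are simple differences between observation, background, and analysis. A minimal sketch with synthetic scalar data (variable names and numbers are invented for illustration, not the CATHY/EnKF setup):

```python
import numpy as np

def assimilation_diagnostics(obs, background, analysis):
    """Return the three standard diagnostic series (observation space):
    innovations d = y - x_b, residuals r = y - x_a, increments a = x_a - x_b."""
    obs = np.asarray(obs, dtype=float)
    background = np.asarray(background, dtype=float)
    analysis = np.asarray(analysis, dtype=float)
    return obs - background, obs - analysis, analysis - background

# Synthetic example: a background with a persistent -0.5 bias, and a
# near-optimal analysis. For an optimal filter, innovations should be
# unbiased and serially uncorrelated; a persistent sign (as here)
# indicates model lag or bias of the kind the abstract describes.
rng = np.random.default_rng(0)
y = rng.normal(10.0, 1.0, 200)              # observations
xb = y - 0.5 + rng.normal(0.0, 0.3, 200)    # biased background
xa = y + rng.normal(0.0, 0.1, 200)          # near-optimal analysis
d, r, a = assimilation_diagnostics(y, xb, xa)
```

The mean innovation exposes the background bias (about +0.5 here), while the mean residual of a good analysis stays near zero.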

  2. The Role of Model and Initial Condition Error in Numerical Weather Forecasting Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2013-01-01

    A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
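
The defining advantage mentioned above, that analysis and forecast errors can be calculated explicitly because the truth is known, can be illustrated with a toy scalar OSSE. Everything here (random-walk nature run, error magnitudes, scalar optimal-interpolation weight) is illustrative, not the GMAO system:

```python
import numpy as np

# Toy OSSE: a known "nature run" truth, a biased-free background, and
# synthetic observations with a tunable, precisely known error.
rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 1.0, 500))        # nature run (random walk)
sigma_b, sigma_o = 2.0, 1.0                          # background / obs error sd
background = truth + rng.normal(0.0, sigma_b, 500)
obs = truth + rng.normal(0.0, sigma_o, 500)

# Scalar optimal-interpolation analysis: weight by relative error variances.
w = sigma_b**2 / (sigma_b**2 + sigma_o**2)
analysis = background + w * (obs - background)

# Because truth is known, analysis error is explicit, not estimated.
rmse_b = np.sqrt(np.mean((background - truth) ** 2))
rmse_a = np.sqrt(np.mean((analysis - truth) ** 2))
```

Varying `sigma_o` in such a setup is the scalar analogue of the observation-error experiments described in the abstract.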

  3. Objective analysis of observational data from the FGGE observing systems

    NASA Technical Reports Server (NTRS)

    Baker, W.; Edelmann, D.; Iredell, M.; Han, D.; Jakkempudi, S.

    1981-01-01

    An objective analysis procedure for updating the GLAS second and fourth order general atmospheric circulation models using observational data from the First GARP Global Experiment (FGGE) is described. The objective analysis procedure is based on a successive corrections method and the model is updated in a data assimilation cycle. Preparation of the observational data for analysis and the objective analysis scheme are described. The organization of the program and description of the required data sets are presented. The program logic and detailed descriptions of each subroutine are given.
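
A successive-corrections scheme of the kind this procedure is based on repeatedly corrects a first-guess field toward observations with a shrinking influence radius. A minimal sketch with Cressman-type weights on a toy grid (illustrative only, not the GLAS implementation):

```python
import numpy as np

def successive_corrections(grid_xy, first_guess, obs_xy, obs_val, radii):
    """One Cressman-style successive-corrections analysis.
    grid_xy: (G,2) grid coordinates; first_guess: (G,) field;
    obs_xy: (K,2) observation coordinates; obs_val: (K,) values;
    radii: decreasing influence radii, one pass each."""
    analysis = first_guess.astype(float).copy()
    for R in radii:
        # Residual of each observation against the current analysis,
        # evaluated at the nearest grid node (crude interpolation).
        resid = np.empty(len(obs_val))
        for k, (xy, v) in enumerate(zip(obs_xy, obs_val)):
            nearest = np.argmin(np.sum((grid_xy - xy) ** 2, axis=1))
            resid[k] = v - analysis[nearest]
        # Cressman-weighted correction at every grid node.
        for i, gxy in enumerate(grid_xy):
            d2 = np.sum((obs_xy - gxy) ** 2, axis=1)
            w = np.clip((R**2 - d2) / (R**2 + d2), 0.0, None)
            if w.sum() > 0:
                analysis[i] += np.sum(w * resid) / w.sum()
    return analysis

# Toy check: observations of a constant field pull a zero first guess to it.
grid = np.array([[float(i), 0.0] for i in range(5)])
analysis = successive_corrections(grid, np.zeros(5), grid.copy(),
                                  np.ones(5), radii=[3.0, 1.0])
```

In a real cycle this analysis would then initialize the next model integration, as the abstract's data assimilation cycle describes.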

  4. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1993-01-01

    We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.

  5. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1992-01-01

    We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.

  6. Meta-analysis of high-latitude nitrogen-addition and warming studies implies ecological mechanisms overlooked by land models

    NASA Astrophysics Data System (ADS)

    Bouskill, N. J.; Riley, W. J.; Tang, J. Y.

    2014-12-01

    Accurate representation of ecosystem processes in land models is crucial for reducing predictive uncertainty in energy and greenhouse gas feedbacks with the climate. Here we describe an observational and modeling meta-analysis approach to benchmark land models, and apply the method to the land model CLM4.5 with two versions of belowground biogeochemistry. We focused our analysis on the aboveground and belowground responses to warming and nitrogen addition in high-latitude ecosystems, and identified absent or poorly parameterized mechanisms in CLM4.5. While the two model versions predicted similar soil carbon stock trajectories following both warming and nitrogen addition, other predicted variables (e.g., belowground respiration) differed from observations in both magnitude and direction, indicating that CLM4.5 has inadequate underlying mechanisms for representing high-latitude ecosystems. On the basis of observational synthesis, we attribute the model-observation differences to missing representations of microbial dynamics, aboveground and belowground coupling, and nutrient cycling, and we use the observational meta-analysis to discuss potential approaches to improving the current models. However, we also urge caution concerning the selection of data sets and experiments for meta-analysis. For example, the concentrations of nitrogen applied in the synthesized field experiments (average = 72 kg ha-1 yr-1) are many times higher than projected soil nitrogen concentrations (from nitrogen deposition and release during mineralization), which precludes a rigorous evaluation of the model responses to likely nitrogen perturbations. Overall, we demonstrate that elucidating ecological mechanisms via meta-analysis can identify deficiencies in ecosystem models and empirical experiments.
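
The inverse-variance pooling at the heart of such an observational meta-analysis is compact. A fixed-effect sketch with invented per-study numbers (not the synthesized high-latitude data):

```python
import numpy as np

# Fixed-effect meta-analysis: pool per-study effect sizes with
# inverse-variance weights. Effects and variances below are illustrative.
effects = np.array([0.40, 0.25, 0.60, 0.10])     # per-study responses
variances = np.array([0.04, 0.02, 0.09, 0.01])   # per-study sampling variances

w = 1.0 / variances
pooled = np.sum(w * effects) / np.sum(w)         # pooled effect estimate
se = np.sqrt(1.0 / np.sum(w))                    # its standard error
```

A random-effects variant would add a between-study variance term to each weight; either way, the pooled effect is what a benchmarked model response would be compared against.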

  7. An Observation Analysis Tool for time-series analysis and sensor management in the FREEWAT GIS environment for water resources management

    NASA Astrophysics Data System (ADS)

    Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo

    2017-04-01

    In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increased model complexity. In order to make use of the increasing data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files and enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, integrating the creation of observations for calibration directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated in the QGIS FREEWAT plug-in which includes a large number of modelling capabilities, data management tools and calibration capacity.

  8. An analysis of USSPACECOM's space surveillance network sensor tasking methodology

    NASA Astrophysics Data System (ADS)

    Berger, Jeff M.; Moles, Joseph B.; Wilsey, David G.

    1992-12-01

    This study provides the basis for the development of a cost/benefit assessment model to determine the effects of alterations to the Space Surveillance Network (SSN) on orbital element (OE) set accuracy. It provides a review of current methods used by NORAD and the SSN to gather and process observations, an alternative to the current Gabbard classification method, and the development of a model to determine the effects of observation rate and correction interval on OE set accuracy. The proposed classification scheme is based on satellite J2 perturbations. Specifically, classes were established based on mean motion, eccentricity, and inclination since J2 perturbation effects are functions of only these elements. Model development began by creating representative sensor observations using a highly accurate orbital propagation model. These observations were compared to predicted observations generated using the NORAD Simplified General Perturbation (SGP4) model and differentially corrected using a Bayes sequential estimation algorithm. A 10-run Monte Carlo analysis was performed with this model on 12 satellites using 16 different observation rate/correction interval combinations. An ANOVA and confidence interval analysis of the results show that this model does demonstrate the differences in steady state position error based on varying observation rate and correction interval.
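
A Bayes sequential estimation step of the general kind used for differential correction can be sketched for a scalar state: each new observation refines the estimate with a gain proportional to the prior variance. This is an illustrative scalar analogue, not the SGP4-based machinery:

```python
import numpy as np

def sequential_bayes(estimate, var, obs_list, obs_var):
    """Scalar sequential estimation: each observation shrinks the
    posterior variance and pulls the estimate toward the data."""
    history = []
    for y in obs_list:
        gain = var / (var + obs_var)            # Kalman-type gain
        estimate = estimate + gain * (y - estimate)
        var = (1.0 - gain) * var                # posterior variance
        history.append((estimate, var))
    return estimate, var, history

# Diffuse prior (variance 100), 50 noisy observations of a true value 5.0.
rng = np.random.default_rng(4)
est, var, hist = sequential_bayes(0.0, 100.0,
                                  list(rng.normal(5.0, 1.0, 50)), 1.0)
```

The variance sequence in `hist` shows why observation rate and correction interval trade off: more observations per correction shrink the posterior variance further before the next propagation step inflates it.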

  9. Meta-analysis of high-latitude nitrogen-addition and warming studies implies ecological mechanisms overlooked by land models

    DOE PAGES

    Bouskill, N. J.; Riley, W. J.; Tang, J. Y.

    2014-12-11

    Accurate representation of ecosystem processes in land models is crucial for reducing predictive uncertainty in energy and greenhouse gas feedbacks with the climate. Here we describe an observational and modeling meta-analysis approach to benchmark land models, and apply the method to the land model CLM4.5 with two versions of belowground biogeochemistry. We focused our analysis on the aboveground and belowground responses to warming and nitrogen addition in high-latitude ecosystems, and identified absent or poorly parameterized mechanisms in CLM4.5. While the two model versions predicted similar soil carbon stock trajectories following both warming and nitrogen addition, other predicted variables (e.g., belowground respiration) differed from observations in both magnitude and direction, indicating that CLM4.5 has inadequate underlying mechanisms for representing high-latitude ecosystems. On the basis of observational synthesis, we attribute the model–observation differences to missing representations of microbial dynamics, aboveground and belowground coupling, and nutrient cycling, and we use the observational meta-analysis to discuss potential approaches to improving the current models. However, we also urge caution concerning the selection of data sets and experiments for meta-analysis. For example, the concentrations of nitrogen applied in the synthesized field experiments (average = 72 kg ha-1 yr-1) are many times higher than projected soil nitrogen concentrations (from nitrogen deposition and release during mineralization), which precludes a rigorous evaluation of the model responses to likely nitrogen perturbations. Overall, we demonstrate that elucidating ecological mechanisms via meta-analysis can identify deficiencies in ecosystem models and empirical experiments.

  10. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…
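
The forward-search idea, growing a presumed-clean subset one unit at a time and monitoring the Mahalanobis distance of excluded units, can be sketched for the general multivariate case (an illustrative version, not the authors' factor-analysis extension):

```python
import numpy as np

def forward_search(X, m0=None):
    """Forward search for multivariate outliers.
    Start from the m0 units closest to the coordinatewise median, then
    repeatedly refit (mean, covariance) on the current subset and admit
    the next-closest unit by Mahalanobis distance. Aberrant units enter
    last, and the trace of minimum excluded distances jumps when they do."""
    n, p = X.shape
    m = m0 or (p + 1)
    d0 = np.sum((X - np.median(X, axis=0)) ** 2, axis=1)
    subset = list(np.argsort(d0)[:m])            # initial clean subset
    trace = []
    while len(subset) < n:
        S = X[subset]
        mu = S.mean(axis=0)
        cov = np.cov(S.T) + 1e-9 * np.eye(p)     # small ridge for stability
        inv = np.linalg.inv(cov)
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)  # Mahalanobis^2
        order = np.argsort(d2)
        subset = list(order[: len(subset) + 1])  # admit one more unit
        outside = order[len(subset):]
        trace.append(d2[outside].min() if len(outside) else 0.0)
    return subset, np.array(trace)

# 30 well-behaved bivariate units plus one gross outlier at (10, 10):
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), [[10.0, 10.0]]])
subset, trace = forward_search(X)
```

The outlier (index 30) is admitted last, which is exactly the diagnostic signal the forward search looks for.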

  11. Uncertainty aggregation and reduction in structure-material performance prediction

    NASA Astrophysics Data System (ADS)

    Hu, Zhen; Mahadevan, Sankaran; Ao, Dan

    2018-02-01

    An uncertainty aggregation and reduction framework is presented for structure-material performance prediction. Different types of uncertainty sources, structural analysis model, and material performance prediction model are connected through a Bayesian network for systematic uncertainty aggregation analysis. To reduce the uncertainty in the computational structure-material performance prediction model, Bayesian updating using experimental observation data is investigated based on the Bayesian network. It is observed that the Bayesian updating results will have large error if the model cannot accurately represent the actual physics, and that this error will be propagated to the predicted performance distribution. To address this issue, this paper proposes a novel uncertainty reduction method by integrating Bayesian calibration with model validation adaptively. The observation domain of the quantity of interest is first discretized into multiple segments. An adaptive algorithm is then developed to perform model validation and Bayesian updating over these observation segments sequentially. Only information from observation segments where the model prediction is highly reliable is used for Bayesian updating; this is found to increase the effectiveness and efficiency of uncertainty reduction. A composite rotorcraft hub component fatigue life prediction model, which combines a finite element structural analysis model and a material damage model, is used to demonstrate the proposed method.
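
The segment-wise idea, updating only where the model validates against the data, can be sketched with a grid-based posterior and a crude validation gate. All names, the gate criterion, and the thresholds are invented stand-ins for the paper's adaptive algorithm:

```python
import numpy as np

def segmented_bayes_update(theta_grid, prior, obs_segments, model, sigma, tol):
    """Bayesian calibration of one parameter on a grid, gated per segment:
    a segment is used only if its mean is within `tol` of the current
    posterior-predictive mean (a crude stand-in for model validation)."""
    post = prior.copy()
    for seg in obs_segments:
        pred = np.sum(post * model(theta_grid))      # posterior-predictive mean
        if abs(np.mean(seg)) - 0.0 >= 0 and abs(np.mean(seg) - pred) >= tol:
            continue                                 # model unreliable here: skip
        for y in seg:
            like = np.exp(-0.5 * ((y - model(theta_grid)) / sigma) ** 2)
            post = post * like
            post = post / post.sum()
    return post

# Identity model; one consistent segment near 2.0 and one corrupted near 9.0.
theta = np.linspace(0.0, 10.0, 201)
posterior = segmented_bayes_update(
    theta, np.full(201, 1 / 201),
    [np.array([2.0, 2.1, 1.9]), np.array([9.0, 9.2, 8.8])],
    lambda t: t, sigma=0.5, tol=4.0)
```

Only the validated segment informs the posterior, so the corrupted segment cannot propagate its error into the prediction, mirroring the paper's motivation.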

  12. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.

  13. Four dimensional data assimilation (FDDA) impacts on WRF performance in simulating inversion layer structure and distributions of CMAQ-simulated winter ozone concentrations in Uintah Basin

    NASA Astrophysics Data System (ADS)

    Tran, Trang; Tran, Huy; Mansfield, Marc; Lyman, Seth; Crosman, Erik

    2018-03-01

    Four-dimensional data assimilation (FDDA) was applied in WRF-CMAQ model sensitivity tests to study the impact of observational and analysis nudging on model performance in simulating inversion layers and O3 concentration distributions within the Uintah Basin, Utah, U.S.A. in winter 2013. Observational nudging substantially improved WRF model performance in simulating surface wind fields, correcting a 10 °C warm surface temperature bias, correcting overestimation of the planetary boundary layer height (PBLH) and correcting underestimation of inversion strengths produced by regular WRF model physics without nudging. However, the combined effects of poor performance of WRF meteorological model physical parameterization schemes in simulating low clouds, and warm and moist biases in the temperature and moisture initialization and subsequent simulation fields, likely amplified the overestimation of warm clouds during inversion days when observational nudging was applied, impacting the resulting O3 photochemical formation in the chemistry model. To reduce the impact of a moist bias in the simulations on warm cloud formation, nudging with the analysis water mixing ratio above the planetary boundary layer (PBL) was applied. However, due to poor analysis vertical temperature profiles, applying analysis nudging also increased the errors in the modeled inversion layer vertical structure compared to observational nudging. Combining both observational and analysis nudging methods resulted in unrealistically extreme stratified stability that trapped pollutants at the lowest elevations at the center of the Uintah Basin and yielded the worst WRF performance in simulating inversion layer structure among the four sensitivity tests. 
The results of this study illustrate the importance of carefully considering the representativeness and quality of the observational and model analysis data sets when applying nudging techniques within stable PBLs, and the need to evaluate model results on a basin-wide scale.
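
The relaxation term common to both observational and analysis nudging is Newtonian: a tendency G*(obs - model) added to the model equations. A minimal scalar sketch (forcing, gain, and time step are illustrative, not the WRF FDDA configuration):

```python
import numpy as np

def integrate_with_nudging(x0, forcing, obs, G, dt, nsteps):
    """Forward-Euler integration of dx/dt = forcing(t, x) + G*(obs - x).
    The nudging term relaxes the state toward the reference value `obs`
    on a timescale 1/G."""
    x = float(x0)
    out = []
    for k in range(nsteps):
        dxdt = forcing(k * dt, x) + G * (obs - x)   # model tendency + nudging
        x += dt * dxdt
        out.append(x)
    return np.array(out)

# A model with a constant warm drift of 0.5 per step, nudged toward obs = 10.
traj = integrate_with_nudging(0.0, lambda t, x: 0.5,
                              obs=10.0, G=0.2, dt=1.0, nsteps=200)
```

Note the equilibrium (12.5 here) is offset from the observation by drift/G: nudging bounds a model bias but does not remove it, which is one reason data quality and gain choice matter so much in the stable-PBL cases above.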

  14. Using Enabling Technologies to Facilitate the Comparison of Satellite Observations with the Model Forecasts for Hurricane Study

    NASA Astrophysics Data System (ADS)

    Li, P.; Knosp, B.; Hristova-Veleva, S. M.; Niamsuwan, N.; Johnson, M. P.; Shen, T. P. J.; Tanelli, S.; Turk, J.; Vu, Q. A.

    2014-12-01

    Due to their complexity and volume, the satellite data are underutilized in today's hurricane research and operations. To better utilize these data, we developed the JPL Tropical Cyclone Information System (TCIS) - an Interactive Data Portal providing fusion between Near-Real-Time satellite observations and model forecasts to facilitate model evaluation and improvement. We have collected satellite observations and model forecasts in the Atlantic Basin and the East Pacific for the hurricane seasons since 2010 and supported the NASA Airborne Campaigns for Hurricane Study such as the Genesis and Rapid Intensification Processes (GRIP) in 2010 and the Hurricane and Severe Storm Sentinel (HS3) from 2012 to 2014. To enable the direct inter-comparisons of the satellite observations and the model forecasts, the TCIS was integrated with the NASA Earth Observing System Simulator Suite (NEOS3) to produce synthetic observations (e.g. simulated passive microwave brightness temperatures) from a number of operational hurricane forecast models (HWRF and GFS). An automated process was developed to trigger NEOS3 simulations via web services given the location and time of satellite observations, monitor the progress of the NEOS3 simulations, display the synthetic observation and ingest them into the TCIS database when they are done. In addition, three analysis tools, the joint PDF analysis of the brightness temperatures, ARCHER for finding the storm-center and the storm organization and the Wave Number Analysis tool for storm asymmetry and morphology analysis were integrated into TCIS to provide statistical and structural analysis on both observed and synthetic data. Interactive tools were built in the TCIS visualization system to allow the spatial and temporal selections of the datasets, the invocation of the tools with user specified parameters, and the display and the delivery of the results. 
In this presentation, we will describe the key enabling technologies behind the design of the TCIS interactive data portal and analysis tools, including the spatial database technology for the representation and query of the level 2 satellite data, the automatic process flow using web services, the interactive user interface using the Google Earth API, and a common and expandable Python wrapper to invoke the analysis tools.

  15. Validation of a common data model for active safety surveillance research

    PubMed Central

    Ryan, Patrick B; Reich, Christian G; Hartzema, Abraham G; Stang, Paul E

    2011-01-01

    Objective: Systematic analysis of observational medical databases for active safety surveillance is hindered by the variation in data models and coding systems. Data analysts often find robust clinical data models difficult to understand and ill suited to support their analytic approaches. Further, some models do not facilitate the computations required for systematic analysis across many interventions and outcomes for large datasets. Translating the data from these idiosyncratic data models to a common data model (CDM) could facilitate both the analysts' understanding and the suitability for large-scale systematic analysis. In addition to facilitating analysis, a suitable CDM has to faithfully represent the source observational database. Before beginning to use the Observational Medical Outcomes Partnership (OMOP) CDM and a related dictionary of standardized terminologies for a study of large-scale systematic active safety surveillance, the authors validated the model's suitability for this use by example. Validation by example: To validate the OMOP CDM, the model was instantiated into a relational database, data from 10 different observational healthcare databases were loaded into separate instances, a comprehensive array of analytic methods that operate on the data model was created, and these methods were executed against the databases to measure performance. Conclusion: There was acceptable representation of the data from 10 observational databases in the OMOP CDM using the standardized terminologies selected, and a range of analytic methods was developed and executed with sufficient performance to be useful for active safety surveillance. PMID:22037893
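
At its core, translating an idiosyncratic source record into a CDM row is field renaming plus standardized-vocabulary mapping. A toy sketch: the field names, tiny vocabulary, and concept id below are invented for illustration and are not the actual OMOP CDM tables or terminologies:

```python
# Toy standardized vocabulary: free-text diagnosis -> concept id.
# Both the strings and the id are invented placeholders.
VOCAB = {"mi": 4329847, "myocardial infarction": 4329847}

def to_cdm(source_record):
    """Map one idiosyncratic source record to a common-schema row.
    Unmappable diagnosis text falls back to concept id 0 (unknown)."""
    return {
        "person_id": source_record["patient"],
        "condition_concept_id": VOCAB.get(source_record["dx_text"].lower(), 0),
        "condition_start_date": source_record["date"],
    }

row = to_cdm({"patient": 42, "dx_text": "MI", "date": "2011-01-01"})
```

Once every source database is expressed in one schema with one vocabulary, the same analytic method can run unchanged against all of them, which is the property the validation exercise measures.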

  16. Spherical harmonic analysis of a synoptic climatology generated with a global general circulation model

    NASA Technical Reports Server (NTRS)

    Christidis, Z. D.; Spar, J.

    1980-01-01

    Spherical harmonic analysis was used to analyze the observed climatological (C) fields of temperature at 850 mb, geopotential height at 500 mb, and sea level pressure. The spherical harmonic method was also applied to the corresponding "model climatological" fields (M) generated by a general circulation model, the "GISS climate model." The climate model was initialized with observed data for 1 December 1976 at 0000 GMT and allowed to generate five years of meteorological history. Monthly means of the above fields for the five years were computed and subjected to spherical harmonic analysis. It was found from the comparison of the spectral components of both sets, M and C, that the climate model generated reasonable 500 mb geopotential heights. The model temperature field at 850 mb exhibited a generally correct structure. However, the meridional temperature gradient was overestimated and overheating of the continents was observed in summer.
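
For the zonal (m = 0) part of such an analysis, the spectral components are Legendre coefficients obtained by least-squares projection onto P_n(sin(lat)). A small sketch with a synthetic temperature field (not the GISS model output); comparing such coefficient sets for M and C fields is the comparison the abstract describes:

```python
import numpy as np

# Latitude grid and mu = sin(lat), the Legendre argument.
lat = np.deg2rad(np.linspace(-87.5, 87.5, 72))
mu = np.sin(lat)

# Synthetic zonal-mean "climatology": 288 K global mean plus a
# P_2-shaped equator-to-pole gradient (0.5*(3*mu^2 - 1) is P_2).
field = 288.0 - 30.0 * 0.5 * (3 * mu**2 - 1)

# Least-squares projection onto P_0..P_4.
V = np.polynomial.legendre.legvander(mu, deg=4)
coef, *_ = np.linalg.lstsq(V, field, rcond=None)
```

The fitted coefficients recover the constructed spectrum (288 in P_0, -30 in P_2, zero elsewhere); for a full analysis the basis would be the spherical harmonics Y_nm rather than zonal P_n alone.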

  17. Testing averaged cosmology with type Ia supernovae and BAO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
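
A brute-force version of Bayesian model selection approximates each model's evidence by averaging its likelihood over a prior grid of parameters, then compares the evidences. The sketch below uses synthetic straight-line data, not the actual SNe Ia/BAO likelihoods:

```python
import numpy as np

# Synthetic data generated by a straight line with noise sd 0.1.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 40)
y = 1.0 + 0.8 * x + rng.normal(0.0, 0.1, 40)

def log_like(pred):
    """Gaussian log-likelihood of the data given a prediction."""
    return -0.5 * np.sum(((y - pred) / 0.1) ** 2)

# Evidence ~ prior-weighted average likelihood over a uniform grid.
g = np.linspace(0.0, 2.0, 41)
ev_const = np.mean([np.exp(log_like(a * np.ones_like(x))) for a in g])
ev_line = np.mean([np.exp(log_like(a + b * x)) for a in g for b in g])
```

The grid averaging builds in the Occam penalty automatically: the two-parameter model pays for its larger prior volume, yet here its far better fit still wins, which is the shape of argument behind the abstract's model-selection result.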

  18. Macro-modeling and micro-modeling tools for HOV-to-HOT lane analysis.

    DOT National Transportation Integrated Search

    2016-06-01

    This report summarizes the analysis of observed commuting changes after conversion of an existing carpool lane into a high-occupancy toll lane, on 15.5 miles of Atlanta I-85. The team explored the correlations between observed changes in travel b...

  19. Observability and synchronization of neuron models.

    PubMed

    Aguirre, Luis A; Portes, Leonardo L; Letellier, Christophe

    2017-10-01

    Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficients in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node and (ii) without having to estimate the phase.
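
For a linearized system, observability reduces to a rank test of the matrix [C; CA; CA^2; ...]. A minimal sketch of that test with toy matrices (a linear stand-in for the nonlinear observability coefficients computed in the paper, not the four neuron models themselves):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., CA^(n-1); full column rank n means the measured
    variable(s) can recover the whole state."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Coupled two-state system measured through x0: observable (rank 2)...
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
rank1 = np.linalg.matrix_rank(observability_matrix(A, np.array([[1.0, 0.0]])))

# ...whereas measuring a variable decoupled from the rest is not (rank 1).
A2 = np.array([[1.0, 0.0], [0.0, 2.0]])
rank2 = np.linalg.matrix_rank(observability_matrix(A2, np.array([[1.0, 0.0]])))
```

This is the binary version of the question; the observability coefficients discussed in the paper grade *how well* a given recorded variable conveys the dynamics rather than just whether the rank is full.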

  20. Observer-Pattern Modeling and Slow-Scale Bifurcation Analysis of Two-Stage Boost Inverters

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Wan, Xiaojin; Li, Weijie; Ding, Honghui; Yi, Chuanzhi

    2017-06-01

    This paper deals with modeling and bifurcation analysis of two-stage Boost inverters. Since the effect of the nonlinear interactions between source-stage converter and load-stage inverter causes the “hidden” second-harmonic current at the input of the downstream H-bridge inverter, an observer-pattern modeling method is proposed by removing time variance originating from both fundamental frequency and hidden second harmonics in the derived averaged equations. Based on the proposed observer-pattern model, the underlying mechanism of slow-scale instability behavior is uncovered with the help of eigenvalue analysis method. Then eigenvalue sensitivity analysis is used to select some key system parameters of two-stage Boost inverter, and some behavior boundaries are given to provide some design-oriented information for optimizing the circuit. Finally, these theoretical results are verified by numerical simulations and circuit experiment.
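
The eigenvalue step applied to an observer-pattern (averaged, time-invariant) model amounts to checking the Jacobian spectrum at the operating point: slow-scale instability appears when an eigenvalue's real part crosses zero as a parameter varies. A minimal sketch (the 2x2 matrices are illustrative stand-ins, not the two-stage Boost inverter model):

```python
import numpy as np

def is_slow_scale_stable(J):
    """Local stability test: all Jacobian eigenvalues in the open
    left half-plane."""
    return bool(np.all(np.linalg.eigvals(J).real < 0))

# Illustrative Jacobians on either side of a stability boundary.
J_stable = np.array([[-50.0, 200.0], [-300.0, -20.0]])
J_unstable = np.array([[50.0, 200.0], [-300.0, -20.0]])
```

Sweeping a circuit parameter and recording where this test flips is how behavior boundaries of the kind the paper reports are traced; eigenvalue sensitivities rank which parameters move the critical eigenvalue fastest.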

  1. Progress in Modeling Global Atmospheric CO2 Fluxes and Transport: Results from Simulations with Diurnal Fluxes

    NASA Technical Reports Server (NTRS)

    Collatz, G. James; Kawa, R.

    2007-01-01

    Progress in better determining CO2 sources and sinks will almost certainly rely on utilization of more extensive and intensive CO2 and related observations including those from satellite remote sensing. Use of advanced data requires improved modeling and analysis capability. Under NASA Carbon Cycle Science support we seek to develop and integrate improved formulations for 1) atmospheric transport, 2) terrestrial uptake and release, 3) biomass and 4) fossil fuel burning, and 5) observational data analysis including inverse calculations. The transport modeling is based on meteorological data assimilation analysis from the Goddard Modeling and Assimilation Office. Use of assimilated met data enables model comparison to CO2 and other observations across a wide range of scales of variability. In this presentation we focus on the short end of the temporal variability spectrum: hourly to synoptic to seasonal. Using CO2 fluxes at varying temporal resolution from the SIB 2 and CASA biosphere models, we examine the model's ability to simulate CO2 variability in comparison to observations at different times, locations, and altitudes. We find that the model can resolve much of the variability in the observations, although there are limits imposed by vertical resolution of boundary layer processes. The influence of key process representations is inferred. The high degree of fidelity in these simulations leads us to anticipate incorporation of real-time, highly resolved observations into a multiscale carbon cycle analysis system that will begin to bridge the gap between top-down and bottom-up flux estimation, which is a primary focus of NACP.

  2. Zonal harmonic model of Saturn's magnetic field from Voyager 1 and 2 observations

    NASA Technical Reports Server (NTRS)

    Connerney, J. E. P.; Ness, N. F.; Acuna, M. H.

    1982-01-01

    An analysis of the magnetic field of Saturn is presented which takes into account both the Voyager 1 and 2 vector magnetic field observations. The analysis is based on the traditional spherical harmonic expansion of a scale potential to derive the magnetic field within 8 Saturn radii. A third-order zonal harmonic model fitted to Voyager 1 and 2 observations is found to be capable of predicting the magnetic field characteristics at one encounter based on those observed at another, unlike models including dipole and quadrupole terms only. The third-order model is noted to lead to significantly enhanced polar surface field intensities with respect to dipole models, and probably represents the axisymmetric part of a complex dynamo field.
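
For an axisymmetric (zonal) internal field, the radial component follows from the scalar potential expansion as B_r = sum_n (n+1) (a/r)^(n+2) g_n P_n(cos theta). A sketch with round illustrative coefficients (not the actual Voyager-fit values) shows how the third-order term enhances the polar surface field relative to a pure dipole:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def Br_zonal(r_over_a, theta, g):
    """Radial field of a zonal internal potential.
    g = [g1, g2, g3, ...]: Gauss coefficients (nT, illustrative here);
    theta: colatitude in radians; r_over_a: radius in planetary radii."""
    mu = np.cos(theta)
    total = 0.0
    for n, gn in enumerate(g, start=1):
        Pn = legval(mu, [0] * n + [1])          # Legendre P_n(mu)
        total += (n + 1) * r_over_a ** -(n + 2) * gn * Pn
    return total

# Illustrative coefficients: dominant dipole plus small g2, g3 terms.
g = [21000.0, 2000.0, 2000.0]
Br_pole = Br_zonal(1.0, 0.0, g)                 # surface field at the pole
Br_eq = Br_zonal(1.0, np.pi / 2, g)             # surface field at the equator
```

At the pole every P_n(1) = 1, so g2 and g3 add constructively (with factors 3 and 4), while at the equator the odd-degree terms vanish; this is the polar enhancement the abstract attributes to the third-order model.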

  3. Performance characteristics of a visual-search human-model observer with sparse PET image data

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2012-02-01

    As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.

  4. Scaling Properties of Arctic Sea Ice Deformation in a High‐Resolution Viscous‐Plastic Sea Ice Model and in Satellite Observations

    PubMed Central

    Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Abstract Sea ice models with the traditional viscous‐plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan‐Arctic sea ice‐ocean simulation, the small‐scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data. PMID:29576996

  5. Scaling Properties of Arctic Sea Ice Deformation in a High-Resolution Viscous-Plastic Sea Ice Model and in Satellite Observations

    NASA Astrophysics Data System (ADS)

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.

  6. Scaling Properties of Arctic Sea Ice Deformation in a High-Resolution Viscous-Plastic Sea Ice Model and in Satellite Observations.

    PubMed

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2018-01-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very small horizontal grid spacing can resolve leads and deformation rates localized along Linear Kinematic Features (LKF). In a 1 km pan-Arctic sea ice-ocean simulation, the small-scale sea ice deformations are evaluated with a scaling analysis in relation to satellite observations of the Envisat Geophysical Processor System (EGPS) in the Central Arctic. A new coupled scaling analysis for data on Eulerian grids is used to determine the spatial and temporal scaling and the coupling between temporal and spatial scales. The spatial scaling of the modeled sea ice deformation implies multifractality. It is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling with satellite observations challenges previous results with VP models at coarser resolution, which did not reproduce the observed scaling. The temporal scaling analysis shows that the VP model, as configured in this 1 km simulation, does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.
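
    The spatial scaling analysis described in this and the two preceding records can be sketched as follows. This is a hedged illustration, not the authors' method: deformation rates on an Eulerian grid are block-averaged to successively coarser scales, and the least-squares slope of log of the q-th moment versus log of the scale gives the scaling exponent; a nonlinear dependence of that exponent on q indicates multifractality.

```python
import math

def coarse_grain(field, b):
    """Block-average an n x n field over b x b boxes (n divisible by b)."""
    n = len(field)
    m = n // b
    out = [[0.0] * m for _ in range(m)]
    for i in range(n):
        for j in range(n):
            out[i // b][j // b] += field[i][j] / (b * b)
    return out

def scaling_exponent(field, q, scales=(1, 2, 4, 8)):
    """Least-squares slope of log<|eps|^q> versus log(scale).
    If the slope depends nonlinearly on q, the field is multifractal."""
    xs, ys = [], []
    for b in scales:
        cg = coarse_grain(field, b)
        moment = sum(abs(v) ** q for row in cg for v in row) / len(cg) ** 2
        xs.append(math.log(b))
        ys.append(math.log(moment))
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```

    A spatially uniform field gives a zero exponent at every moment order; localized deformation along LKFs makes the moments decay with scale, and increasingly so for higher q.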

  7. Observation model and parameter partials for the JPL geodetic GPS modeling software GPSOMC

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.; Border, J. S.

    1988-01-01

    The physical models employed in GPSOMC and the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities in the current report with their counterparts in the computer programs. There are no basic model revisions, with the exceptions of an improved ocean loading model and some new options for handling clock parametrization. Misprints discovered in the previous version have been corrected. Further revisions include modeling improvements and assurances that the model description is in accord with the current software.

  8. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The mathematical models of space very long baseline interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out by both analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were reached about the rank defects in the system.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokół, J. M.; Kubiak, M. A.; Bzowski, M.

    We have developed a refined and optimized version of the Warsaw Test Particle Model of interstellar neutral gas in the heliosphere, specially tailored for analysis of IBEX-Lo observations. The former version of the model was used in the analysis of neutral He observed by IBEX that resulted in an unexpected conclusion that the interstellar neutral He flow vector was different than previously thought and that a new population of neutral He, dubbed the Warm Breeze, exists in the heliosphere. It was also used in the reanalysis of Ulysses observations that confirmed the original findings on the flow vector, but suggested a significantly higher temperature. The present version of the model has two strains targeted for different applications, based on an identical paradigm, but differing in the implementation and in the treatment of ionization losses. We present the model in detail and discuss numerous effects related to the measurement process that potentially modify the resulting flux of ISN He observed by IBEX, and identify those of them that should not be omitted in the simulations to avoid biasing the results. This paper is part of a coordinated series of papers presenting the current state of analysis of IBEX-Lo observations of ISN He. Details of the analysis method are presented by Swaczyna et al. and results of the analysis are presented by Bzowski et al.

  10. Multiple model analysis with discriminatory data collection (MMA-DDC): A new method for improving measurement selection

    NASA Astrophysics Data System (ADS)

    Kikuchi, C.; Ferre, P. A.; Vrugt, J. A.

    2011-12-01

    Hydrologic models are developed, tested, and refined based on the ability of those models to explain available hydrologic data. The optimization of model performance based upon mismatch between model outputs and real world observations has been extensively studied. However, identification of plausible models is sensitive not only to the models themselves - including model structure and model parameters - but also to the location, timing, type, and number of observations used in model calibration. Therefore, careful selection of hydrologic observations has the potential to significantly improve the performance of hydrologic models. In this research, we seek to reduce prediction uncertainty through optimization of the data collection process. A new tool - multiple model analysis with discriminatory data collection (MMA-DDC) - was developed to address this challenge. In this approach, multiple hydrologic models are developed and treated as competing hypotheses. Potential new data are then evaluated on their ability to discriminate between competing hypotheses. MMA-DDC is well-suited for use in recursive mode, in which new observations are continuously used in the optimization of subsequent observations. This new approach was applied to a synthetic solute transport experiment, in which ranges of parameter values constitute the multiple hydrologic models, and model predictions are calculated using likelihood-weighted model averaging. MMA-DDC was used to determine the optimal location, timing, number, and type of new observations. From comparison with an exhaustive search of all possible observation sequences, we find that MMA-DDC consistently selects observations which lead to the highest reduction in model prediction uncertainty. We conclude that using MMA-DDC to evaluate potential observations may significantly improve the performance of hydrologic models while reducing the cost associated with collecting new data.
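
    A highly simplified sketch of the MMA-DDC idea (not the authors' implementation): competing models are weighted by their likelihood given the observations collected so far, and the next observation is chosen where the weighted spread of model predictions is largest. The discrimination criterion below is an illustrative proxy only, and all function names are hypothetical.

```python
import math

def model_weights(models, observations, sigma):
    """Likelihood weights for competing models (the multiple hypotheses).
    models: dict name -> predict(t); observations: list of (t, value)."""
    logw = {name: sum(-0.5 * ((y - predict(t)) / sigma) ** 2
                      for t, y in observations)
            for name, predict in models.items()}
    mx = max(logw.values())
    w = {k: math.exp(v - mx) for k, v in logw.items()}
    total = sum(w.values())
    return {k: v / total for k, v in w.items()}

def most_discriminating(models, weights, candidate_times):
    """Choose the candidate observation time where the likelihood-weighted
    spread of model predictions is largest -- a simple proxy for a new
    observation's power to discriminate between the hypotheses."""
    def spread(t):
        preds = {k: predict(t) for k, predict in models.items()}
        mean = sum(weights[k] * p for k, p in preds.items())
        return sum(weights[k] * (p - mean) ** 2 for k, p in preds.items())
    return max(candidate_times, key=spread)
```

    Used recursively, each new measurement updates the weights, which in turn steer the selection of the next measurement, mirroring the recursive mode described above.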

  11. A Community Data Model for Hydrologic Observations

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.; Jennings, B.

    2006-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. Hydrologic information science involves the description of hydrologic environments in a consistent way, using data models for information integration. This includes a hydrologic observations data model for the storage and retrieval of hydrologic observations in a relational database designed to facilitate data retrieval for integrated analysis of information collected by multiple investigators. It is intended to provide a standard format to facilitate the effective sharing of information between investigators and to facilitate analysis of information within a single study area or hydrologic observatory, or across hydrologic observatories and regions. The observations data model is designed to store hydrologic observations and sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used and provide traceable heritage from raw measurements to usable information. The design is based on the premise that a relational database at the single observation level is most effective for providing querying capability and cross dimension data retrieval and analysis. This premise is being tested through the implementation of a prototype hydrologic observations database, and the development of web services for the retrieval of data from and ingestion of data into the database. These web services hosted by the San Diego Supercomputer center make data in the database accessible both through a Hydrologic Data Access System portal and directly from applications software such as Excel, Matlab and ArcGIS that have Standard Object Access Protocol (SOAP) capability. 
This paper will (1) describe the data model; (2) demonstrate the capability for representing diverse data in the same database; (3) demonstrate the use of the database from applications software for the performance of hydrologic analysis across different observation types.

  12. Observation model and parameter partials for the JPL geodetic (GPS) modeling software 'GPSOMC'

    NASA Technical Reports Server (NTRS)

    Sovers, O. J.

    1990-01-01

    The physical models employed in GPSOMC, the modeling module of the GIPSY software system developed at JPL for analysis of geodetic Global Positioning Satellite (GPS) measurements are described. Details of the various contributions to range and phase observables are given, as well as the partial derivatives of the observed quantities with respect to model parameters. A glossary of parameters is provided to enable persons doing data analysis to identify quantities with their counterparts in the computer programs. The present version is the second revision of the original document which it supersedes. The modeling is expanded to provide the option of using Cartesian station coordinates; parameters for the time rates of change of universal time and polar motion are also introduced.

  13. The Sensitivity of a Global Ocean Model to Wind Forcing: A Test Using Sea Level and Wind Observations from Satellites and Operational Analysis

    NASA Technical Reports Server (NTRS)

    Fu, L. L.; Chao, Y.

    1997-01-01

    Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analyses from the National Centers for Environmental Prediction (NCEP), and observations made by the ERS-1 radar scatterometer.

  14. Isolating the anthropogenic component of Arctic warming

    DOE PAGES

    Chylek, Petr; Hengartner, Nicholas; Lesins, Glen; ...

    2014-05-28

    Structural equation modeling is used in statistical applications as both confirmatory and exploratory modeling to test models and to suggest the most plausible explanation for a relationship between the independent and the dependent variables. Although structural analysis cannot prove causation, it can suggest the most plausible set of factors that influence the observed variable. Here, we apply structural model analysis to the annual mean Arctic surface air temperature from 1900 to 2012 to find the most effective set of predictors and to isolate the anthropogenic component of the recent Arctic warming by subtracting the effects of natural forcing and variability from the observed temperature. We also find that anthropogenic greenhouse gas and aerosol radiative forcing and the Atlantic Multidecadal Oscillation internal mode dominate Arctic temperature variability. Finally, our structural model analysis of observational data suggests that about half of the recent Arctic warming of 0.64 K/decade may have anthropogenic causes.

  15. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool ...
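
    One of the performance metrics named above, the Nash-Sutcliffe efficiency, can be computed as in this minimal sketch (illustrative only, not the MPESA code itself):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; 0.0 means the model is no better than
    predicting the observed mean; negative values are worse than that."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_obs
```

    Because the denominator is the variance of the observed series, NSE is a magnitude-and-sequence measure: shuffling the simulated values in time changes the score even when their distribution is unchanged, which is exactly the distinction MPESA's magnitude/sequence separation is designed to expose.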

  16. Dynamical systems analysis of phantom dark energy models

    NASA Astrophysics Data System (ADS)

    Roy, Nandan; Bhadra, Nivedita

    2018-06-01

    In this work, we present a dynamical systems analysis of phantom dark energy models for five different potentials. From the analysis of these five potentials we find a general parametrization of the scalar field potential that is obeyed by many other potentials. Our investigation shows that there is only one fixed point that could correspond to the beginning of the universe, whereas the future evolution admits several possible outcomes. A detailed numerical analysis of the system is presented. The late-time behaviour observed in this analysis shows very good agreement with recent observations.

  17. Advancing Clouds Lifecycle Representation in Numerical Models Using Innovative Analysis Methods that Bridge ARM Observations and Models Over a Breadth of Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    2016-09-06

    This is the final report for DE-SC0007096 - Advancing Clouds Lifecycle Representation in Numerical Models Using Innovative Analysis Methods that Bridge ARM Observations and Models Over a Breadth of Scales - PI: Pavlos Kollias. The report outlines the main findings of the research conducted under this award in the area of cloud research, from the cloud scale (10-100 m) to the mesoscale (20-50 km).

  18. Observation-Oriented Modeling: Going beyond "Is It All a Matter of Chance"?

    ERIC Educational Resources Information Center

    Grice, James W.; Yepez, Maria; Wilson, Nicole L.; Shoda, Yuichi

    2017-01-01

    An alternative to null hypothesis significance testing is presented and discussed. This approach, referred to as observation-oriented modeling, is centered on model building in an effort to explicate the structures and processes believed to generate a set of observations. In terms of analysis, this novel approach complements traditional methods…

  19. Advancing coastal ocean modelling, analysis, and prediction for the US Integrated Ocean Observing System

    USGS Publications Warehouse

    Wilkin, John L.; Rosenfeld, Leslie; Allen, Arthur; Baltes, Rebecca; Baptista, Antonio; He, Ruoying; Hogan, Patrick; Kurapov, Alexander; Mehra, Avichal; Quintrell, Josie; Schwab, David; Signell, Richard; Smith, Jane

    2017-01-01

    This paper outlines strategies that would advance coastal ocean modelling, analysis and prediction as a complement to the observing and data management activities of the coastal components of the US Integrated Ocean Observing System (IOOS®) and the Global Ocean Observing System (GOOS). The views presented are the consensus of a group of US-based researchers with a cross-section of coastal oceanography and ocean modelling expertise and community representation drawn from Regional and US Federal partners in IOOS. Priorities for research and development are suggested that would enhance the value of IOOS observations through model-based synthesis, deliver better model-based information products, and assist the design, evaluation, and operation of the observing system itself. The proposed priorities are: model coupling, data assimilation, nearshore processes, cyberinfrastructure and model skill assessment, modelling for observing system design, evaluation and operation, ensemble prediction, and fast predictors. Approaches are suggested to accomplish substantial progress in a 3–8-year timeframe. In addition, the group proposes steps to promote collaboration between research and operations groups in Regional Associations, US Federal Agencies, and the international ocean research community in general that would foster coordination on scientific and technical issues, and strengthen federal–academic partnerships benefiting IOOS stakeholders and end users.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Fujita, A; Buch, K

    Purpose: To investigate the correlation between a texture analysis-based model observer and human observers in the task of diagnosing ischemic infarct in non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) at various levels (10–100%). Six neuroradiologists reviewed each image and scored image quality for diagnosing acute infarcts on a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuroradiologist placed multiple ROIs over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix, i.e., those that separate the different confidence scores most efficiently. The 15 features with the highest Fisher coefficient were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model-predicted scores and the human observer scores were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residuals were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For diagnosis of ischemic infarct in non-contrast head CT in adults, the image quality scores predicted by the texture analysis-based model observer were highly correlated with those of human observers across various noise levels. A texture-based model observer can characterize the image quality of low-contrast, subtle texture changes, complementing human observers.
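
    The Fisher coefficient used for feature selection above (ratio of between-class to within-class variance) can be sketched as follows; definitions vary in detail across texture-analysis packages, so this is one common illustrative form, not the package's implementation:

```python
def fisher_coefficient(groups):
    """Ratio of between-class variance to within-class variance for one
    feature; `groups` holds one list of feature values per class."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    between = sum(len(g) * (m - grand) ** 2
                  for g, m in zip(groups, means)) / len(all_vals)
    within = sum((v - m) ** 2
                 for g, m in zip(groups, means) for v in g) / len(all_vals)
    return between / within
```

    Well-separated classes give a large coefficient (e.g. two classes clustered near 0 and near 10 score far above two overlapping classes), which is why ranking features by this ratio surfaces the most discriminating ones.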

  1. An ocean data assimilation system and reanalysis of the World Ocean hydrophysical fields

    NASA Astrophysics Data System (ADS)

    Zelenko, A. A.; Vil'fand, R. M.; Resnyanskii, Yu. D.; Strukov, B. S.; Tsyrulnikov, M. D.; Svirenko, P. I.

    2016-07-01

    A new version of the ocean data assimilation system (ODAS) developed at the Hydrometcentre of Russia is presented. The assimilation is performed following the sequential scheme analysis-forecast-analysis. The main components of the ODAS are procedures for operational observation data processing, a variational analysis scheme, and an ocean general circulation model used to estimate the first guess fields involved in the analysis. In situ observations of temperature and salinity in the upper 1400-m ocean layer obtained from various observational platforms are used as input data. In the new ODAS version, the horizontal resolution of the assimilating model and of the output products is increased, the previous 2D-Var analysis scheme is replaced by a more general 3D-Var scheme, and a more flexible incremental analysis updating procedure is introduced to correct the model calculations. A reanalysis of the main World Ocean hydrophysical fields over the 2005-2015 period has been performed using the updated ODAS. The reanalysis results are compared with data from independent sources.

  2. Testing deformation hypotheses by constraints on a time series of geodetic observations

    NASA Astrophysics Data System (ADS)

    Velsink, Hiddo

    2018-01-01

    In geodetic deformation analysis observations are used to identify form and size changes of a geodetic network, representing objects on the earth's surface. The network points are monitored, often continuously, because of suspected deformations. A deformation may affect many points during many epochs. The problem is that the best description of the deformation is, in general, unknown. To find it, different hypothesised deformation models have to be tested systematically for agreement with the observations. The tests have to be capable of stating with a certain probability the size of detectable deformations, and to be datum invariant. A statistical criterion is needed to find the best deformation model. Existing methods do not fulfil these requirements. Here we propose a method that formulates the different hypotheses as sets of constraints on the parameters of a least-squares adjustment model. The constraints can relate to subsets of epochs and to subsets of points, thus combining time series analysis and congruence model analysis. The constraints are formulated as nonstochastic observations in an adjustment model of observation equations. This gives an easy way to test the constraints and to get a quality description. The proposed method aims at providing a good discriminating method to find the best description of a deformation. The method is expected to improve the quality of geodetic deformation analysis. We demonstrate the method with an elaborate example.
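
    The core idea above, formulating a deformation hypothesis as constraints expressed as nonstochastic observations, can be sketched in a toy least-squares setting: the constraint row is appended to the design matrix as a pseudo-observation with a very large weight, so the adjustment honours it nearly exactly. This is an illustrative simplification, not the paper's full method (which includes testing and quality description); all names here are hypothetical.

```python
def solve(M, v):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def constrained_wls(A, y, constraint_row, constraint_val, big=1e8):
    """Least squares min ||A x - y||^2 with the hypothesis c^T x = v
    imposed as a nonstochastic observation of very large weight."""
    A = A + [constraint_row]           # append the constraint as a row
    y = y + [constraint_val]
    w = [1.0] * (len(y) - 1) + [big]   # near-infinite weight on it
    n = len(A[0])
    N = [[sum(w[k] * A[k][i] * A[k][j] for k in range(len(A)))
          for j in range(n)] for i in range(n)]
    b = [sum(w[k] * A[k][i] * y[k] for k in range(len(A))) for i in range(n)]
    return solve(N, b)
```

    Fitting a line to (0, 1), (1, 2), (2, 3) under the hypothesis "slope = 0" drives the slope estimate to (nearly) zero and the intercept to the observation mean, exactly the behaviour one wants when testing a no-deformation hypothesis against the data.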

  3. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets to improve model simulations and reduce variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet site history, analysis of changes in model structure, and a more objective procedure for model calibration should be included in further analyses.

  4. Meta-analysis of high-latitude nitrogen-addition and warming studies imply ecological mechanisms overlooked by land models

    NASA Astrophysics Data System (ADS)

    Bouskill, N. J.; Riley, W. J.; Tang, J.

    2014-08-01

    Accurate representation of ecosystem processes in land models is crucial for reducing predictive uncertainty in energy and greenhouse gas feedbacks with the atmosphere. Here we describe an observational and modeling meta-analysis approach to benchmark land models, and apply the method to the land model CLM4.5 with two versions of belowground biogeochemistry. We focused our analysis on the above- and belowground high-latitude ecosystem responses to warming and nitrogen addition, and identified mechanisms absent, or poorly parameterized, in CLM4.5. While the two model versions predicted similar trajectories for soil carbon stocks following both types of perturbation, other variables (e.g., belowground respiration) differed from the observations in both magnitude and direction, indicating the underlying mechanisms are inadequate for representing high-latitude ecosystems. The observational synthesis attributes these differences to missing representations of microbial dynamics, characterization of above- and belowground functional processes, and nutrient competition. We use the observational meta-analyses to discuss potential approaches to improving the current models (e.g., the inclusion of dynamic vegetation or different microbial functional guilds); however, we also raise a cautionary note on the selection of data sets and experiments to be included in a meta-analysis. For example, the concentrations of nitrogen applied in the synthesized field experiments (average = 72 kg ha-1 yr-1) are many times higher than projected soil nitrogen concentrations (from nitrogen deposition and release during mineralization), which precludes a rigorous evaluation of the model responses to nitrogen perturbation. Overall, we demonstrate here that elucidating ecological mechanisms via meta-analysis can identify deficiencies in both ecosystem models and empirical experiments.

  5. Meta-analysis of high-latitude nitrogen-addition and warming studies imply ecological mechanisms overlooked by land models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouskill, N. J.; Riley, W. J.; Tang, J.

    2014-08-18

    Accurate representation of ecosystem processes in land models is crucial for reducing predictive uncertainty in energy and greenhouse gas feedbacks with the atmosphere. Here we describe an observational and modeling meta-analysis approach to benchmark land models, and apply the method to the land model CLM4.5 with two versions of belowground biogeochemistry. We focused our analysis on the above- and belowground high-latitude ecosystem responses to warming and nitrogen addition, and identified mechanisms absent, or poorly parameterized, in CLM4.5. While the two model versions predicted similar trajectories for soil carbon stocks following both types of perturbation, other variables (e.g., belowground respiration) differed from the observations in both magnitude and direction, indicating the underlying mechanisms are inadequate for representing high-latitude ecosystems. The observational synthesis attributes these differences to missing representations of microbial dynamics, characterization of above- and belowground functional processes, and nutrient competition. We use the observational meta-analyses to discuss potential approaches to improving the current models (e.g., the inclusion of dynamic vegetation or different microbial functional guilds); however, we also raise a cautionary note on the selection of data sets and experiments to be included in a meta-analysis. For example, the concentrations of nitrogen applied in the synthesized field experiments (average = 72 kg ha-1 yr-1) are many times higher than projected soil nitrogen concentrations (from nitrogen deposition and release during mineralization), which precludes a rigorous evaluation of the model responses to nitrogen perturbation. Overall, we demonstrate here that elucidating ecological mechanisms via meta-analysis can identify deficiencies in both ecosystem models and empirical experiments.

  6. Mixture modelling for cluster analysis.

    PubMed

    McLachlan, G J; Chang, S U

    2004-10-01

    Cluster analysis via a finite mixture model approach is considered. With this approach to clustering, the data can be partitioned into a specified number of clusters g by first fitting a mixture model with g components. An outright clustering of the data is then obtained by assigning an observation to the component to which it has the highest estimated posterior probability of belonging; that is, the ith cluster consists of those observations assigned to the ith component (i = 1,..., g). The focus is on the use of mixtures of normal components for the cluster analysis of data that can be regarded as being continuous. But attention is also given to the case of mixed data, where the observations consist of both continuous and discrete variables.
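
    The assignment rule described above (fit a mixture with g components, then give each observation to the component with the highest estimated posterior probability) can be sketched with scikit-learn's GaussianMixture; the two-cluster synthetic data here are illustrative, not from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters of continuous data
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

g = 2  # number of mixture components = number of clusters
gm = GaussianMixture(n_components=g, random_state=0).fit(X)

post = gm.predict_proba(X)    # estimated posterior membership probabilities, shape (n, g)
labels = post.argmax(axis=1)  # outright clustering: most probable component per observation
```

    For mixed continuous/discrete data, the same assignment rule applies once a suitable joint component density is chosen.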

  7. Data Assimilation Cycling for Weather Analysis

    NASA Technical Reports Server (NTRS)

    Tran, Nam; Li, Yongzuo; Fitzpatrick, Patrick

    2008-01-01

    This software package runs the atmospheric model MM5 in data assimilation cycling mode to produce an optimized weather analysis, including the ability to insert or adjust a hurricane vortex. The program runs MM5 through a cycle of short forecasts every three hours, where the vortex is adjusted to match the observed hurricane location and storm intensity. This technique adjusts the surrounding environment so that the proper steering current and environmental shear are achieved. MM5cycle uses a Cressman analysis to blend observations into model fields to obtain a more accurate weather analysis. Quality control of observations is also performed in every cycle to remove bad data that may contaminate the analysis. This technique can assimilate and propagate data in time from intermittent and infrequent observations while maintaining the atmospheric field in a dynamically balanced state. The software consists of a C-shell script (MM5cycle.driver) and three FORTRAN programs (splitMM5files.F, comRegrid.F, and insert_vortex.F), which are contained in the pre-processor component of MM5 called "Regridder." The model is first initialized with data from a global model such as the Global Forecast System (GFS), which also provides lateral boundary conditions. These data are separated into single-time files using splitMM5files.F. The hurricane vortex is then bogussed in the correct location and with the correct wind field using insert_vortex.F. The modified initial and boundary conditions are then recombined into the model fields using comRegrid.F. The model then makes a three-hour forecast. The three-hour forecast data from MM5 then become the analysis for the next short forecast run, where the vortex will again be adjusted. The process repeats until the desired time of analysis is reached. This code can also assimilate observations if desired.
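
    The Cressman blending step can be illustrated in a few lines. This is a generic single-pass Cressman successive-correction scheme (weights (R² − r²)/(R² + r²) inside an influence radius R), not the actual MM5cycle code:

```python
import numpy as np

def cressman_correction(grid_xy, bg, obs_xy, obs_val, obs_bg, R):
    """Single-pass Cressman successive correction of a background field.

    grid_xy : (m, 2) grid-point coordinates
    bg      : (m,)   background values at grid points
    obs_xy  : (n, 2) observation locations
    obs_val : (n,)   observed values
    obs_bg  : (n,)   background interpolated to observation locations
    R       : influence radius
    """
    innov = obs_val - obs_bg          # observation-minus-background increments
    analysis = bg.copy()
    for k, (x, y) in enumerate(grid_xy):
        r2 = np.sum((obs_xy - (x, y)) ** 2, axis=1)
        mask = r2 < R ** 2
        if mask.any():
            # Cressman weight: 1 at the observation, 0 at the influence radius
            w = (R ** 2 - r2[mask]) / (R ** 2 + r2[mask])
            analysis[k] += np.sum(w * innov[mask]) / np.sum(w)
    return analysis
```

    A grid point collocated with an observation is drawn fully toward it, while points beyond the radius are left unchanged; operational schemes typically repeat the pass with shrinking radii.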

  8. A random effects meta-analysis model with Box-Cox transformation.

    PubMed

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, the true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly, according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model can be inappropriate summaries, and the proposed model helps reduce this issue. We illustrate the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results against skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
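
    A minimal sketch of the transformation idea, assuming SciPy and synthetic skewed effect estimates (the paper's actual model is Bayesian and also accounts for the within-study variances):

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
# Skewed "observed treatment effect estimates" (Box-Cox requires positive values)
effects = rng.lognormal(mean=0.5, sigma=0.6, size=40)

# Choose the Box-Cox lambda by maximum likelihood to normalise the overall distribution
z, lam = stats.boxcox(effects)

# Pool on the (approximately normal) transformed scale, then back-transform;
# the back-transformed mean acts as a median-type summary on the original scale
pooled_median = inv_boxcox(z.mean(), lam)
```

    For strongly skewed data, summarising by this back-transformed centre plus quantile-based intervals avoids the misleading symmetric summaries of the plain normal model.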

  9. The GLOBE Contrail Protocol: Initial Analysis of Results

    NASA Technical Reports Server (NTRS)

    Chambers, Lin; Duda, David

    2004-01-01

    The GLOBE contrail protocol was launched in March 2003 to obtain surface observer reports of contrail occurrence to complement satellite and model studies underway at NASA Langley, among others. During the first year, more than 30,000 ground observations of contrails were submitted to GLOBE. An initial analysis comparing the GLOBE observations to weather prediction model results for relative humidity at flight altitudes is in progress. This paper reports on the findings to date from this effort.

  10. An Observation-Driven Agent-Based Modeling and Analysis Framework for C. elegans Embryogenesis.

    PubMed

    Wang, Zi; Ramsey, Benjamin J; Wang, Dali; Wong, Kwai; Li, Husheng; Wang, Eric; Bao, Zhirong

    2016-01-01

    With cutting-edge live microscopy and image analysis, biologists can now systematically track individual cells in complex tissues and quantify cellular behavior over extended time windows. Computational approaches that utilize the systematic and quantitative data are needed to understand how cells interact in vivo to give rise to the different cell types and 3D morphology of tissues. An agent-based, minimum descriptive modeling and analysis framework is presented in this paper to study C. elegans embryogenesis. The framework is designed to incorporate the large amounts of experimental observations on cellular behavior and reserve data structures/interfaces that allow regulatory mechanisms to be added as more insights are gained. Observed cellular behaviors are organized into lineage identity, timing and direction of cell division, and path of cell movement. The framework also includes global parameters such as the eggshell and a clock. Division and movement behaviors are driven by statistical models of the observations. Data structures/interfaces are reserved for gene list, cell-cell interaction, cell fate and landscape, and other global parameters until the descriptive model is replaced by a regulatory mechanism. This approach provides a framework to handle the ongoing experiments of single-cell analysis of complex tissues where mechanistic insights lag data collection and need to be validated on complex observations.

  11. Application of Multiple Imputation for Missing Values in Three-Way Three-Mode Multi-Environment Trial Data

    PubMed Central

    Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.

    2015-01-01

    It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the observation with missing values. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis, but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance. PMID:26689369
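
    The nearest-neighbour donor idea behind the clustering approach can be sketched as a single (not multiple) imputation in NumPy; this simplified version ignores the random attribute selection and the three-way array structure:

```python
import numpy as np

def nn_impute(X):
    """Fill each missing entry with the value from the nearest fully observed
    row, measuring distance on the columns the incomplete row does observe."""
    X = np.asarray(X, dtype=float).copy()
    complete = X[~np.isnan(X).any(axis=1)]   # candidate donor rows
    for i in range(X.shape[0]):
        miss = np.isnan(X[i])
        if miss.any():
            # squared distance to each donor on the observed columns
            d = ((complete[:, ~miss] - X[i, ~miss]) ** 2).mean(axis=1)
            X[i, miss] = complete[d.argmin(), miss]
    return X
```

    Multiple imputation would repeat such a fill several times with randomised donors and pool the downstream estimates.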

  13. Scaling properties of Arctic sea ice deformation in high-resolution viscous-plastic sea ice models and satellite observations

    NASA Astrophysics Data System (ADS)

    Hutter, Nils; Losch, Martin; Menemenlis, Dimitris

    2017-04-01

    Sea ice models with the traditional viscous-plastic (VP) rheology and very high grid resolution can resolve leads and deformation rates that are localised along Linear Kinematic Features (LKFs). In a 1-km pan-Arctic sea ice-ocean simulation, the small-scale sea-ice deformations in the Central Arctic are evaluated with a scaling analysis in relation to satellite observations from the Envisat Geophysical Processor System (EGPS). A new coupled scaling analysis for data on Eulerian grids determines the spatial and the temporal scaling, as well as the coupling between temporal and spatial scales. The spatial scaling of the modelled sea ice deformation implies multi-fractality. The spatial scaling is also coupled to temporal scales and varies realistically by region and season. The agreement of the spatial scaling and its coupling to temporal scales with satellite observations and with models using the modern elasto-brittle rheology challenges previous results with VP models at coarse resolution, where no such scaling was found. The temporal scaling analysis, however, shows that the VP model does not fully resolve the intermittency of sea ice deformation that is observed in satellite data.
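
    A spatial scaling analysis of this kind boils down to coarse-graining the deformation field and fitting a power law to its moments. A minimal sketch on a synthetic field, using only the second-order moment (assessing multifractality would require comparing exponents across several moment orders):

```python
import numpy as np

def spatial_scaling(field, scales, q=2):
    """Moment of order q of the block-averaged field at each scale L, and the
    exponent beta(q) in <|eps_L|**q> ~ L**(-beta(q)) from a log-log fit."""
    moments = []
    for L in scales:
        n0 = (field.shape[0] // L) * L
        n1 = (field.shape[1] // L) * L
        # block-average (coarse-grain) the field to scale L
        blocks = field[:n0, :n1].reshape(n0 // L, L, n1 // L, L).mean(axis=(1, 3))
        moments.append((np.abs(blocks) ** q).mean())
    slope = np.polyfit(np.log(scales), np.log(moments), 1)[0]
    return np.array(moments), -slope
```

    A perfectly localised field gives a steep exponent, a uniform field gives zero; observed sea-ice deformation falls in between.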

  14. Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle

    NASA Technical Reports Server (NTRS)

    Grosvenor, Sandy; Jones, Jeremy; Koratkar, Anuradha; Li, Connie; Mackey, Jennifer; Neher, Ken; Wolf, Karl; Obenschain, Arthur F. (Technical Monitor)

    2001-01-01

    A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations more efficiently. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper examines the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what have been its successes and challenges.

  15. Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle

    NASA Technical Reports Server (NTRS)

    Jones, Jeremy; Grosvenor, Sandy; Wolf, Karl; Li, Connie; Koratkar, Anuradha; Powers, Edward I. (Technical Monitor)

    2001-01-01

    A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper will examine the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what have been its successes and challenges.

  16. Model accuracy impact through rescaled observations in hydrological data assimilation studies

    USDA-ARS?s Scientific Manuscript database

    Signal and noise time-series variability of soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly. Optimality of the analysis obtained after observations are assimilated into the model depends on the degree that the differences between the signal variances of model and observa...

  17. Immortal time bias in observational studies of time-to-event outcomes.

    PubMed

    Jones, Mark; Fowler, Robert

    2016-12-01

    The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare 4 methods of analysis for observational studies of time-to-event outcomes: logistic regression, standard Cox model, landmark analysis, and time-dependent Cox model using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir provides protection from mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken account of using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
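
    The direction and magnitude of immortal time bias can be reproduced with a tiny simulation: under a null treatment effect, a naive "ever-treated" comparison looks protective because treated patients must survive until treatment, whereas a landmark classification (one of the methods compared above) removes the guaranteed person-time. Synthetic exponential times, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
death = rng.exponential(10, n)   # event times under the NULL: treatment does nothing
t_rx = rng.exponential(5, n)     # time at which treatment would be given
treated = t_rx < death           # only patients still alive ever receive treatment

# Naive ever-treated comparison: treated patients are guaranteed to survive
# past t_rx (immortal time), so treatment appears spuriously protective
naive_gap = death[treated].mean() - death[~treated].mean()

# Landmark analysis at t0: keep only patients alive at t0 and classify them
# by treatment status at t0, eliminating the immortal time
t0 = 5.0
alive = death > t0
rx = t_rx < t0
landmark_gap = death[alive & rx].mean() - death[alive & ~rx].mean()
```

    With memoryless event times the landmark comparison is approximately unbiased here; as the abstract notes, landmark analysis can still be biased in other scenarios, and a time-dependent Cox model is the more general fix.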

  18. A Cyber-Attack Detection Model Based on Multivariate Analyses

    NASA Astrophysics Data System (ADS)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequence via quantification method IV, and collect similar audit event sequences into the same groups based on the cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.

  19. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2011-01-01

    The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. Key features: (1) a range of satellite data products and surface observations used to generate the land analysis products; (2) global, 1/4 deg spatial resolution; (3) model analysis generated at 3-hour intervals.

  20. Toward improving hurricane forecasts using the JPL Tropical Cyclone Information System (TCIS): A framework to address the issues of Big Data

    NASA Astrophysics Data System (ADS)

    Hristova-Veleva, S. M.; Boothe, M.; Gopalakrishnan, S.; Haddad, Z. S.; Knosp, B.; Lambrigtsen, B.; Li, P.; montgomery, M. T.; Niamsuwan, N.; Tallapragada, V. S.; Tanelli, S.; Turk, J.; Vukicevic, T.

    2013-12-01

    Accurate forecasting of extreme weather requires the use of both regional models and global General Circulation Models (GCMs). The regional models have higher resolution and more accurate physics - two critical components needed for properly representing the key convective processes. GCMs, on the other hand, have better depiction of the large-scale environment and, thus, are necessary for properly capturing the important scale interactions. But how to evaluate the models, understand their shortcomings and improve them? Satellite observations can provide invaluable information. And this is where the issues of Big Data come in: satellite observations are very complex and have large variety, while model forecasts are very voluminous. We are developing a system - TCIS - that addresses the issues of model evaluation and process understanding with the goal of improving the accuracy of hurricane forecasts. This NASA/ESTO/AIST-funded project aims at bringing satellite/airborne observations and model forecasts into a common system and developing on-line tools for joint analysis. To properly evaluate the models we go beyond the comparison of the geophysical fields. We input the model fields into instrument simulators (NEOS3, CRTM, etc.) and compute synthetic observations for a more direct comparison to the observed parameters. In this presentation we will start by describing the scientific questions. We will then outline our current framework for fusing models and observations. Next, we will illustrate how the system can be used to evaluate several models (HWRF, GFS, ECMWF) by applying a couple of our analysis tools to several hurricanes observed during the 2013 season. Finally, we will outline our future plans. Our goal is to go beyond image comparison and point-by-point statistics, focusing instead on understanding multi-parameter correlations and providing robust statistics.
    By developing on-line analysis tools, our framework will allow for consistent model evaluation, providing results that are much more robust than those produced by case studies - the current paradigm imposed by the Big Data issues (voluminous data and incompatible analysis tools). We believe that this collaborative approach, with contributions of models, observations and analysis approaches used by the research and operational communities, will help untangle the complex interactions that lead to hurricane genesis and rapid intensity changes - two processes that still pose many unanswered questions. The developed framework for evaluation of the global models will also have implications for the improvement of climate models, which output only a limited amount of information, making it difficult to evaluate them. Our TCIS will help by investigating the GCMs under current weather scenarios and with much more detailed model output, making it possible to compare the models to multiple observed parameters and help narrow down the uncertainty in their performance. This knowledge could then be transferred to the climate models to lower the uncertainty in their predictions. The work described here was performed at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

  1. Uncertainty in Operational Atmospheric Analyses and Re-Analyses

    NASA Astrophysics Data System (ADS)

    Langland, R.; Maue, R. N.

    2016-12-01

    This talk will describe uncertainty in atmospheric analyses of wind and temperature produced by operational forecast models and in re-analysis products. Because the "true" atmospheric state cannot be precisely quantified, there is necessarily error in every atmospheric analysis, and this error can be estimated by computing differences (variance and bias) between analysis products produced at various centers (e.g., ECMWF, NCEP, U.S. Navy, etc.) that use independent data assimilation procedures, somewhat different sets of atmospheric observations, and forecast models with different resolutions, dynamical equations, and physical parameterizations. These estimates of analysis uncertainty provide a useful proxy for actual analysis error. For this study, we use a unique multi-year and multi-model data archive developed at NRL-Monterey. It will be shown that current uncertainty in atmospheric analyses is closely correlated with the geographic distribution of assimilated in-situ atmospheric observations, especially those provided by high-accuracy radiosonde and commercial aircraft observations. The lowest atmospheric analysis uncertainty is found over North America, Europe and Eastern Asia, which have the largest numbers of radiosonde and commercial aircraft observations. Analysis uncertainty is substantially larger (by factors of two to three) in most of the Southern hemisphere, the North Pacific ocean, and under-developed nations of Africa and South America where there are few radiosonde or commercial aircraft data. It appears that in regions where atmospheric analyses depend primarily on satellite radiance observations, analysis uncertainty of both temperature and wind remains relatively high compared to values found over North America and Europe.
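
    The proxy described above (differences between independently produced analyses) can be sketched in a few lines: if two centers' errors are independent with similar variance s², the variance of their difference is about 2s². Illustrative only:

```python
import numpy as np

def analysis_uncertainty(a1, a2):
    """Proxy error statistics from two independently produced analyses of the
    same field: mean difference (relative bias) and an estimate of the
    per-analysis error standard deviation, assuming the two centers' errors
    are independent with similar variance (var(diff) ~ 2 * s**2)."""
    d = (np.asarray(a1) - np.asarray(a2)).ravel()
    bias = d.mean()
    sigma = np.sqrt(d.var(ddof=1) / 2.0)
    return bias, sigma
```

    Mapping such statistics gridpoint by gridpoint reproduces the geographic pattern discussed in the abstract: small spread where in-situ observations are dense, large spread where they are sparse.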

  2. Gaze distribution analysis and saliency prediction across age groups.

    PubMed

    Krishna, Onkar; Helo, Andrea; Rämä, Pia; Aizawa, Kiyoharu

    2018-01-01

    Knowledge of the human visual system helps to develop better computational models of visual attention. State-of-the-art models have been developed to mimic the visual attention system of young adults but largely ignore the variations that occur with age. In this paper, we investigated how visual scene processing changes with age, and we propose an age-adapted framework that helps to develop a computational model that can predict saliency across different age groups. Our analysis uncovers how the explorativeness of an observer varies with age, how well saliency maps of an age group agree with fixation points of observers from the same or different age groups, and how age influences the center bias tendency. We analyzed the eye movement behavior of 82 observers belonging to four age groups while they explored visual scenes. Explorativeness was quantified in terms of the entropy of a saliency map, and the area under the curve (AUC) metric was used to quantify the agreement analysis and the center bias tendency. The analysis results were used to develop age-adapted saliency models. Our results suggest that the proposed age-adapted saliency model outperforms existing saliency models in predicting the regions of interest across age groups.
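
    The two quantities used in the analysis, saliency-map entropy (explorativeness) and a fixation AUC, can be sketched with NumPy; the exact AUC variant the authors used may differ from this simple rank-based version:

```python
import numpy as np

def map_entropy(sal):
    """Shannon entropy (bits) of a saliency map normalised to a probability
    map; higher entropy = more spread-out (more explorative) prediction."""
    p = sal.ravel() / sal.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def fixation_auc(sal, fix_mask):
    """AUC of the saliency map as a classifier of fixated vs non-fixated
    pixels: probability a random fixated pixel outranks a non-fixated one."""
    s = sal.ravel()
    y = fix_mask.ravel().astype(bool)
    pos, neg = s[y], s[~y]
    wins = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties
```

    A uniform map has maximal entropy and chance-level AUC (0.5); a map that concentrates mass exactly on the fixations scores near 1.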

  3. Earth Observing System (EOS) Communication (Ecom) Modeling, Analysis, and Testbed (EMAT) activity

    NASA Technical Reports Server (NTRS)

    Desai, Vishal

    1994-01-01

    This paper describes the Earth Observing System (EOS) Communication (Ecom) Modeling, Analysis, and Testbed (EMAT) activity performed by Code 540 in support of the Ecom project. Ecom is the ground-to-ground data transport system for operational EOS traffic. The National Aeronautics and Space Administration (NASA) Communications (Nascom) Division, Code 540, is responsible for implementing Ecom. Ecom interfaces with various systems to transport EOS forward link commands, return link telemetry, and science payload data. To understand the complexities surrounding the design and implementation of Ecom, it is necessary that sufficient testbedding, modeling, and analysis be conducted prior to the design phase. These activities, when grouped, are referred to as the EMAT activity. This paper describes work accomplished to date in each of the three major EMAT activities: modeling, analysis, and testbedding.

  4. Validating the WRF-Chem model for wind energy applications using High Resolution Doppler Lidar data from a Utah 2012 field campaign

    NASA Astrophysics Data System (ADS)

    Mitchell, M. J.; Pichugina, Y. L.; Banta, R. M.

    2015-12-01

    Models are important tools for assessing the potential of wind energy sites, but the accuracy of these projections has not been properly validated. In this study, High Resolution Doppler Lidar (HRDL) data obtained with high temporal and spatial resolution at heights of modern turbine rotors were compared to output from the WRF-Chem model in order to help improve the performance of the model in producing accurate wind forecasts for the industry. HRDL data were collected from January 23 to March 1, 2012 during the Uintah Basin Winter Ozone Study (UBWOS) field campaign. The model validation method was based on qualitative comparison of the wind field images, time-series analysis and statistical analysis of the observed and modeled wind speed and direction, both for case studies and for the whole experiment. To compare the WRF-Chem model output to the HRDL observations, the model heights and forecast times were interpolated to match the observed times and heights. Then, time-height cross-sections of the HRDL and WRF-Chem wind speed and directions were plotted to select case studies. Cross-sections of the differences between the observed and forecasted wind speed and directions were also plotted to visually analyze the model performance in different wind flow conditions. The statistical analysis includes the calculation of vertical profiles and time series of bias, correlation coefficient, root mean squared error, and coefficient of determination between the two datasets. The results from this analysis reveal where and when the model typically struggles in forecasting winds at heights of modern turbine rotors, so that in the future the model can be improved for the industry.
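
    The statistical part of such a validation (bias, RMSE, correlation, and coefficient of determination between observed and modelled winds) can be sketched as follows; pairwise NaN handling stands in for lidar data gaps:

```python
import numpy as np

def verification_stats(obs, mod):
    """Bias, RMSE, Pearson correlation, and coefficient of determination
    between observed and modelled values, excluding NaN pairs."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    ok = ~(np.isnan(obs) | np.isnan(mod))
    o, m = obs[ok], mod[ok]
    err = m - o
    bias = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    r = np.corrcoef(o, m)[0, 1]
    r2 = 1.0 - (err ** 2).sum() / ((o - o.mean()) ** 2).sum()
    return bias, rmse, r, r2
```

    Computing these per height level and per forecast hour yields the vertical profiles and time series described in the abstract.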

  5. A comparison of correlation-length estimation methods for the objective analysis of surface pollutants at Environment and Climate Change Canada.

    PubMed

    Ménard, Richard; Deshaies-Jacques, Martin; Gasset, Nicolas

    2016-09-01

    An objective analysis is one of the main components of data assimilation. By combining observations with the output of a predictive model, we combine the best features of each source of information: the complete spatial and temporal coverage provided by models, with a close representation of the truth provided by observations. The process of combining observations with a model output is called an analysis. Producing an analysis requires knowledge of the observation and model errors, as well as their spatial correlation. This paper is devoted to developing methods for estimating these error variances and the characteristic length-scale of the model error correlation for operational use in the Canadian objective analysis system. We first argue in favor of using compact-support correlation functions, and then introduce three estimation methods: the Hollingsworth-Lönnberg (HL) method in local and global form, the maximum likelihood method (ML), and the [Formula: see text] diagnostic method. We perform one-dimensional (1D) simulation studies where the error variance and true correlation length are known, and estimate both error variances and correlation length where both are non-uniform. We show that a local version of the HL method can accurately capture the error variances and correlation length at each observation site, provided that spatial variability is not too strong. However, the operational objective analysis requires only a single, globally valid correlation length. We examine whether any statistic of the local HL correlation lengths could serve as a useful estimate, or whether other global estimation methods, such as the global HL, ML, or [Formula: see text] methods, should be used. We found, in both 1D simulations and with real data, that the ML method is able to capture physically significant aspects of the correlation length, while most other estimates give unphysical and larger length-scale values.
    This paper describes a proposed improvement of the objective analysis of surface pollutants at Environment and Climate Change Canada (formerly known as Environment Canada). Objective analyses are essentially surface maps of air pollutants that are obtained by combining observations with an air quality model output, and are thought to provide a complete and more accurate representation of the air quality. The highlight of this study is an analysis of methods to estimate the model (or background) error correlation length-scale. The error statistics are an important and critical component of the analysis scheme.
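
    A strongly simplified sketch of fitting a correlation length to binned innovation correlations, in the spirit of a local HL fit (the full HL method additionally uses the r → 0 intercept to separate observation error from background error; here we just fit an exponential model c(r) = exp(-r/L)):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_correlation_length(r, c):
    """Fit c(r) = exp(-r / L) to binned innovation correlations and
    return the estimated correlation length L."""
    popt, _ = curve_fit(lambda r, L: np.exp(-r / L), r, c, p0=[1.0])
    return popt[0]
```

    In practice one would replace the exponential with the operational compact-support correlation function and fit per observation site (local HL) or to pooled statistics (global HL).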

  6. Modeling abundance using multinomial N-mixture models

    USGS Publications Warehouse

    Royle, Andy

    2016-01-01

    Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 that allows for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols, such as multiple-observer sampling, removal sampling, and capture-recapture, produce multivariate count frequencies that have a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically yield more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as models Mb and Mh, as well as other classes of models that can only be described within the multinomial N-mixture framework.
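
    As an illustration of why such protocols are informative, here is a hedged sketch (all parameter values hypothetical) of a removal-sampling multinomial N-mixture likelihood: with Poisson abundance, the multinomial pass counts marginalize to independent Poissons, so the likelihood can be maximized directly. This is a generic sketch, not code from the book or from the unmarked package:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Removal-sampling simulation (hypothetical parameter values)
n_sites, K = 200, 3
lam_true, p_true = 50.0, 0.4
N = rng.poisson(lam_true, n_sites)
y = np.zeros((n_sites, K), dtype=int)
remaining = N.copy()
for k in range(K):                      # each pass removes the detected individuals
    y[:, k] = rng.binomial(remaining, p_true)
    remaining -= y[:, k]

def nll(theta):
    # For Poisson abundance, multinomial thinning makes the pass counts
    # independent Poisson(lam * pi_k) with pi_k = p * (1-p)^(k-1).
    lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
    pi = p * (1.0 - p) ** np.arange(K)
    mu = lam * pi
    return -np.sum(y * np.log(mu) - mu - gammaln(y + 1.0))

fit = minimize(nll, x0=[np.log(20.0), 0.0], method="BFGS")
lam_hat = np.exp(fit.x[0])
p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
print(lam_hat, p_hat)
```

    Because the removal pattern carries direct information about detection, both abundance and detection probability are separately estimable from the counts alone.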

  7. Observational data needs useful for modeling the coma

    NASA Technical Reports Server (NTRS)

    Huebner, W. F.; Giguere, P. T.

    1981-01-01

    A computer model of comet comae is described; results from assumed compositions of frozen gases are summarized and compared to coma observations. Restrictions on the relative abundances of some frozen constituents are illustrated. Modeling, when tightly coupled to observational data, can be important for comprehensive analysis of observations, for predicting undetected molecular species, and for improved understanding of the coma and nucleus. To accomplish this, total gas production rates and relative elemental abundances of H:C:N:O:S are needed as a function of the heliocentric distance of the comet. Also needed are relative column densities and column density profiles with well-defined diaphragm range and pointing position on the coma. Production rates are less desirable since they are model-dependent. Total numbers (or upper limits) of molecules in the coma and analysis of unidentified spectral lines are also needed.

  8. The GEOS Ozone Data Assimilation System: Specification of Error Statistics

    NASA Technical Reports Server (NTRS)

    Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.

    2000-01-01

    A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular of the forecast and observation error covariance models, is given. A new global anisotropic horizontal forecast error correlation model accounts for the varying distribution of observations with latitude; correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance is modeled as proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation, and the error covariance models are validated using χ² statistics. The analyzed ozone fields for winter 1992 are validated against independent observations from ozone sondes and the Halogen Occultation Experiment (HALOE). There is better than 10% agreement between mean HALOE and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%, and the global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.
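
    The χ² validation rests on a standard consistency property: if the assumed covariances are correct, the innovation statistic dᵀ(HBHᵀ + R)⁻¹d has expectation equal to the number of observations m. A toy numerical check (the covariance choices below are hypothetical, not the GEOS system's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy chi-square consistency check: if the assumed covariances are right,
# E[ d^T (HBH^T + R)^{-1} d ] = m for innovation vectors d of length m.
m = 40
sep = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
HBHt = 0.8 ** sep                 # assumed background error covariance (obs space)
R = 0.5 * np.eye(m)               # assumed observation error covariance
S = HBHt + R
Sinv = np.linalg.inv(S)
chol = np.linalg.cholesky(S)

J = np.empty(2000)
for i in range(2000):
    d = chol @ rng.standard_normal(m)   # innovations drawn with the true covariance
    J[i] = d @ Sinv @ d
print(np.mean(J) / m)                   # close to 1: covariances are consistent

# Understating R inflates the normalized statistic above 1
Sinv_bad = np.linalg.inv(HBHt + 0.1 * np.eye(m))
J_bad = np.empty(2000)
for i in range(2000):
    d = chol @ rng.standard_normal(m)
    J_bad[i] = d @ Sinv_bad @ d
print(np.mean(J_bad) / m)
```

    A normalized statistic far from 1 therefore flags mis-specified error covariance parameters.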

  9. Observation uncertainty in reversible Markov chains.

    PubMed

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model that captures the essential dynamical behavior of a real-life process. If the essential dynamics can be assumed to be (approximately) memoryless, then a reasonable choice of model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Markov chain Monte Carlo framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+).
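
    A simplified sketch of the idea: with a Dirichlet prior, the rows of the transition matrix have conjugate posteriors given the observed transition counts, so posterior samples propagate naturally to observables such as the stationary distribution. The paper's Gibbs sampler additionally enforces reversibility; that constraint is omitted in this minimal sketch, and all model values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a discrete trajectory from a known 3-state chain (hypothetical)
T_true = np.array([[0.90, 0.08, 0.02],
                   [0.10, 0.80, 0.10],
                   [0.02, 0.08, 0.90]])
traj = [0]
for _ in range(5000):
    traj.append(rng.choice(3, p=T_true[traj[-1]]))
C = np.zeros((3, 3))
for a, b in zip(traj[:-1], traj[1:]):   # transition count matrix
    C[a, b] += 1

def stationary(T):
    # Stationary distribution = normalized Perron eigenvector of T^T
    w, v = np.linalg.eig(T.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

# Posterior sampling: with a Dirichlet(1) prior each row is conjugate,
# so rows of T can be sampled independently from Dirichlet(counts + 1)
samples = []
for _ in range(500):
    T = np.vstack([rng.dirichlet(C[i] + 1.0) for i in range(3)])
    samples.append(stationary(T))
samples = np.array(samples)
print(samples.mean(axis=0), samples.std(axis=0))  # observable and its uncertainty
```

    The spread of the sampled stationary distributions quantifies the uncertainty of the observable induced by the finite observed time series.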

  10. Effects of vicarious punishment: a meta-analysis.

    PubMed

    Malouff, John; Thorsteinsson, Einar; Schutte, Nicola; Rooke, Sally Erin

    2009-07-01

    Vicarious punishment involves observing a model exhibit a behavior that leads to punishment for the model. If observers then exhibit the behavior at a lower rate than do individuals in a control group, vicarious punishment occurred. The authors report the results of a meta-analysis of studies that tested for vicarious-punishment effects. Across 21 research samples and 876 participants, the viewing of a model experiencing punishment for a behavior led to a significantly lower level of the behavior by the observers, d = 0.58. Vicarious punishment occurred consistently with (a) live and filmed models, (b) severe and nonsevere punishment for the model, (c) positive punishment alone or positive plus negative punishment, (d) various types of behavior, (e) adults and children, and (f) male and female participants. The findings have implications for the use of models in reducing undesirable behavior.
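
    As a sketch of the underlying computation, the standardized mean difference d can be computed per study and pooled by inverse-variance weighting. The summary statistics below are hypothetical, not data from the meta-analysis:

```python
import numpy as np

# Hypothetical per-study summaries (treatment = observed a punished model,
# control = no model): means, SDs, and group sizes of the target behavior.
m_t = np.array([3.1, 2.4, 4.0])
m_c = np.array([4.0, 3.3, 4.9])
sd_t = np.array([1.2, 1.0, 1.5])
sd_c = np.array([1.3, 1.1, 1.4])
n_t = np.array([20, 35, 28])
n_c = np.array([20, 30, 30])

# Cohen's d with pooled SD; positive d = less behavior after vicarious punishment
sp = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
d = (m_c - m_t) / sp

# Fixed-effect pooling: weight each study by the inverse of the
# large-sample variance of d
var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
w = 1.0 / var_d
d_pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(d_pooled, d_pooled - 1.96 * se, d_pooled + 1.96 * se)
```

    A pooled d with a confidence interval excluding zero corresponds to the kind of significant vicarious-punishment effect the meta-analysis reports.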

  11. Numerical study of Asian dust transport during the springtime of 2001 simulated with the Chemical Weather Forecasting System (CFORS) model

    NASA Astrophysics Data System (ADS)

    Uno, Itsushi; Satake, Shinsuke; Carmichael, Gregory R.; Tang, Youhua; Wang, Zifa; Takemura, Toshihiko; Sugimoto, Nobuo; Shimizu, Atsushi; Murayama, Toshiyuki; Cahill, Thomas A.; Cliff, Steven; Uematsu, Mitsuo; Ohta, Sachio; Quinn, Patricia K.; Bates, Timothy S.

    2004-10-01

    The regional-scale aerosol transport model Chemical Weather Forecasting System (CFORS) is used for analysis of large-scale dust phenomena during the Asian Pacific Regional Aerosol Characterization Experiment (ACE-Asia) intensive observation period. Dust modeling results are examined against surface weather reports, a satellite-derived dust index (the Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI)), Mie-scattering lidar observations, and surface aerosol observations. The CFORS dust results are shown to accurately reproduce many of the important observed features. Model analysis shows that the simulated dust vertical loading correlates well with the TOMS AI and that the dust loading is transported with the meandering of the synoptic-scale temperature field at the 500-hPa level. Quantitative examination of aerosol optical depth shows that model predictions are within 20% of the lidar observations for the major dust episodes. The structure of the ACE-Asia Perfect Dust Storm, which occurred in early April, is clarified with the help of the CFORS model analysis: this storm consisted of two boundary-layer components and one elevated dust feature (above 6 km height) resulting from the movement of two large low-pressure systems. The time variation of the CFORS dust fields shows the correct onset timing of the elevated dust at each observation site, but the model tends to overpredict dust concentrations at lower-latitude sites. The horizontal transport flux at 130°E longitude is examined, and the overall dust transport flux at 130°E during March-April is evaluated to be 55 Tg.

  12. Development and Sensitivity Analysis of a Frost Risk model based primarily on freely distributed Earth Observation data

    NASA Astrophysics Data System (ADS)

    Louka, Panagiota; Petropoulos, George; Papanikolaou, Ioannis

    2015-04-01

    The ability to map the spatiotemporal distribution of extreme climatic conditions, such as frost, is a significant tool in successful agricultural management and decision making. Nowadays, with the development of Earth Observation (EO) technology, it is possible to obtain accurate, timely, and cost-effective information on the spatiotemporal distribution of frost conditions, particularly over large and otherwise inaccessible areas. The present study aimed at developing and evaluating a frost risk prediction model, exploiting primarily EO data from the MODIS and ASTER sensors and ancillary ground observation data. For the evaluation of our model, a region in north-western Greece was selected as the test site and a detailed sensitivity analysis was implemented. The agreement between the model predictions and the observed (remotely sensed) frost frequency obtained from the MODIS sensor was evaluated thoroughly. Detailed comparisons of the model predictions were also performed against reference frost ground observations acquired from the Greek Agricultural Insurance Organization (ELGA) over a period of 10 years (2000-2010). Overall, the results evidenced the ability of the model to reproduce frost conditions reasonably well, following largely explainable patterns with respect to the study site and local weather characteristics. Implementation of our proposed frost risk model is based primarily on satellite imagery nowadays provided globally at no cost. It is also straightforward and computationally inexpensive, requiring much less effort than, for example, field surveying. Finally, the method can potentially be integrated with other high-resolution data available from both commercial and non-commercial vendors. Keywords: Sensitivity analysis, frost risk mapping, GIS, remote sensing, MODIS, Greece

  13. UV spectroscopy including ISM line absorption of the exciting star of Abell 35

    NASA Astrophysics Data System (ADS)

    Ziegler, M.; Rauch, T.; Werner, K.; Kruk, J. W.

    Reliable spectral analysis based on high-resolution UV observations requires an adequate, simultaneous modeling of the interstellar line absorption and reddening. In the case of the central star of the planetary nebula Abell 35, BD-22 3467, we demonstrate our current standard spectral-analysis method, which is based on the Tübingen NLTE Model-Atmosphere Package (TMAP). We present an ongoing spectral analysis of FUSE and HST/STIS observations of BD-22 3467.

  14. Estimation of the Ocean Skin Temperature using the NASA GEOS Atmospheric Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.; Akella, Santha; Todling, Ricardo; Suarez, Max

    2016-01-01

    This report documents the status of the development of a sea surface temperature (SST) analysis for the Goddard Earth Observing System Version 5 (GEOS-5) atmospheric data assimilation system (ADAS). Its implementation is part of the steps being taken toward the development of an integrated earth system analysis. Currently, the GEOS-ADAS SST is a bulk ocean temperature (from ocean boundary conditions) and is almost identical to the skin sea surface temperature. Here we describe changes to the atmosphere-ocean interface layer of the GEOS atmospheric general circulation model (AGCM) to include near-surface diurnal warming and cool-skin effects. We also added SST-relevant Advanced Very High Resolution Radiometer (AVHRR) observations to the GEOS-ADAS observing system. We provide a detailed description of our analysis of these observations, along with the modifications to the interfaces between the GEOS AGCM, the gridpoint statistical interpolation-based atmospheric analysis, and the community radiative transfer model. Our experiments (with and without these changes) show improved assimilation of satellite radiance observations, and we obtained a closer fit to withheld in-situ buoys measuring near-surface SST. Evaluation of forecast skill scores corroborates the improvements seen in the observation fits. Along with a discussion of our results, we also include directions for future work.

  15. Quantitative petri net model of gene regulated metabolic networks in the cell.

    PubMed

    Chen, Ming; Hofestädt, Ralf

    2011-01-01

    A method to exploit hybrid Petri nets (HPN) for quantitatively modeling and simulating gene-regulated metabolic networks is demonstrated. A global kinetic modeling strategy and a Petri net modeling algorithm are applied to simulate bioprocess function and perform model analysis. With the model, the interrelations between pathway analysis and metabolic control mechanisms are outlined. Diagrammatic results for the dynamics of metabolites are simulated and observed using the HPN tool Visual Object Net ++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets for modeling and simulation of metabolic networks is discussed.

  16. Dynamical System Analysis of Reynolds Stress Closure Equations

    NASA Technical Reports Server (NTRS)

    Girimaji, Sharath S.

    1997-01-01

    In this paper, we establish the causality between the model coefficients in the standard pressure-strain correlation model and the predicted equilibrium states for homogeneous turbulence. We accomplish this by performing a comprehensive fixed point analysis of the modeled Reynolds stress and dissipation rate equations. The results from this analysis will be very useful for developing improved pressure-strain correlation models to yield observed equilibrium behavior.

  17. How Many Separable Sources? Model Selection In Independent Components Analysis

    PubMed Central

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  18. Sensitivity of the Tropical Atmospheric Energy Balance to ENSO-Related SST Changes: Comparison of Climate Model Simulations to Observed Responses

    NASA Technical Reports Server (NTRS)

    Robertson, Franklin R.; Fitzjarrald, Dan; Marshall, Susan; Oglesby, Robert; Roads, John; Arnold, James E. (Technical Monitor)

    2001-01-01

    This paper focuses on how fresh water and radiative fluxes over the tropical oceans change during ENSO warm and cold events and how these changes affect the tropical energy balance. At present, ENSO remains the most prominent known mode of natural variability at interannual time scales. While this natural perturbation to climate is quite distinct from possible anthropogenic changes in climate, adjustments in the tropical water and energy budgets during ENSO may give insight into feedback processes involving water vapor and cloud feedbacks. Although great advances have been made in understanding this phenomenon and realizing prediction skill over the past decade, our ability to document the coupled water and energy changes observationally and to represent them in climate models seems far from settled (Soden, 2000 J Climate). In a companion paper we have presented observational analyses, based principally on space-based measurements which document systematic changes in rainfall, evaporation, and surface and top-of-atmosphere (TOA) radiative fluxes. Here we analyze several contemporary climate models run with observed SSTs over recent decades and compare SST-induced changes in radiation, precipitation, evaporation, and energy transport to observational results. Among these are the NASA / NCAR Finite Volume Model, the NCAR Community Climate Model, the NCEP Global Spectral Model, and the NASA NSIPP Model. Key disagreements between model and observational results noted in the recent literature are shown to be due predominantly to observational shortcomings. A reexamination of the Langley 8-Year Surface Radiation Budget data reveals errors in the SST surface longwave emission due to biased SSTs. Subsequent correction allows use of this data set along with ERBE TOA fluxes to infer net atmospheric radiative heating. 
Further analysis of recent rainfall algorithms provides new estimates for precipitation variability in line with interannual evaporation changes inferred from the da Silva, Young, and Levitus COADS analysis. The overall results from our analysis suggest an increase (decrease) of the hydrologic cycle during ENSO warm (cold) events at a rate of about 5 W/sq m per K of SST change. Model results agree reasonably well with this estimate of sensitivity. This rate is slightly less than would be expected for constant relative humidity over the tropical oceans. There remain, however, significant quantitative uncertainties in cloud forcing changes in the models as compared to observations. These differences are examined in relation to model convection and cloud parameterizations. An analysis of possible sampling and measurement errors compared to systematic model errors is also presented.

  19. Tipping point analysis of atmospheric oxygen concentration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livina, V. N.; Forbes, A. B.; Vaz Martins, T. M.

    2015-03-15

    We apply tipping point analysis to nine observational oxygen concentration records around the globe, analyse their dynamics, and perform projections under possible future scenarios leading to oxygen deficiency in the atmosphere. The analysis is based on a statistical physics framework with stochastic modelling, in which we represent the observed data as a composition of deterministic and stochastic components estimated from the observed data using Bayesian and wavelet techniques.

  20. Application of an Ensemble Smoother to Precipitation Assimilation

    NASA Technical Reports Server (NTRS)

    Zhang, Sara; Zupanski, Dusanka; Hou, Arthur; Zupanski, Milija

    2008-01-01

    Assimilation of precipitation in a global modeling system poses a special challenge in that the observation operators for precipitation processes are highly nonlinear. In the variational approach, substantial development work and model simplifications are required to include precipitation-related physical processes in the tangent linear model and its adjoint. An ensemble based data assimilation algorithm "Maximum Likelihood Ensemble Smoother (MLES)" has been developed to explore the ensemble representation of the precipitation observation operator with nonlinear convection and large-scale moist physics. An ensemble assimilation system based on the NASA GEOS-5 GCM has been constructed to assimilate satellite precipitation data within the MLES framework. The configuration of the smoother takes the time dimension into account for the relationship between state variables and observable rainfall. The full nonlinear forward model ensembles are used to represent components involving the observation operator and its transpose. Several assimilation experiments using satellite precipitation observations have been carried out to investigate the effectiveness of the ensemble representation of the nonlinear observation operator and the data impact of assimilating rain retrievals from the TMI and SSM/I sensors. Preliminary results show that this ensemble assimilation approach is capable of extracting information from nonlinear observations to improve the analysis and forecast if ensemble size is adequate, and a suitable localization scheme is applied. In addition to a dynamically consistent precipitation analysis, the assimilation system produces a statistical estimate of the analysis uncertainty.
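
    The key idea, that ensembles represent a nonlinear observation operator without a tangent linear model or adjoint, can be illustrated with a single stochastic ensemble update (a plain EnKF-style analysis rather than the MLES itself; the "rain" operator and all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Ensemble update with a nonlinear observation operator: the ensemble
# itself provides the state-observation covariances, so no tangent
# linear / adjoint of the rain operator is needed.
n_ens, n_state = 100, 3

def h(x):
    # Hypothetical nonlinear "rain" operator acting on the first state variable
    return np.maximum(x[0], 0.0) ** 1.5

X = rng.normal(1.0, 0.5, (n_state, n_ens))     # prior ensemble
y_obs = 2.0                                    # observed rain rate
r = 0.1                                        # observation error variance

HX = np.array([h(X[:, j]) for j in range(n_ens)])
Xm, Hm = X.mean(axis=1), HX.mean()
P_xh = (X - Xm[:, None]) @ (HX - Hm) / (n_ens - 1)   # cov(state, h(x))
P_hh = np.var(HX, ddof=1)                            # var(h(x))
K = P_xh / (P_hh + r)                                # Kalman gain

# Stochastic update: perturb the observation for each member
Xa = X + np.outer(K, y_obs + rng.normal(0, np.sqrt(r), n_ens) - HX)
print(Xm[0], Xa.mean(axis=1)[0])   # first state component moves toward the observation
```

    Only the component of the state correlated with the nonlinear observable is pulled toward the observation; the uncorrelated components are essentially unchanged.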

  1. Potential Applications of Gosat Based Carbon Budget Products to Refine Terrestrial Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Kondo, M.; Ichii, K.

    2011-12-01

    Estimation of carbon exchange in terrestrial ecosystem associates with difficulties due to complex entanglement of physical and biological processes: thus, the net ecosystem productivity (NEP) estimated from simulation often differs among process-based terrestrial ecosystem models. In addition to complexity of the system, validation can only be conducted in a point scale since reliable observation is only available from ground observations. With a lack of large spatial data, extension of model simulation to a global scale results in significant uncertainty in the future carbon balance and climate change. Greenhouse gases Observing SATellite (GOSAT), launched by the Japanese space agency (JAXA) in January, 2009, is the 1st operational satellite promised to deliver the net land-atmosphere carbon budget to the terrestrial biosphere research community. Using that information, the model reproducibility of carbon budget is expected to improve: hence, gives a better estimation of the future climate change. This initial analysis is to seek and evaluate the potential applications of GOSAT observation toward the sophistication of terrestrial ecosystem model. The present study was conducted in two processes: site-based analysis using eddy covariance observation data to assess the potential use of terrestrial carbon fluxes (GPP, RE, and NEP) to refine the model, and extension of the point scale analysis to spatial using Carbon Tracker product as a prototype of GOSAT product. In the first phase of the experiment, it was verified that an optimization routine adapted to a terrestrial model, Biome-BGC, yielded the improved result with respect to eddy covariance observation data from AsiaFlux Network. Spatial data sets used in the second phase were consists of GPP from empirical algorithm (e.g. support vector machine), NEP from Carbon Tracker, and RE from the combination of these. 
These spatial carbon flux estimations was used to refine the model applying the exactly same optimization procedure as the point analysis, and found that these spatial data help to improve the model's overall reproducibility. The GOSAT product is expected to have higher accuracy since it uses global CO2 observations. Therefore, with the application of GOSAT data, a better estimation of terrestrial carbon cycle can be achieved with optimization. It is anticipated to carry out more detailed analysis upon the arrival of GOSAT product and to verify the reduction in the uncertainty in the future carbon budget and the climate change with the calibrated models, which is the major contribution can be achieved from GOSAT.

  2. Downscaling, 2-way Nesting, and Data Assimilative Modeling in Coastal and Shelf Waters of the U.S. Mid-Atlantic Bight and Gulf of Maine

    NASA Astrophysics Data System (ADS)

    Wilkin, J.; Levin, J.; Lopez, A.; Arango, H.

    2016-02-01

    Coastal ocean models that downscale output from basin and global scale models are widely used to study regional circulation at enhanced resolution and locally important ecosystem, biogeochemical, and geomorphologic processes. When operated as now-cast or forecast systems, these models offer predictions that assist decision-making for numerous maritime applications. We describe such a system for shelf waters of the Mid-Atlantic Bight (MAB) and Gulf of Maine (GoM) where the MARACOOS and NERACOOS associations of U.S. IOOS operate coastal ocean observing systems that deliver a dense observation set using CODAR HF-radar, autonomous underwater glider vehicles (AUGV), telemetering moorings, and drifting buoys. Other U.S. national and global observing systems deliver further sustained observations from moorings, ships, profiling floats, and a constellation of satellites. Our MAB and GoM re-analysis and forecast system uses the Regional Ocean Modeling System (ROMS; myroms.org) with 4-dimensional Variational (4D-Var) data assimilation to adjust initial conditions, boundary conditions, and surface forcing in each analysis cycle. Data routinely assimilated include CODAR velocities, altimeter satellite sea surface height (with coastal corrections), satellite temperature, in situ CTD data from AUGV and ships (NMFS Ecosystem Monitoring voyages), and all in situ data reported via the WMO GTS network. A climatological data assimilative analysis of hydrographic and long-term mean velocity observations specifies the regional Mean Dynamic Topography that augments altimeter sea level anomaly data and is also used to adjust boundary condition biases that would otherwise be introduced in the process of downscaling from global models. System performance is described with respect to the impact of satellite, CODAR and in situ observations on analysis skill. 
Results from a 2-way nested modeling system that adds enhanced resolution over the NSF OOI Pioneer Array in the central MAB are also shown.

  3. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
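
    Composite scaled sensitivities can be sketched directly from their definition, css_j = sqrt((1/ND) Σ_i ((∂y_i/∂b_j) b_j ω_i^(1/2))²). The toy model and parameter values below are hypothetical, not the DVRFS model:

```python
import numpy as np

# Toy "model": simulated heads at 5 locations as a function of
# conductivity k, recharge r, and pumpage q (hypothetical form)
def model(b):
    k, r, q = b
    x = np.linspace(0.1, 1.0, 5)
    return r * x / k - q * x**2

b = np.array([2.0, 1.5, 0.3])   # parameter values (k, r, q)
w = np.ones(5)                  # observation weights (1 / error variance)

# Central finite-difference sensitivities dy_i/db_j
J = np.empty((5, 3))
for j in range(3):
    db = np.zeros(3)
    db[j] = 1e-6 * b[j]
    J[:, j] = (model(b + db) - model(b - db)) / (2 * db[j])

# Composite scaled sensitivity: average dimensionless information the
# observation set carries about each parameter
css = np.sqrt(np.mean((J * b[None, :])**2 * w[:, None], axis=0))
print(css)
```

    Parameters with small CSS relative to the others are poorly constrained by the available observations, which is the kind of guidance the four measures provide during calibration and data-collection design.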

  4. Evolution in Cloud Population Statistics of the MJO: From AMIE Field Observations to Global-Cloud Permitting Models Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    This is a multi-institutional, collaborative project using a three-tier modeling approach to bridge field observations and global cloud-permitting models, with emphasis on the structural evolution of cloud populations through various large-scale environments. Our contribution was in data analysis: the generation of high-value cloud and precipitation products and the derivation of cloud statistics for model validation. We contributed in two areas of data analysis: the development of a synergistic cloud and precipitation classification that identifies different cloud types (e.g., shallow cumulus, cirrus) and precipitation types (shallow, deep, convective, stratiform) using profiling ARM observations, and the development of a quantitative precipitation rate retrieval algorithm using profiling ARM observations. Similar efforts have been developed in the past for precipitation (weather radars), but not for the millimeter-wavelength (cloud) radar deployed at the ARM sites.

  5. Electromagnetic field radiation model for lightning strokes to tall structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motoyama, H.; Janischewskyj, W.; Hussein, A.M.

    1996-07-01

    This paper describes observations and analysis of electromagnetic field radiation from lightning strokes to tall structures. Electromagnetic field waveforms and current waveforms of lightning strokes to the CN Tower have been measured simultaneously since 1991. A new calculation model of electromagnetic field radiation is proposed, consisting of a lightning current propagation and distribution model and an electromagnetic field radiation model. Electromagnetic fields calculated by the proposed model, based on the observed lightning current at the CN Tower, agree well with the fields observed 2 km north of the tower.

  6. Multi-Sensory Aerosol Data and the NRL NAAPS model for Regulatory Exceptional Event Analysis

    NASA Astrophysics Data System (ADS)

    Husar, R. B.; Hoijarvi, K.; Westphal, D. L.; Haynes, J.; Omar, A. H.; Frank, N. H.

    2013-12-01

    Beyond scientific exploration and analysis, multi-sensory observations along with models are finding increasing applications for operational air quality management. EPA's Exceptional Event (EE) Rule allows the exclusion of data strongly influenced by impacts from "exceptional events," such as smoke from wildfires or dust from abnormally high winds. The EE Rule encourages the use of satellite observations and other non-standard data along with models as evidence for formal documentation of EE samples for exclusion. Thus, the implementation of the EE Rule is uniquely suited for the direct application of integrated multi-sensory observations and indirectly through the assimilation into an aerosol simulation model. Here we report the results of a project: NASA and NAAPS Products for Air Quality Decision Making. The project uses of observations from multiple satellite sensors, surface-based aerosol measurements and the NRL Aerosol Analysis and Prediction System (NAAPS) model that assimilates key satellite observations. The satellite sensor data for detecting and documenting smoke and dust events include: MODIS AOD and Images; OMI Aerosol Index, Tropospheric NO2; AIRS, CO. The surface observations include the EPA regulatory PM2.5 network; the IMPROVE/STN aerosol chemical network; AIRNOW PM2.5 mass network, and surface met. data. Within this application, crucial role is assigned to the NAAPS model for estimating the surface concentration of windblown dust and biomass smoke. The operational model assimilates quality-assured daily MODIS data and 2DVAR to adjust the model concentrations and CALIOP-based climatology to adjust the vertical profiles at 6-hour intervals. The assimilation of satellite data from multiple satellites significantly contributes to the usefulness of NAAPS for EE analysis. The NAAPS smoke and dust simulations were evaluated using the IMPROVE/STN chemical data. 
The multi-sensory observations, along with the model simulations, are integrated into a web-based Exceptional Event Decision System (EE DSS) application, designed to support air quality analysts at the Federal and Regional EPA offices and in the EE-affected States. The EE DSS screening tool automatically identifies the EPA PM2.5 mass samples that are candidates for EE flagging, based mainly on the NAAPS-simulated surface concentrations of dust and smoke. The AQ analysts at the States and the EPA can also use the EE DSS to gather further evidence from the examination of spatio-temporal patterns, the Absorbing Aerosol Index, CO and NO2 concentrations, backward and forward airmass trajectories, and other signatures. Since early 2013, the DSS has been used for the identification and analysis of dozens of events. Hence, the integration of multi-sensory observations and modeling with data assimilation is maturing to support real-world operational AQ management applications. The remaining challenges can be resolved by seeking 'closure' of the system components, i.e., systematic adjustments that reconcile the satellite and surface observations and the emissions, integrated through a suitable AQ model.
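
The screening step described above can be sketched as a simple threshold rule. This is a hypothetical illustration only: the field names, the 50% dust+smoke share, and the 35 ug/m3 mass cutoff are assumptions made for the sketch, not the EPA rule thresholds or the actual EE DSS logic.

```python
# Hypothetical sketch of an EE screening rule: flag regulatory PM2.5 samples
# as Exceptional Event candidates when the model-simulated dust + smoke
# contribution is a large share of the observed mass. All thresholds and
# field names are illustrative assumptions.

def flag_ee_candidates(samples, min_share=0.5, min_mass=35.0):
    """Return ids of samples whose simulated dust+smoke share exceeds
    min_share and whose observed PM2.5 exceeds min_mass (ug/m3)."""
    flagged = []
    for s in samples:
        natural = s["dust"] + s["smoke"]   # model-simulated components
        if s["pm25_obs"] >= min_mass and natural / s["pm25_obs"] >= min_share:
            flagged.append(s["id"])
    return flagged

samples = [
    {"id": "A", "pm25_obs": 60.0, "dust": 5.0,  "smoke": 40.0},  # smoke event
    {"id": "B", "pm25_obs": 20.0, "dust": 1.0,  "smoke": 2.0},   # clean day
    {"id": "C", "pm25_obs": 80.0, "dust": 50.0, "smoke": 5.0},   # dust event
]
print(flag_ee_candidates(samples))  # -> ['A', 'C']
```

In the actual system, analysts would then examine the flagged samples against the satellite imagery and trajectory evidence before formal documentation.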

  7. Macro-level pedestrian and bicycle crash analysis: Incorporating spatial spillover effects in dual state count models.

    PubMed

    Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed

    2016-08-01

    This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ) based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, such as the zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to the single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
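
The dual-state idea can be illustrated with a toy likelihood comparison. The study uses zero-inflated and hurdle negative binomial models; a zero-inflated Poisson is substituted below purely to keep the sketch short, and the zone counts are made up.

```python
import math

# Minimal sketch of the "dual-state" idea: a zero-inflated Poisson mixes a
# degenerate zero state (probability pi, e.g. zones with no exposure) with a
# Poisson count state. The paper uses zero-inflated/hurdle *negative
# binomial* models; Poisson is substituted here only for brevity.

def zip_loglik(counts, lam, pi):
    ll = 0.0
    for y in counts:
        if y == 0:
            ll += math.log(pi + (1 - pi) * math.exp(-lam))
        else:
            ll += math.log(1 - pi) - lam + y * math.log(lam) - math.lgamma(y + 1)
    return ll

def pois_loglik(counts, lam):
    return sum(-lam + y * math.log(lam) - math.lgamma(y + 1) for y in counts)

# Toy zone crash counts with an excess of zeros.
counts = [0, 0, 0, 0, 0, 0, 3, 2, 4, 1]
lam_hat = sum(c for c in counts if c > 0) / sum(1 for c in counts if c > 0)

# The dual-state fit accommodates the excess zeros and scores a higher
# log-likelihood than a single-state Poisson at the sample mean.
print(zip_loglik(counts, lam_hat, pi=0.6) > pois_loglik(counts, sum(counts) / len(counts)))  # -> True
```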

  8. Geocoronal Balmer α line profile observations and forward-model analysis

    NASA Astrophysics Data System (ADS)

    Mierkiewicz, E. J.; Bishop, J.; Roesler, F. L.; Nossal, S. M.

    2006-05-01

    High spectral resolution geocoronal Balmer α line profile observations from Pine Bluff Observatory (PBO) are presented in the context of forward-model analysis. Because Balmer series column emissions depend significantly on multiple scattering, retrieval of hydrogen parameters of general aeronomic interest from these observations (e.g., the hydrogen column abundance) currently requires a forward modeling approach. This capability is provided by the resonance radiative transfer code LYAO_RT. We have recently developed a parametric data-model comparison search procedure employing an extensive grid of radiative transport model input parameters (defining a 6-dimensional parameter space) to map out bounds for feasible forward-model retrieved atomic hydrogen density distributions. We applied this technique to same-night (March 2000) ground-based Balmer α data from PBO and geocoronal Lyman β measurements from the Espectrógrafo Ultravioleta extremo para la Radiación Difusa (EURD) instrument on the Spanish satellite MINISAT-1 (provided by J.F. Gómez and C. Morales of the Laboratorio de Astrofisica Espacial y Física Fundamental, INTA, Madrid, Spain) in order to investigate the modeling constraints imposed by two sets of independent geocoronal intensity measurements, both of which rely on astronomical calibration methods. In this poster we explore extending this analysis to the line profile information also contained in the March 2000 PBO Balmer α data set. In general, a decrease in the Doppler width of the Balmer α emission with shadow altitude is a persistent feature in every night of PBO observations in which a wide range of shadow altitudes is observed. Preliminary applications of the LYAO_RT code, which includes the ability to output Doppler line profiles for both the singly and multiply scattered contributions to the Balmer α emission line, display good qualitative agreement with regard to the geocoronal Doppler width trends observed from PBO. 
Model-data Balmer α Doppler width comparisons, using the best-fit model parameters obtained during the March 2000 PBO/EURD forward-model study, will be presented and discussed, including the feasibility of using Balmer α observed Doppler widths as an additional model constraint in our forward-model search procedure.

  9. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of the parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to the horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.
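
The twin-experiment logic can be sketched with a one-parameter toy model: 'pseudo' observations are generated with a known settling parameter, and the parameter is then recovered by minimizing the model-data misfit. The paper obtains the gradient of the cost function with the adjoint method; a brute-force parameter scan stands in here, and the deposition-only model is a made-up stand-in for the full sigma-coordinate transport model.

```python
# Toy "twin experiment": recover a known settling parameter from
# pseudo-observations by minimizing a least-squares cost function.

def run_model(ws, c0=10.0, dt=0.1, nsteps=50):
    """Toy deposition-only sediment model: first-order settling sink."""
    c, out = c0, []
    for _ in range(nsteps):
        c -= ws * c * dt
        out.append(c)
    return out

def cost(ws, obs):
    """Model-data misfit (the cost function to be minimized)."""
    return sum((m - o) ** 2 for m, o in zip(run_model(ws), obs))

obs = run_model(0.3)   # 'pseudo' observations generated with true ws = 0.3

# Estimate ws by minimizing the misfit. The adjoint method would supply the
# gradient of this cost efficiently; a brute-force scan is used here.
ws_hat = min((i / 100 for i in range(1, 100)), key=lambda w: cost(w, obs))
print(ws_hat)  # -> 0.3
```

A sensitive parameter gives a sharply curved cost function around the truth, which is why the paper finds sensitive parameters easier to estimate than insensitive ones.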

  10. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  11. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability, in-situ data analytics for Earth system model simulation, and model behavior adjustment opportunities through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  12. A New Search Paradigm for Correlated Neutrino Emission from Discrete GRBs using Antarctic Cherenkov Telescopes in the Swift Era

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamatikos, Michael; Band, David L.; JCA/UMBC, Baltimore, MD 21250

    2006-05-19

    We describe the theoretical modeling and analysis techniques associated with a preliminary search for correlated neutrino emission from GRB980703a, which triggered the Burst and Transient Source Experiment (BATSE GRB trigger 6891), using archived data from the Antarctic Muon and Neutrino Detector Array (AMANDA-B10). Under the assumption of associated hadronic acceleration, the expected observed neutrino energy flux is directly derived, based upon confronting the fireball phenomenology with the discrete set of observed electromagnetic parameters of GRB980703a, gleaned from ground-based and satellite observations, for four models, corrected for oscillations. Models 1 and 2, based upon spectral analysis featuring a prompt photon energy fit to the Band function, utilize an observed spectroscopic redshift, for isotropic and anisotropic emission geometry, respectively. Model 3 is based upon averaged burst parameters, assuming isotropic emission. Model 4, based upon a Band fit, features an estimated redshift from the lag-luminosity relation, with isotropic emission. Consistent with our AMANDA-II analysis of GRB030329, which resulted in a flux upper limit of ~0.150 GeV/cm2/s for model 1, we find differences in excess of an order of magnitude in the response of AMANDA-B10 among the various models for GRB980703a. Implications for future searches in the era of Swift and IceCube are discussed.

  13. Analysis of flow in an observation well intersecting a single fracture

    USGS Publications Warehouse

    Lapcevic, P.A.; Novakowski, K.S.; Paillet, Frederick L.

    1993-01-01

    A semi-analytical model is developed to determine transmissivity and storativity from the interpretation of transient flow in an observation well due to pumping in a source well, where the two wells are connected by a single fracture. Flow rate can be determined using a heat-pulse flowmeter located above the intersection of the fracture in the observation well. The results of a field experiment were interpreted using the new model and compared with drawdown data from the same test. Good agreement between the transmissivity estimates was observed, whereas estimates of storativity were found to be better determined from the analysis of flow rate.

  14. Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.

    PubMed

    Munir, Mohammad

    2018-06-01

    Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations are captured well. 
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics. PMID:27764187
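
One widely used quantifier of this kind, normalized permutation entropy (Bandt-Pompe), can be sketched in a few lines: regular dynamics score low and noise scores near 1, which is what lets such measures separate processes by their dynamics rather than their magnitudes. The series below are synthetic, not GPP data.

```python
import math
import random
from itertools import permutations

# Normalized permutation entropy (Bandt-Pompe): count the ordinal patterns
# of consecutive samples and compute the Shannon entropy of their
# distribution, normalized so that white noise approaches 1.

def permutation_entropy(series, order=3):
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values() if c)
    return h / math.log(math.factorial(order))   # normalize to [0, 1]

random.seed(1)
sine = [math.sin(0.3 * t) for t in range(1000)]        # regular dynamics
noise = [random.gauss(0, 1) for _ in range(1000)]      # stochastic dynamics

# The regular series has far lower entropy than the noise series,
# regardless of their amplitudes.
print(permutation_entropy(sine) < permutation_entropy(noise))  # -> True
```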

  16. NASA's Carbon Monitoring System Flux-Pilot Project: A Multi-Component Analysis System for Carbon-Cycle Research and Monitoring

    NASA Technical Reports Server (NTRS)

    Pawson, S.; Gunson, M.; Potter, C.; Jucks, K.

    2012-01-01

    The importance of greenhouse gas increases for climate motivates NASA's observing strategy for CO2 from space, including the forthcoming Orbiting Carbon Observatory (OCO-2) mission. Carbon cycle monitoring, including attribution of atmospheric concentrations to regional emissions and uptake, requires a robust modeling and analysis infrastructure to optimally extract information from the observations. NASA's Carbon-Monitoring System Flux-Pilot Project (FPP) is a prototype for such analysis, combining a set of unique tools to facilitate analysis of atmospheric CO2 along with fluxes between the atmosphere and the terrestrial biosphere or ocean. NASA's analysis system is unique, in that it combines information and expertise from the land, oceanic, and atmospheric branches of the carbon cycle and includes some estimates of uncertainty. Numerous existing space-based missions provide information of relevance to the carbon cycle. This study describes the components of the FPP framework, assessing the realism of computed fluxes, thus providing the basis for research and monitoring applications. Fluxes are computed using data-constrained terrestrial biosphere models and physical ocean models, driven by atmospheric observations and assimilating ocean-color information. Use of two estimates provides a measure of uncertainty in the fluxes. Along with inventories of other emissions, these data-derived fluxes are used in transport models to assess their consistency with atmospheric CO2 observations. Closure is achieved by using a four-dimensional data assimilation (inverse) approach that adjusts the terrestrial biosphere fluxes to make them consistent with the atmospheric CO2 observations. Results will be shown, illustrating the year-to-year variations in land biospheric and oceanic fluxes computed in the FPP. The signals of these surface-flux variations on atmospheric CO2 will be isolated using forward modeling tools, which also incorporate estimates of transport error. 
The results will be discussed in the context of interannual variability of observed atmospheric CO2 distributions.

  17. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). A global numerical weather prediction model, the Goddard Earth Observing System, version 5 (GEOS-5), with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
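
The self-analysis pitfall can be demonstrated with a scalar toy: because a short-range forecast inherits the analysis error, verifying the forecast against the self-analysis understates the error measured against the (perfectly known) truth that an OSSE provides. All error magnitudes below are arbitrary assumptions.

```python
import math
import random

# Toy illustration of the verification result above. The forecast error is
# the analysis error plus additional growth, so (forecast - analysis)
# removes the shared component and understates (forecast - truth).

random.seed(0)
n = 10000
truth = [0.0] * n                                          # the "Nature Run"
analysis = [random.gauss(0, 1.0) for _ in range(n)]        # analysis error ~ N(0, 1)
forecast = [a + random.gauss(0, 0.5) for a in analysis]    # error grows from the analysis

def rmse(x, ref):
    return math.sqrt(sum((xi - ri) ** 2 for xi, ri in zip(x, ref)) / len(x))

# Self-analysis verification sees only the growth term (~0.5), while
# truth verification sees the full error (~sqrt(1 + 0.25) ~ 1.12).
print(rmse(forecast, analysis) < rmse(forecast, truth))  # -> True
```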

  18. Towards a Better Understanding of Water Stores and Fluxes: Model Observation Synthesis in a Snowmelt Dominated Research Watershed

    NASA Astrophysics Data System (ADS)

    Ryken, A.; Gochis, D.; Carroll, R. W. H.; Bearup, L. A.; Williams, K. H.; Maxwell, R. M.

    2017-12-01

    The hydrology of high-elevation, mountainous regions is poorly represented in Earth Systems Models (ESMs). In addition to regulating downstream water delivery, these ecosystems play an important role in the storage and land-atmosphere exchange of carbon and water. Water balances are sensitive to the amount of water stored in the snowpack (SWE) and the amount of water leaving the system as evapotranspiration: two pieces of the hydrologic cycle that are difficult to observe and model in heterogeneous mountainous regions due to spatially variant weather patterns. In an effort to resolve this hydrologic gap in ESMs, this study seeks to better understand the interactions between groundwater, carbon flux, and the lower atmosphere in these high-altitude environments through integration of field observations and model simulations. We compare model simulations to field observations to elucidate process performance, combined with a sensitivity analysis to better understand parameter uncertainty. Observations from a meteorological station in the East River Basin are used to force an integrated single-column hydrologic model, ParFlow-CLM. This met station is co-located with an eddy covariance tower, which, along with snow surveys, is used to better constrain the water, carbon, and energy fluxes in the coupled land-atmosphere model to increase our understanding of high-altitude headwaters. Preliminary results suggest the model compares well to the eddy covariance tower and field observations, shown through the correct magnitude and timing of peak SWE and similar magnitudes and diurnal patterns of heat and water fluxes. Initial sensitivity analysis results show that an increase in temperature leads to a decrease in peak SWE as well as an increase in latent heat, revealing a sensitivity of the model to air temperature. Further sensitivity analyses will help us better understand parameter uncertainty. 
Through obtaining more accurate and higher resolution meteorological data and applying it to a coupled hydrologic model, this study can lead to better representation of mountainous environments in all ESMs.
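
The reported temperature sensitivity can be reproduced qualitatively with a minimal degree-day snow model. ParFlow-CLM itself uses a more complete energy-balance scheme; this sketch and all of its parameter values are illustrative assumptions only.

```python
# Minimal degree-day snow model: warming the same forcing lowers peak SWE,
# because precipitation on above-freezing days no longer accumulates and
# melt starts earlier. All parameter values are illustrative.

def peak_swe(temps, precip, melt_factor=2.0, t_melt=0.0):
    """Return peak snow water equivalent (mm) for daily temp/precip series."""
    swe, peak = 0.0, 0.0
    for t, p in zip(temps, precip):
        if t <= t_melt:
            swe += p                                          # snowfall accumulates
        else:
            swe = max(0.0, swe - melt_factor * (t - t_melt))  # degree-day melt
        peak = max(peak, swe)
    return peak

# Synthetic winter: cold accumulation season, then spring warming.
temps = [-5.0] * 60 + [i * 0.2 for i in range(60)]
precip = [3.0] * 120
warmer = [t + 2.0 for t in temps]   # +2 C perturbation of the same forcing

print(peak_swe(temps, precip) > peak_swe(warmer, precip))  # -> True
```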

  19. Variability and Spectral Studies of Luminous Seyfert 1 Galaxy Fairall 9. Search for the Reflection Component in a Quasar: RXTE and ASCA Observation of a Nearby Radio-Quiet Quasar MR 2251-178

    NASA Technical Reports Server (NTRS)

    Leighly, Karen M.

    1999-01-01

    Monitoring observations of the luminous Seyfert 1 galaxy Fairall 9 were performed for one year at intervals of 3 days using RXTE (the Rossi X-ray Timing Explorer). The purpose of the observations was to study the variability of Fairall 9 and compare the results with those from the radio-loud object 3C 390.3. The data have been received and analysis is underway, using the new background model. An observation of the quasar MR 2251-178 was made in order to determine whether or not it has a reflection component. Older background models gave an unacceptable subtraction and analysis is underway using the new background model. The observation of NGC 6300 showed that the X-ray spectrum of this Seyfert 2 galaxy appears to be dominated by Compton reflection.

  20. Understanding Satellite Characterization Knowledge Gained from Radiometric Data

    DTIC Science & Technology

    2011-09-01

    observation model, the time-resolved pose of a satellite can be estimated autonomously through each pass from non-resolved radiometry. The benefits of...and we assume the satellite can achieve both the set attitude and the necessary maneuver to change its orientation from one time-step to the next...Observation Model The UKF observation model uses the Time domain Analysis Simulation for Advanced Tracking (TASAT) software to provide high-fidelity satellite

  1. Using deep neural networks to augment NIF post-shot analysis

    NASA Astrophysics Data System (ADS)

    Humbird, Kelli; Peterson, Luc; McClarren, Ryan; Field, John; Gaffney, Jim; Kruse, Michael; Nora, Ryan; Spears, Brian

    2017-10-01

    Post-shot analysis of National Ignition Facility (NIF) experiments is the process of determining which simulation inputs yield results consistent with experimental observations. This analysis is typically accomplished by running suites of manually adjusted simulations, or by Monte Carlo sampling of surrogate models that approximate the response surfaces of the physics code. These approaches are expensive and often find simulations that match only a small subset of observables simultaneously. We demonstrate an alternative method for performing post-shot analysis using inverse models, which map directly from experimental observables to simulation inputs with quantified uncertainties. The models are created using a novel machine learning algorithm which automates the construction and initialization of deep neural networks to optimize predictive accuracy. We show how these neural networks, trained on large databases of post-shot simulations, can rigorously quantify the agreement between simulation and experiment. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
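
The inverse-model idea can be shown in miniature: learn the map from observables back to simulation inputs from a database of forward runs. A one-dimensional linear least-squares fit stands in below for the deep neural networks used in the actual work, and the "physics code" is a made-up linear relation.

```python
import random

# Inverse modeling in miniature: build a database of (input, observable)
# pairs from a toy forward model, fit the *inverse* map observable -> input,
# then apply it to a new observable. A linear fit substitutes here for the
# deep neural networks of the actual work.

random.seed(42)

def forward(x):
    """Toy 'physics code': observable produced by input x, plus noise."""
    return 2.0 * x + 1.0 + random.gauss(0, 0.05)

inputs = [random.uniform(0, 1) for _ in range(500)]   # database of simulations
obs = [forward(x) for x in inputs]

# Least-squares fit of input as a function of observable (the inverse map).
n = len(obs)
mo, mi = sum(obs) / n, sum(inputs) / n
slope = (sum((o - mo) * (x - mi) for o, x in zip(obs, inputs))
         / sum((o - mo) ** 2 for o in obs))
intercept = mi - slope * mo

x_hat = slope * 3.0 + intercept   # infer the input behind an observable of 3.0
print(abs(x_hat - 1.0) < 0.1)     # true input is (3.0 - 1.0) / 2 = 1.0 -> True
```

The scatter of the database residuals around the fitted map is what provides the quantified uncertainty on the inferred input.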

  2. Some aspects of the analysis of geodetic strain observations in kinematic models

    NASA Astrophysics Data System (ADS)

    Welsch, W. M.

    1986-11-01

    Frequently, deformation processes are analyzed in static models. In many cases, this procedure is justified, in particular if the deformation occurring is a singular event. If, however, the deformation is a continuous process, as is the case, for instance, with recent crustal movements, analysis in kinematic models is more commensurate with the problem, because the factor "time" is considered an essential part of the model. Some specialties have to be considered when analyzing geodetic strain observations in kinematic models; they are dealt with in this paper. After a brief derivation of the basic kinematic model and the kinematic strain model, the following subjects are treated: the adjustment of the pointwise velocity field and the derivation of strain-rate parameters; the fixing of the kinematic reference system as part of the geodetic datum; statistical tests of models by testing linear hypotheses; the invariance of kinematic strain-rate parameters with respect to transformations of the coordinate system and the geodetic datum; and the interpolation of strain rates by finite-element methods. After the representation of some advanced models for the description of secular and episodic kinematic processes, data analysis in dynamic models is regarded as a further generalization of deformation analysis.
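
The strain-rate quantities discussed above follow from the velocity-gradient tensor: its symmetric part is the strain-rate tensor and its antisymmetric part the rotation rate, and the invariants (dilatation rate, maximum shear strain rate) do not change under rotation of the coordinate axes, which is the invariance property noted in the abstract. The numbers below are synthetic.

```python
# Decompose a 2-D velocity-gradient tensor L (units: 1/yr) into strain-rate
# and rotation-rate components, and form the coordinate-invariant quantities
# used in geodetic deformation analysis. Values are synthetic.

L = [[0.10, 0.04],
     [0.00, -0.06]]   # [[du/dx, du/dy], [dv/dx, dv/dy]]

exx = L[0][0]                        # normal strain rates
eyy = L[1][1]
exy = 0.5 * (L[0][1] + L[1][0])      # shear strain rate (symmetric part)
rot = 0.5 * (L[1][0] - L[0][1])      # rotation rate (antisymmetric part)

dilatation = exx + eyy               # areal strain rate (invariant)
max_shear = ((0.5 * (exx - eyy)) ** 2 + exy ** 2) ** 0.5  # invariant

print(round(dilatation, 2), round(max_shear, 4), rot)  # -> 0.04 0.0825 -0.02
```

In practice the entries of L are themselves estimated by adjusting the pointwise velocity field, e.g. by least squares over a finite-element patch of stations.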

  3. Modelling a model?!! Prediction of observed and calculated daily pan evaporation in New Mexico, U.S.A.

    NASA Astrophysics Data System (ADS)

    Beriro, D. J.; Abrahart, R. J.; Nathanail, C. P.

    2012-04-01

    Data-driven modelling is most commonly used to develop predictive models that will simulate natural processes. This paper, in contrast, uses Gene Expression Programming (GEP) to construct two alternative models of different pan evaporation estimations by means of symbolic regression: a simulator, a model of a real-world process developed on observed records, and an emulator, an imitator of some other model developed on predicted outputs calculated by that source model. The solutions are compared and contrasted for the purposes of determining whether any substantial differences exist between either option. This analysis will address recent arguments over the impact of using downloaded hydrological modelling datasets originating from different initial sources, i.e. observed or calculated. These differences can easily be overlooked by modellers, resulting in a model of a model developed on estimations derived from deterministic empirical equations and producing exceptionally high goodness-of-fit. This paper uses different lines-of-evidence to evaluate model output and in so doing paves the way for a new protocol in machine learning applications. Transparent modelling tools such as symbolic regression offer huge potential for explaining stochastic processes; however, the basic tenets of data quality and recourse to first principles with regard to problem understanding should not be trivialised. GEP is found to be an effective tool for the prediction of observed and calculated pan evaporation, with results supported by an understanding of the records, and of the natural processes concerned, evaluated using one-at-a-time response function sensitivity analysis. The results show that both architectures and response functions are very similar, implying that previously observed differences in goodness-of-fit can be explained by whether models are applied to observed or calculated data.
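
One-at-a-time response-function sensitivity analysis, as used to evaluate the models above, can be sketched generically: perturb each input in turn about a base point and record the local slope of the output. The response function below is a made-up linear stand-in, not the GEP-evolved pan-evaporation model.

```python
# One-at-a-time (OAT) sensitivity analysis: bump each input by a small
# relative amount while holding the others fixed, and report the local
# slope of the response. The response function is a toy stand-in.

def response(temp, wind, humidity):
    return 0.5 * temp + 0.2 * wind - 0.3 * humidity   # toy evaporation proxy

base = {"temp": 20.0, "wind": 3.0, "humidity": 0.6}

def oat_sensitivity(model, base, delta=0.01):
    y0 = model(**base)
    sens = {}
    for name, value in base.items():
        bumped = dict(base, **{name: value * (1 + delta)})
        sens[name] = (model(**bumped) - y0) / (value * delta)   # local slope
    return sens

# For this linear toy the slopes recover the coefficients:
# temp ~ 0.5, wind ~ 0.2, humidity ~ -0.3.
print(oat_sensitivity(response, base))
```

For a nonlinear response (such as an evolved GEP expression) the slopes depend on the chosen base point, so the analysis is repeated at several representative points.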

  4. A study of the extended-range forecasting problem blocking

    NASA Technical Reports Server (NTRS)

    Chen, T. C.; Marshall, H. G.; Shukla, J.

    1981-01-01

    Wavenumber-frequency spectral analysis of a 90-day winter (Jan. 15 - April 14) wind field simulated by a climate experiment of the GLAS atmospheric circulation model is made using space-time Fourier analysis, modified with Tukey's numerical spectral analysis. Computations are also made to examine how the model wave disturbances in the wavenumber-frequency domain are maintained by nonlinear interactions. Results are compared with observations. It is found that equatorial easterlies do not show up in this climate experiment at 200 mb. The zonal kinetic energy and momentum transport of stationary waves are too small in the model's Northern Hemisphere. The wavenumber and frequency spectra of the model are generally in good agreement with observations. However, some distinct features of the model's spectra are revealed. The wavenumber spectra of kinetic energy show that the eastward-moving waves of low wavenumbers have stronger zonal motion, while the eastward-moving waves of intermediate wavenumbers have larger meridional motion compared with observations. Furthermore, the eastward-moving waves show a band of large spectral value in the medium-frequency regime.

  5. The Supernovae Analysis Application (SNAP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  6. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example, observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
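
The OSSE pattern that BEATBOX implements can be reduced to a scalar sketch: a "truth" run generates synthetic observations with known error, and a model run started from a biased state assimilates them with a Kalman-style update. The dynamics and error values below are arbitrary assumptions, not BOXMOX chemistry.

```python
import random

# Scalar OSSE sketch: nature run -> simulated observations with known
# error -> sequential Kalman-style assimilation into a biased model run.

random.seed(7)
decay, obs_err = 0.95, 0.1
truth, model = 10.0, 6.0      # model starts from a biased initial state
b, r = 1.0, obs_err ** 2      # background and observation error variances

for _ in range(40):
    truth *= decay                            # "nature run" step
    model *= decay                            # model forecast step
    y = truth + random.gauss(0, obs_err)      # simulated observation
    k = b / (b + r)                           # Kalman gain
    model += k * (y - model)                  # analysis update
    b = (1 - k) * b                           # analysis variance
    # (a real cycling system would re-inflate b during each forecast step)

print(abs(model - truth) < 0.5)  # -> True: assimilation pulls the model onto the truth
```

Because the truth is known by construction, the testbed can diagnose exactly how much of the remaining error comes from the observation error versus the model's initial bias.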

  7. The Supernovae Analysis Application (SNAP)

    DOE PAGES

    Bayless, Amanda J.; Fryer, Christopher Lee; Wollaeger, Ryan Thomas; ...

    2017-09-06

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  8. The Supernovae Analysis Application (SNAP)

    NASA Astrophysics Data System (ADS)

    Bayless, Amanda J.; Fryer, Chris L.; Wollaeger, Ryan; Wiggins, Brandon; Even, Wesley; de la Rosa, Janie; Roming, Peter W. A.; Frey, Lucy; Young, Patrick A.; Thorpe, Rob; Powell, Luke; Landers, Rachel; Persson, Heather D.; Hay, Rebecca

    2017-09-01

    The SuperNovae Analysis aPplication (SNAP) is a new tool for the analysis of SN observations and validation of SN models. SNAP consists of a publicly available relational database with observational light curve, theoretical light curve, and correlation table sets with statistical comparison software, and a web interface available to the community. The theoretical models are intended to span a gridded range of parameter space. The goal is to have users upload new SN models or new SN observations and run the comparison software to determine correlations via the website. There are problems looming on the horizon that SNAP is beginning to solve. For example, large surveys will discover thousands of SNe annually. Frequently, the parameter space of a new SN event is unbounded. SNAP will be a resource to constrain parameters and determine if an event needs follow-up without spending resources to create new light curve models from scratch. Second, there is no rapidly available, systematic way to determine degeneracies between parameters, or even what physics is needed to model a realistic SN. The correlations made within the SNAP system are beginning to solve these problems.

  9. Alternatives to Multilevel Modeling for the Analysis of Clustered Data

    ERIC Educational Resources Information Center

    Huang, Francis L.

    2016-01-01

    Multilevel modeling has grown in use over the years as a way to deal with the nonindependent nature of observations found in clustered data. However, other alternatives to multilevel modeling are available that can account for observations nested within clusters, including the use of Taylor series linearization for variance estimation, the design…

  10. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
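    For least-squares fits with Gaussian errors, the efficient information criteria named above reduce to closed-form expressions in the residual sum of squares. A minimal sketch (the residual sums of squares, parameter counts, and model names are invented for illustration, not the Maggia Valley results):

```python
import math

def aic_c(rss, n, k):
    """Corrected Akaike information criterion for a least-squares fit
    with Gaussian errors: n observations, k fitted parameters."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def bic(rss, n, k):
    """Bayesian information criterion for the same setting."""
    return n * math.log(rss / n) + k * math.log(n)

# Two hypothetical alternative hydraulic-conductivity models:
# a 2-parameter uniform-K model and a 5-parameter zoned-K model
n = 40
scores = {"uniform K": (aic_c(12.0, n, 2), bic(12.0, n, 2)),
          "zoned K":   (aic_c(9.5, n, 5),  bic(9.5, n, 5))}
```

    In this toy example the two criteria rank the models differently (AICc prefers the richer model, BIC the parsimonious one), which illustrates why comparing several criteria is informative.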

  11. The physical origin of the X-ray emission from SN 1987A

    NASA Astrophysics Data System (ADS)

    Miceli, M.; Orlando, S.; Petruk, O.

    2017-10-01

    We revisit the spectral analysis of the set of archive XMM-Newton observations of SN 1987A through our 3-D hydrodynamic model describing the whole evolution from the onset of the supernova to the full remnant development. For the first time the spectral analysis accounts for the single observations and for the evolution of the system self-consistently. We adopt a forward modeling approach which allows us to directly synthesize, from the model, X-ray spectra and images in different energy bands. We fold the synthetic observables through the XMM-Newton instrumental response and directly compare models and actual data. We find that our simulation provides an excellent fit to the data, by reproducing simultaneously X-ray fluxes, spectral features, and morphology of SN 1987A at all evolutionary stages. Our analysis enables us to gain deep insight into the physical origin of the observed multi-thermal emission, by revealing the contribution of shocked surrounding medium, dense clumps of the circumstellar ring, and ejecta to the total emission. We finally provide predictions for future observations (to be performed with XMM-Newton in the near future and with the forthcoming Athena X-ray telescope in approximately 10 years), showing the growing contribution of the ejecta X-ray emission.

  12. Uncertainty analysis of signal deconvolution using a measured instrument response function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartouni, E. P.; Beeman, B.; Caggiano, J. A.

    2016-10-05

    A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model’s parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
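    The structure of such a fit can be sketched as a forward model convolved with a measured IRF, compared to data through a Gaussian ln-likelihood. The traces and noise level below are made up for illustration; the actual nTOF analysis is far more detailed.

```python
import math

def convolve(signal, irf):
    """Discrete convolution of a model signal with a measured IRF."""
    out = [0.0] * (len(signal) + len(irf) - 1)
    for i, s in enumerate(signal):
        for j, r in enumerate(irf):
            out[i + j] += s * r
    return out

def neg_log_likelihood(data, model, sigma):
    """Gaussian -ln L between an observed trace and the
    IRF-convolved model trace, with per-point noise sigma."""
    return sum(0.5 * ((d - m) / sigma) ** 2 +
               0.5 * math.log(2 * math.pi * sigma ** 2)
               for d, m in zip(data, model))

# Hypothetical narrow physics signal, broadened by a normalised IRF
signal = [0.0, 1.0, 0.5, 0.0]
irf = [0.2, 0.6, 0.2]  # sums to 1, so total counts are preserved
observed_model = convolve(signal, irf)
```

    Propagating the IRF measurement uncertainty then amounts to letting the `irf` values vary within their errors and marginalising, rather than treating them as fixed.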

  13. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    NASA Technical Reports Server (NTRS)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  14. Maximum likelihood-based analysis of single-molecule photon arrival trajectories

    NASA Astrophysics Data System (ADS)

    Hajdziona, Marta; Molski, Andrzej

    2011-02-01

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.

  15. On the Relationship between Observed NLDN Lightning ...

    EPA Pesticide Factsheets

    Lightning-produced nitrogen oxides (NOX=NO+NO2) in the middle and upper troposphere play an essential role in the production of ozone (O3) and influence the oxidizing capacity of the troposphere. Despite much effort in both observing and modeling lightning NOX during the past decade, considerable uncertainties still exist with the quantification of lightning NOX production and distribution in the troposphere. It is even more challenging for regional chemistry and transport models to accurately parameterize lightning NOX production and distribution in time and space. The Community Multiscale Air Quality Model (CMAQ) parameterizes the lightning NO emissions using local scaling factors adjusted by the convective precipitation rate that is predicted by the upstream meteorological model; the adjustment is based on the observed lightning strikes from the National Lightning Detection Network (NLDN). For this parameterization to be valid, the existence of an a priori reasonable relationship between the observed lightning strikes and the modeled convective precipitation rates is needed. In this study, we will present an analysis leveraging the observed NLDN lightning strikes and CMAQ model simulations over the continental United States for a time period spanning over a decade. Based on the analysis, a new parameterization scheme for lightning NOX will be proposed and the results will be evaluated. The proposed scheme will be beneficial to modeling exercises where the obs

  16. Recent developments in imaging system assessment methodology, FROC analysis and the search model.

    PubMed

    Chakraborty, Dev P

    2011-08-21

    A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search-model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.

  17. Linear and Poisson models for genetic evaluation of tick resistance in cross-bred Hereford x Nellore cattle.

    PubMed

    Ayres, D R; Pereira, R J; Boligon, A A; Silva, F F; Schenkel, F S; Roso, V M; Albuquerque, L G

    2013-12-01

    Cattle resistance to ticks is measured by the number of ticks infesting the animal. The model used for the genetic analysis of cattle resistance to ticks frequently requires logarithmic transformation of the observations. The objective of this study was to evaluate the predictive ability and goodness of fit of different models for the analysis of this trait in cross-bred Hereford x Nellore cattle. Three models were tested: a linear model using logarithmic transformation of the observations (MLOG); a linear model without transformation of the observations (MLIN); and a generalized linear Poisson model with residual term (MPOI). All models included the classificatory effects of contemporary group and genetic group and the covariates age of animal at the time of recording and individual heterozygosis, as well as additive genetic effects as random effects. Heritability estimates were 0.08 ± 0.02, 0.10 ± 0.02 and 0.14 ± 0.04 for MLIN, MLOG and MPOI models, respectively. The model fit quality, verified by the deviance information criterion (DIC) and residual mean square, indicated the superior fit of the MPOI model. The predictive ability of the models was compared by a validation test in an independent sample. The MPOI model was slightly superior in terms of goodness of fit and predictive ability, whereas the correlations between observed and predicted tick counts were practically the same for all models. A higher rank correlation between breeding values was observed between models MLOG and MPOI. The Poisson model can be used for the selection of tick-resistant animals. © 2013 Blackwell Verlag GmbH.
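    The contrast between the transformed and count-based approaches can be sketched directly: the log-linear analysis needs log(y + 1) to accommodate zero counts, while an intercept-only Poisson model works on the raw counts. The tick counts below are hypothetical, and the sketch omits all the fixed and random effects of the actual models.

```python
import math

ticks = [0, 1, 0, 3, 2, 0, 5, 1, 0, 2]  # hypothetical per-animal tick counts

# Log-transformed analysis: zeros force the log(y + 1) transformation
log_counts = [math.log(y + 1) for y in ticks]

# Intercept-only Poisson model: the MLE of the rate is the sample mean,
# and the log-likelihood uses the raw counts with no transformation
lam = sum(ticks) / len(ticks)
loglik = sum(-lam + y * math.log(lam) - math.lgamma(y + 1) for y in ticks)
```

    The Poisson likelihood respects the discreteness and the mean-variance link of count data, which is the usual argument for preferring it over transformed linear models.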

  18. An ideal observer analysis of visual working memory.

    PubMed

    Sims, Chris R; Jacobs, Robert A; Knill, David C

    2012-10-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this article we develop an ideal observer analysis of human VWM by deriving the expected behavior of an optimally performing but limited-capacity memory system. This analysis is framed around rate-distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in 2 empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (e.g., how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis, one that allows variability in the number of stored memory representations but does not assume the presence of a fixed item limit, provides an excellent account of the empirical data and further offers a principled reinterpretation of existing models of VWM. © 2012 APA, all rights reserved.
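    For a Gaussian source, rate-distortion theory gives the error floor in closed form, D(R) = σ²·2^(−2R): no encoder using R bits can achieve a smaller mean-squared error. A sketch of how splitting a fixed capacity across more remembered items raises the per-item error floor (the capacity value is an arbitrary illustration, not an estimate from the paper):

```python
def gaussian_distortion_bound(sigma2, rate_bits):
    """Minimum achievable mean-squared error when encoding a Gaussian
    source of variance sigma2 at rate_bits bits (rate-distortion bound)."""
    return sigma2 * 2.0 ** (-2.0 * rate_bits)

# Hypothetical total memory capacity in bits, divided evenly across items:
# more items means fewer bits per item and a higher error floor per item
capacity = 6.0
errors = {n: gaussian_distortion_bound(1.0, capacity / n) for n in (1, 2, 4)}
```

    This monotone trade-off between set size and per-item fidelity is the qualitative prediction the ideal observer analysis formalizes.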

  19. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses

    PubMed Central

    Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.

    2015-01-01

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
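    The down-weighting idea can be sketched with plain inverse-variance weights: observations from a high-variance sample contribute less to the estimate than under equal weighting. The values below are invented, and the actual method estimates the sample variance factors from a log-linear variance model and combines them with voom's observation-level mean-variance weights.

```python
# Hypothetical log-expression values for one gene across four samples;
# the last sample is of poor quality (large variance)
vals = [2.1, 1.9, 2.0, 3.2]
variances = [0.05, 0.05, 0.05, 0.8]

# Precision (inverse-variance) weights shrink the noisy sample's influence
weights = [1.0 / v for v in variances]
wmean = sum(w * x for w, x in zip(weights, vals)) / sum(weights)
plain = sum(vals) / len(vals)
```

    Compared with dropping the noisy sample outright, weighting retains its (discounted) information, which is the compromise the abstract describes.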

  20. FUSE Observations of Galactic and LMC Novae in Outburst

    NASA Technical Reports Server (NTRS)

    Huschildt, P. H.

    2001-01-01

    This document is a collection of five abstracts from papers written on the 'FUSE Observations of Galactic and LMC Novae in Outburst'. The titles are the following: (1) Analyzing FUSE Observations of Galactic and LMC Novae; (2) Detailed NLTE Model Atmospheres for Novae during Outburst: Modeling Optical and Ultraviolet Observations for Nova LMC 1988; (3) Numerical Solution of the Expanding Stellar Atmosphere Problem; (4) A Non-LTE Line-Blanketed Expanding Atmosphere Model for A-supergiant Alpha Cygni; and (5) Non-LTE Model Atmosphere Analysis of the Early Ultraviolet Spectra of Nova Andromedae 1986. A list of journal publications is also included.

  1. Simultaneous assimilation of AIRS Xco2 and meteorological observations in a carbon climate model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun; Olsen, Edward T.; Chen, Luke

    2012-03-01

    This study is our first step toward the generation of 6 hourly 3-D CO2 fields that can be used to validate CO2 forecast models by combining CO2 observations from multiple sources using ensemble Kalman filtering. We discuss a procedure to assimilate Atmospheric Infrared Sounder (AIRS) column-averaged dry-air mole fraction of CO2 (Xco2) in conjunction with meteorological observations with the coupled Local Ensemble Transform Kalman Filter (LETKF)-Community Atmospheric Model version 3.5. We examine the impact of assimilating AIRS Xco2 observations on CO2 fields by comparing the results from the AIRS-run, which assimilates both AIRS Xco2 and meteorological observations, to those from the meteor-run, which only assimilates meteorological observations. We find that assimilating AIRS Xco2 results in a surface CO2 seasonal cycle and the N-S surface gradient closer to the observations. When taking account of the CO2 uncertainty estimation from the LETKF, the CO2 analysis brackets the observed seasonal cycle. Verification against independent aircraft observations shows that assimilating AIRS Xco2 improves the accuracy of the CO2 vertical profiles by about 0.5-2 ppm depending on location and altitude. The results show that the CO2 analysis ensemble spread at AIRS Xco2 space is between 0.5 and 2 ppm, and the CO2 analysis ensemble spread around the peak level of the averaging kernels is between 1 and 2 ppm. This uncertainty estimation is consistent with the magnitude of the CO2 analysis error verified against AIRS Xco2 observations and the independent aircraft CO2 vertical profiles.

  2. Comparison of dark energy models after Planck 2015

    NASA Astrophysics Data System (ADS)

    Xu, Yue-Yao; Zhang, Xin

    2016-11-01

    We make a comparison for ten typical, popular dark energy models according to their capabilities of fitting the current observational data. The observational data we use in this work include the JLA sample of type Ia supernovae observation, the Planck 2015 distance priors of cosmic microwave background observation, the baryon acoustic oscillations measurements, and the direct measurement of the Hubble constant. Since the models have different numbers of parameters, in order to make a fair comparison, we employ the Akaike and Bayesian information criteria to assess the worth of the models. The analysis results show that, according to the capability of explaining observations, the cosmological constant model is still the best one among all the dark energy models. The generalized Chaplygin gas model, the constant w model, and the α dark energy model are worse than the cosmological constant model, but still are good models compared to others. The holographic dark energy model, the new generalized Chaplygin gas model, and the Chevallier-Polarski-Linder model can still fit the current observations well, but from the perspective of model economy, they are not so good. The new agegraphic dark energy model, the Dvali-Gabadadze-Porrati model, and the Ricci dark energy model are excluded by the current observations.

  3. Application of Wavelet Filters in an Evaluation of ...

    EPA Pesticide Factsheets

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollu
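    A minimal sketch of time-scale decomposition, using one level of the unnormalised Haar transform rather than the wavelet filters of the study: pairwise averages carry the slower scales, pairwise differences the fastest scale, and the two together reconstruct the series exactly. The series values are invented.

```python
def haar_step(x):
    """One level of the (unnormalised) Haar transform: pairwise
    averages (coarse/slow scale) and differences (detail/fast scale)."""
    avg = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    det = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return avg, det

# Hypothetical hourly ozone-like series, length 8
series = [30.0, 34.0, 41.0, 45.0, 38.0, 36.0, 29.0, 27.0]
coarse, detail = haar_step(series)

# Perfect reconstruction: average + difference recovers the even sample,
# average - difference recovers the odd sample
recon = []
for a, d in zip(coarse, detail):
    recon += [a + d, a - d]
```

    Applying the step recursively to the coarse part yields the seasonal-to-intra-day ladder of scales, and amplitude/phase comparisons can then be made scale by scale.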

  4. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  5. Comparison of Forecast and Observed Energetics

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Brin, Y.

    1984-01-01

    An energetics analysis scheme was developed to compare the observed kinetic energy balance over North America with that derived from forecast fields of the GLAS fourth order model for the 13 to 15 January 1979 cyclone case. It is found that: (1) the observed and predicted kinetic energy and eddy conversion are in good qualitative agreement, although the model eddy conversion tends to be 2 to 3 times stronger than the observed values; the eddy conversion, which is stronger in the 12 h forecast than in the observations, may be due to several factors and is examined; (2) vertical profiles of kinetic energy generation and dissipation exhibit lower and upper tropospheric maxima in both the forecast and observations; (3) a lag in the observational analysis is noted, with the maximum in the observed kinetic energy occurring at 0000 GMT 14 January over the same region as the maximum eddy conversion 12 h earlier.

  6. Limb-darkening and the structure of the Jovian atmosphere

    NASA Technical Reports Server (NTRS)

    Newman, W. I.; Sagan, C.

    1978-01-01

    By observing the transit of various cloud features across the Jovian disk, limb-darkening curves were constructed for three regions in the 4.6 to 5.1 μm band. Several models currently employed in describing the radiative or dynamical properties of planetary atmospheres are here examined to understand their implications for limb-darkening. The statistical problem of fitting these models to the observed data is reviewed and methods for applying multiple regression analysis are discussed. Analysis of variance techniques are introduced to test the viability of a given physical process as a cause of the observed limb-darkening.

  7. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well.
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
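    One widely used information theory quantifier, permutation entropy, can be sketched in a few lines; the paper's exact set of quantifiers may differ, and the series below are invented. It measures how evenly a series spreads over the possible ordinal patterns of short windows.

```python
import math

def permutation_entropy(x, order=3):
    """Normalised permutation entropy: Shannon entropy of the ordinal
    patterns of length `order`, divided by its maximum log(order!)."""
    counts = {}
    for i in range(len(x) - order + 1):
        # ordinal pattern: argsort of the window (ties broken by position)
        pattern = tuple(sorted(range(order), key=lambda k: x[i + k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order))

# A strictly increasing series visits a single ordinal pattern (entropy 0);
# an irregular series spreads over many patterns (entropy closer to 1)
trend = permutation_entropy(list(range(20)))
rough = permutation_entropy([3, 1, 4, 1, 5, 9, 2, 6, 5, 3,
                             5, 8, 9, 7, 9, 3, 2, 3, 8, 4])
```

    Because it depends only on the order of values, the measure is robust to monotone distortions and observational noise floors, which is the property the abstract emphasises.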

  8. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well.
Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time series. At global scale, our understanding of C fluxes relies on the use of consistently applied land models. Here, we use ITQ to evaluate model structure: The measures are largely insensitive to climatic scenarios, land use and atmospheric gas concentrations used to drive them, but clearly separate the structure of 13 different land models taken from the CMIP5 archive and an observations-based product. In conclusion, diagnostic measures of this kind provide data-analytical tools that distinguish different types of natural processes based solely on their dynamics, and are thus highly suitable for environmental science applications such as model structural diagnostics.
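
As an illustration of the kind of quantifier the ITQ family builds on, the sketch below computes normalized permutation entropy (Bandt-Pompe), a standard information-content measure for time-series dynamics. This is a generic sketch, not the authors' code; the complexity-entropy measures used in the paper combine such an entropy with a disequilibrium term.

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(x, order=3):
    """Normalized Bandt-Pompe permutation entropy of a 1-D time series.

    Counts the relative frequency of ordinal patterns in sliding windows
    of length `order`, then returns the Shannon entropy of that pattern
    distribution normalized to [0, 1] by log2(order!).
    """
    x = np.asarray(x, dtype=float)
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - order + 1
    for i in range(n):
        counts[tuple(np.argsort(x[i:i + order]))] += 1  # ordinal pattern of window
    freqs = np.array([c for c in counts.values() if c > 0], dtype=float) / n
    return float(-np.sum(freqs * np.log2(freqs)) / math.log2(math.factorial(order)))
```

A strictly monotonic series yields entropy 0 (a single ordinal pattern), while white noise approaches 1, which is what makes the measure sensitive to temporal structure rather than magnitude.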

  9. Assimilation of temperature and salinity profile data in the Norwegian Climate Prediction Model

    NASA Astrophysics Data System (ADS)

    Wang, Yiguo; Counillon, Francois; Bertino, Laurent; Bethke, Ingo; Keenlyside, Noel

    2016-04-01

Assimilating temperature and salinity profile data is a promising way to constrain the ocean component of Earth system models for seasonal-to-decadal climate prediction. However, assimilating temperature and salinity profiles measured in standard depth coordinates (z-coordinates) into isopycnic-coordinate ocean models, which are discretised by water density, is challenging. Prior studies (Thacker and Esenkov, 2002; Xie and Zhu, 2010) suggested that converting observations to the model coordinate (i.e. computing innovations in isopycnic coordinates) performs better than interpolating the model state to the observation coordinate (i.e. innovations in z-coordinates). This problem is revisited here with the Norwegian Climate Prediction Model (NorCPM), which applies the ensemble Kalman filter (EnKF) to the isopycnic ocean model (MICOM) of the Norwegian Earth System Model. We perform Observing System Simulation Experiments (OSSEs) to compare the two schemes (EnKF-z and EnKF-ρ). In the OSSEs, the truth is set to the EN4 objective analyses and observations are perturbations of the truth with added white noise. Unlike in previous studies, it is found that EnKF-z outperforms EnKF-ρ for different observed vertical resolutions, inhomogeneous sampling (e.g. observations in the upper 1000 m only), and a lack of salinity measurements. That is mostly because the operator converting observations into isopycnic coordinates is strongly non-linear. We also study the horizontal localisation radius at selected grid points. Finally, we perform EnKF-z with the chosen localisation radius in a realistic framework with NorCPM over a 5-year analysis period. The analysis is validated against several independent datasets.
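
For readers unfamiliar with the analysis step shared by both schemes, here is a minimal stochastic-EnKF update for a generic state; the EnKF-z/EnKF-ρ distinction then amounts to which coordinate the observation operator H maps into. This is a textbook sketch, not NorCPM code, and H is taken linear for simplicity.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic EnKF analysis step with perturbed observations.

    X : (n, N) ensemble of model states (N members)
    y : (m,)   observation vector
    H : (m, n) linear observation operator
    R : (m, m) observation-error covariance
    """
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observation-space anomalies
    # Kalman gain K = Pxy (Pyy + R)^-1, with sample covariances /(N-1) cancelled out
    K = (A @ HA.T) @ np.linalg.inv(HA @ HA.T + (N - 1) * R)
    Yp = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Yp - HX)                       # analysis ensemble
```

The innovations (Yp - HX) and the analysis increments K @ (Yp - HX) in this update are exactly the quantities whose statistics post-assimilation diagnostics examine.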

  10. Mixture Factor Analysis for Approximating a Nonnormally Distributed Continuous Latent Factor with Continuous and Dichotomous Observed Variables

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Guo, Jia; Amemiya, Yasuo

    2012-01-01

    Mixture factor analysis is examined as a means of flexibly estimating nonnormally distributed continuous latent factors in the presence of both continuous and dichotomous observed variables. A simulation study compares mixture factor analysis with normal maximum likelihood (ML) latent factor modeling. Different results emerge for continuous versus…

  11. Accuracy of latent-variable estimation in Bayesian semi-supervised learning.

    PubMed

    Yamazaki, Keisuke

    2015-09-01

    Hierarchical probabilistic models, such as Gaussian mixture models, are widely used for unsupervised learning tasks. These models consist of observable and latent variables, which represent the observable data and the underlying data-generation process, respectively. Unsupervised learning tasks, such as cluster analysis, are regarded as estimations of latent variables based on the observable ones. The estimation of latent variables in semi-supervised learning, where some labels are observed, will be more precise than that in unsupervised, and one of the concerns is to clarify the effect of the labeled data. However, there has not been sufficient theoretical analysis of the accuracy of the estimation of latent variables. In a previous study, a distribution-based error function was formulated, and its asymptotic form was calculated for unsupervised learning with generative models. It has been shown that, for the estimation of latent variables, the Bayes method is more accurate than the maximum-likelihood method. The present paper reveals the asymptotic forms of the error function in Bayesian semi-supervised learning for both discriminative and generative models. The results show that the generative model, which uses all of the given data, performs better when the model is well specified. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Viscoplastic analysis of an experimental cylindrical thrust chamber liner

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Arnold, Steven M.

    1991-01-01

A viscoplastic stress-strain analysis of an experimental cylindrical thrust chamber is presented. A viscoplastic constitutive model incorporating a single internal state variable that represents kinematic hardening was employed to investigate whether such a model could predict the experimentally observed behavior of the thrust chamber. Two types of loading cycles were considered: a short cycle of 3.5 sec duration that corresponded to the experiments, and an extended loading cycle of 485.1 sec duration that is typical of the Space Shuttle Main Engine (SSME) operating cycle. The analysis qualitatively replicated the deformation behavior of the component as observed in experiments designed to simulate SSME operating conditions. The analysis also showed that the mode and location of deformation in the component may depend on the loading cycle. The results indicate that using viscoplastic models for structural analysis can lead to a more realistic life assessment of thrust chambers.

  13. Periodic Properties and Inquiry: Student Mental Models Observed during a Periodic Table Puzzle Activity

    ERIC Educational Resources Information Center

    Larson, Kathleen G.; Long, George R.; Briggs, Michael W.

    2012-01-01

    The mental models of both novice and advanced chemistry students were observed while the students performed a periodic table activity. The mental model framework seems to be an effective way of analyzing student behavior during learning activities. The analysis suggests that students do not recognize periodic trends through the examination of…

  14. On the Choice of Variable for Atmospheric Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; DaSilva, Arlindo M.; Atlas, Robert (Technical Monitor)

    2002-01-01

The implications of using different control variables for the analysis of moisture observations in a global atmospheric data assimilation system are investigated. A moisture analysis based on either mixing ratio or specific humidity is prone to large extrapolation errors, due to the high variability in space and time of these parameters and to the difficulties in modeling their error covariances. Using the logarithm of specific humidity does not alleviate these problems, and has the further disadvantage that very dry background estimates cannot be effectively corrected by observations. Relative humidity is a better choice from a statistical point of view, because this field is spatially and temporally more coherent and error statistics are therefore easier to obtain. If, however, the analysis is designed to preserve relative humidity in the absence of moisture observations, then the analyzed specific humidity field depends entirely on analyzed temperature changes. If the model has a cool bias in the stratosphere, this will lead to an unstable accumulation of excess moisture there. A pseudo-relative humidity can be defined by scaling the mixing ratio by the background saturation mixing ratio. A univariate pseudo-relative humidity analysis will preserve the specific humidity field in the absence of moisture observations. A pseudo-relative humidity analysis is shown to be equivalent to a mixing ratio analysis with flow-dependent covariances. In the presence of multivariate (temperature-moisture) observations it produces analyzed relative humidity values that are nearly identical to those produced by a relative humidity analysis. Based on a time series analysis of radiosonde observed-minus-background differences, it appears to be more justifiable to neglect specific humidity-temperature correlations (in a univariate pseudo-relative humidity analysis) than to neglect relative humidity-temperature correlations (in a univariate relative humidity analysis).
A pseudo-relative humidity analysis is easily implemented in an existing moisture analysis system, by simply scaling observed-minus-background moisture residuals prior to solving the analysis equation, and rescaling the analyzed increments afterward.
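
The scaling just described can be sketched in a few lines. Here `solve_increment` is a hypothetical placeholder for whatever univariate analysis solver the system already has; only the scale/rescale wrapper is the point.

```python
import numpy as np

def pseudo_rh_step(q_obs, q_bg, qsat_bg, solve_increment):
    """Pseudo-relative-humidity moisture analysis:
    scale obs-minus-background residuals by the background saturation
    mixing ratio, solve the analysis, then rescale the increments."""
    residual = (q_obs - q_bg) / qsat_bg      # pseudo-RH residual
    increment = solve_increment(residual)    # existing univariate analysis solver
    return q_bg + increment * qsat_bg        # analyzed mixing ratio
```

With an identity solver the scaling round-trips exactly, which is what makes the approach a drop-in change to an existing mixing-ratio analysis.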

  15. Modeling Chinese ionospheric layer parameters based on EOF analysis

    NASA Astrophysics Data System (ADS)

    Yu, You; Wan, Weixing; Xiong, Bo; Ren, Zhipeng; Zhao, Biqiang; Zhang, Yun; Ning, Baiqi; Liu, Libo

    2015-05-01

Using observations from 24 ionosondes in and around China during the 20th solar cycle, an assimilative model is constructed to map the ionospheric layer parameters (foF2, hmF2, M(3000)F2, and foE) over China based on empirical orthogonal function (EOF) analysis. First, we decompose the background maps from the International Reference Ionosphere model 2007 (IRI-07) into different EOF modes. The obtained EOF modes consist of two factors: the EOF patterns and the corresponding EOF amplitudes. These two factors respectively reflect the spatial distributions (e.g., the latitudinal dependence, such as the equatorial ionization anomaly structure, and the longitudinal structure with east-west differences) and the temporal variations on different time scales (e.g., solar cycle, annual, semiannual, and diurnal variations) of the layer parameters. Then, the EOF patterns and long-term ionosonde observations are assimilated to obtain the observed EOF amplitudes, which are further used to construct the Chinese Ionospheric Maps (CIMs) of the layer parameters. In contrast with the IRI-07 model, the mapped CIMs successfully capture the inherent temporal and spatial variations of the ionospheric layer parameters. Finally, comparison of the modeled (EOF and IRI-07) and observed values reveals that the EOF model reproduces the observations with smaller root-mean-square errors and higher linear correlation coefficients. In addition, the IRI discrepancy at low latitudes, especially for foF2, is effectively removed by the EOF model.
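
The two-step recipe (EOF decomposition of background maps, then fitting amplitudes to sparse station observations) can be sketched generically, assuming a (time x space) array of background maps; the IRI background, parameter choice, and ionosonde locations are of course specific to the paper.

```python
import numpy as np

def eof_decompose(maps, n_modes):
    """Decompose a (time, space) field into EOF patterns and amplitudes via SVD."""
    mean = maps.mean(axis=0)
    U, s, Vt = np.linalg.svd(maps - mean, full_matrices=False)
    patterns = Vt[:n_modes]                      # spatial EOF patterns
    amplitudes = U[:, :n_modes] * s[:n_modes]    # temporal amplitudes
    return mean, patterns, amplitudes

def fit_amplitudes(obs, obs_idx, mean, patterns):
    """Assimilation-like step: least-squares fit of EOF amplitudes to station obs,
    then reconstruction of the full map from the fitted amplitudes."""
    G = patterns[:, obs_idx].T                   # patterns sampled at stations
    a, *_ = np.linalg.lstsq(G, obs - mean[obs_idx], rcond=None)
    return mean + a @ patterns                   # reconstructed map
```

Because the patterns carry the spatial structure, a handful of stations suffices to pin down the amplitudes and hence the whole map.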

  16. Modeling Chinese ionospheric layer parameters based on EOF analysis

    NASA Astrophysics Data System (ADS)

    Yu, You; Wan, Weixing

    2016-04-01

Using observations from 24 ionosondes in and around China during the 20th solar cycle, an assimilative model is constructed to map the ionospheric layer parameters (foF2, hmF2, M(3000)F2, and foE) over China based on empirical orthogonal function (EOF) analysis. First, we decompose the background maps from the International Reference Ionosphere model 2007 (IRI-07) into different EOF modes. The obtained EOF modes consist of two factors: the EOF patterns and the corresponding EOF amplitudes. These two factors respectively reflect the spatial distributions (e.g., the latitudinal dependence, such as the equatorial ionization anomaly structure, and the longitudinal structure with east-west differences) and the temporal variations on different time scales (e.g., solar cycle, annual, semiannual, and diurnal variations) of the layer parameters. Then, the EOF patterns and long-term ionosonde observations are assimilated to obtain the observed EOF amplitudes, which are further used to construct the Chinese Ionospheric Maps (CIMs) of the layer parameters. In contrast with the IRI-07 model, the mapped CIMs successfully capture the inherent temporal and spatial variations of the ionospheric layer parameters. Finally, comparison of the modeled (EOF and IRI-07) and observed values reveals that the EOF model reproduces the observations with smaller root-mean-square errors and higher linear correlation coefficients. In addition, the IRI discrepancy at low latitudes, especially for foF2, is effectively removed by the EOF model.

  17. CRYSTAL-FACE Analysis and Simulations of the July 23rd Extended Anvil Case

    NASA Technical Reports Server (NTRS)

    Starr, David

    2003-01-01

A key focus of CRYSTAL-FACE (Cirrus Regional Study of Tropical Anvils and cirrus Layers - Florida Area Cirrus Experiment) was the generation and subsequent evolution of cirrus outflow from deep convective cloud systems. The relevant theoretical background and motivations will be discussed. An integrated look at the observations of an extended cirrus anvil cloud system observed on 23 July 2002 will be presented, including lidar and millimeter radar observations from NASA's ER-2 and in-situ observations from NASA's WB-57 and the University of North Dakota Citation. The observations will be compared to results of simulations using 1-D and 2-D high-resolution (100 m) cloud-resolving models. The CRMs explicitly account for cirrus microphysical development by resolving the evolving ice crystal size distribution (bin model) in time and space. Both homogeneous and heterogeneous nucleation are allowed in the model. The CRM simulations are driven using the output of regional simulations with MM5 that produce deep convection similar to what was observed. The MM5 model employs a 2-km inner grid (32 layers) over a 360-km domain, nested within a 6-km grid over a 600-km domain. Initial and boundary conditions for the 36-hour MM5 simulation are taken from the NCEP Eta model analysis at 32-km resolution. Key issues to be explored are the settling of the observed anvil versus the model simulations, and comparisons of dynamical properties, such as vertical motions, occurring in the observations and models. The former provides an integrated measure of the validity of the model microphysics (fall speed) while the latter is the key factor in forcing continued ice generation.

  18. Using Latent Class Analysis to Model Temperament Types

    ERIC Educational Resources Information Center

    Loken, Eric

    2004-01-01

    Mixture models are appropriate for data that arise from a set of qualitatively different subpopulations. In this study, latent class analysis was applied to observational data from a laboratory assessment of infant temperament at four months of age. The EM algorithm was used to fit the models, and the Bayesian method of posterior predictive checks…

  19. Research Review, 1984

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A variety of topics relevant to global modeling and simulation are presented. Areas of interest include: (1) analysis and forecast studies; (2) satellite observing systems; (3) analysis and forecast model development; (4) atmospheric dynamics and diagnostic studies; (5) climate/ocean-air interactions; and notes from lectures.

  20. Review: To be or not to be an identifiable model. Is this a relevant question in animal science modelling?

    PubMed

    Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P

    2018-04-01

    What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. 
By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
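
The notion can be made concrete with a toy example (not from the paper): for dx/dt = -(k1 + k2)·x with only y = x observed, the sum k1 + k2 is structurally identifiable but the individual rates are not, as a numerical check makes obvious.

```python
import numpy as np

def observed_output(k1, k2, t, x0=1.0):
    """Observable y(t) = x(t) for the model dx/dt = -(k1 + k2) * x, x(0) = x0."""
    return x0 * np.exp(-(k1 + k2) * t)

t = np.linspace(0.0, 5.0, 50)
y_a = observed_output(0.3, 0.7, t)   # k1 + k2 = 1.0
y_b = observed_output(0.5, 0.5, t)   # k1 + k2 = 1.0, different split
# The two outputs coincide for all t: no experiment measuring y alone can
# distinguish (0.3, 0.7) from (0.5, 0.5), so k1 and k2 are individually
# non-identifiable while their sum is identifiable.
```

Structural identifiability software automates exactly this kind of check symbolically, before any data are collected.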

  1. An Overview of Atmospheric Composition OSSE Activities at NASA's Global Modeling and Assimilation Office

    NASA Technical Reports Server (NTRS)

daSilva, Arlindo

    2012-01-01

A model-based Observing System Simulation Experiment (OSSE) is a framework for numerical experimentation in which observables are simulated from fields generated by an earth system model, including a parameterized description of observational error characteristics. Simulated observations can be used for sampling studies, for quantifying errors in analysis or retrieval algorithms, and ultimately as a planning tool for designing new observing missions. While this framework has traditionally been used to assess the impact of observations on numerical weather prediction, it has a much broader applicability, in particular to aerosols and chemical constituents. In this talk we will give a general overview of OSSE activities at NASA's Global Modeling and Assimilation Office, with a focus on their emerging atmospheric composition component.

  2. Vertical structure and physical processes of the Madden-Julian Oscillation: Biases and uncertainties at short range

    NASA Astrophysics Data System (ADS)

    Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; Woolnough, Steve J.; Jiang, Xianan; Waliser, Duane E.; Caian, Mihaela; Cole, Jason; Hagos, Samson M.; Hannay, Cecile; Kim, Daehyun; Miyakawa, Tomoki; Pritchard, Michael S.; Roehrig, Romain; Shindo, Eiki; Vitart, Frederic; Wang, Hailan

    2015-05-01

An analysis of diabatic heating and moistening processes at 12 to 36 h lead time in forecasts from 12 general circulation models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and the YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models exhibit strong intermittency in rainfall, and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

  3. Circular analysis in complex stochastic systems

    PubMed Central

    Valleriani, Angelo

    2015-01-01

Ruling out observations can lead to wrong models. This danger arises unwittingly when one selects observations, experiments, simulations or time series based on their outcome. In stochastic processes, conditioning on the future outcome biases all local transition probabilities and makes them consistent with the selected outcome. This circular self-consistency leads to models that are inconsistent with physical reality. It is also the reason why models built solely on macroscopic observations are prone to this fallacy. PMID:26656656
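
The mechanism is easy to reproduce numerically: generate fair ±1 random-walk steps, then keep only trajectories with a favorable final outcome; the per-step statistics of the selected subset no longer reflect the generating process. This is a generic demonstration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
steps = rng.choice([-1, 1], size=(20000, 10))    # fair +/-1 steps, p(up) = 0.5
selected = steps[steps.sum(axis=1) >= 4]         # keep walks ending well above start
p_up_selected = (selected == 1).mean()
# Conditioning on the outcome inflates the apparent up-step probability well
# above 0.5, even though every step was generated with p = 0.5: a model fit
# to the selected trajectories would infer biased transition probabilities.
```

Any selected row must contain at least 7 up-steps out of 10, so the conditional up-step frequency is at least 0.7 by construction.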

  4. Analyzing the carbon cycle with the local ensemble transform Kalman filter, online transport model and real observation data

    NASA Astrophysics Data System (ADS)

    Maki, T.; Sekiyama, T. T.; Shibata, K.; Miyazaki, K.; Miyoshi, T.; Yamada, K.; Yokoo, Y.; Iwasaki, T.

    2011-12-01

In current carbon cycle analysis, inverse modeling plays an important role. However, it requires enormous computational resources as the number of flux regions and observations grows. The local ensemble transform Kalman filter (LETKF) is an alternative approach that reduces such problems. We constructed a carbon cycle analysis system with the LETKF and the MRI (Meteorological Research Institute) online transport model (MJ98-CDTM). In MJ98-CDTM, an off-line transport model (CDTM) is directly coupled with the MRI/JMA GCM (MJ98). We further improved the vertical transport processes in MJ98-CDTM relative to a previous study. The LETKF includes enhanced features such as a smoother to assimilate future observations, adaptive inflation, and a bias correction scheme. In this study, we use CO2 observations from surface sites (continuous and flask), aircraft (CONTRAIL), and satellites (GOSAT); we also plan to assimilate AIRS tropospheric CO2 data. We developed a quality control system. We estimated 3-day-mean CO2 fluxes at a resolution of T42. Here, only CO2 concentrations and fluxes are analyzed, whereas meteorological fields are nudged by the Japanese reanalysis (JCDAS). The horizontal localization length scale and assimilation window are chosen to be 1000 km and 3 days, respectively. The results indicate that the assimilation system works properly, performing better than a free transport model run when validated against independent CO2 concentration observations and CO2 analysis data.

  5. A WRF-Chem Analysis of Flash Rates, Lightning-NOx Production and Subsequent Trace Gas Chemistry of the 29-30 May 2012 Convective Event in Oklahoma During DC3

    NASA Technical Reports Server (NTRS)

Cummings, Kristin A.; Pickering, Kenneth; Barth, Mary; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.; et al.

    2015-01-01

    The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). This is a continuation of previous work, which compared lightning observations (Oklahoma Lightning Mapping Array and National Lightning Detection Network) with flashes generated by various flash rate parameterization schemes (FRPSs) from the literature in a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the 29-30 May 2012 Oklahoma thunderstorm. Based on the Oklahoma radar observations and Lightning Mapping Array data, new FRPSs are being generated and incorporated into the model. The focus of this analysis is on estimating the amount of lightning-generated nitrogen oxides (LNOx) produced per flash in this storm through a series of model simulations using different production per flash assumptions and comparisons with DC3 aircraft anvil observations. The result of this analysis will be compared with previously studied mid-latitude storms. Additional model simulations are conducted to investigate the upper troposphere transport, distribution, and chemistry of the LNOx plume during the 24 hours following the convective event to investigate ozone production. These model-simulated mixing ratios are compared against the aircraft observations made on 30 May over the southern Appalachians.

  6. Effect of different transport observations on inverse modeling results: case study of a long-term groundwater tracer test monitored at high resolution

    NASA Astrophysics Data System (ADS)

    Rasa, Ehsan; Foglia, Laura; Mackay, Douglas M.; Scow, Kate M.

    2013-11-01

Conservative tracer experiments can provide information useful for characterizing various subsurface transport properties. This study examines the effectiveness of three different types of transport observations for sensitivity analysis and parameter estimation of a three-dimensional site-specific groundwater flow and transport model: conservative tracer breakthrough curves (BTCs), first temporal moments of BTCs (m1), and tracer cumulative mass discharge (Md) through control planes, combined with hydraulic head observations (h). High-resolution data obtained from a 410-day controlled field experiment at Vandenberg Air Force Base, California (USA), have been used. In this experiment, bromide was injected to create two adjacent plumes monitored at six different transects (perpendicular to groundwater flow) with a total of 162 monitoring wells. A total of 133 different observations of transient hydraulic head, 1,158 of BTC concentration, 23 of first moment, and 36 of mass discharge were used for sensitivity analysis and parameter estimation of nine flow and transport parameters. The importance of each group of transport observations in estimating these parameters was evaluated using sensitivity analysis, and five out of nine parameters were calibrated against these data. Results showed the advantages of using temporal moments of conservative tracer BTCs and mass discharge as observations for inverse modeling.
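
For concreteness, the first temporal moment used here is the concentration-weighted mean arrival time of the tracer; a minimal sketch:

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def first_temporal_moment(t, c):
    """Normalized first temporal moment of a breakthrough curve C(t):
    m1 = integral(t * C dt) / integral(C dt)."""
    return trapezoid(t * c, t) / trapezoid(c, t)
```

For a symmetric concentration pulse the moment recovers the center of mass of the BTC, which is why m1 summarizes mean tracer travel time more robustly than the concentration peak does.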

  7. Morphofunctional analysis of experimental model of esophageal achalasia in rats.

    PubMed

    Sabirov, A G; Raginov, I S; Burmistrov, M V; Chelyshev, Y A; Khasanov, R Sh; Moroshek, A A; Grigoriev, P N; Zefirov, A L; Mukhamedyarov, M A

    2010-10-01

We carried out a detailed analysis of the rat model of esophageal achalasia that we previously developed. Manifest morphological and functional disorders were observed in experimental achalasia: hyperplasia of the squamous epithelium, a reduced number of nerve fibers, excessive growth of fibrous connective tissue in the esophageal wall, high contractile activity of the lower esophageal sphincter, and reduced motility of the longitudinal muscle layer. The changes observed in the rat esophagus in experimental achalasia largely correlate with those in esophageal achalasia in humans. Hence, our experimental model can be used for the development of new methods of treating this disease.

  8. Extended atmospheres of outer planet satellites and comets

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Combi, M. R.

    1985-01-01

Model analyses of the extended atmospheres of outer planet satellites and comets are discussed. Work on understanding the neutral hydrogen distribution in the Saturn system concentrated on assessing the spatial dependence of the lifetime of hydrogen atoms and on obtaining appropriately sorted Lyman alpha data from the Voyager 1 UVS instrument. Progress in the area of extended cometary atmospheres included analysis of Pioneer Venus Lyman alpha observations of Comet P/Encke with the fully refined hydrogen cloud model, development of the basic carbon and oxygen models, and planning for the Pioneer Venus UVS observations of Comets P/Giacobini-Zinner and P/Halley.

  9. Multivariate Bias Correction Procedures for Improving Water Quality Predictions from the SWAT Model

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Libera, D.

    2017-12-01

Water quality observations are usually not available on a continuous basis for longer than 1-2 years at a time over a decadal period, given the labor requirements, which makes calibrating and validating mechanistic models difficult. Further, any physical model's predictions inherently have bias (i.e., under/over-estimation) and require post-simulation techniques to preserve the long-term mean monthly attributes. This study suggests a multivariate bias-correction technique and compares it to a common technique for improving the performance of the SWAT model in predicting daily streamflow and TN loads across the Southeast, based on split-sample validation. The approach is a dimension-reduction technique, canonical correlation analysis (CCA), that regresses the observed multivariate attributes on the SWAT-simulated values. The common approach is a regression-based technique that uses ordinary least squares regression to adjust model values. The observed cross-correlation between loadings and streamflow is better preserved when using canonical correlation, while individual biases are simultaneously reduced. Additionally, canonical correlation analysis does a better job of preserving the observed joint likelihood of streamflow and loadings. These procedures were applied to three watersheds chosen from the Water Quality Network in the Southeast Region, specifically watersheds with sufficiently large drainage areas and numbers of observed data points. The performance of the two approaches is compared for the observed period and over a multi-decadal period using loading estimates from the USGS LOADEST model. Lastly, the CCA technique is applied in a forecasting sense by using 1-month-ahead forecasts of P and T from ECHAM4.5 as forcings in the SWAT model. Skill in using the SWAT model to forecast loadings and streamflow at monthly and seasonal timescales is also discussed.
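
The regression-based baseline can be sketched in a few lines (variable names are illustrative); the CCA variant in the abstract extends this idea to a joint multivariate fit across streamflow and loadings.

```python
import numpy as np

def ols_bias_correct(sim_cal, obs_cal, sim_new):
    """Regression-based bias correction: fit obs = a + b * sim on a
    calibration period, then adjust new simulated values with the fit."""
    b, a = np.polyfit(sim_cal, obs_cal, 1)   # slope first, then intercept
    return a + b * sim_new
```

Fitting on a calibration split and applying to held-out values mirrors the split-sample validation design described above; the univariate fit corrects each variable's bias but, unlike CCA, does nothing to preserve cross-correlations between variables.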

  10. Diagnostic evaluation of distributed physically based model at the REW scale (THREW) using rainfall-runoff event analysis

    NASA Astrophysics Data System (ADS)

    Tian, F.; Sivapalan, M.; Li, H.; Hu, H.

    2007-12-01

    The importance of diagnostic analysis of hydrological models is increasingly recognized by the scientific community (M. Sivapalan, et al., 2003; H. V. Gupta, et al., 2007). Model diagnosis refers to model structures and parameters being identified not only by statistical comparison of system state variables and outputs but also by process understanding in a specific watershed. Process understanding can be gained by the analysis of observational data and model results at the specific watershed as well as through regionalization. Although remote sensing technology can provide valuable data about the inputs, state variables, and outputs of the hydrological system, observational rainfall-runoff data still constitute the most accurate, reliable, direct, and thus a basic component of hydrology related database. One critical question in model diagnostic analysis is, therefore, what signature characteristic can we extract from rainfall and runoff data. To this date only a few studies have focused on this question, such as Merz et al. (2006) and Lana-Renault et al. (2007), still none of these studies related event analysis with model diagnosis in an explicit, rigorous, and systematic manner. Our work focuses on the identification of the dominant runoff generation mechanisms from event analysis of rainfall-runoff data, including correlation analysis and analysis of timing pattern. The correlation analysis involves the identification of the complex relationship among rainfall depth, intensity, runoff coefficient, and antecedent conditions, and the timing pattern analysis aims to identify the clustering pattern of runoff events in relation to the patterns of rainfall events. Our diagnostic analysis illustrates the changing pattern of runoff generation mechanisms in the DMIP2 test watersheds located in Oklahoma region, which is also well recognized by numerical simulations based on TsingHua Representative Elementary Watershed (THREW) model. 
The result suggests the usefulness of rainfall-runoff event analysis for model development as well as model diagnostics.
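The event-scale correlation analysis described in this abstract can be sketched in a few lines. The event table, variable names, and values below are hypothetical illustrations, not DMIP2 data; they only show the mechanics of relating runoff coefficients to antecedent wetness and rainfall intensity.

```python
# Pearson correlation, written out so the sketch has no dependencies.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

# One row per storm event: rainfall depth (mm), mean intensity (mm/h),
# antecedent 5-day rainfall (mm, a wetness proxy), and event runoff (mm).
events = [
    (12.0, 2.1, 5.0, 0.9),
    (25.0, 4.0, 18.0, 5.1),
    (40.0, 6.5, 30.0, 14.2),
    (8.0, 1.2, 2.0, 0.3),
    (55.0, 9.0, 41.0, 24.0),
]

# Event runoff coefficient = runoff depth / rainfall depth.
rc = [runoff / depth for depth, _, _, runoff in events]
api5 = [e[2] for e in events]
intensity = [e[1] for e in events]

# A runoff coefficient that rises with antecedent wetness is a classical
# signature of saturation-excess generation; a tighter link to intensity
# would instead point toward infiltration excess.
r_rc_wetness = pearson(rc, api5)
r_rc_intensity = pearson(rc, intensity)
```

Comparing such correlations across events is one simple way to diagnose which runoff generation mechanism dominates a watershed.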

  11. [Causal analysis approaches in epidemiology].

    PubMed

    Dumas, O; Siroux, V; Le Moual, N; Varraso, R

    2014-02-01

Epidemiological research is mostly based on observational studies. Whether such studies can provide evidence of causation remains debated. Several causal analysis methods have been developed in epidemiology. This paper aims at presenting an overview of these methods: graphical models, path analysis and its extensions, and models based on the counterfactual approach, with a special emphasis on marginal structural models. Graphical approaches have been developed to allow synthetic representations of supposed causal relationships in a given problem. They serve as qualitative support in the study of causal relationships. The sufficient-component cause model has been developed to deal with the issue of multicausality raised by the emergence of chronic multifactorial diseases. Directed acyclic graphs are mostly used as a visual tool to identify possible sources of confounding in a study. Structural equation models, the main extension of path analysis, combine a system of equations and a path diagram representing a set of possible causal relationships. They allow quantifying direct and indirect effects in a general model in which several relationships can be tested simultaneously. Dynamic path analysis further takes into account the role of time. The counterfactual approach defines causality by comparing the observed event and the counterfactual event (the event that would have been observed if, contrary to fact, the subject had received a different exposure than the one actually received). This theoretical approach has revealed the limits of traditional methods in addressing some causality questions. In particular, in longitudinal studies with time-varying confounding, classical methods (regressions) may be biased. Marginal structural models have been developed to address this issue. In conclusion, "causal models", though developed partly independently, rest on equivalent logical foundations.
A crucial step in the application of these models is the formulation of causal hypotheses, which will be a basis for all methodological choices. Beyond this step, recently developed statistical analysis tools offer new possibilities to delineate complex relationships, in particular in life course epidemiology. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  12. Analysis of capture-recapture models with individual covariates using data augmentation

    USGS Publications Warehouse

    Royle, J. Andrew

    2009-01-01

    I consider the analysis of capture-recapture models with individual covariates that influence detection probability. Bayesian analysis of the joint likelihood is carried out using a flexible data augmentation scheme that facilitates analysis by Markov chain Monte Carlo methods, and a simple and straightforward implementation in freely available software. This approach is applied to a study of meadow voles (Microtus pennsylvanicus) in which auxiliary data on a continuous covariate (body mass) are recorded, and it is thought that detection probability is related to body mass. In a second example, the model is applied to an aerial waterfowl survey in which a double-observer protocol is used. The fundamental unit of observation is the cluster of individual birds, and the size of the cluster (a discrete covariate) is used as a covariate on detection probability.
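The detection model underlying this abstract can be sketched briefly. This is a hedged illustration of the individual-covariate idea and the data-augmentation trick (all-zero capture histories with an inclusion probability), not Royle's actual code; the coefficient values and covariate scaling are invented.

```python
import math
import random

random.seed(1)
T = 5             # capture occasions
a, b = -0.5, 1.2  # hypothetical logit-scale intercept and body-mass effect

def p_detect(mass):
    """Detection probability as a logistic function of standardized body mass."""
    return 1.0 / (1.0 + math.exp(-(a + b * mass)))

def simulate_history(mass):
    """Simulate one individual's capture history across T occasions."""
    p = p_detect(mass)
    return [1 if random.random() < p else 0 for _ in range(T)]

def history_loglik(history, mass):
    """Bernoulli log-likelihood of one observed capture history."""
    p = p_detect(mass)
    return sum(math.log(p) if y else math.log(1.0 - p) for y in history)

def augmented_zero_loglik(mass, psi):
    """Data augmentation adds all-zero histories for never-captured animals.
    Their likelihood marginalizes over whether they are real but undetected
    (probability psi) or structural zeros (probability 1 - psi)."""
    p = p_detect(mass)
    return math.log(psi * (1.0 - p) ** T + (1.0 - psi))

hist = simulate_history(0.4)
ll = history_loglik(hist, 0.4)
```

In a full Bayesian analysis these likelihood pieces would sit inside an MCMC sampler over (a, b, psi) and the latent inclusion indicators.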

  13. Improved Understanding of the Modeled QBO Using MLS Observations and MERRA Reanalysis

    NASA Technical Reports Server (NTRS)

    Oman, Luke David; Douglass, Anne Ritger; Hurwitz, Maggie M.; Garfinkel, Chaim I.

    2013-01-01

The Quasi-Biennial Oscillation (QBO) dominates the variability of the tropical stratosphere on interannual time scales. The QBO has been shown to extend its influence into the chemical composition of this region through dynamical mechanisms. We have started our analysis using the realistic QBO internally generated by the Goddard Earth Observing System Version 5 (GEOS-5) general circulation model coupled to a comprehensive stratospheric and tropospheric chemical mechanism forced with observed sea surface temperatures over the past 33 years. We will show targeted comparisons with observations from NASA's Aura satellite Microwave Limb Sounder (MLS) and the Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis to provide insight into the simulation of the primary and secondary circulations associated with the QBO. Using frequency spectrum analysis and multiple linear regression we can illuminate the resulting circulations and deduce the strengths and weaknesses in their modeled representation. Inclusion of the QBO in our simulation improves the representation of the subtropical barriers and overall tropical variability. The QBO impact on tropical upwelling is important to quantify when calculating trends in sub-decadal scale datasets.

  14. Use of In-Situ and Remotely Sensed Snow Observations for the National Water Model in Both an Analysis and Calibration Framework.

    NASA Astrophysics Data System (ADS)

    Karsten, L. R.; Gochis, D.; Dugger, A. L.; McCreight, J. L.; Barlage, M. J.; Fall, G. M.; Olheiser, C.

    2017-12-01

Since version 1.0 of the National Water Model (NWM) went operational in summer 2016, several upgrades to the model have occurred to improve hydrologic prediction for the continental United States. Version 1.1 of the NWM (spring 2017) includes upgrades to parameter datasets impacting land surface hydrologic processes. These parameter datasets were upgraded using an automated calibration workflow that utilizes the Dynamically Dimensioned Search (DDS) algorithm to adjust parameter values using observed streamflow. These upgrades also took advantage of various observations collected for snow analysis: in-situ SNOTEL observations in the western US, volunteer in-situ observations across the entire US, gamma-derived snow water equivalent (SWE) observations courtesy of the NWS NOAA Corps program, gridded snow depth and SWE products from the Jet Propulsion Laboratory (JPL) Airborne Snow Observatory (ASO), gridded remotely sensed satellite-based snow products (MODIS, AMSR2, VIIRS, ATMS), and gridded SWE from the NWS Snow Data Assimilation System (SNODAS). This study explores the use of these observations to quantify NWM error and improvements from version 1.0 to version 1.1, along with subsequent work since then. In addition, this study explores the use of snow observations within the automated calibration workflow. Gridded parameter fields impacting the accumulation and ablation of snow states in the NWM were adjusted and calibrated using gridded remotely sensed snow states, SNODAS products, and in-situ snow observations. This calibration took place over various ecological regions in snow-dominated parts of the US for a retrospective period chosen to capture a variety of climatological conditions. Specifically, the latest calibrated parameters impacting streamflow were held constant and only parameters impacting snow physics were tuned using snow observations and analysis.
The adjusted parameter datasets were then used to force the model over an independent period for analysis against both snow and streamflow observations, to assess whether improvements occurred. The goal of this work is to further improve snow physics in the NWM, along with identifying areas where further work will take place in the future, such as data assimilation or further forcing improvements.

  15. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has long been important for sedimentological research. For instance, various inverse analyses estimate hydraulic conditions from topographic observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the unsteady 1D shallow-water equations to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) with multi-point starts, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that the inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the starting point of the optimization deviates from the true solution, whereas the inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution.
The uniform grain-size model often converges to a local optimum significantly different from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
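The value of multi-point starts, which this abstract emphasizes, can be shown with a toy problem: a misfit function with a local minimum traps a single-start derivative-free search, while restarting from several initial guesses recovers the global solution. The misfit function and the crude descent routine below are illustrative stand-ins, not the authors' turbidity current model or their Simplex implementation.

```python
def misfit(x):
    # Toy 1-D misfit: global minimum near x = -1.47, local minimum near x = 1.35.
    return x ** 4 - 4.0 * x ** 2 + x

def local_descent(x, step=0.1, tol=1e-6):
    """Crude derivative-free descent: step in the improving direction,
    shrinking the step when neither neighbor improves."""
    while step > tol:
        if misfit(x + step) < misfit(x):
            x += step
        elif misfit(x - step) < misfit(x):
            x -= step
        else:
            step *= 0.5
    return x

def multi_start(starts):
    """Run the local search from several starting points, keep the best."""
    candidates = [local_descent(s) for s in starts]
    return min(candidates, key=misfit)

x_single = local_descent(1.0)                   # stuck at the local optimum
x_multi = multi_start([-3.0, 0.0, 1.0, 3.0])    # finds the global optimum
```

The same logic applies in higher dimensions: a well-conditioned forward model (here, the mixed grain-size model) widens the basin of attraction, so fewer restarts are needed to land on the true initial condition.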

  16. Model Predictive Flight Control System with Full State Observer using H∞ Method

    NASA Astrophysics Data System (ADS)

    Sanwale, Jitu; Singh, Dhan Jeet

    2018-03-01

This paper presents the application of the model predictive approach to design a flight control system (FCS) for the longitudinal dynamics of a fixed-wing aircraft. The longitudinal dynamics are derived for a conventional aircraft, and an open-loop aircraft response analysis is carried out. Simulation studies illustrate the efficacy of the proposed model predictive controller using an H∞ state observer. The estimation criterion used in the H∞ observer design is to minimize the worst possible effects of modelling errors and additive noise on the parameter estimation.

  17. Analyzing Dyadic Sequence Data—Research Questions and Implied Statistical Models

    PubMed Central

    Fuchs, Peter; Nussbeck, Fridtjof W.; Meuwly, Nathalie; Bodenmann, Guy

    2017-01-01

The analysis of observational data is often seen as a key approach to understanding dynamics in romantic relationships, and in dyadic systems in general. Statistical models for the analysis of dyadic observational data are not commonly known or applied. In this contribution, selected approaches to dyadic sequence data are presented, with a focus on models that can be applied when sample sizes are of medium size (N = 100 couples or less). Each statistical model is motivated by an underlying potential research question, and the most important model results are presented and linked to that question. The following research questions and models are compared with respect to their applicability using a hands-on approach: (I) Is there an association between a particular behavior by one partner and the reaction by the other? (Pearson correlation); (II) Does the behavior of one member trigger an immediate reaction by the other? (aggregated logit models; multi-level approach; basic Markov model); (III) Is there an underlying dyadic process which might account for the observed behavior? (hidden Markov model); and (IV) Are there latent groups of dyads which might account for different observed reaction patterns? (mixture Markov; optimal matching). Finally, recommendations for choosing among the different models, issues of data handling, and advice on properly applying the statistical models in empirical research are given (e.g., in a new R package "DySeq"). PMID:28443037
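The "basic Markov model" behind research question (II) amounts to estimating transition probabilities between coded behaviors. The state codes and the example sequence below are hypothetical illustrations, not data from the study (the paper itself points to the R package "DySeq" for real analyses).

```python
from collections import Counter, defaultdict

# One coded observation per time step, e.g. S = stress expression by one
# partner, D = dyadic coping response by the other, N = neutral behavior.
sequence = list("SNSDSDNSDSSDNSDSDN")

def transition_matrix(seq):
    """Estimate first-order Markov transition probabilities from one sequence."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(seq, seq[1:]):
        counts[prev][nxt] += 1
    P = {}
    for state, row in counts.items():
        total = sum(row.values())
        P[state] = {nxt: c / total for nxt, c in row.items()}
    return P

P = transition_matrix(sequence)
# P["S"]["D"] then answers question (II) descriptively: given a stress
# expression, how often does dyadic coping follow immediately?
```

Hidden and mixture Markov models generalize this by adding latent states or latent groups of couples on top of the same transition-counting idea.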

  18. Advanced earth observation spacecraft computer-aided design software: Technical, user and programmer guide

    NASA Technical Reports Server (NTRS)

    Farrell, C. E.; Krauze, L. D.

    1983-01-01

NASA's IDEAS system is a tool for interactive preliminary design and analysis of Large Space Systems (LSS). Nine analysis modules were either modified or created. These modules provide capabilities for automatic model generation, model mass properties calculation, model area calculation, nonkinematic deployment modeling, rigid-body controls analysis, RF performance prediction, subsystem properties definition, and EOS science sensor selection. For each module, a section is provided that contains technical information, user instructions, and programmer documentation.

  19. Benefit of Modeling the Observation Error in a Data Assimilation Framework Using Vegetation Information Obtained From Passive Based Microwave Data

    NASA Technical Reports Server (NTRS)

    Bolten, John D.; Mladenova, Iliana E.; Crow, Wade; De Jeu, Richard

    2016-01-01

A primary operational goal of the United States Department of Agriculture (USDA) is to improve foreign market access for U.S. agricultural products. A large fraction of this crop condition assessment is based on satellite imagery and ground data analysis. The baseline soil moisture estimates that are currently used for this analysis are based on output from the modified Palmer two-layer soil moisture model, updated to assimilate near-real-time observations derived from the Soil Moisture Ocean Salinity (SMOS) satellite. The current data assimilation system is based on a 1-D Ensemble Kalman Filter approach, where the observation error is modeled as a function of vegetation density. This allows for offsetting errors in the soil moisture retrievals. The observation error is currently adjusted using Normalized Difference Vegetation Index (NDVI) climatology. In this paper we explore the possibility of utilizing microwave-based vegetation optical depth instead.
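The core mechanism here, inflating the retrieval error variance as vegetation density grows so that dense-canopy observations get less weight in the Kalman update, can be sketched in scalar form. The functional form and all constants below are hypothetical illustrations, not the USDA system's operational values.

```python
import math

def obs_error_std(veg_index, base_std=0.03, scale=2.5):
    """Soil moisture retrieval error std (m3/m3), growing with vegetation.
    Exponential growth is an assumed form chosen for illustration."""
    return base_std * math.exp(scale * veg_index)

def kalman_gain(forecast_var, obs_std):
    """Scalar (1-D) Kalman gain: weight given to the observation."""
    return forecast_var / (forecast_var + obs_std ** 2)

# With identical forecast uncertainty, a bare-soil scene (index ~ 0.1)
# trusts the retrieval far more than a densely vegetated one (~ 0.8),
# where the analysis stays close to the model.
k_bare = kalman_gain(0.02 ** 2, obs_error_std(0.1))
k_dense = kalman_gain(0.02 ** 2, obs_error_std(0.8))
```

Swapping NDVI climatology for microwave-based vegetation optical depth, as the paper proposes, would change only the `veg_index` input to such a scheme, not the update mechanics.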

  20. Attribution of Observed Streamflow Changes in Key British Columbia Drainage Basins

    NASA Astrophysics Data System (ADS)

    Najafi, Mohammad Reza; Zwiers, Francis W.; Gillett, Nathan P.

    2017-11-01

We study the observed decline in summer streamflow in four key river basins in British Columbia (BC), Canada, using a formal detection and attribution (D&A) analysis procedure. Reconstructed and simulated streamflow is generated using the semidistributed Variable Infiltration Capacity hydrologic model, driven by 1/16° gridded observations and downscaled climate model data from the Coupled Model Intercomparison Project phase 5 (CMIP5), respectively. The internal variability of the regional hydrologic components was characterized using 5100 years of streamflow simulated from CMIP5 preindustrial control runs. Results show that the observed changes in summer streamflow are inconsistent with simulations representing the response to natural forcing factors alone, while the response to anthropogenic and natural forcing factors combined is detected in these changes. A two-signal D&A analysis indicates that the effects of anthropogenic (ANT) forcing factors are discernible from natural forcing in BC, albeit with large uncertainties.

  1. Investigating the Potential Impact of the Surface Water and Ocean Topography (SWOT) Altimeter on Ocean Mesoscale Prediction

    NASA Astrophysics Data System (ADS)

    Carrier, M.; Ngodock, H.; Smith, S. R.; Souopgui, I.

    2016-02-01

NASA's Surface Water and Ocean Topography (SWOT) satellite, scheduled for launch in 2020, will provide sea surface height anomaly (SSHA) observations with a wider swath and higher spatial resolution than current satellite altimeters. It is expected that this will help to further constrain ocean models in terms of the mesoscale circulation. In this work, this expectation is investigated by way of twin data assimilation experiments with the Navy Coastal Ocean Model Four-Dimensional Variational (NCOM-4DVAR) data assimilation system in its weak constraint formulation. Here, a nature run is created from which SWOT observations are sampled, as well as along-track SSHA observations from simulated Jason-2 tracks. The simulated SWOT data have appropriate spatial coverage, resolution, and noise characteristics based on an observation-simulator program provided by the SWOT science team. The experiment is run for a three-month period during which the analysis is updated every 24 hours and each analysis is used to initialize a 96-hour forecast. The forecasts in each experiment are compared to the nature run to determine the impact of the assimilated data. It is demonstrated here that the SWOT observations constrain the model mesoscale in a more consistent manner than traditional altimeter observations. The findings of this study suggest that data from SWOT may have a substantial impact on improving the ocean model analysis and forecast of mesoscale features and surface ocean transport.

  2. Atmospheric planetary wave response to external forcing

    NASA Technical Reports Server (NTRS)

    Stevens, D. E.; Reiter, E. R.

    1985-01-01

    The tools of observational analysis, complex general circulation modeling, and simpler modeling approaches were combined in order to attack problems on the largest spatial scales of the earth's atmosphere. Two different models were developed and applied. The first is a two level, global spectral model which was designed primarily to test the effects of north-south sea surface temperature anomaly (SSTA) gradients between the equatorial and midlatitude north Pacific. The model is nonlinear, contains both radiation and a moisture budget with associated precipitation and surface evaporation, and utilizes a linear balance dynamical framework. Supporting observational analysis of atmospheric planetary waves is briefly summarized. More extensive general circulation models have also been used to consider the problem of the atmosphere's response, especially in the horizontal propagation of planetary scale waves, to SSTA.

  3. Evaluating Observation Influence on Regional Water Budgets in Reanalyses

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Chern, Jiun-Dar; Mocko, David; Robertson, Franklin R.; daSilva, Arlindo M.

    2014-01-01

The assimilation of observations in reanalyses incurs the potential for the physical terms of budgets to be balanced by a term relating the fit of the observations to a forecast first-guess analysis. This may indicate a limitation in the physical processes of the background model, or perhaps inconsistencies in the observing system and its assimilation. In the MERRA reanalysis, an area of long-term moisture flux divergence over land has been identified over the central United States. Here, we evaluate the water vapor budget in this region, taking advantage of two unique features of the MERRA diagnostic output: 1) a closed water budget that includes the analysis increment and 2) a gridded diagnostic output data set of the assimilated observations and their innovations (e.g., forecast departures). In the central United States, an anomaly occurs where the analysis adds water to the region, while precipitation decreases and moisture flux divergence increases. This is related more to a change in the observing system than to a deficiency in the model physical processes. MERRA's Gridded Innovations and Observations (GIO) data narrow the observations that influence this feature to the ATOVS and Aqua satellites during the 06Z and 18Z analysis cycles. Observing system experiments further narrow the instruments that affect the anomalous feature to AMSU-A (mainly window channels) and AIRS. This effort also shows the complexities of the observing system, and the reactions of the regional water budgets in reanalyses to the assimilated observations.
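The budget bookkeeping described above can be made concrete with scalar terms: in a free-running model the physical terms balance on their own, while in a reanalysis the analysis increment absorbs whatever imbalance the assimilation introduces, closing the budget by construction. All numbers below are invented illustrations (in mm/day), not MERRA diagnostics.

```python
# Vertically integrated water vapor budget terms for one region and period.
evap = 3.1      # surface evaporation
precip = 2.4    # precipitation
mfd = 1.2       # moisture flux divergence (positive = net export)
dw_dt = -0.2    # change in column water vapor storage

# Physics-only residual: in a free-running model, E - P - MFD = dW/dt,
# so this residual would be ~0. A nonzero value is the imbalance that the
# assimilation must be supplying.
physics_residual = evap - precip - mfd - dw_dt

# The analysis increment is defined as the water added/removed by the
# assimilation; including it closes the budget identically.
analysis_increment = -physics_residual
closed_residual = physics_residual + analysis_increment
```

Diagnosing a persistently nonzero `physics_residual` (here negative, i.e., the analysis adds water) and tracing it to specific instruments is exactly what the GIO data and observing system experiments in this study enable.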

  4. Analysis of out-of-plane thermal microactuators

    NASA Astrophysics Data System (ADS)

    Atre, Amarendra

    2006-02-01

Out-of-plane thermal microactuators find applications in optical switches, where they are used to actuate micromirrors. Accurate analysis of such actuators is beneficial for improving existing designs and constructing more energy-efficient actuators. However, the analysis is complicated by the nonlinear deformation of the thermal actuators along with the temperature-dependent properties of polysilicon. This paper describes the development, modeling issues, and results of a three-dimensional multiphysics nonlinear finite element model of surface-micromachined out-of-plane thermal actuators. The model includes conductive and convective cooling effects and takes into account the effect of the variable air gap on the response of the actuator. The model is implemented to investigate the characteristics of two diverse MUMPs-fabricated out-of-plane thermal actuators. Reasonable agreement is observed between simulated and measured results for the model that considers the influence of the air gap on actuator response. The usefulness of the model is demonstrated by implementing it to observe the effect of actuator geometry variation on the steady-state deflection response.

5. Ecosystem behavior at Bermuda Station "S" and Ocean Weather Station "India": A general circulation model and observational analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasham, M.J.R.; Sarmiento, J.L.; Slater, R.D.

    1993-06-01

One important theme of modern biological oceanography has been the attempt to develop models of how the marine ecosystem responds to variations in physical forcing functions such as solar radiation and the wind field. The authors have addressed the problem by embedding simple ecosystem models into a seasonally forced three-dimensional general circulation model of the North Atlantic Ocean. In this paper, some of the underlying biological assumptions of the ecosystem model are first presented, followed by an analysis of how well the model predicts the seasonal cycle of the biological variables at Bermuda Station "S" and Ocean Weather Station "India". The model gives a good overall fit to the observations but does not faithfully reproduce the full seasonal cycle of the ecosystem. 57 refs., 25 figs., 5 tabs.

  6. "A Bayesian sensitivity analysis to evaluate the impact of unmeasured confounding with external data: a real world comparative effectiveness study in osteoporosis".

    PubMed

    Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W

    2016-09-01

Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential for unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting a sensitivity analysis to investigate the impact of unmeasured confounding in observational studies. In a real-world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.
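The sign reversal this abstract reports can be illustrated with a small synthetic simulation of the confounding mechanism: an unmeasured covariate (a stand-in for baseline BMD) that drives both treatment choice and outcome makes a protective treatment look harmful in the crude comparison. All probabilities below are invented for illustration and are not study estimates, and this is a mechanism demo, not the authors' Bayesian twin-regression method.

```python
import random

random.seed(42)
records = []
for _ in range(20000):
    low_bmd = random.random() < 0.5                  # unmeasured confounder
    # Channeling: frail (low-BMD) patients preferentially get the treatment...
    treated = random.random() < (0.9 if low_bmd else 0.1)
    # ...and are at much higher fracture risk; the treatment itself is protective.
    p_frac = (0.40 if low_bmd else 0.05) * (0.6 if treated else 1.0)
    records.append((treated, low_bmd, random.random() < p_frac))

def odds_ratio(rows):
    """Crude 2x2 odds ratio of fracture for treated vs. untreated."""
    a = sum(1 for t, _, y in rows if t and y)
    b = sum(1 for t, _, y in rows if t and not y)
    c = sum(1 for t, _, y in rows if not t and y)
    d = sum(1 for t, _, y in rows if not t and not y)
    return (a * d) / (b * c)

or_crude = odds_ratio(records)                          # biased toward harm
or_low = odds_ratio([r for r in records if r[1]])       # low-BMD stratum
or_high = odds_ratio([r for r in records if not r[1]])  # normal-BMD stratum
```

Stratifying on the confounder (possible here only because the simulation records it) recovers protective odds ratios in both strata, mirroring the 1.36-versus-0.40 reversal reported above.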

  7. Inferential ecosystem models, from network data to prediction

    Treesearch

    James S. Clark; Pankaj Agarwal; David M. Bell; Paul G. Flikkema; Alan Gelfand; Xuanlong Nguyen; Eric Ward; Jun Yang

    2011-01-01

    Recent developments suggest that predictive modeling could begin to play a larger role not only for data analysis, but also for data collection. We address the example of efficient wireless sensor networks, where inferential ecosystem models can be used to weigh the value of an observation against the cost of data collection. Transmission costs make observations ‘‘...

  8. Using Combined Marine Spatial Planning Tools and Observing System Experiments to define Gaps in the Emerging European Ocean Observing System.

    NASA Astrophysics Data System (ADS)

    Nolan, G.; Pinardi, N.; Vukicevic, T.; Le Traon, P. Y.; Fernandez, V.

    2016-02-01

Ocean observations are critical to providing accurate ocean forecasts that support operational decision making in European open and coastal seas. Observations are available in many forms: fixed platforms (e.g., moored buoys and tide gauges), underway measurements from Ferrybox systems, high-frequency (HF) radars, and, more recently, underwater gliders and profiling floats. Observing System Simulation Experiments have been conducted to examine the relative contribution of each type of platform to an improvement in our ability to accurately forecast the future state of the ocean, with HF radar and gliders showing particular promise in improving model skill. There is considerable demand for ecosystem products and services from today's ocean observing system, yet biogeochemical observations are still relatively sparse, particularly in coastal and shelf seas. There is a need to widen the techniques used to assess the fitness for purpose of, and gaps in, the ocean observing system. As well as Observing System Simulation Experiments that quantify the effect of observations on overall model skill, we present a gap analysis based on (1) examining where high model skill is required, based on a marine spatial planning analysis of European seas (i.e., where does activity take place that requires more accurate forecasts?), and (2) assessing gaps based on the capacity of the observing system to answer key societal challenges, e.g., site suitability for aquaculture and ocean energy, oil spill response, and contextual oceanographic products for fisheries and ecosystems. This broad-based analysis will inform the development of the proposed European Ocean Observing System as a contribution to the Global Ocean Observing System (GOOS).

  9. Characterization of the Dynamics of Climate Systems and Identification of Missing Mechanisms Impacting the Long Term Predictive Capabilities of Global Climate Models Utilizing Dynamical Systems Approaches to the Analysis of Observed and Modeled Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatt, Uma S.; Wackerbauer, Renate; Polyakov, Igor V.

The goal of this research was to apply fractional and non-linear analysis techniques in order to develop a more complete characterization of climate change and variability for the oceanic, sea ice, and atmospheric components of the Earth system. This research applied two measures of the dynamical characteristics of time series, the R/S method of calculating the Hurst exponent and the Renyi entropy, to observational and modeled climate data in order to evaluate how well climate models capture the long-term dynamics evident in observations. Fractional diffusion analysis was applied to Argo ocean buoy data to quantify ocean transport. Self-organizing maps were applied to North Pacific sea level pressure and analyzed in ways to improve seasonal predictability for Alaska fire weather. This body of research shows that these methods can be used to evaluate climate models and shed light on climate mechanisms (i.e., understanding why something happens). With further research, these methods show promise for improving seasonal to longer time scale forecasts of climate.

  10. Simultaneous estimation of plasma parameters from spectroscopic data of neutral helium using least square fitting of CR-model

    NASA Astrophysics Data System (ADS)

    Jain, Jalaj; Prakash, Ram; Vyas, Gheesa Lal; Pal, Udit Narayan; Chowdhuri, Malay Bikas; Manchanda, Ranjana; Halder, Nilanjan; Choyal, Yaduvendra

    2015-12-01

In the present work an effort has been made to estimate plasma parameters simultaneously, namely the electron density, electron temperature, ground state atom density, ground state ion density, and metastable state density, from the observed visible spectra of a Penning plasma discharge (PPD) source using least-squares fitting. The analysis is performed for the prominently observed neutral helium lines. The Atomic Data and Analysis Structure (ADAS) database is used to provide the required collisional-radiative (CR) photon emissivity coefficient (PEC) values under the optically thin plasma condition. Under this condition the plasma temperature estimated from the PPD is found to be rather high. It is seen that including opacity in the observed spectral lines through the PECs and adding diffusion of neutrals and metastable state species to the CR-model code improves the electron temperature estimation in the simultaneous measurement.
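The least-squares idea used here can be sketched compactly: choose the electron temperature whose tabulated photon emissivity coefficients best reproduce the observed line intensities, with the overall density-dependent scale fitted analytically. The PEC table, line labels, and observed intensities below are fabricated for illustration; real values would come from ADAS.

```python
# Toy PEC(Te) lookup for three He I lines (arbitrary units), Te in eV.
PEC = {
    5.0:  {"667.8": 1.0, "706.5": 3.0, "728.1": 0.8},
    10.0: {"667.8": 1.6, "706.5": 2.4, "728.1": 1.0},
    20.0: {"667.8": 2.5, "706.5": 1.5, "728.1": 1.2},
}

# Hypothetical observed line intensities (same arbitrary units).
observed = {"667.8": 1.58, "706.5": 2.45, "728.1": 0.99}

def sse(te):
    """Sum of squared residuals between observed and predicted intensities.
    The overall scale c (absorbing the density factors) has a closed-form
    least-squares solution for each candidate Te."""
    pred = PEC[te]
    c = (sum(observed[l] * pred[l] for l in pred)
         / sum(pred[l] ** 2 for l in pred))
    return sum((observed[l] - c * pred[l]) ** 2 for l in pred)

best_te = min(PEC, key=sse)  # temperature grid point with the smallest misfit
```

A full simultaneous fit would extend the search to a continuous grid over temperature and the several density parameters, but the misfit-minimization structure is the same.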

11. Assimilating All-Sky GPM Microwave Imager (GMI) Radiance Data in the NASA GEOS-5 System for Global Cloud and Precipitation Analyses

    NASA Astrophysics Data System (ADS)

    Kim, M. J.; Jin, J.; McCarty, W.; Todling, R.; Holdaway, D. R.; Gelaro, R.

    2014-12-01

    The NASA Global Modeling and Assimilation Office (GMAO) works to maximize the impact of satellite observations in the analysis and prediction of climate and weather through integrated Earth system modeling and data assimilation. To achieve this goal, the GMAO undertakes model and assimilation development, generates products to support NASA instrument teams and the NASA Earth science program. Currently Atmospheric Data Assimilation System (ADAS) in the Goddard Earth Observing System Model, Version 5(GEOS-5) system combines millions of observations and short-term forecasts to determine the best estimate, or analysis, of the instantaneous atmospheric state. However, ADAS has been geared towards utilization of observations in clear sky conditions and the majority of satellite channel data affected by clouds are discarded. Microwave imager data from satellites can be a significant source of information for clouds and precipitation but the data are presently underutilized, as only surface rain rates from the Tropical Rainfall Measurement Mission (TRMM) Microwave Imager (TMI) are assimilated with small weight assigned in the analysis process. As clouds and precipitation often occur in regions with high forecast sensitivity, improvements in the temperature, moisture, wind and cloud analysis of these regions are likely to contribute to significant gains in numerical weather prediction accuracy. This presentation is intended to give an overview of GMAO's recent progress in assimilating the all-sky GPM Microwave Imager (GMI) radiance data in GEOS-5 system. This includes development of various new components to assimilate cloud and precipitation affected data in addition to data in clear sky condition. New observation operators, quality controls, moisture control variables, observation and background error models, and a methodology to incorporate the linearlized moisture physics in the assimilation system are described. 
    In addition, preliminary results showing the impact of assimilating all-sky GMI data on GEOS-5 forecasts are discussed.

  12. Numerical analysis of multicomponent responses of surface-hole transient electromagnetic method

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Xin; Hu, Xiang-Yun; Pan, He-Ping; Zhou, Feng

    2017-03-01

    We calculate the multicomponent responses of the surface-hole transient electromagnetic method. Conventional methods and models, being based on regular local targets, are unsuitable as geoelectric models with conductive surrounding rocks. We therefore propose a calculation and analysis scheme based on numerical simulations of the subsurface transient electromagnetic fields. In the modeling of the electromagnetic fields, the forward simulations are performed using the finite-difference time-domain method, with the initial electromagnetic fields solved by the discrete image method, which combines the Gaver-Stehfest inverse Laplace transform with the Prony method. Precision in the iterative computations is ensured by using transmission boundary conditions. For the response analysis, we customize geoelectric models consisting of near-borehole targets and conductive wall rocks and implement forward modeling simulations. The computed electric fields are converted into induced electromotive force responses using multicomponent observation devices. By comparing the transient electric fields and multicomponent responses under different conditions, we suggest that the multicomponent induced electromotive force responses are related to the horizontal and vertical gradient variations of the transient electric field at different times. The characteristics of the response are determined by the variation of the subsurface transient electromagnetic fields (diffusion, attenuation and distortion) under different conditions, as well as by the electromagnetic fields at the observation positions. The proposed scheme considers both the surrounding rocks and the anomalous field of the local targets, and can therefore account for geological data better than conventional transient field response analysis of local targets.
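    The Gaver-Stehfest inversion named above is a standard numerical inverse Laplace transform. A minimal stand-alone sketch (illustrative only, not the authors' implementation) is checked here against the known pair F(s) = 1/(s + 1), f(t) = exp(-t):

```python
from math import factorial, log, exp

def stehfest_coeffs(N):
    # Coefficients V_k of the Gaver-Stehfest algorithm (N must be even)
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * factorial(2 * j)
                  / (factorial(half - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def gaver_stehfest(F, t, N=12):
    # f(t) ~ (ln 2 / t) * sum_k V_k * F(k * ln 2 / t)
    a = log(2.0) / t
    V = stehfest_coeffs(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

# Known transform pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
approx = gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0)
```

    The method evaluates F(s) only on the real axis, which is why it pairs naturally with time-stepping schemes that need real-valued initial fields.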

  13. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    PubMed

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
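    The core idea of down-weighting observations from more variable samples can be illustrated with generic inverse-variance weighted least squares. This is a minimal NumPy sketch, not the 'limma'/'voom' implementation; the design matrix, noise levels, and weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design: two groups of four samples; one sample is twice as noisy
group = np.repeat([0.0, 1.0], 4)
X = np.column_stack([np.ones(8), group])       # intercept + group effect
sigma = np.array([1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 1.0])
y = X @ np.array([5.0, 1.0]) + rng.normal(0.0, sigma)

# Inverse-variance weights down-weight the noisy sample ('voom' would
# additionally fold in a mean-variance trend weight per observation)
w = 1.0 / sigma**2

# Weighted least squares: solve (X' W X) beta = X' W y
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

    The noisy sample still contributes to the fit, but with a quarter of the influence of the others, which is the compromise the abstract describes between removing and retaining variable samples.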

  14. The impact of Doppler lidar wind observations on a single-level meteorological analysis

    NASA Technical Reports Server (NTRS)

    Riishojgaard, L. P.; Atlas, R.; Emmitt, G. D.

    2001-01-01

    Through the use of observation operators, modern data assimilation systems have the capability to ingest observations of quantities that are not themselves model variables but are mathematically related to those variables. An example of this is the so-called LOS (line-of-sight) wind that a Doppler wind lidar can provide. The model, or data assimilation system, needs information about both components of the horizontal wind vector, whereas the observations in this case only provide the projection of the wind vector onto a given direction. The analyzed value is then calculated essentially from a comparison between the observation itself and the model-simulated value of the observed quantity. However, in order to assess the expected impact of such an observing system, it is important to examine the extent to which a meteorological analysis can be constrained by the LOS winds. The answer to this question depends on the fundamental character of the atmospheric flow fields that are analyzed, but more importantly it also depends on the real and assumed error covariance characteristics of these fields. A single-level wind analysis system designed to explore these issues has been built at the NASA Data Assimilation Office. In this system, simulated wind observations can be evaluated in terms of their impact on the analysis quality under various assumptions about their spatial distribution and error characteristics and about the error covariance of the background fields. The basic design of the system will be presented along with experimental results obtained with it. In particular, the value of simultaneously measuring LOS winds along two different directions for a given location will be discussed.
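    The analysis step described above, an observation operator projecting the model wind onto the line of sight, can be sketched for a single grid point. This is a toy optimal-interpolation update with assumed, purely illustrative error covariances, not the DAO system:

```python
import numpy as np

# Background wind and (assumed isotropic) error covariance at one point
xb = np.array([10.0, 0.0])        # background (u, v) in m/s
B = 4.0 * np.eye(2)               # background error covariance
r = 1.0                           # LOS observation error variance

# Observation operator: projection of the wind onto the line of sight
theta = np.deg2rad(45.0)
h = np.array([np.cos(theta), np.sin(theta)])
y = 6.0                           # observed LOS wind in m/s

# Scalar analysis update: xa = xb + K * (y - h . xb)
innovation = y - h @ xb
K = B @ h / (h @ B @ h + r)
xa = xb + K * innovation
```

    With an isotropic background covariance the increment is parallel to the line of sight, so the perpendicular wind component is left untouched, which is exactly why the value of a second, differently oriented LOS measurement is worth examining.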

  15. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.
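    The core of the linear argument, that parameter components invisible to calibration can still bias a prediction, can be sketched with a toy underdetermined model. This is illustrative only; the actual method handles thousands of parameters, regularization, and observation processing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: 3 observations of 5 parameters (underdetermined),
# plus a separate sensitivity vector for a prediction of interest
J = rng.normal(size=(3, 5))     # observation Jacobian
a = rng.normal(size=5)          # prediction sensitivities
p_true = rng.normal(size=5)

# Minimum-norm calibration reproduces the observations exactly...
p_cal = np.linalg.pinv(J) @ (J @ p_true)

# ...but the null-space component of p_true is unrecoverable, and it
# biases the prediction by a . (p_true - p_cal)
bias = a @ (p_true - p_cal)
```

    The fit to the data is perfect, yet the prediction is biased whenever its sensitivity vector overlaps the calibration null space, which is why the bias is prediction specific and invisible to the modeler.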

  16. Successes and Challenges in Linking Observations and Modeling of Marine and Terrestrial Cryospheric Processes

    NASA Astrophysics Data System (ADS)

    Herzfeld, U. C.; Hunke, E. C.; Trantow, T.; Greve, R.; McDonald, B.; Wallin, B.

    2014-12-01

    Understanding the state of the cryosphere and its relationship to other components of the Earth system requires both models of geophysical processes and observations of geophysical properties and processes; however, linking observations and models is far from trivial. This paper looks at examples of sea-ice and land-ice model-observation linkages to examine some approaches, challenges and solutions. In a sea-ice example, ice deformation is analyzed as a key process that indicates fundamental changes in the Arctic sea ice cover. Simulation results from the Los Alamos Sea-Ice Model CICE, which is also the sea-ice component of the Community Earth System Model (CESM), are compared to parameters indicative of deformation as derived from mathematical analysis of remote sensing data. Data include altimeter, micro-ASAR and image data from manned and unmanned aircraft campaigns (NASA OIB and the Characterization of Arctic Sea Ice Experiment, CASIE). The key problem in linking data and model results is the derivation of matching parameters on both the model and observation sides. For terrestrial glaciology, we include an example of a surge process in a glacier system and an example of a dynamic ice sheet model for Greenland. To investigate the surge of the Bering Bagley Glacier System, we use numerical forward modeling experiments and, on the data analysis side, a connectionist approach to analyze crevasse provinces. In the Greenland ice sheet example, we look at the influence of ice surface and bed topography, as derived from remote sensing data, on results from a dynamic ice sheet model.

  17. Exploring dynamic events in the solar corona

    NASA Astrophysics Data System (ADS)

    Downs, Cooper James

    With the advent of modern computational technology it is now becoming the norm to employ detailed 3D computer models as empirical tools that directly account for the inhomogeneous nature of the Sun-Heliosphere environment. The key advantage of this approach lies in the ability to compare model results directly to observational data and to use a successful comparison (or lack thereof) to glean information on the underlying physical processes. Using extreme ultraviolet waves (EUV waves) as the overarching scientific driver, we apply this observation modeling approach to study the complex dynamics of the magnetic and thermodynamic structures that are observed in the low solar corona. Representing a highly non-trivial effort, this work includes three main scientific thrusts: an initial modeling effort and two EUV wave case-studies. First we document the development of the new Low Corona (LC) model, a 3D time-dependent thermodynamic magnetohydrodynamic (MHD) model implemented within the Space Weather Modeling Framework (SWMF). Observation synthesis methods are integrated within the LC model, which provides the ability to compare model results directly to EUV imaging observations taken by spacecraft. The new model is then used to explore the dynamic interplay between magnetic structures and thermodynamic energy balance in the corona that is caused by coronal heating mechanisms. With the model development complete, we investigate the nature of EUV waves in detail through two case-studies. Starting with the 2008 March 25 event, we conduct a series of numerical simulations that independently vary fundamental parameters thought to govern the physical mechanisms behind EUV waves. Through the subsequent analysis of the 3D data and comparison to observations we find evidence for both wave and non-wave mechanisms contributing to the EUV wave signal. 
We conclude with a comprehensive observation and modeling analysis of the 2010 June 13 EUV wave event, which was observed by the recently launched Solar Dynamics Observatory. We use a high-resolution simulation of the transient to unambiguously characterize the globally propagating front of the EUV wave as a fast-mode magnetosonic wave, and use the rich set of observations to place the many other facets of the EUV transient within a unified scenario involving wave and non-wave components.

  18. The Model Experiments and Finite Element Analysis on Deformation and Failure by Excavation of Grounds in Foregoing-roof Method

    NASA Astrophysics Data System (ADS)

    Sotokoba, Yasumasa; Okajima, Kenji; Iida, Toshiaki; Tanaka, Tadatsugu

    We propose the trenchless box culvert construction method to construct box culverts under small covering soil layers while keeping roads or tracks open. When this construction method is used, it is necessary to clarify the deformation and shear failure caused by excavation of the ground. In order to investigate the soil behavior, model experiments and elasto-plastic finite element analysis were performed. In the model experiments, it was shown that the shear failure developed from the end of the roof to the toe of the boundary surface. In the finite element analysis, a shear band effect was introduced. Comparing the observed shear bands in the model experiments with computed maximum shear strain contours, it was found that the observed direction of the shear band could be simulated reasonably by the finite element analysis. We may say that the finite element method used in this study is a useful tool for this construction method.

  19. Multi-criteria evaluation of CMIP5 GCMs for climate change impact analysis

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Rana, Arun; Moradkhani, Hamid; Sharma, Ashish

    2017-04-01

    Climate change is expected to have severe impacts on the global hydrological cycle along with the food-water-energy nexus. Currently, there are many climate models used in predicting important climatic variables. Though there have been advances in the field, there are still many problems to be resolved related to reliability, uncertainty, and computing needs, among many others. In the present work, we have analyzed the performance of 20 different global climate models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) dataset over the Columbia River Basin (CRB) in the Pacific Northwest USA. We demonstrate a statistical multicriteria approach, using univariate and multivariate techniques, for selecting suitable GCMs to be used for climate change impact analysis in the region. The univariate methods include the mean, standard deviation, coefficient of variation, relative change (variability), the Mann-Kendall test, and the Kolmogorov-Smirnov test (KS-test); the multivariate methods used were principal component analysis (PCA), singular value decomposition (SVD), canonical correlation analysis (CCA), and cluster analysis. The analysis is performed on raw GCM data, i.e., before bias correction, for the precipitation and temperature climatic variables of all 20 models, to capture the reliability and nature of each model at the regional scale. The analysis is based on spatially averaged datasets of GCMs and observations for the period 1970 to 2000. Each GCM is ranked based on its performance evaluated against gridded observational data on various temporal scales (daily, monthly, and seasonal). Results provide insight into each of the methods and the statistical properties they address when ranking GCMs. Further, evaluation was also performed for raw GCM simulations against different sets of gridded observational data in the area.
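    One of the univariate criteria listed above, the two-sample KS test against observations, can be turned into a model ranking as sketched below. The series, model names, and biases are hypothetical; scipy's `ks_2samp` stands in for the paper's fuller workflow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical spatially averaged monthly series (360 months) for the
# observations and three invented candidate GCMs
obs = rng.normal(10.0, 2.0, size=360)
models = {
    "GCM-A": rng.normal(10.2, 2.1, size=360),   # close to observations
    "GCM-B": rng.normal(12.0, 2.0, size=360),   # biased mean
    "GCM-C": rng.normal(10.0, 4.0, size=360),   # inflated variability
}

# Univariate criterion: the two-sample KS statistic against the
# observed distribution (smaller = more similar)
scores = {name: stats.ks_2samp(series, obs).statistic
          for name, series in models.items()}
ranking = sorted(scores, key=scores.get)
```

    A multicriteria scheme would compute several such scores (mean bias, trend tests, PCA loadings) and combine the per-criterion ranks rather than rely on a single statistic.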

  20. Application of Lidar Data to the Performance Evaluations of ...

    EPA Pesticide Factsheets

    The Tropospheric Ozone (O3) Lidar Network (TOLNet) provides time/height O3 measurements from near the surface to the top of the troposphere, describing spatial-temporal distributions in high fidelity, which is uniquely useful for evaluating the temporal evolution of O3 profiles in air quality models. This presentation describes the application of the lidar data to the performance evaluation of CMAQ-simulated O3 vertical profiles during the summer of 2014. Two-way coupled WRF-CMAQ simulations with 12-km and 4-km domains centered over Boulder, Colorado were performed during this time period. The analysis of the time series of observed and modeled O3 mixing ratios at different vertical layers indicates that the model frequently underestimated the observed values, and the underestimation was amplified in the middle model layers (~1 km above the ground). When the lightning strikes detected by the National Lightning Detection Network (NLDN) were analyzed along with the observed O3 time series, it was found that the daily maximum O3 mixing ratios correlated well with the lightning strikes in the vicinity of the lidar station. The analysis of temporal vertical profiles of both observed and modeled O3 mixing ratios on episodic days suggests that the model resolutions (12 km and 4 km) do not make any significant difference for this analysis (at this specific location and simulation period), but high O3 levels in the middle layers were linked to lightning activity that occurred in t

  1. Bayesian Adaptive Lasso for Ordinal Regression with Latent Variables

    ERIC Educational Resources Information Center

    Feng, Xiang-Nan; Wu, Hao-Tian; Song, Xin-Yuan

    2017-01-01

    We consider an ordinal regression model with latent variables to investigate the effects of observable and latent explanatory variables on the ordinal responses of interest. Each latent variable is characterized by correlated observed variables through a confirmatory factor analysis model. We develop a Bayesian adaptive lasso procedure to conduct…

  2. Maximum likelihood-based analysis of single-molecule photon arrival trajectories.

    PubMed

    Hajdziona, Marta; Molski, Andrzej

    2011-02-07

    In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime, where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
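    The role BIC plays in selecting among nested kinetic models can be illustrated with a deliberately simple Gaussian example rather than a Markov modulated Poisson process; the penalty k ln n - 2 ln L is the same, and the data and models below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, size=2000)   # data from the simpler "true" model
n = len(x)

def gauss_loglik(x, mu, sigma):
    # Log-likelihood of i.i.d. N(mu, sigma^2) data
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2.0 * sigma**2))

# Model 1: mean fixed at zero, one free parameter (sigma)
s1 = np.sqrt(np.mean(x**2))
bic1 = 1 * np.log(n) - 2.0 * gauss_loglik(x, 0.0, s1)

# Model 2: free mean and sigma, two free parameters
bic2 = 2 * np.log(n) - 2.0 * gauss_loglik(x, np.mean(x), np.std(x))
```

    The richer model always attains a slightly higher likelihood, but the ln(n) penalty per extra parameter lets BIC prefer the true, simpler model, mirroring the abstract's finding that BIC can recover the true state count once enough photons are observed.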

  3. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld, H.J.; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
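    The parallel-reservoir explanation can be sketched numerically: each reservoir alone obeys dQ/dt = -kQ, but their sum produces the progressively non-linear recession seen at the watershed scale. The rate constants and initial discharges below are illustrative, not PMRW values:

```python
import numpy as np

# Two parallel linear reservoirs with different recession constants
k1, k2 = 0.5, 0.05            # 1/day
Q1, Q2 = 8.0, 2.0             # initial discharges
t = np.linspace(0.0, 30.0, 301)

Q = Q1 * np.exp(-k1 * t) + Q2 * np.exp(-k2 * t)
dQdt = -(k1 * Q1 * np.exp(-k1 * t) + k2 * Q2 * np.exp(-k2 * t))

# Apparent recession constant -dQ/dt / Q: fixed for a single linear
# reservoir, but drifting from near k1 toward k2 for the aggregate
k_app = -dQdt / Q
```

    A recession analysis of the aggregate would thus diagnose a non-linear storage-discharge relationship even though every contributing landscape unit is strictly linear, which is the paper's central caution about inferring model structure from outlet data alone.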

  4. Using finite-difference waveform modeling to better understand rupture kinematics and path effects in ground motion modeling: an induced seismicity case study at the Groningen Gas field

    NASA Astrophysics Data System (ADS)

    Zurek, B.; Burnett, W. A.; deMartin, B.

    2017-12-01

    Ground motion models (GMMs) have historically been used as input in the development of probabilistic seismic hazard analysis (PSHA) and as an engineering tool to assess risk in building design. Generally these equations are developed from empirical analysis of observations drawn from fairly complete catalogs of seismic events. One of the challenges of performing a PSHA analysis in a region where earthquakes are anthropogenically induced is that the catalog of observations is not complete enough to produce a set of equations covering all expected outcomes. For example, PSHA analysis at the Groningen gas field, an area of known induced seismicity, requires estimates of ground motions from tremors up to a maximum magnitude of 6.5 ML, yet of the roughly 1300 recorded earthquakes the maximum observed magnitude to date has been 3.6 ML. This paper is part of a broader study in which we use a deterministic finite-difference waveform modeling tool to complement the traditional development of GMMs. Of particular interest is the sensitivity of the GMMs to uncertainty in the rupture process and how this scales to larger magnitude events that have not been observed. A kinematic fault rupture model is introduced into our waveform simulations to test the sensitivity of the GMMs to variability in the fault rupture process that is physically consistent with observations. These tests will aid in constraining the degree of variability in modeled ground motions due to a realistic range of fault parameters and properties. From this study we conclude that, in order to properly capture the uncertainty of the GMMs with magnitude up-scaling, one needs to address the impact of uncertainty in the near field (<10 km) imposed by the lack of constraint on the finite rupture model. By tying the uncertainty back to physical principles, we believe it can be better constrained, thus reducing exposure to risk.
Further, by investigating and constraining the range of fault rupture scenarios and earthquake magnitudes on ground motion models, hazard and risk analysis in regions with incomplete earthquake catalogs, such as the Groningen gas field, can be better understood.

  5. On the Direct Assimilation of Along-track Sea Surface Height Observations into a Free-surface Ocean Model Using a Weak Constraints Four Dimensional Variational (4dvar) Method

    NASA Astrophysics Data System (ADS)

    Ngodock, H.; Carrier, M.; Smith, S. R.; Souopgui, I.; Martin, P.; Jacobs, G. A.

    2016-02-01

    The representer method is adopted for solving a weak constraints 4dvar problem for the assimilation of ocean observations including along-track SSH, using a free surface ocean model. Direct 4dvar assimilation of SSH observations along the satellite tracks requires that the adjoint model be integrated with Dirac impulses on the right hand side of the adjoint equations for the surface elevation equation. The solution of this adjoint model will inevitably include surface gravity waves, and it constitutes the forcing for the tangent linear model (TLM) according to the representer method. This yields an analysis that is contaminated by gravity waves. A method for avoiding the generation of the surface gravity waves in the analysis is proposed in this study; it consists of removing the adjoint of the free surface from the right hand side (rhs) of the free surface mode in the TLM. The information from the SSH observations will still propagate to all other variables via the adjoint of the balance relationship between the barotropic and baroclinic modes, resulting in the correction to the surface elevation. Two assimilation experiments are carried out in the Gulf of Mexico: one with adjoint forcing included on the rhs of the TLM free surface equation, and the other without. Both analyses are evaluated against the assimilated SSH observations, SSH maps from Aviso and independent surface drifters, showing that the analysis that did not include adjoint forcing in the free surface is more accurate. This study shows that when a weak constraint 4dvar approach is considered for the assimilation of along-track SSH observations using a free surface model, with the aim of correcting the mesoscale circulation, an independent model error should not be assigned to the free surface.

  6. Performance and diagnostic evaluation of ozone predictions by the Eta-Community Multiscale Air Quality Forecast System during the 2002 New England Air Quality Study.

    PubMed

    Yu, Shaocai; Mathur, Rohit; Kang, Daiwen; Schere, Kenneth; Eder, Brian; Pleim, Jonathan

    2006-10-01

    A real-time air quality forecasting system (the Eta-Community Multiscale Air Quality [CMAQ] model suite) has been developed by linking the National Centers for Environmental Prediction Eta model to the U.S. Environmental Protection Agency (EPA) CMAQ model. This work presents results from the application of the Eta-CMAQ modeling system for forecasting ozone (O3) over the northeastern United States during the 2002 New England Air Quality Study (NEAQS). Spatial and temporal performance of the Eta-CMAQ model for O3 was evaluated by comparison with observations from the EPA Air Quality System (AQS) network. This study also examines the ability of the model to simulate the processes governing the distributions of tropospheric O3 on the basis of the intensive datasets obtained at the four Atmospheric Investigation, Regional Modeling, Analysis, and Prediction (AIRMAP) and Harvard Forest (HF) surface sites. The episode analysis reveals that the model captured the buildup of O3 concentrations over the northeastern domain from August 11 and reproduced the spatial distributions of observed O3 very well for the daytime (8:00 p.m.) of both August 8 and 12, with normalized mean bias (NMB) mostly within ±20%. The model reproduced 53.3% of the observed hourly O3 within a factor of 1.5, with an NMB of 29.7% and a normalized mean error of 46.9%, at the 342 AQS sites. The comparison of modeled and observed lidar O3 vertical profiles shows that whereas the model reproduced the observed vertical structure, it tended to overestimate at higher altitudes. The model reproduced 64-77% of observed NO2 photolysis rate values within a factor of 1.5 at the AIRMAP sites. At the HF site, comparison of modeled and observed O3/nitrogen oxide (NOx) ratios suggests that the site is mainly under strongly NOx-sensitive conditions (>53%). It was found that the modeled lower limits of the O3 production efficiency values (inferred from the O3-CO correlation) are close to the observations.

  7. An Ideal Observer Analysis of Visual Working Memory

    PubMed Central

    Sims, Chris R.; Jacobs, Robert A.; Knill, David C.

    2013-01-01

    Limits in visual working memory (VWM) strongly constrain human performance across many tasks. However, the nature of these limits is not well understood. In this paper we develop an ideal observer analysis of human visual working memory, by deriving the expected behavior of an optimally performing, but limited-capacity memory system. This analysis is framed around rate–distortion theory, a branch of information theory that provides optimal bounds on the accuracy of information transmission subject to a fixed information capacity. The result of the ideal observer analysis is a theoretical framework that provides a task-independent and quantitative definition of visual memory capacity and yields novel predictions regarding human performance. These predictions are subsequently evaluated and confirmed in two empirical studies. Further, the framework is general enough to allow the specification and testing of alternative models of visual memory (for example, how capacity is distributed across multiple items). We demonstrate that a simple model developed on the basis of the ideal observer analysis—one which allows variability in the number of stored memory representations, but does not assume the presence of a fixed item limit—provides an excellent account of the empirical data, and further offers a principled re-interpretation of existing models of visual working memory. PMID:22946744
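    The rate-distortion framing can be sketched with the Gaussian bound D(R) = sigma^2 * 2^(-2R): if a fixed channel capacity is shared evenly across the memorized items, per-item error grows smoothly with set size, with no fixed item limit. The capacity value below is illustrative; the paper's models are considerably richer:

```python
import numpy as np

# Gaussian rate-distortion bound: a N(0, sigma^2) stimulus encoded at
# R bits cannot be reconstructed with mean squared error below this
def min_mse(sigma2, R):
    return sigma2 * 2.0 ** (-2.0 * R)

# Share a fixed total capacity (in bits) evenly across the items held
# in memory: recall precision degrades continuously with set size
capacity = 6.0
set_sizes = np.array([1, 2, 4, 8])
per_item_mse = min_mse(1.0, capacity / set_sizes)
```

    This continuous degradation, rather than a hard cutoff at some item count, is the qualitative signature the ideal observer analysis tests against the empirical data.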

  8. Extracting galactic structure parameters from multivariated density estimation

    NASA Technical Reports Server (NTRS)

    Chen, B.; Creze, M.; Robin, A.; Bienayme, O.

    1992-01-01

    Multivariate statistical analysis, including cluster analysis (unsupervised classification), discriminant analysis (supervised classification) and principal component analysis (a dimensionality reduction method), together with nonparametric density estimation, has been successfully used to search for meaningful associations in the 5-dimensional space of observables between observed points and sets of simulated points generated from a synthetic approach to galaxy modelling. These methodologies can be applied as new tools to obtain information about hidden structure that is otherwise unrecognizable, and to place important constraints on the space distribution of various stellar populations in the Milky Way. In this paper, we concentrate on illustrating how to use nonparametric density estimation to substitute for the true densities of both the simulated sample and the real sample in the five-dimensional space. In order to fit model-predicted densities to reality, we derive a system of n equations (where n is the total number of observed points) in m unknown parameters (where m is the number of predefined groups). A least-squares estimation then allows us to determine the density law of the different groups and components in the Galaxy. The output from our software, which can be used in many research fields, also gives the systematic error between the model and the observation by a Bayes rule.
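    The least-squares step described above, n equations in m group weights, can be sketched as follows. The density values are synthetic stand-ins for the nonparametric estimates; the actual analysis works in the 5-dimensional observable space:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for model-predicted group densities evaluated at each of
# the n observed points, for m predefined population groups
n, m = 500, 2
D = np.abs(rng.normal(1.0, 0.3, size=(n, m)))

# "Observed" density at each point: a true mixture of the two groups
c_true = np.array([0.7, 0.3])
d_obs = D @ c_true

# Least-squares solution of the n equations for the m group weights
c_hat, *_ = np.linalg.lstsq(D, d_obs, rcond=None)
```

    With noise-free densities the weights are recovered exactly; with kernel density estimates in place of the true densities, the residual of the same fit measures the systematic model-observation discrepancy the abstract mentions.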

  9. Atmospheric Boundary Layer Dynamics Near Ross Island and Over West Antarctica.

    NASA Astrophysics Data System (ADS)

    Liu, Zhong

    The atmospheric boundary layer dynamics near Ross Island and over West Antarctica has been investigated. The study consists of two parts. The first part involved the use of data from ground-based remote sensing equipment (sodar and RASS), radiosondes, pilot balloons, automatic weather stations, and NOAA AVHRR satellite imagery. The second part involved the use of a high resolution boundary layer model coupled with a three-dimensional primitive equation mesoscale model to simulate the observed atmospheric boundary layer winds and temperatures. Turbulence parameters were simulated with an E-epsilon turbulence model driven by observed winds and temperatures. The observational analysis, for the first time, revealed that the airflow passing through the Ross Island area is supplied mainly by enhanced katabatic drainage from Byrd Glacier and secondarily by drainage from Mulock and Skelton glaciers. The observed diurnal variation of the blocking effect near Ross Island is dominated by the changes in the upstream katabatic airflow. The synthesized analysis over West Antarctica found that the Siple Coast katabatic wind confluence zone consists of two superimposed katabatic airflows: a relatively warm and more buoyant katabatic flow from West Antarctica overlies a colder and less buoyant katabatic airflow from East Antarctica. The force balance analysis revealed that, inside the West Antarctic katabatic wind zone, the pressure gradient force associated with the blocked airflow against the Transantarctic Mountains dominates; inside the East Antarctic katabatic wind zone, the downslope buoyancy force due to the cold air overlying the sloping terrain is dominant. The analysis also shows that these forces are in geostrophic balance with the Coriolis force. An E-epsilon turbulence closure model is used to simulate the diurnal variation of sodar backscatter. The results show that the model is capable of qualitatively capturing the main features of the observed sodar backscatter.
To improve the representation of the atmospheric boundary layer, a second-order turbulence closure model coupled with the input from a mesoscale model was applied to the springtime Siple Coast katabatic wind confluence zone. The simulation was able to capture the main features of the confluence zone, which were not well resolved by the mesoscale model.

  10. The pre-Argo ocean reanalyses may be seriously affected by the spatial coverage of moored buoys

    PubMed Central

    Sivareddy, S.; Paul, Arya; Sluka, Travis; Ravichandran, M.; Kalnay, Eugenia

    2017-01-01

    Assimilation methods, meant to constrain the divergence of the model trajectory from reality using observations, do not exactly satisfy the physical laws governing the model state variables. This allows mismatches in the analysis in the vicinity of observation locations, where the effect of assimilation is most prominent. These mismatches are usually mitigated either by the model dynamics in between the analysis cycles and/or by assimilation at the next analysis cycle. However, if the observation coverage is limited in space, as it was in the ocean before the Argo era, these mechanisms may be insufficient to dampen the mismatches, which we call shocks, and they may remain and grow. Here we show through controlled experiments, using real and simulated observations in two different ocean models and assimilation systems, that such shocks are generated in the ocean at the lateral boundaries of the moored buoy network. They thrive and propagate westward as Rossby waves along these boundaries. However, these shocks are essentially eliminated by the assimilation of the near-homogeneous global Argo distribution. These findings question the fidelity of ocean reanalysis products in the pre-Argo era. For example, a reanalysis that ignores Argo floats and assimilates only moored buoys wrongly represents 2008 as a negative Indian Ocean Dipole year. PMID:28429748

  11. A Method of Relating General Circulation Model Simulated Climate to the Observed Local Climate. Part I: Seasonal Statistics.

    NASA Astrophysics Data System (ADS)

    Karl, Thomas R.; Wang, Wei-Chyung; Schlesinger, Michael E.; Knight, Richard W.; Portman, David

    1990-10-01

    Important surface observations such as the daily maximum and minimum temperature, daily precipitation, and cloud ceilings often have localized characteristics that are difficult to reproduce with the current resolution and physical parameterizations of state-of-the-art General Circulation Models (GCMs). Many of the difficulties can be partially attributed to mismatches in scale, local topography, regional geography, and boundary conditions between models and surface-based observations. Here, we present a method, called climatological projection by model statistics (CPMS), to relate GCM grid-point free-atmosphere statistics, the predictors, to these important local surface observations. The method can be viewed as a generalization of the model output statistics (MOS) and perfect prog (PP) procedures used in numerical weather prediction (NWP) models. It consists of the application of three statistical methods: 1) principal component analysis (PCA), 2) canonical correlation, and 3) inflated regression analysis. The PCA reduces the redundancy of the predictors. The canonical correlation is used to develop simultaneous relationships between linear combinations of the predictors, the canonical variables, and the surface-based observations. Finally, inflated regression is used to relate the important canonical variables to each of the surface-based observed variables. We demonstrate that even an early version of the Oregon State University two-level atmospheric GCM (with prescribed sea surface temperature) produces free-atmosphere statistics that can, when standardized using the model's internal means and variances (the MOS-like version of CPMS), closely approximate the observed local climate. When the model data are standardized by the observed free-atmosphere means and variances (the PP version of CPMS), however, the model does not reproduce the observed surface climate as well. 
Our results indicate that in the MOS-like version of CPMS the differences between the output of a ten-year GCM control run and the surface-based observations are often smaller than the differences between the observations of two ten-year periods. Such positive results suggest that GCMs may already contain important climatological information that can be used to infer the local climate.
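The three-step CPMS pipeline can be sketched numerically. The following is a minimal illustration with invented stand-in data (array sizes, the retained component count, and the variance-matching form of the inflation step are all assumptions, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: 200 days of GCM free-atmosphere predictors (10
# variables) and 3 surface-observed variables (e.g. Tmax, Tmin, cloud).
X = rng.standard_normal((200, 10))
Y = X[:, :3] @ rng.standard_normal((3, 3)) + 0.5 * rng.standard_normal((200, 3))

# Step 1: PCA on standardized predictors to remove redundancy.
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
pcs = Xs @ Vt[:5].T                       # retain 5 components (a choice)

def cca(A, B):
    """Canonical correlation analysis via QR + SVD of the cross-product."""
    A = A - A.mean(0); B = B - B.mean(0)
    Qa, Ra = np.linalg.qr(A)
    Qb, Rb = np.linalg.qr(B)
    U, rho, VTt = np.linalg.svd(Qa.T @ Qb)
    return A @ np.linalg.solve(Ra, U), B @ np.linalg.solve(Rb, VTt.T), rho

# Step 2: canonical variables linking the PCs to the surface observations.
Ua, Ub, rho = cca(pcs, Y)

# Step 3: "inflated" regression -- rescale the fitted values so their
# variance matches the observed variance, instead of plain least squares.
predictions = np.empty_like(Y)
for j in range(Y.shape[1]):
    beta, *_ = np.linalg.lstsq(Ua, Y[:, j] - Y[:, j].mean(), rcond=None)
    fit = Ua @ beta
    predictions[:, j] = fit * (Y[:, j].std() / fit.std()) + Y[:, j].mean()
```

The inflation step is what distinguishes CPMS-style downscaling from ordinary regression: least-squares fits systematically underestimate local variance, which matters for climate statistics.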

  12. Comprehensive analysis of the simplest curvaton model

    NASA Astrophysics Data System (ADS)

    Byrnes, Christian T.; Cortês, Marina; Liddle, Andrew R.

    2014-07-01

    We carry out a comprehensive analysis of the simplest curvaton model, which is based on two noninteracting massive fields. Our analysis encompasses cases where the inflaton and curvaton both contribute to observable perturbations, and where the curvaton itself drives a second period of inflation. We consider both power spectrum and non-Gaussianity observables, and focus on presenting constraints in model parameter space. The fully curvaton-dominated regime is in some tension with observational data, while an admixture of inflaton-generated perturbations improves the fit. The inflating curvaton regime mimics the predictions of Nflation. Some parts of parameter space permitted by power spectrum data are excluded by non-Gaussianity constraints. The recent BICEP2 results [P. A. R. Ade et al. (BICEP2 Collaboration), Phys. Rev. Lett. 112, 241101 (2014)], if confirmed as of predominantly primordial origin, require that the inflaton perturbations provide a significant fraction of the total perturbation, ruling out the usual curvaton scenario in which the inflaton perturbations are negligible, though not the admixture regime where both inflaton and curvaton contribute to the spectrum.

  13. Climate Observing Systems: Where are we and where do we need to be in the future

    NASA Astrophysics Data System (ADS)

    Baker, B.; Diamond, H. J.

    2017-12-01

    Climate research and monitoring requires an observational strategy that blends long-term, carefully calibrated measurements with short-term, focused process studies. The implementation and operation of climate observing networks and the provision of related climate services both have a significant role to play in assisting the development of national climate adaptation policies and in facilitating national economic development. Climate observing systems will require a strong research element for a long time to come. This requires improved observations of the state variables and the ability to set them in a coherent physical (as well as chemical and biological) framework with models. Climate research and monitoring requires an integrated strategy of land/ocean/atmosphere observations, including both in situ and remote sensing platforms, together with modeling and analysis. It is clear that we still need more research and analysis on climate processes, sampling strategies, and processing algorithms.

  14. Two-dimensional advective transport in ground-water flow parameter estimation

    USGS Publications Warehouse

    Anderman, E.R.; Hill, M.C.; Poeter, E.P.

    1996-01-01

    Nonlinear regression is useful in ground-water flow parameter estimation, but problems of parameter insensitivity and correlation often exist given commonly available hydraulic-head and head-dependent flow (for example, stream and lake gain or loss) observations. To address this problem, advective-transport observations are added to the ground-water flow parameter-estimation model MODFLOWP using particle-tracking methods. The resulting model is used to investigate the importance of advective-transport observations relative to head-dependent flow observations when either or both are used in conjunction with hydraulic-head observations in a simulation of the sewage-discharge plume at Otis Air Force Base, Cape Cod, Massachusetts, USA. The analysis procedure for evaluating the probable effect of new observations on the regression results consists of two steps: (1) parameter sensitivities and correlations calculated at initial parameter values are used to assess the model parameterization and expected relative contributions of different types of observations to the regression; and (2) optimal parameter values are estimated by nonlinear regression and evaluated. In the Cape Cod parameter-estimation model, advective-transport observations did not significantly increase the overall parameter sensitivity; however: (1) inclusion of advective-transport observations decreased parameter correlation enough for more unique parameter values to be estimated by the regression; (2) realistic uncertainties in advective-transport observations had a small effect on parameter estimates relative to the precision with which the parameters were estimated; and (3) the regression results and sensitivity analysis provided insight into the dynamics of the ground-water flow system, especially the importance of accurate boundary conditions. 
In this work, advective-transport observations improved the calibration of the model and the estimation of ground-water flow parameters, and use of regression and related techniques produced significant insight into the physical system.
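Step (1) of the two-step procedure can be sketched with a linearized toy example. The sensitivity matrix below is invented (MODFLOWP computes the real one from the flow model); the point is that a nearly redundant parameter pair shows up as an off-diagonal entry near ±1 in the parameter correlation matrix, signalling non-unique estimates:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented sensitivity matrix: 30 observations x 4 parameters, standing in
# for d(simulated head)/d(parameter) evaluated at initial parameter values.
J = rng.standard_normal((30, 4))
J[:, 3] = J[:, 2] + 0.01 * rng.standard_normal(30)   # nearly redundant pair

w = np.ones(30)                               # observation weights, 1/sigma^2
cov = np.linalg.inv(J.T @ (w[:, None] * J))   # linearized parameter covariance

# Composite scaled sensitivity: how much information the observations
# carry, on average, about each parameter.
css = np.sqrt((w[:, None] * J**2).mean(0))

# Parameter correlation matrix; entries near +/-1 flag parameter pairs the
# regression cannot estimate uniquely from these observations.
d = np.sqrt(np.diag(cov))
corr = cov / np.outer(d, d)
```

Adding a new observation type (such as advective travel) amounts to appending rows to J; if those rows break the collinearity, the offending correlation entry drops away from ±1, which is exactly the effect reported above.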

  15. A Search for tt¯H (H → bb) at the Large Hadron Collider with the ATLAS Detector Using a Matrix Element Method

    NASA Astrophysics Data System (ADS)

    Basye, Austin T.

    A matrix element method analysis of the Standard Model Higgs boson, produced in association with two top quarks that decay in the lepton-plus-jets channel, is presented. Based on 20.3 fb⁻¹ of √s = 8 TeV data produced at the Large Hadron Collider and collected by the ATLAS detector, this analysis utilizes multiple advanced techniques to search for ttH signatures with a 125 GeV Higgs boson decaying to two b-quarks. After categorizing selected events based on their jet and b-tag multiplicities, signal-rich regions are analyzed using the matrix element method. The resulting variables are then propagated to two parallel multivariate analyses utilizing Neural Networks and Boosted Decision Trees, respectively. As no significant excess is found, an observed (expected) limit of 3.4 (2.2) times the Standard Model cross-section is determined at 95% confidence, using the CLs method, for the Neural Network analysis. For the Boosted Decision Tree analysis, an observed (expected) limit of 5.2 (2.7) times the Standard Model cross-section is determined at 95% confidence, using the CLs method. Corresponding unconstrained fits of the Higgs boson signal strength to the observed data yield measured ratios of the signal cross-section to the Standard Model prediction of mu = 1.2 +/- 1.3(total) +/- 0.7(stat.) for the Neural Network analysis, and mu = 2.9 +/- 1.4(total) +/- 0.8(stat.) for the Boosted Decision Tree analysis.
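The CLs limit-setting logic can be illustrated with a deliberately simplified single-bin counting experiment. This is a sketch of the CLs idea only, not the profile-likelihood machinery ATLAS actually uses, and the background, signal, and observed counts are invented:

```python
import numpy as np
from scipy.stats import poisson

# Toy single-bin counting experiment (invented numbers): b expected
# background events, s expected signal events at mu = 1, n_obs observed.
b, s, n_obs = 50.0, 10.0, 52

def cls(mu):
    """CLs = CL_{s+b} / CL_b, using the observed count as the test statistic."""
    p_sb = poisson.cdf(n_obs, b + mu * s)   # p-value under signal + background
    p_b = poisson.cdf(n_obs, b)             # p-value under background only
    return p_sb / p_b

# Scan the signal strength: the 95% CL upper limit on mu is where CLs
# first drops below 0.05.
mus = np.linspace(0.0, 5.0, 501)
curve = np.array([cls(m) for m in mus])
limit = mus[np.argmax(curve < 0.05)]
```

Dividing by CL_b is what protects the method against excluding signals the experiment has no sensitivity to, which is why limits in the abstract are quoted "using the CLs method" rather than as plain p-values.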

  16. Theoretical and observational constraints on Tachyon Inflation

    NASA Astrophysics Data System (ADS)

    Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel; Hidalgo, Juan Carlos; Rigel Mora-Luna, Refugio

    2018-03-01

    We constrain several models in Tachyonic Inflation derived from the large-N formalism by considering theoretical aspects as well as the latest observational data. On the theoretical side, we assess the field range of our models by means of the excursion of the equivalent canonical field. On the observational side, we employ BK14+PLANCK+BAO data to perform a parameter estimation analysis as well as a Bayesian model selection to distinguish the most favoured models among the four classes presented here. We observe that the original potential V ∝ sech(T) is strongly disfavoured by observations with respect to a reference model with flat priors on inflationary observables. This realisation of Tachyon inflation also presents a large field range, which may demand further quantum corrections. We also provide examples of potentials derived from the polynomial and the perturbative classes which are both statistically favoured and theoretically acceptable.

  17. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutauruk, P. T. P.; Ireland, D. G.; Rosner, G.

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets for the reaction γ + p → K⁺ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1, where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion in associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
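The model-comparison step — how many polynomial terms the data actually support — can be sketched as follows. The data are synthetic, ordinary (m = 0) Legendre polynomials stand in for the associated ones, and the BIC is used as a cheap stand-in for the full posterior model probabilities computed in the paper:

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)

# Synthetic angular distribution (invented coefficients): the truth uses
# Legendre polynomials up to order 3, plus Gaussian noise.
cos_theta = np.linspace(-0.9, 0.9, 40)
true_coeffs = [1.0, 0.4, -0.3, 0.2]
y = L.legval(cos_theta, true_coeffs) + 0.03 * rng.standard_normal(cos_theta.size)

def bic(order):
    """Least-squares fit with orders 0..order; the BIC penalizes each extra
    coefficient, approximating the Bayesian evidence comparison."""
    resid = y - L.legval(cos_theta, L.legfit(cos_theta, y, order))
    n, k = y.size, order + 1
    return n * np.log((resid**2).mean()) + k * np.log(n)

scores = {m: bic(m) for m in range(1, 8)}
best = min(scores, key=scores.get)   # smallest BIC = preferred truncation
```

With these settings the score drops sharply up to the true order and then flattens, mirroring the paper's finding that only a small number of terms are warranted.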

  18. Vertical structure and physical processes of the Madden-Julian Oscillation: Biases and uncertainties at short range

    DOE PAGES

    Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; ...

    2015-05-26

    We present an analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models as part of the “Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)” project. A lead time of 12–36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. In conclusion, the wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. Additionally, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

  19. Analysis of Asian Outflow over the Western Pacific using Observations from Trace-P

    NASA Technical Reports Server (NTRS)

    Jacob, Daniel J.

    2004-01-01

    Our analysis of the TRACE-P data focused on answering the following questions: 1) How do anthropogenic sources in Asia contribute to chemical outflow over the western Pacific in spring? 2) How does biomass burning in southeast Asia contribute to this outflow? 3) How can the TRACE-P observations be used to better quantify the sources of environmentally important gases in eastern Asia? Our strategy drew on a combination of data analysis and global 3-D modeling, as described below. We also contributed to the planning and execution of TRACE-P through service as mission scientist and by providing chemical model forecasts in the field.

  20. Synthetic Training Data Generation for Activity Monitoring and Behavior Analysis

    NASA Astrophysics Data System (ADS)

    Monekosso, Dorothy; Remagnino, Paolo

    This paper describes a data generator that produces synthetic data to simulate observations from an array of environment monitoring sensors. The overall goal of our work is to monitor the well-being of one occupant in a home. Sensors are embedded in a smart home to unobtrusively record environmental parameters. Based on the sensor observations, behavior analysis and modeling are performed. However, behavior analysis and modeling require large data sets to be collected over long periods of time to achieve the expected level of accuracy. A data generator was developed from initial data (i.e., data collected over periods lasting weeks) to facilitate concurrent data collection and algorithm development. The data generator is based on statistical inference techniques. Variation is introduced into the data using perturbation models.
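The resample-then-perturb pattern described above can be sketched in a few lines. The seed observations, variable choices, and perturbation scales below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented seed data: wake-up times (hours) and kettle activations per day
# from a few weeks of hypothetical smart-home logging.
wake_obs = np.array([6.9, 7.1, 7.0, 7.4, 6.8, 7.2, 7.3, 7.0, 6.95, 7.15])
kettle_obs = np.array([3, 4, 3, 5, 4, 3, 4, 4, 3, 5])

def generate_days(n_days, jitter=0.15):
    """Resample the small seed sample (the statistical-inference step) and
    add Gaussian / integer perturbation models to introduce variation."""
    wake = rng.choice(wake_obs, n_days) + rng.normal(0.0, jitter, n_days)
    kettle = np.maximum(0, rng.choice(kettle_obs, n_days)
                        + rng.integers(-1, 2, n_days))
    return wake, kettle

wake, kettle = generate_days(1000)
```

A generator like this lets behavior-modeling algorithms be developed against months of plausible data while only weeks of real observations exist.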

  1. eGSM: An Extended Sky Model of Diffuse Radio Emission

    NASA Astrophysics Data System (ADS)

    Kim, Doyeon; Liu, Adrian; Switzer, Eric

    2018-01-01

    Both cosmic microwave background and 21cm cosmology observations must contend with astrophysical foreground contaminants in the form of diffuse radio emission. For precise cosmological measurements, these foregrounds must be accurately modeled over the entire sky. Ideally, such full-sky models ought to be primarily motivated by observations. Yet in practice, these observations are limited, with data sets that are observed not only in a heterogeneous fashion, but also over limited frequency ranges. Previously, the Global Sky Model (GSM) took some steps towards solving the problem of incomplete observational data by interpolating over multi-frequency maps using principal component analysis (PCA). In this poster, we present an extended version of the GSM (called eGSM) that includes the following improvements: 1) better zero-level calibration; 2) incorporation of non-uniform survey resolutions and sky coverage; 3) the ability to quantify uncertainties in sky models; and 4) the ability to optimally select spectral models using Bayesian evidence techniques.
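The GSM-style PCA interpolation that eGSM builds on can be sketched with a toy sky. Everything here is invented (two spectral components, six survey frequencies, 500 pixels); the point is the mechanics of decomposing observed spectra and predicting a map at an unobserved frequency:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented training set: 500 pixels observed at 6 frequencies (MHz); the
# "sky" mixes a synchrotron-like power law and a flat spectral component.
freqs = np.array([45.0, 150.0, 408.0, 1420.0, 23000.0, 33000.0])
comps = np.vstack([(freqs / 408.0) ** -2.7, np.ones_like(freqs)])
amps = np.abs(rng.standard_normal((500, 2)))
maps = amps @ comps                           # shape (npix, nfreq)

# GSM-style step: normalize, decompose the spectra with PCA (via SVD),
# then interpolate component spectra in log-frequency to a new band.
norm = maps.mean(0)
X = maps / norm
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2                                         # components retained (a choice)
pix_amp = U[:, :k] * s[:k]
logf = np.log(freqs)

def predict(f_target):
    """Interpolate component spectra and normalization to f_target (MHz)."""
    comp_t = np.array([np.interp(np.log(f_target), logf, Vt[i])
                       for i in range(k)])
    norm_t = np.exp(np.interp(np.log(f_target), logf, np.log(norm)))
    return (pix_amp @ comp_t) * norm_t

map_600 = predict(600.0)                      # map at an unobserved frequency
```

The eGSM improvements listed above address exactly the weak points visible in this sketch: the interpolation inherits whatever zero-level and resolution mismatches the input surveys have, and it gives no uncertainty on `map_600`.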

  2. A critique of supernova data analysis in cosmology

    NASA Astrophysics Data System (ADS)

    Gopal Vishwakarma, Ram; Narlikar, Jayant V.

    2010-12-01

    Observational astronomy has shown significant growth over the last decade and has made important contributions to cosmology. A major paradigm shift in cosmology was brought about by observations of Type Ia supernovae. The notion that the universe is accelerating has led to several theoretical challenges. Unfortunately, although high-quality supernova data sets are being produced, their statistical analysis leaves much to be desired. Instead of using the data to directly test the model, several studies seem to concentrate on assuming the model to be correct and limiting themselves to estimating model parameters and internal errors. As shown here, the important purpose of testing a cosmological theory is thereby vitiated.

  3. Some dynamical aspects of interacting quintessence model

    NASA Astrophysics Data System (ADS)

    Choudhury, Binayak S.; Mondal, Himadri Shekhar; Chatterjee, Devosmita

    2018-04-01

    In this paper, we consider a particular form of coupling, namely B = σ(ρ̇_m − ρ̇_φ), in a spatially flat (k=0) Friedmann-Lemaître-Robertson-Walker (FLRW) space-time. We perform a phase-space analysis of this interacting quintessence (dark energy) and dark matter model for different numerical values of the parameters. We also present the phase-space analysis for the `best-fit Universe' or concordance model. In our analysis, we observe the existence of late-time scaling attractors.
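The phase-space technique itself — find the fixed points of the autonomous system, classify them by the eigenvalues of the Jacobian, and call the stable ones late-time attractors — can be sketched generically. The 2D system below is a stand-in (a competitive Lotka-Volterra model), not the paper's coupled quintessence equations:

```python
import numpy as np
from scipy.optimize import fsolve

# Stand-in autonomous system dx/dt = f(x); its interior fixed point at
# (2/3, 1/3) plays the role of a late-time attractor.
def f(v):
    x, y = v
    return np.array([x * (1 - x - y), y * (0.5 - y - 0.25 * x)])

def jacobian(v, eps=1e-6):
    """Central-difference Jacobian for classifying fixed points."""
    J = np.zeros((2, 2))
    for j in range(2):
        dv = np.zeros(2)
        dv[j] = eps
        J[:, j] = (f(v + dv) - f(v - dv)) / (2 * eps)
    return J

# Locate fixed points from several starting guesses, then classify them:
# all eigenvalue real parts negative -> attractor (late-time solution).
guesses = [(0.1, 0.1), (1.2, -0.1), (-0.1, 0.6), (0.7, 0.4)]
points = {tuple(np.round(fsolve(f, g), 6)) for g in guesses}
for p in sorted(points):
    eig = np.linalg.eigvals(jacobian(np.array(p)))
    kind = "attractor" if np.all(eig.real < 0) else "unstable or saddle"
    print(p, kind)
```

In the cosmological setting the same machinery is applied to the dimensionless density/field variables, and a stable fixed point with constant density ratio is a scaling attractor.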

  4. Multiscale Modeling for the Analysis of Grain-Scale Fracture Within Aluminum Microstructures

    NASA Technical Reports Server (NTRS)

    Glaessgen, Edward H.; Phillips, Dawn R.; Yamakov, Vesselin; Saether, Erik

    2005-01-01

    Multiscale modeling methods for the analysis of metallic microstructures are discussed. Both molecular dynamics and the finite element method are used to analyze crack propagation and stress distribution in a nanoscale aluminum bicrystal model subjected to hydrostatic loading. Quantitative similarity is observed between the results from the two very different analysis methods. A bilinear traction-displacement relationship that may be embedded into cohesive zone finite elements is extracted from the nanoscale molecular dynamics results.
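The extraction step — fitting a bilinear traction-displacement law that can later parameterize cohesive zone elements — can be sketched with synthetic stand-in data. The paper derives the curve from molecular dynamics; the scales and noise level below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def bilinear(d, t_max, d_peak, d_fail):
    """Bilinear cohesive law: linear rise to (d_peak, t_max), then linear
    softening to zero traction at d_fail."""
    rise = t_max * d / d_peak
    soften = t_max * (d_fail - d) / (d_fail - d_peak)
    return np.where(d <= d_peak, rise, np.clip(soften, 0.0, None))

# Synthetic "MD output": noisy samples of a known bilinear curve.
sep = np.linspace(0.0, 1.0, 60)               # opening displacement
traction = bilinear(sep, 3.0, 0.2, 0.9) + 0.05 * rng.standard_normal(sep.size)

popt, _ = curve_fit(bilinear, sep, traction, p0=[2.0, 0.3, 0.8])
t_max, d_peak, d_fail = popt
fracture_energy = 0.5 * t_max * d_fail        # area under the bilinear curve
```

The fitted triple (peak traction, peak displacement, failure displacement) and the implied fracture energy are exactly the quantities a cohesive zone finite element needs.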

  5. A 3D model of polarized dust emission in the Milky Way

    NASA Astrophysics Data System (ADS)

    Martínez-Solaeche, Ginés; Karakci, Ata; Delabrouille, Jacques

    2018-05-01

    We present a three-dimensional model of polarized galactic dust emission that takes into account the variation of the dust density, spectral index and temperature along the line of sight, and contains randomly generated small-scale polarization fluctuations. The model is constrained to match observed dust emission on large scales and, on smaller scales, extrapolations of observed intensity and polarization power spectra. This model can be used to investigate the impact of plausible complexity of the polarized dust foreground emission on the analysis and interpretation of future cosmic microwave background polarization observations.

  6. Arctic sea-ice diffusion from observed and simulated Lagrangian trajectories

    NASA Astrophysics Data System (ADS)

    Rampal, Pierre; Bouillon, Sylvain; Bergh, Jon; Ólason, Einar

    2016-07-01

    We characterize sea-ice drift by applying a Lagrangian diffusion analysis to buoy trajectories from the International Arctic Buoy Programme (IABP) dataset and from two different models: the standalone Lagrangian sea-ice model neXtSIM and the Eulerian coupled ice-ocean model used for the TOPAZ reanalysis. By applying the diffusion analysis to the IABP buoy trajectories over the period 1979-2011, we confirm that sea-ice diffusion follows two distinct regimes (ballistic and Brownian) and we provide accurate values for the diffusivity and integral timescale that could be used in Eulerian or Lagrangian passive tracers models to simulate the transport and diffusion of particles moving with the ice. We discuss how these values are linked to the evolution of the fluctuating displacements variance and how this information could be used to define the size of the search area around the position predicted by the mean drift. By comparing observed and simulated sea-ice trajectories for three consecutive winter seasons (2007-2011), we show how the characteristics of the simulated motion may differ from or agree well with observations. This comparison illustrates the usefulness of first applying a diffusion analysis to evaluate the output of modeling systems that include a sea-ice model before using these in, e.g., oil spill trajectory models or, more generally, to simulate the transport of passive tracers in sea ice.

  7. INTEGRATION OF FACILITY MODELING CAPABILITIES FOR NUCLEAR NONPROLIFERATION ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorensek, M.; Hamm, L.; Garcia, H.

    2011-07-18

    Developing automated methods for data collection and analysis that can facilitate nuclear nonproliferation assessment is an important research area with significant consequences for the effective global deployment of nuclear energy. Facility modeling that can integrate and interpret observations collected from monitored facilities in order to ascertain their functional details will be a critical element of these methods. Although improvements are continually sought, existing facility modeling tools can characterize all aspects of reactor operations and the majority of nuclear fuel cycle processing steps, and include algorithms for data processing and interpretation. Assessing nonproliferation status is challenging because observations can come from many sources, including local and remote sensors that monitor facility operations, as well as open sources that provide specific business information about the monitored facilities, and can be of many different types. Although many current facility models are capable of analyzing large amounts of information, they have not been integrated in an analyst-friendly manner. This paper addresses some of these facility modeling capabilities and illustrates how they could be integrated and utilized for nonproliferation analysis. The inverse problem of inferring facility conditions based on collected observations is described, along with a proposed architecture and computer framework for utilizing facility modeling tools. After considering a representative sampling of key facility modeling capabilities, the proposed integration framework is illustrated with several examples.

  8. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models are deterministic representations of the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry, and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by randomly sampling from the input probability distribution functions and running the model as many times as required for the results to converge. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of the model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we obtain more consistent results than when using the prior information for the input data: the variation of the uncertain parameters is decreased and the probability of the observed data improves as well. 
Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
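The Monte Carlo propagation step can be sketched with a toy surrogate. A simple exponential wave-height decay stands in for a Delft3D run, and the input distributions are assumed for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical stand-in for the deterministic model: exponential decay
# H(x) = H0 * exp(-k*x) plays the role of a Delft3D wave simulation.
def model(H0, k, x=500.0):
    return H0 * np.exp(-k * x)

# Assumed input uncertainty: offshore wave height and a decay coefficient.
n = 10_000
H0 = rng.normal(2.0, 0.3, n)                  # offshore wave height (m)
k = rng.lognormal(np.log(1e-3), 0.2, n)       # decay coefficient (1/m)

# Propagate the samples through the model and summarize the output spread.
H = model(H0, k)
lo, med, hi = np.percentile(H, [2.5, 50.0, 97.5])
```

In the Bayesian step the study describes, these prior samples would additionally be weighted or resampled (e.g. by MCMC) according to the likelihood of the Duck94 observations, which is what narrows the parameter ranges and tightens the output spread.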

  9. Preliminary evaluation of the importance of existing hydraulic-head observation locations to advective-transport predictions, Death Valley regional flow system, California and Nevada

    USGS Publications Warehouse

    Hill, Mary C.; Ely, D. Matthew; Tiedeman, Claire; O'Brien, Grady M.; D'Agnese, Frank A.; Faunt, Claudia C.

    2001-01-01

    When a model is calibrated by nonlinear regression, calculated diagnostic statistics and measures of uncertainty provide a wealth of information about many aspects of the system. This report presents a method of ranking the likely importance of existing observation locations using measures of prediction uncertainty. It is suggested that continued monitoring is warranted at more important locations, and unwarranted or less warranted at less important locations. The report develops the methodology and then demonstrates it using the hydraulic-head observation locations of a three-layer model of the Death Valley regional flow system. The predictions of interest are subsurface transport from beneath Yucca Mountain and 14 Underground Test Areas. The advective component of transport is considered because it is the component most affected by the system dynamics represented by the scale model being used. The problem is addressed using the capabilities of the U.S. Geological Survey computer program MODFLOW-2000, with its ADVective-Travel Observation (ADV) Package, and an additional computer program developed for this work. The methods presented in this report are used in three ways. (1) The ratings for individual observations are obtained by manipulating the measures of prediction uncertainty, and do not involve recalibrating the model. In this analysis, observation locations are each omitted individually and the resulting increase in uncertainty in the predictions is calculated. The uncertainty is quantified as standard deviations on the simulated advective transport. The increase in uncertainty is quantified as the percent increase in the standard deviations caused by omitting the one observation location from the calculation of standard deviations. In general, observation locations associated with larger increases are rated as more important. 
(2) Ratings for largely geographically based groups are obtained using a straightforward extension of the method used for individual observation locations. This analysis is needed where observations are clustered to determine whether the area is important to the predictions of interest. (3) Finally, the method is used to evaluate omitting a set of 100 observation locations. The locations were selected because they had low individual ratings and were not one of the few locations at which hydraulic heads from deep in the system were measured. The major results of the three analyses, when applied to the three-layer DVRFS ground-water flow system, are described in the following paragraphs. The discussion is labeled using the numbers 1 to 3 to clearly relate it to the three ways the method is used, as listed above. (1) The individual observation location analysis indicates that three observation locations are most important. They are located in Emigrant Valley, Oasis Valley, and Beatty. Of importance is that these and other observations shown to be important by this analysis are far from the travel paths considered. This displays the importance of the regional setting within which the transport occurs, the importance of including some sites throughout the area in the monitoring network, and the importance of including sites in these areas in particular. The method considered in this report indicates that the 19 observation locations that reflect hydraulic heads deeper in the system (in model layers 1, 2, and 3) are not very important. This appears to be because the locations of these observations are in the vicinity of shallow observation locations that also generally are rated as low importance, and because the model layers are hydraulically well connected vertically. The value of deep observations to testing conceptual models, however, is stressed. 
As a result, the deep observations are rated higher than is consistent with the results of the analysis presented, and none of these observations are omitted in the scenario discussed under (3) below. (2) The geographic grouping of th
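The omit-one rating in analysis (1) can be sketched in linearized form. The sensitivity arrays below are invented (the report computes them with MODFLOW-2000 and its ADV package); the mechanics are: prediction standard deviation from the inverse information matrix, recomputed with each observation location left out:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented linearized quantities: J = observation sensitivities to the
# parameters, w = observation weights (1/sigma^2), z = sensitivities of the
# predicted advective transport to the parameters.
n_obs, n_par = 25, 4
J = rng.standard_normal((n_obs, n_par))
w = np.ones(n_obs)
z = rng.standard_normal(n_par)

def pred_sd(keep):
    """First-order prediction standard deviation using a subset of the
    observations: sqrt(z' (J' w J)^-1 z)."""
    Jk = J[keep]
    info = Jk.T @ (w[keep][:, None] * Jk)
    return np.sqrt(z @ np.linalg.inv(info) @ z)

base = pred_sd(np.arange(n_obs))
importance = np.array([
    100.0 * (pred_sd(np.delete(np.arange(n_obs), i)) - base) / base
    for i in range(n_obs)])
ranking = np.argsort(importance)[::-1]   # largest increase = most important
```

Because removing data can only inflate the information matrix's inverse, every importance value is non-negative; group ratings, as in analysis (2), follow by deleting a set of rows instead of one.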

  10. Empowering Geoscience with Improved Data Assimilation Using the Data Assimilation Research Testbed "Manhattan" Release.

    NASA Astrophysics Data System (ADS)

    Raeder, K.; Hoar, T. J.; Anderson, J. L.; Collins, N.; Hendricks, J.; Kershaw, H.; Ha, S.; Snyder, C.; Skamarock, W. C.; Mizzi, A. P.; Liu, H.; Liu, J.; Pedatella, N. M.; Karspeck, A. R.; Karol, S. I.; Bitz, C. M.; Zhang, Y.

    2017-12-01

    The capabilities of the Data Assimilation Research Testbed (DART) at NCAR have been significantly expanded with the recent "Manhattan" release. DART is an ensemble-Kalman-filter-based suite of tools that enables researchers to use data assimilation (DA) without first becoming DA experts. Highlights include: significant improvement in efficient ensemble DA for very large models on thousands of processors, direct read and write of model state files in parallel, more control of the DA output for finer-grained analysis, new model interfaces useful to a variety of geophysical researchers, new observation forward operators, and the ability to use precomputed forward operators from the forecast model. The new model interfaces and example applications include the following: MPAS-A; the Model for Prediction Across Scales - Atmosphere is a global, nonhydrostatic, variable-resolution mesh atmospheric model, which facilitates multi-scale analysis and forecasting. The absence of distinct subdomains eliminates problems associated with subdomain boundaries. It demonstrates the ability to consistently produce higher-quality analyses than coarse, uniform meshes do. WRF-Chem; the Weather Research and Forecasting + (MOZART) Chemistry model assimilates observations from FRAPPÉ (Front Range Air Pollution and Photochemistry Experiment). WACCM-X; the Whole Atmosphere Community Climate Model with thermosphere and ionosphere eXtension assimilates observations of electron density to investigate sudden stratospheric warming. CESM (weakly) coupled assimilation; NCAR's Community Earth System Model is used for assimilation of atmospheric and oceanic observations into their respective components using coupled atmosphere+land+ocean+sea-ice forecasts. CESM2.0; assimilation in the atmospheric component (CAM, WACCM) of the newly released version is supported. This version contains new and extensively updated components and software environment. 
CICE; Los Alamos sea ice model (in CESM) is used to assimilate multivariate sea ice concentration observations to constrain the model's ice thickness, concentration, and parameters.
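
    As background for the ensemble filtering that DART provides, the analysis step of a perturbed-observation ensemble Kalman filter can be sketched for a scalar state observed directly. This is a textbook illustration with invented numbers, not DART code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Perturbed-observation EnKF analysis step for a scalar state with a
# direct observation (H = 1). Prior and observation values are invented.
N = 500                                     # ensemble size
xf = rng.normal(1.0, 2.0, N)                # forecast (prior) ensemble
obs, r = 3.0, 1.0                           # observation and its error variance

y = obs + rng.normal(0.0, np.sqrt(r), N)    # perturbed observations
pf = np.var(xf, ddof=1)                     # sample forecast error variance
k = pf / (pf + r)                           # Kalman gain
xa = xf + k * (y - xf)                      # analysis ensemble
```

    The analysis mean lies between the forecast mean and the observation, and the analysis spread contracts toward the theoretical posterior variance 1/(1/pf + 1/r).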

  11. MOCCA-SURVEY Database I: Is NGC 6535 a dark star cluster harbouring an IMBH?

    NASA Astrophysics Data System (ADS)

    Askar, Abbas; Bianchini, Paolo; de Vita, Ruggero; Giersz, Mirek; Hypki, Arkadiusz; Kamann, Sebastian

    2017-01-01

    We describe the dynamical evolution of a unique type of dark star cluster model, in which an intermediate-mass black hole (IMBH) dominates the cluster mass at Hubble time. We analysed results from about 2000 star cluster models (Survey Database I) simulated using the Monte Carlo code MOCCA (MOnte Carlo Cluster simulAtor) and identified these dark star cluster models. Taking one of these models, we apply the method of simulating realistic `mock observations', utilizing the Cluster simulatiOn Comparison with ObservAtions (COCOA) and Simulating Stellar Cluster Observation (SISCO) codes, to obtain the photometric and kinematic observational properties of the dark star cluster model at 12 Gyr. We find that the perplexing Galactic globular cluster NGC 6535 closely matches the observational photometric and kinematic properties of the dark star cluster model presented in this paper. Based on our analysis and the currently observed properties of NGC 6535, we suggest that this globular cluster could potentially harbour an IMBH. If it exists, the presence of this IMBH could be detected robustly with the proposed kinematic observations of NGC 6535.

  12. Planck intermediate results. XLII. Large-scale Galactic magnetic fields

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Adam, R.; Ade, P. A. R.; Alves, M. I. R.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chiang, H. C.; Christensen, P. R.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dolag, K.; Doré, O.; Ducout, A.; Dupac, X.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Ferrière, K.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hobson, M.; Hornstrup, A.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Mennella, A.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Oppermann, N.; Orlando, E.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. 
W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Strong, A. W.; Sudiwala, R.; Sunyaev, R.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, F.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-12-01

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. We use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data and underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Though this specific analysis depends on the component-separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.

  13. Characterizing observed circulation patterns within a bay using HF radar and numerical model simulations

    NASA Astrophysics Data System (ADS)

    O'Donncha, Fearghal; Hartnett, Michael; Nash, Stephen; Ren, Lei; Ragnoli, Emanuele

    2015-02-01

    In this study, High Frequency Radar (HFR) observations in conjunction with numerical model simulations are used to investigate surface flow dynamics in a tidally active, wind-driven bay: Galway Bay, situated on the west coast of Ireland. Comparisons against ADCP sensor data permit an independent assessment of HFR and model performance, respectively. Results show root-mean-square (rms) differences for the HFR in the range 10-12 cm/s, while model rms differences equalled 12-14 cm/s. Subsequent analysis focuses on a detailed comparison of HFR and model output. Harmonic analysis decomposes both sets of surface currents into distinct flow processes, enabling a correlation analysis between the resultant output and dominant forcing parameters. Comparisons of barotropic model simulations and the HFR tidal signal demonstrate consistently high agreement, particularly for the dominant M2 tidal signal. Analysis of residual flows demonstrates considerably poorer agreement, with the model failing to replicate complex flows. A number of hypotheses explaining this discrepancy are discussed, namely: discrepancies between regional-scale, coastal-ocean models and globally influenced bay-scale dynamics; model uncertainties arising from highly variable wind-driven flows across a large body of water forced by point measurements of wind vectors; and the high dependence of model simulations on empirical wind-stress coefficients. The research demonstrates that an advanced, widely used hydro-environmental model does not accurately reproduce aspects of surface flow processes, particularly with regard to wind forcing. Considering the significance of surface boundary conditions in both coastal and open-ocean dynamics, the viability of using a systematic analysis of results to improve model predictions is discussed.
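
    The tidal part of such a harmonic decomposition amounts to a least-squares fit of sinusoids at known constituent frequencies. A minimal sketch for the dominant M2 constituent, using a synthetic half-hourly current record (all values invented):

```python
import numpy as np

# Least-squares harmonic fit of the M2 tidal constituent to a synthetic
# surface-current record. Times in hours, frequency in cycles per hour.
t = np.arange(0.0, 24.0 * 30, 0.5)                    # 30 days, half-hourly
f_m2 = 1.0 / 12.4206                                  # M2 frequency (cph)
u = 0.30 * np.cos(2 * np.pi * f_m2 * t - 1.0) + 0.02  # synthetic current (m/s)

# Design matrix: mean level plus cosine and sine at the M2 frequency
A = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * f_m2 * t),
                     np.sin(2 * np.pi * f_m2 * t)])
coef, *_ = np.linalg.lstsq(A, u, rcond=None)
amp = np.hypot(coef[1], coef[2])                      # fitted M2 amplitude
phase = np.arctan2(coef[2], coef[1])                  # fitted M2 phase (rad)
```

    With real HFR data one would fit many constituents simultaneously and then examine the residual (non-tidal) signal, as the study does.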

  14. Parameter-expanded data augmentation for Bayesian analysis of capture-recapture models

    USGS Publications Warehouse

    Royle, J. Andrew; Dorazio, Robert M.

    2012-01-01

    Data augmentation (DA) is a flexible tool for analyzing closed and open population models of capture-recapture data, especially models which include sources of heterogeneity among individuals. The essential concept underlying DA, as we use the term, is based on adding "observations" to create a dataset composed of a known number of individuals. This new (augmented) dataset is then analyzed using a new model that includes a reformulation of the parameter N, the unknown number of individuals in the population, in the conventional model of the observed (unaugmented) data. In the context of capture-recapture models, we add a set of "all zero" encounter histories which are not, in practice, observable. The model of the augmented dataset is a zero-inflated version of either a binomial or a multinomial base model. Thus, our use of DA provides a general approach for analyzing both closed and open population models of all types. In doing so, this approach provides a unified framework for the analysis of a huge range of models that are treated as unrelated "black boxes" and named procedures in the classical literature. As a practical matter, analysis of the augmented dataset by MCMC is greatly simplified compared to other methods that require specialized algorithms. For example, complex capture-recapture models of an augmented dataset can be fitted with popular MCMC software packages (WinBUGS or JAGS) by providing a concise statement of the model's assumptions that usually involves only a few lines of pseudocode. In this paper, we review the basic technical concepts of data augmentation, and we provide examples of analyses of closed-population models (M0, Mh, distance sampling, and spatial capture-recapture models) and open-population models (Jolly-Seber) with individual effects.
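
    The zero-inflated construction can be sketched numerically: augment the detected encounter counts with all-zero histories up to a fixed size M, then fit the zero-inflated binomial of model M0. A simple grid search stands in for the MCMC analysis the authors describe; the sample sizes and parameter values below are invented.

```python
import math
import numpy as np

# Data augmentation for closed-population model M0: augment detected
# encounter counts with "all-zero" histories up to size M, then maximize
# the zero-inflated binomial likelihood on a coarse grid. Illustrative
# sketch only; a real analysis would use MCMC (e.g. JAGS or WinBUGS).
rng = np.random.default_rng(1)
N_true, T, p_true = 120, 5, 0.3
y = rng.binomial(T, p_true, N_true)       # encounters per individual
y_obs = y[y > 0]                          # only detected individuals observed

M = 400                                   # augmented dataset size (M >> N)
y_aug = np.concatenate([y_obs, np.zeros(M - len(y_obs), dtype=int)])
counts = np.bincount(y_aug, minlength=T + 1)

def loglik(psi, p):
    # zero-inflated binomial: psi * Binom(y; T, p) + (1 - psi) * 1{y = 0}
    b = np.array([math.comb(T, k) * p**k * (1 - p)**(T - k)
                  for k in range(T + 1)])
    zi = psi * b
    zi[0] += 1.0 - psi
    return float(counts @ np.log(zi))

grid = np.linspace(0.02, 0.98, 49)
best = max(((psi, p) for psi in grid for p in grid), key=lambda z: loglik(*z))
N_hat = best[0] * M                       # point estimate of population size
```

    The inclusion parameter psi absorbs the unknown N: its estimate times M recovers the population size, which is the reformulation the abstract refers to.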

  15. Analysis and fit of stellar spectra using a mega-database of CMFGEN models

    NASA Astrophysics Data System (ADS)

    Fierro-Santillán, Celia; Zsargó, Janos; Klapp, Jaime; Díaz-Azuara, Santiago Alfredo; Arrieta, Anabel; Arias, Lorena

    2017-11-01

    We present a tool for analysis and fitting of stellar spectra using a mega-database of 15,000 atmosphere models for OB stars. We have developed software tools that allow us to find the model that best fits an observed spectrum, comparing equivalent widths and line ratios in the observed spectrum with all models in the database. We use the Hα, Hβ, Hγ, and Hδ lines as criteria for stellar gravity, and the ratios He II λ4541/He I λ4471, He II λ4200/(He I+He II λ4026), He II λ4541/He I λ4387, and He II λ4200/He I λ4144 as criteria for Teff.
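
    A database search of this kind can be sketched as a chi-square nearest-model lookup. The model grid, feature set, and error bars below are hypothetical stand-ins, not the actual 15,000-model CMFGEN database.

```python
import numpy as np

# Pick the model whose line measurements (e.g. Balmer equivalent widths
# and He line ratios) are closest to the observed ones in a chi-square
# sense. Mock feature table with invented dimensions and uncertainties.
rng = np.random.default_rng(2)
n_models, n_features = 1000, 6
grid = rng.uniform(0.1, 2.0, (n_models, n_features))  # mock feature table
sigma = np.full(n_features, 0.05)       # assumed measurement uncertainties

observed = grid[437] + rng.normal(0.0, 0.01, n_features)  # fake observation

chi2 = np.sum(((grid - observed) / sigma) ** 2, axis=1)
best_index = int(np.argmin(chi2))       # index of the best-fitting model
```

    In practice one would weight lines by their sensitivity to gravity versus effective temperature rather than treating all features equally.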

  16. Factor Analysis for Clustered Observations.

    ERIC Educational Resources Information Center

    Longford, N. T.; Muthen, B. O.

    1992-01-01

    A two-level model for factor analysis is defined, and formulas for a scoring algorithm for this model are derived. A simple noniterative method based on decomposition of the total sums of squares and cross-products is discussed and illustrated with simulated data and data from the Second International Mathematics Study. (SLD)
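
    The decomposition the noniterative method rests on splits the total matrix of sums of squares and cross-products exactly into between-group and within-group parts; a minimal sketch with simulated clustered data (invented dimensions):

```python
import numpy as np

# Between/within decomposition of sums of squares and cross-products for
# clustered multivariate observations.
rng = np.random.default_rng(3)
n_groups, n_per, n_vars = 20, 15, 4
group_effects = rng.normal(0.0, 1.0, (n_groups, n_vars))
x = group_effects.repeat(n_per, axis=0) \
    + rng.normal(0.0, 0.5, (n_groups * n_per, n_vars))
groups = np.arange(n_groups).repeat(n_per)

grand = x.mean(axis=0)
total = (x - grand).T @ (x - grand)          # total SSCP matrix

within = np.zeros((n_vars, n_vars))
between = np.zeros((n_vars, n_vars))
for g in range(n_groups):
    xg = x[groups == g]
    mg = xg.mean(axis=0)
    within += (xg - mg).T @ (xg - mg)        # deviations from group means
    d = (mg - grand)[:, None]
    between += len(xg) * (d @ d.T)           # group means about grand mean
# total == between + within: the identity the two-level model exploits
```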

  17. HYDROLOGIC MODEL CALIBRATION AND UNCERTAINTY IN SCENARIO ANALYSIS

    EPA Science Inventory

    A systematic analysis of model performance during simulations based on

    observed land-cover/use change is used to quantify error associated with water-yield

    simulations for a series of known landscape conditions over a 24-year period with the

    goal of evaluatin...

  18. Evidence of Nanoflare Heating in Coronal Loops Observed with Hinode-XRT and SDO-AIA

    NASA Technical Reports Server (NTRS)

    Lopez-Fuentes, M. C.; Klimchuk, James

    2013-01-01

    We study a series of coronal loop lightcurves from X-ray and EUV observations. In search of signatures of nanoflare heating, we analyze the statistical properties of the observed lightcurves and compare them with synthetic cases obtained with a 2D cellular-automaton model based on nanoflare heating driven by photospheric motions. Our analysis shows that the observed and the model lightcurves have similar statistical properties. The asymmetries observed in the distribution of the intensity fluctuations indicate the possible presence of widespread cooling processes in sub-resolution magnetic strands.
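
    One statistic commonly used in such lightcurve comparisons is the skewness of the intensity-fluctuation distribution. A toy impulsive-heating/exponential-cooling lightcurve (invented, not the authors' cellular-automaton model) illustrates how asymmetric heating-cooling cycles produce skewed fluctuations:

```python
import numpy as np

# Sawtooth lightcurve: instantaneous heating events followed by slow
# exponential cooling. Time units and decay scale are arbitrary.
t = np.linspace(0.0, 10.0, 2001)
lightcurve = np.exp(-(t % 1.0) / 0.3)

def skewness(x):
    # third standardized central moment of a sample
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s**3)

asym = skewness(lightcurve - lightcurve.mean())   # positive for this shape
```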

  19. Developing Kindergarten Children's Mathematical Abilities and Character by Using Area Instruction Model

    ERIC Educational Resources Information Center

    Mardiana, Dinny; Mudrikah, Achmad; Amna, Nurjanah

    2016-01-01

    This study aimed to describe the application of Area Instruction Model on one of the state kindergarten in Bandung city. The study used a qualitative approach with descriptive qualitative design. Data was obtained through interviews, observation, and documentation. The validity of the analysis was guaranteed through perseverance observation and…

  20. An Ethnographic Case Study of the Administrative Organization, Processes, and Behavior in a Model Comprehensive High School.

    ERIC Educational Resources Information Center

    Zimman, Richard N.

    Using ethnographic case study methodology (involving open-ended interviews, participant observation, and document analysis) theories of administrative organization, processes, and behavior were tested during a three-week observation of a model comprehensive (experimental) high school. Although the study is limited in its general application, it…

  1. Prospect of Using Numerical Dynamo Model for Prediction of Geomagnetic Secular Variation

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Tangborn, Andrew

    2003-01-01

    Modeling of the Earth's core has reached a level of maturity where the incorporation of observations into the simulations through data assimilation has become feasible. Data assimilation is a method by which observations of a system are combined with a model output (or forecast) to obtain a best guess of the state of the system, called the analysis. The analysis is then used as an initial condition for the next forecast. Through assimilation, we shall be able not only to partially predict the secular variation of the core field, but also to use observations to further our understanding of dynamical states in the Earth's core. One of the first steps in the development of an assimilation system is a comparison between the observations and the model solution. The highly turbulent nature of core dynamics, along with the absence of any regular external forcing and constraint (which occurs in atmospheric dynamics, for example), means that short time comparisons (approx. 1000 years) cannot be made between model and observations. In order to make sensible comparisons, a direct insertion assimilation method has been implemented. In this approach, magnetic field observations at the Earth's surface have been substituted into the numerical model, such that the ratio of the multipole components to the dipole component from observation is adjusted at the core-mantle boundary and extended to the interior of the core, while the total magnetic energy remains unchanged. This adjusted magnetic field is then used as the initial field for a new simulation. In this way, a time tugged simulation is created which can then be compared directly with observations. We present numerical solutions with and without data insertion and discuss their implications for the development of a more rigorous assimilation system.

  2. Variational data assimilation for tropospheric chemistry modeling

    NASA Astrophysics Data System (ADS)

    Elbern, Hendrik; Schmidt, Hauke; Ebel, Adolf

    1997-07-01

    The method of variational adjoint data assimilation has been applied to assimilate chemistry observations into a comprehensive tropospheric gas-phase model. The rationale of this method is to find the correct initial values for a subsequent atmospheric chemistry model run when observations scattered in time are available. The variational adjoint technique is regarded as a promising tool for future advanced meteorological forecasting. The stimulating experience gained with the application of four-dimensional variational data assimilation in this research area has motivated the attempt to apply the technique to air quality modeling and analysis of the chemical state of the atmosphere. The present study describes the development and application of the adjoint of the second-generation regional acid deposition model gas-phase mechanism, which is used in the European air pollution dispersion model system. Performance results of the assimilation scheme using both model-generated data and real observations are presented for tropospheric conditions. In the former case it is demonstrated that time series of only a few, or even a single, measured key species convey sufficient information to considerably improve the analysis of unobserved species that are directly coupled with the observed species. In the latter case a Lagrangian approach is adopted, where trajectory calculations between two comprehensively instrumented measurement sites are carried out. The method allows us to analyze initial data for air pollution modeling even when only sparse observations are available. Besides remarkable improvements of the model performance through properly analyzed initial concentrations, it is shown that the adjoint algorithm makes it feasible to estimate the sensitivity of ozone concentrations to their precursors.
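
    The adjoint machinery can be illustrated on a toy problem: recovering the initial value of a scalar decay model from observations scattered in time, with the cost-function gradient computed by a backward (adjoint) sweep. This is a minimal sketch of the variational idea, not the chemistry-mechanism adjoint of the study.

```python
# Toy 4D-Var: model x_{k+1} = a * x_k, observations at a few times,
# gradient of the quadratic misfit obtained by an adjoint sweep.
a, nsteps = 0.95, 40
obs_times = [5, 15, 30]
x_true0 = 2.0
truth = [x_true0 * a**k for k in range(nsteps + 1)]
obs = {k: truth[k] for k in obs_times}    # perfect synthetic observations

def cost_and_grad(x0):
    x = [x0 * a**k for k in range(nsteps + 1)]            # forward run
    cost = 0.5 * sum((x[k] - obs[k]) ** 2 for k in obs_times)
    lam = 0.0                                             # adjoint variable
    for k in range(nsteps, 0, -1):                        # backward sweep
        if k in obs:
            lam += x[k] - obs[k]         # inject observation misfit
        lam *= a                          # adjoint of the linear model step
    return cost, lam                      # lam equals dJ/dx0

x0 = 0.0                                  # first guess for the initial value
for _ in range(200):
    c, g = cost_and_grad(x0)
    x0 -= 0.5 * g                         # steepest-descent update
```

    For linear dynamics the adjoint gradient is exact, and the descent recovers the true initial condition; real systems replace the scalar recursion with the full (nonlinear) chemistry-transport model and its adjoint.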

  3. Analysis of thermohydraulic explosion energetics

    NASA Astrophysics Data System (ADS)

    Büttner, Ralf; Zimanowski, Bernd; Mohrholz, Chris-Oliver; Kümmel, Reiner

    2005-08-01

    Thermohydraulic explosions, caused by direct contact of hot liquids with cold water, represent a major hazard in volcanism and in technical processes. Based on experimental observations and nonequilibrium thermodynamics, we propose a model of heat transfer from the hot liquid to the water during the thermohydraulic fragmentation process. The model was validated using the experimentally observed thermal energy release. From a database of more than 1000 experimental runs, conducted during the last 20 years, a standardized entrapment experiment was defined, in which a conversion of 1 MJ/kg of thermal energy to kinetic energy within 700 μs is observed. The results of the model calculations are in good agreement with this value. Furthermore, the model was found to be robust with respect to the material properties of the hot melt, which is also observed in experiments using different melt compositions. As the model parameters can easily be obtained from the size and shape properties of the products of thermohydraulic explosions and from the material properties of the hot melt, we believe that this method will not only allow a better analysis of volcanic eruptions or technical accidents, but also significantly improve the quality of hazard assessment and mitigation.

  4. Extending TOPS: Ontology-driven Anomaly Detection and Analysis System

    NASA Astrophysics Data System (ADS)

    Votava, P.; Nemani, R. R.; Michaelis, A.

    2010-12-01

    Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending the Terrestrial Observation and Prediction System (TOPS) to include a capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. In order to best capture the knowledge about data hierarchies, Earth science models and implied dependencies between anomalies and occurrences of observable events such as urbanization, deforestation, or fires, we have developed an ontology to serve as a knowledge base. We can query the knowledge base and answer questions about dataset compatibilities, similarities and dependencies so that we can, for example, automatically analyze similar datasets in order to verify a given anomaly occurrence in multiple data sources. We are further extending the system to go beyond anomaly detection towards reasoning about possible causes of anomalies that are also encoded in the knowledge base as either learned or implied knowledge. This enables us to scale up the analysis by eliminating a large number of anomalies early on during the processing by either failure to verify them from other sources, or matching them directly with other observable events without having to perform an extensive and time-consuming exploration and analysis. The knowledge is captured using OWL ontology language, where connections are defined in a schema that is later extended by including specific instances of datasets and models. The information is stored using Sesame server and is accessible through both Java API and web services using SeRQL and SPARQL query languages. Inference is provided using OWLIM component integrated with Sesame.

  5. Variation of atmospheric carbon monoxide over the Arctic Ocean during summer 2012

    NASA Astrophysics Data System (ADS)

    Park, Keyhong; Siek Rhee, Tae; Emmons, Louisa

    2014-05-01

    Atmospheric carbon monoxide (CO) plays an important role in ozone-related chemistry in the troposphere, especially under low-NOx conditions such as over the open ocean. During summer 2012, we performed continuous high-resolution (0.1 Hz) shipboard measurements of atmospheric CO over the Arctic Ocean. We also simulated the observations using a 3-D global chemical transport model (the Model for OZone And Related chemical Tracers-4; MOZART-4) for further analysis of the observed results. In the model, tags for each source and emission region of CO are applied, which enables us to delineate the source composition of the observations. Along with the observed variation of CO concentration during the research cruise, we present a detailed analysis of the variation of source components and the change in regional contributions. We found a large (~80 ppbv) variation of CO concentration over the Arctic Ocean, which is mostly driven by variations in biomass burning activity. The contribution of anthropogenic emissions is limited over the Arctic Ocean, although northeast Asian anthropogenic emission is the dominant component of transported anthropogenic CO. Our analysis also shows that, near the Bering Strait, Europe is the main emission region for anthropogenic CO.

  6. Special relativity from observer's mathematics point of view

    NASA Astrophysics Data System (ADS)

    Khots, Boris; Khots, Dmitriy

    2015-09-01

    When we create mathematical models for quantum theory of light we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of "infinitely small" and "infinitely large" quantities in arithmetic and the use of Newton - Cauchy definitions of a limit and derivative in analysis. We believe that is where the main problem lies in contemporary study of nature. We have introduced a new concept of Observer's Mathematics (see www.mathrelativity.com). Observer's Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of continuum, but locally coincide with the standard fields. We use Einstein special relativity principles and get the analogue of classical Lorentz transformation. This work considers this transformation from Observer's Mathematics point of view.

  7. Chemistry Teachers' Emerging Expertise in Inquiry Teaching: The Effect of a Professional Development Model on Beliefs and Practice

    NASA Astrophysics Data System (ADS)

    Rushton, Gregory T.; Lotter, Christine; Singer, Jonathan

    2011-02-01

    This study investigates the beliefs and practices of seven high school chemistry teachers as a result of their participation in a year-long inquiry professional development (PD) project. An analysis of oral interviews, written reflections, and in-class observations were used to determine the extent to which the PD affected the teachers' beliefs and practice. The data indicated that the teachers developed more complete conceptions of classroom inquiry, valued a "phenomena first" approach to scientific investigations, and viewed inquiry approaches as helpful for facilitating improved student thinking. Analysis of classroom observations with the Reformed Teaching Observation Protocol indicated that features of the PD were observed in the teachers' practice during the academic year follow-up. Implications for effective science teacher professional development models are discussed.

  8. Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    NASA Technical Reports Server (NTRS)

    Kesel, P. G.

    1977-01-01

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data might be enhanced during the conduct of (future) sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on analysis; and (5) devising and implementing numerous practical solutions to analysis problems, generally.

  9. Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Xie, S.; Zhang, Y.

    2011-12-01

    The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), which are the two key modeling frameworks widely used to link field data to climate model developments. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign conducted during the period 22 April 2011 to 6 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement Program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy so that the final analysis data is dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems of various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.
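
    For a single linear budget constraint, the smallest-possible adjustment that constrained variational analysis makes has a simple closed form. The toy profile below (all numbers invented) adjusts a moisture profile so that its column integral matches an "observed" budget:

```python
import numpy as np

# Minimum-norm adjustment of a profile q subject to one linear constraint
# a . q = b: the correction is proportional to the constraint weights a.
rng = np.random.default_rng(4)
nlev = 30
dp = np.full(nlev, 1000.0 / nlev)     # layer-thickness weights (hPa)
q = rng.uniform(1.0, 10.0, nlev)      # sounding moisture profile (g/kg)

budget = 6000.0                       # "observed" column integral (g/kg hPa)
a = dp                                # constraint: sum(dp * q) = budget
q_adj = q + a * (budget - a @ q) / (a @ a)   # closed-form minimal adjustment
```

    Because the weights are uniform here, the correction is a constant shift at every level; with nonuniform weights (or multiple budgets, as in the full method) the correction is distributed in proportion to them.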

  10. Shuttle user analysis (study 2.2). Volume 3: Business risk and value of operations in space (BRAVO). Part 5: Analysis of GSFC Earth Observation Satellite (EOS) system mission model using BRAVO techniques

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Cost comparisons were made between three modes of operation (expend, ground refurbish, and space resupply) for the Earth Observation System (EOS-B) to furnish data to NASA on alternative ways to use the shuttle/EOS. Results of the analysis are presented in tabular form.

  11. Statistics analysis of distribution of Bradysia Ocellaris insect on Oyster mushroom cultivation

    NASA Astrophysics Data System (ADS)

    Sari, Kurnia Novita; Amelia, Ririn

    2015-12-01

    The Bradysia ocellaris insect is a pest in Oyster mushroom cultivation. The distribution of Bradysia ocellaris has a spatial pattern that can be observed every week under several assumptions, such as independence, normality, and homogeneity. We can analyze the number of Bradysia ocellaris for each week through descriptive analysis. Next, the distribution pattern of Bradysia ocellaris is described by the semivariogram, which is a plot of the variance of differences between pairs of observations separated by a distance d. The semivariogram model that best suits the Bradysia ocellaris data is the spherical isotropic model.
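
    An empirical semivariogram and the spherical model can be sketched as follows; the counts are synthetic, not the observed insect data.

```python
import numpy as np

# Empirical semivariogram for an evenly spaced 1-D series, plus the
# spherical model: gamma(d) = sill*(1.5 d/r - 0.5 (d/r)^3) for d <= r,
# and gamma(d) = sill beyond the range r.
def empirical_semivariogram(z, max_lag):
    gam = []
    for d in range(1, max_lag + 1):
        diffs = z[d:] - z[:-d]                  # pairs separated by lag d
        gam.append(0.5 * np.mean(diffs ** 2))
    return np.array(gam)

def spherical(d, sill, rang):
    d = np.asarray(d, dtype=float)
    g = sill * (1.5 * d / rang - 0.5 * (d / rang) ** 3)
    return np.where(d <= rang, g, sill)

z = np.array([3.0, 5.0, 4.0, 6.0, 9.0, 8.0,
              11.0, 10.0, 12.0, 14.0, 13.0, 15.0])   # synthetic weekly counts
gam = empirical_semivariogram(z, 4)
```

    Fitting the spherical model to the empirical points (e.g. by least squares over sill and range) gives the parameters used to characterize the spatial dependence.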

  12. Analysis models for the estimation of oceanic fields

    NASA Technical Reports Server (NTRS)

    Carter, E. F.; Robinson, A. R.

    1987-01-01

    A general model for statistically optimal estimates is presented for dealing with scalar, vector and multivariate datasets. The method deals with anisotropic fields and treats space and time dependence equivalently. Problems addressed include the analysis, or the production of synoptic time series of regularly gridded fields from irregular and gappy datasets, and the estimate of fields by compositing observations from several different instruments and sampling schemes. Technical issues are discussed, including the convergence of statistical estimates, the choice of representation of the correlations, the influential domain of an observation, and the efficiency of numerical computations.
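
    The statistically optimal (Gauss-Markov) estimate underlying such analysis models can be sketched for one grid point and a handful of irregular observations; the Gaussian covariance form and all numbers are assumptions for illustration.

```python
import numpy as np

# Optimal interpolation: estimate the field at a target point as a
# covariance-weighted combination of nearby irregular observations.
def covariance(x1, x2, variance=1.0, scale=2.0):
    d = np.linalg.norm(np.asarray(x1) - np.asarray(x2))
    return variance * np.exp(-((d / scale) ** 2))

obs_pos = [(0.0, 0.0), (1.0, 0.5), (3.0, 2.0)]
obs_val = np.array([1.2, 1.0, 0.4])
noise = 0.1                              # observation error variance

target = (0.5, 0.5)                      # grid point to estimate
C = np.array([[covariance(pi, pj) for pj in obs_pos] for pi in obs_pos])
C += noise * np.eye(len(obs_pos))        # observation noise on the diagonal
c = np.array([covariance(target, p) for p in obs_pos])

weights = np.linalg.solve(C, c)          # optimal interpolation weights
estimate = float(weights @ obs_val)
error_var = 1.0 - float(c @ weights)     # analysis error variance
```

    The analysis error variance falls as observations cluster near the target, which quantifies the "influential domain of an observation" mentioned in the abstract.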

  13. Possible Alternatives to the Supermassive Black Hole at the Galactic Center

    NASA Astrophysics Data System (ADS)

    Zakharov, A. F.

    2015-12-01

    There are currently two basic observational techniques for investigating the gravitational potential at the Galactic Center: (a) monitoring the orbits of bright stars near the Galactic Center to reconstruct the gravitational potential; and (b) measuring the size and shape of the shadow around the black hole, which offers an alternative way to evaluate black hole parameters in the mm band with VLBI techniques. At the moment, a small-relativistic-correction approach can be used for stellar orbit analysis (although in the future this approximation will no longer be precise enough, given the enormous progress of observational facilities), while the analysis of the smallest structures in VLBI observations genuinely requires a strong gravitational field approximation. We discuss results of observations, their conventional interpretations, tensions between observations and models, and possible hints of new physics in the observational data. We also discuss whether a Schwarzschild metric is adequate for data interpretation or whether more exotic models, such as Reissner-Nordström or Schwarzschild-de Sitter metrics, provide better fits.

  14. Air Quality Forecasts Using the NASA GEOS Model

    NASA Technical Reports Server (NTRS)

    Keller, Christoph A.; Knowland, K. Emma; Nielsen, Jon E.; Orbe, Clara; Ott, Lesley; Pawson, Steven; Saunders, Emily; Duncan, Bryan; Follette-Cook, Melanie; Liu, Junhua; hide

    2018-01-01

    We provide an introduction to a new high-resolution (0.25 degree) global composition forecast produced by NASA's Global Modeling and Assimilation Office. The NASA Goddard Earth Observing System version 5 (GEOS-5) model has been expanded to provide global near-real-time forecasts of atmospheric composition at a horizontal resolution of 0.25 degrees (25 km). Previously, this combination of detailed chemistry and resolution was only provided by regional models. This system combines the operational GEOS-5 weather forecasting model with the state-of-the-science GEOS-Chem chemistry module (version 11) to provide detailed chemical analysis of a wide range of air pollutants such as ozone, carbon monoxide, nitrogen oxides, and fine particulate matter (PM2.5). These are currently the highest-resolution publicly available global composition forecasts. Evaluation and validation of modeled trace gases and aerosols against surface and satellite observations will be presented for constituents relevant to health-based air quality standards. Comparisons of modeled trace gases and aerosols against satellite observations show that the model produces realistic concentrations of atmospheric constituents in the free troposphere. Model comparisons against surface observations highlight the model's capability to capture the diurnal variability of air pollutants under a variety of meteorological conditions. The GEOS-5 composition forecasting system offers a new tool for scientists and the public health community, and is being developed jointly with several government and non-profit partners. Potential applications include air quality warnings, flight campaign planning, and exposure studies using the archived analysis fields.

  15. VLA observations of radio sources in interacting galaxy pairs in poor clusters

    NASA Technical Reports Server (NTRS)

    Batuski, David J.; Hanisch, Robert J.; Burns, Jack O.

    1992-01-01

    Observations of 16 radio sources in interacting galaxies in 14 poor clusters were made using the Very Large Array in the B configuration at lambda of 6 and 2 cm. These sources had been unresolved in earlier observations at lambda of 21 cm, and were chosen as a sample to determine which of three models for radio source formation actually pertains in interacting galaxies. From the analysis of this sample, the starburst model appears most successful, but the 'central monster' model could pertain in some cases.

  16. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid-scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and that it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.

  17. An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements

    NASA Astrophysics Data System (ADS)

    Zhai, Zhongxu; Blanton, Michael; Slosar, Anže; Tinker, Jeremy

    2017-12-01

    We compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.

  18. Quantitative investigation of inappropriate regression model construction and the importance of medical statistics experts in observational medical research: a cross-sectional study.

    PubMed

    Nojima, Masanori; Tokunaga, Mutsumi; Nagamura, Fumitaka

    2018-05-05

    To investigate under what circumstances inappropriate use of 'multivariate analysis' is likely to occur, and to identify the population that needs more support with medical statistics, we investigated the frequency of inappropriate regression model construction in multivariate analyses, and related factors, in observational medical research publications. The inappropriate algorithm of entering only variables that were significant in univariate analysis was estimated to occur in 6.4% of publications (95% CI 4.8% to 8.5%): in 1.1% of publications with a medical statistics expert (hereafter 'expert') as first author, in 3.5% with an expert as coauthor, and in 12.2% when no expert was involved. Among publications with 50 or fewer cases and no expert involvement, the inappropriate algorithm was observed in a high proportion, 20.2%. The OR of expert involvement for this outcome was 0.28 (95% CI 0.15 to 0.53). A further analysis showed that expert involvement and the implementation of unfavourable multivariate analyses are also associated at the national level (R = -0.652). Based on the results of this study, the benefit of participation of medical statistics experts is obvious; experts should be involved for proper confounding adjustment and interpretation of statistical models.
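The "inappropriate algorithm" this study counts, keeping only predictors that pass a univariate screen, can fail badly in the presence of suppressor variables. A constructed toy example (illustrative data, not from the study; NumPy assumed available) makes the point:

```python
import numpy as np

def corr(u, v):
    """Pearson correlation of two 1-D arrays."""
    u, v = u - u.mean(), v - v.mean()
    return float(u @ v / np.sqrt((u @ u) * (v @ v)))

# Constructed data: b is a suppressor variable. It is exactly
# uncorrelated with y on its own, yet y = a + b depends on it jointly.
b = np.array([1., -1., 1., -1., 1., -1.])
c = np.array([1., 2., 3., 1., 2., 3.])   # a "signal" orthogonal to b
a = c - b
y = a + b                                 # equals c: both predictors matter

# Univariate screening would discard b (zero correlation with y) ...
X_full = np.column_stack([np.ones_like(y), a, b])
X_screened = np.column_stack([np.ones_like(y), a])
res_full = y - X_full @ np.linalg.lstsq(X_full, y, rcond=None)[0]
res_scr = y - X_screened @ np.linalg.lstsq(X_screened, y, rcond=None)[0]
# ... yet the full model fits perfectly while the screened model does not.
```

This is one concrete reason variable selection should not be driven by univariate significance alone.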

  19. Role of observation of live cases done by Japanese experts in the acquisition of ESD skills by a western endoscopist.

    PubMed

    Draganov, Peter V; Chang, Myron; Coman, Roxana M; Wagh, Mihir S; An, Qi; Gotoda, Takuji

    2014-04-28

    To evaluate the role of observation of experts performing endoscopic submucosal dissection (ESD) in the acquisition of ESD skills. This prospective study is documenting the learning curve of one Western endoscopist. The study consisted of three periods. In the first period (pre-observation), the trainee performed ESDs in animal models in his home institution in the United States. The second period (observation) consisted of visit to Japan and observation of live ESD cases done by experts. The observation of cases occurred over a 5-wk period. During the third period (post-observation), the trainee performed ESD in animal models in a similar fashion as in the first period. Three animal models were used: live 40-50 kg Yorkshire pig, explanted pig stomach model, and explanted pig rectum model. The outcomes from the ESDs done in the animal models before and after observation of live human cases (main study intervention) were compared. Statistical analysis of the data included: Fisher's exact test to compare distributions of a categorical variable, Wilcoxon rank sum test to compare distributions of a continuous variable between the two groups (pre-observation and post-observation), and Kruskal-Wallis test to evaluate the impact of lesion location and type of model (ex-vivo vs live pig) on lesion removal time. The trainee performed 38 ESDs in animal model (29 pre-observation/9 post-observation). The removal times post-observation were significantly shorter than those pre-observation (32.7 ± 15.0 min vs 63.5 ± 9.8 min, P < 0.001). To minimize the impact of improving physician skill, the 9 lesions post-observation were compared to the last 9 lesions pre-observation and the removal times remained significantly shorter (32.7 ± 15.0 min vs 61.0 ± 7.4 min, P = 0.0011). Regression analysis showed that ESD observation significantly reduced removal time when controlling for the sequence of lesion removal (P = 0.025). 
Furthermore, a trend toward fewer failures to remove lesions and fewer complications was noted after the observation period. This study did not find a significant difference in the time needed to remove lesions in the different animal models, a finding that could have important implications for the design of training programs, given the substantial difference in cost between live animal and explanted organ models. The main limitation of this study is that it reflects the experience of a single endoscopist. Observation of experts performing ESD over a short period of time can contribute significantly to the acquisition of ESD skills.

  20. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid-scale transport of trace gases. Analysis of the ensemble spread and the scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.

  1. An observational model for biomechanical assessment of sprint kayaking technique.

    PubMed

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary across the kayaking literature, with inconsistencies that are not conducive to the advancement of applied biomechanics services or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using the key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how the positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of the positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views and were therefore suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  2. Analyzing thresholds and efficiency with hierarchical Bayesian logistic regression.

    PubMed

    Houpt, Joseph W; Bittner, Jennifer L

    2018-07-01

    Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences.
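The efficiency construct referred to above is commonly computed as the squared ratio of human to ideal sensitivity. A minimal sketch under a standard signal-detection (yes/no) assumption, not the hierarchical Bayesian model of the paper:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity from hit and false-alarm rates:
    d' = z(H) - z(F), with z the inverse standard-normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def efficiency(human_dprime, ideal_dprime):
    """Efficiency as the squared ratio of human to ideal sensitivity."""
    return (human_dprime / ideal_dprime) ** 2
```

Because efficiency normalizes human performance by the information actually available (the ideal observer), it can be compared across conditions with different stimuli or task demands.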

  3. Wavelets, non-linearity and turbulence in fusion plasmas

    NASA Astrophysics Data System (ADS)

    van Milligen, B. Ph.

    Contents: Introduction; Linear spectral analysis tools; Wavelet analysis; Wavelet spectra and coherence; Joint wavelet phase-frequency spectra; Non-linear spectral analysis tools; Wavelet bispectra and bicoherence; Interpretation of the bicoherence; Analysis of computer-generated data; Coupled van der Pol oscillators; A large eddy simulation model for two-fluid plasma turbulence; A long wavelength plasma drift wave model; Analysis of plasma edge turbulence from Langmuir probe data; Radial coherence observed on the TJ-IU torsatron; Bicoherence profile at the L/H transition on CCT; Conclusions.

  4. Analysis of a general circulation model product. I - Frontal systems in the Brazil/Malvinas and Kuroshio/Oyashio regions

    NASA Technical Reports Server (NTRS)

    Garzoli, Silvia L.; Garraffo, Zulema; Podesta, Guillermo; Brown, Otis

    1992-01-01

    The general circulation model (GCM) of Semtner and Chervin (1992) is tested by comparing the fields produced by this model with available observations in two western boundary current regions, the Brazil/Malvinas and Kuroshio/Oyashio confluences. The two sets of data used are sea surface temperatures from satellite observations and the temperature field product from the GCM at levels 1 (12.5 m), 2 (37.5 m), and 6 (160 m). It is shown that the model reproduces intense thermal fronts at the sea surface and in the upper layers (where they are induced by the internal dynamics of the model). The locations of the fronts are reproduced by the model to within 4 to 5 deg of the observations. However, the variability of these fronts was found to be less pronounced in the model than in the observations.

  5. Assimilation of TOPEX Sea Level Measurements with a Reduced-Gravity, Shallow Water Model of the Tropical Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Fukumori, Ichiro

    1995-01-01

    Sea surface height variability measured by TOPEX is analyzed in the tropical Pacific Ocean by way of assimilation into a wind-driven, reduced-gravity, shallow water model using an approximate Kalman filter and smoother. The analysis results in an optimal fit of the dynamic model to the observations, providing a dynamically consistent interpolation of sea level and estimation of the circulation. Nearly 80% of the expected signal variance is accounted for by the model within 20 deg of the equator, and estimation uncertainty is substantially reduced by the voluminous observations. Notable features resolved by the analysis include seasonal changes associated with the North Equatorial Countercurrent and equatorial Kelvin and Rossby waves. Significant discrepancies are also found between the estimate and TOPEX measurements, especially near the eastern boundary. Improvements in the estimate made by the assimilation are validated by comparisons with independent tide gauge and current meter observations. The employed filter and smoother are based on approximately computed estimation error covariance matrices, utilizing a spatial transformation and an asymptotic approximation. The analysis demonstrates the practical utility of a quasi-optimal filter and smoother.
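The filtering step underlying this kind of assimilation can be illustrated with a scalar Kalman filter. This is a generic textbook sketch, not the study's approximate filter/smoother, and the parameter names are illustrative:

```python
def kalman_step(x, P, y, Q, R, a=1.0, h=1.0):
    """One predict/update cycle of a scalar Kalman filter.
    x, P : prior state estimate and its error variance
    y    : observation;  Q, R : model and observation error variances
    a, h : state-transition and observation coefficients."""
    # forecast (predict)
    xf = a * x
    Pf = a * P * a + Q
    # analysis (update)
    K = Pf * h / (h * Pf * h + R)   # Kalman gain
    xa = xf + K * (y - h * xf)      # correct forecast toward the observation
    Pa = (1 - K * h) * Pf           # reduced analysis error variance
    return xa, Pa
```

With equal forecast and observation error variances the gain is 0.5, so the analysis splits the difference between forecast and observation, which is the essence of the "optimal fit" language in the abstract.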

  6. A nonparametric analysis of plot basal area growth using tree based models

    Treesearch

    G. L. Gadbury; H. K. Iyer; H. T. Schreuder; C. Y. Ueng

    1997-01-01

    Tree based statistical models can be used to investigate data structure and predict future observations. We used nonparametric and nonlinear models to reexamine the data sets on tree growth used by Bechtold et al. (1991) and Ruark et al. (1991). The growth data were collected by Forest Inventory and Analysis (FIA) teams from 1962 to 1972 (4th cycle) and 1972 to 1982 (...

  7. Separate-channel analysis of two-channel microarrays: recovering inter-spot information.

    PubMed

    Smyth, Gordon K; Altman, Naomi S

    2013-05-26

    Two-channel (or two-color) microarrays are cost-effective platforms for comparative analysis of gene expression. They are traditionally analysed in terms of the log-ratios (M-values) of the two channel intensities at each spot, but this analysis does not use all the information available in the separate channel observations. Mixed models have been proposed to analyse intensities from the two channels as separate observations, but such models can be complex to use and the gain in efficiency over the log-ratio analysis is difficult to quantify. Mixed models yield test statistics for which the null distributions can be specified only approximately, and some approaches do not borrow strength between genes. This article reformulates the mixed model to clarify the relationship with the traditional log-ratio analysis, to facilitate information borrowing between genes, and to obtain an exact distributional theory for the resulting test statistics. The mixed model is transformed to operate on the M-values and A-values (average log-expression for each spot) instead of on the log-expression values. The log-ratio analysis is shown to ignore information contained in the A-values. The relative efficiency of the log-ratio analysis is shown to depend on the size of the intra-spot correlation. A new separate-channel analysis method is proposed that assumes a constant intra-spot correlation coefficient across all genes. This approach permits the mixed model to be transformed into an ordinary linear model, allowing the data analysis to use a well-understood empirical Bayes analysis pipeline for linear modeling of microarray data. This yields statistically powerful test statistics that have an exact distributional theory. The log-ratio, mixed model and common correlation methods are compared using three case studies. The results show that separate-channel analyses that borrow strength between genes are more powerful than log-ratio analyses.
The common correlation analysis is the most powerful of all. The common correlation method proposed in this article for separate-channel analysis of two-channel microarray data is no more difficult to apply in practice than the traditional log-ratio analysis. It provides an intuitive and powerful means to conduct analyses and make comparisons that might otherwise not be possible.
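The M-values and A-values referred to above are simple transforms of the two channel intensities at each spot; a minimal sketch (variable names are illustrative):

```python
import math

def ma_values(red, green):
    """Per-spot M (log2 ratio) and A (average log2 intensity) for
    two-channel intensities; inputs must be positive."""
    M = [math.log2(r / g) for r, g in zip(red, green)]
    A = [0.5 * math.log2(r * g) for r, g in zip(red, green)]
    return M, A
```

The traditional analysis keeps only M; the article's point is that A carries additional between-channel information that M alone discards.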

  8. Some unexamined aspects of analysis of covariance in pretest-posttest studies.

    PubMed

    Ganju, Jitendra

    2004-09-01

    The use of an analysis of covariance (ANCOVA) model in a pretest-posttest setting deserves to be studied separately from its use in other (non-pretest-posttest) settings. For pretest-posttest studies, the following points are made in this article: (a) If the familiar change from baseline model accurately describes the data-generating mechanism for a randomized study, then it is impossible for unequal slopes to exist. Conversely, if unequal slopes exist, then the change from baseline model is inappropriate as a data-generating mechanism; an alternative data-generating model should be identified and the validity of the ANCOVA model should be demonstrated. (b) Under the usual assumptions of equal pretest and posttest within-subject error variances, the ratio of the standard error of a treatment contrast from a change from baseline analysis to that from ANCOVA is less than √2. (c) For an observational study it is possible for unequal slopes to exist even if the change from baseline model describes the data-generating mechanism. (d) Adjusting for the pretest variable in observational studies may actually introduce bias where none previously existed.
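Point (b) can be checked directly: with equal pre/post error variances and pre-post correlation rho, the SE ratio reduces to sqrt(2/(1+rho)), which stays below √2 whenever rho is positive. A small sketch of that arithmetic (a standard textbook identity, stated here as an illustration of the abstract's claim):

```python
import math

def se_ratio(rho):
    """Ratio of the SE of a treatment contrast from a change-from-baseline
    analysis to that from ANCOVA, assuming equal pre/post within-subject
    error variances (set to 1) and pre-post correlation rho."""
    var_change = 2 * (1 - rho)   # variance of the post - pre change score
    var_ancova = 1 - rho ** 2    # ANCOVA residual variance
    return math.sqrt(var_change / var_ancova)  # = sqrt(2 / (1 + rho))
```

At rho = 0 the ratio equals √2 exactly and it falls toward 1 as rho approaches 1, which is why ANCOVA never loses and usually gains precision in this setting.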

  9. Plasmaspheric Erosion via Plasmasphere Coupling to Ring Current Plasmas: EUV Observations and Modeling

    NASA Technical Reports Server (NTRS)

    Adrian, M. L.; Gallagher, D. L.; Khazanov, G. V.; Chsang, S. W.; Liemohn, M. W.; Perez, J. D.; Green, J. L.; Sandel, B. R.; Mitchell, D. G.; Mende, S. B.; hide

    2002-01-01

    During a geomagnetic storm on 24 May 2000, the IMAGE Extreme Ultraviolet (EUV) camera observed a plasmaspheric density trough in the evening sector at L-values inside the plasmapause. Forward modeling of this feature has indicated that plasmaspheric densities beyond the outer wall of the trough are well below model expectations. This diminished plasma condition suggests the presence of an erosion process due to the interaction of the plasmasphere with ring current plasmas. We present an overview of EUV, energetic neutral atom (ENA), and Far Ultraviolet (FUV) camera observations associated with the plasmaspheric density trough of 24 May 2000, as well as forward modeling evidence of the existence of a plasmaspheric erosion process during this period. FUV proton aurora image analysis, convolution of ENA observations, and ring current modeling are then presented in an effort to associate the observed erosion with coupling between the plasmasphere and ring-current plasmas.

  10. Comparison of Recent Modeled and Observed Trends in Total Column Ozone

    NASA Technical Reports Server (NTRS)

    Andersen, S. B.; Weatherhead, E. C.; Stevermer, A.; Austin, J.; Bruehl, C.; Fleming, E. L.; deGrandpre, J.; Grewe, V.; Isaksen, I.; Pitari, G.; hide

    2006-01-01

    We present a comparison of trends in total column ozone from 10 two-dimensional and 4 three-dimensional models and solar backscatter ultraviolet-2 (SBUV/2) satellite observations from the period 1979-2003. Trends for the past (1979-2000), the recent 7 years (1996-2003), and the future (2000-2050) are compared. We have analyzed the data using both simple linear trends and linear trends derived with a hockey stick method including a turnaround point in 1996. If the last 7 years, 1996-2003, are analyzed in isolation, the SBUV/2 observations show no increase in ozone, and most of the models predict continued depletion, although at a lesser rate. In sharp contrast to this, the recent data show positive trends for the Northern and the Southern Hemispheres if the hockey stick method with a turnaround point in 1996 is employed for the models and observations. The analysis shows that the observed positive trends in both hemispheres in the recent 7-year period are much larger than what is predicted by the models. The trends derived with the hockey stick method are very dependent on the values just before the turnaround point. The analysis of the recent data therefore depends greatly on these years being representative of the overall trend. Most models underestimate the past trends at middle and high latitudes. This is particularly pronounced in the Northern Hemisphere. Quantitatively, there is much disagreement among the models concerning future trends. However, the models agree that future trends are expected to be positive and less than half the magnitude of the past downward trends. Examination of the model projections shows that there is virtually no correlation between the past and future trends from the individual models.

  11. Comparison of recent modeled and observed trends in total column ozone

    NASA Astrophysics Data System (ADS)

    Andersen, S. B.; Weatherhead, E. C.; Stevermer, A.; Austin, J.; Brühl, C.; Fleming, E. L.; de Grandpré, J.; Grewe, V.; Isaksen, I.; Pitari, G.; Portmann, R. W.; Rognerud, B.; Rosenfield, J. E.; Smyshlyaev, S.; Nagashima, T.; Velders, G. J. M.; Weisenstein, D. K.; Xia, J.

    2006-01-01

    We present a comparison of trends in total column ozone from 10 two-dimensional and 4 three-dimensional models and solar backscatter ultraviolet-2 (SBUV/2) satellite observations from the period 1979-2003. Trends for the past (1979-2000), the recent 7 years (1996-2003), and the future (2000-2050) are compared. We have analyzed the data using both simple linear trends and linear trends derived with a hockey stick method including a turnaround point in 1996. If the last 7 years, 1996-2003, are analyzed in isolation, the SBUV/2 observations show no increase in ozone, and most of the models predict continued depletion, although at a lesser rate. In sharp contrast to this, the recent data show positive trends for the Northern and the Southern Hemispheres if the hockey stick method with a turnaround point in 1996 is employed for the models and observations. The analysis shows that the observed positive trends in both hemispheres in the recent 7-year period are much larger than what is predicted by the models. The trends derived with the hockey stick method are very dependent on the values just before the turnaround point. The analysis of the recent data therefore depends greatly on these years being representative of the overall trend. Most models underestimate the past trends at middle and high latitudes. This is particularly pronounced in the Northern Hemisphere. Quantitatively, there is much disagreement among the models concerning future trends. However, the models agree that future trends are expected to be positive and less than half the magnitude of the past downward trends. Examination of the model projections shows that there is virtually no correlation between the past and future trends from the individual models.
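The "hockey stick" fit described in this record is a continuous piecewise-linear least-squares trend with a fixed turnaround point; a minimal sketch (not the authors' code), assuming NumPy is available:

```python
import numpy as np

def hockey_stick_fit(years, values, turnaround=1996.0):
    """Least-squares fit of a continuous piecewise-linear ('hockey
    stick') trend: one slope before the turnaround year, another after.
    Returns (trend_before, trend_after)."""
    t = np.asarray(years, float)
    y = np.asarray(values, float)
    X = np.column_stack([
        np.ones_like(t),
        t - turnaround,                    # pre-turnaround slope term
        np.maximum(t - turnaround, 0.0),   # extra slope active after 1996
    ])
    b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    return b1, b1 + b2
```

Because the post-turnaround slope is pinned to the fitted value at the turnaround, the result is sensitive to the years just before it, exactly the caveat the abstract raises.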

  12. Spherical harmonic analysis of a model-generated climatology

    NASA Technical Reports Server (NTRS)

    Christidis, Z. D.; Spar, J.

    1981-01-01

    Monthly mean fields of 850 mb temperature (T850), 500 mb geopotential height (G500) and sea level pressure (SLP) were generated in the course of a five-year climate simulation run with a global general circulation model. Both the model-generated climatology and an observed climatology were subjected to spherical harmonic analysis, with separate analyses of the globe and the Northern Hemisphere. Comparison of the dominant harmonics of the two climatologies indicates that more than 95% of the area-weighted spatial variance of G500 and more than 90% of that of T850 are explained by fewer than three components, and that the model adequately simulates these large-scale characteristics. On the other hand, as many as 25 harmonics are needed to explain 95% of the observed variance of SLP, and the model simulation of this field is much less satisfactory. The model climatology is also evaluated in terms of the annual cycles of the dominant harmonics.

  13. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
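The bias analyzed above, input error pulling a least-squares slope toward zero, can be shown with a constructed example (illustrative numbers, not the study's data): when the measurement-error variance equals the signal variance, the fitted slope is exactly halved.

```python
def ols_slope(w, y):
    """Ordinary least-squares slope of y on a single predictor w."""
    n = len(w)
    mw, my = sum(w) / n, sum(y) / n
    num = sum((a - mw) * (b - my) for a, b in zip(w, y))
    den = sum((a - mw) ** 2 for a in w)
    return num / den

x = [-1.0, -1.0, 1.0, 1.0]         # true rainfall (mean 0, variance 1)
u = [-1.0, 1.0, -1.0, 1.0]         # measurement error, orthogonal to x
y = [2.0 * v for v in x]           # true runoff relation: slope 2
w = [a + b for a, b in zip(x, u)]  # observed (error-contaminated) rainfall
# Regressing y on w attenuates the slope by var_x / (var_x + var_u) = 0.5.
```

This attenuation factor (the reliability ratio) is the classical errors-in-variables result underlying the paper's bias analysis.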

  14. Important observations and parameters for a salt water intrusion model

    USGS Publications Warehouse

    Shoemaker, W.B.

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.
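The parameter-correlation argument can be illustrated with a linearized sensitivity (Jacobian) matrix: the parameter covariance is proportional to (JᵀJ)⁻¹, and near-proportional sensitivity columns yield correlations near ±1. A hedged sketch with invented sensitivities, not output from a density-dependent flow simulator:

```python
import numpy as np

def param_correlation(J):
    """Parameter correlation matrix from a sensitivity (Jacobian) matrix J
    with one row per observation and one column per parameter."""
    cov = np.linalg.inv(J.T @ J)          # parameter covariance (unit weights)
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# Hypothetical sensitivities of head observations to hydraulic conductivity
# (column 0) and recharge (column 1): nearly proportional columns, so heads
# alone cannot separate the two parameters.
J_heads = np.array([[1.0, 2.0],
                    [0.5, 1.0],
                    [2.0, 4.01]])
print(param_correlation(J_heads)[0, 1])   # magnitude close to 1

# A flow observation with a distinct sensitivity pattern breaks the
# collinearity and reduces the correlation.
J_with_flow = np.vstack([J_heads, [3.0, -1.0]])
print(param_correlation(J_with_flow)[0, 1])
```

This is the mechanism behind the abstract's conclusion: a flow observation whose sensitivity pattern differs from that of heads and salinities can make otherwise inseparable parameters uniquely estimable.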

  15. Important observations and parameters for a salt water intrusion model.

    PubMed

    Shoemaker, W Barclay

    2004-01-01

    Sensitivity analysis with a density-dependent ground water flow simulator can provide insight and understanding of salt water intrusion calibration problems far beyond what is possible through intuitive analysis alone. Five simple experimental simulations presented here demonstrate this point. Results show that dispersivity is a very important parameter for reproducing a steady-state distribution of hydraulic head, salinity, and flow in the transition zone between fresh water and salt water in a coastal aquifer system. When estimating dispersivity, the following conclusions can be drawn about the data types and locations considered. (1) The "toe" of the transition zone is the most effective location for hydraulic head and salinity observations. (2) Areas near the coastline where submarine ground water discharge occurs are the most effective locations for flow observations. (3) Salinity observations are more effective than hydraulic head observations. (4) The importance of flow observations aligned perpendicular to the shoreline varies dramatically depending on distance seaward from the shoreline. Extreme parameter correlation can prohibit unique estimation of permeability parameters such as hydraulic conductivity and flow parameters such as recharge in a density-dependent ground water flow model when using hydraulic head and salinity observations. Adding flow observations perpendicular to the shoreline in areas where ground water is exchanged with the ocean body can reduce the correlation, potentially resulting in unique estimates of these parameter values. Results are expected to be directly applicable to many complex situations, and have implications for model development whether or not formal optimization methods are used in model calibration.

  16. The footprints of Saharan Air Layer and lightning on the formation of tropical depressions over the eastern Atlantic Ocean

    NASA Astrophysics Data System (ADS)

    Centeno Delgado, Diana C.

    In this study, the results of an observational analysis and a numerical analysis of the role of the Saharan Air Layer (SAL) during tropical cyclogenesis (TC-genesis) are described. The observational analysis investigates the interaction of dust particles and lightning during the genesis stage of two developed cases (Hurricanes Helene 2006 and Julia 2010). The Weather Research and Forecasting (WRF) and WRF-Chemistry models were used to include and monitor the aerosols and chemical processes that affect TC-genesis. The numerical modeling involved two developed cases (Hurricanes Helene 2006 and Julia 2010) and two non-developed cases (Non-Developed 2011 and Non-Developed 2012). The Aerosol Optical Depth (AOD) and lightning analysis for Hurricane Helene 2006 demonstrated a time-lagged, positive contribution of both to TC-genesis. The observational analyses showed that both systems developed under either strong or weak dust conditions. In the two cases, the location of strong versus weak dust outbreaks, in association with lightning, was an essential interaction that impacted TC-genesis. Furthermore, including dust particles, chemical processes, and aerosol feedback in the simulations with WRF-Chem provided results closer to observations than regular WRF. The model advantageously shows the location of the dust particles inside the tropical system. Overall, the results from this study suggest that the SAL is not a determining factor in the formation of tropical cyclones.

  17. On-Line Detection and Segmentation of Sports Motions Using a Wearable Sensor.

    PubMed

    Kim, Woosuk; Kim, Myunggyu

    2018-03-19

    In sports motion analysis, observation is a prerequisite for understanding the quality of motions. This paper introduces a novel approach to detecting and segmenting sports motions using a wearable sensor to support systematic observation. The main goal is to automatically provide motion data that are temporally classified according to the phase definition, for convenient analysis. For explicit segmentation, a motion model is defined as a sequence of sub-motions with boundary states. A sequence classifier based on deep neural networks is designed to detect sports motions from continuous sensor inputs. The evaluation on two types of motions (soccer kicking and two-handed ball throwing) verifies that the proposed method accurately detects and segments sports motions. By developing a sports motion analysis system using the motion model and the sequence classifier, we show that the proposed method is useful for observation of sports motions by automatically providing relevant motion data for analysis.
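The segmentation step implied here (turning per-frame classifier output into phase segments) can be sketched independently of the neural network. A minimal illustration; the phase names and label sequence are hypothetical, not the paper's kicking/throwing phase definitions:

```python
def segment_phases(frame_labels):
    """Collapse a per-frame label sequence into contiguous
    (phase, start_frame, end_frame) segments (end exclusive)."""
    segments = []
    start = 0
    for i in range(1, len(frame_labels) + 1):
        if i == len(frame_labels) or frame_labels[i] != frame_labels[start]:
            segments.append((frame_labels[start], start, i))
            start = i
    return segments

# Hypothetical classifier output for a kicking motion:
# idle -> backswing -> contact -> follow-through
labels = ["idle"] * 3 + ["backswing"] * 4 + ["contact"] * 2 + ["follow"] * 3
print(segment_phases(labels))
```

Each segment boundary corresponds to a transition between sub-motions, which is what the boundary states in the motion model make explicit.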

  18. Combining Hydrological Modeling and Remote Sensing Observations to Enable Data-Driven Decision Making for Devils Lake Flood Mitigation in a Changing Climate

    NASA Technical Reports Server (NTRS)

    Zhang, Xiaodong; Kirilenko, Andrei; Lim, Howe; Teng, Williams

    2010-01-01

    This slide presentation reviews work to combine hydrological models and remote sensing observations to monitor Devils Lake in North Dakota and assist in flood damage mitigation. A distributed rainfall-runoff model, HEC-HMS, was used to simulate the hydrodynamics of the lake watershed, driven by NASA remote sensing data, including the TRMM Multi-Satellite Precipitation Analysis (TMPA) and AIRS surface air temperature.

  19. Tropospheric ozone in the western Pacific Rim: Analysis of satellite and surface-based observations along with comprehensive 3-D model simulations

    NASA Technical Reports Server (NTRS)

    Young, Sun-Woo; Carmichael, Gregory R.

    1994-01-01

    Tropospheric ozone production and transport in mid-latitude eastern Asia is studied. Data analysis of surface-based ozone measurements in Japan and satellite-based tropospheric column measurements of the entire western Pacific Rim are combined with results from three-dimensional model simulations to investigate the diurnal, seasonal and long-term variations of ozone in this region. Surface ozone measurements from Japan show distinct seasonal variation with a spring peak and summer minimum. Satellite studies of the entire tropospheric column of ozone show high concentrations in both the spring and summer seasons. Finally, preliminary model simulation studies show good agreement with observed values.

  20. Predicted and observed directional dependence of meteoroid/debris impacts on LDEF thermal blankets

    NASA Technical Reports Server (NTRS)

    Drolshagen, Gerhard

    1993-01-01

    The number of impacts from meteoroids and space debris particles to the various LDEF rows is calculated using ESABASE/DEBRIS, a 3-D numerical analysis tool. It is based on recent reference environment flux models and includes geometrical and directional effects. A comparison of model predictions and actual observations is made for penetrations of the thermal blankets which covered the UHCR experiment. The thermal blankets were located on all LDEF rows, except 3, 9, and 12. Because of their uniform composition and thickness, these blankets allow a direct analysis of the directional dependence of impacts and provide a test case for the latest meteoroid and debris flux models.

  1. Quantitative model analysis with diverse biological data: applications in developmental pattern formation.

    PubMed

    Pargett, Michael; Umulis, David M

    2013-07-15

    Mathematical modeling of transcription factor and signaling networks is widely used to understand if and how a mechanism works, and to infer regulatory interactions that produce a model consistent with the observed data. Both of these approaches to modeling are informed by experimental data; however, much of the data available, or even acquirable, are not quantitative. Data that are not strictly quantitative cannot be used by classical quantitative model-based analyses that measure the difference between the measured observation and the model prediction for that observation. To bridge the model-to-data gap, a variety of techniques have been developed to measure model "fitness" and provide numerical values that can subsequently be used in model optimization or model inference studies. Here, we discuss a selection of traditional and novel techniques to transform data of varied quality and enable quantitative comparison with mathematical models. This review is intended both to inform the use of these model analysis methods, focused on parameter estimation, and to help guide the choice of method for a given study based on the type of data available. Applying techniques such as normalization or optimal scaling may significantly improve the utility of current biological data in model-based study and allow greater integration between disparate types of data. Copyright © 2013 Elsevier Inc. All rights reserved.
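One such transform, optimal scaling, can be sketched concretely: when data are known only up to scale and offset (e.g. relative fluorescence intensity), a least-squares affine map onto model units yields a usable fitness value. A hedged sketch, not one of the review's specific methods; the gradient values are invented:

```python
import numpy as np

def scaled_sse(data, model):
    """Fitness for data known only up to an affine transform (e.g. relative
    fluorescence intensity): find the least-squares scale a and offset b
    mapping data onto model units, then return the residual sum of squares."""
    A = np.column_stack([data, np.ones_like(data)])
    (a, b), *_ = np.linalg.lstsq(A, model, rcond=None)
    resid = a * data + b - model
    return float(np.dot(resid, resid)), float(a), float(b)

# Hypothetical case: the model predicts a morphogen gradient in nM, while
# the observation is image intensity in arbitrary units at the same points.
model = np.array([10.0, 8.0, 5.0, 2.0, 1.0])
intensity = 30.0 * model + 200.0          # data consistent with the model
sse, a, b = scaled_sse(intensity, model)
print(sse)   # near zero: the data fit the model after optimal scaling
```

The resulting residual sum of squares can be fed directly into a parameter-estimation objective even though the raw data carry no absolute units.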

  2. Physical properties of solar chromospheric plages. III - Models based on Ca II and Mg II observations

    NASA Technical Reports Server (NTRS)

    Kelch, W. L.; Linsky, J. L.

    1978-01-01

    Solar plages are modeled using observations of both the Ca II K and the Mg II h and k lines. A partial-redistribution approach is employed for calculating the line profiles on the basis of a grid of five model chromospheres. The computed integrated emission intensities for the five atmospheric models are compared with observations of six regions on the sun as well as with models of active-chromosphere stars. It is concluded that the basic plage model grid proposed by Shine and Linsky (1974) is still valid when the Mg II lines are included in the analysis and the Ca II and Mg II lines are analyzed using partial-redistribution diagnostics.

  3. Parameterization and Observability Analysis of Scalable Battery Clusters for Onboard Thermal Management

    DTIC Science & Technology

    2011-12-01

    the designed parameterization scheme and adaptive observer. A cylindrical battery thermal model in Eq. (1) with parameters of an A123 32157 LiFePO4 ...Morcrette, M. and Delacourt, C. (2010) Thermal modeling of a cylindrical LiFePO4/graphite lithium-ion battery. Journal of Power Sources, 195, 2961

  4. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters even when the initial sets of parameter values deviated substantially from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about the design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front, near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.
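The intuition behind conclusion (3) — that decay-parameter sensitivity peaks late in the front's passage — can be checked numerically. A hedged sketch using a simplified advecting front with first-order decay, not the paper's analytical models; the velocity, dispersion, and decay values are hypothetical:

```python
import numpy as np
from math import erfc, exp, sqrt

def conc(x, t, v=1.0, D=0.1, lam=0.05):
    """1-D advection-dispersion front with first-order decay
    (simplified Ogata-Banks-type solution; illustrative only)."""
    return exp(-lam * t) * 0.5 * erfc((x - v * t) / (2.0 * sqrt(D * t)))

def dconc_dlam(x, t, lam=0.05, h=1e-6):
    """Central finite-difference sensitivity of concentration to lam."""
    return (conc(x, t, lam=lam + h) - conc(x, t, lam=lam - h)) / (2.0 * h)

x = 10.0                                  # mean travel time x/v = 10
times = np.linspace(0.5, 60.0, 400)
sens = np.array([abs(dconc_dlam(x, t)) for t in times])
t_best = float(times[sens.argmax()])
print(t_best)   # the sensitivity peaks well after the front's mean arrival
```

Since the decay factor contributes a sensitivity proportional to t·c, the most informative observation time falls well after the mean travel time, consistent with the abstract's recommendation to sample late in the front's passage.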

  5. Bayesian Network Meta-Analysis for Unordered Categorical Outcomes with Incomplete Data

    ERIC Educational Resources Information Center

    Schmid, Christopher H.; Trikalinos, Thomas A.; Olkin, Ingram

    2014-01-01

    We develop a Bayesian multinomial network meta-analysis model for unordered (nominal) categorical outcomes that allows for partially observed data in which exact event counts may not be known for each category. This model properly accounts for correlations of counts in mutually exclusive categories and enables proper comparison and ranking of…

  6. A comparative analysis of simulated and observed landslide locations triggered by Hurricane Camille in Nelson County, Virginia

    USGS Publications Warehouse

    Morrissey, M.M.; Wieczorek, G.F.; Morgan, B.A.

    2008-01-01

    In 1969, Nelson County, Virginia, received up to 71 cm of rain within 12 h starting at 7 p.m. on August 19. The total rainfall from the storm exceeded the 1000-year return period for the region. Several thousand landslides were induced by rainfall associated with Hurricane Camille, causing fatalities and destroying infrastructure. We apply a distributed transient response model for regional slope stability analysis to shallow landslides. Initiation points of over 3000 debris flows and effects of flooding from this storm are applied to the model. Geotechnical data used in the calculations are published data from samples of colluvium. Results from these calculations are compared with field observations, such as landslide trigger location and timing of debris flows, to assess how well the model predicts the spatial and temporal distribution of landslide initiation locations. The model predicts many of the initiation locations in areas where debris flows are observed. Copyright © 2007 John Wiley & Sons, Ltd.
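Regional transient-response models of this kind typically couple a rainfall-driven pore-pressure response to an infinite-slope stability criterion. A hedged sketch of that criterion only (the abstract does not give the exact formulation, and every parameter value below is hypothetical):

```python
from math import tan, sin, cos, radians

def factor_of_safety(slope_deg, depth, cohesion, phi_deg,
                     pressure_head, gamma_s=20.0e3, gamma_w=9.81e3):
    """Infinite-slope factor of safety (FS < 1 implies failure).
    depth is the slip-surface depth (m), cohesion is in Pa, phi_deg is the
    friction angle, pressure_head is the pore-pressure head at the slip
    surface (m); gamma_s and gamma_w are unit weights (N/m^3)."""
    a = radians(slope_deg)
    phi = radians(phi_deg)
    return (tan(phi) / tan(a)
            + (cohesion - pressure_head * gamma_w * tan(phi))
            / (gamma_s * depth * sin(a) * cos(a)))

# Hypothetical colluvium on a 35-degree slope: marginally stable when dry,
# unstable once intense rainfall raises the pore-pressure head.
fs_dry = factor_of_safety(35.0, 1.5, 4000.0, 30.0, pressure_head=0.0)
fs_wet = factor_of_safety(35.0, 1.5, 4000.0, 30.0, pressure_head=1.2)
print(fs_dry, fs_wet)
```

Mapping the transient pore-pressure field through such a criterion is what lets the model predict where and when the factor of safety first drops below one during the storm.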

  7. Biospheric Monitoring and Ecological Forecasting using EOS/MODIS data, ecosystem modeling, planning and scheduling technologies

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Votava, P.; Golden, K.; Hashimoto, H.; Jolly, M.; White, M.; Running, S.; Coughlan, J.

    2003-12-01

    The latest generation of NASA Earth Observing System satellites has brought a new dimension to continuous monitoring of the living part of the Earth System, the Biosphere. EOS data can now provide weekly global measures of vegetation productivity and ocean chlorophyll, and many related biophysical factors such as land cover changes or snowmelt rates. However, the information with the highest economic value would be forecasts of impending conditions of the biosphere that would allow advanced decision-making to mitigate dangers or exploit positive trends. We have developed a software system called the Terrestrial Observation and Prediction System (TOPS) to facilitate rapid analysis of ecosystem states and functions by integrating EOS data with ecosystem models, surface weather observations, and weather/climate forecasts. Land products from MODIS (Moderate Resolution Imaging Spectroradiometer), including land cover, albedo, snow, surface temperature, and leaf area index, are ingested into TOPS for parameterization of models and for verifying model outputs such as snow cover and vegetation phenology. TOPS is programmed to gather data from observing networks such as USDA soil moisture, AmeriFlux, and SNOTEL to further enhance model predictions. Key technologies enabling TOPS implementation include the ability to understand and process heterogeneous, distributed data sets; automated planning and execution of ecosystem models; and causation analysis for understanding model outputs. Current TOPS implementations at local (vineyard) to global scales (global net primary production) can be found at http://www.ntsg.umt.edu/tops.

  8. Energetics and dynamics of simple impulsive solar flares

    NASA Technical Reports Server (NTRS)

    Starr, R.; Heindl, W. A.; Crannell, C. J.; Thomas, R. J.; Batchelor, D. A.; Magun, A.

    1987-01-01

    Flare energetics and dynamics were studied using observations of simple impulsive spike bursts. A large, homogeneous set of events was selected to enable the most definite tests possible of competing flare models, in the absence of spatially resolved observations. The emission mechanisms and specific flare models that were considered in this investigation are described, and the derivations of the parameters that were tested are presented. Results of the correlation analysis between soft and hard X-ray energetics are also presented. The ion conduction front model and tests of that model with the well-observed spike bursts are described. Finally, conclusions drawn from this investigation and suggestions for future studies are discussed.

  9. MMA, A Computer Code for Multi-Model Analysis

    USGS Publications Warehouse

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and the system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression, and calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that, as more data become available, they tend to favor more complicated models than the other methods do, which makes sense in many situations.
    Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations; the number of observations and estimated parameters; and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that one parameter value may be expected to be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA. Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program.
MMA is constructed using
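The criterion-to-probability step that MMA performs can be sketched generically. A minimal illustration of information criteria and exp(-Δ/2) model weights for least-squares calibrations, not MMA's actual code; the (n, k, sse) triples are hypothetical:

```python
import numpy as np

def criteria(n, k, sse):
    """AIC, AICc and BIC for a least-squares calibration with n weighted
    observations, k estimated parameters and residual sum of squares sse
    (Gaussian-likelihood form with constants dropped)."""
    ll2 = n * np.log(sse / n)              # -2 log-likelihood, up to a constant
    aic = ll2 + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = ll2 + k * np.log(n)
    return {"AIC": aic, "AICc": aicc, "BIC": bic}

def model_probabilities(values):
    """Posterior model probabilities from criterion values C_i:
    p_i proportional to exp(-(C_i - C_min) / 2)."""
    c = np.asarray(values, dtype=float)
    w = np.exp(-(c - c.min()) / 2.0)
    return w / w.sum()

# Hypothetical alternatives: (n, k, sse) for three calibrated models.
models = [(30, 3, 12.0), (30, 5, 9.5), (30, 9, 9.0)]
aic = [criteria(*m)["AIC"] for m in models]
probs = model_probabilities(aic)
print(np.round(probs, 3))   # the middle model balances fit and complexity
```

The resulting probabilities are the weights used for model ranking and for model-averaged parameter estimates and predictions.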

  10. Analysis of in vitro fertilization data with multiple outcomes using discrete time-to-event analysis

    PubMed Central

    Maity, Arnab; Williams, Paige; Ryan, Louise; Missmer, Stacey; Coull, Brent; Hauser, Russ

    2014-01-01

    In vitro fertilization (IVF) is an increasingly common method of assisted reproductive technology. Because of the careful observation and follow-up required as part of the procedure, IVF studies provide an ideal opportunity to identify and assess clinical and demographic factors, along with environmental exposures, that may impact successful reproduction. A major challenge in analyzing data from IVF studies is handling the complexity and multiplicity of outcomes, resulting both from multiple opportunities for pregnancy loss within a single IVF cycle and from multiple IVF cycles. To date, most evaluations of IVF studies do not make use of the full data because of their complex structure. In this paper, we develop statistical methodology for analysis of IVF data with multiple cycles and possibly multiple failure types observed for each individual. We develop a general analysis framework based on a generalized linear modeling formulation that allows implementation of various types of models, including shared frailty models, failure-specific frailty models, and transitional models, using standard software. We apply our methodology to data from an IVF study conducted at the Brigham and Women’s Hospital, Massachusetts. We also summarize the performance of our proposed methods based on a simulation study. PMID:24317880
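The data structure behind a discrete time-to-event GLM is the person-period expansion: each ordered failure opportunity within a cycle becomes one binary record, so standard logistic-type software applies. A hedged sketch; the stage names and outcomes below are hypothetical, not the paper's definitions:

```python
def person_period(cycles):
    """Expand per-subject cycle outcomes into person-period records for
    discrete time-to-event analysis. Each cycle is recorded as
    (subject_id, cycle_no, failed_stage or None for a successful cycle);
    stages are the ordered opportunities for failure within a cycle."""
    stages = ["fertilization", "implantation", "clinical_pregnancy"]
    rows = []
    for subject, cycle, failed_at in cycles:
        for stage in stages:
            event = int(stage == failed_at)
            rows.append({"subject": subject, "cycle": cycle,
                         "stage": stage, "event": event})
            if event:          # no later records once the cycle fails
                break
    return rows

data = [("A", 1, "implantation"),      # cycle fails at implantation
        ("A", 2, None),                # cycle succeeds through all stages
        ("B", 1, "fertilization")]
rows = person_period(data)
print(len(rows))
```

A binary GLM fit to the `event` column, with stage and cycle covariates (and subject-level frailty terms), then recovers the discrete-time hazards the abstract describes.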

  11. Validation and Verification of Operational Land Analysis Activities at the Air Force Weather Agency

    NASA Technical Reports Server (NTRS)

    Shaw, Michael; Kumar, Sujay V.; Peters-Lidard, Christa D.; Cetola, Jeffrey

    2012-01-01

    The NASA-developed Land Information System (LIS) is the Air Force Weather Agency's (AFWA) operational Land Data Assimilation System (LDAS), combining real-time precipitation observations and analyses, global forecast model data, vegetation, terrain, and soil parameters with the community Noah land surface model, along with other hydrology module options, to generate profile analyses of global soil moisture, soil temperature, and other important land surface characteristics. A range of satellite data products and surface observations is used to generate the land analysis products, which are global at 1/4-degree spatial resolution, with a model analysis generated every 3 hours. AFWA recognizes the importance of operational benchmarking and uncertainty characterization for land surface modeling and is developing standard methods, software, and metrics to verify and/or validate LIS output products. To facilitate this and other needs for land analysis activities at AFWA, the Model Evaluation Toolkit (MET) -- a joint product of the National Center for Atmospheric Research Developmental Testbed Center (NCAR DTC), AFWA, and the user community -- and the Land surface Verification Toolkit (LVT), developed at the Goddard Space Flight Center (GSFC), have been adapted to the operational benchmarking needs of AFWA's land characterization activities.

  12. Studies for Io's extended atmosphere and neutral clouds and their impact on the local satellite atmosphere and on the planetary magnetosphere

    NASA Technical Reports Server (NTRS)

    Smyth, William H.

    1993-01-01

    The research performed in this project is divided in two main investigations: (1) the synthesis and analysis of a collection of independent observations for Io's sodium corona, its sodium extended atmosphere, and the sodium cloud, and (2) the analysis of a (System III longitude correlated) space-time 'bite-out' near western elongation in the 1981 sodium cloud images from the JPL Table Mountain Sodium Cloud Data Set. For the first investigation, modeling analysis of the collective observed spatial profiles has shown that they are reproduced by adopting at Io's exobase a modified sputtering flux speed distribution function which is peaked near 0.5 km/s and has a small high-speed (15-20 km/s) nonisotropic component. The nonisotropic high-speed component is consistent with earlier modeling of the trailing directional feature. For the second investigation, modeling analysis of the 'bite-out' observed near western elongation (but not eastern elongation) has shown that it is reproduced in model calculation by adopting a plasma torus description for the sodium lifetime that is inherently asymmetric in System III longitudes of the active sector and that also has an east-west asymmetry. The east-west and System III longitude asymmetries were determined from independent observations for the plasma torus in 1981. The presence of the 'bite-out' feature only near western elongation may be understood in terms of the relative value for sodium of its lifetime and its transport time through the System III enhanced plasma torus region.

  13. Quantifying How Observations Inform a Numerical Reanalysis of Hawaii

    NASA Astrophysics Data System (ADS)

    Powell, B. S.

    2017-11-01

    When assimilating observations into a model via state-estimation, it is possible to quantify how each observation changes the modeled estimate of a chosen oceanic metric. Using an existing 2 year reanalysis of Hawaii that includes more than 31 million observations from satellites, ships, SeaGliders, and autonomous floats, I assess which observations most improve the estimates of the transport and eddy kinetic energy. When the SeaGliders were in the water, they comprised less than 2.5% of the data, but accounted for 23% of the transport adjustment. Because the model physics constrains advanced state-estimation, the prescribed covariances are propagated in time to identify observation-model covariance. I find that observations that constrain the isopycnal tilt across the transport section provide the greatest impact in the analysis. In the case of eddy kinetic energy, observations that constrain the surface-driven upper ocean have more impact. This information can help to identify optimal sampling strategies to improve both state-estimates and forecasts.

  14. Evaluating Transient Global and Regional Model Simulations: Bridging the Model/Observations Information Gap

    NASA Astrophysics Data System (ADS)

    Rutledge, G. K.; Karl, T. R.; Easterling, D. R.; Buja, L.; Stouffer, R.; Alpert, J.

    2001-05-01

    A major transition in our ability to evaluate transient Global Climate Model (GCM) simulations is occurring. Real-time and retrospective numerical weather prediction analyses, model runs, climate simulations, and assessments are proliferating from a handful of national centers to dozens of groups across the world. It is clear that it is no longer sufficient for any one national center to develop its data services alone. The comparison of transient GCM results with the observational climate record is difficult for several reasons. One limitation is that the global distributions of a number of basic climate quantities, such as precipitation, are not well known. Similarly, observational limitations exist with model re-analysis data. Both the NCEP/NCAR and ECMWF re-analyses eliminate the problems of changing analysis systems, but observational data also contain time-dependent biases. These changes in input data are blended with the natural variability, making estimates of true variability uncertain. The need for data homogeneity is critical to questions related to the ability to evaluate simulations of past climate. One approach to correcting for time-dependent biases and data-sparse regions is the development and use of high-quality 'reference' data sets. The primary U.S. national responsibility for the archive and service of weather and climate data rests with the National Climatic Data Center (NCDC). However, as supercomputers increase the temporal and spatial resolution of both Numerical Weather Prediction (NWP) and GCM models, the volume and varied formats of data presented for archive at NCDC are limiting scientific access to these data under current communications technologies and data management techniques. To address this ever-expanding need for climate and NWP information, NCDC, along with the National Centers for Environmental Prediction (NCEP), has initiated the NOAA Operational Model Archive and Distribution System (NOMADS).
    NOMADS is a collaboration between the Center for Ocean-Land-Atmosphere studies (COLA); the Geophysical Fluid Dynamics Laboratory (GFDL); the George Mason University (GMU); the National Center for Atmospheric Research (NCAR); the NCDC; NCEP; the Pacific Marine Environmental Laboratory (PMEL); and the University of Washington. The objective of NOMADS is to preserve and provide retrospective access to GCMs and reference-quality, long-term observational and high-volume three-dimensional data, as well as NCEP NWP models and re-start and re-analysis information. NOMADS features a format-independent data distribution methodology enabling scientific collaboration between researchers. The NOMADS configuration will allow a researcher to transparently browse, extract, and intercompare retrospective observational and model data products from any of the participating centers. NOMADS will provide the ability to easily initialize and compare the results of ongoing climate model assessments and NWP output. Beyond the ingest and access capability soon to be implemented with NOMADS lies the challenge of algorithm development for the inter-comparison of large-array data (e.g., satellite and radar) with surface, upper-air, and sub-surface ocean observational data. The implementation of NOMADS should foster the development of new quality control processes by taking advantage of distributed data access.

  15. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10°C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.
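    The error decomposition mentioned above can be illustrated with a generic Theil-style partition of mean square error into mean-difference, slope, and random components (a sketch only; the paper's exact partition may differ):

```python
import numpy as np

def mse_decomposition(pred, obs):
    """Partition mean square error into mean-difference, slope, and
    random components (Theil-style decomposition, population statistics)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mse = np.mean((pred - obs) ** 2)
    sp, so = pred.std(), obs.std()                 # population SDs (ddof=0)
    r = np.corrcoef(pred, obs)[0, 1]
    mean_term = (pred.mean() - obs.mean()) ** 2    # systematic offset
    slope_term = (sp - r * so) ** 2                # regression/slope error
    random_term = (1.0 - r ** 2) * so ** 2         # unexplained (random) error
    return mse, mean_term, slope_term, random_term
```

    With this partition the three components sum exactly to the MSE, so the share attributable to random error is simply random_term / mse.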

  16. The Use of AMET and Automated Scripts for Model Evaluation

    EPA Science Inventory

    The Atmospheric Model Evaluation Tool (AMET) is a suite of software designed to facilitate the analysis and evaluation of meteorological and air quality models. AMET matches the model output for particular locations to the corresponding observed values from one or more networks ...

  17. Using Analysis Increments (AI) to Estimate and Correct Systematic Errors in the Global Forecast System (GFS) Online

    NASA Astrophysics Data System (ADS)

    Bhargava, K.; Kalnay, E.; Carton, J.; Yang, F.

    2017-12-01

    Systematic forecast errors, arising from model deficiencies, form a significant portion of the total forecast error in weather prediction models like the Global Forecast System (GFS). While much effort has been expended to improve models, substantial model error remains. The aim here is to (i) estimate the model deficiencies in the GFS that lead to systematic forecast errors, and (ii) implement an online (i.e., within the model) correction scheme for the GFS following the methodology of Danforth et al. [2007] and Danforth and Kalnay [2008, GRL]. Analysis increments represent the corrections that new observations make on, in this case, the 6-hr forecast in the analysis cycle. Model bias corrections are estimated from the time average of the analysis increments divided by 6 hr, assuming that initial model errors grow linearly and, at first, ignoring the impact of observation bias. During 2012-2016, seasonal means of the 6-hr model bias are generally robust despite changes in model resolution and data assimilation systems, and their broad continental scales explain their insensitivity to model resolution. The daily bias dominates the sub-monthly analysis increments and consists primarily of diurnal and semidiurnal components, also requiring a low-dimensional correction. Analysis increments in 2015 and 2016 are reduced over oceans, which is attributed to improvements in the specification of the SSTs. These results encourage application of online correction, as suggested by Danforth and Kalnay, for mean, seasonal, diurnal and semidiurnal model biases in the GFS to reduce both systematic and random errors. As the short-term error growth is still linear, estimated model bias corrections can be added as a forcing term in the model tendency equation to correct online. Preliminary experiments with the GFS, correcting temperature and specific humidity online, show a reduction in model bias in the 6-hr forecast.
This approach can then be used to guide and optimize the design of sub-grid scale physical parameterizations, more accurate discretization of the model dynamics, boundary conditions, radiative transfer codes, and other potential model improvements which can then replace the empirical correction scheme. The analysis increments also provide guidance in testing new physical parameterizations.
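    A minimal sketch of the online correction described above, assuming (as the abstract does) that the time-mean analysis increment divided by the 6-hr window approximates a constant bias tendency that can be added as a forcing term in the tendency equation (function names are hypothetical):

```python
import numpy as np

def bias_tendency(increments, window_hours=6.0):
    """Estimate a constant bias-correction tendency as the time mean of
    analysis increments divided by the assimilation window, assuming
    linear short-term error growth."""
    return np.mean(increments, axis=0) / window_hours

def corrected_step(state, model_tendency, bias_corr, dt):
    """One forward step with the empirical correction added as a
    forcing term in the model tendency equation."""
    return state + dt * (model_tendency(state) + bias_corr)
```

    For example, increments averaging +0.6 per 6-hr cycle imply a correction tendency of +0.1 per hour added to every model step.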

  18. Dynamic Analysis of Tunnel in Weathered Rock Subjected to Internal Blast Loading

    NASA Astrophysics Data System (ADS)

    Tiwari, Rohit; Chakraborty, Tanusree; Matsagar, Vasant

    2016-11-01

    The present study deals with three-dimensional nonlinear finite element (FE) analyses of a tunnel in rock with reinforced concrete (RC) lining subjected to internal blast loading. The analyses have been performed using the coupled Eulerian-Lagrangian analysis tool available in FE software Abaqus/Explicit. Rock and RC lining are modeled using three-dimensional Lagrangian elements. Beam elements have been used to model reinforcement in RC lining. Three different rock types with different weathering conditions have been used to understand the response of rock when subjected to blast load. The trinitrotoluene (TNT) explosive and surrounding air have been modeled using the Eulerian elements. The Drucker-Prager plasticity model with strain rate-dependent material properties has been used to simulate the stress-strain response of rock. The concrete damaged plasticity model and Johnson-Cook plasticity model have been used for the simulation of stress-strain response of concrete and steel, respectively. The explosive (TNT) has been modeled using Jones-Wilkins-Lee (JWL) equation of state. The analysis results have been studied for stresses, deformation and damage of RC lining and the surrounding rock. It is observed that damage in RC lining results in higher stress in rock. Rocks with low modulus and high weathering conditions show higher attenuation of shock wave. Higher amount of ground shock wave propagation is observed in case of less weathered rock. Ground heave is observed under blast loading for tunnel close to ground surface.
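    The Jones-Wilkins-Lee equation of state mentioned above has the standard form P(V) = A(1 - ω/(R1·V))·exp(-R1·V) + B(1 - ω/(R2·V))·exp(-R2·V) + ωE/V, with V the relative volume of the detonation products. A minimal sketch, using representative TNT parameter values commonly quoted in the literature (illustrative only, not necessarily the values used in this study):

```python
import math

# Representative TNT JWL parameters from the literature (illustrative only)
A, B = 373.8e9, 3.747e9     # Pa
R1, R2, OMEGA = 4.15, 0.90, 0.35
E0 = 6.0e9                  # detonation energy per unit initial volume, J/m^3

def jwl_pressure(v_rel, e_vol=E0):
    """JWL equation of state: detonation-product pressure (Pa) as a
    function of relative volume v_rel = v / v0 and internal energy per
    unit initial volume."""
    return (A * (1.0 - OMEGA / (R1 * v_rel)) * math.exp(-R1 * v_rel)
            + B * (1.0 - OMEGA / (R2 * v_rel)) * math.exp(-R2 * v_rel)
            + OMEGA * e_vol / v_rel)
```

    Pressure is of order GPa at full density and decays rapidly as the products expand, which is the behavior the FE solver samples as the blast wave propagates.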

  19. GCIP water and energy budget synthesis (WEBS)

    USGS Publications Warehouse

    Roads, J.; Lawford, R.; Bainto, E.; Berbery, E.; Chen, S.; Fekete, B.; Gallo, K.; Grundstein, A.; Higgins, W.; Kanamitsu, M.; Krajewski, W.; Lakshmi, V.; Leathers, D.; Lettenmaier, D.; Luo, L.; Maurer, E.; Meyers, T.; Miller, D.; Mitchell, Ken; Mote, T.; Pinker, R.; Reichler, T.; Robinson, D.; Robock, A.; Smith, J.; Srinivasan, G.; Verdin, K.; Vinnikov, K.; Vonder, Haar T.; Vorosmarty, C.; Williams, S.; Yarosh, E.

    2003-01-01

    As part of the World Climate Research Program's (WCRP's) Global Energy and Water-Cycle Experiment (GEWEX) Continental-scale International Project (GCIP), a preliminary water and energy budget synthesis (WEBS) was developed for the period 1996-1999 from the "best available" observations and models. Besides this summary paper, a companion CD-ROM with more extensive discussion, figures, tables, and raw data is available to the interested researcher from the GEWEX project office, the GAPP project office, or the first author. An updated online version of the CD-ROM is also available at http://ecpc.ucsd.edu/gcip/webs.htm/. Observations cannot adequately characterize or "close" budgets since too many fundamental processes are missing. Models that properly represent the many complicated atmospheric and near-surface interactions are also required. This preliminary synthesis therefore included a representative global general circulation model, regional climate model, and a macroscale hydrologic model as well as a global reanalysis and a regional analysis. Judging by the qualitative agreement among the models and available observations, it appears that we now qualitatively understand the water and energy budgets of the Mississippi River Basin. However, there is still much quantitative uncertainty. In that regard, there did appear to be a clear advantage to using a regional analysis over a global analysis, or a regional simulation over a global simulation, to describe the Mississippi River Basin water and energy budgets. There also appeared to be some advantage to using a macroscale hydrologic model for at least the surface water budgets. Copyright 2003 by the American Geophysical Union.

  20. From the Cluster Temperature Function to the Mass Function at Low Z

    NASA Technical Reports Server (NTRS)

    Mushotzky, Richard (Technical Monitor); Markevitch, Maxim

    2004-01-01

    This XMM project consisted of three observations of the nearby, hot galaxy cluster Triangulum Australis: one of the cluster center and two offset pointings. The goal was to measure the radial gas temperature profile out to large radii and derive the total gravitating mass within the radius of average mass overdensity 500. The central pointing also provides data for a detailed two-dimensional gas temperature map of this interesting cluster. We have analyzed all three observations. The derivation of the temperature map using the central pointing is complete, and the paper is soon to be submitted. During the course of this study and of the analysis of archival XMM cluster observations, it became apparent that the commonly used XMM background flare screening techniques are often not accurate enough for studies of cluster outer regions. The information on a cluster's total mass is contained at large off-center distances, and it is precisely the temperatures of those low-brightness regions that are most affected by detector background anomalies. In particular, our two offset observations of Triangulum Australis were contaminated by background flares ("bad cosmic weather") to a degree where they could not be used for accurate spectral analysis. This forced us to expand the scope of our project: we needed to devise a more accurate method of screening and modeling the background flares, and to evaluate the uncertainty of the XMM background modeling. To do this, we have analyzed a large number of archival EPIC blank-field and closed-cover observations. As a result, we have derived stricter background screening criteria. It also turned out that mild flares affecting EPIC-pn can be modeled with adequate accuracy. Such modeling has been used to derive our Triangulum temperature map. The results of our XMM background analysis, including the modeling recipes, are presented in a paper which is in final preparation and will be submitted soon.
It will be useful not only for our future analysis but for other XMM cluster observations as well.

  1. Observing the earth radiation budget from satellites - Past, present, and a look to the future

    NASA Technical Reports Server (NTRS)

    House, F. B.

    1985-01-01

    Satellite measurements of the radiative exchange between the planet earth and space have been the objective of many experiments since the beginning of the space age in the late 1950s. The ongoing mission of the Earth Radiation Budget (ERB) experiments has been, and will be, to consider flight hardware, data handling and scientific analysis methods in a single design strategy. Research and development on observational data has produced an analysis model of errors associated with ERB measurement systems on polar satellites. Results show that the variability of reflected solar radiation from changing meteorology dominates measurement uncertainties. As an application, model calculations demonstrate that measurement requirements for the verification of climate models may be satisfied with observations from one polar satellite, provided there is information on diurnal variations of the radiation budget from the ERBE mission.

  2. Logit Models for the Analysis of Two-Way Categorical Data

    ERIC Educational Resources Information Center

    Draxler, Clemens

    2011-01-01

    This article discusses the application of logit models for the analyses of 2-way categorical observations. The models described are generalized linear models using the logit link function. One of the models is the Rasch model (Rasch, 1960). The objective is to test hypotheses of marginal and conditional independence between explanatory quantities…

  3. Distribution of CO2 in Saturn's Atmosphere from Cassini/cirs Infrared Observations

    NASA Astrophysics Data System (ADS)

    Abbas, M. M.; LeClair, A.; Woodard, E.; Young, M.; Stanbro, M.; Flasar, F. M.; Kunde, V. G.; Achterberg, R. K.; Bjoraker, G.; Brasunas, J.; Jennings, D. E.; the Cassini/CIRS Team

    2013-10-01

    This paper focuses on the CO2 distribution in Saturn's atmosphere based on analysis of infrared spectral observations of Saturn made by the Composite Infrared Spectrometer aboard the Cassini spacecraft. The Cassini spacecraft was launched in 1997 October, inserted in Saturn's orbit in 2004 July, and has been successfully making infrared observations of Saturn, its rings, Titan, and other icy satellites during well-planned orbital tours. The infrared observations, made with a dual Fourier transform spectrometer in both nadir- and limb-viewing modes, cover spectral regions of 10-1400 cm-1, with the option of variable apodized spectral resolutions from 0.53 to 15 cm-1. An analysis of the observed spectra with well-developed radiative transfer models and spectral inversion techniques has the potential to provide knowledge of Saturn's thermal structure and composition with global distributions of a series of gases. In this paper, we present an analysis of a large observational data set for retrieval of Saturn's CO2 distribution utilizing spectral features of CO2 in the Q-branch of the ν2 band, and discuss its possible relationship to the influx of interstellar dust grains. With limited spectral regions available for analysis, due to low densities of CO2 and interference from other gases, the retrieved CO2 profile is obtained as a function of a model photochemical profile, with the retrieved values at atmospheric pressures in the region of ~1-10 mbar levels. The retrieved CO2 profile is found to be in good agreement with the model profile based on Infrared Space Observatory measurements with mixing ratios of ~4.9 × 10-10 at atmospheric pressures of ~1 mbar.

  4. Planck intermediate results: XLII. Large-scale Galactic magnetic fields

    DOE PAGES

    Adam, R.; Ade, P. A. R.; Alves, M. I. R.; ...

    2016-12-12

    Recent models for the large-scale Galactic magnetic fields in the literature have been largely constrained by synchrotron emission and Faraday rotation measures. In this paper, we use three different but representative models to compare their predicted polarized synchrotron and dust emission with that measured by the Planck satellite. We first update these models to match the Planck synchrotron products using a common model for the cosmic-ray leptons. We discuss the impact on this analysis of the ongoing problems of component separation in the Planck microwave bands and of the uncertain cosmic-ray spectrum. In particular, the inferred degree of ordering in the magnetic fields is sensitive to these systematic uncertainties, and we further show the importance of considering the expected variations in the observables in addition to their mean morphology. We then compare the resulting simulated emission to the observed dust polarization and find that the dust predictions do not match the morphology in the Planck data but underpredict the dust polarization away from the plane. We modify one of the models to roughly match both observables at high latitudes by increasing the field ordering in the thin disc near the observer. Finally, though this specific analysis is dependent on the component separation issues, we present the improved model as a proof of concept for how these studies can be advanced in future using complementary information from ongoing and planned observational projects.

  5. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then calibrated the model parameters in a Bayesian framework; the posterior distributions show good predictive capabilities for the calibrated model. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. Four other parameters, nitrogen use efficiency (NUE), the base rate for maintenance respiration (BR_MR), the growth respiration fraction (RG_FRAC), and allocation to the plant stem pool (ASTEM), contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However, LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities.
The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.
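    The calibration setup described above (iid Gaussian likelihood with prescribed variance, and uniform priors within expert-opinion bounds) can be sketched with a random-walk Metropolis sampler. The toy model below is a hypothetical stand-in; the real ecosystem model and its 18 parameters are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(theta, t):
    """Hypothetical seasonal-cycle stand-in for the ecosystem model."""
    a, b = theta
    return a * np.sin(2.0 * np.pi * t / 365.0) + b

def log_post(theta, t, obs, sigma, bounds):
    # Uninformative (uniform) prior within expert-opinion bounds
    for val, (lo, hi) in zip(theta, bounds):
        if not lo <= val <= hi:
            return -np.inf
    resid = obs - toy_model(theta, t)
    return -0.5 * np.sum((resid / sigma) ** 2)   # iid Gaussian likelihood

def metropolis(t, obs, sigma, bounds, n=5000, step=0.05):
    """Random-walk Metropolis sampling of the posterior."""
    theta = np.array([(lo + hi) / 2.0 for lo, hi in bounds])
    lp = log_post(theta, t, obs, sigma, bounds)
    chain = []
    for _ in range(n):
        prop = theta + step * rng.standard_normal(theta.size)
        lp_prop = log_post(prop, t, obs, sigma, bounds)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)
```

    Discarding the first half of the chain as burn-in and averaging the rest gives posterior means; parameter correlations can be read off the chain's sample covariance.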

  6. On the far-IR and sub-mm spectra of spiral galaxies

    NASA Technical Reports Server (NTRS)

    Stark, A. A.; Davidson, J. A.; Harper, D. A.; Pernic, R.; Loewenstein, R.

    1989-01-01

    Photometric measurements of three Virgo cluster spirals (NGC4254, NGC4501, and NGC4654) at 160-micron (far-infrared) and 360-micron (submillimeter) wavelengths are compared with theoretical models and observations at other wavelengths. It is shown that the data at the observed wavelengths do not fit any of the interstellar dust grain models very well; four possibilities are offered to explain the discrepancies: the observations at these wavelengths are incorrect; the previously observed data are incorrect; both data sets are incorrect; or the premise of the analysis is incorrect, i.e., a composite far-infrared spectrum of normal spiral galaxies is meaningless because galaxies vary considerably in their far-infrared properties. It is also noted that the observed data are inconsistent with models having large cold grains.

  7. Observational analysis on inflammatory reaction to talc pleurodesis: Small and large animal model series review

    PubMed Central

    Vannucci, Jacopo; Bellezza, Guido; Matricardi, Alberto; Moretti, Giulia; Bufalari, Antonello; Cagini, Lucio; Puma, Francesco; Daddi, Niccolò

    2018-01-01

    Talc pleurodesis has been associated with pleuropulmonary damage, particularly long-term damage due to the inert nature of talc. The present model series review aimed to assess the safety of this procedure by examining inflammatory stimulus, biocompatibility and tissue reaction following talc pleurodesis. Talc slurry was performed in rabbits: 200 mg/kg checked at postoperative day 14 (five models), 200 mg/kg checked at postoperative day 28 (five models), 40 mg/kg checked at postoperative day 14 (five models), and 40 mg/kg checked at postoperative day 28 (five models). Talc poudrage was performed in pigs: 55 mg/kg checked at postoperative day 60 (18 models). Tissue inspection and data collection followed the surgical pathology approach currently used in clinical practice. As this was an observational study, no statistical analysis was performed. In the rabbit model (Oryctolagus cuniculus), the extent of adhesions ranged between 0 and 30%, and between 0 and 10%, following 14 and 28 days, respectively. No intraparenchymal granuloma was observed, whereas pleural granulomas were extensively encountered following both talc dosages, with more evidence of visceral pleura granulomas following 200 mg/kg compared with 40 mg/kg. Severe florid inflammation was observed in 2/10 cases following 40 mg/kg. Parathymic and pericardium granulomas and mediastinal lymphadenopathy were evidenced at 28 days. At 60 days, findings ranging from rare adhesions to extended pleurodesis were observed in the pig model (Sus scrofa domesticus). Pleural granulomas were ubiquitous on the visceral and parietal pleurae. Severe spotted inflammation among the adhesions was recorded in 15/18 pigs. Intraparenchymal granulomas were observed in 9/18 lungs. Talc produced unpredictable pleurodesis in both animal models, with enduring pleural inflammation whether it was performed via slurry or poudrage.
Furthermore, talc appeared to have triggered extended pleural damage, intraparenchymal nodules (porcine poudrage) and mediastinal migration (rabbit slurry). PMID:29403549

  8. Technical report series on global modeling and data assimilation. Volume 4: Documentation of the Goddard Earth Observing System (GEOS) data assimilation system, version 1

    NASA Technical Reports Server (NTRS)

    Suarez, Max J. (Editor); Pfaendtner, James; Bloom, Stephen; Lamich, David; Seablom, Michael; Sienkiewicz, Meta; Stobie, James; Dasilva, Arlindo

    1995-01-01

    This report describes the analysis component of the Goddard Earth Observing System, Data Assimilation System, Version 1 (GEOS-1 DAS). The general features of the data assimilation system are outlined, followed by a thorough description of the statistical interpolation algorithm, including specification of error covariances and quality control of observations. We conclude with a discussion of the current status of development of the GEOS data assimilation system. The main components of GEOS-1 DAS are an atmospheric general circulation model and an Optimal Interpolation algorithm. The system is cycled using the Incremental Analysis Update (IAU) technique, in which analysis increments are introduced as time-independent forcing terms in a forecast model integration. The system is capable of producing dynamically balanced states without the explicit use of initialization, as well as a time-continuous representation of non-observables such as precipitation and radiative fluxes. This version of the data assimilation system was used in the five-year reanalysis project completed in April 1994 by Goddard's Data Assimilation Office (DAO). Data from this reanalysis are available from the Goddard Distributed Active Archive Center (DAAC), which is part of NASA's Earth Observing System Data and Information System (EOSDIS). For information on how to obtain these data sets, contact the Goddard DAAC at (301) 286-3209 or by email at daac@gsfc.nasa.gov.
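    The Incremental Analysis Update technique described above can be sketched as follows: rather than adding the analysis increment to the state in one jump at analysis time, the increment is spread as a constant forcing over the forecast window (a schematic with hypothetical function names):

```python
def iau_integrate(state, model_tendency, increment, dt, n_steps):
    """Incremental Analysis Update: the analysis increment is applied as
    a time-independent forcing spread evenly over the forecast window,
    avoiding the shock of a single instantaneous correction."""
    forcing = increment / (n_steps * dt)      # constant tendency
    for _ in range(n_steps):
        state = state + dt * (model_tendency(state) + forcing)
    return state
```

    Over the full window the state still receives the entire increment, but gradually, which is what lets the system stay dynamically balanced without explicit initialization.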

  9. Impacts of different characterizations of large-scale background on simulated regional-scale ozone over the continental United States

    NASA Astrophysics Data System (ADS)

    Hogrefe, Christian; Liu, Peng; Pouliot, George; Mathur, Rohit; Roselle, Shawn; Flemming, Johannes; Lin, Meiyun; Park, Rokjin J.

    2018-03-01

    This study analyzes simulated regional-scale ozone burdens both near the surface and aloft, estimates process contributions to these burdens, and calculates the sensitivity of the simulated regional-scale ozone burden to several key model inputs with a particular emphasis on boundary conditions derived from hemispheric or global-scale models. The Community Multiscale Air Quality (CMAQ) model simulations supporting this analysis were performed over the continental US for the year 2010 within the context of the Air Quality Model Evaluation International Initiative (AQMEII) and Task Force on Hemispheric Transport of Air Pollution (TF-HTAP) activities. CMAQ process analysis (PA) results highlight the dominant role of horizontal and vertical advection on the ozone burden in the mid-to-upper troposphere and lower stratosphere. Vertical mixing, including mixing by convective clouds, couples fluctuations in free-tropospheric ozone to ozone in lower layers. Hypothetical bounding scenarios were performed to quantify the effects of emissions, boundary conditions, and ozone dry deposition on the simulated ozone burden. Analysis of these simulations confirms that the characterization of ozone outside the regional-scale modeling domain can have a profound impact on simulated regional-scale ozone. This was further investigated by using data from four hemispheric or global modeling systems (Chemistry - Integrated Forecasting Model (C-IFS), CMAQ extended for hemispheric applications (H-CMAQ), the Goddard Earth Observing System model coupled to chemistry (GEOS-Chem), and AM3) to derive alternate boundary conditions for the regional-scale CMAQ simulations. 
The regional-scale CMAQ simulations using these four different boundary conditions showed that the largest ozone abundance in the upper layers was simulated when using boundary conditions from GEOS-Chem, followed by the simulations using C-IFS, AM3, and H-CMAQ boundary conditions, consistent with the analysis of the ozone fields from the global models along the CMAQ boundaries. Using boundary conditions from AM3 yielded higher springtime ozone column burdens in the middle and lower troposphere compared to boundary conditions from the other models. For surface ozone, the differences between the AM3-driven CMAQ simulations and the CMAQ simulations driven by other large-scale models are especially pronounced during spring and winter where they can reach more than 10 ppb for seasonal mean ozone mixing ratios and as much as 15 ppb for domain-averaged daily maximum 8 h average ozone on individual days. In contrast, the differences between the C-IFS-, GEOS-Chem-, and H-CMAQ-driven regional-scale CMAQ simulations are typically smaller. Comparing simulated surface ozone mixing ratios to observations and computing seasonal and regional model performance statistics revealed that boundary conditions can have a substantial impact on model performance. Further analysis showed that boundary conditions can affect model performance across the entire range of the observed distribution, although the impacts tend to be lower during summer and for the very highest observed percentiles. The results are discussed in the context of future model development and analysis opportunities.

  10. Research Review, 1983

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The Global Modeling and Simulation Branch (GMSB) of the Goddard Laboratory for Atmospheric Sciences (GLAS) is engaged in general circulation modeling studies related to global atmospheric and oceanographic research. The research activities discussed are organized into two disciplines: Global Weather/Observing Systems and Climate/Ocean-Air Interactions. The Global Weather activities are grouped in four areas: (1) Analysis and Forecast Studies, (2) Satellite Observing Systems, (3) Analysis and Model Development, and (4) Atmospheric Dynamics and Diagnostic Studies. The GLAS Analysis/Forecast/Retrieval System was applied to both FGGE and post-FGGE periods. The resulting analyses have already been used in a large number of theoretical studies of atmospheric dynamics, forecast impact studies, and the development of new or improved algorithms for the utilization of satellite data. Ocean studies have focused on the analysis of long-term global sea surface temperature data, for use in the study of the response of the atmosphere to sea surface temperature anomalies. Climate research has concentrated on the simulation of global cloudiness, and on the sensitivities of the climate to sea surface temperature and ground wetness anomalies.

  11. Observability Analysis of a Matrix Kalman Filter-Based Navigation System Using Visual/Inertial/Magnetic Sensors

    PubMed Central

    Feng, Guohu; Wu, Wenqi; Wang, Jinling

    2012-01-01

    A matrix Kalman filter (MKF) has been implemented for an integrated navigation system using visual/inertial/magnetic sensors. The MKF rearranges the original nonlinear process model into a pseudo-linear process model. We employ the observability rank criterion based on Lie derivatives to verify the conditions under which the nonlinear system is observable. It has been proved that the observability conditions are: (a) at least one degree of rotational freedom is excited, and (b) at least two linearly independent horizontal lines and one vertical line are observed. Experimental results have validated the correctness of these observability conditions. PMID:23012523
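    The observability rank criterion based on Lie derivatives can be illustrated on a toy two-state system (a pendulum with a position measurement, not the paper's visual/inertial/magnetic model): stack the measurement function and its Lie derivatives along the drift field, and check whether the Jacobian of the stack has full rank.

```python
import sympy as sp

# Hypothetical 2-state nonlinear system used only to illustrate the test
x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -sp.sin(x1)])   # drift vector field (pendulum)
h = sp.Matrix([x1])                # scalar measurement: angle only

def lie_derivative(h_expr, f_vec, states):
    """Lie derivative of h along f: (dh/dx) * f."""
    return h_expr.jacobian(states) * f_vec

# Stack h and its Lie derivatives, then check the rank of the
# observability codistribution (Jacobian of the stacked map).
L0 = h
L1 = lie_derivative(L0, f, x)
obs_map = sp.Matrix.vstack(L0, L1)
O = obs_map.jacobian(x)
rank = O.rank()
```

    Here the stacked map is [x1, x2], its Jacobian is the identity, and the rank equals the state dimension, so the toy system is locally observable from the angle measurement alone.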

  12. Modelling short time series in metabolomics: a functional data analysis approach.

    PubMed

    Montana, Giovanni; Berk, Maurice; Ebbels, Tim

    2011-01-01

    Metabolomics is the study of the complement of small molecule metabolites in cells, biofluids and tissues. Many metabolomic experiments are designed to compare changes observed over time under two or more experimental conditions (e.g. a control and drug-treated group), thus producing time course data. Models from traditional time series analysis are often unsuitable because, by design, only very few time points are available and there are a high number of missing values. We propose a functional data analysis approach for modelling short time series arising in metabolomic studies which overcomes these obstacles. Our model assumes that each observed time series is a smooth random curve, and we propose a statistical approach for inferring this curve from repeated measurements taken on the experimental units. A test statistic for detecting differences between temporal profiles associated with two experimental conditions is then presented. The methodology has been applied to NMR spectroscopy data collected in a pre-clinical toxicology study.
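    A rough sketch of the functional-data idea: model each short, gappy series as a smooth curve and compare group mean curves. Here a low-order polynomial stands in for the smoothing-spline/random-curve machinery, and the L2-type distance is only a simple surrogate for the paper's test statistic:

```python
import numpy as np

def fit_smooth_curve(t, y, degree=2):
    """Fit a smooth curve to a short, possibly gappy series; a low-order
    polynomial stands in for the smoothing-spline / random-curve model."""
    ok = ~np.isnan(y)                          # tolerate missing values
    return np.polynomial.Polynomial.fit(t[ok], y[ok], degree)

def profile_distance(t, group_a, group_b, degree=2):
    """RMS distance between the fitted mean curves of two experimental
    groups (e.g. control vs drug-treated); a surrogate test statistic."""
    grid = np.linspace(t.min(), t.max(), 50)
    ca = fit_smooth_curve(t, np.nanmean(group_a, axis=0), degree)(grid)
    cb = fit_smooth_curve(t, np.nanmean(group_b, axis=0), degree)(grid)
    return float(np.sqrt(np.mean((ca - cb) ** 2)))
```

    A permutation scheme over group labels would then turn this distance into a significance test, in the spirit of the approach described above.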

  13. Model synthesis in frequency analysis of Missouri floods

    USGS Publications Warehouse

    Hauth, Leland D.

    1974-01-01

    Synthetic flood records for 43 small-stream sites aided in definition of techniques for estimating the magnitude and frequency of floods in Missouri. The long-term synthetic flood records were generated by use of a digital computer model of the rainfall-runoff process. A relatively short period of concurrent rainfall and runoff data observed at each of the 43 sites was used to calibrate the model, and rainfall records covering from 66 to 78 years for four Missouri sites and pan-evaporation data were used to generate the synthetic records. Flood magnitude and frequency characteristics of both the synthetic records and observed long-term flood records available for 109 large-stream sites were used in a multiple-regression analysis to define relations for estimating future flood characteristics at ungaged sites. That analysis indicated that drainage basin size and slope were the most useful estimating variables. It also indicated that a more complex regression model than the commonly used log-linear one was needed for the range of drainage basin sizes available in this study.
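    The commonly used log-linear regression model referred to above, relating flood magnitude to drainage basin size and slope, can be sketched as follows (variable names are hypothetical; the study found a more complex model was needed over its full range of basin sizes):

```python
import numpy as np

def fit_flood_relation(area, slope, q_peak):
    """Least-squares fit of the log-linear relation
    log10(Q) = b0 + b1*log10(A) + b2*log10(S)."""
    X = np.column_stack([np.ones(len(area)),
                         np.log10(area),
                         np.log10(slope)])
    b, *_ = np.linalg.lstsq(X, np.log10(q_peak), rcond=None)
    return b   # b0 (log intercept), b1 (area exponent), b2 (slope exponent)

def predict_q(b, area, slope):
    """Estimate a flood quantile at an ungaged site from basin size and slope."""
    return 10.0 ** (b[0] + b[1] * np.log10(area) + b[2] * np.log10(slope))
```

    In log space this is an ordinary multiple regression, which is why a single power law can fail when the exponents themselves vary with basin size.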

  14. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
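    A minimal stand-in for the robust linear model proposed here: iteratively reweighted least squares with a Huber loss, regressing expression on allelic dosage so that atypical observations are downweighted rather than dominating the fit (a generic M-estimator sketch, not the authors' implementation):

```python
import numpy as np

def huber_weights(resid, scale, c=1.345):
    """Huber M-estimator weights: observations with large standardized
    residuals are downweighted in proportion to their size."""
    u = np.abs(resid) / max(scale, 1e-12)
    return np.where(u <= c, 1.0, c / u)

def robust_eqtl_fit(dosage, expr, n_iter=20):
    """Iteratively reweighted least squares with a Huber loss for a
    simple dosage model: expr = b0 + b1 * dosage + error."""
    X = np.column_stack([np.ones(len(dosage)), dosage])
    beta, *_ = np.linalg.lstsq(X, expr, rcond=None)   # OLS start
    for _ in range(n_iter):
        resid = expr - X @ beta
        # robust scale estimate via the median absolute deviation
        scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        w = huber_weights(resid, scale)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ expr)  # weighted LS
    return beta   # intercept, per-allele effect
```

    On data with a single gross outlier, the Huber fit recovers the per-allele slope while ordinary least squares is pulled toward the outlier, which is exactly the type II error mechanism the abstract describes.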

  15. Bayesian uncertainty analysis for complex systems biology models: emulation, global parameter searches and evaluation of gene functions.

    PubMed

    Vernon, Ian; Liu, Junli; Goldstein, Michael; Rowe, James; Topping, Jen; Lindsey, Keith

    2018-01-02

    Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data of various forms. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to a model of hormonal crosstalk in Arabidopsis root development, which has 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches between model output and observed trend data. The multiple insights into the model's structure that this analysis provides are discussed. The methodology is applied to a second related model, and the biological consequences of the resulting comparison, including the evaluation of gene functions, are described. Bayesian uncertainty analysis for complex models using both emulators and history matching is shown to be a powerful technique that can greatly aid the study of a large class of systems biology models. 
It both provides insight into model behaviour and identifies the sets of rate parameters of interest.
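    The history-matching step described above hinges on an implausibility measure that compares an observation to emulator output while combining observation, model-discrepancy, and emulator variances; parameter settings whose implausibility exceeds about 3 are ruled out. A minimal sketch under toy assumptions (a one-parameter stand-in simulator, not the 32-parameter Arabidopsis model):

```python
import numpy as np

def implausibility(z, mean_f, var_obs, var_disc, var_emul):
    """Implausibility I(x): standardized distance between observation z and
    the (emulated) model output, combining all three variance sources."""
    return np.abs(z - mean_f) / np.sqrt(var_obs + var_disc + var_emul)

# Toy stand-in for a slow simulator: f(x) = x**2. In the real application
# mean_f and var_emul would come from a fast Bayesian emulator of the
# systems-biology model rather than the exact function.
x = np.linspace(0.0, 3.0, 301)          # candidate rate-parameter values
mean_f = x**2                            # "emulator" mean (here exact)
var_emul = 0.0                           # emulator variance (exact function)
z, var_obs, var_disc = 4.0, 0.1, 0.1     # observation and its uncertainties

I = implausibility(z, mean_f, var_obs, var_disc, var_emul)
non_implausible = x[I < 3.0]             # retained region of parameter space
```

Iterating this cut, with the emulator refit on the retained region at each wave, is the "iterative history match" the abstract refers to.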

  16. Survival analysis with functional covariates for partial follow-up studies.

    PubMed

    Fang, Hong-Bin; Wu, Tong Tong; Rapoport, Aaron P; Tan, Ming

    2016-12-01

    Predictive or prognostic analysis plays an increasingly important role in the era of personalized medicine in identifying the subsets of patients who may benefit most from a treatment. Although various time-dependent covariate models are available, such models require that covariates be followed over the whole follow-up period. This article studies a new class of functional survival models in which the covariates are monitored only in a time interval that is shorter than the whole follow-up period. The work is motivated by the analysis of a longitudinal study of advanced myeloma patients who received stem cell transplants and T cell infusions after the transplants. Absolute lymphocyte cell counts were collected serially during hospitalization. Patients continue to be followed after hospitalization if they are alive, but their absolute lymphocyte cell counts can no longer be measured. A further complication is that the absolute lymphocyte cell counts are sparsely and irregularly measured. The conventional Cox model with time-varying covariates is not applicable because of the differing lengths of observation periods. Analysis based on any single observation obviously underutilizes the available information and, more seriously, may yield misleading results. This so-called partial follow-up study design represents an increasingly common predictive modeling problem in which serial multiple biomarkers are available up to a certain time point that is shorter than the total length of follow-up. We therefore propose a solution for the partial follow-up design. The new method combines functional principal components analysis with survival analysis, including selection of the functional covariates. It also has the advantage of handling sparse and irregularly measured longitudinal covariate observations and measurement errors. Our analysis based on functional principal components reveals that it is the patterns of the trajectories of absolute lymphocyte cell counts, rather than the actual counts, that affect patients' disease-free survival time. © The Author(s) 2014.

  17. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring model output and observations together for evaluation. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available, and being developed, for the CMIP6 experiments.

  18. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data

    USGS Publications Warehouse

    Dorazio, Robert M.

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar – and often identical – inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

  19. Bayes and empirical Bayes estimators of abundance and density from spatial capture-recapture data.

    PubMed

    Dorazio, Robert M

    2013-01-01

    In capture-recapture and mark-resight surveys, movements of individuals both within and between sampling periods can alter the susceptibility of individuals to detection over the region of sampling. In these circumstances spatially explicit capture-recapture (SECR) models, which incorporate the observed locations of individuals, allow population density and abundance to be estimated while accounting for differences in detectability of individuals. In this paper I propose two Bayesian SECR models, one for the analysis of recaptures observed in trapping arrays and another for the analysis of recaptures observed in area searches. In formulating these models I used distinct submodels to specify the distribution of individual home-range centers and the observable recaptures associated with these individuals. This separation of ecological and observational processes allowed me to derive a formal connection between Bayes and empirical Bayes estimators of population abundance that has not been established previously. I showed that this connection applies to every Poisson point-process model of SECR data and provides theoretical support for a previously proposed estimator of abundance based on recaptures in trapping arrays. To illustrate results of both classical and Bayesian methods of analysis, I compared Bayes and empirical Bayes estimates of abundance and density using recaptures from simulated and real populations of animals. Real populations included two iconic datasets: recaptures of tigers detected in camera-trap surveys and recaptures of lizards detected in area-search surveys. In the datasets I analyzed, classical and Bayesian methods provided similar - and often identical - inferences, which is not surprising given the sample sizes and the noninformative priors used in the analyses.

  20. Gravity Modeling for Variable Fidelity Environments

    NASA Technical Reports Server (NTRS)

    Madden, Michael M.

    2006-01-01

    Aerospace simulations can model worlds, such as the Earth, with differing levels of fidelity. The simulation may represent the world as a plane, a sphere, an ellipsoid, or a high-order closed surface. The world may or may not rotate. The user may select lower fidelity models based on computational limits, a need for simplified analysis, or comparison to other data. However, the user will also wish to retain a close semblance of behavior to the real world. The effects of gravity on objects are an important component of modeling real-world behavior. Engineers generally equate the term gravity with the observed free-fall acceleration. However, free-fall acceleration is not the same for all observers. To observers on the surface of a rotating world, free-fall acceleration is the sum of gravitational attraction and the centrifugal acceleration due to the world's rotation. On the other hand, free-fall acceleration equals gravitational attraction for an observer in inertial space. Surface-observed simulations (e.g. aircraft), which use non-rotating world models, may choose to model observed free-fall acceleration as the gravity term; such a model actually combines gravitational attraction with centrifugal acceleration due to the Earth's rotation. However, this modeling choice invites confusion as one evolves the simulation to higher fidelity world models or adds inertial observers. Care must be taken to model gravity in concert with the world model to avoid degrading the fidelity of modeling observed free fall. The paper goes into greater depth on gravity modeling and the physical disparities and synergies that arise when coupling specific gravity models with world models.
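    The decomposition described in the abstract can be made concrete with a small numeric sketch (spherical rotating Earth; the constants are standard values, and the simple radial formula is an illustrative approximation rather than the paper's model):

```python
import math

# For a surface observer on a rotating spherical Earth at latitude phi,
# the radial component of observed free-fall acceleration is approximately
#   g_observed = g_attraction - omega^2 * r * cos^2(phi),
# i.e. gravitational attraction minus the radial part of the centrifugal
# acceleration. An inertial observer sees g_attraction alone.
GM = 3.986004418e14      # m^3/s^2, Earth's gravitational parameter
R = 6378137.0            # m, equatorial radius (spherical approximation)
OMEGA = 7.292115e-5      # rad/s, Earth's rotation rate

def free_fall(lat_deg):
    phi = math.radians(lat_deg)
    g_attraction = GM / R**2                          # inertial-frame gravity
    centrifugal = OMEGA**2 * R * math.cos(phi)**2     # radial centrifugal term
    return g_attraction - centrifugal

# At the equator the centrifugal reduction is about 0.034 m/s^2;
# at the poles the observed and inertial values coincide.
```

A non-rotating world model that bakes the equatorial value of `free_fall` into its "gravity" constant is exactly the combined model the abstract warns about when moving to rotating, higher-fidelity worlds.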

  1. Standard surface-reflectance model and illuminant estimation

    NASA Technical Reports Server (NTRS)

    Tominaga, Shoji; Wandell, Brian A.

    1989-01-01

    A vector analysis technique was adopted to test the standard reflectance model. A computational model was developed to determine the components of the observed spectra and an estimate of the illuminant was obtained without using a reference white standard. The accuracy of the standard model is evaluated.

  2. Description of the GMAO OSSE for Weather Analysis Software Package: Version 3

    NASA Technical Reports Server (NTRS)

    Koster, Randal D. (Editor); Errico, Ronald M.; Prive, Nikki C.; Carvalho, David; Sienkiewicz, Meta; El Akkraoui, Amal; Guo, Jing; Todling, Ricardo; McCarty, Will; Putman, William M.; hide

    2017-01-01

    The Global Modeling and Assimilation Office (GMAO) at the NASA Goddard Space Flight Center has developed software and products for conducting observing system simulation experiments (OSSEs) for weather analysis applications. Such applications include estimations of potential effects of new observing instruments or data assimilation techniques on improving weather analysis and forecasts. The GMAO software creates simulated observations from nature run (NR) data sets and adds simulated errors to those observations. The algorithms employed are much more sophisticated, adding a much greater degree of realism, compared with OSSE systems currently available elsewhere. The algorithms employed, software designs, and validation procedures are described in this document. Instructions for using the software are also provided.

  3. Coupled Data Assimilation for Integrated Earth System Analysis and Prediction: Goals, Challenges, and Recommendations

    NASA Technical Reports Server (NTRS)

    Penny, Stephen G.; Akella, Santha; Buehner, Mark; Chevallier, Matthieu; Counillon, Francois; Draper, Clara; Frolov, Sergey; Fujii, Yosuke; Karspeck, Alicia; Kumar, Arun

    2017-01-01

    The purpose of this report is to identify fundamental issues for coupled data assimilation (CDA), such as gaps in science and limitations in forecasting systems, in order to provide guidance to the World Meteorological Organization (WMO) on how to facilitate more rapid progress internationally. Coupled Earth system modeling provides the opportunity to extend skillful atmospheric forecasts beyond the traditional two-week barrier by extracting skill from low-frequency state components such as the land, ocean, and sea ice. More generally, coupled models are needed to support seamless prediction systems that span timescales from weather, subseasonal to seasonal (S2S), multiyear, and decadal. Therefore, initialization methods are needed for coupled Earth system models, either applied to each individual component (called Weakly Coupled Data Assimilation - WCDA) or applied to the coupled Earth system model as a whole (called Strongly Coupled Data Assimilation - SCDA). Using CDA, in which model forecasts and potentially the state estimation are performed jointly, each model domain benefits from observations in other domains either directly, using error covariance information known at the time of the analysis (SCDA), or indirectly, through flux interactions at the model boundaries (WCDA). Because the non-atmospheric domains are generally under-observed compared to the atmosphere, CDA provides a significant advantage over single-domain analyses. Next, we provide a synopsis of goals, challenges, and recommendations to advance CDA. Goals: (a) Extend predictive skill beyond the current capability of NWP (e.g. as demonstrated by improving forecast skill scores), (b) produce physically consistent initial conditions for coupled numerical prediction systems and reanalyses (including consistent fluxes at the domain interfaces), (c) make best use of existing observations by allowing observations from each domain to influence and improve the full Earth system analysis, (d) develop a robust observation-based identification and understanding of mechanisms that determine the variability of weather and climate, (e) identify critical weaknesses in coupled models and the Earth observing system, (f) generate full-field estimates of unobserved or sparsely observed variables, (g) improve the estimation of the external forcings causing changes to climate, (h) transition successes from idealized CDA experiments to real-world applications. Challenges: (a) Modeling at the interfaces between interacting components of coupled Earth system models may be inadequate for estimating uncertainty or error covariances between domains, (b) current data assimilation methods may be insufficient to simultaneously analyze domains containing multiple spatiotemporal scales of interest, (c) there is no standardization of observation data or their delivery systems across domains, (d) the size and complexity of many large-scale coupled Earth system models makes it difficult to accurately represent uncertainty due to model parameters and coupling parameters, (e) model errors lead to local biases that can transfer between the different Earth system components and lead to coupled model biases and long-term model drift, (f) information propagation across model components with different spatiotemporal scales is extremely complicated and must be improved in current coupled modeling frameworks, (g) there is insufficient knowledge of how to represent evolving errors in non-atmospheric model components (e.g. sea ice, land, and ocean) on the timescales of NWP.

  4. Surface Winds and Dust Biases in Climate Models

    NASA Astrophysics Data System (ADS)

    Evan, A. T.

    2018-01-01

    An analysis of North African dust from models participating in the Fifth Climate Model Intercomparison Project (CMIP5) suggested that, when forced by observed sea surface temperatures, these models were unable to reproduce any aspects of the observed year-to-year variability in dust from North Africa. Consequently, there would be little reason to have confidence in the models' projections of changes in dust over the 21st century. However, no subsequent study has elucidated the root causes of the disagreement between CMIP5 and observed dust. Here I develop an idealized model of dust emission and then use this model to show that, over North Africa, such biases in CMIP5 models are due to errors in the surface wind fields and not to the representation of dust emission processes. These results also suggest that because the surface wind field over North Africa is highly spatially autocorrelated, intermodel differences in the spatial structure of dust emission have little effect on the relative change in year-to-year dust emission over the continent. I use these results to show that similar biases in North African dust in the NASA Modern Era Retrospective analysis for Research and Applications (MERRA) version 2 result from surface wind field biases, and that these wind biases were not present in the first version of MERRA.

  5. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    NASA Technical Reports Server (NTRS)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and the Rapid Update Cycle (RUC), as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
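    The logistic models above output a probability of contrail occurrence that is then thresholded for scoring. A minimal sketch of that form (the feature choices and coefficient values here are hypothetical illustrations, not the fitted SURFACE or OUTBREAK coefficients):

```python
import math

def contrail_probability(features, coef, intercept):
    """Logistic model of persistent-contrail occurrence:
    p = sigmoid(b0 + b . x). Coefficients would normally be fit by
    maximum likelihood against the contrail observation record."""
    z = intercept + sum(b * x for b, x in zip(coef, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical predictors: upper-tropospheric relative humidity (%)
# and temperature (deg C) at flight level.
p = contrail_probability([85.0, -55.0], coef=[0.08, -0.10], intercept=-6.0)
forecast = p >= 0.5   # thresholded yes/no forecast used when scoring accuracy
```

Scoring such yes/no forecasts against observed contrail occurrence is what produces the mean-accuracy figures quoted in the abstract.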

  6. Air Quality Forecasts Using the NASA GEOS Model: A Unified Tool from Local to Global Scales

    NASA Technical Reports Server (NTRS)

    Knowland, E. Emma; Keller, Christoph; Nielsen, J. Eric; Orbe, Clara; Ott, Lesley; Pawson, Steven; Saunders, Emily; Duncan, Bryan; Cook, Melanie; Liu, Junhua; hide

    2017-01-01

    We provide an introduction to a new high-resolution (0.25 degree) global composition forecast produced by NASA's Global Modeling and Assimilation Office. The NASA Goddard Earth Observing System version 5 (GEOS-5) model has been expanded to provide global near-real-time forecasts of atmospheric composition at a horizontal resolution of 0.25 degrees (approximately 25 km). Previously, this combination of detailed chemistry and resolution was only provided by regional models. This system combines the operational GEOS-5 weather forecasting model with the state-of-the-science GEOS-Chem chemistry module (version 11) to provide detailed chemical analysis of a wide range of air pollutants such as ozone, carbon monoxide, nitrogen oxides, and fine particulate matter (PM2.5). These are the highest-resolution global composition forecasts currently available to the public. Evaluation and validation of modeled trace gases and aerosols against surface and satellite observations will be presented for constituents relevant to health-based air quality standards. Comparisons of modeled trace gases and aerosols against satellite observations show that the model produces realistic concentrations of atmospheric constituents in the free troposphere. Model comparisons against surface observations highlight the model's capability to capture the diurnal variability of air pollutants under a variety of meteorological conditions. The GEOS-5 composition forecasting system offers a new tool for scientists and the public health community, and is being developed jointly with several government and non-profit partners. Potential applications include air quality warnings, flight campaign planning, and exposure studies using the archived analysis fields.

  7. Diagnostics for generalized linear hierarchical models in network meta-analysis.

    PubMed

    Zhao, Hong; Hodges, James S; Carlin, Bradley P

    2017-09-01

    Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.

  8. V3885 Sagittarius: A Comparison With a Range of Standard Model Accretion Disks

    NASA Technical Reports Server (NTRS)

    Linnell, Albert P.; Godon, Patrick; Hubeny, Ivan; Sion, Edward M; Szkody, Paula; Barrett, Paul E.

    2009-01-01

    A chi-squared analysis of standard model accretion disk synthetic spectrum fits to combined Far Ultraviolet Spectroscopic Explorer and Space Telescope Imaging Spectrograph spectra of V3885 Sagittarius, on an absolute flux basis, selects a model that accurately represents the observed spectral energy distribution. Calculation of the synthetic spectrum requires the following system parameters. The cataclysmic variable secondary star period-mass relation calibrated by Knigge in 2006 and 2007 sets the secondary component mass. A mean white dwarf (WD) mass from the same study, which is consistent with an observationally determined mass ratio, sets the adopted WD mass of 0.7 M(solar mass), and the WD radius follows from standard theoretical models. The adopted inclination, i = 65 deg, is a literature consensus, and is subsequently supported by the chi-squared analysis. The mass transfer rate is the remaining parameter needed to set the accretion disk T_eff profile, and the Hipparcos parallax constrains that parameter to a mass transfer rate of (5.0 +/- 2.0) x 10^-9 M(solar mass)/yr by comparison with observed spectra. The fit to the observed spectra adopts the contribution of a 57,000 +/- 5000 K WD. The model thus provides realistic constraints on the mass transfer rate and T_eff for a large mass transfer system above the period gap.

  9. Vector space methods of photometric analysis - Applications to O stars and interstellar reddening

    NASA Technical Reports Server (NTRS)

    Massa, D.; Lillie, C. F.

    1978-01-01

    A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.

  10. An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements

    DOE PAGES

    Zhai, Zhongxu; Blanton, Michael; Slosar, Anze; ...

    2017-12-01

    Here, we compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H 0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.

  11. An Evaluation of Cosmological Models from the Expansion and Growth of Structure Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Zhongxu; Blanton, Michael; Slosar, Anze

    Here, we compare a large suite of theoretical cosmological models to observational data from the cosmic microwave background, baryon acoustic oscillation measurements of expansion, Type Ia supernova measurements of expansion, redshift space distortion measurements of the growth of structure, and the local Hubble constant. Our theoretical models include parametrizations of dark energy as well as physical models of dark energy and modified gravity. We determine the constraints on the model parameters, incorporating the redshift space distortion data directly in the analysis. To determine whether models can be ruled out, we evaluate the p-value (the probability under the model of obtaining data as bad or worse than the observed data). In our comparison, we find the well-known tension of H 0 with the other data; no model resolves this tension successfully. Among the models we consider, the large-scale growth of structure data does not affect the modified gravity models as a category particularly differently from dark energy models; it matters for some modified gravity models but not others, and the same is true for dark energy models. We compute predicted observables for each model under current observational constraints, and identify models for which future observational constraints will be particularly informative.

  12. Ensemble-sensitivity Analysis Based Observation Targeting for Mesoscale Convection Forecasts and Factors Influencing Observation-Impact Prediction

    NASA Astrophysics Data System (ADS)

    Hill, A.; Weiss, C.; Ancell, B. C.

    2017-12-01

    The basic premise of observation targeting is that additional observations, when gathered and assimilated with a numerical weather prediction (NWP) model, will produce a more accurate forecast related to a specific phenomenon. Ensemble-sensitivity analysis (ESA; Ancell and Hakim 2007; Torn and Hakim 2008) is a tool capable of accurately estimating the proper location of targeted observations in areas that have initial model uncertainty and large error growth, as well as predicting the reduction of forecast variance due to the assimilated observation. ESA relates an ensemble of NWP model forecasts, specifically an ensemble of scalar forecast metrics, linearly to earlier model states. A thorough investigation is presented to determine how different factors of the forecast process affect our ability to successfully target new observations for mesoscale convection forecasts. Our primary goals for this work are to determine: (1) whether targeted observations have greater positive impact than non-targeted (i.e. randomly chosen) observations; (2) whether there are lead-time constraints on targeting for convection; (3) how inflation, localization, and the assimilation filter influence impact prediction and realized results; (4) whether there are differences between targeted observations at the surface versus aloft; and (5) how physics errors and nonlinearity may augment observation impacts. Ten cases of dryline-initiated convection between 2011 and 2013 are simulated within a simplified OSSE framework and presented here. Ensemble simulations are produced from a cycling system that utilizes the Weather Research and Forecasting (WRF) model v3.8.1 within the Data Assimilation Research Testbed (DART). A "truth" (nature) simulation is produced by supplying a 3-km WRF run with GFS analyses and integrating the model forward 90 hours, from the beginning of ensemble initialization through the end of the forecast. 
Target locations for surface and radiosonde observations are computed 6, 12, and 18 hours into the forecast based on a chosen scalar forecast response metric (e.g., maximum reflectivity at convection initiation). A variety of experiments are designed to achieve the aforementioned goals and will be presented, along with their results, detailing the feasibility of targeting for mesoscale convection forecasts.
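The linear relation at the heart of ESA can be sketched with synthetic numbers. This is a hedged illustration only: the ensemble size, variable names, and values below are assumptions, not the study's configuration. The sensitivity of a scalar forecast metric J to an earlier state variable x is estimated as cov(J, x) / var(x) across ensemble members.

```python
# Minimal ensemble-sensitivity sketch with synthetic data (illustrative only).
import random

random.seed(0)
n_members = 50

# Hypothetical earlier-state variable (e.g., moisture at one grid point)
x = [random.gauss(0.0, 1.0) for _ in range(n_members)]
# Hypothetical scalar forecast metric, linearly related to x plus forecast
# noise -- the linearity that ESA assumes.
J = [2.0 * xi + random.gauss(0.0, 0.5) for xi in x]

def sensitivity(J, x):
    """Ensemble sensitivity dJ/dx ~ cov(J, x) / var(x)."""
    n = len(x)
    mx = sum(x) / n
    mJ = sum(J) / n
    cov = sum((xi - mx) * (Ji - mJ) for xi, Ji in zip(x, J)) / (n - 1)
    var = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    return cov / var

dJdx = sensitivity(J, x)
print(round(dJdx, 2))  # close to the true slope of 2.0
```

A targeting decision would then favor locations where this sensitivity, combined with the initial-condition variance, predicts the largest reduction in forecast variance.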

  13. Distribution of lod scores in oligogenic linkage analysis.

    PubMed

    Williams, J T; North, K E; Martin, L J; Comuzzie, A G; Göring, H H; Blangero, J

    2001-01-01

    In variance component oligogenic linkage analysis, the residual additive genetic variance can be estimated at its lower bound of zero when estimating the effect of the ith quantitative trait locus. Using quantitative trait Q1 from the Genetic Analysis Workshop 12 simulated general population data, we compare the observed lod scores from oligogenic linkage analysis with the empirical lod score distribution under a null model of no linkage. We find that zero residual additive genetic variance in the null model alters the usual distribution of the likelihood-ratio statistic.
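For context, the textbook null distribution being altered here can be simulated directly: with a single variance component constrained to be non-negative, the likelihood-ratio statistic is asymptotically a 50:50 mixture of a point mass at zero and a chi-square with 1 df (Self and Liang 1987), and lod = LRT / (2 ln 10). The sketch below simulates only this standard case; the record's point is that a second bounded component changes the mixture.

```python
# Simulate the standard boundary null distribution of lod scores:
# LRT ~ 0.5 * (point mass at 0) + 0.5 * chi-square(1 df); lod = LRT/(2 ln 10).
import math
import random

random.seed(1)

def simulate_null_lods(n_sims):
    lods = []
    for _ in range(n_sims):
        if random.random() < 0.5:
            lrt = 0.0                # estimate pinned at the boundary
        else:
            z = random.gauss(0.0, 1.0)
            lrt = z * z              # chi-square with 1 df
        lods.append(lrt / (2.0 * math.log(10.0)))
    return lods

lods = simulate_null_lods(100_000)
# Empirical pointwise significance of lod = 3 under this mixture null
p = sum(1 for l in lods if l >= 3.0) / len(lods)
print(p)  # theory: 0.5 * P(chi2_1 >= 6 ln 10) ~ 1e-4
```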

  14. VII. New Kr IV-VII Oscillator Strengths and an Improved Spectral Analysis of the Hot, Hydrogen-deficient DO-type White Dwarf RE 0503-289

    NASA Technical Reports Server (NTRS)

    Rauch, T.; Quinet, P.; Hoyer, D.; Werner, K.; Richter, P.; Kruk, J. W.; Demleitner, M.

    2016-01-01

    For the spectral analysis of high-resolution and high signal-to-noise (SN) spectra of hot stars, state-of-the-art non-local thermodynamic equilibrium (NLTE) model atmospheres are mandatory. These are strongly dependent on the reliability of the atomic data used for their calculation. Aims. New Kr IV-VII oscillator strengths for a large number of lines enable us to construct more detailed model atoms for our NLTE model-atmosphere calculations. This enables us to search for additional Kr lines in observed spectra and to improve Kr abundance determinations. Methods. We calculated Kr IV-VII oscillator strengths to consider radiative and collisional bound-bound transitions in detail in our NLTE stellar-atmosphere models for the analysis of Kr lines that are exhibited in high-resolution and high SN ultraviolet (UV) observations of the hot white dwarf RE 0503-289.

  15. Six-hourly time series of horizontal troposphere gradients in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Landskron, Daniel; Hofmeister, Armin; Mayer, David; Böhm, Johannes

    2016-04-01

    Consideration of horizontal gradients is indispensable for high-precision VLBI and GNSS analysis. As a rule of thumb, all observations below 15 degrees elevation need to be corrected for the influence of azimuthal asymmetry on the delay times, which is mainly a product of the non-spherical shape of the atmosphere and ever-changing weather conditions. Based on the well-known gradient estimation model by Chen and Herring (1997), we developed an augmented gradient model with additional parameters which are determined from ray-traced delays for the complete history of VLBI observations. As input to the ray-tracer, we used operational and re-analysis data from the European Centre for Medium-Range Weather Forecasts. Finally, we applied those a priori gradient parameters to VLBI analysis along with other empirical gradient models and assessed their impact on baseline length repeatabilities as well as on celestial and terrestrial reference frames.

  16. Water quality management using statistical analysis and time-series prediction model

    NASA Astrophysics Data System (ADS)

    Parmar, Kulwinder Singh; Bhardwaj, Rashmi

    2014-12-01

    This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average model, future values of water quality parameters have been estimated. It is observed that the predictive model is useful at 95% confidence limits, and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. It is also observed that the predicted series is close to the original series, which provides a perfect fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural, or industrial use.
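As a much-simplified stand-in for the record's ARIMA forecasting, the sketch below fits an AR(1) model by least squares to a synthetic monthly series and produces multi-step forecasts with approximate 95% confidence limits; the interval widens with horizon as the forecast-error variance accumulates. The series and coefficient are invented for illustration, not the Yamuna data.

```python
# AR(1) forecasting sketch with approximate 95% confidence limits.
import math
import random

random.seed(2)

# Hypothetical monthly water-quality anomaly series
phi_true = 0.7
y = [0.0]
for _ in range(199):
    y.append(phi_true * y[-1] + random.gauss(0.0, 1.0))

# Least-squares AR(1) fit: y[t] ~ phi * y[t-1]
num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi = num / den
resid = [y[t] - phi * y[t - 1] for t in range(1, len(y))]
sigma2 = sum(r * r for r in resid) / (len(resid) - 1)

def forecast(y_last, phi, sigma2, h):
    """h-step AR(1) forecast with approximate 95% limits."""
    mean = (phi ** h) * y_last
    # Forecast-error variance: sigma2 * sum_{j=0}^{h-1} phi^(2j)
    var = sigma2 * sum(phi ** (2 * j) for j in range(h))
    half = 1.96 * math.sqrt(var)
    return mean, mean - half, mean + half

for h in (1, 6, 12):
    m, lo, hi = forecast(y[-1], phi, sigma2, h)
    print(h, round(m, 2), round(lo, 2), round(hi, 2))
```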

  17. VLA OH Zeeman Observations of the NGC 6334 Complex Source A

    NASA Astrophysics Data System (ADS)

    Mayo, E. A.; Sarma, A. P.; Troland, T. H.; Abel, N. P.

    2004-12-01

    We present a detailed analysis of the NGC 6334 complex source A, a compact continuum source in the SW region of the complex. Our intent is to determine the significance of the magnetic field in the support of the surrounding molecular cloud against gravitational collapse. We have performed OH 1665 and 1667 MHz observations taken with the Very Large Array in the BnA configuration and combined these data with the lower resolution CnB data of Sarma et al. (2000). These observations reveal magnetic fields with values of the order of 350 μG toward source A, with maximum fields reaching 500 μG. We have also theoretically modeled the molecular cloud surrounding source A using Cloudy, with the constraints to the model based on observation. This model provides significant information on the density of H2 through the cloud and also the relative density of H2 to OH, which is important to our analysis of the region. We will combine the knowledge gained through the Cloudy modeling with virial estimates to determine the significance of the magnetic field to the dynamics and evolution of source A.

  18. X-ray peak broadening analysis of AA 6061{sub 100-x} - x wt.% Al{sub 2}O{sub 3} nanocomposite prepared by mechanical alloying

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sivasankaran, S., E-mail: sivasankarangs1979@gmail.com; Sivaprasad, K., E-mail: ksp@nitt.edu; Narayanasamy, R., E-mail: narayan@nitt.edu

    2011-07-15

    Nanocrystalline AA 6061 alloy reinforced with alumina (0, 4, 8, and 12 wt.%) in an amorphized state was synthesized as composite powder by mechanical alloying and consolidated by a conventional powder metallurgy route. The as-milled and as-sintered (573 K and 673 K) nanocomposites were characterized by X-ray diffraction (XRD) and transmission electron microscopy (TEM). Peaks corresponding to the fine alumina were not observed in the XRD patterns due to amorphization. High-resolution transmission electron microscopy confirmed the presence of amorphized alumina in the Al lattice fringes. The crystallite size, lattice strain, deformation stress, and strain energy density of the AA 6061 matrix were determined precisely from the first five most intense reflections of the XRD patterns using simple Williamson-Hall models: the uniform deformation model, the uniform stress deformation model, and the uniform energy density deformation model. Among these, the uniform energy density deformation model was observed to be the best-fitting and most realistic model for mechanically alloyed powders. This model evidenced the anisotropic nature of the ball-milled powders. The XRD peaks of as-milled powder samples demonstrated considerable broadening with percentage of reinforcement due to grain refinement and lattice distortions during the same milling time (40 h). The as-sintered (673 K) unreinforced AA 6061 matrix crystallite size from the well-fitted uniform energy density deformation model was 98 nm. The as-milled and as-sintered (673 K) nanocrystalline matrix sizes for 12 wt.% Al{sub 2}O{sub 3}, well fitted by the uniform energy density deformation model, were 38 nm and 77 nm respectively, which indicates that the fine Al{sub 2}O{sub 3} pinned the matrix grain boundaries and prevented grain growth during sintering. Finally, the lattice parameter of the Al matrix in the as-milled and as-sintered conditions was also investigated in this paper.
Research highlights: {yields} Integral breadth methods using various Williamson-Hall models were investigated for line profile analysis. {yields} The uniform energy density deformation model is observed to be the best realistic model. {yields} The present analysis is useful for understanding the stress and the strain present in the nanocomposites.
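The simplest of the Williamson-Hall variants mentioned above, the uniform deformation model, can be sketched directly: for each reflection, beta*cos(theta) = K*lambda/D + 4*epsilon*sin(theta), so a straight-line fit of beta*cos(theta) against 4*sin(theta) yields the crystallite size D from the intercept and the lattice strain epsilon from the slope. The peak positions and widths below are illustrative values, not data from the paper.

```python
# Williamson-Hall uniform deformation model: fit beta*cos(theta) vs 4*sin(theta).
import math

K = 0.9                 # Scherrer constant
lam = 1.5406e-10        # Cu K-alpha wavelength, m

# Hypothetical (2theta in degrees, FWHM beta in radians) for five reflections
peaks = [(38.5, 0.0040), (44.7, 0.0046), (65.1, 0.0060),
         (78.2, 0.0070), (82.4, 0.0074)]

xs, ys = [], []
for two_theta, beta in peaks:
    th = math.radians(two_theta / 2.0)
    xs.append(4.0 * math.sin(th))
    ys.append(beta * math.cos(th))

# Ordinary least-squares line y = a + b*x
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

D = K * lam / a         # crystallite size from the intercept, m
epsilon = b             # lattice microstrain from the slope
print(round(D * 1e9, 1), "nm, strain", f"{epsilon:.2e}")
```

The uniform stress and uniform energy density variants modify the x-axis term with the elastic moduli along each reflection direction, which is what lets them capture the anisotropy noted in the highlights.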

  19. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth

    PubMed Central

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    A state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA.
The current study enables state observers to reflect crop requirements and, in simplified form, makes them feasible for practical applications, which is significant for developing intelligent greenhouse control systems for modern greenhouse production. PMID:28848565
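The parameter-pruning step can be illustrated with a much-simplified stand-in for CCA: rank candidate environmental parameters by their absolute Pearson correlation with a growth indicator and keep the dominant ones as observer inputs. The paper applies canonical correlation analysis over multiple physiological responses; this single-target ranking, on invented data, only shows the idea of pruning inputs before model fitting.

```python
# Correlation-based input selection sketch (a simplified stand-in for CCA).
import random

random.seed(3)
n = 120  # hypothetical number of sampling time points

# Hypothetical environmental parameters
temp = [random.gauss(24.0, 3.0) for _ in range(n)]
light = [random.gauss(300.0, 60.0) for _ in range(n)]
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

# Hypothetical growth indicator driven mainly by light, then temperature
growth = [0.1 * t + 0.01 * l + random.gauss(0.0, 0.5)
          for t, l in zip(temp, light)]

def pearson(a, b):
    na = len(a)
    ma, mb = sum(a) / na, sum(b) / na
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

candidates = {"temperature": temp, "light": light, "noise": noise}
ranked = sorted(candidates, key=lambda k: -abs(pearson(candidates[k], growth)))
print(ranked)  # 'light' should rank first; the uninformative channel last
```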

  20. A New Strategy in Observer Modeling for Greenhouse Cucumber Seedling Growth.

    PubMed

    Qiu, Quan; Zheng, Chenfei; Wang, Wenping; Qiao, Xiaojun; Bai, He; Yu, Jingquan; Shi, Kai

    2017-01-01

    A state observer is an essential component in computerized control loops for greenhouse-crop systems. However, current accomplishments in observer modeling for greenhouse-crop systems mainly focus on mass/energy balance, ignoring the physiological responses of crops. As a result, state observers for crop physiological responses are rarely developed, and control operations are typically made based on experience rather than actual crop requirements. In addition, existing observer models require a large number of parameters, leading to heavy computational load and poor application feasibility. To address these problems, we present a new state observer modeling strategy that takes both environmental information and crop physiological responses into consideration during the observer modeling process. Using greenhouse cucumber seedlings as an example, we sample 10 physiological parameters of cucumber seedlings at different time points during the exponential growth stage, and employ them to build growth state observers together with 8 environmental parameters. A support vector machine (SVM) acts as the mathematical tool for observer modeling. Canonical correlation analysis (CCA) is used to select the dominant environmental and physiological parameters in the modeling process. With the dominant parameters, simplified observer models are built and tested. We conduct contrast experiments with different input parameter combinations on simplified and un-simplified observers. Experimental results indicate that physiological information can improve the prediction accuracies of the growth state observers. Furthermore, the simplified observer models can give equivalent or even better performance than the un-simplified ones, which verifies the feasibility of CCA.
The current study enables state observers to reflect crop requirements and, in simplified form, makes them feasible for practical applications, which is significant for developing intelligent greenhouse control systems for modern greenhouse production.

  1. Ozone Temporal Variability in the Subarctic Region: Comparison of Satellite Measurements with Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Shved, G. M.; Virolainen, Ya. A.; Timofeyev, Yu. M.; Ermolenko, S. I.; Smyshlyaev, S. P.; Motsakov, M. A.; Kirner, O.

    2018-01-01

    Fourier and wavelet spectra of time series for the ozone column abundance in the atmospheric 0-25 and 25-60 km layers are analyzed from SBUV satellite observations and from numerical simulations based on the RSHU and EMAC models. The analysis uses datasets for three subarctic locations (St. Petersburg, Harestua, and Kiruna) for 2000-2014. The Fourier and wavelet spectra show periodicities in the range from 10 days to 10 years and from 1 day to 2 years, respectively. The comparison of the spectra shows overall agreement between the observational and modeled datasets. However, the analysis has revealed differences both between the measurements and the models and between the models themselves. The differences primarily concern the Rossby wave period region and the 11-year and semiannual periodicities. Possible reasons are given for the differences between the models and the measurements.

  2. Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data

    ERIC Educational Resources Information Center

    Xu, Shu; Blozis, Shelley A.

    2011-01-01

    Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…

  3. Global Energy and Water Budgets in MERRA

    NASA Technical Reports Server (NTRS)

    Bosilovich, Michael G.; Robertson, Franklin R.; Chen, Junye

    2010-01-01

    Reanalyses, retrospectively analyzing observations over climatological time scales, represent a merger between satellite observations and models to provide globally continuous data, and have improved over several generations. Balancing the Earth's global water and energy budgets has been a focus of research for more than two decades. Models tend toward their own climate, while remotely sensed observations have had varying degrees of uncertainty. This study evaluates the latest NASA reanalysis, called the Modern Era Retrospective-analysis for Research and Applications (MERRA), from a global water and energy cycles perspective. MERRA was configured to provide complete budgets in its output diagnostics, including the Incremental Analysis Update (IAU), the term that represents the observations' influence on the analyzed states, alongside the physical flux terms. Precipitation in reanalyses is typically sensitive to the observational analysis. For MERRA, the global mean precipitation bias and spatial variability are more comparable to merged satellite observations (GPCP and CMAP) than previous generations of reanalyses. Ocean evaporation also has a much lower value, which is comparable to observed data sets. The global energy budget shows that MERRA cloud effects may be generally weak, leading to excess shortwave radiation reaching the ocean surface. In evaluating the MERRA time series of budget terms, a significant change occurs that does not appear to be represented in observations. In 1999, the global analysis increment of water vapor changes sign from negative to positive, primarily leading to more oceanic precipitation. This change is coincident with the beginning of AMSU radiance assimilation. Previous and current reanalyses all exhibit some sensitivity to perturbations in the observation record, and this remains a significant research topic for reanalysis development. The effect of the changing observing system is evaluated for MERRA water and energy budget terms.

  4. Ocean Carbon States: Data Mining in Observations and Numerical Simulations Results

    NASA Astrophysics Data System (ADS)

    Latto, R.; Romanou, A.

    2017-12-01

    Advanced data mining techniques are rapidly becoming widely used in Climate and Earth Sciences with the purpose of extracting new meaningful information from increasingly larger and more complex datasets. This is particularly important in studies of the global carbon cycle, where any lack of understanding of its combined physical and biogeochemical drivers is detrimental to our ability to accurately describe, understand, and predict CO2 concentrations and their changes in the major carbon reservoirs. The analysis presented here evaluates the use of cluster analysis as a means of identifying and comparing spatial and temporal patterns extracted from observational and model datasets. As the observational data is organized into various regimes, which we will call "ocean carbon states", we gain insight into the physical and/or biogeochemical processes controlling the ocean carbon cycle as well as how well these processes are simulated by a state-of-the-art climate model. We find that cluster analysis effectively produces realistic, dynamic regimes that can be associated with specific processes at different temporal scales for both observations and the model. In addition, we show how these regimes can be used to illustrate and characterize the model biases in the model air-sea flux of CO2. These biases are attributed to biases in salinity, sea surface temperature, wind speed, and nitrate, which are then used to identify the physical processes that are inaccurately reproduced by the model. In this presentation, we provide a proof-of-concept application using simple datasets, and we expand to more complex ones, using several physical and biogeochemical variable pairs, thus providing considerable insight into the mechanisms and phases of the ocean carbon cycle over different temporal and spatial scales.
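The "ocean carbon states" idea of extracting regimes by clustering can be sketched with a toy k-means on synthetic (sea surface temperature, air-sea CO2 flux) pairs; the two regimes, variable pairing, and numbers below are invented for illustration, and real analyses cluster many physical and biogeochemical variable pairs.

```python
# Minimal k-means sketch: cluster (SST, CO2 flux) pairs into two regimes.
import math
import random

random.seed(4)

# Two hypothetical regimes: warm outgassing vs cold uptake
points = ([(random.gauss(25, 1), random.gauss(2, 0.5)) for _ in range(60)] +
          [(random.gauss(5, 1), random.gauss(-3, 0.5)) for _ in range(60)])

def kmeans(points, centers, iters=20):
    centers = list(centers)
    k = len(centers)
    for _ in range(iters):
        # Assignment step: nearest center by Euclidean distance
        labels = [min(range(k), key=lambda j: math.dist(p, centers[j]))
                  for p in points]
        # Update step: move each center to the mean of its members
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, labels

# Deterministic initialization at the extreme points, for reproducibility
centers, labels = kmeans(points, [min(points), max(points)])
print(sorted(round(c[0]) for c in centers))  # near the regime SSTs of 5 and 25
```

Comparing how observations and a model distribute into such regimes is then a natural way to localize and attribute model biases, as the abstract describes.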

  5. Diagnosing Warm Frontal Cloud Formation in a GCM: A Novel Approach Using Conditional Subsetting

    NASA Technical Reports Server (NTRS)

    Booth, James F.; Naud, Catherine M.; DelGenio, Anthony D.

    2013-01-01

    This study analyzes characteristics of clouds and vertical motion across extratropical cyclone warm fronts in the NASA Goddard Institute for Space Studies general circulation model. The validity of the modeled clouds is assessed using a combination of satellite observations from CloudSat, Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), Advanced Microwave Scanning Radiometer for Earth Observing System (AMSR-E), and the NASA Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis. The analysis focuses on developing cyclones, to test the model's ability to generate their initial structure. To begin, the extratropical cyclones and their warm fronts are objectively identified and cyclone-local fields are mapped into a vertical transect centered on the surface warm front. To further isolate specific physics, the cyclones are separated using conditional subsetting based on additional cyclone-local variables, and the differences between the subset means are analyzed. Conditional subsets are created based on 1) the transect clouds and 2) vertical motion; 3) the strength of the temperature gradient along the warm front, as well as the storm-local 4) wind speed and 5) precipitable water (PW). The analysis shows that the model does not generate enough frontal cloud, especially at low altitude. The subsetting results reveal that, compared to the observations, the model exhibits a decoupling between cloud formation at high and low altitudes across warm fronts and a weak sensitivity to moisture. These issues are caused in part by the parameterized convection and assumptions in the stratiform cloud scheme that are valid in the subtropics. On the other hand, the model generates proper covariability of low-altitude vertical motion and cloud at the warm front and a joint dependence of cloudiness on wind and PW.

  6. A Wrf-Chem Flash Rate Parameterization Scheme and LNO(x) Analysis of the 29-30 May 2012 Convective Event in Oklahoma During DC3

    NASA Technical Reports Server (NTRS)

    Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.

    2014-01-01

    The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). Based on the measurements taken of the 29-30 May 2012 Oklahoma thunderstorm, an analysis against a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the same event at 3-km horizontal resolution was performed. One of the main objectives was to include various flash rate parameterization schemes (FRPSs) in the model and identify which scheme(s) best captured the flash rates observed by the National Lightning Detection Network (NLDN) and Oklahoma Lightning Mapping Array (LMA). The comparison indicates how well the schemes predicted the timing, location, and number of lightning flashes. The FRPSs implemented in the model were based on the simulated thunderstorm's physical features, such as maximum vertical velocity, cloud top height, and updraft volume. Adjustment factors were added to each FRPS to best capture the observed flash trend, and a sensitivity study was performed to compare the range in model-simulated lightning-generated nitrogen oxides (LNOx) generated by each FRPS over the storm's lifetime. Based on the best FRPS, model-simulated LNOx was compared against aircraft-measured NOx. The trace gas analysis, along with the increased detail in the model specification of the vertical distribution of lightning flashes as suggested by the LMA data, provides guidance in determining the scenario of NO production per intracloud and cloud-to-ground flash that best matches the NOx mixing ratios observed by the aircraft.

  7. A WRF-Chem Flash Rate Parameterization Scheme and LNOx Analysis of the 29-30 May 2012 Convective Event in Oklahoma During DC3

    NASA Technical Reports Server (NTRS)

    Cummings, Kristin A.; Pickering, Kenneth E.; Barth, M.; Weinheimer, A.; Bela, M.; Li, Y.; Allen, D.; Bruning, E.; MacGorman, D.; Rutledge, S.

    2014-01-01

    The Deep Convective Clouds and Chemistry (DC3) field campaign in 2012 provided a plethora of aircraft and ground-based observations (e.g., trace gases, lightning and radar) to study deep convective storms, their convective transport of trace gases, and associated lightning occurrence and production of nitrogen oxides (NOx). Based on the measurements taken of the 29-30 May 2012 Oklahoma thunderstorm, an analysis against a Weather Research and Forecasting Chemistry (WRF-Chem) model simulation of the same event at 3-km horizontal resolution was performed. One of the main objectives was to include various flash rate parameterization schemes (FRPSs) in the model and identify which scheme(s) best captured the flash rates observed by the National Lightning Detection Network (NLDN) and Oklahoma Lightning Mapping Array (LMA). The comparison indicates how well the schemes predicted the timing, location, and number of lightning flashes. The FRPSs implemented in the model were based on the simulated thunderstorm's physical features, such as maximum vertical velocity, cloud top height, and updraft volume. Adjustment factors were applied to each FRPS to best capture the observed flash trend, and a sensitivity study was performed to compare the range in model-simulated lightning-generated nitrogen oxides (LNOx) generated by each FRPS over the storm's lifetime. Based on the best FRPS, model-simulated LNOx was compared against aircraft-measured NOx. The trace gas analysis, along with the increased detail in the model specification of the vertical distribution of lightning flashes as suggested by the LMA data, provides guidance in determining the scenario of NO production per intracloud and cloud-to-ground flash that best matches the NOx mixing ratios observed by the aircraft.

  8. Evaluation of NASA GEOS-ADAS Modeled Diurnal Warming Through Comparisons to SEVIRI and AMSR2 SST Observations

    NASA Astrophysics Data System (ADS)

    Gentemann, C. L.; Akella, S.

    2018-02-01

    An analysis of the ocean skin Sea Surface Temperature (SST) has been included in the Goddard Earth Observing System (GEOS) - Atmospheric Data Assimilation System (ADAS), Version 5 (GEOS-ADAS). This analysis is based on the GEOS atmospheric general circulation model (AGCM) that simulates near-surface diurnal warming and cool skin effects. Analysis for the skin SST is performed along with the atmospheric state, including Advanced Very High Resolution Radiometer (AVHRR) satellite radiance observations as part of the data assimilation system. One month (September, 2015) of GEOS-ADAS SSTs were compared to collocated satellite Spinning Enhanced Visible and InfraRed Imager (SEVIRI) and Advanced Microwave Scanning Radiometer 2 (AMSR2) SSTs to examine how the GEOS-ADAS diurnal warming compares to the satellite measured warming. The spatial distribution of warming compares well to the satellite observed distributions. Specific diurnal events are analyzed to examine variability within a single day. The dependence of diurnal warming on wind speed, time of day, and daily average insolation is also examined. Overall the magnitude of GEOS-ADAS warming is similar to the warming inferred from satellite retrievals, but several weaknesses in the GEOS-AGCM simulated diurnal warming are identified and directly related back to specific features in the formulation of the diurnal warming model.

  9. Transonic Unsteady Aerodynamics of the F/A-18E at Conditions Promoting Abrupt Wing Stall

    NASA Technical Reports Server (NTRS)

    Schuster, David M.; Byrd, James E.

    2003-01-01

    A transonic wind tunnel test of an 8% F/A-18E model was conducted in the NASA Langley Research Center (LaRC) 16-Foot Transonic Tunnel (16-Ft TT) to investigate the Abrupt Wing Stall (AWS) characteristics of this aircraft. During this test, both steady and unsteady measurements of balance loads, wing surface pressures, wing root bending moments, and outer wing accelerations were performed. The test was conducted with a wide range of model configurations and test conditions in an attempt to reproduce behavior indicative of the AWS phenomenon experienced on full-scale aircraft during flight tests. This paper focuses on the analysis of the unsteady data acquired during this test. Though the test apparatus was designed to be effectively rigid, model motions due to sting and balance flexibility were observed during the testing, particularly when the model was operating in the AWS flight regime. Correlations between observed aerodynamic frequencies and model structural frequencies are analyzed and presented. Significant shock motion and separated flow is observed as the aircraft pitches through the AWS region. A shock tracking strategy has been formulated to observe this phenomenon. Using this technique, the range of shock motion is readily determined as the aircraft encounters AWS conditions. Spectral analysis of the shock motion shows the frequencies at which the shock oscillates in the AWS region, and probability density function analysis of the shock location shows the propensity of the shock to take on a bi-stable and even tri-stable character in the AWS flight regime.
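The spectral-analysis step described above can be sketched on a synthetic shock-position record: a plain discrete Fourier transform recovers the dominant oscillation frequency. The 12 Hz signal, sampling rate, and noise level here are purely illustrative, not values from the test.

```python
# DFT sketch: recover the dominant frequency of a synthetic shock-position trace.
import cmath
import math
import random

random.seed(5)
fs = 256.0                        # sampling rate, Hz
n = 512
f0 = 12.0                         # hypothetical shock oscillation frequency
x = [math.sin(2 * math.pi * f0 * t / fs) + random.gauss(0, 0.3)
     for t in range(n)]

def dft_mag(x):
    """Magnitude spectrum up to the Nyquist bin (O(n^2), fine for a sketch)."""
    m = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / m)
                    for t in range(m))) for k in range(m // 2)]

mag = dft_mag(x)
k_peak = max(range(1, len(mag)), key=lambda k: mag[k])  # skip the DC bin
print(k_peak * fs / n)            # prints 12.0 (dominant frequency, Hz)
```

A histogram of the same trace would be the probability-density-function view used to expose bi-stable or tri-stable shock locations.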

  10. On the morphology of the scattering medium as seen by MST/ST radars

    NASA Technical Reports Server (NTRS)

    Gage, K. S.

    1983-01-01

    Much is learned about the morphology of the small scale structures of the atmosphere from analysis of echoes observed by MST radars. The use of physical models enables a synthesis of diverse observations. Each model contains an implicit assumption about the nature of the irregularity structure of the medium. A comparison is made between the irregularity structure implicit in several models and what is known about the structure of the medium.

  11. On the Predictability of Northeast Monsoon Rainfall over South Peninsular India in General Circulation Models

    NASA Astrophysics Data System (ADS)

    Nair, Archana; Acharya, Nachiketa; Singh, Ankita; Mohanty, U. C.; Panda, T. C.

    2013-11-01

    In this study the predictability of northeast monsoon (Oct-Nov-Dec) rainfall over peninsular India by eight general circulation model (GCM) outputs was analyzed. These GCM outputs (forecasts for the whole season issued in September) were compared with high-resolution observed gridded rainfall data obtained from the India Meteorological Department for the period 1982-2010. Rainfall, interannual variability (IAV), correlation coefficients, and index of agreement were examined for the outputs of eight GCMs and compared with observation. It was found that the models are able to reproduce rainfall and IAV to different extents. The predictive power of GCMs was also judged by determining the signal-to-noise ratio and the external error variance; it was noted that the predictive power of the models was usually very low. To examine dominant modes of interannual variability, empirical orthogonal function (EOF) analysis was also conducted. EOF analysis of the models revealed they were capable of representing the observed precipitation variability to some extent. The teleconnection between the sea surface temperature (SST) and northeast monsoon rainfall was also investigated and results suggest that during OND the SST over the equatorial Indian Ocean, the Bay of Bengal, the central Pacific Ocean (over Nino3 region), and the north and south Atlantic Ocean enhances northeast monsoon rainfall. This observed phenomenon is only predicted by the CCM3v6 model.

  12. A data model for environmental scientists

    NASA Astrophysics Data System (ADS)

    Kapeljushnik, O.; Beran, B.; Valentine, D.; van Ingen, C.; Zaslavsky, I.; Whitenack, T.

    2008-12-01

    Environmental science encompasses a wide range of disciplines from water chemistry to microbiology, ecology and atmospheric sciences. Studies often require working across disciplines which differ in their ways of describing and storing data, such that it is not possible to devise a monolithic one-size-fits-all data solution. Based on our experiences with the Consortium of the Universities for the Advancement of Hydrologic Science Inc. (CUAHSI) Observations Data Model and Berkeley Water Center FLUXNET carbon-climate work, and by examining standards like EPA's Water Quality Exchange (WQX), we have developed a flexible data model that allows extensions without the need to alter the schema, such that scientists can define custom metadata elements to describe their data, including observations and analysis methods as well as sensors and geographical features. The data model supports various types of observations, including fixed point and moving sensors, bottled samples, rasters from remote sensors and models, and categorical descriptions (e.g. taxonomy), by employing user-defined types when necessary. It leverages the ADO .NET Entity Framework to provide the semantic data models for differing disciplines, while maintaining a common schema below the entity layer. This abstraction layer simplifies data retrieval and manipulation by hiding the logic and complexity of the relational schema from users, thus allowing programmers and scientists to deal directly with objects such as observations, sensors, watersheds, river reaches, channel cross-sections, laboratory analysis methods and samples, as opposed to table joins, columns and rows.
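The extensibility idea, custom metadata without schema changes, can be sketched as a generic observation record whose metadata is a free-form mapping. The class and field names below are illustrative, not the CUAHSI ODM schema or the Entity Framework layer.

```python
# Sketch of an extensible observation record: new disciplines add metadata
# elements without altering the underlying structure.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Observation:
    variable: str                   # e.g., "water_temperature"
    value: float
    units: str
    metadata: Dict[str, Any] = field(default_factory=dict)  # user-defined

obs = Observation("water_temperature", 18.4, "degC",
                  metadata={"sensor": "thermistor-42", "depth_m": 0.5})
obs.metadata["lab_method"] = "EPA 170.1"   # extension, no schema change
print(obs.variable, obs.metadata["depth_m"])
```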

  13. Association between the SUMO4 M55V Polymorphism and Susceptibility to Type 2 Diabetes Mellitus: A Meta-analysis.

    PubMed

    Zhang, Qun; Liu, Di; Zhao, Zhong Yao; Sun, Qi; Ding, Li Xiang; Wang, You Xin

    2017-04-01

The aim of this study is to determine whether the SUMO4 M55V polymorphism is associated with susceptibility to type 2 diabetes mellitus (T2DM). A meta-analysis was performed to detect the potential association of the SUMO4 M55V polymorphism with susceptibility to T2DM under dominant, recessive, co-dominant (homozygous and heterozygous), and additive models. A total of eight articles including 10 case-control studies, with a total of 2932 cases and 2679 controls, were included in this meta-analysis. A significant association between the SUMO4 M55V polymorphism and susceptibility to T2DM was observed in the dominant model (GG + GA versus AA: OR = 1.21, 95% CI = 1.05-1.40, P = 0.009), recessive model (GG versus GA + AA: OR = 1.29, 95% CI = 1.07-1.356, P = 0.010), homozygous model (GG versus AA: OR = 1.41, 95% CI = 1.06-1.56, P = 0.001), and additive model (G versus A: OR = 1.18, 95% CI = 1.08-1.29, P = 0.001), and a marginally significant association in the heterozygous model (GA versus AA: OR = 1.16, 95% CI = 0.98-1.36, P = 0.080). In subgroup analyses, significant associations were observed in the Chinese population under the four genetic models excluding the heterozygous model, whereas no statistically significant associations were observed in the Japanese population under any of the five genetic models. The meta-analysis demonstrated that the G allele of the SUMO4 M55V polymorphism may be a susceptibility locus for T2DM, mainly in the Chinese population, while the association in other ethnic populations needs to be further validated in studies with relatively large samples. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
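The pooled odds ratios above come from combining per-study ORs. A minimal inverse-variance (fixed-effect) pooling sketch in Python; the record does not state which pooling method was used, and the cell counts below are invented for illustration:

```python
import math

def pooled_or(studies):
    """Inverse-variance (fixed-effect) pooled odds ratio with 95% CI.
    studies: list of (a, b, c, d) 2x2 cell counts, e.g. for the dominant
    model: a = GG+GA cases, b = GG+GA controls, c = AA cases, d = AA controls."""
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))   # per-study log odds ratio
        var = 1/a + 1/b + 1/c + 1/d            # Woolf's variance estimate
        w = 1.0 / var                          # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1.0 / den)
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    return math.exp(pooled), math.exp(lo), math.exp(hi)

# hypothetical counts for two case-control studies (dominant model)
orr, lo, hi = pooled_or([(120, 100, 80, 90), (200, 180, 150, 160)])
print(round(orr, 2), round(lo, 2), round(hi, 2))
```

The association is declared significant at the 5% level when the 95% CI excludes 1.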

  14. Use of Linear Prediction Uncertainty Analysis to Guide Conditioning of Models Simulating Surface-Water/Groundwater Interactions

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.; Doherty, J.

    2011-12-01

    Linear prediction uncertainty analysis in a Bayesian framework was applied to guide the conditioning of an integrated surface water/groundwater model that will be used to predict the effects of groundwater withdrawals on surface-water and groundwater flows. Linear prediction uncertainty analysis is an effective approach for identifying (1) raw and processed data most effective for model conditioning prior to inversion, (2) specific observations and periods of time critically sensitive to specific predictions, and (3) additional observation data that would reduce model uncertainty relative to specific predictions. We present results for a two-dimensional groundwater model of a 2,186 km2 area of the Biscayne aquifer in south Florida implicitly coupled to a surface-water routing model of the actively managed canal system. The model domain includes 5 municipal well fields withdrawing more than 1 Mm3/day and 17 operable surface-water control structures that control freshwater releases from the Everglades and freshwater discharges to Biscayne Bay. More than 10 years of daily observation data from 35 groundwater wells and 24 surface water gages are available to condition model parameters. A dense parameterization was used to fully characterize the contribution of the inversion null space to predictive uncertainty and included bias-correction parameters. This approach allows better resolution of the boundary between the inversion null space and solution space. Bias-correction parameters (e.g., rainfall, potential evapotranspiration, and structure flow multipliers) absorb information that is present in structural noise that may otherwise contaminate the estimation of more physically-based model parameters. This allows greater precision in predictions that are entirely solution-space dependent, and reduces the propensity for bias in predictions that are not. 
Results show that application of this analysis is an effective means of identifying those surface-water and groundwater data, both raw and processed, that minimize predictive uncertainty, while simultaneously identifying the maximum solution-space dimensionality of the inverse problem supported by the data.
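The linear (first-order) predictive uncertainty machinery behind this kind of analysis can be sketched compactly: the prior variance of a prediction is propagated through its sensitivity vector, and conditioning on observations reduces it via a Schur-complement update. The matrices below are random illustrative values, not quantities from the Biscayne model:

```python
import numpy as np

# Linear predictive uncertainty sketch: prediction s = y^T p for parameters p,
# observations h = Z p + e, prior parameter covariance Cp, noise covariance Ce.
rng = np.random.default_rng(1)
n_par, n_obs = 8, 5
Z  = rng.normal(size=(n_obs, n_par))   # observation sensitivities (Jacobian)
y  = rng.normal(size=n_par)            # prediction sensitivity vector
Cp = np.eye(n_par) * 0.5               # prior parameter covariance
Ce = np.eye(n_obs) * 0.1               # observation-noise covariance

prior_var = y @ Cp @ y
gain = Cp @ Z.T @ np.linalg.inv(Z @ Cp @ Z.T + Ce)
post_var = y @ (Cp - gain @ Z @ Cp) @ y   # posterior (conditioned) variance

# Conditioning on data can only reduce (or leave unchanged) the linear
# predictive variance; the gap quantifies the worth of the observations.
print(post_var <= prior_var)
```

Ranking candidate observations by how much each reduces `post_var` is the essence of using such analysis to guide model conditioning.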

  15. Continuing Studies in Support of Ultraviolet Observations of Planetary Atmospheres

    NASA Technical Reports Server (NTRS)

    Clark, John

    1997-01-01

This program was a one-year extension of an earlier Planetary Atmospheres program grant, covering the period 1 August 1996 through 30 September 1997. The grant was for supporting work to complement an active program observing planetary atmospheres with Earth-orbital telescopes, principally the Hubble Space Telescope (HST). The recent concentration of this work has been on HST observations of Jupiter's upper atmosphere and aurora, but it has also included observations of Io, serendipitous observations of asteroids, and observations of the velocity structure in the interplanetary medium. The observations of Jupiter have been at vacuum ultraviolet wavelengths, including imaging and spectroscopy of the auroral and airglow emissions. The most recent HST observations have been at the same time as in situ measurements made by the Galileo orbiter instruments, as reflected in the meeting presentations listed below. Concentrated efforts have been applied in this year to the following projects: The analysis of HST WFPC 2 images of Jupiter's aurora, including the Io footprint emissions. We have performed a comparative analysis of the Io footprint locations with two magnetic field models, studied the statistical properties of the apparent dawn auroral storms on Jupiter, and found various other repeated patterns in Jupiter's aurora. Analysis and modeling of airglow and auroral Ly alpha emission line profiles from Jupiter. This has included modeling the auroral line profiles, including the energy degradation of precipitating charged particles and radiative transfer of the emerging emissions. Jupiter's auroral emission line profile is self-absorbed, since it is produced by an internal source, and the resulting emission with a deep central absorption from the overlying atmosphere permits modeling of the depth of the emissions, plus the motion of the emitting layer with respect to the overlying atmospheric column from the observed Doppler shift of the central absorption.
By contrast the airglow emission line, which is dominated by resonant scattering of solar emission, has no central absorption, but displays rapid time variations and broad wings, indicative of a superthermal component (or corona) in Jupiter's upper atmosphere. Modeling of the observed motions of the plumes produced after the impacts of the fragments of Comet S/L-9 with Jupiter in July 1994, from the HST WFPC 2 imaging series.

  16. Evaluation of the Analysis Influence on Transport in Reanalysis Regional Water Cycles

    NASA Technical Reports Server (NTRS)

    Bosilovich, M. G.; Chen, J.; Robertson, F. R.

    2011-01-01

Regional water cycles of reanalyses do not follow theoretical assumptions applicable to pure simulated budgets. The data analysis changes the wind, temperature, and moisture, perturbing the theoretical balance. Of course, the analysis is correcting the model forecast error, so that the state fields should be more closely aligned with observations. Recently, it has been reported that the moisture convergence over continental regions, even those with significant quantities of radiosonde profiles, can produce long-term values not consistent with theoretical bounds. Specifically, long averages over continents produce some regions of moisture divergence. This implies that the observational analysis leads to a source of water in the region. One such region is the United States Great Plains, where many radiosonde and lidar wind observations are assimilated. We will utilize a new ancillary data set from the MERRA reanalysis called the Gridded Innovations and Observations (GIO), which provides the assimilated observations on MERRA's native grid, allowing more thorough consideration of their impact on regional and global climatology. Included with the GIO data are the observation minus forecast (OmF) and observation minus analysis (OmA) statistics. Using OmF and OmA, we can identify the bias of the analysis against each observing system and gain a better understanding of the observations that are controlling the regional analysis. In this study we will focus on the wind and moisture assimilation.
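The OmF/OmA bias diagnostic described here reduces to simple differences of collocated values. A minimal sketch with invented numbers (not MERRA/GIO data) for one observing system:

```python
import numpy as np

# Hypothetical collocated values for one observing system, e.g. radiosonde
# specific humidity (g/kg): observations, model forecast (background), analysis.
obs      = np.array([8.1, 7.4, 9.0, 6.5, 7.9])
forecast = np.array([7.6, 7.0, 8.2, 6.1, 7.3])
analysis = np.array([8.0, 7.3, 8.8, 6.4, 7.7])

omf = obs - forecast   # observation minus forecast (innovation)
oma = obs - analysis   # observation minus analysis (residual)

# A persistent nonzero mean OmF indicates a bias of the model against this
# observing system; the smaller mean OmA shows how much of it the analysis
# removed by drawing toward the observations.
print(round(omf.mean(), 2), round(oma.mean(), 2))
```

Stratifying such means by observing system and region is what allows the bias of the analysis against each system to be identified.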

  17. Data Assimilation in the Presence of Forecast Bias: The GEOS Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Todling, Ricardo

    1999-01-01

    We describe the application of the unbiased sequential analysis algorithm developed by Dee and da Silva (1998) to the GEOS DAS moisture analysis. The algorithm estimates the persistent component of model error using rawinsonde observations and adjusts the first-guess moisture field accordingly. Results of two seasonal data assimilation cycles show that moisture analysis bias is almost completely eliminated in all observed regions. The improved analyses cause a sizable reduction in the 6h-forecast bias and a marginal improvement in the error standard deviations.

  18. SPITZER IRAC OBSERVATIONS OF IR EXCESS IN HOLMBERG IX X-1: A CIRCUMBINARY DISK OR A VARIABLE JET?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dudik, R. P.; Berghea, C. T.; Roberts, T. P.

    2016-11-01

We present Spitzer Infrared Array Camera photometric observations of the ultraluminous X-ray source (ULX, X-1) in Holmberg IX. We construct a spectral energy distribution (SED) for Holmberg IX X-1 based on published optical, UV, and X-ray data combined with the IR data from this analysis. We modeled the X-ray and optical data with disk and stellar models; however, we find a clear IR excess in the ULX SED that cannot be explained by fits or extrapolations of any of these models. Instead, further analysis suggests that the IR excess results from dust emission, possibly a circumbinary disk, or a variable jet.

  19. An Analysis of Simulated Wet Deposition of Mercury from the North American Mercury Model Intercomparison Study

    EPA Science Inventory

    A previous intercomparison of atmospheric mercury models in North America has been extended to compare simulated and observed wet deposition of mercury. Three regional-scale atmospheric mercury models were tested; CMAQ, REMSAD and TEAM. These models were each employed using thr...

  20. Chandra Interactive Analysis of Observations (CIAO)

    NASA Technical Reports Server (NTRS)

    Dobrzycki, Adam

    2000-01-01

The Chandra (formerly AXAF) telescope, launched on July 23, 1999, provides X-ray data with unprecedented spatial and spectral resolution. As part of the Chandra scientific support, the Chandra X-ray Observatory Center provides a new data analysis system, CIAO ("Chandra Interactive Analysis of Observations"). We will present the main components of the system: "First Look" analysis; SHERPA, a multi-dimensional, multi-mission modeling and fitting application; the Chandra Imaging and Plotting System; the Detect package (source detection algorithms); and the DM package (generic data manipulation tools). We will set up a demonstration of the portable version of the system and show examples of Chandra data analysis.

  1. Is there an ordinary supermassive black hole at the Galactic Center?

    NASA Astrophysics Data System (ADS)

    Zakharov, A. F.

Now there are two basic observational techniques to investigate the gravitational potential at the Galactic Center, namely: a) monitoring the orbits of bright stars near the Galactic Center to reconstruct the gravitational potential; b) measuring the size and shape of shadows around the black hole, giving an alternative possibility to evaluate black hole parameters in the mm-band with the VLBI technique. At the moment one can use a small-relativistic-correction approach for stellar orbit analysis (although in the future this approximation will not be precise enough, owing to the enormous progress of observational facilities), while for the smallest-structure analysis in VLBI observations one already needs a strong-gravitational-field approximation. We discuss results of observations, their conventional interpretations, tensions between observations and models, and possible hints for new physics from the observational data. We will discuss whether a Schwarzschild metric can be used for data interpretation, or whether more exotic models such as Reissner-Nordström or Schwarzschild-de Sitter metrics are needed for better fits.

  2. An ordinary supermassive black hole at the Galactic Center: pro and contra

    NASA Astrophysics Data System (ADS)

    Zakharov, Alexander

    2016-07-01

Now there are two basic observational techniques to investigate the gravitational potential at the Galactic Center, namely: a) monitoring the orbits of bright stars near the Galactic Center to reconstruct the gravitational potential; b) measuring the size and shape of shadows around the black hole, giving an alternative possibility to evaluate black hole parameters in the mm-band with the VLBI technique. At the moment one can use a small-relativistic-correction approach for stellar orbit analysis (although in the future this approximation will not be precise enough, owing to the enormous progress of observational facilities), while for the smallest-structure analysis in VLBI observations one already needs a strong-gravitational-field approximation. We discuss results of observations, their conventional interpretations, tensions between observations and models, and possible hints for new physics from the observational data. We will discuss whether a Schwarzschild metric can be used for data interpretation, or whether more exotic models such as a Yukawa potential, Reissner-Nordström, or Schwarzschild-de Sitter metrics are needed for better fits.

  3. Data Analysis, Modeling, and Ensemble Forecasting to Support NOWCAST and Forecast Activities at the Fallon Naval Station

    DTIC Science & Technology

    2010-09-30

DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited. ...and climate forecasting and use of satellite data assimilation for model evaluation. He is a task leader on another NSF EPSCoR project for the... ...observations including remotely sensed data. OBJECTIVES: The main objectives of the study are: 1) to further develop, test, and continue twice daily...

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nasrollahi, Nasrin; AghaKouchak, Amir; Cheng, Linyin

Assessing the uncertainties and understanding the deficiencies of climate models are fundamental to developing adaptation strategies. The objective of this study is to understand how well Coupled Model Intercomparison Project Phase 5 (CMIP5) climate model simulations replicate ground-based observations of continental drought areas and their trends. The CMIP5 multimodel ensemble encompasses the Climatic Research Unit (CRU) ground-based observations of area under drought at all time steps. However, most model members overestimate the areas under extreme drought, particularly in the Southern Hemisphere (SH). Furthermore, the results show that the time series of observations and CMIP5 simulations of areas under drought exhibit more variability in the SH than in the Northern Hemisphere (NH). The trend analysis of areas under drought reveals that the observational data exhibit a significant positive trend at the significance level of 0.05 over all land areas. The observed trend is reproduced by about three-fourths of the CMIP5 models when considering total land areas in drought. While models are generally consistent with observations at a global (or hemispheric) scale, most models do not agree with observed regional drying and wetting trends. Over many regions, at most 40% of the CMIP5 models are in agreement with the trends of CRU observations. The drying/wetting trends calculated using 3-month Standardized Precipitation Index (SPI) values show better agreement with the corresponding CRU values than with the observed annual mean precipitation rates. As a result, pixel-scale evaluation of CMIP5 models indicates that no single model demonstrates an overall superior performance relative to the other models.
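Testing a trend at the 0.05 significance level, as done here for area under drought, can be sketched with an ordinary least-squares slope and its t statistic. The series below is synthetic (an imposed trend plus noise), not CRU data:

```python
import numpy as np

# Hypothetical fractional drought-area series with an imposed upward trend
years = np.arange(1982, 2011)
rng = np.random.default_rng(0)
area = 0.10 + 0.002 * (years - years[0]) + rng.normal(0.0, 0.01, years.size)

# Ordinary least-squares trend and its t statistic
t = years - years.mean()
slope = np.sum(t * (area - area.mean())) / np.sum(t ** 2)
resid = area - area.mean() - slope * t
se = np.sqrt(np.sum(resid ** 2) / (years.size - 2) / np.sum(t ** 2))
t_stat = slope / se

# |t| above ~2.05 (two-sided 5% critical value for 27 degrees of freedom)
# marks a significant trend at the 0.05 level
significant = abs(t_stat) > 2.05
print(significant, round(slope, 4))
```

Applying such a test pixel by pixel, and comparing the sign of the model slope with the observed one, is the kind of regional drying/wetting agreement check the record describes.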

  5. Investigation of physical parameters in stellar flares observed by GINGA

    NASA Technical Reports Server (NTRS)

    Stern, Robert A.

    1994-01-01

    This program involves analysis and interpretation of results from GINGA Large Area Counter (LAC) observations from a group of large stellar x-ray flares. All LAC data are re-extracted using the standard Hayashida method of LAC background subtraction and analyzed using various models available with the XSPEC spectral fitting program. Temperature-emission measure histories are available for a total of 5 flares observed by GINGA. These will be used to compare physical parameters of these flares with solar and stellar flare models.

  6. Investigation of physical parameters in stellar flares observed by GINGA

    NASA Technical Reports Server (NTRS)

    Stern, Robert A.

    1994-01-01

This program involves analysis and interpretation of results from GINGA Large Area Counter (LAC) observations from a group of large stellar X-ray flares. All LAC data are re-extracted using the standard Hayashida method of LAC background subtraction and analyzed using various models available with the XSPEC spectral fitting program. Temperature-emission measure histories are available for a total of 5 flares observed by GINGA. These will be used to compare physical parameters of these flares with solar and stellar flare models.

  7. Numerical simulation of terrain-induced mesoscale circulation in the Chiang Mai area, Thailand

    NASA Astrophysics Data System (ADS)

    Sathitkunarat, Surachai; Wongwises, Prungchan; Pan-Aram, Rudklao; Zhang, Meigen

    2008-11-01

The regional atmospheric modeling system (RAMS) was applied to Chiang Mai province, a mountainous area in Thailand, to study terrain-induced mesoscale circulations. Eight cases in the wet and dry seasons under different weather conditions were analyzed to show the thermal and dynamic impacts on local circulations. This is the first RAMS study in Thailand, and in particular the first to investigate the effect of mountainous terrain on the simulated meteorological fields. Analysis of model results indicates that the model can reproduce the major features of local circulation and the diurnal variations in temperature. To evaluate model performance, model results were compared with wind speed, wind direction, and temperature observed at a meteorological tower. The comparison shows that the modeled values are generally in good agreement with observations and that the model captured many of the observed features.

  8. EMC: Verification

    Science.gov Websites

Verification statistics for NCEP operational models (NAM, GFS, RAP, HRRR, HIRESW, SREF mean) and international global models against HPC analyses; precipitation skill scores, 1995-present (NAM, GFS, NAM CONUS nest, international models); EMC forecast verification stats (NAM); and real-time verification of NCEP operational models against observations.

  9. Tilt observations using borehole tiltmeters: 2. Analysis of data from Yellowstone National Park

    NASA Astrophysics Data System (ADS)

    Meertens, Charles; Levine, Judah; Busby, Robert

    1989-01-01

We have installed borehole tiltmeters at five sites in Yellowstone National Park, Wyoming, and have used these instruments to measure the spatial variation of the amplitude and phase of the principal semidiurnal tide. The measured tides vary both with position and azimuth and differ from the sum of the body tide and the ocean load by up to 50%. The difference predicted by a finite element model constructed from seismic refraction and gravity data has a maximum value of only 12%, although the discrepancy between our observations and the model is only marginally significant at some sites. The disagreement between the model and our observations is much larger than we observed using the same instruments at other sites and cannot be attributed to an instrumental effect. We have been unable to modify the model to explain our results while keeping it consistent with the previous observations.

  10. Effects of neutrino mass hierarchies on dynamical dark energy models

    NASA Astrophysics Data System (ADS)

    Yang, Weiqiang; Nunes, Rafael C.; Pan, Supriya; Mota, David F.

    2017-05-01

We investigate how three different possibilities of neutrino mass hierarchies, namely normal, inverted, and degenerate, can affect the observational constraints on three well-known dynamical dark energy models, namely the Chevallier-Polarski-Linder, logarithmic, and Jassal-Bagla-Padmanabhan parametrizations. In order to impose the observational constraints on the models, we performed a robust analysis using Planck 2015 temperature and polarization data, type Ia supernovae from the joint light curve analysis, baryon acoustic oscillation distance measurements, redshift space distortion characterized by f(z)σ8(z) data, weak gravitational lensing data from the Canada-France-Hawaii Telescope Lensing Survey, and cosmic chronometer data plus the local value of the Hubble parameter. We find that different neutrino mass hierarchies return similar fits on almost all model parameters and mildly change the dynamical dark energy properties.
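Two of the parametrizations named here have simple closed forms for the dark energy equation of state, which can be sketched directly; the parameter values below are illustrative, not the paper's best-fit values:

```python
import numpy as np

def w_cpl(z, w0=-1.0, wa=0.0):
    """Chevallier-Polarski-Linder: w(z) = w0 + wa * z / (1 + z)."""
    z = np.asarray(z, float)
    return w0 + wa * z / (1.0 + z)

def w_jbp(z, w0=-1.0, wa=0.0):
    """Jassal-Bagla-Padmanabhan: w(z) = w0 + wa * z / (1 + z)**2."""
    z = np.asarray(z, float)
    return w0 + wa * z / (1.0 + z) ** 2

# Both reduce to w0 at z = 0; at high redshift CPL tends to w0 + wa,
# while JBP returns to w0.
print(float(w_cpl(0.0, -1.0, 0.5)), float(w_cpl(1e6, -1.0, 0.5)))
```

This difference in high-redshift behavior is what makes the two parametrizations respond differently to early-universe data in a joint fit.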

  11. The interpretation of simultaneous soft X-ray spectroscopic and imaging observations of an active region. [in solar corona

    NASA Technical Reports Server (NTRS)

    Davis, J. M.; Gerassimenko, M.; Krieger, A. S.; Vaiana, G. S.

    1975-01-01

    Simultaneous soft X-ray spectroscopic and broad-band imaging observations of an active region have been analyzed together to determine the parameters which describe the coronal plasma. From the spectroscopic data, models of temperature-emission measure-elemental abundance have been constructed which provide acceptable statistical fits. By folding these possible models through the imaging analysis, models which are not self-consistent can be rejected. In this way, only the oxygen, neon, and iron abundances of Pottasch (1967), combined with either an isothermal or exponential temperature-emission-measure model, are consistent with both sets of data. Contour maps of electron temperature and density for the active region have been constructed from the imaging data. The implications of the analysis for the determination of coronal abundances and for future satellite experiments are discussed.

  12. A theoretical analysis of the effect of thrust-related turbulence distortion on helicopter rotor low-frequency broadband noise

    NASA Technical Reports Server (NTRS)

    Williams, M.; Harris, W. L.

    1984-01-01

    The purpose of the analysis is to determine if inflow turbulence distortion may be a cause of experimentally observed changes in sound pressure levels when the rotor mean loading is varied. The effect of helicopter rotor mean aerodynamics on inflow turbulence is studied within the framework of the turbulence rapid distortion theory developed by Pearson (1959) and Deissler (1961). The distorted inflow turbulence is related to the resultant noise by conventional broadband noise theory. A comparison of the distortion model with experimental data shows that the theoretical model is unable to totally explain observed increases in model rotor sound pressures with increased rotor mean thrust. Comparison of full scale rotor data with the theoretical model shows that a shear-type distortion may explain decreasing sound pressure levels with increasing thrust.

  13. Revisiting of Multiscale Static Analysis of Notched Laminates Using the Generalized Method of Cells

    NASA Technical Reports Server (NTRS)

    Naghipour Ghezeljeh, Paria; Arnold, Steven M.; Pineda, Evan J.

    2016-01-01

Composite material systems generally exhibit a range of behavior on different length scales (from constituent level to macro); therefore, a multiscale framework is beneficial for the design and engineering of these material systems. The complex nature of the composite failure observed during experiments suggests the need for a three-dimensional (3D) multiscale model to attain a reliable prediction. However, the size of a multiscale three-dimensional finite element model can become prohibitively large and computationally costly. Two-dimensional (2D) models are preferred due to computational efficiency, especially if many different configurations have to be analyzed for an in-depth damage tolerance and durability design study. In this study, various 2D and 3D multiscale analyses are employed to conduct a detailed investigation into the tensile failure of a given multidirectional, notched carbon fiber reinforced polymer laminate. Three-dimensional finite element analysis is typically considered more accurate than a 2D finite element model when compared against experiments. Nevertheless, in the absence of adequate mesh refinement, large differences may be observed between a 2D and a 3D analysis, especially for a shear-dominated layup. This observed difference has not been widely addressed in previous literature and is the main focus of this paper.

  14. Modelling, design and stability analysis of an improved SEPIC converter for renewable energy systems

    NASA Astrophysics Data System (ADS)

    G, Dileep; Singh, S. N.; Singh, G. K.

    2017-09-01

In this paper, a detailed modelling and analysis of a switched inductor (SI)-based improved single-ended primary inductor converter (SEPIC) is presented. To increase the gain of the conventional SEPIC converter, the input- and output-side inductors are replaced with SI structures. Design and stability analysis for continuous conduction mode operation of the proposed SI-SEPIC converter is also presented. The state space averaging technique is used to model the converter and carry out the stability analysis. Performance and stability of the closed-loop configuration are predicted by examining the open-loop behaviour using the Nyquist diagram and Nichols chart. The system was found to be stable and critically damped.

  15. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Kubar, T. L.; Li, J.; Zhang, J.; Wang, W.

    2015-12-01

Both the National Research Council Decadal Survey and the latest Intergovernmental Panel on Climate Change Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with the synergistic use of global satellite observations in order to improve our weather and climate simulation and prediction capabilities. The abundance of satellite observations for fundamental climate parameters and the availability of coordinated model outputs from CMIP5 for the same parameters offer a great opportunity to understand and diagnose model biases in climate models. In addition, the Obs4MIPs efforts have created several key global observational datasets that are readily usable for model evaluations. However, a model diagnostic evaluation process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally and data intensive. In response, we have developed a novel methodology to diagnose model biases in contemporary climate models and have implemented it as a web-service-based, cloud-enabled, provenance-supported climate-model evaluation system. The evaluation system is named Climate Model Diagnostic Analyzer (CMDA), which is the product of the research and technology development investments of several current and past NASA ROSES programs. The current technologies and infrastructure of CMDA are designed and selected to address several technical challenges that the Earth science modeling and model analysis community faces in evaluating and diagnosing climate models. In particular, we have three key technology components: (1) diagnostic analysis methodology; (2) web-service-based, cloud-enabled technology; (3) provenance-supported technology. The diagnostic analysis methodology includes random forest feature importance ranking, conditional probability distribution functions, conditional sampling, and time-lagged correlation maps.
We have implemented the new methodology as web services and incorporated the system into the Cloud. We have also developed a provenance management system for CMDA where CMDA service semantics modeling, service search and recommendation, and service execution history management are designed and implemented.

  16. A Luenberger observer for reaction-diffusion models with front position data

    NASA Astrophysics Data System (ADS)

    Collin, Annabelle; Chapelle, Dominique; Moireau, Philippe

    2015-11-01

We propose a Luenberger observer for reaction-diffusion models with propagating front features, and for data associated with the location of the front over time. Such models are considered in various application fields, such as electrophysiology, wild-land fire propagation, and tumor growth modeling. Drawing our inspiration from image processing methods, we start by proposing an observer for the eikonal-curvature equation that can be derived from the reaction-diffusion model by an asymptotic expansion. We then carry over this observer to the underlying reaction-diffusion equation by an "inverse asymptotic analysis", and we show that the associated correction in the dynamics has a stabilizing effect for the linearized estimation error. We also discuss the extension to joint state-parameter estimation by using the earlier-proposed ROUKF strategy. We then illustrate and assess our proposed observer method with test problems pertaining to electrophysiology modeling, including with a realistic model of cardiac atria. Our numerical trials show that state estimation is directly very effective with the proposed Luenberger observer, while specific strategies are needed to accurately perform parameter estimation, as is usual with Kalman filtering used in a nonlinear setting, and we demonstrate two such successful strategies.
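The Luenberger observer principle underlying this record is easiest to see in the generic linear case (not the paper's reaction-diffusion setting): the state estimate is driven by the measured output error through a gain chosen so the error dynamics are stable. A minimal discrete-time sketch with invented matrices:

```python
import numpy as np

# Linear system x+ = A x, output y = C x; Luenberger observer
# xh+ = A xh + L (y - C xh), so the error obeys e+ = (A - L C) e.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.5],
              [0.2]])   # gain chosen so that (A - L C) has eigenvalues < 1

x  = np.array([1.0, -1.0])   # true state (unknown to the observer)
xh = np.zeros(2)             # observer starts with no knowledge of x
err0 = np.linalg.norm(x - xh)
for _ in range(50):
    y  = C @ x                               # measurement of the true system
    xh = A @ xh + (L @ (y - C @ xh)).ravel() # correct estimate by output error
    x  = A @ x                               # advance the true system
print(np.linalg.norm(x - xh) < 1e-3 * err0)  # estimation error has collapsed
```

In the paper's setting the "output" is the observed front position and the correction term is shaped accordingly, but the stabilizing role of the feedback gain is the same.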

  17. Towards validation of the Canadian precipitation analysis (CaPA) for hydrologic modeling applications in the Canadian Prairies

    NASA Astrophysics Data System (ADS)

    Boluwade, Alaba; Zhao, K.-Y.; Stadnyk, T. A.; Rasmussen, P.

    2018-01-01

This study presents a three-step validation technique to compare the performance of the Canadian Precipitation Analysis (CaPA) product relative to actual observations as hydrologic forcing in regional watershed simulation. CaPA is an interpolated (6 h or 24 h accumulation) reanalysis precipitation product in near real time covering all of North America. The analysis procedure involves point-to-point (P2P) and map-to-map (M2M) comparisons, followed by proxy validation using an operational version of the WATFLOOD™ hydrologic model from 2002 to 2005 in the Lake Winnipeg Basin (LWB), Canada. The P2P technique, using a Bayesian change point analysis, shows that CaPA corresponds with actual observations (Canadian daily climate data, CDCD) on both an annual and seasonal basis. CaPA has the same spatial pattern, dependency, and autocorrelation properties as CDCD pixel by pixel (M2M). When used as hydrologic forcing in WATFLOOD™, results indicate that CaPA is a reliable product for water resource modeling and prediction, but that the quality of CaPA data varies annually and seasonally, as does the quality of observations. CaPA proved most beneficial as a hydrologic forcing during winter seasons, when observation quality is lowest. Reanalysis products such as CaPA can be a reliable option in sparse-network areas and are beneficial for regional governments where the cost of new weather stations is prohibitive.

  18. Impact of four-dimensional data assimilation (FDDA) on urban climate analysis

    NASA Astrophysics Data System (ADS)

    Pan, Linlin; Liu, Yubao; Liu, Yuewei; Li, Lei; Jiang, Yin; Cheng, Will; Roux, Gregory

    2015-12-01

    This study investigates the impact of four-dimensional data assimilation (FDDA) on urban climate analysis. It employs the NCAR (National Center for Atmospheric Research) WRF (Weather Research and Forecasting) model-based climate FDDA (CFDDA) technology to develop an urban-scale microclimatology database for the Shenzhen area, a rapidly developing metropolis located along the southern coast of China, where uniquely high-density observations, including an ultrahigh-resolution surface AWS (automatic weather station) network, radio soundings, wind profilers, radiometers, and other weather observation platforms, have been installed. CFDDA is an innovative dynamical downscaling regional climate analysis system that assimilates diverse regional observations; it has been employed to produce a 5 year multiscale high-resolution microclimate analysis by assimilating high-density observations in the Shenzhen area. The CFDDA system was configured with four nested-grid domains at grid sizes of 27, 9, 3, and 1 km, respectively. This research evaluates the impact of assimilating high-resolution observation data on reproducing the fine-scale features of urban-scale circulations. Two experiments were conducted with a 5 year run using CFSR (climate forecast system reanalysis) as boundary and initial conditions: one with CFDDA and the other without. The comparisons of these two experiments with observations indicate that CFDDA greatly reduces the model analysis error and is able to realistically analyze microscale features such as urban-rural-coastal circulation, land/sea breezes, and local hilly-terrain thermal circulations. It is demonstrated that urbanization can produce 2.5 K differences in 2 m temperature, delay or speed up land/sea breeze development, and interact with local mountain-valley circulations.

  19. Stratospheric Assimilation of Chemical Tracer Observations Using a Kalman Filter. Pt. 2; Chi-Square Validated Results and Analysis of Variance and Correlation Dynamics

    NASA Technical Reports Server (NTRS)

    Menard, Richard; Chang, Lang-Ping

    1998-01-01

    A Kalman filter system designed for the assimilation of limb-sounding observations of stratospheric chemical tracers, which has four tunable covariance parameters, was developed in Part I (Menard et al. 1998). The assimilation results of CH4 observations from the Cryogenic Limb Array Etalon Sounder instrument (CLAES) and the Halogen Observation Experiment instrument (HALOE) on board the Upper Atmosphere Research Satellite are described in this paper. A robust χ² criterion, which provides a statistical validation of the forecast and observational error covariances, was used to estimate the tunable variance parameters of the system. In particular, an estimate of the model error variance was obtained. The effect of model error on the forecast error variance became critical after only three days of assimilation of CLAES observations, although it took 14 days of forecast to double the initial error variance. We further found that the model error due to numerical discretization, as arising in the standard Kalman filter algorithm, is comparable in size to the physical model error due to wind and transport modeling errors together. Separate assimilations of CLAES and HALOE observations were compared to validate the state estimate away from the observed locations. A wave-breaking event that took place several thousands of kilometers away from the HALOE observation locations was well captured by the Kalman filter, owing to highly anisotropic forecast error correlations. The forecast error correlation in the assimilation of the CLAES observations was found to have a structure similar to that in pure forecast mode, except for smaller length scales. Finally, we have conducted an analysis of the variance and correlation dynamics to determine their relative importance in chemical tracer assimilation problems. Results show that the optimality of a tracer assimilation system depends, for the most part, on having flow-dependent error correlations rather than on evolving the error variance.
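The χ² consistency criterion used to tune the covariance parameters can be sketched for the scalar-observation case: for innovations d with variance s = HPHᵀ + R, the statistic Σ dᵢ²/s has expectation n when the prescribed covariances are consistent. The variances below are synthetic assumptions, not values estimated from CLAES or HALOE data.

```python
import random

# Chi-square innovation diagnostic, scalar-observation sketch (synthetic data).
random.seed(0)
sigma_f2, sigma_o2 = 0.4, 0.1      # assumed forecast / observation error variances
s = sigma_f2 + sigma_o2            # innovation variance for H = 1

n = 5000
innovations = [random.gauss(0.0, s ** 0.5) for _ in range(n)]
chi2_over_n = sum(d * d / s for d in innovations) / n
# chi2/n near 1 indicates the prescribed covariances are consistent with
# the observed innovations; a persistent bias flags mis-tuned variances.
assert 0.9 < chi2_over_n < 1.1
```

In practice one would adjust the tunable variance parameters until chi2/n is statistically indistinguishable from 1, which is the calibration idea the abstract describes.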

  20. Computing Linear Mathematical Models Of Aircraft

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.

    1991-01-01

    The Derivation and Definition of Linear Aircraft Model (LINEAR) computer program provides the user with a powerful, flexible, standard, documented, and verified software tool for linearization of mathematical models of aircraft aerodynamics. Intended for use as a software tool to drive linear analyses of stability and the design of control laws for aircraft. Capable of both extracting such linearized engine effects as net thrust, torque, and gyroscopic effects, and including these effects in the linear model of the system. Designed to provide easy selection of the state, control, and observation variables used in a particular model. Also provides the flexibility of allowing alternate formulations of both state and observation equations. Written in FORTRAN.

  1. Numerical prediction of turbulent flame stability in premixed/prevaporized (HSCT) combustors

    NASA Technical Reports Server (NTRS)

    Winowich, Nicholas S.

    1990-01-01

    A numerical analysis of combustion instabilities that induce flashback in a lean, premixed, prevaporized dump combustor is performed. KIVA-II, a finite volume CFD code for the modeling of transient, multidimensional, chemically reactive flows, serves as the principal analytical tool. The experiment of Proctor and T'ien is used as a reference for developing the computational model. An experimentally derived combustion instability mechanism is presented on the basis of the observations of Proctor and T'ien and other investigators of instabilities in low speed (M less than 0.1) dump combustors. The analysis comprises two independent procedures that begin from a calculated stable flame: the first is a linear increase of the equivalence ratio, and the second is a linear decrease of the inflow velocity. The objective is to observe changes in the aerothermochemical features of the flow field prior to flashback. It was found that only the linear increase of the equivalence ratio elicits a calculated flashback result. Though this result did not exhibit large-scale coherent vortices in the turbulent shear layer coincident with a flame-flickering mode, as was observed experimentally, there were interesting acoustic effects that were resolved quite well in the calculation. A discussion of the k-ε turbulence model used by KIVA-II is prompted by the absence of combustion instabilities in the model as the inflow velocity is linearly decreased. Finally, recommendations are made for further numerical analysis that may improve correlation with experimentally observed combustion instabilities.

  2. Flares, ejections, proton events

    NASA Astrophysics Data System (ADS)

    Belov, A. V.

    2017-11-01

    Statistical analysis is performed for the relationship of coronal mass ejections (CMEs) and X-ray flares with the fluxes of solar protons with energies >10 and >100 MeV observed near the Earth. The analysis is based on events that took place in 1976-2015, for which there are reliable observations of X-ray flares from GOES satellites and CME observations from SOHO/LASCO coronagraphs. A fairly good correlation has been revealed between the magnitude of proton enhancements and the power and duration of flares, as well as the initial CME speed. The statistics do not give a clear advantage to either CMEs or flares concerning their relation with proton events, but the characteristics of the flares and ejections complement each other well, and it is reasonable to use them together in forecast models. Numerical dependences are obtained that allow estimation of the proton fluxes expected at the Earth from solar observations; possibilities for improving the model are discussed.

  3. Toward synthesizing executable models in biology.

    PubMed

    Fisher, Jasmin; Piterman, Nir; Bodik, Rastislav

    2014-01-01

    Over the last decade, executable models of biological behaviors have repeatedly provided new scientific discoveries, uncovered novel insights, and directed new experimental avenues. These models are computer programs whose execution mechanistically simulates aspects of the cell's behaviors. If the observed behavior of the program agrees with the observed biological behavior, then the program explains the phenomena. One advantage of this approach is that techniques for the analysis of computer programs can be applied to the analysis of executable models. For example, one can confirm that a model agrees with experiments for all possible executions of the model (corresponding to all environmental conditions), even if there are a huge number of executions. Various formal methods have been adapted for this context, for example, model checking or symbolic analysis of state spaces. To avoid manual construction of executable models, one can apply synthesis, a method for producing programs automatically from high-level specifications. In the context of biological modeling, synthesis corresponds to extracting executable models from experimental data. We survey recent results on the use of techniques underlying the synthesis of computer programs for the inference of biological models from experimental data. We describe synthesis of biological models from curated mutation experiment data, inference of network connectivity models from phosphoproteomic data, and synthesis of Boolean networks from gene expression data. While much work has been done on automated analysis of similar datasets using machine learning and artificial intelligence, synthesis techniques provide new opportunities, such as efficient computation of disambiguating experiments and the ability to produce different kinds of models automatically from biological data.

  4. Modelling near field regional uplift patterns in West Greenland/Disko Bay with plane-Earth finite element models.

    NASA Astrophysics Data System (ADS)

    Meldgaard, Asger; Nielsen, Lars; Iaffaldano, Giampiero

    2017-04-01

    Relative sea level data, primarily obtained through isolation basin analysis in western Greenland and on Disko Island, indicate asynchronous rates of uplift during the Early Holocene, with larger rates of uplift in southern Disko Bay compared to the northern part of the bay. Similar short-wavelength variations can be inferred from the Holocene marine limit, as observations on the north and south sides of Disko Island differ by as much as 60 m. While global isostatic adjustment models are needed to account for far-field contributions to the relative sea level and for the calculation of accurate ocean functions, they are generally not suited for a detailed analysis of the short-wavelength uplift patterns observed close to present ice margins. This is in part due to the excessive computational cost required for sufficient resolution, and because these models generally ignore regional lateral heterogeneities in mantle and lithosphere rheology. To mitigate this problem, we perform sensitivity tests to investigate the effects of near-field loading on a regional plane-Earth finite element model of the lithosphere and mantle of the Disko Bay area, where the global isostatic uplift chronology is well documented. By loading the model area through detailed regional ocean function and ice models, and by including a high-resolution topography model of the area, we seek to assess the isostatic rebound generated by surface processes with wavelengths similar to those of the observed rebound signal. We also investigate possible effects of varying lithosphere and mantle rheology, which may play an important role in explaining the rebound signal. We use the abundance of relative sea level curves obtained in the region, primarily through isolation basin analysis on Disko Island, to constrain the parameters of the Earth model.

  5. Composition Changes After the "Halloween" Solar Proton Event: The High-Energy Particle Precipitation in the Atmosphere (HEPPA) Model Versus MIPAS Data Intercomparison Study

    NASA Technical Reports Server (NTRS)

    Funke, B.; Baumgaertner, A.; Calisto, M.; Egorova, T.; Jackman, C. H.; Kieser, J.; Krivolutsky, A.; Lopez-Puertas, M.; Marsh, D. R.; Reddmann, T.; et al.

    2010-01-01

    We have compared composition changes of NO, NO2, H2O2, O3, N2O, HNO3, N2O5, HNO4, ClO, HOCl, and ClONO2 as observed by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) on Envisat in the aftermath of the "Halloween" solar proton event (SPE) in October/November 2003 at 25-0.01 hPa in the Northern hemisphere (40-90 N) and simulations performed by the following atmospheric models: the Bremen 2D model (B2dM) and Bremen 3D Chemical Transport Model (B3dCTM), the Central Aerological Observatory (CAO) model, FinROSE, the Hamburg Model of the Neutral and Ionized Atmosphere (HAMMONIA), the Karlsruhe Simulation Model of the Middle Atmosphere (KASIMA), the ECHAM5/MESSY Atmospheric Chemistry (EMAC) model, the modeling tool for SOlar Climate Ozone Links studies (SOCOL and SOCOLi), and the Whole Atmosphere Community Climate Model (WACCM4). The large number of participating models allowed for an evaluation of the overall ability of atmospheric models to reproduce observed atmospheric perturbations generated by SPEs, particularly with respect to NO(y) and ozone changes. We have further assessed the meteorological conditions and their implications on the chemical response to the SPE in both the models and observations by comparing temperature and tracer (CH4 and CO) fields. Simulated SPE-induced ozone losses agree on average within 5% with the observations. Simulated NO(y) enhancements around 1 hPa, however, are typically 30% higher than indicated by the observations, which can be partly attributed to an overestimation of simulated electron-induced ionization. The analysis of the observed and modeled NO(y) partitioning in the aftermath of the SPE has demonstrated the need to implement additional ion chemistry (HNO3 formation via ion-ion recombination and water cluster ions) into the chemical schemes. An overestimation of observed H2O2 enhancements by all models hints at an underestimation of the OH/HO2 ratio in the upper polar stratosphere during the SPE. The analysis of chlorine species perturbations has shown that the encountered differences between models and observations, particularly the underestimation of observed ClONO2 enhancements, are related to a smaller availability of ClO in the polar night region even before the SPE. In general, the intercomparison has demonstrated that differences in the meteorology and/or initial state of the atmosphere in the simulations cause relevant variability of the model results, even on a short timescale of only a few days.

  6. Efficient strategies for leave-one-out cross validation for genomic best linear unbiased prediction.

    PubMed

    Cheng, Hao; Garrick, Dorian J; Fernando, Rohan L

    2017-01-01

    A random multiple-regression model that simultaneously fits all allele substitution effects for additive markers or haplotypes as uncorrelated random effects was proposed for Best Linear Unbiased Prediction using whole-genome data. Leave-one-out cross validation can be used to quantify the predictive ability of a statistical model. Naive application of leave-one-out cross validation is computationally intensive because the training and validation analyses need to be repeated n times, once for each observation. Efficient leave-one-out cross validation strategies are presented here, requiring little more effort than a single analysis. The efficient strategy is 786 times faster than the naive application for a simulated dataset with 1,000 observations and 10,000 markers, and 99 times faster with 1,000 observations and 100 markers. These efficiencies relative to the naive approach using the same model will increase with the number of observations.
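The speedup rests on a standard identity for linear smoothers: the leave-one-out residual equals e_i / (1 - h_ii), where h_ii is the i-th diagonal of the hat matrix, so no refitting is needed. A hedged sketch for ordinary least squares with synthetic data (GBLUP applies the same identity to its own hat matrix):

```python
# LOOCV shortcut for a linear smoother: leave-one-out residual = e_i / (1 - h_ii).
# Synthetic simple-regression data; checked against the naive refit.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1]
n = len(xs)

def ols_fit(x, y):
    """Return (intercept, slope) of an ordinary least squares fit."""
    xbar = sum(x) / len(x)
    ybar = sum(y) / len(y)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    return ybar - b * xbar, b

a, b = ols_fit(xs, ys)
xbar = sum(xs) / n
sxx = sum((xi - xbar) ** 2 for xi in xs)

for i in range(n):
    h_ii = 1.0 / n + (xs[i] - xbar) ** 2 / sxx   # leverage of observation i
    e_i = ys[i] - (a + b * xs[i])                # in-sample residual
    loo_fast = e_i / (1.0 - h_ii)                # shortcut: no refit needed
    # naive LOO: refit with observation i held out
    ar, br = ols_fit(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:])
    loo_naive = ys[i] - (ar + br * xs[i])
    assert abs(loo_fast - loo_naive) < 1e-8      # identical up to rounding
```

The naive loop performs n fits, while the shortcut reuses one fit plus the leverages, which is the source of the reported 99x to 786x speedups.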

  7. Experiments with the Mesoscale Atmospheric Simulation System (MASS) using the synthetic relative humidity

    NASA Technical Reports Server (NTRS)

    Chang, Chia-Bo

    1994-01-01

    This study is intended to examine the impact of synthetic relative humidity on the model simulation of a mesoscale convective storm environment. The synthetic relative humidity is derived from National Weather Service surface observations and from non-conventional sources including aircraft, radar, and satellite observations. The latter sources provide mesoscale data of very high spatial and temporal resolution. The synthetic humidity data are used to complement the National Weather Service rawinsonde observations. It is believed that a realistic representation of the initial moisture field in a mesoscale model is critical for the model simulation of thunderstorm development and of the formation of non-convective clouds, as well as their effects on the surface energy budget. The impact will be investigated based on a real-data case study using the Mesoscale Atmospheric Simulation System (MASS) developed by Mesoscale Environmental Simulations Operations, Inc. MASS consists of objective analysis and initialization codes, and coarse-mesh and fine-mesh dynamic prediction models. Both are three-dimensional, primitive-equation models containing the essential moist physics for simulating and forecasting mesoscale convective processes in the atmosphere. The modeling system is currently implemented at the Applied Meteorology Unit, Kennedy Space Center. Two procedures involving the synthetic relative humidity to define the model initial moisture fields are considered. It is proposed to perform several short-range (approximately 6 hours) comparative coarse-mesh simulation experiments with and without the synthetic data. They are aimed at revealing the model sensitivities, which should allow us both to refine the specification of the observational requirements and to develop more accurate and efficient objective analysis schemes. The goal is to advance the MASS modeling expertise so that the model output can provide reliable guidance for thunderstorm forecasting.

  8. Examination of influential observations in penalized spline regression

    NASA Astrophysics Data System (ADS)

    Türkan, Semra

    2013-10-01

    In parametric or nonparametric regression models, the results of regression analysis are affected by anomalous observations in the data set. Thus, detection of these observations is one of the major steps in regression analysis. These observations can be precisely detected by well-known influence measures, of which Pena's statistic is one. In this study, Pena's approach is formulated for penalized spline regression in terms of ordinary residuals and leverages. Real and artificial data are used to illustrate the effectiveness of Pena's statistic relative to Cook's distance in detecting influential observations. The results of the study clearly reveal that the proposed measure is superior to Cook's distance for detecting these observations in large data sets.
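Both Pena's statistic and Cook's distance are built from ordinary residuals and leverages. A minimal sketch of Cook's distance for simple regression, with synthetic data and one injected outlier (illustrative only, not the study's data):

```python
# Cook's distance D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2) for simple regression.
# Synthetic data with an injected outlier at the last observation.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.0, 2.1, 2.9, 4.2, 5.0, 12.0]   # last observation is the outlier
n, p = len(xs), 2                       # p = number of fitted coefficients

xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
a = ybar - b * xbar
resid = [y - (a + b * x) for x, y in zip(xs, ys)]
s2 = sum(e * e for e in resid) / (n - p)          # residual variance

cooks = []
for x, e in zip(xs, resid):
    h = 1.0 / n + (x - xbar) ** 2 / sxx           # leverage
    cooks.append(e * e * h / (p * s2 * (1 - h) ** 2))

assert cooks.index(max(cooks)) == n - 1           # the injected outlier dominates
```

Pena's statistic instead measures how each observation's fitted value is affected by deleting every other observation, but it is computed from the same residual and leverage ingredients, which is why the paper can carry it over to penalized splines.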

  9. Improving the Canadian Precipitation Analysis Estimates through an Observing System Simulation Experiment

    NASA Astrophysics Data System (ADS)

    Abbasnezhadi, K.; Rasmussen, P. F.; Stadnyk, T.

    2014-12-01

    This study was undertaken to gain a better understanding of the spatiotemporal distribution of rainfall over the Churchill River basin. The research incorporates gridded precipitation data from the Canadian Precipitation Analysis (CaPA) system. CaPA has been developed by Environment Canada and provides near real-time precipitation estimates on a 10 km by 10 km grid over North America at a temporal resolution of 6 hours. The spatial fields are generated by combining forecasts from the Global Environmental Multiscale (GEM) model with precipitation observations from the network of synoptic weather stations. CaPA's skill is highly influenced by the number of weather stations in the region of interest as well as by the quality of the observations. To evaluate the performance of CaPA as a function of the density of the weather station network, a dual-stage design algorithm that simulates CaPA using generated weather fields is proposed. More specifically, we adopt a controlled design algorithm generally known as an Observing System Simulation Experiment (OSSE). The advantage of the experiment is that one can define reference precipitation fields assumed to represent the true state of rainfall over the region of interest. In the first stage of the OSSE, a coupled stochastic model of gridded precipitation and temperature fields is calibrated and validated. The performance of the generator is then validated by comparing model statistics with observed statistics and by using the generated samples as input to the WATFLOOD™ hydrologic model. In the second stage of the experiment, to account for the systematic error of station observations and GEM fields, representative errors are added to the reference field using by-products of CaPA's variographic analysis. These by-products describe the variance of station observation and background errors.

  10. Comparative analysis of stress in a new proposal of dental implants.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1 - conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®), Model 2 - modified EH cylindrical implant, Model 3 - conventional MT conical implant (Ø 4.3mm×10mm - Neodent®) and Model 4 - modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150 N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes, and each position of interest was recorded with a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point is calculated. The results showed lower stress concentration in the modified cylindrical implant (EH) compared to the conventional model, with application of 150 N axial and 100 N oblique loads. Lower stress was observed for the modified conical (MT) implant with the application of 100 and 150 N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.
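The conversion from fringe order to shear stress follows the standard stress-optic law, tau_max = N * f_sigma / (2 h). A sketch with illustrative values; the material fringe value f_sigma and model thickness h below are assumptions, not the study's calibration:

```python
# Stress-optic law behind the Tardy method (illustrative values only).
def tau_max(N, f_sigma, h):
    """Maximum in-plane shear stress from photoelasticity.

    N: fringe order (dimensionless, possibly fractional via Tardy compensation)
    f_sigma: material fringe value in N/mm per fringe (assumed)
    h: model thickness in mm (assumed)
    Returns tau_max in MPa (N/mm^2).
    """
    return N * f_sigma / (2.0 * h)

print(tau_max(2.0, 10.0, 10.0))  # 1.0
```

The Tardy compensation step itself only refines N to a fractional fringe order; the stress value at each selected point then comes from this relation.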

  11. Generalized Structured Component Analysis

    ERIC Educational Resources Information Center

    Hwang, Heungsun; Takane, Yoshio

    2004-01-01

    We propose an alternative method to partial least squares for path analysis with components, called generalized structured component analysis. The proposed method replaces factors by exact linear combinations of observed variables. It employs a well-defined least squares criterion to estimate model parameters. As a result, the proposed method…

  12. Sensitivity Analysis of a Lagrangian Sea Ice Model

    NASA Astrophysics Data System (ADS)

    Rabatel, Matthias; Rampal, Pierre; Bertino, Laurent; Carrassi, Alberto; Jones, Christopher K. R. T.

    2017-04-01

    Large changes in the Arctic sea ice have been observed in the last decades in terms of the ice thickness, extent and drift. Understanding the mechanisms behind these changes is of paramount importance to enhance our modeling and forecasting capabilities. For 40 years, models have been developed to describe the non-linear dynamical response of the sea ice to a number of external and internal factors. Nevertheless, large deviations still exist between predictions and observations. These are related to incorrect descriptions of the sea ice response and/or to uncertainties about the different sources of information: parameters, initial and boundary conditions, and external forcing. Data assimilation (DA) methods are used to combine observations with models, and there is nowadays an increasing interest in DA for sea-ice models and observations. We consider here the state-of-the-art sea-ice model neXtSIM (Rampal et al., 2016), which is based on a time-varying Lagrangian mesh and makes use of the Elasto-Brittle rheology. Our ultimate goal is designing an appropriate DA scheme for such a modelling facility. This contribution reports on the first milestone along this line: a sensitivity analysis to quantify forecast error, in order to guide model development and to set the basis for further Lagrangian DA methods. Specific features of the sea-ice dynamics in relation to the wind are thus analysed. Virtual buoys are deployed across the Arctic domain and their trajectories of motion are analysed. The simulated trajectories are also compared to observed real buoy trajectories. The model response is also compared with that of a model version not including internal forcing, to highlight the role of the rheology. Conclusions and perspectives for the general DA implementation are also discussed. Reference: P. Rampal, S. Bouillon, E. Ólason, and M. Morlighem. neXtSIM: a new Lagrangian sea ice model. The Cryosphere, 10(3): 1055-1073, 2016.

  13. Closing the loop: integrating human impacts on water resources to advanced land surface models

    NASA Astrophysics Data System (ADS)

    Zaitchik, B. F.; Nie, W.; Rodell, M.; Kumar, S.; Li, B.

    2016-12-01

    Advanced Land Surface Models (LSMs), including those used in the North American Land Data Assimilation System (NLDAS), offer a physically consistent and spatially and temporally complete analysis of the distributed water balance. These models are constrained both by physically-based process representation and by observations ingested as meteorological forcing or as data assimilation updates. As such, they have become important tools for hydrological monitoring and long-term climate analysis. The representation of water management, however, is extremely limited in these models. Recent advances have brought prognostic irrigation routines into models used in NLDAS, while assimilation of Gravity Recovery and Climate Experiment (GRACE) derived estimates of terrestrial water storage anomaly has made it possible to nudge models towards observed states in water storage below the root zone. But with few exceptions these LSMs do not account for the source of irrigation water, leading to a disconnect between the simulated water balance and the observed human impact on water resources. This inconsistency is unacceptable for long-term studies of climate change and human impact on water resources in North America. Here we define the modeling challenge, review instances of models that have begun to account for water withdrawals (e.g., CLM), and present ongoing efforts to improve representation of human impacts on water storage across models through integration of irrigation routines, water withdrawal information, and GRACE Data Assimilation in NLDAS LSMs.

  14. Resonant structure, formation and stability of the planetary system HD155358

    NASA Astrophysics Data System (ADS)

    Silburt, Ari; Rein, Hanno

    2017-08-01

    Two Jovian-sized planets are orbiting the star HD155358 near exact mean motion resonance (MMR) commensurability. In this work, we re-analyse the radial velocity (RV) data previously collected by Robertson et al. Using a Bayesian framework, we construct two models - one that includes and the other that excludes gravitational planet-planet interactions (PPIs). We find that the orbital parameters from our PPI and no planet-planet interaction (noPPI) models differ by up to 2σ, with our noPPI model being statistically consistent with previous results. In addition, our new PPI model strongly favours the planets being in MMR, while our noPPI model strongly disfavours MMR. We conduct a stability analysis by drawing samples from our PPI model's posterior distribution and simulating them for 10^9 yr, finding that our best-fitting values land firmly in a stable region of parameter space. We explore a series of formation models that migrate the planets into their observed MMR. We then use these models to directly fit to the observed RV data, where each model is uniquely parametrized by only three constants describing its migration history. Using a Bayesian framework, we find that a number of migration models fit the RV data surprisingly well, with some migration parameters being ruled out. Our analysis shows that PPIs are important to take into account when modelling observations of multiplanetary systems. The additional information that one can gain from interacting models can help constrain planet migration parameters.

  15. Structural Identifiability of Dynamic Systems Biology Models

    PubMed Central

    Villaverde, Alejandro F.

    2016-01-01

    A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
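The observability-based idea behind such identifiability tests can be illustrated on a toy scalar model (a hypothetical example, not STRIKE-GOLDD itself): augment the state with the parameter, stack the output and its derivatives, and check the rank of the Jacobian with respect to the augmented state.

```python
# Toy identifiability check for the hypothetical model x' = -p*x, y = x.
# Augmented state (x, p); outputs stacked as (y, y') = (x, -p*x).
def obs_jacobian_det(x, p):
    """Determinant of the 2x2 Jacobian of (y, y') w.r.t. (x, p).

    Rows: d y /d(x,p) = [1, 0];  d y'/d(x,p) = [-p, -x].
    Nonzero determinant means full rank, i.e. (x, p) is locally observable
    and p is locally identifiable from the output.
    """
    return 1.0 * (-x) - 0.0 * (-p)

assert obs_jacobian_det(2.0, 0.5) != 0.0  # full rank: p locally identifiable
assert obs_jacobian_det(0.0, 0.5) == 0.0  # rank drops at x = 0 (uninformative output)
```

STRIKE-GOLDD performs the same rank test symbolically with Lie derivatives for general nonlinear models, where the expensive part is exactly this symbolic Jacobian and rank computation.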

  16. The GAW Aerosol Lidar Observation Network (GALION) as a source of near-real time aerosol profile data for model evaluation and assimilation

    NASA Astrophysics Data System (ADS)

    Hoff, R. M.; Pappalardo, G.

    2010-12-01

    In 2007, the WMO Global Atmosphere Watch's Science Advisory Group on Aerosols described a global network of lidar networks called the GAW Aerosol Lidar Observation Network (GALION). The purpose of GALION is to provide expanded coverage of aerosol observations for climate and air quality use. Comprising networks in Asia (AD-NET), Europe (EARLINET and CIS-LINET), North America (CREST and CORALNET), and South America (ALINE), with contributions from global networks such as MPLNET and NDACC, the collaboration provides a unique capability to define aerosol profiles in the vertical. GALION is designed to supplement existing ground-based and column profiling (AERONET, PHOTONS, SKYNET, GAW PFR) stations. In September 2010, GALION held its second workshop, and one component of the discussion focused on how the network would integrate with model needs. GALION partners have contributed to the Sand and Dust Storm Warning and Analysis System (SDS-WAS) and to assimilation in models such as DREAM. This paper will present the conclusions of those discussions and show how these observations can fit into a global model analysis framework. Questions of availability, latency, and the aerosol parameters that might be ingested into models will be discussed. An example of where EARLINET and GALION have contributed near-real-time observations is the suite of measurements made during the Eyjafjallajokull eruption in Iceland and its impact on European air travel. Lessons learned from this experience will be discussed.

  17. NACP Synthesis: Evaluating modeled carbon state and flux variables against multiple observational constraints (Invited)

    NASA Astrophysics Data System (ADS)

    Thornton, P. E.; Nacp Site Synthesis Participants

    2010-12-01

    The North American Carbon Program (NACP) synthesis effort includes an extensive intercomparison of modeled and observed ecosystem states and fluxes performed with multiple models across multiple sites. The participating models span a range of complexity and intended application, while the participating sites cover a broad range of natural and managed ecosystems in North America, from the subtropics to arctic tundra, and from coastal to interior climates. A unique characteristic of this collaborative effort is that multiple independent observations are available at all sites: fluxes are measured with the eddy covariance technique, and standard biometric and field sampling methods provide estimates of standing stock and annual production in multiple categories. In addition, multiple modeling approaches are employed to make predictions at each site, varying, for example, in the use of diagnostic vs. prognostic leaf area index. Given multiple independent observational constraints and multiple classes of model, we evaluate the internal consistency of observations at each site, and use this information to extend previously derived estimates of uncertainty in the flux observations. Model results are then compared with all available observations, and models are ranked according to their consistency with each type of observation (high-frequency flux measurement, carbon stock, annual production). We demonstrate a range of internal consistency across the sites, and show that some models which perform well against one observational metric perform poorly against others. We use this analysis to construct a hypothesis for combining eddy covariance, biometrics, and other standard physiological and ecological measurements which, as data collection proceeds over several years, would present an increasingly challenging target for next-generation models.

  18. Evaluation of Satellite and Model Precipitation Products Over Turkey

    NASA Astrophysics Data System (ADS)

    Yilmaz, M. T.; Amjad, M.

    2017-12-01

    Satellite-based remote sensing, gauge stations, and models are the three major platforms for acquiring precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while uncertainty estimates of these retrievals are often required in hydrological studies to understand the source and magnitude of uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily, and monthly) using in-situ precipitation observations from a network of 733 gauges across Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily, and 10-daily accumulated forecasts) are used in this study. Retrievals are evaluated for their mean and standard deviation, and their accuracies are evaluated via bias, root mean square error, error standard deviation, and correlation coefficient statistics. An intensity vs. frequency analysis and contingency table statistics such as percent correct, probability of detection, false alarm ratio, and critical success index are determined using daily time series. Both ECMWF forecasts and TRMM observations, on average, overestimate precipitation compared to gauge estimates; the wet biases are 10.26 mm/month and 8.65 mm/month for ECMWF and TRMM, respectively. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between gauges-ECMWF, gauges-TRMM, and ECMWF-TRMM are 0.76, 0.73, and 0.81, respectively. The model and satellite error statistics are further compared against the gauge error statistics based on an inverse distance weighting (IDW) analysis. Both the model and satellite data have lower IDW errors (14.72 mm/month and 10.75 mm/month, respectively) than the gauge IDW error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment-scale hydrological analysis.
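
    The skill scores used in this validation are standard and simple to reproduce. A minimal sketch with synthetic daily series (the gamma-distributed "gauge" data, the noise level, and the 1 mm/day rain threshold are invented for illustration):

```python
import numpy as np

# Hypothetical daily precipitation series (mm/day): gauge vs. a retrieval
rng = np.random.default_rng(0)
gauge = rng.gamma(shape=0.5, scale=4.0, size=365)
retrieval = gauge + rng.normal(0.5, 2.0, size=365)   # wet-biased, noisy

bias = np.mean(retrieval - gauge)                    # mean error
rmse = np.sqrt(np.mean((retrieval - gauge) ** 2))    # root mean square error
corr = np.corrcoef(gauge, retrieval)[0, 1]           # correlation coefficient

# Contingency table for rain / no-rain at a 1 mm/day threshold
obs = gauge >= 1.0
est = retrieval >= 1.0
hits = np.sum(obs & est)
misses = np.sum(obs & ~est)
false_alarms = np.sum(~obs & est)

pod = hits / (hits + misses)                  # probability of detection
far = false_alarms / (hits + false_alarms)    # false alarm ratio
csi = hits / (hits + misses + false_alarms)   # critical success index
print(round(bias, 2), round(rmse, 2), round(pod, 2), round(far, 2))
```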

  19. Quality Reporting of Multivariable Regression Models in Observational Studies: Review of a Representative Sample of Articles Published in Biomedical Journals.

    PubMed

    Real, Jordi; Forné, Carles; Roso-Llorach, Albert; Martínez-Sánchez, Jose M

    2016-05-01

    Controlling for confounders is a crucial step in analytical observational studies, and multivariable models are widely used as statistical adjustment techniques. However, the validation of the assumptions of multivariable regression models (MRMs) should be made clear in scientific reporting. The objective of this study is to review the quality of statistical reporting of the most commonly used MRMs (logistic, linear, and Cox regression) applied in analytical observational studies published between 2003 and 2014 in journals indexed in MEDLINE. We reviewed a representative sample of articles indexed in MEDLINE (n = 428) with an observational design and use of MRMs (logistic, linear, and Cox regression). We assessed the quality of reporting of model assumptions and goodness-of-fit, interactions, sensitivity analyses, crude and adjusted effect estimates, and specification of more than one adjusted model. The tests of underlying assumptions or goodness-of-fit of the MRMs used were described in 26.2% (95% CI: 22.0-30.3) of the articles, and 18.5% (95% CI: 14.8-22.1) reported the interaction analysis. Reporting of all items assessed was higher in articles published in journals with a higher impact factor. A low percentage of articles indexed in MEDLINE that used multivariable techniques provided information demonstrating rigorous application of the model selected as an adjustment method. Given the importance of these methods to the final results and conclusions of observational studies, greater rigor is required when reporting the use of MRMs in the scientific literature.

  20. Sampling strategies based on singular vectors for assimilated models in ocean forecasting systems

    NASA Astrophysics Data System (ADS)

    Fattorini, Maria; Brandini, Carlo; Ortolani, Alberto

    2016-04-01

    Meteorological and oceanographic models need observations not only as a ground truth against which to verify model quality, but also to keep the model forecast error acceptable: through data assimilation techniques that merge measured and modelled data, the natural divergence of numerical solutions from reality can be reduced or controlled and a more reliable solution - called the analysis - is computed. Although this concept is valid in general, its application, especially in oceanography, raises many problems for three main reasons: the difficulty ocean models have in reaching an acceptable state of equilibrium, the high cost of measurements, and the difficulty of carrying them out. The performance of data assimilation procedures depends on the particular observation network in use, well beyond the quality of the background and the assimilation method used. In this study we present results concerning the great impact of the dataset configuration, in particular the measurement positions, on the overall forecasting reliability of an ocean model. The aim is to identify operational criteria to support the design of marine observation networks at the regional scale. In order to identify the observation network that minimizes the forecast error, a methodology based on singular vector decomposition of the tangent linear model is proposed. Such a method gives strong indications of the local error dynamics. In addition, to avoid redundancy in the information contained in the data, a minimal distance among data positions has been chosen on the basis of a spatial correlation analysis of the hydrodynamic fields under investigation. This methodology has been applied to the choice of data positions starting from simplified models, such as an idealized double-gyre model and a quasi-geostrophic one. Model configurations and data assimilation are based on available ROMS routines, in which a variational assimilation algorithm (4D-Var) is included as part of the code. These first applications have provided encouraging results in terms of increased predictability time and reduced forecast error, also improving the quality of the analysis used to recover the real circulation patterns from a first guess quite far from the real state.
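
    The singular vector idea can be illustrated with a toy tangent linear propagator: the leading right singular vector is the initial perturbation that grows fastest over the forecast window, pointing to where observations would constrain the forecast most. A sketch with an invented 2x2 propagator (not the actual ROMS/4D-Var configuration):

```python
import numpy as np

# Toy tangent linear propagator M mapping initial errors to final errors
M = np.array([[1.2, 0.9],
              [0.0, 0.7]])

U, s, Vt = np.linalg.svd(M)
leading_sv = Vt[0]   # fastest-growing initial perturbation (right SV)
growth = s[0]        # its amplification factor over the window
print(growth, leading_sv)
```

    In this picture, observations placed where the leading singular vector has large components suppress the dominant error growth; redundant positions are then pruned with a minimal-distance criterion, as in the spatial correlation analysis described above.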

  1. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    PubMed

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation modeling in the Redundancy Analysis framework based on so-called Extended Redundancy Analysis (ERA) has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites, estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or concomitant indicators, the composite scores are estimated by ignoring the presence of the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), allowing us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method, using a more suitable specification and estimation algorithm, by allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA we propose a simulation study of small samples. Moreover, we propose an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

  2. Composition/Structure/Dynamics of comet and planetary satellite atmospheres

    NASA Technical Reports Server (NTRS)

    Combi, Michael R. (Principal Investigator)

    1995-01-01

    This research program addresses two cases of tenuous planetary atmospheres: comets and Io. The comet atmospheric research seeks to analyze a set of spatial profiles of CN in comet Halley taken in a 7.4-day period in April 1986; to apply a new dust coma model to various observations; and to analyze observations of the inner hydrogen coma, which can be optically thick to the resonance scattering of Lyman-alpha radiation, with the newly developed approach that combines a spherical radiative transfer model with our Monte Carlo H coma model. The Io research seeks to understand the atmospheric escape from Io with a hybrid-kinetic model for neutral gases and plasma given methods and algorithms developed for the study of neutral gas cometary atmospheres and the earth's polar wind and plasmasphere. Progress is reported on cometary Hydrogen Lyman-alpha studies; time-series analysis of cometary spatial profiles; model analysis of the dust comae of comets; and a global kinetic atmospheric model of Io.

  3. A Lagrangian analysis of a sudden stratospheric warming - Comparison of a model simulation and LIMS observations

    NASA Technical Reports Server (NTRS)

    Pierce, R. B.; Remsberg, Ellis E.; Fairlie, T. D.; Blackshear, W. T.; Grose, William L.; Turner, Richard E.

    1992-01-01

    Lagrangian area diagnostics and trajectory techniques are used to investigate the radiative and dynamical characteristics of a spontaneous sudden warming which occurred during a 2-yr Langley Research Center model simulation. The ability of the Langley Research Center GCM to simulate the major features of the stratospheric circulation during such highly disturbed periods is illustrated by comparison of the simulated warming to the observed circulation during the LIMS observation period. The apparent sink of vortex area associated with Rossby wave-breaking accounts for the majority of the reduction of the size of the vortex and also acts to offset the radiatively driven increase in the area occupied by the 'surf zone'. Trajectory analysis of selected material lines substantiates the conclusions from the area diagnostics.

  4. Global Magnetosphere Evolution During 22 June 2015 Geomagnetic Storm as Seen From Multipoint Observations and Comparison With MHD-Ring Current Model

    NASA Astrophysics Data System (ADS)

    Buzulukova, N.; Moore, T. E.; Dorelli, J.; Fok, M. C. H.; Sibeck, D. G.; Angelopoulos, V.; Goldstein, J.; Valek, P. W.; McComas, D. J.

    2015-12-01

    On 22-23 June 2015, a severe geomagnetic storm occurred with a Dst minimum of approximately -200 nT. During this extreme event, multipoint observations of magnetospheric dynamics were obtained by a fleet of geospace spacecraft including MMS, TWINS, the Van Allen Probes, and THEMIS. We present an analysis of satellite data during this event and use a global coupled MHD-ring current model (BATSRUS-CRCM) to connect multipoint observations from different parts of the magnetosphere. The analysis helps to identify different magnetospheric domains and various magnetospheric boundary motions from multipoint measurements. We will explore how the initial disturbance from the solar wind propagates through the magnetosphere, causing energization of plasma in the inner magnetosphere and producing an extreme geomagnetic storm.

  5. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data includes not only large repositories of data from scientific platforms such as satellites and ground observations, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through associated metadata, and larger collections of runs can now also be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to examine overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis as well as case studies will be presented.

  6. Quantitative image analysis of immunohistochemical stains using a CMYK color model

    PubMed Central

    Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W

    2007-01-01

    Background Computer image analysis techniques have decreased effects of observer biases, and increased the sensitivity and the throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities on a 0–255 scale (24-bit image file) as well as compared to the hematoxylin counterstain was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures. 
These characteristics are advantageous for both basic as well as clinical research in an unbiased, reproducible and high throughput evaluation of IHC intensity. PMID:17326824
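
    The Yellow channel computation is a simple RGB-to-CMYK conversion. A minimal sketch (the conversion is the standard naive formula; the example pixel values are invented, and the authors' calibrated pipeline may differ in scaling):

```python
import numpy as np

def yellow_channel(rgb):
    """rgb: float array in [0, 1], shape (..., 3); returns Y on a 0-255 scale."""
    k = 1.0 - rgb.max(axis=-1)                 # Key (black) component
    denom = np.where(k < 1.0, 1.0 - k, 1.0)    # avoid divide-by-zero on black
    y = (1.0 - rgb[..., 2] - k) / denom        # Yellow from the blue channel
    return 255.0 * y

# Brown-ish DAB-stained pixel vs. blue-ish hematoxylin-counterstained pixel
dab = np.array([[0.55, 0.35, 0.15]])
hematoxylin = np.array([[0.35, 0.35, 0.65]])
print(yellow_channel(dab), yellow_channel(hematoxylin))
```

    The brown chromogen yields a high Yellow value while the blue counterstain yields nearly zero, which is the separation the method exploits.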

  7. Incorporating Parallel Computing into the Goddard Earth Observing System Data Assimilation System (GEOS DAS)

    NASA Technical Reports Server (NTRS)

    Larson, Jay W.

    1998-01-01

    Atmospheric data assimilation is a method of combining actual observations with model forecasts to produce a more accurate description of the earth system than the observations or forecast alone can provide. The output of data assimilation, sometimes called the analysis, is a set of regular, gridded datasets of observed and unobserved variables. Analysis plays a key role in numerical weather prediction and is becoming increasingly important for climate research. These applications, and the need for timely validation of scientific enhancements to the data assimilation system, pose computational demands that are best met by distributed parallel software. The mission of the NASA Data Assimilation Office (DAO) is to provide datasets for climate research and to support NASA satellite and aircraft missions. The system used to create these datasets is the Goddard Earth Observing System Data Assimilation System (GEOS DAS). The core components of GEOS DAS are: the GEOS General Circulation Model (GCM), the Physical-space Statistical Analysis System (PSAS), the Observer, the on-line Quality Control (QC) system, the Coupler (which feeds analysis increments back to the GCM), and an I/O package for processing the large amounts of data the system produces (which will be described in another presentation in this session). The discussion will center on the following issues: the computational complexity of the whole GEOS DAS, assessment of the performance of the individual elements of GEOS DAS, and the parallelization strategy for some of the components of the system.

  8. Constraining Plasma Conditions of the IPT via Spectral Analysis of UV & Visible Emissions and Comparing with a Physical Chemistry Model

    NASA Astrophysics Data System (ADS)

    Nerney, E. G.; Bagenal, F.; Yoshioka, K.; Schmidt, C.

    2017-12-01

    Io emits volcanic gases into space at a rate of about a ton per second. The gases become ionized and trapped in Jupiter's strong magnetic field, forming a torus of plasma that emits 2 terawatts of UV emissions. In recent work re-analyzing UV emissions observed by Voyager, Galileo, & Cassini, we found plasma conditions consistent with a physical chemistry model with a neutral source of dissociated sulfur dioxide from Io (Nerney et al., 2017). In further analysis of UV observations from JAXA's Hisaki mission (using our spectral emission model) we constrain the torus composition with ground based observations. The physical chemistry model (adapted from Delamere et al., 2005) is then used to match derived plasma conditions. We correlate the oxygen to sulfur ratio of the neutral source with volcanic eruptions to understand the change in magnetospheric plasma conditions. Our goal is to better understand and constrain both the temporal and spatial variability of the flow of mass and energy from Io's volcanic atmosphere to Jupiter's dynamic magnetosphere.

  9. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator of the expected quadratic error of a candidate marginal model, accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error, whereas naive procedures that ignore such complexities in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations.

  10. Indirect detection constraints on s- and t-channel simplified models of dark matter

    NASA Astrophysics Data System (ADS)

    Carpenter, Linda M.; Colburn, Russell; Goodman, Jessica; Linden, Tim

    2016-09-01

    Recent Fermi-LAT observations of dwarf spheroidal galaxies in the Milky Way have placed strong limits on the gamma-ray flux from dark matter annihilation. In order to produce the strongest limit on the dark matter annihilation cross section, the observations of each dwarf galaxy have typically been "stacked" in a joint-likelihood analysis, utilizing optical observations to constrain the dark matter density profile in each dwarf. These limits have typically been computed only for single annihilation final states, such as bb̄ or τ⁺τ⁻. In this paper, we generalize this approach by producing an independent joint-likelihood analysis to set constraints on models where the dark matter particle annihilates to multiple final-state fermions. We interpret these results in the context of the most popular simplified models, including those with s- and t-channel dark matter annihilation through scalar and vector mediators. We present our results as constraints on the minimum dark matter mass and the mediator sector parameters. Additionally, we compare our simplified model results to those of effective field theory contact interactions in the high-mass limit.
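
    The stacking step is simply a sum of per-dwarf log-likelihood profiles in the annihilation cross section, followed by a profile-likelihood upper limit. A toy sketch with invented J-factors and sensitivities (Gaussian likelihoods stand in for the actual Fermi-LAT Poisson likelihoods):

```python
import numpy as np

sigv = np.logspace(-27, -23, 200)   # <sigma v> grid, cm^3/s

def dwarf_loglike(sigv, jfac, sens):
    # Toy profile: predicted flux ~ J-factor * <sigma v>;
    # Gaussian log-likelihood against a null (zero-signal) observation
    flux = jfac * sigv
    return -0.5 * (flux / sens) ** 2

# Three hypothetical dwarfs (J-factor, flux sensitivity)
profiles = [dwarf_loglike(sigv, j, s)
            for j, s in [(1e18, 1e-9), (3e17, 5e-10), (5e17, 8e-10)]]
joint = np.sum(profiles, axis=0)    # "stacking" = summing log-likelihoods

# One-sided 95% CL upper limit: delta(logL) = -1.35 from the maximum
imax = np.argmax(joint)
below = np.where(joint - joint[imax] < -1.35)[0]
limit = sigv[below[below > imax][0]]
print(limit)
```

    The joint limit is tighter than any single dwarf's limit because every profile penalizes large cross sections simultaneously.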

  11. Support of surgical process modeling by using adaptable software user interfaces

    NASA Astrophysics Data System (ADS)

    Neumuth, T.; Kaschek, B.; Czygan, M.; Goldstein, D.; Strauß, G.; Meixensberger, J.; Burgert, O.

    2010-03-01

    Surgical Process Modeling (SPM) is a powerful method for acquiring data about the evolution of surgical procedures. Surgical process models are used in a variety of use cases, including evaluation studies, requirements analysis and procedure optimization, surgical education, and workflow management scheme design. This work proposes the use of adaptive, situation-aware user interfaces in observation support software for SPM. We developed a method to support the observer's modeling task by using an ontological knowledge base, which drives the graphical user interface and restricts the terminology search space depending on the current situation. The evaluation study shows that the observer's workload was decreased significantly by using adaptive user interfaces. 54 SPM observation protocols were analyzed using the NASA Task Load Index, showing that the adaptive user interface significantly reduces the observer's workload in the effort, mental demand, and temporal demand criteria, helping the observer concentrate on the essential task of modeling the surgical process.

  12. The Application of Systems Analysis and Mathematical Models to the Study of Erythropoiesis During Space Flight

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    Included in the report are: (1) review of the erythropoietic mechanisms; (2) an evaluation of existing models for the control of erythropoiesis; (3) a computer simulation of the model's response to hypoxia; (4) an hypothesis to explain observed decreases in red blood cell mass during weightlessness; (5) suggestions for further research; and (6) an assessment of the role that systems analysis can play in the Skylab hematological program.

  13. The Coronal Analysis of SHocks and Waves (CASHeW) framework

    NASA Astrophysics Data System (ADS)

    Kozarev, Kamen A.; Davey, Alisdair; Kendrick, Alexander; Hammer, Michael; Keith, Celeste

    2017-11-01

    Coronal bright fronts (CBF) are large-scale wavelike disturbances in the solar corona, related to solar eruptions. They are observed (mostly in extreme ultraviolet (EUV) light) as transient bright fronts of finite width, propagating away from the eruption source location. Recent studies of individual solar eruptive events have used EUV observations of CBFs and metric radio type II burst observations to show the intimate connection between waves in the low corona and coronal mass ejection (CME)-driven shocks. EUV imaging with the Atmospheric Imaging Assembly (AIA) instrument on the Solar Dynamics Observatory has proven particularly useful for detecting large-scale short-lived CBFs, which, combined with radio and in situ observations, holds great promise for early CME-driven shock characterization capability. This characterization can further be automated and related to models of particle acceleration to produce estimates of particle fluxes in the corona and in the near-Earth environment early in events. We present a framework for the Coronal Analysis of SHocks and Waves (CASHeW). It combines analysis of NASA Heliophysics System Observatory data products and relevant data-driven models into an automated system for the characterization of off-limb coronal waves and shocks and the evaluation of their capability to accelerate solar energetic particles (SEPs). The system utilizes EUV observations and models written in the Interactive Data Language (IDL). In addition, it leverages analysis tools from the SolarSoft package of libraries, as well as third-party libraries. We have tested the CASHeW framework on a representative list of coronal bright front events. Here we present its features, as well as initial results. With this framework, we hope to contribute to the overall understanding of coronal shock waves and their importance for energetic particle acceleration, as well as to an improved ability to forecast SEP event fluxes.

  14. An Assessment of ECMWF Analyses and Model Forecasts over the North Slope of Alaska Using Observations from the ARM Mixed-Phase Arctic Cloud Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Shaocheng; Klein, Stephen A.; Yio, J. John

    2006-03-11

    European Centre for Medium-Range Weather Forecasts (ECMWF) analysis and model forecast data are evaluated using observations collected during the Atmospheric Radiation Measurement (ARM) October 2004 Mixed-Phase Arctic Cloud Experiment (M-PACE) at its North Slope of Alaska (NSA) site. It is shown that the ECMWF analysis reasonably represents the dynamic and thermodynamic structures of the large-scale systems that affected the NSA during M-PACE. The model-analyzed near-surface horizontal winds, temperature, and relative humidity also agree well with the M-PACE surface measurements. Given the well-represented large-scale fields, the model shows overall good skill in predicting various cloud types observed during M-PACE; however, the physical properties of single-layer boundary layer clouds are in substantial error. At these times, the model substantially underestimates the liquid water path in these clouds, with the concomitant result that the model largely underpredicts the downwelling longwave radiation at the surface and overpredicts the outgoing longwave radiation at the top of the atmosphere. The model also overestimates the net surface shortwave radiation, mainly because of the underestimation of the surface albedo. The problem in the surface albedo is primarily associated with errors in the surface snow prediction. Principally because of the underestimation of the surface downwelling longwave radiation at the times of single-layer boundary layer clouds, the model shows a much larger energy loss (-20.9 W m-2) than the observation (-9.6 W m-2) at the surface during the M-PACE period.

  15. Analysis and modeling of infrasound from a four-stage rocket launch.

    PubMed

    Blom, Philip; Marcillo, Omar; Arrowsmith, Stephen

    2016-06-01

    Infrasound from a four-stage sounding rocket was recorded by several arrays within 100 km of the launch pad. Propagation modeling methods have been applied to the known trajectory to predict infrasonic signals at the ground in order to identify what information might be obtained from such observations. There is good agreement between modeled and observed back azimuths, and predicted arrival times for motor ignition signals match those observed. The signal due to the high-altitude stage ignition is found to be low amplitude, despite predictions of weak attenuation. This lack of signal is possibly due to inefficient aeroacoustic coupling in the rarefied upper atmosphere.

  16. A refined 'standard' thermal model for asteroids based on observations of 1 Ceres and 2 Pallas

    NASA Technical Reports Server (NTRS)

    Lebofsky, Larry A.; Sykes, Mark V.; Tedesco, Edward F.; Veeder, Glenn J.; Matson, Dennis L.

    1986-01-01

    An analysis of ground-based thermal IR observations of 1 Ceres and 2 Pallas in light of their recently determined occultation diameters and small amplitude light curves has yielded a new value for the IR beaming parameter employed in the standard asteroid thermal emission model which is significantly lower than the previous one. When applied to the reduction of thermal IR observations of other asteroids, this new value is expected to yield model diameters closer to actual values. The present formulation incorporates the IAU magnitude convention for asteroids that employs zero-phase magnitudes, including the opposition effect.

  17. A methodology for spectral wave model evaluation

    NASA Astrophysics Data System (ADS)

    Siqueira, S. A.; Edwards, K. L.; Rogers, W. E.

    2017-12-01

    Model evaluation is accomplished by comparing bulk parameters (e.g., significant wave height, energy period, and mean square slope (MSS)) calculated from the model energy spectra with those calculated from buoy energy spectra. Quality control of the observed data and choice of the frequency range from which the bulk parameters are calculated are critical steps in ensuring the validity of the model-data comparison. The compared frequency range of each observation and the analogous model output must be identical, and the optimal frequency range depends in part on the reliability of the observed spectra. National Data Buoy Center 3-m discus buoy spectra are unreliable above 0.3 Hz due to a non-optimal buoy response function correction. As such, the upper end of the spectrum should not be included when comparing a model to these data. Biofouling of Waverider buoys must be detected, as it can harm the hydrodynamic response of the buoy at high frequencies, thereby rendering the upper part of the spectrum unsuitable for comparison. An important consideration is that the intentional exclusion of high-frequency energy from a validation due to data quality concerns (above) can have major implications for validation exercises, especially for parameters such as the third and fourth moments of the spectrum (related to Stokes drift and MSS, respectively); final conclusions can be strongly altered. We demonstrate this by comparing outcomes with and without the exclusion, in a case where a Waverider buoy is believed to be free of biofouling. Determination of the appropriate frequency range is not limited to the observed spectra. Model evaluation involves considering whether all relevant frequencies are included. Guidance to make this decision is based on analysis of observed spectra. Two model frequency lower limits were considered. Energy in the observed spectrum below the model lower limit was calculated for each.
For locations where long swell is a component of the wave climate, omitting the energy in the frequency band between the two lower limits tested can lead to an incomplete characterization of model performance. This methodology was developed to aid in selecting a comparison frequency range that does not needlessly increase computational expense and does not exclude energy to the detriment of model performance analysis.
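    The bulk parameters discussed above are spectral moments of the wave spectrum computed over an explicit comparison frequency range; a minimal sketch of that computation is given below. The cutoff values and the flat test spectrum are illustrative assumptions, not the study's configuration.

```python
import math

def spectral_moment(freqs, spec, n, f_lo, f_hi):
    """Trapezoidal n-th moment of the spectrum S(f) over [f_lo, f_hi]."""
    m = 0.0
    for i in range(len(freqs) - 1):
        f0, f1 = freqs[i], freqs[i + 1]
        if f1 < f_lo or f0 > f_hi:
            continue  # bin entirely outside the comparison range
        m += 0.5 * (spec[i] * f0**n + spec[i + 1] * f1**n) * (f1 - f0)
    return m

def bulk_parameters(freqs, spec, f_lo=0.04, f_hi=0.30, g=9.81):
    """Bulk parameters from spectral moments over the chosen range."""
    m0 = spectral_moment(freqs, spec, 0, f_lo, f_hi)
    m_neg1 = spectral_moment(freqs, spec, -1, f_lo, f_hi)
    m4 = spectral_moment(freqs, spec, 4, f_lo, f_hi)
    hs = 4.0 * math.sqrt(m0)                # significant wave height
    te = m_neg1 / m0                        # energy period
    mss = (2.0 * math.pi)**4 / g**2 * m4    # deep-water mean square slope
    return hs, te, mss
```

    Because MSS weights the fourth moment, truncating the upper limit (e.g., from 0.5 Hz to 0.3 Hz) reduces MSS far more than it reduces significant wave height, which is the sensitivity emphasized in the abstract.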

  18. Dynamic behavior of acrylic acid clusters as quasi-mobile nodes in a model of hydrogel network

    NASA Astrophysics Data System (ADS)

    Zidek, Jan; Milchev, Andrey; Vilgis, Thomas A.

    2012-12-01

    Using a molecular dynamics simulation, we study the thermo-mechanical behavior of a model hydrogel subject to deformation and changes in temperature. The model is found to describe qualitatively poly-lactide-glycolide hydrogels in which acrylic acid (AA) groups are believed to play the role of quasi-mobile nodes in the formation of a network. From our extensive analysis of the structure, formation, and disintegration of the AA groups, we are able to elucidate the relationship between the structure and the viscoelastic behavior of the model hydrogel. Thus, in qualitative agreement with observations, we find a softening of the mechanical response at large deformations, which is enhanced by increasing temperature. Several observables, such as the non-affinity parameter A and the network rearrangement parameter V, indicate the existence of a (temperature-dependent) threshold degree of deformation beyond which the quasi-elastic response of the model system turns over into a plastic (ductile) one. The critical stretching at which the affinity of the deformation is lost can be clearly located in terms of A and V, as well as by analysis of the energy density of the system. The observed stress-strain relationship matches that of known experimental systems.

  19. Predictability of the 1997 and 1998 South Asian Summer Monsoons on the Intraseasonal Time Scale Based on 10 AMIP2 Model Runs

    NASA Technical Reports Server (NTRS)

    Wu, Man Li C.; Schubert, Siegfried; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Predictability of the 1997 and 1998 South Asian summer monsoons is examined using National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalyses and 100 two-year simulations with ten different Atmospheric General Circulation Models (AGCMs) forced with prescribed sea surface temperature (SST). We focus on the intraseasonal variations of the South Asian summer monsoon associated with the Madden-Julian Oscillation (MJO). The NCEP/NCAR reanalysis shows a clear coupling between SST anomalies and upper-level velocity potential anomalies associated with the MJO. We analyze several MJO events that developed during 1997 and 1998, focusing on their coupling with the SST. The same analysis is carried out for the model simulations. Remarkably, the ensemble mean of the two-year AGCM simulations shows a signature of the observed MJO events. The ensemble-mean simulated MJO events are approximately in phase with the observed events, although they are weaker, their period of oscillation is somewhat longer, and their onset is delayed by about ten days compared with the observations. Details of the analysis and comparisons among the ten AMIP2 (Atmospheric Model Intercomparison Project) models will be presented at the conference.

  20. Wave Modeling of the Solar Wind.

    PubMed

    Ofman, Leon

    The acceleration and heating of the solar wind have been studied for decades using satellite observations and models. However, the exact mechanism that leads to solar wind heating and acceleration is poorly understood. In order to improve the understanding of the physical mechanisms involved in these processes, a combination of modeling and observational analysis is required. Recent models constrained by satellite observations show that wave heating in the low-frequency (MHD) and high-frequency (ion-cyclotron) ranges may provide the necessary momentum and heat input to the coronal plasma and produce the solar wind. This review focuses on the results of several recent solar wind modeling studies that include waves explicitly in the MHD and the kinetic regimes. The current status of the understanding of solar wind acceleration and heating by waves is reviewed.

  1. Ionospheric Simulation System for Satellite Observations and Global Assimilative Modeling Experiments (ISOGAME)

    NASA Technical Reports Server (NTRS)

    Pi, Xiaoqing; Mannucci, Anthony J.; Verkhoglyadova, Olga P.; Stephens, Philip; Wilson, Brian D.; Akopian, Vardan; Komjathy, Attila; Lijima, Byron A.

    2013-01-01

    ISOGAME is designed and developed to assess quantitatively the impact of new observation systems on the capability of imaging and modeling the ionosphere. With ISOGAME, one can perform observing system simulation experiments (OSSEs). A typical OSSE using ISOGAME would involve: (1) simulating various ionospheric conditions on global scales; (2) simulating ionospheric measurements made from a constellation of low-Earth-orbiters (LEOs), particularly Global Navigation Satellite System (GNSS) radio occultation data, and from ground-based global GNSS networks; (3) conducting ionospheric data assimilation experiments with the Global Assimilative Ionospheric Model (GAIM); and (4) analyzing modeling results with visualization tools. ISOGAME can provide a quantitative assessment of the accuracy of assimilative modeling with the observation system of interest. Observation systems other than those based on GNSS can also be analyzed. The system is composed of a suite of software that combines GAIM, including a 4D first-principles ionospheric model and data assimilation modules; the International Reference Ionosphere (IRI) model, developed by the international ionospheric research community; an observation simulator; visualization software; and orbit design, simulation, and optimization software. The core GAIM model used in ISOGAME is based on the GAIM++ code (written in C++), which includes a new high-fidelity geomagnetic field representation (multi-dipole). New visualization tools and analysis algorithms for the OSSEs are now part of ISOGAME.

  2. Gender differences in exercise dependence and eating disorders in young adults: a path analysis of a conceptual model.

    PubMed

    Meulemans, Shelli; Pribis, Peter; Grajales, Tevni; Krivak, Gretchen

    2014-11-05

    The purpose of our study was to assess the prevalence of exercise dependence (EXD) among college students and to investigate the role of EXD and gender in exercise behavior and eating disorders. Excessive exercise can become an addiction known as exercise dependence. In our population of 517 college students, 3.3% were at risk for EXD and 8% were at risk for an eating disorder. We used path analysis, the simplest case of structural equation modeling (SEM), to investigate the effects of EXD and exercise behavior on eating disorders. We observed a small direct effect of gender on eating disorders. In females, we observed significant direct effects of exercise behavior (r = -0.17, p = 0.009) and EXD (r = 0.34, p < 0.001) on eating pathology. We also observed an indirect effect of exercise behavior on eating pathology (r = 0.16) through EXD (r = 0.48, r2 = 0.23, p < 0.001). In females, the total variance of eating pathology explained by the SEM model was 9%. In males, we observed a direct effect of EXD (r = 0.23, p < 0.001) on eating pathology. We also observed an indirect effect of exercise behavior on eating pathology (r = 0.11) through EXD (r = 0.49, r2 = 0.24, p < 0.001). In males, the total variance of eating pathology explained by the SEM model was 5%.
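    The reported indirect effects follow the standard path-analysis rule that a mediated effect is the product of its path coefficients; a small check using the coefficients quoted in the abstract:

```python
def indirect_effect(a, b):
    """Indirect effect along a mediated path a -> mediator -> outcome:
    the product of the two standardized path coefficients."""
    return a * b

# Female path: exercise behavior -> EXD (0.48), EXD -> eating pathology (0.34)
female = indirect_effect(0.48, 0.34)  # ~0.16, matching the reported value
# Male path: exercise behavior -> EXD (0.49), EXD -> eating pathology (0.23)
male = indirect_effect(0.49, 0.23)    # ~0.11, matching the reported value
```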

  3. Learning Together: The Role of the Online Community in Army Professional Education

    DTIC Science & Technology

    2005-05-26

    Kolb, Experiential Learning: Experience as the Source of Learning and Development... Experiential Learning. One model frequently discussed is experiential learning.15 Kolb develops this model through analysis of older models. One of the... observations about the experience. Kolb develops several characteristics of adult learning. Kolb discusses his model of experiential learning

  4. The Effects of Sex Typing and Sex Appropriateness of Modeled Behavior on Children's Imitation

    ERIC Educational Resources Information Center

    Barkley, Russell A.; And Others

    1977-01-01

    Analysis of the modeled behaviors of 64 children from 4 to 11 years of age indicated that a major factor in sex differences in children's imitation is the sex appropriateness of the modeled behavior relative to the observer when a sex-typed behavior is modeled. (Author/JMB)

  5. An Objective Verification of the North American Mesoscale Model for Kennedy Space Center and Cape Canaveral Air Force Station

    NASA Technical Reports Server (NTRS)

    Bauman, William H., III

    2010-01-01

    The 45th Weather Squadron (45 WS) Launch Weather Officers use the 12-km resolution North American Mesoscale (NAM) model (MesoNAM) text and graphical product forecasts extensively to support launch weather operations. However, the actual performance of the model at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) has not been measured objectively. In order to have tangible evidence of model performance, the 45 WS tasked the Applied Meteorology Unit to conduct a detailed statistical analysis of model output compared to observed values. The model products are provided to the 45 WS by ACTA, Inc. and include hourly forecasts from 0 to 84 hours based on model initialization times of 00, 06, 12 and 18 UTC. The objective analysis compared the MesoNAM forecast winds, temperature and dew point, as well as the changes in these parameters over time, to the observed values from the sensors in the KSC/CCAFS wind tower network. Objective statistics will give the forecasters knowledge of the model's strengths and weaknesses, which will result in improved forecasts for operations.

  6. Scientific analysis of satellite ranging data

    NASA Technical Reports Server (NTRS)

    Smith, David E.

    1994-01-01

    A network of satellite laser ranging (SLR) tracking systems with continuously improving accuracies is challenging the modelling capabilities of analysts worldwide. Various data analysis techniques have yielded many advances in the development of orbit, instrument and Earth models. The direct measurement of the distance to the satellite provided by the laser ranges has given us a simple metric which links the results obtained by diverse approaches. Different groups have used SLR data, often in combination with observations from other space geodetic techniques, to improve models of the static geopotential, the solid Earth, ocean tides, and atmospheric drag for low Earth satellites. Radiation pressure models and models of other non-conservative forces for satellite orbits above the atmosphere have been developed to exploit the full accuracy of the latest SLR instruments. SLR is the baseline tracking system for the altimeter missions TOPEX/Poseidon and ERS-1, and will play an important role in providing the reference frame for locating the geocentric position of the ocean surface, in providing an unchanging range standard for altimeter calibration, and in improving the geoid models to separate gravitational from ocean circulation signals seen in the sea surface. However, even with the many improvements in the models used to support the orbital analysis of laser observations, there remain systematic effects which limit the full exploitation of SLR accuracy today.

  7. Direct Comparisons of Ice Cloud Macro- and Microphysical Properties Simulated by the Community Atmosphere Model CAM5 with HIPPO Aircraft Observations

    NASA Astrophysics Data System (ADS)

    Wu, C.; Liu, X.; Diao, M.; Zhang, K.; Gettelman, A.

    2015-12-01

    A dominant source of uncertainty in climate system modeling lies in the representation of cloud processes. This is not only because of the great complexity of cloud microphysics, but also because of the large variations of cloud amount and macroscopic properties in time and space. In this study, the cloud properties simulated by the Community Atmosphere Model version 5.4 (CAM5.4) are evaluated using the HIAPER Pole-to-Pole Observations (HIPPO, 2009-2011). CAM5.4 is driven by the meteorology (U, V, and T) from the GEOS5 analysis, while water vapor, hydrometeors and aerosols are calculated by the model itself. For direct comparison of CAM5.4 with the HIPPO observations, model output is collocated with the HIPPO flights. Generally, the model is able to capture specific cloud systems at meso- to large scales. In total, the model can reproduce 80% of observed cloud occurrences inside model grid boxes, and an even higher fraction (93%) for ice clouds (T≤-40°C). However, the model produces many clouds that are not present in the observations. The model also simulates significantly larger cloud fractions, including for ice clouds, than observed. Further analysis shows that the overestimation results from a bias in relative humidity (RH) in the model. The RH bias can be attributed mostly to discrepancies in water vapor, and to a lesser extent to those in temperature. Down to the micro-scale level of ice clouds, the model simulates reasonably well the magnitude of ice and snow number concentrations (Ni, with diameter larger than 75 μm). However, the model simulates fewer occurrences of Ni > 50 L-1. This can be partially ascribed to the low bias of aerosol number concentration (Naer, with diameter between 0.1-1 μm) simulated by the model. Moreover, the model significantly underestimates both the number mean diameter (Di,n) and the volume mean diameter (Di,v) of ice/snow.
The result shows that the underestimation may be related to a weaker positive relationship between Di,n and Naer and/or the underestimation of Naer. Finally, it is suggested that better representation of sub-grid variability of meteorology (e.g., water vapor) is needed to improve the formation and evolution of ice clouds in the model.
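    Collocating model output with flight tracks, as done here, amounts to matching each aircraft sample to the nearest model grid point; a deliberately simplified nearest-neighbour sketch (flat-Earth distance, no time matching) is:

```python
def collocate(track, grid_points):
    """Nearest-neighbour collocation: for each flight sample (lat, lon),
    return the index of the closest model grid point. Flat-Earth squared
    distance is used, which is adequate for an illustration only."""
    matched = []
    for lat, lon in track:
        d2 = [(lat - glat) ** 2 + (lon - glon) ** 2 for glat, glon in grid_points]
        matched.append(d2.index(min(d2)))
    return matched
```

    A production collocation would additionally match on time and altitude and use great-circle distance, but the indexing logic is the same.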

  8. AVO Analysis of a Shallow Gas Accumulation in the Marmara Sea

    NASA Astrophysics Data System (ADS)

    Er, M.; Dondurur, D.; Çifçi, G.

    2012-04-01

    In recent years, amplitude versus offset (AVO) analysis has been widely used to detect and classify gas anomalies in wide-offset seismic data. Bright spots, which are among the significant indicators of hydrocarbon accumulations, can also be identified successfully using AVO analysis. A bright spot anomaly was identified on the multi-channel seismic data collected by the R/V K. Piri Reis research vessel in the Marmara Sea in 2008. On the prestack seismic data, the associated AVO anomalies are clearly identified on the supergathers. Near- and far-offset stack sections were plotted to show the amplitude changes at different offsets, and the bright amplitudes were observed on the far-offset stack. AVO analysis was applied to the observed bright spot anomaly following the standard data processing steps. The analysis includes the preparation of intercept, gradient, and fluid factor sections of AVO attributes. The top and base boundaries of the gas-bearing sediment were delineated by the intercept-gradient crossplot method. 1D modelling was also performed to identify AVO classes, and the models were compared with the analysis results. It is interpreted that the bright spot anomaly arises from a shallow gas accumulation. In addition, the gas saturation was also estimated from the P-wave velocity. AVO analysis indicated Class 3 and Class 4 anomalies on the bright spot.
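    Intercept and gradient sections like those described are commonly obtained by fitting the two-term Shuey approximation R(θ) ≈ A + B sin²θ to each gather; a minimal least-squares sketch (not the authors' processing flow) is:

```python
import math

def avo_intercept_gradient(angles_deg, reflectivities):
    """Least-squares fit of the two-term Shuey approximation
    R(theta) = A + B * sin^2(theta), returning (intercept A, gradient B)."""
    xs = [math.sin(math.radians(a)) ** 2 for a in angles_deg]
    n = len(xs)
    sx, sy = sum(xs), sum(reflectivities)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, reflectivities))
    gradient = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - gradient * sx) / n
    return intercept, gradient
```

    On the intercept-gradient crossplot, a strongly negative intercept with a negative gradient falls in the Class 3 quadrant, while a negative intercept with a positive gradient suggests Class 4.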

  9. A Linear Stochastic Dynamical Model of ENSO. Part II: Analysis.

    NASA Astrophysics Data System (ADS)

    Thompson, C. J.; Battisti, D. S.

    2001-02-01

    In this study the behavior of a linear, intermediate model of ENSO is examined under stochastic forcing. The model was developed in a companion paper (Part I) and is derived from the Zebiak-Cane ENSO model. Four variants of the model are used whose stabilities range from slightly damped to moderately damped. Each model is run as a simulation while being perturbed by noise that is uncorrelated (white) in space and time. The statistics of the model output show the moderately damped models to be more realistic than the slightly damped models. The moderately damped models have power spectra that are quantitatively quite similar to observations, and a seasonal pattern of variance that is qualitatively similar to observations. All models produce ENSOs that are phase locked to the annual cycle, and all display the 'spring barrier' characteristic in their autocorrelation patterns, though in the models this 'barrier' occurs during the summer and is less intense than in the observations (inclusion of nonlinear effects is shown to partially remedy this deficiency). The more realistic models also show a decadal variability in the lagged autocorrelation pattern that is qualitatively similar to observations. Analysis of the models shows that the greatest part of the variability comes from perturbations that project onto the first singular vector, which then grow rapidly into the ENSO mode. Essentially, the model output represents many instances of the ENSO mode, with random phase and amplitude, stimulated by the noise through the optimal transient growth of the singular vectors. The limit of predictability for each model is calculated and it is shown that the more realistic (moderately damped) models have worse potential predictability (9-15 months) than the deterministic chaotic models that have been studied widely in the literature.
    The predictability limits are strongly correlated with the stability of the models' ENSO mode: the more highly damped models have much shorter limits of predictability. A comparison of the two most realistic models shows that even though these models have similar statistics, they have very different predictability limits. The models have a strong seasonal dependence in their predictability limits. The results of this study (with the companion paper) suggest that the linear, stable dynamical model of ENSO is indeed a plausible hypothesis for the observed ENSO. With very reasonable levels of stochastic forcing, the model produces realistic levels of variance, has a realistic spectrum, and qualitatively reproduces the observed seasonal pattern of variance, the autocorrelation pattern, and the ENSO-like decadal variability.
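    The optimal transient growth mechanism invoked above can be illustrated with any damped, non-normal propagator: the leading right singular vector is the perturbation that grows most over one step even though every eigenvalue is damped. The 2×2 matrix below is a toy example, not the Part I model.

```python
import numpy as np

# A damped but non-normal one-step propagator: both eigenvalues lie inside
# the unit circle, yet the off-diagonal coupling permits transient growth.
A = np.array([[0.8, 5.0],
              [0.0, 0.7]])

# SVD of the propagator: the first right singular vector is the perturbation
# whose one-step amplification (the first singular value) is largest.
U, s, Vt = np.linalg.svd(A)
optimal_perturbation = Vt[0]
growth = s[0]
```

    Here the growth factor exceeds 1 even though the eigenvalues are 0.8 and 0.7, so white noise projecting onto this vector is preferentially amplified into the damped mode, which is the stochastic-optimal picture the abstract describes.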

  10. Improving Estimates and Forecasts of Lake Carbon Pools and Fluxes Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zwart, J. A.; Hararuk, O.; Prairie, Y.; Solomon, C.; Jones, S.

    2017-12-01

    Lakes are biogeochemical hotspots on the landscape, contributing significantly to the global carbon cycle despite their small areal coverage. Observations and models of lake carbon pools and fluxes are rarely explicitly combined through data assimilation despite significant use of this technique in other fields with great success. Data assimilation adds value to both observations and models by constraining models with observations of the system and by leveraging knowledge of the system formalized by the model to objectively fill information gaps. In this analysis, we highlight the utility of data assimilation in lake carbon cycling research by using the Ensemble Kalman Filter to combine simple lake carbon models with observations of lake carbon pools. We demonstrate the use of data assimilation to improve a model's representation of lake carbon dynamics, to reduce uncertainty in estimates of lake carbon pools and fluxes, and to improve the accuracy of carbon pool size estimates relative to estimates derived from observations alone. Data assimilation techniques should be embraced as valuable tools for lake biogeochemists interested in learning about ecosystem dynamics and forecasting ecosystem states and processes.
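    The Ensemble Kalman Filter update used in this kind of analysis can be sketched for a single scalar carbon pool; the perturbed-observation form below is a minimal illustration with made-up numbers, not the authors' lake model.

```python
import random

def enkf_update(ensemble, obs, obs_err_std, seed=0):
    """One scalar EnKF analysis step with perturbed observations.

    ensemble    : list of background (forecast) states, e.g. DOC pool sizes
    obs         : observed pool size
    obs_err_std : observation error standard deviation
    """
    rng = random.Random(seed)
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # background variance
    gain = var / (var + obs_err_std ** 2)                   # Kalman gain
    # Each member assimilates a perturbed copy of the observation.
    return [x + gain * (obs + rng.gauss(0.0, obs_err_std) - x) for x in ensemble]
```

    The analysis ensemble mean moves toward the observation by a fraction set by the Kalman gain, and the ensemble spread shrinks accordingly, which is how assimilation reduces uncertainty in the pool estimates.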

  11. Observerʼs mathematics applications to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Khots, B.; Khots, D.

    2014-12-01

    When we consider and analyze physical events with the purpose of creating corresponding models, we often assume that the mathematical apparatus used in modeling is infallible. In particular, this relates to the use of infinity in various aspects and the use of Newton's definition of a limit in analysis. We believe that this is where the main problem lies in the contemporary study of nature. This work considers physical aspects in a setting of arithmetic, algebra, geometry, analysis, and topology provided by Observer's Mathematics (see www.mathrelativity.com). In this paper, we consider Dirac equations for free electrons. Certain results and communications pertaining to solutions of these problems are provided.

  12. Degeneracy between nonadiabatic dark energy models and Λ CDM : Integrated Sachs-Wolfe effect and the cross correlation of CMB with galaxy clustering data

    NASA Astrophysics Data System (ADS)

    Velten, Hermano; Fazolo, Raquel Emy; von Marttens, Rodrigo; Gomes, Syrios

    2018-05-01

    As recently pointed out in [Phys. Rev. D 96, 083502 (2017), 10.1103/PhysRevD.96.083502], the evolution of linear matter perturbations in nonadiabatic dynamical dark energy models is almost indistinguishable (quasi-degenerate) from that in the standard Λ CDM scenario. In this work we extend this analysis to CMB observables, in particular the integrated Sachs-Wolfe effect and its cross-correlation with large-scale structure. We find that this feature persists for these CMB-related observables, reinforcing the conclusion that new probes and analyses are necessary to reveal nonadiabatic features in the dark energy sector.

  13. THE NANOGRAV NINE-YEAR DATA SET: OBSERVATIONS, ARRIVAL TIME MEASUREMENTS, AND ANALYSIS OF 37 MILLISECOND PULSARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arzoumanian, Zaven; Brazier, Adam; Chatterjee, Shami

    2015-11-01

    We present high-precision timing observations spanning up to nine years for 37 millisecond pulsars monitored with the Green Bank and Arecibo radio telescopes as part of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) project. We describe the observational and instrumental setups used to collect the data, and the methodology applied for calculating pulse times of arrival; these include novel methods for measuring instrumental offsets and characterizing low signal-to-noise ratio timing results. The time-of-arrival data are fit to a physical timing model for each source, including terms that characterize time-variable dispersion measure and frequency-dependent pulse shape evolution. In conjunction with the timing model fit, we have performed a Bayesian analysis of a parameterized timing noise model for each source, and detect evidence for excess low-frequency, or “red,” timing noise in 10 of the pulsars. For 5 of these cases this is likely due to interstellar medium propagation effects rather than intrinsic spin variations. Subsequent papers in this series will present further analysis of this data set aimed at detecting or limiting the presence of nanohertz-frequency gravitational wave signals.

  14. Tropospheric delay ray tracing applied in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, D. S.; Gipson, John M.

    2014-12-01

    Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI (very long baseline interferometry) analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium-Range Weather Forecasts data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption is not true, we have instead determined the ray trace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA Goddard Space Flight Center Goddard Earth Observing System version 5 numerical weather model. When applied in VLBI analysis, baseline length repeatabilities were improved compared with using the VMF1 mapping function model for 72% of the baselines and site vertical repeatabilities were better for 11 of 13 sites during the 2 week CONT11 observing period in September 2011. When applied to a larger data set (2011-2013), we see a similar improvement in baseline length and also in site position repeatabilities for about two thirds of the stations in each of the site topocentric components.
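    The azimuthal-symmetry assumption the authors relax enters through the mapping function: a single elevation-dependent factor scales the zenith delay regardless of azimuth. A sketch of the continued-fraction form used by VMF1/Niell-type mapping functions, with illustrative (not fitted) coefficients, is:

```python
import math

def mapping_function(elev_deg, a=1.2e-3, b=2.9e-3, c=62.6e-3):
    """Continued-fraction mapping function of the VMF1/Niell form.
    The coefficients a, b, c here are illustrative placeholders."""
    s = math.sin(math.radians(elev_deg))
    top = 1.0 + a / (1.0 + b / (1.0 + c))
    bot = s + a / (s + b / (s + c))
    return top / bot  # normalized so the factor is exactly 1 at zenith

def slant_delay(zenith_delay_m, elev_deg):
    """Azimuthally symmetric slant delay: identical at every azimuth."""
    return zenith_delay_m * mapping_function(elev_deg)
```

    Ray tracing replaces this single elevation-dependent factor with a delay integrated along each observation's actual path through the refractivity field, so the result can differ with azimuth.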

  15. A dynamical study of Galactic globular clusters under different relaxation conditions

    NASA Astrophysics Data System (ADS)

    Zocchi, A.; Bertin, G.; Varri, A. L.

    2012-03-01

    Aims: We perform a systematic combined photometric and kinematic analysis of a sample of globular clusters under different relaxation conditions, based on their core relaxation time (as listed in available catalogs), by means of two well-known families of spherical stellar dynamical models. Systems characterized by shorter relaxation time scales are expected to be better described by isotropic King models, while less relaxed systems might be interpreted by means of non-truncated, radially-biased anisotropic f(ν) models, originally designed to represent stellar systems produced by a violent relaxation formation process and applied here for the first time to the study of globular clusters. Methods: The comparison between dynamical models and observations is performed by fitting simultaneously surface brightness and velocity dispersion profiles. For each globular cluster, the best-fit model in each family is identified, along with a full error analysis on the relevant parameters. Detailed structural properties and mass-to-light ratios are also explicitly derived. Results: We find that King models usually offer a good representation of the observed photometric profiles, but often lead to less satisfactory fits to the kinematic profiles, independently of the relaxation condition of the systems. For some less relaxed clusters, f(ν) models provide a good description of both observed profiles. Some derived structural characteristics, such as the total mass or the half-mass radius, turn out to be significantly model-dependent. The analysis confirms that, to answer some important dynamical questions that bear on the formation and evolution of globular clusters, it would be highly desirable to acquire larger numbers of accurate kinematic data-points, well distributed over the cluster field. Appendices are available in electronic form at http://www.aanda.org

  16. The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2)

    NASA Technical Reports Server (NTRS)

    Gelaro, Ronald; McCarty, Will; Randles, Cynthia; Darmenov, Anton; Bosilovich, Michael G.; Cullather, Richard; Buchard, Virginie; Gu, Wei; Putman, William; Schubert, Siegfried D.; hide

    2017-01-01

    The Modern-Era Retrospective Analysis for Research and Applications, Version 2 (MERRA-2) is the latest atmospheric reanalysis of the modern satellite era produced by NASA's Global Modeling and Assimilation Office (GMAO). MERRA-2 assimilates observation types not available to its predecessor, MERRA, and includes updates to the Goddard Earth Observing System (GEOS) model and analysis scheme so as to provide a viable ongoing climate analysis beyond MERRA's terminus. While addressing known limitations of MERRA, MERRA-2 is also intended to be a development milestone for a future integrated Earth system analysis (IESA) currently under development at GMAO. This paper provides an overview of the MERRA-2 system and various performance metrics. Among the advances in MERRA-2 relevant to IESA are the assimilation of aerosol observations, several improvements to the representation of the stratosphere including ozone, and improved representations of cryospheric processes. Other improvements in the quality of MERRA-2 compared with MERRA include the reduction of some spurious trends and jumps related to changes in the observing system, and reduced biases and imbalances in aspects of the water cycle. Remaining deficiencies are also identified. Production of MERRA-2 began in June 2014 in four processing streams, and converged to a single near-real-time stream in mid-2015. MERRA-2 products are accessible online through the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC).

  17. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate-constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways of parameterizing the rate constants in the model, global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, and results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions of the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends strongly on adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models for understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
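    For a single organism and congener, a rate-constant bioaccumulation model reduces to a linear uptake-elimination balance; a minimal forward-Euler sketch with hypothetical rate constants (not the paper's calibrated values) is:

```python
def simulate_body_burden(c_water, k_uptake, k_elim, c0=0.0, dt=0.1, steps=2000):
    """Forward-Euler integration of dC/dt = k_uptake * C_water - k_elim * C,
    a single-compartment rate-constant bioaccumulation model."""
    c = c0
    for _ in range(steps):
        c += dt * (k_uptake * c_water - k_elim * c)
    return c
```

    At steady state the body burden approaches k_uptake * C_water / k_elim, the kinetic counterpart of a bioaccumulation factor; food-web models chain many such compartments through dietary uptake terms.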

  18. Transient segregation behavior in Cd1-xZnxTe with low Zn content-A qualitative and quantitative analysis

    NASA Astrophysics Data System (ADS)

    Neubert, M.; Jurisch, M.

    2015-06-01

    The paper analyzes experimental compositional profiles of Vertical Bridgman (VB, VGF) grown (Cd,Zn)Te crystals reported in the literature. The origin of the observed axial ZnTe distribution profiles is attributed to dendritic growth after initial nucleation from supercooled melts. The analysis was carried out using a boundary layer model that provides a very good approximation of the experimental data. Besides the discussion of the qualitative results, a quantitative analysis of the fitted model parameters is presented, as far as the utilized model permits.
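    Boundary-layer descriptions of segregation, such as the one used in this analysis, are commonly written in the Burton-Prim-Slichter form, in which the effective segregation coefficient depends on the growth rate V, the boundary-layer thickness δ, and the solute diffusivity D. A sketch with illustrative parameter values only:

```python
import math

def k_effective(k0, growth_rate, delta, diffusivity):
    """Burton-Prim-Slichter effective segregation coefficient:
    k_eff = k0 / (k0 + (1 - k0) * exp(-V * delta / D))."""
    return k0 / (k0 + (1.0 - k0) * math.exp(-growth_rate * delta / diffusivity))
```

    For slow growth k_eff tends to the equilibrium value k0, while for fast growth it tends to 1, so transients in the effective growth rate (as during dendritic growth from a supercooled melt) leave their imprint on the axial composition profile.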

  19. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by the integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward a better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models and their large number of model parameters, the application of these spatial data sets to terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independence from the whole model. For example, the snow sub-model is optimized first using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, Support Vector Regression (SVR); Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivity was initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases.
    Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and that using multiple satellite observations within this framework is an effective way to improve terrestrial biosphere models.
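A minimal sketch of the tiered calibration idea described in this record, with toy linear sub-models and invented data standing in for Biome-BGC and the satellite products: each tier is fit against its own constraint by least squares and then frozen before the next tier is fit.

```python
import numpy as np

def fit_scale(model, obs):
    """Least-squares scale factor b minimizing ||b*model - obs||^2."""
    return float(np.dot(model, obs) / np.dot(model, model))

# Tier 1: a "snow" sub-model fit to a MODIS-like snow-cover series
# (all numbers here are illustrative, not real satellite data).
snow_model = np.array([0.2, 0.5, 0.9, 0.4])
snow_obs   = np.array([0.25, 0.55, 1.0, 0.45])
b_snow = fit_scale(snow_model, snow_obs)

# Tier 2: an "ET" sub-model takes the frozen snow scale as a fixed
# input, then is itself fit against a satellite-based ET product.
et_model = b_snow * np.array([1.0, 2.0, 3.0, 2.5])
et_obs   = np.array([1.1, 2.1, 3.2, 2.6])
b_et = fit_scale(et_model, et_obs)

# misfit after the second tier is calibrated
resid = np.linalg.norm(b_et * et_model - et_obs)
```

The point of the tiering is visible in the data flow: the second fit never revisits `b_snow`, which keeps each tier's parameter search small and nearly independent of the rest of the model.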

  20. TOWARDS AN IMPROVED UNDERSTANDING OF SIMULATED AND OBSERVED CHANGES IN EXTREME PRECIPITATION

    EPA Science Inventory

    The evaluation of climate model precipitation is expected to reveal biases in simulated mean and extreme precipitation which may be a result of coarse model resolution or inefficiencies in the precipitation generating mechanisms in models. The analysis of future extreme precip...

  1. Development of the Integrated Communication Model

    ERIC Educational Resources Information Center

    Ho, Hua-Kuo

    2008-01-01

    Human communication is a critical issue in personal life. It also should be the indispensable core element of general education curriculum in universities and colleges. Based on literature analysis and the author's clinical observation, the importance of human communication, functions of model, and often seen human communication models were…

  2. Time-dependent inhomogeneous jet models for BL Lac objects

    NASA Technical Reports Server (NTRS)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-01-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  3. Time-dependent inhomogeneous jet models for BL Lac objects

    NASA Astrophysics Data System (ADS)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  4. Problem Solving Model for Science Learning

    NASA Astrophysics Data System (ADS)

    Alberida, H.; Lufri; Festiyed; Barlian, E.

    2018-04-01

    This research aims to develop a problem-solving model for science learning in junior high school. The learning model was developed using the ADDIE model. The analysis phase includes curriculum analysis, analysis of students of SMP Kota Padang, analysis of SMP science teachers, learning analysis, and a literature review. The design phase includes planning the product, a problem-solving model for science learning, which consists of syntax, reaction principle, social system, support system, instructional impact, and support. The problem-solving model is implemented in science learning to improve students' science process skills. The development stage consists of three steps: a) designing a prototype, b) performing a formative evaluation, and c) revising the prototype. The implementation stage was carried out through a limited trial, conducted on 24 and 26 August 2015 in Class VII 2 of SMPN 12 Padang. The evaluation phase was conducted in the form of experiments at SMPN 1 Padang, SMPN 12 Padang, and SMP National Padang. Based on this development research, the syntax of the problem-solving model for science learning in junior high school consists of introduction, observation, initial problems, data collection, data organization, data analysis/generalization, and communicating.

  5. Classification of Clouds and Deep Convection from GEOS-5 Using Satellite Observations

    NASA Technical Reports Server (NTRS)

    Putman, William; Suarez, Max

    2010-01-01

    With the increased resolution of global atmospheric models and the push toward global cloud resolving models, the resemblance of model output to satellite observations has become strikingly similar. As we progress with our adaptation of the Goddard Earth Observing System Model, Version 5 (GEOS-5) as a high resolution cloud system resolving model, evaluation of cloud properties and deep convection require in-depth analysis beyond a visual comparison. Outgoing long-wave radiation (OLR) provides a sufficient comparison with infrared (IR) satellite imagery to isolate areas of deep convection. We have adopted a binning technique to generate a series of histograms for OLR which classify the presence and fraction of clear sky versus deep convection in the tropics that can be compared with a similar analyses of IR imagery from composite Geostationary Operational Environmental Satellite (GOES) observations. We will present initial results that have been used to evaluate the amount of deep convective parameterization required within the model as we move toward cloud system resolving resolutions of 10- to 1-km globally.
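The binning technique this record describes can be sketched in a few lines: histogram the OLR field and classify cells below a deep-convection cut-off versus above a clear-sky cut-off. The threshold values and the sample field below are illustrative assumptions, not the GEOS-5/GOES values.

```python
import numpy as np

# toy tropical OLR sample, W m^-2 (invented values)
olr = np.array([120.0, 180.0, 210.0, 290.0, 300.0, 140.0, 275.0, 95.0])

# histogram of OLR in 40 W m^-2 bins, as in a binned comparison
bins = np.arange(80.0, 321.0, 40.0)
hist, edges = np.histogram(olr, bins=bins)

# hypothetical classification thresholds
DEEP_CONVECTION = 160.0   # OLR below this -> deep convective cloud
CLEAR_SKY       = 270.0   # OLR above this -> mostly clear sky

frac_convective = np.mean(olr < DEEP_CONVECTION)
frac_clear      = np.mean(olr > CLEAR_SKY)
```

Applying the same binning to model OLR and to IR-derived brightness temperatures converted to OLR gives two comparable histograms, which is what makes the fraction-based comparison resolution-independent.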

  6. Morphology of the winter anomaly in NmF2 and Total Electron Content

    NASA Astrophysics Data System (ADS)

    Yasyukevich, Yury; Ratovsky, Konstantin; Yasyukevich, Anna; Klimenko, Maksim; Klimenko, Vladimir; Chirik, Nikolay

    2017-04-01

    We analyzed the winter anomaly manifestation in the F2 peak electron density (NmF2) and Total Electron Content (TEC) based on observation data and model calculation results. For the analysis we used 1998-2015 TEC Global Ionospheric Maps (GIM), NmF2 data from ground-based ionosondes, and COSMIC, CHAMP, and GRACE radio occultation data. We used the Global Self-consistent Model of the Thermosphere, Ionosphere, and Protonosphere (GSM TIP) and the International Reference Ionosphere model (IRI-2012). Based on the observation data and model calculation results, we constructed maps of the winter anomaly intensity in TEC and NmF2 for different solar and geomagnetic activity levels. The winter anomaly intensity was found to be higher in NmF2 than in TEC according to both observation and modeling. In this report we show the similarities and differences in the winter anomaly as revealed in the experimental data and model results.

  7. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. 
Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. 
The programs con
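The regression scheme this report documents, a weighted least-squares objective minimized by Gauss-Newton with perturbation-based sensitivities, can be sketched as follows. The exponential "process model" and all numbers are stand-ins for the user-supplied model; nothing here is UCODE_2005 code.

```python
import numpy as np

def process_model(p, x):
    """Toy process model producing simulated equivalents."""
    return p[0] * np.exp(-p[1] * x)

x   = np.linspace(0.0, 2.0, 9)
obs = process_model(np.array([2.0, 1.5]), x)   # synthetic observations
w   = np.ones_like(obs)                        # observation weights

p = np.array([1.0, 1.0])                       # initial parameter values
for _ in range(20):
    r = obs - process_model(p, x)              # residuals
    # forward-difference sensitivities (Jacobian), one column per parameter
    J = np.empty((x.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = 1e-6
        J[:, j] = (process_model(p + dp, x) - process_model(p, x)) / 1e-6
    # Gauss-Newton step from the weighted normal equations:
    # (J^T W J) step = J^T W r
    W = np.diag(w)
    step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    p = p + step
```

With exact synthetic data the iteration recovers the generating parameters; in practice, as the report notes, perturbation sensitivities are less accurate than model-computed ones, which is why the step size and convergence behavior need monitoring.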

  8. Analysis of time dependent phenomena observed with the LPSP OSO-8 instrument

    NASA Technical Reports Server (NTRS)

    Leibacher, J. W.

    1979-01-01

    The dynamics of the solar photosphere and chromosphere are studied. Observations obtained by the Laboratoire de Physique Stellaire et Planetaire's (LPSP) ultraviolet spectrometer onboard the OSO-8 spacecraft are analyzed, and dynamic models of the chromosphere and the emitted resonance line spectrum are calculated. Some of the unpublished data analysis and theoretical modeling being prepared for publication are discussed. A discussion of the state of the theory of velocity fields in the solar atmosphere is also presented. An invited review presented at the OSO-8 Workshop on the topic of oscillatory motions in the quiet sun is included. The results of the OSO-8 data analysis prepared in close collaboration with LPSP scientists are presented. Material for two articles is also presented.

  9. Comparison Campaign of VLBI Data Analysis Software - First Results

    NASA Technical Reports Server (NTRS)

    Plank, Lucia; Bohm, Johannes; Schuh, Harald

    2010-01-01

    During the development of the Vienna VLBI Software VieVS at the Institute of Geodesy and Geophysics at Vienna University of Technology, a special comparison setup was developed with the goal of easily linking deviations between results obtained with different software packages to particular parameters of the observation. The object of comparison is the computed time delay, a value calculated for each observation that includes all relevant models and corrections applied in geodetic VLBI analysis. Besides investigating the effects of the various models on the total delay, results of comparisons between VieVS and Occam 6.1 are shown. Using the same methods, a comparison campaign of VLBI data analysis software called DeDeCC will soon be launched within the IVS.

  10. How well do CMIP5 climate simulations replicate historical trends and patterns of droughts?

    DOE PAGES

    Nasrollahi, Nasrin; AghaKouchak, Amir; Cheng, Linyin; ...

    2015-04-26

    Assessing the uncertainties and understanding the deficiencies of climate models are fundamental to developing adaptation strategies. The objective of this study is to understand how well Coupled Model Intercomparison-Phase 5 (CMIP5) climate model simulations replicate ground-based observations of continental drought areas and their trends. The CMIP5 multimodel ensemble encompasses the Climatic Research Unit (CRU) ground-based observations of area under drought at all time steps. However, most model members overestimate the areas under extreme drought, particularly in the Southern Hemisphere (SH). Furthermore, the results show that the time series of observations and CMIP5 simulations of areas under drought exhibit more variability in the SH than in the Northern Hemisphere (NH). The trend analysis of areas under drought reveals that the observational data exhibit a significant positive trend at the significance level of 0.05 over all land areas. The observed trend is reproduced by about three-fourths of the CMIP5 models when considering total land areas in drought. While models are generally consistent with observations at a global (or hemispheric) scale, most models do not agree with observed regional drying and wetting trends. Over many regions, at most 40% of the CMIP5 models are in agreement with the trends of CRU observations. The drying/wetting trends calculated using the 3-month Standardized Precipitation Index (SPI) values show better agreement with the corresponding CRU values than with the observed annual mean precipitation rates. As a result, pixel-scale evaluation of CMIP5 models indicates that no single model demonstrates an overall superior performance relative to the other models.
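A simplified sketch of the drought metrics involved in such a comparison: a 3-month accumulated precipitation index standardized per grid cell (a plain z-score standing in for the fitted-distribution SPI) and the monthly fraction of area under drought. The data and the drought threshold below are invented.

```python
import numpy as np

# synthetic monthly precipitation, months x grid cells
rng = np.random.default_rng(0)
precip = rng.gamma(shape=2.0, scale=10.0, size=(120, 50))

# 3-month accumulations, then per-cell standardization
# (a z-score stand-in for the gamma-fitted SPI)
p3 = precip[:-2] + precip[1:-1] + precip[2:]
spi_like = (p3 - p3.mean(axis=0)) / p3.std(axis=0)

# fraction of cells "in drought" each month, below a hypothetical cut-off
in_drought = spi_like < -0.8
area_fraction = in_drought.mean(axis=1)
```

Computing `area_fraction` separately from observations and from each model member, then trend-testing both series, reproduces the shape of the area-under-drought comparison the study performs.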

  11. Ensemble-Based Data Assimilation With a Martian GCM

    NASA Astrophysics Data System (ADS)

    Lawson, W.; Richardson, M. I.; McCleese, D. J.; Anderson, J. L.; Chen, Y.; Snyder, C.

    2007-12-01

    Quantitative study of Mars weather and climate will ultimately stem from analysis of its dynamic and thermodynamic fields. Of all the observations of Mars available to date, such fields are most easily derived from mapping data (radiances) of the martian atmosphere as measured by orbiting infrared spectrometers and radiometers (e.g., MGS / TES and MRO / MCS). Such data-derived products are the solutions to inverse problems, and while individual profile retrievals have been the popular data-derived products in the planetary sciences, the terrestrial meteorological community has gained much ground over the last decade by employing techniques of data assimilation (DA) to analyze radiances. Ancillary information is required to close an inverse problem (i.e., to disambiguate the family of possibilities that are consistent with the observations), and DA practitioners inevitably rely on numerical models for this information (e.g., general circulation models (GCMs)). Data assimilation elicits maximal information content from available observations, and, by way of the physics encoded in the numerical model, spreads this information spatially, temporally, and across variables, thus allowing global extrapolation of limited and non-simultaneous observations. If the model is skillful, then a given, specific model integration can be corrected by the information spreading abilities of DA, and the resulting time sequence of "analysis" states are brought into agreement with the observations. These analysis states are complete, gridded estimates of all the fields one might wish to diagnose for scientific study of the martian atmosphere. Though a numerical model has been used to obtain these estimates, their fidelity rests in their simultaneous consistency with both the observations (to within their stated uncertainties) and the physics contained in the model. In this fashion, radiance observations can, say, be used to deduce the wind field. 
A new class of DA approaches based on Monte Carlo approximations, "ensemble-based methods," has matured enough to be both appropriate for planetary problems and practically within the reach of planetary scientists. Capitalizing on this new class of methods, the National Center for Atmospheric Research (NCAR) has developed a framework for ensemble-based DA that is flexible and modular in its use of various forecast models and data sets. The framework is called DART, the Data Assimilation Research Testbed, and it is freely available online. We have begun to take advantage of this rich software infrastructure, and are on our way toward performing state-of-the-art DA in the martian atmosphere using Caltech's martian general circulation model, PlanetWRF. We have begun by testing and validating the model within DART under idealized scenarios, and we hope to address actual, available infrared remote sensing datasets from Mars orbiters in the coming year. We shall present the details of this approach and our progress to date.
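The ensemble update at the heart of such systems can be sketched in a few lines. This toy perturbed-observation ensemble Kalman update (state dimension 2, one observed variable) only illustrates how the ensemble covariance spreads the correction from the observed variable to the unobserved one; it is not DART code, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# prior ensemble around an invented 2-variable "true" state
truth = np.array([1.0, 2.0])
ens = truth + rng.normal(scale=0.5, size=(40, 2))

# a single observation of state variable 0
obs, obs_var = 1.05, 0.1**2
y = ens[:, 0]                        # ensemble in observation space

# Kalman gain: cov(state, y) / (var(y) + obs error variance)
K = np.cov(ens.T)[:, 0] / (np.var(y, ddof=1) + obs_var)

# perturbed-observation EnKF: each member sees a noisy copy of the obs
perturbed = obs + rng.normal(scale=0.1, size=40)
analysis = ens + np.outer(perturbed - y, K)
```

Because `K` has a nonzero component for the unobserved variable whenever the ensemble correlates the two, the analysis adjusts both variables from one scalar observation, which is exactly the information-spreading property the abstract describes.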

  12. Evolution of Precipitation Particle Size Distributions within MC3E Systems and its Impact on Aerosol-Cloud-Precipitation Interactions: Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollias, Pavlos

    2017-08-08

    This is a multi-institutional, collaborative project using observations and modeling to study the evolution (e.g. formation and growth) of hydrometeors in continental convective clouds. Our contribution was data analysis: generating high-value cloud and precipitation products and deriving cloud statistics for model validation. We contributed in two areas: i) the development of novel, state-of-the-art dual-wavelength radar algorithms for the retrieval of cloud microphysical properties, and ii) the evaluation of large-domain, high-resolution models using comprehensive multi-sensor observations. Our research group developed statistical summaries from numerous sensors and developed retrievals of vertical air motion in deep convection.

  13. Plasmoids as magnetic flux ropes. [in geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Moldwin, Mark B.; Hughes, W. J.

    1991-01-01

    A magnetic flux rope model is developed and used to determine whether the principal axis analysis (PAA) of magnetometer signatures from a single satellite pass is sufficient to obtain the magnetic topology of plasmoids. The model is also used to determine whether plasmoid observations are best explained by the flux rope, closed loop, or large-amplitude wave picture. It was found that the principal axis directions are highly dependent on the satellite trajectory through the structure and, therefore, that PAA of magnetometer data from a single satellite pass is insufficient to differentiate between the magnetic closed loop and flux rope models. Results also indicate that the flux rope model of plasmoid formation is well suited to unifying the observations of various magnetic structures observed by ISEE 3.
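Principal axis analysis of a single pass reduces to an eigendecomposition of the magnetic-field covariance matrix along the trajectory. The synthetic field rotation below is an illustrative stand-in for a flux-rope-like signature, not the paper's model.

```python
import numpy as np

# synthetic single-pass magnetometer series: a rotating field plus a
# small constant normal component (invented, flux-rope-like)
t = np.linspace(-1.0, 1.0, 201)
B = np.column_stack([np.cos(np.pi * t / 2),     # axial component
                     np.sin(np.pi * t / 2),     # rotating component
                     0.1 * np.ones_like(t)])    # small normal component

# principal axis analysis: eigenvectors of the field covariance
C = np.cov(B.T)                       # 3x3 covariance along the pass
evals, evecs = np.linalg.eigh(C)      # eigenvalues in ascending order
principal = evecs[:, -1]              # maximum-variance direction
```

The result depends entirely on which part of the structure the trajectory samples, which is the paper's point: a different pass through the same rope yields different principal directions, so a single pass cannot pin down the topology.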

  14. Multi Resolution In-Situ Testing and Multiscale Simulation for Creep Fatigue Damage Analysis of Alloy 617

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yongming; Oskay, Caglar

    This report outlines the research activities that were carried out for the integrated experimental and simulation investigation of the creep-fatigue damage mechanism and life prediction of the nickel-based alloy Inconel 617 at high temperatures (950°C and 850°C). First, a novel experimental design using a hybrid control technique is proposed. The newly developed experimental technique can generate different combinations of creep and fatigue damage by changing the experimental design parameters. Next, detailed imaging analysis and statistical data analysis are performed to quantify the failure mechanisms of the creep fatigue of alloy 617 at high temperatures. It is observed that the creep damage is directly associated with the internal voids at the grain boundaries and the fatigue damage is directly related to the surface cracking. It is also observed that the classical time fraction approach does not have a good correlation with the experimentally observed damage features. An effective time fraction parameter is seen to have an excellent correlation with the material microstructural damage. Thus, a new empirical damage interaction diagram is proposed based on the experimental observations. Following this, a macro-level viscoplastic model coupled with damage is developed to simulate the stress/strain response under creep-fatigue loadings. A damage rate function based on the hysteresis energy and creep energy is proposed to capture the softening behavior of the material, and a good correlation with life prediction and material hysteresis behavior is observed. The simulation work is extended to include the microstructural heterogeneity. A crystal plasticity finite element model considering isothermal and large deformation conditions at the microstructural scale has been developed for fatigue and creep-fatigue, as well as creep deformation and rupture, at high temperature.
The model considers collective dislocation glide and climb of the grains and progressive damage accumulation of the grain boundaries. The glide model incorporates a slip resistance evolution model that characterizes the solute-drag creep effects and can capture well the stress-strain and stress-time response of fatigue and creep-fatigue tests at various strain ranges and hold times. In order to accurately capture the creep strains that accumulate particularly at relatively low stress levels, a dislocation climb model has been incorporated into the crystal plasticity modeling framework. The dislocation climb model parameters are calibrated and verified through experimental creep tests performed at 950°C. In addition, a cohesive zone model has been fully implemented in the context of the crystal plasticity finite element model to capture the intergranular creep damage. The parameters of the cohesive zone model have been calibrated using available experimental data. The numerical simulations illustrate the capability of the proposed model in capturing damage initiation and growth under creep loads as compared to the experimental observations. The microscale analysis sheds light on the crack initiation sites and propagation patterns within the microstructure. The model is also utilized to investigate the hybrid-controlled creep-fatigue tests and has been found to capture reasonably well the stress-strain response with different hold times and hold stress magnitudes.

  15. A New Unified Analysis of Estimate Errors by Model-Matching Phase-Estimation Methods for Sensorless Drive of Permanent-Magnet Synchronous Motors and New Trajectory-Oriented Vector Control, Part I

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji; Sano, Kousuke

    This paper presents a new unified analysis of the estimate errors of model-matching phase-estimation methods, such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimate errors, whose validity is confirmed by numerical experiments, are highly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems that use one of the model-matching phase-estimation methods.
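A state observer of the general kind analyzed here can be sketched as a discrete-time Luenberger observer on a toy two-state system: the observer runs the model in parallel with the plant and corrects its prediction with the measured output. The matrices and the gain below are illustrative assumptions, not a PMSM back-EMF design.

```python
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # plant dynamics (toy LTI system)
C = np.array([[1.0, 0.0]])       # only the first state is measured
L = np.array([[0.5], [1.0]])     # observer gain, chosen so that the
                                 # error dynamics (A - L C) are stable

x  = np.array([[1.0], [0.5]])    # true plant state
xh = np.zeros((2, 1))            # observer estimate, started wrong

for _ in range(200):
    y  = C @ x                        # measurement from the plant
    xh = A @ xh + L @ (y - C @ xh)    # model prediction + output correction
    x  = A @ x                        # plant evolves

err = float(np.linalg.norm(x - xh))   # estimation error after 200 steps
```

The estimation error obeys e[k+1] = (A - L C) e[k], so with the eigenvalues of A - L C inside the unit circle the estimate converges to the true state even though only one state is measured, which is the core property the phase-estimation methods above rely on.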

  16. The SpeX Prism Library Analysis Toolkit: Design Considerations and First Results

    NASA Astrophysics Data System (ADS)

    Burgasser, Adam J.; Aganze, Christian; Escala, Ivana; Lopez, Mike; Choban, Caleb; Jin, Yuhui; Iyer, Aishwarya; Tallis, Melisa; Suarez, Adrian; Sahi, Maitrayee

    2016-01-01

    Various observational and theoretical spectral libraries now exist for galaxies, stars, planets and other objects, which have proven useful for classification, interpretation, simulation and model development. Effective use of these libraries relies on analysis tools, which are often left to users to develop. In this poster, we describe a program to develop a combined spectral data repository and Python-based analysis toolkit for low-resolution spectra of very low mass dwarfs (late M, L and T dwarfs), which enables visualization, spectral index analysis, classification, atmosphere model comparison, and binary modeling for nearly 2000 library spectra and user-submitted data. The SpeX Prism Library Analysis Toolkit (SPLAT) is being constructed as a collaborative, student-centered, learning-through-research model with high school, undergraduate and graduate students and regional science teachers, who populate the database and build the analysis tools through quarterly challenge exercises and summer research projects. In this poster, I describe the design considerations of the toolkit, its current status and development plan, and report the first published results led by undergraduate students. The combined data and analysis tools are ideal for characterizing cool stellar and exoplanetary atmospheres (including direct exoplanetary spectra observations by Gemini/GPI, VLT/SPHERE, and JWST), and the toolkit design can be readily adapted for other spectral datasets as well.This material is based upon work supported by the National Aeronautics and Space Administration under Grant No. NNX15AI75G. SPLAT code can be found at https://github.com/aburgasser/splat.

  17. The atmosphere- and hydrosphere-correlated signals in GPS observations

    NASA Astrophysics Data System (ADS)

    Bogusz, Janusz; Boy, Jean-Paul; Klos, Anna; Figurski, Mariusz

    2015-04-01

    The circulation of surface geophysical fluids (e.g. atmosphere, ocean, continental hydrology) induces global mass redistribution at the Earth's surface, and in turn surface deformations and gravity variations. Nowadays these deformations can be reliably recorded by permanent GPS observations. The loading effects can be precisely modelled by convolving outputs from global general circulation models with Green's functions describing the Earth's response. Previously published papers showed that either surface gravity records or space-based observations can be efficiently corrected for atmospheric loading effects using surface pressure fields from atmospheric models. In a similar way, loading effects due to continental hydrology can be corrected from precise positioning observations. We evaluated 3-D displacements due to atmospheric, oceanic, and hydrological circulation at selected ITRF2008 core sites belonging to the IGS (International GNSS Service) network, using different models. Atmospheric and induced oceanic loading estimates were computed using the ECMWF (European Centre for Medium Range Weather Forecasts) operational and reanalysis (ERA interim) surface pressure fields, assuming either an inverted barometer ocean response or a barotropic ocean model forced by air pressure and winds (MOG2D). The IB (Inverted Barometer) hypothesis was classically chosen, in which atmospheric pressure variations are fully compensated by static sea height variations. This approximation is typically valid for periods exceeding 5 to 20 days; at higher frequencies, dynamic effects cannot be neglected. Hydrological loading estimates were computed for the different stations using MERRA-Land (Modern-Era Retrospective Analysis for Research and Applications, the NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System, Version 5 (GEOS-5)). We then compared the results to the GPS-derived time series of the North, East, and Up components.
The analysis of the satellite data was performed in two ways: first, on time series from the network solution (NS) processed in Bernese 5.0 software by the Military University of Technology EPN Local Analysis Centre; second, on PPP (Precise Point Positioning) time series processed in Gipsy-Oasis at JPL (Jet Propulsion Laboratory). Both were modelled with a wavelet decomposition using the Meyer orthogonal mother wavelet. Nine levels of decomposition were applied, and the eighth detail was interpreted as changes with a period close to one year. In this way, both the NS and PPP time series were represented as curves with an annual period whose amplitudes and phases change in time. The same analysis was performed for the atmospheric (ATM) and hydrospheric (HYDR) models. All annual curves (modelled from NS, PPP, ATM, and HYDR) were then compared to each other to investigate whether GPS observations contain the atmosphere- and hydrosphere-correlated signals and how their amplitudes may disrupt the GPS time series.
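As a self-contained stand-in for the Meyer-wavelet extraction of the annual term (swapping in an ordinary least-squares sinusoid fit, plainly not the wavelet method itself), one can estimate and compare the annual amplitude and phase of a GPS-like series and a loading-model series. All series below are synthetic.

```python
import numpy as np

t = np.arange(730) / 365.25      # two years of daily samples, in years

def annual_fit(series):
    """Amplitude and phase of the 1 cycle-per-year component via lstsq."""
    G = np.column_stack([np.cos(2 * np.pi * t),
                         np.sin(2 * np.pi * t),
                         np.ones_like(t)])          # cos, sin, offset
    a, b, _ = np.linalg.lstsq(G, series, rcond=None)[0]
    return np.hypot(a, b), np.arctan2(b, a)

# synthetic "Up" series, mm: GPS-like signal and a loading-model signal
gps  = 3.0 * np.cos(2 * np.pi * t - 0.4) + 0.3
load = 2.5 * np.cos(2 * np.pi * t - 0.3)

amp_gps,  ph_gps  = annual_fit(gps)
amp_load, ph_load = annual_fit(load)
```

Comparing `(amp_gps, ph_gps)` against `(amp_load, ph_load)` mirrors the paper's comparison of annual curves from the NS/PPP solutions against the ATM/HYDR models, minus the time-varying amplitude that the wavelet detail provides.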

  18. The MSFC Solar Activity Future Estimation (MSAFE) Model

    NASA Technical Reports Server (NTRS)

    Suggs, Ron

    2017-01-01

    The MSAFE model provides forecasts for the solar indices SSN, F10.7, and Ap. These solar indices are used as inputs to space environment models used in orbital spacecraft operations and space mission analysis. Forecasts from the MSAFE model are provided on the MSFC Natural Environments Branch's solar web page and are updated as new monthly observations become available. The MSAFE prediction routine employs a statistical technique that calculates deviations of past solar cycles from the mean cycle and performs a regression analysis to calculate the deviation from the mean cycle of the solar index at the next future time interval. The forecasts are initiated for a given cycle after about 8 to 9 monthly observations from the start of the cycle are collected. A forecast made at the beginning of cycle 24 using the MSAFE program captured the cycle fairly well with some difficulty in discerning the double peak that occurred at solar cycle maximum.

  19. Improved model for the design and analysis of centrifugal compressor volutes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van den Braembussche, R.A.; Ayder, E.; Hagelstein, D.

    1999-07-01

This paper describes a new model for the analysis of the flow in volutes of centrifugal compressors. It explicitly takes into account the vortical structure of the flow that has been observed during detailed three-dimensional flow measurements. It makes use of an impeller and diffuser response model to predict the nonuniformity of the volute inlet flow due to the circumferential variation of the pressure at the volute inlet, and is therefore applicable also at off-design operation of the volute. Predicted total pressure loss and static pressure rise coefficients at design and off-design operation have been compared with experimental data for different volute geometries, but only one test case is presented here. Good agreement in terms of losses and pressure rise is observed at most operating points and confirms the validity of the impeller and diffuser response model.

  20. Evaluating litter decomposition and soil organic matter dynamics in earth system models: contrasting analysis of long-term litter decomposition and steady-state soil carbon

    NASA Astrophysics Data System (ADS)

    Bonan, G. B.; Wieder, W. R.

    2012-12-01

    Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. 
The model simulations were forced with observationally-based estimates of annual litterfall and a model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between the models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.
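The litter-decomposition building block being benchmarked here is, at its simplest, single-pool exponential decay scaled by a climatic decomposition index. The sketch below illustrates that form only; the rate and index values are assumptions, not CLM4 or DAYCENT parameters.

```python
import numpy as np

# Single-pool exponential litter decay: mass remaining M(t) = exp(-k * CDI * t),
# where CDI is the climatic decomposition index scaling the base rate k.
k = 0.4          # base decay rate (1/yr), assumed for illustration
cdi = 0.8        # climatic decomposition index for a site, assumed
years = np.arange(0, 11)            # the 10-year LIDET window
mass_remaining = np.exp(-k * cdi * years)
```

A model that decomposes too fast, as reported above for CLM4, would show this curve falling below the observed LIDET mass-remaining points.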

  1. iClimate: a climate data and analysis portal

    NASA Astrophysics Data System (ADS)

    Goodman, P. J.; Russell, J. L.; Merchant, N.; Miller, S. J.; Juneja, A.

    2015-12-01

    We will describe a new climate data and analysis portal called iClimate that facilitates direct comparisons between available climate observations and climate simulations. Modeled after the successful iPlant Collaborative Discovery Environment (www.iplantcollaborative.org) that allows plant scientists to trade and share environmental, physiological and genetic data and analyses, iClimate provides an easy-to-use platform for large-scale climate research, including the storage, sharing, automated preprocessing, analysis and high-end visualization of large and often disparate observational and model datasets. iClimate will promote data exploration and scientific discovery by providing: efficient and high-speed transfer of data from nodes around the globe (e.g. PCMDI and NASA); standardized and customized data/model metrics; efficient subsampling of datasets based on temporal period, geographical region or variable; and collaboration tools for sharing data, workflows, analysis results, and data visualizations with collaborators or with the community at large. We will present iClimate's capabilities, and demonstrate how it will simplify and enhance the ability to do basic or cutting-edge climate research by professionals, laypeople and students.

  2. Time Series Analysis and Forecasting of Wastewater Inflow into Bandar Tun Razak Sewage Treatment Plant in Selangor, Malaysia

    NASA Astrophysics Data System (ADS)

    Abunama, Taher; Othman, Faridah

    2017-06-01

Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyse and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years' weekly influent data (156 weeks) was conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried to select the best-fitting model, which was then used to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA (3, 1, 3) model was selected because it had the highest significant R-square and the lowest normalized Bayesian Information Criterion (BIC) value, and the wastewater inflow rates were accordingly forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.

  3. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.
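The discrete-choice side of this comparison is commonly a multinomial logit: each option's utility is the sum of the part-worths of its attribute levels, and choice probabilities follow a softmax over utilities. The sketch below illustrates that mechanism only; the part-worth values and option codings are invented for illustration.

```python
import numpy as np

# Part-worths: one estimated value per attribute level (assumed here).
partworths = np.array([0.8, -0.3, 0.5])

# Binary coding of which levels each option in a choice set contains.
options = np.array([[1, 0, 1],    # option A: levels 1 and 3
                    [0, 1, 1],    # option B: levels 2 and 3
                    [1, 1, 0]])   # option C: levels 1 and 2

utilities = options @ partworths
probs = np.exp(utilities) / np.exp(utilities).sum()  # logit choice probabilities
```

A classification-based approach instead learns a separating rule directly from the observed choices, without this parametric probability model.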

  4. Old and New Ideas for Data Screening and Assumption Testing for Exploratory and Confirmatory Factor Analysis

    PubMed Central

    Flora, David B.; LaBrish, Cathy; Chalmers, R. Philip

    2011-01-01

    We provide a basic review of the data screening and assumption testing issues relevant to exploratory and confirmatory factor analysis along with practical advice for conducting analyses that are sensitive to these concerns. Historically, factor analysis was developed for explaining the relationships among many continuous test scores, which led to the expression of the common factor model as a multivariate linear regression model with observed, continuous variables serving as dependent variables, and unobserved factors as the independent, explanatory variables. Thus, we begin our paper with a review of the assumptions for the common factor model and data screening issues as they pertain to the factor analysis of continuous observed variables. In particular, we describe how principles from regression diagnostics also apply to factor analysis. Next, because modern applications of factor analysis frequently involve the analysis of the individual items from a single test or questionnaire, an important focus of this paper is the factor analysis of items. Although the traditional linear factor model is well-suited to the analysis of continuously distributed variables, commonly used item types, including Likert-type items, almost always produce dichotomous or ordered categorical variables. We describe how relationships among such items are often not well described by product-moment correlations, which has clear ramifications for the traditional linear factor analysis. An alternative, non-linear factor analysis using polychoric correlations has become more readily available to applied researchers and thus more popular. Consequently, we also review the assumptions and data-screening issues involved in this method. Throughout the paper, we demonstrate these procedures using an historic data set of nine cognitive ability variables. PMID:22403561

  5. Assessing a local ensemble Kalman filter: perfect model experiments with the National Centers for Environmental Prediction global model

    NASA Astrophysics Data System (ADS)

    Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.

    2005-08-01

The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme is investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.
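The ensemble Kalman filter analysis step underlying the LEKF can be sketched in a few lines of numpy. This is a global, stochastic "perturbed observations" variant without the localization that defines the LEKF; the 3-variable state, identity observation operator, and error values are toy assumptions.

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    ensemble: (n_state, n_members) background ensemble
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    """
    n_obs, n_members = obs.size, ensemble.shape[1]
    Xp = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    # Sample covariances: P H^T and S = H P H^T + R.
    PHT = Xp @ HXp.T / (n_members - 1)
    S = HXp @ HXp.T / (n_members - 1) + obs_err_std**2 * np.eye(n_obs)
    K = np.linalg.solve(S, PHT.T).T                        # gain (S is symmetric)
    # Each member assimilates an independently perturbed copy of the obs.
    obs_pert = obs[:, None] + rng.normal(0.0, obs_err_std, (n_obs, n_members))
    return ensemble + K @ (obs_pert - HX)

# Toy example: 3-variable state observed directly, 40-member ensemble
# (the ensemble size reported as sufficient above).
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, (3, 40))
analysis = enkf_analysis(background, np.array([1.0, 1.0, 1.0]), np.eye(3), 0.1, rng)
```

With accurate observations (small R) the analysis ensemble mean is pulled close to the observed values, as expected from the Kalman gain.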

  6. Long-term observations minus background monitoring of ground-based brightness temperatures from a microwave radiometer network

    NASA Astrophysics Data System (ADS)

    De Angelis, Francesco; Cimini, Domenico; Löhnert, Ulrich; Caumont, Olivier; Haefele, Alexander; Pospichal, Bernhard; Martinet, Pauline; Navas-Guzmán, Francisco; Klein-Baltink, Henk; Dupont, Jean-Charles; Hocking, James

    2017-10-01

Ground-based microwave radiometers (MWRs) offer the capability to provide continuous, high-temporal-resolution observations of the atmospheric thermodynamic state in the planetary boundary layer (PBL) with low maintenance. This makes MWR an ideal instrument to supplement radiosonde and satellite observations when initializing numerical weather prediction (NWP) models through data assimilation. State-of-the-art data assimilation systems (e.g. variational schemes) require an accurate representation of the differences between model (background) and observations, which are then weighted by their respective errors to provide the best analysis of the true atmospheric state. In this perspective, one source of information is contained in the statistics of the differences between observations and their background counterparts (O-B). Monitoring of O-B statistics is crucial to detect and remove systematic errors coming from the measurements, the observation operator, and/or the NWP model. This work illustrates a 1-year O-B analysis for MWR observations in clear-sky conditions for a Europe-wide network of six MWRs. Observations include MWR brightness temperatures (TB) measured by the two most common types of MWR instruments. Background profiles are extracted from the French convective-scale model AROME-France before being converted into TB. The observation operator used to map atmospheric profiles into TB is the fast radiative transfer model RTTOV-gb. It is shown that O-B monitoring can effectively detect instrument malfunctions. O-B statistics (bias, standard deviation, and root mean square) for water vapour channels (22.24-30.0 GHz) are quite consistent for all the instrumental sites, decreasing from the 22.24 GHz line centre ( ˜ 2-2.5 K) towards the high-frequency wing ( ˜ 0.8-1.3 K). Statistics for zenith and lower-elevation observations show a similar trend, though values increase with increasing air mass. 
O-B statistics for temperature channels show different behaviour for relatively transparent (51-53 GHz) and opaque channels (54-58 GHz). Opaque channels show lower uncertainties (< 0.8-0.9 K) and little variation with elevation angle. Transparent channels show larger biases ( ˜ 2-3 K) with relatively low standard deviations ( ˜ 1-1.5 K). The observations minus analysis TB statistics are similar to the O-B statistics, suggesting a possible improvement to be expected by assimilating MWR TB into NWP models. Lastly, the O-B TB differences have been evaluated to verify the normal-distribution hypothesis underlying variational and ensemble Kalman filter-based data assimilation systems. Absolute values of excess kurtosis and skewness are generally within 1 and 0.5, respectively, for all instrumental sites, demonstrating O-B normal distribution for most of the channels and elevation angles.
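The O-B diagnostics used above (bias, standard deviation, RMS, skewness, excess kurtosis) are straightforward to compute for a single channel. The sketch below uses synthetic TB values with an assumed 0.5 K O-B bias rather than the network's data.

```python
import numpy as np
from scipy import stats

# One year of synthetic observed and background brightness temperatures (K)
# for one channel; the 0.5 K bias and noise levels are assumptions.
rng = np.random.default_rng(1)
tb_obs = 150.0 + rng.normal(0.0, 1.0, 365)
tb_bkg = tb_obs - 0.5 + rng.normal(0.0, 0.8, 365)

omb = tb_obs - tb_bkg                   # observation minus background (O-B)
bias = omb.mean()
std = omb.std(ddof=1)
rms = np.sqrt(np.mean(omb ** 2))
skewness = stats.skew(omb)
excess_kurtosis = stats.kurtosis(omb)   # Fisher convention: 0 for a Gaussian
```

Skewness near 0 and excess kurtosis near 0 support the normal-distribution hypothesis that variational and ensemble assimilation schemes rely on.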

  7. Interactive visualization to advance earthquake simulation

    USGS Publications Warehouse

    Kellogg, L.H.; Bawden, G.W.; Bernardin, T.; Billen, M.; Cowgill, E.; Hamann, B.; Jadamec, M.; Kreylos, O.; Staadt, O.; Sumner, D.

    2008-01-01

The geological sciences are challenged to manage and interpret increasing volumes of data as observations and simulations increase in size and complexity. For example, simulations of earthquake-related processes typically generate complex, time-varying data sets in two or more dimensions. To facilitate interpretation and analysis of these data sets, evaluate the underlying models, and to drive future calculations, we have developed methods of interactive visualization with a special focus on using immersive virtual reality (VR) environments to interact with models of Earth's surface and interior. Virtual mapping tools allow virtual "field studies" in inaccessible regions. Interactive tools allow us to manipulate shapes in order to construct models of geological features for geodynamic models, while feature extraction tools support quantitative measurement of structures that emerge from numerical simulation or field observations, thereby enabling us to improve our interpretation of the dynamical processes that drive earthquakes. VR has traditionally been used primarily as a presentation tool, albeit with active navigation through data. Reaping the full intellectual benefits of immersive VR as a tool for scientific analysis requires building on the method's strengths, that is, using both 3D perception and interaction with observed or simulated data. This approach also takes advantage of the specialized skills of geological scientists who are trained to interpret the often limited geological and geophysical data available from field observations. © Birkhäuser 2008.

  8. Observing and modelling phytoplankton community structure in the North Sea

    NASA Astrophysics Data System (ADS)

    Ford, David A.; van der Molen, Johan; Hyder, Kieran; Bacon, John; Barciela, Rosa; Creach, Veronique; McEwan, Robert; Ruardij, Piet; Forster, Rodney

    2017-03-01

    Phytoplankton form the base of the marine food chain, and knowledge of phytoplankton community structure is fundamental when assessing marine biodiversity. Policy makers and other users require information on marine biodiversity and other aspects of the marine environment for the North Sea, a highly productive European shelf sea. This information must come from a combination of observations and models, but currently the coastal ocean is greatly under-sampled for phytoplankton data, and outputs of phytoplankton community structure from models are therefore not yet frequently validated. This study presents a novel set of in situ observations of phytoplankton community structure for the North Sea using accessory pigment analysis. The observations allow a good understanding of the patterns of surface phytoplankton biomass and community structure in the North Sea for the observed months of August 2010 and 2011. Two physical-biogeochemical ocean models, the biogeochemical components of which are different variants of the widely used European Regional Seas Ecosystem Model (ERSEM), were then validated against these and other observations. Both models were a good match for sea surface temperature observations, and a reasonable match for remotely sensed ocean colour observations. However, the two models displayed very different phytoplankton community structures, with one better matching the in situ observations than the other. Nonetheless, both models shared some similarities with the observations in terms of spatial features and inter-annual variability. An initial comparison of the formulations and parameterizations of the two models suggests that diversity between the parameter settings of model phytoplankton functional types, along with formulations which promote a greater sensitivity to changes in light and nutrients, is key to capturing the observed phytoplankton community structure. 
These findings will help inform future model development, which should be coupled with detailed validation studies, in order to help facilitate the wider application of marine biogeochemical modelling to user and policy needs.

  9. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
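The covariance analysis invoked above is, in its linearized form, the computation of Cov(θ) ≈ σ²(JᵀWJ)⁻¹, where J holds the sensitivities of the observations to the parameters and W the observation weights. The numbers below are an invented two-parameter toy, not Henry-problem sensitivities.

```python
import numpy as np

# Assumed sensitivities of 4 observations to 2 parameters (Jacobian J).
J = np.array([[1.0, 0.2],
              [0.9, 0.1],
              [0.3, 1.5],
              [0.2, 1.4]])

# Observation weights = inverse observation variances (assumed values).
W = np.diag([1.0, 1.0, 4.0, 4.0])
sigma2 = 1.0  # assumed residual variance

# Linearized parameter-estimate covariance.
cov = sigma2 * np.linalg.inv(J.T @ W @ J)
variances = np.diag(cov)                               # estimation variances
correlation = cov[0, 1] / np.sqrt(variances[0] * variances[1])
```

Placing observations where sensitivities are high (large entries in J) shrinks these variances, which is exactly the sampling-network design argument made in the abstract.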

  10. Inter-model comparison of the landscape determinants of vector-borne disease: implications for epidemiological and entomological risk modeling.

    PubMed

    Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V

    2014-01-01

    Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
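The pairwise agreement step above used Spearman ranked correlation on county-level predictions; a minimal sketch on synthetic predictions (the latent risk, noise levels, and rank-average ensemble are assumptions, not the paper's models) looks like this:

```python
import numpy as np
from scipy.stats import spearmanr, rankdata

# Hypothetical county-level risk predictions from two extrapolated models
# sharing a latent signal of differing fidelity.
rng = np.random.default_rng(7)
true_risk = rng.random(200)
model_a = true_risk + rng.normal(0.0, 0.1, 200)
model_b = true_risk + rng.normal(0.0, 0.3, 200)

# Agreement between the two models' county rankings.
rho, pvalue = spearmanr(model_a, model_b)

# A simple rank-average ensemble of the two models.
ensemble = (rankdata(model_a) + rankdata(model_b)) / 2
```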

  11. A Meta-Analysis of Children's Object-to-Mouth Frequency Data for Estimating Non-Dietary Ingestion Exposure

    EPA Science Inventory

    To improve estimates of non-dietary ingestion in probabilistic exposure modeling, a meta-analysis of children's object-to-mouth frequency was conducted using data from seven available studies representing 438 participants and ~ 1500 h of behavior observation. The analysis repres...

  12. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. 
We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.

  13. Combined analysis of field and model data: A case study of the phosphate dynamics in the German Bight in summer 1994

    NASA Astrophysics Data System (ADS)

    Pohlmann, Th.; Raabe, Th.; Doerffer, R.; Beddig, S.; Brockmann, U.; Dick, S.; Engel, M.; Hesse, K.-J.; König, P.; Mayer, B.; Moll, A.; Murphy, D.; Puls, W.; Rick, H.-J.; Schmidt-Nia, R.; Schönfeld, W.; Sündermann, J.

    1999-09-01

    The intention of this paper is to analyse a specific phenomenon observed during the KUSTOS campaigns in order to demonstrate the general capability of the KUSTOS and TRANSWATT approach, i.e. the combination of field and modelling activities in an interdisciplinary framework. The selected phenomenon is the increase in phosphate concentrations off the peninsula of Eiderstedt on the North Frisian coast sampled during four subsequent station grids of the KUSTOS summer campaign in 1994. First of all, a characterisation of the observed summer situation is given. The phosphate increase is described in detail in relation to the dynamics of other nutrients. In a second step, a first-order estimate of the dispersion of phosphate is discussed. The estimate is based on the box model approach and will focus on the effects of the river Elbe and Wadden Sea inputs on phosphate dynamics. Thirdly, a fully three-dimensional model system is presented, which was implemented in order to analyse the phosphate development. The model system is discussed briefly, with emphasis on phosphorus-related processes. The reliability of one of the model components, i.e. the hydrodynamical model, is demonstrated by means of a comparison of model results with observed current data. Thereafter, results of the German Bight seston model are employed to interpret the observed phosphate increase. From this combined analysis, it was possible to conclude that the phosphate increase during the first three surveys was due to internal transformation processes within the phosphorus cycle. On the other hand, the higher phosphate concentrations measured in the last station grid survey were caused by a horizontal transport of phosphate being remobilised in the Wadden Sea.
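The box-model first-order estimate mentioned above amounts to a single mass balance, V dC/dt = Q·C_in − Q·C + S. The sketch below integrates that balance with invented volume, flow, and source values (including a source term standing in for Wadden Sea remobilisation); none of these numbers come from the study.

```python
import numpy as np

V = 5e9        # box volume (m^3), assumed
Q = 1.2e3      # through-flow (m^3/s), inflow = outflow, assumed
C_in = 0.6     # inflow phosphate concentration (mmol/m^3), assumed
S = 200.0      # internal source, e.g. remobilisation (mmol/s), assumed
dt = 3600.0    # time step (s)

C = 0.4        # initial box concentration (mmol/m^3)
for _ in range(24 * 30):                  # integrate one month, forward Euler
    C += dt * (Q * (C_in - C) + S) / V

steady = C_in + S / Q                     # steady state the box relaxes toward
```

Comparing such a box estimate against the fully three-dimensional model is what lets the internal-transformation and horizontal-transport explanations be separated.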

  14. Using SMOS brightness temperature and derived surface-soil moisture to characterize surface conditions and validate land surface models.

    NASA Astrophysics Data System (ADS)

    Polcher, Jan; Barella-Ortiz, Anaïs; Piles, Maria; Gelati, Emiliano; de Rosnay, Patricia

    2017-04-01

The SMOS satellite, operated by ESA, observes the surface in the L-band. On continental surface these observations are sensitive to moisture and in particular surface-soil moisture (SSM). In this presentation we will explore how the observations of this satellite can be exploited over the Iberian Peninsula by comparing its results with two land surface models : ORCHIDEE and HTESSEL. Measured and modelled brightness temperatures show a good agreement in their temporal evolution, but their spatial structures are not consistent. An empirical orthogonal function analysis of the brightness temperature's error identifies a dominant structure over the south-west of the Iberian Peninsula which evolves during the year and is maximum in autumn and winter. Hypotheses concerning forcing-induced biases and assumptions made in the radiative transfer model are analysed to explain this inconsistency, but no candidate is found to be responsible for the weak spatial correlations. The analysis of spatial inconsistencies between modelled and measured TBs is important, as these can affect the estimation of geophysical variables and TB assimilation in operational models, as well as result in misleading validation studies. When comparing the surface-soil moisture of the models with the product derived operationally by ESA from SMOS observations similar results are found. The spatial correlation over the Iberian Peninsula between SMOS and ORCHIDEE SSM estimates is poor (ρ ≈ 0.3). A singular value decomposition (SVD) analysis of rainfall and SSM shows that the co-varying patterns of these variables are in reasonable agreement between both products. Moreover the first three SVD soil moisture patterns explain over 80% of the SSM variance simulated by the model while the explained fraction is only 52% of the remotely sensed values. These results suggest that the rainfall-driven soil moisture variability may not account for the poor spatial correlation between SMOS and ORCHIDEE products. 
Other reasons have to be sought to explain the poor agreement in spatial patterns between satellite-derived and modelled SSM. This presentation will hopefully contribute to the discussion of how SMOS and other observations can be used to prepare, carry out, and exploit a field campaign over the Iberian Peninsula which aims at improving our understanding of semi-arid land surface processes.
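The SVD step described above (extracting co-varying rainfall/soil-moisture patterns from the cross-covariance of two anomaly fields) can be illustrated with numpy on synthetic fields; the array shapes and the single shared mode are assumptions for the sketch.

```python
import numpy as np

# Synthetic anomaly fields: (time x grid-point) matrices for rainfall and SSM
# that share one co-varying mode plus independent noise.
rng = np.random.default_rng(3)
nt, npts = 120, 50
shared = rng.normal(size=(nt, 1)) @ rng.normal(size=(1, npts))
rain = shared + 0.5 * rng.normal(size=(nt, npts))
ssm = 0.8 * shared + 0.5 * rng.normal(size=(nt, npts))

# SVD of the temporal cross-covariance matrix yields paired spatial patterns
# (columns of U for rainfall, rows of Vt for SSM) ordered by shared covariance.
C = rain.T @ ssm / (nt - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
frac = s**2 / np.sum(s**2)   # squared-covariance fraction per mode
```

Projecting the SSM field onto the leading rows of `Vt` would give the mode-by-mode explained variance quoted in the abstract (over 80% for the model, 52% for the satellite product).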

  15. Detailed analysis and test correlation of a stiffened composite wing panel

    NASA Technical Reports Server (NTRS)

    Davis, D. Dale, Jr.

    1991-01-01

    Nonlinear finite element analysis techniques are evaluated by applying them to a realistic aircraft structural component. A wing panel from the V-22 tiltrotor aircraft is chosen because it is a typical modern aircraft structural component for which experimental data are available for comparison of results. From blueprints and drawings supplied by the Bell Helicopter Textron Corporation, a very detailed finite element model containing 2284 9-node Assumed Natural-Coordinate Strain (ANS) elements was generated. A novel solution strategy which accounts for geometric nonlinearity through the use of corotating element reference frames and nonlinear strain-displacement relations is used to analyze this detailed model. Results from linear analyses using the same finite element model are presented in order to illustrate the advantages and costs of the nonlinear analysis as compared with the more traditional linear analysis. Strain predictions from both the linear and nonlinear stress analyses are shown to compare well with experimental data up through the Design Ultimate Load (DUL) of the panel. However, due to the extremely nonlinear response of the panel, the linear analysis was not accurate at loads above the DUL. The nonlinear analysis more accurately predicted the strain at high values of applied load, and even predicted complicated nonlinear response characteristics, such as load reversals, at the observed failure load of the test panel. In order to understand the failure mechanism of the panel, buckling and first-ply failure analyses were performed. The buckling load was 17 percent above the observed failure load, while first-ply failure analyses indicated significant material damage at and below the observed failure load.

  16. On the explaining-away phenomenon in multivariate latent variable models.

    PubMed

    van Rijn, Peter; Rijmen, Frank

    2015-02-01

    Many probabilistic models for psychological and educational measurements contain latent variables. Well-known examples are factor analysis, item response theory, and latent class model families. We discuss what is referred to as the 'explaining-away' phenomenon in the context of such latent variable models. This phenomenon can occur when multiple latent variables are related to the same observed variable, and can elicit seemingly counterintuitive conditional dependencies between latent variables given observed variables. We illustrate the implications of explaining away for a number of well-known latent variable models by using both theoretical and real data examples. © 2014 The British Psychological Society.
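
    The explaining-away phenomenon described here can be illustrated numerically: two independent latent "causes" become negatively dependent once a shared observed effect is conditioned on. A minimal Monte Carlo sketch (the 0.3 base rates and the deterministic OR link are illustrative assumptions, not a model from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Two independent latent "causes" and one observed variable they both raise
a = rng.random(n) < 0.3
b = rng.random(n) < 0.3
o = a | b                       # observed indicator (deterministic OR for simplicity)

# Marginally, a and b are independent ...
marg_corr = np.corrcoef(a, b)[0, 1]

# ... but conditioning on the shared observation induces negative dependence:
# if o = 1 and a = 1, then a already "explains" the observation, making b less likely
cond_corr = np.corrcoef(a[o], b[o])[0, 1]
print(f"marginal corr {marg_corr:+.3f}, corr given o=1 {cond_corr:+.3f}")
```

    The conditional correlation comes out strongly negative even though the latents are marginally independent, which is exactly the seemingly counterintuitive conditional dependency the abstract refers to.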

  17. Multi-Model Ensemble Approaches to Data Assimilation Using the 4D-Local Ensemble Transform Kalman Filter

    DTIC Science & Technology

    2013-09-30

    … accuracy of the analysis. Root mean square difference (RMSD) is much smaller for RIP than for either Simple Ocean Data Assimilation or Incremental Analysis Update, globally, for temperature as well as salinity. Regionally the same results were found, with only one exception in which the salinity RMSD … short-term forecast using a numerical model with the observations taken within the forecast time window. The resulting state is the so-called "analysis" …

  18. A Complex Network Approach to Distributional Semantic Models

    PubMed Central

    Utsumi, Akira

    2015-01-01

    A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, the small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in the association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of the network behaviors generated by distributional semantic models. PMID:26295940
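
    The "truncated power law" named in this record is the form p(k) ∝ k^(-γ)·exp(-k/κ): scale-free in the body with an exponential cutoff. In log space it is linear in [1, log k, k], so its parameters can be recovered by ordinary least squares; the parameter values below are illustrative, not taken from the study, and real degree data would use binned counts in place of exact probabilities.

```python
import numpy as np

# A truncated power law, p(k) ∝ k^(-gamma) * exp(-k/kappa) (illustrative values)
gamma_true, kappa_true = 2.0, 50.0
k = np.arange(1, 201).astype(float)
p = k**(-gamma_true) * np.exp(-k / kappa_true)
p /= p.sum()

# In log space the model is linear: log p = c - gamma*log k - k/kappa,
# so it can be fit by ordinary least squares
X = np.column_stack([np.ones_like(k), np.log(k), k])
coef, *_ = np.linalg.lstsq(X, np.log(p), rcond=None)
gamma_hat, kappa_hat = -coef[1], -1.0 / coef[2]
print(f"recovered gamma = {gamma_hat:.3f}, kappa = {kappa_hat:.1f}")
```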

  19. Finite element analysis of different loading conditions for implant-supported overdentures supported by conventional or mini implants.

    PubMed

    Solberg, K; Heinemann, F; Pellikaan, P; Keilig, L; Stark, H; Bourauel, C; Hasan, I

    2017-05-01

    The effect of the number of implants on overdenture stability and on the stress distribution in the edentulous mandible, the implants, and the overdenture was numerically investigated for implant-supported overdentures. Three models were constructed. Overdentures were connected to the implants by means of ball-head abutments and rubber rings. In model 1, the overdenture was retained by two conventional implants; in model 2, by four conventional implants; and in model 3, by five mini implants. The overdenture was subjected to a symmetrical load applied at an angle of 20 degrees to the overdenture at the canine regions and vertically at the first molars. Four different loading conditions with two total forces (120 N, 300 N) were considered for the numerical analysis. The overdenture displacement was about 2.2 times higher when five mini implants were used rather than four conventional implants. The lowest stress in the bone bed was observed with four conventional implants: stresses in bone were reduced by 61% in model 2 and by 6% in model 3 in comparison to model 1. The highest bone stress was observed with five mini implants. Stresses in the implants were reduced by 76% in model 2 and increased by 89% in model 3 compared to model 1. The highest implant displacement was observed with five mini implants: implant displacements were reduced by 29% in model 2 and increased by 273% in model 3 compared to model 1. Conventional implants provided better overdenture stability than mini implants. Regardless of the type and number of implants, the stresses within the bone and implants were below the critical limits.

  20. Investigation of nuclear stopping observable in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Deepshikha; Kumar, Suneel

    2018-07-01

    A detailed analysis of nuclear stopping has been made using various observables. The isospin-dependent quantum molecular dynamics (IQMD) transport model has been used to study stopping over the whole mass range at incident energies between 10 MeV/nucleon and 1000 MeV/nucleon. Our study shows that the ratio of the widths of the transverse and longitudinal rapidity distributions, i.e., ⟨varxz⟩, is the most suitable parameter for studying nuclear stopping. It has also been observed that light mass fragments (LMFs) emitted from the participant region can be used as a barometer of nuclear stopping.
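
    The stopping observable ⟨varxz⟩ is simply a ratio of rapidity-distribution widths. A toy sketch with Gaussian rapidity distributions (the widths 0.6 and 1.0 are illustrative stand-ins, not IQMD output): full stopping would equalize the transverse and longitudinal widths (varxz → 1), while partial transparency leaves the longitudinal distribution wider (varxz < 1).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy final-state rapidity samples; widths chosen to mimic incomplete stopping
y_transverse = rng.normal(0.0, 0.6, 10_000)
y_longitudinal = rng.normal(0.0, 1.0, 10_000)

# varxz: ratio of the transverse to longitudinal rapidity-distribution variance
varxz = np.var(y_transverse) / np.var(y_longitudinal)
print(f"varxz = {varxz:.2f}")   # well below 1: incomplete stopping
```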

  1. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billman, L.; Keyser, D.

    The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to the JEDI models, an analysis of their value and impact, and an analysis of the validity of job estimates generated by the JEDI models through comparison to other modeled estimates and to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.

  2. State space model approach for forecasting the use of electrical energy (a case study on: PT. PLN (Persero) district of Kroya)

    NASA Astrophysics Data System (ADS)

    Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik

    2018-05-01

    Time series data are a series of data points taken or measured from observations at equal time intervals. Time series analysis is used to analyze such data while accounting for the effect of time; its purpose is to characterize the patterns in a data set and to predict future values based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results of this research show that a state space model of order 4 forecasts electric energy consumption with a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
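
    The fit-then-forecast-then-score workflow in this record can be sketched with an AR(4) model fit by least squares and MAPE computed on a holdout; the synthetic series, AR coefficients, and noise level below are illustrative assumptions in place of the Kroya consumption data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic positive-level series with AR(4) dynamics (illustrative)
n, p = 120, 4
y = np.zeros(n)
for t in range(p, n):
    y[t] = 0.5*y[t-1] + 0.2*y[t-2] + 0.1*y[t-3] + 0.1*y[t-4] + rng.normal(0, 1)
y += 100.0                       # shift to a positive level, as for consumption
train, test = y[:108], y[108:]

# Fit AR(4) by least squares on lagged values
X = np.array([[1.0] + [train[t - j] for j in range(1, p + 1)]
              for t in range(p, len(train))])
beta, *_ = np.linalg.lstsq(X, train[p:], rcond=None)

# One-step-ahead forecasts over the holdout, then MAPE
history, preds = list(train), []
for actual in test:
    lags = [1.0] + [history[-j] for j in range(1, p + 1)]
    preds.append(float(np.dot(beta, lags)))
    history.append(actual)

mape = 100 * np.mean(np.abs((test - np.array(preds)) / test))
print(f"one-step MAPE = {mape:.2f}%")
```

    The study's state space formulation is richer than a plain AR fit (state vector via canonical correlation analysis), but the MAPE scoring step is the same.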

  3. Structural Equation Model Trees

    ERIC Educational Resources Information Center

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  4. A study of zodiacal light models

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Craven, P. D.

    1973-01-01

    A review is presented of the basic equations used in the analysis of photometric observations of zodiacal light. A survey of the methods used to model the zodiacal light in and out of the ecliptic is given. Results and comparison of various models are presented, as well as recent results by the authors.

  5. Repeated holdout Cross-Validation of Model to Estimate Risk of Lyme Disease by Landscape Attributes

    EPA Science Inventory

    We previously modeled Lyme disease (LD) risk at the landscape scale; here we evaluate the model's overall goodness-of-fit using holdout validation. Landscapes were characterized within road-bounded analysis units (AU). Observed LD cases (obsLD) were ascertained per AU. Data were ...

  6. Truck Size and Weight Modelling Workshop : activity 2 : Task C refine freight diversion models for all modes

    DOT National Transportation Integrated Search

    1995-09-01

    One final overall observation from the workshop was that FHWA should consider case studies of industry practices. This approach differs from TS&W modeling, but could be used to build a micro-approach for national TS&W analysis. For example, differe...

  7. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
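
    For the Lorenz model, the instantaneous multivariate sensitivities are the entries of the Jacobian of its tendencies, which is exactly why it serves as a test case with analytically known answers. A minimal sketch using central finite differences (the evaluation point is an arbitrary illustrative choice, and the paper's neural network estimator is not reproduced here):

```python
import numpy as np

# Lorenz-63 tendencies with the standard parameters
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def f(state):
    x, y, z = state
    return np.array([sigma * (y - x),
                     x * (rho - z) - y,
                     x * y - beta * z])

# Instantaneous sensitivities = Jacobian d f_i / d x_j, estimated here by
# central finite differences; for Lorenz-63 the analytic Jacobian is known,
# so the estimate can be checked term by term
def jacobian_fd(state, eps=1e-6):
    J = np.zeros((3, 3))
    for j in range(3):
        dx = np.zeros(3)
        dx[j] = eps
        J[:, j] = (f(state + dx) - f(state - dx)) / (2 * eps)
    return J

state = np.array([1.0, 2.0, 20.0])
J = jacobian_fd(state)
print(J)
```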

  8. Accuracy of gap analysis habitat models in predicting physical features for wildlife-habitat associations in the southwest U.S.

    USGS Publications Warehouse

    Boykin, K.G.; Thompson, B.C.; Propeck-Gray, S.

    2010-01-01

    Despite widespread and long-standing efforts to model wildlife-habitat associations using remotely sensed and other spatially explicit data, there are relatively few evaluations of the performance of variables included in predictive models relative to actual features on the landscape. As part of the National Gap Analysis Program, we specifically examined physical site features at randomly selected sample locations in the Southwestern U.S. to assess the degree of concordance with predicted features used in modeling vertebrate habitat distribution. Our analysis considered hypotheses about relative accuracy with respect to 30 vertebrate species selected to represent the spectrum from habitat generalist to specialist, and categorization of sites by relative degree of conservation emphasis accorded to the site. Overall comparison of 19 variables observed at 382 sample sites indicated ≥60% concordance for 12 variables. Directly measured or observed variables (slope, soil composition, rock outcrop) generally displayed high concordance, while variables that required judgments regarding descriptive categories (aspect, ecological system, landform) were less concordant. There were no differences detected in concordance among taxa groups, degree of specialization or generalization of selected taxa, or land conservation categorization of sample sites with respect to all sites. We found no support for the hypothesis that accuracy of habitat models is inversely related to degree of taxa specialization, even though model features for a habitat specialist could be more difficult to represent spatially. Likewise, we did not find support for the hypothesis that physical features will be predicted with higher accuracy on lands with greater dedication to biodiversity conservation than on other lands because of relative differences regarding available information. Accuracy generally was similar (>60%) to that observed for land cover mapping at the ecological system level. These patterns demonstrate the resilience of gap analysis deductive model processes to the type of remotely sensed or interpreted data used in habitat feature predictions. © 2010 Elsevier B.V.

  9. RXTE Observation of Cygnus X-1: III. Implications for Compton Corona and ADAF Models. Report 3; Implications for Compton Corona and ADAF Models

    NASA Technical Reports Server (NTRS)

    Nowak, Michael A.; Wilms, Joern; Vaughan, Brian A.; Dove, James B.; Begelman, Mitchell C.

    1999-01-01

    We have recently shown that a 'sphere + disk' geometry Compton corona model provides a good description of Rossi X-ray Timing Explorer (RXTE) observations of the hard/low state of Cygnus X-1. Separately, we have analyzed the temporal data provided by RXTE. In this paper we consider the implications of this timing analysis for our best-fit 'sphere + disk' Comptonization models. We focus our attention on the observed Fourier frequency-dependent time delays between hard and soft photons. We consider whether the observed time delays are: created in the disk but are merely reprocessed by the corona; created by differences between the hard and soft photon diffusion times in coronae with extremely large radii; or are due to 'propagation' of disturbances through the corona. We find that the time delays are most likely created directly within the corona; however, it is currently uncertain which specific model is the most likely explanation. Models that posit a large coronal radius [or equivalently, a large Advection Dominated Accretion Flow (ADAF) region] do not fully address all the details of the observed spectrum. The Compton corona models that do address the full spectrum do not contain dynamical information. We show, however, that simple phenomenological propagation models for the observed time delays for these latter models imply extremely slow characteristic propagation speeds within the coronal region.

  10. Model-based analysis of multi-shell diffusion MR data for tractography: How to get over fitting problems

    PubMed Central

    Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ

    2012-01-01

    In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
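
    The effect of a continuous Gamma distribution of diffusivities has a convenient closed form: averaging the mono-exponential decay exp(-b·D) over D ~ Gamma(shape k, scale θ) gives (1 + b·θ)^(-k), which decays more slowly at high b than the mono-exponential at the mean diffusivity, matching the non-mono-exponential behaviour described above. The parameter values below are illustrative, not those fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Gamma distribution of diffusivities (shape, scale); illustrative values
# roughly on the order of white-matter mean diffusivity ~0.8e-3 mm^2/s
shape, scale = 4.0, 0.2e-3
mean_D = shape * scale

b = np.array([0.0, 1000.0, 2000.0, 3000.0])  # b-values in s/mm^2

# Closed form (Laplace transform of the Gamma) vs. mono-exponential decay
signal_gamma = (1 + b * scale) ** (-shape)
signal_mono = np.exp(-b * mean_D)

# Monte Carlo check of the closed form
D = rng.gamma(shape, scale, 200_000)
signal_mc = np.array([np.exp(-bi * D).mean() for bi in b])
print(signal_gamma, signal_mono, signal_mc)
```

    At the highest b-value the Gamma-averaged signal sits clearly above the mono-exponential one, which is the slower apparent decay the model is designed to absorb without spurious extra fibre orientations.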

  11. How fast is fisheries-induced evolution? Quantitative analysis of modelling and empirical studies

    PubMed Central

    Audzijonyte, Asta; Kuparinen, Anna; Fulton, Elizabeth A

    2013-01-01

    A number of theoretical models, experimental studies and time-series studies of wild fish have explored the presence and magnitude of fisheries-induced evolution (FIE). While most studies agree that FIE is likely to be happening in many fished stocks, there are disagreements about its rates and implications for stock viability. To address these disagreements in a quantitative manner, we conducted a meta-analysis of FIE rates reported in theoretical and empirical studies. We discovered that rates of phenotypic change observed in wild fish are about four times higher than the evolutionary rates reported in modelling studies, but the correlation between the rate of change and instantaneous fishing mortality (F) was very similar in the two types of studies. Mixed-model analyses showed that in the modelling studies traits associated with reproductive investment and growth evolved more slowly than maturation-related traits. In empirical observations age-at-maturation was changing faster than other life-history traits. We also found that, despite different assumptions and modelling approaches, rates of evolution for a given F value reported in 10 of 13 modelling studies were not significantly different. PMID:23789026

  12. Accuracy of topographic index models at identifying ephemeral gully trajectories on agricultural fields

    NASA Astrophysics Data System (ADS)

    Sheshukov, Aleksey Y.; Sekaluvu, Lawrence; Hutchinson, Stacy L.

    2018-04-01

    Topographic index (TI) models have been widely used to predict trajectories and initiation points of ephemeral gullies (EGs) in agricultural landscapes. Prediction of EGs relies strongly on the selected value of the critical TI threshold, and the accuracy depends on topographic features, agricultural management, and datasets of observed EGs. This study statistically evaluated the predictions of TI models in two paired watersheds in Central Kansas that had different levels of structural disturbance due to implemented conservation practices. Four TI models with sole dependency on the topographic factors of slope, contributing area, and planform curvature were used in this study. The observed EGs were obtained by field reconnaissance and through the process of hydrological reconditioning of digital elevation models (DEMs). Kernel Density Estimation analysis was used to evaluate the TI distribution within a 10-m buffer of the observed EG trajectories. EG occurrence within catchments was analyzed using kappa statistics of the error matrix approach, while the lengths of predicted EGs were compared with the observed dataset using the Nash-Sutcliffe Efficiency (NSE) statistic. The TI frequency analysis produced a bi-modal distribution of topographic indexes, with the pixels within the EG trajectory having a higher peak. The graphs of kappa and NSE versus critical TI threshold showed similar profiles for all four TI models and both watersheds, with the maximum value representing the best comparison with the observed data. The Compound Topographic Index (CTI) model presented the overall best accuracy, with an NSE of 0.55 and a kappa of 0.32. The statistics for the disturbed watershed showed higher best critical TI threshold values than for the undisturbed watershed. Structural conservation practices implemented in the disturbed watershed reduced ephemeral channels in headwater catchments, thus producing less variability in catchments with EGs.
The variation in critical thresholds for all TI models suggested that TI models tend to predict EG occurrence and length over a range of thresholds rather than find a single best value.
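
    The two accuracy statistics used in this record are easy to state concretely: Cohen's kappa measures presence/absence agreement beyond chance, and NSE compares squared prediction error against the variance of the observations. A minimal sketch with invented catchment data (all values below are hypothetical, for illustration only):

```python
import numpy as np

def cohens_kappa(obs, pred):
    """Agreement beyond chance for binary presence/absence predictions."""
    obs, pred = np.asarray(obs, bool), np.asarray(pred, bool)
    po = np.mean(obs == pred)                    # observed agreement
    pe = (obs.mean() * pred.mean()
          + (1 - obs.mean()) * (1 - pred.mean()))  # chance agreement
    return (po - pe) / (1 - pe)

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical EG occurrence per catchment, and predicted vs. observed lengths (m)
obs_occ  = [1, 1, 0, 0, 1, 0, 1, 0]
pred_occ = [1, 0, 0, 0, 1, 1, 1, 0]
obs_len = [120.0, 80.0, 0.0, 0.0, 150.0, 0.0, 60.0, 0.0]
sim_len = [100.0, 20.0, 0.0, 0.0, 140.0, 30.0, 70.0, 0.0]

kappa_val = cohens_kappa(obs_occ, pred_occ)
nse_val = nse(obs_len, sim_len)
print(f"kappa = {kappa_val:.2f}, NSE = {nse_val:.2f}")
```

    Sweeping the critical TI threshold and recomputing these two scores at each value is what produces the kappa and NSE curves whose maxima the study uses to pick the best threshold.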

  13. Solar Signals in CMIP-5 Simulations: The Stratospheric Pathway

    NASA Technical Reports Server (NTRS)

    Mitchell, D.M.; Misios, S.; Gray, L. J.; Tourpali, K.; Matthes, K.; Hood, L.; Schmidt, H.; Chiodo, G.; Thieblemont, R.; Rozanov, E.

    2015-01-01

    The 11 year solar-cycle component of climate variability is assessed in historical simulations of models taken from the Coupled Model Intercomparison Project, phase 5 (CMIP-5). Multiple linear regression is applied to estimate the zonal temperature, wind and annular mode responses to a typical solar cycle, with a focus on both the stratosphere and the stratospheric influence on the surface over the period approximately 1850-2005. The analysis is performed on all CMIP-5 models but focuses on the 13 CMIP-5 models that resolve the stratosphere (high-top models) and compares the simulated solar cycle signature with reanalysis data. The 11 year solar cycle component of climate variability is found to be weaker in terms of magnitude and latitudinal gradient around the stratopause in the models than in the reanalysis. The peak in temperature in the lower equatorial stratosphere (approximately 70 hPa) reported in some studies is found in the models to depend on the length of the analysis period, with the last 30 years yielding the strongest response. A modification of the Polar Jet Oscillation (PJO) in response to the 11 year solar cycle is not robust across all models, but is more apparent in models with high spectral resolution in the short-wave region. The PJO evolution is slower in these models, leading to a stronger response during February, whereas observations indicate it to be weaker. In early winter, the magnitude of the modeled response is more consistent with observations when only data from 1979-2005 are considered. The observed North Pacific high-pressure surface response during the solar maximum is only simulated in some models, for which there are no distinguishing model characteristics. The lagged North Atlantic surface response is reproduced in both high- and low-top models, but is more prevalent in the former. In both cases, the magnitude of the response is generally lower than in observations.
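
    The multiple-linear-regression step in this record can be sketched on a synthetic series: regress the variable of interest on a trend and a solar-cycle index, and read the solar response off the fitted coefficient. The trend, the 0.15 K solar amplitude, the noise level, and the idealized sinusoidal index below are illustrative assumptions; a real CMIP-5 analysis would use an observed solar index and additional regressors (e.g., ENSO, volcanic aerosol).

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic monthly temperature series: trend + 11-yr solar cycle + noise
n = 12 * 60                                  # 60 years of monthly data
t = np.arange(n) / 12.0                      # time in years
solar = np.sin(2 * np.pi * t / 11.0)         # idealized solar-cycle index
temp = 0.01 * t + 0.15 * solar + rng.normal(0, 0.3, n)

# Multiple linear regression: the coefficient on the solar index is the
# estimated solar-cycle response
X = np.column_stack([np.ones(n), t, solar])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"trend = {beta[1]:.4f} K/yr, solar-cycle response = {beta[2]:.3f} K")
```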

  14. Empirically Derived and Simulated Sensitivity of Vegetation to Climate Across Global Gradients of Temperature and Precipitation

    NASA Astrophysics Data System (ADS)

    Quetin, G. R.; Swann, A. L. S.

    2017-12-01

    Successfully predicting the state of vegetation in a novel environment depends on our process-level understanding of the ecosystem and its interactions with the environment. We derive a global empirical map of the sensitivity of vegetation to climate using the response of satellite-observed greenness and leaf area to interannual variations in temperature and precipitation. Our analysis provides observations of ecosystem functioning (the vegetation's interactions with the physical environment) across a wide range of climates, and provides a functional constraint for hypotheses engendered in process-based models. We infer mechanisms constraining ecosystem functioning by contrasting how the observed and simulated sensitivity of vegetation to climate varies across climate space. Our analysis yields empirical evidence for multiple physical and biological mediators of the sensitivity of vegetation to climate as a systematic change across climate space. Our comparison of remote sensing-based vegetation sensitivity with modeled estimates provides evidence for which physiological mechanisms - photosynthetic efficiency, respiration, water supply, atmospheric water demand, and sunlight availability - dominate ecosystem functioning in places with different climates. Earth system models are generally successful in reproducing the broad sign and shape of ecosystem functioning across climate space. However, this general agreement breaks down in hot, wet climates, where models simulate less leaf area during a warmer year while observations show a mixed response but overall more leaf area during warmer years. In addition, the simulated ecosystem interaction with temperature is generally larger and changes more rapidly across a gradient of temperature than is observed. We hypothesize that the amplified interaction and change are both due to a lack of adaptation and acclimation in the simulations. This discrepancy with observations suggests that simulated responses of vegetation to global warming, and feedbacks between vegetation and climate, are too strong in the models.

  15. An analysis of the Kalman filter in the Gamma Ray Observatory (GRO) onboard attitude determination subsystem

    NASA Technical Reports Server (NTRS)

    Snow, Frank; Harman, Richard; Garrick, Joseph

    1988-01-01

    The Gamma Ray Observatory (GRO) spacecraft requires highly accurate attitude knowledge to achieve its mission objectives. Utilizing the fixed-head star trackers (FHSTs) for observations and gyroscopes for attitude propagation, the discrete Kalman filter processes the attitude data to obtain an onboard accuracy of 86 arc seconds (3 sigma). A combination of linear analysis and simulations using the GRO Software Simulator (GROSS) is employed to investigate the Kalman filter for stability and for the effects of corrupted observations (misalignment, noise), incomplete dynamic modeling, and nonlinear errors on the Kalman filter. In the simulations, the onboard attitude is compared with the true attitude, the sensitivity of attitude error to model errors is graphed, and a statistical analysis is performed on the residuals of the Kalman filter. In this paper, the modeling and sensor errors that degrade the Kalman filter solution beyond mission requirements are studied, and methods are offered to identify the source of these errors.
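
    The residual analysis described above relies on a standard diagnostic: for a correctly tuned Kalman filter, innovations normalized by their predicted standard deviation should be zero-mean with unit variance. A minimal scalar sketch (the process/measurement noise values are illustrative, not GRO parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

# Scalar slowly varying state, measured with noise; the filter's innovations
# (obs - prediction) should have variance close to the predicted S = P + R
q, r = 1e-4, 0.5**2
x_true, x_hat, p = 3.0, 0.0, 10.0
innovations, s_pred = [], []

for _ in range(2000):
    x_true += rng.normal(0, np.sqrt(q))       # slow random-walk truth
    z = x_true + rng.normal(0, np.sqrt(r))    # noisy measurement
    p = p + q                                 # predict covariance
    nu = z - x_hat                            # innovation
    s = p + r                                 # predicted innovation variance
    innovations.append(nu)
    s_pred.append(s)
    k = p / s                                 # Kalman gain
    x_hat += k * nu                           # update state
    p *= (1 - k)                              # update covariance

nu = np.array(innovations[50:])               # skip the spin-up
norm = nu / np.sqrt(np.array(s_pred[50:]))
print(f"mean normalized innovation {norm.mean():+.2f}, variance {norm.var():.2f}")
```

    A normalized-innovation mean far from 0 or variance far from 1 is exactly the kind of signature the GRO study uses to flag misalignment, noise misspecification, or incomplete dynamic modeling.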

  16. PROPERTIES OF 42 SOLAR-TYPE KEPLER TARGETS FROM THE ASTEROSEISMIC MODELING PORTAL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metcalfe, T. S.; Mathur, S.; Creevey, O. L.

    2014-10-01

    Recently the number of main-sequence and subgiant stars exhibiting solar-like oscillations that are resolved into individual mode frequencies has increased dramatically. While only a few such data sets were available for detailed modeling just a decade ago, the Kepler mission has produced suitable observations for hundreds of new targets. This rapid expansion in observational capacity has been accompanied by a shift in analysis and modeling strategies to yield uniform sets of derived stellar properties more quickly and easily. We use previously published asteroseismic and spectroscopic data sets to provide a uniform analysis of 42 solar-type Kepler targets from the Asteroseismic Modeling Portal. We find that fitting the individual frequencies typically doubles the precision of the asteroseismic radius, mass, and age compared to grid-based modeling of the global oscillation properties, and improves the precision of the radius and mass by about a factor of three over empirical scaling relations. We demonstrate the utility of the derived properties with several applications.
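
    The "empirical scaling relations" this record benchmarks against are the standard asteroseismic relations linking the frequency of maximum power (nu_max), the large frequency separation (Delta nu), and the effective temperature to stellar mass and radius. A minimal sketch, using the commonly adopted solar reference values; the subgiant-like inputs in the second call are invented for illustration:

```python
# Commonly adopted solar reference values for the scaling relations
NU_MAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0   # µHz, µHz, K

def scaling_mass_radius(nu_max, dnu, teff):
    """Return (M/Msun, R/Rsun) from global oscillation properties."""
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    return m, r

# Sanity check with solar inputs, then a hypothetical subgiant-like example
print(scaling_mass_radius(3090.0, 135.1, 5777.0))   # -> (1.0, 1.0)
print(scaling_mass_radius(1000.0, 60.0, 5600.0))
```

    Fitting individual mode frequencies, as the Asteroseismic Modeling Portal does, replaces these two global quantities with the full frequency set, which is where the quoted factor-of-three precision gain comes from.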

  17. A multivariate variational objective analysis-assimilation method. Part 1: Development of the basic model

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.; Ochs, Harry T., III

    1988-01-01

    The variational method of undetermined multipliers is used to derive a multivariate model for objective analysis. The model is intended for the assimilation of 3-D fields of rawinsonde height, temperature and wind, and mean level temperature observed by satellite into a dynamically consistent data set. Relative measurement errors are taken into account. The dynamic equations are the two nonlinear horizontal momentum equations, the hydrostatic equation, and an integrated continuity equation. The model Euler-Lagrange equations are eleven linear and/or nonlinear partial differential and/or algebraic equations. A cyclical solution sequence is described. Other model features include a nonlinear terrain-following vertical coordinate that eliminates truncation error in the pressure gradient terms of the horizontal momentum equations and easily accommodates satellite observed mean layer temperatures in the middle and upper troposphere. A projection of the pressure gradient onto equivalent pressure surfaces removes most of the adverse impacts of the lower coordinate surface on the variational adjustment.

  18. Predictions of structural integrity of steam generator tubes under normal operating, accident, and severe accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Majumdar, S.

    1997-02-01

    Available models for predicting failure of flawed and unflawed steam generator tubes under normal operating, accident, and severe accident conditions are reviewed. Tests conducted in the past, though limited, tended to show that the earlier flow-stress model for part-through-wall axial cracks overestimated the damaging influence of deep cracks. This observation was confirmed by further tests at high temperatures, as well as by finite-element analysis. A modified correlation for deep cracks can correct this shortcoming of the model. Recent tests have shown that lateral restraint can significantly increase the failure pressure of tubes with unsymmetrical circumferential cracks. This observation was confirmed by finite-element analysis. The rate-independent flow-stress models that are successful at low temperatures cannot predict the rate-sensitive failure behavior of steam generator tubes at high temperatures. Therefore, a creep rupture model for predicting failure was developed and validated by tests under various temperature and pressure loadings that can occur during postulated severe accidents.

  19. Orbit-determination performance of Doppler data for interplanetary cruise trajectories. Part 1: Error analysis methodology

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.; Thurman, S. W.

    1992-01-01

    An error covariance analysis methodology is used to investigate different weighting schemes for two-way (coherent) Doppler data in the presence of transmission-media and observing-platform calibration errors. The analysis focuses on orbit-determination performance in the interplanetary cruise phase of deep-space missions. Analytical models for the Doppler observable and for transmission-media and observing-platform calibration errors are presented, drawn primarily from previous work. Previously published analytical models were improved upon by the following: (1) considering the effects of errors in the calibration of radio signal propagation through the troposphere and ionosphere as well as station-location errors; (2) modelling the spacecraft state transition matrix using a more accurate piecewise-linear approximation to represent the evolution of the spacecraft trajectory; and (3) incorporating Doppler data weighting functions that are functions of elevation angle, which reduce the sensitivity of the estimated spacecraft trajectory to troposphere and ionosphere calibration errors. The analysis is motivated by the need to develop suitable weighting functions for two-way Doppler data acquired at 8.4 GHz (X-band) and 32 GHz (Ka-band). This weighting is likely to be different from that in the weighting functions currently in use; the current functions were constructed originally for use with 2.3 GHz (S-band) Doppler data, which are affected much more strongly by the ionosphere than are the higher frequency data.
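
The elevation-dependent weighting idea can be illustrated with a toy noise model in which the effective Doppler noise grows roughly as the signal path length through the troposphere, i.e. as 1/sin(elevation). The functional form and the constants `sigma0` and `k` below are hypothetical placeholders, not the weighting functions developed in the paper:

```python
import math

def doppler_sigma(elev_deg, sigma0=0.05, k=0.05):
    """Illustrative elevation-dependent Doppler noise model: noise
    grows as the path through the troposphere lengthens (roughly
    1/sin(elevation)).  sigma0 and k are hypothetical values."""
    return sigma0 + k / math.sin(math.radians(elev_deg))

def doppler_weight(elev_deg):
    # Data weight is the inverse variance of the assumed noise model,
    # so low-elevation points are automatically de-weighted.
    return 1.0 / doppler_sigma(elev_deg) ** 2
```

De-weighting low-elevation data in this way reduces the sensitivity of the trajectory estimate to troposphere and ionosphere calibration errors, at the cost of discarding some geometric information.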

  20. Evaluation of solar irradiance models for climate studies

    NASA Astrophysics Data System (ADS)

    Ball, William; Yeo, Kok-Leng; Krivova, Natalie; Solanki, Sami; Unruh, Yvonne; Morrill, Jeff

    2015-04-01

    Instruments on satellites have been observing both Total Solar Irradiance (TSI) and Spectral Solar Irradiance (SSI), mainly in the ultraviolet (UV), since 1978. Models were developed to reproduce the observed variability and to compute the variability at wavelengths that were not observed or whose uncertainty was too high to determine an accurate rotational or solar cycle variability. However, various models and measurements show different solar cycle SSI variability, which leads to different modelled responses of ozone and temperature in the stratosphere (mainly due to the different UV variability in each model) and of the global energy balance. The NRLSSI and SATIRE-S models are the most comprehensive reconstructions of solar irradiance variability for the period from 1978 to the present day. While NRLSSI and SATIRE-S show similar solar cycle variability below 250 nm, between 250 and 400 nm SATIRE-S typically displays 50% larger variability, which is, however, still significantly less than suggested by recent SORCE data. Due to large uncertainties and inconsistencies in some observational datasets, it is difficult to determine in a simple way which model is likely to be closer to the true solar variability. We review solar irradiance variability measurements and modelling and employ new analysis that sheds light on the causes of the discrepancies between the two models and with the observations.

  1. Low-Metallicity Lead Stars: Comparison between Theory and Observations

    NASA Astrophysics Data System (ADS)

    Bisterzo, S.; Gallino, R.; Straniero, O.; Aoki, W.; Ryan, S.; Beers, T. C.

    2006-07-01

    We compare AGB theoretical models with spectroscopic abundances of a sample of very metal-poor, C-rich, s-rich, and lead-rich stars observed with high-resolution spectroscopy. Fits are obtained for AGB models with different 13C-pocket efficiencies and initial masses. The two intrinsic indicators, [hs/ls] and [Pb/hs] versus [Fe/H], are analyzed. An extended analysis of all the observed elements is made, outlining apparent discrepancies for a few elements. The analysis of C and N abundances strengthens the need for a strong cool bottom process occurring during the AGB phase. A significant number of these stars are both s-enriched and r-enriched. For these stars, the envelope abundances are explained by mass transfer from the more massive AGB companion in a binary system formed from a parental cloud already enriched in r-elements.

  2. Gravity model development for precise orbit computations for satellite altimetry

    NASA Technical Reports Server (NTRS)

    Marsh, James G.; Lerch, Francis J.; Smith, David E.; Klosko, Steven M.; Pavlis, Erricos

    1986-01-01

    Two preliminary gravity models developed as a first step in reaching the TOPEX/Poseidon modeling goals are discussed. They were obtained by NASA-Goddard from an analysis of exclusively satellite tracking observations. With the new Preliminary Gravity Solution-T2 model, an improved global estimate of the field is achieved with an improved description of the geoid.

  3. Dynamical Analysis in the Mathematical Modelling of Human Blood Glucose

    ERIC Educational Resources Information Center

    Bae, Saebyok; Kang, Byungmin

    2012-01-01

    We want to apply the geometrical method to a dynamical system of human blood glucose. Due to the educational importance of model building, we show a relatively general modelling process using observational facts. Next, two models of some concrete forms are analysed in the phase plane by means of linear stability, phase portrait and vector…

  4. Analysis of Polder Polarization Measurements During Astex and Eucrex Experiments

    NASA Technical Reports Server (NTRS)

    Chen, Hui; Han, Qingyuan; Chou, Joyce; Welch, Ronald M.

    1997-01-01

    Polarization is more sensitive than intensity to cloud microstructure such as particle size and shape, and multiple scattering does not wash out features in polarization as effectively as it does in the intensity. Polarization measurements, particularly in the near IR, are potentially a valuable tool for cloud identification and for studies of the microphysics of clouds. The POLDER instrument is designed to provide wide field of view bidirectional images in polarized light. During the ASTEX-SOFIA campaign on June 12th, 1992, over the Atlantic Ocean (near the Azores Islands), images of homogeneous thick stratocumulus cloud fields were acquired. During the EUCREX'94 (April, 1994) campaign, the POLDER instrument was flown over the region of Brittany (France), taking observations of cirrus clouds. This study involves model studies and data analysis of POLDER observations. Both models and data analysis show that POLDER can be used to detect cloud thermodynamic phases. Model results show that polarized reflection in the λ = 0.86 μm band is sensitive to cloud droplet sizes but not to cloud optical thickness. Comparison between model and data analysis reveals that cloud droplet sizes during ASTEX are about 5 microns, which agrees very well with the results of in situ measurements (4-5 microns). Knowing the retrieved cloud droplet sizes, the total reflected intensity of the POLDER measurements can then be used to retrieve cloud optical thickness. The close agreement between data analysis and model results during ASTEX also suggests the homogeneity of the cloud layer during that campaign.
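
The polarized quantity involved can be made concrete with a minimal sketch: the degree of linear polarization follows from the Stokes parameters I, Q, U that a polarimeter such as POLDER measures. This is the standard textbook definition, not POLDER-specific processing:

```python
import math

def degree_of_linear_polarization(I, Q, U):
    """Degree of linear polarization from the Stokes parameters:
    P = sqrt(Q^2 + U^2) / I."""
    return math.sqrt(Q * Q + U * U) / I
```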

  5. Pikalert(R) System Vehicle Data Translator (VDT) Utilizing Integrated Mobile Observations Pikalert VDT Enhancements, Operations, & Maintenance

    DOT National Transportation Integrated Search

    2017-03-24

    The Pikalert System provides high precision road weather guidance. It assesses current weather and road conditions based on observations from connected vehicles, road weather information stations, radar, and weather model analysis fields. It also for...

  6. Correlative analysis of hard and soft x ray observations of solar flares

    NASA Technical Reports Server (NTRS)

    Zarro, Dominic M.

    1994-01-01

    We have developed a promising new technique for jointly analyzing BATSE hard X-ray observations of solar flares with simultaneous soft X-ray observations. The technique is based upon a model in which electric currents and associated electric fields are responsible for the respective heating and particle acceleration that occur in solar flares. A useful by-product of this technique is the strength and evolution of the coronal electric field. The latter permits one to derive important flare parameters such as the current density, the number of current filaments composing the loop, and ultimately the hard X-ray spectrum produced by the runaway electrons. We are continuing to explore the technique by applying it to additional flares for which we have joint BATSE/Yohkoh observations. A central assumption of our analysis is the constant of proportionality alpha relating the hard X-ray flux above 50 keV to the rate of electron acceleration. For a thick-target model of hard X-ray production, it can be shown that alpha is in fact related to the spectral index and low-energy cutoff of precipitating electrons. The next step in our analysis is to place observational constraints on the latter parameters using the joint BATSE/Yohkoh data.

  7. An Information-theoretic Approach to Optimize JWST Observations and Retrievals of Transiting Exoplanet Atmospheres

    NASA Astrophysics Data System (ADS)

    Howe, Alex R.; Burrows, Adam; Deming, Drake

    2017-01-01

    We provide an example of an analysis to explore the optimization of observations of transiting hot Jupiters with the James Webb Space Telescope (JWST) to characterize their atmospheres based on a simple three-parameter forward model. We construct expansive forward model sets for 11 hot Jupiters, 10 of which are relatively well characterized, exploring a range of parameters such as equilibrium temperature and metallicity, as well as considering host stars over a wide range in brightness. We compute posterior distributions of our model parameters for each planet with all of the available JWST spectroscopic modes and several programs of combined observations and compute their effectiveness using the metric of estimated mutual information per degree of freedom. From these simulations, clear trends emerge that provide guidelines for designing a JWST observing program. We demonstrate that these guidelines apply over a wide range of planet parameters and target brightnesses for our simple forward model.
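
The ranking metric can be illustrated with a toy computation of mutual information from a discretized joint distribution of a model parameter and an observable; this is the generic definition, with the paper's per-degree-of-freedom normalization omitted:

```python
import math

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a discrete joint
    probability table joint[x][y] (rows: parameter bins,
    columns: observable bins)."""
    px = [sum(row) for row in joint]            # marginal over rows
    py = [sum(col) for col in zip(*joint)]      # marginal over columns
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi
```

Observing modes whose simulated spectra share more mutual information with the model parameters constrain the atmosphere more effectively per degree of freedom.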

  8. Overpressures in the Uinta Basin, Utah: Analysis using a three-dimensional basin evolution model

    NASA Astrophysics Data System (ADS)

    McPherson, Brian J. O. L.; Bredehoeft, John D.

    2001-04-01

    High pore fluid pressures, approaching lithostatic, are observed in the deepest sections of the Uinta basin, Utah. Geologic observations and previous modeling studies suggest that the most likely cause of observed overpressures is hydrocarbon generation. We studied Uinta overpressures by developing and applying a three-dimensional, numerical model of the evolution of the basin. The model was developed from a public domain computer code, with addition of a new mesh generator that builds the basin through time, coupling the structural, thermal, and hydrodynamic evolution. Also included in the model are in situ hydrocarbon generation and multiphase migration. The modeling study affirmed oil generation as an overpressure mechanism, but also elucidated the relative roles of multiphase fluid interaction, oil density and viscosity, and sedimentary compaction. An important result is that overpressures by oil generation create conditions for rock fracturing, and associated fracture permeability may regulate or control the propensity to maintain overpressures.

  9. Observational constraint on spherical inhomogeneity with CMB and local Hubble parameter

    NASA Astrophysics Data System (ADS)

    Tokutake, Masato; Ichiki, Kiyotomo; Yoo, Chul-Moon

    2018-03-01

    We derive an observational constraint on a spherical inhomogeneity of the void centered at our position from the angular power spectrum of the cosmic microwave background (CMB) and local measurements of the Hubble parameter. The late time behaviour of the void is assumed to be well described by the so-called Λ-Lemaître-Tolman-Bondi (ΛLTB) solution. Then, we restrict the models to asymptotically homogeneous models, each of which is approximated by a flat Friedmann-Lemaître-Robertson-Walker model. The late time ΛLTB models are parametrized by four parameters, including the value of the cosmological constant and the local Hubble parameter. The other two parameters are used to parametrize the observed distance-redshift relation. The ΛLTB models are then constructed so that they are compatible with the given distance-redshift relation. Including conventional parameters for the CMB analysis, we characterize our models by seven parameters in total. The local Hubble measurements are reflected in the prior distribution of the local Hubble parameter. As a result of a Markov chain Monte Carlo analysis of the CMB temperature and polarization anisotropies, we find that inhomogeneous universe models with a vanishing cosmological constant are ruled out, as expected. However, a significant under-density around us is still compatible with the angular power spectrum of the CMB and the local Hubble parameter.

  10. Joint Center for Satellite Data Assimilation Overview and Research Activities

    NASA Astrophysics Data System (ADS)

    Auligne, T.

    2017-12-01

    In 2001 NOAA/NESDIS, NOAA/NWS, NOAA/OAR, and NASA, subsequently joined by the US Navy and Air Force, came together to form the Joint Center for Satellite Data Assimilation (JCSDA) for the common purpose of accelerating the use of satellite data in environmental numerical prediction modeling by developing, using, and anticipating advances in numerical modeling, satellite-based remote sensing, and data assimilation methods. The primary focus was to bring these advances together to improve operational numerical model-based forecasting, under the premise that these partners have common technical and logistical challenges in assimilating satellite observations into their modeling enterprises that could be better addressed through cooperative action and/or common solutions. Over the last 15 years, the JCSDA has made and continues to make major contributions to the operational assimilation of satellite data. The JCSDA is a multi-agency U.S. government-owned-and-operated organization conceived as a venue for NOAA, NASA, the USAF, and the USN to collaborate on advancing the development and operational use of satellite observations in numerical model-based environmental analysis and forecasting. The primary mission of the JCSDA is to "accelerate and improve the quantitative use of research and operational satellite data in weather, ocean, climate and environmental analysis and prediction systems." This mission is fulfilled through directed research targeting the following key science objectives: improved radiative transfer modeling; new instrument assimilation; assimilation of humidity, cloud, and precipitation observations; assimilation of land surface observations; assimilation of ocean surface observations; and atmospheric composition, chemistry, and aerosols. The goal of this presentation is to briefly introduce the JCSDA's mission and vision, and to describe recent research activities across the various JCSDA partners.

  11. The Chandra Source Catalog 2.0: Spectral Properties

    NASA Astrophysics Data System (ADS)

    McCollough, Michael L.; Siemiginowska, Aneta; Burke, Douglas; Nowak, Michael A.; Primini, Francis Anthony; Laurino, Omar; Nguyen, Dan T.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula; Chandra Source Catalog Team

    2018-01-01

    The second release of the Chandra Source Catalog (CSC) contains all sources identified from sixteen years' worth of publicly accessible observations. The vast majority of these sources have been observed with the ACIS detector and have spectral information in the 0.5-7 keV energy range. Here we describe the methods used to automatically derive spectral properties for each source detected by the standard processing pipeline and included in the final CSC. Sources with a high signal-to-noise ratio (exceeding 150 net counts) were fit in Sherpa (the modeling and fitting application from the Chandra Interactive Analysis of Observations package) using wstat as the fit statistic and the Bayesian draws method to determine errors. Three models were fit to each source: an absorbed power-law, blackbody, and Bremsstrahlung emission model. The fitted parameter values for the power-law, blackbody, and Bremsstrahlung models are included in the catalog along with the calculated flux for each model. The CSC also provides the source energy fluxes computed from the normalizations of predefined absorbed power-law, blackbody, Bremsstrahlung, and APEC models needed to match the observed net X-ray counts. For sources that have been observed multiple times, a Bayesian Blocks analysis is performed (see the Primini et al. poster), and a joint fit of the above spectral models is performed for the most significant block. In addition, we provide access to data products for each source: a file with the source spectrum, the background spectrum, and the spectral response of the detector. Hardness ratios were calculated for each source between pairs of energy bands (soft, medium, and hard). This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
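
The hardness ratios mentioned at the end can be sketched with the conventional definition between two energy bands; the band edges and the exact CSC convention are not reproduced here:

```python
def hardness_ratio(hard_counts, soft_counts):
    """Conventional hardness ratio between two energy bands,
    HR = (H - S) / (H + S), ranging from -1 (all soft) to +1 (all hard)."""
    return (hard_counts - soft_counts) / (hard_counts + soft_counts)
```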

  12. Time-series photometric spot modeling. 2: Fifteen years of photometry of the bright RS CVn binary HR 7275

    NASA Technical Reports Server (NTRS)

    Strassmeier, K. G.; Hall, D. S.; Henry, G. W.

    1994-01-01

    We present a time-dependent spot modeling analysis of 15 consecutive years of V-band photometry of the long-period (P(sub orb) = 28.6 days) RS CVn binary HR 7275. This baseline is one of the longest uninterrupted intervals over which a spotted star has been observed. The spot modeling analysis yields a total of 20 different spots throughout the time span of our observations. The distribution of the observed spot migration rates is consistent with solar-type differential rotation and suggests a lower limit on the differential-rotation coefficient of 0.022 +/- 0.004. The observed maximum lifetime of a single spot (or spot group) is 4.5 years and the minimum lifetime is approximately one year, but an average spot lives for 2.2 years. If we assume that the mechanical shear by differential rotation sets the upper limit to the spot lifetime, the observed maximum lifetime in turn sets an upper limit to the differential-rotation coefficient, namely 0.04 +/- 0.01. This would be differential rotation just 5 to 8 times weaker than the solar value and one of the strongest among active binaries. We found no conclusive evidence for the existence of a periodic phenomenon that could be attributed to a stellar magnetic cycle.
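
The quoted lower limit on the differential-rotation coefficient follows the usual logic that the relative spread of observed spot rotation periods bounds the surface shear from below. A minimal sketch with made-up periods (not the HR 7275 measurements):

```python
def diff_rotation_coefficient(periods):
    """Lower limit on the differential-rotation coefficient from the
    spread of spot rotation periods: k >= (P_max - P_min) / P_mean.
    Spots at latitudes rotating fastest/slowest set the two extremes."""
    p_min, p_max = min(periods), max(periods)
    p_mean = sum(periods) / len(periods)
    return (p_max - p_min) / p_mean
```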

  13. The impact of missing trauma data on predicting massive transfusion

    PubMed Central

    Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.

    2013-01-01

    INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms.
Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
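
The upper/lower-bound idea in the conclusions can be sketched in a few lines: treat subjects who cannot be scored (because a predictor is missing) as either all misclassified or all correctly classified. A schematic version, not the PROMMTT analysis code:

```python
def classification_bounds(correct_complete, n_missing, n_total):
    """Worst/best-case bounds on the fraction correctly classified
    when some subjects cannot be scored due to missing predictors:
    count the missing cases as all wrong (lower bound) or all
    right (upper bound)."""
    lower = correct_complete / n_total
    upper = (correct_complete + n_missing) / n_total
    return lower, upper
```

The width of the resulting interval (upper minus lower) grows directly with the amount of missingness, which is why models relying on frequently missing predictors carried the widest bound ranges.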

  14. Observations and NLTE modeling of Ellerman bombs

    NASA Astrophysics Data System (ADS)

    Berlicki, A.; Heinzel, P.

    2014-07-01

    Context. Ellerman bombs (EBs) are short-lived, compact, and spatially well localized emission structures that are well observed in the wings of the hydrogen Hα line. EBs are also observed in the chromospheric CaII lines and in UV continua as bright points located within active regions. Hα line profiles of EBs show a deep absorption at the line center and enhanced emission in the line wings, with maxima around ±1 Å from the line center. Similar shapes of the line profiles are observed for the CaII IR line at 8542 Å. In the CaII H and K lines the emission peaks are much stronger, and EB emission is also enhanced in the line center. Aims: It is generally accepted that EBs may be considered as compact microflares located in the lower solar atmosphere that contribute to the heating of these low-lying regions, close to the temperature minimum of the atmosphere. However, it is still not clear where exactly the emission of EBs is formed in the solar atmosphere. High-resolution spectrophotometric observations of EBs were used to determine their physical parameters and to construct semi-empirical models. The obtained models allow us to determine the position of EBs in the solar atmosphere, as well as the vertical structure of the activated EB atmosphere. Methods: In our analysis we used observations of EBs obtained in the Hα and CaII H lines with the Dutch Open Telescope (DOT). These one-hour long simultaneous sequences, obtained with high temporal and spatial resolution, were used to determine the line emissions. To analyze them, we used NLTE numerical codes to construct a grid of 243 semi-empirical models simulating EB structures. In this way, the observed emission could be compared with the synthetic line spectra calculated for all such models. Results: For a specific model we found reasonable agreement between the observed and theoretical emission, and thus we consider this model a good approximation to EB atmospheres. This model is characterized by an enhanced temperature in the lower chromosphere and can be considered a compact structure (hot spot) responsible for the emission observed in the wings of chromospheric lines, in particular the Hα and CaII H lines. Conclusions: For the first time, the set of two lines Hα and CaII H was used to construct semi-empirical models of EBs. Our analysis shows that EBs can be described by a "hot spot" model, with a temperature and/or density increase over an atmospheric structure a few hundred km thick. We confirmed that EBs are located close to the temperature minimum or in the lower chromosphere. Two spectral features (lines in our case), observed simultaneously, significantly strengthen the constraints on a realistic model.

  15. Data Assimilation with the Extended Cmam: Nudging to Re-Analyses of the Lower Atmosphere

    NASA Astrophysics Data System (ADS)

    Fomichev, V. I.; Beagley, S. R.; Shepherd, M. G.; Semeniuk, K.; Mclandress, C. W.; Scinocca, J.; McConnell, J. C.

    2012-12-01

    The extended CMAM is currently being run in a forecast mode, allowing the model to be used to simulate specific events. The current analysis period covers 1990-2010. The model is forced using ERA-Interim re-analyses via a nudging technique for the troposphere/stratosphere, in combination with the GCM evolution in the lower atmosphere. Thus a transient forced model state is created in the lower atmosphere. The upper atmosphere is allowed to evolve in response to the observed conditions occurring in the lower atmosphere and in response to other transient forcings such as SSTs, solar flux, and CO2 and CFC boundary changes. This methodology allows specific events and observations to be more successfully compared with the model. The model results compared to TOMS and ACE observations show good agreement.
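
The nudging (Newtonian relaxation) technique mentioned above can be sketched for a single scalar model variable: the model tendency is augmented by a relaxation toward the reanalysis value on a time scale tau. A schematic explicit-Euler version, not the CMAM implementation:

```python
def nudge(x, x_reanalysis, tendency, dt, tau):
    """One explicit Euler step of a nudged model variable:
    dx/dt = F(x) + (x_ra - x) / tau, where tau is the relaxation
    time scale.  tendency is the model's own tendency F(x)."""
    return x + dt * (tendency + (x_reanalysis - x) / tau)
```

With a short tau the variable tracks the reanalysis closely (strong forcing); with a long tau the model's free dynamics dominate, which is how the forced troposphere/stratosphere can coexist with a freely evolving upper atmosphere.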

  16. Connecting Satellite Observations with Water Cycle Variables Through Land Data Assimilation: Examples Using the NASA GEOS-5 LDAS

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf H.; De Lannoy, Gabrielle J. M.; Forman, Barton A.; Draper, Clara S.; Liu, Qing

    2013-01-01

    A land data assimilation system (LDAS) can merge satellite observations (or retrievals) of land surface hydrological conditions, including soil moisture, snow, and terrestrial water storage (TWS), into a numerical model of land surface processes. In theory, the output from such a system is superior to estimates based on the observations or the model alone, thereby enhancing our ability to understand, monitor, and predict key elements of the terrestrial water cycle. In practice, however, satellite observations do not correspond directly to the water cycle variables of interest. The present paper addresses various aspects of this seeming mismatch using examples drawn from recent research with the ensemble-based NASA GEOS-5 LDAS. These aspects include (1) the assimilation of coarse-scale observations into higher-resolution land surface models, (2) the partitioning of satellite observations (such as TWS retrievals) into their constituent water cycle components, (3) the forward modeling of microwave brightness temperatures over land for radiance-based soil moisture and snow assimilation, and (4) the selection of the most relevant types of observations for the analysis of a specific water cycle variable that is not observed (such as root zone soil moisture). The solution to these challenges involves the careful construction of an observation operator that maps from the land surface model variables of interest to the space of the assimilated observations.
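
Point (1), assimilating coarse-scale observations into a finer model grid, hinges on the observation operator: a map from model space into observation space. A minimal sketch for a single footprint-averaged retrieval (the GEOS-5 operator is of course far more elaborate):

```python
def observation_operator(fine_states, footprint_weights):
    """Map high-resolution model states into the space of a single
    coarse-scale observation as a weighted average over the model
    grid cells falling inside the satellite footprint."""
    total_w = sum(footprint_weights)
    return sum(s * w for s, w in zip(fine_states, footprint_weights)) / total_w
```

In an ensemble filter, the innovation is then computed as the observation minus this operator applied to each ensemble member, so the analysis update is distributed back across the fine grid cells.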

  17. Computational investigation of kinetics of cross-linking reactions in proteins: importance in structure prediction.

    PubMed

    Bandyopadhyay, Pradipta; Kuntz, Irwin D

    2009-01-01

    The determination of protein structure using distance constraints is a new and promising field of study. One implementation involves attaching residues of a protein using a cross-linking agent, followed by protease digestion, analysis of the resulting peptides by mass spectroscopy, and finally sequence threading to detect the protein folds. In the present work, we carry out computational modeling of the kinetics of cross-linking reactions in proteins using the master equation approach. The rate constants of the cross-linking reactions are estimated using the pKas and the solvent-accessible surface areas of the residues involved. This model is tested with fibroblast growth factor (FGF) and cytochrome C. It is consistent with the initial experimental rate data for individual lysine residues for cytochrome C. Our model captures all observed cross-links for FGF and almost 90% of the observed cross-links for cytochrome C, although it also predicts cross-links that were not observed experimentally (false positives). However, the analysis of the false positive results is complicated by the fact that experimental detection of cross-links can be difficult and may depend on specific experimental conditions such as pH and ionic strength. Receiver operator characteristic plots showed that our model does a good job of predicting the observed cross-links. Molecular dynamics simulations showed that for cytochrome C, in general, the two lysines come closer for the observed cross-links than for the false positive ones. For FGF, no such clear pattern exists. The kinetic model and MD simulations can be used to study proposed cross-linking protocols.
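
The rate-constant estimate described above combines an intrinsic reactivity with the pKa (via the deprotonated fraction of the lysine amine, from the Henderson-Hasselbalch relation) and the solvent-accessible surface area. The multiplicative scaling below is a hypothetical illustration of that idea, not the paper's exact formula:

```python
def reactive_fraction(pka, ph):
    """Henderson-Hasselbalch fraction of deprotonated (reactive)
    lysine amine at a given pH: f = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

def crosslink_rate(k_intrinsic, pka, ph, sasa, sasa_max):
    """Hypothetical rate-constant estimate: intrinsic reactivity
    scaled by the deprotonated fraction and by relative
    solvent-accessible surface area (buried residues react slowly)."""
    return k_intrinsic * reactive_fraction(pka, ph) * (sasa / sasa_max)
```

Per-residue rates of this kind feed the master equation, whose solution gives the time evolution of each cross-linked species.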

  18. Spectro-Timing Study of GX 339-4 in a Hard Intermediate State

    NASA Technical Reports Server (NTRS)

    Furst, F.; Grinberg, V.; Tomsick, J. A.; Bachetti, M.; Boggs, S. E.; Brightman, M.; Christensen, F. E.; Craig, W. W.; Ghandi, P.; Zhang, William W.

    2016-01-01

    We present an analysis of Nuclear Spectroscopic Telescope Array observations of a hard intermediate state of the transient black hole GX 339-4 taken in 2015 January. With the source softening significantly over the course of the 1.3-day-long observation, we split the data into 21 subsets and find that the spectrum of each can be well described by a power-law continuum with an additional relativistically blurred reflection component. The photon index increases from approx. 1.69 to approx. 1.77 over the course of the observation. The accretion disk is truncated at around nine gravitational radii in all spectra. We also perform timing analysis on the same 21 individual data sets, and find a strong type-C quasi-periodic oscillation (QPO), which increases in frequency from approx. 0.68 to approx. 1.05 Hz with time. The frequency change is well correlated with the softening of the spectrum. We discuss possible scenarios for the production of the QPO and calculate predicted inner radii in the relativistic precession model as well as the global disk mode oscillations model. We find discrepancies with respect to the observed values in both models unless we allow for a black hole mass of approx. 100 solar masses, which is highly unlikely. We discuss possible systematic uncertainties, in particular with the measurement of the inner accretion disk radius in the relativistic reflection model. We conclude that the combination of observed QPO frequencies and inner accretion disk radii, as obtained from spectral fitting, is difficult to reconcile with current models.

  19. Mean field dynamics of some open quantum systems

    NASA Astrophysics Data System (ADS)

    Merkli, Marco; Rafiyi, Alireza

    2018-04-01

    We consider a large number N of quantum particles coupled via a mean field interaction to another quantum system (reservoir). Our main result is an expansion for the averages of observables, both of the particles and of the reservoir, in inverse powers of √N. The analysis is based directly on the Dyson series expansion of the propagator. We analyse the dynamics, in the limit N → ∞, of observables of a fixed number n of particles, of extensive particle observables and their fluctuations, as well as of reservoir observables. We illustrate our results on the infinite mode Dicke model and on various energy-conserving models.
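
The structure of such an expansion can be written schematically for an observable A (generic coefficients, not the paper's notation):

```latex
\langle A \rangle_N
  = A_0 + \frac{A_{1/2}}{\sqrt{N}} + \frac{A_1}{N}
  + \mathcal{O}\!\left(N^{-3/2}\right),
\qquad N \to \infty,
```

with \(A_0\) the mean-field limiting value and the higher coefficients the finite-N corrections extracted order by order from the Dyson series.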

  20. Mean field dynamics of some open quantum systems.

    PubMed

    Merkli, Marco; Rafiyi, Alireza

    2018-04-01

    We consider a large number N of quantum particles coupled via a mean field interaction to another quantum system (reservoir). Our main result is an expansion for the averages of observables, both of the particles and of the reservoir, in inverse powers of √N. The analysis is based directly on the Dyson series expansion of the propagator. We analyse the dynamics, in the limit N → ∞, of observables of a fixed number n of particles, of extensive particle observables and their fluctuations, as well as of reservoir observables. We illustrate our results on the infinite mode Dicke model and on various energy-conserving models.
